Exploring the Mpc Environment of the Quasar ULAS J1342+0928 at z = 7.54

Theoretical models predict that z ≳ 6 quasars are hosted in the most massive halos of the underlying dark matter distribution and thus would be immersed in protoclusters of galaxies. However, observations report inconclusive results. We investigate the 1.1 proper-Mpc² environment of the z = 7.54 luminous quasar ULAS J1342+0928. We search for Lyman-break galaxy (LBG) candidates using deep imaging from the Hubble Space Telescope (HST) in the Advanced Camera for Surveys (ACS)/F814W and Wide Field Camera 3 (WFC3)/F105W/F125W bands, and Spitzer/Infrared Array Camera (IRAC) imaging at 3.6 and 4.5 μm. We report a z_phot = 7.69 (+0.33, −0.23) LBG with mag_F125W = 26.41 at 223 projected proper kpc (pkpc) from the quasar. We find no HST counterpart to one [C ii] emitter previously found with the Atacama Large Millimeter/submillimeter Array (ALMA) at 27 projected pkpc and z_[C II] = 7.5341 ± 0.0009 (Venemans et al. 2020). We estimate the completeness of our LBG candidates using results from Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey/GOODS deep blank-field searches sharing a similar filter setup. We find that >50% of the z ∼ 7.5 LBGs with mag_F125W > 25.5 are missed because of the absence of a filter redward of the Lyman break in F105W, hindering the UV color accuracy of the candidates. We conduct a QSO-LBG clustering analysis revealing a low LBG excess of 0.46 (+1.52, −0.08) in this quasar field, consistent with an average or low-density field. Consequently, this result does not present strong evidence of an LBG overdensity around ULAS J1342+0928. Furthermore, we identify two LBG candidates with a z_phot matching a confirmed z = 6.84 absorber along the line of sight to the quasar. All these galaxy candidates are excellent targets for follow-up observations with JWST and/or ALMA to confirm their redshifts and physical properties.

INTRODUCTION

Understanding the formation of the first massive galaxies and black holes and their role in reionizing the universe is one of the main problems in modern cosmology. However, it is still challenging to identify these distant sources and subsequently characterize their properties. Quasars are the most luminous non-transient sources known and can be studied in detail at the earliest cosmic epochs (e.g., Fan et al. 2023). Despite quasars being very rare sources (∼1 per Gpc³ at t_age < 1 Gyr; Schindler et al. 2023), multiple observational efforts during the past decade have revealed a significant (>400) population of quasars in the epoch of reionization within the first billion years of the universe, at redshift z > 5.5 (e.g., Venemans et al. 2013, 2015; Bañados et al. 2016; Mazzucchelli et al. 2017b; Matsuoka et al. 2019; Reed et al. 2019; Yang et al. 2019; Gloudemans et al. 2022; Yang et al. 2023). These observations evidence a dramatic decline of the spatial density of luminous quasars at z > 6 and suggest that we are closing in on the epoch when the first generation of supermassive black holes (SMBHs) emerged in the early universe (Wang et al. 2019a).

Only eight quasars are known at z > 7, and three are at z > 7.5: J0313-1806 at z = 7.64 (Wang et al. 2021), J1342+0928 at z = 7.54 (Bañados et al. 2018a), and J1007+2115 at z = 7.52 (Yang et al. 2020). These early quasars are powered by ≳10⁸ M⊙ black holes (e.g., Yang et al. 2021; Farina et al. 2022), and the large majority reside in extremely star-forming galaxies (>100–1000 M⊙ yr⁻¹; e.g., Venemans et al. 2020).
In order to sustain both the tremendous black hole growth and the intense star formation, current theoretical models posit that these systems lie in highly biased regions of the universe at that time, where gas can fragment and form a large number of surrounding galaxies (e.g., Springel et al. 2005; Volonteri & Rees 2006; Costa et al. 2014). These quasar environments could possibly host powerful sources of ionizing photons such as bright Lyman-α emitters, or have nearby halos hosting these galaxies (Overzier et al. 2009). Consequently, these massive quasars are thought to be indicators of protoclusters, defined as galaxy overdensities that will evolve by z ∼ 0 into the most massive (≥10¹⁴ M⊙) virialized clusters (Overzier 2016). Studying the environment of quasars hosting SMBHs as early as z ∼ 7.5 is crucial to understand the large-scale structure and the feeding of gas in the first massive galaxies and black holes in the universe.

To probe the presence of such protoclusters, one can perform deep imaging observations to select galaxy candidates and compare their number density to that observed in "blank fields", i.e., fields without a quasar. However, whether quasars at z ∼ 6 reside in overdense regions is heavily debated on the observational side of the literature. Discrepancies in these findings can be explained by the different observational techniques used to identify galaxies around quasars. These include photometric searches for Lyman-break galaxies (LBGs; Zheng et al. 2006; Morselli et al. 2014; Simpson et al. 2014; Champagne et al. 2023), for Lyman-α emitters (LAEs; Mazzucchelli et al. 2017a), or for a combination of both (e.g., Ota et al. 2018). Also, spectroscopic confirmations of galaxies (e.g., Bosman et al. 2020; Mignoli et al. 2020), or [C ii] emitter and sub-millimeter galaxy searches (e.g., Decarli et al. 2017; Champagne et al. 2018; Meyer et al. 2022), have been undertaken in the literature. Recently, leveraging the capabilities of JWST near-infrared spectra, a substantial influx of [O III]-emitting galaxies has been unveiled in the environments of z ≳ 5 quasars (Kashino et al. 2023; Wang et al. 2023). Moreover, these studies encompass diverse physical areas and rely on different methods for evaluating the presence of an overdensity (e.g., Overzier 2022). Finally, the results are affected by cosmic variance given the handful of z ∼ 6 quasar fields inspected (García-Vergara et al. 2019).

The highest-redshift simulations available from Costa et al. (2014) demonstrate that overdensities of LBGs and young LAEs around quasars up to z ∼ 6.2 can be probed within a 1.2 proper-Mpc² (pMpc²) environment using the HST ACS Wide Field Channel. The highest-redshift quasar whose environment has been studied so far with this observational strategy is ULAS J1120+0641 at z = 7.1 (Simpson et al. 2014). Given the rapidly decreasing number density of luminous quasars at z > 7 (Wang et al. 2019a), where the formation of SMBHs poses challenges not only for theories of black hole formation but also for large-scale structure assembly (e.g., Habouzit et al. 2016a,b), it is crucial to observationally inspect the environments of quasars at the highest redshifts known, i.e., z ∼ 7.5.
In this work, we search for LBG candidates at z ∼ 7.5 in the immediate ∼1 pMpc² environment of the z = 7.54 quasar ULAS J1342+0928, using deep imaging data collected with the Hubble Space Telescope (HST) and the Spitzer/Infrared Array Camera (IRAC). This quasar hosts one of the earliest and most massive SMBHs, with a mass of ∼0.9 × 10⁹ M⊙, that is actively accreting at near-Eddington rates with L_bol/L_Edd ∼ 1.1 (Onoue et al. 2020). The host galaxy is already evolved, with a high amount of gas and dust resulting in a SFR of ∼150 M⊙ yr⁻¹ and a metallicity comparable to the solar neighborhood (Novak et al. 2019). Additionally, a study of the optical/NIR spectrum of ULAS J1342+0928 identified a strong absorber at z = 6.8 along its line of sight (Simcoe et al. 2020). This massive and active quasar in the early universe is an ideal candidate to now look for a galaxy overdensity and trace its large-scale structure.

This paper is organized as follows: we describe the HST data and their reduction in §2, followed by the HST photometry, noise calculation, and aperture corrections in §3. We also include available Spitzer/IRAC photometry (§3.3). The selection criteria and photometric-redshift analysis used to create the final catalog of LBG candidates are described in §4. Details on the properties of the resulting galaxy candidates are discussed in §5. The results, catalog completeness, and the interpretation of the findings in relation to the density of the quasar field are discussed in §6. Finally, we summarize our results and provide further outlook in §7. Throughout this article we adopt a cosmology with H₀ = 70 km s⁻¹ Mpc⁻¹, Ω_M = 0.3, and Ω_Λ = 0.7. Using this cosmology, the age of the Universe is 679 Myr at the redshift of ULAS J1342+0928, and 1″ corresponds to 4.99 proper kpc (pkpc). All magnitudes provided are in the AB system.
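The quoted conversions follow directly from the adopted cosmology. As a quick cross-check, a minimal sketch using astropy (values may differ slightly from those quoted because of astropy's default treatment of radiation and neutrinos) is:

```python
# Minimal sketch: recover the adopted cosmology's age and angular scale at z = 7.54.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)  # flat, so Omega_Lambda = 0.7
z_qso = 7.54

age = cosmo.age(z_qso).to(u.Myr)                                   # ~680 Myr
scale = cosmo.kpc_proper_per_arcmin(z_qso).to(u.kpc / u.arcsec)    # ~5 pkpc per arcsec

print(f"Age of the Universe at z = {z_qso}: {age:.0f}")
print(f"Proper scale at z = {z_qso}: {scale:.2f}")
```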
OBSERVATIONS

Usually, at least three filters are employed to identify galaxies in the epoch of reionization using the Lyman-break technique. The bluest filter serves to spot the spectral break in the galaxy continuum emission produced by intergalactic medium (IGM) absorption; hence, no or very little flux is expected to be detected in this filter. A contiguous filter is centered on the expected wavelength of the Lyman break, serving as the drop-out and detection band, and redder filters are used to observe the continuum emission. In this section, we describe the HST data obtained to select LBG candidates in the environment of ULAS J1342+0928 and the reduction process.

HST data and reduction

We use observations obtained with the Advanced Camera for Surveys (ACS) and Wide Field Camera 3 (WFC3) on board HST between June 2018 and June 2019 (PI: Bañados, Prog ID: 1165). We obtained data in the F814W band (ACS, 13 orbits), serving as the non-detection filter, and in the F105W and F125W filters (WFC3, 8 and 4 orbits, respectively). To maximize the ACS surveyed area, the WFC3 near-infrared imaging was observed in a 2 × 2 mosaic strategy. The final effective area covered to search for LBGs is computed based on the ACS/F814W image, as this area is covered by all three filters. The calculation masks out bad pixels in the weight map, resulting in an area of 12.28 arcmin². All filter transmission curves are presented in Figure 1 with the rest-frame UV spectrum of ULAS J1342+0928 overlaid (Bañados et al. 2018a). The presented observations achieve 5σ limiting AB magnitudes of 28.20, 27.83, and 27.46 in the F814W, F105W, and F125W bands, respectively, as calculated with a 0.″4-diameter circular aperture.

We use the bias-subtracted, flat-fielded, and cosmic-ray-cleaned, reduced images provided by STScI, and implement an ad hoc method to ensure a good astrometric match between the different filters using DrizzlePac. Indeed, such an alignment is nontrivial due to the small number of stars found in the field, which complicates standard reduction routines. We start by considering the HST ACS/WFC F814W pipeline-reduced flc.fits files, downloaded from the MAST archive. In order to create reference catalogs with enough sources, we run Source Extractor (Bertin & Arnouts 1996) on each image, after cleaning them from cosmic-ray contamination using the astrodrizzle routine with cosmic-ray rejection (cr clean=True). We use tweakreg to align the uncleaned, original flc.fits images, utilizing these Source Extractor-created reference catalogs, each containing ∼1000 sources. The final combined image in F814W is obtained using astrodrizzle, with skymethod='match' and combine type='median'. We run Source Extractor again on the final, drizzled F814W image, and use this new catalog (4544 sources) as a reference to match the WFC3/F105W and F125W images to the F814W. In detail, we use tweakreg on the F105W and F125W HST pipeline-reduced flt.fits files, with the F814W catalog as reference, searchrad=3, and minobj=6. We drizzled the matched files to obtain the final F125W and F105W images, with the same astrodrizzle parameters used for the F814W filter and final scale=0.″05, in order to match the pixel scale of the WFC3 images to that of ACS.

In order to check the goodness of our match, we compared the coordinates of sources recovered in all three final HST images, considering only the 30 brightest objects. The final mean deviation between the astrometric solutions of the filters is ∼0.″03. If we instead compare their astrometry with the Gaia DR2 catalog (Gaia Collaboration et al. 2018), the mean difference in the coordinates of the recovered sources (10 in F814W, 13 in both F105W and F125W) is ∼0.″05. We note that the final F105W image is affected by an artifact due to the presence of a satellite trail in one of the flt exposures. We decided not to discard this exposure in order to obtain the deepest image, but caution is needed when examining sources close to the trail. The final reduced images in F814W, F105W, and F125W (hereafter i814, Y105, and J125) are presented in Figure 2 as an RGB color image created with JS9-4L (v2.2; Mandel & Vikhlinin 2018).
Point-Spread Function (PSF) Matching

Finding high-redshift galaxies requires very accurate colors from photometric measurements in different bands. We calculate the photometry in fixed aperture diameters of 0.″4, as later discussed in Section 4, and therefore the imaging in all bands needs to be matched to the same PSF. The PSF sizes in the i814, Y105, and J125 images are 2.6, 4.45, and 4.55 pixels, respectively. The reference image for the matching is the one with the largest PSF full width at half maximum (FWHM), which in this case is the J125 band. We decided against using stars to build the PSF in each band because of their scarcity. Hence, to perform the PSF matching we relied on the standard HST PSFs produced with a high level of precision by STScI from the _flt/_flc frames. The matching kernel for image convolution is produced with pyPHER (Boucaud et al. 2016) to make the final PSF-matched images.

MAKING THE CATALOGS

This analysis follows closely the procedure for Lyman-break detection in Rojas-Ruiz et al. (2020). We utilize Source Extractor v2.25.0 to measure the photometry of the sources in all three HST filters in dual mode, with a coadded Y105 + J125 image as the detection image, which serves to maximize the signal-to-noise ratio (S/N) and minimize the number of spurious sources entering the catalogs. The errors provided by Source Extractor depend on the RMS map. We build this RMS map for each band from the sky-flux measurements in the science image (SCI), found with a 2.5σ clipping, and the reduced weight image (WHT).

The flux of the objects is measured in a small Kron elliptical aperture (PHOT AUTOPARAMS 1.2, 1.7), which is subsequently corrected up to total magnitudes using the flux measured in a larger Kron aperture (PHOT AUTOPARAMS 2.5, 3.5), as previously done in high-redshift galaxy studies (e.g., Bouwens et al. 2010, 2021; Finkelstein et al. 2010, 2022). To identify point-like sources in our catalog, we avoid relying solely on the CLASS STAR parameter from Source Extractor, which can be misleading when investigating high-redshift sources (see Finkelstein et al. 2015; Morishita et al. 2018); we also perform photometry in a 0.″4-diameter circular aperture. Comparing the ratio between the Kron elliptical aperture and the 0.″4 circular aperture fluxes helps identify point-like sources such as stars or bad pixels. This circular aperture also serves as a high-S/N measurement of the source at the targeted wavelengths and is thus relevant for the S/N cuts in our criteria for selecting candidates at z ∼ 7.5, as described in §4. Upon visual inspection of the segmentation map produced from the Source Extractor run, the combination of parameters DETECT THRESH = 1.5 and DETECT MINAREA = 7 maximizes the number of sources detected while lowering the spurious fraction.

Noise Calculation

We perform an empirical noise calculation of the images to account for the partially correlated noise characteristic of drizzled HST images (Casertano et al. 2000). While Source Extractor calculates the flux uncertainties from individual uncorrelated pixels in the RMS map, the procedure described in Papovich et al. (2016) accounts for both correlated and uncorrelated noise. We closely follow this empirical noise estimate as described below.

For images with exclusively uncorrelated pixels, the noise measured in a circular aperture of N pixels scales as σ_N = σ_1 × √N, where σ_1 is the pixel-to-pixel standard deviation of the background. Conversely, the noise from completely correlated pixels scales as σ_N = σ_1 × N. In our HST images, the noise truly varies between these two regimes as N^β, with 0.5 < β < 1, and this can be estimated for the whole image with the parameterized equation σ_N = σ_1 α N^β, where α has to be a positive value. Note that we do not include the Poisson correction of the equation from Papovich et al. (2016), as it did not contribute to the calculation of the noise in the HST images. We measure the noise in each of the three HST images by first placing randomly distributed apertures in the sky background with sizes growing from 0.″1 to 1.″0 in diameter. Then, we use the curve_fit Python function with the Levenberg-Marquardt least-squares method to fit the noise found in the random apertures of increasing size (see Figure 3). The calculated noise values in each image are applied to estimate the flux errors for the Kron and 0.″4 apertures. For the Kron aperture, N is calculated as the number of pixels in the ellipse defined by the semi-major (A IMAGE) and semi-minor (B IMAGE) axes measured by Source Extractor.
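To illustrate the fit described above, the sketch below fits the parameterization σ_N = σ_1 α N^β with scipy's curve_fit (which defaults to Levenberg-Marquardt for unbounded problems). The numerical values are placeholders; in practice σ_N comes from the randomly placed sky apertures and σ_1 from the background pixel statistics.

```python
# Sketch of the empirical noise fit: sigma_N = sigma_1 * alpha * N**beta.
import numpy as np
from scipy.optimize import curve_fit

sigma_1 = 0.002                                   # background rms per pixel (placeholder)
diam_pix = np.array([2, 3, 4, 5, 6, 8, 10, 12, 14, 16, 18, 20], dtype=float)
N = np.pi * (diam_pix / 2.0) ** 2                 # pixels per circular aperture
sigma_N = sigma_1 * 1.1 * N ** 0.72 + np.random.normal(0, 1e-4, N.size)  # fake measurements

def noise_model(N, alpha, beta):
    """Correlated-noise scaling between the sqrt(N) and N limits."""
    return sigma_1 * alpha * N ** beta

popt, pcov = curve_fit(noise_model, N, sigma_N, p0=[1.0, 0.75])
alpha, beta = popt
print(f"alpha = {alpha:.2f}, beta = {beta:.2f}")   # expect 0.5 < beta < 1
```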
Corrections to the Photometry Catalogs

The resulting Source Extractor catalogs with the calculated flux errors are then corrected for Galactic dust attenuation following the Cardelli et al. (1989) extinction curve with R_V = 3.1, as motivated in similar studies of high-redshift galaxies (e.g., Rojas-Ruiz et al. 2020; Finkelstein et al. 2022; Tacchella et al. 2022). We use Schlafly & Finkbeiner (2011) to correct for Galactic extinction and find a color excess E(B−V) = 0.025. The zero points for the final catalog are calculated according to the newest 2020 HST photometric calibrations for ACS and WFC3, which apply to the observation dates of the images. The zero points in AB magnitude are 25.9360, 26.2646, and 26.2321 for i814, Y105, and J125, respectively. We also apply in all filters an aperture correction from the large (2.5, 3.5) to the small (1.2, 1.7) Kron aperture photometry measured in J125, to account for the missing PSF flux in the smaller aperture.

Spitzer/IRAC Photometry

Additional Spitzer/IRAC 3.6 µm and 4.5 µm images covering the same area of the quasar environment explored with HST are available from the cycle 16 archival database (PI: Decarli). Each of the IRAC mosaics has an exposure time of 3.4 hr, and the 3σ limiting depth for point sources is ≈0.8 µJy in both Channels 1 and 2. These additional photometric bands provide crucial information to distinguish true high-redshift galaxies from lower-redshift contaminants. The IRAC bands allow for a better-sampled spectral energy distribution (SED), and hence help differentiate between a dusty Balmer-break galaxy at z ∼ 2 and a high-redshift candidate of interest at z ∼ 7.5. The FWHM of the IRAC point response function (PRF) is ≈1.″8 in Channels 1 and 2, far larger than in HST. Therefore, in order to match the sources from the two data sets, the IRAC PSFs need to be modeled to deblend sources and to calculate accurate fluxes and flux errors. The mosaics and modeling are performed following Kokorev et al. (2022) and are briefly described here.
To obtain photometry from the IRAC imaging, a PSF model method is produced using tools from the Great Observatories Legacy Fields IR Analysis Tools (GOLFIR; Brammer 2022). This modeling method uses a high-resolution prior, which is built by combining the HST/ACS and WFC3 images. The resulting image is combined with the IRAC PSF using a matching kernel to finally obtain the low-resolution templates. The original IRAC images are divided into homogeneous 4 × 4 patches of 120.″0, which are allowed to overlap to improve the modeling. The brightest stars and sources with high signal-to-noise ratios in the IRAC and HST images are manually masked to avoid large residuals from the fit. IRAC model imaging is first generated for the brightest objects in the J125 catalog by performing a least-squares fit of the low-resolution IRAC patches to the original IRAC data to obtain the modeled fluxes. The flux errors are simply the diagonal of the covariance matrix of the model. The same is done for the fainter sources in the catalog, and the least-squares fit normalizations are then adopted as the IRAC flux densities. The resulting photometry from this PSF modeling method is used for the rest of the analysis.

SELECTION OF GALAXY CANDIDATES

Galaxy candidates neighboring the quasar ULAS J1342+0928 are found with a method similar to previous work in the literature (Rojas-Ruiz et al. 2020; Finkelstein et al. 2022; Bagley et al. 2024). We rely on the photometric-redshift technique, fitting the best SED model to the HST and Spitzer photometry. We refine the catalog of candidates by applying S/N cuts, quality checks between the low- and high-redshift fits, and color-color comparisons to low-redshift interlopers and MLT brown dwarfs (see §5.1). The different steps to obtain the catalog of galaxy candidates are described in this section.

Photometric Redshifts with EAZY

We use the "Easy and Accurate Z phot from Yale" code (EAZY; Brammer et al. 2008), version 2015-05-08, to calculate the photometric redshifts of all sources in our catalogs. EAZY calculates the probability distribution function of photometric redshifts, P(z), based on a minimized χ² fit of the observed photometry in all given filters to different SED models of known galaxy types. EAZY includes the 12 tweak fsps QSF 12 v3 Flexible Stellar Population Synthesis (FSPS) models (Conroy et al. 2009; Conroy & Gunn 2010), the template from Erb et al. (2010) of the young, low-mass, blue galaxy BX418 at z = 2.3 exhibiting high equivalent-width (EW) nebular lines and Lyα, and a version of this galaxy without the Lyα emission to mimic attenuation from the intergalactic medium (IGM) while preserving strong optical emission lines. All 14 templates are fed equally into EAZY so that it constructs the best-fitting models from a linear combination of the templates fit to the flux and flux errors of the source measured in the Kron 1.2, 1.7 elliptical aperture (see Section 3). For each source, EAZY applies IGM absorption following Inoue et al. (2014) in redshift steps of ∆z = 0.01. Initially, we consider the redshift probability distribution when giving the templates freedom from z = 0.01–12 and assume a flat luminosity prior, as galaxy colors at z ≳ 6 are not yet well understood (Salmon et al. 2018). This wide redshift range is chosen to allow the comparison between the probability of galaxies being at high (z > 5) and low (z < 5) redshift.
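The core of this procedure can be summarized schematically. The sketch below is not EAZY's actual implementation: it assumes a single hypothetical template_flux(z) function and synthetic fluxes, but it shows how a χ²(z) curve translates into a normalized P(z) and into the integrated probabilities used in the selection below.

```python
# Schematic photometric-redshift fit: chi^2 of observed fluxes against template fluxes
# on a redshift grid, converted to a probability distribution P(z).
import numpy as np

def chi2(obs_flux, obs_err, model_flux):
    # Optimal (least-squares) normalization of the template before computing chi^2.
    norm = np.sum(obs_flux * model_flux / obs_err**2) / np.sum(model_flux**2 / obs_err**2)
    return np.sum(((obs_flux - norm * model_flux) / obs_err) ** 2)

def p_of_z(obs_flux, obs_err, template_flux, z_grid):
    chi2_z = np.array([chi2(obs_flux, obs_err, template_flux(z)) for z in z_grid])
    p = np.exp(-0.5 * (chi2_z - chi2_z.min()))     # likelihood up to a constant
    return p / np.trapz(p, z_grid)                 # normalize so the integral is 1

def integrated_probability(p, z_grid, z_lo, z_hi):
    sel = (z_grid >= z_lo) & (z_grid <= z_hi)
    return np.trapz(p[sel], z_grid[sel])           # e.g. P(6 < z < 12)
```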
Selection Criteria for Catalog

We build the final catalog of galaxy candidates by applying the following selection criteria to the results of the photometric-redshift fits from EAZY:

• S/N_i814 < 2.0 measured in the 0.″4 circular aperture, implying a non-detection in i814.

• S/N_Y105 or S/N_J125 > 5.0, also measured in the 0.″4 circular aperture, to ensure the source is detected at high redshift while also potentially selecting strong Lyman-α emitters whose flux would only be detected in Y105, or galaxies with strongly absorbed Lyman-α producing continuum emission only in J125.

• The integrated redshift probability calculated from EAZY satisfies P(6 < z < 12) > 60%, securing that a high-redshift solution dominates the total probability distribution.

• The integral of the primary peak of the total integrated distribution satisfies P(z_peak) > 50%.

• The redshift probability distribution at z = 7.5 is higher than the neighboring distributions in ∆z = 1 bins, i.e., P(7 < z < 8) exceeds both P(6 < z < 7) and P(8 < z < 9).

We do not place a cut on the half-light radius of the source in order to include in the catalog possible active galactic nuclei (AGN), which would exhibit a more point-like morphology. However, this parameter is reported in Table 2 and is considered during the visual inspection step. Note that the half-light radius r0.5 of a star in our survey in the J125 band is 2.65 pixels, or 0.″13.

Using the above criteria we find 5 LBG candidates, of which one is the quasar ULAS J1342+0928 and two are identified as diffraction spikes from visual inspection. The resulting catalog is thus composed of the recovered quasar and two galaxy candidates. We note that decreasing the S/N threshold to S/N_Y105 or S/N_J125 > 3.0 results in large contamination due to sources with a marginal detection in just one band (25), diffraction spikes (13), and bad pixels or other detector artifacts (25). An additional test fitting only a lower-redshift solution was performed to better discriminate against possible low-redshift contaminants. For this, we set EAZY to freely fit the 14 SED templates over the redshift span z = 0.01–5. We then compared the χ² of the best-fit template from this lower-redshift solution, χ²_lowz, to that at higher redshift, χ²_highz, obtained with the redshift span z = 0.01–12. If ∆χ²_l−h = χ²_lowz − χ²_highz < 4, the goodness of the fit is below the threshold of the 95% confidence interval, which means the source can be fit similarly well with a high- and a lower-redshift solution (see, e.g., Finkelstein et al. 2022; Bagley et al. 2024). We discarded one candidate that did not pass this test, as it had ∆χ²_l−h = 2.4 with redshift solutions z_low = 2.5 and z_high = 7.9. The final catalog thus contains the quasar and one LBG candidate at z ∼ 7.5, passing the test with ∆χ²_l−h = 16.51.
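For concreteness, the cuts listed above can be expressed as a short filter over the catalog. This is a simplified sketch: the column names are hypothetical, and the P(z) integrals and χ² values are assumed to have been computed beforehand from the two EAZY runs described above.

```python
# Sketch of the z ~ 7.5 LBG selection applied to each catalog source.
# 'src' is a dict-like row with S/N in the 0."4 aperture, integrated P(z) values,
# and the low-z / high-z best-fit chi^2 from the two EAZY runs.

def is_lbg_candidate(src):
    sn_cut = (src["sn_f814w"] < 2.0) and (src["sn_f105w"] > 5.0 or src["sn_f125w"] > 5.0)
    pz_cut = (src["p_6_12"] > 0.60) and (src["p_zpeak"] > 0.50)
    # Probability in the 7 < z < 8 bin must exceed the neighboring Delta-z = 1 bins.
    bin_cut = src["p_7_8"] > src["p_6_7"] and src["p_7_8"] > src["p_8_9"]
    # Low-z solution must be rejected at >95% confidence (Delta chi^2 >= 4).
    chi2_cut = (src["chi2_lowz"] - src["chi2_highz"]) >= 4.0
    return sn_cut and pz_cut and bin_cut and chi2_cut

candidates = [s for s in catalog if is_lbg_candidate(s)]  # 'catalog' built in Section 3
```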
GALAXY CANDIDATES IN THE QUASAR FIELD

In this section, we present the results from our search for galaxy candidates associated with the environment of the quasar ULAS J1342+0928 at z ∼ 7.5. We further comment on the inspection of our HST + Spitzer/IRAC data at the position of a gas-rich [C ii] emitter at z ∼ 7.5 previously identified in Venemans et al. (2020). Finally, we explore additional galaxy candidates at a slightly lower redshift than that of the quasar, at z ∼ 7. Figure 4 shows the postage stamps of the LBG candidates, their SEDs, and the photometric-redshift solutions from EAZY with both the high-redshift and lower-redshift fits. Table 1 summarizes the properties of these LBGs and the [C ii] emitter.

5.1. A galaxy candidate at z ∼ 7.5

We recover the quasar with a photometric redshift of z_phot = 7.59, where its systemic redshift measured from [C ii] emission is z = 7.5400 ± 0.0003 (Bañados et al. 2019). We find a new LBG candidate, C-4636, at z_phot = 7.69 (see Figure 4). This photometric redshift is slightly higher than that of the quasar given the very flat redshift probability distribution between z = 7.5 and 8, but the solution dominates among the other redshift distributions with P(7 < z < 8) = 80%, constraining its association with the environment of the quasar. Moreover, C-4636 is at a projected distance of 223 pkpc from the quasar. This distance is similar to that of galaxies found around other high-z quasars in the literature, which showed strong quasar-galaxy clustering (e.g., Morselli et al. 2014; Farina et al. 2017; Mignoli et al. 2020; Meyer et al. 2022). Further exploration of the QSO-LBG clustering for the quasar ULAS J1342+0928 and this candidate is described in §6.2.

We examine the candidate's i814 − Y105 and Y105 − J125 colors to evaluate possible stellar contamination. We take MLT-dwarf stars from the IRTF SpeX Library developed by Burgasser (2014) and compare their colors to those of LBGs and quasars at z = 6.5–8.5. The color of candidate C-4636, Y105 − J125 = 0.7, and its compact morphology, with a half-light radius r0.5 = 0.″10 comparable to the average radius of stars in this field of r0.5 = 0.″13 ± 0.″01, make it a possible MLT-dwarf contaminant (see Figure 6). However, the ratio of the flux in the total Kron to the 0.″4 aperture of 1.45 ± 0.07 and the Source Extractor stellarity parameter of CLASS STAR = 0.08 do not classify this source as a star. Additionally, galaxies at this redshift would show a distinct SED from MLT contaminants at λ > 2 µm, which can be determined even with shallow IRAC imaging (e.g., Finkelstein et al. 2022; Bagley et al. 2024). The non-detections in our IRAC 3.6 µm and 4.5 µm imaging support the high-redshift nature of this candidate. Thus, we still consider this source a good LBG candidate in the physical environment of ULAS J1342+0928. We recognize that additional filter information redder than the J125 band would provide further insights into the characterization of this source.
The compact morphology could suggest that C-4636 is an AGN.This is consistent with theoretical simulations which show that AGNs tend to cluster near quasars with SMBH 10 8 − 10 9 M ⊙ (Costa et al. 2014), and some observational cases already seen at z > 5 (e.g.McGreer et al. 2016;Connor et al. 2019;Yue et al. 2021;Maiolino et al. 2023;Scholtz et al. 2023).Existing 45ks Chandra observations of this field do not show any X-ray signal at this location (Bañados et al. 2018b).Future JWST NIRSpec spectrum targeting strong nebular emission lines could be used for a Baldwin, Philips, & Terlevich (BPT) diagnostic diagram (Baldwin et al. 1981) to distinguish whether this source is an AGN. Dusty Star-Forming Galaxy A galaxy candidate at 27 projected pkpc from the quasar ULAS J1342+0928 had been previously identified in Venemans et al. (2020).This candidate was recovered with ALMA observations targeting the restframe [C ii]-158µm emission of the quasar and found to be at z [C II] = 7.5341 ± 0.0009.In order to study the properties of this source in other wavelengths, we looked for any counterpart emission in the HST and Spitzer /IRAC data.However, we did not detect this galaxy in any of the five images (see Figure 5).Since no flux is recovered up to SNR ∼ 2, the possibility of this galaxy being a low-redshift interloper is strongly disfavored.Furthermore, many studies have concluded that there is a significant population of dust-obscured galaxies that have no rest-frame optical counterpart detected at the current observational limits (e.g.Mazzucchelli et al. 2019;Wang et al. 2019b;Meyer et al. 2022).Therefore, this [C ii]-emitter could be one of these dusty starforming galaxies (DSFG) in the environment of ULAS J1342+0928.Currently, the only detection available of this galaxy is from ALMA 223 GHz observations of its [C ii] emission at 0.10 ± 0.03 Jy km s −1 , and no dust continuum was recovered (f cont < 0.06 mJy; Venemans et al. 2020).Given the information at hand, we are not able to place meaningful constraints or predictions on the SED of this source.Further ALMA or JWST observations are necessary to confirm the nature of this source and study its properties.The current ALMA observations of this field cover only a FOV ∼ 39", i.e. ∼ 194 pkpc, around the quasar, hence we are not able to investigate any counterpart for the LBG candidates found in this work. Additional Candidates at z ∼ 7 Simcoe et al. ( 2020) inspected the optical/NIR spectrum of ULAS J1342+0928, and identified a strong metal absorber at z = 6.84 spanning ∼ 150 km s −1 on its line-of-sight.This galaxy has not been directly observed in emission yet.Hence, we also explore candidates at z ∼ 7 to search for any counterparts (e.g.Neeleman et al. 
2019). We begin by selecting all sources with P(6 < z < 12) > 60% and a P(6 < z < 7) higher than the neighboring distributions in ∆z = 1 bins. After visually inspecting the candidates and evaluating the best fits for the high- and lower-redshift solutions, ∆χ²_l−h, similar to the process used for the z ∼ 7.5 galaxy candidate search, we identify two galaxy candidates. C-4966 has a photometric redshift of z_phot = 6.91 and probability distributions P(6 < z < 7) = 55% and P(6.5 < z < 7.5) = 71%. C-5764 is found at z_phot = 6.89 and has P(6 < z < 7) = 68% and P(6.5 < z < 7.5) = 79% (see Figures 2, 4, and Table 1). Both candidates favor a redshift solution closer to z = 6.5–7.5. Evaluating different Ly-α properties for these candidates using our color models for LBGs at z = 6.5–8.5 in Figure 6, we find that similar colors would be reproduced with either (FWHM = 1000 Å; EW = 15 Å) or (FWHM = 2000 Å; EW = 15 Å), rather than a more typical narrow Ly-α line for LAEs (FWHM = 200 Å and EW = 100 Å or lower). Spectroscopically confirming these galaxies could point to galaxy clustering in the field at z ∼ 6.8, near the absorber.

Completeness

The Lyman break of LBG candidates at z ∼ 7.5 falls at an observed wavelength of λ ∼ 1 µm, which is positioned right in the middle of the Y105 band used in this work (see Figure 1). This implies that even if the source is detected in Y105, its Y105 − J125 color would be red. No contiguous filter (e.g., HST JH140 or H160) is available to us to robustly measure the rest-frame UV color redward of the Ly-α break. Although our photometry in the Spitzer/IRAC 3.6 µm and 4.5 µm bands complements the galaxy SED and helps determine its dust components to rule out low-redshift contaminants, this is only possible for galaxies already robustly selected at high redshift with HST imaging up to λ ∼ 2 µm. Our data currently have a wider gap in wavelength coverage (1.4–3.6 µm, or J125 − IRAC/3.6 µm), causing a considerable number of galaxies to never enter the selection catalog. Taking into account all these limitations, we compare the number density found (one new LBG) to what we would recover with a consistent selection of galaxy candidates in the same redshift span of z ∼ 7–8, using comparable data from blank fields.

The Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS) GOODS North and South Deep survey presented in Finkelstein et al. (2015) provides similar filter coverage (I814, Y105, J125, H160) at comparable depths to our HST survey around ULAS J1342+0928. This filter coverage helps to assess the completeness of our data, since we can attempt to reproduce the number density of LBGs in a blank field using just the I814, Y105, and J125 bands and our selection criteria for high-redshift galaxies. We build the photometry catalog for EAZY using the fluxes from the GOODS catalog and the flux errors from our limiting magnitudes in the HST filters, to match the noise to that of our images. We run EAZY with the same setup and select LBG candidates with the criteria employed for the analysis of the quasar field (see §4.2). Using these three HST filters alone we recover 31 sources. This is much lower than the 125 sources with a photometric redshift between z = 7 and 8 in the GOODS catalog that are selected when including the additional photometry in H160. Hence, we recover in total only 31/125, i.e., ∼25% of the sources.
This test shows that the selection of z ∼ 7.5 galaxies based on our HST filter set is strongly incomplete. The recovered fraction of galaxies as a function of their J125 magnitude is shown in bins of ∆0.5 mag in Figure 7. Note that the faintest galaxy candidate recovered from this catalog has J125 = 27.53, whereas the GOODS catalog from Finkelstein et al. (2015) has sources as dim as J125 = 28.76. For galaxies in GOODS at J125 < 26.5, i.e., in the range of C-4636 (J125 = 26.41; see Table 2), we obtain the highest recovery rate of 43%. The completeness drops to ∼10% between J125 = 27.5 and 28.0. However, we note that our 5σ limiting magnitude in this band is deeper, reaching J125,5σ = 27.46 (see the dashed line in Figure 7).

Exploring the Environment of ULAS J1342+0928

In the context of clustering, the probability of finding an excess of LBGs around a quasar is determined by the two-point correlation function, represented as 1 + ξ_QG(r). Here the quasar-LBG (QSO-LBG) cross-correlation is expressed in power-law form, ξ_QG(r) = (r/r_0^QG)^−γ, where r_0^QG is the cross-correlation length and γ denotes the slope of the function. The highest redshift at which the QSO-LBG clustering has been studied is z ∼ 4 (García-Vergara et al. 2017), resulting in r_0^QG = 8.83 h⁻¹ cMpc with a fixed slope γ = 2.0. We adopt these measurements and assume no evolution of the QSO-LBG cross-correlation between z = 4 and z = 7.5 (830 Myr). The resulting LBG excess as a function of comoving radius around ULAS J1342+0928 is presented with the magenta curve in Figure 8. A quasar field presenting a QSO-LBG excess consistent with or above this curve would be considered to reside in a high-density region, suggestive of an overdensity. We calculate the excess based on our observation of one QSO-LBG pair relative to the expected number of LBGs in the field. This expected number is calculated from: the number density of LBGs at z = 7.5 as interpolated from the z = 7 and z = 8 rest-UV luminosity functions of Finkelstein et al. (2015); our completeness fraction from §6.1; and the comoving cylindrical volume with a radius equivalent to the comoving distance from the quasar to candidate C-4636 and a comoving line-of-sight depth given by ∆z = ±0.3 around the quasar's redshift, corresponding to the redshift uncertainties on the C-4636 z_phot estimated with EAZY (see §5.1 and Table 1). Note that in this estimation we do not account for candidate C-[C ii] identified with ALMA in the environment of the quasar, given that this galaxy is not UV-bright.

Figure 8 shows that at the comoving radius to C-4636, corresponding to 1.9 cMpc, we observe an LBG excess of 0.46 (+1.52, −0.08) (green circle with error bars), indicating that this quasar field is consistent with cosmic density (black dashed line where the LBG excess = 1), or slightly underdense. This field is incompatible with an overdensity of UV-bright galaxies, as our result is at least 1.4 times below the clustering expectations (magenta curve) around z = 4 quasars from García-Vergara et al. (2017). Note that this measurement is limited by low number statistics. Therefore, a larger area coverage or an optimal set of filters to improve the completeness of LBGs, together with spectroscopic follow-up, would be necessary to fully characterize the environment of this quasar.
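The excess itself reduces to a ratio of observed to expected counts with small-number (Poisson) confidence limits. A minimal sketch, assuming the expected count has already been obtained from the interpolated luminosity function, the completeness correction, and the cylinder volume described above (the placeholder value below is not the paper's exact number):

```python
# Sketch of the QSO-LBG excess estimate with approximate 1-sigma small-number
# confidence limits following Gehrels (1986).
import numpy as np

def gehrels_limits(n_obs):
    """Approximate 1-sigma upper/lower Poisson limits on a count (Gehrels 1986)."""
    upper = n_obs + np.sqrt(n_obs + 0.75) + 1.0
    lower = n_obs * (1.0 - 1.0 / (9.0 * n_obs) - 1.0 / (3.0 * np.sqrt(n_obs))) ** 3 if n_obs > 0 else 0.0
    return lower, upper

n_obs = 1            # one LBG candidate (C-4636) within the search cylinder
n_expected = 2.2     # placeholder: completeness-corrected blank-field expectation

lo, hi = gehrels_limits(n_obs)
excess = n_obs / n_expected
print(f"LBG excess = {excess:.2f}, 1-sigma range = [{lo / n_expected:.2f}, {hi / n_expected:.2f}]")
```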
JWST observations from program GTO 1219 (PI: Luetzgendorf) aim at confirming the LBG candidate C-4636 at z ∼ 7.5 using NIRSpec MSA spectroscopy. The observations cover 0.7 µm to 3.1 µm with the G140H/F070LP and G235H/F170LP grating and filter combinations. This setup offers resolving powers of ∼1,000 and ∼2,700, respectively, enabling sensitivity to UV metal emission lines such as C iv λ1549, C iii] λλ1907,1909, and Mg ii λ2798. These lines would characterize the ionization and chemical enrichment of the galaxy (e.g., Hutchison et al. 2019). The Lyman break of the galaxy at z ∼ 7.5 would potentially be observed, providing additional confirmation of the candidate.

Recent studies using JWST NIRCam/WFSS spectra have demonstrated overdensities around quasars at slightly lower redshifts. Wang et al. (2023) found 10 [O III]-emitting galaxies in the environment of the quasar J0305-3150 at z = 6.6, probing an overdensity of galaxies in this field. Furthermore, Kashino et al. (2023) compiled a comprehensive catalog of [O III]-emitting galaxies at 5.3 < z < 7.0 in the field of the quasar J0100+2802 at z = 6.327. Among the 117 [O III] emitters, 24 were associated with the quasar environment, revealing a clear overdensity of galaxies. These findings therefore demonstrate the efficacy of investigating quasar environments by observing strong rest-UV emission lines of galaxies. Applying a similar strategy to the z = 7.54 quasar ULAS J1342+0928, the use of NIRCam/WFSS with the F430M filter would be suitable for identifying such [O III]-emitting galaxies.

Even though simulations show that massive quasars such as ULAS J1342+0928 are good indicators of galaxy overdensities, the opposite has also been observed (e.g., Bañados et al. 2013; Simpson et al. 2014; Mazzucchelli et al. 2017a). There is no evidence for an evolutionary trend of overdensities with redshift; for example, Mignoli et al. (2020) found both LBGs and LAEs in the environment of a quasar at z = 6.31. On the contrary, Goto et al. (2017) did not find any LAEs around a z = 6.4 quasar and point out the possibility that the quasar formation drains the available matter within ∼1 pMpc. There are indeed many physical processes at play in the formation of a quasar and its environment. A possible mechanism to suppress or delay star/galaxy formation within a few pMpc of the quasar is its UV radiation (e.g., Ota et al. 2018; Costa et al. 2019; Lambert et al. 2024). In a different scenario, supernova-driven galactic winds could simply disperse the galaxies farther away from the quasar and thus reduce the number density observed (e.g., by a factor of up to 3.7 in the HST/ACS area; Costa et al.
2014).Finally, there is also the possibility of the environment being fully dominated by dust-obscured galaxies, and no LBGs or LAEs can be found with traditional photometric techniques.In order to probe this scenario for the ULAS J1342+0928 field, further ALMA observations covering a larger area could unveil this population of galaxy candidates.One can further explore their chemical properties by, e.g., rest-frame optical observations with JWST (e.g Decarli et al. 2017;García-Vergara et al. 2022). 6.3.Galaxy-Absorber Association at z ∼ 6.8 Analysis of the z = 6.84 absorber detected in the spectrum of ULAS J1342+0928 by Simcoe et al. (2020) suggests that this system may be classified as a Damped Lyman Alpha (DLA) system, with a fiducial column density of N HI = 10 20.6 cm −2 (Simcoe et al. 2020).Galaxies originating such absorbers at z ∼ 4 are typically located at impact parameters of ≲ 50 pkpc (e.g., Neeleman et al. 2017Neeleman et al. , 2019)).However, recent studies based on z < 2 MgII (λλ2796, 2803 Å) absorbers, showed that group environment may give rise to stronger and more widespread absorption systems within a projected distance of ≲ 480 pkpc (e.g., Nielsen et al. 2018;Fossati et al. 2019;Dutta et al. 2020).If this behavior holds at high redshift (see Doughty & Finlator 2023, for implication on the end of the reionization on absorption systems), this can explain the relatively large impact parameters observed for our two candidates C-4966 and C-5764 (290 pkpc, 476 pkpc, assuming z = 6.9, respectively).The two z ∼ 6.9 galaxy candidates need to be spectroscopically confirmed to establish the physical link with the metal absorption system detected at z ∼ 6.8 by Simcoe et al. (2020). SUMMARY We present the results of a search for Lymanbreak galaxy candidates (LBGs) in the environment of the z = 7.54 quasar ULAS J1342+0928.We used HST +Spitzer /IRAC observations designed to look for LBGs in the ∼ 1 proper-Mpc 2 environment of the quasar.Here, we present newly obtained deep HST ACS/WFC i 814 and WFC3 Y 105 and J 125 bands.We use the HST observations to select LBG candidates with photometric redshift z ∼ 7.5.Shallower Spitzer /IRAC 3.6 µm and 4.5 µm observations are utilized to constrain the high-redshift solution of the galaxies selected. The final catalog results in the recovery of the quasar and one LBG at z = 7.69, with magnitude J 125 = 26.4 and at a projected distance of only 223 pkpc from the quasar.An additional candidate previously identified in the environment of ULAS J1342+0928 using ALMA band 6 observations and with z [C II] = 7.5341 ± 0.0009 (Venemans et al. 2020) is not detected in any of the five bands used in this work.This is a potential dustobscured star-forming galaxy candidate at z = 7.5 just 27 pkpc in projection from the quasar. Galaxy candidates at lower photometric redshifts z = 6.91 and z = 6.89 are identified in the data set and, interestingly, are at a redshift that is consistent with a z = 6.84 absorber in the line of sight previously identified in the quasar spectrum in Simcoe et al. (2020). The completeness of galaxy candidates found at z ∼ 7.5 in our survey compared to blank fields from GOODS (Finkelstein et al. 
2015), proves to be low even at the brightest magnitudes, J125 < 26.5 (∼40%). This low completeness can be explained by the fact that a z ∼ 7.5 LBG begins to drop out halfway through the Y105 band, leading to biased results favoring candidates with redder Y − J colors. Taking into account this caveat, we investigate the quasar-LBG clustering in this field following the studies at z ∼ 4 in García-Vergara et al. (2017) and assuming no evolution of the clustering. We find that this quasar field is not consistent with an overdensity of LBGs, but instead with cosmic density or even an underdense region, noting that this result is heavily influenced by the limitations imposed by Poisson statistics given the sample of only one LBG candidate. This outcome is puzzling considering the recent findings of overdense quasar environments at z = 6.3 and 6.6 by Kashino et al. (2023) and Wang et al. (2023), respectively. These limitations show that spectroscopy might be crucial, as those studies looked for galaxies emitting [O III] rather than relying on the Lyα signature.

The quasar ULAS J1342+0928 is one of the most extreme objects in the universe, and there are several strategies to further explore its environment. First, our work indicates that more LBG candidates are expected to be found with further HST or JWST imaging using a more complete set of filters in the near-infrared. Alternatively, ALMA mosaic observations covering the quasar field could reveal a potential population of dust-obscured galaxies. Additionally, we could rely on the power of JWST spectra to find galaxies in the field of ULAS J1342+0928 by looking for their [O III] emission. Finally, expanding the search for galaxies to a wider area of up to 10 comoving Mpc could prove necessary to thoroughly investigate the environment of this z = 7.54 quasar (Overzier 2016; Chiang et al. 2013).

Figure 2. HST iYJ RGB color image of the field around the quasar ULAS J1342+0928. The quasar is in the center, in the circled region. Overlaid are the high-redshift galaxy candidates selected in this work as described in §5. The source C-4636 is identified as an LBG candidate with a photometric redshift of z_phot = 7.69. Candidate C-[C ii] is a [C ii] emitter previously identified in Venemans et al. (2020) to be at z_[C II] = 7.5341 ± 0.0009. This candidate lacks HST or Spitzer counterpart emission, making it a dust-obscured candidate in the environment of the quasar. Additional LBG candidates in the observed field, C-4966 and C-5764, are at z_phot = 6.91 and 6.89, respectively.

Figure 3. Noise calculation for the field in the three HST bands following the parameterization σ_N = σ_1 α N^β. N is the number of pixels in the area of the aperture with diameters of 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10, 12, 14, 16, 18, and 20 pixels (pixel scale is 0.″05). Note how the noise grows with a bigger aperture, as expected from the equation. The red line shows the best fit correlating the noise and aperture size N, which is used to find the α and β free parameters that contribute to the noise estimate.
Figure 4. Galaxy candidates resulting from our search, including the quasar ULAS J1342+0928 at z = 7.54 and a new LBG candidate in its environment. Left: Postage stamps of each candidate in the iYJ HST filters (3.″0 × 3.″0) and the two Spitzer/IRAC bands (12.″0 × 12.″0). Middle: The best-fit SED of the high-redshift solution of the candidate is presented in blue, with non-detections in red as 1σ upper limits. We present the SED of the low-redshift solution as a dotted grey line. Right: The P(z) versus z from EAZY with the best-fitting redshift (za) in blue, and a vertical pink line indicating the redshift of the quasar for reference. Note that the redshift probability distributions are highly favored at z = 7–8 for the top two panels, where the quasar is ID 6381. The LBG candidate C-4636 has a slightly higher redshift solution at z_phot = 7.69 because of the flat P(z) across z = 7.5–8. The bottom two candidates favor a slightly lower redshift solution at z ∼ 7.

Figure 6. Color-color diagram using the HST bands. The quasar ULAS J1342+0928 is marked with a crimson diamond, indicating a strong Lyman break in the i814 − Y105 color. We show typical quasar (red diamonds) and LBG (magenta circles) colors in the redshift range z = 6.5–8.5, with a redshift step ∆z = 0.05. The MLT-dwarf stars are denoted by yellow stars. LBG candidate C-4636, identified at z_phot = 7.69 with EAZY, is marked with a magenta cross. Blue pentagons represent the z ∼ 7 LBG candidates, showing distinctively bluer Y105 − J125 colors compared to z ∼ 7.5 LBGs in the diagram.

Figure 7. Assessment of the completeness of our selection technique for high-redshift galaxy candidates around ULAS J1342+0928, presented in magnitude bins of ∆J125 = 0.5 with 1σ Poisson errors calculated following Gehrels (1986). The 5σ J125 limiting magnitude from this work is denoted with the dashed red line.

Figure 8. Predicted LBG excess as a function of radius around our quasar at z = 7.54. The pink curve represents the LBG excess taking into account the uncertainty on the determination of the QSO-LBG clustering cross-correlation length r_0^QG from García-Vergara et al. (2017), assuming no evolution between z = 4 and z = 7.5 (see §6.2). Uncertainties due to cosmic variance are not considered. Accounting for our completeness and the LBG we found at z = 7.69 with a projected distance to the quasar of 223 pkpc (1.9 cMpc), we calculate an LBG excess of 0.46 (+1.52, −0.08) (green circle with 1σ errors from Gehrels 1986), consistent with an average or low-density field. We note that our result is limited by low Poisson statistics.

Table 1. EAZY fit of Galaxy Candidates in the Quasar Field. Note. This table presents the catalog of galaxy candidates in the quasar field, selected with EAZY using HST and Spitzer photometry. The reported values at the top correspond to the recovered quasar ULAS J1342+0928 with confirmed systemic redshift z = 7.54†, candidate C-4636 at z ∼ 7.5, and the dust-obscured candidate C-[C ii] identified with a systemic redshift z_[C II]*. For the bottom two candidates, our fit preferred a solution at z ∼ 6.8–6.9, in the redshift range of an absorber at z = 6.84 in the line of sight of the quasar (Simcoe et al. 2020). Column 1 is the candidate ID. Columns 2–3 are the R.A. and Decl. in degrees. Columns 4–6 present the integral of the redshift probability distribution in three redshift bins (see §4.2). Column 7 presents the photometric redshift with the highest probability and its 68% confidence interval as calculated with EAZY. Column 8 shows the difference in the best-fit χ² between the low- (z < 5) and high- (z < 12) redshift solutions. Column 9 is the spectroscopic redshift of the sources, when available. Column 10 is the projected distance of the candidate to the quasar. †Systemic redshift measured using ALMA observations of the [C ii] 158 µm emission line from the quasar's host galaxy in Bañados et al. (2019). *Systemic redshift calculated from [C ii] 158 µm observations with ALMA in Venemans et al. (2020).

Table 2. Photometry of HST and Spitzer Selected Galaxy Candidates. Note. This table presents the photometry of the high-redshift galaxy candidates. Column 1 is the candidate ID. Columns 2–6 are the calculated signal-to-noise values from the 0.″4-diameter circular aperture in the HST bands and from the Spitzer photometry. Columns 7–11 are the calculated AB magnitudes, where the limiting magnitudes correspond to 3σ estimates. Column 12 is the half-light radius of the object in arcseconds.
First-Year Evaluation of Mexico's Tax on Nonessential Energy-Dense Foods: An Observational Study

Background

In an effort to prevent continued increases in obesity and diabetes, in January 2014, the Mexican government implemented an 8% tax on nonessential foods with energy density ≥275 kcal/100 g and a peso-per-liter tax on sugar-sweetened beverages (SSBs). Limited rigorous evaluations of food taxes exist worldwide. The objective of this study was to examine changes in volume of taxed and untaxed packaged food purchases in response to these taxes in the entire sample and stratified by socioeconomic status (SES).

Methods and Findings

This study uses data on household packaged food purchases representative of the Mexican urban population from The Nielsen Company's Mexico Consumer Panel Services (CPS). We included 6,248 households that participated in the Nielsen CPS in at least 2 mo during 2012–2014; average household follow-up was 32.7 mo. We analyzed the volume of purchases of taxed and untaxed foods from January 2012 to December 2014, using a longitudinal, fixed-effects model that adjusted for preexisting trends to test whether the observed post-tax trend was significantly different from the one expected based on the pre-tax trend. We controlled for household characteristics and contextual factors like minimum salary and unemployment rate. The mean volume of purchases of taxed foods in 2014 changed by -25 g (95% confidence interval = -46, -11) per capita per month, or a 5.1% change beyond what would have been expected based on pre-tax (2012–2013) trends, with no corresponding change in purchases of untaxed foods. Low SES households purchased on average 10.2% less taxed foods than expected (-44 [-72, -16] g per capita per month); medium SES households purchased 5.8% less taxed foods than expected (-28 [-46, -11] g per capita per month), whereas high SES households' purchases did not change. The main limitations of our findings are the inability to infer causality because the taxes were implemented at the national level (lack of control group), our sample is only representative of urban areas, we only have 2 y of data prior to the tax, and, as with any consumer panel survey, we did not capture all foods purchased by the household.

Conclusions

Household purchases of nonessential energy-dense foods declined in the first year after the implementation of Mexico's SSB and nonessential foods taxes. Future studies should evaluate the impact of the taxes on overall energy intake, dietary quality, and food purchase patterns (see S1 Abstract in Spanish).
Author Summary

• Why Was This Study Done? To date, there has been very limited research as to how larger health-related food/beverage taxes change household food purchases, or whether low socioeconomic status (SES) households are more responsive to such taxes.

• What Did the Researchers Do and Find? Using a dataset that follows household food purchases over time, we examined whether the volume of taxed foods showed greater declines in the post-tax period than we would have expected based on trends in the volume of taxed food purchases prior to the tax. We also examined whether post-tax changes in the volume of taxed food purchases were greater among low SES households. We found that the mean volume of purchases of taxed foods in 2014 declined by 25 g per capita per month, or a 5.1% change beyond what would have been expected based on pre-tax (2012–2013) trends. There were no changes in the purchase of untaxed foods in the post-tax period. Low SES households' purchases of taxed foods declined by 10.2% and medium SES households' by 5.8%, whereas high SES households' purchases did not change.

• What Do These Findings Mean? These findings show that in the post-tax period, purchases of taxed foods declined more than we would expect if pre-tax trends had simply continued, particularly among low and medium SES households. Future research should explore how these shifts are linked to changes in the nutritional quality of the overall diet.

Introduction

Currently, the prevalence of overweight and obesity in Mexico is over 33% for children and about 70% for adults [1,2], and, in 2006, the prevalence of type 2 diabetes in adults was 14.4% [3]. Concurrent with the rise in obesity and diabetes were large increases in sugar-sweetened beverage (SSB) and nonessential energy-dense food (often termed "junk food") intake [4-7]. Worldwide, Mexico is the fourth largest per-capita consumer of energy-dense, ultraprocessed food and drinks, including SSBs, sweet and savory snacks, breakfast cereals, confectionery, ice cream, biscuits, spreads, sauces, and ready-meals [8]. To prevent continued increases in obesity and diabetes, in January 2014, the Mexican government implemented a 1 peso-per-liter tax on SSBs (equivalent to approximately a 10% tax) and an 8% tax on nonessential foods with energy density ≥275 kcal/100 g.
In Mexico, total prices including the tax price are included on the shelf label, so the price consumers see includes the tax. The law defined nonessential foods in the following categories: chips and snacks, candies and sweets, chocolate, puddings, peanut and hazelnut butters, ice cream and ice pops, and cereal-based products with substantial added sugar. Based on the 2012 National Health and Nutrition Survey (ENSANUT), the intake of non-basic foods high in sugar or fat (a food classification similar to the tax) contributes 11% to 18% of daily caloric intake across age groups [9]. Worldwide, there is very limited empirical evidence on the effect of food/nutrient taxes [10,11]. While analysis of Mexican food and beverage taxes revealed that during 2014 purchases of taxed beverages declined 6% beyond what was expected compared to pre-existing trends, it is unclear whether taxed food purchases also declined, or whether households of lower socioeconomic status (SES) were more or less responsive. Because both the nonessential energy-dense food and the SSB taxes were implemented concurrently, we cannot evaluate the independent effect of each. Therefore, the objective of the current work was to longitudinally examine changes in the volume of taxed and untaxed food purchases after both taxes were implemented, relative to the counterfactual (i.e., expected volume of taxed and untaxed food purchases if the taxes had not been implemented), overall and by SES subgroups. Participants This study uses data on volume of household food purchases from January 2012 to December 2014 from The Nielsen Company's Mexico Consumer Panel Services (CPS). The analysis used de-identified data and was granted an exemption from the University of Chapel Hill and National Institute of Public Health (INSP) institutional review boards. Enumerators visit participating households every 2 wk to collect diaries of purchases and receipts and register purchases by checking the pantry and a designated bin where the household members keep empty product packages. All items available with a barcode are scanned by the enumerator. The data for each purchase includes number of units, volume, price paid, and date of purchase. Nielsen CPS samples households from 53 cities with >50,000 inhabitants and estimates weights for each household so that the sample is representative of the urban Mexican population. From all households that participated in the Nielsen CPS in at least 2 mo during January 2012-December 2014, we excluded three households because of incomplete data on covariates. Our analytical sample includes 204,584 household-months, across 6,248 unique households. Average household follow-up was 32.7 mo; 78% participated in all 36 mo. Covariates SES categories were based on those provided by The Nielsen Company, which are defined with a score system that classifies households in seven categories as proposed by the Mexican Association of Market Intelligence and Opinion. This measure of SES was validated and is the standard one used in market research in Mexico. The score considers the education level of the member with the largest household income contribution and seven household assets: number of rooms, type of floor, number of bathrooms, shower, gas range, number of light bulbs, and number of cars. 
The cutoff points for the seven categories are defined a priori to capture specific household characteristics and are not based on a population distribution; therefore, the sample in each category is not equal (e.g., the extreme categories combined have <10% of the sample). We classified SES as low (lower two categories), medium (middle three categories), and high (higher two categories). Additional variables include household composition (nine variables, each with the number of household members that were within each gender/age group [as presented in S1 Table]) and contextual measures (state-quarter unemployment rates [12] and minimum salary [13] adjusted by state-quarter consumer price index). See S1 Table for descriptive statistics on the sample. Food Categories In this paper, we focus on volumes of overall taxed and untaxed foods and on subcategories of each. Classification of foods into untaxed and taxed categories was conducted by a team of registered dietitians from Mexico. In the case of law ambiguities for food classification, we consulted with the Ministry of Finances for clarification. For further description of each subcategory and the food classification process, see S2 Table. Our analysis does not cover all food categories that households purchase; we did not include categories for which Nielsen CPS does not collect data or did not collect data consistently throughout the 36 months of the analysis. Examples of food categories not analyzed are chocolates, candies, and sweet bread from bakeries (taxed if energy density !275 kcal/100 g, though small bakeries were exempt from the tax in 2014), and unpackaged produce, tortillas, and unsweetened bread from bakeries (mainly untaxed). Statistical Analysis All analyses were conducted in Stata, version 13 (College Station, TX). We first describe unadjusted, mean per capita volume purchases of taxed and untaxed foods (g/capita/month) from January 2012 to December 2014. Because the tax was implemented at one point in time across the entire country and, hence, we did not have a control population, we compared the purchases before and after the tax. Our pre-specified analytical strategy was based on the approach used by Colchero et al. in evaluating Mexico's SSB tax [14] and in other research using longitudinal food purchase data to evaluate the effects of retailer-and industry-led initiatives, such as the United States Healthy Weight Commitment effort to reduce calories in the food supply [15,16]. Specifically, to account for the ongoing 2012-2013 trend, and to avoid assuming a decrease in purchases in 2014 was attributable to the tax if there was already a downward trend, we extrapolated with model predictions the 2012-2013 trend through 2014 and used it as our counterfactual (i.e., what was expected to happen without a tax in 2014 based on the 2012-2013 trend). We used a fixed effects model to predict the mean adjusted volume purchased in each month pre-tax, post-tax observed, and post-tax counterfactual. More detail about this pre-specified analytical strategy and deviations from this strategy are summarized in S1 Protocol. The model specification was as follows: The unit of analysis (g/capita/month) was the per capita volume of Food purchases in household h, month m, and year y. To assess changes within the year, we included a semester effect. 
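To make the modeling strategy concrete, the sketch below shows a heavily simplified version of this fixed-effects specification and its counterfactual prediction using Python with pandas/statsmodels. All file and column names are hypothetical, the household-composition and contextual covariates are abbreviated, and the Nielsen survey weights and bootstrapped standard errors described below are omitted; this is an illustrative sketch, not the authors' code.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("purchases_by_household_month.csv")   # hypothetical input file

formula = (
    "vol_taxed ~ post_tax * semester2"   # tax period, semester, and T x S interaction
    " + trend"                           # pre-existing (2012-2013) monthly trend
    " + unemployment + min_salary"       # state-quarter contextual covariates
    " + n_children + n_adults"           # household composition (heavily abbreviated)
    " + C(hh_id)"                        # household fixed effects
)
fit = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["hh_id"]}
)

# Counterfactual for 2014: same households and covariates, but with the tax
# indicator switched off, so the 2012-2013 trend is extrapolated through 2014.
post = df[df["year"] == 2014]
observed = fit.predict(post).mean()
counterfactual = fit.predict(post.assign(post_tax=0)).mean()
diff = observed - counterfactual
print(f"absolute difference: {diff:.1f} g/capita/month")
print(f"relative difference: {100 * diff / counterfactual:.1f}%")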
We tried 2-, 3-, and 4-month periods, but because shorter periods showed high month-to-month variation and no clear cyclic annual pattern in the 2012 and 2013 purchases, we used a semester period (the second semester was always higher than the first in 2012 and 2013 for both taxed and untaxed foods). Regardless of the period length used in the model, the annual change remained unchanged. The T × S interaction term allowed the semester effect to vary before and after the tax. Additionally, we controlled for the aforementioned household and contextual covariates. Using this model, we predicted the mean adjusted volume purchased in each month pre-tax (2012-2013), post-tax observed (2014), and post-tax counterfactual (2014 but as if T = 0) to determine the absolute and relative differences over time. We present the predictions in the results section and regression coefficients in S3 Table. Because the food purchase data had a skewed distribution, we tested a generalized linear model with log-link, which gives unbiased estimates [17]. Results from either a generalized linear model with log-link or a linear regression model were similar; hence, we used a linear regression to be able to use a fixed effects estimator. Fixed effects are advantageous because they control for unobservable time-invariant characteristics. We conducted this analysis separately for the taxed and untaxed categories in the entire sample. We then performed analyses stratified by SES (low, medium, high) using the same specification as Eq 1 but without SES as a predictor variable. Stratified models allowed us to compare not only the tax effect, but also the absolute amount of purchases in each SES category. The SES coefficients in Eq 1 estimate the difference in the amount of purchases if a household changes SES category (intra-household), whereas our interest was in the difference in the amount of purchases across households with different levels of SES (inter-household). Nearly all households purchased some untaxed (99.7% of households) and taxed (96%) foods each month. However, for subcategories, there was a large proportion of non-consumers. As a result, we used a two-part model [18] using probit and linear regression models with the same specification as above, except that the fixed effects estimator was not used. The two-part model was as follows: Total amount of food subcategory purchased (g/capita/month) = [Probability of food subcategory purchase (probability/month)] × [Amount of food subcategory if purchased (g/capita/month)], i.e., FoodSub_hmy = Pr(FoodSub_hmy > 0) × [FoodSub_hmy | FoodSub_hmy > 0]. In all analyses, we used the household weights provided by Nielsen and estimated standard errors via bootstrapping by drawing 1,000 random samples with replacement with selection at the household level. Results Fig 1 shows the unadjusted mean volume trends for total taxed and untaxed food purchases and by subcategory. The 2012-2013 average of total volume of taxed food purchases was 505 g/capita/month, whereas the 2014 average was 474 g/capita/month, while the averages of total untaxed foods were 1,585 g/capita/month in 2012-2013 and 1,596 g/capita/month in 2014. As can be seen, purchases have high month-to-month variation. Table 1 shows the adjusted absolute and relative differences between the counterfactual and observed volumes purchased in the post-tax period.
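The confidence intervals behind these comparisons come from resampling entire households. A rough sketch of such a household-level bootstrap, continuing the previous sketch (the data frame df, the model formula, and the column and helper names are the same hypothetical ones, and survey weights are again left out), could look like this:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df and formula are the hypothetical objects from the previous sketch.
def estimate_difference(data):
    """Refit the model on a resample and return observed minus counterfactual for 2014."""
    fit = smf.ols(formula, data=data).fit()
    post = data[data["year"] == 2014]
    return fit.predict(post).mean() - fit.predict(post.assign(post_tax=0)).mean()

rng = np.random.default_rng(0)
households = df["hh_id"].unique()
groups = {h: g for h, g in df.groupby("hh_id")}     # household -> its rows
boot = []
for _ in range(1000):                               # 1,000 resamples, as in the text (slow; illustrative)
    sampled = rng.choice(households, size=len(households), replace=True)
    resample = pd.concat([groups[h] for h in sampled], ignore_index=True)
    boot.append(estimate_difference(resample))

low, high = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI: ({low:.0f}, {high:.0f}) g/capita/month")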
On average, the total volume of taxed purchases had an absolute decline of 25 g per capita per month (p < 0.05), or a -5.1% relative change beyond what would have been expected based on pre-tax trends. The decline in volume of taxed purchases was 3.4% in the first semester of 2014 and grew to 6.7% in the second semester. As can be seen in Fig 2, the effect during the second semester was larger because, based on previous trends, the purchases were expected to increase in the second semester, but they remained stable throughout 2014. No differences were detected for untaxed purchases. Overall, low SES households bought less taxed food before and after the tax compared to their higher SES counterparts but showed the greatest response to the tax (Table 1, S1 Fig). On average, in 2014, low SES households purchased 10.2% less taxed foods than expected (p < 0.05), whereas medium SES households purchased 5.8% less taxed foods (p < 0.05), and high SES households' purchases did not change. Results for subcategories of taxed and untaxed food purchases come from the two-part model. The total amount purchased is estimated by multiplying the probability of any purchase during a month by the amount purchased in months with purchases higher than zero. In Table 2, we present the predicted probability, amount, and total, whereas in S2 and S3 Figs, we only present the total. The greatest changes in total purchases were observed among taxed salty snacks (-6.3% beyond expected, p < 0.05) and taxed cereal-based sweets (-5.2% beyond expected, p < 0.05), while taxed non-cereal-based sweets and ready-to-eat cereals did not change. Interestingly, in the case of taxed salty snacks, what drove the overall change was a change in the probability of purchasing. Among untaxed foods, there were only significant declines in the volume of sugar and sugar substitutes purchased compared to what was expected based on the pre-tax trend (-8.9%, p < 0.05), though the absolute volume of purchases continued to increase. In other words, the upward trend in the volume of sugar and sugar substitutes observed in 2012-2013 continued in 2014, but was smaller than expected (see S3 Fig). Purchases of the food group "other" increased by 8.0% (p < 0.05). Discussion For the first full year after Mexico's taxes on SSBs and nonessential energy-dense foods, we find significant changes in the observed per capita volume of household purchases of taxed foods compared to the counterfactual (i.e., what was expected based on pre-tax trends). Overall, we find that taxed foods declined by 25 g/capita/month (-5.1%), whereas untaxed food purchases did not change (-0.3%). Moreover, we find much larger declines for lower SES households (-10.2%), whereas medium SES households declined by 5.8% and high SES households did not change. Empirical evidence on the effect of food and nutrient taxes is limited. With regard to Denmark's short-lived saturated fat tax, one study of household food purchases found a 10%-15% reduction in purchases of butter, blends, margarines, and oils in the first 9 mo of implementation, when the increase in price of these products was 8%-22% [10].
A recent evaluation using cruder expenditure data from an income and expenditure national survey of the Hungarian tax on foods high in salt, sugar, or caffeine found a 3.4% decrease in the volume purchased of processed foods after the tax, with no corresponding change in unprocessed foods [11], though these processed food categories were not necessarily reflective of taxed versus untaxed foods. The present results show that, at least in Mexico, a relatively modest tax can, in the short run, result in a substantial decline in volume purchased of taxed foods. However, it is important to consider that taxes could affect purchases with other mechanisms in addition to the increase of price. Press coverage or public discussion of the tax can help discourage the consumption of the taxed products in the population; but, for the nonessential energy-dense tax in Mexico, the coverage has been small relative to the SSB tax [19,20]. On the other hand, the presence of the SSB tax could have had an effect on the purchases of nonessential energy dense foods, because these items might be complementary and consumed together. Previous estimations of SSB own and cross-price elasticities in Mexico reported that for a 1% increase in the price of SSB, the purchase of candies and snacks would decrease 0.44% and 0.23%, respectively, and that for a 1% increase in the price of candies and snacks, these would decrease 1.15% and 0.98%, respectively [21]. Therefore, it is likely that the decrease we found in taxed foods is due to both the SSB and the nonessential energy-dense food taxes. The reduction of 25 g/capita/month represents 70 to 110 kcal (energy density is at least 275 kcal/100 g, but based on the ENSANUT 2012, the mean energy density for the intake of taxed foods is 430 kcal/100 g). Although in absolute terms this reduction is small, the purchases captured in Nielsen only represent a fraction of all household purchases, and real absolute change in energy intake from taxed food might be larger. The changes in taxed foods were for salty snacks and cereal-based sweets. Interestingly, for salty snacks, all the change was due to changes in probability of purchasing, suggesting that, for this item, people prefer to decrease the frequency of purchases rather than the amount. Moreover, we saw smaller-than-expected increases in the volume of sugar and sugar substitutes, suggesting that households are not necessarily substituting sugary home-prepared foods or beverages for pre-packaged taxed sweets. Lower SES households were more responsive to the tax than middle SES households, while higher SES households showed no statistically significant change in purchases, consistent with results of the evaluation of Mexico's SSB tax [22]. This is important, considering that in Mexico, although lower SES groups still have slightly lower prevalence of obesity and diabetes [23], the costs associated with obesity and its comorbidities represent a higher proportion of their income. In other countries such as the US, where obesity prevalence is highest among people with low SES, a similar response to such a tax could lead to decreased disparities in diet and obesity. Long-term effects must be monitored, as we expect the industry to develop strategies in response to the tax, including product reformulation. For example, in the jam and spreads categories, we found that in 2014, a number of products were reformulated to fall under the 275 kcal/100 g threshold. 
The authors of the evaluation of Hungary's junk food tax also reported that a sizable proportion (40%) of Hungary's food manufacturers reported reformulating products to avoid taxation [11]. A great complexity of implementing a food tax is to define the characteristics of the foods subject to it. If only selected unhealthy foods are taxed, individuals can substitute with other unhealthy untaxed foods; on the other hand, if the tax categorization is too broad, many relatively healthy products will also be affected, increasing the cost of food without the public health benefit [24,25]. Overall, this tax successfully targeted unhealthy foods, as it focused on processed foods and did not disincentive traditional cooking ingredients such as sugar and fats (a criticism the Danish fat tax has received) [26]. However, the use of a single energy-dense cut-point in the Mexican tax without other nutritional attributes left out foods that are otherwise considered unhealthy (e.g., most ice creams were untaxed), whereas foods like peanuts and nuts were taxed. Moreover, sorting products out into "essential" versus "nonessential" is an iterative process, and throughout 2014 there were clarifications on the initial law ambiguities, representing about 2.3% of all products (see S2 Table). In contrast, new Chilean controls on food marketing that will go into effect July 1, 2016, uses as a cutoff not only energy but also sodium, saturated fat, and total sugar for foods and beverages separately [27]. An additional complexity of analyzing the Mexican tax is that each producer interprets the law and determines the total amount they have to pay (without reporting for which products they are paying). Thus, we cannot be certain which exact products were actually taxed. This work had several important limitations. First, we were unable to capture and analyze all foods that households purchased, including unpackaged produce, chocolates, candies, tortilla, and bread from bakeries. However, even for foods that were collected consistently in the Nielsen CPS, we captured only 474 g/capita/month of taxed foods in 2014. This is lower than what we would expect an average person to purchase, particularly if we compare to Euromonitor retail sales of 1,236 g/capita/month (excluding chocolates and bread from bakeries) or to the National Institute of Statistics and Geography's (INEGI's) manufacturer's industry survey of 1245 g/capita/month (excluding chocolate) (S4 and S5 Tables). Similar to other consumer panel surveys, it is expected that purchases from Nielsen CPS would be lower, because INEGI's data is of total sales (including food services and exports), and also because Euromonitor and INEGI use aggregate food categories that include untaxed items. Still, we are missing some amount of food purchases, most likely items purchased and consumed away from home. It is possible that the items not captured in the Nielsen dataset have a different trend than that found in our results. However, as can be seen in S4 and S5 Tables, INEGI's and Euromonitor's sales also display a decrease of 4.2% to 6.2% compared to 2013 for taxed foods and no change or slight increase for untaxed foods. Our model and counterfactual comparisons allowed us to examine what happened post-tax compared to what would have happened if the pre-tax trends had continued. 
However, this comparison assumes that pre-tax trends would have continued, which may not have been the case, and we cannot rule out that these results may have been influenced by other concomitant changes unrelated to taxes, including economic trends and anti-obesity and public health campaigns and regulations [28]. Another limitation is that we only have 2 y of data prior to the tax. The discussion of the SSB tax and the overall obesity issue has intensified since late 2012 [19]. Capturing the effect of tax discussions on purchases beyond the effect of the tax itself is of interest; however, we do not have data before 2012 to use as a comparison to assess this. Finally, our sample was only representative of urban Mexican households in cities with more than 50,000 inhabitants. This sample represents 63% of the Mexican population and 75% of food and beverage expenditures [29], but we do not know if rural households responded differently to these taxes. Regardless, this study provides the first snapshot of overall trends in food purchasing a year after the nonessential food tax was passed. Future work will extend this analysis by examining changes in the nutrient profile of nonessential foods in response to this tax, including sugar, saturated fat, energy density, and sodium. Conclusion This evaluation of Mexico's nonessential food and SSB taxes shows that the volume of taxed food purchases declined beyond what was expected, and that these results were similar in direction and magnitude to the declines in SSBs in response to the SSB tax. Declines after the tax were statistically significant among low and medium SES households and for selected food subcategories (salty snacks and cereal-based sweets). Our results can orient Mexican policymakers, who every year decide on the continuation of the tax, as well as policymakers from other countries currently considering the implementation of food taxes. However, the impact of this tax on overall energy intake, dietary quality, and food purchase patterns, as well as how these changes relate to weight status, remains to be studied.
6,398
2016-07-01T00:00:00.000
[ "Medicine", "Economics" ]
Table in Gradshteyn and Ryzhik: Derivation of definite integrals of a Hyperbolic Function We present a method using contour integration to derive definite integrals and their associated infinite sums which can be expressed as a special function. We give a proof of the basic equation and some examples of the method. The advantage of using special functions is their analytic continuation, which widens the range of the parameters of the definite integral over which the formula is valid. We give as examples definite integrals of logarithmic functions times a trigonometric function. In various cases these generalizations evaluate to known mathematical constants such as Catalan's constant and $\pi$. Introduction We will derive integrals as indicated in the abstract in terms of special functions. Some special cases of these integrals have been reported in Gradshteyn and Ryzhik [1]. In 1867 David Bierens de Haan [9] derived hyperbolic integrals of the form $\int_{0}^{\infty}\frac{\sinh(ax)\left(e^{-mx}(\log(\alpha)-x)^{k}-e^{mx}(\log(\alpha)+x)^{k}\right)}{(\cosh(ax)+\cos(t))^{2}}\,dx$. In our case the constants in the formulas are general complex numbers subject to the restrictions given below. The derivations follow the method used by us in [8]. The generalized Cauchy integral formula is given by equation (1). This method involves using a form of equation (1), multiplying both sides by a function, and then taking a definite integral of both sides. This yields a definite integral in terms of a contour integral. Then we multiply both sides of equation (1) by another function and take the infinite sum of both sides such that the contour integral of both equations is the same. 2 Derivation of the definite integral of the contour integral We use the method in [8]. Here the contour is similar to Figure 2 in [8]. Using a generalization of Cauchy's integral formula, we first replace x by ix + log(a) and multiply both sides by $e^{mx}$ for the first equation; we then replace x with −x and multiply both sides by $e^{-mx}$ to get the second equation. Then we subtract these two equations, followed by multiplying both sides by a (negative) prefactor involving the logarithmic function, where the logarithmic function is defined in equation (4.1.2) in [15]. We then take the definite integral over x ∈ [0, ∞) of both sides, using equation (2.5.48.18) in [14]; the integrals are valid for a, m, k, t and α complex, with −1 < Re(w + m) < 0 and Re(α) = 0. We are able to switch the order of integration over w and x using Fubini's theorem since the integrand is of bounded measure over the space C × [0, ∞). 3 Derivation of the infinite sum of the contour integral Derivation of the first contour integral In this section we will again use the generalized Cauchy integral formula to derive equivalent contour integrals. First we multiply equation (1) by $e^{imt/\alpha}/(2i)$, then replace x by p + it/α for the first equation and by p − it/α for the second equation to get equation (4). Then we replace p with πi(2p + 1)/a + log(α) and multiply both sides by −2π. Then we multiply both sides by $-\frac{2i\pi}{a^{2}}e^{i\pi m(2y+1)/a}$, take the sum over p ∈ [0, ∞), and simplify the left-hand side in terms of the Lerch function, using equation (1.232.3) in [1], where csch(ix) = −i csc(x) from equation (4.5.10) in [15] and Im(w) > 0 for the sum to converge. The log terms cannot be combined in general. Derivation of the second contour integral Next we derive the second equation by using equation (6), multiplying by m csc(t), and taking the infinite sum over p ∈ [0, ∞) to get equation (7). Then we replace k with k − 1 to get the second equation, using [15], where Im(w) > 0 for the sum to converge.
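Since the displayed equations are not reproduced in the extracted text above, a small numerical sanity check can still illustrate the main ingredient of the derivation. Assuming equation (1) is the standard generalized Cauchy integral formula obtained by applying Cauchy's differentiation formula to e^{wy}, i.e. y^k/k! = (1/(2πi)) ∮ e^{wy} w^{-k-1} dw, the identity can be verified for integer k with mpmath; the circular contour and parameter values below are purely illustrative.

import mpmath as mp

def lhs(y, k):
    return y**k / mp.factorial(k)

def rhs(y, k, radius=1.0):
    # contour integral over the circle w = radius * exp(i*theta), 0 <= theta <= 2*pi
    def integrand(theta):
        w = radius * mp.exp(1j * theta)
        dw = 1j * w                      # dw/dtheta on the circular contour
        return mp.exp(w * y) * w**(-(k + 1)) * dw
    return mp.quad(integrand, [0, 2 * mp.pi]) / (2j * mp.pi)

for y, k in [(2.0, 3), (0.7, 5)]:
    print(lhs(y, k), mp.chop(rhs(y, k)))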
Definite integral in terms of the Lerch function Since the right-hand sides of equations (3), (6), and (8) are equivalent, we can equate the left-hand sides and simplify the factorial to get equation (11). The integral in equation (11) can be used as an alternative method of evaluating the Lerch function. 6 Evaluation of special cases of definite integrals 6.1 Special case 1 For this special case we form a second equation using (11) by replacing m by −m, taking the difference from the original equation, and simplifying. Definite integral in terms of the Hurwitz zeta function Using equation (14), we set m = 1 and a = 2. Next we apply L'Hôpital's rule to the right-hand side as k → 0, using entry (1) in Table (64:4:2) in [11], where −π < Re(t) < π. Example 2 Using equation (21) and setting t = π/3, we simplify to get a definite integral whose integrand involves x tanh(ax) sech(ax) cosh(mx). Next we take the first partial derivative with respect to m and simplify. Discussion In this article we derived integrals of hyperbolic and logarithmic functions in terms of the Lerch function. Then we used these integral formulae to derive known and new results. We were able to produce a formal derivation for equation (27) of Table 27 in Bierens de Haan [9] and for equation (3.514.4) in [1], not previously published. The results presented were numerically verified for both real and imaginary values of the parameters in the integrals using Mathematica by Wolfram. In this work we used Mathematica software to numerically evaluate both the definite integral and the associated special function for complex values of the parameters k, α, a, m and t. We considered various ranges of these parameters for real, integer, negative and positive values. We compared the evaluation of the definite integral to the evaluated special function and ensured agreement. Conclusion In this paper, we have derived a method for expressing definite integrals in terms of special functions using contour integration. The contour we used was specific to solving integral representations in terms of the Hurwitz zeta function. We expect that other contours and integrals can be derived using this method.
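As a small illustration of the kind of numerical verification described in the discussion (using Python's mpmath here instead of Mathematica, and standard identities rather than the specific formulas derived above, which are not reproduced in the extracted text), the special functions involved can be cross-checked as follows.

import mpmath as mp

mp.mp.dps = 30
# The Lerch transcendent reduces to the Hurwitz zeta function at z = 1
print(mp.lerchphi(1, 2, 3), mp.zeta(2, 3))
# Catalan's constant expressed through two Hurwitz zeta values
print((mp.zeta(2, mp.mpf(1) / 4) - mp.zeta(2, mp.mpf(3) / 4)) / 16, mp.catalan)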
1,408.4
2021-02-24T00:00:00.000
[ "Mathematics" ]
Adenosine A1 Receptor Deficiency Aggravates Extracellular Matrix Accumulation in Diabetic Nephropathy through Disturbance of Peritubular Microenvironment Background We previously observed that adenosine A1 receptor (A1AR) had a protective role in proximal tubular megalin loss associated with albuminuria in diabetic nephropathy (DN). In this study, we aimed to explore the role of A1AR in the fibrosis progression of DN. Methods We collected DN patients' samples and established a streptozotocin-induced diabetes model in wild-type (WT) and A1AR-deficient (A1AR−/−) mice. The location and expression of CD34, PDGFRβ, and A1AR were detected in kidney tissue samples from DN patients by immunofluorescent and immunohistochemical staining. We also analyzed the expression of TGFβ, collagen (I, III, and IV), α-SMA, and PDGFRβ using immunohistochemistry in WT and A1AR−/− mice. CD34 and podoplanin expression were analyzed by Western blotting and immunohistochemical staining in mice, respectively. Human renal proximal tubular epithelial cells (HK2) were cultured in medium containing high glucose and A1AR agonist as well as antagonist. Results In DN patients, the expression of PDGFRβ was higher with the loss of CD34. The location of PDGFRβ and TGFβ was near to each other. The A1AR, which was colocalized with CD34 partly, was also upregulated in DN patients. In WT-DN mice, obvious albuminuria and renal pathological leisure were observed. In A1AR−/− DN mice, more severe renal tubular interstitial fibrosis and more extracellular matrix deposition were observed, with lower CD34 expression and pronounced increase of PDGFRβ. In HK2 cells, high glucose stimulated the epithelial-mesenchymal transition (EMT) process, which was inhibited by A1AR agonist. Conclusion A1AR played a critical role in protecting the tubulointerstitial fibrosis process in DN by regulation of the peritubular microenvironment. Introduction Diabetic nephropathy (DN), the leading cause of end-stage renal disease (ESRD) [1], manifests with a progressive increase in proteinuria, with the deposition of extracellular matrix (ECM) components and subsequent glomerulosclerosis and tubulointerstitial fibrosis. As the major pathological changes and the crucial role in the progression of DN [2], renal fibrosis is triggered by high glucose, which destroyed the stability of the renal peritubular microenvironment. The renal peritubular microenvironment is comprised of peritubular capillaries, renal tubular epithelial cells, and interstitium between them. During the progress of renal interstitial fibrosis of DN, epithelial-mesenchymal transition (EMT) in tubular epithelial cells is a crucial event [3], with the loss of cell-cell contact, dysfunction of tight junction, and myofibroblast generation [4]. Adenosine, a nucleoside that is a constituent of RNA and yields adenine and ribose on hydrolysis, could regulate cell function via the P1 purinoceptor family signaling [5]. The adenosine receptors, identified as adenosine A1, A2a, A2b, and A3, are integral membrane proteins widely distributed among vertebrates [5]. Previous studies revealed that A2a receptor (A2aAR) could attenuate the development of tubulointerstitial fibrosis and glomerulosclerosis in diabetic rats, as well as other renal fibrosis models [6][7][8]. A2B receptor (A2bAR) was responsible for a profibrotic and proinflammatory response in renal fibroblasts [9], either the adenosine A3 receptor (A3AR) [10]. 
The adenosine A1 receptor (A1AR) is widely studied in kidneys for its crucial role in tubuleglomerular feedback (TGF) [11] and the anti-inflammatory role in different acute kidney injury models, such as renal septic AKI and IR injury [12][13][14][15]. Our previous study in diabetic Akita mice (Ins2 +/-) with A1AR ablation showed more prominent mesangial expansion and interstitial fibronectin staining than Akita mice with A1AR and wild-type control [16]. Recently, we also demonstrated that A1AR played a protective role in proximal tubular megalin loss in DN, in which the mechanism might associate with the caspase-1-related pyroptosis pathway [17]. However, little attention has been focused on the role of A1AR in the chronic fibrotic process of DN. Some studies have confirmed the EMT process in human renal proximal tubular epithelial cells (HK-2) when incubated in medium containing high glucose or transforming growth factor β1 (TGFβ1) [18,19], but without data on the role of A1AR in the EMT process of HK2 cells. The injury of renal microvessels, consisting of glomerular capillaries and peritubular capillaries, can lead to ischemia and hypoxia of local tissue, which induces the activation of fibroblasts and the deposition of extracellular matrix components [20]. CD34, a marker of vascular endothelial cell, has been identified to be associated with vascular injury of DN [21]. A recent study suggested that A2BAR could ameliorate the pulmonary microvascular endothelial injury induced by lipopolysaccharide [22]. However, little attention has been focused on the effects of A1AR on microvascular injury-related fibrosis in DN. The pericytes, which were important in maintaining vascular integrity and EPO production [23][24][25], could be activated by ischemia and hypoxia secondary to the loss of peritubular capillaries (PTC). The increase of PDGFRβ, a marker of pericytes, demonstrated that the pericytes transit to myofibroblasts, leading to the process in fibrogenesis [26,27] and the deposition of ECM [20]. A previous study revealed that activation of A1AR triggered contraction of pericytes [28], but the further relationship between A1AR and pericyte transition during fibrosis of DN was not elucidated. In this study, we aimed to disclose the role of A1AR in the fibrosis process of DN. The EMT and ECM accumulation were observed in DN patients, established diabetes model in A1AR -/mice, and cultured HK2 cells treated with high glucose and A1AR agonist as well as antagonist. The renal peritubular microenvironment was evaluated preliminarily, including the tubular cells, peritubular capillaries, pericytes, and tight junction. Materials and Methods The reagents and antibodies are listed in Table S1. 2.1. Patients. Patients who were diagnosed as DN by biopsy were included in Peking Union Medical College Hospital (PUMCH) from January 2015 to December 2017. The con-trol group included the patients (n = 14) with isolated microscopic hematuria diagnosed as a glomerular minor lesion (GML) without the foot fusion of podocytes. The study protocol was approved by the Institutional Ethics Committee at PUMCH (2014-2-18), and all subjects signed written informed consent. 2.2. Animals. Healthy male C57BL/6 mice weighing 18~22 g (age: 6~7 weeks) were acquired from Beijing Vital River Laboratory Animal Technology Company. Male A1AR-deficient (A1AR -/-) mice were presented by Professor Jurgen Schnermann from NIDDK of NIH (USA). 
All mice were kept in a specific pathogen-free (SPF) environment and adaptively fed for one week (ambient temperature 20~24°C, relative humidity 50%-55%, light cycle 12-12 hrs, and free drink and food) before the establishment of the diabetes model. The protocol of the animal experiment was approved by the PUMCH Institutional Ethics Committee of Animal Care and Use (ID: XHDW-2014-0024). All animal experiments were conducted following the national guidelines and the relevant national laws on the protection of animals. Animal Models. Wild-type (WT) and A1AR -/-(KO) mice, matched with age, weight, and blood glucose, were randomly assigned to three groups (6 mice in each group), including the wild-type control group (WT-control), the wild-type diabetes group (WT-DM), and the A1AR-deficient diabetes group (A1AR −/-DM). The type 1 diabetes mouse models were established by injecting STZ (Sigma, USA) (120 mg/kg, i.p.) dissolved in sodium citrate buffer (pH = 4:5) for two consecutive days, and the control mice were treated with sodium citrate buffer. Diabetes was confirmed with random blood glucose higher than 16.7 mmol/L, accompanied by polydipsia, polyphagia, polyuria, and emaciation. All mice were observed for 16 weeks before sacrifice. 2.4. Cell Culture and Treatment. Human renal proximal tubular epithelial cell lines (HK2) were obtained from the Cell Resource Center of the Chinese Academy of Medical Sciences. HK2 cells were cultured in DMEM/F12 medium (Gibco, USA) with 10% fetal bovine serum (FBS, Gibco, USA) and 1% penicillin-streptomycin solution. The cells were confirmed as HK2 by cellular morphological characteristic identification under phase-contrast microscopy and the cellular specific markers (cytokeratin 18 and megalin) with immunofluorescence staining. HK2 cells were in a serumfree medium for 12 hrs before treatment, then exposed to low glucose (5 mmol/L), high glucose (25 mmol/L), and high mannitol (25 mmol/L), respectively, for 24 and 72 hrs. Appropriate concentrations of A1AR agonist (CCPA, 0.1 μmol/L, Sigma, USA) and antagonist (DPCPX, 1 μmol/L, Sigma, USA) were chosen by CCK8 and LDH assay. After 24 hr coculturing with high glucose and CCPA or DPCPX, HK2 cells were harvested for Western blotting analysis. mice were sacrificed by anesthesia and perfused with 0.9% precooled saline from the heart and aorta, and the kidneys were rapidly dissected. The renal cortex and medulla were separated, snap-frozen in liquid nitrogen, and stored at -80°C for the next experiment. 2.6. Histology. The three-μm-thick paraffin-embedded kidney tissue sections were stained with hematoxylin and eosin (HE) and Masson's trichrome for light microscopy (Olympus, Japan). A transmission electron microscope was used to distinguish the detachment between pericytes and endothelial cells (JEM-1400plus, Japan). 2.7. Immunohistochemistry and Immunofluorescence. Threeμm sections cut from paraffin-embedded tissue were deparaffinized, rehydrated, and antigen retrieved, then incubated with the primary antibody overnight at 4°C. Secondary antibodies were HRP-conjugated goat anti-rabbit (ImmunoReagents, USA). All section images were viewed with the microscope (Eclipse 80i; Nikon, Japan) with a digital photograph camera (DS-U1; Nikon, Japan). IHC staining was analyzed using ImagePro Plus 6.0 by calculating the percentage of the positive area. Scoring was evaluated by a "blinded" investigator on coded slides. 
At least eight fields were selected randomly to cover the majority of the cortex per specimen for photodocumentation. Immunofluorescent (IF) staining was performed on serial sections of patients with biopsy-confirmed DN using standard methods. Secondary antibodies were DyLight 488 AffiniPure Goat Anti-Rabbit IgG (H+L) or DyLight 594 AffiniPure Goat Anti-Mouse IgG (H+L) (Abbkine, USA). The micrographs of immunofluorescent stains were captured by confocal laser microscopy (Leica, Germany). 2.8. Western Blotting Analysis. Total protein was extracted from the renal cortex and HK2 cells for immunoblotting analysis with the primary antibodies for A1AR, CD34, collagen I, collagen III, TGFβ, vimentin, and occludin. β-actin was used as the internal reference protein. The secondary antibody was HRP-conjugated goat anti-rabbit; then, an enhanced chemiluminescence detection system (Tanon 5200, China) was used to detect the immunoblotting signals. Quantification was performed with ImageJ software (NIH, USA). 2.9. Statistical Analysis. The unpaired t-test (two-tailed) was used to compare the difference between two groups. One-way ANOVA with Dunnett's multiple comparison test was used for comparison among multiple groups. P < 0.05 was considered statistically significant. The values were presented as the mean ± SEM. All statistical analysis was performed with GraphPad Prism 7 software. Results Peritubular Capillary Loss with Activation of PDGFRβ and TGFβ in DN Patients. Immunohistochemical semiquantitative analysis showed that the expression of CD34 was lower in DN patients (Figure 1(a)), and the expression of PDGFRβ was higher in DN patients, compared to GML patients (Figures 1(b) and 1(c)). The locations of CD34 and PDGFRβ were adjacent to each other (Figure 1(d)). Moreover, A1AR and CD34 were colocalized at the brush border of proximal tubule cells and at peritubular capillaries in DN patients by immunofluorescent staining, but not in GML patients (Figure 2(b)). A1AR Deletion Exacerbated the Fibrosis Process and Vascular Endothelial Cell Injury. We successfully established type 1 diabetes mouse models induced by STZ. The experimental flow chart and basic physiological indices of the mice in the 3 groups are shown in Figure 3. The blood glucose level was significantly higher in both WT-DN and KO-DN mice than in WT-control mice, and the increase was more pronounced in the KO-DN group (Figure 3(b)). Both diabetic groups had markedly higher urine volume and 24 hr urinary albumin excretion, as well as lower body weight (Figure 3(c)). WT-DN mice showed a back-to-back structure with focal interstitial fibrosis at week 16, compared to WT-control, while KO-DN mice presented with more severe tubulointerstitial fibrosis than WT-DN mice. In the tubular interstitium, Western blotting showed that CD34 expression decreased successively in the order of WT-control, WT-DN, and KO-DN mice (Figure 5(a)), while podoplanin expression, assessed by immunohistochemical staining, increased successively across these 3 groups (Figure 5(b)). A semiquantitative analysis showed that collagen (I, III, and IV), TGFβ, and α-SMA expression was increased in WT-DN compared to WT-control mice, while KO-DN mice showed a more prominent elevation than WT-DN mice (Figures 6(a)-6(c), 6(e), and 6(f)). Besides, the expression of PDGFRβ presented the same changing trend as the fibrosis markers in the 3 groups of mice (Figure 6(d)).
We also observed by electron microscopy that the detachment between pericytes and endothelial cells became much more severe in KO-DN than in WT-DN mice (Figure 5(c)). The Protective Role of A1AR in the EMT Process In Vitro. In HK2 cells cultured in high glucose medium, the expression of mesenchymal markers, including collagen 1 and vimentin, was increased compared to the normal glucose and high mannitol groups (Figures 7(a) and 7(c)). Furthermore, adding the A1AR antagonist (DPCPX) to high glucose medium further increased collagen 1 and vimentin expression, while CCPA (A1AR agonist) inhibited their expression (Figures 7(a) and 7(c)). Moreover, the loss of occludin was observed in the high glucose groups, was more obvious with DPCPX, and was abolished by CCPA (Figures 7(a) and 7(c)). Discussion A1AR is widely studied in kidneys for its key role in tubuloglomerular feedback (TGF). In this study, we first clarified the protective effects of A1AR on fibrosis progression in DN. In A1AR-deficient mice, the aggravated interstitial fibrosis is accompanied by the loss of peritubular capillaries, EMT of tubular cells, and the activation of pericytes, while in HK2 cells the process could be abolished by the A1AR agonist and aggravated by the A1AR antagonist. TGFβ upregulation and tight junction dysfunction were also involved in the ECM accumulation process. First, we showed that the fibrosis of DN was triggered by the loss of integrity of the peritubular microenvironment, which is composed of peritubular capillary endothelial cells, pericytes, interstitial matrix, renal proximal tubules, and lymphatic endothelium (Figure 8). The loss of peritubular capillaries, indicated by decreased CD34 expression, was an independent predictor of renal fibrosis in DN patients and animal models. Since CD34 is positive in both vessel endothelium and lymphatic endothelium, we excluded injury of the lymphatic endothelium [29] by podoplanin staining. In contrast to the renal blood vessel impairment, lymphatic vessel proliferation was obvious in DN mice, which was also a risk factor for tubulointerstitial fibrosis [29][30][31]. Besides the ischemia, poor waste drainage also promotes fibrosis further. Compensatory lymphatic vessel proliferation serves to export interstitial fluid, inflammatory cells, and cytokines, but these vessels might not function well under this condition [32]. Pericytes were activated by detaching from the abnormal vascular endothelial cells, which is consistent with the previous report in obstructive fibrosis of the kidney [27]. Pericyte differentiation is a primary source of myofibroblasts [27,33,34], which is a key step in producing pathogenic collagen [35]. The increased expression of interstitial PDGFRβ was used to describe the distribution of pericytes and myofibroblasts [36,37], and renal fibrosis could be attenuated by blocking it [38]. The epithelial-mesenchymal transition (EMT) and secretion of TGFβ were observed in HK2 cells stimulated by high glucose. EMT in tubular epithelial cells is a crucial event in the progression of renal interstitial fibrosis of DN, which is considered another source of myofibroblast generation [4]. This transition is characterized by the loss of cell-cell contact, identified by the decrease of integral membrane proteins such as occludin and claudins.
Moreover, the upregulation of mesenchymal markers, including fibronectin, vimentin, and collagen I, is another character of EMT [19]. In our study, the EMT process and the expression of TGFβ were alleviated when added A1AR agonist into the high glucose culture medium. Thus, these findings indicate the direct protective role of A1AR in EMT in vitro, which is an important process of renal interstitial fibrosis in DN. We firstly clarified the protective effects of A1AR on the fibrosis progress of DN in A1AR-deficient DN mice and in vitro HK2 cells with A1AR antagonist or agonist. In this study and our previous data, both the diabetic Akita mice (Ins2 +/-) with A1AR ablation [16] and A1AR-deficient STZ mice showed more prominent mesangial expansion and interstitial fibrosis. The role of A1AR in fibrosis was controversial [39][40][41] in the unilateral ureteral obstruction rat model of renal fibrosis. The A1AR mRNA level was increased significantly on day 5 [42], while it failed to observe significant variation in the A1AR mRNA level at week 2 and week 4 [43]. In this study, we confirmed the biphasic change of A1AR protein in WT-DN mice that A1AR initially elevated at week 4 but decreased at week 16 in the time-dependent fibrosis process diabetic mouse model. In the diabetic kidney, hyperglycemia activates microvascular pericytes to detach from the endothelial cells. Then, pericytes differentiated to myofibroblasts and migrated into the interstitium to produce large amounts of collagen, inducing pathologic extracellular matrix deposition. Consistent injury leads to unstable vasculature, capillary loss, interstitial matrix expansion, and tubular interstitial fibrosis. A1AR, widely distributed in the renal proximal tubule, interstitial, and vascular endothelial cells, might play a protective role in this procedure by inhibiting microvascular pericyte transformation and vascular loss. Abbreviations: PDGFR: platelet-derived growth factor receptor; TGFβ: transforming growth factor β; A1AR: A1 adenosine receptor. Since the A1AR is expressed at the brush border of PTC, with the progress of the EMT and ECM accumulation, the A1AR loss might be secondary to the dysfunction of PTC. Besides, we observed more obvious peritubular capillary loss, pericyte transformation with PDGFRβ expression, and more pronounced fibrosis in KO-DN mice. Moreover, A1AR could stimulate cell proliferation and promote wound healing in the EA.hy926 endothelial cells [44]. Together with our observation that CD34 and A1AR were adjacent to each other in the renal peritubular microenvironment in DN patients (Figure 2), it is a reasonable assumption that A1AR might attenuate renal fibrosis by protecting vascular endothelial cells in DN. Although we firstly confirmed the direct protective role of A1AR in fibrosis of DN and EMT in HK-2 cells, there are some limitations. Because there is no mature technique to isolate renal pericytes successfully up to now, we cannot carry out the experiments in vitro to elaborate on the exact mechanism of the process by A1AR activation. Although we have proved that A1AR played a protective effect on megalin loss by inhibiting the pyroptosis-related caspase-1/IL-18 signaling in DN [17], to provide a more theoretical basis for A1AR agonist treatment of DN in the future, more experiments in vitro are needed to clarify the direct relationship between A1AR and peritubular microenvironment. 
Conclusions In summary, our study suggested that A1AR plays a protective role in renal fibrosis progression of DN in keeping the integrity of the tubular microenvironment. These findings suggest that the activation of A1AR may be a potential therapeutic strategy against DN. Data Availability The data used to support the findings of this study are included within the article, and the data about reagents and antibodies used to support the findings of this study are included within the supplementary information file. Disclosure The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Conflicts of Interest The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
4,422.6
2021-10-11T00:00:00.000
[ "Medicine", "Environmental Science", "Biology" ]
Differences Help Recognition: A Probabilistic Interpretation This paper presents a computational model to address one prominent psychological behavior of human beings to recognize images. The basic pursuit of our method can be concluded as that differences among multiple images help visual recognition. Generally speaking, we propose a statistical framework to distinguish what kind of image features capture sufficient category information and what kind of image features are common ones shared in multiple classes. Mathematically, the whole formulation is subject to a generative probabilistic model. Meanwhile, a discriminative functionality is incorporated into the model to interpret the differences among all kinds of images. The whole Bayesian formulation is solved in an Expectation-Maximization paradigm. After finding those discriminative patterns among different images, we design an image categorization algorithm to interpret how these differences help visual recognition within the bag-of-feature framework. The proposed method is verified on a variety of image categorization tasks including outdoor scene images, indoor scene images as well as the airborne SAR images from different perspectives. Introduction Visual information understanding is a long-standing topic that has been extensively discussed in both the communities of vision research and artificial intelligence. Over last decades, a large number of prominent works have been devoted to this challenging task from diverse ranges of perspectives including biological investigation [1][2][3], psychological research [4][5][6] and computational methods [7][8][9][10]. In this paper, we propose a computational approach to reveal the differences of various images and will interpret how the differences help computers to conduct visual recognition tasks. The fundamental observation that inspires our work is the behavior of human beings to distinguish different images. Psychologically, when distinguishing two images, we pay much attention to their differences rather than the common characters. For example, when categorizing two images from bedroom category and office category, the most representative information that helps recognition are the ''beds'' and ''computers''. When finding the patterns for beds, it is easy to tell that the image is from the bedroom. Similarly, when finding the features of a computer in an image, it probably describes an office scenario. However, in both of these two images, other common patterns are also collected, e.g. the wall and the ground. These common features help less to distinguish these two images because they appear in both of them. Such desired information selection mechanism is easily implemented as visual information processing by human brain. Unfortunately, it can hardly be addressed by computers. Therefore, to fill the gap, we propose a probabilistic computational framework to model such behavior by statistical machine learning [11]. To better describe the computational model, we clarify two important concepts in statistics, i.e. generative model and discriminative model [12]. Loosely speaking, generative models, e.g. Gaussian Mixture Models (GMM) and the topic model, play the role of interpreting how the observations are generated/sampled from a probabilistic distribution. They are different from discriminative models in that generative models learn a joint probability distribution of both the observations and the corresponding labels. 
Generative models are used in machine learning for either modeling data directly (i.e. modeling observations drawn from a probability density function), or an intermediate step of forming a conditional probability density function. In contrast, discriminative model generally reveals what makes data different among classes. In machine learning, discriminative model is always described by a conditional probability distribution which expresses the dependence of the labels on the observed features. Discriminative models differ from generative models in that they do not allow one to generate samples from the joint distribution of labels and features. However, for tasks such as classification, they do not require the joint distribution and, in most cases, can yield superior performance. Widely used discriminative models in machine learning include the Support Vector Machine (SVM), Multinomial Logistic Regression (MNL) [12] and Conditional Random Fields (CRF) [13]. In our work, a generative computational model is developed to describe how the image features in different classes are generated by a joint probability density function (pdf) of features and their labels. In addition, a discriminative model is incorporated into the Bayesian model to reveal the discriminative patterns among different classes. The whole model is solved in an Expectation-Maximization manner for parameter estimation and latent variables inference. Using Bayesian model and probabilistic approaches for visual information understanding has been extensively discussed in communities of computer vision and machine learning. In [14], a Bayesian model is used for pattern detection, e.g. circle object, on images. However, their approach shed no light on the discriminative aspect. Probabilistic inspired metric [15,16] has also been incorporated into typical subspace models for visual feature extraction. But these approaches do not explicitly enhance the discriminative property of the extracted features. Different from existing works on image feature extraction, the model discussed in this paper is fully conducted in a Bayesian paradigm. Thanks to the flexibility of probabilistic models, the features selected from our model exactly reveal the discriminative information of an image in a much detailed manner. In previous works, discriminative methods have been exploited into typical image coding framework to improve the image categorization performances. In [17] and [18], the discriminative functionality has been incorporated into the sparse coding method to learn the discriminative dictionary. However, the sparse coding methods pay too many computational costs to learning the dictionary and encoding one image, which requires solving the ' 1 type optimization for multiple times. In [19], a probabilistic based discriminative kernel has been defined to evaluate the distances of two images. It shed light on discriminative learning and could achieve very good performances by enhancing image differences. However, the work [19] only implicitly exploits the image differences and could not explicitly indicate what kind of features on the image are discriminative features. In our approach, thanks to the generative framework, we could exactly tell what kind of features cause the differences among images. Moreover, we will introduce how to construct the discriminative codebook by utilizing these image differences. Then, image encoding may follow the efficient paradigm of soft assignment [20]. 
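As a concrete illustration of that encoding step, the following minimal sketch implements kernel-based soft assignment of local descriptors to a codebook; the codebook size, descriptor dimensionality, and kernel bandwidth are illustrative placeholders rather than values used in the paper.

import numpy as np

def soft_assign_histogram(descriptors, codebook, sigma=0.1):
    """descriptors: (n, d) local features; codebook: (K, d) codewords."""
    # squared Euclidean distances between every descriptor and every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # Gaussian kernel weights
    w /= w.sum(axis=1, keepdims=True) + 1e-12       # each descriptor's votes sum to 1
    hist = w.sum(axis=0)                            # accumulate votes per codeword
    return hist / (hist.sum() + 1e-12)              # image-level histogram

rng = np.random.default_rng(0)
codebook = rng.normal(size=(200, 128))              # e.g., 200 codewords for SIFT
features = rng.normal(size=(500, 128))              # 500 local descriptors of one image
print(soft_assign_histogram(features, codebook).shape)   # (200,)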
To verify the effectiveness of the proposed framework, we apply the model to a number of image categorization tasks including natural scene recognition [21], earth observation (airborne images) recognition, and indoor scene categorization [22]. Meanwhile, the experiments are conducted on bi-class classification and multi-class scene categorization tasks. It is encouraging to see from the bi-class categorization task that, with the proposed framework, we can even accomplish the categorization work without training extra classifiers. The features learned by our model naturally exhibit significant discriminant structures for classification. Materials and Methods In this part, we describe our probabilistic model that interprets how the contents of different images are "generated". To describe the image contents, we follow the bag-of-feature method [23] to extract the image features at the pixel level. In detail, a grid-based method is used to extract the dense SIFT features [24] from the images. The SIFT features are extracted on 16×16 pixel patches sampled every 8 pixels. For the ease of presentation, we define the different image categories as l = {1, ..., m}, which represents m classes. After extracting features from all the images, N feature-and-label pairs, i.e., (x_i, l_i), are obtained, where x_i records the SIFT feature and l_i denotes the label of a certain feature. Method In this paper, we consider local image features as two types: discriminative features and common features. The discriminative features are desired for classification purposes, as they significantly represent the class attributes. On the contrary, the common features appear in different categories and do not imply significant label information. To simultaneously model the generation of feature-and-label pairs, a probabilistic model is defined in equation (1), where w_i is the latent variable, which equals one if the i-th feature is a discriminative feature and zero if the feature belongs to the common feature set. In (1), P_D(·) and P_C(·) are used to represent the joint pdf of the feature-and-label pairs of the discriminative part and the common part, respectively. Meanwhile, the probabilistic generative model for P(x_i, l_i) is subject to the following expression. To mathematically describe the joint pdf of discriminative features and their labels, i.e., P_D(x_i, l_i) = P_D(x_i) P_D(l_i = k | x_i), the conditional probability P_D(l_i = k | x_i) should be explicitly defined. These discriminative features are class-specific and are different between categories. Therefore, this kind of feature implies prominent label information and, mathematically, can be well explained by a discriminative functionality. In this paper, we exploit Multinomial Logistic Regression (MNL) to model the conditional probability P(l_i = k | x_i) for the following two reasons. First, an MNL explicitly minimizes the logistic empirical loss as its objective, and its performance has been widely recognized in a number of practical applications [25,26]. As indicated in [12], the accuracy of MNL is similar to that of the support vector machine (SVM) in addressing many real-world tasks. Meanwhile, the remarkable advantage of MNL for our case is its explicit probabilistic definition and the ease with which it can be incorporated into our probabilistic model seamlessly. Accordingly, we define the pdf for the discriminative part as in equation (3), where θ = (w, b) is the parameter of the MNL.
Besides, a Gaussian mixture model (GMM) is adopted to encode the prior of the discriminative features, P_D(x_i) = Σ_t π_t N(x_i | μ_t, σ_t), where π_t is the mixture coefficient and N(· | μ, σ) is a Gaussian distribution with mean μ and standard deviation σ. In this paper, the number of mixture components is set to m, the number of classes. Similarly, the common-feature generation part in (4) is modeled with a GMM prior over the features. It is worth noting that (4) describes the common features shared by multiple classes; such features therefore exhibit no significant category attribute and, mathematically, are independent of the label, i.e. P_C(l_i = k | x_i) = 1/m, where m is the number of classes. Up to this point, we have explained both the discriminative part and the common part of the model. An overview of the proposed method is provided in Fig. 1. Here, we remark on the parameter set C used in the model: C = {θ, (μ, σ), (μ̂, σ̂)}, where θ and (μ, σ) are the parameters of the logistic regression and the GMM in (3), respectively, and (μ̂, σ̂) are the parameters of the GMM in (4). To estimate the parameters, the maximum-likelihood (ML) objective is maximized, where P(X, L | C) = ∏_{i=1}^{N} P(x_i, l_i | C). Unfortunately, this log-likelihood cannot be maximized analytically because of the latent variables W = {w_1, ..., w_N}. In this paper, we use the Expectation-Maximization (EM) algorithm to solve the model efficiently. An EM solution The EM method is widely used in statistics to solve probabilistic models with latent variables [27,28]. In [29], it is shown that the EM method establishes a lower bound on the likelihood in (5) using Jensen's inequality. The lower bound is then iteratively maximized, and the whole optimization alternates between two steps, the E-step and the M-step. In the E-step, the conditional expectation of the complete-data likelihood is computed with the current parameter estimates. In the M-step, the parameters are updated by maximizing the conditional expectation obtained in the E-step. The EM algorithm is a special case of the more general minorization-maximization (MM) algorithm [30]. In our model, we first calculate the conditional expectation of the complete-data likelihood [31], i.e. the Q-function in (6); for ease of illustration, we define the data set as D = {X, L}. Then, in the M-step, the Q-function is maximized to obtain the new parameters, i.e. C^{k+1} = arg max_C Q(C, C^k). The E-step and the M-step are iterated until convergence is reached. In our simulation, the convergence criterion is regarded as satisfied when the change of the objective between two iterations falls below 10^{-3}. In our model, one critical step is to estimate the conditional probabilities of the latent variables given the model parameters. We define u_i and ū_i in (7): physically, u_i represents the probability that the i-th feature is a discriminative feature, while ū_i records the probability that the i-th feature belongs to the common feature set. Accordingly, we present the whole EM framework in Table 1: at iteration k, the E-step estimates u_i^k and ū_i^k according to (7) and evaluates Q(C, C^k) according to (6), and the M-step then updates the parameters; the two steps are repeated until convergence. The detailed derivations of the EM algorithm and the parameter updating rules are provided in File S1. To solve the EM model in Table 1, the MNL in the discriminative part is initialized by fitting the original data with a logistic regression.
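A minimal sketch of the E-step and a simplified M-step is given below, reusing the mnl_conditional helper from the previous snippet. All names are illustrative; in the full model the M-step would also refit the MNL and both GMMs with the responsibilities as per-sample weights.

```python
import numpy as np

def gmm_pdf(x, weights, means, stds):
    """Density of an isotropic Gaussian mixture evaluated at a single feature x."""
    d = x.shape[0]
    dens = 0.0
    for w, mu, s in zip(weights, means, stds):
        norm = (2.0 * np.pi * s ** 2) ** (-d / 2.0)
        dens += w * norm * np.exp(-np.sum((x - mu) ** 2) / (2.0 * s ** 2))
    return dens

def e_step(X, L, pi_d, W, b, disc_gmm, comm_gmm, m):
    """E-step: posterior probability u_i that feature i is discriminative,
    given the current parameters; the common part is label-independent (1/m)."""
    u = np.empty(len(X))
    for i, (x, l) in enumerate(zip(X, L)):
        p_d = pi_d * gmm_pdf(x, *disc_gmm) * mnl_conditional(x, W, b)[l]
        p_c = (1.0 - pi_d) * gmm_pdf(x, *comm_gmm) / m
        u[i] = p_d / (p_d + p_c)
    return u                  # the complementary responsibility is 1 - u_i

def em_iteration(X, L, pi_d, W, b, disc_gmm, comm_gmm, m):
    """One EM sweep.  Only the mixing-weight update is shown explicitly; the full
    M-step refits the MNL and both GMMs using u_i (resp. 1 - u_i) as weights."""
    u = e_step(X, L, pi_d, W, b, disc_gmm, comm_gmm, m)
    return u, u.mean()
```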
In the other part of the model, which describes the common features, a Gaussian mixture model is fit to all the local image features, and the obtained parameters are used for initialization. Feature Encoding by Exploiting Image Differences In the previous parts, we described how to determine the discriminative patterns from multiple images. However, the final goal of many scene recognition tasks is categorization, and the selected features are only local features extracted at the pixel level. Therefore, to capture the information of the whole image, further coding steps are needed to generate features at the image level. In computer vision, one prevalent method for image coding is the bag-of-features method, in which all the local features in an image are assigned to a dictionary of multiple codewords. To make this paper self-contained, we briefly review the procedure of the bag-of-features method for image-level feature encoding. As discussed previously, each image yields many local SIFT features, which cannot be used directly for image-level understanding. To describe an image, an image-level feature must be generated from the local descriptors. One prevalent approach is to construct a codebook and then assign the local descriptors to it, forming the image-level feature for classification. Codebook construction is a training procedure that clusters bags of local features, e.g. SIFT features, from the training images into n representative codewords. In a second step, the features of the test images are assigned to their corresponding codewords in the codebook, and an image-level feature, i.e. a histogram, is generated during codeword assignment to represent each image. Fig. 2 summarizes the main procedure of codebook generation: the local features of each image are extracted, and then thousands of local descriptors from multiple images are clustered into n centers, which become the codewords of the codebook. The codewords in the dictionary are therefore not determined by any specific local feature; instead, they capture the common properties of all the local features from the training images. After the codebook is constructed, the local image features are assigned to the codewords to form an image-level histogram; in this paper, we adopt the soft assignment method introduced in [20] for codeword assignment. From the above discussion, it is apparent that the codewords play a critical role in generating the image-level feature. It is therefore desirable that the generated codewords enhance the discriminative information of different images. Fortunately, the proposed Bayesian model provides feature types (discriminative or common) for all the local features, and it is natural to exploit this prior to generate more descriptive codewords for image categorization. In typical codeword generation, all the local features from the training images are clustered to generate the codewords. In this work, the codewords are clustered using only the discriminative local features selected by our algorithm: we regard x_i as a discriminative feature if u_i > τ, with τ fixed at 0.9 in our simulations. The selected discriminative features reveal the differences among multiple images; therefore, when these discriminative local features are used for clustering, the resulting centers, i.e. the codewords, may better reveal the discriminative information in the images.
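The discriminative-codebook construction and the soft-assignment encoding described above could be sketched as follows; scikit-learn's KMeans and a Gaussian kernel for the soft assignment are illustrative choices, not the paper's exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_discriminative_codebook(features, u, n_words=1000, tau=0.9):
    """Cluster only the features flagged as discriminative (u_i > tau) into
    n_words codewords, instead of clustering all local features."""
    disc = features[u > tau]
    km = KMeans(n_clusters=n_words, n_init=10).fit(disc)
    return km.cluster_centers_

def soft_assign_histogram(image_features, codebook, sigma=1.0):
    """Image-level descriptor via soft assignment: each local feature votes for
    every codeword with a Gaussian kernel weight; votes are pooled and normalized."""
    d2 = ((image_features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)   # per-feature assignment weights
    hist = w.sum(axis=0)
    return hist / hist.sum()
```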
Results In this part, we experimentally verify the performance of the proposed statistical model on real-world images. We report some numerical properties of the statistical model, and then two experiments, on bi-class and multi-class classification, are conducted to verify the effectiveness of the selected discriminative features for image categorization. Before the experimental discussion, we first describe the datasets used in the experiments. Data Collection The data are drawn from three datasets: a Synthetic Aperture Radar (SAR) dataset, the Fifteen Scene dataset, and the MIT Indoor Scene dataset, among which the SAR dataset was established by us. SAR consists of six ground categories: city, countryside, mountain, river, seaside, and water area. Each category in SAR contains more than 50 images. Most of these scene images were captured in China with our own airborne SAR devices. The original SAR data were converted into gray-scale images, and the experiments were performed only on these pseudo-gray-scale images, which were cropped from a large SAR map to around 500×500 pixels. The Fifteen Scene dataset comprises fifteen natural scene categories released by Fei-Fei et al. [21]. The image classes include bedroom, suburb scene, industrial, kitchen, living room, coast, forest, highway, inside city, mountain, open country, street, office, store, and tall building. Each category in the Fifteen Scene dataset contains over 200 images, and the dataset has been widely used in computer vision as a benchmark for scene categorization. In our bi-class recognition experiments, most of the image classes are taken from the Fifteen Scene dataset. In high-level vision, indoor scene recognition is a challenging open problem: most scene recognition models that perform well on outdoor scenes perform poorly on indoor scenes. The MIT Indoor Scene dataset was released by Quattoni et al. [22]. It contains 67 indoor categories and a total of 15,620 images, ranging from airport inside, bookstore, library, mall, and video store to scenes such as warehouse, dining room, fast food restaurant, and computer room. The number of images varies across categories, but there are at least 100 images per category. This dataset has become a benchmark for indoor scene recognition. Numerical Performance For numerical analysis, we first report results on the convergence of the learning model. As our statistical model is solved within an EM framework, we show the objective value of the Q-function, i.e. Eq. (6), in Fig. 3(a). The Q-function of the EM algorithm increases with the iterations, and the whole optimization framework reaches convergence within 20 iterations. We also report the learning results of the whole EM algorithm. The basic purpose of Table 1 is to find the features that capture significant label information for the different classes. To this end, latent variables u_i are introduced for each feature in the training process. According to Table 1, the discriminative saliency variables u_i are the desired output of the probability model: if u_i is large, the corresponding feature x_i is probably a discriminative feature, whereas a small u_i implies that x_i is a common feature shared by multiple classes.
To examine the numerical properties of the learning results, we conduct an experiment on a bi-class classification task. We apply our model to the Seaside versus Water Area classification task of Fig. 4(a). The statistics of the distribution of u_i are reported in Fig. 3(b), where the abscissa divides the probability range [0, 1] into ten bins and the ordinate records how many local features in the training set fall into each bin. From the results, it is clear that most features are separated into two groups: discriminative features (with large u_i) and common features (with small u_i). Latent variables larger than 0.9 indicate that the corresponding features are very likely to be discriminative and will probably help recognition. In contrast, those on the left with probability lower than 0.1 denote common features present in both classes, which contribute little to recognition. These results also verify that our statistical model can effectively distinguish the two kinds of features in practical images. Results on Bi-class Recognition In this subsection, the proposed model is applied to real image classification tasks. First, we perform discriminative feature selection experiments on bi-class recognition cases. Six pairs of classes covering indoor, outdoor, and aerial scenarios are tested. For each pair of classes, our model tries to separate the common features from the features that are more distinguishable for classification (discriminative features). Sample images for each classification task and our feature selection results are provided in sub-figures (a) to (f) of Fig. 4. In these images, the red dots represent the discriminative features (i.e. the differences) and the yellow dots are common features shared by the two classes. Specifically, in our experiment, all features from the images of each class are clustered into 4000 centers by K-means. The clustered features (up to 8000 for two classes) are then included in the training process of the model. Discriminative and common clustered features are determined from the estimates of their latent variables after the convergence of the EM algorithm. We analyze the visual results in Fig. 4. In sub-figure (a), the images of the Seaside class and the Water Area class both contain water regions; the water regions are therefore selected as common features and the other contents are labeled as discriminative features. A similar observation holds in Fig. 4(b) to (d), where the sky is selected as a common feature and the contents carrying significant category information are denoted as differences. In Fig. 4(e), for the indoor scenario, the walls are regarded as common features, while the bed in the bedroom class and the computer in the office class are selected as discriminative features. Finally, when distinguishing images of the airport class and the bookstore class in Fig. 4(f), the ground and the walls provide little category information; the discriminative patterns are mainly distributed on the counters (airport) and the books (bookstore). The discussion above only provides intuitive results. More rigorously, we verify the effectiveness of discriminative feature selection (DFS) through quantitative analysis. The prominent contribution of this work is that we can separate the discriminative local features from the common ones in multiple images.
Therefore, by utilizing these discriminative features, a discriminative codebook (DC) is constructed for image-level histogram generation. The DC differs from a typical codebook (TC) in that its codewords are the centers of the discriminative local features obtained after Bayesian learning, whereas in a typical codebook the codewords are clustered from all the local SIFT features of the training images. For image classification in the typical BoF framework, the image features (obtained after image encoding) are fed into a discriminative classifier, e.g. a Support Vector Machine (SVM), for the final categorization. However, in this bi-class categorization task, if the discriminative codebook is utilized, we can classify the images by their feature saliency alone, without training an extra classifier. The basic idea of this feature-saliency-based bi-class categorization can be illustrated by the following toy demonstration. We generate the discriminative codebook for the bi-class categorization task in Fig. 4(a). After generating the codebook using only the discriminative features, we assign the images to the codewords according to [20], and the visual results are provided in Fig. 5. It is evident that in the Seaside histogram, more SIFT features are assigned to the first 100 codewords; these 100 codewords were generated by clustering all the discriminative features of the Seaside class determined by our model. Therefore, by simply computing the energy distribution of the histogram, the image can be classified. In the previous paragraph, we introduced the feature-saliency classifier (FSC) for bi-class image classification using the discriminative codebook. Of course, the same features can also be classified in the routine way by training an SVM. We experimentally compare these two classifiers on the same features in Table 2. We also report the classification results for features generated with the typical codebook; in the typical codebook, the codewords carry no class information, so the images cannot be classified by feature saliency and are classified only by training an SVM. In this simulation, 1000 codewords are used in the codebook. For training, 10 images per category in the SAR dataset are randomly selected as training samples; for the remaining five tests on natural images, 50 images per category are selected as training samples. Each experiment is randomly repeated 10 times, and the average classification accuracies and their standard deviations are tabulated in Table 2. In the table, the first and second rows report the classification results obtained with the DFS codebook: the first row shows the result with the FSC, and the second row shows the accuracy with an SVM classifier. In these two rows, the same features are fed into the two different classifiers. From the results, the SVM outperforms the FSC in five of the six tasks. This improvement is not surprising, because the SVM is a strong classifier that requires extra training cost, whereas the FSC only votes for the classification by counting feature saliency and requires no training. By reporting the FSC results here, we simply wish to show that the features learned after DFS already exhibit significant class attributes and can be classified even by a classifier as simple as the FSC.
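A sketch of the feature-saliency classifier for the bi-class case is shown below, assuming the codebook concatenates the class-specific discriminative codewords (e.g., 100 per class, as in the toy demonstration); the function name and split are illustrative.

```python
def fsc_predict(hist, words_per_class=(100, 100)):
    """Feature-saliency classifier for two classes: the codebook concatenates the
    class-specific discriminative codewords, so the image is assigned to the class
    whose block of codewords receives more histogram mass ("energy")."""
    n_a, n_b = words_per_class
    energy_a = hist[:n_a].sum()
    energy_b = hist[n_a:n_a + n_b].sum()
    return 0 if energy_a >= energy_b else 1
```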
From the results, it is also interesting to note that, for the same features with the DFS codebook, the performance of the FSC is comparable to that of the SVM. The third row of the table shows the classification results with the typical codebook, i.e. a codebook generated by simply clustering all the local features of the images; its codewords obviously have no discriminative character. Comparing the second and third rows, we find that, with the same classifier, the DFS codebook performs much better than the typical codebook. This finding verifies that utilizing the differences among images indeed helps recognition. Results on Multi-class Categorization In the previous parts, the proposed algorithm was evaluated on bi-class classification tasks. Here we consider the more general case of applying the proposed model to improve classification accuracy on multi-class tasks. For the multi-class case, discriminative feature selection is carried out in a one-vs-rest paradigm (sketched in the code below). In a nutshell, assume there are m classes in total. To select the discriminative features for the p-th class, the features of the p-th class are regarded as positive features and the features of the other m−1 classes are treated as negative ones. For ease of computation, both the positive and the negative features are clustered into 4000 centers. After discriminative feature selection by the EM iterations, we keep only the discriminative features of the positive set, denoted F_p. The features in F_p are those that help to distinguish the images of the p-th class from the others. These procedures are repeated m times, and the final codewords are clustered from all the features in F_p, for p = 1, ..., m. To verify the results, all three datasets discussed above are included in the experiment. In the SAR dataset, 10 images per category are used as training samples and the rest as test samples. In the Fifteen Scene dataset, 100 images per category are randomly selected for training and the rest for testing. In the Indoor Scene dataset, we use the training and test splits provided in [22]. In the simulations, a multi-class SVM is applied, and we strictly follow the implementations of previous works [20,32]. In addition, we vary the number of codewords over 200, 400, 600, and 800 to investigate the robustness of the proposed method. Moreover, two benchmark feature coding algorithms, hard assignment [23] and kernel assignment [20], are applied to the selected codewords for image-level feature generation. As indicated in [22], the categorization of indoor images is quite challenging, and we therefore further exploit a 2-level spatial pyramid [23] to improve accuracy on the MIT Indoor Scene dataset.
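A sketch of the one-vs-rest selection loop described above, under the stated assumptions; select_discriminative stands in for the EM-based selection and is a hypothetical placeholder, as is the use of scikit-learn's KMeans.

```python
import numpy as np
from sklearn.cluster import KMeans

def one_vs_rest_codewords(class_features, select_discriminative, n_centers=4000):
    """For each class p, cluster its features (positives) and the pooled features of
    the other classes (negatives), run the EM-based selection, and keep only the
    discriminative positives F_p; the union of all F_p forms the final codeword pool."""
    kept = []
    for p, pos in enumerate(class_features):
        neg = np.vstack([f for q, f in enumerate(class_features) if q != p])
        pos_c = KMeans(n_clusters=n_centers, n_init=5).fit(pos).cluster_centers_
        neg_c = KMeans(n_clusters=n_centers, n_init=5).fit(neg).cluster_centers_
        kept.append(select_discriminative(pos_c, neg_c))  # EM selection (placeholder)
    return np.vstack(kept)
```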
Fig. 6 shows the average classification accuracy on the three datasets for different codebook sizes. As the number of codewords increases, the classification accuracy improves correspondingly. As expected, with the same codebook, kernel assignment achieves higher performance than hard assignment. In Fig. 6, solid lines refer to the categorization accuracies achieved with codebooks generated by discriminative feature selection (DFS) using our proposed model, whereas dashed lines represent those obtained with codebooks generated by simply clustering all features from multiple classes. By using DFS to generate the codebooks, the categorization accuracies on the three datasets are clearly enhanced for all codebook sizes. It is worth noting that on the SAR dataset, which has only six categories, DFS brings a considerable improvement in classification performance. In contrast, for the challenging Indoor Scene dataset with 67 categories, the enhancement is less significant. We attribute this to the difficulty of selecting discriminative features from a large number of classes: it is hard to separate discriminative features from the common features among as many as 67 classes, while it is comparatively easy for 6 classes. Even on this difficult task, however, our DFS method still improves the classification accuracy by 1.3% on the MIT Indoor Scene dataset with 800 codewords. In the discussion above, we verified the effectiveness of DFS for image categorization. Related work on image categorization [23] has indicated that the classification accuracy can be significantly improved by using pyramid methods. To obtain classification results comparable with other state-of-the-art methods, we further follow the idea in [23] and conduct classification with a 3-level pyramid; the number of codewords is fixed at 800. The experiments are repeated 10 times, and the corresponding results (average accuracy ± std %) are reported in Table 3. For the Indoor Scene dataset, to make the result comparable with previously published papers, we directly use the training and testing lists provided in [22], and the experiment is run only once. Table 3 indicates that with the 3-level spatial pyramid and a larger number of codewords, the classification performance is significantly enhanced. On the SAR dataset, the classification accuracy exceeds 87%. On the Fifteen Scene dataset, the classification accuracy reaches 83.7%, which is comparable with many state-of-the-art methods. On the very challenging Indoor Scene dataset, the DFS method with soft assignment achieves a categorization rate of 38.6%, which is higher than some recently reported results obtained with the same training and testing samples [22]. Limitations of Study Although much remains to be done, our work has introduced the preliminary idea of finding image differences with a generative probabilistic model; in other words, we have shown that discovering image differences among multiple classes indeed helps recognition tasks. Although the presented studies have yielded some preliminary findings, a few limitations remain in the current work. First, the discovery of image differences is implemented entirely in a computational statistical framework: our work relies on a data-driven manner to identify the discriminative patterns, so the learning results are partially determined by the kind of local descriptors used as the input of the model. In this work, we chose the most prevalent image descriptor, the SIFT feature, to represent the contents of images.
However, describing image content with a single feature may be insufficient, which explains some artifacts in the discriminative feature selection results in Fig. 4. In future work, we may consider finding the differences among images using multiple descriptors. Besides, with regard to image classification accuracy, the performance of this method is not the highest reported on some benchmarks [19]. In previous work on image categorization, multiple strategies have been adopted to improve classification performance; for example, multiple descriptors were used in some systems with nonlinear classifiers, and the final classification was obtained by voting over different discriminative machines. It is therefore not surprising to find higher accuracies in those papers. In contrast, in this paper we used only one kind of descriptor with a single SVM classifier for image categorization. Even with such a simple implementation, after the spatial pyramid enhancement [23], the classification results are very competitive with previously published results that used multiple approaches for image understanding. Meanwhile, we would like to emphasize that the core contribution of this work is not the pursuit of the highest classification accuracy; the aim of this paper is to reveal that differences among images indeed help recognition, and the experiments designed in this work all focus on this goal. Conclusions This paper proposes a generative model to interpret how discriminative and common features are generated in images from multiple classes. The whole mathematical framework can be efficiently solved in an EM paradigm with very robust convergence. The experimental results from different perspectives, including visual comparisons, bi-class categorization, and multi-class classification, all verify that the differences among images are critical for improving classification performance. Although we focus on the task of image categorization, the applicability of the proposed model clearly extends beyond the scope discussed in this paper: the statistical model is a general method for distinguishing discriminative features from common ones and can therefore be applied to a variety of practical applications that involve discriminative feature selection. Supporting Information File S1 Derivations for the EM algorithm. (PDF) Author Contributions
7,992.6
2013-06-03T00:00:00.000
[ "Computer Science" ]
Reduction of Nitroaromatics by Gold Nanoparticles on Porous Silicon Fabricated Using Metal-Assisted Chemical Etching In this study, we investigated the use of porous silicon (PSi) fabricated using metal-assisted chemical etching (MACE) as a substrate for the deposition of Au nanoparticles (NPs) for the reduction of nitroaromatic compounds. PSi provides a high surface area for the deposition of Au NPs, and MACE allows for the fabrication of a well-defined porous structure in a single step. We used the reduction of p-nitroaniline as a model reaction to evaluate the catalytic activity of Au NPs on PSi. The results indicate that the Au NPs on the PSi exhibited excellent catalytic activity, which was affected by the etching time. Overall, our results highlighted the potential of PSi fabricated using MACE as a substrate for the deposition of metal NPs for catalytic applications. Introduction Noble metal nanoparticles (NPs) have attracted considerable interest because of their unique chemical and physical properties, including their electronic [1], optical [2], magnetic [3], and catalytic properties [4]. These properties distinguish noble metal NPs from bulk metals and make them highly useful for a wide range of applications, such as catalysis, sensing, optics, and fuel cells. In various oxidation and reduction reactions, Au NPs have demonstrated great potential as highly efficient catalysts [5]. However, the tendency of NPs to aggregate in solution because of their high surface energy may reduce their catalytic activity. To solve this problem, different approaches have been explored for stabilizing NPs, including capping them with organic molecules [6] or polymers [7] or dispersing them onto solid supports [8]. However, although organic molecules or polymers can serve as capping agents to prevent aggregation, they may also reduce the catalytic activity of NPs. By contrast, solid supports offer higher stability but often require time-consuming separation procedures to isolate the catalysts from the reaction system. Two-dimensional (2D) graphene, which has unique properties, such as a high specific surface area, excellent electrical conductivity, high charge carrier mobility, and high mechanical strength, has emerged as a promising support for various types of NPs [9]. Considerable interest has also been directed toward constructing size- and shape-controlled noble metal NPs supported on 2D carbon materials [10]. However, at the nanoscale, metal particles with highly active centers are not at thermodynamic equilibrium and are prone to aggregation even on solid supports. Therefore, strategies for stabilizing NPs, such as capping them with multifarious stabilizers, designing core-shell structures, or anchoring them onto specific supports, must be explored. Aromatic amines are crucial building blocks in organic synthesis and in the pharmaceutical industry [11]. Their synthesis can be achieved by reducing the corresponding nitroaromatics [12]. However, these reduction reactions require catalysts for efficient conversion [13]. Although hydrogen gas and NaBH4 are commonly used as reducing agents, noble metal NPs, such as Pt, Au, Ag, and Pd NPs, have been reported to be effective catalysts in the presence of NaBH4 [14][15][16][17][18]. However, the aggregation of these NPs in the reaction system limits their catalytic efficiency, necessitating the development of novel techniques for ensuring well-dispersed stabilization of the NPs.
To this end, various materials have been explored for stabilizing nanosized noble metal catalysts for the reduction of nitroaromatics. Kim et al. [19] synthesized a core-satellite structure using poly(N-isopropylacrylamide-acrylamide) and Au NPs for the photothermal-mediated catalytic reduction of 4-nitrophenol. Dong et al. [20] prepared Ag NPs dispersed in a nano-silica nanocatalyst, which exhibited excellent catalytic activity in the reduction of 4-nitrophenol and 2-nitroaniline using NaBH4 in water at room temperature. Importantly, the nanocatalyst could be easily recovered and reused for at least ten cycles in both reduction reactions, demonstrating its good stability. Pandey et al. [21] synthesized highly stable dispersions of Pt NPs in guar gum, a natural, non-toxic, and eco-friendly biopolymer serving as both the reducing and capping agent in an aqueous medium. The catalytic activity of the biopolymer-supported Pt NPs was demonstrated in the liquid-phase reduction of p-nitrophenol to p-aminophenol. The catalytic reduction of nitroaromatics achieved a remarkable efficiency of 97% within a total reaction time of 320 s at room temperature. Graphene oxide (GO) and reduced graphene oxide (r-GO) have also been utilized to stabilize noble metal NPs [22,23]. Without the need for additional reductants, surfactants, or protecting ligands, metallic noble metals were deposited on partially r-GO mats through a simple redox reaction between noble metal precursors and GO in an aqueous solution. These GO- or r-GO-supported noble metal NPs exhibited excellent catalytic activity for the selective reduction of nitroaromatic compounds. Cai et al. [24] developed a novel nanostructured catalyst comprising small and uniform Au NPs with a diameter of approximately 5 nm and ceria nanotubes (CeO2 NTs). The catalytic performance of the Au NPs/CeO2 NT catalyst in the reduction of 4-nitrophenol to 4-aminophenol was significantly higher than that of similar catalysts composed of chemically prepared Au NPs or commercially available CeO2 powder as the support. The superior catalytic activity can be attributed to the unique surface properties of the synthesized Au NPs/CeO2 NT catalyst, as well as the interaction between the barrier-free surface of the Au NPs and the surface defects (oxygen vacancies) of the CeO2 NTs, leading to the presence of oxidized Au species. Chen et al. [25] conducted a comprehensive investigation of the reduction of p-nitrophenol by NaBH4 in the presence of raspberry-like composite sub-microspheres composed of poly(allylamine hydrochloride)-modified poly(glycidyl methacrylate) polymer with tunable Au NPs. They systematically examined the effects of polyelectrolyte concentration, the ratio of polymer spheres to Au NPs, and solution pH during composite synthesis on various reaction parameters, such as the induction period, reaction time, average reaction rate, and average turnover frequency. They also proposed a mechanism to explain the observed enhancement in catalytic activity, which involves the active epoxy groups present on the polymer spheres and the strong adsorption of p-nitrophenolate anions onto the positively charged spheres. Metal-assisted chemical etching (MACE) is a simple and versatile method for fabricating porous silicon (PSi) structures without requiring the electrodes used in electrochemical etching [26].
The main principle of MACE is to deposit a noble metal on the surface of a Si substrate and then immerse the substrate in an etching solution containing fluoride and an oxidizing agent to induce an etching reaction [27]. MACE can be used to produce various PSi structures for which the pore size, porosity, and surface morphology can be controlled by adjusting the composition and concentration of the etching solution as well as the type, thickness, and distribution of the metal catalyst [28]. In conventional MACE, a mixture of HF and H2O2 is commonly used as the etching solution [29]. HF reacts with the oxidized silicon on the substrate surface to form fluorosilicic acid, which dissolves the Si surface. Simultaneously, the noble metal catalyst on the surface of the Si substrate serves as an active site that catalyzes the etching reaction by promoting the generation of holes (positive charges) in the Si substrate through the reduction of H2O2. These holes are then injected into the interface between the Si substrate and the metal catalyst, resulting in the oxidation and dissolution of the Si substrate in the etching solution [30]. During the etching process, the metal catalyst serves as a cathodic reaction zone, whereas the Si substrate serves as an anodic reaction zone [31]. Because of their high surface area and tunable pore size, PSi substrates have been widely used for the deposition of metal NPs for catalyzing nitroaromatic compounds [32][33][34], albeit fabricated using different techniques. To our knowledge, this is the first study to explore the use of PSi fabricated using MACE as a substrate for the deposition of Au NPs for the reduction of nitroaromatic compounds. As a substrate, PSi provides a high surface area for the deposition of Au NPs, and MACE allows for the fabrication of a well-defined porous structure with a tunable pore size. In this study, we used scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS) to characterize the Au NPs on PSi. We used p-nitroaniline (PNA) reduction as a model reaction to evaluate the catalytic activity of the Au NPs. Our results indicate that the Au NPs on PSi exhibited excellent catalytic activity toward the reduction of PNA. They also indicate that the catalytic activity of the PSi substrate was affected by the etching time. That is, as the etching time increased, the surface area of the PSi increased, which increased the atomic weight percentage of Au NPs immobilized on the surface. Because the surface area available for the catalytic reaction increased, the catalytic activity also increased. However, at a certain point, further increasing the etching time resulted in a decrease in catalytic activity, which may have been attributable to the aggregation of Au NPs. Overall, these findings may have major implications for the development of efficient and cost-effective catalysts for various organic transformations. Materials and Methods Briefly, N-type Si wafers with a resistivity of 1-10 Ω·cm and a crystal orientation of (100) were cut using a glass cutter into 1.5 × 1.5 cm² square pieces, ultrasonically cleaned with methanol, acetone, and deionized (DI) water for 15 min in each solution, and dried with a nitrogen gas gun. The cleaned Si substrates were then placed vertically in an acid-resistant 20 mL Teflon cell. Subsequently, a 12 mL MACE mixture containing HF (48%), H2O2 (30%), DI water, and HAuCl4 (3 mM) at a volume ratio of 1:5:2:4 was added to the Teflon cell.
Etching was then conducted at room temperature without stirring for different durations. After the etching process was completed, the PSi substrate was removed from the etching solution and rinsed with anhydrous alcohol and DI water to remove any residual HF solution. Finally, the PSi substrate was dried using a nitrogen gas gun. The fabrication process of electrochemically etched PSi is described in detail in previous studies [35]. In detail, a ±20 V, 40 W source measure unit (PXI 4130) was used as the power supply, operated in constant-voltage mode to yield a current density of 30 mA/cm² for 30 min. A 12 mL etching solution containing HF, ethanol, and DI water at a volume ratio of 1:2:1 was added. A catalytic solution containing 10 mL of PNA at various concentrations and 33 mg of NaBH4 was then premixed for 1 h using a magnetic stirrer. The prepared PSi and the catalytic solution were then placed together in a glass vial for absorbance measurement at various time intervals. Finally, surface morphological analysis, elemental analysis, and EDS mapping were performed using a multifunction environmental field emission scanning electron microscope equipped with an energy-dispersive X-ray spectrometer (Hitachi SU-5000, Hitachi, Tokyo, Japan). Results and Discussion The conversion of a nitroaromatic molecule to an aniline molecule involves the hydrogenation-dehydration of nitroaniline to form a nitrogen-oxygen double bond, followed by hydrogenation to produce hydroxylamine, and finally, further hydrogenation-dehydration to yield the product, p-phenylenediamine, as shown in Figure 1a. The hydrogenation-dehydration of nitroaniline: nitroaniline undergoes hydrogenation-dehydration under appropriate conditions and in the presence of a suitable catalyst such as nickel or platinum. This reaction leads to the removal of the nitro group (NO2) and the formation of a nitrogen-oxygen double bond. In the compound formed in the previous step, the nitrogen-oxygen double bond reacts with hydrogen gas, resulting in the formation of hydroxylamine (NH2OH); this step involves a hydrogenation reaction in which the nitrogen-oxygen double bond is reduced to an amino group. The final step involves the hydrogenation-dehydration of hydroxylamine: under suitable conditions, hydroxylamine reacts with hydrogen gas once again, undergoing dehydration. This reaction removes the hydroxyl group (OH) and ultimately yields p-phenylenediamine. Figure 1b depicts the reduction of PNA to p-phenylenediamine (PPD) in the presence of NaBH4 through the catalytic reaction of Au NPs inside PSi fabricated using MACE. The mechanism of nitroaromatic reduction catalyzed by Au NPs involves four steps: adsorption, hydrogen atom generation, activation, and product formation. In the first step, a nitroaromatic molecule adsorbs, either chemically or physically, onto the Au surface, where it interacts with the surface electrons of the Au NPs. In the second step, hydrogen gas molecules adsorb on the Au surface and dissociate into hydrogen atoms; this dissociation is promoted by Au, which acts as a catalyst. In the third step, the nitroaromatic molecule is activated by the adsorbed hydrogen atoms, and the nitro group is reduced to an amino group, forming an intermediate aminoaromatic compound.
In the fourth step, the intermediate aminoaromatic compound undergoes further reaction to form the corresponding reduced product, which desorbs from the Au surface through different pathways. In the presence of NaBH4, the conversion of PNA to PPD was followed by monitoring the absorption spectra of the solution. In this reaction, the N-H bond of nitroaniline underwent a hydrogen transfer process with the hydrogen atoms of NaBH4, resulting in the formation of PPD. Over time, the reduction of PNA lowered the degree of absorption, and a new absorption peak corresponding to PPD emerged. Consequently, the color of the solution changed from yellow, indicating the presence of PNA, to transparent, indicating the presence of PPD. This reaction was visible to the naked eye. As shown in Figure 2a, the beakers on the right and left contain identical mixtures of PNA and NaBH4. However, the beaker on the left contained MACE-PSi, whereas that on the right did not. After the mixture was allowed to stand for 1 h, the difference in color between the two solutions became evident to the naked eye. Specifically, the solution on the left, which contained PSi, changed from yellow to transparent, whereas the solution on the right, which lacked PSi, remained unchanged. During the catalytic reaction, absorption spectra were used to monitor the reduction of PNA to PPD (Figure 2b). In the ultraviolet-visible spectral region, the peak position of PNA was located between 300 and 400 nm, which differed from that of PPD. Therefore, whether a reaction occurred was determined by comparing the absorption spectra before and after the reaction. As the reaction proceeded, the characteristic peak of PNA at approximately 400 nm gradually decreased, whereas the peak of PPD at approximately 300 nm gradually increased. The disappearance of the peak at 400 nm and the appearance of the peak at 300 nm verified the successful reduction of PNA to PPD. As presented in Figure 2c, no changes in absorption were observed in a control experiment without the addition of PSi: the absorption spectra of the PNA/NaBH4 solution remained unchanged, confirming the catalytic function of MACE-PSi. For comparison, electrochemically etched PSi was also used as a catalytic substrate, the same catalytic experiments were conducted, and the temporal changes in solution absorption were recorded. As indicated in Figure 2d, because electrochemically etched PSi contains no metallic Au, the absorbance of the solution remained unchanged.
Generally, the etching time of PSi is the most direct experimental parameter for modifying the surface morphology of PSi. Studies have indicated that the surface morphology of PSi influences the deposition of Au thin films in surface-enhanced Raman scattering studies [35]. In the current study, PSi was fabricated over various etching times, and its catalytic effect on PNA was compared under the same catalytic conditions. The correlation between etching time and catalytic efficiency was also investigated by analyzing changes in the absorbance spectrum of PNA at a peak wavelength of 380 nm. Figure 3a presents the time-dependent absorbance peaks of PNA at 380 nm observed in the presence of PSi prepared with various etching times. All PSi substrates prepared with the various etching times served as catalyst materials for PNA; however, the sample etched for 20 min exhibited the highest catalytic performance. The initial absorbance peak intensity (A0) was compared with the absorbance peak intensity after a given elapsed time (At), and a logarithmic calculation was performed to establish the correlation between the calculated values and the elapsed time (Figure 3b). According to the results, the PSi sample etched for 20 min exhibited the highest catalytic performance for PNA.
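The logarithmic analysis above amounts to fitting ln(At/A0) against elapsed time, as formalized in the next paragraph. A minimal sketch of this fit is given below; the function name and the numbers in the commented usage line are purely illustrative, not measured data.

```python
import numpy as np

def pseudo_first_order_k(times, absorbances):
    """Apparent rate constant k from ln(A_t / A_0) = -k * t, estimated as the
    negated slope of a least-squares line through the logarithmic data."""
    t = np.asarray(times, dtype=float)
    a = np.asarray(absorbances, dtype=float)
    y = np.log(a / a[0])
    slope, _ = np.polyfit(t, y, 1)
    return -slope

# Illustrative numbers only (not measured data):
# pseudo_first_order_k([0, 300, 600, 900], [1.00, 0.78, 0.61, 0.47])  # k in s^-1
```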
Therefore, the reduction rate constant (k) was calculated assuming a pseudo-first-order reaction [14], as shown below: ln(At/A0) = −kt, where A0 and At are the absorbance at 380 nm initially and after elapsed time t, respectively. The results are presented in Figure 3c. The PSi sample etched for 20 min exhibited the highest rate constant, followed by the sample etched for 30 min, whose rate constant was higher than that of the sample etched for 10 min. Of the samples, the PSi sample etched for 40 min exhibited the lowest catalytic performance for PNA. Auric acid (HAuCl4) is utilized as the metal catalyst in the MACE process to fabricate PSi, similar to the use of silver nitrate [28]. Auric acid, a metal salt containing gold (Au), acts as a catalyst in the HF etching solution during the MACE process. The gold ions (Au3+) within auric acid react with silicon fluoride (SiF6^2−) present in the etching solution. This reaction leads to the dissolution of Au species and the etching of the silicon material. By adjusting the concentration of auric acid, it is possible to control the pore formation and the structural characteristics of the porous silicon during the MACE process [29]. When auric acid is employed as the metal catalyst in MACE, the internal structure of the PSi can contain gold nanoparticles [28]. This occurs because Au3+ from HAuCl4 is reduced during the etching process and deposited inside the pores of the silicon material. When the silicon material is immersed in the etching solution containing HAuCl4, the Au3+ ions react with the silicon; through this process, Au3+ ions are reduced to gold atoms, which then deposit inside the pores, forming gold nanoparticles within the PSi. Figures 4-7 present the surface morphological and elemental analysis results for the PSi prepared over various etching times and the EDS mapping results for Au. Figure 4a is an SEM image of PSi after 10 min of etching and reveals a porous structure. Figure 4b is an SEM image of the same sample at higher magnification and reveals a pore diameter of approximately 250 nm and fluff-like protrusions resembling Au structures at the locations where the pores connect to one another. As indicated in Figure 4c, elemental analysis verified the presence of Au.
Figure 4d is an EDS mapping image generated for the Au signal, with green dots indicating the locations of Au. Both the surface morphological analysis and the elemental analysis confirmed the existence of a porous structure and of Au in the PSi sample. Using the same analytical techniques, we analyzed a PSi sample etched for 20 min (Figure 5). As shown in Figure 5a, etching for 20 min resulted in larger pores in the PSi, indicating a smaller diameter of the remaining Si. As indicated in Figure 5b, the pore size of this PSi was approximately 300 nm, which is larger than that of the PSi sample etched for 10 min. The figure also clearly reveals fluffy protrusions on the surface of the PSi, which are distributed more uniformly than those on the surface of the PSi sample etched for 10 min (Figure 4b). Figure 5c depicts the elemental analysis results for the PSi sample etched for 20 min, indicating a higher Au content than that of the PSi sample etched for 10 min. For the PSi sample etched for 30 min (Figure 6b), the pore diameter was approximately 150 nm, and the diameter of the PSi substantially increased. However, the hairy protrusions observed in the samples etched for 10 and 20 min were not observed on the surface of the PSi sample etched for 30 min. In addition, elemental analysis revealed an Au weight percentage of 7.5 wt% for the PSi sample etched for 30 min, which was higher than that of the PSi sample etched for 10 min (6.7 wt%) but lower than that of the PSi sample etched for 20 min (8.2 wt%). These findings may explain why the PSi sample etched for 30 min was more effective than the PSi sample etched for 10 min at catalyzing PNA but not as effective as the PSi sample etched for 20 min. As shown in Figure 6d, Au element mapping revealed an uneven distribution of Au on the surface of the PSi, with some green dots exhibiting aggregation. Figure 7 depicts the SEM and EDS results of a PSi sample etched for 40 min. As shown in Figure 7a, the surface morphology of the PSi changed, forming pore structures with an average size of approximately 1 µm and an irregular shape. Unlike the mesh-like pore structure of the PSi samples etched for 10 and 20 min, the diameter of the PSi was large (Figure 7b). However, similar to the PSi sample etched for 30 min, the surface of the PSi sample etched for 40 min did not exhibit fluffy protrusions. Elemental analysis indicated that the PSi sample etched for 40 min contained 7.9 wt% Au, which was higher than the contents of the PSi samples etched for 10 and 30 min. However, the catalytic effect of the PSi sample etched for 40 min did not exceed those of the PSi samples etched for 10 and 20 min. Thus, the Au content alone cannot explain the catalytic effect on PNA; the surface morphology and distribution of the Au may have also influenced the catalytic effect.
These results directly confirm our hypothesis that the structure and distribution of Au on the porous surfaces influence the catalytic effect of PSi on PNA. As a final step, PSi was etched for 20 min, and PNA was catalyzed at different concentrations. This test clarified the catalytic effect of PSi on different concentrations of PNA and thereby validated our experimental results. Because the same PSi was used for each catalytic run, our results also confirmed that the produced sample can be reused for repeated catalysis. The experimental results are presented in Figure 8. When the PNA concentration was low, the catalytic effect was relatively high. We therefore calculated the catalytic rate constant k from the measured changes in the PNA absorption spectra (Table 1). The results indicated that the k value was large when the concentration was low. However, when the PNA concentration exceeded 1.2 mM, the catalytic rate of the PSi remained unchanged; this concentration may therefore be the saturation concentration for the catalysis of PNA by PSi. Furthermore, by analyzing repeated catalytic runs at the same concentration, we found that the catalytic efficiency decreased by approximately 35%. Hence, further studies are required to improve the catalytic efficiency and optimize the run time of the catalytic process by adjusting the concentration of auric acid. Conclusions To our knowledge, this is the first study to investigate the use of PSi fabricated by MACE as a carrier for Au NPs and to evaluate its catalytic performance in terms of nitroaromatic compound reduction. Overall, the PSi substrate provided a high surface area and tunable pore size, which facilitated the deposition and catalytic reaction of Au NPs.
The Au NPs were then characterized using SEM and EDS, and their catalytic activity was evaluated with nitroaromatic reduction used as a model reaction. The results indicate that Au NPs on PSi exhibit excellent catalytic activity and that the catalytic activity of PSi substrates is affected by the etching time. In addition, a longer etching time results in a larger surface area of PSi and a higher atomic weight percentage of immobilized Au NPs, which leads to higher catalytic activity. However, an excessive etching time results in the aggregation of Au NPs and reduces catalytic activity. These findings have major implications for the development of efficient and cost-effective catalysts for different organic transformation reactions.
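To make the rate estimate described above concrete, the following is a minimal sketch (not the authors' code) of how a pseudo-first-order rate constant k can be extracted from the decay of the PNA absorption peak, assuming the peak absorbance is proportional to the remaining PNA concentration; the time points and absorbance values below are hypothetical.

```python
import numpy as np

# Hypothetical absorbance of the PNA peak recorded during one catalytic run
t = np.array([0, 2, 4, 6, 8, 10], dtype=float)      # reaction time, min
A = np.array([1.00, 0.78, 0.61, 0.47, 0.37, 0.29])   # peak absorbance (a.u.)

# Pseudo-first-order kinetics: ln(A_t / A_0) = -k * t,
# valid when absorbance is proportional to PNA concentration (Beer-Lambert).
y = np.log(A / A[0])
slope, intercept = np.polyfit(t, y, 1)               # slope = -k
k = -slope

print(f"apparent rate constant k = {k:.3f} min^-1")
```

Applying the same fit to runs at different initial PNA concentrations would reproduce the trend reported in Table 1 (larger k at lower concentration, levelling off above about 1.2 mM).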
7,144.2
2023-06-01T00:00:00.000
[ "Chemistry", "Materials Science" ]
Structure of Micelles Calcium Didodecyl Sulfate: A SAXS Study Copyright: © 2015 Mahapatra P, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Introduction Surfactant molecules (e.g., CTAB, SDS etc.) in dilute aqueous solutions self assemble to form variety of supra-molecular structures such as micelles, vesicles and liquid crystalline structures [1][2][3]. The simplest aggregate of the surfactant molecules is referred to as a micelle. In general, micelles could be of various shapes and sizes such as spherical, ellipsoidal, cylindrical, thread-like or dislike depending on the architecture of the surfactant molecule [4][5][6]. For example, while the micelles of conventional surfactants in dilute solutions are spherical, gemini surfactants possess a thread-like shape [4,6]. In fact, this shape is directly dependent on the spacer length [6]. This study deals with aggregation behavior of calcium didodecyl sulfate, Ca(DS) 2 , which constitutes of a cylindrical monomer [7,8]. Anionic surfactant molecules such as Sodium Dodecyl Sulfate, NaDS, ionize in aqueous solution and the corresponding micelles are aggregates of DSions [2,9]. The Na + ions of NaDS molecules, known as counter-ions, tend to stay near the negatively charged DSmicellar surface. Joshi et al. [9] have studied the micellar structures of a series of univalent anionic surfactants (LiDS, NaDS, KDS, RbDS and CsDS) and shown that the size and shape of the micelle is quite sensitive to the change in counter-ion. The study of aggregation behavior of dianionic surfactant Ca(DS) 2 is of special interest as this particular surfactant provides a system where counter-ions are divalent. The molecule of dianionic surfactant is cylindrical (Figure 1a) as it consists of two tails attached to a divalent ion [7,8,10,11]. It is crucial to note that monomer of catanionic surfactant (Figure 1b), is also cylindrical [12][13][14][15][16] though there are subtle differences between dianionic and catanionic surfactants. The dianionic surfactants are formed by mixing of divalent salt in an anionic surfactant solution and removing of unreacted ions [7]. On the other hand, the catanionic monomers are formed by mixing of cationic and anionic surfactants [15] and thus there is strong electrostatic interaction between the two head groups. This implies that the nature of bonds and their strengths in dianionic systems would be different from those found in catanionic systems. While the formation of vesicles and flexible cylinders has been seen for catanionic surfactants [11,12,16], it is of interest to study the aggregation behavior of dianionic surfactant Ca(DS) 2 . Small Angle X-ray Scattering (SAXS) is a well established technique for studying the structure of materials on a length scale of 10-1000 Å simultaneously and this is routinely used for studying structures of micellar solutions [17][18][19]. It may be mentioned that X-rays scattering power of different elements is different and it almost scales as the atomic number of the element. Thus heavier elements are seen more prominently in presence of light elements. It is this property of X-ray scattering that makes SAXS as an ideal technique for studying core-shell structure of micelles [20,21]. This paper reports the sizes and shapes of Ca(DS) 2 micelles as obtained from Small Angle X-ray Scattering (SAXS) studies. 
The following section presents the experimental setup and other relevant details. Details of the proposed model and data analysis involved are given in Section 3. Section 4 shows the results and various insights obtained from this study. The raw materials (SDS and calcium chloride) were used without further purification. Calcium didodecyl sulfate, CDS, was prepared by mixing SDS and calcium chloride, using a subsequent centrifugation-redispersion-filtration technique. All the samples were prepared using Millipore ultrapure water (resistivity 18.2 MΩ.cm). The CDS was obtained as a white precipitate; the mixture was heated to 60ºC to complete the reaction and to redissolve the precipitate, then kept for a couple of days to precipitate again and to phase separate out. The conductivity of the clear solution was noted (~9.72 mS/cm) and the solution was discarded to eliminate the unreacted and other materials. The bottom precipitate layer was redispersed in pure water, heated to 60ºC, cooled back to room temperature and centrifuged at 9000 rpm for 20 min at room temperature. The supernatant clear solution was discarded. The procedure was repeated until the conductivity of the supernatant solution became less than 1% of the initial clear solution's conductivity. The conductivity of the supernatant solution was found to be 94.7 μS/cm (corresponding to 0.00415 mM of NaCl). The precipitate of purified CDS was dried in an oven at 50ºC for one week. The purity of CDS was found to be higher than 98%, as determined by atomic absorption inductively coupled plasma (AAS-ICP) measurements. SAXS experiments A SAXS experiment involves scattering of a monochromatic beam of X-rays from the sample and measuring the scattered X-ray intensity in a region of small scattering angles. This experiment provides the scattered X-ray intensity I(q) as a function of the wave-vector transfer q = (4π/λ) sin θ, where λ is the wavelength of the incident X-rays and 2θ is the scattering angle. The typical q range covered in SAXS studies is about 0.001 Å−1 to 0.5 Å−1. Commercial SAXS machines operate with line collimation, and those on synchrotron sources use pin-hole geometry. The present studies on micellar solutions of CDS were carried out using a SAXSess camera (Anton Paar, Austria) with line collimation. The incident radiation (wavelength = 1.542 Å) was Cu Kα X-rays from a PANalytical X-ray source (PW3830 X-ray generator) operated at 40 kV and 40 mA. The scattered X-ray intensities were collected on a two-dimensional position-sensitive imaging plate, and integrated over a linear profile to convert them into one-dimensional (I(q) vs. q) scattering data. The sample-to-detector distance was 264.5 mm. The sample holder used was a quartz capillary with an inner diameter of ~1.5 mm and a wall thickness of 10 μm. Exposure time was 6 hours per sample. The measurements were made on micellar solutions of CDS for surfactant concentrations of 0.5, 1.0, 2.5, and 20 weight % (or 8.8, 17.7, 44.92 and 438.01 mM/dm3), respectively. All the samples were heated up to 65°C (until the turbidity disappeared) and were maintained at that temperature for 5 min. Thereafter, they were gradually cooled down to room temperature, and reheated to 55°C for the SAXS studies. The sample temperature was maintained at 55 ± 0.2°C using a temperature controller (TCS120, Anton Paar) for all the studies. Scattering data for the background, obtained under similar conditions, were subtracted from the sample data to obtain the scattering from the self-assembled aggregates of CDS. Millipore ultrapure water was used as the reference/background matrix.
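As a quick consistency check on the quoted q window, the real-space length scale probed at a given q can be estimated with the usual rule of thumb d ≈ 2π/q (a standard relation, not taken from the article):

```latex
q = \frac{4\pi}{\lambda}\sin\theta, \qquad d \approx \frac{2\pi}{q},
\qquad
q = 0.5~\text{\AA}^{-1} \Rightarrow d \approx 13~\text{\AA},
\qquad
q = 0.001~\text{\AA}^{-1} \Rightarrow d \approx 6300~\text{\AA}.
```

So the instrument's q range comfortably brackets the 10-1000 Å structural window mentioned in the Introduction and, in particular, micelle dimensions of the order of 100 Å.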
Data Analysis and Model The intensity I(q) of X-rays scattered from a micellar solution is expressed in terms of the elementary scattering amplitude A(q) of the micelles and of the structure factor S(q), as given in various sources [19,22], as I(q) = n_mic ⟨A²(q)⟩ S(q), with n_mic = (c − c_M) N_A / n_ag, where n_mic is the number of micelles per unit volume, c is the surfactant concentration, c_M is the critical concentration of micelle formation, and N_A is Avogadro's number. The notation ⟨ ⟩ denotes thermodynamic averaging, and n_ag is the aggregation number, i.e. the number of dodecyl sulfate molecules in a micelle. The scattering amplitude A(q) depends on the size and shape of the micelle, and the inter-micellar structure factor S(q) depends on the way the micelles are distributed in the solution. In the case of dilute solutions, inter-micellar interactions are negligible (S(q) ≈ 1) and the expression for the intensity reduces to I(q) = χ ⟨A²(q)⟩ + B (Eq. 2). Here χ is a scaling factor introduced to normalize the fitted curves to the maximum intensity values in the experimental scattering data, and B is a constant term, independent of q, introduced to account for any residual incoherent scattering due to background remaining after subtraction. Both χ and B were taken as free parameters in the data analysis. Depending on the shape or model assumed for the micelle, the expression for the scattering amplitude A(q) can be derived [17,23] according to the shape and size of the scattering particle. A(q) depends on the contrast factor, which is decided by the difference between the electron density of the particle and that of the solvent. At times, as in the case of micelles, different parts of the particle may have different electron densities, and this is reflected in the expression for A(q) [24]. For example, when a micelle has a core-shell structure, the contrast factor for the shell can be different from that for the core. Micellar model and expression for X-ray scattering amplitude The micelle is modeled as built of two confocal ellipsoidal shells, in line with Borbely et al. [25], with a distance d between their respective semi-major and semi-minor axes [25][26][27]; that is, b_c is the semi-minor axis of the core and d is the thickness of the outer shell (Figure 2). The scattering contrast within the inner shell (core region), Δρ_c, and that in the region between the two shells (shell region), Δρ_sh, were assumed constant. The core contains the dodecyl chains without any water molecules, and it is assumed that the entire hydrocarbon chain lies within the core. The core, consisting of hydrocarbon chains, is expected to have a lower electron density than the background matrix, i.e. water. The shell region consists of the head-groups (-OSO3−) and the counter-ions attached to the micellar surface. The thermodynamic average of the scattering amplitude is given by an integral over the equally probable orientations (Eq. 6), which was evaluated numerically. The volume of the non-wetted hydrocarbon core (V_C) and that of the whole micelle (V_M) were defined correspondingly (Eqs. 7 and 8). The Δρ_sh term, being a constant, gets absorbed in the scaling factor χ in Eq. 2 and, while evaluating the intensity, does not affect the overall behavior of the model profile; therefore, absolute values of the scattering lengths are not required for this model. It is customary to obtain the Pair Distance Distribution Function (PDDF), p(r), from SAXS data and to compare it with that based on the micellar model. p(r) can be calculated from the measured X-ray intensity (obtained using Eq.
2) by inverse Fourier transformation and is given by The free parameters were obtained by fitting the model intensity curve to the indirect Fourier transformed form-factor data for the best fit (least square). After the free parameters were determined, dependent parameters (n ag , α) were computed by simultaneously solving Eqs. 7 and 8. PDDF was obtained from the fitted intensity profile and was compared with PDDF obtained from Indirect Fourier Transformation analysis. An approximate quality of the fit is indicated by 2 . R -value, which represents the deviation of the transformation fitted points from the calculated curve. Results and Discussion Pair distance distribution function (PDDF) Figure 3 shows the measured SAXS distributions for all the samples after the data have been corrected for background and empty sample holder contributions. These data along with Eq. 10 were used to calculate the pair distance distribution functions (PDDF) and the results are shown in Figure 4. This involved smoothing and de-smearing the profiles to eliminate deteriorating effects from slit length and slit width. This has been done by computing and fitting the "smoothened" scattering curve to the measured one using the p(r) function generated by a series of cubic B-splines [31,32]. The solid lines in Figure 3 are based on p(r), which was generated using cubic B-splines. And the structure factor which was evaluated assuming hard-sphere model (Percus Yevick approximation with average structure-factor). The PDDF obtained by inverse Fourier Transformation is similar to what one expects from a prolate ellipsoidal micelle with a core shell structure [17]. The above p(r) is similar to that for elongated particles suggesting that CDS micelles are prolate ellipsoidal. The maximum dimension (D max ) of the micelle or the major axis of the ellipsoidal micelle can be obtained from the PDDF as p(r)=0 for r ≥ D max . Similarly, the radius R c of the micellar core and the radius R cs (=R c + d) or diameter D cs of shell of counter-ions can be easily identified in the PDDF. It is seen that the value of D max or the major axis of micelle decreases with increase in surfactant concentration. The radius (R c ) of micelle core, which signifies the hydrocarbon chain length, decreases with increasing total concentration suggesting that chains are more folded in concentrated solutions. The shell diameter (D cs ) and the shell thickness also increases with increasing concentration showing higher number of counter-ions attached to the aggregates and hence implying lower effective charge. The presence of a second peak/distortion in the later part of the curves is due to polydispersity of the samples. Fitting of model based theoretical curves to experimental data SAXS data have been analyzed in terms of the above mentioned model of the micelle also. That is I(q) for the above model was calculated and fitted to resolution corrected experimental data with semi-minor axis of the core (b c ), axial ratio of the core (ς ), shell thickness (d), background (B) and normalizing constant χ as parameters. Figure 5 shows the best fitted scattering curves for the ellipsoidal shape models together with the resolution corrected experimental data. The values of the fitted parameters are given in Table 1. It is seen that calculated curves agree very well with the experimental data except at high q values. It can be seen that the semi-minor axis of the core, axial ratio and the shell thicknes decrease with increasing surfactant concentration. 
The maximum dimension which denotes the major axis of the whole micelle equaling 2( ) ς + c b d obtained are 9.29, 8.69, 8.00 and 7.63 nm, respectively for 0.5, 1, 2.5 and 20 wt% surfactant. These values are about 1 nm larger than those obtained from experimental PDDF. The core radius at all concentrations is similar to those obtained from PDDF although it is smaller than the expected hydrocarbon chain length, showing possible chain folding in these systems [27]. It is interesting to note that axial ratio of ellipsoidal micelle decreases with increasing concentration. From the trend, it is expected that the particle might exist as rod-like structures at very low concentration. The ratio of scattering lengths ρ ρ ∆ ∆ c sh decreases very slightly with increasing surfactant from which it can be inferred that some water molecules are present in the core region and their amount increases with increasing concentration. There is a decrease both in aggregation number and the effective fractional charge associated with each micelle for increasing total surfactant concentration. This decrease in fractional charge is probably responsible for the increase of shell thickness due to presence of attached counter-ions. We also notice that total charge on each micelle decreases by 3 times upon increasing concentration from 0.5 to 20 wt%. PDDF based on above model has also been compared with the experimentally obtained PDDF and there is reasonable agreement between them. Figure 6 shows the experimental and the calculated PDDF for 0.5% micellar solutions at 55°C. It can be seen that the model can perfectly simulate the initial part (low r) of the PDDF curve, which is connected with the inner core dimensions. In later part of the curve, a peak occurs in the fitted curve at lower values of r but it extends and terminates at higher r values as compared to experimental one. This indicates that the value of shell thickness d obtained from the fit is slightly lower (the inflexion point near the second peak roughly denotes semi-minor axis of the micelle [24]) compared to actual system. The fitted curve also gives higher maximum dimension of the micelle. It may be mentioned that agreement between experimental PPDF and the model based theoretical PDDF was better up to large q values for concentrated solutions. This shows that experimental PDDF as obtained from Fourier transform of I(q) data can be fruitfully used for obtaining qualitative information about the aggregate. Conclusions Calcium didodecyl sulfate (CDS) powder has been prepared and its purity was quantified using conductivity measurements and inductive coupled plasma methods. Micellar solutions were prepared by dissolving appropriate quantities of CDS powder in water. The structure of micelles of CDS for several different surfactant concentrations have been studied using SAXS. The qualitative information about the shape and size of micelle was obtained by generating the pair distance distribution function (PDDF) by indirect Fourier transformation method. It was found that CDS micelle is ellipsoidal in shape and its core-shell structure was clearly indicated in PDDF. The values of various size and structural parameters of the micelle have been obtained by fitting the scattering data to a core and shell ellipsoidal micellar model. These studies show that size (aggregation number and major axis of micelle) of micelle in CDS solutions decreases with increase in surfactant concentration. 
Further it was seen that micellar charge also decreases with increase in surfactant concentration.
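To make the core-shell ellipsoid model of the Data Analysis section more concrete, the following is a minimal numerical sketch of an orientation-averaged, two-contrast prolate core-shell ellipsoid intensity of the form I(q) = χ⟨A²(q)⟩ + B. It is a generic implementation of this class of model under the stated geometry (core semi-axes ς·b_c, b_c, b_c; shell thickness d added to both axes; shell contrast factored into χ), not the authors' exact equations, and the parameter values are purely illustrative.

```python
import numpy as np

def amp(x):
    """Normalized amplitude 3[sin(x) - x cos(x)] / x^3 of a uniform ellipsoid,
    evaluated at the orientation-dependent argument x = q * r(mu)."""
    x = np.where(x == 0, 1e-12, x)
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def core_shell_ellipsoid_I(q, b_c, zeta, d, rho_ratio, chi=1.0, B=0.0, n_mu=200):
    """Orientation-averaged intensity of a core-shell prolate ellipsoid.

    b_c       : semi-minor axis of the core
    zeta      : axial ratio of the core (semi-major axis = zeta * b_c)
    d         : shell thickness added to both semi-axes
    rho_ratio : contrast ratio  delta_rho_core / delta_rho_shell
    """
    mu = np.linspace(0.0, 1.0, n_mu)          # cosine of angle between q and long axis
    a_c, a_m = zeta * b_c, zeta * b_c + d     # semi-major axes (core / whole micelle)
    b_m = b_c + d                             # semi-minor axis of whole micelle
    V_c = 4.0 / 3.0 * np.pi * a_c * b_c**2
    V_m = 4.0 / 3.0 * np.pi * a_m * b_m**2

    I = np.empty_like(q, dtype=float)
    for i, qi in enumerate(q):
        r_c = np.sqrt(b_c**2 * (1 - mu**2) + a_c**2 * mu**2)
        r_m = np.sqrt(b_m**2 * (1 - mu**2) + a_m**2 * mu**2)
        # shell contrast factored out; the core enters through the contrast ratio
        A = (rho_ratio - 1.0) * V_c * amp(qi * r_c) + V_m * amp(qi * r_m)
        I[i] = np.trapz(A**2, mu)             # <A^2> averaged over orientations
    return chi * I + B

# Illustrative parameters of the order of the fitted values in Table 1 (nm units)
q = np.linspace(0.01, 5.0, 300)               # nm^-1
Iq = core_shell_ellipsoid_I(q, b_c=1.5, zeta=2.5, d=0.6, rho_ratio=-1.2)
```

A negative contrast ratio reflects the situation described in the model section: the hydrocarbon core has a lower electron density than water, while the head-group/counter-ion shell has a higher one.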
3,925.6
2015-07-30T00:00:00.000
[ "Materials Science", "Chemistry" ]
Sb Surface Modification of Pd by Mimetic Underpotential Deposition for Formic Acid Oxidation The newly proposed mimetic underpotential deposition (MUPD) technique was extended to modify Pd surfaces with Sb through immersing a Pd film electrode or dispersing Pd/C powder in a Sb(III)-containing solution blended with ascorbic acid (AA). The introduction of AA shifts down the open circuit potential of Pd substrate available to achieve suitable Sb modification. The electrocatalytic activity and long-term stability towards HCOOH electrooxidation of the Sb modified Pd surfaces (film electrode or powder catalyst) by MUPD is superior than that of unmodified Pd and Sb modified Pd surfaces by conventional UPD method. The enhancement of electrocatalytic performance is due to the third body effect and electronic effect, as well as bi-functional mechanism induced by Sb modification which result in increased resistance against CO poisoning. Introduction Direct formic acid fuel cells (DFAFCs) are considered promising power sources of clean and environment-friendly energy for miniature and portable electronic devices because of excellent OPEN ACCESS performance, such as high power density [1,2].The direct formic acid fuel cell has a theoretical open circuit potential of 1.48 V, higher than that of direct methanol fuel cell (1.18 V) [3].The improvement of performance of DFAFCs depends on fabrication of high-efficient electrocatalysts.The commonly-used anodic catalyst for DFAFCs is platinum black, on which the formic acid electrooxidation occurs via a dual pathway [4,5], which consists of the direct pathway without CO poison and the indirect pathway with the formation of CO as poisonous intermediates.The resulting CO intermediates are strongly adsorbed on the Pt surface and block the active sites, then decrease the activity.In this regards, platinum is not so favorable for practical formic acid fuel cell application because of the CO intermediates build-up, poisoning the catalysts, and degenerating the fuel cell performance gradually [3,6]. Many studies have confirmed that palladium is a more efficient catalyst with higher catalytic activity for the electrooxidation of formic acid [6][7][8][9][10][11].The excellent property derives from the extraordinary formic acid oxidation mechanism on Pd which is different from the dual pathway mechanism on Pt.Briefly, on the Pd surface, the electrooxidation of formic acid occurs via a dominantly direct pathway with a minimized buildup of CO on the surface (The formation of COad on a Pd electrode in formic acid solutions at the OCP and practical working potentials has been confirmed by Wang et al., using in situ high-sensitivity attenuated-total-reflection surface-enhanced infrared absorption spectroscopy (ATR-SEIRAS), proposing that the reduction of the FA dehydrogenation product CO2 should be mainly responsible for the above COad formation [12]).Unfortunately, the activity of Pd is unstable and deactivation exists during formic acid oxidation due to gas build-up on the anodic side of a fuel cell [13,14], catalyst leaching or impurities in the formic acid or intermediate species [15].However, a majority of literature proved that it is mainly CO-like intermediates accumulated on Pd surface that degenerate the activity of Pd, and it reached a consensus among most research workers [16,17].On this issue, much effort has been made to improve the catalytic activity and stability through alloying or surface modification with metallic adatoms, such as Sb.Yu et al. 
[18] fabricated carbon-supported PdSb alloy catalysts which show much better resistance to poisoning (deactivation) and decrease the accumulation of CO on the catalyst surface during formic acid oxidation. Masel et al. [19] have studied the effects of Sb adatoms on the performance of a DFAFC. They showed that electrochemical surface modification of Pd by Sb adatoms enhances the oxidation of formic acid by more than two-fold in an electrochemical cell. For Sb modification, previous approaches, such as irreversible adsorption (IRA) and the traditional UPD method, required external potential-controlled desorption of part of the Sb, which made them unsuitable for scaled synthesis or for upgrading practical powder catalysts [20][21][22]. Among the known Sb surface modification methods, the mimetic underpotential deposition (MUPD) technique is a newly proposed electroless approach to achieve sufficient surface modification [23]. Compared with underpotential deposition followed by potential-controlled desorption of part of the Sb adatoms, as usually applied in Sb modification on Pt, MUPD requires no external potential control and is a versatile electroless approach that can be extended to the surface nanoengineering of electrocatalysts [24]. In this work, we extended the MUPD strategy to the modification of Sb on the Pd surface of a film electrode and of Pd/C powder by introducing ascorbic acid (AA) as a mild reductant into the Sb(III)-containing modification solution. Besides, for comparison, the Pd film substrate was also modified by Sb UPD. We studied the influence of Sb modification of the Pd surface on the electro-oxidation of formic acid by cyclic voltammetry and chronoamperometry, together with anodic stripping voltammetry of pre-adsorbed CO. Sb UPD on Pd Film Electrodes Different from Sb UPD on Pt surfaces [20,25], the coverage of Sb (θSb, a coefficient based on the ratio of Pd sites filled and not filled) on the Pd surface was tuned by controlling the UPD time. In this paper, Sb UPD on fresh Pd films was performed for 10 s, 20 s and 30 s, respectively. The modified Pd film electrodes were marked as Sb/Pd(UPD). Figure 1a depicts cyclic voltammograms of the unmodified Pd film and the Sb/Pd(UPD) electrodes in 0.5 M H2SO4. It can be seen that the hydrogen adsorption/desorption region was restrained for the Sb modified Pd film electrodes. A conspicuous peak near 0.66 V in the positive scan is due to the oxidative dissolution of the Sb modifiers on the Pd surface. The peak in the negative scan at high potential is ascribed to the reduction of oxygenous species formed during the positive scan. Based on the hydrogen adsorption-desorption charge with or without Sb modifiers, θSb on the Pd surface can be evaluated through the following equation [22]: θSb = 1 − QSb-H/QO-H, where QO-H and QSb-H are the charges for oxidation of adsorbed hydrogen on unmodified and Sb modified Pd, respectively. By this calculation, it is found that the Pd electrodes subjected to UPD for 10 s, 20 s and 30 s reach θSb values of 0.52, 0.62 and 0.66, respectively. This revealed that the coverage of Sb on the Pd surface increased with UPD time. Formic acid oxidation was chosen as a probe reaction to compare the catalytic activity of Sb/Pd(UPD) electrodes with various θSb. Figure 1b showed that the peak current density of HCOOH electrooxidation on Sb/Pd(UPD) increased with θSb from 0.52 to 0.62 and then dropped down with θSb from 0.62 to 0.66, which might follow a volcano-like relationship between θSb and the electrocatalytic activity of Sb/Pd(UPD) (seen from the inset in Figure 1b). The inset presents a plot showing the direct relation between catalytic activity and θSb. Figure 2 compares the open circuit potential (OCP) recorded on the Pd film electrode in 0.1 mM APT solution with or without 20 mM AA. A high open circuit potential of
ca.0.29 V at 30 s was seen in single 0.1 mM APT (curve a) due to the oxygen-containing species that spontaneously formed on Pd surface, thus limited effective modification of Sb on Pd.With addition of 20 mM AA to 0.1 mM APT aqueous solution (curve b), the OCP negatively shifted to 0.12 V at 30 s because the ascorbic acid served as mild reductant removed the oxygen-containing species to ensure freshly reduced Pd surfaces for better Sb modification [23].Upon this, Sb MUPD was carried out through immersing Pd film electrode into modification solution for 30 s which was the optimal MUPD time reported by Cai et al. [23] when modifying bulk Pt electrode and powder catalyst by MUPD.Hydrogen region properties of Sb/Pd(MUPD) electrode were studied by cyclic voltammetry in 0.5 mol L −1 H2SO4.Observed in Figure 3a, after pure Pd film electrode was immersed in Sb containing solution for just 30 s, the area of hydrogen region was severely shrinked due to Sb coverage on Pd surface, thus leading to restriction of hydrogen adsorption/desorption, and, therefore, θSb herein reached 0.67. Sb MUPD on Pd Film Electrodes To compare the electrocatalytic activity of modified and unmodified Pd film electrodes, linear sweep voltammograms for formic acid oxidation are recorded on pure Pd film, optimal Sb/Pd(UPD) with UPD time for 20 s and Sb/Pd(MUPD) in 0.5 M H2SO4 containing 0.5 M HCOOH.As can be seen in Figure 3b, the formic acid oxidation current density in low potential was weak on unmodified pure Pd film.A small anodic peak was observed below 0 V which might be assigned to oxidative desorption of hydrogen produced in decomposition of formic acid over Pd surface at open circuit [26] and the main larger peak centered at 0.3 V was attributed to the direct oxidation of formic acid to CO2 (black curve).For Sb/Pd(UPD), the shape of LSV was all the same except that the current density of formic acid oxidation was higher than that of unmodified Pd film (red curve).In the case of Sb/Pd(MUPD), not only the peak current density was further increased, but the main peak potential and onset potential of formic acid oxidation shifted negatively by 100 mV and 80 mV, respectively (blue curve).It indicated that the electrocatalysis of formic acid oxidation was significantly enhanced at low potentials on Sb/Pd(MUPD) compared with unmodified Pd and Sb/Pd(UPD).The long-term electrocatalytic activities of the modified or unmodified Pd film electrodes are explored by polarizing pure Pd film, Sb/Pd(UPD) and Sb/Pd(MUPD) electrodes at 0.2 V in 0.5 M H2SO4 + 0.5 M HCOOH for 3600 s. Figure 3c showed the corresponding \ curves.The current density on pure Pd film was intensively decayed in the initial stage (black curve) because of Pd surface poisoning by CO intermediates produced in self-decomposition of formic acid.For Sb/Pd(UPD) and Sb/Pd(MUPD), the current density for HCOOH electrooxidation was enhanced maximally and the decay became weak.During the whole testing (3600 s), the current density followed the order of Sb/Pd(MUPD) > Sb/Pd(UPD) > unmodified Pd which was consistent with the results in Figure 3b.Namely, the electrocatalytic performances were improved on Pd film electrodes by Sb modification due to the so-called third body effect, which accelerated formic acid oxidation through direct pathway [19,23].Specifically, Sb modification can break adjacent Pd active sites which is favorable to dehydration of formic acid molecules to produce water and CO, thus CO poisoning on Pd surfaces were inhibited to some extend. 
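As a numerical illustration of the coverage estimate introduced earlier, the charge values below are hypothetical and are chosen only to reproduce the reported θSb ≈ 0.52 for the 10 s deposit:

```latex
\theta_{\mathrm{Sb}} = 1 - \frac{Q_{\mathrm{Sb\text{-}H}}}{Q_{\mathrm{O\text{-}H}}},
\qquad
Q_{\mathrm{O\text{-}H}} = 210~\mu\mathrm{C},\;
Q_{\mathrm{Sb\text{-}H}} = 101~\mu\mathrm{C}
\;\Rightarrow\;
\theta_{\mathrm{Sb}} = 1 - \frac{101}{210} \approx 0.52 .
```

In practice, QO-H and QSb-H are obtained by integrating the hydrogen desorption region of the cyclic voltammograms in Figure 1a (typically after subtracting the double-layer contribution) and dividing the integrated current by the scan rate.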
To further explore the poisoning-resistance effect after Sb modification, pre-adsorbed CO stripping voltammetry was performed. Figure 3d showed CO stripping voltammograms for the pure Pd film, Sb/Pd(UPD) and Sb/Pd(MUPD) in 0.5 mol L−1 H2SO4. It can be observed that on the pure Pd film, the oxidative stripping peak was sharp and located at 0.66 V. In contrast, both the onset and the peak potential of CO oxidation were significantly shifted to lower potentials on Sb/Pd(UPD) and Sb/Pd(MUPD) compared to the unmodified pure Pd film. Therefore, the presence of Sb promoted the oxidation of CO adsorbed on Pd [18]. As seen from Figure 3d, the promotion can be explained by the electronic effect or the bi-functional mechanism induced by Sb. The electronic effect leads to a weakening of the CO-Pd interaction [27] and makes the direct pathway of formic acid oxidation predominant [23]. In the bi-functional mechanism, Sb adatoms provide active sites for -OH formation at lower potentials than on pure Pd, and the -OH promotes the oxidative removal of adsorbed poisoning intermediates during formic acid oxidation [18,20,28]. Sb MUPD on Pd/C Powder Catalyst The MUPD approach was further applied to modify a 40 wt.% Pd/C (BASF) powder catalyst. Figure 4a shows the XRD patterns of Pd/C before and after Sb modification by MUPD. The main diffraction peaks at 40.06°, 46.68°, 68.08°, 82.08° and 86.60° are the characteristic peaks of the Pd (111), (200), (220), (311) and (222) planes, which indicates the face-centered cubic structure of metallic Pd. The peak located at 33.92° is attributed to palladium oxide on the Pd/C (BASF), but this peak disappeared after immersion in the MUPD modification solution, which may be attributed to the reduction of PdO by ascorbic acid. No peak of Sb or of a PdSb alloy was observed among the XRD peaks, because the Sb content is extremely low and Sb is either highly dispersed in an active structure on the Pd surface or exists in an amorphous structure. Figure 4b shows cyclic voltammograms within the hydrogen adsorption/desorption region on unmodified Pd/C and on Sb modified Pd/C prepared by MUPD. It was found that the hydrogen region was also partially restrained on Pd/C(MUPD), with a relatively small θSb of 0.26, which is far below the optimal θSb value of around 0.6. This may be due to partial adsorption of Sb on the active carbon support, leading to limited Sb modification of the Pd and a smaller enhancement of the electrocatalytic activity for Pd/C(MUPD) (seen from Figure 4c). Despite this, Figure 4d showed that the catalytic stability was further enhanced. CO anodic stripping voltammograms (Figure 4e) revealed negative shifts of both the peak potential and the onset potential of CO oxidation by 70 mV and 50 mV, respectively. This suggests that the resistance against CO poisoning was enhanced on Pd/C after Sb MUPD modification. The Sb modification on the Pd surface was carried out by the recently proposed MUPD method. For the film substrate, a Pd thin film was first electrodeposited on an electrochemically cleaned glassy carbon (GC, Φ = 3 mm) electrode by cyclic voltammetric scanning in the potential range between −0.15 V and 0.42 V vs. SCE in an electrolyte of 0.1 M HClO4 and 5 mM PdCl2. Then, to achieve Sb MUPD, the Pd film electrode was immersed in an aqueous solution containing 0.1 mM antimony potassium tartrate (APT) and 20 mM ascorbic acid (AA) for 30 s, and rinsed with ultrapure Milli-Q water. To make a comparison, the fresh Pd film was modified via the Sb UPD process, in which the electrode was modified in 0.5 M H2SO4 containing 0.1 mM APT at the UPD potential of 0.25 V vs.
SCE for a certain time. For the powder catalyst, a catalyst ink was prepared by dispersing 2 mg of Pd/C (40 wt.%, BASF) in 1 mL of ethanol with 120 μL of Nafion (5 wt.%) under sonication. An aliquot of the catalyst ink was transferred onto a GC electrode with a Pd loading of 28 μg cm−2. After the ink was dried in air, the catalyst coating was modified with the same MUPD procedure as the Pd film substrate. X-ray Diffraction X-ray diffraction (XRD) of the Sb modified Pd/C was performed using a Bruker D8-Advance X-ray diffractometer (Karlsruhe, Germany) equipped with Cu Kα radiation (λ = 0.15406 nm), employing a scanning rate of 0.02° s−1 in the 2θ range from 20° to 90°. Electrochemical Measurements Electrochemical measurements were performed in a conventional three-electrode cell with a CHI 660E workstation (CH Instruments, Shanghai Chenhua, Shanghai, China) in 0.5 M H2SO4 without or with 0.5 M HCOOH, deaerated by bubbling pure nitrogen (99.999%). The unmodified or modified Pd film electrode, or the powder-catalyst-covered GC electrode, served as the working electrode. A platinum gauze was used as the counter electrode, and a saturated calomel electrode (SCE) served as the reference electrode. For CO anodic stripping voltammetry, CO was pre-adsorbed on the Pd surface at a potential of −0.1 V in CO-saturated 0.5 M H2SO4 and then oxidized (stripped) with an anodic potential scan. The values of current density in this paper are normalized by the electrode geometric surface area (0.07065 cm2). All electrochemical measurements were performed at room temperature. Conclusions The facile electroless MUPD method has been extended to the modification of Sb on Pd surfaces by immersing a Pd film substrate in, or dispersing Pd/C powder into, an aqueous modification solution containing Sb(III) and ascorbic acid. As a mild reducing agent, ascorbic acid removed oxygenous species and shifted down the open circuit potential of the Pd substrate, allowing a sub-monolayer of Sb to be achieved. The Sb/Pd(MUPD) exhibited enhanced electrocatalytic activity towards formic acid oxidation compared to the unmodified pure Pd film and to the Sb modified Pd film prepared by the conventional UPD method. As for the Pd/C powder catalyst, the electrocatalytic activity was also improved by Sb MUPD. These improvements derive from the third body effect, the electronic effect and the bi-functional mechanism, resulting in stronger resistance against CO poisoning. Figure 2. Open circuit potential (OCP) curves recorded on a Pd film electrode in 0.1 mM APT aqueous solution without (a) or with (b) 20 mM AA.
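As a quick check on the XRD assignment discussed above, the Pd(111) reflection reported at 2θ = 40.06° can be converted to a lattice constant with Bragg's law; this is a standard textbook calculation, not part of the original analysis:

```latex
d_{111} = \frac{\lambda}{2\sin\theta} = \frac{0.15406~\mathrm{nm}}{2\sin(20.03^{\circ})} \approx 0.2249~\mathrm{nm},
\qquad
a = d_{111}\sqrt{h^{2}+k^{2}+l^{2}} = 0.2249\sqrt{3} \approx 0.390~\mathrm{nm},
```

which is close to the accepted lattice constant of fcc Pd (about 0.389 nm) and supports the face-centered cubic assignment.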
3,672.2
2015-07-28T00:00:00.000
[ "Materials Science" ]
A comparasion of the leading sectors contibuting to economic growth before and after the economic crisis in East Kalimantan : An econometrical approach East Kalimantan province is not only rich in the natural resources of mining, but also in agriculture. The objective of this research is to know the extent of the contribution of the main sector's like agriculture, mining, industry and trade sectors before and after the occurrence of the economic crisis in East Kalimantan. The results of this study indicate that (a). Almost all of the independent variables used, namely agriculture, mining, industry and trade sectors have insignificant impact on the growth of per capita income before and after the economic crisis, except for an agriculture sector, (b). The agriculture sector has a positive influence and significantly to the growth of per capita income before the economic crisis, but after the economic crisis it was not significant impact on the growth of income per capita. Introduction The advantage of Indonesia compared to other countries in the world that are a lot natural wealth, condition of the season that all year long is always illuminated by the sun that causes the land to become fertile and vegetation may develop properly, unlike to countries that have four seasons, however with this rich of natural resources and if not offset by good management would cause scarcity.An agriculture sector including gardening, forestry and fisheries is a natural resource that is rare and only found in a few countries which have sunshine all year round including Indonesia.Forest products are exported to some neighboring countries without first being processed causing Indonesia forest reserves quickly depleted and scarce.In the agricultural sector is to be the main sector on the Government of the New Order with organizations of rice, but in its development of this sector could not offset its population while diminishing farmland.Like other sectors in the agricultural sector, the mining sector has also become the leading sector in several provinces such as Riau, South Sumatra, East Kalimantan and Papua Provinces.Where the contribution of Gross Domestic Product (GDP) to four provinces above 50% (Statistical Agency, 2010). The positive impacts of the mining sector are: (a) contribute to the improvement of education and population growth in Obuasi (Mensah, 2011:62), (b) and with the international mining industry, developing countries will be able to build an regime of mining sector based on economic equality, mutual benefit and equitably (Tawiah and Baah, 2011:1), (c) the availability of jobs where 30 percent of arrivals to find work (Addei and Kwadjose, 2011:103), d).Technology transferred particularly in mining (Pourush and Thanai, 2012:9).However, the mining sector also has negatively affected on health and the environment, the increasing number of sufferers pain breathing (Addei and Kwadjose, 2011:5), coal mining causes damage to the environment through soil degradation, destruction of forests, water and air pollution, noise and loss of wildlife habitat (Gurdeep Sing, 2008:22). 
In the industrial sector either manufacturing or processing industries of petroleum and natural gas become a substitute for the agricultural sector which was hard pressed by the population for housing.In the industrial sector of the New Order Government that Indonesia would be an "Asian Tiger" like China, Japan and Korea, however due to the economic crisis in 1997 caused Indonesia slumped into a state that required the help of foreigners like the International Monetary Fund (IMF), or other donor countries.The trade sector including the hotel and restaurant is the follow-up of the third sector, it means that this sector supported by other sectors such as tourism, agriculture, mining and industry sectors. East Kalimantan province as one of the largest producers of natural resources in Indonesia that rely on mining and oil and gas processing industry is expected to encourage the growth of per capita income and lower levels of poverty is both directly and indirectly.These considerations were necessary given the natural resources owned by the East Kalimantan cannot be updated so that necessary anticipative steps to support other sectors which can be used as its driving force in boosting economic growth and decrease poverty in East Kalimantan. Types and sources of Data The study uses secondary data in the form of time series and in the annual covers 1983 -1996 and 2001 -2014 and data cross-sectional to 9 districts/cities in East Kalimantan, because the necessary data in this research is the macroeconomic that include such as: a. Data of percentage of the growth in GDP and population of district/city as an indicator of the growth of per capita income, GDP of mining and excavation sector, as b.Data of percentage of the agriculture and the trade, hotel and restaurant sector for 9 districts/cities in East Kalimantan.The data obtained using the methods of the library search.These data are expected to be obtained either through the Website of the Central Board of Statistics of East Kalimantan Province. Data Collection Methods Data collection methods in this study by using methods of the library search.Then the data are grouped into a group that is tailored to the needs of the data processing.Whereas the processing of data is carried out by using two software, namely Ms. Excel and Eviews version 6 for parameter estimation, statistical testing and validation testing of the model. Formulation of the Research Model The formulation of the research model could be seen in this figure: The equation model of economic growth prior to economic crisis In the equation of economic growth in this study using per capita income growth prior to economic crisis affected by variable agriculture, mining, industry and trade sectors.The results of the estimation of the growth model as shown in table 1. a. t-test The t-test results shows that all independent variables namely variables used agriculture, mining, industry and trade have t-counted is smaller than the t-table (2.358) on the significance of one percent and only one variable is agriculture (Agr) has a value of (2.0947) above the t-table (1.658) on the significance of the five percent which means that most of the independent variables do not affect on the growth of per capita income. b. 
F-test The F-test results show the number of the figure which 1.8040 under F-table (2.47) with significance of one per cent.Thus it can be concluded that all the variables used such as the agricultural, the mining, the industrial and the trade sectors variables have no effect on the variable growth of per capita income in East Kalimantan.……. ( 2) showed a number of R 2 is 0.2114.This means that the independent variables have an effect on the dependent variable (Growth) of 21.14 percent, while the rest of 78.86 percent caused by factors outside the model. d. Multicollinierity-test The methods used to find out whether there is mutual correlation among independent variables by the way know the variance inflation factor (VIF).The general rule used to know of any multicollinierity if the VIF > 10, then this means that the occurrence of high multicollinierity fellow variables exogenous. The equation model of economic growth after the economic crisis. In the equation of growth in this study using per capita income growth after the economic crisis affected by variable agriculture, mining, industry and trade sectors.The results of the estimation of the growth model as shown in table 3. a. t-test The t-test results shows that all independent variables namely variables used agriculture, mining, industry and trade sectors after the economic crisis has t-test is smaller than the ttable (2.358) on the significance of one percent and t-table (1.658) on the significance of the five percent which means that all independent variables (Agr, Mng, Ind. and Trade) did not have any affect on the growth of the per capita income. b. F-test The F-test is used to see the influence of the independent variables to a dependent variable simultaneously.The F-test results indicate the number of 2.108567 in which the figure is under F-table (2.47) at a rate of one per cent significance.Thus it can be concluded that the variables of the agricultural, the mining, the industrial and the trade sectors variables have no effect on the variable growth of per capita income in East Kalimantan. c. R 2 -test R 2 test used to find out the dependent variable changes variations (growth) due to the variations of independent variable changes (agricultural, mining, industry and trade sectors), test results showed a number of R 2 is 0.060384.This means that the independent variables have an effect on the dependent variable (YP) of 6.04 percent, while the rest of 93.96 percent caused by factors outside the model. d. Multicollinierity test The methods used to find out whether there is mutual correlation among independent variables by the way know the variance inflation factor (VIF).The general rule used to know of any multicollinierity if the VIF > 10, then this means that the occurrence of high multicollinierity fellow variables exogenous.Source: Data Processing Comparison of the economic growth equation models before and after the economic crisis. 
The coefficient and the probability of independent variables such as agriculture, mining, industry and trade sectors before and after the economic crisis in East Kalimantan shows as follows: 5 shows that the coefficients of the independent variables (agriculture, mining, industry and trade) are not much different between before and after the economic crisis hit Indonesia and East Kalimantan.Almost all of the independent variables are not significantly affect on economic growth measured from growth of per capita income, unless the variable has a positive influence on mining sector with 10 percent significance level affects economic growth prior to the economic crisis in East Kalimantan. Discussion A comparison of the agricultural, mining, industry and trading sectors to economic growth before and after the economic crisis The three of the four independent variables used in this study i.e., the agriculture, industry and trade sectors do not have significant influences both one percent, five percent and ten percent to the growth of income per capita, the only an agriculture sector variable has a positive influence and with five percent significance level before the economic crisis in East Kalimantan. The important findings in this research is that the agriculture sector has a positive influence of 27,3512 to per capita income growth prior to economic crisis, which means that the growing of the agriculture sector amounted to one percent would increase the per capita income districts/cities in East Kalimantan amounting to 27,35 per cent, this is in accordance with the opinion of Himani (2014) and Sertoglu et al (2017), that the agricultural sector had a positive influence to the economic growth in Nigeria and similarly to the opinion of Omorogiuwa, et al (2014) stating that the role of agricultural economics can be encourage the whole economy in Nigeria.In contrast to the opinion of the Uma et al ( 2013) that the agricultural sector did not have significant influence to the economic wealth of Nigeria. The mining sector had a positive influence but not significant before and after the economic crisis, it is contrary to Mensah (2011:10-11), Connolly and Orsmond (2011:47) and Tawiah and Baah (2011:7), and Pourush Thanai, (2012:8) with case studies in Ghana, Australia and India that positively impact mining to economic growth, education and development and transfer of new technologies especially in the field of mining, but also contrary to the opinion of Ahmad Komarulzaman and Armida S, Alisyahbana (2006) as well as Sudarlan et al ( 2015)) that the mining sector had negatively impact on regional and economic growth in Indonesia.The industry sector in East Kalimantan is an industry that is derived from the natural resources in the form of petroleum and natural gas has a positive coefficient value and not significantly to economic growth both before and after the economic crisis in the East Kalimantan.The trade sector has a negative coefficient before the economic crisis and statistically not significant to economic growth in East Kalimantan, but after the economic crisis, the trade sector has a negative coefficient and with ten percent significance level, this does not fit the opinion of Yakubu (2014) that international trade has a positive and significant influence to economic growth in Nigeria, likewise the opinion of Sun and Heshmati (2010) that international trade has a positive influence and significantly to economic growth in China. 
Conclusion Based on the results of the discussion which had been expressed earlier, can be summed up into a few things: a.The agriculture sector variable has positive influence to the increasing of per capita income growth only prior to the economic crisis in East Kalimantan.But after the economic crisis this sector did not have any impact on the per capita income growth. b.The mining, industry and trade variables have no effect on the growth of per capita income both before and after the economic crisis in East Kalimantan. c.There are no significant differences occurred the influence of major sectors such as agriculture, mining, industry and trade sectors both before and after the economic crisis in East Kalimantan. Figure 1 . Figure 1.The formulation of the research model In this research would be used by multiple linear regression equation models using regression ordinary least square (OLS) to see a direct relationship between the agricultural, the mining, the industry and trade, hotel and restaurant sector to economic growth.In this model, economic growth is treated as an endogenous variable, while the percentage of agriculture, mining, industry and trade sectors are treated as exogenous variables.The structural equation model so that it is used in this research is: Growth = ƒ (Agr, Mng, Ind, Trade) .................................(1)It is assumed that all variables in the above structural equations have a relationship that is linear, and then the form of the equation can be formulated such that the linear regression models qualify.So the equation becomes: Table 1 . The estimation model of economic growth prior to economic crisis Table 2 . Multicolinierity Test of Growth Prior To Economic Crisis Table 3 . The estimation model of economic growth after economic crisis Table 4 . Multicolinierity Test of growth after the economic crisis Table 5 . Comparison of growth equations before and after the economic crisis
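The estimation procedure described above (OLS on equation (1), with t-, F- and R2 statistics and a VIF check for multicollinearity) can be sketched as follows. This is a minimal illustration, not the authors' code; the data file name and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical panel: per-capita income growth and sectoral shares
# for the 9 districts/cities of East Kalimantan.
df = pd.read_csv("east_kalimantan_sectors.csv")   # columns: Growth, Agr, Mng, Ind, Trade

X = sm.add_constant(df[["Agr", "Mng", "Ind", "Trade"]])
model = sm.OLS(df["Growth"], X).fit()
print(model.summary())                            # t-tests, F-test and R^2 in one table

# Rule of thumb used in the paper: VIF > 10 signals serious multicollinearity
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 2))
```

Running the same regression separately on the pre-crisis (1983-1996) and post-crisis (2001-2014) samples reproduces the before/after comparison summarized in Table 5.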
3,267.6
2018-04-22T00:00:00.000
[ "Economics" ]
ROS-responsive hydrogels with spatiotemporally sequential delivery of antibacterial and anti-inflammatory drugs for the repair of MRSA-infected wounds Abstract For the treatment of MRSA-infected wounds, the spatiotemporally sequential delivery of antibacterial and anti-inflammatory drugs is a promising strategy. In this study, ROS-responsive HA-PBA/PVA (HPA) hydrogel was prepared by phenylborate ester bond cross-linking between hyaluronic acid-grafted 3-amino phenylboronic acid (HA-PBA) and polyvinyl alcohol (PVA) to achieve spatiotemporally controlled release of two kinds of drug to treat MRSA-infected wound. The hydrophilic antibiotic moxifloxacin (M) was directly loaded in the hydrogel. And hydrophobic curcumin (Cur) with anti-inflammatory function was first mixed with Pluronic F127 (PF) to form Cur-encapsulated PF micelles (Cur-PF), and then loaded into the HPA hydrogel. Due to the different hydrophilic and hydrophobic nature of moxifloxacin and Cur and their different existing forms in the HPA hydrogel, the final HPA/M&Cur-PF hydrogel can achieve different spatiotemporally sequential delivery of the two drugs. In addition, the swelling, degradation, self-healing, antibacterial, anti-inflammatory, antioxidant property, and biocompatibility of hydrogels were tested. Finally, in the MRSA-infected mouse skin wound, the hydrogel-treated group showed faster wound closure, less inflammation and more collagen deposition. Immunofluorescence experiments further confirmed that the hydrogel promoted better repair by reducing inflammation (TNF-α) and promoting vascular (VEGF) regeneration. In conclusion, this HPA/M&Cur-PF hydrogel that can spatiotemporally sequential deliver antibacterial and anti-inflammatory drugs showed great potential for the repair of MRSA-infected skin wounds. Introduction The skin separates the internal environment of the human body from the external environment and plays a crucial role in the human body [1, 2].However, skin is very vulnerable to injury, which includes external factors (surgery, pressure, burns and cuts, etc.) and pathological factors (diabetes or vascular disease, etc.) [3].These injuries can cause gaps in the protective barrier of the skin, allowing pathogens such as bacteria to attack the human body through gaps.Research showed that methicillin-resistant Staphylococcus aureus (MRSA) can exist in 7-30% of wounds [4], and MRSA may spread into the blood, even endanger life.Because MRSA is an antibiotic-resistant pathogen that can cause multiple serious infections, research by the World Health Organization have shown that the mortality rate of MRSAinfected patients is 64% higher than that of other infected patients [5].Therefore, effective treatment strategies for MRSA infection are particularly important for people's lives and health. At present, there are many strategies for the treatment of MRSA-infected wounds, such as antibiotics, bacteriophages and nanomedicine platforms [6,7].Among numerous treatment methods, the use of antibiotics that can kill drug-resistant bacteria can directly and effectively treat MRSA infection, but the unreasonable use of drug doses gradually reduces the therapeutic effect of antibiotics, and even forms the antibiotic resistance [8,9].Therefore, designing and developing better biocarrier to control antibiotics release is a mainstream direction for the treatment of MRSA infection [10][11][12].Among the current medical wound dressings (gauze, adhesive bandage, foam and hydrogel, etc.) 
[13][14][15], hydrogel is a good platform for antibiotic controlled release [16].At the same time, hydrogel offers the advantages that other materials do not have, such as moisturizing, self-healing, and on-demand functional designs [17,18].Therefore, it is of great practical significance to develop an antibacterial hydrogel-based antibiotic delivery system for MRSAinfected wounds. After the formation of the wound, the locally recruited inflammatory cells immediately migrate to the wound site, making the wound repair enter the inflammatory phase [19,20].However, excessive oxidative stress at the wound site can produce excessive ROS [21], which in turn triggers chronic inflammation [22], creating a vicious cycle.On the other hand, the colonization of MRSA in the wound bed undoubtedly leads to severe infection [23], exacerbating the inflammatory response and causing tissue damage [24].Infection also weakens immune cells, making it difficult to reverse pathological changes, prolonging the inflammatory phase and hindering further wound repair [25,26].Therefore, in order to treat infected wounds, it is necessary to remove bacteria [27,28], reduce ROS and inflammation [29][30][31], so as to restore balance in the microenvironment [32,33] of the wound site and facilitate orderly subsequent repair. There have been studies on hydrogels for the treatment of MRSA infection from the perspective of antibacterial, antiinflammatory and antioxidant, but the load of drugs is mostly reflected in the controlled release of a single drug.For example, due to the presence of carboxyl groups in pectin and gelatin, curcumin (Cur)-loaded photocrosslinked hydrogels composed of methacrylated gelatin and methacrylated pectin can release more Cur under alkaline conditions, showing great advantages for the treatment of infected wounds [34].Singh et al. prepared a hydrogel system by chitosan and poly(N-isopropylacrylamideco-methacrylic acid) (PNIPAM-co-MAA) microgels.Due to the temperature-responsiveness of PNIPAM and the pHresponsiveness of the carboxylic acid groups in MAA, the release of moxifloxacin in the hydrogel can achieve dual-responsive control of temperature and pH [35].The above studies have shown that a single controlled release of a drug can only achieve a single purpose, which cannot meet the multiple needs of infected wounds for antibacterial and anti-inflammatory etc., and also cannot meet the spatiotemporally sequential delivery of multiple needs.Therefore, design of hydrogel dressings that can deliver different drugs at different times to treat MRSA-infected wounds is a promising strategy. Moxifloxacin is a broad-spectrum fluoroquinolone antibiotic [36], which is an appropriate choice for the treatment of skin bacterial infections.Curcumin is a kind of diketone polyphenol compound, which has many functions such as antibacterial, anti-inflammatory, and antioxidant [37][38][39].The safety of Cur has been certified by the World Health Organization and U.S. 
Food and Drug Administration [40]. Besides, the industrial production of Cur is relatively mature, and the product is cheap and economical [41]. Based on the hydrophilicity of moxifloxacin hydrochloride and the hydrophobicity of Cur, they can be designed to exist in two different forms in hydrogels: moxifloxacin hydrochloride can be loaded in the hydrogel by directly mixing it with the hydrogel precursor solution, while Cur can first be mixed with Pluronic F127 (PF) to form Cur-encapsulated PF micelles (Cur-PF), which are then loaded into the hydrogel to achieve sustained release of Cur [42]. So, the release of Cur is characterized by sustained release, while the release of the moxifloxacin directly loaded in the hydrogel is characterized by high efficiency and fast release. This is consistent with the treatment characteristics of MRSA-infected wounds, and has not been reported. In this study, a ROS-responsive HPA hydrogel loaded with the antibiotic moxifloxacin (M) and the anti-inflammatory ingredient Cur-PF was prepared by phenylboronic acid ester cross-linking between hyaluronic acid-grafted 3-amino phenylboronic acid (HA-PBA) and polyvinyl alcohol (PVA) to treat MRSA-infected wounds. In this hydrogel, the phenylboronic acid ester bond formed between HA-PBA and PVA is ROS-responsive, which enables the responsive release of the drugs. Moxifloxacin and Cur exist in different forms in the hydrogel, which makes them spatially distinct from each other. Besides, this spatial difference between the two drugs results in a difference in their release rates, which further results in a time difference. Therefore, under ROS-responsive conditions the hydrogel prepared in this study rapidly released moxifloxacin to provide antibacterial activity, while exhibiting sustained release of Cur for anti-inflammatory action. These differences meet the different needs of MRSA-infected wounds at different treatment periods, based on the spatiotemporally sequential release of the two drugs. The swelling, degradation, self-healing, biocompatibility, responsive sequential release, antibacterial, anti-inflammatory and antioxidant properties of the hydrogel were tested, and its effectiveness in repairing full-thickness skin was verified in a MRSA-infected mouse skin wound model. This is the first time that the spatiotemporally sequential delivery of antibacterial and anti-inflammatory drugs has been realized on a hyaluronic acid (HA)-based hydrogel for repairing MRSA-infected skin wounds in mice. Preparation of hyaluronic acid grafted 3-amino phenylboronic acid (HA-PBA) To prepare the hydrogels loaded with Cur and moxifloxacin, the Cur-PF was dissolved in distilled water at a concentration of 30 wt%. Then, Cur-PF and HA-PBA were mixed in advance at a certain volume ratio, and the final concentration of Cur-PF was 5 wt%. The mass content of moxifloxacin and of Cur in Cur-PF was the same, and they were premixed into the HA-PBA solution. Then, the above mixed solution was mixed with the PVA solution, and the hydrogels obtained were named HPA1/M&Cur-PF, HPA2/M&Cur-PF and HPA3/M&Cur-PF, respectively.
The characterization of hydrogels The tests of nuclear magnetic resonance ( 1 H-NMR) [44], Fourier transform infrared spectroscopy (FT-IR) [45], field emission scanning electron microscopy (SEM) [46], transmission electron microscope (TEM) [43], swelling [47], degradation [48], self-healing, rheological and mechanical properties [49], DPPH scavenging [50], ROS scavenging [51] and biocompatibility were all carried out according to the literature [52].And the operational details can be found in SI.All animal experiments were conducted in accordance with the current guidelines for experimental animal care, and were approved by the Professional Committee of Xi'an Jiaotong University. In vitro drug release assay The drug release characteristics of HPA/M&Cur-PF hydrogels were tested in PBS or 1 mM H 2 O 2 for moxifloxacin and Cur.The drug released from the hydrogel was analyzed by UV-Vis spectrophotometer at 420 nm (Cur) and 288.57nm (Moxifloxacin), respectively [53].The details can be found in SI. Antibacterial property test of the hydrogels To test the antibacterial properties of the released drugs from the HPA/M&Cur-PF hydrogels, the samples were placed on solid medium (nutrient agar) in contact with the bacteria and the zones of inhibition around each sample were measured to record the antibacterial effect of HPA/M hydrogel loaded with moxifloxacin, and HPA/M&Cur-PF hydrogel loaded with moxifloxacin and Cur [44].The details can be found in SI. Anti-inflammatory experiments of HPA hydrogels Macrophage polarization was induced by lipopolysaccharide (LPS).Hydrogel leachate was used instead of culture medium, and after incubation for 48 h, total RNA of macrophages was isolated and reverse-transcribed and amplified for further analysis of related gene expression [54]. Wound healing in an in vivo MRSA infection model To further evaluate the promoting effect of HPA/M&Cur-PF hydrogel on wound healing, a wound healing model of MRSAinfected mouse back skin was established.The details can be found in SI. Histological and immunohistochemical evalu ation Collect wound specimens on the Days 3, 7, and 14 after treatment.Then, hematoxylin-eosin (HE) staining was performed to evaluate the epidermal regeneration and inflammation of the wound.Masson staining was used to evaluate collagen deposition in wound beds.On the other hand, immunofluorescence staining was performed using TNF-a and VEGF antibodies, respectively. Statistical analysis All experimental data were statistically analyzed, and the results were expressed as mean ± SD.Statistical differences were determined by one-way ANOVA and a Student t-test.In all cases, if P < 0.05, there is a significant difference. Ethics approval All protocols about animal experiments were approved by the animal research committee of Xi'an Jiaotong University (approval number: 2023-1469). Synthesis of hydrogel In this study, based on the dynamic phenylboronic acid ester bond between HA-PBA and PVA, and Cur-PF and antibiotic moxifloxacin (M), a series of hydrogel dressings with good antibacterial, anti-inflammatory and antioxidant effects, and stimulus-responsive drug release in different spatiotemporal sequences were prepared.Figure 1 showed the overall strategy to prepare HPA/M&Cur-PF hydrogels for MRSA-infected skin wounds healing.Firstly, PBA was grafted onto HA through amidation reaction, forming HA-PBA (Figure 1A).Secondly, Cur was encapsulated in PF by taking advantage of the self-assembly characteristics of PF to form Cur-PF (Figure 1B). 
Figure 1C is the structural diagram of PVA. Figure 1D showed the specific preparation procedure of HPA/M&Cur-PF hydrogel, namely Cur-PF and moxifloxacin were first mixed with HA-PBA precursor solution, and then HA-PBA in the mixed solution formed phenylboronic acid ester dynamic bond through the combination of phenylboronic acid group with the diol group structure on PVA.The hydrogel was named as HA-PBA/PVA1 (HPA1), HA-PBA/PVA2 (HPA2), and HA-PBA/PVA3 (HPA3) according to the final concentration of PVA in the hydrogel varying from 10, 20-30 mg/ml.Figure 1E showed the application of HPA/M&Cur-PF hydrogel in the MRSAinfected skin wound of mice.Based on the response of phenylboronic acid ester dynamic bond to ROS, and the different loading forms of moxifloxacin and Cur in hydrogel, two drugs in the HPA/M&Cur-PF hydrogel achieved stimulus-responsive release in different spatiotemporal sequences.When the HPA/M&Cur-PF hydrogel was applied to the MRSA-infected skin wounds of mice, it can achieve responsive anti-inflammatory and antioxidant property on the basis of rapid antibacterial action, and synergistically promote the wound repair. As shown in Figure 2A, the peak of HA-PBA at 7-8 ppm in the 1 H-NMR spectrum comes from hydrogen on the benzene ring, demonstrating the successful grafting of PBA.The chemical shifts (d, ppm) of the peaks were assigned as below: 7.58 (m, 4H, A), 1.89 (s, 3H, B).It could be seen that phenylboronic acid was successfully grafted.Through integral calculation, the grafting ratio of PBA was 14.2%.Meanwhile, as shown in Figure 2B, the changes in the peaks at 1459 and 1517 cm −1 in the FT-IR spectrum came from benzene ring, and the peak at 1340 cm −1 is attributed to the stretching vibration of the B-O, once again proving the successful grafting of PBA.The TEM image of Cur-PF in Figure 2C confirmed the formation of Cur-PF micelles with a diameter of around 300 nm, which is consistent with the results of previous studies [43].As shown in the Supplementary Figure S1, the diameter of Cur-PF micelles was tested by using dynamic light scattering, and the experimental results showed that it is distributed in the range of 220-458 nm, which is consistent with the TEM results.Figure 2D showed the state of HPA hydrogel before and after gelation.Both HA-PBA and PVA are in the liquid state with fluidity.After mixing and shaking them in a certain proportion within 2 min, the gelatinized HPA hydrogel without flowing state can be observed.The test tube inversion method was used to measure the gelation time at constant temperature of 25 � C. Supplementary Table S1 showed the average gelation time of the HPA2 hydrogel was 50.4 ± 2.4 s, while the gelation time of the HPA2/PF hydrogel was a little longer, about 69.3 ± 2.1 s, which may be due to the surfactant of PF [55,56].Figure 2E showed the SEM images of all hydrogels.With the increase of PVA concentration in HPA1, HPA2 and HPA3 hydrogel, the crosslinking-density of hydrogels was increased, and the pore size also became smaller.When the PF micelles were added, it can be seen that the pore size of HPA2/PF hydrogel was more uniform than that of HPA2, which may be caused by the nature of non-ionic surfactant of PF. 
Figure 2F showed the statistics of the pore diameter of all hydrogels, which more intuitively showed that with increasing of PVA concentration, the pore size of the hydrogel gradually decreased from 191.8 ± 49.3 mm of HPA1 hydrogel to 92.6 ± 30.7, 68.3 ± 24.4 and 66.2 ± 10.3 mm for HPA2, HPA3 and HPA2/PF hydrogels, respectively.And the pore size of HPA2/ PF hydrogel was more uniform compared to HPA2 hydrogel. Mechanical properties, swelling, degradation and self-healing of hydrogels With the introduction of the wet healing theory [57], maintaining a certain level of humidity at the wound site is beneficial for the repair of skin wounds [58].Figure 3A showed the equilibrium swelling ratio of HPA hydrogels, all hydrogels can absorb more than 60 times water of their own mass.The swelling ratio of HPA hydrogels decreased with increasing of PVA concentration, and their swelling ratios were 8878.8 ± 470.1%, 7360.0 ± 372.5% and 5971.1 ± 322.1% for HPA1, HPA2 and HPA3, respectively.This should be due to the increase of PVA content, which provides more crosslinkable sites and increases the crosslinking density within a certain range.Besides, the swelling ratio of HPA2/PF was 6560.5 ± 107.3%, slightly lower than that of HPA2 hydrogel, because of hydrogen bonding between PF micelles and HPA hydrogel network.The good swelling properties of the HPA hydrogels make it natural and great advantages in the management of wound exudate, and it can absorb the exudate well while maintaining the wound moist. Biodegradability is a crucial criterion for measuring medical materials [59].Therefore, the degradation performance of HPA hydrogels was further evaluated.Figure 3B showed that all hydrogels have good degradability.With increasing of PVA concentration, the crosslinked network of hydrogels was closer, so the degradation rate of HPA3 hydrogels was the slowest.Specifically, after 3 days of testing, the remaining weight of HPA1, HPA2 and HPA3 hydrogels groups remained 54.6 ± 1.8%, 72.0 ± 4.0%, and 83.0 ± 1.7%, respectively.Since the addition of PF micelles maked the crosslinked network of hydrogel more closely, HPA2/PF degraded more slowly than HPA2, showed 88.0 ± 1.7% remaining weight after 3 days of testing.Importantly, the remaining weight of all hydrogels was less than 30% after the 12 days test.Experiments verified that all hydrogel dressings prepared in this study had reasonable degradation properties. Appropriate modulus is very important for hydrogel dressings.It can provide a good mechanical matching between hydrogel dressings and skin tissues, ensure a comfortable sense of wear during use, and reduce physical damage to damaged tissues.Therefore, rheological tests were used to evaluate the properties of HPA hydrogels and Figure 3C showed that with increasing of PVA concentration, the modulus of the hydrogel showed a trend of gradual increase.The storage modulus of HPA1 was 40.6 Pa, that of HPA2 was 139.1 Pa, and that of HPA3 was 424.7 Pa.Due to the addition of PF micelles, the modulus of HPA2/PF hydrogel was also increased to 227.4 Pa compared with HPA2 hydrogel.The thermal stability of the hydrogels in the range of 25-40 � C was tested.As shown in the Supplementary Figure S2, the modulus of HPA2 and HPA2/PF hydrogels did not change much.In addition, we can clearly observe that the HPA2/PF hydrogel can still maintain the gel-forming state under the high temperature condition of 40 � C. 
The hydrogel has good thermal stability between 25 � C and 40 � C, so when applied to the skin surface, the hydrogel can still maintain its own stability even under body temperature conditions. Skin's inevitable stretching during human activities is very easy to damage the hydrogel dressings.Therefore, higher requirements are put forward to the mechanical properties of hydrogels.As shown in Figure 3D, it can be seen from the stressstrain test that all hydrogels showed good compressibility.When the strain was 80%, with the increase of PVA concentration, the stress of the hydrogels was 2161.7,2445.9 and 3547.0Pa, respectively, and all hydrogels were unbroken.After the addition of PF micelles, the stress of HPA2/PF hydrogel increased to 4998.7 Pa at the strain of 80%.The above experimental results showed that the mechanical properties of the hydrogel gradually increase with increasing of PVA concentration, and it can be effectively improved by adding PF micelles.The results of the above tests showed that the properties of HPA hydrogels, including pore size, swelling, degradation, modulus, and mechanical properties, can be easily regulated by adjusting the ratio between components.This tunable property provides a broader selectivity for the hydrogel to adapt to different wound states and repair stages. The damage of hydrogel dressing will lose the protective effect to wound, which puts forward requirements for the self-healing performance of skin wound dressing.As shown in Figure 3E, after the hydrogel was cut into two halves, it can quickly self-heal and merge within 5 min.Figure 3F showed that when the strain was higher than 1500%, G 00 of hydrogel was higher than G 0 , which means that the hydrogel network was damaged.Therefore, by constantly switching low strain (c ¼ 1%) and high strain (c ¼ 1500%), the quantitative self-healing function was tested.At the beginning of the test, at the first low strain (c ¼ 1%), G 0 and G 00 was 241.3 and 138.4 Pa, respectively, and G 0 > G 00 .After switching to the high strain (c ¼ 1500%), G 0 changed to 27.6 Pa and G 00 changed to 44.7 Pa, G 00 > G 0 , which means that the hydrogel network crashed.In the following tests, when c was cyclically transformed, G 0 and G 00 can recover to the initial value, with no significant difference in modulus compared to the first test.In conclusion, due to the existence of phenylboronic acid ester dynamic bond, the HPA hydrogel prepared in this study has good self-healing performance, which provides the quickly restore of structural integrity when it is ruptured under external force, not only ensures the physical barrier effect, but also avoids the possible bacterial invasion after rupture.As shown in the Supplementary Figure S3, shear rate scanning measurements indicate that all hydrogels in this experiment exhibit shear-thinning behavior, where the viscosity of the material depends on the shear force, and thus the hydrogels in this work are injectable. 
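For reference, the swelling and degradation results above can be reproduced from raw masses with the ratio definitions most commonly used for hydrogel dressings. The exact formulas are not spelled out in the text, so the expressions and sample masses below are assumptions for illustration only.

```python
# Commonly used definitions for hydrogel swelling and degradation
# (assumed here; the paper does not state its exact formulas).

def swelling_ratio(w_swollen: float, w_dry: float) -> float:
    """Equilibrium swelling ratio in %: (Ws - Wd) / Wd * 100."""
    return (w_swollen - w_dry) / w_dry * 100.0

def remaining_weight(w_t: float, w_0: float) -> float:
    """Remaining weight in % after degradation: Wt / W0 * 100."""
    return w_t / w_0 * 100.0

# Illustrative masses (grams), chosen so the outputs match the ~7360 %
# swelling and ~72 % remaining weight reported for HPA2.
print(f"Swelling ratio:   {swelling_ratio(w_swollen=7.46, w_dry=0.10):.1f} %")
print(f"Remaining weight: {remaining_weight(w_t=0.072, w_0=0.100):.1f} %")
```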
ROS-responsive drug release of hydrogels The biggest problem of using antibiotics is bacterial resistance [60].Loading drugs in hydrogels can greatly avoid the repeated use of drugs and reduce the generation of bacterial resistance.In particular, the ROS response caused by the presence of phenylboronic acid dynamic ester bond in HPA hydrogel can realize intelligent on-demand drug delivery in the hydrogel system [61].As shown in Figure 3G, when 200 ll of 10 mM H 2 O 2 was added to the prepared 500 ll HPA2/M&Cur-PF hydrogel, a certain fluidity of the hydrogel can be observed just within 20 min, and the hydrogel network collapsed completely at 4 h.This is mainly due to the destruction of the hydrogel network structure caused by the reaction of the phenylboronic acid ester structure with H 2 O 2 .As shown in Supplementary Figure S4, the modulus of HPA2 hydrogels and HPA2/PF hydrogels were tested under 1 mM H 2 O 2 or PBS for the same time, respectively.At 10 min, the modulus of the hydrogel in PBS did not change much compared to the initial value, but the modulus of the hydrogel in H 2 O 2 decreased dramatically by about 200 Pa.At 20 min, the modulus of the hydrogel in PBS decreased due to the swelling and absorbing of water, and at this time, the hydrogel in H 2 O 2 was close to the state of non-gelation.In this study, two drugs were loaded into the hydrogel dressings.One was moxifloxacin with antibacterial effect directly mixed in the hydrogel, and the other was Cur with anti-inflammatory and antioxidant functions encapsulated in PF micelles.As shown in Figure 3H, the release rate of moxifloxacin in PBS is relatively slow, and the release time can be more than 250 h.While in 1 mM H 2 O 2 release solution, the release amount of moxifloxacin in 36 h can reach more than 80%.The above results showed that hydrogel can deliver antibiotics quickly and effectively after being applied to wounds due to the ROS responsiveness of phenylboronic acid ester bonds.These results mean that when bacterial infection is serious, leading to excessive inflammation and the production of a large amount of ROS, the ROS response can be quickly activated, so that the antibiotic can be released rapidly to prevent the continuation of severe infection in a short time.The release amount of Cur in PBS and H 2 0 2 was 14% and 38% at 24 h.In summary, moxifloxacin can be rapidly released more than 80% within 36 h, and Cur can be continuously released within 288 h under the action of 1 mM H 2 O 2 .In a word, the spatiotemporally sequential release of these two drugs allows for a rapid antimicrobial treatment and then sustained anti-inflammatory and antioxidant effect at the site of the infected wound. Antibacterial properties of hydrogels The antibacterial effect of the antibiotics released from the hydrogel was verified by the inhibition zone test.The control group is just the holes punched on the agar plate by the punch, so the diameter size and shape of the control group do not change with time and environmental factors, and at the same time to ensure that the initial diameter is the same between every group, and to confirm whether the experimental group formed a ring of inhibition around the holes.As shown in Figure 4A, at the 12 h, the diameter of the inhibition zone for HPA2/M hydrogel and HPA2/ M&Cur-PF hydrogel against E. 
coli were 2.20 ± 0.02 and 2.26 ± 0.02 cm, respectively, and the diameter of the inhibition zone against MRSA were 2.08 ± 0.01 and 2.05 ± 0.01 cm, respectively.This suggested that the antibacterial effect of HPA2/M&Cur-PF hydrogels is better than that of HPA2/M hydrogels.Until the 60 h, there was still obvious inhibition zone.At this time, the diameter of HPA2/M hydrogel and HPA2/M&Cur-PF hydrogel against E. coli was 1.30 ± 0.09 and 1.50 ± 0.02 cm, respectively, and the diameter of the inhibition zone against MRSA was 1.24 ± 0.02 and 1.33 ± 0.01 cm, respectively.The appearance of inhibition zone indicates that moxifloxacin and Cur-PF loaded in the hydrogel diffuse to the surrounding environment, thus killing the bacteria in a certain area.In addition, the inhibition zone diameter of HPA2/ M&Cur-PF hydrogel for MRSA was statistically larger than HPA2/ M hydrogel, and showed significant different (P < 0.05).This is because Cur also has antibacterial effect [62], which is synergistic with moxifloxacin.More obviously, at the 84 h, the inhibition zone of HPA2/M hydrogel had disappeared, while the inhibition zone of HPA2/M&Cur-PF hydrogel was still slightly larger than 7 mm, suggesting that HPA2/M&Cur-PF hydrogels had better antibacterial effect than HPA2/M hydrogels.In conclusion, the hydrogel dressings prepared in this study had good sustained antibacterial effect. Biocompatibility of hydrogel Good biocompatibility is an essential prerequisite for biomedical materials [63].In this study, the prepared hydrogels were tested from the aspects of blood compatibility and cell compatibility.As shown in Figure 5A, in the blood compatibility test, the materials in experimental group showed comparable to or even lower hemolysis ratio than that of the PBS group, with hemolysis ratio of below 5%, which was considered to be a good range of blood compatibility.The test results indicated that they will not cause significant hemolysis when applied to biological tissues.Figure 5B showed the cell compatibility of the hydrogels.Because of the good adhesion and proliferation effect of HA on cells [64], the cell viability of HPA2 hydrogel group was higher than that of the control group after co-culture.The cell viability of the HPA2/M&Cur-PF group compared to the control group was 80.3%, 82.1%, and 94.4% in the first 3 days, respectively.Although the cell viability of the experimental group was slightly decreased in the first 2 days of testing compared to the control group, the significant differences were all P < 0.05, indicating that the material was not toxic.Figure 5C exhibited the Live/Dead staining images of L929 cells after co-cultured with the hydrogel leachate for one day, which is consistent with the quantitative statistical results of cell viability.In general, the hydrogel dressings prepared in this study had good biocompatibility. 
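The hemolysis ratios and cell viabilities reported above are normally computed from absorbance readings with simple control-normalized ratios. Since this section gives only the resulting percentages, the formulas and readings below are assumed for illustration.

```python
# Typical hemolysis-ratio and cell-viability calculations (assumed forms).

def hemolysis_ratio(a_sample: float, a_neg: float, a_pos: float) -> float:
    """Hemolysis % relative to negative (e.g. PBS) and positive (full lysis) controls."""
    return (a_sample - a_neg) / (a_pos - a_neg) * 100.0

def cell_viability(a_sample: float, a_control: float) -> float:
    """Viability % from assay absorbance versus the untreated control."""
    return a_sample / a_control * 100.0

# Illustrative absorbance readings only.
print(f"Hemolysis: {hemolysis_ratio(a_sample=0.08, a_neg=0.05, a_pos=1.20):.1f} %")  # below the 5 % threshold
print(f"Viability: {cell_viability(a_sample=0.81, a_control=1.00):.1f} %")           # ~80 % as on day 1
```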
Antioxidation and anti-inflammation of hydrogel If MRSA-infected wounds are not treated in a timely and appropriate manner, it can easily change to severe infection, causing severe inflammation at the wound site, and producing a large amount of ROS through oxidative stress [65,66].Therefore, we first used the scavenging experiment of stable free radical DPPH• to evaluate the antioxidant properties of the hydrogels.Figure 5D showed that the presence of PBA group brings little DPPH• scavenging capacity to the hydrogels.Furthermore, due to the good antioxidant capacity of Cur, 500 ll of HPA2/Cur-PF hydrogel can scavenge more than 90% of DPPH.In addition, Figure 5E further showed the ROS scavenging ability of the hydrogels by DCFH-DA staining.LPS-induced macrophage (RAW 264.7) produced a large amount of ROS, while normal macrophage (RAW 264.7) cells in the control group only produced a small amount of ROS and the HPA2/Cur-PF hydrogel group showed similar or even lower ROS intensity than the control group.The statistical results in Figure 5F showed that the ROS production in the experimental group was not significantly different from that in the control group.As is well known, high concentrations of ROS can cause DNA damage and cell death, while low concentrations of ROS play a crucial role in signal transduction and wound repair [67].Therefore, the experimental group didn't show a significant different amount of ROS compared to the control group, which is very beneficial for wound repair.As shown in Figure 5G and H, the significantly increased expression levels of TNF-a and IL-1b in the LPS group indicate that macrophage (RAW 264.7) cells were successfully induced into an inflammatory state, while the expression levels of both inflammatory factors were significantly reduced under the action of the hydrogel group, indicating that the hydrogel has significant anti-inflammatory effects.In summary, the HPA2/Cur-PF hydrogels have suitable antioxidant and anti-inflammatory effects, which lays a strong foundation for its use as a dressing for wound with MRSA infection. 
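The DPPH result quoted above (more than 90 % scavenging for HPA2/Cur-PF) follows from the standard scavenging-ratio calculation sketched below; the formula and the absorbance values are assumptions for illustration, since only the final percentage is reported.

```python
# Standard DPPH radical-scavenging calculation (assumed form).

def dpph_scavenging(a_control: float, a_sample: float) -> float:
    """Scavenging ratio % = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

# Illustrative absorbances (DPPH is typically read near 517 nm).
print(f"DPPH scavenging: {dpph_scavenging(a_control=0.95, a_sample=0.08):.1f} %")
```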
Promoting effect of HPA/M&Cur-PF hydrogel on healing of MRSA-infected skin wounds A series of in vitro experiments have verified the good antibacterial, anti-inflammatory and antioxidant effects of HPA/M&Cur-PF hydrogel.We further established a MRSA-infected mouse back skin wound model, and evaluated the repair-promoting effect of the hydrogel prepared in this study.Figure 6A showed the entire time course from modeling to wound repair.First, a circular wound with a diameter of 8 mm was created on the back skin of mouse, and 10 ll of 10 8 CFU/ml MRSA were injected.On the 3rd, 7th, and 14th day, the skin at the wound site was observed and sectioned for study.The commercially available Tegaderm TM dressing was selected as the control group [68].Based on the previous characterization tests, HPA2 hydrogel was selected as the representative and applied in the subsequent animal experiments.The experimental groups were HPA2 hydrogel, HPA2/M hydrogel loaded with moxifloxacin (the concentration of moxifloxacin was 2 mg/ml), HPA2/Cur-PF hydrogel loaded with Curencapsulated PF (the concentration of Cur was 2 mg/ml), and HPA2/M&Cur-PF hydrogel loaded with moxifloxacin and Curencapsulated PF (the concentration of moxifloxacin and Cur were both 2 mg/ml).In Figure 6B, the wounds in each group showed different phenomena over time after surgery.Control group showed significant suppuration on the third day after treatment, which even lasted until the seventh day.This is consistent with the clinical phenomenon that MRSA-infected wounds have severe bacterial infection with a lot of inflammation and even difficulty in healing.The HPA2 hydrogel group did not have a good anti-infection effect compared to the other three experimental groups, and no significant sepsis was found.In Figure 6C, the wound areas of each group were plotted over time.In Figure 6D, the wound closure ratio of each group was analyzed.Compared with the control group, after 7 days of treatment, there was a significant difference (P < 0.01) in the wound closure of HPA2, HPA2/M, HPA2/Cur-PF, and HPA2/M&Cur-PF group, respectively.Besides, there was a significant difference in the wound closure of HPA2/M&Cur-PF and HPA2 (P < 0.05), which indicated that the hydrogel loaded with the two drugs promoted better wound repair.On the 14th day, there still was a significant difference (P < 0.01) between HPA2/M&Cur-PF and HPA2/M and HPA2/Cur-PF, which proved that moxifloxacin and Cur promoted MRSA-infected wound healing, and the synergistic effect of the two drugs was more favorable for wound repair.In conclusion, on the 14th day, all hydrogel groups showed good healing effect.The HPA2/M&Cur-PF hydrogel group showed the best healing effect, with almost complete wound closure.But the control group still had an obvious wound, with 34.3% of the remaining wound area, which was significantly different from the other hydrogel groups (P < 0.001). 
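The wound-closure percentages discussed above are derived from traced wound areas; a minimal sketch of that calculation is shown below. The closure definition and the areas are assumptions for illustration, using the 8 mm initial wound diameter and the 34.3 % remaining area reported for the control on day 14.

```python
# Wound-closure ratio as commonly computed from traced wound areas
# (assumed definition; areas are illustrative).
import math

def wound_closure(area_day0: float, area_dayN: float) -> float:
    """Closure % = (A0 - At) / A0 * 100."""
    return (area_day0 - area_dayN) / area_day0 * 100.0

a0 = math.pi * (8 / 2) ** 2        # initial 8 mm circular wound, ~50.3 mm^2
a14_control = a0 * 0.343           # 34.3 % of the wound remaining (control, day 14)
print(f"Control closure, day 14: {wound_closure(a0, a14_control):.1f} %")
```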
The effect of each group on wound repair was further evaluated by HE staining.In Figure 6E, on the third day, a large amount of inflammation can be seen in the control group, while the inflammation in other hydrogel groups is relatively low.Figure 6G showed the statistics of inflammation.Due to the effects of moxifloxacin and Cur, the inflammation of HPA2/M, HPA2/Cur-PF and HPA2/M&Cur-PF hydrogel groups is significantly lower than that of the control group and HPA2 hydrogel group (P < 0.01).The dual effect of antibacterial and anti-inflammatory effects makes the relative amount of inflammation between HPA2/M&Cur-PF hydrogel group and HPA2/M and HPA2/Cur-PF have a significant difference (P < 0.01).In Figure 6E, on the seventh day, the epidermal regeneration in the control group was discontinuous, accompanied by blood scabs, which was mainly because the wound site was infected by MRSA and the bacteria were not removed in time, and the infection also made it difficult for wound healing to continue the next step.However, obvious continuous epidermal regeneration was seen in HPA2/M, HPA2/Cur-PF and HPA2/ M&Cur-PF hydrogel groups.In Figure 6H, statistics showed that the thickness of regenerated epidermis in HPA2/M&Cur-PF hydrogel group was the thickest, which was significantly different from the other four groups (P < 0.01).The metabolism of collagen participates in the whole process of wound repair and the regenerated collagen constitutes an important part of the repaired wound, so its importance is obvious.In Figure 6F, masson staining showed the deposition of collagen in each group on the seventh day.And in Figure 6I, a statistical analysis of the masson staining was performed.The relative collagen deposition of the four test groups was significantly higher than that of the control group, and the difference was significant compared with the control group (P < 0.05).Among the experimental groups, the HPA2/ M&Cur-PF group, which was more effective for wound repair, had 3.4 times more collagen deposition than the control group.In summary, HPA2/M&Cur-PF hydrogel dressings can promote wound closure, reduce inflammation and promote collagen deposition in MRSA-infected mouse skin wound healing. 
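The group comparisons reported throughout this section (wound closure, inflammation scores, epidermal thickness, collagen deposition) rest on the one-way ANOVA and Student t-tests described in the statistical-analysis paragraph. A minimal sketch of that workflow with SciPy is shown below; the group values are invented placeholders, not data from the study.

```python
# Minimal sketch of the stated statistics (one-way ANOVA + Student t-test),
# using invented placeholder measurements for the five treatment groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "control":       rng.normal(55, 5, 6),   # e.g. wound closure % on day 7 (placeholder)
    "HPA2":          rng.normal(65, 5, 6),
    "HPA2/M":        rng.normal(72, 5, 6),
    "HPA2/Cur-PF":   rng.normal(71, 5, 6),
    "HPA2/M&Cur-PF": rng.normal(80, 5, 6),
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

t_stat, p_t = stats.ttest_ind(groups["HPA2/M&Cur-PF"], groups["control"])
print(f"t-test vs control: t = {t_stat:.2f}, p = {p_t:.4f} "
      f"({'significant' if p_t < 0.05 else 'n.s.'} at P < 0.05)")
```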
Immunofluorescence staining analysis Wound healing is a complex process, and the expression of relevant cytokines can reflect some specific situations during the wound healing process [69,70].The third day after wound formation is considered to be the inflammatory period.At this time, a proinflammatory cytokine, tumor necrosis factor (TNF-a) [71], was selected to evaluate the effect of hydrogel on inflammation control.From the immunofluorescence staining analysis of TNFa in Figure 7A and the statistics in Figure 7B, it can be seen that the colonization of bacteria at the wound leads to the aggravation of inflammation, and there were significant differences between control and HPA2/M, HPA2/Cur-PF and HPA2/ M&Cur-PF hydrogel groups, respectively.Specifically, inflammation in HPA2/M and HPA2/Cur-PF hydrogel groups was significantly reduced.The reduction in inflammation in the HPA2/M hydrogel was due to the antibacterial effect of moxifloxacin, while the reduction in inflammation in HPA2/Cur-PF hydrogel was due to the antibacterial and anti-inflammatory effects of Cur.Most importantly, HPA2/M&Cur-PF hydrogel group has the synergistic effect of two drugs, so it showed the lowest inflammation.The generation of blood vessels is an essential physiological process in wound healing, which can rebuild normal blood flow for wound tissue, provide nutrition and oxygen for the tissue, and accelerate the process of wound repair [72,73].On the seventh day, vascular endothelial growth factor (VEGF) [74] was used to evaluate the neovascularization at the wound.Figure 7A showed the immunofluorescence staining analysis of VEGF and Figure 7C showed the statistical results.Significant differences (P < 0.01) were found between the control group and the HPA2/M hydrogel group, the HPA2/Cur-PF hydrogel group and the HPA2/ M&Cur-PF hydrogel group, respectively.In HPA2/M and HPA2/ Cur-PF hydrogel groups, due to the effect of two drugs to avoid bacterial infection and inflammation in the early stage, the hydrogel promoted the formation of new blood vessels.In conclusion, based on the expression of TNF-a and VEGF, HPA2/ M&Cur-PF hydrogel showed better therapeutic effect in MRSAinfected skin wounds. Conclusion In this study, HA-based HPA/M&Cur-PF hydrogel dressing with spatiotemporally sequential delivery of antibacterial and antiinflammatory drugs was constructed based on the dynamic bond of phenylboronic acid ester for the first time, and was used to repair MRSA-infected skin wounds.The rheological properties, self-healing, biocompatibility, responsiveness, spatiotemporally sequential drug delivery, antibacterial, antioxidant and antiinflammatory properties of hydrogels were verified.Under ROS conditions, hydrogels can release more than 80% of moxifloxacin within 36 h to perform quickly anti-infection effect, and release Cur up to 288 h to perform a sustained anti-inflammation effect.Finally, in the MRSA-infected mouse skin wound healing, the hydrogel-treated group exhibited faster wound closure, reduced inflammation, and promoted epidermal growth and collagen deposition.Immunofluorescence staining results also demonstrated the hydrogels' ability to reduce inflammation while also promoting angiogenesis.In conclusion, HPA/M&Cur-PF hydrogel dressing has a significant effect on the repair of skin wounds infected by MRSA, providing ideas for responsive spatiotemporally sequential drug delivery strategies. Figure 2 . Figure 2. 
(A) 1 H-NMR spectrum of HA-PBA, A represents the hydrogen on the benzene ring and B represents the hydrogen on the methyl group.(B) FT-IR spectra of HA, PBA, and HA-PBA.(C) The TEM image of Cur-PF micelle.(D) Gelation display of HPA hydrogel.(E) HPA1, HPA2, HPA3 and of HPA2/PF hydrogels' SEM image (scale bar: 200 lm) and (F) pore size diameter statistics. Figure 3 . Figure 3. (A) Swelling behavior of HPA hydrogels.(B) Degradation behavior of HPA hydrogels.(C) Rheological behavior of HPA hydrogels.(D) The strainstress curves of HPA hydrogels during the compression test.(E) Self-healing display of HPA hydrogels.(F) The rheological properties of HPA2/PF hydrogel when alternate step strain switched from 1% to 1500%.(G) ROS-responsive properties of hydrogels.The release of (H) moxifloxacin and (I) curcumin from HPA2/M&Cur-PF hydrogels in PBS or H 2 O 2 . Figure 4 . Figure 4. (A) The inhibition zone of hydrogel for E. coli and MRSA within 96 h.In the figure, 1, 2 and 3 represent the control group, the HPA2/M group and the HPA2/M&Cur-PF group, respectively.Statistics of inhibition zone of hydrogel for (B) E. coli and (C) MRSA within 96 h ( � P < 0.05). Figure 5 . Figure 5. (A) Hemolysis ratio of hydrogel.(B) Cell compatibility of hydrogel on L929 cells within 3 days test.(C) Live/dead straining of L929 cells on the first day.(D) DPPH scavenging statistics of hydrogel.(E) Representative images of ROS scavenging experiments and (F) statistical results for fluorescent areas.Statistic of expression results of (G) TNF-a and (H) IL-1b after application of hydrogels on macrophages ( � P < 0.05, �� P < 0.01, ��� P < 0.001). Figure 6 . Figure 6.(A) Schematic diagram of the in vivo wound healing experimental program in an infected full-thickness skin defect model.(B) The pictures of wounds on the 3rd, 7th, and 14th day were divided into five groups: Tegaderm TM film dressing (control), HPA2 hydrogel, HPA2/M hydrogel, HPA2/Cur-PF hydrogel and HPA2/M&Cur-PF hydrogel, and (C) plotting of wound area over time.(D) Statistics on changes in wound closure ratio.(E) HE staining of the wound site on the 3rd, 7th, and 14th day.(F) Masson staining of the wound site on the seventh day.(G) Statistics of relative inflammation at the wound site on the third day.(H) Thickness of regenerated epidermis at the wound site on the seventh day.(I) Statistics of relative collagen deposition at the wound site on the seventh day ( � P < 0.05, �� P < 0.01, ��� P < 0.001). Figure 7 . Figure 7. (A) Immunofluorescence staining images of TNF-a on the third day and VEGF on the seventh day, with green indicating TNF-a expression and red indicating VEGF.(B) Quantitative analysis of the relative area percentage (n ¼ 3) of TNF-a and (C) VEGF.For all quantitative analyses, the commercial film group data for TNF-a on the third day and VEGF on the seventh day were set as 100% ( � P < 0.05, �� P < 0.01).
8,981
2023-12-09T00:00:00.000
[ "Medicine", "Materials Science" ]
Modeling the heat balance of a solar concentrator heliopyrolysis device reactor . The article presents the principle scheme of the heliopyrolysis device with a solar concentrator and the mathematical model of the equations representing the heat balance of the heliopyrolysis reactor. Based on the mathematical modeling method, the energy balance of the heliopyrolysis reactor was theoretically studied, and graphs representing the temperature change of the reactor surface were obtained when the solar radiation falling on the concentrator was Ir=600÷900 W/m 2 in the climatic conditions of the region of Karshi (Uzbekistan). Based on modeling in the SOLTRACE program, the graphs of the temperature field at different points of the solar parabolic concentrator are determined to change depending on the energy of the incident solar radiation. In the experimental device, it was determined that an average temperature of 200-300 ℃ can be generated in the reactor within one hour. Experiments show that in the conditions of the city of Karshi, it is possible to create a regime of 200÷500 ℃ sufficient for biomass pyrolysis through a parabolic solar concentrator in the daytime mode. Introduction Currently, rational use of natural fuel resources and ensuring energy efficiency are important tasks.Effective use of renewable energy sources is important in saving energy resources.The energy potential of solar and biomass energy from renewable energy sources is great, and their practical use is highly effective in terms of energy, ecology and economy [1].The increase in the population and the development of the utility service system lead to an increase in energy consumption.With the increase in the population, there are problems such as saving food and energy resources without harming the environment.To solve these problems, organic fuels (coal, oil, natural gas) use should be radically reduced [2][3]. In the world, the use of renewable energy sources such as wind, solar, water and biomass is considered a priority, and special scientific research is required for the effective use of these energy sources and the creation of technological devices based on them.For this, it will be necessary to create energy-efficient technologies, reduce harmful gases released from them, optimize the share of extracted organic fuel, and increase the efficiency of renewable energy sources.It is planned to increase the share of energy produced by renewable energy sources in the world from 8.5% in 2005 to 25% in 2025 [4]. Pyrolysis is a modern effective method of obtaining solid (coal), gaseous (biogas) and liquid (tar and oil) biofuels from biomass and various organic wastes.Pyrolysis is a method of turning organic waste and biomass into steam-gas by heating them in an airless environment, and then cooling them to produce gas, liquid, and solid hydrocarbon products.liquid, gaseous and solid alternative fuels are obtained from organic waste within hours.In a pyrolysis device, it is possible to generate three different types of alternative fuels and heat energy (hot water) at the same time, which dramatically reduces the amount of "greenhouse" gases released into the environment [5]. 
The process of biomass pyrolysis is a high-temperature thermochemical process, in which the average temperature regime is 500÷700 ℃.Biomass is one of the classic renewable energy sources, which can be processed into solid, gaseous and liquid alternative fuels.By processing biomass, it is possible to firstly dispose of agricultural and local organic waste, secondly to obtain cheap fuel, and thirdly to reduce the amount of toxic gases released into the atmosphere.Possibilities of using biomass for energy purposes provide energy, environmental and economic benefits at the same time [6]. Currently, it is important to use solar concentrators to use solar energy in technological processes that require high temperatures.In recent years, in the world and in Uzbekistan, scientific research on the use of solar energy in various technological processes has been conducted and practical results have been achieved.The analysis of devices for obtaining energy from biomass shows that it is necessary to solve problems such as reducing the energy capacity of the raw material processing process, optimizing the energy balance of the device, and increasing its energy efficiency [7][8][9][10][11][12][13][14][15][16][17][18][19]. The purpose of the study is to model the heat balance of the reactor of the heliopyrolysis device with a solar concentrator and justify the thermal technical parameters. Materials and methods Reducing the energy used for self-extraction in pyrolysis devices is one of the main problems.Because in order to ensure the required (350-500 ℃) temperature regime in the reactor, energy (heat) must be spent initially.It is usually done by using coal, natural gas or electricity as an energy source for the processes carried out in the pyrolysis device.The reason is that it takes a lot of heat energy to break down biomass waste.Additional heating of biomass requires excessive energy consumption.This problem can be solved by fully covering the required energy with solar energy during daytime operation. The method of effective use of solar concentrators for biomass heliopyrolysis is considered in this research work.Taking into account the solar energy potential of the region, the principle scheme of the pyrolysis device for the thermal processing of biomass was created (Figure 1). Production of biofuel from biomass in a heliopyrolysis device includes the following technological processes.The heliopyrolysis device consists of the main reactor 1, the moving pipe 2 of the gas mixture separated from the biomass, the heat exchanger 3 and the parabolic concentrator 12.The heliopyrolysis device is mainly designed for obtaining biofuels from biomass.Sunlight is concentrated in the parabolic concentrator 12 and heats the biomass inside the reactor.When the temperature inside the reactor rises above 300-350 ℃, biogas is released from the biomass.As biogas moves through pipe 2 and passes through the center of parabolic cylindrical concentrator 13, the speed of movement increases and its pressure is measured by manometer 17.Biogas goes to condenser 3. 
Incoming water pipe 10 and outgoing water pipe 11 are connected to the heliopyrolysis condenser, where the biofuel separated from the biomass is condensed. The gaseous fraction of the biofuel is first passed through the water filter 7, then through the activated carbon filter 8, and collected in the gas holder 9. The liquefied fraction of the fuel passes through the screw 14-15-16 and is collected in the tank. Using the heliopyrolysis device in the Kashkadarya, Surkhondarya, Bukhara and Samarkand regions of the Republic, which have hot climatic conditions, gives effective results and saves fuel and energy resources. In this research work, the theory of heat and mass exchange in thermal engineering and solar devices and methods of calculating heat balance equations were used.

A mathematical model of the heat balance of the heliopyrolysis reactor can be constructed as follows:

$Q_{\mathrm{sup}} = Q_{\mathrm{bio}} + Q_{\mathrm{loss}}$,

where $Q_{\mathrm{sup}}$ is the heat supplied to the heliopyrolysis reactor, kW.

The energy supplied to the heliopyrolysis reactor is determined as

$Q_{\mathrm{sup}} = Q_{\mathrm{sol.con}} + Q_{\mathrm{sol.reac}}$.

The energy generated by the solar radiation entering the aperture of the parabolic solar concentrator is

$Q_{\mathrm{sol.con}} = I_{\mathrm{irrad}}\, A_{\mathrm{sol.con}}\, \eta_{\mathrm{opt}}$,

where $I_{\mathrm{irrad}}$ is the solar radiation falling on the surface of the concentrator, W/m².

The aperture area of the parabolic solar concentrator is

$A_{\mathrm{sol.con}} = \pi D^{2}/4$.

The optical efficiency coefficient of the concentrator is

$\eta_{\mathrm{opt}} = R_{g}\, \tau_{g}\, \cos\theta$,

where $R_{g}$ is the reflection coefficient of the concentrator glass surface, $\tau_{g}$ is the light absorption coefficient of the concentrator glass surface, and $\theta$ is the angle of incidence of sunlight on the surface of the concentrator.

The solar energy falling on the side surfaces of the heliopyrolysis reactor is

$Q_{\mathrm{sol.reac}} = I_{\mathrm{irrad}}\, F_{\mathrm{reac}}$,

where $F_{\mathrm{reac}}$ is the area of the side surfaces of the reactor, m².

The amount of heat needed to raise the biomass loaded into the heliopyrolysis reactor to the temperature of the pyrolysis process is

$Q_{\mathrm{bio}} = m_{b}\, c_{b}\,(t_{\mathrm{pyr}} - t_{b})$,

where $m_{b}$ and $c_{b}$ are the mass and specific heat of the biomass, $t_{\mathrm{pyr}}$ is the temperature of the pyrolysis process, ℃, and $t_{b}$ is the biomass temperature, ℃.

The heat lost from the external side surfaces of the heliopyrolysis reactor is

$Q_{\mathrm{loss}} = K\, F_{\mathrm{reac}}\,(t_{r} - t_{\mathrm{amb}})\,\tau$,

where $t_{r}$ is the reactor surface temperature, ℃, $t_{\mathrm{amb}}$ is the ambient temperature, ℃, and $\tau$ is the heat exchange time.

The heat transfer coefficient for the heliopyrolysis reactor is calculated as the sum of the radiative and convective components,

$K = \alpha_{r} + \alpha_{c}$.

The heat lost from the glass surface by radiation is

$Q_{r} = \varepsilon\, C_{0}\, F_{\mathrm{reac}}\left[\left(\tfrac{T_{r}}{100}\right)^{4} - \left(\tfrac{T_{\mathrm{amb}}}{100}\right)^{4}\right]$,

where $\varepsilon$ is the emissivity of the surface and $C_{0} = 5.67$ W/(m²·K⁴), and the heat lost from the reactor to the air by convection is

$Q_{c} = \alpha_{c}\, F_{\mathrm{reac}}\,(t_{r} - t_{\mathrm{amb}})$.

It is important to determine the Nusselt number for the heat exchange between the reactor wall and the environment; it is calculated as [20]

$\mathrm{Nu} = c\,\mathrm{Re}^{n}$, with $\alpha_{c} = \mathrm{Nu}\,\lambda / d$,

where $\lambda$ is the thermal conductivity of air and $d$ is the characteristic dimension of the reactor. If the wind speed is taken into account in the heat exchange process, the Reynolds number must be determined:

$\mathrm{Re} = v\, d / \nu$,

where $v$ is the wind speed and $\nu$ is the kinematic viscosity of air.

The efficiency of the heliopyrolysis device is determined as

$\eta = Q_{\mathrm{bio}} / Q_{\mathrm{sup}}$.

Combining the supplied-energy, biomass-heating and heat-loss relations above in the heat balance gives

$m_{b} c_{b}\, \dfrac{dt_{r}}{d\tau} = Q_{\mathrm{sol.con}} + Q_{\mathrm{sol.reac}} - K F_{\mathrm{reac}}\,(t_{r} - t_{\mathrm{amb}})$,

from which the time dependence of the temperature formed during biomass pyrolysis is derived:

$t_{r}(\tau) = t_{\mathrm{amb}} + \dfrac{Q_{\mathrm{sup}}}{K F_{\mathrm{reac}}}\left[1 - \exp\!\left(-\dfrac{K F_{\mathrm{reac}}}{m_{b} c_{b}}\,\tau\right)\right]$.

Table 1. Paraboloid solar concentrator parameters (parameter, designation, value, unit), including the concentrator diameter.
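To make the balance above easier to use, the sketch below integrates the lumped heat balance numerically for a reactor heated through the concentrator. Every parameter value (concentrator diameter, optical efficiency, loss coefficient, biomass mass and specific heat) is an illustrative assumption rather than a value from Table 1, and the loss term is simplified to a single constant heat transfer coefficient.

```python
# Minimal lumped-parameter sketch of the reactor heat balance
# (all numbers are illustrative assumptions, not the paper's Table 1 values).
import math

def reactor_temperature(I_irrad=800.0,   # W/m^2, solar radiation on the concentrator
                        D=2.0,           # m, concentrator diameter -> A = pi*D^2/4 ~ 3.1 m^2
                        eta_opt=0.7,     # optical efficiency coefficient
                        F_reac=0.3,      # m^2, reactor side surface (assumed)
                        K=15.0,          # W/(m^2*K), combined loss coefficient (assumed)
                        m_b=2.0,         # kg of biomass
                        c_b=1600.0,      # J/(kg*K), specific heat of biomass (assumed)
                        t_amb=30.0,      # deg C, ambient temperature
                        hours=1.0,
                        dt=1.0):         # s, time step
    """Integrate m_b*c_b*dT/dtau = Q_sup - K*F_reac*(T - T_amb) with forward Euler."""
    A = math.pi * D**2 / 4.0
    q_sup = I_irrad * A * eta_opt + I_irrad * F_reac   # W, concentrator + direct gain
    t_r = t_amb
    for _ in range(int(hours * 3600 / dt)):
        t_r += (q_sup - K * F_reac * (t_r - t_amb)) / (m_b * c_b) * dt
    return t_r

for I in (600, 750, 900):
    print(f"I = {I} W/m^2 -> reactor temperature after 1 h: {reactor_temperature(I_irrad=I):.0f} deg C")
```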
Results and Discussion

The heliopyrolysis method of biomass processing has a number of advantages over other methods: it is energy efficient and environmentally friendly. The intensity of sunlight depends on the climatic conditions of the region and the position of the sun, so uniform heating of the surface of the heliopyrolysis reactor can be ensured using stationary solar concentrators. The latitude angle for the city of Karshi is 39°. According to meteorological measurements in the city of Karshi, the amount of solar energy falling on 1 m² of surface is 600-900 W/m². Measurements were made during the sunny hours of the day (08:00-17:00). The calculation results are presented in Table 5. The solar resource data were taken from the GLOBAL SOLAR ATLAS site as monthly averages of the annual solar radiation over a 24-hour interval. The maximum average direct solar energy was 5.563 kW in August, and the minimum average value was 2.517 kW in December [21].

Conclusion

The efficiency of biofuel production varies mainly with the operating temperature of the pyrolysis reactor, the type of biomass and the duration of operation. The experiments carried out in the heliopyrolysis device showed that, with an optical efficiency coefficient of 0.7 and a parabolic solar concentrator with an aperture surface of 3 m², the internal temperature of the reactor could be raised to 400 ℃ for the pyrolysis of 2 kg of biomass with a moisture content of 20%. As a result, 0.9 kW of energy was generated from the biomass, which can be used as alternative fuel. By using solar concentrators in biomass pyrolysis, the energy required for the process was fully covered by the sun. The results obtained in the experiments can be used in the design and calculation of heliopyrolysis devices.

Fig. 3. Values of solar radiation in different months for the city of Karshi.
Fig. 5. Graph of dependence of the heliopyrolysis process on time and solar radiation.
Fig. 6. A solar concentrator modeled using the SOLTRACE program and its graphs at different values.
2,574.6
2023-01-01T00:00:00.000
[ "Engineering", "Environmental Science", "Physics" ]
An Advanced Private Social Activity Invitation Framework with Friendship Protection Due to the popularity of social networks and human-carried/human-affiliated devices with sensing abilities, like smartphones and smart wearable devices, a novel applicationwas necessitated recently to organize group activities by learning historical data gathered from smart devices and choosing invitees carefully based on their personal interests. We proposed a private and efficient social activity invitation framework. Our main contributions are (1) defining a novel friendship to reduce the communication/update cost within the social network and enhance the privacy guarantee at the same time; (2) designing a strong privacy-preserving algorithm for graph publication, which addresses an open concern proposed recently; (3) presenting an efficient invitee-selection algorithm, which outperforms the existing ones. Our simulation results show that the proposed framework has good performance. In our framework, the server is assumed to be untrustworthy but can nonetheless help users organize group activities intelligently and efficiently. Moreover, the new definition of the friendship allows the social network to be described by a directed graph. To the best of our knowledge, it is the first work to publish a directed graph in a differentially private manner with an untrustworthy server. Introduction Nowadays, social networks are pervading our lives in nearly every possible form and corner [1][2][3][4][5][6][7], as people use them to connect, interact, and share with their peers.In particular, the ubiquity of smart phones and various social network applications have made the global social network flourish over recent years.One common and critical service provided by social networks is organizing group activities.Unfortunately, most social networks offer only rudimentary invitation mechanisms, which send invitations either oneby-one manually or to everyone automatically.Besides, most group activities are filled strictly with a first-come, firstserved manner.These services are ill-suited for frequent, small ad hoc events such as outdoor activities: inviting every possible candidate increases the likelihood of a group where few people know anybody else except for the host; however, it is tedious to manually search for a well-acquainted social group that performs the same kinds of exercise, at the same time and place [8].From the invitees' perspective, they might be overwhelmed by a plethora of different activity invitations that they are not willing to attend since invitations are typically sent out without considering the real interest, ability, and social habit of each invitee. 
The popularity of human-carried/human-affiliated devices with sensing abilities, like smartphones and smart wearable devices, has opened up a large resource for sensory data, which has necessitated many novel sophisticated applications.For example, smart watches are usually equipped with an array of different sensors such as compasses, proximity sensors, accelerometers, gyroscopes, altimeters, barometers, and GPS [9].These can be used to collect various data such as location, route, distance, pace/speed, duration, and elevation changes for different activities attended by the owner.By analyzing these personal data with state-of-the-art mining or learning algorithms, the habits of the device owners, including their preferred activities, schedule, and location, can be easily derived.This habit information can, in turn, be used to help the owners find group activities appropriate for them. Based on this observation, Ai et al. [10] first proposed an efficient and personalized group activity organizing framework by learning historical data gathered from smart devices and choosing invitees carefully for an activity.However, they did not consider the risk of the privacy leakage of participants' sensitive information, such as habits, age, and gender.Later, Tong et al. [8] designed a private group activity organizing framework and proposed the adoption of differential privacy to secure participants' personal information.Tong et al. [8] considered a practical scenario, where three parties, including an untrustworthy activity organizer app, current app users, and potential users, are involved.After registering on this app, users can either organize activities by submitting a request to the server or receive invitations from the app server.Users have the capability of adding each other as friends.In order to receive more interesting invitations, app users need to divulge personal information such as age, gender, locational preferences, and historical data from their wearable devices.In particular, Tong et al. [8] assumed that the activity organizer app is untrustworthy, mainly due to the reason that the app developers are motivated by advertising revenue therefore attempting to attract more users by releasing some useful information about current users.The main contribution of Tong et al. 's work [8] is to protect existing users' privacy while satisfying all three parties involved.The primary drawback, however, is that it allows the entire social network to be released to the public after naive sanitization approaches like removing user IDs.This may leave users open to privacy risks, especially reidentification attacks [11,12]. In our work, based on the same three-party scenario assumption by Tong et al. [8], we designed a new group activity organizing framework with a stronger privacy guarantee and a more efficient invitee-selection algorithm.More precisely, our contributions in this research can be summarized as follows. (1) A novel definition of friendship: we introduced a more flexible definition of the friendship between a pair of users, which asks user "Who do you like doing activities with?" 
instead of "Who is your friend?"In previous works [8,10], the friendship is defined mutual.However, a person could enjoy doing activities with another person without having the other person reciprocate the same feeling.This makes sense for the event invitation framework since its purpose is not to keep track of actual mutual friendships, but which users enjoy doing activities with whom.A more accurate term for this relationship would be "preferred friend" or "directed friend." Such "directed" friendship notion brings several benefits.First of all, such friendships can be described easily by a directed graph = (, ), where the vertex set represents the user set and the arc set shows the corresponding directed friendships.That is, if V likes doing activities with , then (V, ) ∈ . Second, there is no need for other users to accept a friendship request, meaning two users do not have to directly communicate or have mutual agreement on friendship.This relieves the workload of updating the social network.Last but not least, since friendships are not bidirectional, having one user's report does not compromise information about the remaining users.In other words, such friendships enhance the privacy protection for the users. (2) Stronger privacy guarantee: we added an efficient algorithm to make the graph satisfy a strong privacy guarantee, differential privacy, and thus allow the app server to release the underlying graph of the entire social network without jeopardizing users' privacy.Differential privacy requires no computational/informational assumptions about attackers, data type-agnosticity, composability, and so on [13]. Since the app server is untrustworthy, we need to hide structure information before it is uploaded to the server.We applied the Randomized Response Technique (RRT) [14] to all vertices (or users).That is, each user's friendships will be perturbed before being reported to the server.For example, a user will report the true (fake, resp.)relationship with a probability 1 − (, resp.),where the parameter ∈ (0, 1] is usually a small number.Such a randomized response strategy ensures the existence of connection from one user to the other to be hidden in the output graph while keeping the low distortion of the graph and preserving the most useful information about the graph.To the best of our knowledge, this is the pioneer work that this technique is applied in a directed graph under the existence of an untrusted server. (3) A more efficient invitation sending mechanism: in order to select appropriate candidates as invitees, Ai et al. [10] proposed a greedy algorithm, k-core, based on the k-core (undirected) graph theory.Our work designed a novel greedy algorithm, named as advanced k-core (Adv-k-core), to improve the kcore algorithm.The k-core algorithm starts with the original graph, sets = 1, and then iteratively deletes all vertices with a degree less than in the current graph. gradually increases and the algorithm terminates when the size of the remaining graph reaches a lower bound.Our Adv-k-core deletes vertices more carefully by assigning higher priority to the vertex with the least impact on other vertices. 
(4) Experimental validation: in order to evaluate the performance of our activity invitation framework, we simulated an outdoor activity invitation system, where at most 1,000 users are created with different profiles, including age, gender, free time schedules, activity types, activity levels, and locational ranges.Then, at most 5,000 different activity events are generated, each of which requires a specific age range, time range, activity type, activity level, and location.Our experiments show that the privacy-preserving algorithm protects the structure of the social network effectively and the Adv-k-core algorithm improves the original k-core algorithm extensively. The rest of the paper is organized as follows.Section 2 reviews related works; the proposed framework is introduced in Section 3; Section 4 shows the simulation results; and Section 5 concludes our paper. Related Work Organizing group activities via social media, such as Facebook, Twitter, Plancast, Meetup, Yahoo!Upcoming, and Eventbrite, are quite popular in the era of "Internet of Everything."However, most of these social media offer only rudimentary functions for organizing group activities [8].Take Facebook as an example; it allows users to create public or private events, but the organizer can only choose to send invitations one-by-one or to everyone. There is plenty of research in the literature on social networks; the following two are the ones most related to our work.Ai et al. [10] first made the proposal to design the social event invitation framework based on historical data of smart devices.They also presented two greedy invitationdisseminating algorithms.Their framework, however, is impractical as it assumes the existence of a trusted and altruistic server.Besides, few privacy protection approaches were applied to guarantee the security or confidentiality of users' personal information.Recently, Tong et al. [8] considered a more realistic scenario in which the server is selfish and possibly untrustworthy.They concentrated more on the privacy issue such that existing users will be sufficiently protected while satisfying all involved parties simultaneously.Nevertheless, Tong et al. [8] only protected personal data such as age, gender, free time schedules, activity types, activity levels, and locational ranges, while leaving the underlying graph structure of the entire social network open to privacy risks, especially reidentification attacks [11,12]. Differential privacy [14][15][16][17][18] is a strictly provable and security-controlled privacy model to provide a very strong privacy guarantee.It can quantify the extent to which individuals' privacy in a data set is preserved, while maintaining the usefulness of the data set.Differential privacy has proven to be extremely successful since its inception.The most popular differential privacy mechanisms include the Laplace mechanism [14], exponential mechanism [19], geometric mechanism [20], and Gaussian mechanism [17,21]. 
The problem of graph publication under differential privacy has been well investigated.Generally speaking, there are two main techniques: direct publication and modelbased publication.By direct publication, the output graph is constructed by directly adding noise to each edge or vertex, followed by a postprocessing step (probably a rounding step).For example, given an undirected graph and assuming edges are independent, adding Laplace noise to each cell of the adjacency matrix and then rounding each cell to 1's or 0's is a trivial Laplace mechanism to preserve the privacy.However, such an approach may severely deteriorate the graph structure.Recently, there are two differential privacy algorithms, TmF [22] and EdgeFlip [23], in this category for undirected graph publication.The algorithms for modelbased publication inject noise to some intermediary quantities or structures, such as graph spectral, instead of directly to the original graph.The output graph will be regenerated from these noisy intermediary structures.Popular algorithms in this category include 1K-series, 2K-series [24,25], Kronecker graph model [13], graph spectral analysis [26], DER [27], HRG-MCMC [28], and ERGM [29].Most existing privacypreserving algorithms for graph publication assume the graph is undirected and published by a trusted and altruistic server. Privacy-Enhanced Activity Invitation Framework In this section, we introduce our novel privacy-enhanced activity invitation framework (refer to Figure 1).Following As introduced, we make a realistic assumption that the server is untrustworthy, given that it is motivated by advertising to its existing users and gaining profits.In order to bolster its income, the server will strive to provide quality services to maintain current members and also try to entice new users by releasing some statistical information about current users and providing online querying services.As a result, existing users or new registers may have trouble deciding whether to report their personal information honestly, including age, gender, and "Who I like doing activities with."On the one hand, the server will definitely learn users' habits more accurately if users could provide candid information, which in turn leads to better services.On the other hand, users should be worried by the possibility of having their personal information leaked. 
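As a point of reference for the direct-publication approach summarized above (adding Laplace noise to every cell of the adjacency matrix and then rounding), a toy sketch is given below. The function name and the 0.5 rounding threshold are illustrative assumptions, not taken from the cited works; our framework instead perturbs each user's report on the client side, as described next.

```python
import numpy as np

def direct_publication_laplace(adj: np.ndarray, epsilon: float) -> np.ndarray:
    """Toy direct-publication baseline for an adjacency matrix.

    Each cell is treated as an independent 0/1 entry with sensitivity 1,
    Laplace noise of scale 1/epsilon is added, and the result is rounded
    back to {0, 1}. This illustrates why naive noise injection can
    severely distort the graph structure.
    """
    noisy = adj + np.random.laplace(loc=0.0, scale=1.0 / epsilon, size=adj.shape)
    return (noisy >= 0.5).astype(int)  # post-processing / rounding step

# Illustrative usage on a tiny directed graph of 4 users.
adjacency = np.array([[0, 1, 0, 0],
                      [0, 0, 1, 1],
                      [1, 0, 0, 0],
                      [0, 0, 1, 0]])
published = direct_publication_laplace(adjacency, epsilon=0.5)
```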
The following shows how our design works in detail.Once a person registers on the app, the server will create and maintain a profile for him/her until he/she wants to destroy the account.If the user is a smart wearable device owner, the front-end app will seek authorization to access his/her historical data which contains records pertaining to activities.Otherwise, users need to fill their own profiles manually based on their understanding and estimation of their abilities.Whenever a user needs to update or report his/her personal information to the server, the front-end, user-side app will automatically obfuscate the given personal information before being transferred to the server so that the information is protected by differential privacy.If a user wants to organize an activity, a request will be first sent to the server.Then the server will analyze users' historical data and estimate the users' abilities or levels for each type of activity; the routine times they are free; and a locational range, indicating the rough area in which he/she is willing or able to travel in order to participate in the activity.Based on the above estimated habits about existing users, the server will disseminate the invitations to appropriate candidates via the Adv-k-core algorithm such that all of the invitees meet the group activity requirements and have a high chance to attend the activities.Since any privacy-preserving algorithm that satisfies differential privacy will protect the individual's information regardless of the adversary's background information [13], the server can release the statistical information about the current users safely to the public.Our framework does not need to keep track of actual mutual friendships, but which users enjoy doing activities with whom.To depict such relationship among the users, we first define the concept of directed friendship and then use a directed graph to simulate the entire social network. Definition 1 (directed friendship).For any two users A and B, if A likes attending activities together with B, one says B is A's directed friend. While the traditional friendship is a symmetric relation, our definition implies an asymmetric relation between users.Let = (, ) represent the underlying directed graph, where a vertex V ∈ denotes a user.An arc from V to means that user V likes attending activities together with .Such a definition allows each user to update his/her neighbors independently, which not only reduces workload but also enhances the privacy guarantee for the users. Graph Publication via Differential Privacy 3.1.1.Preliminary.Differential privacy [14,16,17] is a privacy model that offers strong privacy guarantees under the assumption of a powerful adversary.In particular, the adversary could have nearly unlimited background knowledge.The model works by injecting artificial noise to the disclosed data set such that no one can tell whether an entry in the data set has been changed or not.On the other hand, differential privacy guarantees the released information is still useful.Formally, given two datasets where only one entry is altered, the probability distribution of the outputs for a statistical analysis of one data set should be nearly identical to the distribution of the other's. 
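The informal statement above can be written out explicitly. The display below is the standard form of the condition that Definition 4 and inequality (2) in the next subsection express; it is reproduced here because the inequality itself does not survive legibly in the source.

```latex
% epsilon-differential privacy: for every pair of data sets x, x' that
% differ in a single entry, and every subset S of Range(M),
\Pr\left[\mathcal{M}(\mathbf{x}) \in S\right]
   \;\le\; e^{\epsilon} \cdot \Pr\left[\mathcal{M}(\mathbf{x}') \in S\right].
```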
Let x ∈ X and x ∈ X be two data sets.The distance between the two datasets, denoted as (x, x ), is the minimum number of sample changes that are required to change x into x .If (x, x ) = 1, that is, if x and x differ by at most one entry, then we say that x and x are neighbors. Definition 2 (edge-neighboring graphs).One says two directed graphs 1 = ( 1 , 1 ) and Definition 3 (vertex-neighboring graphs).One says two directed graphs Here (V) denotes the set of incident incoming and outgoing arcs on V. A query is a function whose domain is the collection of data sets.The output of the query is usually denoted as (x).The global sensitivity Δ of the given query is defined as where ‖ ⋅ ‖ is a norm function.Our proposed framework is trying to hide the true friendship information for each user against queries like "how many neighbors does a user have?"It is not difficult to check that the sensitivity is 1 under the edge-neighboring notion and at most − 1 in the worst case under the vertex-neighboring notion.We adopt the edgeneighboring notion in our work for the sake of low sensitivity. Definition 4 (-differential privacy [14,30]).A mechanism or randomized function M : X → R provides -differential privacy if and only if for all pairs of neighboring data sets x and x , and all subset ⊂ Range(M), it holds that The parameter , deemed privacy budget, controls the level of privacy.Usually, the value of is small; say ∈ (0, 1].Intuitively speaking, the parameter gives the upper bound on the output difference when the mechanism is applied to a data set and any one of its neighbors.From inequality (2), Pr[M(x) ∈ ] and Pr[M(x ) ∈ ] become closer when decreases, implying more effort to distinguish the neighboring data sets and therefore indicating a stronger privacy guarantee. The Laplacian mechanism [17] and exponential mechanism [19] are two of the most popular -differentially private mechanisms.Generally speaking, the Laplace mechanism is typically used when the output is numerical, whereas the exponential mechanism is applied to nonnumerical outputs.In particular, the exponential mechanism is more suited for situations where we need to select the "optimal" response but adding noise directly to (x) can completely destroy its value. Definition 6 (exponential mechanism [19]).The exponential mechanism M selects and outputs an element ∈ Range() with probability proportional to exp ((x, )/2Δ), where : is a utility function that maps data set/output pairs to utility scores, and the sensitivity of is defined as (x, ) − (x , ) . (5) Our Differential Privacy Mechanism. There are two main types of noise injection strategies: output perturbation and input perturbation.Namely, the -differentially private mechanisms are usually designed by either perturbing the output of the query or adding noise to the input data set.Obviously, the output perturbation requires a trusted server to hold the authentic data sets while the input perturbation is more flexible as the data can be perturbed before being transferred to the server.Our framework assumes an untrustworthy server, and therefore an input perturbation strategy will be adopted. 
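Since an input-perturbation strategy is adopted, a minimal sketch of randomized response applied to one user's out-going arcs is given below. It anticipates the Pert mechanism introduced in the next subsection; the function name is ours, and the mapping p = 1/(1 + e^ε) is one standard choice for the flip probability (it makes the ratio (1 − p)/p equal to e^ε), stated here as an assumption rather than as the paper's exact parameterization.

```python
import math
import random

def perturb_arcs(user: str, all_users: list, directed_friends: set,
                 epsilon: float) -> set:
    """Randomized-response perturbation of one user's out-going arcs.

    Every potential arc (user, u) keeps its true value with probability
    1 - p and is flipped with probability p. Choosing
    p = 1 / (1 + exp(epsilon)) gives (1 - p) / p = exp(epsilon), which is
    sufficient for epsilon-differential privacy on a single arc.
    """
    p = 1.0 / (1.0 + math.exp(epsilon))
    reported = set()
    for u in all_users:
        if u == user:
            continue
        has_arc = u in directed_friends
        if random.random() < p:       # flip this arc's bit
            has_arc = not has_arc
        if has_arc:
            reported.add(u)
    return reported

# Illustrative usage: Alice reports a noisy view of her directed friends.
noisy_friends = perturb_arcs("alice", ["alice", "bob", "carol", "dave"],
                             {"bob"}, epsilon=0.5)
```

Because each user perturbs and reports only his/her own arcs, the server never sees the authentic friendship information, which is exactly the setting required by the untrusted-server assumption.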
Both the Laplacian and exponential mechanisms mentioned in Section 3.1.1can be modified to perturb the input rather than output.These two mechanisms can be applied to obfuscate different types of users' raw data, such as age, activity types, or activity ranges [8].Since our work concentrates on the protection of users' friendships, we add a novel privacy-preserving mechanism, named as Pert, in the random response manner (refer to Algorithm 1).More precisely, each user reports his/her real friendship information with a probability 1 − , where ∈ (0, 1].The larger is, the more arcs in the graph are randomized. Theorem 7. Our graph perturbation algorithm Pert guarantees 𝜖-differential privacy. Proof.Suppose 1 = (, 1 ) and 2 = (, 2 ) are two edge-neighboring graphs.Assume 2 = 1 ∪ {(V, )}.Let 1 and 2 represent the perturbed version of 1 and 2 , respectively.Note that represents the same set of users in both graphs.The probability that two edge-neighboring graphs are perturbed to the same graph is determined by the value assigned to the differing arc (V, ).According to the algorithm Pert, an arc in the input graph maintains its original value with a probability 1 − and flips its value with a probability .For any , depending on whether (V, ) ∈ , we have where the last inequality is due to the value of = /(1 + ).This proves the theorem according to the definition of differential privacy. (i) If all M are defined on the same data set, then According to the Composition Theorem, combining several differentially private algorithms results in a new differentially private algorithm at a cost of linearly increasing privacy budget in the worst case.For each user, his/her profile can be described by a tuple, where each dimension represents one type of data.Injecting noises to different data field with different -differentially private algorithms M 1 , . . ., M , (max{ })-differential privacy will be guaranteed if data fields are independent; otherwise, (∑ =1 )-differential privacy will be guaranteed. Improved 𝑘-Core Algorithm.The server's main job is to select invitees to meet the request of organizing a group activity from some user.Following Ai et al. [10] and Tong et al. [8], we assume that having friends attend an activity A d d(V, ) to with probability (7) return the resultant graph = (, ) Algorithm 1: Graph perturbation algorithm Pert. Input: A directed -core graph and a group size Output: P i c kt h eV such that after its deletion (6) resulting in miminum number of vertices (7) w i t hd e g r e e≤ D e l e t eV ( 9) else (10) = + 1 (11) return the remaining list Algorithm 2: Improved -core algorithm Adv-k-core. will improve participants' overall experience.Therefore, the server needs to ensure that a number of friends will also be invited for each invitee. We adopt the concept of -core graph to simulate a qualified social network where each user has at least friends.Suppose is a subgraph of such that users in satisfy all the requirements for an activity.Let () and () denote the vertex and arc set of , respectively.We say is a -core graph if each vertex V ∈ () has at least directed friends.Let + (V) = { | ∃(V, ) ∈ ()} be the set of neighbors of V in graph , and let | + (V)| denote its cardinality, or the degree of V in .Suppose a group activity has a limited capacity , and is the statistical response rate for similar past activities.The task then becomes choosing + / invitees such that each person also has friends invited. Ai et al. 
[10] presented a greedy invitee-selection algorithm, k-core.The k-core algorithm starts with the original graph and sets = 1; and then it iteratively deletes all vertices with a degree less than in the current graph.When deleting the vertices, the highest priority will be assigned to the vertex with the minimum degree.As gradually increases, the algorithm terminates when the size of the remaining graph is ( + /).We propose an improved k-core algorithm, denoted as Adv-k-core (refer to Algorithm 2).Adv-k-core works very similar to k-core with the exception of the vertex deletion step.We scan through the whole graph and find the vertex with the least impact on other vertices, in respect to the number of vertices with degree less than current by the deletion. Experiments Two experiments were designed to evaluate the performance of our activity invitation framework.In these experiments, an outdoor activity invitation system is simulated, where at most 1000 users are created with different profiles, including friendships, age, gender, free time schedules, activity types, activity levels, and locational ranges.Then, at most 5,000 different activity events are generated, each of which requires a specific age range, time range, activity type, activity level, and location.As previously mentioned, each participant must satisfy all of the event's requirements.A random response rate ∈ [0.6, 1) is generated uniformly for each user in advance.When a user receives an invitation, another random number re ∈ [0, 1) is generated.If re < , he/she accepts the invitation; otherwise, there will be no response.All experiments were implemented with Java and conducted under OS X EL Capitan with processor, 3.5 GHz Intel Core i5, and memory, 16 GB 1600 MHz DDR3. Experiment 1. As shown in Section 3.1, users' sensitive information has been theoretically secured by our differential privacy algorithms.In particular, the graph structure of the social network can be protected by the algorithm Pert.Since the algorithm Pert hides users' friendship by perturbing the arcs, the graph structure can be changed, which might affect users' usage experience.For example, suppose a user originally has 5 friends in the social network and the number may decrease to 0 after the Pert algorithm is applied, which excludes this user from the invitee pool. Our first experiment is to investigate whether existing users will receive worse services if they report noisy friendships to the server.We define the utility for each existing user as the ratio of accepted invitations in the original graph to the number of accepted invitations in the perturbed graph.Denote this ratio by .That is, The quantity 0 = | − 1| tending to 0 indicates that our framework could still provide qualified servers to existing users despite users reporting noisy information to the server. For simplicity, we still name 0 as the utility. 
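Stepping back to the invitee-selection step for a moment, the Adv-k-core rule described above can be sketched as follows. Out-degree plays the role of the number of directed friends; the stopping size and the exact tie-breaking are assumptions on our part, since Algorithm 2 is only partially legible in the source, and target_size would in practice be derived from the activity capacity and the historical response rate.

```python
import networkx as nx

def adv_k_core(g: nx.DiGraph, target_size: int) -> list:
    """Greedy invitee selection in the spirit of Adv-k-core.

    Starting from k = 1, vertices whose out-degree falls below k are
    removed one at a time; among the removable vertices we delete the
    one whose removal leaves the fewest vertices with out-degree <= k
    (least impact on the others). k grows when no vertex is removable,
    and the loop stops once only target_size vertices remain.
    """
    h = g.copy()
    k = 1
    while h.number_of_nodes() > target_size:
        removable = [v for v in h.nodes if h.out_degree(v) < k]
        if not removable:
            k += 1
            continue

        def impact(v):
            # Vertices left with out-degree <= k after deleting v.
            t = h.copy()
            t.remove_node(v)
            return sum(1 for u in t.nodes if t.out_degree(u) <= k)

        h.remove_node(min(removable, key=impact))
    return list(h.nodes)

# Illustrative usage on a small directed friendship graph.
g = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"), ("d", "a"), ("a", "d")])
invitees = adv_k_core(g, target_size=3)
```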
In this experiment, we set privacy budget ∈ {0.05 : 0.05 : 1}, where the notation {ℓ : Δ : } denotes an arithmetic sequence of numbers with lower bound ℓ, upper bound , and constant difference Δ between the consecutive terms.For example, {1 : 2 : 10} = {1, 3,5, 7,9}.To figure out how the utility 0 behaves as the privacy budget varies, the average utility 0 was calculated for each privacy budget .Additionally, we tested 4 scenarios aiming at investigating the scalability of the algorithm Pert.More precisely, we revoked the Pert algorithm to inject noises to the outdoor activity invitation systems with the following settings: (ii) 200 users and 5000 invitations; (iii) 500 users and 1000 invitations; (iv) 500 users and 5000 invitations. To calculate the average utility 0 , our Adv-k-core algorithm was used to select invitees.The results for the size-200 and size-500 outdoor activity invitation systems are shown in Figures 2 and 3, respectively. Figures 2 and 3 show the average utility 0 is relatively small, that is, 0 ≤ 0.02, in most cases.This demonstrates the service quality for existing users is not jeopardized severely even if they report noisy friendships to the server.Besides, the experiment results show the excellent scalability of our Pert algorithm.As the privacy budget increases, the privacy guarantee becomes weaker according to the definition of differential privacy, resulting in better services received by the existing users.Consequently, 0 should decrease towards 0 along the axis, which is verified by Figures 2 and 3. Actually, when = 0.35, our Pert algorithm has already achieved a satisfying utility. Experiment 2. Our second experiment is to study the efficiency of our invitation-selection algorithm Adv-k-core.Suppose 1 and 1 are the value of and the number of remaining arcs after the algorithm k-core terminates. Similarly, let 2 and 2 be the value of and the number of remaining arcs after the algorithm Adv-k-core stops.Then define two measures The smaller values of and mean more average neighbors in the resulting graph after the application of Adv-k-core, compared with the one obtained by the employment of kcore.Therefore, they further indicate a closer related invitee pool, which implies invitees have higher chance to accept the invitation. To study how the values of and change as the graph size changes, we applied both algorithms Adv-k-core and kcore in graphs with multiple sizes.For each size, a number of graphs of the same size were generated and then the average values of and were obtained over these graphs, which was designed to show how stable our algorithm Adv-k-core could improve the algorithm k-core.In our experiment setting, let the set of graph sizes be {100 : 100 : 1000} and we calculated the average values of and over graphs, where ∈ {50, 100, 200, 500}.The results are shown in Figure 4. 
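A hypothetical driver for Experiment 1, in the spirit of the settings above, might look as follows. The `simulate` callable stands in for the full event simulation (user profiles, invitations, acceptances) and is not part of the paper; only the utility U0 = |r − 1| and the privacy-budget grid {0.05 : 0.05 : 1} come from the text.

```python
import random
import numpy as np

def utility(accepted_original: int, accepted_perturbed: int) -> float:
    """U0 = |r - 1|, with r the ratio of accepted invitations in the
    original graph to those in the perturbed graph; values near 0 mean
    the perturbation barely degrades the service quality."""
    r = accepted_original / max(accepted_perturbed, 1)
    return abs(r - 1.0)

def run_experiment(simulate, epsilons=np.arange(0.05, 1.0001, 0.05), repeats=20):
    """Average U0 for each privacy budget in the grid {0.05 : 0.05 : 1}."""
    results = {}
    for eps in epsilons:
        scores = [utility(*simulate(eps)) for _ in range(repeats)]
        results[round(float(eps), 2)] = float(np.mean(scores))
    return results

# Toy stand-in: smaller budgets (stronger privacy) lose a few more acceptances.
def toy_simulate(eps):
    base = 100
    return base, base - random.randint(0, max(0, int(5 * (1 - eps))))

average_utility = run_experiment(toy_simulate)
```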
From Figure 4, we can observe that 2 ≥ 1 and 2 ≥ 1 almost hold for all graph sizes in the experiment, which implies our algorithm Adv-k-core indeed produces a closer related invitee pool.Moreover, we find is always smaller than .This indicates the original k-core algorithm generates a less "consistent" adjacency in the sense that both high degree users and low degree users can be selected, which in turn results in a smaller value.In other words, in the resultant graph after the application of our Adv-kcore algorithm, the variance of the numbers of neighbors is relatively smaller.Besides, we can claim that our Adv-kcore algorithm improves the k-core algorithm steadily as the values of and are quite stable when the graph size increases. Conclusion This paper follows the recent works by Ai et al. [10] and Tong et al. [8].We presented a private and efficient social activity invitation framework where the server is assumed to be untrustworthy but can nonetheless help users organize group activities intelligently and efficiently.Our main contributions are (1) a novel definition of friendship to reduce the communication/update cost among the network while simultaneously enhancing data security and user confidence; (2) a strong privacy-preserving algorithm for graph publication, which addresses the concern proposed by Tong et al. [8]; (3) an efficient invitee-selection algorithm.Our simulation results show that our proposed framework has good performance.In our current research, we assumed each data field is independent with each other and queries from the adversary are also independent.In the future, we will consider more complicated queries and the correlation among data fields. Figure 2 :Figure 3 : Figure 2: Average utility 0 for existing users under Pert and Adv-k-core.Here, the outdoor activity invitation system involves 200 users.(a, b) show the average utility 0 after 1000 and 5000 activities created, respectively. Figure 4 : Figure 4: A description of how values of and change as the graph size changes.(a, b, c, d) were obtained by 50, 100, 200, and 500 repetitions, respectively.
6,949.6
2017-01-01T00:00:00.000
[ "Computer Science" ]
Pre-Service Teachers’ Academic Identity and their Lived Experiences in Remote Learning: The New Normal in Curriculum Practice The Ministry of Tertiary Institutions of South Africa charged post-secondary institutions to implement measures to achieve the government's social distancing policy. Institutions shifted to remote learning to sustain their core business of teaching and learning. However, there were concerns with the implementation of these measures. For instance, pre-service teachers were seen as ill-equipped and poorly supported during remote learning. This paper aims to contextualise the identity of pre-service economic and management science teachers and reflect on their experiences of curriculum practice during remote learning. Architecture theory was used as the main lens for this study. Furthermore, the goal is to reflect on their adaptation to remote learning as the new normal. Participants’ experiences and factors that affected them are discussed as data collected using the critical participatory action learning and action research (CPALAR) approach as a form of critical education science. Critical discourse analysis was used to arrive at the following broad findings: firstly, higher learning institutions are obligated to create practical learning experiences for pre-service teachers. Secondly, participants were directly affected academically, socially, and psychologically. This paper concludes with the recommendation that hybrid learning as the new normal is the future of teaching and learning and should be embraced. INTRODUCTION Across the globe, governments enacted a range of legislative measures for regulating institutions in the face of the COVID-19 pandemic. The measures included lockdown and social distancing protocols which various institutions were tasked to observe. 1 The shutdown of educational institutions seems to have resulted in the adoption of remote teaching and learning. Pre-service economic and management science teachers were confronted with an academic identity crisis as they were unprepared and Using electronic gadgets and media to deliver instructional content for remote learning provides access to information, ease of updating content, personalised instruction in curriculum practice, standardised content, and accountability. Remote learning increases access to information and supports and enhances teaching and learning. In agreement with the statement, Koutsouba, Koutsouba, and Gkiosos allude that remote learning involves a combination of pre-service teachers and teacher-educator communication; thus, remote learning is the application of processes to create, distribute, manage, and enable learning through access to an online network. 10 Tshelane discusses the range of research on academic work, especially as roles change to reflect a changing university sector that has taken place in recent years in response to the challenges of a changing environment and new roles facing academics. 11 Researchers are addressing the challenge of a changing environment and new roles by researching academic work and identity. 12 In addition to the teacher's "personal histories, rooted in educational experiences, professional experiences and cultural encounters," the teacher's identity is "subject to constant negotiation due to changing contextual elements, such as the classroom culture, instructional materials, and reactions from students and colleagues." 
13 The notion of academic identity is, therefore, one that embraces an individual's ideas regarding the kind of work they are interested in, their values and ambitions, as well as their commitments and affiliations. Academic identities are always constructed, negotiated, and actualised in everyday practices and institutional settings based on one's own experiences and those of others. In this sense, academic identity is not a static entity but is constantly reshaped and redefined by interaction, reflections and discussions with others, time, and changing contexts. For this reason, our approach to the academic identity of pre-service economic and management science teachers takes into account the prevailing cultural narratives of identity represented by pre-service EMS teachers in the context of their reflections and interactions about the curriculum practice that shaped their experiences during the transition and learning using remote learning systems. Remote learning is characterised in the South African context as an ICT-enhanced practice utilised by teacher educators at universities, initially with the availability of e-mail, online learner management systems, and free Wi-Fi in libraries and across campus to enable pre-service teachers to engage in a learning process. According to studies, technological interventions for teaching, learning, and evaluation help students develop their critical thinking skills while empowering teacher educators to effectively distribute and provide knowledge. The acquisition of knowledge in the curriculum enables the ease of journey for self-discovery and identity in the education space. Moreover, the remote is a flexible or lively process rather than a static one. Tulaskar and Turunen deduce that remote learning may be defined as a method used to establish teaching and learning processes via the use of the internet and information technology devices. 14 This implies that remote learning will also improve with technological developments over time. The use of remote learning in curriculum practice is one of the reforms in the South African package that caught pre-service economic and management science teachers unprepared and unequipped to learn remotely. Additionally, approximately 65 per cent of students at the institutions of higher learning said that they had learnt less during the lockdown because of the transition from classrooms to online learning. 15 Despite their efforts to continue studying and training, about half believed their studies would be delayed and 9 per cent thought they would fail. Remote learning has been poorly implemented in higher education for various reasons, including the quality of teaching, the availability of support services, equipment and infrastructure, and the creation of a conducive study environment. 16 This study similarly intended to capture the discussions by pre-service economic and management science teachers on their lived experiences in curriculum practice and how these experiences contextualised the search to reshape their identity that will embrace hybrid teaching and learning as the future of instructional delivery and the new normal. They may need to reshape their identities to align with their teaching and learning contexts rather than create new identities. THEORETICAL FRAMEWORK The study is underpinned by architecture theory. 
The term theory of architecture was originally simply an accepted translation of the Latin term "ratiocinatio" used by Vitruvius to differentiate intellectual from practical knowledge in architectural education. 17 Such reasoned judgements are an essential part of the creative architectural process. In addition, Vitruvius suggests that building can be designed only by a continuous creative, intellectual dialectic between imagination and reason in the mind of each creator. This theory analyses the origins and development of architectural form, style, ideologies, movements, and architects throughout history. 18 Vitruvius believed that architecture depends on several principles, each with its own weight and value on the projects. 19 Thus the first one is ordered. He considered that a building is made out of small parts or modules and the selection of these modules will give the building order and proper form. The second is the arrangement. To achieve a good border in arrangement, Vitruvius asked the architects to rely on careful thinking, pay attention to all the details and use innovation to solve each problem. The third and fourth principles are harmony and symmetry. These two principles are connected, and each moves the project towards beauty. A beautiful design should have its height suited to its width and length. Symmetry is the relational beauty between each element of the design. This concept inspired "Vitruvian Man" by Leonardo Davinci. 20 Explanation of symmetry is embodied in the use of the human body analogy, where harmony and symmetry are found between the forearm, foot, palm finger and all the other body parts in the same concept applies to the perfect building. 21 Architectural theory is based on the present and the future and how the future is planned, built or organised. The theory of architecture is relevant to this study because of the advocacy that architecture cannot be taught and that people can only guide people during the process using intellectual dialectic. 22 The study adopts the Vitruvius approach to principles of teaching and learning, which allows the pre-service economic and management science teachers to deepen the discussions as they reflect on their lived experiences with the aim of reshaping their identities to align with the new normal. Reflecting on lived experiences further revealed how pre-service economic and management science teachers' experiences continue a path that will enhance their motivation to learn remotely and sustain teaching and learning in the future. RESEARCH METHODOLOGY The study used a qualitative research methodology to address the research objectives. It is also part of an ongoing study that intends to examine the experiences of pre-service economic and management science teachers on the use of remote learning and how this mode of study can be translated into hybrid learning in preparation for future teaching and learning. The study used a free attitude discussions technique to generate data using the critical participatory action learning and action research (CPALAR) approach as a form of critical education science. CPALAR is referred to since it pilgrimages three principles of responsible research innovations such as recognition of participants, establishing professional learning communities and critical reflections deliberately embracing diversity characterised in the unequal context of South African education. 
CPALAR allowed participants to share their lived experiences and critical reflections, in this case, by deliberately embracing diversity in the unequal context of South African education. 23 The data was provided by economic and management sciences pre-service teachers. Economic and management sciences teacher education is intended to empower student teachers to teach grades 8 and 9 learners, focusing on financial literacy, business studies and economics. Participants and Selection Procedure The participants in the study were pre-service economic and management science teachers in one of the twenty-six institutions of higher learning in South Africa. The pre-service economic and management science teachers in this study were final year teacher education students under the stream economic and management science, Bachelor of Education: Economic and Management Science (B.Ed EMS). For the purpose of this study, PST is used as a pseudonym to represent pre-service economic and management science teachers. The students were selected purposefully as they come from the same stream and individually had their share of the same challenge of academic identity crisis emanating from the transition from face-to-face into remote learning. The pre-service economic and management science teachers in this study were final-year teacher education students under the economic and management science stream. Twenty pre-service teachers were recruited and encouraged to be part of the study using a purposeful sampling approach. The study was conducted with nine pre-service economic and management science teachers. Their profile is depicted below: Instrumentation The researchers used a free attitude discussion to collect data from participants using an online learner management system. The discussion was online due to lockdown regulations that did not permit faceto-face contact. The discussions in the unstructured interview allowed the participants to freely share their experiences of remote learning and how that impacted their academic identity. Ethical Consideration The participants in this study were guaranteed their freedom to contribute to a free attitude discussion. They were also informed that they could withdraw from the study whenever they felt uncomfortable. We also took extra measures to hide the participants' identities throughout the study. For confidentiality under the findings, we used pseudonyms to identify pre-service economic and management science teachers as participants in the study: Pre-service teacher 1 is identified as PST 1, Pre-service teacher 2 is identified as PST 2, Pre-service teacher 3 is identified as PST 3, Pre-service teacher 4 is identified as PST 4, Pre-service teacher 5 is identified as PST 5, Pre-service teacher 6 is identified as PST 6, Pre-service teacher 7 is identified as PST, 7 Pre-service teacher 8 is identified as PST 8, and Pre-service teacher 9 is identified as PST 9. This study will use the term pre-service economic and management science teachers and participants interchangeably. It was important for this study to achieve trustworthiness in the same way that reliability and validity are monitored. The data collection process was conducted with the utmost honesty and respect to maintain quality, credibility, trustworthiness, reliability, conformability, and transferability, but no approval was sought. The study was designed and conducted by the researchers at their discretion as no funding was sort for conducting the study. 
RESULTS AND DISCUSSIONS Teaching practice during remote learning The COVID-19 pandemic is continually causing tremendous harm to people all over the world, and it is impermissible for the study not to mention this in this issue when we look towards a new normal and more positive future. Pre-service teacher education programs recognise the importance of classroom practice, also referred to as teaching practice, in improving the quality of pre-service teachers and educators. This strategy is widely used in teacher education programmes in South Africa and is also used internationally. 24 There is also a general understanding that the more teachers practice their teaching, the more proficient they will become in the content knowledge, methodology, and pedagogical content knowledge of pre-service teachers. It also allows them to form their own identity as professionals. This is further discussed by Omodan and Ige, raising a need for the studies to ensure the curriculum is tailored towards students' content knowledge and whether the new normal is more productive, as well as detects if the academic identity of pre-service economic and management teachers could be easily redefined using remote learning methodologies. 25 During the discussions intended to establish if pre-service teachers were confident enough to teach economic and management science in grades 8 and 9 using remote teaching and learning techniques, their responses seemed to confirm their confidence. The comments by PST 7 are notable in this regard: "I can confidently say that remote learning has exposed me to sufficient online information that will empower me with pedagogical content knowledge. I can also say that I will be able to deliver a lesson using the same mode of teaching and learning. My concern is that I may be placed in a school where remote learning is not yet practised." The imposed lockdown regulations had an impact on an average of 2.3 million students enrolled in post-secondary education and training institutions. 26 New educational policies and regulations were established for the education sector, including academic timetable adjustments, new instructional programmes, methods of delivery, and curriculum catch-up. Figure 1 below depicts the teaching and learning patterns that existed in the South African schooling context during the COVID-19 pandemic. It illustrates the comparison between use and access to remote learning in rural and urban areas. Figure 1 Remote learning rate in South African Schools 27 Remote learning programmes were not only designed by the institutions of higher learning for pre-service teachers, but the school also designed programmes that encouraged a transition to remote learning. However, the learning dynamics in South Africa reflect an adaptation of a rotational system by a majority of the schools, as depicted in Figure 1 above, while pre-service teachers were being prepared for teaching in the school using remote learning. Statistics South Africa's new research reveals that only 11.7 per cent of schools in the country offer remote learning choices, as depicted in Figure 1 above. 28 Rotating alternative programmes was offered more frequently than remote learning in the schools. And there was a clear urban-rural divide, with urban learners receiving twice as many remote learning opportunities as rural learners. 
It remains the responsibility of the institutions of higher learning to ensure a link between the curriculum they practise in preparing pre-service teachers for their future role as professional teachers. Additionally, Dube, Makura, Modise, and Tarman further argue that COVID-19 has called for reconfiguration in how the curriculum has been implemented in the institutions of higher learning to produce disciplines that are relevant to the market and to adapt to pandemic-induced changes in reality. 29 There are increasing calls for tertiary institutions to revisit and reconfigure their methodologies to prepare and empower pre-service teachers with relevant pedagogical content knowledge that will apply to hybrid learning as the future and the new normal in teaching and learning. Lived Experience during Remote Learning Experiences in access and connectivity It is a particularly challenging time for pre-service teachers and teacher educators as they transition through a career change and uncertainty in terms of their academic and professional identity. The rapid move to remote learning instruction to engage students has resulted in significantly increased workloads for teacher educators as they work to shift teaching content and materials into the online 27 Statistics South Africa., "Education Series Volume VIII COVID- 19 andBarriers to Participation in Education in South Africa, 2020 Report No. 92-01-08." (Pretoria, 2022 space, as well as become proficient with the necessary software tools and learner management systems. 30 As seen, teaching and learning institutions are experiencing difficulty adjusting to the move to remote learning and teachers are struggling to adjust to what is likely to be the "new normal" for a lengthy period. Up to 78 per cent of the participants in this study, as depicted in Figure 1 which represented the pre-service teachers between the ages of 18 to 25 years, indicated that they did not experience challenges regarding the use of technological gadgets while they raised concerns over access to those gadgets and connectivity. While the participants in the age group 25 to 35 years did raise concerns over a need for training on how to use some of the instruments that would have been provided for access to remote learning, they also indicated the downside, which seems to share the concerns by the younger participants on issues of access and connectivity. PST 1 "Since last year, I have not really encountered any problems with using remote learning except for challenges like data problems. And with the remote learning method, I have realised that my marks have improved. The other thing is that I did not really know how to write an assignment. I did not know how to do proper research. I did not know how to do the referencing but with the introduction of lockdown, it is like I only had myself and had to learn how to do all these things successfully." PST 5 "I am so excited. My research mark is good through e-learning. What else? I am independent. I do not have to rely on other people, even when we do assignments. I know I am well and can do it by myself, so I always try to shine. Not in a bad way, but by being an example to others that this is actually how you should do it. Especially with the references, I realised that most students still do not know how to do it. And before COVID-19, I was literally one of those students. Remote learning is just so good." PST 8 "For me, the first few months of remote learning were a nightmare; it gave me a lot of sleepless nights. 
I was used to attending lectures and writing notes down. Abruptly abandoning that learning method and switching to remote learning was not easy for me. The struggle to access data prolonged my agony." Experiences of impact on academics One of the participants shared how the institution had inadequately prepared them for remotely attending lessons. Some of them fell behind because of their inability to keep up with the remote attendance of lessons. This is evident from the narrative shared by one of the participants: PST 4 "The institution was not ready to adapt. Even the lecturers had not received training, which in turn gave us a problem when we had to learn through them. Worse for remote learning is that as much as we do the assignments online, working from home was not the same as accessing unlimited Wi-Fi and data on campus; it made the transition a traumatic experience. I practically fell behind my work." The main challenge in the discussions arose from the participant's location as well as the meagre infrastructure where the participant is located. Isolation has a distinct impact on academic performance, much more so if the student lives miles away from the enrolled university. 31 Some 30 Cho,The Complexity and Hybridity of Social Identity, Jayrome Lleva Núñez, "Lived Experience of Overcoming the Feeling of Isolation in Distance Learning in the Philippines: A Phenomenological Inquiry ," Pakistan Journal of Distance & Online Learning 7, no. 2 (2021): 56-68, http://journal.aiou.edu.pk/journal1/index.php/PJDOL/article/view/1330. participants shared the narrative that their home environment was not enabled and conducive to learning. This was evident in the response PST 7 gave, which is noted: I come from a rural area in the KwaZulu Natal province. I was supposed to travel to the nearest town to access the network but with the travel restrictions, I had to seek the highest point in the mountains to access the internet network at the expense of my safety. Rural areas are famous for harbouring wild animals. Additionally, there was an increased breakdown of communication between the pre-service teachers and the teacher educators. Sometimes pre-service teachers will e-mail the lecturer seeking clarification on a missed assessment due to network issues, but the lecturer will not respond. As a learner management system, Blackboard was used by institutions of higher learning to reach out to students. This system was also compromised due to frequent breakdowns of the system. Teaching and learning, including a messaging system that allows communication, would also be compromised, delaying communication with pre-service teachers. PST 2 said, "Several times, the lecturer will not respond to e-mails and this was frustrating." This lack of communication contributed to anxiety and uncertainty about the future. Ultimately, it became apparent that pre-service economic and management science teachers recognised the impact of remote learning, that it is efficient and effective, and is likely to be part of hybrid learning in the future as the new normal. In the future, it is necessary to investigate unstable Internet connections, expensive and inadequate data provision by the university, and a home environment that does not support learning and how innovative ideas can be engineered to support and enable hybrid learning as the future of teaching and learning. 
CONCLUSIONS AND RECOMMENDATIONS Having examined teacher identity within the changing landscapes of their social environment, as well as the changing plotlines of their learning context, this study found that pre-service teachers do not necessarily have to create new identities in change but rather adopt the changing environment theory, and re-imagining their academic identity. This indicates that further research on this topic is needed to clarify the identities of pre-service economic and management science teachers in the practice of the curriculum that advocated for hybrid teaching and learning post their learning in the classroom and also include the field where they will practise as novice teachers. A person's experiences impact how they think, feel and act. Omodan and Tsotetsi suggest that the impact of education in creating the experiences of individuals should not be underrated. 32 Improving educational processes, whether in formal or informal settings, is an essential aspect of personal growth and adds to self-re-imagination and identity through engagements with others. The effective integration of remote learning into the classroom can ensure that pre-service teachers meaningfully interact with information. In this sense, pre-service teachers will develop the ability to think critically, improving their language, comprehension, cognition, and critical thinking skills. 33 Hence, integrating remote learning within the classroom will empower pre-service teachers to participate actively in the information culture. One benefit of remote learning is that it improves pre-service teachers ' performance and engagement in the learning process. Grynyuk et al. also become flexible and even motivated. 34 Additionally, it engages both the teacher educator and preservice teachers by promoting active participation and self-regulated learning. Since the teacher educator would have achieved all the objectives and the pre-service teachers would have understood the content, the teacher educator would not be required to repeat one topic repeatedly. Since research has shown different opinions and beliefs of students towards remote learning, 35 students' attitudes toward remote learning often reflect how they feel about experiences and their likes and dislikes. They indicate that attitudes, beliefs, and behaviours are linked to their experiences. 36 Furthermore, literature discovered that students in developing countries had a positive attitude towards remote learning. 37 This is because remote technology is believed to increase students' motivation and self-esteem. Active students have more control of what and when they learn and can also help to create a productive learning environment that enhances critical thinking skills. 38 It is worth noting that while technology may improve the classroom situation and engage students more effectively, the development of technology cannot replace the physical learning space, which fosters a healthy teacher educatorpre-service teacher rapport, which the traditional method best provides. This is supported by Suprapto, Zamroni, Abidah, and Wulandar in their assertion that remote learning experience as the new normal will have advantages and disadvantages. 39 Hence this study encourages hybrid learning as the future, incorporating both teaching and learning methods as the instruction that enforces the curriculum practice.
5,777.8
2022-11-02T00:00:00.000
[ "Education", "Economics" ]
Short-term power load forecasting based on combined kernel Gaussian process hybrid model As one of the countries with the most energy consumption in the world, electricity accounts for a large proportion of the energy supply in our country. According to the national basic policy of energy conservation and emission reduction, it is urgent to realize the intelligent distribution and management of electricity by prediction. Due to the complex nature of electricity load sequences, the traditional model predicts poor results. As a kernel-based machine learning model, Gaussian Process Mixing (GPM) has high predictive accuracy, can multi-modal prediction and output confidence intervals. However, the traditional GPM often uses a single kernel function, and the prediction effect is not optimal. Therefore, this paper will combine a variety of existing kernel to build a new kernel, and use it for load sequence prediction. In the electricity load prediction experiments, the prediction characteristics of the load sequences are first analyzed, and then the prediction is made based on the optimal hybrid kernel function constructed by GPM and compared with the traditional prediction model. The results show that the GPM based on the hybrid kernel is not only superior to the single kernel GPM but also superior to some traditional prediction models such as ridge regression, kernel regression and GP. 1Introduction Power load forecasting is conducive to the intelligentization of fuel procurement, equipment maintenance and load distribution. For example, in the urban power system, power load forecasting is very important for power system management and energy trading [1][2]. Since the electric power process is a complex dynamic system, the prediction difficulty is relatively high, and the prediction effect of using manual observation and trend analysis methods is poor [3]. In recent years, after continuous exploration, domestic and foreign scholars have proposed a series of effective load intelligent forecasting methods, such as time series method, artificial neural network (ANN) and support vector machine (SVM) forecasting. In 2000, Tresp first proposed the Gaussian Process Mixture (GPM) model [4]. Using the "divide and conquer" strategy, samples were divided into several groups, and each sample group was assigned a Gaussian Process (GP) model for learning prediction. It not only has better predictive ability, but also can output confidence interval [5]. Therefore, this paper combines single cores to construct a new kernel function on this basis, and selects the optimal combined kernel function for the load sequence through experiments, and then achieves the optimal prediction effect. 2Principle of Gaussian Process Mixture Model Learning algorithm of Gaussian process mixture model This paper adopts the iterative learning algorithm of hidden variable posterior hard partition proposed by Chen [6]. Compared with the traditional MCMC, VB or EM learning algorithm of the GPM model, the hardpartition iterative learning algorithm uses a sampling approximation strategy. In step E, the learning samples are allocated according to the maximum posterior probability criterion, and each step is estimated by the maximum likelihood method in step M. The undetermined parameters of the GP component greatly reduce the computational complexity of the algorithm. 
The specific implementation steps of the algorithm are as follows: The first step: For a given learning sample, divide it into several groups by k-means clustering algorithm; Step 2: Independent learning of each GP component participating in the mixing based on maximum likelihood estimation; The third step: According to the maximum posterior probability criterion, re-designate the group of the learning sample. If the re-designated result is consistent with the previous round, the iterative algorithm stops and outputs the final result; otherwise, it returns to the second step. Step 4: After the learning process is over, for a given test sample, if the corresponding target output is predicted, the group can also be specified according to the maximum posterior probability criterion. Then the test samples are assigned to the first group, and the prediction distribution can be obtained from the prediction formula of the single GP component. The required learning sample in this predictive formula is the learning sample assigned to the group in the last iteration. 3Prediction Algorithm of Combined Kernel Gaussian Process Mixture Model GPM is a mixture of multiple relatively independent GPs, and each GP processes its corresponding sample component. There is a single Gaussian process with noise, and its expression is: [7][8][9][10]. among them SEIs the most commonly used kernel function, And it has the best effect on infinitely differentiable time series forecasting, and it has high requirements for time series smoothness. RQIt is another commonly used core. Its advantage is that after the sequence phase space is reconstructed, as the delay increases, RQThe forecast effect is relatively stable. MaIt is a highly versatile core, and there are three common forms. Adjust the parameters to adapt to the sequence of different degrees of smoothness, but the parameters are not properly selected, based on MaKernel function GPM Even loss of predictive ability. The three kinds of function expressions are shown in (2) to (6). 1) Square exponential function (SE): After the above three functions are transformed into matrix form, all their eigenvalues are not less than zero, that is, the above three functions are all positive semidefinite functions. According to Mercer's theorem, any positive semi-definite function can be used as the kernel function, so SE, RQ, and Ma can all be used as the kernel function of the GPM model. In addition, according to its combination and addition, all satisfy the conditions of Mercer's theorem, and it can also be used as the kernel function of GPM. The combined kernel functions used in this paper are formulas (7) to (10). (6) are the variance of the kernel function, which controls the local correlation of input variables; is the feature width, which controls the smoothness of the model. Can make It is a vector composed of undetermined hyperparameters contained in the GPM model, and its value needs to be determined during model learning. Analysis of power load sequence forecast characteristics Let's start with the four aspects of autocorrelation function, partial autocorrelation function, maximum Lyapunov exponent and saturated correlation dimension, and analyze the characteristics of load series forecasting in depth. (1) Autocorrelation function The autocorrelation function describes in detail the dependence of a certain moment in the sequence on another moment. 
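As an illustration of the kernel combinations in equations (7) to (10), the sketch below builds SE, RQ and Matérn kernels and their sum with scikit-learn and fits an ordinary single GP regressor on a lagged design matrix. This is not the authors' GPM implementation (which mixes several GP components via hard partitioning); it only demonstrates that a sum of valid kernels is again a valid kernel, and the embedding dimension, delay and synthetic series are assumptions for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, Matern, WhiteKernel

# SE (squared exponential / RBF), RQ and Matern single kernels.
k_se = RBF(length_scale=1.0)
k_rq = RationalQuadratic(length_scale=1.0, alpha=1.0)
k_ma = Matern(length_scale=1.0, nu=1.5)

# Combined kernel SE + RQ + Ma, plus a white-noise term for observation noise.
combined = k_se + k_rq + k_ma + WhiteKernel(noise_level=0.1)

# Toy load-like series sampled every 15 minutes with a daily cycle.
t = np.arange(300)
load = np.sin(2 * np.pi * t / 96) + 0.1 * np.random.randn(300)

# Phase-space reconstruction: embedding dimension d and delay tau (assumed values).
d, tau = 4, 1
X = np.array([load[i: i + d * tau: tau] for i in range(len(load) - d * tau)])
y = load[d * tau:]

gp = GaussianProcessRegressor(kernel=combined, normalize_y=True).fit(X[:200], y[:200])
pred, std = gp.predict(X[200:], return_std=True)   # mean and confidence band
rmse = float(np.sqrt(np.mean((pred - y[200:]) ** 2)))
```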
By setting the fixed time delay parameter, the correlation degree between the initial time and any time within the time delay range can be obtained. Now the autocorrelation function is calculated for the electric load sequence, and the maximum time delay is 200. The power load sequence in Figure 1 starts from the 20th time delay, and the value has fallen below the confidence interval, which proves that it is a set of nonlinear sequences. (2) Partial autocorrelation function The partial autocorrelation function is a good indicator of the stationarity of the time series. Now the partial autocorrelation function is calculated for the electric load sequence, and the maximum time delay is 100. The result is shown in Figure 2. It can be seen from the figure that there is a single large peak at time t+1, and the values after time t+5 mostly converge within the confidence interval, which proves that the power load sequence is non-stationary. (3) The largest Lyapunov exponent Lyapunov exponent can well reflect the chaotic characteristics of time series. In this paper, the Wolf method loop is used to obtain the maximum Lyapunov exponent of the load sequence under 20 cycles. Since the minimum unit of the electric load sequence is 15 minutes, each cycle of the electric load is set to 3 hours. The result is shown in Figure 3. It can be seen from the figure that the maximum Lyapunov exponents of the series are all greater than zero, verifying that the load series have certain chaotic characteristics. Figure 3 The largest Lyapunov exponent of the power load sequence (4) Saturated correlation dimension According to the correlation dimension, it can be distinguished whether the time series has random or chaotic characteristics. Now find the correlation dimension for the load sequence, and the embedding dimension is taken from 2 to 8. The result is shown in Figure 4. It can be seen from the figure that the correlation dimension of the sequence is saturated with the increase of the embedding dimension, so it is verified that the power load sequence has certain chaotic characteristics. memory is 4GB, software platform is matlab 2010a. In the power load sequence prediction experiment, the learning samples are from the 201st time to the 500th time, and the test samples are from the 501st time to the 800th time. In order to fully demonstrate the improvement effect of the proposed combined kernel function on the GPM model and the advantages of GPM over the traditional forecasting model, this article first predicts the power load together with three single kernel functions and multiple combined kernel functions under the same experimental parameters, that is, the final prediction The kernel function used is: SE, RQ, Ma, SE+RQ, SE+Ma, RQ+Ma and RQ+SE+Ma; Then GPM and traditional models are used to predict load under common parameters. The traditional models involved in the comparison are Kernel-Regression (K-R), Ridge-Regression (R-R) and GP models. Among them, K-R is a kernel-based regression prediction model. By adjusting the optimal window width, the prediction result with the smallest error can be obtained gradually. R-R is a biased estimation regression model, through improved least squares estimation method, to obtain more reliable prediction results. As the basis of GPM model, GP has been widely used in various forecasts. 
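The predictability checks described above (autocorrelation up to lag 200 and partial autocorrelation up to lag 100) can be reproduced with standard tools. The sketch below uses statsmodels on a generic load array; the synthetic series and the ±1.96/√N band are illustrative assumptions, not the paper's data.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

def predictability_check(load: np.ndarray):
    """Autocorrelation (lag <= 200) and partial autocorrelation (lag <= 100).

    Values inside the +/- 1.96/sqrt(N) band are statistically
    indistinguishable from zero, which is how the confidence interval
    in the figures is usually read.
    """
    n = len(load)
    band = 1.96 / np.sqrt(n)
    r = acf(load, nlags=200, fft=True)
    p = pacf(load, nlags=100)
    first_inside = int(np.argmax(np.abs(r[1:]) < band)) + 1  # first lag inside the band
    return r, p, band, first_inside

# Toy usage with a synthetic 15-minute load series.
t = np.arange(2000)
series = np.sin(2 * np.pi * t / 96) + 0.2 * np.random.randn(2000)
r, p, band, lag = predictability_check(series)
```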
The following two indicators are used to quantitatively evaluate the pros and cons of the prediction results. (1) Root mean square error (RMSE): the smaller its value, the better the prediction effect. In the phase space reconstruction step, because the traditional mutual information method and the pseudo-nearest-neighbor method are relatively time-consuming, this paper uses a grid traversal search to obtain the optimal parameters; the discrimination criterion used in the grid search is equation (11). After all parameters are set, the electric load sequence is predicted. The blue line in Figure 5(a) is the true value of the sequence and the red line is the predicted value, so the closer the two lines fit, the better the prediction. The abscissa of the scatter plot in Figure 5(b) represents the true value of the sequence and the ordinate represents the predicted value; the more the points concentrate on the main diagonal, the better the prediction. In this paper, the Gaussian Process Mixture (GPM) model is used for power load forecasting, and a real load sequence is used for the forecasting experiments. GPM prediction uses an iterative learning algorithm based on hard partitioning of the hidden-variable posterior, which improves the prediction efficiency of the model. GPM is a kernel-based machine learning model: when the kernel function changes, the prediction effect also changes, and when the sequence distribution is complex, a combined kernel function is more comprehensive than a single kernel. Therefore, this paper conducts an in-depth study of the kernel function, combining three common single kernels (SE, RQ, and Ma) into new combined kernels and verifying the resulting improvement experimentally. The following conclusions can be drawn from the experiments. (1) The power load has strong nonlinearity, non-stationarity, chaotic characteristics and a certain short-term predictability. (2) For the phase space reconstruction parameters, the embedding dimension and the time delay τ, the prediction accuracy generally increases as the embedding dimension increases and the time delay decreases, but the embedding dimension should not be too large. (3) There is no obvious rule for the number of modes of the GPM, but the power load sequence in this paper is largely shaped by the three periods of morning, midday and evening; the optimal number of modes for prediction was determined through a traversal search. (4) In power load forecasting, the SE+RQ+Ma combined kernel gives the best forecasting effect, and the GPM forecast based on the optimal kernel function is better than the traditional forecasting models. The selection of the combined kernel therefore depends largely on the sequence: when the sequence differs, the best combined kernel changes, but its prediction effect remains better than that of a single kernel. Finally, GPM can adaptively select the optimal combined kernel function when facing different load sequences.
2,879.8
2021-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Compensation for Group Velocity of Polychromatic Wave Measurement in Dispersive Medium An instantaneous frequency (IF) estimation method is introduced to compensate for the group velocity of an electromagnetic wave in a dispersive medium. The location of the reflected signal can be obtained using the time-frequency cross-correlation (TFCC), which is then used to extract the transmitted signal from the total acquired signal. A signal propagating in a dispersive medium is attenuated and distorted by the frequency-dependent attenuation characteristics of the medium. Using the IF curve calculated for the transmitted signal, the changed center frequency and time terms can be obtained. These terms are used to compensate for the group velocity error induced by signal distortion and attenuation. Experiments and simulation show that the accuracy of the proposed method is 2% higher than that of the conventional method when the signal propagates over a long distance. Introduction It is important to accurately measure the group velocity of an electromagnetic wave in a dispersive medium in order to detect defects and measure distances. Since the group velocity depends on frequency, and dispersion occurs when a polychromatic wave composed of multiple frequencies propagates in a dispersive medium, it is difficult to measure the exact group velocity [1]. Group velocity correction is therefore necessary when a signal propagates in any medium other than air. Detection methods based on electromagnetic waves, called reflectometry methods, are useful for fault localization and for monitoring the health of an insulated cable. Reflectometry methods can be categorized into time domain reflectometry (TDR), frequency domain reflectometry (FDR), and time-frequency domain reflectometry (TFDR) [2]-[7], depending on the type of incident signal. TDR and FDR use a step pulse and a sinusoidal pulse defined in the time domain and the frequency domain, respectively. They have been widely used in cable diagnostics owing to their ease of implementation. However, since the incident signals of TDR and FDR are defined in only one domain, it is difficult to distinguish the acquired signal from noise. In contrast, TFDR achieves a higher signal-to-noise ratio (SNR) because its incident signal is a chirp signal that has characteristics in both the time and frequency domains. In TFDR, the distance to an impedance discontinuity is measured by obtaining the time delay of the reflected signal generated at the discontinuity point for a given group velocity. Since the group velocity is determined by the permittivity and permeability of the propagation medium, it can be obtained from information about the propagation medium. The time-frequency cross-correlation function (TFCC), which measures the similarity of two signals and is used to calculate the time delay, is based on the Wigner-Ville distribution, an energy distribution in time-frequency analysis. However, since the group velocity depends on frequency, if the bandwidth is wide or the distortion increases with long-distance propagation, the error in the group velocity also becomes large. Especially in the case of submarine cables several hundred kilometers long, speed compensation is essential. Our research group has previously investigated compensation for the group velocity [8]. The compensation method proposed in that previous work [8] does not consider that the sweep rate of the reflected signal changes. That is, since the
frequency components included in the chirp signal travel at different velocities, the compensation is performed without taking dispersion into account, which causes a measurement error. In this paper, we propose a method for compensating the group velocity of a polychromatic wave in a dispersive medium based on instantaneous frequency (IF) estimation. Using TFCC, the location of the transmitted signal is obtained and the signal is extracted from the total acquired signal. The IF curves of the incident and transmitted signals are derived using a phase-unwrapping process. The shifted center frequency and shortened time duration of the transmitted signal are obtained based on the Fast Fourier Transform (FFT) and estimation of the IF curve. The group velocity compensation is then carried out using the derived terms. Group Velocity Measurement Based on Electromagnetic Wave Theory The proposed group velocity measurement method consists of two main parts. The first part comprises the transmitted signal detection method using TFCC. After this first step, the time offset and shifted center frequency of the transmitted signal are used to compensate for the group velocity. In electromagnetic theory, the group velocity of an electromagnetic plane wave can be derived as in (1) [1], where f, f_r, and f_m are the operating frequency, the relaxation frequency at which the imaginary part of the complex permittivity reaches a maximum, and the resonant frequency, respectively; ε_∞ and ε_s are the permittivity at infinite frequency and the static permittivity, respectively; and μ_s is the static permeability. According to (1), the group velocity depends on the operating frequency. When a polychromatic wave comprising several frequencies propagates in a dispersive medium, the wave is distorted and a group velocity error occurs because of this frequency dependence. Especially when the signal is transmitted over a long distance, the high-frequency components are attenuated more strongly and a velocity error is induced. In other words, as the signal propagates, the group velocity changes continuously with the propagation distance. In this paper, the incident signal is the signal that we inject into the cable; the reflected signal is generated at the cable termination; the transmitted signals are the signals acquired by the oscilloscope through the inductive couplers as the incident signal flows along the cable; the signals acquired after being reflected at the cable termination are classified as reflected signals; and the reflected signal is included in the transmitted signal. We use a Gaussian linear chirp signal as the incident polychromatic wave, and the incident signal is represented as follows, where T_1 is the duration of the incident signal, A is the amplitude, ξ_1 is the normalized angular frequency sweep rate, and ω is the normalized angular frequency. The transmitted signal is expressed as follows, where η is the magnitude of the attenuation coefficient over the travelled distance and d is the time delay of the transmitted signal. Also, because the transmitted signal exhibits high-frequency attenuation, we define the changed parameters ξ_2 and T_2 as the frequency sweep rate and the time duration of the transmitted signal. The normalized cross-correlation between the incident signal and the transmitted signal is used to detect the transmitted signal from the cable termination. This can be expressed as follows [3], where E_s is the energy of the incident signal in the Wigner-Ville distribution, E_r is the energy of the transmitted signal, and the remaining symbol denotes the correlation operator.
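As an illustration of the incident waveform and the correlation-based detection just described, the sketch below generates a Gaussian-windowed linear chirp and locates a delayed, attenuated copy of it with a plain normalized cross-correlation; the parameter values are arbitrary, and ordinary cross-correlation stands in here for the Wigner-Ville-based TFCC.

```python
# Gaussian-windowed linear chirp and a simple normalized cross-correlation used
# to locate its delayed, attenuated copy. Values are illustrative only, and the
# ordinary cross-correlation below is a stand-in for the Wigner-Ville-based TFCC.
import numpy as np
from scipy.signal import chirp, correlate

fs = 1e9                                   # sampling rate [Hz] (assumed)
t = np.arange(0, 1e-6, 1 / fs)             # 1 us incident-signal duration
envelope = np.exp(-0.5 * ((t - t.mean()) / 0.15e-6) ** 2)
incident = envelope * chirp(t, f0=50e6, t1=t[-1], f1=250e6, method="linear")

# Toy acquired record: the incident signal delayed and attenuated.
acquired = np.zeros(4 * len(t))
true_delay = 1200                          # samples
acquired[true_delay:true_delay + len(t)] += 0.3 * incident

xcorr = correlate(acquired, incident, mode="valid")
xcorr /= np.sqrt(np.sum(incident**2) * np.sum(acquired**2))  # crude normalization
print("estimated delay [samples]:", int(np.argmax(np.abs(xcorr))))
```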
Through the normalized cross-correlation process, the transmitted signal can be extracted from the total acquired signal. Compensation of Group Velocity in Dispersive Medium For the compensation of the group velocity, we measure the IF curves of the incident and transmitted signals. To acquire the IF curves, the incident and transmitted signals are transformed with the Hilbert transform into the following analytic representation, where H_t{x} is the output at time t of the Hilbert transform filter applied to the signal, and M and Φ are the magnitude and the instantaneous phase of the transmitted signal. The instantaneous phase is then derived as follows, where abs(·) denotes the absolute value function. Figure 1 illustrates the compensation of the IF curves; in Figure 1, f_s,c and f_r,c are the center frequencies of the incident signal and the transmitted signal. To derive the frequency-dependent time-shift terms a(f) and b(f) of the transmitted signal, we obtain the shifted terms using the incident signal delayed by the time delay d and the reflected signal, as follows. The concept of the group velocity of a polychromatic signal is associated with narrow-band pulses concentrated in the neighborhood of a center frequency ω_0, with an effective frequency band |ω − ω_0| ≤ Δω, where Δω ≪ ω_0. Furthermore, the compensated time delay between the incident signal and the transmitted signal is derived as follows. The compensated group velocity is then v_com = l / t_com, where l is the known length of the target cable. Next, we derive the frequency versus group velocity curve, v(f), based on (1), and we assume that the center frequency of the transmitted signal decreases linearly with time; under this assumption, the center frequency of the transmitted signal can be expressed as f_r,c(t) = a · t + b, where a < 0 and b is a constant. Experimental Results Figure 2 shows an illustration of the proposed system. The system is composed of (1) a digital phosphor oscilloscope (DPO); (2) an arbitrary waveform generator (AWG); and (3) inductive couplers. The AWG generates the reference signal and applies it to the target cable through an inductive coupler. The inductive coupler uses electromagnetic induction to apply a signal to the cable without a connection between the cable core and the signal line. In this paper, we use three inductive couplers: coupler 1 is used to apply the signal to the cable, and couplers 2 and 3 acquire the signal. An incident signal propagates along the cable and is acquired by the oscilloscope through couplers 2 and 3. For ease of understanding, the signals are numbered according to the order in which they were acquired. The transmitted signals acquired through couplers 2 and 3 before the signal is reflected are numbered 1 and 2 and are located at 0 m and 40 m, respectively. The distance propagated relative to the first signal acquired through coupler 2 defines the position of each signal. The signals reflected from the cable termination are in turn acquired via couplers 3 and 2, located at 80 m and 120 m. In order to verify the variation of the group velocity due to dispersion, we converted the experimental system into an equivalent circuit model using a simulation tool. By tuning the loss factor in the simulation tool, we compared the signal passing through a lossy medium with that through a lossless medium. The comparison results are shown in Figure 3 and consist of three simulations: (a) lossless cable, 40 m; (b) lossy cable, 40 m; (c) lossy cable, 80 m.
To verify the effect of the loss factor, we compared the transmitted signal in the lossy medium with that in the lossless medium (simulations (a) and (b)). Simulations were also conducted with different cable lengths in order to analyze whether the group velocity of the signal varies due to the reflection (simulations (b) and (c)). As shown in Figure 3a, the incident signals of the simulations are identical, but their reflected signals do not match. Figure 3b shows an enlarged view of the reflected signals. The highest peak of the signal in the time domain can be regarded as the point of highest energy, and the group velocity of the signal can be determined from the time delay between these peak points; this delay is called the time of arrival of the group velocity. Comparing the waveforms of (a) and (b), the time delays between the peak points of the reflected signals generated at 40 m and 80 m are 0.102 × 10⁻⁶ s and 0.207 × 10⁻⁶ s, respectively. If there were no change in group velocity with travel distance, the time of arrival of the group velocity of the reflected signal generated at 80 m would have to be 0.204 × 10⁻⁶ s. A larger time delay means a slower group velocity. These results show that the group velocity decreases as the signal propagates. Since the incident signal is a positive chirp, the rear part of the signal in the time domain contains the high-frequency components. As shown by the red and black reflected signals in Figure 3b, the zero-crossing points of the two signals match in the front part but do not coincide in the rear part. These results indicate that the higher-frequency components of the signal are attenuated as the signal propagates in the lossy medium. Comparing the reflected signals generated at 80 m, we verified that the reflection only affects the magnitude of the signal, not the group velocity. Figure 4a shows the total signal acquired from the inductive couplers after the signal restoration process [9,10]. As seen in the fourth signal in Figure 4a, the signal is difficult to distinguish from noise, so TFCC is used to roughly locate the position of the transmitted signal. To evaluate the accuracy of the proposed method, we obtained the true value of the group velocity using the time of arrival of the group velocity of the signals in the time domain. However, the reflected signal at 120 m is too small to identify because of attenuation. For this reason, it is very difficult to obtain the group velocity from the time delay between the highest peak points of the signal in the time domain, and the group velocity beyond 120 m cannot be calculated. The TFCC graph is shown in Figure 4b. As the signal propagates, attenuation occurs, which slows the group velocity and increases the time delay. Figure 4b depicts a TFCC graph based on the constant group velocity measured at 40 m; therefore, the distance errors at 80 m and 120 m become larger. Based on the unwrapping algorithm and the Hilbert transform, the instantaneous phase of the transmitted signal can be derived, as shown in Figure 5a. The portion of the signal with a positive slope of the instantaneous phase is extracted, and the frequency band of the extracted signal is obtained with the FFT algorithm; in Figure 5a, the time duration of the signal is obtained from this positive-slope portion. Figure 5b shows the frequency band of the second signal of the total signal. The changed values, the time duration and the frequency region, were obtained and substituted into Equation (7) to compensate for the group velocity.
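The instantaneous-phase step just described can be sketched as follows, assuming the extracted transmitted signal and the sampling rate are available; the gradient-based IF estimate is one common implementation choice, not necessarily the exact procedure used by the authors.

```python
# Instantaneous phase and instantaneous frequency of an extracted transmitted
# signal via the Hilbert transform and phase unwrapping, as described above.
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(transmitted, fs):
    analytic = hilbert(transmitted)                       # analytic representation
    phase = np.unwrap(np.angle(analytic))                 # instantaneous phase [rad]
    inst_freq = np.gradient(phase) * fs / (2.0 * np.pi)   # IF curve [Hz]
    return phase, inst_freq

# The portion where the phase slope is positive gives the usable time duration,
# and an FFT of that portion gives the shifted frequency band of the signal.
```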
As seen in Table 1, the group velocity obtained with the TFCC method appears to be equal regardless of the propagation distance. In the proposed method, by contrast, the shifted terms a(f), b(f), and c(f) can be obtained from the center frequency f_r,c of the received signal, as shown in Figures 1 and 5. The compensation time-delay term is calculated according to Equation (8). The travelling distance D is the known value of the cable length and equals the integral of the velocity determined by the center frequency of the transmitted signal, i.e., D = ∫ from t_0 to t_0 + t_com of v(f_r,c(t)) dt, where f_r,c(t) is the linear center-frequency model introduced above. Based on the derived compensation time delay t_com and the distance D, the average velocity of the transmitted signal during propagation can be obtained; the values are given in Table 1. The measurement values in Table 1 were calculated from the time of arrival of the group velocity of the signals in Figure 5a. Since the distance between each pair of signals is 40 m, the measured group velocity is obtained by dividing this distance by the measured group delay. The accuracy values in Table 1 were calculated by dividing the group velocity of the proposed method by the measurement value. As seen in Table 1, when the signal propagates a short distance, the existing method is highly accurate, but it does not reflect the change in group velocity when the signal propagates over a long distance. The proposed method, on the other hand, compensates for the group velocity change caused by dispersion, so its accuracy is good regardless of the distance. Conclusions In this paper, we proposed a group velocity compensation method using time and center-frequency offset terms derived from the estimation of the instantaneous frequency (IF). The proposed method can be divided into two parts. The first is the transmitted-signal detection algorithm based on TFCC and a system of multiple inductive couplers. The second is the compensation algorithm, in which the compensation terms are obtained from the IF curve derived by applying the Hilbert transform and a phase-unwrapping algorithm to the transmitted signal. The variation of the group velocity of a chirp signal caused by dispersion in a lossy medium was verified using a simulation tool. Through comparison experiments with compensation that does not consider the sweep-rate change and with existing methods using TFCC, the superiority of the proposed method was demonstrated. Although the group velocity of a signal propagating a short distance has similar accuracy in both the conventional method and the proposed method, when the signal propagates over a long distance (120 m) the proposed method is 2% more accurate than the conventional method. This paper proposes a new method to compensate for the group velocity error caused by the dispersion of a chirp signal in a lossy medium; the method can be applied to defect detection, making it possible to localize faults in long-distance lines, such as submarine HVDC cables, without error. Conflicts of Interest: The authors declare no conflict of interest. This article compensates the speed using time-delay compensation; future work will extend this to estimate the distance from the accurate propagation speed and time delay without prior knowledge of the travelling distance. Figure 1. Illustration of the compensation method based on IF curves. Figure 5. (a) Estimation of the instantaneous phase; (b) frequency band of the transmitted signal.
Table 1. Estimation results of group velocity.
3,941
2017-12-18T00:00:00.000
[ "Geology" ]
A novel high-dimensional trajectories construction network based on multi-clustering algorithm A multiple clustering algorithm based on high-dimensional automatic identification system (AIS) data is proposed to extract the important waypoints in ships' navigation trajectories from selected AIS attribute features and to construct a route network from these waypoints. The algorithm improves the accuracy of route network planning by using the latitude and longitude of the historical voyage trajectories together with the course over ground. Unlike navigation clustering methods that use only ship latitude and longitude coordinates, the algorithm first calculates the major waypoints using the Clustering In QUEst (CLIQUE) and Balanced Iterative Reducing and Clustering Using Hierarchies (BIRCH) algorithms, and then builds the route network through a network construction step. On a common PC (i5 processor), the algorithm extracts 440 major waypoints from 220,133 AIS records and constructs a route network with directional features in 5 min; it is faster in computation, better suited to distinguishing complex ship trajectories, and extends the application boundary of ship route planning. Data clustering is considered the main method for dividing a huge amount of data into groups for more precise analysis. Cluster analysis is the grouping of data according to the inherent similarities and characteristics between the data [2]-[4]. The clustering result may show the target ship's trajectories and the distribution of traffic volume [5,6]. As a standard data mining method, ship trajectory clustering groups the AIS data of different ships into different categories. It helps shipping companies and maritime authorities understand the operational status and characteristics of marine traffic. Density-, graph-, partition-, and hierarchy-based clustering algorithms are often used for ship trajectory clustering. K-means clustering, a representative partition-based clustering method, has been widely used in related research for its simplicity and efficiency. Song [7] designed an improved K-means trajectory clustering method based on suburban curve fitting to obtain the traffic flow parameters of each direction and category at intersections. However, since this method only considers the case of smooth traffic, it cannot handle vehicle trajectories with discontinuities in complex traffic situations. Wang and Bai [8] used the Min-Max K-means clustering error method to modify the global K-means algorithm and overcome undesirable effects at initialization. Han [9] proposed an online learning model combining K-means clustering and a gated recurrent unit (GRU) neural network for trajectory prediction. Tyagi and Trivedi [10] proposed a hybrid K-means algorithm to obtain clustering results for color images and refined the clustering results using the ant colony optimization (ACO) algorithm. Jiang [11] proposed an identification scheme for classifying and monitoring moving targets at sea based on structural database techniques and K-means. However, this method is sensitive to data noise and to the cluster centers, which makes it less effective for noisy data. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a representative density clustering method.
Density clustering examines the data from the perspective of sample density: it checks the connectivity between samples and continuously extends clusters along connectable samples to obtain the final clustering result. In 2017, Zhao [12] proposed a DBSCAN algorithm whose parameters are determined by statistical methods for trajectory clustering in waters with an uneven distribution of ship trajectories; however, only its applicability in simple cases was considered. In 2019, Zhao [5] proposed a DP (Douglas-Peucker) compression and density-based trajectory clustering method for marine traffic pattern recognition, building on the previous research, and evaluated and compared a large number of ship navigation trajectories in the Beilun-Zhoushan port, China. Wang [13,14] built on Zhao's research and proposed a ship trajectory clustering algorithm based on hierarchical density-based spatial clustering of applications with noise. This line of research on trajectory clustering continually improved the clustering effect from the perspective of optimization, but it did not solve the route planning problem. To better understand ship navigation information, a maritime route network needs to be extracted from the ships' historical voyage trajectories, through which the relevant personnel can carry out route planning. In 2016, Dobrkovic [15] proposed for the first time the use of genetic algorithms to extract maritime traffic networks from AIS data. To enable long-term forecasting and planning of ship routes, in 2018 Dobrkovic combined quadtrees and genetic algorithms to construct a maritime route network from incomplete and noisy AIS data [16]. Filipiak [17] pointed out the poor computational performance in Dobrkovic's study and proposed a parallel genetic algorithm combined with KD-B trees to extract the maritime route network from AIS data. Ni [18,19] proposed an improved genetic algorithm for ship path planning that compensates for the inherent deficiencies of local optimization, in order to balance the local and global optimization capabilities of genetic algorithms in ship path planning. Wang [20,21] proposed a quadratic optimization genetic algorithm incorporating ship motion characteristics to aid automatic route planning in complex environments. Zhao [22] proposed a hybrid multi-iterative route planning method based on an improved particle swarm optimization-genetic algorithm, aiming to optimize ship-related meteorological risks, fuel consumption, and navigation time, and to improve the diversity of route planning; however, only the effects of wind, waves, and adverse navigation conditions on the ship were considered, while the effects of other maritime vessels were ignored. Chen [23] combined fuzzy control and a genetic algorithm to build a route planning system for underwater vehicles with strong robustness. Route planning algorithms are also one aspect of obstacle avoidance systems for unmanned ground vehicles. Liu [24] proposed an improved A-Star algorithm for ship path planning that integrates route length, obstacle dynamics, navigation rules, and maneuverability constraints; in particular, ocean currents were considered in the algorithm, although the precise maneuvering characteristics of the ship were not used. Sun [25] used fuzzy neural networks to schedule the ship's path in complex navigation tasks, with fuzzy logic used to process statistical data and neural networks used to optimize navigation routes.
A proportional-integral-derivative (PID) method was introduced into the decision system to ensure its stability. Although previous research has shown solid results in the analysis of navigation history data and path prediction, those methods still lack connections with real-world scenarios. In the first place, some studies find the major waypoints manually, which is highly subjective and error-prone, especially in complex open-water environments. In addition, previous research calculated the ship's direction using only the longitude and latitude information, which costs considerable computational power and can sometimes be erroneous. Most importantly, when performing trajectory clustering, AIS coordinate information is often used as the input for better classification; however, with only longitude and latitude information, the output of such clustering processes is meaningful only from a mathematical or statistical perspective, while its practical performance can hardly be evaluated. This paper proposes a new multi-level clustering algorithm based on high-dimensional ship AIS data to find the major waypoints on ship trajectories and to provide a basis for later analysis of the ship navigation environment. Methodological overview The proposed method analyzes ship trajectories in high dimensions by automatically clustering paths with multiple clustering algorithms and a shipping network reconstruction. Firstly, data pre-processing is performed by removing abnormal AIS data for noise cancellation. The AIS data are then further processed in two steps: trajectory trimming and trajectory compression. Secondly, the CLIQUE-BIRCH algorithm is used for trajectory clustering and waypoint discovery from the AIS data, and clustering performance metrics are proposed to judge the method's performance. Finally, a newly proposed network construction method is used to connect the waypoints and construct a sea route network, and the constructed route network is evaluated with examples. Figure 1 gives the methodological overview of the research. AIS data preprocessing The presence of forwarding, loss, and data errors in AIS data can lead to many anomalies in ship trajectories built directly from the raw data, so the AIS data need to be pre-processed before building the model. The navigation situation of ships in and around ports is more complicated than in the open sea: ships may be berthing, anchoring or wandering, and the ship density is higher, so the situation in and around ports needs to be studied separately. On the other hand, since low-speed ships reduce navigation efficiency, and the goal of the study is to build an efficient route network to assist route planning, removing the data of low-speed ships helps improve navigation efficiency and reduce navigation cost and safety risk. In summary, three methods are used to improve the quality of the AIS data. First, following relevant ship management experience, the AIS data points are matched with the ports, and data less than 185.2 km (100 nautical miles) from a port are eliminated; second, the data of ships sailing at less than 7 knots are labeled and eliminated; finally, in order to reduce the total amount of AIS data studied, the Douglas-Peucker (DP) algorithm is used for trajectory compression.
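As a rough illustration of these three quality rules, the sketch below filters an AIS table with pandas; the column names (`sog`, `dist_to_port_km`, `cog`) and the input file are hypothetical, and the Douglas-Peucker compression is sketched separately further below.

```python
# Illustrative pre-processing filter for the three quality rules listed above.
# Column names and the input file are hypothetical.
import pandas as pd

ais = pd.read_csv("ais_raw.csv")                      # hypothetical raw AIS export
ais = ais.dropna(subset=["lat", "lon", "cog"])        # basic anomaly removal
ais = ais[ais["dist_to_port_km"] >= 185.2]            # drop points within 100 nmi of a port
ais = ais[ais["sog"] >= 7.0]                          # drop low-speed (< 7 kn) records
# Trajectory compression with the DP algorithm follows (see the DP sketch below).
```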
The DP algorithm [5,26,27] has been widely used to remove redundant AIS data points from ship trajectories while preserving the original route shape characteristics, so the study does not lose generality. CLIQUE-BIRCH Since previous uses of AIS data in trajectory clustering algorithms only consider latitude and longitude information and lack consideration of other attributes, much information is often lost in the clustering results. At the same time, when new attributes such as direction are introduced, their values have to be calculated from the latitude and longitude information. Moreover, information such as draught, weather, and fuel consumption cannot be expressed through latitude and longitude at all, so traditional algorithms fail to consider it, and the clustering results naturally cannot help the relevant departments carry out efficient route planning. Therefore, a novel multi-level clustering algorithm network based on high-dimensional AIS data is proposed. First, the latitude, longitude, and Course over Ground (COG) of the AIS data points are input to the CLIQUE algorithm to cluster the navigation trajectories with directional features. The CLIQUE algorithm can efficiently handle high-dimensional data by automatically discovering the highest-dimensional subspaces in which high-density clusters exist. It is insensitive to the order of input tuples, does not assume any canonical data distribution, scales linearly with the size of the input data, and has good scalability as the dimensionality of the data increases [28]. Second, the BIRCH algorithm is used to automatically find and generate waypoints on the identified navigation trajectories. The algorithm can effectively identify noise points and quickly cluster the AIS data of the navigation tracks, so the waypoints on the tracks are identified efficiently [29]. Compared with waypoints obtained by conventional methods, the identified waypoints carry additional directional information. Network construction After the waypoints are refined, a complete route network needs to be constructed from them. In previous studies, little attention was paid to the construction of route networks. Since waypoints are extracted from numerous AIS data points, there must be a connection between waypoints and AIS data that can help construct the route network. Therefore, a network construction method based on the connection between AIS data and waypoints is proposed. First, each waypoint is taken as the center of a circle, and a radius of size r is set as needed; the AIS data within each circle are marked as correlated with that waypoint. Then, the route trajectories are extracted from the historical AIS data and traversed over all the waypoint circles, and the waypoints are connected in the order in which they are reached to build the complete route network. Definition of ship trajectory Using the MMSI to distinguish different ships, the set of ship trajectories can be described as Trajectory = { ship_i : i = 1, 2, ..., m }, where ship_i is the trajectory of ship i and m is the number of ships. Each ship_i is defined in (1) as a time-ordered sequence of state vectors p_i^k, k = 1, 2, ..., n, where k is the sequence number of the AIS data point in the trajectory, n is the total number of AIS data points in the trajectory, p_i^k is the state vector of the k-th AIS data point of the i-th ship, and lat_i^k and lon_i^k are the coordinates of ship i at time T_i^k. AIS data preprocessing The purpose of this step is to pre-process the AIS data.
The first step is to prune the AIS data. The AIS data points are matched with the port information, and the data points near a port are removed; the distance between the port points and the AIS data points is calculated using the haversine formula, d = 2R · arcsin( sqrt( sin²((φ₂ − φ₁)/2) + cos φ₁ · cos φ₂ · sin²((λ₂ − λ₁)/2) ) ), where φ₁ and φ₂ are the latitudes, λ₁ and λ₂ are the longitudes, d is the distance between the two places, and R is the radius of the Earth. The second step is to compress the trimmed AIS data using the DP algorithm to improve the clustering efficiency without losing shape features. The steps of the DP algorithm are as follows. For a trajectory composed of many AIS data points, the first step is to set a distance threshold D. The second step is to connect the first and last points of the trajectory with a straight line, compute the perpendicular distance from every AIS data point on the trajectory to this line, and find the maximum distance d_max; d_max is then compared with the pre-given threshold D. If d_max < D, all the intermediate points on this trajectory are discarded, the straight-line segment is taken as the approximation of the trajectory, and the processing of this segment is finished. In the third step, if d_max > D, the AIS data point corresponding to d_max is kept and used as a boundary to divide the trajectory into two parts, and the method is repeated for both parts; that is, the second and third steps are repeated until every d_max is smaller than D, at which point the compression of the trajectory is finished. Figure 2 shows the process of compressing a trajectory with the DP algorithm. Obviously, the compression effect of the DP algorithm is related to the threshold value: the higher the threshold, the greater the degree of compression and the more AIS data points are removed; conversely, the lower the threshold, the lower the degree of compression, the more AIS data points are retained, and the closer the shape stays to the original trajectory. CLIQUE clustering AIS data are multidimensional data containing many different attributes, but traditional trajectory clustering methods often consider only latitude and longitude; when attributes such as direction are introduced, they have to be calculated from the latitude and longitude, which tends to waste computational resources, and the calculated direction may not be consistent with the actual direction. The CLIQUE algorithm is therefore adopted to solve these problems. CLIQUE clustering was proposed by Agrawal et al. [30] as a grid-based clustering method for discovering density-based clusters in subspaces. CLIQUE has the advantage of efficient grid clustering, is insensitive to the input order of the data, and does not require the assumption of any canonical data distribution. It scales linearly with the size of the input data, has good scalability as the number of data dimensions increases, and is very effective for clustering high-dimensional data in large databases. CLIQUE works by dividing each dimension into non-overlapping intervals, thus dividing the entire embedding space of the data objects into cells. It uses a density threshold to distinguish dense cells from sparse cells: a cell is dense if the number of objects mapped to it exceeds the density threshold. The main strategy of CLIQUE for identifying the candidate search space is to use the monotonicity of dense cells with respect to dimensionality, which is based on the a priori property used in frequent pattern and association rule mining.
In the context of subspace clustering, the monotonicity is stated as follows: a k-dimensional (k > 1) cell c contains at least l points only if every (k − 1)-dimensional projection of c (which is a (k − 1)-dimensional cell) contains at least l points. CLIQUE performs clustering in two phases. In the first phase, CLIQUE divides the d-dimensional data space into rectangular cells that do not overlap with each other and identifies the dense cells among them. CLIQUE finds dense cells in all subspaces: it divides each dimension into intervals and identifies the intervals containing at least l points, where l is the density threshold. CLIQUE then iteratively joins subspaces to form candidate cells and checks whether the number of points in each candidate satisfies the density threshold. The iteration terminates when no candidate is generated or none of the candidates is dense. In the second phase, CLIQUE uses the dense cells in each subspace to assemble clusters, which may have arbitrary shapes. The idea is to use the minimum description length (MDL) principle to cover the connected dense cells with maximal regions, where a maximal region is a hyperrectangle in which every cell is dense and which cannot be extended in any dimension of the subspace. Finding the best description of the clusters is difficult in general, so CLIQUE uses a greedy algorithm: it starts with an arbitrary dense cell, finds the largest region covering that cell, and then continues the process on the remaining dense cells that have not yet been covered. The greedy algorithm terminates when all dense cells are covered. The steps of the CLIQUE algorithm are shown in Algorithm 1. BIRCH clustering The traditional way of finding waypoints requires a batch of manually identified waypoints, which are fed into the algorithm to help find further waypoints. Such an approach depends on the quality of the manually identified waypoints; if their quality is poor, the generated waypoints are not reliable. Therefore, the BIRCH algorithm is used to find the waypoints on the navigation trajectories automatically. The BIRCH algorithm [29] is a distance-based hierarchical clustering algorithm that takes memory usage into account: it aims to obtain the best possible clustering result with limited memory (usually very small compared to the dataset) and to reduce input/output over the dataset. The algorithm considers the time and space efficiency of the clustering process, the sensitivity to the data input order, and the accuracy of the final clustering result, which makes it particularly suitable for processing large datasets. The flow of the BIRCH algorithm is shown in Algorithm 2. The BIRCH algorithm summarizes information about clusters through clustering feature (CF) descriptions and then clusters these summaries. Suppose a cluster contains N d-dimensional data objects {x_i}; the clustering feature of the cluster is defined as CF = (N, LS, SS), where N is the number of objects in the cluster, LS is the linear sum of the N objects (i.e., the sum of x_i for i = 1, ..., N), and SS is the sum of squares of the objects (i.e., the sum of x_i² for i = 1, ..., N). The CF records the key quantities needed to compute clusterings while using storage efficiently, and measures of the distance between clusters are derived from these clustering features. The clustering feature tree in the BIRCH algorithm is a height-balanced tree that stores the features of clusters for hierarchical clustering.
According to the definition of the CF tree, the non-leaf nodes of the tree contain children and store the sum of the CF values of their children, i.e., the clustering features summarizing those children. The CF tree has three parameters: the non-leaf-node branching factor B, the leaf-node branching factor L, and the threshold T. The branching factor B limits the maximum number of children per non-leaf node, i.e., each non-leaf node contains at most B children; the branching factor L limits the maximum number of entries per leaf node; and T limits the maximum radius (or diameter) of the subclusters stored in the leaf nodes. Figure 3 is an example of a CF tree. In addition, the shape of the clustering feature tree can be changed by adjusting the threshold and the branching factors, and the clustering effect of different parameter combinations is then evaluated using the Silhouette Score. Finally, based on the constructed clustering feature tree, the clusters are reduced to the requested number of classes n_clusters (the optimal number of stored nodes); the nodes of the corresponding hierarchy level are selected as clusters and output as the clustering result. Network construction The waypoints were extracted from the AIS data as described in Sect. 3.4. In order to construct the route network, the following operations are performed. First, each ship corresponds to a unique MMSI number and each AIS record contains time information, so the route of each ship is extracted in chronological order according to the MMSI number. Second, each waypoint is taken as the center of a circle with radius r, and the AIS data within this circle are matched with the waypoint. Third, the extracted sailing trajectories traverse all the circles formed by the waypoints; if AIS data points on a sailing trajectory fall into the circle centered on a waypoint, those AIS data are marked as connected with that waypoint, and the waypoints are then connected sequentially according to the chronological order of the AIS data. Finally, all the waypoints are traversed by the different routes, and a route network with directions is constructed. The steps of the route network construction method are shown in Algorithm 3. Result This section presents a case study of the proposed multi-level clustering algorithm network based on high-dimensional AIS data. For the proof of concept, the case study area was randomly selected, and the regional geographic information is as follows. Latitude: 37.105536°N to 40.940382°N; longitude: 117.620811°E to 125.452704°E. In order to clearly demonstrate the effectiveness of the proposed algorithm and to avoid the influence of undesirable AIS data, this case considers the AIS data of container ships sailing at a speed of no less than 7 knots and not in the vicinity of a port. The configuration of the case study is shown in Table 1. Data processing The first step is to prune the AIS data. AIS data covering the 30 days from June 1 to June 30, 2021 were selected according to the geographic boundaries of the study area. Reading the initial AIS data yielded a total of 220,133 records; after excluding records with a speed below 7 knots or a distance of less than 185.2 km from a port (a distance of about 100 nautical miles was considered close to the port), 110,368 AIS records remained as the study data set.
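The distance computation used for the port pruning above and the DP compression applied in the next step can be sketched as follows; the perpendicular distance inside the DP recursion is computed in a local planar approximation of latitude/longitude, which is a simplification of the original formulation, and all names are illustrative.

```python
# Haversine distance (used for port pruning) and a recursive Douglas-Peucker
# compression of a (lat, lon) trajectory. The perpendicular distance uses a
# planar approximation, which is a simplification.
import numpy as np

R_EARTH_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi, dlmb = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * R_EARTH_KM * np.arcsin(np.sqrt(a))

def douglas_peucker(points, threshold):
    """points: (n, 2) array of (lat, lon); threshold in the same planar units."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    rel = points[1:-1] - start
    # perpendicular distance of each interior point to the start-end chord
    d = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / (np.linalg.norm(chord) + 1e-12)
    i = int(np.argmax(d))
    if d[i] < threshold:
        return np.vstack([start, end])        # discard all interior points
    split = i + 1                             # keep the farthest point and recurse on both halves
    left = douglas_peucker(points[:split + 1], threshold)
    right = douglas_peucker(points[split:], threshold)
    return np.vstack([left[:-1], right])
```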
The second step is to reduce the amount of AIS data with the DP algorithm. While preserving the shape characteristics of the route trajectories, a threshold of 50 m was selected for each trajectory in order to reduce the number of AIS data points. This value was determined from the characteristics of the local AIS data and could be further improved through an adaptive design to optimize the results. The purpose of this step is to improve the clustering speed and obtain better clustering results. At the end of the compression process, the number of AIS data points is reduced from 110,368 to 25,420, a compression ratio of 76.97%. CLIQUE clustering: directional trajectories Unlike road traffic, marine traffic is not restricted to roads and ships have more freedom when sailing, so there are mixed areas in which ships travel in different directions. In route planning, dividing the sailing lanes by direction of navigation helps to improve navigation efficiency and to reduce both safety risks and navigation costs. In order to divide the sailing trajectories by directional characteristics, the pre-processed AIS trajectories are clustered using CLIQUE, with latitude, longitude and COG as the inputs. Because the Bohai Sea covers a large area, it is divided into Laizhou Bay, Liaodong Bay, Bohai Bay, West Korea Bay and the Bohai Strait according to its composition, in order to improve the clustering accuracy. Since the amounts of AIS data in Laizhou Bay, Liaodong Bay, Bohai Bay and West Korea Bay are similar, the same parameters are set for these four areas: the number of grid-cell intervals per dimension is set to 10 and the outlier threshold is set to 10. For the Bohai Strait, which has a relatively large amount of data, the parameters are adjusted accordingly: the number of grid-cell intervals per dimension is set to 45 and the outlier threshold is set to 35. The clustering results are shown in Fig. 4 (where a, b, c, d and e correspond to Laizhou Bay, Liaodong Bay, Bohai Bay, West Korea Bay and the Bohai Strait, respectively). Each color represents a different direction of the channel, and the resulting trajectory AIS data points carry latitude, longitude and COG. BIRCH clustering: major waypoints identification After the AIS data of the main channels are obtained, the BIRCH algorithm is used to find the waypoints. The three main parameters of the BIRCH algorithm are set to a threshold of 0.4, n_clusters of 2, and a branching factor of 50. The node centers of the constructed clustering feature trees are used as the clustering results and output. BIRCH provides a clustering method for very large datasets; by focusing on densely occupied regions, it reduces large clustering problems to a compact summary. The effectiveness of the BIRCH clustering is evaluated using the Silhouette Score; the evaluation scores are shown in Table 2. A Silhouette Score of 0.5 or higher provides good evidence that the clustering reflects real structure in the data. Therefore, the clustering effect of the selected parameters is ideal.
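A compact sketch of this waypoint-identification step, using scikit-learn's Birch implementation with the parameter values quoted above and a silhouette check; the feature matrix `lane_points` (latitude, longitude and COG of one main lane) is an assumed input, and this is illustrative rather than the authors' exact code.

```python
# Waypoint identification for one main lane with BIRCH (threshold 0.4,
# branching factor 50, 2 clusters), evaluated with the Silhouette Score.
import numpy as np
from sklearn.cluster import Birch
from sklearn.metrics import silhouette_score

def find_waypoints(lane_points):
    """lane_points: (n, 3) array of (lat, lon, cog) for one clustered lane."""
    model = Birch(threshold=0.4, branching_factor=50, n_clusters=2)
    labels = model.fit_predict(lane_points)
    score = silhouette_score(lane_points, labels)
    centers = np.array([lane_points[labels == c].mean(axis=0)
                        for c in np.unique(labels)])      # waypoint candidates
    return centers, score
```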
Figure 5 shows the results of the BIRCH clustering, where A is Laizhou Bay, B is Liaodong Bay, C is Bohai Bay, D is West Korea Bay, and E is the Bohai Strait. The blue AIS data points are the main shipping lanes, and the orange points are the clustering results, i.e., the waypoints found by BIRCH; a shows the waypoints of Laizhou Bay, b Liaodong Bay, c Bohai Bay, d West Korea Bay, and e the Bohai Strait. The generated waypoints contain information such as longitude, latitude, and COG. Network construction After the waypoints are extracted, they are connected according to the method proposed in Sect. 3.5. First, the ship trajectory data within the Bohai Sea from January 1 to June 30, 2021 were extracted; 18,853 ship trajectories were obtained from these six months of data according to the unique MMSI numbers and the time tags of the AIS data of each ship. In order to obtain a more comprehensive route network, the waypoints extracted in Sect. 3.5 were optimized and a more thorough search of the five main parts of the Bohai Sea was conducted, yielding a total of 440 waypoints. Since ports are also an important part of a route network, a total of 29 ports in the Bohai Sea are considered as well. Following the method in Sect. 3.5, the 440 waypoints and 29 ports are traversed by the 18,853 ship trajectories to build the route network. Figure 6 shows the constructed route network. As shown in Fig. 6, each waypoint is marked with a different color, and each color represents a different COG direction; the COG directions corresponding to the different colors can be read from the gradient spectrum on the right of Fig. 6. It can also be seen in Fig. 6 that some port points are not connected with other waypoints. This is because only container ships are considered in the study and no data of other ship types are used; some ports are not container ports and therefore have no records of container ship arrivals, so these ports remain unconnected when the route construction is carried out. Figure 6 also shows cases in which two waypoints are not connected. Since the AIS data contain errors and gaps, the extracted route trajectories inevitably include incomplete trajectories. To remedy this deficiency, six months of navigational trajectory data were used to minimize the missing connections between waypoints caused by incomplete trajectories. As a whole, the constructed route network is complete and the routes in different directions can be clearly identified, which benefits subsequent route planning. The quality of the constructed route network is discussed in the Discussion section. Comparison with clustering algorithms For comparison with the method in Sect. 4.2, the traditional DBSCAN algorithm, the K-means algorithm, and the CLIQUE algorithm without directional input are used for trajectory clustering of the AIS data. When DBSCAN is used for trajectory clustering, the AIS data points identified as noise are represented by black dots, as shown in Fig. 7a. DBSCAN generates too many noise points, and a large number of trajectory features and much information are lost, as shown in Fig. 7b. After removing the noise points, fewer trajectories are retained, many AIS data points are grouped into the same clusters, and no useful track information is obtained.
Figure 7a, b shows the problems encountered by DBSCAN when dealing with larger data volumes and datasets of uneven density. As shown in Fig. 7c, the K-means algorithm is used to cluster the AIS trajectories, with 10 clusters set in advance. From the results it can be seen that the clusters are interlaced and the AIS data points in the same cluster are scattered; although the original trajectory characteristics can be retained, it is impossible to extract routes from the clustering results. As shown in Fig. 7d, when the CLIQUE algorithm is used without direction information as input, the resulting clusters are mixed together, and fewer trajectories are retained after a large number of AIS data points are removed. Compared with the methods in Fig. 7, the CLIQUE algorithm that takes latitude, longitude, and COG as input retains the original trajectories effectively and, on that basis, uses the additional information carried by COG to divide the navigation paths by directional characteristics, leading to a better clustering effect. Case study The evaluation method uses the output to construct a recommended route and then compares it with the actual proposed route. The vessel's route is compared with the route generated by the navigation software Vessel Value Visualization. For testing purposes, two routes within the Bohai Sea were selected: Tianjin to Dalian and Baiyuquan to Dalian. As shown in Fig. 8, the black points are the trajectories planned by Vessel Value Visualization and the orange points are the trajectories based on the constructed sea route network, where a is the comparison from Tianjin to Dalian and b is the comparison from Baiyuquan to Tianjin. The paths provided by the constructed route network and the real route trajectories have different numbers of waypoints, so they are not easy to compare directly. When comparing trajectories, the Hausdorff distance [13] is usually used; this measure evaluates the distance between trajectory segments in three aspects: perpendicular distance, parallel distance, and angular distance. For two waypoint-based route trajectories traj_A = {a_1, a_2, ..., a_n} and traj_B = {b_1, b_2, ..., b_n}, the Hausdorff distance is calculated by Eq. (5) as H(traj_A, traj_B) = max{ h(traj_A, traj_B), h(traj_B, traj_A) }, where ||·|| denotes the Euclidean distance between the coordinate points of ship trajectory A and ship trajectory B, and h(traj_A, traj_B) and h(traj_B, traj_A) are the two directed distances; H(traj_A, traj_B) is the basic form of the Hausdorff distance, i.e., the maximum of the two directed distances. With this design, the shape similarity between two trajectories can be obtained without considering their lengths. The similarity of the two route trajectories in Fig. 8 is shown in Table 3, which also covers six randomly selected ports in the Bohai Sea. The trajectories of the routes generated by the network constructed in Sect. 4.4 are compared with the trajectories of the routes planned in Ship Vision, and the results are shown in Table 3. As can be seen from Table 3, except between ports where container ships do not sail, the route planning made using the waypoints found in Sect. 4.3 and the route network constructed in Sect. 4.4 is consistent with the real historical routes and with the route recommended by Vessel Value Visualization, and the results are good.
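The symmetric Hausdorff comparison described above can be sketched with SciPy's directed Hausdorff distance, taking the maximum of the two directed distances over the (lat, lon) waypoint arrays; the inputs are illustrative.

```python
# Basic (symmetric) Hausdorff distance between two waypoint-based trajectories.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(traj_a, traj_b):
    """traj_a, traj_b: (n, 2) and (m, 2) arrays of (lat, lon) waypoints."""
    h_ab = directed_hausdorff(traj_a, traj_b)[0]   # directed distance A -> B
    h_ba = directed_hausdorff(traj_b, traj_a)[0]   # directed distance B -> A
    return max(h_ab, h_ba)                         # H(A, B) = max of the two
```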
The proposed method has verified that ocean trajectories can be clustered on the basis of higher-dimensional data with an adjusted number of classes. Waypoint extraction with CLIQUE-BIRCH took a total of one minute, and traversing all waypoints with the 18,853 navigational trajectories constructed the route network in less than 4 min, for a total time of less than 5 min. Compared with other methods such as genetic algorithms, which take more than 3 h on average to extract waypoints and more than 5 min on average to build route networks, the proposed method shows a significant reduction in computational time, saving computational resources and time costs. However, the current study considers only one additional dimension (direction) and the number of classes still needs to be set manually, so a self-tuning clustering algorithm for classifying ocean trajectories remains a major open problem. Conclusion As a significant first step toward a multi-featured clustering route network construction method based on real-world AIS data, a proof of concept is presented that exploits the numerous attributes contained in AIS data. Useful information in the AIS data is mined to cluster route trajectories with directions, waypoints on the routes are identified by an additional layer of clustering, and a maritime route network is constructed by connecting waypoints according to the connections between AIS data points. Since the dataset used in the experiment is a real-world AIS dataset, the experimental results can be extended and used for waypoint finding and maritime route network construction in other sea areas. The focus of this paper is the search for sea waypoints and the construction of route networks. In the research process, only container ships with a sailing speed of no less than 7 knots are considered, and no other ship types are included. Also, other factors affecting route selection, such as weather, fuel consumption and distance, are not studied. Therefore, in subsequent studies, the selection of waypoints and the construction of route networks under different factors will be considered more comprehensively. Future work Waypoint detection and multi-featured network construction have the potential to shape the future of ship path planning. To go beyond the proposed method, more features besides the direction property can be included in this work. In addition, given the performance of the current method, the clustering process could also be applied in real time. Furthermore, the detected waypoints could be divided into classes to find the different layers of the network.
8,365
2021-11-18T00:00:00.000
[ "Engineering", "Computer Science" ]
Leaf Recognition Based on Elliptical Half Gabor and Maximum Gap Local Line Direction Pattern Plant identification via leaf images is very meaningful to agricultural information. The existing methods were based on one or two kinds of the three distinct characteristics in leaf images including leaf contours, textures and veins. This limits their recognition performance and scope of application. This paper describes a novel counting-based leaf recognition method, which can directly and effectively combine all of the three kinds of significant characteristics in leaf images. In order to obtain the stable and independent local line responses from leaf contour, texture and vein, elliptical half Gabor is introduced and convoluted with the raw grayscale leaf images, and then maximum gap local line direction patterns are extracted from the local line responses and normalized in direction by cyclically right shifting these patterns until the most numerous bit plane with a value of 1 to the left bit. The histogram of the normalized patterns is calculated and regarded as the counting-based local structure descriptor, and support vector machine is utilized as the classifier. Experimental results on three frequently used leaf databases show that the proposed approach yields a better performance in terms of the classification accuracy, applicability and feasibility in comparison with the state of the art methods. I. INTRODUCTION Automatic plant identification systems are very meaningful to agricultural information and ecological protection. The biological or phytochemical property-based techniques such as morphological anatomy, molecular biology and phytochemistry require complex processing, so they are not suitable for online applications [1], whereas, the plant recognition based on image analysis can extract plant features directly from living plants, and is suitable for online applications. The images from flowers, fruits, roots and leaves can be used for plant recognition, among them, leaf images are the most feasible ones, so plant recognition based on leaf images has attracted more attentions [1]. Popular leaf image recognition works pay attention to shape features [2]- [7]. In these methods, leaf contours are firstly determined by a pre-processing course, and then the curvature scale space [2], [3], inner distance shape The associate editor coordinating the review of this manuscript and approving it for publication was Wei Zhang. context [4]- [6] or multiscale distance matrix [7] approaches are utilized to extract global shape features invariant to position, scale and direction variations. Kumar et al. [2] design a mobile application for plant identification. They extract curvature features from the pre-processed leaf images, and use a nearest neighbor classifier with histogram intersection as the distance metric for classification. Ling and Jacobs [4], [5] propose a shape classification method called inner distance shape context. They sample points along the boundary of a shape, and build a 2D histogram descriptor at each point. This histogram represents the distance and angle from each point to all other points, along a path restricted to lie entirely inside the leaf shape. Belhumeur et al. [6] present an automatic plant identification system using inner distance shape context and a nearest neighbor classifier. Hu et al. [7] construct a leaf image recognition method with the multiscale distance matrix and the nearest neighbor rule with Euclidean distance. 
These methods yield a good identification performance for plants with significantly different leaf contours, but they are generally sensitive to the quality of the pre-processing results. In practice, there are many different plant species with similar overall contours, and the same plant species can possess leaves with different overall shapes. Hence, their discriminability is not strong enough for all plant recognition applications. Besides leaf contours, texture and vein are the other two most distinct and significant characteristics in leaf images. The contour-based methods extract features only from the boundaries of leaves and neglect other useful characteristics in the leaf images. In order to alleviate the drawbacks mentioned above and reduce the sensitivity to the quality of the pre-processing results, many works extend texture analysis techniques to leaf classification. Casanova et al. [8] calculate the energies of the responses of Gabor filters as texture features. Liu et al. [9] propose a leaf classification method using wavelet transforms and a support vector machine. In our early work [10], we combine dual-scale decomposition with local binary descriptors for plant leaf recognition. Naresh and Nagendraswamy [11] extract the local texture structure using a modified Local Binary Patterns (LBP) approach. Tang et al. [12] combine the Gray Level Co-Occurrence Matrix (GLCM) with LBP for tea leaf classification. Meanwhile, some works focus on the vein characteristics. Fu and Chi [13] combine a thresholding approach with a neural network for extracting vein patterns from leaf images. Park et al. [14] classify vein structures using the pattern of end points and branch points. Larese et al. [15] extract vein patterns from legume leaves using the hit-or-miss transform. Compared with the contour-based methods, these texture- or vein-based methods are less sensitive to the quality of the pre-processing results and achieve reasonable results when different plant species have similar overall contours or when the same plant species has leaves with different overall shapes, but their classification accuracies are generally lower than those of state-of-the-art contour-based methods for universal plant recognition. Nevertheless, these works adequately demonstrate that texture and vein patterns are very helpful for leaf identification. Deep learning techniques are also used for leaf identification [16], [17]. Grinblat et al. [18] train a convolutional neural network on vein morphological patterns for leaf identification. Lee et al. [19] design a multiscale fusion convolutional neural network to fuse the features extracted from leaf images at different scales. Hu et al. [20] learn useful leaf features directly from raw leaf image data using a convolutional neural network and quantify the learned features with a deconvolutional network for species identification. They also analyze the subset of features that are most important for describing leaf data via feature visualization techniques, and find that venation structure is a very important feature for identification, especially when the shape feature alone is inadequate. These methods report promising recognition performance. Moreover, some of them start directly from the raw leaf images and are free of the pre-processing step.
However, it is well known that deep convolutional neural networks require very large amounts of training data, the number of samples in existing leaf image databases is still far from matching the scale and variety of existing general major databases for images, videos or languages. So, to efficiently train a deep architecture to recognize plant leaf images, much larger datasets are required, preferably with more than a million images and higher category variability [18]. Recently, Zhao et al. [21] find that it is better to ''count'' the number of certain shape patterns rather than to match the extracted global shape features in a point-wise manner. Based on this idea, they propose a counting-based shape descriptor for identifying plant species. Similarly, Cerutti et al. [22] utilize a sequence-like structured representation to track the spatial information from curvature space and present a smartphone application for identifying plant species. Zhang et al. [23] apply Warshall algorithm [24] to label propagation of leaf contours to obtain a label matrix, and then the matrix is dealt with discriminant neighbors to form an optimum projecting to low dimensionality, the resultant descriptors are classified by the nearest neighbor classifier. Benefitting from the idea of counting, these methods generally outperform over the conventional contour-based methods in terms of classification accuracy and applicability, but they extract counting-based shape features only from the boundaries of leaves, and neglect other useful characteristics such as veins and textures in the leaf images. Furthermore, their sensitivity to the quality of the pre-processing results is in line with the conventional contour-based methods. In order to take advantage of the three kinds of significant characteristics in leaf images and make leaf recognition free of the pre-processing course, motivated by aforementioned inspirational conclusions about leaf identification including importance of venation structure [18] and advantage of counting the number of local shape patterns over matching global shape features point by point [21]- [24], we propose a novel counting-based local structure descriptor for identifying plant species. In order to fair combine the three most significant characteristics including shape, venation and main texture in leaf images, we design a new kind of Gabor wavelet namely elliptical half Gabor wavelet to highlight the local dominant orientation information of leaf contours, veins and main textures, and then we present an improved local line direction coding approach named as Maximum Gap Local Line Direction Pattern (MGLLDP) to extract local dominant orientation structure patterns from the elliptical half Gabor wavelet domain. The histogram of the normalized local dominant orientation structure patterns is regarded as the counting-based local structure descriptor, and support vector machine is utilized as the classifier. The remainder of this paper is organized as follows. Section II provides a brief review of local line direction pattern and support vector machine. The proposed counting-based local structure descriptor is described in Section III, and the experimental results are presented in Section IV. In Section V, the concluding remarks are provided. II. LOCAL LINE DIRECTION PATTERN AND SUPPORT VECTOR MACHINE A. LOCAL LINE DIRECTION PATTERNS As a kind of local image descriptors, Local Binary Pattern (LBP) [25] has been successfully applied to many computer vision applications. 
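As background for the descriptors discussed next, the following minimal Python sketch shows the plain 8-neighbor LBP code and its histogram; it illustrates the general LBP idea only and is not the leaf descriptor proposed in this paper.

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbor LBP on a grayscale image (border pixels skipped)."""
    img = np.asarray(img, dtype=float)
    # Offsets of the 8 neighbors, one bit per neighbor.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((neighbor >= center).astype(np.uint8) << bit)
    return code

def lbp_histogram(img):
    """256-bin normalized histogram of LBP codes, usable as a local texture descriptor."""
    codes = lbp_8_1(img)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```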
Currently, the trend in LBP-like descriptors is to encode edge gradient information rather than intensity information, as in the Local Directional Pattern (LDP) [26], Enhanced Local Directional Pattern (ELDP) [27], Gradient Directional Pattern (GDP) [28] and Local Directional Number (LDN) pattern [29]. Since edge gradients are more stable than pixel intensities, these local descriptors yield a better recognition performance than the original LBP. Recently, Luo et al. [30] proposed a new kind of local descriptor named the Local Line Directional Pattern (LLDP) for palmprint recognition, in which the local line directional information is encoded instead of edge gradients. Let m_i (i = 0, 1, ..., L) be the line responses in the L + 1 sampled directions; the LLDP code is formed as LLDP = Σ_{i=0}^{L} b_i(m_i − m_k) × 2^i, with b_i(x) = 1 if x < 0 and b_i(x) = 0 otherwise, where m_k is the k-th minimum directional response. The reason for setting b_i(x) = 1 when x < 0 is that palmprint lines appear as dark lines in palmprint images. According to the experimental results in [30], LLDP outperforms the LBP-like codes based on edge gradients. B. SUPPORT VECTOR MACHINE The support vector machine (SVM) is a frequently used classifier developed by Vapnik [31] based on statistical learning theory. Its fundamental principle can be summarized as follows. For a two-class linearly separable problem, as shown in Fig. 1, the two symbols denote the two classes of samples, H represents the optimal separating hyperplane to be determined, and H_1 and H_2 are two hyperplanes parallel to H with no training sample falling between them. According to statistical learning theory, H should be the separating hyperplane that separates the two categories with the maximum margin, so that the structural risk is minimized. Consequently, solving for H is equivalent to a constrained optimization problem. Given a training set of N points {x_i, y_i}, the task is to maximize W(λ) = Σ_{i=1}^{N} λ_i − (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} λ_i λ_j y_i y_j (x_i · x_j) under the constraints Σ_{i=1}^{N} λ_i y_i = 0 and λ_i ≥ 0, i = 1, 2, ..., N, where the λ_i are Lagrange multipliers. For linearly non-separable problems, the inner product x_i · x_j is replaced by a kernel function K(x_i, x_j), selected from typical functions such as the radial basis function kernel K(x_i, x_j) = exp(−‖x_i − x_j‖^2 / (2σ^2)) and the hyperbolic tangent kernel K(x_i, x_j) = tanh(κ(x_i · x_j) + c). In this work, we select the radial basis function with σ = 6.28 as the kernel function for the support vector machine. III. THE PROPOSED METHOD The block diagram of our method is shown in Fig. 2. The input raw color leaf image is converted to a gray-level image f(i, j) and convolved with the elliptical half Gabor wavelets to extract the line responses in 12 orientations at each location (i, j). Afterwards, maximum gap local line direction patterns are extracted from these line responses and normalized in direction by cyclically right-shifting the patterns until the bit plane containing the most 1s is moved to the leftmost bit. The histogram of the normalized patterns is calculated and regarded as the counting-based local structure descriptor, and a support vector machine is utilized as the classifier. A. ELLIPTICAL HALF GABOR WAVELET The Gabor wavelet is one of the most effective tools for texture and orientation analysis due to its useful properties, including accurate time-frequency localization and robustness against varying brightness and contrast of images [32]. The real part of the classical 2D-Gabor wavelet can be expressed as [33] G(x, y, θ, µ, σ) = (1/(2πσ^2)) exp(−(x^2 + y^2)/(2σ^2)) cos(2πµ(x cos θ + y sin θ)), where µ is the radial frequency per unit length, θ denotes the orientation of the Gabor function, and σ represents the standard deviation of the Gaussian envelope. Apparently, as shown in Fig.
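To illustrate how the counting-based histogram descriptors could be classified with an RBF-kernel SVM as described above, here is a minimal scikit-learn sketch; the random feature matrix, the label vector, and the mapping of σ = 6.28 onto scikit-learn's gamma parameter (gamma = 1/(2σ²)) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

# X: one normalized MGLLDP histogram per leaf image, y: species labels.
# Both are placeholders; in the paper these come from the descriptor stage.
X_train = np.random.rand(100, 4096)
y_train = np.random.randint(0, 15, size=100)
X_test = np.random.rand(20, 4096)

sigma = 6.28
clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2), C=1.0)  # RBF kernel exp(-||x-x'||^2 / (2*sigma^2))
clf.fit(X_train, y_train)
predicted_species = clf.predict(X_test)
```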
3, the classical 2D-Gabor wavelet has two drawbacks in line response analysis. One is the usage of symmetric Gaussian envelope forming isotropic weight parameters, which results in unwanted distortions to line responses from the pixels far away from direction θ. The other is that the response VOLUME 8, 2020 of classical 2D-Gabor wavelet for a certain orientation θ contains line responses in two directions θ and θ +π, that is to say, it is difficult to obtain the two directional line responses independently using the classical 2D-Gabor wavelet. In order to overcome the second drawback, Fei et al. [33] proposed a modified 2D-Gabor wavelet for palmprint recognition namely half Gabor wavelet. However, their work contains a crucial mistake in the definition of the half-Gabor filters. In our early work, we present the correct version, which was defined as [34] r G(x, y, θ, µ, σ ) = G(x, y, θ, µ, σ ) if (x cos θ +y sin θ) ≥ −T 0 else (8) and where threshold T is a nonnegative number, in their work, T was set as 2. Obviously, they split the classical 2D-Gabor wavelet into two Gabor filters along the direction perpendicular to θ with an overlap region of 2T width, providing Gabor filters r G(x, y, θ, µ, σ ) for orientation θ and s G(x, y, θ, µ, σ ) for orientation θ + π. In order to overcome aforementioned two drawbacks synchronously, combining the idea of half Gabor wavelet, we provide an improved 2D-Gabor wavelet namely elliptical half Gabor wavelet, which is defined as and EH s G(x, y, θ, µ, σ l , σ s ) where EH s G(x, y, θ, µ, σ l , σ s ) denotes the real part of 2D-Gabor wavelet with an 2D-elliptical Gaussian envelope, which is given by where σ l denotes the major axis of the elliptical envelope, and σ s represents its minor axis. Note that the long axis of the elliptical envelope along orientation θ, and the short axis perpendicular to θ. As shown in Fig. 3 and 4, in elliptical half Gabor wavelet, the Gabor filter at θ direction is split into two half Gabor filters along the direction perpendicular to θ, which can provide two directional line responses for θ and θ + π independently. Furthermore, compared with the Gabor filter for classical 2D-Gabor wavelet, the elliptical half Gabor filters are more compact in the direction perpendicular to θ, and enlarged along θ orientation. This means that our elliptical half Gabor wavelet can effectively reduce the impact on line responses from the pixels deviated from the θ orientation and enhance the contributions from the concerned pixels along θ. B. MAXIMUM GAP LOCAL LINE DIRECTION PATTERNS It has been proved empirically that LLDP defined in Eq.1 outperforms over the LBP-like codes based on edge gradients or pixel intensities [20], but there are still some weaknesses in the original definition of LLDP. The first shortcoming comes from the determination of m k which was simply set to the third minimum directional response in reference [20] i.e. only the first two dominant orientations were encoded as 1. Since in a real scene image, the number of dominant orientations per pixel is different, for example, as shown in Fig. 5(a), the number of principal directions of the point a, b, c, d is 3,4,0,2, respectively, it is not proper only considering first two dominant orientations for all pixels. This will cause to omit part of principal direction information for pixels with a number of principal directions greater than 2 such as point a, and bring unwanted clutters for pixels with a number of principal directions less than 2 such as point c. 
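A sketch of how an elliptical half Gabor filter of the kind described above could be constructed is given below; the orientation sampling, the values of µ, σ_l, σ_s and T, and the omission of a normalization constant are assumptions made for illustration only.

```python
import numpy as np

def elliptical_half_gabor(size, theta, mu, sigma_l, sigma_s, T=2):
    """Real part of an elliptical half Gabor filter (a sketch following the description above).

    The Gaussian envelope is elongated along theta (std sigma_l) and compressed
    perpendicular to it (std sigma_s); the half-plane mask keeps only the side
    (x*cos(theta) + y*sin(theta)) >= -T, so the theta and theta+pi responses stay separate.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Coordinates rotated so that u runs along theta and v perpendicular to it.
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(u ** 2 / (2 * sigma_l ** 2) + v ** 2 / (2 * sigma_s ** 2)))
    carrier = np.cos(2 * np.pi * mu * u)
    g = envelope * carrier
    g[u < -T] = 0.0          # half-plane cut with an overlap band of width 2T
    return g

# One bank of 12 directions covering 0 .. 2*pi, as assumed here for the line responses.
filters = [elliptical_half_gabor(31, k * np.pi / 6, mu=0.05, sigma_l=8, sigma_s=3) for k in range(12)]
```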
The second shortcoming is that in the light of the original definition of LLDP, a non-zero LLDP code will be generated for each pixel even if the pixel is located in smooth region like pixel c. This is unreasonable, and the non-zero LLDPs from pixels located in the smooth region will disturb and degrade overall discriminability of LLDP. We address the two shortcomings of the original LLDP, and propose an improved LLDP namely Maximum Gap Local Line Direction Pattern (MGLLDP). Given a line response set where T is a threshold, D g denotes the maximum gap in {m i }(i = 0, 1, . . . L), and m g represents the superior of the two line responses associated with the maximum gap D g which can be obtained as follows 1.) Sort {m i }, (i = 0, 1, · · · L) in ascending order. 2.) Calculate the differences between adjacent values of the ascending sequence, a difference sequence {D i }, (i = 0, 1, · · · L − 1) is obtained. 3.) The maximum gap D g is determined by calculating the maximum of the difference sequence {D i }, (i = 0, 1, · · · L − 1). 4.) The larger one of the two line responses associated with the maximum gap is regarded as m g . The advantages of MGLLDP over the original LLDP are 1.) the number of principal directions for each pixel center can be adaptively determined, as a result, all principal directions can be properly encoded, no principal direction is neglected and no unwanted clutter is pulled in. 2.) for the pixels located in smooth region, which maximum gaps are less than the threshold T , their MGLLDPs equal zero. This can alleviate the distortions from the pixels without principal direction. Furthermore, by varying the threshold T , one can balance extracting the weak principal direction information with filtering the distortions. The coding images for leaf image shown in Fig. 5(a) using the combination of the original Gabor with LLDP and our method based on the elliptical half Gabor and MGLLDP with different threshold are shown in Fig. 5(b), (c), (d), (e) and (f). From these figures, it can be seen that coding image Fig. 5(c) is more clear and distinct than coding image Fig. 5(b). This maybe due to that the elliptical half Gabor can effectively reduce the unwanted distortions from the pixels far away from the direction to be analyzed and the number of principal directions for each pixel center can be adaptively determined, and so no clutter is pulled in during the MGLLDP coding phase. Furthermore, from coding images Fig. 5(d), (e) and (f), it can be observed that we can availably balance extracting the weak principal direction information with filtering the distortions by adjusting the threshold T . C. THE NORMALIZATION OF MGLLDP IN ORIENTATION AND SCALE Obviously, MGLLDP is invariant to images' position change, but relatively sensitive to the change of orientation and scale. In order to improve the robustness of MGLLDP to orientation changes, we provide a directional normalization method for MGLLDP, which is described as VOLUME 8, 2020 1.) Count the number of 1 for each bitplane in overall MGLLDPs extracted from an image with size of N ×M . 2.) cyclically right shift all MGLLDPs until the most numerous bit plane with a value of 1 to the left bit. The histogram H (MGLLDP) of the normalized MGLLDPs is calculated, in order to make H (MGLLDP) less sensitive to scale change, the histogram is normalized by The normalized histogram H (MGLLDP) is regarded as the counting-based local structure descriptor. IV. 
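The per-pixel MGLLDP coding described above can be sketched as follows; the exact bit ordering and the final form of the code are inferred from the text and should be treated as an illustration rather than the authors' implementation.

```python
import numpy as np

def mglldp_code(responses, T):
    """Maximum Gap Local Line Direction Pattern for one pixel (a sketch based on the text).

    responses: line responses m_0 .. m_L for that pixel (one per sampled direction).
    Directions whose responses lie below the larger endpoint of the maximum gap are
    taken as principal directions and encoded as 1; if the maximum gap is below the
    threshold T, the pixel is treated as smooth and the code is 0.
    """
    m = np.asarray(responses, dtype=float)
    order = np.argsort(m)                 # ascending order of the responses
    diffs = np.diff(m[order])             # gaps between adjacent sorted responses
    g = int(np.argmax(diffs))
    if diffs[g] < T:                      # smooth region: no principal direction
        return 0
    m_g = m[order[g + 1]]                 # the larger of the two responses around the gap
    bits = (m < m_g).astype(np.uint64)    # principal (dark-line) directions get a 1
    return int((bits << np.arange(m.size, dtype=np.uint64)).sum())
```

The orientation normalization described above would then cyclically rotate the bits of all codes in an image so that the most populated bit plane moves to the leftmost position, before the histogram is accumulated.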
SIMULATION RESULTS AND PERFORMANCE ANALYSIS In this section, we will conduct experiments on three frequently used leaf databases: Swedish, Flavia and ICL database, in comparison with half Gabor and LLDP version and eight representative methods including two contour-based methods i.e. Inner Distance Shape Context (IDSC) [4] and Multiscale Distance Matrix (MDM) [7], two texture-based methods namely Gabor based method [8] and Wavelet based method [9], two deep learning-based methods named as deep learning on veins [18] and multiscale fusion convolutional neural network [20], two recent counting-based methods i.e. pattern counting approach [21] and Label propagation projection [23]. A. SELECTION OF THRESHOLD T AND PREPARATION OF LEAF IMAGES In order to balance extracting the weak principal direction information with filtering the distortions, in this subsection, an appropriate threshold is determined via experiment, in which ICL database is considered. The ICL leaf database downloaded from [35] was provided by the Intelligent Computing Laboratory (ICL) of institute of intelligent machines, Chinese academy of sciences, which contains 17,032 leaf images from 221 plant species. In this experiment, 30 plant species are randomly selected from ICL leaf database, the first ten leaf images for each plant species are regarded as training samples and the remainders are used for testing. The threshold T changes from 10 to 90 with 10 increments, the corresponding classification accuracies are determined and summarized in Fig. 6. Based on this figure, threshold T is set to 40 in subsequent experiments. Since the normalization of MGLLDP in orientation and scale is provided in our method, our method directly works on the corresponding grey-level leaf images, while for the compared methods, the plant leaf images were prepared strictly in accordance with their original references. Notes that some compared methods [8], [9], [21], [23] require an orientation and scale normalization operation in their preparation courses. In addition, the two shape-based methods [4], [7] directly started with the contour of leaf, there is no details about how to obtain the contour in their original papers, and so, for the two methods, the contour was obtained by using the contour extraction approach in [21]. Considering that limited training samples do not effectively highlight the advantages of deep learning, each image in training sets is further augmented to 20 images by the horizontal mirror, rotation (from 0 to 180 with 10 increments) and illumination changes simulated by Gamma correction with Gamma varying from 1.2 to 2.4 with 0.2 increments. All images are scaled to 256 ×256 pixels for multiscale fusion convolutional neural network. For deep learning on vein method, the hit or miss transform in [15] is used to extract vein morphological patterns, and then a central pach (100×100 pixels) of the resultant image is cropped and the rest of the image is discarded. B. PERFORMANCE 1) ON SWEDISH DATABASE The Swedish leaf database contains 1125 leaf images from 15 different Swedish trees with 75 images per species. We randomly select 25 leaf images per species for training and the rest 50 images were used for test. The classification accuracies for the aforementioned methods are listed in Table 1. As can be seen, the classification accuracies of texture-based methods are generally lower than the contour-based methods. 
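For reference, the augmentation protocol described above (horizontal mirroring, rotations from 0° to 180° in 10° steps, and gamma corrections from 1.2 to 2.4 in 0.2 steps) could be sketched as below; since the exact combination yielding 20 images per original is not fully specified, the sketch simply enumerates each transformation family.

```python
import numpy as np
from scipy.ndimage import rotate

def augment(gray_img):
    """Generate augmented training images roughly as described above (an illustrative sketch).

    gray_img is assumed to be an 8-bit grayscale array with values in [0, 255].
    """
    out = []
    for base in (gray_img, gray_img[:, ::-1]):            # original and horizontal mirror
        for angle in range(0, 190, 10):                    # rotations from 0 to 180 degrees, step 10
            out.append(rotate(base, angle, reshape=False, mode="nearest"))
    for gamma in np.linspace(1.2, 2.4, 7):                 # illumination change via gamma correction
        out.append(((gray_img / 255.0) ** gamma) * 255.0)
    return out
```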
Among the contour-based methods, the counting-based methods yield a better performance than the methods matching the extracted contour features in a point-wise manner, and the lower performance of deep learning on vein method may due to that they only consider a central patch (100×100 pixels) of leaf images. Benefitting from the advantages of elliptical half Gabor and MGLLDP, our method yields a 5% increment on the classification accuracy of Half Gabor and LLDP method and outperforms the other eight methods. 2) ON FLAVIA DATABASE Flavia database downloaded from [36] contains 1907 leaf images of blades without petioles from 32 different species. The first 25 leaf images per species are used as the training set and the other as the test set. The classification accuracies for the aforementioned methods are determined and listed in Table 2. As can be observed, our method also performs the best among the nine methods, compared with the results on Swedish leaf database, the performance of the texture-based methods increases slightly, while the classification accuracy of the contour-based methods declines within a small range, which indicates that petioles indeed provide useful information for contour-based methods [7], and unwanted distortions for the texture-based methods. 3) ON ICL DATABASE The ICL leaf database contains 17,032 leaf images from 221 plant species. Three datasets are obtained by carefully selecting the samples with specific characteristics from ICL database. Dataset A consists of all samples of ICL database. The species in dataset B are carefully selected and most of them without vein and texture or with weak veins and textures, part of them are shown in Fig. 7, which contains 3390 leaf images from 44 plant species. Dataset C consists of 2312 leaf images from 30 species, as shown in Fig. 8, most of them with similar overall contours and different veins or textures. For the three datasets, the first 15 leaf images per species are used as the training set and the other as the test set, the classification results are summarized in Table 3. From Table 3, we have several observations 1.) the classification accuracies of these methods on ICL leaf database are generally lower than the results on the two previous databases. For the deep learning methods, this may be due to that the number of training samples for each species is reduced from 25 to 15. As to the other methods, the reason could be that the samples in the ICL leaf database were captured at more complicated conditions and with more diversified characteristics. Since our method contains the normalized processes for orientation and scale changes, and is invariant to images' position change, moreover, the elliptical half Gabor wavelet inherits the useful properties of Gabor wavelet including accurate time-frequency localization, robustness against varying brightness and contrast of images, which can effectively enhance the contributions from the concerned pixels, our method can better adapt to various complicated conditions and performs the best among the seven methods. 2.) For the leaf images with weak veins and textures i.e. dataset B, the performance of texture-based methods and deep learning methods has been significantly VOLUME 8, 2020 reduced, while that of contour-based methods has improved slightly. Our method still performs the best among the nine methods, but there have been a slight decline. 
This indicates that our counting-based local structure descriptor is able to represent contours very well, and lack of texture and vein information has negative impact on its classification capacity. 3.) On dataset C, in which, the leaf images are with similar overall contours and different veins or textures, the performance of texture-based methods has increased dramatically, while that of contour-based methods has dropped in different degrees, whereas our method outperforms the other nine methods with a slight rise. This may be due to that our counting-based local structure descriptors combine the contours, veins and textures efficiently, and parts of them from the similar contours have dragged down the rise. In brief, from the aforementioned experimental results, it can be concluded that our method outperforms the eight representative methods, and can better adapt to various complicated conditions and application situation. It is worth noting that unlike the compared methods [8], [9], [21], [23], which require a preprocessing course and an orientation and scale normalization operation in the preparation course, our method directly starts with the raw grayscale leaf images, so it has wide applicability and good feasibility. V. CONCLUSION We have proposed a novel counting-based leaf recognition method based on the elliptical half Gabor wavelet and maximum gap local line direction patterns. The advantages of our methods over the state of the art leaf recognition methods are 1.) direct and effective combination of all three kinds of significant characteristics in leaf images; 2.) high adaptability for various complicated conditions and diversified characteristics; 3.) high feasibility due to working directly on raw grayscale leaf images without the need for a preprocessing process. The experimental results on three frequently-used benchmark databases demonstrate the advantages of our method over the representative leaf recognition methods. In future work, we will consider the automatic identification of plant diseases and insect pests using leaves. Another interesting topic would be to solve the problem of how to identify plants via multiple or overlap leaves, and the possible extension applications of elliptical half Gabor and maximum gap local line direction pattern would be included.
6,718.6
2020-01-01T00:00:00.000
[ "Agricultural and Food Sciences", "Computer Science" ]
The role of certain gene polymorphisms involved in the apoptotic pathways in polycythemia vera and essential thrombocytosis BACKGROUND Polycythemia vera (PV) and essential thrombocytosis (ET) are hematological disorders characterized by excessive production of mature and functional blood cells. These cellular disorders are thought to be associated with impaired apoptosis, which is one of the major cellular death mechanisms in hematopoietic cells. OBJECTIVES In this study, our objective was to examine the association between potential polymorphisms of the Bcl 2, Bax, Fas and Fas Ligand genes involved in apoptosis and the occurrence of PV and ET. MATERIAL AND METHODS A total of 93 patients diagnosed with PV (n = 38) or ET (n = 55) at the Department of Hematology were included in this study, and 93 healthy individuals served as controls. DNA isolation was performed in blood samples obtained from both groups of subjects to determine the Bcl 2, Bax, Fas, and Fas L genotypes using the real-time PCR method. RESULTS No statistically significant differences between controls and patients were found in terms of Fas -670 G > A (rs1800682), Fas -1377 G > A (rs2234767), Fas L IVS2 -124 A > G (rs5030772), Bax -248 G > A (rs4645878) and Bcl 2 -938 C > A (rs2279115) polymorphisms, genotypes, and allele frequency (p > 0.05). CONCLUSIONS The results show that polymorphisms in the Bcl 2, Bax, Fas, and Fas Ligand genes involved in the apoptotic pathways may not play a role in the pathogenesis of PV and ET. Clonal myeloproliferative neoplasms (MPN) are classified into 2 major groups.While classical MPNs are represented by chronic Ph-positive myeloid leukemia (CML), carrying the Philadelphia (Ph) translocation and the BCR-ABL (Breakpoint Cluster Region-Abelson proto-oncogene) fusion gene, atypical MPNs consist of Ph-negative CML without the Philadelphia (Ph) translocation or the BCR-ABL fusion gene, polycythemia vera (PV), essential thrombocytosis (ET) and primary myelofibrosis (PMF).They generally occur in adulthood, with an annual incidence rate of 5-10 cases per one million people.Despite extensive, decades-long clinical and laboratory research, the etiology of BCR-ABL-negative myeloproliferative diseases is not exactly known.2][3] In 2005, the JAK2 V617F mutation was discovered and was present in the majority of PV patients and 40-60% of ET and PMF patients.This type of mutation is referred to as a "Class I" mutation and is generally different from Class II mutations, which cause the MDS syndrome phenotype and occur in the active cellular growth signal mediators, including important molecules involved in the differentiation process.On the other hand, Class III mutations include multiple repetitive somatic mutations in the genes responsible for epigenetic regulation. 4,5poptotic mechanisms function through 3 pathways: the extrinsic pathway requiring cell surface and death receptors, the intrinsic pathway of mitochondrial origin, and the perforin/granzyme pathway mediated by cytotoxic T-cells.The aberrations in these mechanisms are involved in the pathogenesis of a number of conditions, including degenerative and autoimmune disorders and hematological cancers. 6,7Deeper insight into the role of apoptosis in malignant conditions will not only shed more light on the pathogenetic mechanisms, but also may provide certain clues on how to develop more effective therapeutic strategies. 
Allelic variations in the promoter region of genes may result in qualitative or quantitative changes through their effects on the transcription factor binding site or other regulatory sites of the gene.Many single-nucleotide polymorphisms (SNPs) are known to be required for the development of malignancies or in pathways such as apoptosis, which play an important role in resistance to chemotherapeutic agents. 8,9hus, in this study, the role of SNPs such as Fas -670 G > A (rs1800682), Fas -1377 G > A (rs2234767), FasL IVS2 -124 A > G (rs5030772), Bax -248 G > A (rs4645878) and Bcl 2 -938 C > A (rs2279115) involved in the apoptotic pathway was explored in terms of their effects on the molecular pathogenesis of 2 MPNs: PV and ET. Study population A total of 93 individuals between 28 and 85 years of age (mean age of 59.53 ± 15.52) diagnosed with PV and ET between 2010 and 2012 at the Department of Hematology, Medical Faculty of Mersin University were included in this study.The control group consisted of 93 individuals between 40 and 81 years of age (mean age of 51.96 ± 10.73) with no disease history.Thus, blood samples obtained from a total of 186 subjects were analyzed. Extraction of genomic DNA After written informed consent was obtained from each of the study subjects, 4-5 mL of peripheral venous blood was placed into centrifuge tubes containing 1 mL of 2% EDTA.The DNA extraction was performed using Miller DNA isolation with a salting out precipitation method. 10 Genotyping The genotyping of the Fas -670 G > A (rs1800682), Fas -1377 G > A (rs2234767), FasL IVS2nt -124 A > G (rs5030772), Bax -248 G > A (rs4645878) and Bcl 2 -938 C > A (rs2279115) gene polymorphisms was performed using pre-designed TaqMan SNP Genotyping Assays (Applied Biosystems, Foster City, USA).The Assays-on-Demand SNP genotyping kit was used for the Polymerase Chain Reaction (Applied Biosystems).Single nucleotide polymorphism amplification assays were performed according to the manufacturer's instructions.In brief, 25 µL of reaction solution containing 30 ng of DNA was mixed with 12.5 µL of 2X TaqMan Universal PCR Master Mix (Applied Biosystems) and 1.25 µL of pre-developed assay reagent from the SNP genotyping product (C_9578811_10 for Fas gene -670 G > A [rs1800682]; C_12123966_10 for Fas gene -1377 G > A [rs2234767]; C_32334221_10 for FasL gene IVS2nt -124 A > G [rs5030772]; C_27848291_10 for Bax gene -248 G > A [rs4645878]; and C_3044428_30 for Bcl 2 gene -938 C > A [rs2279115], Applied Biosystems) containing two 900 nM primers and two 200 nM MGB TaqMan probes.Reaction conditions consisted of preincubation at 60°C for 1 min and at 95°C for 10 min, followed by 40 cycles at 95°C for 15 s and at 60°C for 1 min.Amplifications and analysis were performed in an ABI Prism 7500 Real-Time PCR System (Applied Biosystems), using the SDS 2.0.3 software for allelic discrimination (Applied Biosystems). Statistical analysis The Independent Samples t-test and one-way ANOVA were used to compare continuous variables between groups.Analysis of the association between groups and genotypes/alleles was done by χ 2 or likelihood ratio tests according to the expected value rule for crosstabs.The Hardy-Weinberg equilibriums were controlled in both control and study groups for all genotypes.Descriptive statistics were presented by mean and standard deviation for continuous variables, and with frequencies and percentages for categorical variables.Statistical analyses were done by SPSS v. 
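As an illustration of the Hardy-Weinberg check mentioned above, a genotype-count chi-square test can be coded as follows; the genotype counts in the example are hypothetical and are not taken from Table 1.

```python
import numpy as np
from scipy.stats import chi2

def hardy_weinberg_chi2(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium from genotype counts."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # frequency of allele A
    q = 1.0 - p
    expected = np.array([p ** 2 * n, 2 * p * q * n, q ** 2 * n])
    observed = np.array([n_AA, n_Aa, n_aa], dtype=float)
    stat = ((observed - expected) ** 2 / expected).sum()
    p_value = chi2.sf(stat, df=1)            # 3 classes - 1 estimated allele freq - 1 = 1 df
    return stat, p_value

# Example: Bcl 2 -938 C > A genotype counts for a hypothetical control group.
print(hardy_weinberg_chi2(n_AA=30, n_Aa=45, n_aa=18))
```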
15 statistical package and p-values less than 0.05 were considered statistically significant. Discussion MPNs are multipotent hematopoietic stem cell disorders that are characterized by the uncontrolled growth of mature blood cells. 11There are no distinct boundaries between these disorders, which makes them capable of presenting in a number of different disease categories with the ability to evolve into each other. 12he allelic variations in the promoter region of the genes may cause qualitative or quantitative alterations through their effects on the transcription factor binding site or other regulatory sites.Many SNPs are known to be required for the development of certain malignant conditions and for pathways, such as apoptosis, which are important in terms of resistance to chemotherapy. 8,9hus, our objective was to examine the role of the Fas -670 G > A (rs1800682), Fas -1377 G > A (rs2234767), FasL IVS2 -124 A > G (rs5030772), Bax -248 G > A (rs4645878) and Bcl 2 -938 C > A (rs2279115) SNPs in the molecular pathogenesis of the 2 MPNs: PV and ET. Single nucleotide changes of Fas -670 G > A (rs1800682) and Fas -1377 G > A (rs2234767) occur in the promoter region, at the repeating sequences of the binding sites for STAT1 (signal transducers and activators of transcription 1) and SP1 (Stimulatory Protein 1).These two poly-morphisms lead to a reduced expression of Fas and Fas L genes.On the other hand, the Fas L IVS -124 (rs5030772) polymorphism is located in the second intron of the Fas L gene.Our results showed no significant associations between the genotype ratios and allele frequencies of the Fas -670, Fas L -1377 and Fas L IVS -124 polymorphisms between controls and patients with PV or ET.The deregulation of the Fas signal pathway is involved in the mechanisms of immune escape and tumorigenesis, and it is also associated with the differentiation, invasion and metastasis of cancer cells.In adult T-cell leukemia (ATL), where activated T lymphocytes were observed due to human T lymphotrophic virus type 1 (HTLV-1) infection, Farre et al. showed that the presence of the Fas -670 polymorphism was associated with clinical manifestations and survival. 13he Bcl 2 gene involved in the mitochondrial pathway of the apoptotic mechanism does play a role not only as a regulator protein for anti-apoptosis, but also as a suppressor of cell growth.It has 3 exons and 3 promoter regions.It is located 1400 bp upstream of the translation initiation site and acts as a negative regulator on P1 (first promoter), which directs the transcription.Bcl 2 not only acts as an anti-apoptosis regulator protein, but also as a proliferation inhibitor.Therefore, Bcl 2 has multiple functional effects in tumorigenesis, probably explaining why Bcl 2 expression is significant in diagnosis and why it differs according to the type of tumor.Polymorphisms altering the function/expression of the Bcl 2 gene affect the apoptotic mechanisms and potentially serve as an important marker to guide targeted treatments. 14,15In our study, the polymorphisms of the P2 promoter of the Bcl 2 gene which serve as a negative regulator element were explored.In terms of the genotype ratios and allele frequencies of the Bcl 2 -938 C > A (rs2279115) polymorphism, control subjects did not differ significantly from those with PV or ET (p > 0.05).Bcl 2 may be expressed both in normal hematopoietic cells as well as in malignant hematopoietic cells, such as the leukemic blast cells of AML.Accordingly, Moon et al. 
proposed that a Bcl 2 protein or its gene expression could be used as a diagnostic marker for AML after chemotherapy.Bcl 2 expression is known to be increased upon Bcl 2 induction or selection of cells overexpressing Bcl 2 in patients with AML.Also, AML patients with higher Bcl 2 protein or gene expression have a shorter survival and poor response to chemotherapy.Moon et al. concluded that the Bcl 2 -938 C > A polymorphism was linked to overall survival and remission rates following chemotherapy in patients with leukemia. 16Nückel et al. in a study involving patients with chronic lymphocytic leukemia (CLL) found that in the Bcl 2 938 C > A polymorphism, the AA genotype was associated with an increased expression of Bcl 2 and that it may be appropriate to use this as a genetic prognostic marker in patients with CLL. 17 Hwan et al. reported that there was a significant association between the Bcl 2-938 polymorphism and disposition to CML, which is a clonal myeloproliferative disorder. 18he proapoptotic function of the Bax gene involved in the induction of apoptosis and the regulation of the apoptosis pathway by many other genes, such as Bcl 2 and p53, through their interaction with the Bax gene (at least partially), have given rise to an increased interest in the Bax gene for cancer research.Recent studies have shown that the Bax gene is a tumor suppressor.Deletions of the gene have been shown to be associated with lymphoid hyperplasia and have also been found to possess a significant negative growth function from a hematopoietic aspect. 19e genotype ratios and allele frequencies of Bax-248 G > A (rs4645878) polymorphisms, which occur in the 5' untranslated region (UTR) of the Bax gene and which cause a reduction in the expression of the gene, did not differ significantly between control subjects and patients with PV or ET.However, Saxena et al. observed that the Bax -248 polymorphism occurring in the Bax promoter was associated with a decreased expression of Bax in CLL, as well as with the failure to achieve a complete response to conventional therapy. 20Skogsberg et al., however, concluded that the Bax-248 polymorphism had no role as a marker of survival and prognosis in CLL. 
21he exact cause of MPNs is unknown.However, molecular-genetic studies suggest that the majority of this type of neoplasm may be the result of acquired clonal genetic events.Therefore, this group of disorders represents a good candidate for molecular diagnostic studies.Particularly in this type of hematological malignancy, a better understanding of the apoptotic mechanisms responsible for homeostasis during the growth and differentiation of hematopoietic blood cells will not only allow us to gain deeper insights into the pathogenesis of these disorders, but will also guide us in our treatment decisions.Our results showed that the Fas -670 G > A (rs1800682), FAS 1377 G > A (rs2234767), Fas L IVS2 -124 A > G (rs5030772), Bax -248 G > A (rs4645878) and Bcl 2 -938 C > A (rs2279115) polymorphisms, which are involved in extrinsic and intrinsic apoptotic pathways, are not associated with the development of PV or ET.Although SNPs in apoptosis-associated genes, and changes in the gene and protein expression have been the subject of extensive research in many cancer types, to our knowledge, no studies examining the polymorphisms of the genes involved in the apoptotic pathway in patients with PV or ET, which are classified as hematological malignant conditions, have been carried out.We believe that further studies involving a larger patient series may better elucidate these associations.Also, the possibility that the occurrence of these diseases may be associated with variants of the genes examined should also be borne in mind.Furthermore, our findings indicate the need to examine other molecular changes of the apoptotic processes in patients with PV or ET. Table 1 . The frequency of distribution of genotypes and alleles in the patient and control groups
3,192
2017-08-01T00:00:00.000
[ "Biology", "Medicine" ]
Dynamic Subcarrier Allocation for Real-Time Traffic over Multiuser OFDM Systems A dynamic resource allocation algorithm to satisfy the packet delay requirements for real-time services, while maximizing the system capacity in multiuser orthogonal frequency division multiplexing (OFDM) systems is introduced. Our proposed cross-layer algorithm, called Dynamic Subcarrier Allocation algorithm for Real-time Tra ffi c (DSA-RT), consists of two interactive components. In the medium access control (MAC) layer, the users’ expected transmission rates in terms of the number of subcarriers per symbol and their corresponding transmission priorities are evaluated. With the above MAC-layer information and the detected subcarriers’ channel gains, in the physical (PHY) layer, a modified Kuhn-Munkres algorithm is developed to minimize the system power for a certain subcarrier allocation, then a PHY-layer resource allocation scheme is proposed to optimally allocate the subcarriers under the system signal-to-noise ratio (SNR) and power constraints. In a system where the number of mobile users changes dynamically, our developed MAC-layer access control and removal schemes can guarantee the quality of service (QoS) of the existing users in the system and fully utilize the bandwidth resource. The numerical results show that DSA-RT significantly improves the system performance in terms of the bandwidth e ffi ciency and delay performance for real-time services. Introduction Demands for real-time multimedia applications are increasing rapidly for broadband wireless networks. Orthogonal frequency division multiplexing (OFDM) is considered a promising technique in such systems. In this paper, we consider multiuser systems [1] where multiple users are allowed to transmit simultaneously on different subcarriers per OFDM symbol. Mobile users on certain OFDM subchannels may experience deep frequency-selective fading in a multipath propagation environment. Since each user may have a different subchannel impulse response, a poor subchannel for one user may be a good subchannel for another user. Clearly, if a user who suffers from poor subchannel gain can be reassigned to a better subchannel, the total throughput can be increased. This is also known as multiuser diversity. Since the subcarrier gains vary from user to user, to achieve higher system capacity and spectral efficiency, it is better to allocate subcarriers and the corresponding power dynamically according to the instantaneous channel states of all users. To support QoS for multiple services, packet scheduling has been identified as an important mechanism in wired networks. When considering the multipath fading, high error rate, and time-varying channel capacity in wireless links, some new packet scheduling algorithms are developed, such as channel state dependent round Robin (CSD-RR) [2], feasible earliest due date (FEDD) [3], modified largest weighted delay first (M-LWDF) [4], and link-adaptive LWDF [5] algorithms. CSD-RR schedules the packets whose channel is in the "Good" state in a Round Robin fashion. FEDD focuses on scheduling the packet which has the smallest time to expiry and whose channel is in the "Good" state. M-LWDF schedules the packet according to max{γ j r j (t)W j (t)}, where W j (t) is the head-of-the-line packet delay for queue j, r j (t) is the channel capacity with respect to flow j, and γ j are arbitrary positive constants. M-LWDF is proven to be a throughput-optimal scheduling algorithm. 
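For clarity, the M-LWDF rule quoted above, max{γ_j r_j(t) W_j(t)}, amounts to the following selection step; the flow parameters in the example are hypothetical.

```python
def mlwdf_pick(users):
    """Pick the queue to serve under M-LWDF: argmax over j of gamma_j * r_j(t) * W_j(t).

    users: list of dicts with keys 'gamma' (QoS weight), 'rate' (current channel
    capacity r_j(t)) and 'hol_delay' (head-of-line packet delay W_j(t)). Illustrative only.
    """
    return max(range(len(users)),
               key=lambda j: users[j]['gamma'] * users[j]['rate'] * users[j]['hol_delay'])

# Hypothetical snapshot of three real-time flows.
flows = [
    {'gamma': 1.0, 'rate': 2.4e6, 'hol_delay': 0.012},
    {'gamma': 1.5, 'rate': 1.1e6, 'hol_delay': 0.030},
    {'gamma': 1.0, 'rate': 3.0e6, 'hol_delay': 0.004},
]
print(mlwdf_pick(flows))  # index of the flow scheduled next
```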
Link-adaptive LWDF aims to satisfy stringent packet delay constraints, but without hard guarantees. The objectives of these algorithms are to maximize the system spectral efficiency by exploiting the random channel variations and to provide QoS guarantees to the users by deferring transmissions on deeply faded links and compensating for them when the links recover. However, all these algorithms operate at the level of packet scheduling; the scheduling of multiple frequency subcarriers, which can be implemented in multiuser OFDM systems, is not considered. In the PHY layer, the total power resource is limited. Given the required number of subcarriers for each user, how to allocate subcarriers so as to minimize the total transmission power while satisfying the users' SNR requirements is still an open problem. To address it, a suboptimal subcarrier allocation algorithm based on constructive assignment and iterative improvement is proposed in [6] and adopted in [7]. The algorithm exploits the similarity between the subcarrier allocation problem and the classical assignment problem. However, it can only provide a suboptimal allocation. An optimal solution to this power minimization problem is the Kuhn-Munkres algorithm proposed for the classical assignment problem [8]. Kuhn-Munkres is based on the Hungarian algorithm [9]. OFDM subcarrier allocation using this method has been studied in [10]. However, an important assumption in that paper is that the number of subcarriers assigned to each user is known; without this information, the Kuhn-Munkres algorithm cannot perform the subcarrier allocation. In addition, most of the proposed scheduling algorithms ignore the dynamic variation of the number of active users in the system. In this paper, we propose a cross-layer resource allocation and scheduling algorithm, named DSA-RT, for real-time services over frequency-selective fading channels in multiuser OFDM systems. The algorithm has two cooperative components: a MAC-layer scheduling/control scheme and a PHY-layer resource allocation scheme. At the MAC layer, based on queuing theory, the active users' expected resource requirements to satisfy their delay constraints are calculated in terms of the number of subcarriers per OFDM symbol. The MAC-layer scheduling scheme provides the number of required subcarriers and the users' transmission priorities. At the PHY layer, based on the modified Kuhn-Munkres algorithm, a resource allocation algorithm is proposed to satisfy all users' requirements under the system SNR and power constraints and to decide the actual subcarrier allocation for each active user. (Users admitted to the system are termed active users. Once new users are admitted, they are allocated resources, i.e., subcarriers, by the access control scheme.) In a system where the number of active users changes dynamically, if there are subcarriers left in an OFDM symbol, the access of new mobile users is considered. In addition, if the dropping rates of certain users violate their maximum tolerable limits, a removal scheme is triggered to remove the aggressive users so as to guarantee the QoS of the other existing users.
With the cooperation of the MAC and PHY layer schemes, our proposed algorithm offers the following advantages: (1) based on queuing theory, real-time users' delay requirements can be evaluated in terms of the number of subcarriers required, leading to a more flexible scheduling algorithm which can effectively guarantee the QoS for real-time services in multiuser OFDM systems; (2) with the number of the expected subcarriers and transmission priority information from the MAC layer, the proposed PHY-layer resource allocation scheme aims to maximize the bandwidth usage under the current channel state, system SNR, and power constraints; (3) when the number of mobile users is dynamically changed, the access control and removal schemes can dynamically adjust system flows and provide delay-related guarantee for the active users in the system. The rest of this paper is organized as follows. The system model is introduced in Section 2. The detailed description of DSA-RT is presented in Section 3. The simulation results are given in Section 4. Section 5 draws the conclusions. Figure 1 shows our downlink OFDM system model at a base station (BS). As in previous work [2][3][4][5], channel state information (CSI) is assumed to be available at BSs. Assume that the frequency bandwidth is divided into N subcarriers, and there are K active users, where K is changed dynamically and follows a Poisson distribution. BSs are in charge of subcarrier scheduling and resource allocation. We assume a fixed modulation for all subcarriers. The total transmission power is constrained at P and will be optimally allocated to each subcarrier. System Model BS establishes a queue for each user. Packets are assumed to have equal length of L bits each. Head of line (HOL) packets of queues are scheduled on different subcarriers in different OFDM symbols based on transmission priorities obtained in Section 3. The transmission process for each user can be modelled as an M/G/1 queue. Define the average system time of user k as E[T k ]; the delay requirement of realtime user k can be formulated as where τ k is the delay bound of user k. Denote the channel gain obtained by user k on subcarrier n by h k,n and the number of bits supported in a subcarrier by b. Define v(k, n) to be an allocation indicator: Our objective is to maximize the total system throughput, subject to the constraints on the total transmission power, user SNR requirements, and delay constraints. The optimization problem can be expressed as follows: : where SNR k represents the SNR requirement of user k. C1 states that the total subcarriers allocated to all users are less than or equal to N; C2 shows that the total transmission power should be less than or equal to the system power limit, while satisfying all users' SNR requirements; C3 means that no more than one user transmits in the same subcarrier; C4 is the average delay requirement of each user. The solution of the above optimization problem (3) is not explicit due to the fact that C4 is not directly related to v(k, n). Thus in the following section, we will establish the relationship between them and give the suboptimal subcarrier allocation solution v(k, n) for each symbol with lower computational complexity. Cross-Layer Algorithm Description Based on queuing theory, the MAC-layer scheduling scheme is developed to calculate the users' transmission priorities and their corresponding specific bandwidth requirements in terms of the number of subcarriers. 
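The displayed optimization problem was lost in extraction; the LaTeX block below is a plausible reconstruction consistent with the stated objective and constraints C1-C4 (fixed b bits per subcarrier, allocation indicator v(k, n), per-subcarrier power p_{k,n}, and an assumed noise power N_0), and should not be read as the paper's exact Eq. (3).

```latex
\max_{v(k,n),\,p_{k,n}} \; \sum_{k=1}^{K}\sum_{n=1}^{N} b\, v(k,n)
\quad \text{s.t.} \quad
\begin{aligned}
&\mathrm{C1:}\;\; \sum_{k=1}^{K}\sum_{n=1}^{N} v(k,n) \le N, \\
&\mathrm{C2:}\;\; \sum_{k=1}^{K}\sum_{n=1}^{N} p_{k,n}\, v(k,n) \le P,
  \qquad \frac{p_{k,n}\,|h_{k,n}|^{2}}{N_{0}} \ge \mathrm{SNR}_{k} \;\text{ whenever } v(k,n)=1, \\
&\mathrm{C3:}\;\; \sum_{k=1}^{K} v(k,n) \le 1 \quad \forall n,
  \qquad \mathrm{C4:}\;\; E[T_{k}] \le \tau_{k} \quad \forall k.
\end{aligned}
```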
With the channel state information, users' SNR requirements and the system power constraints, the PHY-layer resource allocation scheme can deduce the maximum attainable throughput for each supported user. In addition, the MAC-layer access control and removal scheme will be triggered to adjust the number of users being served and provide the QoS guarantee for the active users in the system. MAC-Layer Scheduling Scheme. In our system, we assume that each user has one type of real-time traffic. The packet arrivals of user k follow an independent Poisson process with rate λ k , and each user has a delay upper bound τ k . Furthermore, we assume that users have infinite buffers, and the same class users have the same (λ k , τ k ) settings. Since the transmission process for each user can be modelled as an M/G/1 queue, the delay constraint on system time E[T k ] ≤ τ k is given by [11] where E[X] is the average service time and 2 , a necessary condition for the delay requirement on system time in (13) is By solving the above inequality, we can easily obtain the lower bound of the average transmission rate for user k. Since b is known by the supported modulation, we further scale the average transmission rate in terms of subcarriers, represented by R k . Given the per-link R k , the waiting time of the HOL packet w k , and the delay constraint τ k , an active user's transmission priority and exact bandwidth requirement in terms of the number of subcarriers per symbol are obtained by the following modified LWDF scheduling algorithm. In our algorithm, the system time is scaled in terms of OFDM symbol time. The remaining time to the deadline of the HOL packet at queue k is where s is the OFDM symbol time. The smaller the value of r k is, the more urgently user k needs to transmit the corresponding packet. In addition, if l k is the number of bits left in the HOL packet of user k, then till the due time of the packet, the average required transmission rate in terms of the number of subcarriers in the following symbol time is given by Compared with the deduced R k , we define the rate proportional index as follows: ζ k is defined to indicate the urgent state. Its value could be positive or negative. If its value is below zero, this means that the required number of subcarriers exceeds the average number, which indicates that congestion may happen. It is also easily observed that the smaller the value of ζ k , the more urgent the transmission of the corresponding HOL packet. As in LWDF algorithm, we also consider the factors of the waiting time and transmission rate for each user. However, instead of considering the users' attainable bandwidths, we consider the users' required bandwidths under the delay bound constraints, which are more important for real-time services. Our scheduling is described as follows. Once the channel is idle, each user will calculate its transmission priority by where α is a positive constant used to adjust the weight of r k . The function δ(·) is defined as From the above analyses, the user with the smaller value given by (10) will enjoy a higher transmission priority. From the definition of δ(·), if a user's required number of subcarriers exceeds the total number N provided by a symbol, even if we allocate the whole symbol to this user, its delay requirement will not be met. Therefore, the HOL packet of this user will be dropped to save bandwidth for other users. 
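To make the MAC-layer rate calculation concrete, the sketch below finds the smallest average number of subcarriers per symbol that keeps the M/G/1 mean system time within the delay bound, using the Pollaczek-Khinchine formula E[T] = E[X] + λE[X²]/(2(1 − λE[X])); the deterministic-service (M/D/1) special case and all numeric parameters are assumptions made for illustration.

```python
def min_subcarriers_per_symbol(lam, tau, L_bits, b_bits, symbol_time):
    """Smallest average number of subcarriers per OFDM symbol meeting E[T] <= tau.

    Treats each user's queue as M/D/1 (deterministic service), a special case of the
    paper's M/G/1 model: E[T] = E[X] + lam*E[X^2] / (2*(1 - lam*E[X])).
    lam: packet arrival rate [packets/s], tau: delay bound [s],
    L_bits: packet size, b_bits: bits per subcarrier, symbol_time: OFDM symbol duration [s].
    """
    for R in range(1, 10_000):
        service_time = (L_bits / (R * b_bits)) * symbol_time   # symbols needed * symbol duration
        rho = lam * service_time
        if rho >= 1.0:
            continue                                           # queue unstable, need more subcarriers
        mean_system_time = service_time + lam * service_time ** 2 / (2.0 * (1.0 - rho))
        if mean_system_time <= tau:
            return R
    raise ValueError("delay bound not attainable")

print(min_subcarriers_per_symbol(lam=200.0, tau=0.02, L_bits=1024, b_bits=4, symbol_time=100e-6))
```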
Up to now, our MAC-layer scheduling scheme gives the transmission priority list of the HOL packets according to (10) for the active users and their expected transmission rates in terms of the number of subcarriers in each symbol from (8). However, these rates are only the users' expected rates. Considering the users' channel states, SNR requirements, and system power limit in the PHY layer, the real subcarrier allocation will be performed according to the following scheme. PHY-Layer Resource Allocation Scheme. In the MAC layer, our algorithm has already considered the real-time traffic delay requirement and given the expected transmission rate in terms of the number of subcarriers and the users' transmission priorities. In the PHY layer, with the different subcarriers' channel states and the system SNR and power constraints, our PHY-layer scheme aims to optimize the initial allocation indicator v(k, n) under the power and SNR constraints of (3). To solve this problem, a dynamic PHY-layer resource allocation scheme is proposed, which is divided into the following steps. (a) Initial subcarrier allocation. With the total subcarrier limit of N, we initially assign the users their required numbers of subcarriers according to their priorities until N subcarriers are used up or all K users are assigned. (b) Power minimization. Given a subcarrier allocation, the following modified Kuhn-Munkres algorithm is used to obtain an optimal allocation that minimizes the system power under the users' SNR requirements. Denote the minimized power as P_min. (c) Power comparison. Compare P_min with the system power limit P, and consider the following cases: (i) if P = P_min, then the power resource is fully utilized, and the current subcarrier allocation v(k, n) is the final solution; (ii) if P < P_min, then the system power cannot support all currently assigned subcarriers, so our scheme will reduce the subcarrier allocation starting from the lowest priority user. Given the SNR_k requirement for user k, among the assigned subcarriers for this user, the smaller the value of h_k,n on subcarrier n, the larger the power consumption on it. So the subcarrier reduction will be performed in ascending subcarrier gain order one by one. Then go to Step (b) in the next iteration, until the updated P_min no longer exceeds P; (iii) if P > P_min, more power resources can be utilized. Then our scheme considers the remaining subcarrier resource. We represent the total number of assigned subcarriers as N′. If N′ = N, the subcarriers are used up, and we maintain the current v(k, n) solution. If N′ < N, the remaining subcarriers are assigned evenly to the current active users until the updated P_min reaches P. If new users' access requests are received, the access control scheme to be introduced in the next subsection will guide the assignment. Modified Kuhn-Munkres Algorithm. In the following, we first introduce the Kuhn-Munkres algorithm, which finds the perfect matching with the maximum sum of edge weights for a bipartite graph. Then a modified algorithm is described for OFDM power allocation. To minimize the system power, the modified algorithm is applied with negative weights. A graph is denoted by G(V, E), where V is the vertex set and E is the edge set of the graph. If V = V_1 ∪ V_2 with V_1 ∩ V_2 = ∅ and each edge in E has one endpoint in V_1 and the other in V_2, the graph G(V, E) is a bipartite graph, which can also be denoted as G(V_1, V_2, E).
The bipartite graph is very useful for some applications, such as an assignment problem, which can be depicted as follows. Given a weighted complete bipartite graph G = (X ∪ Y, X × Y), where edge (x, y) has weight w(x, y), find a matching m from X to Y with maximum weight. In an application, X could be a set of workers, Y a set of jobs, and w(x, y) the earnings made by assigning worker x to job y. The goal of the assignment problem is to find the optimal (best total earnings) matching. For a bipartite graph G(V_1, V_2, E), if the cardinalities of V_1 and V_2, denoted as n_1 and n_2, are equal, then this bipartite graph is symmetric. For single-objective optimization, it has been proved that the Kuhn-Munkres algorithm can always find the maximum weight matching for a bipartite graph with O(n^3) computational complexity. The Kuhn-Munkres algorithm is based on the procedure of the Hungarian algorithm [9]. Matrix W = [w_ij] has elements w_ij, which represent the earnings of assigning worker i to job j, as shown in Figure 2 (a). Step 1. Let X, Y be the bipartite sets. Initialize two labels u_i and v_j by u_i = max_j {w_ij}, v_j = 0, i, j = 1, . . . , k. In Figure 2 (b), the numbers written at the left and the top of the matrix express the values of u_i and v_j, respectively. Step 2. Obtain the excess matrix C by the following: c_ij = u_i + v_j − w_ij. This is shown in Figure 2 (c). Step 3. Find the subgraph G′ that includes the vertices i and j satisfying c_ij = 0 and the corresponding edges e_ij. Then find the maximum matching m of G′ by the Hungarian algorithm, and underline the entries in the weight matrix. (There are various ways to find the maximum matching. See, e.g., [12].) A maximum matching is a matching with the largest possible number of edges. In this example, the maximum matching is found to be (1, 4), (2, 1), and (4, 2), as shown in Figure 2 (d). If m is a perfect matching, that is, the number of edges in a maximum matching is equal to the cardinality of the worker set (k), the optimal assignment is obtained. Otherwise, go to the next step. Step 4. Let Q be a vertex cover of G′, and let R = X ∩ Q and T = Y ∩ Q. The vertex cover Q is a vertex set of G′ which contains at least one endpoint of each edge. In this example, Q is chosen to be the nodes corresponding to Workers 1 and 3 and Job 4. So R corresponds to Workers 1 and 3, and T corresponds to Job 4. Now find ε = min{c_ij : x_i ∈ X − R, y_j ∈ Y − T}. For example, if ε equals 1 in Figure 2, decrease u_i by ε for the rows of X − R and increase v_j by ε for the columns of T. Then go to Step 2. Steps 2 to 4 are repeated until the perfect matching m, that is, the optimal assignment, is obtained. For a bipartite graph G(V_1, V_2, E), if the cardinalities of V_1 and V_2, denoted as n_1 and n_2, are not equal, then this bipartite graph is asymmetric. In our modified Kuhn-Munkres algorithm, we enhance an asymmetric graph to a symmetric one, and then solve the optimization problem as in the symmetric case. Firstly, supposing that the resource on both V_1 and V_2 cannot be reused, we append |n_1 − n_2| all-zero rows or columns to the weight matrix to construct a square matrix, and then transform the problem to a symmetric bipartite matching, as shown in Figure 3. Secondly, for some special cases in which the redundant resource may be reused, the modified Kuhn-Munkres algorithm reproduces the corresponding columns or rows until the matrix is transformed to a square matrix. If necessary, all-zero columns or rows will be added.
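As an aside, the symmetric assignment problem walked through in Steps 1-4 can be solved directly with an off-the-shelf Hungarian-method routine. The minimal sketch below (ours; the 4 x 4 earnings matrix is hypothetical, not the one in Figure 2) maximizes total earnings by negating the weights, which is also how the modified algorithm described here turns maximization into power minimization.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical earnings matrix W: W[i, j] = earnings of assigning worker i to job j.
W = np.array([[7, 5, 8, 2],
              [9, 1, 3, 6],
              [4, 8, 6, 5],
              [3, 6, 2, 7]], dtype=float)

# linear_sum_assignment minimizes total cost, so negate W to maximize earnings.
rows, cols = linear_sum_assignment(-W)
print("assignment:", list(zip(rows, cols)))
print("total earnings:", W[rows, cols].sum())
```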
If n_1 > n_2 and the elements in V_2 are reusable, Figure 4 shows the case where the remaining elements in V_1 may reuse the elements in V_2 with the same probability. If n_1 < n_2, given the numbers of elements in V_2 required by the elements in V_1, namely, q_1, q_2, . . . , q_n1, then the square matrix may be constructed by reproducing the rows in demand, as shown in Figure 5. In the downlink OFDM system model, as in previous work, channel state information (CSI) is assumed to be available at base stations (BSs). In a multiuser system with frequency-selective fading, each user may experience a different channel frequency response, which is related to its location. The total frequency bandwidth is divided into N orthogonal subchannels, and suppose there are currently K active users in the system. Assume that S_k is the subchannel set for user k and q_k is the cardinality of set S_k. The value of q_k is initially obtained from the MAC-layer scheduling scheme and dynamically changed by the PHY allocation scheme. Therefore, for user k, the required transmission power in time slot t is given by P_k(t) = Σ_{n ∈ S_k} SNR_k / h_k,n^2, where h_k,n is the detected subchannel gain of user k on subchannel n. Then the total system required power can be expressed as P(t) = Σ_{k=1}^{K} P_k(t). With the above problem formulation, the minimization of the system power P(t), as required in the second step of the PHY-layer allocation scheme, may be converted to a bipartite matching problem. The edge weight for user k on subcarrier n is SNR_k/h_k,n^2. Therefore, similar to the case illustrated in Figure 5, the modified Kuhn-Munkres algorithm may be applied to give an optimal solution to the minimization of the system power. Access Control and Removal Scheme. In real networks, the number of active users changes dynamically. Without access control, the bandwidth may be inadequate. In addition, particularly for real-time traffic, without a removal scheme, not only may the QoS of the users newly granted access not be guaranteed, but the previously admitted users will also suffer from QoS degradation. Therefore, the MAC-layer access control and removal schemes are introduced in our DSA-RT algorithm. As analyzed in the previous subsection, a new user's QoS requirements should be considered when P > P_min and N′ < N. As introduced in Section 3.1, the new user's QoS requirements can be evaluated by R_k. Access control will check whether this requirement can be satisfied with the remaining power and subcarrier resources. If yes, the new user can be allocated subcarrier resources; otherwise, it continues to wait. Even with access control, real-time transmission systems may still encounter an overloaded situation due to the time-varying wireless channel and variable bit rates. As presented in [13], a useful removal scheme can effectively guarantee the QoS of the existing users and will not be adversely affected by the admission of new users. Our scheme assumes that the dropping rate of user k is sampled at each constant time interval Δt, and the last sample time is t. The dropping rate of user k is then defined by η_k = D_k(t, t + Δt] / N_k(t, t + Δt], where D_k(t, t + Δt] and N_k(t, t + Δt] are the numbers of dropped packets and total transmitted packets of user k during time (t, t + Δt]. Assume θ_k is the maximum dropping rate that user k can tolerate.
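The removal rule based on θ_k continues in the next paragraph; before that, the following sketch (ours, not the authors' code) illustrates how the power-minimization step can be cast as the asymmetric matching of Figure 5: each user's row is replicated q_k times, the cost of serving user k on subcarrier n is taken as SNR_k/h_k,n^2 as stated above, and a standard Hungarian solver returns the minimum power P_min. The zero-row padding, the truncation of excess demand, and all variable names are our assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def min_power_allocation(h, snr_req, q, n_sub):
    """h: (K, N) channel gains, snr_req: (K,) SNR targets, q: (K,) subcarriers
    demanded per user, n_sub: total subcarriers N.  Returns (P_min, assignment),
    where assignment maps subcarrier index -> user index (or -1 if unused)."""
    K = len(q)
    cost_rows, owner = [], []
    for k in range(K):                       # replicate user k's row q_k times
        row = snr_req[k] / h[k] ** 2         # per-subcarrier power to meet SNR_k
        for _ in range(q[k]):
            cost_rows.append(row)
            owner.append(k)
    while len(cost_rows) < n_sub:            # pad with zero rows if demand < N
        cost_rows.append(np.zeros(n_sub))
        owner.append(-1)
    C = np.vstack(cost_rows)[:n_sub]         # drop excess demand beyond N
    owner = owner[:n_sub]                    # (the real scheme drops by priority)
    rows, cols = linear_sum_assignment(C)    # minimum total power matching
    assignment = {int(c): owner[r] for r, c in zip(rows, cols)}
    p_min = sum(C[r, c] for r, c in zip(rows, cols) if owner[r] != -1)
    return p_min, assignment

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, N = 4, 8
    h = rng.rayleigh(scale=1.0, size=(K, N))   # toy frequency-selective gains
    snr_req = np.full(K, 10.0)                 # identical SNR targets
    q = np.array([3, 2, 2, 1])                 # subcarriers requested per user
    p_min, assign = min_power_allocation(h, snr_req, q, N)
    print("P_min =", round(p_min, 3))
    print("subcarrier -> user:", assign)
```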
At each sample time or when the number of users in the system changes, our removal scheme will select the user to be removed by the following rule: where the selected set consists of the users whose η k values violate their corresponding dropping rate bound θ k . If the traffic is bursty, we may change Δt to adjust the dropping rate more frequently. Implementation of DSA-RT. Thanks to the cooperation of the above schemes, for each OFDM symbol, our algorithm DSA-RT can give the suboptimal solution v(k, n) of the optimization problem addressed in Section 2. The computational delay is not expected to be a problem. The number of operations required by the algorithm is approximately O (N 3 ), which translates to a computational delay of a small fraction of a symbol time with the support of current chips. In addition, if we want to lower the computational delay, multiple symbols can be combined as one scheduling unit, but this will affect the scheduling efficiency. It is a tradeoff. The flow chart of the implementation of our algorithm is shown in Figure 6. Simulation Results In this section, the performance of the proposed DSA-RT scheduling algorithm is investigated and compared with CSD-RR, FEDD, and M-LWDF [2][3][4]. We consider QPSK modulation in multiuser OFDM downlink systems. However, other modulations are supported with different SNR constraints. The IFFT size is 128, and the OFDM symbol duration is equal to 200 microseconds [14]. We consider the quasistatic flat fading channel with multipath [15]. Assume that the users arrive as a Poisson process with parameter λ, and their active times in the system follow the exponential distribution with mean 10 seconds. In this section, we assume that all users have the same type of real-time traffic. During each user's active time, the packet arrivals follow the Poisson distribution. The packets have a fixed length of 1000 bytes, and the mean traffic rate is 1 Mbps. The delay bound is set to be 50 milliseconds. In simulations, we consider one type of real-time traffic, so we fixed the packet length. However, if multiple types of realtime traffics are supported, a variable length is acceptable in our algorithm. In our simulations, we vary the user arrival rates λ from 0.01 to 0.1 and compare the delay and dropping rate performance of some packet scheduling algorithms and our proposed DSA-RT algorithm. All simulations are in Matlab 7.3. The simulation time of each experiment is 100 seconds and we repeat it 100 times. The average delay is the mean of the delay of all packets not dropped. For each successfully delivered packet, the delay is calculated as the difference between the departure and arrival times. In DSA-RT, packets which have been dropped will not re-enter the system. Figure 7 shows the delay comparisons of DAS-RT and three other packet scheduling algorithms. It is obvious that our algorithm distinctly improves the delay performance, particularly when the traffic density is high. Accordingly, as shown in Figure 8, the dropping rates of our algorithm at any user arrival rate are also much lower than the other three algorithms. DSA-RT is developed to schedule at the subcarrier level and tries to provide delay guarantees for the real-time traffics. Therefore, it has the best delay and dropping rate performance. Based on the consideration of channel state, CSD-RR has better performance than M-LWDF and FEDD. 
By considering the system capacity and queuing, the throughput performance of M-LWDF is optimal, but the delay performance still needs to be improved. FEDD gives the packet with the earliest deadline the highest transmission priority. However, with the bandwidth and channel state constraints, the transmission still has a high probability of failing to meet its deadline. Therefore, it has the poorest performance. Conclusion In this paper, DSA-RT aims to satisfy the packet delay requirements of real-time traffic in multiuser OFDM systems, while maximizing the system bandwidth efficiency. This algorithm consists of two cooperative components. At the MAC layer, based on queuing theory and the modified LWDF algorithm, active users' expected transmission rates in terms of the number of subcarriers per symbol and their corresponding transmission priorities are deduced. With different subcarrier states, based on our modified Kuhn-Munkres algorithm, a PHY-layer resource allocation scheme is developed to satisfy the users' requirements under the system SNR and power constraints. When considering a system where the number of active users changes dynamically, the access control and removal scheme can fully utilize the bandwidth resource and guarantee the QoS of the existing users in the system. Finally, compared with other widely used scheduling algorithms, simulation results show that our proposed algorithm significantly improves the system performance for real-time users in multiuser OFDM systems.
6,861.4
2009-02-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Electrocardiogram lead selection for intelligent screening of patients with systolic heart failure Electrocardiogram (ECG)-based intelligent screening for systolic heart failure (HF) is an emerging method that could become a low-cost and rapid screening tool for early diagnosis of the disease before the comprehensive echocardiographic procedure. We collected 12-lead ECG signals from 900 systolic HF patients (ejection fraction, EF < 50%) and 900 individuals with normal EF in the absence of HF symptoms. The 12-lead ECG signals were converted by continuous wavelet transform (CWT) to 2D spectra and classified using a 2D convolutional neural network (CNN). The 2D CWT spectra of 12-lead ECG signals were trained separately in 12 identical 2D-CNN models. The 12-lead classification results of the 2D-CNN model revealed that Lead V6 had the highest accuracy (0.93), sensitivity (0.97), specificity (0.89), and f1 scores (0.94) in the testing dataset. We designed four comprehensive scoring methods to integrate the 12-lead classification results into a key diagnostic index. The highest quality result among these four methods was obtained when Leads V5 and V6 of the 12-lead ECG signals were combined. Our new 12-lead ECG signal–based intelligent screening method using straightforward combination of ECG leads provides a fast and accurate approach for pre-screening for systolic HF. www.nature.com/scientificreports/ were used in this study. One included 1090 systolic HF patients with an EF of < 50%. The other included 10,000 individuals with an EF of > 50% and without HF symptoms. The EF was measured by echocardiography performed by cardiologists, and 12-lead ECG data were acquired at clinics or during hospitalization. Both datasets were provided by National Taiwan University Hospital, Hsinchu and Biomedical Park Branch. The 12-lead ECG data of all participants were obtained within one week after echocardiography identified their left ventricular EF (LVEF) greater than 50% or not. Each 12-lead ECG recording was from a single participant, without duplication. Patient selection. The patient selection process is presented in Fig. 1. Among the 1090 patients with reduced EF, 12-lead ECG data with excessive noise were excluded from this research. ECG data with excessive noise was attributed to interferences from baseline wander, power line interference, electromyography noise, and R peak detection error. Examples of noise illustrations were shown in Fig. 2. Excessive noise in the ECG signal resulted in a splitting error. Splitting errors may generate an atypical waveform map, which could mislead our model for finding EF features. Thus, such ECG signals were excluded from our study. The remaining 12-lead ECG data for 900 patients with systolic HF was used as the patient training dataset. The corresponding 900 agematched and EF-normal individuals were selected from the dataset with 10,000 individuals from health examination. Information for the patients with systolic HF and the individuals without HF are presented in Table 1. A total of 214 individuals were excluded due to having ECG data with signal splitting errors. A total of 186 testing data were randomly selected from the remaining 772 patients with systolic HF and 814 individuals without HF. The data of the systolic HF patients and the individuals without HF (total 1400) were randomly separated into two groups: 90% of data were used for training (n = 1260) and 10% of data were used for validation (n = 140). 
All patient's original EF values were measured with echocardiograph. The systolic HF patient (EF < 50%) and individuals without systolic HF (EF > 50%) were divided in two classes, and compared to the AI prediction class. Electrocardiogram extraction. The flow chart for the whole experiment, including ECG extraction, CWT, and 2D-CNN classification, is depicted in Fig. 3. Because the 12-lead ECG data were recorded as a JPG image, the ECG signals had to be extracted from the image. The extraction procedure involved processing the JPG image through image binarization and signal extraction to obtain pure ECG signals. Then, the ECG image was cut vertically into four parts, followed by searching for black pixels on each of three vertical line to recon- Figure 1. Data selection flowchart. A total of 900 patients with systolic HF were included in the study. For comparison, 900 age-matched individuals without systolic HF were included in the research. After ECG preprocessing, noisy data and data with ECG splitting errors were excluded. The remaining 700 patients with systolic HF and the corresponding individuals without HF were combined into one dataset. These data were then separated into groups of 1260 for training data, 140 for validation data, and 186 for testing data. www.nature.com/scientificreports/ struct the original ECG signal. The reconstructed ECG signal was then normalized and calibrated. Each ECG line was cut between two R peaks to obtain three small segments, and the middle segment was selected as the single-beat ECG compound. This procedure generated 12 single-lead ECG compounds for further processing. The details of the ECG extraction process are demonstrated in Fig. 4. Continuous wavelet transform. The 12-lead ECG signals were transformed by CWT to 2D spectra. Wavelet transform can be used to analyze time series in different frequencies that contain nonstationary power. In this research, Daubechies CWT (db8) was used to transform the ECG signals because it has a favorable balance between time and frequency localization. The CWT and Daubechies wavelet formulas are shown in Eqs. (1) and (2). Figure 2. Signals with excessive ECG noise, such as R peak detect error, electromyogram noise, baseline wandering and power line artifact, were excluded from our study. These ECG signals are severely affected by noise and therefore cannot be processed to obtain the correct ECG compound. They were excluded from our research to avoid misleading the neural network model during training. In the CWT Eq. (1), τand s correspond to the translation and scale parameters, respectively. ψ(t) is the transforming function, which also represents the mother wavelet. Matlab CWT toolbox was employed in the CWTs. In the Daubechies Eq. (2), p represents a vanishing moment 10 . The Matlab wavelet toolbox was used for the implementation of these two equations. CNN structure. The neural network programming was based on the Python and Keras application programming interfaces. The 2D-CNN structure was modified from the Visual Geometry Group (VGG) network 5 for the 12-lead ECG CWT spectra classification. The 12-lead ECG CWT spectra were first resized to 200 × 200 × 3 pixels and then passed to the 2D-CNN as inputs. A 14-layer 2D-CNN was constructed with 6 convolution layers, 6 max-pooling layers, 1 flatten layer and 1 dense output layer with softmax function. The rectified linear unit, batch normalization, and dropout functions were used after each convolution layer was applied. 
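For readers without the Matlab toolboxes, the following minimal sketch (our reconstruction, not the authors' code) produces a 2D CWT scalogram from a single-beat ECG segment with PyWavelets. Since PyWavelets' continuous transform does not provide Daubechies wavelets, a Morlet wavelet is used here as a stand-in for the db8 wavelet reported above, and the sampling rate and toy waveform are assumptions. The training configuration of the 2D-CNN continues below.

```python
import numpy as np
import pywt
import matplotlib.pyplot as plt

fs = 500.0                                   # assumed sampling rate of the ECG segment [Hz]
t = np.arange(0, 1.0, 1.0 / fs)
ecg = np.sin(2 * np.pi * 8 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)  # toy single-beat stand-in

# Continuous wavelet transform over a range of scales; returns coefficients and
# the pseudo-frequency associated with each scale.
scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(ecg, scales, "morl", sampling_period=1.0 / fs)

# Save the magnitude scalogram as the 2D spectrum image fed to the 2D-CNN.
plt.pcolormesh(t, freqs, np.abs(coefs), shading="auto")
plt.xlabel("time [s]")
plt.ylabel("frequency [Hz]")
plt.savefig("cwt_spectrum.png", dpi=150)
```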
Binary cross-entropy was defined as the loss function. An Adam optimizer was employed as the learning guide for 2D-CNN learning, and its learning rate was set to 10^-4. For detailed 2D-CNN structure and hyperparameter information, please refer to Tables S1 and S2. In this research, the 12-lead ECG spectra were separately passed to 12 identical 2D-CNNs. Comprehensive 12-lead ECG scoring. The comprehensive 12-lead ECG scoring method is based on the 2D-CNN output layers with a softmax formula, which is displayed in Eq. (3). The logits vector from the 2D-CNN flatten layer proceeds through the softmax layer, where the output class probability score of class i is e^{y_i} divided by the class probability score summation Σ_j e^{y_j} (for j from 0 to 1) 6 . In our research, the 12-lead ECG CWT spectra were separately passed to 12 2D-CNNs, which generated 12 probability scores for the individual-without-HF class. The 12 probability scores were employed in our comprehensive 12-lead ECG scoring method. This method integrates the 12 scores into one key diagnostic index for detecting systolic HF. Leads I, V5, and V6 are the three ECG leads considered closest to the left ventricle, and they may have more relevance to EF detection than other ECG leads. Also, the Lead I result shows higher accuracy (82%) than Lead II and Lead III, so this lead was also considered in our comprehensive method. The scores from these three leads were selected for our comprehensive scoring method. Four types of scoring methods were designed to obtain four diagnostic indices. The first one is the average value of the 12-lead output scores, named "12-lead with equal weighting"; the second index is the average of three crucial lead scores, Leads I, V5, and V6, named "Lead I, V5, and V6"; the third index is the average value of the Lead I and V6 scores, named "Lead I and V6"; and the fourth index is the average value of the V5 and V6 scores, named "V5 and V6". The output prediction scores from the neural network softmax layers were combined in this way and compared with a cutoff value of 0.5. If the resulting score is greater than or equal to 0.5, the ECG is classified as belonging to an individual without HF; otherwise, it is classified as systolic HF. Figure 3. Illustration of the research process employed in this study. The first step was to extract the ECG signal from the JPG images. The second step was to transform each single-lead ECG signal into CWT spectra. In the final step, the spectra were trained separately in 12 models for the 12 leads, and the softmax layer output scores (ranging from 0 to 1) were recorded and applied for the comprehensive scoring method. Four comprehensive scoring methods were considered, including one where equal weights were given to the 12 leads and ones based on the key leads close to the left ventricle (Leads I, V5, and V6). The receiver operating characteristic (ROC) curve was used as an evaluation method in this study. The ROC curve is a common analysis method for evaluating deep learning models. Using ROC curves, the graphical display of the true positive rate (y-axis) versus the false positive rate (x-axis) can be observed and compared directly. The area under the ROC curve (AUC) is equivalent to the probability that the classifier ranks a randomly chosen positive sample higher than a randomly chosen negative sample. The AUC value ranges between 0 and 1, and if the AUC value is in the range 0.5 < AUC < 1, the classifier has more effective predictive ability than random guesses 12 . Results Systolic HF prediction results.
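Before turning to the results, here is a minimal Keras sketch of the kind of network and scoring just described. The six convolution/max-pooling stages on 200 x 200 x 3 inputs, the softmax output, the Adam optimizer at a 1e-4 learning rate, and the binary cross-entropy loss follow the text; the filter counts, dropout rate, and the helper that averages per-lead scores against the 0.5 cutoff are our guesses (the paper's exact hyperparameters are in Tables S1 and S2).

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_lead_model(input_shape=(200, 200, 3)):
    """VGG-style 2D-CNN: 6 conv + 6 max-pool stages, flatten, softmax output.
    Filter counts and dropout rate are placeholders, not the paper's exact values."""
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for filters in (8, 16, 32, 32, 64, 64):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Dropout(0.2)(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(2, activation="softmax")(x)   # [systolic HF, normal EF]
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

def comprehensive_score(lead_scores, leads=None, cutoff=0.5):
    """Average the selected leads' softmax scores for the 'normal EF' class and
    threshold at 0.5, mirroring the comprehensive 12-lead scoring methods."""
    lead_scores = np.asarray(lead_scores)                # shape (12,), one score per lead
    selected = lead_scores if leads is None else lead_scores[list(leads)]
    score = selected.mean()
    return score, ("normal EF" if score >= cutoff else "systolic HF")

if __name__ == "__main__":
    model = build_lead_model()
    model.summary()
    # e.g. the "V5 and V6" method, with leads 0-indexed in the order I, II, III, aVR, aVL, aVF, V1..V6
    scores = np.random.rand(12)
    print(comprehensive_score(scores, leads=(10, 11)))
```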
The 12-lead ECG data of the 900 HF patients and the 900 individuals without HF were transformed into CWT spectra. The baseline data of individuals with and without systolic HF are listed in Table 1. Furthermore, Table 2 presents the characteristics of the training, validation, and testing sets. In the testing dataset (n = 186), the mean LVEF was 32.6 ± 3.4%, and 73 (39.2%) patients had myocardial infarction, 82 (44.1%) had hypertension, and 69 (37.1%) had diabetes. These values were similar to those in the training dataset (LVEF = 32.3 ± 4.6%, p = 0.25; myocardial infarction 36.7%, p = 0.16; hypertension 48.5%, p = 0.09; and diabetes 37.0%, p = 0.11). Figure 5 illustrates the original ECG data from the JPG image and the CWT spectra. The CWT spectra concentrate subtle linear ECG features into a 2D image, which can enhance specific features of HF for machine learning classification. Also, tenfold cross-validation was applied to our model and demonstrated that Lead V6 had the highest average accuracy of 89.07% (Table S3). Systolic HF prediction results for individual leads. A total of 1400 ECG training data and 186 ECG testing data were used in this study. The accuracy, sensitivity, specificity, and f1 score for the test dataset are reported in Table 3. The results for individual ECG leads were favorable for the classification of patients with systolic HF. Each lead had an accuracy ranging from 0.71 to 0.93. In particular, Lead V6 exhibited the highest accuracy (0.93), specificity (0.97), and f1 score (0.94). The full results are presented in Table 3A. The corresponding ROC curves are shown in Fig. 6. In Fig. 6A, the individual-lead 2D-CNN models reveal AUC values between 0.76 and 0.96. Figure 6B reveals that the four AUC values for comprehensive scoring are between 0.96 and 0.98; these are higher than for any individual lead. The purpose of the comprehensive 12-lead ECG scoring method is to obtain one precise diagnostic index for systolic HF classification from the 12 CNN models. Thus, the four comprehensive scoring methods were designed. The comprehensive scoring methods' accuracy, sensitivity, specificity, and f1 score are shown in Table 3B. The four comprehensive scoring methods are "12-leads with equal weighting", "Lead I, V5, and V6", "Lead I and V6", and "Lead V5 and V6". In Table 3B, "Lead V5 and V6" reveals the highest accuracy of 0.94, sensitivity of 0.97, specificity of 0.89, and an f1 score of 0.94. The four comprehensive scoring methods all showed higher accuracy, sensitivity, specificity, and f1 score than the average 12-lead ECG results. Discussion A pre-screening method for systolic HF was established in this study. However, prospective testing of this method is still needed. The novel comprehensive 12-lead ECG scoring method can achieve higher classification performance than individual leads. The individual leads had an average accuracy, sensitivity, and specificity of 0.75 and an f1 score of 0.83. However, the accuracy (0.94), specificity (0.97), sensitivity (0.89), and f1 score (0.94) all improved when the comprehensive 12-lead ECG scoring method was used. In the individual lead results, V6 had higher accuracy than the other 11 leads. The screening results revealed that V6 was the representative lead of the 12 leads for pre-screening patients for systolic HF. This might be because V6 is the lead physically closest to the left ventricle. In comprehensive scoring, the four methods all had high classification capability.
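For completeness, a short sketch (ours) of how the per-lead metrics and AUC values of Table 3 and Fig. 6 can be computed from softmax outputs with scikit-learn; y_true and y_score below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score, roc_curve

def summarize(y_true, y_score, cutoff=0.5):
    """y_true: 1 = normal EF, 0 = systolic HF; y_score: softmax score of the
    'normal EF' class.  Returns the metrics reported per lead in Table 3."""
    y_pred = (np.asarray(y_score) >= cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "f1":          f1_score(y_true, y_pred),
        "auc":         roc_auc_score(y_true, y_score),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, size=186)                       # placeholder test labels
    y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, 186), 0, 1)
    print(summarize(y_true, y_score))
    fpr, tpr, _ = roc_curve(y_true, y_score)                    # points of the ROC curve
```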
Among them, the "V5 and V6" method had better classification capability than the other comprehensive scoring methods. In the ROC curves, the comprehensive scoring methods exhibited significantly improved AUC compared with the AUCs of the 12 individual leads. Therefore, the single lead V6 and the comprehensive scoring method can be highly effective for screening patients for systolic HF. In previous EF prediction research, many studies used ECG features and physiological parameters to predict EF 13 . However, by restricting the ECG features used in the method, researchers may have ignored other important features. A previous study applied AI to predict EF using the entire 12-lead ECG signal as a matrix and feeding it into a 2D-CNN 14 . But the crucial ECG leads were not identified. In many ECG AI studies, researchers used 1D-CNNs to classify ECG signals, such as for atrial fibrillation classification 4,15 . We compared 1D- and 2D-CNNs in this study by establishing three simple 1D-CNN models and applying the 1D-CNN structures of two previous papers to our dataset 16,17 (Tables S4-S7). According to this comparison, our 2D-CNN shows the highest accuracy in predicting systolic heart failure compared with the three 1D-CNN models. Although such a result may be limited by the amount of training data, it still suggests that the 2D-CNN model may perform better than the 1D-CNN models when classifying using a single ECG compound. Another study applied ECG CWT in a 2D-CNN for atrial arrhythmia detection but also focused on only one lead 18 . These studies have revealed that ECG CWT is a powerful tool for identifying abnormal ECG features. In our research, all ECG leads were used to expand all ECG features for the 2D-CNN to enhance the systolic HF classification capability. A key lead, useful for pre-screening patients for systolic HF, was identified by separately training the 12 leads in identical 2D-CNN models. The key lead cannot be identified when the 12 leads are trained concurrently. Several ECG features have been used to assess LV function, and the presence of prolonged QRS duration is a strong marker of diminished LV systolic function 19 . Our study also supported these findings. Under our algorithm, widening of the QRS and a lower QRS amplitude imply a high probability of poor LVEF. As shown in supplemental Fig. 1, we also found other specific features that suggested poor contractility, including P-wave amplitude, T-wave amplitude, and ST interval, which are also possible indicators of poor LV contractility [20][21][22] . Our AI-assisted algorithm could combine those features and assess the possibility of an LVEF of < 50% with good accuracy. On the other hand, our algorithm aims to screen patients with reduced LVEF (LVEF < 50%) by using only the 12-lead ECG, which is low-cost and easily feasible. In many rural areas and developing countries, limited access to cardiology care and imaging can cause under-diagnosis and under-treatment of heart failure. Our algorithm, by converting the 12-lead ECG to 2D images rather than using raw data, provides a portable, inexpensive test for ventricular systolic dysfunction. The early diagnosis of left ventricular dysfunction could permit early institution of effective therapies, such as beta-blockers, angiotensin receptor antagonists, and implantable devices. Along with smartphone-enabled electrodes, a single-lead ECG could be acquired by using mobile applications.
Our algorithm could also be incorporated into those applications to assess ventricular function. Study limitation. The limitations of this research are the limited number of samples from patients with systolic HF and the noise present in the ECG data. For future studies to further improve screening accuracy, it is imperative to collect training data from hospitals to enhance the dataset continually. The new data can be used to improve our model's performance. Furthermore, ECG signals should be kept clear and noise-free to prevent ECG slicing issues. Also, the prevalence of disease in our study cohort does not reflect the prevalence in the general population. Thus, further research is needed to assess the utility of the given cutoffs in a general, ostensibly healthy population. Finally, our dataset excluded patients with heart failure symptoms but normal left ventricular systolic function. Therefore, heart failure patients with preserved LVEF may not be identified by our algorithm. Conclusion In this research, we showed that ECG CWT spectra can expand all ECG features for 2D-CNN classification. With the comprehensive 12-lead ECG scoring method, systolic HF screening achieved an accuracy of over 0.94 with "Lead V5 and V6" and an AUC of 0.98 with "Lead I and V6". In addition, we found that the V6 lead is vital for detecting systolic HF. Overall, this study provides an effective and accurate screening method for predicting cardiac contractile dysfunction using 12-lead ECG images.
4,225.2
2021-01-21T00:00:00.000
[ "Computer Science", "Medicine" ]
Separable mixing: the general formulation and a particular example focusing on mask efficiency The aim of this short note is twofold. We formulate the general Kermack-McKendrick epidemic model incorporating static heterogeneity and show how it simplifies to a scalar Renewal Equation (RE) when separable mixing is assumed. A key feature is that all information about the heterogeneity is encoded in one nonlinear real valued function of a real variable. Inspired by work of R. Pastor-Satorras and C. Castellano, we next investigate mask efficiency and demonstrate that it is straightforward to rederive from the RE their main conclusion, that the best way to protect the population as a whole is to protect yourself. Thus we establish that this conclusion is robust, in the sense that it also holds outside the world of network models. Introduction The work described below was triggered when the third author of the present paper attended the lecture of R. Pastor-Satorras during the 'Workshop on Epidemic Modelling: Current Challenges' in Girona, 19-21 June 2023.This lecture reported on the models, methods and results of the paper [9] and culminated in a powerful qualitative insight: masks that protect the wearer against infection are, also in public health perspective, more efficient than masks that, if the wearer is infectious, protect its contacts against infection.This conclusion is derived in the context of network models. Already for quite a while the present authors are working on the manuscript [1] which aims to provide a general survey of various effects of (mainly static) heterogeneity.A natural question arose: is it possible to sustain the qualitative insight by rederiving it in the context of homogeneous mixing models?As we show below, the methodology developed in our manuscript in preparation allows us to easily provide an affirmative answer! 2. Formulation of a comprehensive model for epidemic outbreaks in heterogeneous host populations By using the word 'outbreak', we imply that demographic turnover is ignored and that infection leads to permanent immunity.Host individuals are characterized by a trait x taking values in a set Ω. We assume that Ω is a measurable space, meaning that it comes equipped with a σ-algebra.We introduce a positive measure Φ on Ω to describe the distribution of the trait in the host population.We normalize Φ(Ω) = 1 and denote the host population size by N .For a concrete example see Section 4 below. 
A major restriction is that the trait of an individual does not change during the outbreak (so if the trait corresponds to age, the assumption is that the duration of the outbreak is so short, that we can ignore that individuals are becoming older while it lasts).Let s(t, x), with s(−∞, x) = 1, denote the probability that an individual with trait x is susceptible at time t.When the NUMBER of infected individuals is small, demographic stochasticity has a large impact and cannot be ignored.Our description starts when a small FRACTION of the very large host population is infected.With an informal appeal to the Law of Large Numbers, we then also interpret s(t, x) as the FRACTION of individuals with trait x that is susceptible at time t.It follows that with F the force of infection as a function of time and trait.In the spirit of [5] (for a reformulation in modern language see [2]) we introduce as the key modelling ingredient (2.2) A(τ, x, ξ) = the expected contribution to the force of infection on an individual with trait x of an individual with trait ξ that became infected τ units of time ago Here A is a measurable non-negative function mapping R + × Ω × Ω into R + and A is integrable with respect to (τ, ξ) over R + × Ω. The formula expresses the force of infection as a sum of contributions of individuals that were infected time τ ago while having trait ξ.By integrating (2.3) over time, interchanging the integrals, using the differentiated version of (2.1) to evaluate and inserting the result at the rhs of (2.1), we arrive at the nonlinear abstract Renewal Equation (RE) Equation (2.4) provides a concise representation of a rather general class of models.For quantitative work the discrete time variant introduced in [3] might be more suitable, especially when Ω is (or can be approximated, in some sense, by) a finite set, see [6,7,8] for steps in this direction. An alternative way to increase the tractability is to assume separable mixing, or, in other words, to assume that A is a product of a function of x and a function of (τ, ξ), reflecting that the properties of the susceptible individual and the infected individual have independent influence on the likelihood of an encounter and concomitant transmission.We shall go one step further, and assume that A is the product of three factors, the functions a(x), b(τ ) and c(ξ).So also the age of infection and the trait of the infected individual are assumed to have independent influence on transmission. 
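The displayed equations did not survive the conversion of this copy, so the following is a hedged LaTeX reconstruction of what (2.1), (2.3), and (2.4) presumably look like, written out from the verbal definitions above; the notation and prefactors (in particular the population size N) may differ from the original.

```latex
% (2.1): survival probability in terms of the force of infection F
s(t,x) = \exp\Bigl(-\int_{-\infty}^{t} F(\sigma,x)\,d\sigma\Bigr), \qquad s(-\infty,x)=1 .

% (2.3): the force of infection as a sum of contributions of earlier infections,
% the incidence at time t-\tau among trait \xi being -\partial_t s(t-\tau,\xi)\,N\,\Phi(d\xi)
F(t,x) = N \int_0^\infty \int_\Omega A(\tau,x,\xi)\,\bigl[-\partial_t s(t-\tau,\xi)\bigr]\,\Phi(d\xi)\,d\tau .

% (2.4): integrating (2.3) over time and inserting the result into (2.1) gives the
% nonlinear abstract Renewal Equation
s(t,x) = \exp\Bigl(-N \int_0^\infty \int_\Omega A(\tau,x,\xi)\,\bigl(1 - s(t-\tau,\xi)\bigr)\,\Phi(d\xi)\,d\tau\Bigr) .
```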
Separable mixing it follows straight away from (2.3) that the force of infection factorizes as a product of a(x) and an unknown function of time.The same holds for the cumulative force of infection and accordingly we put and find that w should satisfy the scalar nonlinear RE where Ψ : R → R is defined by In the 'trivial' case that both c and a are identically equal to one, all individuals have identical susceptibility as well as expected infectiousness, so, after all, there is no heterogeneity.In this case and (3.3) is the standard Kermack-McKendrick RE as, for instance, presented in [2].So (3.3) tells us how, in the separable mixing case, the various components of heterogeneity, viz., susceptibility a, infectiousness c and distribution Ψ, affect the nonlinearity in the RE.(Incidentally, in [4], it is shown how to efficiently derive compartmental models that incorporate heterogeneity, by choosing in (3.3) functions b that are a matrix exponential sandwiched between two vectors.)To investigate the initial phase of an outbreak, we linearize at the disease-free steady state w = 0, which amounts to replacing Ψ(w) by Ψ ′ (0)w .Inserting the trial solution w(t) = e λt we obtain the Euler-Lotka equation which has a unique positive solution λ = r whenever the Basic Reproduction Number R 0 , given by exceeds one.(The non-negativity of b guarantees that in the complex plane r is the right most root of (3.6); for R 0 < 1 there exists a solution r < 0 provided the rhs of (3.6) assumes, on the real axis, values greater than one; a sufficient condition for this to happen is that b has compact support).Note that The Herd Immunity Threshold (HIT) is, by definition, reached when w assumes the value w such that the reproduction number corresponding to the situation in which Ψ ′ (0) is replaced by Ψ ′ ( w) equals one (note that after reaching the HIT there might still be a high incidence, simply because the reservoir of already infected individuals generates a considerable force of infection; but the contents of the reservoir will gradually diminish once the HIT is reached).The HIT itself is defined as s, where s is the fraction of the population that is still susceptible when w assumes the value w.Hence For t → ∞, w tends to w(∞) characterized by and the fraction of the population that escapes is accordingly given by (3.12) Note that (3.11) implies that Ψ ′ (w(∞)) < Ψ ′ (0) R 0 (since Ψ(y) > yΨ ′ (y) for y > 0) and hence that w(∞) > w. In the next section we shall specialize the model ingredients Ω, Φ, a and c such that they reflect a situation in which a fraction f of the population wears (all the time) a mask and that wearing a mask reduces, potentially, both the susceptibility and the infectiousness. Efficiency of masks Consider a population in which a fraction f of the individuals wears a mask (whenever they are in a situation where they can come into contact with other individuals) while the complementary fraction 1 − f never wears a mask.To describe this distinction, we let Ω consist of two points, indicated by 1 and 2. We label the individuals that do not wear a mask 1 and those who do, we label 2. 
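The displayed formulas of this section are likewise missing here. The block below is a hedged reconstruction of the separable-mixing quantities from the surrounding prose (equation numbers approximate, normalization constants possibly different); it is at least internally consistent with the properties quoted in the text, e.g. Ψ(y) > yΨ′(y) for y > 0 indeed yields Ψ′(w(∞)) < Ψ′(0)/R_0, and specializing a = c = 1 recovers the standard Kermack-McKendrick renewal equation.

```latex
% separable mixing and the factorized cumulative force of infection
A(\tau,x,\xi) = a(x)\,b(\tau)\,c(\xi), \qquad s(t,x) = e^{-a(x)\,w(t)} .

% (3.3): scalar nonlinear Renewal Equation, with all heterogeneity encoded in \Psi
w(t) = \int_0^\infty b(\tau)\,\Psi\bigl(w(t-\tau)\bigr)\,d\tau, \qquad
\Psi(w) = N \int_\Omega c(\xi)\,\bigl(1 - e^{-a(\xi)w}\bigr)\,\Phi(d\xi) .

% (3.6)-(3.7): Euler-Lotka equation and Basic Reproduction Number
1 = \Psi'(0) \int_0^\infty b(\tau)\,e^{-\lambda\tau}\,d\tau, \qquad
R_0 = \Psi'(0) \int_0^\infty b(\tau)\,d\tau .

% (3.10): Herd Immunity Threshold and the fraction still susceptible at that moment
\Psi'(\hat{w}) = \frac{\Psi'(0)}{R_0}, \qquad
\bar{s} = \int_\Omega e^{-a(\xi)\hat{w}}\,\Phi(d\xi) .

% (3.11)-(3.12): final size
w(\infty) = \frac{R_0}{\Psi'(0)}\,\Psi\bigl(w(\infty)\bigr), \qquad
s(\infty) = \int_\Omega e^{-a(\xi)\,w(\infty)}\,\Phi(d\xi) .
```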
We specify: We assume that wearing a mask is not correlated with any property that has influence on the contact process (in principle one could imagine that the contact process is assortative, in the sense that mask wearers meet disproportionately often with other mask wearers; but by this assumption we explicitly exclude such effects).Accordingly, we adopt (3.1).Noting that this decomposition provides the freedom of incorporating multiplicative constants into the factor b, we normalize a and c by choosing: The values of a(2) and c(2) then describe the relative susceptibility and infectiousness of those who wear a mask.The idea that a mask offers protection is reflected in our assumption that these values lie in the interval [0, 1].The aim of our analysis is to investigate the influence of these values on the epidemic outbreak.Therefore we introduce parameters ϵ 1 and ϵ 2 and put: It follows that: In succession, we now consider the initial phase, the HIT and the final size, focusing on the (a)symmetry of the impact of the two parameters ϵ 1 and ϵ 2 .As (3.6) and (3.7) show, the crucial quantities for the initial phase are b(τ ) and Ψ ′ (0).From (4.5) we deduce: It follows that in the initial phase of an outbreak the two protection factors carry equal weight, in the sense that both the reproduction number R 0 and the Malthusian parameter r depend only on their product.Motivated by this observation, we shall keep the product constant, say when investigating the HIT and the final size. Proof. Define: G(w, ϵ 1 ) = (1 − f )e −w + ϵf e −ϵ 1 w , (4.8) then (3.10) can be rewritten as: Next observe that the expressions for s and for G( w, ϵ 1 ) differ only by a factor ϵ in the last term.To exploit this, we rewrite D 1 G d w □ We conclude that we should minimize ϵ 1 to maximize the susceptible fraction upon reaching the HIT or, in other words, we should maximize self protection.Theorem 4.2: Assume (4.7) with ϵ ∈ (0, 1).The fraction s(∞) that is still susceptible after the outbreak, defined in (3.12), is a decreasing function of ϵ 1 . Sketch of the proof: Define then (3.11) can be rewritten as the equation < 0 for x > 0 one can copy the reasoning in the proof of Theorem 4.1 concerning G to H and derive that both w(∞) and s(∞) are decreasing functions of ϵ 1 . From (3.12) we have Since w(∞) is a decreasing function of ϵ 1 the escape probability for those who do NOT wear a mask, represented by e −w(∞) , increases with ϵ 1 .From Theorem 4.2 it follows then that the escape probability of those who DO wear a mask, represented by e −ϵ 1 w(∞) , decreases strongly enough to make the overall per capita escape probability s(∞) decreasing as well. Stated otherwise, maximizing self protection by those who wear a face mask improves the escape probability for themselves (Figure 1a) and the population as a whole (Figure 2), but reduces the escape probability for those who do not wear a mask (Figure 1b). 
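To see Theorems 4.1 and 4.2 at work numerically, the sketch below (ours, not the authors' code) solves the herd-immunity-threshold condition and the final-size relation for the two-type mask model, using G(w, ε1) = (1 − f)e^{−w} + εf e^{−ε1 w} from (4.8) and the relations sketched above (so the exact equation forms are our assumption, with the population-size factor absorbed into R_0). Decreasing ε1 at fixed ε = ε1 ε2 should then raise both the susceptible fraction at the HIT and the overall escape fraction s(∞).

```python
import numpy as np
from scipy.optimize import brentq

def masks_outbreak(R0, f, eps, eps1):
    """Herd-immunity threshold and final size for the two-type mask model.
    Susceptibility reduction eps1, infectiousness reduction eps2 = eps/eps1,
    so the product eps = eps1*eps2 (and hence R0 and r) is held fixed.
    Uses G(w) = (1-f)e^{-w} + eps*f*e^{-eps1*w} (eq. (4.8)) and the final-size
    relation w = R0 * psi(w)/psi'(0); both are our reconstruction of
    (3.10)-(3.11), not code from the paper."""
    eps2 = eps / eps1
    G = lambda w: (1 - f) * np.exp(-w) + eps * f * np.exp(-eps1 * w)       # ~ -psi'(w)
    psi = lambda w: (1 - f) * (1 - np.exp(-w)) + eps2 * f * (1 - np.exp(-eps1 * w))
    # HIT: reproduction number with psi'(0) replaced by psi'(w_hat) equals one.
    w_hat = brentq(lambda w: G(w) - G(0.0) / R0, 1e-12, 100.0)
    s_hit = (1 - f) * np.exp(-w_hat) + f * np.exp(-eps1 * w_hat)
    # Final size: positive root of w = R0 * psi(w) / psi'(0), with psi'(0) = G(0).
    w_inf = brentq(lambda w: R0 * psi(w) / G(0.0) - w, 1e-6, 100.0)
    s_inf = (1 - f) * np.exp(-w_inf) + f * np.exp(-eps1 * w_inf)
    return s_hit, s_inf

if __name__ == "__main__":
    R0, f, eps = 2.5, 0.5, 0.25
    for eps1 in (0.9, 0.7, 0.5, 0.3):       # smaller eps1 = better self protection
        s_hit, s_inf = masks_outbreak(R0, f, eps, eps1)
        print(f"eps1={eps1:.1f}  susceptible at HIT={s_hit:.3f}  escape fraction={s_inf:.3f}")
```

Under these assumed relations, both printed quantities increase monotonically as ε1 drops from 0.9 to 0.3, in line with the statement that maximizing self protection is best for the population as a whole.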
The intuitive 'explanation' of the overall positive effect is that when infection of an individual is prevented, automatically the secondary infections that potentially are caused by this individual are prevented. In other words, self protection occurs one step earlier in a chain. Concluding remarks From a strictly medical point of view, the chief aim of vaccination is to protect individuals against disease. In a public health perspective, however, one is interested in the effect of vaccination on transmission. Vaccination may reduce both the probability to get infected during an encounter with an infectious individual and the infectiousness, should infection nevertheless occur. Both reductions help to lower the force of infection and thus to diminish the size of an outbreak. A mask is not that different from a vaccine, it too reduces both susceptibility and infectiousness. Different constructions may be more efficient in one or the other of these reductions, see [9]. This then leads to the question of what one should strive for. In [9] a clear conclusion is reached in the context of a SIR configuration network model (with 'random' distribution of the masks, i.e., with a form of proportionate mixing): if one keeps the product of the two reduction factors constant, one should maximize the reduction of susceptibility in order to achieve a maximal reduction of the final size. Here we checked that the same conclusion obtains when one allows, in Kermack-McKendrick spirit, for expected infectiousness described by a general function of time elapsed since exposure and for proportionate mixing of those who do and those who do not wear a mask. A secondary objective of the present paper is to demonstrate the effectiveness of a top down approach. Before we became aware of [9], we had already formulated a rather general model of an outbreak in a host population with static heterogeneity and we had studied the simplification that derives from assuming proportionate mixing. Thus the present study became, essentially, a fill in exercise. Use of AI tools declaration The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Figure 1. Improvement factor of escape probability for type 2 individuals, who always wear a mask, and for type 1 individuals, who never wear a mask. We find the escape probabilities s(∞, 1) and s(∞, 2) by first numerically solving equation (3.11). Then we compute the improvement factor of the escape probability by dividing the escape probability in a population with masks (for fraction f) by the escape probability in a maskless population. Curves are shown for two choices of R_0 (no mask) and two choices of ϵ, where R_0 (no mask) is the basic reproduction number in a maskless population. Note that ϵ_2 = ϵ/ϵ_1 as assumed in (4.7). Figure 2. Improvement factor of escape probability for the population as a whole. We find the escape probability s(∞) for the population as a whole by first numerically solving equation (3.11). The increase in escape probability is then computed by dividing the escape probability in a population with masks (for fraction f) by the escape probability in a maskless population. Curves are shown for two choices of ϵ. In addition we show in Figure 2a the impact of different choices of R_0 (no mask), the basic reproduction number in a maskless population, while in Figure 2b we show the impact of different choices of f. Note that ϵ_2 = ϵ/ϵ_1 as assumed in (4.7).
3,740.8
2023-07-31T00:00:00.000
[ "Medicine", "Mathematics" ]
A Low-Cost AI-Empowered Stethoscope and a Lightweight Model for Detecting Cardiac and Respiratory Diseases from Lung and Heart Auscultation Sounds Cardiac and respiratory diseases are the primary causes of health problems. If we can automate anomalous heart and lung sound diagnosis, we can improve the early detection of disease and enable the screening of a wider population than possible with manual screening. We propose a lightweight yet powerful model for simultaneous lung and heart sound diagnosis, which is deployable in an embedded low-cost device and is valuable in remote areas or developing countries where Internet access may not be available. We trained and tested the proposed model with the ICBHI and the Yaseen datasets. The experimental results showed that our 11-class prediction model could achieve 99.94% accuracy, 99.84% precision, 99.89% specificity, 99.66% sensitivity, and 99.72% F1 score. We designed a digital stethoscope (around USD 5) and connected it to a low-cost, single-board-computer Raspberry Pi Zero 2W (around USD 20), on which our pretrained model can be smoothly run. This AI-empowered digital stethoscope is beneficial for anyone in the medical field, as it can automatically provide diagnostic results and produce digital audio records for further analysis. Introduction According to a report of the WHO, cardiac and respiratory diseases are the primary causes of health problems, leading to the death of millions of people annually worldwide [1]. Early detection is the key factor in enhancing the effectiveness of intervention [1]. The stethoscope is a low-cost, yet efficient, auscultation device, which allows the assessment of cardiac and respiratory status through the evaluation of respiratory rate and effort, respiratory sounds, heart sounds, and heart rhythm [2]. However, auscultation heavily relies on a physician's experience, which is a highly subjective process [3]. Sometimes, sound signals are highly complicated when detecting various heart or lung diseases [4]. Previous studies have reported the ambiguous identification and interpretation of sounds in auscultation as a generic issue in the clinical setting [5], which should not be neglected, as it may lead to inaccurate diagnosis and mistreatment [6]. Indeed, the European Respiratory Society, International Lung Sounds Association, and American Thoracic Society call for the standardization of the nomenclature of auscultation sounds [7]. Recently, there have been significant recent advances in applying deep learning analytics in interpreting human body sounds for clinical purposes [8]. If we can develop automated methods to detect such anomalous sounds, it will improve the early detection of disease and enable the screening of a wider population than possible with manual screening [9]. However, most previous studies have focused on separately training an independent model for lung or heart sound diagnosis. It is important to have a model that can simultaneously detect abnormal lung and heart sounds given that cardiac and respiratory diseases have many common features, such as cough, tachypnoea, dyspnoea, syncope, and cyanosis, which can make diagnosis problematic [3]; additionally, it is a common clinical procedure for physicians and nurses to conduct a holistic assessment of the respiratory and cardiac system. 
In addition, there is a need to develop a lightweight yet powerful model for lung and heart sound diagnosis that is deployable in an embedded low-cost device and is valuable in remote areas or developing countries where Internet access may not be available. Moreover, as COVID-19 sweeps the globe, an AI-empowered electronic stethoscope can lower the infection risk in medical workers. Therefore, it is imperative to leverage artificial intelligence to assist physicians and nurses in remotely and accurately conducting auscultation. Our study aimed to fill in the gaps by contributing the following: Firstly, we developed a hybrid model that harnesses the power of CNN and best discrepancy forest (BDF, a variant of random forest) to classify cardiac and respiratory dysfunction. The experiments showed that the hybrid model outperforms the state-of-the-art methods. In addition, we designed a cost-effective digital stethoscope, which we connected to a low-cost Raspberry Pi Zero 2w single-board computer. The experiments showed that our proposed pretrained model ran smoothly on the computer. The cost of this AI-empowered stethoscope is so low (around USD 25) that it can be widely used in developing countries. This article is organized as follows: Section 2 summarizes the related studies that have applied deep learning or machine learning techniques to classify lung or heart sounds. Section 3 describes our proposed model and the datasets. Section 4 shows the results and a evaluation of our model. Finally, Section 5 presents the conclusions and future work. Heart Sound Diagnosis Artificial intelligence techniques have long been used to identify and classify heart diseases. Early works focused on traditional machine learning methods, for example, the naïve-Bayes-based electrocardiogram grating method proposed by Cheema and Singh [10], the SVM-based ventricular septal defects diagnosis method proposed by Sun et al. [11], and the rule-based classification tree proposed by Karar et al. [12]. Most of these machine learning methods achieved satisfactory accuracy in abnormal heart sound detection (on average, around 94%). Later on, people proposed neural network methods, for example, [13,14]. The average accuracy was around 90%. Recently, CNN-based methods have been built. For example, Deperlioglu [15] proposed an eight-layer CNN and achieved an accuracy of 97.90%. The ensemble CNN developed by Noman et al. achieved an accuracy of 89.22%. Yaseen et al. [16] used the mel frequency cepstral coefficient (MFCC) and discrete wavelet transform (DWT) to extract the features from heart sound signals and proposed a hybrid SVM and DNN model. Their model achieved an accuracy of more than 97%. Finally, Alqudah et al. [17] developed a new methodology using bispectrum higher-order spectral analysis and the CNN classification algorithm, which achieved an accuracy of 98.70%. Lung Sound Diagnosis Some of the pioneers of automatic lung sound diagnosis were Rocha et al. [18]. They extracted sound features (i.e. wheezes, crackles, or both) and then used machine learning models to perform classifications. However, there are two challenges in the field. The first one is that lung sound data are rare and their distribution is usually skewed across different classes. The second challenge is extracting useful features from soft breath sounds. For the first challenge, recent studies (for example, Mikolajczyk et al. [19], Nguyen et al. 
[20], and Lella [21]) have used data augmentation techniques, which not only add more training data to the model while resolving class imbalance issues but also improve model prediction accuracy and generalization ability. For example, Bardou et al. [22] achieved a high classification accuracy of approximately 97% with a large CNN model. For the second challenge, previous studies have also developed different feature extraction techniques. For example, Demir et al. [23] converted lung sounds to spectrogram images using the short-time Fourier transform method. Hai et al. used an optimized S-transform method. Shuvo et al. [24] used empirical mode decomposition and the continuous wavelet transform. Finally, previous studies have employed various classifiers ranging from machine learning methods (e.g., kNN, SVM, decision tree, and LDA, see [25]) to deep learning methods (e.g., CNN, CRNN, and ResNet) (see [26,27]). Materials and Methods Although previous deep learning models have achieved satisfactory performance, there is room for improvement. Firstly, none of these models can simultaneously treat heart and lung sound data. Secondly, these models are too large to be deployed to embedded devices. Finally, the classification performance of these models can be further increased with a new classifier. Motivated by the above-mentioned factors, we developed a lightweight hybrid model that leverages the power of CNN and ensemble learning. The experiments showed that the hybrid model is capable not only of diagnosing 11 types of heart and lung diseases with satisfactory performance but also of being deployed on a low-cost single-board computer. The proposed methodology consists of multiple steps: Step 1: Both heart and lung sound data were acquired from two publicly available databases. Step 2: The data were preprocessed (band-pass filtered, resampled, truncated, and normalized). Step 3: The data were augmented to achieve balanced classes. Step 4: The data were transformed into 2D bispectrum images. Step 5: A lightweight hybrid model was developed, which consists of a CNN model and a forest-based classifier. Step 6: The image dataset was randomly split into two subsets: 80% as the training data and 20% as the test data. Step 7: The hybrid model was trained with the training data. Step 8: The hybrid model was tested with the test data, and multiple classification performance indicators were calculated. Step 9: The hybrid model was deployed on a Raspberry Pi Zero 2W single-board computer, which was connected to a digital stethoscope. Dataset We employed two publicly available datasets that are widely used as benchmark datasets for lung or heart sound diagnosis. The lung sound dataset used is the International Conference on Biomedical Health Informatics (ICBHI) 2017 dataset [28]. The dataset was independently collected from 126 subjects in Greece and Portugal. It contains 5.5 h of audio recordings sampled at different frequencies (4 kHz, 10 kHz, and 44.1 kHz). The length of the recordings ranges from 10 s to 90 s. The respiratory sounds are professionally annotated, taking the following conditions into account: the subject's pathological condition and the presence of respiratory anomalies (i.e., crackles and wheezes) in each respiratory cycle. The ICBHI samples include six classes: healthy (H), pneumonia (P), chronic obstructive pulmonary disease (COPD), bronchiolitis (BO), bronchiectasis (BA), and upper respiratory tract infection (URTI). In addition, the heart sound dataset we used is the one provided by Yaseen et al. [16]. 
It contains 1000 sound records that are evenly distributed in five main categories (i.e., 200 records per category): normal (N), aortic stenosis (AS), mitral stenosis (MS), mitral regurgitation (MR), and mitral valve prolapse (MVP). The heart sound records were collected from different sources, resampled to an 8000 Hz sampling rate, and finally converted to a mono channel. Data Preprocessing We followed the best practices in previous studies to preprocess the two sound datasets. As mentioned in previous studies, auscultation signals generally reside in the frequency range of 25-400 Hz [9]. Every data file (i.e., signal sequence) in both the lung and heart sound datasets was first processed with a 2nd-order Butterworth bandpass filter with lower and upper cut-off frequencies of 25 and 400 Hz, respectively. Then, all the sample audio signals were resampled at 1000 Hz to ensure consistency while lowering the computational cost [8]. Next, every sound signal sequence was truncated to 2.5 s (i.e., the first 2500 data points, see [17,29,30]). Every signal sequence was normalized to (−1,1) in order to reduce the effect of device/sensor variation [27]. Finally, we followed a previous approach [31] to employ a variational autoencoder (VAE) to solve the problem of imbalanced classes in the original datasets. The VAE used the mean and standard deviation layers to sample the latent vector (see Figure 1). The distribution of classes before and after data augmentation is presented in Table 1. Finally, the augmented dataset was used for our experiments. Data Augmentation The lung sound dataset is imbalanced, as one class label (i.e., COPD) has a very high number of observations and the other classes have very low numbers of observations. In addition, the heart sound dataset is relatively small (i.e., 200 samples per class). If both datasets are merged into a single one, then the distribution of the classes would be highly skewed. The performance of a deep learning model particularly depends on the quality, quantity, and relevance of the training data. Given that collecting new lung and heart sound data is an exhausting and costly process, we leveraged data augmentation to make our proposed model more robust. We followed a previous approach [31] to employ a variational autoencoder (VAE) to solve the problem of imbalanced classes in the original datasets. The VAE uses mean and standard deviation layers to sample the latent vector (see Figure 1; for more details, see [31]). After data augmentation, the total number of samples was increased from 1917 to 8067. The distribution of classes before and after data augmentation is presented in Table 1. The two datasets were then combined into one with 11 evenly distributed classes. Image Generation Given that sound signals are nonstationary and non-Gaussian in nature, the bispectrum is one of the most widely used higher-order spectral analysis methods to generate images from sound [30]. The bispectrum quantifies the degree of quadratic phase coupling and nonlinear interactions in nonstationary signals. A previous study [17] showed that the accuracy of models based on full 2D bispectrum images is significantly higher than that of those based on contours. Therefore, we followed prior studies [29,32] and defined the bispectrum of a sound signal as the two-dimensional Fourier transform of the third-order cumulants of the signal. 
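The following is a minimal sketch of this preprocessing and image-generation pipeline in Python, assuming NumPy, SciPy, and the soundfile library are available. The helper names (preprocess_record, bispectrum_image) are illustrative, and the direct FFT-based bispectrum estimator shown here (segment length, averaging, and log scaling) is one reasonable choice rather than the exact estimator used in the paper.

```python
import numpy as np
import soundfile as sf
from scipy.signal import butter, filtfilt, resample

def preprocess_record(path, band=(25.0, 400.0), target_sr=1000, n_points=2500):
    """Band-pass filter, resample, truncate, and normalize one auscultation record."""
    signal, sr = sf.read(path)
    if signal.ndim > 1:                          # collapse stereo recordings to mono
        signal = signal.mean(axis=1)

    # 2nd-order Butterworth band-pass with 25-400 Hz cut-offs (Section 3.2)
    low, high = band[0] / (sr / 2), band[1] / (sr / 2)
    b, a = butter(N=2, Wn=[low, high], btype="bandpass")
    filtered = filtfilt(b, a, signal)

    # Resample to 1000 Hz for consistency and lower computational cost
    resampled = resample(filtered, int(len(filtered) * target_sr / sr))

    # Keep the first 2.5 s (2500 points at 1000 Hz); zero-pad shorter records
    truncated = np.zeros(n_points)
    truncated[: min(n_points, len(resampled))] = resampled[:n_points]

    # Normalize to (-1, 1) to reduce device/sensor variation
    peak = np.max(np.abs(truncated))
    return truncated / peak if peak > 0 else truncated

def bispectrum_image(signal, nfft=256):
    """Direct (FFT-based) bispectrum magnitude estimate rendered as an nfft x nfft image."""
    n_segments = max(1, len(signal) // nfft)
    acc = np.zeros((nfft, nfft), dtype=complex)
    for seg in np.array_split(signal, n_segments):
        seg = np.pad(seg, (0, max(0, nfft - len(seg))))[:nfft]
        seg = seg - seg.mean()
        X = np.fft.fft(seg, nfft)
        idx = (np.arange(nfft)[:, None] + np.arange(nfft)[None, :]) % nfft
        # B(f1, f2) = E[ X(f1) * X(f2) * conj(X(f1 + f2)) ]
        acc += np.outer(X, X) * np.conj(X[idx])
    mag = np.log1p(np.abs(acc) / n_segments)     # compress dynamic range for imaging
    return (255 * mag / mag.max()).astype(np.uint8)
```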
In this way, the bispectrum expresses the nature of a sound record as an image from which the most representative features for each class can be extracted [33,34]. We computed the full 2D bispectrum images of all sound records after data augmentation. The resultant images, each of which was 256 × 256 pixels, were stored in an image database with their class labels. We demonstrate a few samples of the 11 classes in Figure 2. The image database is publicly available on GitHub at https://github.com/DataScienceSDU/Heart-Lung-Sound (accessed on 21 January 2023). Figure 3 illustrates how a sound record is transformed into an image. Model Proposition Building on the work of Tariq et al. [8,35], we developed a lightweight hybrid model by adjusting the network structure and parameters and by replacing the last fully connected layer with a best discrepancy forest classifier. Figure 3 illustrates the architecture of the hybrid model. The hybrid model consists of two parts. The first part is a 2D CNN structure for feature extraction. The 2D CNN is composed of six layers, as shown in Table 2: • The input layer is set to 256 × 256. • The first 2D convolutional layer takes the bispectrum as the input with 24 filters. The kernel size is set to 5 × 5 with a stride of 4 × 2 and with ReLU as the activation function. • The second 2D convolutional layer has 48 filters. The kernel size is set to 5 × 5 with a stride of 1 × 1. • Thirdly, a 2D max-pooling layer is set up with a 4 × 2 kernel and a 4 × 2 stride. • Finally, a 2D convolutional layer is set up with 16 filters. The kernel size is set to 3 × 3 with a stride of 1 × 1. As shown in Table 2, only 36,400 parameters in our proposed CNN needed to be estimated. This is much smaller than the networks proposed in previous studies [8,35]. We intended to keep the CNN relatively small so that our proposed model can be deployed in embedded devices that are typically computational-resource-constrained. After the high-level features are extracted through the convolutional and pooling operations, the output feature maps are transformed into a 1D vector and transferred to a fully connected layer with 64 neurons. The fully connected layer is connected to the second part of the hybrid model, a best discrepancy forest (BDF) classifier, which is a variant of random forest (RF), with 500 trees [36]. Like RF, BDF combines bagging and random selection of features in order to construct a collection of decision trees (i.e., 500 trees in this study) with controlled variance. Bagging means "bootstrap aggregating". Given a training dataset D with N observations, bagging generates m new training sets D_i, each of size n, where n < N, by sampling from D randomly (RF) or systematically (BDF) and with replacement [36]. Each new training set D_i is used to train a single decision tree. The only difference between the BDF and the RF is the way in which the n observations are selected. The RF uses the simple random sampling technique with replacement, while the BDF uses the systematic sampling technique with replacement [37]. Systematic sampling has been shown mathematically and empirically [37] to ensure that the distribution of the n selected observations within each new training set D_i is similar to that of the whole dataset D (please refer to [37] for the mathematical proof and empirical experimental results on 160 datasets). Our proposed model uses a BDF of 500 trees. 
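To make the layer description above concrete, below is a sketch of the feature-extraction CNN in TensorFlow/Keras, following the filter counts, kernel sizes, and strides listed in Table 2. Padding, the activations of the later convolutional layers, and other details are not fully specified in the text, so they are assumed here and the resulting parameter count may not match Table 2 exactly; appending a Dense(11, softmax) head to this network yields the pure-CNN baseline discussed later.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_feature_extractor(input_shape=(256, 256, 1), feature_dim=64):
    """CNN feature extractor following the layer description in Table 2."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # First conv layer: 24 filters, 5x5 kernel, 4x2 stride, ReLU activation
        layers.Conv2D(24, kernel_size=(5, 5), strides=(4, 2), activation="relu"),
        # Second conv layer: 48 filters, 5x5 kernel, 1x1 stride
        layers.Conv2D(48, kernel_size=(5, 5), strides=(1, 1), activation="relu"),
        # Max pooling with a 4x2 kernel and a 4x2 stride
        layers.MaxPooling2D(pool_size=(4, 2), strides=(4, 2)),
        # Final conv layer: 16 filters, 3x3 kernel, 1x1 stride
        layers.Conv2D(16, kernel_size=(3, 3), strides=(1, 1), activation="relu"),
        layers.Flatten(),
        # 64-neuron fully connected layer whose activations feed the BDF classifier
        layers.Dense(feature_dim, activation="relu"),
    ])
    return model
```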
In the hybrid model, the 64 neurons of the CNN part serve as the input to the BDF classifier. By constructing 500 trees to form a "forest", the predictions of all trees are aggregated and the most popular classification result is selected. The proposed model is illustrated in Figure 3. Model Training and Testing We followed the hold-out method by setting a random seed and then randomly splitting the image data generated in Section 3.4 into a training dataset (80% or 6453 images) and a test dataset (20% or 1614 images). The proposed model was first trained and then tested with the corresponding datasets on a workstation with an NVIDIA Tesla T4 GPU card with 16 GB of display memory. All the systems were implemented using TensorFlow 2, using the Adam optimizer with 100 epochs, a mini-batch size of 128, and cross-entropy loss (a sketch of this hold-out training procedure is given below). Then, the trained model was tested with the test dataset. The results in Table 3 show that, on average, our 11-class prediction hybrid model achieved 99.97% accuracy, 99.89% F1 score, 99.90% precision, 99.99% specificity, and 99.88% sensitivity. The confusion matrix shown in Table 4 indicates that among the 1614 testing samples, the hybrid model wrongly classified only two samples. To make sure that the results were not achieved by accident, we conducted a robustness check and re-evaluated the proposed model with 10-fold cross-validation. That is, the image data were randomly split into ten partitions. We used nine of those partitions for training and reserved the tenth for testing. We repeated this procedure ten times, each time reserving a different tenth for testing. The means of all performance indicators are summarized in Table 3. We concluded that the results of the 10-fold cross-validation were highly similar to those of the hold-out validation (i.e., 99.94% accuracy, 99.72% F1 score, 99.84% precision, 99.89% specificity, and 99.66% sensitivity). Finally, in order to verify whether the BDF classifier is really effective, we retrained and retested a pure CNN model without the BDF classifier. That is, the last layer of 64 neurons was directly connected to 11 classes. The pure CNN model was first trained and tested with the same hold-out datasets (i.e., 80% or 6453 images for training and 20% or 1614 images for testing). The results in Table 3 show that, on average, the 11-class pure CNN model achieved 99.81% accuracy, 99.01% F1 score, 99.13% precision, 99.89% specificity, and 98.92% sensitivity. The confusion matrix shown in Table 4 indicates that, among the 1614 testing samples, the pure CNN model wrongly classified 17 samples. The pure CNN was also trained and tested with 10-fold cross-validation. The results summarized in Table 3 indicate that it achieved 99.53% accuracy, 97.46% F1 score, 97.68% precision, 99.74% specificity, and 97.33% sensitivity. We also concluded that the hybrid model (i.e., CNN+BDF) outperformed the pure CNN model, especially in terms of F1 score, precision, and sensitivity. (For comparison, the heart-sound models of Yaseen et al. [16], Glosh et al. [38], and Alqudah et al. [29] reported accuracy/sensitivity of 97.90%/94.50%, 98.33%/98.33%, and 98.70%/98.70%, respectively.) Model Deployment with Edge Computing We aimed to develop a lightweight yet powerful model that can assist practitioners in conducting lung and heart sound diagnoses with ordinary or digital stethoscopes. If the proposed deep learning models could be directly deployed on edge devices, then it would be possible to perform automated diagnosis and health care services at a distance. 
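Before turning to the hardware, here is a minimal sketch of the hold-out training procedure referenced above, assuming scikit-learn and TensorFlow are installed. Because the best discrepancy forest is not part of standard libraries, a scikit-learn RandomForestClassifier with 500 trees is used here purely as a stand-in for the BDF; the optimizer, epoch count, batch size, and loss follow the text, while function and variable names are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from tensorflow.keras import layers, models, optimizers

def train_hybrid(images, labels, feature_extractor, n_classes=11, seed=42):
    """80/20 hold-out training of the CNN + forest hybrid (RF stands in for the BDF)."""
    x_tr, x_te, y_tr, y_te = train_test_split(
        images, labels, test_size=0.2, random_state=seed, stratify=labels)

    # Train the CNN end to end with a temporary softmax head
    clf_head = models.Sequential(
        [feature_extractor, layers.Dense(n_classes, activation="softmax")])
    clf_head.compile(optimizer=optimizers.Adam(),
                     loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    clf_head.fit(x_tr, y_tr, epochs=100, batch_size=128, verbose=0)

    # Use the 64-dimensional activations as input to a 500-tree forest classifier
    feats_tr = feature_extractor.predict(x_tr, verbose=0)
    feats_te = feature_extractor.predict(x_te, verbose=0)
    forest = RandomForestClassifier(n_estimators=500, random_state=seed)
    forest.fit(feats_tr, y_tr)

    accuracy = forest.score(feats_te, y_te)
    return forest, accuracy
```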
We transformed an ordinary stethoscope into a digital one as follows: We first cut the tube that connects to the disc-shaped resonator in half to fit an electret condenser microphone (CMC-9745-44P). The microphone captured the signals of lung or heart sounds. The signals were then amplified through an op-amp-based amplifier (NE5532N). Then, the signals were converted into digital ones through an analog-to-digital converter (CM108B). Finally, the converter was connected to a Raspberry Pi Zero 2W (but could be connected to any other single-board/low-cost computer) through a micro-USB port. An SPI LED screen was connected to the Raspberry Pi for displaying the diagnosis result. The schematic diagram of the digital stethoscope is shown in Figure 4, and the bill of materials (BOM) is listed in Table 5. We now demonstrate how to deploy the pretrained hybrid model onto a Raspberry Pi computer to make inferences. Firstly, given that the proposed hybrid model has two parts (i.e., a CNN part to extract high-level features and a BDF classifier), we converted the pretrained CNN model into a TFLite model through TensorFlow's TFLiteConverter module and saved the pretrained BDF model through the Joblib module. Then, we installed the relevant packages (i.e., TFLite Runtime, scikit-learn, soundfile, and libsndfile) on the Raspberry Pi computer. We used the arecord command to record a 15 s sound through the USB-connected digital stethoscope and saved the sound signals as a .wav file. We used the soundfile library to convert the .wav file into a NumPy array and then conducted the preprocessing analysis specified in Section 3.2. The resulting normalized signal was then converted to a 2D bispectrum image (i.e., a 256 × 256 matrix) using the method specified in Section 3.4. The image was fed into the pretrained CNN TFLite model to extract high-level features and then into the pretrained BDF for classification. The classification result was shown on the SPI LED screen (a sketch of this on-device inference loop is given below). We conducted 10 experiments on a Raspberry Pi Zero 2W. On average, the whole inference process required around nine seconds and consumed around 27.79% of the 512 MB of memory. Discussion This study contributes to the literature in the following ways: Firstly, we developed a new hybrid model that can simultaneously detect lung and heart diseases. Note that classification problems with many classes and imbalanced datasets present more of a challenge than problems with fewer classes. The experiments showed that our proposed hybrid model, which deals with 11 classes, can achieve better performance than other relevant models that deal with fewer classes using the same two datasets (see Table 3). For example, with the five-class heart sound dataset, Yaseen et al. [16] achieved 97.90% accuracy and 94.50% sensitivity; Glosh et al. [38] achieved 98.33% accuracy and sensitivity; and Alqudah et al. [29] achieved 98.70% accuracy and sensitivity. Secondly, our findings confirm those of prior studies [39][40][41] that ensemble learning classifiers (i.e., BDF in this study) can alleviate the over-fitting problem because, on the one hand, ensemble learning maximizes diversity through the random selection of high-level input features extracted from the CNN part of our hybrid model; on the other hand, the bootstrap bagging mechanism can increase the strength of multiple decision trees and improve classification performance [36]. Finally, our proposed hybrid model is capable of being deployed in a low-cost single-board computer. 
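Below is a minimal sketch of that on-device inference loop, assuming the tflite_runtime, joblib, and soundfile packages are installed on the Raspberry Pi and reusing the preprocess_record and bispectrum_image helpers sketched earlier. The model file names and the ALSA device string passed to arecord are placeholders that depend on the actual setup.

```python
import subprocess
import joblib
import numpy as np
from tflite_runtime.interpreter import Interpreter

CNN_TFLITE = "cnn_feature_extractor.tflite"   # hypothetical exported TFLite model
BDF_JOBLIB = "bdf_classifier.joblib"          # hypothetical saved forest classifier

def record_and_classify(duration_s=15, wav_path="auscultation.wav"):
    """Record from the USB stethoscope, preprocess, and classify on-device."""
    # Record a 15 s clip from the USB sound card (device name depends on the setup)
    subprocess.run(["arecord", "-D", "plughw:1,0", "-d", str(duration_s),
                    "-f", "S16_LE", "-r", "44100", wav_path], check=True)

    # Preprocess and convert to a 256 x 256 bispectrum image (helpers sketched earlier)
    image = bispectrum_image(preprocess_record(wav_path))
    x = image.astype(np.float32)[None, :, :, None] / 255.0

    # Run the TFLite feature extractor to obtain the 64-dimensional feature vector
    interpreter = Interpreter(model_path=CNN_TFLITE)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    features = interpreter.get_tensor(out["index"])

    # Classify the features with the saved forest model and return the predicted class
    forest = joblib.load(BDF_JOBLIB)
    return forest.predict(features)[0]
```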
Connecting the computer to a digital stethoscope through a micro-USB port can make automation in lung and heart disease diagnosis possible. Our work supports the claim that AI systems have the potential to improve diagnostic efficiency while reducing human errors in medicine [42]. Our findings also reveal that harnessing the power of edge computing can transform the healthcare field, as it has most other industries, offering unprecedented opportunities to improve patient and clinical outcomes, decrease costs, and so on. However, our study has a limitation. Although our proposed hybrid model has achieved satisfactory results in rigorous cross-validation experiments, we have not tested it in hospitals. This is because the cost of large-scale clinical data collection is relatively high, and it was difficult for us to work with patients under the high-risk conditions of COVID-19 infection. In the future, we hope to obtain additional research funding to produce numerous digital stethoscopes and to collaborate with hospital staff on new data acquisition and testing of our model. Conclusions Since its earliest appearance in 1816 as an impromptu paper cone rolled by Dr. René Laennec, designs for the stethoscope have converged on the familiar configuration: a chest piece, a pair of earpieces, and a tube or tubes connecting them. The stethoscope continues to play an important role in the digital age. In this study, we developed a hybrid model that harnesses the power of a CNN and a forest-based classifier. The experiments confirmed the superiority of the proposed model, which not only achieves satisfactory performance but is also lightweight enough to be deployed on a low-cost single-board computer to form a digital stethoscope. Therefore, our study and the AI-empowered stethoscope solution are particularly important for people in remote areas and developing countries or practitioners involved in humanitarian relief.
5,723.2
2023-02-26T00:00:00.000
[ "Computer Science", "Medicine" ]
IL-6 and Mouse Oocyte Spindle Interleukin 6 (IL-6) is considered a major indicator of the acute-phase inflammatory response. Endometriosis and pelvic inflammation, diseases that manifest elevated levels of IL-6, are commonly associated with higher rates of infertility. However, the mechanistic link between elevated levels of IL-6 and poor oocyte quality is still unclear. In this work, we explored the direct role of this cytokine as a possible mediator of impaired oocyte spindle and chromosomal structure, which is a critical hurdle in the management of infertility. Metaphase-II mouse oocytes were exposed to recombinant mouse IL-6 (50, 100 and 200 ng/mL) for 30 minutes and subjected to indirect immunofluorescent staining to identify alterations in the microtubule and chromosomal alignment compared to untreated controls. The deterioration in microtubule and chromosomal alignment was evaluated utilizing both fluorescence and confocal microscopy, and was quantitated with a previously reported scoring system. Our results showed that IL-6 caused a dose-dependent deterioration in microtubule and chromosomal alignment in the treated oocytes as compared to the untreated group. Indeed, IL-6 at a concentration as low as 50 ng/mL caused deterioration of the spindle structure in 60% of the oocytes, which increased significantly (P<0.0001) as the IL-6 concentration was increased. In conclusion, elevated levels of IL-6 associated with endometriosis and pelvic inflammation may reduce the fertilizing capacity of the human oocyte through a mechanism that involves impairment of the microtubule and chromosomal structure. Introduction Interleukin-6 (IL-6) is a pleiotropic cytokine which is known to activate acute-phase proteins and maintain chronic inflammatory states ranging from cardiovascular diseases to infertility [1]. For example, IL-6 has been known to be involved in the pathogenesis of small vessel diseases, to accelerate joint inflammation and destruction in rheumatoid arthritis, and also to contribute to neurovascular diseases such as dementia [2,3,4]. This proinflammatory cytokine is known to be expressed, and reactive oxygen species (ROS) are known to be elevated, in patients with chronic pain syndromes, acute pelvic inflammatory disease, idiopathic infertility and patients with endometriosis [5,6,7,8,9]. In oocytes, as in other cells, ROS play an important role as regulatory mediators of intracellular signaling responsible for cellular functions in biological systems [10]. In contrast, under pathological conditions, substantially elevated ROS are hazardous, resulting in mutations, inactivation or loss of mitochondrial DNA, and synthesis and accumulation of oxidized proteins. Similarly, oxidative stress is known to affect the membrane lipid composition, decrease the concentrations of antioxidants, and increase cytosolic Ca2+, which hampers cellular function [11,12,13,14]. Recently, we demonstrated that oocytes exposed to increased concentrations of ROS such as superoxide (O2•−), hydrogen peroxide (H2O2), and hypochlorous acid (HOCl) undergo deterioration of quality as assessed by an increase in zona pellucida dissolution time, an increase in ooplasmic microtubule dynamics, and a major loss in cortical granule status, all of which are major markers of the oocyte aging phenomenon [15]. These findings may explain the high reproductive failure known to occur with advancing age, obesity, diabetes mellitus, and a myriad of other clinical conditions [16,17,18,19]. 
ROS are known to be one of the major stimuli of cytokines, which may mediate oxidative stress and potentially alter the redox equilibrium in various cell types [20,21,22]. IL-6 has been correlated with ROS in infertile males with varicocele and has been shown to contribute to male infertility [23]. In this work, we explored the direct effect of various concentrations of IL-6 on the metaphase-II mouse oocyte spindle and chromosomal alignment in vitro, as a possible mechanism of deterioration of oocyte quality in patients with pelvic inflammatory states where IL-6 levels are elevated. Effect of IL-6 on microtubules To determine the effects of IL-6 on metaphase-II mouse oocytes, oocytes were exposed to recombinant mouse IL-6 (50, 100 and 200 ng/mL) for 30 minutes and subjected to indirect immunofluorescent staining to identify alterations in the microtubule and chromosomal alignment compared to untreated controls. Scores 1 and 2 were combined as a "Good" score and 3 and 4 as a "Poor" score (Figure 1, see Materials and methods section for more details). The majority of oocytes (86.4%) in the unexposed controls had "Good" microtubular scores, defined as those with scores 1 and 2, compared to only 9.5% in the group exposed to 200 ng/mL of IL-6 (p<0.01) (Figure 2, Table 1). Alterations in the microtubule scores were seen at IL-6 concentrations as low as 50 ng/mL. At an IL-6 concentration of 50 ng/mL, 36% of oocytes had "Good" and 64% had "Poor" scores. At 100 ng/mL, 23.5% of oocytes demonstrated "Good" scores whereas more than 75% had "Poor" scores. Most of the oocytes (90.5%) demonstrated poor scores when treated with 200 ng/mL of IL-6. Thus, IL-6 caused a dose-dependent deterioration of oocyte microtubules. Effect of IL-6 on chromosomal alignment Similar to the effect observed in the microtubules, overall comparison between untreated and treated groups demonstrated a trend of increased "Poor" scores in chromosomal alignment with increasing IL-6 concentrations. Oocytes exposed to 200 ng/mL of IL-6 showed higher "Poor" scores (85.7%) compared to 10.2% in the unexposed control group (p<0.01). At 50 ng/mL, 32% had "Poor" scores compared to 68% with "Good" scores in chromosomal alignment (Table 1; Figure 2). Our findings represent a concentration-dependent alteration of the metaphase-II oocyte microtubule and chromosomal alignment when exposed to recombinant mouse IL-6. An IL-6 concentration greater than 50 ng/mL resulted in an increase in "Poor" scores in both the microtubules and chromosomes. Confocal images demonstrating alterations in microtubules and chromosomal alignment at different concentrations are shown in Figure 3A-D. Discussion Our current findings, for the first time, demonstrate a direct link between increasing concentrations of IL-6 and the deterioration in morphology of the microtubule and chromosomal alignment in metaphase-II mouse oocytes. This alone may be important in biological settings such as pelvic inflammation and endometriosis where IL-6 levels are elevated. It is well established that maintenance of the integrity of the mature oocyte spindle is vital for proper cell division and embryo formation [24]. Indeed, abnormal spindle dynamics mediated by IL-6 elevation is known to lead to aneuploidy, failure of fertilization, early loss of pregnancy [19], or low reproductive outcome under certain pathological conditions such as in patients with endometriosis and pelvic inflammation [18]. 
It is unclear what specifically induces these alterations in the oocytes, but inducers such as TNF-α and ROS associated with oxidative stress have been shown to alter oocyte morphology and spindle dynamics, contributing to infertility [15,25,26]. Thus, the effect of IL-6 on oocyte quality could be explained by its ability to directly affect the metaphase-II oocyte through its negative effect on the spindle structure and chromosomal alignment, or indirectly through the involvement of reactive oxygen species. Oxidative stress is well known to increase IL-6 production in several types of cells. For example, oxidized low-density lipoprotein can upregulate nuclear factor kappa B (NF-κB)-mediated IL-6 generation in endothelial cells [27], and acute hypoxia leading to generation of oxidative stress has been documented to increase IL-6 production in hepatocytes [28]. It has also been documented that the addition of ROS scavengers resulted in the inhibition of the IL-6 response to ROS stimulation, suggesting that ROS may be, at least in part, involved in IL-6 production [29,30,31]. Previously, we have shown that ROS such as superoxide (O2•−), hypochlorous acid (HOCl), and hydrogen peroxide (H2O2) can alter oocyte quality, manifested by hypergranulated cytoplasm, absence of the perivitelline space, and abnormal spindle dynamics [15]. More recently, we have demonstrated that HOCl induced alterations in the microtubule and chromosomal structure of metaphase-II mouse oocytes in vitro, a process that can be inhibited by melatonin (article in press) [32]. Choi et al. [26] demonstrated that H2O2 at 12.5 nmol/mL initiated the deterioration in the microtubule and chromosomal structure, whereas HOCl at 50 nmol/mL demonstrated similar effects (article in press) [32]. Our current study demonstrated similar changes, assessed by the same scoring technique for the alterations of the spindle and chromosomal structure, at a concentration of 50 ng/mL of IL-6. Hence, it is evident that pathological concentrations of ROS and cytokines, whose generation in biological systems is interdependent in inflammatory states, can affect the oocyte spindle structure. Thus, deterioration of the oocyte spindle may be a direct effect of ROS generated from inflammation or hypoxia, or secondary to cytokines like IL-6 augmented by ROS generation. Various authors have shown that women with endometriosis have higher peritoneal fluid concentrations of IL-6 compared to those who are normal [33,34,35,36]. Peritoneal fluid macrophages are known to express IL-6 and soluble IL-6 receptors, through which this cytokine is known to exert its effects in pelvic inflammatory states [37]. A recent animal study demonstrated that IL-6 can increase the rate of meiotic arrest and germinal vesicle breakdown in bovine species in inflammatory states induced by treatment with lipopolysaccharide [38]. Another group has demonstrated that peritoneal fluid obtained from patients with endometriosis affects oocyte microtubule and chromosomal structure, leading to altered oocyte quality [25], which could be prevented by treatment with L-carnitine [39]. L-carnitine is known to down-regulate cytokines such as IL-1, IL-6, and TNF-α and increase their clearance in rats implanted subcutaneously with sarcoma tumors [40]. 
Since the metaphase-II oocyte is exposed to the higher peritoneal fluid concentrations of IL-6 after ovulation in patients with inflammatory disorders, it may undergo deterioration of spindle structure and chromosomal alignment, contributing to failure of fertilization and poor reproductive outcomes. Other than the peritoneal fluid, the follicular environment in endometriosis demonstrates higher IL-6, which may hamper oocyte quality prior to ovulation [41,42]. Interestingly, IL-6 has been shown to play a role in regulating mouse cumulus cell expansion as a physiological process in the ovarian follicle. This study also indicated that the effects of IL-6 on cumulus cells and the oocyte may be detrimental if IL-6 is elevated within the follicle at inappropriate times or for extended periods, such as during chronic infections, endometriosis and in obese patients. Therefore, impaired fertility associated with these conditions could be related to the direct effects of abnormally high levels of IL-6 and other potent cytokines that can impair ovarian follicular cell function and oocyte quality [43]. Thus, higher IL-6 levels associated with impaired fertility are probably secondary to the effect on the oocyte spindle. In addition, patients with Chlamydia-induced salpingitis and upper reproductive tract inflammation, which are established etiologies of infertility, demonstrated higher IL-6 levels in the infiltrating lymphocytes in the acute phase of the disease [44]. IL-6 can also alter ciliary beat frequency in the human fallopian tube owing to its presence in the inflammatory milieu of peritoneal fluid, thus affecting ovum pick-up and implantation [45]. These findings indicate that various sources of IL-6 may affect the mature oocyte spindle and chromosomal structure in states of inflammation. In conclusion, the oocyte spindle apparatus is of vital importance in the successful outcome of the reproductive process. Oxidative states can compromise oocyte quality by affecting the meiotic spindle, based on in vitro studies. IL-6, generated in the process of oxidative stress, not only regulates the inflammatory milieu and assists in the maintenance of a chronic inflammatory state but also directly affects the metaphase-II oocyte spindle and contributes to infertility. Future directions in the prevention of oocyte quality damage may focus not only on antioxidants but also on inhibitors of cytokine cascades. (Table 1: Percentage of oocytes with "Good" and "Poor" outcomes by IL-6 concentration (ng/mL).) Materials and methods: Oocytes were first washed to remove excess cryoprotectant for 3 minutes. The oocytes were then transferred into HTF medium and incubated at 37 °C and 5% CO2 for 60 minutes for complete repolymerization. The oocytes were then screened for the presence of a polar body, confirming their metaphase-II stage. The oocytes were divided equally into unexposed controls and IL-6-treated groups (50, 100 and 200 ng/mL). The end points of the experiments involved the assessment of microtubule and chromosomal alignment as previously described [26,32]. Results obtained were compared in each experimental set between different groups using appropriate statistical tests. Immunostaining and fluorescence microscopy Oocytes were fixed in a solution containing 2% formaldehyde and 0.2% Triton X-100 for 30 minutes. Fixed oocytes underwent indirect immunostaining using a mouse anti-alpha-tubulin antibody against the microtubules as the primary antibody and an FITC-conjugated anti-goat antibody as the secondary antibody. The chromosomes were stained using propidium iodide. 
Stained oocytes were loaded into antifade agent (Biomedia, CA) on marked slides that have two etched rings. Scores of 1 to 4 were assigned for both microtubule and chromosomal alterations based on previously published data [26]. Scores 1 and 2 were combined as a "Good" score and 3 and 4 as a "Poor" score. This was done since no statistical differences were found between scores 1 and 2, and similarly between 3 and 4. This also increased the power of the analysis. The alterations in the microtubules and chromosomes were compared with controls and scored by two different blinded observers (Figure 1) [26]. Images were obtained utilizing both immunofluorescence and confocal microscopy. Confocal microscopy and assessment of microtubule and chromosomal alignment Confocal images were obtained utilizing a Zeiss LSM 510 META NLO microscope (Zeiss, Jena, Germany). Oocytes were localized using a 10× magnification lens and spindle alterations were assessed using a 100× oil immersion lens. The microtubules were stained fluorescent green, which was distinct from the red fluorescent staining of the chromosomes. Individual treated and control oocytes in each experiment set were closely examined for spindle status. The categorization of oocytes based on MT and CH status was performed by two observers blinded to treatment group assignment, who used comprehensive evaluation of the individual optical sections and the 3-D reconstructed images (Figure 3A-D). Statistical Analysis Data were analyzed using SPSS v. 19.0 for Windows (IBM Corp.). Pearson's chi-square test was utilized for comparisons between control and treated groups. Comparisons among pairs of groups were analyzed with Fisher's exact tests, and correction for multiple comparisons was made using the Holm-modified Bonferroni procedure. Statistical significance was determined by p<0.05.
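A small sketch of this kind of analysis in Python is shown below, using SciPy in place of SPSS. The 2x2 "Good"/"Poor" counts are placeholders rather than the study's data, and the helper names are illustrative; a complete Holm procedure also stops testing at the first non-rejection.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical [Good, Poor] counts per group -- placeholders, not the study's data
groups = {
    "control":   [44, 5],
    "50 ng/mL":  [18, 32],
    "100 ng/mL": [12, 38],
    "200 ng/mL": [4, 38],
}

# Overall comparison across all groups (Pearson's chi-square)
table = np.array(list(groups.values()))
chi2, p_overall, dof, _ = chi2_contingency(table)
print(f"overall chi-square p = {p_overall:.3g}")

# Pairwise Fisher's exact tests of each dose against control
control = groups["control"]
pvals = {name: fisher_exact([control, counts])[1]
         for name, counts in groups.items() if name != "control"}

# Holm-modified Bonferroni: compare the i-th smallest p-value with alpha / (m - i)
alpha, m = 0.05, len(pvals)
for i, (name, p) in enumerate(sorted(pvals.items(), key=lambda kv: kv[1])):
    verdict = "significant" if p < alpha / (m - i) else "not significant"
    print(f"{name}: p = {p:.3g} -> {verdict}")
```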
3,222.8
2012-04-20T00:00:00.000
[ "Biology", "Medicine" ]
Gastroretentive Sustained Release Floating and Swellable Cefadroxil Formulation : In vitro and in vivo Evaluation A new Gastroretentive Sustained Release (GRSR) tablet of cefadroxil was developed with floating and swellable properties. Various release retarding polymers, swelling agent, gas generating agent and release modifying agents were evaluated. The optimized formulation was studied for various physical parameters, in vitro drug release profile and for in vitro floating properties. The formulation provided sustained drug release for about 14 h with the floating lag time of 30 s and floating duration of about 14 h. Owing to the promising in vitro floating property, the formulation was explored for in vivo floating performance in healthy human volunteers by radio-analytical technique. The developed formulation of Cefadroxil showed prolonged gastric retention in vivo for 7 h. The tablets also showed significant swelling property with excellent tablet integrity till 7 h. The developed formulation exhibited promising gastroretention in vivo. The radio-analytical technique used to evaluate the in vivo gastroretention was found to be simple, cost-effective and was precisely used for detecting floating time and tablet integrity throughout the study. *Corresponding author: Chaudhari SV, PhD, Tris Pharma Inc, 2033, Suite H, Route 130, Monmouth Junction, New Jersey, 08852, USA, Tel: 732 823 4814; Fax: 732 94 Introduction Gastroretentive Delivery Systems (GRDS) have been designed for achieving therapeutic benefit for drugs that are preferentially absorbed from the proximal part of the Gastrointestinal Tract (GIT) or that are less soluble in or are degraded by the alkaline pH they encounter at the lower part of GIT [1][2][3]. These systems offer various pharmacokinetic advantages specifically for β-lactam antibiotics with reduction of blood level fluctuations when compared to that observed from conventional forms [4]. Cefadroxil (CFD) is a broad-spectrum cephalosporin antibiotic commonly prescribed in the treatment of respiratory tract, urinary tract and skin and soft tissue infections with usual dosage of 1 or 2 g daily in a single or divided doses [12]. It exhibits short elimination halflife of 1.2 h with primary excretion via renal pathway (88 to 93% of the administered dose within 24 h). Thus, the short half-life and very high urinary excretion makes it undesirable to maintain the plasma levels of CFD in the therapeutic range for prolonged time, providing a strong rationale for development of sustained release (SR) formulation of CFD. CFD has good solubility and stability in acidic pH and decreases with increasing pH [4]. Thus, the present work includes development of Gastroretentive Sustained Release (GRSR) formulation of CFD and evaluation of floating properties in vitro and in vivo. (The swelling studies of the developed formulation are not discussed in this work). Various approaches for evaluation of in vivo gastroretention of the formulation in experimental animals as well as in human volunteers have been studied. Some of the well-known techniques are gamma scintigraphy, use of radiopaque materials, etc. The gamma scintigraphy technique has been successfully explored in experimental animals [13,14] as well as in Human subjects [15,16] as reported in some of the literature data. The use of radiopaque materials in the experimental product with evaluation of the subject using X-ray technique post administration has also been studied in experimental animals [17][18][19][20][21]. 
The technique however has been used in animal models with the results predicted for humans. The X-ray technique is comparatively less complicated still provides accurate evaluation of the gastroretentive system in vivo. The aim of the study was to evaluate the developed GRSR formulation of CFD in human subjects using the X-ray technique for in vivo gastroretention. Materials CFD was obtained as a gift sample from M/s Macleods Pharmaceuticals, India. HPMC grades (K 15M and HPMC K100M) were gifted by M/s Colorcon Asia Pvt. Ltd., India, PVP K30 by M/s Rohm Pharma, Germany. Luzenac Pharma and Ferro gifted talc and Magnesium Stearate respectively. Barium Sulfate was purchased from Merck, Germany. All the other excipients and solvents used were purchased from Merck India Ltd. and were of analytical grade. Preparation of tablets Tablets were prepared by conventional wet granulation method. All the excipients were passed through # 40 ASTM Sieve, mixed and granulated with PVP K30 (5% w/v in isopropyl alcohol). The wet mass was passed through # 8 ASTM Sieve and dried at 50ºC to 60ºC for about 20-30 min. The dried mass was then passed through # 18 ASTM Sieve and the granules were lubricated with magnesium stearate and talc and compressed into caplet sized tablets (21 × 11 mm) on a Cadmach single station tablet press (Formulation: CFD-173). The formulation, CFD-173 was studied for incorporation of a radio-opaque compound, Barium Sulfate, most commonly used for clinical diagnosis of gastrointestinal tract. Barium sulfate exhibits high density (4.4777 g/cm 3 ) which makes it difficult to achieve the desired floating properties of the developed formulation upon its incorporation. It also exhibits poor flow property, affecting inversely the flow property of the tablet-blend. The formulation, CFD-173 was therefore modified by changing the composition of the excipients to get the desired floating properties without significantly altering the in vitro release profile. The developed formulation CFD-173XR contained Barium Sulfate about 9.0% w/w of the tablet and to accommodate this quantity, CFD quantity was reduced by 10% of its dose. Evaluation of tablets Tablets were evaluated for appearance, hardness, weight variation, friability and drug content. In vitro dissolution study The in vitro dissolution of CFD was studied by USP dissolution apparatus I with rotation speed of 100 rpm. The dissolution was carried out in buffer change medium (900 mL as pH 1.2 buffers for first 2 h, pH 4.5 buffer for next 2 h and water till the end) and in pH 1.2 buffer (900 mL) of the dissolution maintained at 37 ± 0.5ºC. The samples of 5 mL were withdrawn at predetermined time intervals of 1, 2, 3, 4, 6, 8, 10, 12, 14 and 16 h and replaced with fresh buffer each time. Samples were analyzed by the developed UV spectroscopy method at 263 nm. In vitro floating study The floating study was carried out using USP dissolution apparatus type II. The medium used was 0.1 N HCl (pH 1.2 buffers); 900 mL maintained at 37.5 ± 0.5ºC throughout the experiment. The floating lag time i.e., time to float the tablet and the duration of floating were studied visually. In vivo floating study The floating ability of the developed GRSR formulation was studied in vivo by radio-analytical technique in healthy human volunteers (formulation CFD-173XR). The study was conducted at the K.J. Somaiya medical college and research centre, radiology department, Mumbai, India and the study design/protocol was approved by the local ethical committee. 
The healthy human volunteers (n=4) selected were males (age group 20-30 years) with body weights of 60 ± 8 kg and were housed in the K. J. Somaiya hospital from the day before dosing and during the entire study. The volunteers underwent a regular check-up against the exclusion criteria and were examined as per the protocol after completion of the study. The volunteers were fasted overnight for at least 10 h prior to dosing, with water ad libitum. The study was performed in the fed condition and the volunteers were given breakfast. The blank X-rays were taken immediately and 5 min after the breakfast. The volunteers were then given the tablets (CFD-173XR) with 240 mL of water. Water was not allowed freely until 1 h post dosing. Lunch, a snack and dinner were provided at 5, 9 and 13 h after the morning breakfast to all the volunteers. During the hours of food restriction, no beverages such as tea, coffee or milk were permitted other than water. X-ray photographs of all the volunteers were then taken at 2, 4, 7 and 10 h in the standing position from the front side of the stomach, covering the total abdominal region. Evaluation of tablets The various excipients were optimized for the quantities used, and the prepared formulations were studied for their effect on in vitro drug release. The tablets were compressed at a hardness of 7-8 kg/cm² and showed a friability of less than 0.2% w/w. The assay was found to be 100% of the labelled amount of CFD (the formulation optimization data are not discussed in this work). Drug release studies In our previous work [22,23], a GRSR formulation of ofloxacin was developed and successfully studied in vitro (drug release and swelling property) and in in vivo pharmacokinetic studies. Based on this, a similar formulation approach was explored for the development of the GRSR formulation of CFD. The in vitro drug release of CFD-173XR was determined in the same way as that of CFD-173 in both the buffer change method and the acid medium (Figures 1 and 2). The drug release in both media was found to be more than 90% in 14 h, and the release profiles of the two formulations were comparable: the f2 values comparing CFD-173 and CFD-173XR in the buffer change method and in the acid medium were found to be 62.37 and 58.28, respectively (Figures 1 and 2; a computational sketch of the f2 calculation is given below). In vitro floating study The floating lag time and duration of floating in pH 1.2 buffer and in 0.01 N HCl medium (pH 3.5) (to simulate the fed-condition environment) were found to be 50 ± 8 s (N=6) and 12 ± 1 h (N=6), respectively, for both formulations, CFD-173 and CFD-173XR. In vivo floating study To locate the tablet upon administration in the X-ray photographs, specific vertebral locations were used as the background. As specified in the literature [24], the stomach is located under the diaphragm in the left region of the abdomen. Although the exact position and size of the stomach vary continually and depend largely on the fed condition, its location can be traced in X-ray photos against the vertebral locations. Generally, the stomach lies between the 10th, 11th and 12th thoracic vertebrae, while the distal part of the stomach and the upper intestinal part may be located against the 1st and 2nd lumbar vertebrae. The formulation included Barium Sulfate (9.0% w/w of the tablet) in a quantity sufficient to provide maximum radio-opacity in the abdominal X-ray. The X-ray photographs are shown in Figure 3. The X-ray photograph taken at 0 h ensured the absence of any radio-opaque substance before commencing the study. 
After the 2nd hour, the X-ray photographs showed a bright, well-defined, sharp-edged tablet in the stomach region (seen against the 10th to 11th thoracic vertebrae in all the volunteers). After 4 h of the study, the tablet was found at slightly lower regions of the stomach (it could be located against the 11th and 12th thoracic vertebrae). At this point, the tablet was observed with larger dimensions and with reduced sharpness of its edges. The tablet was observed as a single unit in all the volunteers with no spreading of the Barium Sulfate, indicating retention of the integrity of the tablet. Barium Sulfate being practically insoluble in aqueous media, it cannot diffuse out of the tablet unless the tablet disintegrates. At the end of the 7th hour, the tablet had moved further down and could be located in front of the 1st lumbar vertebra and between the 1st lumbar and the 12th thoracic vertebrae, indicating residence of the product in the distal part of the stomach or in the upper part of the intestine. At this point, the tablet could be vividly seen as a swollen matrix and the boundaries were further diffused. Again, in all the volunteers the tablets were found to be intact. The tablet brightness was reduced with time, which could be attributed to reduced tablet integrity owing to substantial drug release by the end of 7 h. Further, the food (meal) might have adversely affected the tablet brightness. At the end of 10 h, the tablet could not be seen even in the distal part of the abdomen. Thus, the tablet could either have become too diffuse to differentiate or have lost its integrity, dispersing Barium Sulfate in the intestinal region. Further, the 10 h observations also ensured that the tablet did not block the GIT, indicating appropriate exit from the body. No adverse events were seen in any of the subjects up to 48 h after administration. Conclusion The developed GRSR system for the CFD formulation provided significant floating properties in vitro and in vivo, thus supporting the successful development of a gastroretentive dosage form. The radio-analytical technique was successfully used in human subjects for evaluating the floating properties as well as the tablet size and integrity, demonstrating the in vivo performance of the developed formulation.
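As referenced above, the similarity factor f2 used to compare the CFD-173 and CFD-173XR release profiles can be computed as follows. This is a minimal sketch; the dissolution values in the example are illustrative placeholders rather than the profiles reported in Figures 1 and 2.

```python
import numpy as np

def f2_similarity(reference, test):
    """Similarity factor f2 between two cumulative % release profiles sampled at the same times.

    f2 = 50 * log10( 100 / sqrt(1 + mean((R_t - T_t)^2)) ); f2 >= 50 indicates similar profiles.
    """
    r, t = np.asarray(reference, float), np.asarray(test, float)
    mean_sq_diff = np.mean((r - t) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + mean_sq_diff))

# Illustrative cumulative % released at matching time points (not the study's data)
cfd_173   = [18, 32, 45, 56, 70, 81, 88, 93]
cfd_173xr = [15, 28, 41, 53, 67, 79, 87, 92]
print(round(f2_similarity(cfd_173, cfd_173xr), 2))
```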
2,910.6
2016-11-24T00:00:00.000
[ "Chemistry", "Biology", "Materials Science" ]
Standard and Specific Compression Techniques for DNA Microarray Images We review the state of the art in DNA microarray image compression and provide original comparisons between standard and microarray-specific compression techniques that validate and expand previous work. First, we describe the most relevant approaches published in the literature and classify them according to the stage of the typical image compression process where each approach makes its contribution, and then we summarize the compression results reported for these microarray-specific image compression schemes. In a set of experiments conducted for this paper, we obtain new results for several popular image coding techniques that include the most recent coding standards. Prediction-based schemes CALIC and JPEG-LS are the best-performing standard compressors, but are improved upon by the best microarray-specific technique, Battiato’s CNN-based scheme. Introduction DNA microarray technology allows the analysis of the expression of thousands of genes in a single experiment and has become a very important tool in medicine and biology for the study of genetic function, regulation and interaction [1]. Genome-wide monitoring is possible with existing DNA microarrays, which are used in research against cancer [2] and HIV [3], among many other applications. DNA microarrays consist of a solid surface on which thousands of known genetic sequences are bound. Each sequence is placed in one microscopic hole or spot and all spots are arranged conforming to a regular pattern. Two samples, for example from healthy and tumoral tissue, are labeled, respectively, with green and red fluorescent markers called Cy3 and Cy5. Then, equal amounts of the labeled samples are made to react with the genetic sequences on the microarray. If one sample has an expressed sequence corresponding to a sequence placed in the microarray, part of it is hybridized and fixed in the correspondent spot; else, it gets washed away and will not be present in the spot. Once the hybridization and washing have concluded, the microarray is exposed to ultraviolet light so that the emissions from the fluorescent Cy3 and Cy5 dyes can be scanned and registered. Spots whose corresponding sequences are more strongly expressed in the first sample will have more Cy3 dye present and thus will emit more intense green light. The same can be said for the second sample and the red Cy5 dye. Comparing the relative intensity of the green and red channels, it is possible to detect which genes have not been equally expressed in both samples. This can be used to hypothesize about the function of individual genes under many different conditions. The output of a microarray experiment is a pair of monochrome images, one for the green channel and another for the red channel. An example microarray image can be seen in Figure 1. Due to the microscopic size of the spots, images have a high spatial resolution and are of large dimensions. Images from 1000 × 1000 onwards are described in the literature, but sizes over 4000 × 13000 are common nowadays. Since gene expression can vary in a very wide range, image pixel intensities have a depth of 16 bits per pixel (bpp). Microarray images are computer analyzed to obtain genetic information. However, it is not desirable to keep only this genetic information and discard the microarray images. Analysis techniques are not fully mature or universally accepted and are subject to change. 
Furthermore, repeating an experiment is expensive and not always possible. Depending on the microarray size and the scanner spatial resolution, raw data for a single channel can require from a few to hundreds of Megabytes. With the increasing interest in DNA microarrays, and since many experiments are conducted under several different conditions, great amounts of data are generated in laboratories around the world. Because of the need of keeping and sharing microarray images, a need for efficient storage and transmission arises, and so compression emerges as a natural approach. The role of data compression in computational biology and the state of the art has been addressed by several authors [4,5]. Both lossy and lossless techniques have been proposed in the literature. Lossy approaches exhibit good compression performance on microarray images, but there is an open debate on whether information loss is acceptable or not, since it can alter genetic information extraction results. On the other hand, purely lossless methods guarantee immutable extraction results but offer poorer compression performance. This is partly due to the considerable amount of noise and the abundance of high frequencies present in this type of image. This paper is structured as follows. In Section 2 we review compression schemes specific to microarray images including the most recent ones, and summarize their lossless compression results. In Section 3 we report a novel set of experiments that we have conducted using image compression standards, some of which had not been previously tested on DNA microarrays. Section 4 draws some conclusions. State of the Art of DNA Microarray Image Compression Image compression processes typically comprise up to 5 stages: preprocessing, transform, quantization, entropy coding and postprocessing. Microarray image compression can be modeled likewise. An exhaustive review of relevant works for microarray image compression published in the literature is provided in Subsections 2.1 to 2.7. Surveyed works are organized according to the stages where they make their contribution. This review extends the 2005 article by Luo and Lonardi [6] by addressing newer techniques that have appeared since then, and by providing a more complete and structured description. Compression results reported for all these techniques on different image sets used for benchmarking are compared in Subsection 2.8. Preprocessing The preprocessing stage comprises any computation performed on an image to prepare it for the compression or analysis processes. It is very important in DNA microarray images because many of the existing techniques rely heavily on the results of this stage to obtain competitive coding performance and to extract accurate genetic information. Denoising and segmentation are the main preprocessing techniques applied to this type of image, and are described next. Denoising Microarray images contain noise, which is sometimes considerable. This is due to imperfections in conducting the experiment and also in the image digitalization process. Being able to reduce noise can be useful to extract more accurate genetic information and to obtain better compression results. However, denoising-based compression is not lossless because denoising can be considered an irreversible transform that is applied before storing the image. There has been much research on denoising, but most of it is focused on the microarray analysis process and not on compression. In 2006, Adjeroh et al. 
published their approach [7] based on the translation invariant (TI) transform. They argue that wavelet based denoising techniques that are not translation invariant suffer from an adverse pseudo-Gibbs phenomenon at discontinuities. This is especially important in microarray images since the edge of each spot can be considered a discontinuity. In their work, the original image is shifted horizontally, vertically and diagonally, and each of these shifted images plus the original one are denoised separately. The resulting images are then shifted back and combined to obtain the denoised image. Adjeroh et al. proposed three different approaches for combining the deshifted images: TI-hard, TI-median and TI-median2. The TI-hard method consists of outputting the average value of each pixel in the four deshifted images. The TI-median technique outputs the median instead of the average. The TI-median2 technique works by applying the TI-median technique twice: first, the TI-median is applied on the original image and an auxiliary image is produced; then the TI-median technique is applied again on the auxiliary image to obtain the denoised output image. Many more authors have covered microarray image denoising for genetic information extraction, but only some of the most recent and relevant are referenced here. In 2005, Lukac proposed a method [8] based on fuzzy logic and local statistics for noise removal. Also in 2005, Smolka et al. discussed the peer group concept [9] as a means to remove impulsive noise. In 2007, Chen and Duan presented a simple method for denoising microarray images [10] based on comparing the edge features of the red and green channels. Recently, in 2010, Zifan et al. have designed multiwavelet transformations [11] to denoise images. Segmentation Segmentation, also know as spot finding or addressing, consists in determining which of the image pixels belong to spots (i.e., the foreground), as opposed to those that do not (i.e., the background). As will be discussed in Section 2.4, this can be very useful in later stages of the compression process. For example, it is possible to code separately foreground and background or to exploit differences in pixel intensity distributions between sets. We discuss here only specific approaches oriented towards image compression. However, for the sake of completeness, some segmentation techniques not directly focused on compression and some general segmentation quality metrics are described at the bottom of this subsection . In 2003, Faramarzpour et al. proposed a lossless coder whose segmentation stage consists of two steps [12]. First, spot regions are located by studying the period of the signal obtained from summing the intensity by rows and by columns, and studying its minima. After that, spot centers are estimated based on the region centroid, which is needed for their spiral scanning procedure, as explained in Section 2.4. Simpler versions of this spot region location idea had already been used in microarray image analysis and also in Jörnsten and Yu's work of 2002 [13], where they proposed a lossy-to-lossless compression scheme. In their technique, a seeded region growing algorithm is used to obtain a coarse mask for the spot pixels, which is then refined. Later, in 2004, Lonardi and Luo presented their MicroZip lossy or lossless compression software [14]. They used a variation of Faramarzpour's spot region finding idea, but they considered the existence of subgrids which are located before spot regions. 
Four subgrids can be seen in Figure 1, but other images may contain more. In 2004, Hua et al. proposed a lossy or lossless scheme [15] with a segmentation technique based on the Mann-Whitney U test, which helps decide whether two independent sets of samples have equally large values. Once a region containing a spot has been located, a default mask is applied to obtain an initial set of pixels classified as spot pixels, and then the test is applied to add or remove pixels from this mask. This had already been used in microarray image segmentation [16], but the authors proposed a variation that speeds up the algorithm by up to 50 times. In 2006, Bierman et al. described a purely lossless compression scheme [17] with a simple threshold method for dividing microarray images into low and high intensities. It consists of determining the lowest of the threshold values 2⁸, 2⁹, 2¹⁰ or 2¹¹ such that approximately 90% of the pixels fall within it. In 2007, Neekabadi et al. proposed another threshold-based technique for segmentation [18], this time into three subsets: background, edge and spot pixels. Their lossless proposal performs segmentation in two steps. First, they determine the optimal threshold value by minimizing the total standard deviation of the pixels above and below it. Then they segment the image into the mentioned subsets. To do so, they first determine the spot pixels by eroding the mask formed by the pixels above the selected threshold. Edge pixels are the ones surrounding the spot pixels, and background pixels are all the others. Finally, in 2009, Battiato and Rundo published an approach [19] based on Cellular Neural Networks (CNNs). They define two layers for their lossless system, each with as many cells as the image has pixels. The input and state of the first layer are the pixels of the original image. Its output is the input and state of the second layer. By defining the cloning templates that drive the CNN dynamics, the second layer tends to its saturation equilibrium state, and the resulting output tends to a "local binary image" where spot pixels tend to 1 and background pixels to 0. Apart from the compression-oriented segmentation techniques already described, many others have been proposed in the more general context of DNA microarray image analysis. Some of the most recent are referenced next. In 2006, Battiato et al. proposed a microarray segmentation pipeline called MISP [20], where they used statistical region merging, γLUT and k-means clustering. In 2007, they improved the pipeline by adding advanced image rotation and gridding modules [21]. These two publications comprehensively describe previous well-known segmentation techniques. In 2008, Battiato et al. proposed a neurofuzzy segmentation strategy based on a Kohonen self-organizing map and a subsequent fuzzy k-means classifier [22]. In 2010, Karimi et al. described a new approach using an adaptive graph-based method [23]. Uslan et al. in 2010 [24] and Li et al. in 2011 [25] proposed two methods based on fuzzy c-means clustering. From the image-compression point of view, segmentation performance is generally measured based on the compression rates obtained after segmenting. However, in the DNA microarray image analysis context, other quality metrics might be applicable. In 2007, Battiato et al.
defined a quality measure based on a previous technique for general DNA microarray image quality assessment, and compared the overall segmentation performances of their MISP technique and other previous approaches [21]. More information about microarray image quality assessment and information uncertainty can be found in Subsection 2.6. Transform The transform stage consists of changing the image domain from the spatial domain to another one where it can be more efficiently processed or coded. Examples of this are applying the DCT to obtain a frequency representation, or using a wavelet transform to change to the spatial-frequency domain. However, wavelet-transform-based compression is not typically as efficient for microarray images as it is for other types of images [6]. For this reason, transformations are not frequently researched in microarray image compression, although they are used in some works. Since these papers provide little or no original contribution on the transformation stage, they are only briefly mentioned in this section. In 2004, Hua et al. [15] published a modification of the EBCOT algorithm that included a tailored integer odd-symmetric transform for their proposed lossy or lossless scheme (see Section 2.4). In 2004, Lonardi and Luo [14] made use of the Burrows-Wheeler transform [26] for lossy or lossless compression in their MicroZip software (see Section 2.4). In 2006, Adjeroh et al. used a variation of the TI transform [7] for denoising (see Section 2.1). In 2007, Peters et al. [27] applied a slightly modified version of the singular value decomposition (SVD) in their lossy compression scheme (see Section 2.5). In 2010, Zifan et al. [11] used multiwavelet transformations, also for denoising (see Section 2.1). In 2011, Avanaki et al. [28] tested the use of an existing wavelet transform before applying fractal lossy compression (see Section 2.4). Quantization The stage of quantization consists of dividing sets of values or vectors into groups, effectively reducing the total number of symbols needed to represent them and thus increasing compressibility, at the expense of introducing information loss. In the microarray image compression literature, there are almost no original contributions for the quantization stage, partly because information loss is not always acceptable. There are however two exceptions. In 2000 and in 2003 Jörnsten et al. [29,30] proposed both scalar and L1-norm vector adaptive quantizers (see Section 2.4) that can be used in lossy or lossless compression. In 2007, Peters et al. [27] used simple truncation in their lossy SVD-based technique (Section 2.5). Entropy Coding In this stage of the image compression process, data obtained from previous stages are expressed in an efficient manner to generate a more compact bitstream. DNA microarray images show a strong spatial regularity [14], and this has been used in most techniques present in the literature. Many of them segment the image into foreground (spot) and background pixels and code them separately. Others build contexts or try to predict the intensity of the next pixels based on the previous ones, sometimes after segmenting the image. Ideas following each of these patterns are discussed in the next two subsections. Some of the works could be classified in either or both of these groups. Here they have been assigned to the one that, in our opinion, is more important to the algorithm. 
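To make the intuition behind this foreground/background separation concrete, the short sketch below segments a channel with a simple global threshold, in the spirit of the threshold-based methods of Section 2.1, and compares the zero-order entropy of the whole image with that of the two resulting pixel streams. It is only a minimal illustration, assuming a 16-bit, single-channel image held in a NumPy array; it does not reproduce the exact procedure of any of the surveyed methods.

```python
import numpy as np

def zero_order_entropy(values):
    """Zero-order (empirical) entropy of a set of symbols, in bits per symbol."""
    values = np.asarray(values).ravel()
    if values.size == 0:
        return 0.0
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def split_and_compare(image, threshold=2**10):
    """Split a 16-bit channel into background/foreground streams with a
    global threshold and report the zero-order entropy of each stream."""
    image = np.asarray(image, dtype=np.uint16)
    foreground = image[image >= threshold]   # candidate spot pixels
    background = image[image < threshold]    # remaining pixels
    return {
        "whole image": zero_order_entropy(image),
        "foreground": zero_order_entropy(foreground),
        "background": zero_order_entropy(background),
    }

# Synthetic example: a dark, noisy background with one bright square as a "spot".
rng = np.random.default_rng(0)
img = rng.normal(400, 50, size=(256, 256)).clip(0, 65535)
img[64:80, 64:80] += 20000
print(split_and_compare(img.astype(np.uint16)))
```

Because conditioning on a partition can never increase entropy, the pixel-count-weighted average of the two stream entropies is at most the entropy of the unsplit image; the price to pay is that the segmentation mask itself must also be stored or transmitted.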
Segmentation Based Coding DNA microarray images are usually segmented into spot and background pixels as part of the preprocessing stage to exploit their different statistical properties. Segmentation is always performed when extracting genetic information. Particular specific segmentation proposals have been discussed in Section 2.1. Several techniques that exploit this segmentation in their coding stage are presented next. In 2002 and in 2003 Jörnsten et al. [13,29] presented a lossy-to-lossless compression scheme called SLOCO, a version of the LOCO-I algorithm, the basis of the JPEG-LS standard. After griding and segmentation, the image is divided into multiple rectangular subblocks, one for each spot. In this way, each spot can be accessed and sent independently and with different quality. For each spot subblock, two subimages are created: one where the background has been set to the spot mean value (the spot subimage), and another where the foreground has been set to the estimation of the background value so that the subimages have more homogeneous pixel intensities. Then each of the images is processed with SLOCO, which uses prediction in the spatial domain [31]. An important contribution of SLOCO is the use of an adaptive quantizer (UQ-adjust instead of UQ) that permits variable error per pixel δ so that spot pixels with higher intensities can be expressed with lower precision. Jörnsten et al. proposed both a scalar and a L1-norm vector quantizer (L1VQ), whose errors can be bitplane-encoded to obtain progressive lossy-to-lossless compression. In 2003, Faramarzpour et al. presented a prediction-based lossless compression technique [12]. The image is gridded in rectangular subblocks, and spot centers are then estimated. For each of the subblocks, a spiral path is created to transform the 2D sequence into a 1D one. A linear prediction scheme that uses neighbor pixel intensities and their distances to the spot center is then applied on the 1D sequence. Differences between consecutive prediction errors form a sequence that is adaptive Huffman coded after being split on the index that minimizes the expected length of the coded sequences. In 2004, Hua et al. presented microarray BASICA software [15] and proposed a progressive lossy-to-lossless compression scheme. In their work, they first grid and segment the image. After that, they separate each of the subblocks into foreground and background and then code them with a modified version of the EBCOT algorithm, the basis of the JPEG2000 standard [32]. The main modification to EBCOT is an adaptation of the original context modeling, which allows a better handling of the irregular shapes of the foreground and background subimages. Bit shifts are also performed when coding the foreground so that the most relevant information is sent first in this progressive scheme. Also in 2004, Lonardi and Luo presented their MicroZip software [14], which offers both lossless and lossy compression. In their work, the image is first gridded and segmented into foreground and background. Then each of the 16-bit streams is divided into two 8-bit substreams comprising the most and least significant bytes. The four resulting substreams are losslessly coded except for the LSB of the background, which can be compressed either losslessly or with loss. 
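A minimal sketch of this kind of byte-plane splitting is given below. It assumes a 16-bit image stored as a NumPy array and only illustrates the general idea, not MicroZip's actual implementation; the resulting most- and least-significant-byte planes are what would then be handed to the coders described next.

```python
import numpy as np

def split_byte_planes(image16):
    """Split a 16-bit image into most- and least-significant byte planes."""
    image16 = np.asarray(image16, dtype=np.uint16)
    msb = (image16 >> 8).astype(np.uint8)    # most significant byte of each pixel
    lsb = (image16 & 0xFF).astype(np.uint8)  # least significant byte of each pixel
    return msb, lsb

def merge_byte_planes(msb, lsb):
    """Recombine the two byte planes into the original 16-bit image."""
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

# Round-trip check on random data.
rng = np.random.default_rng(1)
img = rng.integers(0, 2**16, size=(64, 64)).astype(np.uint16)
msb, lsb = split_byte_planes(img)
assert np.array_equal(merge_byte_planes(msb, lsb), img)
```

The MSB plane tends to be smooth and highly compressible, while the LSB plane is dominated by noise (see the 8 bpp lower bound discussed in Subsection 2.8), which is why several of the surveyed methods treat the two planes differently.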
Lossless coding is done using the Burrows-Wheeler Transform [26], originally designed for text compression, which computes all cyclic rotations of the input sequence and sorts them lexicographically; this groups symbols that occur in similar contexts and makes the data easier to compress. Lossy coding is done with the SPIHT algorithm [33]. In 2006, Bierman et al. presented their lossless MACE (Micro Array Compression and Extraction) compression software [17]. In their work, high and low intensity pixels are separated using a simple threshold-based method to exploit the fact that the intensity distribution in microarray images is very skewed. One image is generated for the low intensity pixels, and another for the high intensity ones. In the low intensity image, high intensity pixels are set to zero, and vice versa. The low intensity image is then losslessly coded using dictionary-based techniques such as Gzip or LZW, after being split into two subimages consisting of the most and least significant bytes of the original image, respectively. The high intensity image is processed with a sparse matrix algorithm and then compressed. Later, in 2007, Bierman et al. studied the performance impact of varying the dictionary size in their compression techniques [34]. They concluded that compression improved up to a certain dictionary size, beyond which performance stopped improving and began to degrade. In 2009, Battiato and Rundo published a lossless compression algorithm [19] based on image segmentation and color re-indexing. As previously discussed, segmentation is performed by means of a CNN-based system and produces two complementary sub-images. The foreground is compressed with a generic lossless algorithm and stored separately. The background is first transformed into an indexed image. Then its color palette is re-indexed with an algorithm that reduces the zero-order entropy of the local differences, which are losslessly coded. The re-indexing algorithm had been previously presented by the authors in 2007 [21]. Context-Based Coding Contexts are used in image compression because they allow a more precise estimation of the occurrence probabilities of each symbol, which results in a reduction of the total entropy and thus of the compressed bitstream size. In 2005 and 2006, Zhang et al. [7,35] proposed a context-based lossless approach which also employed segmentation. In their work, they define a mixture model for microarray images where they consider two structural components (foreground and background) and assign probabilities based on the gamma distribution. Considering this model, they divide the image into two streams (foreground and background) and then each of those into two substreams, one for the most significant byte and the other for the least significant byte. MSB substreams are then processed by a simple predictive scheme, but LSB substreams are not. These four substreams are then coded with prediction by partial approximate matching (PPAM), a lossless compression technique also proposed by Zhang and Adjeroh [36]. In that paper, multicomponent compression is briefly addressed by compressing first one channel, I_r, and then the pixel-by-pixel difference I_D = I_r − I_g, obtaining slightly better results than compressing I_r and I_g separately. In 2006, Neves and Pinho [37] proposed another context-based lossless approach. It is a bitplane-based technique that uses 3D finite-context models to drive an arithmetic coder.
The most significant bitplane is encoded first, with a causal context formed by four surrounding pixels, and bitplanes from the second to the eighth most significant bitplanes are encoded using bits from the one being encoded and from the ones previously coded. Finally, the 8 least significant bitplanes are coded using only bits from the previous bitplanes for the context model. The probabilities used to drive the arithmetic coder are based on the number of times that a given symbol has appeared in the image while in a given context. The average coding bitrate for each bitplane is monitored and whenever one shows an expansive behavior (more than 1 bpp), that bitplane and the following bitplanes are not arithmetically coded, but simply output raw. Neves and Pinho used a trial and error procedure to build these context templates, which are the same for every image. In 2009, they extended this procedure so that specific templates are built for each image using a greedy approach, obtaining better results [38]. General Techniques Adapted to Microarray Images Several authors have considered adapting generic image compression algorithms to DNA microarray images, as discussed previously. Others have opted to apply them directly, as described next. In 2007, Peters et al. presented a lossy compression method [27] based on singular value decomposition (SVD). More recently, in 2011, Avanaki et al. [28] used fractal and wavelet-fractal lossy compression techniques on microarray images. Postprocessing After compression, general images are sometimes processed to enhance their visual quality or to provide new features. DNA microarray images are not usually postprocessed with these goals. Rather, they are analyzed in order to extract genetic information and estimate their certainty. Because of this, traditional quality measures such as MSE or PSNR may not be completely suitable when performing lossy compression on DNA microarray images. Inspired by the manner in which genetic information is extracted, some researchers have proposed quality metrics specific for microarray images, which are applied after segmenting the image into spots and background. Wang et al. proposed in 2001 a combined quality index (q com ), which considered spot sizes, signal to noise ratios, background variability and excessively high local background [39]. In 2004, Sauer et al. analyzed this metric and proposed and extended it to two new quality measures, q com1 and q com2 [40]. In 2005, Battiato et al. defined an image segmentation quality metric [21] based on the q com2 measure. Pan-Gyu et al. defined another quality metric that considered signal and background noise, scale invariance, spot regularity and spot alignment [41]. Another factor to study is the information uncertainty of microarray experiments. It is common for DNA microarrays nowadays to replicate the same spot at least three times and to analyze the differences between the replicas to detect or alleviate the information variability of an experiment. Repeating an experiment using a number of different microarrays and comparing the information obtained for the same gene across them is another option to study the information uncertainty [23]. Different authors have used some of these concepts to define distortion measures for lossy microarray image compression, and have measured the performance of some of the discussed algorithms. The ideas on which these measures are sustained are described next. 
Spot Detection Spot detection consists of labeling spots as valid or invalid depending on a measure of the reliability of the extracted information. When performing lossy compression, this labeling might be affected, and thus one can define a distortion measure based on the number of differences in the classification [15]. Spot Identification Spot identification consists of determining whether a particular gene is being expressed with higher, lower or the same intensity in the two samples that correspond to the two channels of a typical DNA microarray image. This is usually done by comparing pixel intensity properties of the two channels for each valid spot. Lossy compression can affect these pixel intensity properties, and thus one can define a distortion measure by counting the number of differences in the classification after compression [15,29], or even as a quantitative difference between intensity logarithms [15]. Spot Classification Once spots have been evaluated in the identification step, clustering algorithms are often applied to the measurements obtained for each spot in order to classify them into different groups. Hierarchical clustering and k-means are the most widely used algorithms for this purpose [7], but expression-based classification has also been proposed [42]. The discrepancies between the classification or clustering results produced before and after lossy compression can be used to define new distortion measures. Distortion Results Despite the fact that several distortion measures have been defined, there are not many published surveys that report results for distortion measurements. The existing surveys consider mostly generic image compression techniques like SPIHT and JPEG2000, and the only algorithm specific to DNA microarray images studied in this context is SLOCO, Jörnsten's algorithm [29,42,43]. Even though there seems to be significant reluctance to employ lossy compression, all authors that discuss distortion measures agree that, for lossy compression, even at very low bitrates (high compression), these measures are affected in a very limited way [7,29,42]. Some claim that the variability induced by the lossy compression process is lower than that introduced when replicating an experiment [29], and even that lossy compression may improve the quality of the extracted information [7]. Technique Summary All microarray-specific techniques reviewed above are summarized in Table 1 according to the subsections where they have been discussed. They have been sorted chronologically and marked as lossless, lossy, or both, depending on the type of compression in which they participate. Table 1. Classification of microarray-specific techniques discussed in this document, according to the compression stage where they make their contribution. Techniques are sorted chronologically and can appear in more than one category. Purely lossy methods are marked in red, purely lossless methods in blue, and methods supporting both lossy and lossless compression in green. (Columns: Preprocessing — Denoising, Segmentation; Transform; Quantization; Entropy coding — Segmentation, Context; Generic; Postprocessing.) Lossless Compression Results Comparison In this subsection, we first present the image sets that have been used previously for benchmarking in the literature. Reported compression results for microarray-specific techniques are shown next. We analyze and compare those results to illustrate their relative compression performance for DNA microarray images.
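All the comparisons that follow are expressed in bits per pixel (bpp): the total size of the compressed files, in bits, divided by the total number of pixels of the original images. A minimal helper for computing this figure over an image set is sketched below; the file names and dimensions in the commented-out usage example are hypothetical.

```python
import os

def corpus_bits_per_pixel(entries):
    """Aggregate compression rate in bits per pixel (bpp) over an image set:
    total compressed size in bits divided by total number of pixels.
    `entries` is an iterable of (compressed_file_path, width, height) tuples."""
    total_bits = 0
    total_pixels = 0
    for path, width, height in entries:
        total_bits += os.path.getsize(path) * 8   # compressed size in bits
        total_pixels += width * height            # pixels in the original image
    return total_bits / total_pixels

# Hypothetical usage (file names and sizes are made up for illustration):
# bpp = corpus_bits_per_pixel([
#     ("apoa1_01.jls", 1044, 1041),
#     ("apoa1_02.jls", 1044, 1041),
# ])
# print(f"{bpp:.3f} bpp")   # values below 16 bpp indicate actual compression
```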
Lossy compression results are not discussed because published papers do not generally provide data tables that allow homogeneous comparisons among the different approaches, and also because it is not yet clear what an admissible information loss is. However, based on the partial information available, it can be argued that lossy schemes that compress the background with loss while keeping the foreground lossless are among the most frequent and successful. Image Sets Used in the Literature Several different image datasets have been used for benchmarking in microarray image compression, but none is common across all publications. In some papers, the images used are not specified. In others, only the source of the images is mentioned, but no other information about their size or characteristics is disclosed. Datasets that are described in the literature are presented in Table 2. Information about the number of images that they contain and their approximate size is also provided. Note that each image corresponds to one channel of a DNA microarray experiment. Table 2. Image sets referenced in the literature. All images are 16 bpp. (Columns: Dataset, Images, Size (px).) Comparison of Results Results from the lossless compression schemes described in Section 2 are presented in Table 3. Techniques are listed chronologically, oldest first. Values for each algorithm and dataset are taken directly from the original papers. Dashes mean results were not provided for a given image set, and the Unspecified column is used when the image set is not revealed by the authors. Results are expressed in bits per pixel (bpp), so lower is better. Original images are all 16 bpp. No single image set has been uniformly used for benchmarking in all techniques, so it is difficult to compare performance fairly. MicroZip, ApoA1 and ISREC are the sets that have been employed most frequently. Jörnsten's SLOCO claims the best results for the ApoA1 set with 8.556 bpp, but cannot be consistently compared to the other methods because of the lack of data for the other sets. With that exception, and considering only the results for these three corpora, Battiato's method based on CNNs performs best in all three, with bitrates of 8.619 bpp, 9.52 bpp and 9.49 bpp, respectively. A lower bound of 8 bpp is believed to exist for microarray images due to the presence of random noise in the least significant bitplanes. However, some authors have been able to obtain slightly better results for these bitplanes using high-order contexts [35]. Table 3. Comparison of lossless microarray-specific schemes on 16 bpp images. All results are expressed in bits per pixel (bpp), so lower is better. All results have been taken directly from the references specified in the table. Best results for each specified image set are highlighted in green, and worst results in red. (Columns: Algorithm, Year, and one column per image set, including an Unspecified column.) Compression Standards It is important to compare microarray-specific techniques with generic compression techniques, especially standard techniques, in order to estimate the benefits of developing new microarray-specific techniques. In 2006, Pinho et al. [48] compared the performance of lossless JPEG2000, JBIG and JPEG-LS on the MicroZip, ApoA1 and ISREC image sets. We have conducted our own set of experiments and have been able to verify and extend their results. The MicroZip, Yeast, ApoA1 and ISREC sets used for benchmarking in the literature have already been described in Section 2.8. In addition, we have employed the Stanford set and the Arizona set.
The Stanford set contains 20 images with sizes over 2000 × 2000, and up to 2200 × 2700, obtained from the Stanford Microarray Database public FTP at ftp://smd-ftp.stanford.edu/pub/smd/transfers/Jenny. The Arizona set has been kindly provided by David Galbraith and Megan Sweeney from the University of Arizona, and contains 6 images, each of size 4400 × 13800. All images are 16 bits per pixel. In Table 4, we summarize all the information about the different sets used in this work, including average image entropy. General Compression Schemes Results for general compressors (not image compressors) are shown in Table 5. These results have been computed dividing the total size in bits for all compressed files by the total number of pixels in the images. All compressors have been invoked in best compression mode. Best results in all datasets are obtained with Bzip2, which is especially efficient for the Yeast set. Image Compression Standards In our experiments we have tested the performance of standard image compression schemes as well. As in Pinho's work, we have evaluated lossless JPEG2000, JBIG and JPEG-LS. In addition, we have examined results for CALIC and different modes for lossless JPEG2000. While Pinho et al. tested only the JJ2000 implementation using the default parameter selections of DWT, with 5 decomposition levels and 33 quality layers, we have also evaluated the performance with 0 to 5 decomposition levels and the same number of quality layers with both the JJ2000 and Kakadu implementations. We show in Table 6 an updated version of the data that we have previously provided [49]. As in the case of the general compression schemes, results are obtained by dividing the total number of bits of the compressed images by the total number of pixels of the original images. We have obtained essentially identical results as in Pinho et al.'s work for the cases on which they have reported. Very similar results for all sets are obtained when using the Kakadu implementation with the same parameter choices used for JJ2000. In general, increasing the number of wavelet decomposition levels improves compression performance by approximately 0.5 bpp. The only exception is the Yeast set, where using 0 decomposition levels yields 32.33% better results than using 5 decomposition levels. A possible explanation is that the original images from the Yeast set contain less than 6% of all the possible intensities for a 16 bpp image, but applying one level of DWT produces three times as many different coefficient values and increases the entropy by more than 2 bits. CALIC and JPEG-LS prediction-based schemes perform better than all the other non-specific schemes on all image sets except for the Yeast set, where JPEG2000 with 0 decomposition levels and JBIG significantly outperform CALIC in more than 23%. The best image compressor for each set is between 2.86% and 5.25% better than Bzip2, the best generic compressor. Battiato's algorithm performs 12.81%, 10.45% and 11.85% better than the best non-microarray-specific technique for the MicroZip, ApoA1 and ISREC sets, respectively. Image Properties Affecting Compression Performance A key issue to be addressed when designing new compression schemes which improve overall performance could be identifying relevant image properties that affect the compression efficiency. Two different classes of properties can be defined for DNA microarray images: general properties applicable to any image and properties specific to microarray images. 
General properties like resolution size or image entropy have the advantage of being more easily measurable, but also the disadvantage of not always providing much insight on the nature of DNA microarray images nor their compression. Image size does not correlate very well with the compression performance of any of the general or image-specific schemes shown in Tables 3, 5 and 6. On the other hand, average entropy is clearly related to the compression performance for microarray images, but this fact is not useful for designing new algorithms. Properties only applicable to DNA microarray images like spot quality and information uncertainty and their effect on compression could be very useful to understand the peculiarity of the underlying signal. Spot quality of microarray images and information uncertainty of microarray experiments (described in Subsection 2.6) may be useful in this regard. It would be very interesting to investigate the relationship of these two properties and the compression performance of different algorithms. Unfortunately, it is not feasible to do so in a straightforward way. Most quality assessment procedures rely on a previous segmentation of the image into foreground and background, but publicly available tools either require manual intervention, which is very inefficient and can cause imprecisions, or cannot be considered equally reliable for all datasets. Were an automatic method for assessing quality found, it would then be feasible to investigate the mentioned relationship. In addition, most information uncertainty measures require information of the genes associated to each spot, which is not available for the majority of the datasets used for benchmarking. If these data were available, it would be interesting to analyze the experimental deviations for the different datasets and their relationship with compression performance. Conclusions and Future Work DNA microarrays are state-of-the art tools widely used in biology and medicine. Analysis of microarray images is still being developed and repeating experiments is expensive or impossible, so keeping them for re-analysis is desirable. Due to the large amounts of data produced, efficient storage and transmission schemes are necessary, and compression arises as a natural approach. At least 10 compression schemes specific for microarray image compression have been proposed in the literature. We have classified all of them according to the stage or stages of the image compression process that they contribute to most, and have described their most relevant ideas. Lossless compression results for both microarray-specific and different standard image compression methods have been discussed. The best microarray-specific technique reported for a variety of data sets is Battiato's CNN-based proposal. According to our experiments, the best standard lossless image compressors are the prediction-based algorithms CALIC and JPEG-LS, except for the Yeast set, where lossless JPEG2000 with zero decomposition levels and JBIG improve upon those techniques. The best image compressor for each set is between 2.88% and 5.25% better than Bzip2 except for the Yeast set, and Battiato's algorithm performs 12.81%, 10.45% and 11.85% better than the best non-microarray-specific technique for the three image sets for which data from most algorithms exist. As future work, we will analyze the properties of the new generation of DNA microarray scanners and their implications in the compression of the scanned images. 
We will also search for new data about the information uncertainty and image quality present in the different datasets, and analyze their relationship to compression performance.
Corrosion Behavior of Hydrotalcite Film on AZ31 Alloy in Simulated Body Fluid The hydrotalcite (HT) film is a promising bioactive coating for magnesium alloys. In the present study, we investigate the corrosion behavior of the HT film in simulated body fluid (SBF) and compare it with that in NaCl solution. The HT film provides very good initial protection to the AZ31 alloy in SBF. The corrosion behavior of the HT film in the two solutions is quite different. In 0.1 mol·L⁻¹ NaCl solution, the film is dissolved gradually, and filiform corrosion is predominant after 3 days of immersion. In Hank's solution, by contrast, the thickness and composition of the film change. A corrosion product layer consisting mainly of Mg/Ca phosphates and hydrogen phosphates, with a minor amount of CaCO₃, is deposited on top of the HT film, which enhances the barrier effect of the HT film. As a result, except for local pit corrosion at several active places, most of the area of the coated sample remains intact even after immersion for 15 days. It is demonstrated that the HT film provides better corrosion protection in SBF than in NaCl solution. Introduction Magnesium (Mg) and its alloys have been intensively studied as biodegradable implant materials since they possess the following advantages. (1) Good biocompatibility: Mg is an element essential to the human body, with half of the total physiological Mg stored in bone tissue [1-3]. The in vitro results reported by Cheng et al. indicated that Mg does not present any cytotoxic effects on L929 and ECV304 cells [4]. In vivo studies also showed that Mg and Mg alloys are biocompatible [5]. (2) Good mechanical properties: Mg alloys have strength and elastic modulus similar to those of natural bone, and have the potential to minimize stress shielding [6,7]. (3) Density close to that of bone: the densities of Mg-based metals (1.7-2.0 g cm⁻³) are close to those of natural bone (1.8-2.1 g cm⁻³) [8]. (4) Additionally, owing to their unique biodegradability, Mg-based implants do not require a second surgery, unlike stainless steel and titanium-based implants [9-11]. Both suitable mechanical properties and degradation profiles are required for next-generation biodegradable stents [12]. However, Mg alloys undergo rapid corrosion under physiological conditions [13,14], resulting in an early loss of mechanical stability of the implant before the end of the healing process [15]. The main method to solve the corrosion problem is to develop corrosion-resistant alloys. Surface treatment is another commonly used method to slow down the corrosion rate of Mg alloys. Concerning the clinical application of biodegradable Mg implants, an ideal coating should be biocompatible and dissolvable in the human body. From this point of view, films of hydrotalcite-like compounds (HTs) are a potential option. On the one hand, HTs have good biocompatibility and low toxicity. HTs have been approved for human clinical uses, including drug release and use as carriers for proteins, nucleotides and DNA [16-22]. For example, Riaz et al. pointed out that drug-HT hybrids are valuable because of their application as advanced anti-cancer drug delivery systems [19]. Data obtained by Perioli et al. suggested that HT is a suitable material able to improve the biopharmaceutical properties of class IV BCS drugs [20]. From the in vivo testing of adult male Sprague Dawley rats, Kwak et al.
[18] concluded that the HT particles had little systemic effect at doses ≤200 mg kg −1 .On the other hand, HTs films have been applied to enhance the corrosion resistance of Mg alloys [23][24][25][26][27][28].Lin and Chen found that the HT conversion films adhere well to the substrates, and provide a good protection to the Mg alloys in NaCl solution [24,26].Furthermore, it is found that HTs solids are degradable when they are exposed to aqueous solutions [27,29].Although, we have investigated the corrosion behavior of the HT film in NaCl solution, the protective property of this HT film to Mg in the physiological environment is not well understood.Hence, the aim of the present work is to investigate the corrosion resistance of the Mg-Al HT film as biologic coating in simulated physiological conditions.In addition, a comparison of the corrosion behavior of the HT film in NaCl solution and simulated body fluid (SBF) is also provided in this paper. Fabrication of the Films The material used in this study was extruded AZ31 alloy.The grain size of the alloy was not very even, the average value was about 16 µm.The surface of the samples was ground to 2000 grit SiC paper, ultrasonically cleaned in ethyl alcohol, and then dried in the cold air.The HT film was prepared by a two-step in situ growth method, and followed the same procedure according to the literature previously described [26].Carbonic acid solution was prepared by bubbling CO 2 gas through 200 mL of distilled water for about 10 min at room temperature (20 ± 2 • C).Al containing solution was prepared by dissolving a pure Al panel into the 0.5 mol•L −1 Na 2 CO 3 solution at 60 • C. Then this Al containing solution was dropped into the carbonic acid solution until achieving the pH value of about 8 to form the pretreatment solution at 60 • C. The post treatment solution was based on the pretreatment solution using NaOH solution to adjust the pH value to 10.5, then heated to 80 • C. Samples were first immersed in the pretreatment solution with a continuous bubbling of CO 2 for 30 min and then immersed in the post treatment solution for 1.5 h to obtain the HT film. Characterization The morphology of the films was observed using an environmental scanning electronic microscope (ESEM, Philips XL30, FEI, OR, USA) equipped with an energy dispersive X-ray spectroscopy (EDS, X-act, Oxford Instruments, Oxford, UK).The cross section samples were first mounted using epoxy resin.Then they were ground to 5000 grit SiC paper.After that, they were polished with alumina powders (0.5 µm).Subsequently, samples were cleaned with ethyl alcohol, and dried in the cold air.The chemical composition was analyzed by EDS and X-ray photoelectron spectroscopy (XPS, ESCALAB 250, Thermo Fisher Scientific, Waltham, MA, USA).The XPS was probed using Al Kα radiation (1486.6 eV).The power was 150 W, the pass energy was 50.0 eV and the step size was 0.1 eV.All energy values were corrected according to the adventitious C 1s signal set at 284.6 eV.The data were analyzed with Xpspeak 4.1 software (v4.1, developed by Raymund Kwok, Hongkong, China). Electrochemical tests were carried out using a ParStat 4000 potentiostat (Ametec, Berwyn, PA, USA).A classical three-electrode system was applied.The samples, a saturated calomel electrode (SCE) and a platinum plate, were used as working electrode, reference electrode and auxiliary electrode, respectively.The open circuit potential (OCP) of the AZ31 alloy with and without film was investigated by a potential vs. 
time curve with a sampling interval of 100 s per point. The polarisation curves were obtained on an exposed area of 1 cm² at a constant voltage scan rate of 0.5 mV s⁻¹ after an initial delay of 300 s. The immersion test was performed according to the Chinese standard GB 10124-88 [30]. The size of the samples for the immersion tests was 25 × 50 × 2 mm³. The electrochemical and immersion tests were conducted in Hank's solution (NaCl 8.0 g/L, KCl 0.4 g/L, CaCl₂ 0.14 g/L, NaHCO₃ 0. … °C). The corrosion products were removed in a chromic acid bath consisting of 180 g L⁻¹ CrO₃. Morphology of the HT Film Figure 1 shows the morphology of the coated sample. According to our previous work [26], the chemical composition of the film was mainly Mg₆Al₂(OH)₁₆CO₃·4H₂O. The optical photo in Figure 1a shows that the HT film is very homogeneous. Figure 1b presents a compact and smooth film in which micro-cracks can hardly be seen. The higher-magnification morphology in Figure 1c clearly demonstrates a typical HT structure. The surface of the Mg substrate is completely covered with dense and uniform blade-like flakes. From the cross-sectional image of the coated sample in Figure 1d, it is observed that the HT film adheres strongly to the substrate, and there is no crack in the film, which indicates that the film is very compact. The thickness of the film is about 1.04 µm. Electrochemical Corrosion Test
The potential vs. time curves of the AZ31 alloy with and without the film in Hank's solution are shown in Figure 2a. In the case of the AZ31 substrate, the OCP keeps increasing from −1.84 to −1.57 V (vs. SCE) during the initial 1.66 × 10⁴ s, and then drops slightly. This implies that a corrosion product film forms continuously on the alloy surface and then breaks down. Afterwards, the OCP fluctuates within a small range, indicating that the rupture and formation of the corrosion product film reach a dynamic equilibrium. The optical morphology of the AZ31 substrate after being soaked for 9.0 × 10⁴ s is also provided in the lower part of Figure 2a. A thin but non-homogeneous film is observed on the surface, with many white-bright spots distributed randomly, indicating the rupture of the corrosion product film. In contrast, the behavior of the sample coated with the HT film is different. The OCP of the coated sample increases from −1.90 to −1.63 V (vs. SCE) within 1.20 × 10⁴ s. Subsequently, it keeps rising with a relatively lower slope. This implies that the reaction between the HT film and Hank's solution continues during the whole immersion, forming a corrosion product layer on top that is hardly stabilized. By the end of the test, the OCP of the coated sample reaches −1.49 V (vs. SCE). It is noteworthy that the OCP fluctuates several times, at around 1.20 × 10⁴, 5.70 × 10⁴ and 1.84 × 10⁵ s. This may be because local superficial pit corrosion occurs at some active spots, but these active areas are covered by corrosion products immediately. No corrosion pits are observed by the naked eye on the surface of the coated sample after being soaked for 2.62 × 10⁵ s (more than 3 days). This implies that the film can effectively delay the initial appearance of macroscopic corrosion pits. Figure 2b shows the polarization curves of the AZ31 alloy with and without the film in Hank's solution. The HT film slows down the corrosion rate of the bare alloy by inhibiting both the cathodic hydrogen evolution and the anodic dissolution reactions. The hydrogen evolution rate on the cathodic side is decreased. The anodic sides of the two curves are quite different: in curve 2, there is a region with a passive tendency. The breakdown potential (E_b) of the film-coated sample is about −1.34 V, which is more positive than that of the substrate (−1.51 V). The corrosion current densities (i_corr) of the substrate and the coated sample are approximately 23.64 and 5.54 µA cm⁻², respectively. The positive shift of the corrosion potential and the decrease of the current density indicate that the HT film can improve the corrosion resistance of the AZ31 alloy in SBF. Corrosion Morphology of the HT Film Optical images of the AZ31 alloy with and without the HT film immersed in Hank's solution for different times are shown in Figure 3. After 3 days of immersion, filiform corrosion has occurred over more than half of the area of the bare alloy, which is severely corroded. The majority of the coated sample is not attacked after 15 days of immersion. The immersion test also indicates that the HT film can offer good protection to the AZ31 alloy in SBF. However, compared to the morphology of the original HT film before immersion, which is very homogeneous and smooth without any tubercles (Figure 1a), there are some white particles on the surface of the HT film after the immersion test.
Figure 4 presents the surface morphology of the AZ31 alloy with the HT film after the immersion test for 15 days in Hank's solution. The low-magnification SEM image reveals that a large amount of white corrosion products is deposited on the pit. However, these particles do not stack densely. The high-magnification morphology shows that the microstructure of the top film changes greatly after the immersion test. The baculiform (rod-like) particles stack tightly, which is quite different from the original HT microstructure of curved hexagonal platelets lying perpendicular to the substrate surface before immersion (Figure 1c). After removing the corrosion products, it is observed that the depth of the pit is inhomogeneous, being deeper at the intermediate site (Figure 4c). This indicates that both vertical and horizontal development of the pit corrosion occurred. Many shallow corrosion traces are also observed on the surface of the substrate. (The marked regions in Figure 4a represent the EDS testing regions and correspond to the numbers in Table 1.)
Figure 5 shows the cross-sectional morphology of the HT-film-coated sample after the immersion test for 15 days in Hank's solution. Figure 5a displays a deep pit, about 126 µm in depth, which is filled with corrosion products. In addition, there are many micro-cracks in the block of products. In the high-magnification image, there are three layers above the substrate, like a "sandwich". The top layer is the corrosion product film covering the HT film, with a thickness of about 0.73 µm. The second layer is the HT film after immersion; compared with the original HT film, its thickness has increased significantly (to about 2.12 µm). There is an open crack penetrating the two layers directly to the substrate. Such micro-cracks can provide channels for the corrosion medium to pass through to the substrate. As a result, the Mg substrate under the film is corroded, and another loose and discontinuous corrosion product layer is formed underneath the HT film. In addition, there is a gap between the HT film and the bottom corrosion product layer. It may be caused by the mechanical grinding, because the corrosion products of the substrate underneath the HT film are loose and brittle. (The marked regions in Figure 5b represent the EDS testing regions and correspond to the numbers in Table 1.)
Composition Analysis of the Film The chemical composition of the film before and after immersion was analyzed by EDS and XPS. The content of the various elements in the different regions of the films marked 1-7 in Figures 1, 4a and 5b is listed in Table 1. The original HT film contains only C, Mg, Al and O, and the content of Mg is very high (Region 1), indicating that the signal from the matrix is strong. The composition of the film after the immersion test is more complex, including C, O, Mg, Al, P and Ca, and the signal from the matrix Mg is weak (Region 4), implying that the detected information is mainly attributed to the film. This is because the thickness of the film increases after immersion, in accordance with the cross-sectional morphology (compare Figures 1d and 5b). Furthermore, the atomic ratio of Mg to Al in Region 6 is decreased to 3.13:1, compared with about 3.68:1 in Region 2. The main conclusion from the composition comparison of the HT film before and after immersion is that Ca and P have been inserted into the HT structure. The corrosion products deposited on the pit are composed of O, C, Mg, P and Ca, and the content of Ca is especially high (Region 3). According to the EDS analysis, the contents of Ca and P in the corrosion product film layer are high, but the Al content is very small (Region 5). The composition of the bottom corrosion product layer is C, O, Al, Mg, Ca and P (Region 7), with Mg as the main metal element. In addition, the contents of Ca and P in the corrosion product layer below the HT film are smaller than those in the top layer. This may be because only a small amount of electrolyte, with poor fluidity, passed through the HT film.
The XPS analysis of the HT film after the immersion test for 15 days in Hank's solution is shown in Figure 6. The C 1s spectrum has two peaks. The strong peak is due to adventitious hydrocarbons from the environment. A small shoulder at approximately 289.6 eV indicates the presence of CaCO₃.
Figure 6b presents the high-resolution XPS spectrum of O 1s, which is deconvoluted into three peaks. The peak at 533.7 eV can be attributed to P-OH [31,32]. The binding energy of 532.6 eV corresponds to H₂O. The peak at 531.5 eV is attributed to CO₃²⁻ or P=O [26,31-33]. The spectrum of P 2p is divided into two peaks, at 132.7 and 133.5 eV, which are attributed to PO₄³⁻ and HPO₄²⁻, respectively [34,35]. No Al signal is present in the spectrum (Figure 6d), indicating that the XPS signals are mainly attributed to the corrosion product film formed during the immersion test, not to the HT film. This result is in accordance with the observation above that there is another film layer on top of the HT film (Figure 5b) and that almost no Al is identified in this layer. The high-resolution spectrum of Ca 2p displays two distinctive peaks due to spin-orbit splitting. The Ca 2p3/2 peak at 347.8 eV can be attributed to CaHPO₄·2H₂O. The binding energy of the Ca 2p3/2 peak at 346.9 eV can be attributed to Ca₃(PO₄)₂ or CaCO₃ [34]. The Mg 1s peak at 1303.9 eV is assigned to Mg₃(PO₄)₂ [35]. It is observed that the content of phosphates is larger than that of hydrogen phosphates. Based on the composition analysis, it can be seen that the top corrosion product layer consists mainly of Mg₃(PO₄)₂, Ca₃(PO₄)₂, CaHPO₄·2H₂O and CaCO₃. A Comparison of the Corrosion Behavior of AZ31 with HT Film in NaCl Solution and Hank's Solution We preliminarily compared the corrosion protection effect of the HT film on the AZ31 alloy in 0.1 mol·L⁻¹ NaCl and in Hank's solution. The corrosion behavior of the HT film in SBF is quite different from that in NaCl solution. In NaCl solution, localized corrosion had already occurred on the coated sample after 12 h of immersion (Figure 5e in Reference [26]). After 3 days of immersion, the coated sample displayed a filiform corrosion characteristic and metallic brightness (Figure 5a in Reference [28]). As illustrated in Figure 7a, dissolution of the HT film takes place; accordingly, the HT film cannot continuously protect the Mg substrate for longer periods of time, and many long filaments are observed. In SBF, by contrast, no corrosion is visible on the coated sample after the OCP test lasting more than 3 days, and the whole surface of the substrate is still covered with the film even after 15 days of immersion. However, pit corrosion occurs at some active spots. Although the corrosion products deposited on the pit can provide some degree of protection, the corrosion suppression effect is limited, given that the product particles are deposited loosely above the pits (Figure 4a). In addition, the electrolyte can penetrate the film at weak sites during the long-time immersion, and then micro-cracks appear on the surface of the film. Once the electrolyte passes through the micro-cracks to the substrate, corrosion occurs underneath. The corrosion behavior of the HT film in SBF is illustrated in Figure 7b. The corrosion protective effect of the HT film in SBF is superior to that in NaCl solution. It may be attributed to the corrosion product layer precipitated above the HT film: as a diffusion barrier against electrolyte uptake, the top corrosion product layer governs the dissolution of the HT film and suppresses the corrosion activity. These facts indicate that the corrosion environment governs the corrosion behavior of the HT film.
Summary

In summary, the HT film is compact and uniform; it improves the corrosion resistance of the AZ31 substrate and greatly delays the onset of corrosion in SBF. A dense corrosion products film, mainly consisting of Mg/Ca phosphates and CaCO3 and having high chemical stability, is continuously precipitated above the HT film. However, local pitting takes place, and underfilm corrosion develops at some active sites after long periods of immersion.

The difference in corrosion behavior of the AZ31 alloy with the HT film in SBF and in NaCl solution is preliminarily revealed. In NaCl solution, the HT crystals can dissolve, and macroscopic filiform corrosion occurred after only 3 days of immersion. In SBF, corrosion is localized, and most of the coated surface remains intact even after 15 days of immersion; the deposited top corrosion products layer enhances the barrier effect of the HT film. Hence, the HT film provides protection for longer periods in SBF than in NaCl solution.

Author Contributions: J.C. and S.M. conceived and designed the experiments. J.C. and K.K. carried out the experimental work and prepared all the figures. J.C. and Y.S. analyzed the data and co-wrote the paper. E.H. and J.A. contributed to the general discussion. All authors reviewed the manuscript.
Figure 1. Morphology of the hydrotalcite film coated sample: (a) optical photo; (b) surface morphology at low magnification; (c) surface morphology at high magnification; (d) cross-sectional morphology. (1 and 2 mark the EDS full-scan region in Figure 1b and the EDS testing region in Figure 1d, respectively; they correspond to the numbers in Table 1.)

… the rupture of the corrosion products film. In contrast, the performance of the sample coated with the HT film is different. The OCP of the coated sample increases from −1.90 to −1.63 V (vs. SCE) within 1.20 × 10^4 s and subsequently keeps rising with a lower slope. This implies that the reaction between the HT film and Hank's solution continues throughout the immersion, forming a corrosion products layer on top that is hardly stabilized. By the end of the test, the OCP of the coated sample reaches −1.49 V (vs. SCE). It is noteworthy that the OCP fluctuates several times, around 1.20 × 10^4, 5.70 × 10^4 and 1.84 × 10^5 s. This may be because local superficial pitting occurs at some active spots, but these active areas are immediately covered by corrosion products. No corrosion pits are observed by the naked eye on the surface of the coated sample after soaking for 2.62 × 10^5 s (more than 3 days), implying that the film effectively delays the first appearance of macroscopic corrosion pits. Figure 2b shows the polarization curves of the AZ31 alloy with and without the film in Hank's solution. The HT film slows down the corrosion of the bare alloy by inhibiting both the cathodic hydrogen evolution and the anodic dissolution reactions; the hydrogen evolution rate on the cathodic side is decreased. The anodic branches of the two curves are quite different: in curve 2 there is a region with a passivation tendency. The breakdown potential (Eb) of the film-coated sample is about −1.34 V, more positive than that of the substrate (−1.51 V). The corrosion current densities (icorr) of the substrate and the coated sample are approximately 23.64 and 5.54 μA cm−2, respectively. The positive shift of the corrosion potential and the decrease of the current density indicate that the HT film improves the corrosion resistance of the AZ31 alloy in SBF.
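The Tafel results above can be turned into more intuitive numbers. The short Python sketch below converts the reported corrosion current densities into an approximate penetration rate via the standard ASTM G102 / Faraday's-law relation and computes a protection efficiency; the magnesium equivalent weight and density used here are handbook values assumed for illustration, not quantities reported in the text.

# Hedged illustration: converting corrosion current density to a nominal
# uniform-corrosion penetration rate and a protection efficiency.
EW_MG = 24.305 / 2       # g/equivalent for Mg2+ (assumed handbook value)
RHO_MG = 1.74            # g/cm^3, density of Mg (assumed handbook value)
K_FARADAY = 3.27e-3      # mm*g/(uA*cm*yr), ASTM G102 conversion constant

def corrosion_rate_mm_per_year(i_corr_uA_cm2: float) -> float:
    """Penetration rate CR = K * i_corr * EW / rho (uniform-corrosion assumption)."""
    return K_FARADAY * i_corr_uA_cm2 * EW_MG / RHO_MG

def protection_efficiency(i_bare: float, i_coated: float) -> float:
    """eta = (i_bare - i_coated) / i_bare * 100%."""
    return (i_bare - i_coated) / i_bare * 100.0

if __name__ == "__main__":
    i_bare, i_coated = 23.64, 5.54            # uA/cm^2, values quoted in the text
    print(f"bare AZ31: {corrosion_rate_mm_per_year(i_bare):.2f} mm/yr")
    print(f"HT-coated: {corrosion_rate_mm_per_year(i_coated):.2f} mm/yr")
    print(f"protection efficiency: {protection_efficiency(i_bare, i_coated):.1f} %")

On these numbers the coating reduces the nominal uniform-corrosion rate roughly four-fold (about 0.54 to 0.13 mm/yr, protection efficiency near 77%), consistent with the qualitative conclusion drawn from the polarization curves.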
Figure 2. (a) Potential vs. time curves; (b) polarization curves of the AZ31 alloy with and without the hydrotalcite film in Hank's solution.

… (Figure 4c). It indicates that both vertical and horizontal development of the pit corrosion occurred. Many shallow corrosion traces are also observed on the surface of the substrate.

Figure 3. Optical corrosion morphology of: (a) the bare AZ31 alloy after the immersion test for 3 days; (b) the hydrotalcite film coated sample after the immersion test for 15 days in Hank's solution.

Figure 4. Surface morphology of the hydrotalcite film coated sample after the immersion test for 15 days in Hank's solution: (a) the white particle area at low magnification; (b) the intact area at high magnification; (c) the pit area after removing the corrosion products. (3 and 4 in Figure 4a mark the EDS testing regions and correspond to the numbers in Table 1.)

Figure 5. Cross-sectional morphology of the hydrotalcite film coated sample after the immersion test for 15 days in Hank's solution: (a) low magnification; (b) high magnification. (5, 6 and 7 in Figure 5b mark the EDS testing regions and correspond to the numbers in Table 1.)

Figure 7. Schematic illustration of the corrosion behavior of AZ31 with the hydrotalcite film in: (a) NaCl solution; (b) Hank's solution.

Table 1. Content of the various elements in the regions of the films marked 1-7 in Figures 1, 4a and 5b.
8,950.2
2019-02-12T00:00:00.000
[ "Materials Science" ]
Fluctuational dynamics and the structure factor of nonlinear reactive systems on a 1D lattice

Within the investigated model of a one-dimensional bi- and tri-molecular chemically reacting crystal, the asymptotic behaviour of the amplitude of the two-particle static structure factor (as a function of the crystal length) has been found. The nonlinear fluctuational scenario leads us to conclude that an asymptotic metastable cluster fragmentation may exist within initially homogeneous 1D systems. A connection between these possible effects and both the properties of the fluctuations in reacting systems and the reactive dynamics on a partially filled lattice is also shown.

Introduction

One-dimensional models are known to be an effective tool for solving a variety of problems in statistical mechanics. In the past decades, much attention in areas such as chemical reactions, random walks and aggregation problems has been paid to the role of dimensionality [1]. In particular, work carried out in Brussels by Nicolis, Provata, Prakash, Tretyakov and Turner has shown that restricting space to low dimension can cause deviations from the mean-field behaviour, depending on the type of nonlinearity involved. For instance, while the bimolecular reaction A+X ←→ 2X shows mean-field behaviour on a completely filled 1D lattice, the trimolecular reaction A+2X ←→ 3X stabilizes on such a lattice in a nonequilibrium, locally frozen asymptotic state in which the ratio of the average numbers of A to X particles is a constant quite different from the mean-field value.

The work carried out within the framework of the IUAP project during our stay in Brussels focused on two topics: the properties of the fluctuations in the reacting systems and the study of reactive dynamics on a partially filled lattice. A manuscript summarizing the results is currently being prepared for publication in Europhysics Letters. We hereafter summarize the principal steps and conclusions.

Fluctuations in a completely filled 1D lattice

Consider the bimolecular reaction 2X ←→ A + X on an ideal, totally filled 1D lattice of M sites, bearing in mind that a given species can only react with its nearest neighbours and that particles cannot overlap. We adopt as initial condition a uniform configuration containing only X particles. Since at equilibrium all the positive configurations of A and X particles, except the one with the lattice filled completely by A, can be generated from this initial condition, the probability distribution g_M(N_A) can be written in closed form (Equation (1)). From (1) one recovers asymptotically (M → ∞) the previous result of Nicolis et al., r = ⟨N_A⟩/⟨N_X⟩ = 1. Furthermore, one can compute the covariance matrix of the fluctuations around the mean particle numbers, etc.
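As a concrete illustration of the completely filled bimolecular case, the following Python sketch samples the stationary state with a simple Monte Carlo procedure: a random site is picked and, if a randomly chosen nearest neighbour holds an X particle, the site is flipped between A and X. With equal forward and backward rates this elementary move satisfies detailed balance with respect to a uniform measure over the admissible configurations, and the measured ratio approaches the value r = 1 quoted above for large M. The update rule and parameter values are our own minimal assumptions, not the authors' simulation protocol.

import random

def sample_ratio(M=200, sweeps=5000, burn_in=1000, seed=0):
    """Monte Carlo for 2X <-> A+X on a fully occupied periodic 1D lattice.

    Site states are 'X' or 'A'. A site may flip only if the randomly chosen
    neighbour is X, mirroring the nearest-neighbour reaction rules in the text.
    """
    rng = random.Random(seed)
    lattice = ['X'] * M                      # initial condition: all X
    n_A_samples = []
    for sweep in range(sweeps):
        for _ in range(M):                   # one sweep = M attempted moves
            i = rng.randrange(M)
            j = (i + rng.choice((-1, 1))) % M
            if lattice[j] == 'X':            # reaction partner must be an X
                lattice[i] = 'A' if lattice[i] == 'X' else 'X'
        if sweep >= burn_in:
            n_A_samples.append(lattice.count('A'))
    mean_A = sum(n_A_samples) / len(n_A_samples)
    return mean_A / (M - mean_A)             # <N_A> / <N_X>

if __name__ == "__main__":
    print("estimated r =", round(sample_ratio(), 3))   # expected to be close to 1

Note that the all-A configuration is unreachable under these rules (the last X has no X neighbour to convert it), in line with the exclusion of that configuration from the equilibrium ensemble described above.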
Coming now to the trimolecular model and adopting the same initial condition, one can easily see that, because of the geometric constraints involved, for each given N_A at least N_A − 1 sites cannot be occupied by A particles. The probability distribution replacing (1) is thus given by Equation (3). Again, the expression reproduces the value ⟨N_A⟩/⟨N_X⟩ ≈ 0.38 obtained previously by Nicolis et al. However, one is now also in a position to compute the properties of the fluctuations, for instance the variances and covariance of the particle numbers.

Partially filled lattice

Next we allow for vacancies in the lattice, starting with an initial configuration in which N_X particles (N_X < M) are distributed randomly among the M sites available. We also introduce three auxiliary variables:
- the number of nearest-neighbour pairs occupied simultaneously by X particles, N_XX;
- the number of nearest-neighbour pairs of which only one is occupied by an X particle, N_0X;
- the number of nearest-neighbour pairs both of which are empty, N_00.
Simple relations between these variables are easily established, showing that only one of them can be chosen independently, say N_XX. The number of different configurations of X particles with exactly N_XX such pairs then follows by direct counting. Since A particles can only be generated from configurations involving contiguous X particles, the conditional probability of finding N_A particles in the system is proportional to the binomial coefficient (N_XX choose N_A), and the equilibrium distribution of the lattice follows, for the bimolecular model. This expression can be reduced to a form exhibiting a confluent hypergeometric function and used to calculate the fluctuations of the number of particles. The principal result of this analysis is that the fluctuations now behave anomalously, with an amplitude governed by λ = N_X/M, the initial filling fraction.

Structure factor

To get an idea of the type of spatial inhomogeneities locally created in the lattice, we evaluated the structure factor of the system, a quantity whose additional interest lies in its experimental accessibility. Let g be a wave number associated with the inhomogeneities. The structure factor is then defined as a double sum over the lattice sites l and l′ of the occupation cross-correlations, weighted by exp[ig(l − l′)]. Performing the summations over l and l′ one obtains a closed-form expression, which takes a simpler form in the long-wavelength ("hydrodynamic") limit g → 0. The g-dependence of this function reflects the existence of spatial variability. However, since the extremum of S_AX is at g = 0, no preferred length scale emerges. The situation is likely to change in the trimolecular model, which is currently under investigation.
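Since the display equation defining S_AX(g) was lost in transcription, the sketch below shows one conventional way to estimate a two-species static structure factor numerically from sampled lattice configurations (for instance, configurations produced by a sampler like the one sketched in the previous note): the occupation fluctuations of A and X are Fourier transformed and their cross-spectrum is averaged over samples. The 1/M normalization and the periodic boundary conditions are our own assumptions and may differ from the authors' exact definition.

import numpy as np

def cross_structure_factor(configs):
    """Estimate S_AX(g) = (1/M) < dn_A(g) conj(dn_X(g)) > from configurations.

    configs: integer array of shape (n_samples, M) with 1 = X and 0 = A
             (fully occupied lattice, periodic boundary conditions assumed).
    Returns the wave numbers g and the real part of the averaged cross-spectrum.
    """
    configs = np.asarray(configs)
    n_samples, M = configs.shape
    n_X = configs.astype(float)
    n_A = 1.0 - n_X
    dn_X = n_X - n_X.mean(axis=1, keepdims=True)   # occupation fluctuations
    dn_A = n_A - n_A.mean(axis=1, keepdims=True)
    f_A = np.fft.fft(dn_A, axis=1)                  # dn_A(g)
    f_X = np.fft.fft(dn_X, axis=1)
    S = (f_A * np.conj(f_X)).mean(axis=0) / M       # average over samples
    g = 2.0 * np.pi * np.fft.fftfreq(M)
    return np.fft.fftshift(g), np.fft.fftshift(S.real)

Because the lattice in the fully filled model carries n_A = 1 − n_X, the fluctuations obey dn_A = −dn_X and S_AX is simply the negative of the X-density structure factor; the estimator is written for the general two-species case anyway, so it also applies when vacancies are present.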
1,226.4
1999-01-01T00:00:00.000
[ "Physics" ]
Adaptive erasure of spurious sequences in sensory cortical circuits

Summary

Sequential activity reflecting previously experienced temporal sequences is considered a hallmark of learning across cortical areas. However, it is unknown how cortical circuits avoid the converse problem: producing spurious sequences that do not reflect sequences in their inputs. We develop methods to quantify and study sequentiality in neural responses. We show that recurrent circuit responses generally include spurious sequences, which are specifically prevented in circuits that obey two widely known features of cortical microcircuit organization: Dale's law and Hebbian connectivity. In particular, spike-timing-dependent plasticity in excitation-inhibition networks leads to an adaptive erasure of spurious sequences. We tested our theory in multielectrode recordings from the visual cortex of awake ferrets. Although responses to natural stimuli were largely non-sequential, responses to artificial stimuli initially included spurious sequences, which diminished over extended exposure. These results reveal an unexpected role for Hebbian experience-dependent plasticity and Dale's law in sensory cortical circuits.

Proof (that sequentiality lies between 0 and 1). Sequentiality is trivially non-negative, because it is the ratio of non-negative quantities; therefore we only have to prove that it is smaller than one. We will assume that the cross-covariance is computed for all possible values of the time lag (S = T) and wrapped around in a circle (i.e. C(T + s) = C(s)). In particular, we assume that T is odd and consider time lags between −(T − 1)/2 and (T − 1)/2. In practice, this assumption is inessential as long as the maximum time lag is large enough to cover the bulk of the cross-covariance. In matrix form, the sequentiality of Equation 10 can be rewritten as Equation S2. In order to prove that sequentiality never exceeds one, we have to prove that the numerator never exceeds the denominator. By inspection of Equation S2, this is true if and only if Σ_s Tr[C(s)²] ≥ 0. Note that Tr[C(s)C(s)ᵀ] is non-negative for all values of s, since it is equal to the sum of squared elements of C(s). However, the second term, Tr[C(s)²], may be negative in some cases; for example, if C(s) is anti-symmetric for some values of s, then Tr[C(s)²] is negative. We show below that it is nevertheless non-negative when summed over all values of s. We define the cross-spectral matrix Ĉ(ω) as the discrete Fourier transform of the cross-covariance. Since C(s)ᵀ = C(−s), it is straightforward to show that the cross-spectral matrix is Hermitian. It is also possible to show that Ĉ(ω) is positive semi-definite for all frequencies ω. We define the matrix Π_{tt′} = E[x_t x_{t′}] = C(t − t′), for a scalar and stationary random process x, where E denotes expectation over its distribution. The matrix Π is positive semi-definite and circulant, and Fourier transforming along both time variables (t and t′) we obtain Π̂_{ωω′} = δ_{ωω′} Ĉ(ω). Since Π̂ is positive semi-definite, Ĉ(ω) must be non-negative. The extension to the multivariate case is straightforward, and shows that Ĉ(ω) is positive semi-definite. Using Parseval's theorem, we have Σ_s Tr[C(s)²] = Σ_ω Tr[Ĉ(ω)Ĉ(ω)]. Each element of the sum on the right-hand side must be non-negative, because it is the trace of a product of two Hermitian positive semi-definite matrices (see Theorem 4.3.53 in Ref. 80).
Using similar arguments, we find Therefore, substituting these expressions into Equation S2, we find that sequentiality is equal to which never exceeds one since all sums are positive. Symmetries Theorem 2. When its singular values are distinct, all singular vectors of the reshaped cross-covariance Γ in Equation 12 are either symmetric or anti-symmetric. Proof. Let K be the commutation matrix, of size N 2 × N 2 , operating in the vector space of neuron pairs, that replaces the (i + jN ) pair with the (j + iN ) pair. This corresponds to transposing the original cross covariance matrix, replacing ij with ji, but operating on the vector space of neuron pairs. We also define R as the reflection matrix that replaces the time lag s with the time lag −s. This corresponds to flipping the time lag vector. Using the identity C ij (s) = C ji (−s), it is straightforward to note that transposing space or transposing time are two equivalent operations, i.e. Therefore the reshaped cross-covariance matrix is invariant upon the simultaneous action of K and R. We denote the SVD of Γ by where S is a diagonal matrix containing the singular values of Γ, and the columns of matrix U (resp. V ) are the (orthogonal) left (resp. right) singular vectors of Γ. Then, Equations S7 and S8 imply that Since the permutation matrices K and R are orthogonal, the matrices in round brackets are also orthogonal. Note that the SVD of a matrix is unique, provided that singular values are distinct, and except for a simultaneous sign change of the same column of U and V . Therefore, we must have that where ± is a diagonal matrix composed of +1 and −1 elements only. Note that signs apply separately to different columns, but the sign of a given column of U and V matches. From the meaning of the K and R as spatial transpose and timereversal respectively, these equations imply that each pair singular vectors (left and right) are either both symmetric or both anti-symmetric in space (left) and time (right). This, in turn, means that each singular value can be unequivocally labelled as "symmetric" or "anti-symmetric", thus justifying the alternative expression of sequentiality given by Equation 13. Supplementary Math Note 2 Theory of time-reversible systems Summary of results We are interested in determining under which conditions the activity produced by the dynamical system in Equation 5 is time-reversible. Our main assumptions are: 1) that the system relaxes to a stationary state, and 2) that all v i (t) and ξ j (t) are jointly Gaussian. These assumptions are exactly satisfied only when f j [v] ∝ v (linear response functions), but they still hold approximately for weakly nonlinear transfer functions, especially if N is large. We show in Section 2.3 that the following synaptic matrix guarantees (is a sufficient condition for) time reversibility: where H -which we refer to as the "Hebb" part -is a matrix that must satisfy the conditions detailed in Section 2.3 and D -the "Dale" part (see below) -is some vector of presynaptic factors. A simple example of a "Hebb" factor H that guarantees time reversibility is the input covariance Another possible form is where k(s) is an arbitrary scalar kernel function and Σ out is the output covariance matrix, Note that C out depends on H itself, therefore Equation S14 gives an implicit formula that must be solved for H. The proportionality of synaptic strengths to the covariance of neural activity is consistent with empirical observations Cossell et al. (2015). 
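The "Hebb times Dale" construction summarized above is easy to check numerically in the linear case. The sketch below builds W = H·diag(D) with H taken to be a symmetric positive semi-definite surrogate for the input covariance and D a mixed-sign vector of presynaptic factors, and verifies that J H is symmetric for J = W − I (the Φ = I case), which, under the separability assumption, is the algebraic symmetry condition the following proofs associate with time reversibility. The matrix sizes and random draws are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
N = 8

# Symmetric, positive semi-definite "Hebb" factor (surrogate input covariance).
A = rng.standard_normal((N, N))
H = A @ A.T / N

# Mixed-sign presynaptic "Dale" factors. If H were elementwise non-negative,
# the sign of D_j would make column j purely excitatory or inhibitory; here H
# is only required to be symmetric PSD for the algebraic check.
D = np.where(np.arange(N) < 6, 1.0, -1.5)

W = H @ np.diag(D)          # Hebb-and-Dale connectivity, W_ij = H_ij * D_j
J = W - np.eye(N)           # linear case, Phi = I

# Symmetry check: J @ H equals its transpose when the reversibility condition holds.
print("||J H - (J H)^T|| =", np.linalg.norm(J @ H - (J @ H).T))   # ~1e-15

# A generic random W of comparable scale violates the condition.
W_rand = rng.standard_normal((N, N)) * np.abs(W).mean()
J_rand = W_rand - np.eye(N)
print("random control:   ", np.linalg.norm(J_rand @ H - (J_rand @ H).T))

The first norm vanishes to machine precision because J H = H diag(D) H − H is a difference of symmetric matrices, whereas the random control is generically far from symmetric.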
Furthermore, by interpreting k(s) as an STDP kernel, we show in Section 3.1 that this matrix represents a fixed point of STDP dynamics, which can be simulated in order to solve Equation S14. Finally, assuming that all elements of the Hebb component H are positive, then neuron j is excitatory (i.e. all its outgoing weights are positive) if D j ≥ 0; it is inhibitory otherwise. This justifies the interpretation of D as a "Dale" component. The derivation of Equations S12 to S14 is provided in Section 2.3 (see derivations leading up to Equations S47 and S54). Derivation of covariance functions In this section we derive formulas for the covariance functions, under the assumptions of stationarity and Gaussianity. For convenience of notation, we rewrite Equation 5 in matrix form as We assume that the system relaxes to a stationary state, and define the following cross-covariance matrices where angular brackets correspond to the operation of averaging over different realizations of the stochastic process ξ(t). Note that stationarity implies that C in (−s) T = C in (s), and C out (−s) T = C out (s). The following theorem provides an expression for these cross-covariance matrices. Theorem 3. Given the dynamical system described by Equation S16, if the system is stationary and the variables (v, ξ) are jointly Gaussian, then the cross-covariance matrices satisfy the following expressions: Here, the Fourier transform and inverse transform are defined bŷ Proof. We first derive the differential equations that govern the evolution of the covariance matrices. To do this, we shift temporally and multiply Equation S16 by, respectively, the transposed input and output deviations In order to average these equations, we use the assumption that {ξ(t), v(s)} are jointly distributed according to a multivariate Gaussian, along with Bussgang's theorem, to obtain 1 where the diagonal matrix Φ is given in the theorem statement. Note that Φ = I when the transfer function is linear. Defining the matrix J = W Φ − 1, and averaging Equations S26 and S27, we obtain the following differential equations for the covariance matrices: The theorem follows by applying the Fourier transform to Equations S30 and S31. can be eliminated from Equations S20 and S21, to obtain This equation is nonlinear, since the matrix Φ, and therefore J, depends on the diagonal of C out (0). However, J does not depend on ω; thus, we can alternate between (i) solving a linear equation forĈ out given fixed J, and (ii) fixingĈ out and using Equation S22 to obtain J. This will yield a self-consistent solution. Conditions for time reversibility In this section, we turn to the problem of finding necessary and sufficient conditions for time reversibility. Recall that we assumed that the input covariance is symmetric in time (cf. Equation 6 and below): By definition, the network output is said to be time-reversible if the same type of equality holds for network activity as well: Therefore, the goal is to find conditions under which Equation S33 implies Equation S34. We stress that time reversibility implies that the joint probability of two states, at two different time points, is symmetric. Because of the Gaussianity assumption, this reduces to time reversibility of the covariance, Equation S34, since higher order statistics do not contribute (and the mean is constant due to stationarity). We prove the following. Theorem 4. Given time-reversible input, the output is time reversible if and only if any one of the following equalities holds: Proof. 
Due to the stationarity assumption, symmetry in time implies symmetry in space, since as noted above we must in Fourier space. Therefore, the input covariance satisfies the following equalities: Similarly, the output is time reversible if and only if either of the following equalities holds: We focus on provingĈ out (ω) T =Ĉ out (ω). For brevity, we omit the dependence on ω, and we denote byĈ out sym = C out +Ĉ out T the symmetric part of the covariance, and byĈ out asy =Ĉ out −Ĉ out T its anti-symmetric part. Using Equations S32 and S33, we have that Since this equation is linear in the covariance, the anti-symmetric part is zero if, and only if, the term in round brackets is zero. Therefore, a necessary and sufficient condition for time reversibility is Furthermore, it is straightforward to show that if J satisfies this equation, then all its powers satisfy the same equation, and therefore so does any analytic function of J. The converse is also true, since the inverse of any analytic function is also analytic. In particular, iωτ − J is analytic in J, and so is its inverse. Thus, from Equations S20 and S21, we have that Therefore, any one of the three Equations S43 to S45 implies, and is implied by, any one of the other two. The theorem follows by applying the inverse Fourier transform to Equations S43 to S45. Consequence of Theorem 4 for "Hebb and Dale" conditions. Each one of Equations S35 to S37, on its own, is necessary and sufficient for time-reversibility of the output. Since the input covariance is given, while the other covariances must be determined, it is easiest to test for reversibility by checking whether Equation S37 holds. Using the separability of the input covariance (Equation 6), Equation S37 is rewritten as Using J = W Φ − I, and noting that Φ is diagonal, it is possible to show that satisfies Equation S37 for any arbitrary diagonal matrix D. This corresponds to Equation S13 in Section 2.1. However, other forms are also possible, including Equation S14, and we show in the following theorem how to derive them. We also note that, in the special case of a linear transfer function (Φ = I), Equation S46 can be rewritten as Since sequentiality is zero when this equation holds, we use the norm of the left hand side of this expression as an approximation for asym in Equation 4 of the main text for N = 2 and for symmetric W and Σ. We verified that this approximation was accurate over a wide range of parameters in Figure 2C (Star Methods, "Parameter values"). Theorem 5. We assume that the matrix J has distinct eigenvalues and the matrix Σ is non-singular. If a matrix J exists satisfying the equation where the columns of V are the eigenvectors of J, and E is a diagonal matrix. (Note that V needs not be an orthogonal basis, i.e. V T V = I in general). If we further assume that Σ is positive definite, then J must be equal to where L is a symmetric matrix. Proof. To verify this statement, note that Equation S49 implies J = ΣJ T Σ −1 . By writing the spectral decomposition of J = V ∆V −1 , where V and ∆ are, respectively, the eigenvectors and eigenvalues of J, we have that ∆ = . Moreover, Sylvester's law of inertial states that V EV T have the same number of positive and negative eigenvalues as E does. Therefore, if we further assume that Σ is positive definite, then E must have positive elements. 
Under this assumption, we consider the unique singular value decomposition of V E 1/2 = U Λ 1/2 U , where Λ is the diagonal matrix of singular values and U , U are the orthogonal matrices of left and right singular vectors, respectively. It is straightforward to show that U and Λ are, respectively, the eigenvectors and eigenvalues of Σ = U ΛU T . Finally, we rewrite J by substituting the expression of its eigenvectors, V = U Λ 1/2 U E −1/2 , in its spectral decomposition, J = V ∆V −1 , and find where the last equality defines L. If J is unknown, while Σ is given, then U and Λ are given, while U and ∆ are arbitrary. Therefore, U ∆U T is an arbitrary symmetric matrix, and so is L. In particular, Equation S50 allows us to conclude that, if a matrix J exists for which the system is time reversible, and Equation S35 holds, then we must have that C out (s) = V E out (s)V T , where V are the eigenvectors of J and E out (s) is some diagonal matrix. Furthermore, since Equation S35 holds for all values of s, then any linear combination of different values of s can be taken. By taking an integral over different s values, weighted by a kernel k(s), we have that J ds k(s)C out (s) = ds k(s)C out (s) J T . Provided that the integral is positive definite, Theorem 5 implies that the synaptic matrix J must be equal to for some arbitrary scalar kernel k(s) and symmetric matrix L. Conversely, we can set J by choosing any arbitrary symmetric matrix L in this expression, and this would still guarantee time reversibility. It is straightforward to show that L can be chosen in a way that where D is an arbitrary diagonal matrix. This corresponds to Equation S14 in Section 2.1. We will show in Section 3.1 that this expression can be interpreted as a fixed point of spike timing-dependent plasticity (STDP). As a final remark, we stress that no J may exist for which time reversibility holds (for example, for more complex types of input), in which case Equation S54 does not guarantee time reversibility. Linear examples In this section we consider the simple case in which the neural dynamics are linear, thus Φ = I and J = W − I (see Section 2.2). In this case, the cross-covariances can be calculated analytically when the input has simple statistics. First, we consider the case of white noise input, i.e. an input covariance of the form C in (s) = Σ in δ(s). We denote by C out (s) the output cross-covariance and by Σ out = C out (0) the output covariance at zero time lag. The output covariance can be calculated from Equation S32, and is equal to The covariance at zero time lag Σ out is equal to Computing this integral is complicated, but Σ out can be calculated instead by solving the Lyapunov equation We calculate the sequentiality using the definition, Equation 10, that we rewrite here substituting sums with integrals, since time is continuous in this case We substitute the expression for C out , given by Equation S55, in the integrals and find that sequentiality can be expressed as seq 2 = Tr (I 1 ) − Tr I 2 Σ out Tr (I 1 ) + Tr (I 2 Σ out ) Where the integrals I 1 and I 2 are equal to These integrals can be calculated as the solution of the following equations These can be solved by standard methods for Lyapunov equations. Another interesting case, which we use in all simulations, is that of colored noise input, which we model as a multivariate Ornstein-Uhlenbeck (OU) process with a covariance that is separable in space and time. 
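Before moving on to the coloured-noise case, the white-noise computation above can be checked numerically. Writing the linear dynamics as τ dv/dt = J v + ξ with ⟨ξ(t)ξ(t′)ᵀ⟩ = Σ_in δ(t − t′), the stationary covariance obeys J Σ_out + Σ_out Jᵀ = −Σ_in/τ, and the same matrix can be obtained by integrating the spectral expression over frequency. The sketch below compares the two routes with SciPy; the τ scaling follows our reading of Equation S16 and should be adapted if a different convention is used.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov   # solves a X + X a^T = q

rng = np.random.default_rng(0)
N, tau = 6, 1.0

W = 0.3 * rng.standard_normal((N, N)) / np.sqrt(N)   # weak connectivity, so J is stable
J = W - np.eye(N)

B = rng.standard_normal((N, N))
Sigma_in = B @ B.T / N + 0.1 * np.eye(N)              # white-noise input covariance

# Route 1: Lyapunov equation  J S + S J^T = -Sigma_in / tau.
Sigma_out = solve_continuous_lyapunov(J, -Sigma_in / tau)

# Route 2: frequency integral  S = (1/2pi) Int dw (iw tau - J)^-1 Sigma_in (-iw tau - J^T)^-1.
d_omega = 0.02
omegas = np.arange(-200.0, 200.0, d_omega)
I = np.eye(N)
S_int = np.zeros((N, N), dtype=complex)
for w in omegas:
    G = np.linalg.inv(1j * w * tau * I - J)
    S_int += G @ Sigma_in @ G.conj().T
S_int = (S_int * d_omega / (2 * np.pi)).real

print(np.max(np.abs(Sigma_out - S_int)))   # small, limited by the truncated frequency grid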
The derivations performed above can be generalized to handle such coloured inputs by defining auxiliary variables that independently integrate their own white noise inputs and feed their outputs (which now have colored OU structure) to the network. Thus, the first block of N variables correspond to the neural network, interacting through the synaptic matrix W , and integrating the input from the auxiliary variables with a time constant τ . The second block of N variables correspond to the auxiliary variables, which integrate white noise of covariance B, with a time constant τ in , and they interact through a mixing matrix Z. The (colored) auxiliary variables are then fed into the neural variables. The mixing matrix Z is set to zero in all simulations implemented here. Under these block ordering conventions, the extended system is described by the following Jacobian: The extended input covariance is equal to Σ in e δ(s), with showing that only the second block (auxiliary variables) receives the white noise input. Analogously, the output covariance of the extended system is composed of four blocks: Note that since the output of the second block is interpreted as the input to the neural network, we have denoted it by C in (s). Similarly, the cross-covariance of the two blocks has the same interpretation as the input-output covariance as above, and is therefore denoted by C io (s). Finally, the top-left block is the covariance of the network output, naturally denoted by C out (s). Since the extended system is driven by white noise, the extended output covariance can be calculated analytically, as in Equation S55, and is equal to By summing over the corresponding blocks, the first term of the Lyapunov equation J e C e (0) is rewritten as where Σ out , Σ in , Σ io are the respective cross-covariances at zero time lag. The following equations follow, one for each block of the Lyapunov equation: These equations can be solved numerically in order to find Σ in , Σ io , Σ out . The white noise limit corresponds to Z = 0 and τ in → 0, for which we have Σ io = B/2τ . In this limit, we can recover the Lyapunov Equation S57. Sequentiality in the colored noise case can be calculated following similar methods as in the white noise case. We start again from Eq.S58, and we note that we can express C out as where C e (s) is given by Equation S66 and P is a N × 2N matrix equal with two blocks, (I0), the N × N identity matrix and the N × N matrix with all elements equal to zero. Then, substituting the expression for C out in the integrals, we find that sequentiality can be expressed as seq 2 = Tr P I 1 P T − Tr P I 2 C e (0)P T Tr (P I 1 P T ) + Tr (P I 2 C e (0)P T ) , where the integrals I 1 and I 2 are equal to These integrals can be calculated as the solution of the following equations These can be solved by standard methods for Lyapunov equations. Mean field dynamics We now consider the case in which the dynamics of the synaptic matrix follow spike-timing-dependent plasticity (STDP), and we show that a fixed point of this dynamics corresponds to a time reversible state (Equation S54). We use a variant of STDP that depends on presynaptic spikes and the subthreshold postsynaptic membrane potential (rather than postsynaptic spikes; Clopath et al., 2010, see also Figure 4A). According to this rule, (additive) changes to the synaptic weight between presynaptic neuron j and postsynaptic neuron i occur due to two kinds of contributions. 
First, at the time of its occurrence, each presynaptic spike causes LTD (a decrease in the synaptic weight) that is proportional to the low-pass (causally) filtered version of the postsynaptic membrane potential at that time. Second, each presynaptic spike also gives rise to an (exponentially decaying) eligibility trace, which continually causes LTP (an increase in the synaptic weight) that is proportional to the momentary product of the eligibility trace and the (unfiltered) postsynaptic membrane potential. Thus, the change in the synaptic weight between the two cells can be described by the following equation: where τ s is the time constant of synaptic changes, δv i (·) is the (unfiltered) membrane potential of the postsynaptic cell (relative to its long-run time-average), Y j (·) is the spike train of the presynaptic cell (represented as a sum of Diracdelta functions), and k(t) for t ≥ 0 describes the eligibility trace corresponding to a single presynaptic spike, used for determining the magnitude of LTP (described by the first integral), while for t < 0 it is the (sign-and time-inverted) kernel with which the postsynaptic membrane potential is filtered, used for determining the magnitude of LTD (described by the second integral). An example kernel often used in the literature is Therefore, if the activity of postsynaptic neuron i is high within a time interval of about τ + following a presynaptic spike in neuron j, then the synaptic weight increases by an amount proportional to a + . Conversely, if the activity of postsynaptic neuron i is high within a time interval of about τ − preceding a presynaptic spike in neuron j, then the synaptic weight decreases by an amount proportional to a − . While STDP is naturally defined for spiking neurons, e.g. as given by Equation S79, following Kempter et al. (1999); Dayan and Abbott (2001), we use a mean-field approach and describe the average effects of STDP assuming that presynaptic spiking is described by an inhomogeneous Poisson process. Thus, we first average Equation S79 over the distribution of possible spike trains given the underlying presynaptic firing rate time course, and describe STDP dynamics as Second, we also average Equation S81 over the realizations of firing rates and membrane potentials, given their stationary distributions (and in particular, cross-covariances; Equation S29). As a result, STDP dynamics reduces to, in matrix form, where we used Bussgang theorem and Φ is the diagonal matrix of average gains (see Section 2.2 and Equation S23). Instead of Equation S82 , we consider the following, more general form where D is an arbitrary diagonal matrix that effectively gives each presynaptic neuron its own learning rate. The fixed points of Equation S83 are given by Thus, the diagonal matrix D plays the role of the "Dale" factor in Equation S54, and indeed we choose it to have a mix of positive and negative elements. Note that a negative element in D for an inhibitory synapse means that the learning rate is |D| but the effective STDP kernel has been sign-inverted relative to that of excitatory synapses. Also note that the covariance C out (s) depends on W through Equation S32. In order to search for fixed points, we can simulate Equation S83 numerically. Nevertheless, note that Equation S84 itself does not guarantee Hebb-and-Dale connectivity, which would require that weights only depend on the zero-time lag correlations, C out (0), while here C out (s) is integrated over time lags s. 
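The kernel described verbally above (potentiation of size a+ within a window of about τ+ after a presynaptic spike, depression of size a− within about τ− before it) is commonly written as a pair of exponentials, and the fixed-point condition involves the kernel-weighted time integral of the output cross-covariance. The sketch below implements that double-exponential kernel and a discrete approximation of H = ∫ k(s) C_out(s) ds, from which a Hebb-and-Dale candidate weight matrix W = H·diag(D) can be formed. The exact prefactors of Equations S80, S83 and S84 were lost in transcription, so the kernel shape, parameter values and normalization here are our own assumptions.

import numpy as np

def stdp_kernel(s, a_plus=1.2, a_minus=0.8, tau_plus=0.02, tau_minus=0.02):
    """Double-exponential STDP kernel: potentiation for s >= 0, depression for s < 0.
    Parameter values are illustrative, not taken from the paper."""
    s = np.asarray(s, dtype=float)
    return np.where(s >= 0,
                    a_plus * np.exp(-s / tau_plus),
                    -a_minus * np.exp(s / tau_minus))

def hebb_factor(C_out, lags, **kernel_kwargs):
    """Discrete approximation of H = Int k(s) C_out(s) ds.

    C_out: array (S, N, N) of output cross-covariances at the time lags `lags`.
    """
    k = stdp_kernel(lags, **kernel_kwargs)
    ds = lags[1] - lags[0]
    return np.tensordot(k, C_out, axes=(0, 0)) * ds     # sum_s k(s) C_out(s) ds

if __name__ == "__main__":
    # Toy time-symmetric covariance: C_out(s) = exp(-|s|/0.05) * A with A symmetric.
    rng = np.random.default_rng(0)
    N = 4
    A = rng.standard_normal((N, N)); A = A @ A.T / N
    lags = np.arange(-0.2, 0.2001, 0.001)
    C_out = np.exp(-np.abs(lags) / 0.05)[:, None, None] * A
    H = hebb_factor(C_out, lags)
    D = np.array([1.0, 1.0, 1.0, -1.5])          # mixed-sign presynaptic factors
    W = H @ np.diag(D)                            # candidate fixed-point weights
    print(np.allclose(H, H.T))                    # True for this time-symmetric toy C_out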
Equation S84 corresponds to the condition of time reversibility, Equation S54, which holds for any kernel function k(s), provided that the integral is positive definite. However, stability of the fixed point may depend on the specific choice of k(s). In the following sections we show two cases in which fixed points are described by simple algebraic equations. Example 1: linear dynamics and white input Using the STDP kernel of Equation S80 and the covariance in Equation S55, the integral in Equation S83 can be calculated analytically. Then, synaptic dynamics is equal to At the time reversible fixed point, Equation S49 holds, which also implies that W Σ = ΣW T . Substituting this expression in Equation S57, we find that the output covariance has the following expression Σ out = (I − W ) −1 Σ in /2τ . We further assume that the timescale of potentiation and depression are equal, τ + = τ − = τ k ; Then, using Equation S49 in Equation S85 and setting the time derivative to zero, we find that at the time reversible fixed point the synaptic matrix satisfies This equation can be solved numerically by noting that the eigenvectors of W are equal to the eigenvectors of Σ in D, and by solving a third order equation for the eigenvalues of W . Example 2: linear dynamics and colored input Using the STDP kernel of Equation S80 and the covariance in Equation S67, the integral in Equation S83 can be calculated analytically and is equal to Note that only the first block of this matrix needs to be computed in order to simulate synaptic dynamics. The matrix in round brackets can be expressed in block form as The matrix block inverse is equal to Therefore, the first block of the integral in Equation S87 is equal to This expression, along with Equations S70 to S72 and S83, can be used to simulate synaptic plasticity under colored noise input. Note that this expression reduces to the white noise case for τ in → 0. When time reversibility holds, separately for both the auxiliary variables and the neural network, the covariances can be calculated explicitly, and we have that Furthermore, I assume that τ + = τ − = τ ± and I define k 0 = a + − a − . Under time reversibility, the integral is equal to Figure 1H). Neuron pairs are ordered according to the location of the peak of their cross-covariances. Bottom: first four temporal components (red: symmetric; green: anti-symmetric; as in Figure 1I). (C) Sequentiality spectrum (as in Figure 1G): covariance associated with each temporal component of the synfire chain, in decreasing rank order (colors as in the middle panel). (D) Our measure of sequentiality is not systematically related to oscillatoriness in neural activity. We consider two neurons whose marginal activity statistics are either smooth but non-oscillatory (top row) or oscillatory (bottom row). The degree of temporal coupling between the two neurons can be controlled independently of these marginals, leading either to zero (left column) or maximal ( Figure 2E); random matrix (red) -each entry is drawn independently from a Gaussian distribution of zero mean ( Figure S2A; matrix is scaled to have the same spectral abscissa as of the Hebb and Dale matrix); random eigenvectors (orange) -all eigenvalues are identical to those of the Hebb and Dale matrix, but each entry of every eigenvector is drawn independently from a Gaussian distribution of zero mean (scale does not matter). For each connectivity type, we simulated 100 networks at each level of input sequentiality, and show mean±1 s.e.m. across these networks. 
(A) Output vs. input sequentiality. Sequentiality is computed as explained in the main text. When the input sequentiality is zero, Hebb and Dale connectivity is the only one resulting in zero output sequentiality. When input sequentiality increases, output sequentiality increases for all three types of networks. (B) Input-output sequence correlation vs. input sequentiality. Input-output sequence correlation is computed by taking the leading skew-symmetric spatial mode of the input, interpreted as the main sequential input pattern, and finding the highest correlation among the leading 10 output patterns. The Hebb and Dale network achieves significantly higher correlation between the input and output sequential patterns than the other networks. (C): Sequentiality (left y-axis) and network timescale (right y-axis) vs. spectral abscissa (i.e. maximum real eigenvalue) of the synaptic weight matrix, for different types of networks. In addition to those shown in panels (A) and (B), we show results for a random symmetric matrix (same as "random matrix" in A-B, but with reciprocal connection strengths set to be equal). In the "random matrix" case, we distinguish sampled weight matrices based on whether their leading eigenvalue (that with maximum real part) is real or complex. All matrices are rescaled by an appropriate factor to obtain the desired value of the spectral abscissa (x-axis). The light blue line shows the corresponding timescale of network integration (i.e. τ / [1 − spectral abscissa], identical for all networks). As we show (and as is well-known; Dayan and Abbott, 2001), across a variety of networks, timescales (and overall fluctuation magnitude, not shown) tend to explode when the connectivity is scaled up towards criticality (spectral abscissa = 1). Yet, we find that this does not affect sequentiality in Hebb and Dale networks (dark blue), as it is provably uniformly zero (Supplementary Math Note 2). However, for more general weight matrices, sequentiality tends to exhibit a maximum at some intermediate connectivity strength (other colors). This can be understood by noting that (i) for weak connectivity, the network does not substantially modify the spatiotemporal structure of the input, which we defined to have seq = 0 in this case, and (ii) as the network approaches criticality, activity becomes dominated by the dominant eigenmode and thus becomes effectively one-dimensional (Ganguli et al., 2008) (except for the case when the leading eigenvalue is complex, yellow) -therefore, by definition, it can only be non-sequential. Figure S5: Temporal changes in sequentiality for natural stimuli. Same as Figure 5G, but for natural stimuli. Sequentiality of neural responses during the late (y-axis) vs. the early half of exposure (x-axis) to the natural stimuli across animals (dots). Sequentiality is not systematically lower later than earlier, and this does not seem to be due to a ceiling effect (in which case dots with a lower x-or y-coordinate would be closer to the diagonal).
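Sequentiality, the quantity plotted in the panels described above, can be computed directly from the trace identities established in the proof at the beginning of this note. The sketch below assumes the matrix form implied by Equation S2, seq² = (Σ_s Tr[C(s)C(s)ᵀ] − Σ_s Tr[C(s)²]) / (Σ_s Tr[C(s)C(s)ᵀ] + Σ_s Tr[C(s)²]); the array layout and wrap-around handling of the time lags are our own conventions.

import numpy as np

def sequentiality(C):
    """Sequentiality from a stack of cross-covariance matrices.

    C: array of shape (S, N, N); C[s] is the N x N cross-covariance at the s-th
       time lag, with lags assumed to cover the full (wrapped) range so that
       C(-s) = C(s).T is represented in the stack.
    Returns a value in [0, 1], as guaranteed by the proof above.
    """
    C = np.asarray(C, dtype=float)
    sym_part = np.einsum('sij,sij->', C, C)     # sum_s Tr[C(s) C(s)^T]
    asym_term = np.einsum('sij,sji->', C, C)    # sum_s Tr[C(s)^2]
    seq2 = (sym_part - asym_term) / (sym_part + asym_term)
    return float(np.sqrt(max(seq2, 0.0)))

# Quick self-check on a purely time-symmetric toy covariance: seq should be ~0.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3)); A = A + A.T            # symmetric spatial mode
    lags = np.arange(-5, 6)
    C = np.array([np.exp(-abs(s)) * A for s in lags])        # satisfies C(-s) = C(s)^T
    print(sequentiality(C))   # expected: 0 up to numerical precision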
7,344.4
2022-03-01T00:00:00.000
[ "Biology" ]
The Involvement of SMILE/TMTC3 in Endoplasmic Reticulum Stress Response

Background

The state of operational tolerance has been detected sporadically in some renal transplant patients who stopped immunosuppressive drugs, demonstrating that allograft tolerance might exist in humans. Several years ago, a study by Brouard et al. identified a molecular signature of several genes that were significantly differentially expressed in the blood of such patients compared with patients in other clinical situations. The aim of the present study is to analyze the role of one of these molecules over-expressed in the blood of operationally tolerant patients, SMILE or TMTC3, a protein whose function is still unknown.

Methodology/Principal Findings

We first confirmed that SMILE mRNA is differentially expressed in the blood of operationally tolerant patients with drug-free long-term graft function compared to stable and rejecting patients. Using a yeast two-hybrid approach and a colocalization study by confocal microscopy, we furthermore report an interaction of SMILE with PDIA3, a molecule resident in the endoplasmic reticulum (ER). In accordance with this observation, SMILE silencing in HeLa cells correlated with the modulation of several transcripts involved in proteolysis and a decrease in proteasome activity. Finally, SMILE silencing increased HeLa cell sensitivity to the proteasome inhibitor Bortezomib, a drug that induces ER stress via protein overload, and increased transcript expression of a stress response protein, XBP-1, in HeLa cells and keratinocytes.

Conclusion/Significance

In this study we showed that SMILE is involved in the endoplasmic reticulum stress response by modulating proteasome activity and XBP-1 transcript expression. This function of SMILE may influence immune cell behavior in the context of transplantation, and the analysis of endoplasmic reticulum stress in transplantation may reveal new pathways of regulation in long-term graft acceptance, thereby increasing our understanding of tolerance.

Introduction

The routine monitoring of renal allograft survival in humans depends on functional clinical parameters such as blood creatinine clearance, proteinuria level, the presence of circulating anti-HLA and donor-specific antibodies and the scoring of intra-graft lesions in graft biopsies. Standard immunosuppressive drugs are nonspecific, increase opportunistic infections and malignancies and can be nephrotoxic [1]. Immune tolerance, which has been achieved in several experimental models [2], might provide a means of avoiding such inherent problems, since immunosuppressive treatment could be reduced or completely withdrawn in tolerant patients. Although this phenomenon (induced or "spontaneous") is rare in renal transplantation in primates and humans, several studies have shown its clinical feasibility [3,4,5]. Identifying and understanding the biological features characterizing operational tolerance may unveil molecular mechanisms allowing such patients to tolerate their graft without immunosuppressive treatment. We previously identified 49 genes differentially expressed in the blood of operationally tolerant patients compared to stable patients under classical immunosuppressive therapy, patients with chronic antibody-mediated rejection and healthy volunteers [6]. These genes were shown to be able to correctly classify most of the patients according to their clinical status.
Among these genes, we focused on SMILE, also called TMTC3 (transmembrane and tetratricopeptide repeat containing 3 protein), because it was one of the 13 genes over-expressed in the blood of operationally tolerant patients and because its function was still unknown. SMILE is a 7203 bp mRNA (NM_181783) encoding a 914 amino acid transmembrane protein (NP_861448). The protein has the particularity of containing 10 tetratricopeptide repeats (TPRs, according to the UniProtKB website, http://www.uniprot.org/uniprot/Q6ZXV5), a pattern ubiquitously conserved through evolution and across species. TPR-containing proteins are involved in several cellular functions such as molecular chaperone complexes, anaphase promoting complexes, transcription repression complexes, protein import complexes and protein folding [7]. They are found in a variety of different organisms and in various sub-cellular locations such as the cytosol, nucleus, mitochondria and peroxisomes [7]. The involvement of these motifs and the importance of their interactions for molecular and cellular functions have thus been shown in a number of different biological systems [7]. The aim of our study was to analyse the cellular and molecular function of SMILE/TMTC3 in vitro and the global pathways in which it is involved. In this study we report that SMILE interacts with PDIA3, a molecule involved in protein folding, and is involved in the response to endoplasmic reticulum (ER) stress, which may play a role in immune regulation.

SMILE transcripts are differentially expressed in PBMCs from operationally tolerant kidney transplant patients compared to stable patients and patients with chronic antibody-mediated rejection

In order to confirm the previous microarray finding of differential SMILE mRNA expression in the blood of operationally tolerant patients compared to stable and chronic rejection patients [6], SMILE mRNA levels were analyzed in the PBMCs of healthy volunteers (HV, n = 11), operationally tolerant patients (TOL, n = 8), and patients under standard immunosuppressive therapy with either stable graft function (STA, n = 9) or deteriorating graft function with biopsy-proven chronic antibody-mediated rejection (CAMR, n = 14). As shown in Figure 1A, SMILE mRNA was significantly differentially expressed in the PBMCs of TOL patients compared with STA (**p < 0.01) and CAMR patients (*p < 0.05) (Kruskal-Wallis test, p = 0.0205). The difference in transcript expression in the PBMCs of operationally tolerant patients was also confirmed against a larger cohort of patients with chronic rejection (19 patients) and a larger cohort of stable patients (164 patients) (Figure S1). The capacity of SMILE transcripts to distinguish between operationally tolerant patients and stable patients (Figure 1B) was studied by receiver operating characteristic (ROC) curve analysis. This analysis revealed a very good discriminative power for SMILE to distinguish TOL patients from STA patients, with an optimal threshold of 1.23 (area under the curve [AUC] = 0.98; 95% confidence interval 0.95 to 1; sensitivity of 1 and specificity of 0.93). A ROC curve analysis also determined that the capacity of SMILE transcripts to distinguish between operationally tolerant patients and patients with chronic antibody-mediated rejection was very good, with an optimal threshold of 1.86 (area under the curve [AUC] = 0.83; 95% confidence interval 0.66 to 0.96; sensitivity of 0.77 and specificity of 0.75) (Figure S2).
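The ROC statistics quoted above (AUC, optimal threshold, sensitivity and specificity) can be reproduced for any candidate marker with a few lines of scikit-learn. The sketch below uses simulated expression values purely as placeholders for the TOL and STA measurements and picks the threshold by the Youden index; it illustrates the type of analysis, not the authors' code or data.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Placeholder expression values (arbitrary units); real data would replace these.
expr_tol = rng.normal(2.0, 0.5, size=8)    # operationally tolerant patients
expr_sta = rng.normal(1.0, 0.4, size=9)    # stable patients under immunosuppression

y_true = np.r_[np.ones_like(expr_tol), np.zeros_like(expr_sta)]
scores = np.r_[expr_tol, expr_sta]

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)

# Youden's J statistic picks the threshold maximizing sensitivity + specificity - 1.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.2f}")
print(f"optimal threshold = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")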
Furthermore, in a homogeneous cohort of 164 stable patients with a well characterized clinical status: stable renal function (STA) for more than five years under standard immunosuppressive therapy (thirty percent of these stable patients under Prograf and seventy percent under Cyclosporin A treatment), we showed that the level of SMILE mRNA was independent of quantitative variables, including time post-transplantation, creatinine clear- ance, proteinuria, HLA incompatibilities and recipient and donor age ( Figure S3). Similarly, SMILE mRNA levels were also shown to be independent of qualitative variables (described as frequencies) such as recipient and donor gender, presence of anti-HLA antibodies or types of immunosuppressive treatment ( Figure S4). Together, these results suggest that SMILE may be a good biomarker of transplant status. SMILE is involved in protein metabolism SMILE was identified as a high confidence prey (Predicted Biological Score A [8]) in a yeast two hybrid screen with Protein Disulfide Isomerase family A member 3 (PDIA3 or GRP58) as bait, performed on a random-primed human brown adipocyte cDNA library ( Figure S5). PDIA3 is involved in the folding of glycoproteins by disulfide bond formation in the ER and is overexpressed in ER stress [9]. Double-staining of SMILE and PDIA3 in odontoblast cultures ( Figure 2C and D) also showed that SMILE and PDIA3 colocalized in the endoplasmic reticulum, confirming that these two molecules can interact in the ER. To determine the role of SMILE in the cell, we studied SMILE transcript modulation in the HeLa cell line. SMILE mRNA expression was checked by RT-PCR and decreased by almost 84% in resting HeLa cells transfected with SMILE siRNA as compared to cells transfected with the Stealth RNAi negative control Low GC ( Figure S6, ***p = 0.0002, Mann-Whitney test, mean replicate values of three independent experiments). High throughput microarray analysis was performed on resting HeLa cells transfected with SMILE or negative control siRNA in order to identify differentially expressed genes and to define cellular functions affected by SMILE silencing. Signals were studied with a SAM analysis (FDR = 0.0011, number of permutations: 5000). Overall, 549 and 532 genes were significantly up-and downregulated respectively in cells transfected with SMILE siRNA as compared to cells transfected with negative control siRNA. Each list of up-regulated and down-regulated genes was analyzed using the GOminer website (http://discover.nci.nih.gov/gominer/) to define enrichment in several key biological functions. In this approach a function was defined by a GO number. One gene can have several GO numbers meaning that it can be involved in several mechanisms. We defined a set of 24 enriched functions for the list of down-regulated genes ( Table 1). This classification was performed based on GO categories with enrichment p-values,0.05, and categories with at least 10 differentially expressed genes among the total genes involved in the function were selected. Among the down-regulated gene functions of SMILE siRNAtransfected cells, those concerning protein metabolic processes (GO:0019538 line 13 Table 1, GO:0044260 line 9 Table 1 and GO:0044267 line 16 Table 1) were particularly represented, such as catabolic processes (GO:0009056 line 24 Table 1), proteolysis (GO:0006508 line 5 Table 1), biopolymer and protein catabolic processes (respectively GO:0043285 line 12 Table 1 and GO:0030163 line 10 Table 1). 
Interestingly, among the downregulated transcripts involved in proteolysis, PSMB1 (b1 proteasome subunit, line 15 in Table 2), PSMB9 (b1i proteasome subunit, line 17 in Table 2) and PSMB10 (b2i proteasome subunit, line 10 in Table 2), were found to be significantly down-regulated after SMILE silencing. Because SMILE transcript down-regulation decreases transcripts involved in protein degradation, we tested whether SMILE was involved in proteolysis. We measured the chymotrypsin-like activity of the proteasome in both SMILE siRNA and control siRNA-transfected HeLa cells. SMILE siRNA-transfected HeLa cells displayed a significantly decreased chymotrypsin-like activity compared to control siRNA-transfected cells (Figure 3, *p = 0.0313, Wilcoxon signed rank test). The findings of SMILE interaction with PDIA3 in the endoplasmic reticulum, together with SMILE modulation of transcripts involved in protein catabolism and chymotrypsin-like activity of the proteasome, suggest that SMILE may play a role in the control of proteolysis via proteasome activity in the endoplasmic reticulum. SMILE silencing does not affect cell growth but sensitizes HeLa cells to ER stress To more precisely study the effects of SMILE siRNA on cell morphology, we performed electronic microscopy (EM) analysis in SMILE siRNA and control siRNA-transfected cells. At an ultra structural level, resting control siRNA-transfected cells displayed a well-conserved overall architecture and organization. In contrast, SMILE down-regulation induced ER hypertrophy associated with a reduction of free ribosomes as compared to control cells ( Fig. 4A and B), suggesting that down-regulation of SMILE affects ER function. Improperly folded protein degradation is a main actor of ER stress via accumulation in the ER lumen. We thus hypothesized that down-regulation of SMILE would sensitize cells to the effect of Bortezomib (a 26S proteasome inhibitor inducing ER stress). To address this question, we performed EM analysis in SMILE siRNA and control siRNA transfected HeLa cells treated with Bortezomib (20 nM for 24 h). As expected, Bortezomib treatment induced ER hypertrophy in control cells ( Figure 4C). SMILE siRNA-transfected cells displayed an increased sensitivity to Bortezomib with dramatic ER enlargement and vacuolization and features of cellular disorganization and injury ( Figure 4D). These results suggest that SMILE down-regulation sensitizes cells to ER stress. The down-regulation of SMILE/TMTC3 increases ER stress and impairs long-term cell survival To further determine if SMILE siRNA-mediated downregulation sensitizes HeLa cells to ER stress and if this is mediated by proteasome activity, we monitored the effects of different drugs inducing various stresses on HeLa cells after SMILE silencing in long-term cultures (7 days). Besides Bortezomib, we used Thapsigargin, a blocker of sarco/endoplasmic reticulum Ca 2+ / ATPase, which induces proteasome-independent ER toxicity. Moreover, Etoposide, an inhibitor of topoisomerase II, that induces cytotoxicity in an ER-independent manner, was also used as a negative control. We compared the effects of a seven-day, dose-response treatment with these drugs in HeLa cells transfected with either SMILE siRNA or control siRNA in clonogenic survival assays. As illustrated in Figure 5A, without any treatment, HeLa cells transfected with SMILE siRNA displayed a decreased number of cell clusters compared to cells transfected with control siRNA (**p = 0.0045, Mann-Whitney test). 
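The group comparisons reported here rely on nonparametric tests: the Wilcoxon signed-rank test for paired measurements and the Mann-Whitney U test for independent groups. The sketch below shows how such tests can be run with SciPy; the activity readings and cluster counts are invented for illustration and are not the study's data.

from scipy.stats import mannwhitneyu, wilcoxon

# Hypothetical chymotrypsin-like activity readings (relative luminescence units)
# from paired control-siRNA vs SMILE-siRNA transfections in the same experiments.
control_sirna = [1.00, 0.97, 1.05, 1.02, 0.99, 1.03]
smile_sirna = [0.78, 0.81, 0.74, 0.85, 0.80, 0.77]

# Paired samples from the same experiment -> Wilcoxon signed-rank test.
stat_w, p_w = wilcoxon(control_sirna, smile_sirna)

# Independent groups (e.g. cluster counts from separate wells) -> Mann-Whitney U test.
clusters_control = [120, 131, 118, 140, 125]
clusters_smile = [88, 95, 101, 90, 97]
stat_u, p_u = mannwhitneyu(clusters_control, clusters_smile, alternative="two-sided")

print(f"Wilcoxon p = {p_w:.4f}, Mann-Whitney p = {p_u:.4f}")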
Bortezomib, Thapsigargin and Etoposide induced a dose-dependent decrease in the cluster numbers in both cells transfected with control or SMILE siRNA, showing that these drugs are effective (Significance of p = 0.0001 for the dose-effects of Bortezomib, Thapsigargin and Etoposide, Two-way ANOVA, data not shown) We observed that a large dose of Bortezomib induced a significantly greater decrease in the number of clusters constituted by SMILE siRNA-transfected cells compared to control siRNA-transfected cells. These data confirmed the electronic microscopy and suggested that cells lacking SMILE are more sensitive to the toxic effect of an ER stressor that blocks proteasome activity than control siRNAtransfected cells ( Figure 5B, *p = 0.0317, Mann-Whitney test). Compared to Bortezomib effects, control and SMILE siRNAtransfected cells treated with Thapsigargin or Etoposide displayed the same decrease in the number of clusters, indicating a similar toxicity of these two drugs on cells lacking SMILE mRNA ( Figure 5C and 5D). These results suggest that HeLa cells lacking SMILE mRNA are more sensitive to ER stress dependent on proteasome activity blockade compared to other stresses. Down-regulation of SMILE/TMTC3 induces upregulation of XBP-1 transcription In order to determine whether there is a direct link between SMILE down-regulation and ER stress, we further tested XBP-1 expression in HeLa cells transfected with SMILE siRNA and treated 6 h with 20 nM Bortezomib. XBP-1 is a stress response protein activated upon exposure to ER stress and allowing transcription of genes of the Unfolded Protein Response. SMILE mRNA down-regulation resulted in significant XBP-1 transcript overexpression after Bortezomib treatment ( Figure 6A, *p = 0,0156, Wilcoxon signed rank test). This experiment was confirmed on primary cells (human keratinocytes). SMILE mRNA expression was checked by RT-PCR and decreased by almost 70% in resting keratinocytes transfected with SMILE siRNA as compared to cells transfected with Stealth RNAi negative control Low GC (*p = 0.0418, Wilcoxon signed rank test, mean replicate values of four independent experiments, data not shown). As shown in figure 6B, SMILE transcript silencing and 6 h-Bortezomib treatment also induced a significant increase in XBP-1 transcription (**p = 0.0078, Wilcoxon signed rank test). Interestingly, SMILE transcript silencing without proteasome blockade also induced an increase in XBP-1 transcription in keratinocytes (p = 0.0547, Wilcoxon signed rank test), suggesting that epithelial primary cells are more susceptible to SMILE transcript silencing alone and that SMILE transcript modulation directly impacts ER stress responses. Discussion Although immunological tolerance has been achieved in animal models, its translation into the clinic has not yet been feasible and remains highly experimental in both non human primates and humans. Nevertheless, compelling evidence has accumulated showing that some transplant recipients permanently accept their kidney or liver grafts in the absence of immunosuppressive therapy [5,10,11,12]. Along these lines, during the last decade, significant efforts have been made among the transplant community (Reprogramming the Immune System for Establishment of Tolerance and Indices of Tolerance) in Europe [11] and (Immune Tolerance Network) in the US [12] to identify biological signatures of ''operational tolerance''. 
We previously identified a list of 49 genes which were able to discriminate operationally tolerant patients from other cohorts of transplant patients [6]. SMILE/TMTC3 was one of the genes found to be differentially expressed in the blood from operationally tolerant patients compared to stable and rejecting patients and whose function was unknown. Confirming the latter study, a differential expression of SMILE transcripts was additionally reported by the team of Newell et al. between a cohort of 25 operationally tolerant patients and stable patients (data available on Gene Expression Omnibus Datasets under reference GSE22229) [12]. The modulation of SMILE transcripts in the blood of operationally tolerant patients and patients with chronicantibody mediated rejection patients and the independence of SMILE transcript levels to external confounding factors suggest that SMILE may have a potential implication in controlling graft status. However, as there is no described cellular or clinical role for SMILE, it is not yet known if SMILE has an active role in the establishment of tolerance, or if this molecule is a passive biomarker of tolerance. Thus, the present study was conducted to further explore the potential functions of SMILE. We report that SMILE interacts with PDIA3, which has a crucial role in glycoprotein folding in endoplasmic reticulum [22], in the loading of peptide on MHC class I in endoplasmic reticulum [13] and which is overexpressed during ER stress. The interaction between SMILE and PDIA3 was initially identified in a yeast Two-Hybrid screen and confirmed by immunohistochemistry showing an endoplasmic reticulum colocalization of the two molecules. We also showed here that siRNA-mediated SMILE knock-down in HeLa cells induces a decrease in several types of transcripts involved in protein catabolism and proteolysis. Among these transcripts we found that several immunoproteasome subunits (PSMB1, PSMB9 and PSMB10) were modulated, suggesting that SMILE exerts its function via the proteasome pathway. As expected, proteasome activity assessed by chymotrypsin-like activity was decreased in SMILE siRNA-transfected cells as compared to control siRNA-transfected cells. These results suggest that SMILE might have a role in protein folding and/or degradation, exerting its function via the proteasome pathway. Incorrect folding of proteins in cells is counteracted by the Unfolded Protein Response (UPR). If UPR is not sufficient to process protein overload in the ER, this pathway can be deleterious and lead to cell apoptosis or autophagy [14,15]. To assess the involvement of SMILE in ER stress responses and protein catabolism, we treated SMILE siRNA-transfected cells with various stressors, including Bortezomib, a proteasome inhibitor. SMILE down-regulation and/or Bortezomib treatment induced dramatic ER enlargement and features of cellular injury. Furthermore, Bortezomib inhibition of long-term cellular growth was strongly enhanced in SMILE siRNA-transfected cells. Interestingly, the toxicity of Thapsigargin, an ER stressor whose effects are unrelated to proteasome inhibition, was independent of the level of SMILE expression on the cell response to stress. Thus, SMILE transcript inhibition increased sensitivity to ER stress dependent on protein overload induced by the proteasome inhibitor Bortezomib. One arm of the UPR response involves the spliced transcript XBP-1. 
In this study, we showed that SMILE silencing directly increased XBP-1 transcript expression after 6 hours of Bortezomib treatment. Altogether these data suggest that in HeLa cells, proteasome pharmacological inhibition and SMILE silencing act in a synergistic way, likely by blocking protein degradation or modification for degradation. As suggested in the literature, blockade of protein degradation induces accumulation of misfolded proteins in the ER and leads to ER stress, and thus to XBP-1 overexpression [16]. Interestingly, a recent study by Fasanaro et al. reported that SMILE/TMTC3 mRNA is inversely modulated after miR-210 over-expression or inhibition [17]. Of note, miR-210 expression is induced by hypoxia, which was shown to induce UPR as a prosurvival mechanism in tumor cells [18]. One of the responses to hypoxia via miR-210 involves indirect targets implicated in amino acid catabolism [17]. Our results in proteolysis suggests that SMILE may be part of the response to hypoxia -and thus to ER stress -via miR-210 or not. Moreover, our DNA chip analysis revealed that SMILE down-regulation in HeLa cells affects the secretory pathway as well as vesicle-mediated transport (GO:0045045 and GO:0016192). Interestingly, membrane trafficking is one of the functions that is modified in response to miR-210 modulation and that could be set off by hypoxia, according to Fasanaro et al. [17]. Thus, this work supports our results for SMILE having a role in proteolysis and being potentially an actor of the ER stress response. Regarding the fact that SMILE was discovered in PBMCs of patients, it may play a direct role in the immune cell physiology in long-term graft function. The role of the UPR, and particularly of XBP-1, in the mammalian immune system [19,20] and in inflammation has been clearly demonstrated [21]. Indeed, the stress response is involved in a variety of immune cells such as dendritic cells [20,22], macrophages [23] or B cells [24,25,26] and depend on the UPR and notably XBP-1 for their development and/or function. This could be of potential interest given the recent studies showing that operationally tolerant patients display a particular B cell profile highlighting a possible abnormal B cell differentiation process in these patients [11,12,27,28]. A recent paper have reported that the STAT3/ IL-6 pathway, that has also been shown to be involved in ER stress [29,30], is activated neither in operationally tolerant patients nor in rejecting patients [31]. These results that do not confort our hypothesis may be due to the fact that the STAT3/IL-6 pathway is not the only signaling pathway reflecting UPR activity, and the absence of its activity in operationally tolerant or rejecting patients may not preclude the absence of UPR activity in the PBMCs of these patients. Growing evidence suggests that the selectivity of Bortezomib for myeloma cells may be explained by an increased susceptibility of myeloma cells to ER stress-induced apoptosis [32]. In addition, Bortezomib is not only selective for cancerous cells, as recent studies showed that primary B cells, that are largely dependent on UPR and proteasome activity to produce antibodies, are sensitive to Bortezomib. This treatment was shown to decrease donor-specific antibodies in renal transplant patients in recent studies [33,34]. 
Our results showed that primary cells are far more sensitive to SMILE transcript silencing than HeLa cells, as there was no need for Bortezomib treatment to induce XBP-1 overexpression in SMILE-silenced keratinocytes. These results suggest that SMILE transcript modulation in immune cells may have an impact on the function of the cell and particularly on its response to ER stress. They allow a function in the ER stress response to be attributed to this molecule, a function that was previously unknown. Moreover, this opens up new perspectives on ER stress and graft immune regulation, given the role of the ER stress response in immune cells. SMILE may have a potential role in these cell types related to the emerging role of the ER stress response in transplantation. We also envisage a role for SMILE in the graft itself, in addition to recent works showing ER stress emerging as an actor at the graft level [35,36,37]. To conclude, further studies are needed to analyze the effects of SMILE transcript modulation in immune cells. This molecule and its link with endoplasmic reticulum stress could be of potential relevance in the field of organ transplantation. Patients The study was performed on 42 blood samples. All patients and healthy volunteers (HV) who participated in this study signed an informed consent and the study was approved by the University Hospital Ethical Committee (Nantes, France). The clinical parameters of these patients are described in detail in Table S1.
- Patients under standard immunosuppressive therapy with stable graft function (STA; n = 9; patients with Cockcroft creatinine clearance > 40 mL/min and proteinuria < 1 g/24 h) for at least 3 years, with donor-specific antibodies for 2 out of 9 patients. No biopsies were available for these patients because they presented no deterioration of graft function (certain cDNA samples were prepared by TcLand Expression S.A., Nantes, France). These patients were under anti-metabolites (mycophenolate mofetil or azathioprine), calcineurin inhibitors (Cyclosporin A or FK506) and/or steroids.
- Operationally tolerant patients: patients with stable graft function (TOL; n = 8; Cockcroft creatinine clearance > 40 mL/min and proteinuria < 1 g/24 h) for at least 1 year (median 12.5 years, range 5-30 years) without immunosuppressive treatment. Immunosuppressive treatment was stopped due to non-compliance (n = 6), post-transplant lymphoproliferative disorder (n = 1) or calcineurin inhibitor toxicity (n = 1). No biopsies were available for these patients since biopsy was refused by our Centre's Ethical Committee.
- Patients with chronic antibody-mediated rejection: patients under standard immunosuppressive therapy with biopsy-proven chronic antibody-mediated rejection (transplant glomerulopathy, positive for C4d and anti-donor HLA antibodies) (CAMR; n = 14) according to the updated Banff classification criteria [38]. Chronic AMR was diagnosed on biopsies performed in the context of a progressive deterioration of renal function (Cockcroft creatinine clearance < 40 mL/min and/or proteinuria > 1 g/24 h).
Peripheral Blood Mononuclear Cells Peripheral blood from healthy volunteers and patients was collected in EDTA Vacutainers, and PBMC were separated by density centrifugation using Lymphosep lymphocyte separation media (Bio West, Nuaille, France). PBMC were stored in TRIzol (Invitrogen, Cergy Pontoise, France) at -80 °C until use.
RNA Extraction and Preparation of cDNA RNA was extracted from human PBMC, HeLa cells and keratinocytes using the TRIzol method (Invitrogen) according to the manufacturer's instructions. Genomic DNA was removed by DNase treatment (Roche, Indianapolis, IN). RNA concentration was calculated using a Nanodrop ND1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE). RNA was reverse transcribed into cDNA using a poly-dT oligonucleotide and Moloney murine leukemia virus reverse transcriptase (Invitrogen). Real-Time Quantitative PCR Real-time quantitative PCR was performed in an Applied Biosystems GeneAmp 7700 or 7900 sequence detection system (Applied Biosystems, Foster City, CA) using a commercially available primer and probe set for human SMILE/TMTC3 (Applied Biosystems; Hs00699202_m1) and XBP-1 (Applied Biosystems; Hs00231936_m1). The housekeeping gene hypoxanthine phosphoribosyl transferase (HPRT, Applied Biosystems; Hs99999909_m1) was used as an endogenous control to normalize RNA starting quantity. Relative expression between a given sample and a reference sample was calculated according to the 2^-ΔΔCt method after normalization to HPRT, with results expressed in arbitrary units (a small numerical sketch of this calculation is given at the end of the Methods). Yeast two-hybrid screen Yeast two-hybrid screening was performed by Hybrigenics Services SAS, France (http://www.hybrigenics-services.com). The coding sequence for aa 1-230 of PDIA3 (GenBank accession number gi: 67083697) was PCR-amplified and cloned into pB28 as a C-terminal fusion to LexA (N-LexA-PDIA3-C). The construct was checked by sequencing the entire insert and used as a bait to screen a random-primed human brown adipocyte cDNA library constructed into pP6. pB28 and pP6 derive from the original pBTM116 [39] and pGADGH [40] plasmids, respectively. 150 million clones (15-fold the complexity of the library) were screened using a mating approach with Y187 (matα) and L40ΔGal4 (mata) yeast strains as previously described [41], and positive clones were selected on a medium lacking tryptophan, leucine and histidine, supplemented with 0.5 mM 3-aminotriazole to handle bait autoactivation. The prey fragments of the positive clones were amplified by PCR and sequenced at their 5′ and 3′ junctions. The resulting sequences were used to identify the corresponding interacting proteins in the GenBank database (NCBI) using a fully automated procedure. A confidence score (PBS, for Predicted Biological Score) was attributed to each interaction as previously described [8]. Preparation of odontoblast culture Dental pulps were obtained from healthy human third molar germs (from 14- to 16-year-olds) extracted for orthodontic reasons with the informed consent of the participants and their parents, in accordance with the French Public Health Code and following a protocol approved by the local ethics committee. Pulps were processed for cultured odontoblast-like cells as described previously [42] and treated for 24 hours with 20 nM Bortezomib (Millennium Pharmaceuticals, Inc, Cambridge, United Kingdom). Immunohistochemistry Odontoblast cell cultures were fixed in 4% paraformaldehyde-0.025% saponin-PBS for 30 min at 4 °C, then rinsed in PBS-0.025% saponin-2 mg/ml bovine serum albumin-0.1 M lysine HCl at 4 °C. Intracellular detection of proteins was promoted by the permeabilizing effect of saponin. Cultures were then reacted for double staining with anti-PDIA3 (# HPA003230, Sigma-Aldrich, France) and anti-SMILE (# ab81473, Abcam, France) antibodies.
Subsequently, the cultures were rinsed and incubated with goat anti-mouse Alexa Fluor 594 and goat anti-rabbit Alexa Fluor 488 secondary antibodies. Transmission Electron Microscopy on transfected and drug-treated HeLa cells SMILE and control siRNA-transfected HeLa cells at the 3rd day of culture were fixed in cacodylate-buffered 4% glutaraldehyde for 1 h at 4 °C, washed in buffer and post-fixed in cacodylate-buffered 2% osmium tetroxide for 1 h at room temperature. Cells were dehydrated in increasing concentrations (from 50° to 100°) of ethanol and embedded in Epon. Sections (70 nm-thick) were cut with an Ultracut E ultramicrotome (Leica Microsystems GmbH, Wetzlar, Germany), mounted on copper grids, stained with the Reynolds method and observed on a JEM 1010 electron microscope (Jeol LTD, Tokyo, Japan) at a voltage of 80 kV. Gene expression analysis in HeLa cells using DNA chips RNA samples representing two independent experiments from HeLa cells transfected for 24 hours with negative control or SMILE siRNA, and activated or not with 20 mM PMA (phorbol 12-myristate 13-acetate) for 6 hours, were submitted for analysis. After checking RNA quality, 500 ng of total RNA for each sample were prepared with the Agilent Quick Amp Labeling Kit following the one-color manufacturer's protocol. Each sample was hybridized to a whole human genome microarray (4×44K, Agilent) following the manufacturer's instructions. After scanning, data were extracted with Feature Extraction software (Agilent Technologies) and normalized (lowess function in R [43]), and negative control spots and background signal were then removed. Significance Analysis of Microarrays (SAM) [44] was applied to identify transcripts differentially expressed between SMILE siRNA and control siRNA-transfected cells. For each analysis, we arbitrarily fixed the false discovery rate (FDR) at less than 0.5%. To assess the biological significance of the differentially expressed genes identified with SAM, GOminer software [44,45] was used to identify the over-represented Gene Ontology (GO) categories. Only GO categories within the biological process ontology (GO:0008150) were analyzed, and we selected GO categories with enrichment p-values below 0.05 and categories with at least 10 genes. All microarray data are MIAME compliant and the raw data have been deposited in a MIAME-compliant database, the Gene Expression Omnibus Datasets. The complete list of the probes used and the expression analysis have been submitted to Gene Expression Omnibus, GEO # GSE21886. Proteasome-Glo™ Cell-Based Assay HeLa cells were seeded in 6-well plates at a density of 8 × 10^5 cells per well for 24 h and transfected for 48 h with control and SMILE siRNA as described above. The chymotrypsin-like activity of transfected cells was then assayed with the Proteasome-Glo™ Cell-Based Reagent (Promega, Charbonnières Les Bains, France) according to the manufacturer's protocol. Luminescence was read with a VICTOR™ X Multilabel Plate Reader (PerkinElmer, Massachusetts, USA). Clonogenic survival assays Control and SMILE siRNA-transfected HeLa cells were seeded in 6-well plates at a density of 500 cells per well and exposed to increasing concentrations of Bortezomib (1.25 nM, 2.5 nM or 5 nM from a 0.1 mg/ml start solution, Millennium Pharmaceuticals, Inc, Cambridge), Thapsigargin (25 nM, 50 nM, 100 nM from a 1 mM start solution, Sigma-Aldrich) or Etoposide (90 nM, 120 nM, 180 nM from a 50 mM start solution, Sigma-Aldrich) for 24 hours. Controls were performed with vehicle only: H2O for Bortezomib and DMSO for Thapsigargin and Etoposide.
Then, the drug/medium was removed and cells were allowed to incubate in fresh medium under normal conditions for 7 days. After incubation, cells were fixed with 10% methanol-10% acetic acid and stained with a 0.4% solution of crystal violet. Plating efficiencies were determined for each treatment and normalized to untreated cells. Error bars represent SEM. Statistical Analyses The nonparametric Mann-Whitney test, the nonparametric Wilcoxon matched pairs test and the nonparametric Kruskal-Wallis test were performed when appropriate. Values of *p < 0.05, **p < 0.01 and ***p < 0.001 were considered significant. ROC curve analysis was performed to determine the cutoff point of SMILE mRNA in blood that yielded the highest combined sensitivity and specificity in diagnosing operational tolerance. The statistical method was devoted to the analysis of the diagnostic properties of SMILE, and the theory of ROC (receiver operating characteristic) curves was applied. More information about this method is available in SD Experimental Procedures. A statistical analysis was also performed to study the relationship between SMILE mRNA expression in a cohort of 164 stable patients and different clinical factors that could influence the diagnostic power of this biomarker. The SMILE distribution was normalized with a logarithmic transformation and SMILE log-values were predicted using a multiple linear regression model.
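As referenced in the Real-Time Quantitative PCR subsection above, relative expression was computed with the 2^-ΔΔCt method. A minimal numerical sketch is given below; the Ct values are invented for illustration only and are not taken from the study.

# Minimal sketch of the 2^-ΔΔCt relative-expression calculation (Livak method).
def relative_expression(ct_target_sample, ct_hprt_sample, ct_target_ref, ct_hprt_ref):
    """Fold-change of the target gene in a sample vs. a reference sample,
    normalized to the HPRT housekeeping gene."""
    d_ct_sample = ct_target_sample - ct_hprt_sample  # normalize sample to HPRT
    d_ct_ref = ct_target_ref - ct_hprt_ref           # normalize reference to HPRT
    dd_ct = d_ct_sample - d_ct_ref
    return 2 ** (-dd_ct)

# Example: SMILE Ct of 26.0 vs HPRT Ct of 24.5 in a patient sample, against
# SMILE Ct of 27.5 vs HPRT Ct of 24.8 in the reference sample.
print(relative_expression(26.0, 24.5, 27.5, 24.8))  # ≈ 2.3-fold higher than the reference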
7,372.4
2011-05-16T00:00:00.000
[ "Biology", "Medicine" ]
Determine the Eigen Function of Schrodinger Equation with Non-Central Potential by Using NU Method So far, Schrodinger equation with central potential has been solved in different methods but solving this equation with non-central potentials is less dealt with. Solving such equations are way more difficult and complicated and a certain and limited number of non-central potentials can be solved. In this paper, we introduce one of the solvable kinds of such potentials and we will use NU method for solving Schrodinger equation and then by using this method we have calculated particular figures of its energy and function. Introduction One of the important tasks of quantum mechanic is finding accurate answers of Schrodinger equation with a certain potential.It is obvious that finding exact answers of SE by the usual and traditional methods is impossible, except certain cases such as a system with Qualeny potential or a coordinating oscillator.Thus, it is inevitable to use methods to help us solve this problem.Among the cases where we have to refuse ordinary methods and seek new methods is solving SE with non-central potentials.Such potentials are of high importance in quantum chemistry and nuclear physics.Recently, a lot of studies are being done about such potentials.Accordingly, different methods are used to solve SE with non-central potentials among which we can name symmetrical cloud, SUSY, SIP idea [1,2], route integral [3] and Factorial method [4]. There is also another method known as NU (Nikiforov-Uvarov) which gives a clear instruction for obtaining exact answers of certain states ,Eigen value of energy and the related functions based on Orthogonal polynomials [5].NU method is based on reducing a second degree differential equation of SE into an equation of hyper geometric type [6][7][8].Based on this, in this paper we will try to solve SE with a suitable potential without any limits.Therefore, we consider a potential as follows to meet all our needs in order to solve SE by NU method: After choosing a suitable change of variable   s s r  , the transformed equation will be as follows: In which  and  are polynomials of maximum second degree and  is a polynomial of maximum first degree.By considering wave function Equation ( 1) will be reduced to the following equation which is of hyper geometric type [5]; That in equations:  is a parameter which is defined as follows [5]: The polynomial ( ) s   shows that its first derivative must be negative.We should notice that  and n  are obtained from a particular answer of     n y s y  s which is a polynomial of n degree.Furthermore, statement   n y s ,wave function of Equation (2), is a function of hyper geometric type which is obtained from the following Rodriguez equation [5]. In which n is the normalization constant and B   s  is a weight function which has to meet the following condition [5,6]. has to be a polynomial of maximum first degree, the statements under the radical in Equation (9) have to be sorted in the form of a first degree polynomial and this is possible when its determiner, , is zero.In this case, an equation is obtained for K and after solving the equation, the obtained figures for K are placed in Equation (9) and by comparing with Equations ( 6) and (10) we will calculate Eigen value of energy. 
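Because the typeset equations of the NU formalism did not survive text extraction above, the standard form of the method is restated here for reference. This is the textbook Nikiforov-Uvarov prescription, not a reconstruction of this paper's specific expressions:

\psi''(s) + \frac{\tilde{\tau}(s)}{\sigma(s)}\,\psi'(s) + \frac{\tilde{\sigma}(s)}{\sigma^{2}(s)}\,\psi(s) = 0,
\qquad \psi(s) = \phi(s)\,y(s), \quad \frac{\phi'(s)}{\phi(s)} = \frac{\pi(s)}{\sigma(s)},

\sigma(s)\,y''(s) + \tau(s)\,y'(s) + \lambda\,y(s) = 0, \qquad \tau(s) = \tilde{\tau}(s) + 2\pi(s),

\pi(s) = \frac{\sigma'(s)-\tilde{\tau}(s)}{2} \pm \sqrt{\left(\frac{\sigma'(s)-\tilde{\tau}(s)}{2}\right)^{2} - \tilde{\sigma}(s) + k\,\sigma(s)},

\lambda = k + \pi'(s), \qquad \lambda_{n} = -n\,\tau'(s) - \frac{n(n-1)}{2}\,\sigma''(s),

y_{n}(s) = \frac{B_{n}}{\rho(s)}\,\frac{d^{n}}{ds^{n}}\!\left[\sigma^{n}(s)\,\rho(s)\right], \qquad \left(\sigma(s)\,\rho(s)\right)' = \tau(s)\,\rho(s).

Here σ and σ̃ are polynomials of at most second degree, τ̃ is of at most first degree, the constant k is fixed by requiring the expression under the square root to be a perfect square (zero discriminant), and equating λ with λ_n yields the energy eigenvalues, in line with the verbal description in the text.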
Schrodinger Equation with Central and Non-Central Potentials We consider time-independent SE as follows: Wave function elaborates certain states and their related energy levels, n , for a particle in a potential field.First, central and non-central potentials are considered in spiracle coordinates and put them in the above equation.( ) By considering the whole wave function as: And replacing in Equation ( 12), we can write the equation separately as follows: In which m 2 and   are the separation fix amounts.The answer for Equation ( 16) is as follows [9]. Equations ( 14) and (15) are radial and angular equations respectively which we are going to solve by NU method. Solving Radial Equation and Calculating Eigen Values of Energy by Using NU Method For solving radial part of SE, by considering in Equation ( 14) we will have: Now if we compare the above equation with Equation (1), the general form of equation in NU method, we will have: Therefore, according to the definition of   r  in Equation ( 9), we can write   r  as follows: From the above equation, considering that under the radical there should be a of a first degree polynomial. We will determine K and we will have: To continue, we choose the suitable amount of   r  which can meet the condition 0 Now, for (i), we can write   r  2 4 r r from Equation (5) as follows: And from Equations ( 6) and (10) we will calculate n  and  respectively So: And eventually, we can obtain Eigen value of energy for (i) from the above equation: In the same way, for (ii), Equation (22), by repeating the above process we can obtain Eigen value of energy.Therefore, from Equation (5), we write as: And from Equations ( 6) and ( 10) we calculate n  and  : From the above equations, Eigen value of energy is obtained: We considered 2 E    and from ( 27) and (32), we have Eigen value of energy as: Calculating Eigen Functions Related to Radial Share of Wave Function In order to obtain Eigen radial functions, by using Equation (4) we have: Thus, for (i) and (ii) in Equation ( 22) we have: We will have: On the other hand, from Equation (8) we have: From Equation (37), as we have   2 r r      , we will have: From Rodriguez sequence polynomial, we will have Laguerres dependent functions as [9]: By comparing the two functions, (39) and ( 40) and by considering k    and 2 r x   we will have: In the same way for (ii) in which from Equation (37) we will have: And finally, from Equation ( 7 Therefore, from Equations (2), ( 35), (36), (42), (46) the whole radial wave function can be written as: 1 by taking orthogonal condition of associated Laguerre polynomials and we will have: Solving Angular Equation and Calculating Eigen Values With a variable change as cos x   we will tion (15) as the following transformation: have Equa- By comparing Equations ( 49) with (1) we will have: By using ( 9), we write   x  as: We determine K on condition that there gree polynomial under the radical: And for we have the right choice as: m Eq Also, fro uation (5), we can obtain   x  e will hav Then w e: According to Equations ( 6) and ( 10), we will calculate n  and  respectively: Finally we will obtain  from the above equation: Calculation Eigen Function of A Equation To fi by using Equation (4) we will have: s ngular nd Eigen function related to angular equation, Thus, for   8) we have: On the oth m Equation Therefore, we can write the weight function And eventually, from Equation (7), we wr e ) (x acobi's From Rodriguez sequence, we will have J 
polynomials as [10,11]: ) and (64) and c sidering This way, we calculated Eigen of functions and Eigen value of energy.According to this wave function, we can determine static characteristics of this system.We can also use this method for solving SE with other non-central potentials. 7. rtain non-central potential by using the rse, we should note that this method, ethods for solving SE, does not funcon with any non-central potential and we can get a 0375-9601(00)00252-8 x x x  (66) Conclusions In this paper we showed that Eigen value of energy and its Eigen dependent functions can be obtained for a system under a ce NU method.Of cou too, like previous m ti suitable result out of this method only with certain types of potentials which meet the requirements of the method.And in this paper, by considering all the requirements, we have introduced the potential in mind and gotten the results mentioned in the article.
1,958.6
2011-08-03T00:00:00.000
[ "Mathematics" ]
CONTEXTUAL TEACHING AND LEARNING APPROACH TO IMPROVE ACTIVITIES AND STUDENT LEARNING RESULTS IN MATH LEARNING OF ANGLE MATERIAL IN 4 TH GRADE AT SDN 104 LANGENSARI-SENANGGALIH KECAMATAN COBLONG BANDUNG CITY This research is motivated by the condition of students in elementary schools studied, namely the low student learning outcomes and the lack of student activity in participating in learning. The results of observations made before the study showed that students could not apply the concept in solving everyday mathematical problems. Besides that, learning is often independent of reallife contexts. Based on the above, the researcher tried to apply the Contextual Teaching and Learning approach in angular learning. This research was carried out aimed at knowing the description of student learning activities and how student learning outcomes in learning the concept of angles through the Contextual Teaching and Learning approach. This classroom action research consists of three cycles. Each cycle consists of one learning action. Every action is carried out starting from planning, action, observation and interview, action analysis and reflection. The object of the study was students at 4th grade in SDN 104 Langensari-Senanggalih Kecamatan Coblong Bandung City. The average scores of student activity in each cycle is: cycle I reached 73.2; cycle II reaches 76.6; and cycle III reaches 81.3. These results show that student activities during learning is increasing so that in the final action, learning activities more dominated by students. While the average scores of student learning outcomes in each cycle is: cycle I reached 68.4; cycle II reaches 73.3; and cycle III reaches 76.8. These results show that students have understood the concept of angles and have the ability to apply concepts in solving problems in daily live. Therefore can be concluded that Contextual Teaching and Learning approach in angle learning can improve student learning outcomes. INTRODUCTION Mathematics is a very important subject in the education system throughout the world. Mathematics is important to be mastered so that students can easily learn other material, because essentially mathematics is the queen of science.Besides that mathematics is important to be used as a hold because mathematics is the basic science of the development of science and technology that is useful in social life. General objectives are given mathematics at the primary and secondary education levels, namely: (1) Preparing students to be able to deal with changes in the conditions in life and in the world that are always evolving, through practice based on logical, rational, critical, careful, honest, effective and efficient thinking; and (2) preparing students to use mathematics and the mindset of mathematics in everyday life, and in learning various sciences.(Suherman, 2001: 56) Geometry is one of the important branches of mathematics to be taught in elementary school.The subject of angle is part of geometry.In learning, students often experience difficulties.Difficulties experienced by elementary school students are objects that are studied in the form of abstract objects, meaning they cannot be held, touched directly by the senses but can only be thought of so that they are difficult to understand.This is what causes student achievement in elementary schools to be low. 
The success of students in learning is also determined by the teacher itself.The teacher must be able to master the subject matter and convey the material well.In carrying out the teaching and learning process, teachers should be smart in determining learning theory, methods, and approaches that are appropriate to the subject matter and in accordance with the characteristics and needs of elementary school children, so that a pleasant learning atmosphere will be created.Thus can help understanding the material well by students. Based on the observations, the researchers found that one of the factors that caused the fourth grade students at SDN 104 Langensari -Senanggalih had difficulty understanding and learning a material, especially angular material, because in learning students were used to giving examples and exercises without any other variations.Students are not accustomed to constructing their own knowledge in understanding and learning a material.In addition, in learning, the teacher is only oriented towards the mastery of the material without thinking about whether the student really understands the material angle.Learning that is only masteryoriented material causes students to only be able to remember short-term knowledge, so that students find it difficult to solve problems in everyday life. Based on the description above, to improve student learning outcomes in angular learning researchers apply the CTL approach.The CTL approach helps teachers associate material taught with students' real-world situations.The teacher encourages students to make connections between their knowledge and their application in daily life.The CTL approach positions students as individuals who actively construct their own knowledge so that learning is meaningful to students, students will remember the material in the long run, students will also realize the importance.In the CTL approach the teacher cannot tell how a problem is solved, explain the procedure for working on a problem.Teachers are prohibited from explaining a concept or material.The role of the teacher is only limited to facilitating student learning, guiding students to build knowledge, and directing students to find mathematical concepts. Contextual Teaching and Learning Approach Contextual words (contextual) come from the word context which means relationship and atmosphere, so Contextual Teaching and Learning (CTL) can be interpreted as a learning related to a particular atmosphere.In general contextual contains relevant meaning, there is a relationship or direct connection, following the context that carries the intent, meaning, and interests.Understanding the contextual teaching and learning approach is expressed by Nurhadi (2003: 1) as follows. "A contextual approach (Contextual Teaching and Learning) is a learning concept that helps teachers associate the material they teach with the real-world situation of students and encourage the relationship between their knowledge and its application in their lives as family members and society." Through the CTL approach the learning process takes place naturally, because it involves the real-world life of students directly.In learning, students are active in exploring their own knowledge, so that their learning will be meaningful.Students will remember what they have learned in the long run.By linking learning material to students' world situations, students are expected to be able to apply what they have learned in their daily lives. 
Teachers also play an important role in choosing learning materials that are considered important for students to learn in CTL activities.This is because every child has a tendency and passion to learn and try things that are considered new and strange.Besides the role of the teacher in the Contextual Teaching Learning approach the role of the teacher is to help students find a connection between new experiences with previous experiences, because in the Contextual Teaching Learning in the learning process students must look for the relationship between the material being studied with their experiences in everyday life. The essence of CTL learning (Contextual Teaching learning) includes seven stages, Nurhadi (2003: 5), there are: a. Constructivism In the opinion of Nurhadi (2003: 11) "Humans must construct that knowledge and give meaning through real experience".Based on this opinion, it can be explained that constructivism is the process of building or constructing new knowledge in the cognitive structure of students based on experience.This is what underlies CTL (Contextual Teaching Learning). b. Inquiry Inqury is learning based on search and discovery through a systematic process of thinking.In general, the inquiry process can be done through several steps, namely: formulating the problem, submitting a hypothesis, collecting data, testing hypotheses based on the data found, and making conclusions.The inqury component is important in CTL because through a systematic thinking process, students are expected to have a scientific, rational, and logical attitude, all of which are needed as the basis for the formation of creativity.In Bruner's opinion (Dahar, 1996) that: Through learning discovery, knowledge will last or can be remembered, discovery learning outcomes have a better transfer effect than learning outcomes with conventional learning, and overall discovery learning improves students 'reasoning and students' ability to think freely. c. Questioning Knowledge that someone has, always starts from asking.Asking is a characteristic of the curiosity in an individual.In the learning process through CTL, the teacher does not convey information just like that, but will lure students to find their own.The questioning role is very important, because through the questions the teacher can play and direct the students to find out the material they are learning. According to Sutardi (2007: 97) in a questioning learning has several benefits, namely are: Exploring information both administration and academic, checking students 'understanding, generating student responses, knowing how far students want to know, knowing things that students already know, focusing students' attention on something the teacher wants, to raise more questions from students, and to refresh students' knowledge. d. Learning Community The concept of learning society in CTL is in learning that there is a process of collaboration with others.Cooperation can be in the form of sharing between friends, between groups, or between people who already know to people who do not know. In learning, the application of learning communities can be done through study groups. 
Students are divided into several groups whose members are heterogeneous, both from their First, heterogeneous groups provide opportunities for teaching each other (peer tutoring) and mutual support.Second, this group increases relations and interactions between races, religions, ethnicities and genders.Finally, heterogeneous groups facilitate classroom management because with one person with a high academic ability, the teacher gets one assistant for every three people. e. Modeling Modeling in CTL is in the learning process by demonstrating something as an example that can be replicated by each student.This modeling process is not limited to just the teacher, but the teacher can utilize students who are considered to have the ability, thus students can be considered as models.This modeling is a very important component in CTL, because through modeling students can avoid theoretical abstract learning. f. Reflection Reflection is a way of thinking about what you have just learned or thought back about what has been done in the past.Through the process of reflection, the learning experience will be included in the cognitive structure of students which will eventually become new knowledge.Through reflection students can also update the knowledge that has been formed, or can add to the knowledge they already have. In the learning process using CTL, every end of learning, the teacher provides opportunities for students to reflect or recall what they have learned.Let students freely interpret their own experiences, so students can conclude about their learning experiences. g. Authentic Assesment In CTL, learning success is not only determined by the development of intellectual abilities, but the development of all aspects.Therefore, the assessment of success is not only determined by learning outcomes such as tests, but also the learning process through authentic assesment. Authentic assesment is actually a process that the teacher does to gather information about the development of learning done by students.This assessment aims to find out whether students really learn or not, whether the learning experience of students has a positive influence on the development, both intellectual and mental of students.Authentic assessment is carried out in an integrated manner with the learning process.This assessment is carried out continuously as long as learning activities take place, therefore, the pressure is directed to the learning process not to learning outcomes. The essence of Math Mathematics is one of the most important fields of study to be taught in elementary schools.As an elementary school teacher, before teaching mathematics to elementary school children should know what mathematics is.Suwangsih and Tiurlina (2006: 3) state that: "The word mathematics comes from Latin words mathematics which was originally taken from the Greek words mathematike which means to study.Mathematike words also relate to other similar words, namely mathein or mathenein which means learning (thinking)." 
Based on the statement above mathematics is a science that emphasizes more on the way of thinking (reasoning), because in the early stages mathematics was formed from human experience.Humans do activities in understanding mathematics, the activity is an experience which will then be processed in his mind, processed in an analysis and synthesis with his own way of thinking or his own reasoning in his cognitive structure so that it comes to a conclusion in the form of mathematical concepts.In order for mathematical concepts to be easily understood by others, global notation and terms are used. Concept of Angle "The angle is a combination of two lines of AB and AC whose starting points coincide with points AB and AC, each of which is called the angular foot (Priatna, 2004: 10)." Image 1. Acute Angle An angle is named using one capital letter or three capital letters.For example in Figure 1 above the name of the angle is angle A, angle BAC or angle CAB.There are four types of angles that we need to know that are studied in fourth grade in elementary school, namely: right angle, blunt angle, and sharp angle and straight angle.Right angles are angles perpendicular to each other and the size is 90 degrees, blunt angles are angles whose size is more than 90 degrees, the acute angle is the angle of less than 90 degrees, while the straight angle is straight with an angle of 180 degrees. METHOD Classroom action research is a learning activity in the form of an action that is deliberately raised and occurs in the classroom.This action is given to students according to the direction of the teacher.The method used in this study is a qualitative method with classroom action research techniques (action research).Suharjono (Arikunto, 2006: 58) who stated that, "Classroom action research is an action research conducted in the classroom with the aim of improving / improving the quality of learning practices". In line with the opinion of Wiriaatmadja (2008: 13) regarding the definition of classroom action research namely, classroom action research is how a group of teachers can organize the conditions of their learning practices, and learn from their own experiences.They can try an idea of improvement in their learning practices, and see the real influence of that effort.From the opinions of the experts above it can be concluded that classroom action research is a teacher's effort to solve real problems that occur in the classroom to improve the learning process. The subjects of the study were 24 students, consisting of 13 female students and 11 male students.The focus of this class action research is angular learning in fourth grade in elementary school.This class action research will be carried out in fourth grade in SD Negeri 104 Langensari -Senanggalih Kecamatan Coblong Bandung City. The action plan can be described as follows. A. First Cycle Data collection is the core activity in PTK because this process is a determinant of whether the PTK process is good or not.The data to be collected from the action is in the form of qualitative data.The collected data is then analyzed and reflected.to analyze data that occurs during learning actions, in the form of descriptions of meaningful research findings. 
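As a small illustration of the angle categories defined earlier in this section (acute, right, obtuse, and straight), the sketch below classifies an angle by its degree measure; it simply follows the definitions quoted above and is an illustrative aid, not part of the study's instruments.

def classify_angle(degrees):
    """Classify an angle (in degrees) using the definitions given in the text above."""
    if degrees == 90:
        return "right angle"
    if degrees == 180:
        return "straight angle"
    if degrees < 90:
        return "acute angle"
    return "obtuse angle"  # more than 90 degrees (and not a straight angle)

for a in (45, 90, 120, 180):
    print(a, "->", classify_angle(a))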
Data analysis is done as a test of the action hypothesis that has been formulated, then the data is analyzed.Data processing and analysis are carried out continuously from the beginning to the end of the learning process.Data obtained from the test results are then calculated and quantitative data analysis is done by looking for x ̅ and variance. Results Based on the results of research that has been carried out through the description process, analysis and reflection of each cycle and each action, various findings will be discussed in the description below: Description of First Cycle In this first cycle consists of 1 action, namely action 1 regarding material recognizing the angle of an object or waking up.Learning using the Contextual Teaching and Learning approach involves seven main components of learning, stated by Depdiknas (2003: 5) namely constructivism, asking, finding, learning communities, modeling, and actual assessment. However, in the first cycle of learning there are deficiencies and obstacles faced by researchers in increasing activity and learning outcomes through the Contextual Teaching and Learning approach. To clarify the discussion of cycle I which is associated with research questions, an essential finding is needed regarding the research question in the first cycle research process. The essential findings in first cycle can be seen in Table 1.below. I Determine The Angle Of An Object Or Shape a. Student activity in angular learning using the Contextual Teaching and Learning approach in fourth grade elementary school is good enough but not maximal.This is evidenced by the fact that there are students who do not participate in group activities, there are students who do not have the ability to communicate that is not dare to ask questions and express opinions, and there are students who show less attitude in good learning, the average value of student activity reaches 69,1.b.Student learning outcomes in angular learning using the Contextual Teaching and Learning approach in the fourth grade of elementary school is good enough, the average value of student learning outcomes reaches 64.5 and is above KKM (Minimum completeness criteria) fourth grade SDN 104 Langensari Senanggalih which ranges 60.00.However, the average value of students has not been declared complete because the average value is still below the standard mastery learning that ranges from 75. Based on the essential findings in Table 1, it shows that in the first action, the students' activities in angular learning using the Contextual Teaching and Learning approach are good enough but not optimal.In his opinion, if the researcher asks students not to give a response and students do not show a good attitude in learning as students do not follow the learning well and show less curiosity towards learning material.The results of the average value of student activity reached 79.1 and the average value of student learning outcomes reached 68.3.The average value of student learning outcomes that reached 68.3 has not been declared complete because it is still below the standard mastery learning that ranges from number 75.When viewed by individuals only 14 students have declared complete learning based on mastery learning. Description of Second Cycle In this second cycle consists of 1 action.Action in second cycle concerning mention angle names.The essential findings in second cycle can be seen in Table 2. 
below.the contextual teaching and learning approach in fourth grade of elementary school is good enough and has increased, the average value of student learning outcomes reached 66.8 and already above the minimum completeness criteria (KKM) fourth grade of SDN 104 Langensari-Senanggalih, which ranges from 60.00.However, the average value of students has not been declared complete because the average value is still below the standard mastery learning that ranges from 75. Based on the essential findings in Table 2, it shows that in the first action, students' activities in angular learning using the Contextual Teaching and Learning approach have increased from cycle I but are less than optimal.Students have actively participated in group work, students have started to carry out discussion activities, even though they have not been maximized.Students have also begun to respond to researchers' questions, students are little by little brave to ask and express their opinions.In the learning process, students show enthusiasm in participating in learning, but students have not shown an attitude of curiosity towards learning material.The results of student activity values and student learning outcomes have increased, with an average value of student activity reaching 81.8 and student learning outcomes reaching an average of 73.5.Student learning outcomes have not been stated completely because it is still below the standard mastery learning that ranges from number 75. Students who have declared complete learning based on mastery learning have increased from action 1, namely as many as 16 students. Discussion Based on the discussion of research results from each cycle above, it can be concluded that student activities and student learning outcomes from first cycle to third cycle continue to increase.The increase in student activity from first cycle to third cycle is seen in Figure 1 below: E-ISSN: 2614-4093 P-ISSN: 2614-4085 abilities and seen from their talents and interests.According to Lie (2007: 43) there are several reasons for heterogeneous group formation, namely are: First Action : determine the angle of an object or shape B. Second Cycle Second Action : mention angle names C. Third Cycle Third Action : describe angles This class action research plan consists of 3 cycles.Each cycle consists of 1 action.Each cycle consists of four stages, namely, planning, implementation, observation, and reflection.The planning phase (plan) is an action to improve, improve or change students' behavior and attitudes as a solution.The action / action stage is what things the researcher will or must do as an effort to improve, increase or change as desired in the study.Observation stage (observing) is an activity of observing the results or impacts of each action carried out in research, and reflection (reflect), namely the stage where the occurrence of cystensis analysis, interpretation and explanation (explanation) activities on various information obtained from implementation of actions and observations. material.Student learning outcomes reach an average of 85 and have been declared complete because it is above the standard mastery learning that ranges from number 75.Students who have been declared complete learning based on mastery learning are 23 students. Figure 1 . Figure 1.The results of the average value of activities from first cycle to third cycle Figure 2 . Figure 2. Average Value of Student Learning Outcomes From First Cycle to Third Cycle Table 2 . 
Essential Findings of Second Cycle Students' activities in angular learning using the Contextual Teaching and Learning approach in fourth grade of elementary school are quite good from action 1.This is evidenced by the decreasing students who do not participate in group activities, students have little courage to ask questions, express opinions and responding to researchers' questions, besides that students have begun to show enthusiasm in participating in learning even if it is not maximal.The value of student activity has increased from action 1, reaching 73.3.b.Student learning outcomes in angular learning using
5,090.4
2018-09-22T00:00:00.000
[ "Education", "Mathematics" ]
GaN-based Matrix Converter Design with Output Filters for Motor Friendly Drive System
This paper introduces a gallium nitride (GaN) high electron mobility transistor (HEMT)-based matrix converter for motor-friendly drive systems. The fast switching characteristic of GaN devices causes high dv/dt, which increases the importance of noise immunity and the reduction of parasitic components in the system design. In addition, high dv/dt in motor drive systems leads to voltage spikes at the motor input terminal and leakage current through the motor chassis. Accordingly, the gate drive circuit is built from devices with high common-mode transient immunity. A printed circuit board was designed to minimize parasitic inductance, which was analyzed through simulations. To mitigate the dv/dt of the voltage applied to the motor and the leakage current, a dv/dt filter and a sine-wave filter were used as output filters of the matrix converter. The effectiveness of each filter was verified by driving an induction motor.
Introduction
Wide-bandgap (WBG) semiconductors such as the silicon carbide (SiC) metal-oxide-semiconductor field-effect transistor (MOSFET) and the gallium nitride (GaN) high electron mobility transistor (HEMT), with high efficiency and fast switching, have emerged as an alternative to silicon (Si) semiconductors in industry. WBG devices have characteristics superior to those of Si devices. Figure 1 shows the characteristics of Si, SiC, and GaN devices. The GaN HEMT offers a high electric breakdown field, high thermal conductivity, and high electron mobility [1][2][3]. The lateral two-dimensional electron gas (2DEG) channel formed on AlGaN/GaN heteroepitaxy has high electron mobility, which enables fast switching transients together with low capacitance. To exploit these advantages, AC motor drive systems with WBG devices have recently been studied [4][5][6][7][8][9][10][11]. In ref. [4], a WBG-based integrated motor drive is applied to high-performance electro-hydrostatic actuators. In refs. [6,8], the designs of GaN-based integrated modular motor drive systems are introduced; the fabricated system has a power density of 0.71 kW/L, a drive efficiency of 98%, and a motor efficiency of 96.6% [8]. The factors that degrade the switching properties of SiC devices in pulse-width modulation (PWM) inverter-fed induction motor drives are presented in [9], and the phenomenon of shaft voltage rising due to fast switching is studied in [11]. As these papers show, many problems arise when WBG devices are applied to motor drive systems. In particular, the dv/dt caused by fast switching transients makes it difficult to operate the system safely and stably. High dv/dt in motor drive systems reduces the life of the motor and is a key factor generating electromagnetic interference (EMI). When high dv/dt occurs, a common-mode (CM) voltage is formed, and the resulting current through the motor bearings can shorten their life. Similarly, the common-mode current flowing through the ground line causes common-mode EMI in the system. To solve this problem, a method of directly modifying the motor structure is presented in [12]: the common-mode voltage generates electromagnetic coupling between the stator and rotor windings, so an electromagnetic shielding slot wedge is applied between the stator winding and the rotor winding to form a shielding layer that decreases the current flowing to the bearings.
Motor drive applications with conventional inverters have a back-to-back structure in which two voltage source inverters (VSIs) share the same DC link, which consists of capacitors of large volume. The matrix converter is a good alternative to the back-to-back architecture: it allows direct conversion of AC input voltage to AC output voltage, and there are no DC-link capacitors. The matrix converter for motor drive systems has been increasingly studied owing to its advantages of bidirectional power transfer, adjustable input power factor, and reduced system size [33][34][35][36][37][38]. There are many studies on three-phase matrix converters using WBG devices; however, this research has mainly focused on SiC devices, not GaN devices. In [39], a bidirectional GaN device is fabricated and a matrix converter is operated to drive a 0.4 kW induction motor. An LC output filter is shown in the circuit of that paper, but the filter design process is not described, which makes it difficult to know the exact purpose of its use. The objective of this paper is to design a GaN-based matrix converter that remains reliable under the high dv/dt caused by fast switching transients. This paper is organized as follows. Section 2 presents the designs of the gate driver circuit and the power board for driving the GaN HEMTs; the design goals are immunity to dv/dt and stable gate driving capability, the layout is carried out to minimize the interference between the power loop and the gate loop, and the parasitic inductances between the gate driver circuit and the source of the GaN HEMT are estimated using the Q3D Extractor. In Section 3, the dv/dt filter and the sine-wave filter are introduced as output filters for motor drive applications to reduce the high dv/dt at the motor input terminal. In Section 4, the switching performance of the GaN HEMT on the fabricated printed circuit board (PCB) is tested, and the experimental results for each output filter type are compared. Finally, Section 5 summarizes the matrix converter design for GaN HEMTs in motor drive systems.
Design and Practical Implementation for GaN-Based Power System
Because the GaN device has lower input capacitance, gate charge, and output capacitance than a super-junction MOSFET, switching transients of hundreds of kV/µs are possible. However, the CM current caused by this dv/dt can flow through the parasitic inductance and capacitance of the devices and PCB patterns, producing high spike voltages and noise, which can cause malfunction of gate drivers, sensors, and microprocessors [40]. For proper design at high dv/dt, devices with high common-mode transient immunity (CMTI) should be selected, and the layout should minimize the effects of parasitic inductance. The parasitic inductances of the layout were then extracted and the results analyzed. Figure 2 shows the schematic of the proposed gate driver circuit for the GaN HEMT. The DC/DC converter is an isolated regulator with a single +9 V, 1 W output. Because of the low inter-winding capacitance between the input and output of the DC/DC converter, the CM current flowing through the inter-winding capacitance during fast transients is reduced; therefore, the gate driver circuit can achieve high CMTI capability. For the gate driver, a Si827x series device from Silicon Labs is used. It is a digital isolated driver: the input signal modulates a carrier provided by an RF oscillator using on/off keying, which provides superior noise immunity, including immunity to magnetic fields. It also has the smallest input capacitance among common gate driver isolation methods such as optocouplers, magnetic couplers, and capacitive couplers. The gate driver has a CMTI of 200 kV/µs and a coupling capacitance of 0.5 pF, which gives the gate driver circuit immunity against high dv/dt.
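To put these two figures in perspective, a back-of-the-envelope estimate (not taken from the paper) of the common-mode displacement current injected through the driver's isolation barrier at the rated slew rate is simply i = C · dv/dt, using the 0.5 pF coupling capacitance and 200 kV/µs CMTI quoted above.

```python
# Rough estimate (not from the paper) of the common-mode displacement current
# injected through the gate driver's isolation barrier during a fast transient:
# i = C_coupling * dv/dt.

C_COUPLING = 0.5e-12      # F, coupling capacitance quoted for the Si827x driver
DVDT = 200e3 / 1e-6       # V/s, the driver's rated CMTI of 200 kV/us

i_cm = C_COUPLING * DVDT  # displacement current through the barrier
print(f"Common-mode displacement current at rated CMTI: {i_cm * 1e3:.0f} mA")
# -> about 100 mA, which is why a sub-picofarad barrier capacitance matters
```

Even a fraction of a picofarad thus translates into roughly 100 mA of transient common-mode current, which is why both the low coupling capacitance and a clean return path are emphasized in the design.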
To assure the stability of the gate driver circuit, unwanted turn-on caused by the Miller effect must be prevented. For this purpose, a low-impedance gate current path must be secured at turn-off. The selected gate driver has dual outputs, so the drive can be divided into an on-current path and an off-current path, and the impedance of each path can be adjusted with its own gate resistor. An appropriate ratio of the turn-on to turn-off gate resistance ensures the stability of the gate driver; a ratio between 5 and 10 is recommended according to [41]. In addition, the parasitic inductances in the gate driver circuit are among the main factors affecting its stability. The gate inductance (Lg) and the common source inductance (Lcs) are the key components that cause ringing and overshoot on the gate-source voltage; if these values are large, the resulting oscillation can damage the devices. Minimizing parasitic inductance in the gate driver circuit should therefore be achieved through an appropriate PCB layout design.
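As a rough illustration of how these quantities interact, the gate loop can be treated as a series R-L-C circuit formed by the gate resistor, the loop inductance Lg, and the device input capacitance Ciss. The sketch below uses the loop inductance extracted later in the paper (about 4.3 nH); the Ciss value and the gate resistor values are assumed placeholders, not values from the paper.

```python
import math

# Illustrative series R-L-C check of the gate loop (values partly assumed).

L_G = 4.3e-9              # H, gate-loop inductance (from the Q3D estimate below)
C_ISS = 500e-12           # F, assumed device input capacitance
R_ON, R_OFF = 10.0, 2.0   # ohms, assumed turn-on / turn-off gate resistors

f_ring = 1.0 / (2.0 * math.pi * math.sqrt(L_G * C_ISS))   # undamped ringing frequency
r_crit = 2.0 * math.sqrt(L_G / C_ISS)                     # critical damping resistance

print(f"gate-loop resonance : {f_ring / 1e6:.0f} MHz")
print(f"critical damping R  : {r_crit:.1f} ohm")
print(f"Ron/Roff ratio      : {R_ON / R_OFF:.1f} (recommended 5-10)")
```

With these assumed numbers, the off-path resistance alone would leave the loop underdamped, which is exactly the kind of trade-off the resistor selection and the low-inductance layout described next are meant to address.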
Design of PCB Layout
To fabricate the designed gate driver circuit and power board, the PCB layout was carried out using Altium software. The bidirectional GaN HEMT switches are configured in a common-source structure. For the PCB layout, it is important to minimize parasitic inductances such as the gate loop inductance, the common source inductance, and the power loop inductance. The GaN HEMT device used in this paper has a package without wire bonds. Compared with TO-247 switches, whose package source inductance is about 10 to 15 nH because of their long leads, the GaN HEMT package has a value of about 0.05 nH. This very small inductance is a major advantage in GaN HEMT operation and minimizes the impact of package parasitics. To reduce the gate loop inductance, the gate driver is placed close to the gate of the GaN HEMT device, and low inductance is secured by using copper pours instead of narrow traces. To reduce the source inductance (Ls) between the bidirectional GaN HEMT switches, a vertical power loop was used [42]. This structure diminishes interference between the gate loop path and the power loop path and keeps Ls low. Unlike the typical method of placing the switches in a lateral structure on the same layer, as shown in Figure 3a, one switch is placed on the top layer and the other on the bottom layer, as in Figure 3b. In the lateral structure the gate loop and the power loop are parallel, which is sensitive to magnetic noise interference, whereas the vertical structure minimizes the coupling between the gate loop and the power loop. With this structure, the inductance between the switches is limited only by the package inductance, the size of the device, and the PCB thickness. The paths among the sources of each switching device are connected through multiple vias, which yields a low value of Ls.
Estimation of Parasitic Inductance
To verify that the designed PCB has minimal parasitic inductance, a finite-element analysis (FEA) was conducted using ANSYS Electronics software. The sources of a bidirectional switch are connected through several vias, as shown in Figure 4a. The AC resistance and AC inductance can be obtained at various solution frequencies using the Q3D Extractor. The analysis yields an Ls of 1.5 pH for one bidirectional switch, which confirms that the vertical loop design achieves low source inductance. Similarly, the gate loop inductance is analyzed through the FEA. In this paper, the parasitic inductance of the gate-off current path is estimated at 10 kHz, as shown in Figure 4b. The estimated inductance is 4.32 nH, which indicates that the designed gate driver circuit is reliable enough to drive the GaN HEMT devices.
Output Filter Design
Table 1 shows the specifications associated with dv/dt in the National Electrical Manufacturers Association (NEMA) standards [43]. The specifications limit the maximum voltage and the rise time at the motor input terminal to ensure stable motor operation. The rise time of the GaN HEMT used in this paper is 12.4 ns, so if the dv/dt is not mitigated by an output filter or by the gate resistance, the minimum rise time in the standards is not satisfied. Therefore, to alleviate the high dv/dt, a dv/dt filter and a sine-wave filter were each applied as the output filter. The output filters are shown in Figure 5, and their transfer functions follow from the filter networks shown there. The resonant frequency of each filter is calculated as f_sine = 1/(2π√(L_sine·C_sine)) and f_dvdt = 1/(2π√(L_dvdt·C_dvdt)), where f_sine is the resonant frequency of the sine-wave filter and f_dvdt is the resonant frequency of the dv/dt filter. The resonant frequency is determined in the same way by the inductance and the capacitance of each filter; however, depending on the purpose of each filter, the resonant frequencies are placed in different bands. The resonant frequency of the sine-wave filter should lie above the fundamental frequency and below the switching frequency. The sine-wave filter attenuates the switching-frequency components of the output voltage, so only the sinusoidal voltage at the fundamental frequency is applied to the motor and the problems caused by high dv/dt are mitigated. The purpose of the dv/dt filter, on the other hand, is to limit the rise time of the voltage applied to the motor input terminal. The minimum rise time in the standard is 0.1 µs, so the resonant frequency of the dv/dt filter should lie above the switching frequency and below 10 MHz [31]. Table 2 shows the parameters of the designed output filters. The frequency response of each output filter is illustrated in Figure 6; as shown there, the resonant frequency of each output filter lies within the above frequency constraints.
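A minimal sketch of this placement check, assuming illustrative component values (the actual values are those listed in Table 2, which is not reproduced in the text): the resonant frequency f0 = 1/(2π√(LC)) of each filter is computed and tested against the frequency windows stated above, using the 10 kHz switching frequency from the experiments and an assumed 60 Hz fundamental.

```python
import math

# Resonant-frequency placement check for the two output filters.
# L and C values below are placeholders chosen for illustration only.

F_FUND = 60.0       # Hz, assumed fundamental output frequency
F_SW = 10e3         # Hz, switching frequency used in the experiments

def resonant_freq(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f_sine = resonant_freq(1e-3, 10e-6)    # sine-wave filter: assumed 1 mH, 10 uF
f_dvdt = resonant_freq(10e-6, 10e-9)   # dv/dt filter: assumed 10 uH, 10 nF

assert F_FUND < f_sine < F_SW, "sine-wave filter must resonate between f_fund and f_sw"
assert F_SW < f_dvdt < 10e6,   "dv/dt filter must resonate between f_sw and 10 MHz"
print(f"f_sine = {f_sine:.0f} Hz, f_dvdt = {f_dvdt / 1e3:.0f} kHz")
```

With these placeholder values, the sine-wave filter resonates near 1.6 kHz and the dv/dt filter near 500 kHz, i.e., inside the respective windows described above.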
Figure 7 shows the output voltages of the matrix converter and of the dv/dt filter. The magnitude of the output voltage is taken as the peak value of the input line-to-line voltage, which is the maximum value that the matrix converter can generate. The designed dv/dt filter enables the matrix converter to meet the specification in Table 2.
Experimental Results
Figure 8 shows the schematic of the direct matrix converter. A normally-off, enhancement-mode 650 V GaN HEMT (GS66516T, GaN Systems) is used in this paper [44]. To implement the modulation algorithm, the input line-to-line voltages were measured using LV-25P sensors from LEM, and a control board equipped with a TMS320F28377S microcontroller from Texas Instruments was used. The experimental electrical specifications are described in Table 3.
Switching Performance
Experiments were carried out to evaluate the switching performance of the fabricated matrix converter. Since the switching characteristics of the drain-source voltage can be degraded in induction motor drives [9], the switching performance experiment was carried out under resistive load conditions. Figure 9 shows the gate-source voltage and the drain-source voltage at turn-on and turn-off of the GaN HEMT, respectively. The rise time of the gate-source voltage is 5.9 ns and its fall time is 5.4 ns. The rise time of the drain-source voltage is 194.2 ns and its fall time is 10.4 ns. While very fast transients are achieved in the gate-source voltage, the rise time of the drain-source voltage is considerably larger. This may occur because the power loop inductance, apart from the source inductance between the GaN HEMT switches, was not considered in the design procedure. This needs to be improved in future work to obtain correct turn-off behavior.
Output Filter Experiments
For the GaN-based matrix converter in the motor drive system, the dv/dt filter and the sine-wave filter were installed for experimental validation of the performance analysis. Figure 10 shows the experimental results of the matrix converter without and with the different output filters, where V_UV is the line-to-line terminal voltage of the induction motor and I_U is the motor current in phase U. The case without any filter is shown in Figure 10a: when no output filter is adopted, V_UV and I_U contain high-frequency components above the 10 kHz switching frequency in the steady state. With the dv/dt filter, the high-frequency components of the motor input current are considerably reduced, although the line-to-line terminal voltage still contains high-frequency components, as shown in Figure 10b. As shown in Figure 10c, where the sine-wave filter is adopted, pure sinusoidal waveforms of V_UV and I_U without high-frequency harmonics are observed, as expected.
Here, it is clear that the output filters reduce the voltage stress of the motor drive system; a more detailed performance comparison of the output filters is given in terms of the common-mode current in the following section. Figure 11 shows the line-to-line voltage at the induction motor input terminal of the GaN-based matrix converter without and with the dv/dt filter. As shown in Figure 11a, without a filter the voltage exhibits high-frequency ringing and high dv/dt spikes due to the fast rise time of the GaN devices, measured here as 78 ns. However, as shown in Figure 11b, the 0.1 µs minimum rise time required by the NEMA standards is satisfied with the designed dv/dt filter, which increases the rise time to 478 ns. The dv/dt filter thus significantly mitigates the high-frequency ringing components at the motor terminal.
Figure 11. Experimental results of the line-to-line voltage at the motor input terminal (a) without an output filter and (b) with the dv/dt filter.
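To gauge what these rise times mean in terms of slew rate, a back-of-the-envelope estimate (not from the paper) can be made by assuming an illustrative 300 V line-to-line voltage step; the actual step amplitude depends on the grid voltage used in the experiment.

```python
# Approximate dv/dt implied by the measured 10-90% rise times,
# for an assumed 300 V line-to-line voltage step (illustrative value only).

V_STEP = 300.0  # V, assumed step amplitude
for label, t_rise in [("without filter", 78e-9), ("with dv/dt filter", 478e-9)]:
    dvdt = 0.8 * V_STEP / t_rise      # 10-90% rise spans 80% of the step
    print(f"{label:18s}: dv/dt ~ {dvdt / 1e9:.2f} kV/us")
```

Under this assumption the unfiltered edge corresponds to roughly 3 kV/µs at the motor terminal, while the dv/dt filter brings it down to about 0.5 kV/µs, consistent with the qualitative improvement seen in Figure 11.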
Figure 12 shows the common-mode current measured for the GaN-based matrix converter with the various output filter options. The common-mode current decreases considerably when the output filters are used. For a more detailed analysis, the frequency spectra without and with the output filters are shown in Figure 13. The magnitude of the common-mode current is reduced by the sine-wave filter in the frequency range from 1.5 MHz to 10 MHz. Meanwhile, in the case of the dv/dt filter, the common-mode current amplitude decreases slightly in the frequency range from 3.5 MHz to 10 MHz. From the frequency spectrum results, it is confirmed that the sine-wave filter mitigates the common-mode current better, owing to the pure sine-wave voltage it provides at the motor input terminal.
Figure 13. (a) Common-mode current spectrum without and with the sine-wave filter; (b) common-mode current spectrum without and with the dv/dt filter.
Conclusions
In this paper, a design of a GaN-based matrix converter was proposed to obtain robustness against high dv/dt. The gate driver circuit was designed for high CMTI and reliable operation of the GaN HEMT. The PCB layout was also designed to minimize parasitic inductance and the interference between the digital loop and the power loop, resulting in very low parasitic inductance in the gate and source paths. It was confirmed that the NEMA standards were satisfied by applying the dv/dt filter and the sine-wave filter. With the dv/dt filter, the experimental results show that the rise time of the line-to-line voltage at the motor input terminal is 478 ns, above the 0.1 µs minimum. With the sine-wave filter, nearly pure sinusoidal output voltage and current waveforms were obtained at the rated magnitude and frequency, without significant harmonic components. Finally, the CM current obtained with each output filter was compared. The magnitude of the CM current decreased by about 47% when the output filters were applied, compared with the case without output filters. The CM current spectra show that the output filters are effective in the MHz frequency band and that the sine-wave filter reduces the CM current more than the dv/dt filter. Based on the experimental results, the dv/dt filter is suitable when compliance with the NEMA standards is the priority, and the sine-wave filter is suitable when CM current reduction is the priority.
8,751.4
2020-02-21T00:00:00.000
[ "Engineering", "Physics" ]
Thermal transport in long-range interacting Fermi-Pasta-Ulam chains
Studies of thermal transport in long-range (LR) interacting systems are currently particularly challenging. The main difficulties lie in the choice of boundary conditions and the definition of the heat current when driving systems into an out-of-equilibrium state with the usual thermal reservoirs. Here, by employing a reverse type of thermal bath that overcomes such difficulties, we reveal the intrinsic features of thermal transport in a LR interacting Fermi-Pasta-Ulam chain. We find that for an appropriate range value of the LR exponent $\sigma =2$, while a \emph{nonballistic} power-law length ($L$) divergence of the thermal conductivity $\kappa$, i.e., $\kappa \sim L^{\alpha}$, still persists, its scaling exponent $\alpha \simeq 0.7$ can be much larger than the usual predictions for short-range interacting systems. The underlying mechanism is related to the system's new heat diffusion process, weaker nonintegrability, and peculiar dynamics of traveling discrete breathers. Our results shed light on the search for low-dimensional materials supporting higher thermal conductivity by involving appropriate LR interactions.
In low-dimensional momentum-conserving systems, Fourier's law of heat conduction, which implicitly states that the heat current J is proportional to the temperature gradient ∇T with the thermal conductivity κ a constant for a bulk material, is not valid. Instead, κ shows a sublinear power-law divergence with increasing system size L, i.e., κ ∼ L^α (0 < α < 1). This issue is in part motivated by recent carbon nanotube (CNT) technology [4,5], and also by the desire to understand the microscopic origin of non-Fourier thermal transport. Additionally, how to efficiently control heat has recently become a fascinating topic, which has led to the emerging field of phononics [6][7][8][9][10]. Indeed, continuous theoretical studies of classical coupled oscillators with nearest-neighbor (NN) interactions, such as the Fermi-Pasta-Ulam (FPU) models, and experimental investigations of 1D CNTs have confirmed such anomalous effects to a quite good level. The theoretical predictions of α vary from 0.2 to 0.5 [11,12], which has been experimentally corroborated in 1D single-walled CNTs [4,5]. However, the same experiments performed on 1D multi-walled CNTs [4] showed that α can be even higher (α = 0.6-0.8). As real systems are usually plagued by many effects, such as defects, isotopic disorder, and impurities, which would greatly reduce the divergence, this poses a fundamental question: is there any theoretical model, free of such effects, that can support a divergence exponent as high as the experimental observations? In this Letter we show that a new class of α ≃ 0.7, which falls within α = 0.6-0.8, can be achieved in a theoretical model of a long-range (LR) interacting FPU chain. This result provides a more solid theoretical ground for the sufficiently high thermal conductivity appearing in 1D materials. It also suggests the need to extend the study of 1D thermal transport beyond short-range interacting systems. In principle, this indicates a new direction for further exploring thermal transport in 1D systems. Before proceeding, we would like to stress that the study of thermodynamics in LR interacting systems is certainly far from trivial and of practical interest. Theoretically, LR interactions are ubiquitous in many physical systems, ranging from self-gravitating systems to nanoscale systems, and to quantum systems [13][14][15][16].
These LR interacting systems can display many peculiar features, such as ensemble inequivalence, the lack of additivity, and anomalous diffusion of energy. Practically, present technology allows the fabrication of materials with LR interactions; Coulomb crystals [17], Ising pyrochlore magnets [18,19], and permalloy nanomagnets [20] are some noticeable examples. Our model is an FPU-type system of N particles including LR interactions. In the Hamiltonian, x_i is the ith particle's displacement from its equilibrium position, p_i is its momentum, and the factor |j − i|^{-σ} weights the interaction strength between the ith particle and its |j − i|th neighbor (σ denotes the range exponent of the LR interaction). The thermal conduction properties of this model have been studied by the usual method [21], i.e., by directly coupling two thermal reservoirs to the two ends of the system [22]. However, in such studies the nonadditivity of the system causes the central bulk particles and the two thermalized ends to become implicitly identified with each other, so that the whole system may eventually display properties that do not correspond to the original one. For this reason, it has been pointed out that one needs to distinguish between heat flows towards or from the reservoirs and those within the system [23]. Viewing this, here we use an alternative approach, the "reverse nonequilibrium molecular dynamics" (RNEMD) method [24,25], to perform our study. This method avoids such boundary effects, and within it the heat current is naturally defined. Note that our model only considers LR interactions in the quartic anharmonic term [21], which differs from the model in [23], but this does not alter our general conclusion [26]. In addition, we do not include the Kac scaling factor Ñ. This factor was designed to restore the system's extensivity with increasing system size, but it does not help improve the system's nonadditivity [16]. It only constructs an "artificially" extensive system, and the cost for thermal transport would be a phonon group velocity that depends on the system size, which is an unwanted effect. In fact, from the dynamical point of view, the only difference between the two treatments is that time should be rescaled by Ñ^{1/2} [27].
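Because the Hamiltonian itself is not reproduced in this excerpt, the following is only an illustrative sketch of the kind of energy function described above: a nearest-neighbor harmonic chain plus a quartic anharmonic coupling whose strength decays as |i − j|^{-σ}. The prefactors, the boundary treatment, and the naive O(N²) double loop (which the paper accelerates with an FFT-based algorithm) are assumptions made for illustration, not the paper's exact definition.

```python
import numpy as np

def potential_energy(x, sigma=2.0):
    """Illustrative LR-FPU-like potential: NN harmonic + LR quartic ~ |i-j|^(-sigma).
    x: displacements of the N particles from their equilibrium positions."""
    n = len(x)
    # nearest-neighbor harmonic part (open boundaries, for simplicity)
    e_harm = 0.5 * np.sum((x[1:] - x[:-1]) ** 2)
    # long-range quartic part, weighted by |i-j|^(-sigma)
    e_quart = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            e_quart += 0.25 * (x[j] - x[i]) ** 4 / abs(j - i) ** sigma
    return e_harm + e_quart

rng = np.random.default_rng(0)
print(potential_energy(rng.normal(scale=0.1, size=32)))
```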
We shall focus on the case σ = 2 and also present the result for σ = 8 for comparison. The latter roughly corresponds to the FPU model with NN interactions, although this equivalence strictly requires σ → ∞. The former is of particular interest since ballistic transport (α ≃ 1) has been conjectured for it [21]; this has been attributed to strong finite-size effects, and it has been pointed out that obtaining convincing results is extremely hard [23]. In any case, this implies new physics. Another reason why σ = 2 is of interest is that the system can support travelling discrete breathers (DBs) without tails at zero temperature [28]. One might then ask: what happens to these moving excitations in finite-temperature systems, and how do they affect transport? The RNEMD method produces a temperature gradient in an unusual way. Unlike the traditional approach [22] of directly inducing ∇T, it imposes the heat current by frequently exchanging the kinetic energy (or momentum) between particles; once the nonequilibrium stationary state is reached, ∇T is established. This "reversion" makes the method an ideal candidate for studying thermal transport in LR interacting systems. We implement the method as follows. First, periodic boundary conditions are used, so the chain forms a ring. The ring is then decomposed into M = 32 equal slabs (each containing n = N/M particles). We number the slabs, labeling the cold one slab 1 and, accordingly, the hot one slab M/2 + 1. This labeling allows us to interchange the momentum of the hottest particle in slab 1 with that of the coldest particle in slab M/2 + 1 at a frequency f_exc = 0.1. Such interchanges cause a redistribution of the kinetic energy of the system, with an exchanged energy ∆E = Σ_exchanges (p_h² − p_c²)/2 (with unit masses), where the subscripts h and c refer to the hottest and coldest particles whose momenta are exchanged and the sum runs over all exchange events in time t. As a consequence, the relaxation of this energy difference drives two heat currents flowing from the hot to the cold slab along the two semiring sides bridging them (each side has an effective length L = N/2 − n). After the stationary state is eventually reached, the long-time-averaged current across each side is ⟨J⟩ = ∆E/(2t); accordingly, the time-averaged kinetic temperature of each slab is T_k = (1/(n k_B)) Σ_{i=n(k−1)+1}^{nk} p_i², where k_B is the Boltzmann constant (set to unity) and the sum runs over all n particles in slab k. The heat conductivity κ can then be obtained from κ = −⟨J⟩/∇T according to Fourier's law, with ∇T evaluated over the slabs between the cold and hot ones.
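A schematic sketch of the exchange step just described, under the stated conventions (unit masses, k_B = 1); the slab bookkeeping is simplified and the function names are illustrative, not taken from the paper.

```python
import numpy as np

def rnemd_exchange(p, cold_slab, hot_slab):
    """Swap the momentum of the hottest particle in the cold slab with that of
    the coldest particle in the hot slab; return the kinetic energy moved."""
    i_hot_in_cold = cold_slab[np.argmax(np.abs(p[cold_slab]))]
    i_cold_in_hot = hot_slab[np.argmin(np.abs(p[hot_slab]))]
    de = 0.5 * (p[i_hot_in_cold] ** 2 - p[i_cold_in_hot] ** 2)
    p[i_hot_in_cold], p[i_cold_in_hot] = p[i_cold_in_hot], p[i_hot_in_cold]
    return de

def slab_temperatures(p, n_slabs):
    """Kinetic temperature of each slab, T_k = <p^2> with unit mass and k_B = 1."""
    return np.array([np.mean(chunk ** 2) for chunk in np.split(p, n_slabs)])

# After a run of duration t with accumulated exchange energy dE_total:
#   J = dE_total / (2 * t)     # the two semiring sides share the imposed flux
#   kappa = -J / grad_T        # Fourier's law; grad_T fitted between the slabs
```

Repeating this for several chain lengths L and fitting log(kappa) against log(L) is how the exponent α quoted below would be extracted.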
We start our calculations from several fully thermalized systems at T = 0.5. These systems are evolved with the velocity-Verlet algorithm [29] using a time step of 0.01, which guarantees energy conservation with a relative accuracy of O(10^−5). Because for LR interacting systems the force calculation at each time step demands O(N²) operations, which is extremely expensive, we adopt an algorithm based on the fast Fourier transform [30] to accelerate the computations. With this, a transient stage of duration 10^6, during which the system reaches the stationary state, is discarded, and the subsequent evolution of duration 10^6 is used for averaging. Figures 1(a) and (b) depict two typical temperature profiles for σ = 2 and 8. In both cases a well-behaved temperature gradient is identified; the difference is that ∇T for σ = 2 is much smaller than that for σ = 8. Despite this difference, neither result shows the flat temperature profile of ballistic transport found in integrable systems [31,32]; therefore, integrable dynamics is excluded here. A power-law divergence κ ∼ L^α is observed. This is the case for σ = 2 over the whole considered range of L, while for σ = 8 the asymptotic behavior is reached only at large L, as a crossover to the NN interaction model. Indeed, the best fit gives α = 0.34 ± 0.01, which is very close to the prediction α = 1/3 for FPU chains with NN interactions [33] and within the recent predictions of two universality classes of α [34,35]. Remarkably, for σ = 2, we obtain an enhanced κ (at least one order of magnitude larger than for σ = 8 at the same L) and a quite large α (≃ 0.71). We emphasize that this α falls within the experimental observations of α = 0.6-0.8 in 1D multi-walled CNTs [4]; thus it can be expected that, as L approaches macroscopic scales, an extremely high thermal conductivity can be achieved, which undoubtedly suggests potential applications. From the theoretical point of view, this indicates new dynamical mechanisms for thermal transport. We understand these mechanisms from the following three aspects. First, the high divergence is rooted in a new type of heat diffusion process. This manifests itself in the propagation of heat fluctuations, which follows a new type of density function with a new scaling. To characterize this process, we employ the spatiotemporal correlation function of heat fluctuations, ρ_Q(m, t) [36,37]. Here, due to translational invariance, the correlation depends only on the relative distance m; ⟨·⟩ represents the spatiotemporal average; and l labels a coarse-grained bin, defined similarly to the slabs adopted in the RNEMD method (here each bin has n = 8 particles). In the definition of the heat fluctuation, Q_l(t), g_l(t), and F_l(t) (with ⟨F⟩ ≡ 0) correspond to the particle-number, energy, and pressure densities of bin l, respectively, where the energy density includes the LR contributions weighted by |j − k|^{-σ} and summed over all particles within bin l [38]. For σ = 8, ρ_Q(m, t) shows the familiar shape, with a slowly relaxing central peak together with two side peaks moving ballistically [see Fig. 2(b)], as observed in short-range interacting models. In contrast, for σ = 2, ρ_Q(m, t) indeed shows a new shape [see Fig. 2(a)]: remarkably, at long times the central peak turns into a plateau, indicating much faster relaxation. We have further checked that at short times ρ_Q(m, t) behaves similarly to ballistic transport [39]; thus our long-time dynamics, obtained without the Kac scaling, does help reveal the true transport regime. We characterize this new transport regime by studying the collapse of ρ_Q(m, t) under the Lévy-walk scaling form ρ_Q(m, t) ≃ t^{-1/γ} g(m t^{-1/γ}) [38]. As shown, the collapse is only well satisfied for σ = 8 (especially in the central parts), while for σ = 2 [see Fig. 2(c)] an excellent collapse requires a much longer time. However, this does not invalidate our use of such a formula: another relation, α = 2 − γ [38], based on the same theory and connecting γ to α, gives an excellent estimate α ≃ 2 − 1.29 = 0.71, in agreement with our thermal conduction calculation above.
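A minimal helper, assuming the standard Lévy-walk rescaling stated above, that collapses profiles recorded at different times and converts the fitted γ into the conductivity exponent via α = 2 − γ.

```python
import numpy as np

def rescale_profile(m, rho, t, gamma):
    """Return (m / t^(1/gamma), t^(1/gamma) * rho) so that profiles at different
    times t overlap when gamma is chosen correctly (Levy-walk scaling)."""
    s = t ** (1.0 / gamma)
    return np.asarray(m) / s, s * np.asarray(rho)

gamma = 1.29
alpha = 2.0 - gamma
print(f"gamma = {gamma} -> alpha = {alpha:.2f}")   # 0.71, matching the RNEMD fit
```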
Second, the new exponent is related to the system's weaker nonintegrability. The nonintegrability is characterized by the maximal Lyapunov exponent λ_max (> 0) (see Fig. 3), obtained from the standard Benettin-Galgani-Strelcyn technique [40]. For comparison, the completely integrable Toda chain [32,41], with λ_max = 0, is also shown in Fig. 3. As shown, λ_max for σ = 2 lies in between the results for σ = 8 and the Toda chain, and this does not seem to change as the system size is further increased. It thus indicates a weaker nonintegrability of the system compared with the short-range interacting FPU model. Indeed, a nonmonotonic variation of λ_max with σ (see the inset) also confirms this [21], while integrable dynamics is certainly ruled out. Therefore, a weaker nonintegrability seems to provide a mechanism for raising the divergence exponent. We finally conjecture that this weaker nonintegrability allows the system to support a new type of excitation, i.e., the travelling DBs [28], and that it is these moving DBs, together with their relatively weak interactions, that lead to the high divergence. Verifying this conjecture is of interest but usually very challenging, since it is hard to capture these moving excitations in equilibrium states because of their mobility. In view of this, we study the spatiotemporal evolution of the local energy densities E_i(t) on two time scales (t = 100 and 1000) to visualize the process (see Fig. 4). The evolutions are obtained for a short chain with N = 200 to facilitate the computation. The chain is first thermalized to T = 0.5; then the thermal baths are removed and the results are recorded and displayed with a suitable time step [∆t = 1 (10) for t = 100 (1000)]. As indicated in the upper panels of Fig. 4, for both σ = 2 and σ = 8 the short-time dynamics exhibits transport similar to the ballistic regime. This explains the ballistic scaling that we observed at short times. In contrast, on the relatively long time scale (see the lower panels of Fig. 4), the ballistic regime for σ = 8 disappears, suggesting strong interactions between heat carriers. This is apparently not the case for σ = 2: the signature of the localized excitations is still recognizable, although, probably due to their weak interactions, it now becomes weaker. Such evidence is in good accord with our conjecture. To summarize, we have presented the striking result that a high length-divergence exponent α ≃ 0.71 of the thermal conductivity, beyond the existing theoretical predictions and within the experimental observations of α = 0.6-0.8, can occur in a theoretical model of a LR interacting FPU chain at an appropriate range value σ = 2. This finding is of fundamental importance as it provides a promising ground for the very high thermal conductivity observed in 1D materials, thus pointing towards new manipulations of heat in practice [42,43]. It also opens up a new avenue for further exploring thermal transport, since such a new type of divergence should be supported by new mechanisms. We have revealed that the new exponent is related to the system's more rapid heat propagation, weaker nonintegrable chaotic dynamics, and the new microscopic dynamics of travelling DBs. The heat propagation follows a new type of density function with scaling exponent γ ≃ 1.29, which can be connected to α by the formula α = 2 − γ from Lévy-walk theory. This suggests that the Lévy-walk model, with appropriate variations, can still be useful for understanding transport in LR interacting systems. The system's weaker nonintegrability has been confirmed, and it indicates that although chaotic dynamics is not an ingredient necessary for the breakdown of Fourier's law [22], changes in its strength do significantly influence the system's thermal conduction, thus paving a new way to use nonlinearity to control transport. More interestingly, the weaker nonintegrability can result in weaker interactions among travelling DBs at thermal equilibrium, and this seems responsible for the high divergence. This last piece of evidence, together with the above, should encourage more studies of transport from the underlying microscopic dynamics.
3,794
2019-06-26T00:00:00.000
[ "Physics" ]
Fast two-photon imaging of subcellular voltage dynamics in neuronal tissue with genetically encoded indicators
Monitoring voltage dynamics in defined neurons deep in the brain is critical for unraveling the function of neuronal circuits but is challenging due to the limited performance of existing tools. In particular, while genetically encoded voltage indicators have shown promise for optical detection of voltage transients, many indicators exhibit low sensitivity when imaged under two-photon illumination. Previous studies thus fell short of visualizing voltage dynamics in individual neurons in single trials. Here, we report ASAP2s, a novel voltage indicator with improved sensitivity. By imaging ASAP2s using random-access multi-photon microscopy, we demonstrate robust single-trial detection of action potentials in organotypic slice cultures. We also show that ASAP2s enables two-photon imaging of graded potentials in organotypic slice cultures and in Drosophila. These results demonstrate that the combination of ASAP2s and fast two-photon imaging methods enables detection of neural electrical activity with subcellular spatial resolution and millisecond-timescale precision. DOI: http://dx.doi.org/10.7554/eLife.25690.001
Introduction
Neurons represent, process, and propagate information by controlling the potential across their plasma membrane. Methods to measure electrical activity are thus central to efforts to understand computations in the brain. While electrophysiological approaches have been used for decades, the need to track genetically defined neuronal subpopulations has motivated the development and application of protein-based fluorescent reporters of neural activity. In contrast to electrode-based methods, these optophysiological indicators also allow monitoring without placement of physical probes near or in the neurons of interest. Thus, they can enable easier and less invasive measurement of activity from individual neurons and from their subcellular compartments such as axons and dendrites. Genetically encoded calcium indicators are commonly used to detect the forms of neuronal activity that trigger calcium flux into neurons (Grienberger and Konnerth, 2012; Tian et al., 2012; Lin and Schnitzer, 2016). For example, recent calcium indicators such as GCaMP6f can detect single action potentials (APs) in cell bodies and synaptic responses in spines (Chen et al., 2013). However, calcium concentration is not directly related to membrane potential and therefore cannot be used to follow voltage changes that do not result in substantial calcium fluxes. Calcium indicators thus cannot effectively report hyperpolarizations and somatic subthreshold depolarizations in many neuronal cell types. The slow kinetics of calcium indicators and of calcium transients also limit the ability of calcium indicators to report voltage changes with high temporal precision and to track fast trains of action potentials (Koester and Sakmann, 2000; Helmchen et al., 1996; Theis et al., 2016). To follow membrane potential dynamics more accurately, fluorescent indicators that monitor transmembrane voltage rather than calcium concentration have been developed (Lin and Schnitzer, 2016; Knöpfel et al., 2015).
These genetically encoded voltage indicators (GEVIs) have been used to image activity of neuronal populations in vivo (Akemann et al., 2010(Akemann et al., , 2012Scott et al., 2014;Carandini et al., 2015;Mutoh et al., 2015;Shimaoka et al., 2017) and of single neurons within intact brain tissue (Ahrens et al., 2012;Akemann et al., 2013;Flytzanis et al., 2014;Gong et al., 2014Gong et al., , 2015Storace et al., 2015) using widefield one-photon illumination. However, two-photon microscopy is the preferred modality for in vivo studies as it allows imaging with deeper tissue penetration and with lower background fluorescence and phototoxicity outside of the focal point (Svoboda and Yasuda, 2006;Prevedel et al., 2016). GEVI-based voltage imaging in intact tissue with two-photon excitation has only been demonstrated for monitoring neural activity with either low temporal resolution (less than 100 Hz), low spatial resolution (averaging over ensembles of cells), or both (Ahrens et al., 2012;Akemann et al., 2013;Storace et al., 2015;. Robust single-trial two-photon GEVI imaging with millisecond-timescale and cellular or subcellular resolution in intact neuronal tissue has not yet been reported. Here, we demonstrate GEVI-based two-photon voltage imaging with millisecond-timescale precision, subcellular resolution, and the ability to simultaneously monitor spatially segregated locations. We first describe the development and characterization of ASAP2s, a higher-sensitivity variant of the GFP-based indicator ASAP1 (St-Pierre et al., 2014). We next compare ASAP2s with other GEVIs for two-photon imaging of light-evoked subcellular voltage dynamics in living flies. Having evaluated ASAP2s in vitro and in vivo, we deploy this new indicator to image voltage dynamics using randomaccess multi-photon microscopy in organotypic hippocampal slice cultures, where we demonstrate the ability of ASAP2s to detect action potentials, subthreshold depolarizations, and hyperpolarizations in individual cells in single trials. Finally, we show that ASAP2s can report action potential backpropagation in dendritic arbors and that this new GEVI enables single-trial, single-voxel spike detection at the soma. Development and characterization of ASAP2s We aimed to improve and characterize the performance of GEVIs for reporting rapid neuronal electrical activity using two-photon microscopy in brain tissue. A previous study demonstrated that the two-photon properties of GEVIs can differ from their one-photon characteristics, with several indicators exhibiting response amplitudes that were more than 75% smaller with two-photon than with one-photon excitation (Brinks et al., 2015). In contrast, the ASAP1 indicator (St-Pierre et al., 2014) was unique in maintaining similar response amplitudes under one-and two-photon illumination. We therefore considered ASAP1 as a promising template for two-photon imaging. The probability of correctly identifying spikes from optical noise is higher for sensors with larger response amplitudes, greater brightness, and longer signal decay (slower off-rate) (Wilt et al., 2013). We therefore sought to develop sensors with improvements in one or more of these characteristics. We performed rational mutagenesis in the transmembrane four-helix voltage-sensing domain (VSD) of ASAP1 ( Figure 1A), focusing on residues important for sensing or responding to changes in the electric field. We tested the resulting mutants by patch-clamp electrophysiology in HEK293A cells. 
We first mutated a conserved site in the first helix (S1) that is thought to be part of a hydrophobic plug that focuses the electric field in homologous proteins (Lacroix and Bezanilla, 2012). However, none of the mutants exhibited improved response amplitudes to rapid, millisecond-timescale changes in electrical activity (Figure 1-figure supplement 1A-H). We then targeted positively charged residues in the fourth helix (S4) responsible for sensing the transmembrane electrical field (Bezanilla, 2008). ASAP1 with an R415Q mutation, which neutralizes one of these sensing charges (Figure 1B), met two of our key design criteria: improved voltage responsiveness in HEK293A cells (Figure 1-figure supplement 1I) and a slower off-rate than ASAP1 (Table 1). We thus designated this more sensitive variant ASAP2s. Prior to deploying ASAP2s in brain tissue, we first characterized its performance using immortalized cells at 22ºC. In response to a 1 s, 100 mV depolarization from −70 mV in HEK293A cells, ASAP2s exhibited a steady-state fluorescence change of −38.7 ± 1.1%, versus −23.3 ± 1.1% for ASAP1 (mean ± standard error of the mean, p<0.001, t-test), a 66% improvement (Figure 1C,D). (From the Figure 1 legend: representative fluorescence responses of ASAP1 and ASAP2s to voltage steps from −100 to 50 mV, measured at 5 ms intervals and normalized to the fluorescence at the −70 mV holding potential; (E) two-photon excitation spectra of ASAP1, ASAP2s, and EGFP, expressed in HEK293-Kir2.1 cells with a resting membrane potential of ~−77 mV, evaluated every 20 nm from 700 to 1040 nm and normalized to peak brightness; traces are the mean of >30 cells. DOI: 10.7554/eLife.25690.002.) The steady-state response of ASAP2s to voltage steps has the largest amplitude among GEVIs based on fluorescent proteins, surpassing ArcLight Q239 (−32 to −35%; Zou et al., 2014), MacQ-mCitrine (~−20%), and Ace2N-mNeon (<−5% steady-state, ~−19% peak; Gong et al., 2015). We next evaluated the kinetics of ASAP2s and ASAP1 by fitting the optical responses to 100 mV step depolarizations in HEK293A cells. We also evaluated ArcLight Q239 (hereafter designated ArcLight), an indicator previously used to benchmark new GEVIs (St-Pierre et al., 2014). Fluorescence responses of all GEVIs were best fit by a weighted sum of two time constants (Table 1). We focus here on the faster time constants given their greater importance for tracking rapid neuronal activity. The fast depolarization time constants (on-rates) were 5.2 ms for ASAP2s and 2.9 ms for ASAP1 (Table 1), much faster than the 20 ms for ArcLight. Critically, ASAP2s' fast response to repolarization (fast off-rate) was 24 ms, or ~10-fold slower than ASAP1 (Table 1), a useful change for improving spike detection (Wilt et al., 2013), while still being sufficiently rapid to track fast trains of AP waveforms at 100 Hz in single trials (Figure 1-figure supplement 1J). We next confirmed that the R415Q mutation in ASAP1 did not affect its brightness under either one- or two-photon excitation. As brightness depends on the membrane potential, we evaluated indicator brightness in HEK293-Kir2.1, a cell line with a plasma membrane potential around −77 mV, close to the resting membrane potential of many neurons (Zhang et al., 2009). When expressed in these cells, ASAP2s matched the brightness of ASAP1 under both one- and two-photon illumination (Figure 1-figure supplement 2A,B).
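The two-time-constant fits referred to above (and in the Materials and methods, where rise and decay phases are fit with sums of two exponentials in MATLAB) can be reproduced with standard curve fitting. Below is a minimal Python sketch on synthetic data; the trace, noise level, and initial guesses are illustrative assumptions, not the study's data or analysis scripts.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a1, tau1, a2, tau2):
    """Weighted sum of two exponential components, the functional form used
    to describe GEVI step responses (fast and slow time constants)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic "repolarization" trace: a fast 24 ms component (mirroring the
# ASAP2s fast off-rate quoted in the text) plus a made-up slow component.
rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 0.0004)          # 2.5 kHz sampling over a 1 s step
trace = double_exp(t, 0.7, 0.024, 0.3, 0.150) + rng.normal(0, 0.01, t.size)

# Fit and report the faster time constant and its relative weight.
p0 = (0.5, 0.01, 0.5, 0.1)               # rough initial guesses
popt, _ = curve_fit(double_exp, t, trace, p0=p0)
a1, tau1, a2, tau2 = popt
fast_tau, fast_a = min((tau1, a1), (tau2, a2))
print(f"fast tau = {fast_tau*1e3:.1f} ms, weight = {fast_a/(a1+a2):.2f}")
```

The faster of the two fitted time constants plays the role of the fast on- or off-rate discussed in the text; the slower component captures the remainder of the response.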
Two-photon microscopy experiments were performed by exciting GEVIs at~920 nm, the wavelength at which ASAP1 and ASAP2s are maximally excited ( Figure 1E). The commonly-used fluorescent protein EGFP, which we evaluated as a standard, also produced maximal brightness when excited at 920 nm, consistent with previous reports (Drobizhev et al., 2011). We next compared the photostability of ASAP1 and ASAP2s in HEK293-Kir2.1 cells. We also sought to compare the photobleaching kinetics of the ASAP indicators to those of ArcLight and EGFP. Because of fundamental differences in apparent photobleaching between membrane and cytoplasmic probes (Brinks et al., 2015), it is most appropriate to compare photostability under conditions where EGFP is membrane localized. We therefore created a standard by replacing the circularly permuted GFP in ASAP1 with EGFP; we designated the resulting probe ASAP1::EGFP. Under one-photon widefield microscopy at identical illumination power, ASAP1 and ASAP2s photobleached similarly to each other and to ArcLight and more slowly than ASAP1::EGFP (Figure 1-figure supplement 3A,B). Consistent with the photobleaching behavior of superfolder GFP, the fluorescent protein from which the GFP in ASAP indicators was derived, photobleaching kinetics were best fit by a two-term exponential (Pédelacq et al., 2006). In contrast, ArcLight and ASAP1:: EGFP photobleached with monoexponential kinetics. We observed a partial recovery in ASAP1 and ASAP2s fluorescence following incubation in darkness ( Figure 1-figure supplement 3C-F), possibly due to dark reversion of bleached to unbleached molecules (Sinnecker et al., 2005), diffusion of unbleached probes to the illuminated area, or both (Pincet et al., 2016). Longer dark incubation resulted in greater fluorescence recovery, yielding 8.9 ± 0.6% recovery of the original fluorescence for a 0.5 min incubation, and 22.4 ± 0.7% for a 5-min incubation ( Under two-photon laser scanning excitation, ASAP1 and ASAP2s rapidly lost~30% of their initial brightness but then bleached at slower or similar rates compared with ArcLight and ASAP1::EGFP ( Figure 1-figure supplement 4A,B). The photobleaching kinetics of all probes was best fit using exponentials with an additional term compared with their one-photon photobleaching curves. We do not have a mechanistic explanation for this difference in photobleaching kinetics. Increasing the power per pixel increased the rate of photobleaching, but the relationship between photobleaching kinetics and laser power was complex and nonlinear (Figure 1-figure supplement 4C,D). Photobleaching rates are expected to vary based on the power at each pixel, which depend not only on laser power but also on the pixel duty cycle, defined as the percentage of time the laser is exciting each location corresponding to the image pixels. The pixel duty cycle is calculated as the product of the frame acquisition rate and the length of time the laser resides at each location in each sweep (the dwell time). As anticipated, two conditions with identical power per pixel gave similar photobleaching rates, despite differing in laser power, frame rate, and dwell time (Figure 1-figure supplement 4E). Incubation in the dark resulted in fluorescence recovery, as observed under onephoton illumination: 20-25% of original fluorescence was recovered following a 5-min dark incubation for both ASAP1 and ASAP2s and across two conditions with distinct laser power (Figure 1-figure supplement 4F,G). 
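The duty-cycle bookkeeping described above can be checked with a few lines of arithmetic. This is only an illustrative sketch; the two acquisition settings below are hypothetical and are not the specific conditions used in the experiments.

```python
# Average excitation dose per pixel ~ laser power x pixel duty cycle,
# where duty cycle = frame acquisition rate x per-pixel dwell time
# (the fraction of time the laser spends on that pixel).
def power_per_pixel(laser_power_mw, frame_rate_hz, dwell_s):
    duty_cycle = frame_rate_hz * dwell_s
    return laser_power_mw * duty_cycle

# Two hypothetical conditions that differ in laser power, frame rate, and
# dwell time yet deliver the same power per pixel, and so would be expected
# to photobleach at similar rates.
a = power_per_pixel(laser_power_mw=10, frame_rate_hz=30, dwell_s=4e-6)
b = power_per_pixel(laser_power_mw=24, frame_rate_hz=25, dwell_s=2e-6)
print(a, b)  # both 1.2e-3 mW averaged per pixel
```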
If this reversible photobleaching could be repeatedly obtained over multiple cycles, it would provide a strategy for mitigating photobleaching, for example in longitudinal studies where the same group of cells is repeatedly imaged over multiple days or months. Benchmarking GEVIs for imaging voltage dynamics in cardiomyocytes and neurons We next sought to compare the ability of ASAP2s and other indicators to report voltage dynamics in excitable cells. Given the interest in using indicators to image cardiac electrical activity (Kaestner et al., 2015), we first expressed our indicators in cardiomyocytes differentiated from human embryonic stem cells. Consistent with our results in HEK293A cells, the response amplitude of ASAP2s to cardiac potentials was greater than that of the other indicators, reaching −45.1 ± 1.5% compared with −24.0 ± 1.8% for ASAP1 and −32.9 ± 1.8% for ArcLight (Figure 2A-D, Video 1). In cardiomyocytes derived from induced pluripotent stem cells, the response amplitude of ASAP2s to action potentials was −29.1 ± 2.1%, compared with −16.8 ± 2.2% for ASAP1 and −26.3 ± 2.8% for ArcLight. The faster kinetics of the ASAP indicators compared to ArcLight enabled a shorter time to peak when reporting cardiac action potentials (Figure 2E-F, Figure 2-figure supplement 1). These results demonstrate that ASAP2s can image cardiac action potentials, as previously shown with other voltage indicators in vitro (Kaestner et al., 2015; Tian et al., 2011; Leyton-Mange et al., 2014; Chang Liao et al., 2015; Werley et al., 2017) and in vivo (Chang Liao et al., 2015; Tsutsui et al., 2010; Hou et al., 2014). Before examining the ability of ASAP2s to report neuronal voltage signals, we first evaluated its membrane localization. Proper plasma membrane localization is crucial for detecting voltage events, as GEVIs trapped in internal membrane structures or aggregates do not respond to neuronal activity but can be brightly fluorescent (Baker et al., 2007). Such bright yet inactive GEVI molecules would thus reduce the overall cellular fluorescence response. Moreover, the response amplitude would show large variation between subcellular locations depending on the proportion of properly localized indicator molecules. (From the Figure 2 legend: (I) mean peak response to current-triggered APs in cultured hippocampal neurons; n = 8 (ASAP1) and 5 (ASAP2s) neurons, with 2 to 25 APs measured per neuron (n = 118 total APs for ASAP1, n = 56 for ASAP2s); AP peak voltage was 9.5 ± 2.0 mV for ASAP1 and 15.2 ± 2.0 mV for ASAP2s (mean ± SEM). (J) Widths of the optical spikes at half-maximal height; AP widths in the voltage traces were 4.2 ± 0.2 and 4.6 ± 0.1 ms for ASAP1 and ASAP2s, respectively (mean ± SEM); data are from the same neurons as panel I.) We then measured responses to single APs in cultured neurons. We predicted that the larger response amplitude of ASAP2s, coupled with its slower inactivation rate, would result in larger and longer responses to single APs. Indeed, in neuronal cultures, ASAP2s reported APs with a −12.2 ± 0.5% fluorescence change, compared with −8.6 ± 0.3% for ASAP1, a 42% improvement in response amplitude (Figure 2H,I, Figure 2-figure supplement 3A). ASAP2s' responses were also longer in duration (Figure 2J), consistent with its slower kinetics of repolarization. The combination of higher response amplitude and longer response duration resulted in a 90% greater integrated fluorescence change with ASAP2s than with ASAP1 (Figure 2K).
Under widefield illumination of these neurons, ASAP2s photobleached similarly to ASAP1 (Figure 2-figure supplement 3B,C). Benchmarking GEVIs for two-photon microscopy in Drosophila Having optimized and characterized the ASAP indicators in culture, we next sought to benchmark the performance of ASAP2s for in vivo detection of voltage dynamics using two-photon microscopy. The Drosophila visual system has recently emerged as a useful platform for evaluating voltage indicators in vivo, as it is accessible for imaging, visual stimuli can be presented with temporal and spatial precision, and many cell types are well described and can be genetically targeted. Two-photon imaging is especially critical for monitoring voltage dynamics in the fly visual system, as it uses infrared light that does not excite fly photoreceptors, unlike the visible-spectrum wavelengths commonly used for one-photon microscopy of most biosensors (Salcedo et al., 1999). We tested several GEVIs in the fruit fly visual system by expressing them selectively in L2 neurons. Along with L1 and L3, L2 neurons are monopolar cells of the lamina that receive direct inputs from the R1-6 photoreceptors (Figure 3A) (Meinertzhagen and O'Neil, 1991; Sanes and Zipursky, 2010). L1-3 each retinotopically tile the visual field and provide critical outputs to the medulla, the next layer in visual processing. We positioned awake flies in front of a screen displaying visual stimuli (alternating 300 ms dark and light flashes) and imaged L2 axon terminals through a window cut in the cuticle at the back of the head (Figure 3A). The fluorescence responses of both ASAP1 and ASAP2s indicated that L2 axon terminals transiently depolarize to light decrements and transiently hyperpolarize to light increments (Figure 3B), consistent with electrophysiological recordings in lamina monopolar cells (Zettler and Järvilehto, 1971) and our prior GEVI-imaging experiments. (From the Video 1 legend: ASAP2s optical response to cardiac APs in a human embryonic stem cell-derived cardiomyocyte (hESC-CM); the cell was transfected with ASAP2s at 27 days post-differentiation and imaged three days later at 100 Hz with a power density of 11 mW/mm² at the sample plane; quantification of the fluorescence response during the first 10 s is shown in Figure 2B; the movie corresponds to a single trial without filtering, smoothing, or photobleaching correction. DOI: 10.7554/eLife.25690.011.) In line with our in vitro observations, ASAP2s produced significantly larger mean responses than ASAP1 (a 37% increase in amplitude for depolarizations and a 39% increase for hyperpolarizations) but was slightly slower (Figure 3C,D). Similarly, these performance metrics show that ASAP2s produces larger but slower responses than ASAP2f, a recent voltage indicator primarily characterized for applications in flies. For both ASAP1 and ASAP2s, we could also observe single-cell and single-trial responses, although there was trial-to-trial variability (Figure 3B). Because ArcLight has been previously used in flies (Cao et al., 2013; Sitaraman et al., 2015), we also characterized this sensor under identical imaging conditions. We found that ArcLight produced fluorescence responses with amplitudes similar to ASAP1 but smaller than ASAP2s (Figure 3B,D).
The kinetics of ArcLight fluorescence changes were significantly slower than those of both ASAP2s and ASAP1 (Figure 3D), in agreement with their respective response kinetics in cultured cells (Table 1, Figure 2B,C,E,F, Figure 2-figure supplement 1). This result is also consistent with previous findings using ArcLight in the fly with one-photon microscopy, in which its fluorescence traces had peaks wider than the underlying voltage signals (Cao et al., 2013). Some GEVIs based on the same voltage-sensing domain as ArcLight exhibit faster kinetics. However, they exhibit smaller response amplitudes; they have not yet been tested in flies; and the vast majority have not been evaluated under two-photon illumination (Akemann et al., 2012; Barnett et al., 2012; Baker et al., 2012; Mishina et al., 2012; Han et al., 2013; Tsutsui et al., 2013; Mishina et al., 2014; Piao et al., 2015; Treger et al., 2015). Over 30 min of continuous illumination, the photostability of ASAP1, ASAP2s, and ArcLight was comparable, with all three indicators bleaching to ~30% of their initial brightness (Figure 3-figure supplement 1A,B). ASAP1 and ASAP2s both rapidly bleached by ~25% within 2 s, consistent with the photobleaching characteristics of a close variant of the GFP used in those two indicators (Pédelacq et al., 2006). ArcLight did not exhibit this rapid bleaching, but its brightness decreased more rapidly than that of the ASAP indicators after these initial 2 s. All three indicators enabled imaging of L2 voltage dynamics with a high signal-to-noise ratio (SNR) for over 30 min (Figure 3-figure supplement 1C). To extend our comparisons to GEVIs that use opsin domains for voltage sensing, we also evaluated Ace2N-2AA-mNeon (Gong et al., 2015), an indicator previously used in flies under one-photon illumination, as well as MacQ-mCitrine; neither opsin-based indicator produced a clear stimulus-evoked response under our imaging conditions (Figure 3B). Unexpectedly, the response polarity of Ace2N-2AA-mNeon was inverted compared to its response under one-photon illumination (Gong et al., 2015), an observation robust to excitation wavelengths between 920 and 1010 nm (Figure 3-figure supplement 2C,D). Overall, the poor responses of both indicators in this in vivo benchmark are consistent with published observations that voltage responses of opsin-based GEVIs can be greatly diminished under two-photon excitation (Brinks et al., 2015). Finally, to determine whether calcium imaging could provide the same information as voltage imaging in L2 neurons, we also imaged calcium dynamics using a genetically encoded calcium indicator. We chose GCaMP6f given that it features dramatically improved kinetics over previous calcium indicators (Chen et al., 2013) and is therefore better suited for monitoring the 300-ms light flashes used here. (From the Video 2 legend: ASAP2s responses to step voltages in a patch-clamped cultured hippocampal neuron; ASAP2s fluorescence was captured while the transmembrane voltage was stepped from −70 to 0, 30, or 50 mV as labelled; frames were acquired at 20 Hz and played back in real time; the movie corresponds to a single trial without filtering, smoothing, or photobleaching correction. DOI: 10.7554/eLife.25690.012.) The peak response amplitude of GCaMP6f was substantially larger than that of any of the voltage sensors, and its single-cell and single-trial responses had correspondingly higher SNR (Figure 3B). However, the GCaMP6f response was not transient, continuing to rise or fall during the entire duration of the flash. This result is consistent with previous studies of L2 using the calcium indicator TN-XXL (Reiff et al., 2010; Clark et al., 2011). Importantly, the observed calcium indicator traces do not correlate with GEVI traces in a simple manner, illustrating the difficulty of using calcium imaging to infer voltage dynamics. Overall, our results demonstrate the suitability of the ASAP sensors for two-photon imaging of voltage dynamics in vivo, with ASAP2s providing the best balance of large response amplitude and sufficiently fast kinetics. (From the Figure 3 legend: each cell contributes its average response across 100 trials, where one trial is one dark flash and one light flash; horizontal black lines mark the mean per-trial fluorescence, and indicators with slow or asymmetric responses can produce traces that do not begin or end at the mean fluorescence. Bottom: five exemplar single-trial responses and the trial-averaged response for a representative L2 cell per indicator; because MacQ-mCitrine produced no apparent stimulus-evoked response when averaged across cells, mean traces of individual cells are shown instead. (C) Response parameters quantified in panel D: A_max, the maximal amplitude of the fractional fluorescence change |ΔF/F|; t_peak, the time of A_max relative to flash onset; t_decay, the decay time constant from A_max. (D) A_max, t_peak, and t_decay for depolarizations and hyperpolarizations; A_max and t_decay were compared with the t-test and t_peak with the Mann-Whitney U-test, with Bonferroni correction for pairwise comparisons among ASAP1, ASAP2s, and ArcLight; *p<0.05, **p<0.01, ***p<0.001. DOI: 10.7554/eLife.25690.013.) Random-access, single-trial two-photon imaging of action potentials in organotypic slice cultures Having benchmarked ASAP indicators in vitro and in vivo, we next sought to determine whether ASAP-family sensors could enable single-trial detection of rapid voltage transients in single cells in organotypic slice cultures under two-photon illumination. We chose to perform voltage detection using random-access multiphoton microscopy (RAMP, Figure 4A), a technique for fast imaging of arbitrary locations in two- or three-dimensional space by rapid movement of the laser beam (Lechleiter et al., 2002; Iyer et al., 2006; Duemani Reddy et al., 2008). RAMP is a useful technique for rapid imaging of multiple neurons or of subcellular locations within a single neuron. Imaging ASAP indicators with RAMP could therefore provide the means to record voltage at multiple sites with exquisite spatial precision and temporal resolution, a combination we term Fast Excitation of Voltage Indicators by RAMP, or FEVIR. We expressed ASAP2s and ASAP1 in organotypic hippocampal slice cultures and imaged fluorescence generated by two-photon illumination. All experiments were performed at room temperature (22˚C).
We observed that ASAP2s expressed well along neuronal membranes ( Figure 4B) and that the photon emission from ASAP2s was comparable to that of ASAP1 ( Figure 4C). We determined that the resting membrane potential, membrane capacitance, and input resistance of GEVI-expressing and untransfected neurons were statistically indistinguishable (Figure 4-figure supplement 1), consistent with a previous observation with ASAP1 (St-Pierre et al., 2014). We first tested the ability of FEVIR to detect evoked APs. Both ASAPs detected APs, with ASAP2s producing a fluorescence change of -15.0 ± 0.6%, a 79% improvement over the -8.4 ± 0.5% response amplitude when using ASAP1 ( Figure 4D,E). These two-photon response amplitudes are remarkably similar to one-photon responses, consistent with previous observations with ASAP1 (Brinks et al., 2015) and confirming that the larger response of ASAP2s over ASAP1 observed under one-photon excitation was preserved under two-photon illumination. We evaluated ASAP2s further and demonstrated it can report spontaneous APs ( Figure 4F, Figure 4-figure supplement 2). Both ASAP2s and ASAP1 could also reliably detect subthreshold depolarization and hyperpolarization waveforms in single trials ( Figure 5). To quantify GEVIs' ability to detect spikes, we calculated the detectability metric d', which provides a measure of our ability to distinguish a spike from noise in idealized conditions of well-isolated spikes occurring on an otherwise stable membrane potential (Wilt et al., 2013). For neurons imaged at 925 Hz, we obtained a d' value of 53.9 ± 3.2 (n = 23 cells) for ASAP2s, much larger than 15.0 ± 3.7 (n = 15 cells) for ASAP1 (p<0.0001, Mann-Whitney U test). The d' value for both ASAP1 and ASAP2s indicate a greater than 99% detection rate and a false-positive rate per frame of less than 10 À8 for both ASAP1 and ASAP2s (Wilt et al., 2013). Actual detectability will vary in practice depending on other variables such as illumination power, GEVI expression levels, imaging depth, indicator photobleaching, spike rate, and the presence of subthreshold depolarizations. We characterized the kinetics of the ASAP indicators in our system by oversampling at 3700 Hz ( Figure 6A-E). ASAP1 exhibited AP-induced transients with a mean time-to-peak of 3.1 ms ( Figure 6B,C) and a duration (full-width at half-maximum) of 3.8 ms ( Figure 6D,E). As predicted from its kinetics in cultured cells (Table 1), ASAP2s kinetics were slower, with a time-to-peak of 6.0 ms and a duration of 21.1 ms. ASAP2s' photobleaching was best fit with both fast and slow exponentials ( We next explored how changing the scanning frequency could optimize the SNR. ASAP2s signal amplitudes remained relatively constant across all frequencies tested ( Figure 7A), consistent with the longer durations of its optical transients. Noise decreased at lower sampling frequencies, as expected from the collection of more photons per time point ( Figure 7B). Because of its higher response amplitude, ASAP2s showed higher SNR than ASAP1 at all frequencies ( Figure 7C). ASAP2s' SNR increased with decreasing scanning frequencies, reaching 10.0 ± 1.6 at 231 Hz from 5.4 ± 0.4 at 3700 Hz. In contrast, ASAP1's SNR remained relatively constant at~5 across all frequencies; given the short durations of ASAP1 transients in response to APs, lower frequencies reduce noise but also reduce response amplitude (Figure 7). 
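The dependence of noise on scanning frequency described above follows directly from photon counting: at fixed illumination, the photons collected per time point scale inversely with the scanning rate, so the relative shot noise shrinks as the square root of that gain, while the peak of a slow ASAP2s transient is largely preserved. A minimal idealized sketch follows; the photon rate and response amplitude are assumed values for illustration, not measurements from this study.

```python
import numpy as np

# Idealized shot-noise model: relative noise per sample = 1/sqrt(photons per sample).
photon_rate = 2e5        # assumed detected photons per second (illustrative)
peak_dff = 0.15          # assumed peak |dF/F| of a slow transient (illustrative)

for scan_hz in (3700, 925, 231):
    photons_per_sample = photon_rate / scan_hz
    noise = 1.0 / np.sqrt(photons_per_sample)   # relative (dF/F) noise
    print(f"{scan_hz:5d} Hz: SNR ~ {peak_dff / noise:.1f}")

# SNR grows roughly as sqrt(1/scan rate) as long as the transient is slow
# enough that its peak is not attenuated by the longer integration time,
# which is why the benefit applies to ASAP2s more than to the faster ASAP1.
```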
Overall, these results show that ASAP2s and sub-kilohertz scanning frequencies provide the best SNR for detection of low-frequency evoked APs when performing FEVIR at room temperature in our experimental preparation. When active, most neurons fire APs repetitively. Therefore, we next tested whether the ASAP indicators could report high-frequency trains of APs. APs were delivered at 10, 20, 30, and 100 Hz. At higher spike frequencies, the fluorescence did not recover back to baseline between peaks, which reduced the response amplitude of subsequent peaks in the train (Figure 8A-F). Correspondingly, the detectability (d') of individual peaks in a train decreased as the AP frequency was increased (Figure 8-figure supplement 1). The improved sensitivity of ASAP2s balanced its slower kinetics, producing peak responses matching or exceeding those reported with ASAP1 (Figure 8). ASAP2s also reported spike trains with larger d' than ASAP1 for frequencies up to 30 Hz (Figure 8-figure supplement 1). For 100-Hz spike trains, the d' values obtained with ASAP1 and ASAP2s were similar to each other and strongly reduced compared to their magnitude at 30 Hz and slower AP firing frequencies. Finally, we sought to compare ASAP1 and ASAP2s with other recently reported voltage indicators. Under identical conditions, we observed that the voltage indicator ASAP2f performed similarly to ASAP1 across all metrics, including brightness. We also evaluated indicators of the Ace2N-mNeon family, GEVIs previously reported to detect spikes in brain slices under one-photon illumination with high fidelity (Gong et al., 2015). While all cells expressing the variant Ace2N-4AA-mNeon were bright, a significant fraction of the fluorescence was cytoplasmic (Figure 4-figure supplement 4A). We performed RAMP microscopy using voxels at the presumed plasma membrane. In response to a 100 mV step depolarization, Ace2N-4AA-mNeon produced a fluorescence change of −1.8 ± 0.4% (Figure 4-figure supplement 4B-D). We did not detect an obvious reduction of response amplitude over the course of the step depolarization, in contrast to observations under one-photon illumination (Gong et al., 2015). The small response to voltage under two-photon illumination is consistent with our results in flies (Figure 3B) and with prior observations with homologous FRET-opsin indicators (Brinks et al., 2015). Shortening the linker between the opsin and the fluorescent protein in this indicator may help increase response amplitudes, although this modification can also impair plasma membrane expression (Gong et al., 2015). Given the small response of Ace2N-4AA-mNeon to long step voltages under two-photon illumination, we did not evaluate it further. Single-voxel, single-trial spike detection with FEVIR in organotypic slice cultures The results above demonstrate that ASAP-family indicators can report subthreshold depolarizations, hyperpolarizations, and evoked and spontaneous APs in single trials under two-photon microscopy. This is a critical milestone for optical voltage imaging that no GEVI had previously reached. A next milestone, multi-neuron recording of spontaneous neural activity, will require distributing imaged voxels across neurons. For example, with a laser dwell time of 50 µs and a sampling frequency of 925 Hz, our FEVIR system could image ~20 neurons with one voxel per neuron. We therefore tested whether we could reach the ultimate goal of detecting APs in single trials and single voxels.
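To make the voxel budget above concrete, here is a minimal back-of-the-envelope check in plain Python; the 50 µs per-voxel dwell and the 925 Hz and 231 Hz sampling rates are the figures quoted in the surrounding text.

```python
# Voxel budget for random-access scanning: the number of points that can be
# revisited in each sweep is the sweep period divided by the per-voxel dwell time.
dwell_s = 50e-6          # 50 microsecond dwell per voxel (from the text)
for rate_hz in (925, 231):
    sweep_period_s = 1.0 / rate_hz
    n_voxels = sweep_period_s / dwell_s
    print(f"{rate_hz} Hz sampling -> ~{n_voxels:.0f} voxels per sweep")

# 925 Hz -> ~22 voxels, matching the "~20 neurons with one voxel per neuron" above.
# 231 Hz -> ~87 voxels, or ~20 voxels if the dwell is lengthened 4x to 200 microseconds,
# which is the binning strategy described in the following paragraph.
```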
While ASAP1 responses could not be easily discerned from noise (SNR of 1.1 ± 0.1), ASAP2s responses to single APs in single voxels in single trials were above the mean noise level, giving an SNR of 2.2 ± 0.2 at 925 Hz (Figure 9A-C). Similar to ASAP1, single-trial, single-voxel ASAP2f responses were not easily detectable from noise (Figure 9-figure supplement 1). (From the Figure 5 legend, continued: single-trial responses are in gray (n = 8 neurons per indicator). (D) Hyperpolarization waveforms had peak amplitudes of −5, −10, −15, and −20 mV, a time-to-peak of 5 ms, and a full width at half maximum of 39 ms. (E,F) Responses to hyperpolarization waveforms using ASAP1 (E) and ASAP2s (F); the mean response is shown in black and single-trial responses in gray (n = 8 neurons per indicator). For panels B,C,E,F, raw traces were smoothed with a window size of 10 time points. (G,H) Fluorescence response amplitudes (G) and SNR (H) for subthreshold depolarizations. (I,J) Fluorescence response amplitudes (I) and SNR (J) for hyperpolarizations. Differences in peak fluorescence responses and SNR between ASAP1 and ASAP2s were not statistically significant (p>0.05, t-test with Bonferroni correction for multiple comparisons). DOI: 10.7554/eLife.25690.021.) As expected, averaging over more trials or voxels decreased noise (Figure 9D,E). To increase the SNR while imaging single voxels in single trials, we considered that lower acquisition frequencies increase SNR when imaging with ASAP2s (Figure 7C). We therefore binned four adjacent time points, effectively increasing the laser dwell time to 200 µs per time point and more than doubling the SNR (Figure 9F). With these parameters, we could observe 20 voxels at 231 Hz, with further gains achievable at the same acquisition frequency by increasing dwell time and consequently reducing the number of imaged voxels. When selecting single voxels for multi-neuron recordings, choosing the brightest membrane voxels would be a useful strategy for maximizing SNR and spike detectability (d'). We evaluated the potential of this strategy by analyzing the d' value for the brightest voxels in our recordings. With ASAP2s, the d' for detecting spikes using single voxels was 17.5 ± 1.2 (n = 23 voxels from 23 neurons), indicating a greater than 99% detection rate and a false-positive rate per frame of less than 10⁻⁸ (Wilt et al., 2013). In contrast, the d' for ASAP1 was much smaller, at 4.1 ± 0.9 (n = 15 voxels from 15 neurons; p<0.0001, Mann-Whitney U test), indicating a detection rate of ~94% and a false-positive rate per frame of 0.005. Taken together, these results show that FEVIR using ASAP2s can reliably report APs in single trials, even down to single voxels. Tracking spike propagation in neurons with ASAP2s and FEVIR in organotypic slice cultures The high sensitivity of ASAP2s in single-voxel imaging motivates the deployment of this indicator for monitoring voltage with subcellular resolution using FEVIR. We therefore tested the utility of ASAP2s for optically measuring the speed and attenuation of back-propagating APs in dendrites in organotypic hippocampal slice cultures. Voltage signals were recorded from 25 locations on the dendritic tree at a sampling frequency of 686 Hz and averaged over 31 trials (Figure 10A). We observed that the peak amplitude of AP-evoked optical signals gradually decreased with increasing distance from the soma (Figure 10B,C, Figure 10-figure supplement 1A).
For example, response amplitudes were 37.6 ± 7.3% lower at 144.3 ± 3.5 µm from the soma (p=0.0066, n = 4 neurons). This reduction could not be attributed to variation in the levels of resting fluorescence, as we did not detect a correlation between response magnitude and resting fluorescence (Figure 10-figure supplement 1B). The degree of attenuation we observed is similar to the 25-35% decreases in AP amplitude previously measured using purely electrophysiological approaches in CA1 pyramidal neurons (Spruston et al., 1995; Golding et al., 2005) and in cortical neurons (Stuart et al., 1997). Finally, to evaluate whether conduction velocities and latencies of backpropagating APs could be resolved, we scanned two soma voxels and three dendrite voxels on a neuron at a sampling rate of 3700 Hz. Ultrafast sampling allowed us to resolve the kinetics of AP-evoked optical signals (Figure 10D). For instance, in the example presented, the AP peak was delayed by 1.1 ms in a dendrite at a distance of ~150 µm from the soma (Figure 10E), corresponding to a propagation speed of roughly 0.14 m/s in this cell. We calculated the mean conduction velocity as 0.16 ± 0.03 m/s (n = 7 neurons, Figure 10-figure supplement 1C), similar to a reported velocity of 0.24 m/s for backpropagating APs in hippocampal neurons measured with electrophysiological methods (Spruston et al., 1995). Therefore, the peak amplitude and kinetics of backpropagating APs can be measured across subcellular locations using ASAP2s. We also observed that APs in dendrites were broader than APs in the soma (Figure 10D), as previously reported (Kim et al., 2012). These results demonstrate that ASAP2s-powered FEVIR permits subcellular voltage imaging with a high SNR, enabling monitoring of neuronal activity simultaneously in multiple subcellular compartments at very high temporal resolution. Discussion In this study, we establish methodology for fast two-photon voltage imaging in vitro, in organotypic slice cultures, and in living animals. Our findings have important implications for voltage imaging in intact neuronal preparations. In particular, they establish the suitability of two-photon microscopy and ASAP-family GEVIs, with their fast kinetics and large response amplitudes, for investigations of spontaneous cellular and subcellular voltage changes. Moreover, FEVIR, which combines RAMP imaging and ASAP-family indicators, permits near-simultaneous voltage recording at multiple subcellular locations. These capabilities are particularly significant given the long-standing quest by neurophysiologists to understand how up to several thousands of synaptic inputs cooperate to trigger an action potential (Megías et al., 2001) and to understand the occurrence and functions of dendritic spikes. In general, the ability to record voltage with subcellular precision in single trials should facilitate future interrogation of neuronal physiology by providing easier access to compartments that are difficult or impossible to access using electrodes, such as small-caliber dendrites, spines, and boutons. FEVIR could also be used to monitor multiple neurons rather than several locations within the same neuron, enabling studies of circuit function. Our results demonstrate that the choice of indicator is crucial for high-fidelity monitoring of electrical activity. In Drosophila L2 axon terminals, ASAP2s tracked transient voltage responses to changes in visual stimuli with faster kinetics and larger response amplitude than the GEVI ArcLight.
In brain slices with FEVIR imaging, the larger response amplitude and slower inactivation kinetics of ASAP2s compared with ASAP1 and ASAP2f enabled improved AP detection, especially at scanning frequencies below 1000 Hz. In particular, ASAP2s allowed AP detection in single voxels in single trials with voxel dwell times of only 50 ms. These parameters enable voltage imaging at~20 two-photon-addressable points per millisecond, which can be distributed among multiple neurons if desired. Overall, in experimental settings where sensitive detection of spikes is desirable, ASAP2s would generally be preferable to ASAP1 and ASAP2f. However, in specific situations where it is important to track the shape of spikes or other fast voltage transients more accurately, ASAP1 and ASAP2f retain an advantage over ASAP2s due to their faster kinetics. Opsin-based GEVIs have been used to report transmembrane voltage changes under one-photon illumination, but appear less suitable for two-photon applications. Some single-domain opsins can produce larger responses to APs than ASAP2s in one-photon microscopy , but their responses are greatly attenuated under two-photon microscopy to levels far below that of the ASAP indicators (Brinks et al., 2015). Fusions of fluorescent protein domains and voltage-sensitive opsins have led to a new class of GEVIs called FRET-opsin or electrochromic FRET indicators. These GEVIs exhibit voltage-induced changes in fluorescence output due to changes in the efficiency of fluorescence resonance energy transfer (FRET) from the fluorescent protein to the rhodopsin under one-photon microscopy Zou et al., 2014). However, these indicators have also shown large reductions in their fluorescence responses under two-photon illumination, with one indicator of this class showing small responses in a previous study (Brinks et al., 2015) and two others showing small or undetectable responses in our experiments. The comprehensive benchmarking of the ASAP indicators across multiple performance metrics and contexts should help biologists determine whether these indicators can enable their experiments. However, we also want to emphasize that experimental preparation, temperature, and illumination condition can impact indicator performance. Indicator expression levels and host cell properties can also alter the impact of GEVIs on plasma membrane capacitance (Cao et al., 2013;Akemann et al., 2009). Re-evaluation of critical performance metrics and membrane electrical properties is therefore recommended when deploying any indicator in a new experimental context. While our results illustrate the abilities of fast two-photon imaging of ASAP-family voltage indicators, additional improvements in sensor performance and imaging technology would be useful. For example, further improvements in the brightness or response amplitude of the ASAP indicators would enhance the SNR of responses, improving the reliability of voltage imaging in the brain, especially in challenging imaging situations such as single-trial, single-voxel imaging. Moreover, increasing photostability is paramount for long-term imaging, requiring the development of more photostable probes or methodologies to rapidly shift illumination to non-bleached voxels. In cases where subcellular resolution is not needed, the point spread function of the illumination beam can be expanded to match the size of an entire neuronal cell body (Prevedel et al., 2016). 
This approach would require lower light power to produce identical fluorescence, thereby reducing photobleaching kinetics. Finally, improvements in imaging modalities could increase the number of neurons that can be simultaneously monitored with millisecond-level time resolution. In summary, we have demonstrated that the combination of two-photon microscopy and ASAPfamily voltage indicators can be used to track voltage dynamics with subcellular resolution and millisecond-level temporal precision in brain tissue. Using RAMP microscopy, we have provided the first demonstration that GEVIs can report APs, subthreshold depolarizations, and hyperpolarizations in organotypic slice culture in single trials under two-photon illumination. In addition, we have shown that FEVIR can enable tracking of voltage dynamics across multiple locations of a single neuron. We anticipate that the combination of rapid-scanning two-photon microscopy and ASAP indicators described here will facilitate current and future efforts to understand how neural circuits represent, integrate, and transform information. Plasmid construction Plasmids were constructed by standard molecular biology methods and verified by sequencing of all cloned fragments. In vitro experiments with cell lines, stem-cell-derived cardiomyocytes, and neuronal cultures The ASAP variants and ArcLight Q239 were all expressed from pcDNA3.1/Puro-CAG (Lam et al., 2012), a plasmid vector with the strong synthetic promoter CAG. As previously reported for ASAP1 and ArcLight Q239 (St-Pierre et al., 2014), we subcloned all indicators between the NheI and HindIII sites of this plasmid. All indicators had identical Kozak sequences. For experiments to compare indicator brightness, the red fluorescent protein FusionRed (Shemiakina et al., 2012) was fused to the C-terminus of the ASAP variants. To ensure optimal folding of both ASAP indicators and FusionRed, we separated these two domains with a flexible glycine/serine linker (GSGGSGGSG). ASAP1::EGFP, the reference indicator for photostability experiments, was constructed by replacing the fluorophore of ASAP1 with EGFP (V2 to K239). For experiments comparing two-photon excitation spectra, we expressed EGFP using the ubiquitous EF-1a promoter cloned in a pLenti (also called pLECYT) plasmid (Boyden et al., 2005). Experiments in organotypic slice cultures Ace2N-4AA-mNeon was amplified from a template generously provided by M. Schnitzer and subcloned into pcDNA3.1/Puro-CAG using methods described above. All slice culture experiments used pcDNA3.1/Puro-CAG expression plasmids for expressing the ASAP indicators and Ace2N-4AA-mNeon. HEK293 -cell culture HEK293A (Thermo Fisher Scientific, Waltham, MA; RRID:CVCL_6910) and HEK293-Kir2.1 (Zhang et al., 2009) cell lines were confirmed to be free of mycoplasma contamination using the MycoAlert Mycoplasma Detection Kit (Lonza, Switzerland). HEK293-Kir2.1 cells were confirmed to have a polarized resting membrane potential (À77 ± 1.2 mV, mean ± sem, n = 10 cells) consistent with the original report of this cell line (Zhang et al., 2009). Given that cell lines were used as expression systems for characterizing voltage indicators rather than for biological discovery and that the performance metrics of voltage indicators in these cells are consistent with published results and with further experiments in neurons, cell lines were not authenticated using DNA profiling analysis. 
Cells were maintained in high-glucose Dulbecco's Modified Eagle Medium (DMEM, GE Healthcare, Chicago, IL) supplemented with 5% fetal bovine serum (FBS, Thermo Fisher Scientific) and 2 mM glutamine (Sigma-Aldrich, St.Louis, MO) at 37˚C in air with 5% CO 2 . Cells were plated onto glass-bottom 24-well plates (In vitro Scientific) for standard imaging or onto uncoated no. 0 12 mm coverslips (Glaswarenfabrik Karl Hecht GmbH, Germany) for patch clamp experiments. Transfections were carried out using FuGene HD (Promega, Madison, WI) according to the manufacturer's instructions, except that cells were transfected at~50% confluence with lower amounts of DNA (200 ng) and transfection reagent (0.6 mL) to reduce cell toxicity. Cells were illuminated with a high-power light-emitting diode (LED, UHP-MIC-LED-460, Pryzmatix, Israel) through a 480/20 nm filter at a power density of 4 to 24 mW/mm 2 at the sample plane. Emitted photons were filtered using a 525/50 nm filter. Images were acquired using an iXon 860 electron multiplying charge coupled device camera (Andor, UK) cooled to À80˚C and set to Frame Transfer mode. Fluorescence traces were corrected for photobleaching. Voltage and fluorescence traces were analyzed using custom scripts written in MATLAB (Mathworks, Natick, MA). Voltage traces were corrected for the junction potential post hoc. Fluorescence traces were acquired while cells were voltage clamped in whole-cell mode. Unless otherwise indicated, step voltage depolarizations were applied to change the membrane potential from a holding voltage of À70 mV to voltages ranging from À120 mV to 50 mV for 1.0 s. For these voltage step experiments, we captured images at 200 Hz without binning. While fluorescence responses were measured from pixels at the perimeter of the cell (the plasma membrane), values obtained over the entire cell are nearly identical given the excellent membrane localization of ASAP1 and ASAP2s. For experiments with individual or trains of artificial AP waveforms, we captured images at 1 kHz with 4 Â 4 binning. The AP waveform, derived from a recording of a hippocampal neuron AP, has a full width at half maximum of 4.0 ms and peak amplitude of 100 mV. The fluorescence response was measured as described above. For experiments to determine sensor response kinetics, we increased the sampling frame rate to 2.5 kHz by cropping the imaged area down to 64 Â 64 pixels and increasing binning to 8 Â 8 pixels. The resulting image thus contained 64 pixels (8 Â 8); fluorescence was calculated by summing all 64 pixels. Command voltage steps were applied for 1 s; three identical voltage steps were measured for every cell. Models of the form a.e -bt +c.e -dt were applied to the rising and falling portions of the mean fluorescence trace using MATLAB (Mathworks). HEK293 -two-photon excitation spectra HEK293-Kir2.1 cells were transfected as described above. Cells were imaged with an A1R MP + microscope (Nikon, Japan) fitted with 20Â 0.75NA dry objective, a 525/50 nm filter, gallium arsenide phosphide (GaAsP) detectors, and a titanium:sapphire Chameleon Ultra I laser (Coherent, Santa Clara, CA) with a 80 MHz repetition rate and a pulse width of 140 fs (measured at 800 nm). The laser was tuned between 700 nm and 1040 nm, keeping the laser power at 10 mW across the spectrum. However, this system did not pre-compensate laser pulses for dispersion in the microscope optical path. 
Pulse width is therefore expected to vary with wavelength (Müller et al., 1998), thereby impacting two-photon excitation absorption efficiency and distorting excitation spectra over those acquired by a pre-compensated system. Laser scanning was performed using galvanometric mirrors. Each image pixel was sampled with a dwell time of 12.1 ms. To evaluate the impact of photobleaching, we acquired images at 900 nm both before and after each wavelength scan. We also compared spectra obtained by scanning from 700 nm to 1040 nm to those produced in the reverse direction (1040 nm to 700 nm). Both methods demonstrated that photobleaching was negligible, and we therefore did not correct for photobleaching. HEK293 -quantifying GEVI brightness For quantifying the brightness of ASAP variants, we used a HEK293 cell line expressing the inwardly rectifying Kir2.1 channel (HEK293-Kir2.1, [Zhang et al., 2009]). This cell line, a generous gift of Gui-Rong Li, has a resting membrane potential of approximately -77 mV, similar to that of primary hippocampal neurons. Cells were transfected with pcDNA3.1/puro-CAG plasmids expressing ASAP1-FusionRed or ASAP2s-FusionRed (see: Plasmid construction). As discussed previously, FusionRed served to normalize for cell-to-cell differences in expression level, thus allowing brightness to be quantified as the ratio of green fluorescence (from the ASAP indicators) to red fluorescence (from the FusionRed standard). Two days post-transfection, cells were superfused with the same extracellular solution we used for electrophysiological recordings in HEK293A cells and imaged with an inverted A1R MP + microscope (Nikon) fitted with a 40Â 1.3-NA oil immersion objective. To evaluate brightness under one-photon illumination, we used the SpectraX light engine (Lumencor). GFP was excited with cyan light filtered with a 470/24 nm excitation filter and at a power density of 87 mW/mm 2 . FusionRed was excited with yellow light filtered with a 555/15 nm excitation filter and at a power density of 151 mW/mm 2 . Emitted photons were filtered with 520/23 nm (GFP) and 597/39 nm (FusionRed) filters and acquired with an ORCA Flash4.0 V2 C11440-22CU (Hamamatsu, Japan) scientific CMOS camera set to 4 Â 4 pixels binning and cooled to À10˚C. To evaluate brightness under two-photon illumination, we used a titanium:sapphire Chameleon Ultra I laser (Coherent) with a 80 MHz repetition rate and a pulse width of 140 fs (measured at 800 nm). The laser was tuned to 900 nm (ASAP) or 1040 nm (FusionRed). Laser pulses were not pre-compensated for dispersion in the microscope optical path. We excited ASAP variants at 900 nm rather than at its peak (920 nm) to match the excitation wavelength used in our organotypic slice culture experiments, which we performed using a random-access multi-photon (RAMP) system that is not compatible with wavelengths exceeding 900 nm. Power was adjusted to 30 mW (ASAP indicators) and 12 mW (FusionRed). This system did not pre-compensate laser pulses for dispersive effects of the microscope optical path. Laser dwell time was set to 12.1 ms, and scanning was performed using galvanometric mirrors. Emitted light was filtered using 525/50 nm (ASAP) or 605/70 (FusionRed) filters. Images were acquired with gallium arsenide phosphide (GaAsP) detectors. HEK293 -quantifying GEVI photostability For quantifying the photostability of ASAP variants, we transfected HEK293-Kir2.1 cells as described above with the corresponding pcDNA3.1/Puro-CAG expression plasmids (see: Plasmid construction). 
At 2 days post-transfection, cells were superfused with the same extracellular solution we used for electrophysiological recordings in HEK293A cells. To obtain the one-photon photostability data of Figure 1-figure supplement 3A-B, cells were imaged using an Axiovert 100M microscope (Zeiss) fitted with a 40Â 1.3-NA oil-immersion objective. Cells were continuously illuminated with a high-power light-emitting diode (LED, UHP-MIC-LED-460, Pryzmatix) through a 480/20 nm filter at a power density of 11 mW/mm 2 . Emitted photons were filtered using a 525/50 nm filter, and images were acquired with an ORCA Flash4.0 V2 C11440-22CU (Hamamatsu) scientific CMOS camera set to 4 Â 4 pixels binning and cooled to À10˚C. Due to relocation of one of the authors, data for Figure 1-figure supplement 3C-F was acquired with a different system. For these figures, cells were imaged with an Eclipse Ti-E microscope (Nikon) fitted with a 40Â 1.3-NA oil immersion objective. Cells were illuminated with a SpectraX solid-state light source (Lumencor, Beaverton, OR) through a 470/24 nm filter and at a power density of 87 mW/ mm 2 . Emitted photons were filtered using a 520/23 nm filter, and acquired with an ORCA Flash4.0 V2 C11440-22CU (Hamamatsu) scientific CMOS camera set to 4 Â 4 pixels binning and cooled to À10˚C. To obtain the two-photon photostability data presented in Figure 1-figure supplement 4A-B, cells were imaged with an Ultima Multiphoton Microscopy System (Bruker, Billerica, CA) equipped with a Mai Tai HP Deep See Ti:sapphire laser with <80 fs pulses at 800 nm (Spectra-Physics, Santa Clara, CA), galvanometric mirrors for laser scanning, a 60 Â 0.9 NA objective, a 525/50 nm emission filter, and non-descanned multi-alkali photomultiplier tubes (Hamamatsu). The laser was tuned to 920 nm and laser pulses were pre-compensated for dispersion in the microscope optical path. Laser power, laser dwell time, and scanning frequency are specified in the corresponding figure legend. Due to relocation of one of the authors, data for all other two-photon photostability experiments were acquired with different systems. For Figure 1-figure supplement 4E cells were imaged with an A1R MP+ microscope (Nikon) fitted with a titanium:sapphire Chameleon Ultra I laser (Coherent) with 80 MHz repetition rate and pulse width of 140 fs (measured at 800 nm), galvanometric mirrors for laser scanning, a 40 Â 1.3 NA oil immersion objective, a 525/50 nm emission filter and gallium arsenide phosphide (GaAsP) detectors. The laser was tuned to 900 nm and laser pulses were not pre-compensated for dispersion in the microscope optical path. Laser power, laser dwell time, and scanning frequency are specified in the corresponding figure legend. For Figure 1-figure supplement 1-C,F,G, cells were imaged on an LSM 7 MP microscope (Zeiss) fitted with a titanium: sapphire Chameleon Ultra II laser (Coherent) with 80 MHz repetition rate and pulse width of 140 fs (measured at 800 nm), a 20Â/1.0-NA objective, mirror galvanometers for laser scanning, and gallium arsenide phosphide (GaAsP) detectors. The laser was tuned to 900 nm and laser pulses were not pre-compensated for dispersion in the microscope optical path. Since this equipment was only used to image single-labeled specimens (GFP only), we improved detection sensitivity by removing the emission filter. Laser power, laser dwell time, and scanning frequency are specified in the corresponding figure legends. 
For both one-photon and two-photon photobleaching time series, fluorescence from the entire cell was used to compute optical traces. To quantify the photobleaching time constants, we fit the fluorescence traces with single-and multi-exponential fits using MATLAB (Mathworks). Stem-cell-derived cardiomyocytes -generation and cell culture All stem cell protocols were approved by the Stanford University Human Subjects Research Institutional Review Board (IRB). Cultures were maintained in a 5% CO 2 /air environment. We used authenticated H9 human embryonic stem cells from the WiCell Research Institute. Human-induced pluripotent stem cells (iPSCs) were generated from skin fibroblasts as described previously (Ebert et al., 2014). iPSCs were authenticated by karyotyping to confirm genomic integrity. Routine analyses of pluripotency were conducted by immunostaining pluripotency markers. Human embryonic stem cells (hESCs) and iPSCs were confirmed to be free of mycoplasma contamination using the MycoAlert Mycoplasma Detection Kit (Lonza). hESCs and iPSCs were maintained on Matrigel-coated plates (BD Biosciences, San Jose, CA) in Essential 8 Medium (Gibco, Thermo Fisher Scientific) and were differentiated into cardiomyocytes as described (Lian et al., 2012). Briefly, stem cells were seeded in 6-well plates pre-coated with Matrigel. After reaching 80% confluence, cells were treated for 48 hr with 6 mM CHIR99021 (Selleckchem. com, Houston, TX) in RPMI 1640 Medium (Gibco, Thermo Fisher Scientific) with B-27 serum-free insulin-free supplement (Gibco, Thermo Fisher Scientific). Cells were then transferred to the same medium without CHIR99021 for 24 hr and then treated with 5 mM IWR-1 (Sigma-Aldrich) for 2 days. The media was replaced with fresh media without IWR-1 for another 2 days, before finally switching to RPMI 1640 medium with insulin-containing B-27 supplement. Beating cells were observed at 9 to 11 days post-differentiation and replated to improve attachment and to adjust density. Cardiomyocytes were purified using glucose-free RPMI + B27 medium for two-three rounds, each round lasting two days. Between each round, cells were allowed to recover in normal RPMI + B27 medium containing glucose for two days. The resulting cultures were typically more than 90% pure as determined by the percentage of TNNT2-positive cells by flow cytometry. Stem-cell-derived cardiomyocytes -voltage imaging To prepare cells for imaging, hESC-or iPSC-derived cardiomyocytes were dissociated into single cells with Accutase (Thermo Fisher Scientific) for 20 min. Cells were re-seeded at a density of 50,000 cells/cm 2 on 12 mm round borosilicate coverglass (Carolina Biological Supply Company, Burlington, CA) coated with Matrigel (BD Biosciences) placed within individual wells of 24-well plates. Cells were recovered in RPMI medium supplemented with B27 plus insulin (Thermo Fisher Scientific) for 3 to 4 days to allow them to restart beating. Media was replaced every 1 to 2 days. At 24 to 30 days post-differentiation, cardiomyocytes were transfected with 2.5 mL Lipofectamine 2000 (Thermo Fisher Scientific) and 500 ng of sensor DNA as described in the manufacturer's instructions. At 2 days post-transfection, cells were superfused with the same extracellular solution used for electrophysiological recordings in HEK293A cells. 
Cells were imaged at 100 Hz with an Axiovert 100M inverted microscope (Zeiss) equipped with a light-emitting diode (LED, UHP-MIC-LED-460, Pryzmatix), a 480/20 nm excitation filter, a 525/50 nm emission filter, and a 40×/1.3-NA oil-immersion objective. Cells were illuminated at a power density of 11 mW/mm² at the sample plane. For experiments with hESC-CMs, image acquisition was performed using Solis (Andor) driving a DU-860 EM-CCD (Andor) cooled to −80 °C and set to frame transfer mode. For experiments with iPSC-CMs, we used HCImage (Hamamatsu) to acquire images from an ORCA-Flash4.0 V2 C11440-22CU scientific CMOS camera (Hamamatsu) with 4 × 4 pixel binning and cooled to −10 °C. Both cameras could be used interchangeably, although when using a CameraLink interface, the Flash4.0 camera enabled acquisition of a larger field of view (512 × 512 pixels at 26 µm/pixel after 4 × 4 binning) compared to the DU-860 (120 × 120 pixels at 20 µm/pixel). Images were analyzed with custom MATLAB (Mathworks) scripts. Fluorescence from the entire cell was used to compute optical traces. Traces were corrected for photobleaching. Neuronal cell cultures - preparation and transfection Unless otherwise indicated, primary hippocampal or cortical neurons were dissected from Sprague-Dawley rats on embryonic days 21-22 and digested with 0.03% trypsin (Sigma-Aldrich) in Dulbecco's Modified Eagle Medium (DMEM, HyClone, GE Healthcare) for 20 min at 37 °C in air with 5% CO2. Neurons were then dissociated by gentle trituration in Hanks' Balanced Salt Solution (HBSS, Thermo Fisher Scientific) and washed twice in HBSS. Neurons were plated at 3.5 × 10^4 cells/cm² on 12 mm no. 0 coverslips (Glaswarenfabrik Karl Hecht GmbH) within wells of 24-well plates. Prior to plating, each coverslip was pre-coated for 24 hr with >300 kDa poly-D-lysine (Sigma-Aldrich) in PBS and washed three times with distilled water. Neurons were cultured overnight at 37 °C in air with 5% CO2 in Neurobasal with 1× B27 supplement (Thermo Fisher Scientific), 2 mM GlutaMAX (Thermo Fisher Scientific), and 10% FBS. The following day, 90% of the medium was replaced with identical medium without FBS. Cytosine β-D-arabinofuranoside (Sigma-Aldrich) was added to a final concentration of 2 µM when glia reached >70% confluence, typically around 5 days in vitro (DIV). At 7 DIV, 50% of the media was replaced with fresh media without serum. Neurons were transfected at 7 to 9 DIV using 0.5 to 0.75 µL of Lipofectamine 2000 (Thermo Fisher Scientific) and 800 ng of total DNA per well of a 24-well plate. Given the strong promoter (CAG) driving indicator expression, we diluted indicator expression plasmids with buffer (unexpressed) plasmids. Specifically, each well was transfected with 400 ng of indicator expression plasmid and 400 ng of pNCS, an empty bacterial expression plasmid (Lam et al., 2012). Neuronal cell cultures - confocal imaging Cortical neurons Images for Figure 2G were obtained as follows. Rat cortical neurons were transfected at 8 DIV and imaged 3-4 days post-transfection in HBSS supplemented with 10 mM HEPES pH 7.4, 1× B27, 2 mM GlutaMAX, and 1 mM sodium pyruvate on an IX81 microscope with a FluoView FV1000 laser-scanning confocal unit. Fluorescence excitation was delivered using a 488 nm argon laser through a 40×/1.3-NA oil-immersion objective (Olympus, Japan). Emission was passed through a 530/40 nm emission filter. Z-sections were imaged using a 1-Airy pinhole setting and two-pass Kalman filtering. 
A maximum-intensity projection was generated from two to three sections spaced 2 µm apart. Hippocampal neurons Data for Figure 2-figure supplement 2 were obtained as follows. Rat hippocampal neurons were dissected and cultured as described above but using neurons from rats at embryonic day 18. After coating plates with poly-lysine, we also incubated plates in 20 µg/mL laminin (Sigma-Aldrich) for at least 5 hr. We observed that laminin treatment significantly promoted growth and maturation of these younger neurons, producing neurons with more extensive dendritic arbors. Since glial growth was reduced using E18 compared to E21 neurons, we did not add cytosine β-D-arabinofuranoside. Cells were imaged 2 days post-transfection in HBSS supplemented with 10 mM HEPES pH 7.4, 1× B27, 2 mM GlutaMAX, and 1 mM sodium pyruvate on an LSM 780 confocal microscope (Zeiss) fitted with a 40×/1.4-NA objective, a 488 nm argon laser, mirror galvanometers for laser scanning, a 530/80 nm emission filter, and gallium arsenide phosphide (GaAsP) detectors. Each image pixel was sampled with a dwell time of 12.1 µs per pixel. Z-sections were imaged using a 1-Airy pinhole setting. Frames correspond to the average of four scans. Maximum-intensity projections were generated from two to three sections spaced 0.4 µm apart. Neuronal cell cultures - patch clamping and voltage imaging At 11 to 13 DIV (2 to 4 days post-transfection), cultured neurons were patch-clamped at 22 °C using the same procedures as when patching HEK293A cells (above). Cells were imaged with an Axiovert 100M inverted microscope (Zeiss) equipped with a 40×/1.3-NA oil-immersion objective. Cells were illuminated with a high-power light-emitting diode (UHP-MIC-LED-460, Pryzmatix) through a 480/20 nm filter at a power density of 24 mW/mm² at the sample plane. Emitted photons were filtered using a 525/50 nm filter. Images were captured at 1000 Hz with 4 × 4 binning using an iXon 860 electron-multiplying charge-coupled device camera (Andor) cooled to −80 °C in frame transfer mode. Fluorescence response was measured in all pixels from the cell body. Fluorescence traces were acquired while cells were current-clamped in whole-cell mode. For all experiments, fluorescence traces were corrected for photobleaching. To generate APs, 700 to 1100 pA of current was injected for 1 ms. Analyzed neurons had the following characteristics: an access resistance less than 15 MΩ, a membrane resistance greater than 10 times the access resistance, and APs with peak height >0 mV and width <5 ms at −20 mV (see the sketch below). Electrode voltages were recorded using pClamp (Molecular Devices). Voltage and fluorescence traces were analyzed using custom scripts written in MATLAB (Mathworks). Voltage traces were corrected for the junction potential post hoc. Drosophila - husbandry All flies used for imaging, except those expressing MacQ-Citrine or Ace2N-2AA-mNeon, were raised on standard molasses food at 25 °C on a 12/12 hr light-dark cycle. Female flies of the appropriate genotypes were collected on CO2 within 1 day of eclosion and imaged at room temperature (20 °C) at 5 to 6 days after eclosion. MacQ-Citrine female flies were raised throughout life at 25 °C on standard molasses food supplemented with 100 µM all-trans-retinal (Sigma-Aldrich) to ensure that the MacQ opsin had adequate quantities of retinal cofactor; flies were kept in the dark to prevent retinal degradation. Flies were collected on CO2 within 1 day of eclosion and imaged at room temperature (20 °C) at 5 to 6 days after eclosion. 
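As an illustration of the neuronal inclusion criteria listed in the patch-clamp paragraph above, the following is a minimal MATLAB sketch; the function name, variable names, and the threshold-crossing approach to measuring spike width at −20 mV are illustrative assumptions, not code from the original analysis scripts.

function keep = passesInclusionCriteria(accessRes, memRes, Vm, t)
% accessRes: access resistance (MOhm); memRes: membrane resistance (MOhm)
% Vm: membrane voltage trace (mV) around one evoked AP; t: time vector (ms)
keep = false;
if accessRes >= 15, return; end              % access resistance must be < 15 MOhm
if memRes <= 10 * accessRes, return; end     % membrane resistance > 10x access resistance
if max(Vm) <= 0, return; end                 % AP peak must exceed 0 mV
idx = find(Vm > -20);                        % samples above -20 mV
if isempty(idx), return; end
widthMs = t(idx(end)) - t(idx(1));           % approximate AP width at -20 mV
if widthMs >= 5, return; end                 % width must be < 5 ms
keep = true;
end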
Following a protocol from the original description of Ace2N-2AA-mNeon (Gong et al., 2015), Ace2N-2AA-mNeon female flies were raised to adulthood at 25 °C on standard molasses food and collected on CO2 within 1-3 days of eclosion. At this time, they were transferred to standard molasses food supplemented with 400 µM all-trans-retinal (Sigma-Aldrich) to ensure that the Ace2N opsin had adequate quantities of retinal cofactor; flies were kept in the dark to prevent retinal degradation. Flies were imaged after 6 days on retinal food. Drosophila - surgeries, stimulus presentation, and two-photon imaging Flies for imaging were cold anaesthetized, positioned in a fly-shaped hole cut in steel foil such that their heads were tilted forward approximately 60° to expose the back of the head capsule above the foil while leaving most of the retina below the foil, and then affixed in place with UV-cured glue (NOA 68T from Norland Products Inc., South Brunswick Township, NJ). The brain was exposed by removing the overlying cuticle and fat bodies with fine forceps, and an oxygenated saline-sugar solution (Wilson et al., 2004) was flowed over the fly. L2 axon terminals in the medulla were imaged with a TCS SP5 II two-photon microscope (Leica) with an HCX APO 20×/1.0-NA water-immersion objective (Leica) and a titanium:sapphire Chameleon Vision II laser (Coherent) with 80 MHz repetition rate and pulse width of 140 fs (measured at 800 nm). The laser pulses were pre-compensated for dispersion in the microscope optical path. Unless otherwise indicated, the excitation wavelength was 920 nm and 5 to 15 mW of power was applied at the sample. Emitted photons were filtered with a 525/50 nm filter, except for jRGECO1b, for which emitted photons were filtered with a 585/40 nm filter. Visual stimuli were generated with custom-written scripts using C++ and OpenGL and presented using a digital light projector as described previously (Clark et al., 2011). The visual stimulus was projected onto a coherent fiber optic bundle that then re-projected onto a rear-projection screen positioned approximately 4 cm anterior to the fly that spanned 80° of the fly's visual field horizontally and 50° vertically. Immediately prior to being projected onto the screen, the stimulus was filtered with a 447/60 nm or a 482/18 nm bandpass filter to prevent its detection by the microscope PMTs. The stimulus was updated at 120 Hz and had a radiance of approximately 30 mW sr^-1 m^-2. The imaging and the visual stimulus presentation were synchronized as described previously (Freifeld et al., 2013). Following this procedure, the time of stimulus onset relative to the start of imaging varied by up to one stimulus frame (8.33 ms). To compensate for this, the average delay was measured (6.25 ms), and all imaging data were shifted in time by this delay. All data were acquired at a constant frame rate of 38.9 Hz using a frame size of 200 × 20 pixels, a line scan rate of 1400 Hz, and bidirectional scanning. Imaging time per fly never exceeded 1 hr. Drosophila - image analysis Raw images in each time series were aligned in x-y to correct for motion artifacts using a macro based on the plug-in TurboReg in ImageJ (National Institutes of Health, Bethesda, MD); if motion artifacts were too severe to be corrected, the entire time series was not analyzed further. The remaining analysis was performed with MATLAB (Mathworks). Regions of interest (ROIs) around individual L2 medulla terminals were manually selected in the time-series-averaged image. 
Intensity values for the pixels within each ROI were averaged and the mean background value was subtracted. To correct for bleaching, the time series for each ROI was fit with the sum of two exponentials, and in the calculation of ΔF/F = (F(t) − F0)/F0, the fitted value at each time t was used as F0. The stimulus-locked average was computed for each ROI by reassigning the timing of each imaging frame to be relative to the stimulus transitions (dark to light or light to dark) and then computing a simple moving average with a 25 ms averaging window and a shift of 8.33 ms (120 Hz; see the sketch below). As the screen on which the stimulus was presented did not span the fly's entire visual field, some imaged ROIs corresponded to cells with receptive fields outside of the area covered by the stimulus (empirically, ~30% of the ROIs imaged per fly); the remaining ~70% of responding ROIs were identified as those whose responses at time-matched points during the dark flash and during the light flash were significantly different (t-test, p<0.01) for at least three consecutive time points. For MacQ-mCitrine, none of the ROIs met this criterion; as such, all imaged ROIs were presented in Figure 3B. For flies expressing each of the other indicators, there was approximately the same fraction of responding ROIs in each fly imaged. Because of the low amplitude of the Ace2N-2AA-mNeon responses, the responses of the coexpressed jRGECO1b calcium indicator were used to identify responding ROIs. For the upper traces in Figure 3B, we first determined the mean response of each ROI across all trials and then calculated the mean response across all responding ROIs. The lower traces in Figure 3B are the moving-average response of single example ROIs (that are approximately in the 75th percentile for response amplitude for all ROIs of that indicator; colored traces), and 0.6 s single-trial excerpts of the ΔF/F trace for the same ROIs presented in the single-cell examples (plotted at the imaging frame rate of 38.9 Hz; gray traces). Quantification metrics were calculated on each ROI's mean response. The peak response (Amax) during each phase (depolarization and hyperpolarization) was the largest value of |ΔF/F|. The time to peak (tpeak) was the time at which this peak response occurred, relative to the start of the stimulus. The decay of the response from the peak was fit with a single exponential, and the time constant of this fit was tdecay. All responding cells (ROIs) were included in the comparisons in Figure 3D, except for tdecay, for which we only included cells whose exponential decay could be fit with r-squared values greater than 0.5, as ROIs with values smaller than this were too noisy to be fit. Organotypic hippocampal slice cultures - preparation and electroporation Organotypic hippocampal slice cultures were prepared from 400-µm-thick transverse hippocampal slices cut from 6- to 8-day-old male Wistar rats (Stoppini et al., 1991). Animals were anesthetized, the brain was quickly removed, and the resected hippocampus was transferred into cutting solution and cut using a McIlwain tissue chopper (Ted Pella, Redding, CA). Following dissection, three slices were plated on Millicell culture plates (EMD Millipore, Billerica, MA). Slices were kept at a liquid/gas interface in a controlled atmosphere (95% O2, 5% CO2) chamber at 37 °C for 7 to 14 days before use. Fresh medium was provided twice a week. 
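The following MATLAB sketch illustrates the Drosophila ROI analysis described above: a double-exponential bleaching fit used as F0(t), the resulting ΔF/F trace, and the stimulus-locked moving average (25 ms window, 8.33 ms steps). Variable names are illustrative, the fit uses fminsearch rather than whatever routine was used originally, and the sketch assumes the first stimulus transition precedes the first imaging frame.

% roiTrace: background-subtracted mean ROI intensity per frame (row vector)
% tFrame:   frame times (s) at the 38.9 Hz imaging rate (row vector)
% tStim:    times (s) of stimulus transitions (dark-to-light or light-to-dark)

% 1) Double-exponential bleaching fit, used as F0(t)
biexp = @(p, t) p(1)*exp(-t/p(2)) + p(3)*exp(-t/p(4)) + p(5);
cost  = @(p) sum((roiTrace - biexp(p, tFrame)).^2);
p0    = [roiTrace(1)/2, 5, roiTrace(1)/2, 50, min(roiTrace)];  % rough initial guess
pFit  = fminsearch(cost, p0);
F0    = biexp(pFit, tFrame);
dFF   = (roiTrace - F0) ./ F0;

% 2) Stimulus-locked average: time of each frame relative to the preceding
%    stimulus transition, then a moving average (25 ms window, 8.33 ms steps)
relT  = arrayfun(@(t) t - max(tStim(tStim <= t)), tFrame);
lags  = 0:0.00833:max(relT);
avg   = zeros(size(lags));
for k = 1:numel(lags)
    inWin  = abs(relT - lags(k)) <= 0.0125;    % 25 ms window centered on each step
    avg(k) = mean(dFF(inWin));                 % NaN if no frames fall in this window
end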
On days 2 to 4 following slice preparation, expression plasmids were bulk electroporated with a Grass Technologies electrical stimulator (Natus, Pleasanton, CA). Ten pulses (50 V) were delivered at 10 Hz. Slices were incubated for 7 days following transfection before starting experiments. Organotypic hippocampal slice cultures - electrophysiology During recordings, slices were continuously perfused with artificial cerebrospinal fluid containing 124 mM NaCl, 25 mM NaHCO3, 2.5 mM KCl, 1.2 mM MgCl2, 2.5 mM CaCl2, and 10 mM glucose, equilibrated with 95% O2 and 5% CO2 (pH = 7.4, 300 mOsm). Experiments were performed at room temperature (22 °C). Whole-cell current-clamp recordings were obtained with an intracellular solution containing 120 mM K-gluconate, 20 mM KCl, 10 mM HEPES, 2 mM MgCl2, 2 mM Mg2ATP, 0.3 mM NaGTP, 7 mM phosphocreatine, 0.6 mM EGTA, and 0.04 mM AlexaFluor594 (pH = 7.2, 295 mOsm; Thermo Fisher Scientific). Targeted cells were located at a depth of 20 to 70 µm from the surface of the slice. Neurons targeted for recordings had ovoid shapes, recognizable neuronal features such as dendrites and axons, and were not excessively bright. Glia could usually be discerned based on their more diffuse shapes and very bright somata, likely due to their more hyperpolarized membrane potential. Some glia were recorded by mistake but were easily recognizable by their very hyperpolarized membrane potential, low membrane resistance, and absence of action potential firing upon depolarization. No other criteria were used to exclude cells from characterization or analysis. Electrophysiological signals were acquired with a Multiclamp 700B amplifier (Molecular Devices). Data were low-pass filtered at 2 kHz and digitized at 10 kHz with a Digidata 1440A (Molecular Devices). Recordings were performed with the Clampex 11.0 software (Molecular Devices). Resting membrane potential, capacitance, and input resistance were measured in whole-cell mode using pClamp software (Molecular Devices). Resting membrane potential was measured under the condition of no imposed current. Input resistance and capacitance were calculated by repetitively imposing a square 5 mV step in voltage-clamp configuration. Input resistance was measured in the steady-state phase of the current measurement. Membrane capacitance was measured by exponential fitting of the decay phase of the current trace in response to the voltage step. APs were evoked with 2- to 4-ms current pulses of 1.0 to 1.5 nA. EPSP and IPSP waveforms of 5, 10, 15, and 20 mV peak amplitude (Figure 5A,D) were applied in voltage-clamp mode. Electrophysiological data were analyzed using Clampfit 10.2 (Molecular Devices) and Igor Pro 6.32 (WaveMetrics, Portland, OR). Organotypic hippocampal slice cultures - random-access multi-photon (RAMP) voltage imaging Imaging was performed with a custom-built random-access two-photon microscope (Otsu et al., 2008), as described before (Chamberland et al., 2014) and illustrated in Figure 4A. Briefly, the beam from a titanium:sapphire Chameleon Ultra laser (80 MHz repetition rate, pulse width of 140 fs, wavelength set at 900 nm, Coherent) was directed to a pair of acousto-optic deflectors (AODs, A-A Opto Electronics, France). This configuration permits the ultrafast redirection of the laser beam in two dimensions over the whole field of view. 
Due to equipment limitation, we imaged at 900 nm, a wavelength at which the two-photon brightness of ASAP1 and ASAP2s is ~18% lower than at 920 nm, the peak excitation wavelength (Figure 1E). The laser beam was attenuated to the desired power using an acousto-optic modulator, and then focused on the sample using a 25×/0.95-NA water-immersion objective mounted on an upright microscope (Leica). Transmitted photons were collected and shortpass filtered at 720 nm. Emitted photons were separated into two channels by a 580 nm dichroic mirror (Semrock, Rochester, NY) and passed through a 530/60 nm bandpass filter for the green channel (ASAP1, ASAP2s, ASAP2f or Ace2N-4AA-mNeon) and a 730/70 nm bandpass filter for the red channel (AlexaFluor594). Non-descanned photon counting was done with two GaAsP photomultiplier tubes (H7422P-40, Hamamatsu). Optical data were acquired with homemade software written in LabVIEW (National Instruments, Austin, TX). Dwell time was set to 50 µs per pixel in all experiments. The time to move the laser beam from one point to another in the field of view was approximately 6.5 µs, regardless of the distance between the points. Therefore, the number of points recorded dictated the scanning speed. For each cell, recorded voxels were manually distributed in an even pattern across the somatic plasma membrane. All experiments were performed at room temperature (22 °C). Organotypic hippocampal slice cultures - data analysis Images shown were globally adjusted for brightness and contrast, but individual portions of images were not modified in any way. Optical signals were exported using a custom-made routine in LabVIEW (National Instruments) and analyzed in Igor Pro (WaveMetrics). Distance measurements of the recording sites from the cell body were performed using ImageJ 1.47 (Schneider et al., 2012). Traces longer than 1 s were corrected for photobleaching. The decay phase of optical signals was fitted in Igor Pro with a monoexponential function. The fractional fluorescence change (ΔF/F) corresponds to (F − F0)/F0, where F0 corresponds to the baseline (resting) fluorescence. The SNR was measured by dividing the peak ΔF/F by the standard deviation of the noise. For these measurements, F0 was measured as the average fluorescence over the 110 to 10 ms preceding the action potential trigger. The standard deviation of the noise was measured 200 ms before the stimulus onset. Detectability (d') analysis All d' analyses were performed on traces acquired at 925 Hz, corresponding to 20 voxels per cell and 50 µs dwell time per pixel. These traces were generated by imaging the response to a single spike followed by a 20 Hz train of 5 action potentials, or to a 10, 30, 100, and 200 Hz spike train without an isolated spike. Fluorescence was acquired by a photomultiplier tube in photon-counting mode. Single-cell optical traces were obtained by summing all 20 voxels from each cell. Single-voxel traces used the brightest of the 20 voxels recorded for each neuron. To calculate spike detectability (d'), we first corrected traces for photobleaching. Sections of optical traces corresponding to resting fluorescence (when spikes were not occurring) were identified and fitted with a single exponential using the exp2fit function in MATLAB (Mathworks), which creates the fit using nonlinear least squares. The resulting exponential curve was used to generate a photobleaching correction function. 
First, the value in the exponential curve corresponding to the start of the first spike was subtracted from all points in the curve. The resulting vector was then multiplied by −1 and added to the actual fluorescence trace. This procedure corrects for photobleaching in a way that keeps the average photon count of the resting state constant across the trace and equivalent to the value immediately before the start of the first spike. This was necessary because the discriminability index assumes optical traces with constant baselines and requires an actual photon count rather than a relative fluorescence intensity (Wilt et al., 2013). AP detectability (d') was calculated using the formula d' = (ΔF/F)·√(F0·τ/2) described by Wilt and colleagues (Wilt et al., 2013). To measure the fractional fluorescence response (ΔF/F) to single, isolated spikes, the difference between the peak of the action potential and the resting fluorescence (F0, measured as above) was taken and divided by F0. Tau (τ) was calculated by fitting the trace from the peak to baseline as described above. Since the framework in Wilt and colleagues (Wilt et al., 2013) was designed for single spikes, we extended its use for spike trains by recalculating τ and F0 for each spike in the spike train. For spike trains of 10, 20 and 30 Hz, we calculated F0 by averaging the maximal fluorescence between each spike; all spikes in a train therefore shared the same F0. For 100-Hz spike trains, the baseline fluorescence changed more significantly during the course of a train, so F0 was calculated for each spike individually as the maximal fluorescence over the following inter-spike interval. 100-Hz spike trains sometimes produced peaks that could not be accurately located or fitted. As these peaks were eliminated from analysis, the calculated d' overestimates detectability over the entire sample. 72% of spikes from 100-Hz trains were included for ASAP1 (18 out of 25), and 60% were included for ASAP2s. The decay phase of optical signals was estimated in MATLAB (Mathworks) as described above. False-positive and true-positive rates were calculated as follows. First, we defined a Gaussian noise distribution with a mean of 1 and a standard deviation of 1. We also defined a Gaussian signal distribution with a mean of 1 + d' and a standard deviation of 1. A simulated vector of data points was generated by randomly sampling from these distributions. This vector was 1 million samples long, and every tenth sample was drawn from the signal distribution, while all other samples were drawn from the noise distribution. The distributions were reset after being drawn upon. For this simulated data set, spikes were assumed to be composed of only a single data point. Equation (6) in Wilt et al. (2013) was used to build the spike log-likelihood distribution. The number of samples (N) was set to 1, since our spikes are one sample long. Each point in the log-likelihood distribution, L(i), was therefore computed as follows: L(i) = f(i)·log(Sn/B) − Sn + B, where f(i) is the value of the simulated data set, Sn is the mean of the signal distribution, and B is the mean of the noise distribution. L(i) was computed for i up to 1 million. Each point in L corresponds to the log likelihood of a spike having occurred at its corresponding point in the simulated data. 
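A compact MATLAB sketch of the single-spike detectability computation described above follows. It assumes photobleaching has already been corrected as described (constant resting photon count), uses hypothetical variable names, fits the decay with fminsearch rather than exp2fit, and applies the expression d' = (ΔF/F)·√(F0·τ/2) with F0 expressed as a photon rate.

% photons: bleach-corrected photon-count trace (photons per sample, row vector)
% fs:      sampling rate in Hz (925 Hz here)
% iPeak:   sample index of the spike peak
% iBase:   indices of resting fluorescence preceding the spike

F0   = mean(photons(iBase));                 % resting photon count per sample
dFF  = (photons(iPeak) - F0) / F0;           % fractional fluorescence response

% Mono-exponential fit of the decay from the peak back to baseline
decay = photons(iPeak:end) - F0;  decay = decay(:);
tDec  = (0:numel(decay)-1)' / fs;            % seconds from the peak
cost  = @(p) sum((decay - p(1)*exp(-tDec/p(2))).^2);
pFit  = fminsearch(cost, [decay(1), 0.05]);
tau   = pFit(2);                             % decay time constant (s)

F0rate = F0 * fs;                            % photons per second
dPrime = abs(dFF) * sqrt(F0rate * tau / 2);  % Wilt et al. (2013) detectability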
An iterative algorithm was then implemented to increment a threshold in this log-likelihood vector such that values above this threshold indicate the presence of a spike, and values below indicate that a spike has not occurred. False-positive and true-positive rates were calculated for each threshold. The ideal threshold was chosen to be the one that maximized 'true positives − false positives' across all positions of the simulated data (a minimal sketch of this sweep is given below, after the statistical analyses). Statistical analyses Results presented in the form x ± y represent the mean ± SEM unless indicated otherwise. Sample sizes reported in the text are individual cells (biological replicates) unless indicated otherwise. Statistical comparisons of pre-identified measures of interest between two data sets were performed with the Student's t-test unless otherwise indicated. Prior to performing such statistical comparisons, the Shapiro-Wilk method was used to test the null hypothesis that the data followed a Gaussian (normal) distribution. When this normality hypothesis could not be rejected, Student's t-tests were performed; otherwise, the Mann-Whitney U nonparametric test was used. Prior to performing t-tests, we also tested the null hypothesis of equal variance between the two data sets, and employed Welch's correction when the null hypothesis was rejected. Statistical tests of normality and equal variance were performed with a significance level (α) of 0.05. In figure panels, p-values are graphically depicted as: *p<0.05, **p<0.01, ***p<0.001. When analyzing the results of a specific performance test (e.g. max fluorescence response to APs), we applied the Bonferroni or Holm-Bonferroni correction to the significance levels if more than one pairwise comparison was calculated, as indicated in the corresponding figure legends. Statistical tests were performed in Excel (Microsoft) and MATLAB (MathWorks).
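To close out the detectability analysis above, here is a minimal MATLAB sketch of the simulated data set, the spike log-likelihood vector, and the threshold sweep; names are illustrative and the vector length is reduced from 1 million for brevity.

nSamp   = 1e5;
dPrime  = 2;                                    % example detectability value
isSpike = mod(1:nSamp, 10) == 0;                % every tenth sample is a spike
f       = 1 + randn(1, nSamp);                  % noise distribution: mean 1, SD 1
f(isSpike) = 1 + dPrime + randn(1, nnz(isSpike));  % signal distribution: mean 1 + d'

Sn = 1 + dPrime;  B = 1;                        % means of signal and noise distributions
L  = f .* log(Sn / B) - Sn + B;                 % spike log likelihood (N = 1)

thresholds = linspace(min(L), max(L), 500);
tpr = zeros(size(thresholds));  fpr = zeros(size(thresholds));
for k = 1:numel(thresholds)
    detected = L > thresholds(k);
    tpr(k) = nnz(detected &  isSpike) / nnz(isSpike);   % true-positive rate
    fpr(k) = nnz(detected & ~isSpike) / nnz(~isSpike);  % false-positive rate
end
[~, best] = max(tpr - fpr);                     % threshold maximizing TP - FP
bestThreshold = thresholds(best);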
18,797.8
2017-07-27T00:00:00.000
[ "Biology", "Engineering", "Physics" ]
Nonlinear Control and Synchronization with Time Delays of Multiagent Robotic Systems We investigate the cooperative control and global asymptotic synchronization of Lagrangian system groups, such as industrial robots. The proposed control approach accomplishes multirobot system synchronization under an undirected connected communication topology. The control strategy is to synchronize each robot in position and velocity to the other robots in the network with respect to the common desired trajectory. The cooperative robot network only requires local neighbor-to-neighbor information exchange between manipulators and does not assume the existence of an explicit leader in the team. It is assumed that network robots have the same number of joints and equivalent joint work spaces. A combination of the Lyapunov-based technique and the cross-coupling method has been used to establish the multirobot system asymptotic stability. The developed control combines trajectory tracking and coordination algorithms. To address the time-delay problem in the cooperative network communication, the suggested synchronization control law is shown to synchronize multiple robots as well as to track a given trajectory, taking into account the presence of the time delay. To this end, the Krasovskii functional method has been used to deal with the delay-dependent stability problem. Introduction Nowadays, much research has focused on group coordination, cooperative control, and synchronization problems. In fact, motivated by the benefits of using multiple inexpensive systems working together to achieve complex tasks exceeding the abilities of a single agent, cooperative synchronization control has received significant attention. Distributed coordination and decentralized synchronization of multiagent systems have recently been studied extensively in the context of cooperative control [1][2][3][4][5], to name a few. In particular, designs based on graph theory and the Laplacian matrix have produced interesting results [6][7][8][9]. Agreement and consensus problems in the area of cooperative control of multiagent systems have been studied in [7,8,[10][11][12]. The coordination control strategies are closely related to the synchronization problem, in which control laws are coupled and each agent robot's control is updated using a local rule based on its own sensors and the states of its neighbors. In this context, one recent representative work [13] shows that the multicomposed system can be synchronized in the case of partial knowledge, that is, with only position measurements. A decentralized tracking control law that globally exponentially synchronizes an arbitrary number of robots and represents a generalization of the average consensus problem has been presented in [5]. A synchronization approach to trajectory tracking of multiple mobile robots while maintaining time-varying formations has been presented in [14]. An adaptive control strategy for position synchronization of multiple motion axes using cross-coupling technology has been developed in [15]. In many engineering applications, communication delays between subsystems cannot be neglected. Therefore, the problem of time-delayed communication in control of multirobot systems is important in numerous practical applications. Indeed, uncompensated time delays in a cooperative task may even cause instability. The problem of time-delayed communication in control of multiagent systems has been studied in several references [7,[16][17][18]. The consideration of time-delayed communication in 
control of multirobot systems is mainly a practical necessity. In particular, this need arises when addressing areas that require real-time operation, such as operations in unsafe environments and robotic surgery. The objective of this paper is to design a control approach that can achieve both synchronization of the robot movements and asymptotically stable tracking of a common desired trajectory. The proposed controller relies principally on a consensus algorithm for systems modeled by nonlinear second-order dynamics and applies the algorithm to the synchronization control problem by appropriately choosing the information states on which consensus is reached. The key concept of the new synchronizing controller is the introduction of a state vector that quantifies the degree of coordination between a robot manipulator's position and the positions of its neighbors. In the literature, most earlier works on multiagent coordination and consensus [3,4,7,19] mainly deal with very simple dynamic models such as linear systems and focus on algorithms taking the form of first-order dynamics [11,20,21]. In particular, most previous works on consensus and coordination of multiagent systems using graph theory and the Laplacian [3,4,[7][8][9] have presented synchronization to the weighted average of initial conditions, but they do not consider multiagent systems where there is a desired path to follow. Therefore, the aforementioned algorithms cannot give solutions for robot networks where a desired trajectory is required. In contrast, the present work deals with highly nonlinear systems. Moreover, the developed approach achieves not only global asymptotic synchronization of the configuration variables, but also global asymptotic convergence to the desired trajectory. Notable works have focused on highly nonlinear systems. Their developed strategy requires the coupling feedback of the most adjacent robots [5] or axes [15] for the algorithm. However, the proposed strategy is based on a partial mesh topology in which there are interconnections between all robots, such that all robots have direct influence on the combined dynamics. The use of a partial mesh topology provides a high degree of reliability due to the presence of multiple paths for data between robots. On the other hand, it is not a fully connected mesh topology, and consequently we avoid the expense and the complexity required for a connection between every robot in the network. In this paper, we study the problem of mutual synchronization when there are communication delays in the network. The delays are assumed to be bounded. Modelling Multi-Lagrangian System Network. The n degree-of-freedom robot manipulator composed of rigid bodies is expressed based on Newton's and Euler's equations as Mi(qi)q̈i + Ci(qi, q̇i)q̇i + gi(qi) = τi, where qi ∈ Rn denotes the joint angles of the ith manipulator, and q̇i ∈ Rn and q̈i ∈ Rn are the vectors of joint velocities and joint accelerations, respectively. Mi(qi) ∈ Rn×n represents the inertia matrix, which is symmetric, uniformly bounded, and positive definite. Ci(qi, q̇i)q̇i ∈ Rn is a vector function containing Coriolis and centrifugal forces. gi(qi) ∈ Rn is a vector function consisting of gravitational forces. Although the above equations of motion are coupled and nonlinear, they exhibit certain fundamental properties due to their Lagrangian dynamic structure. The most important property is the well-known skew symmetry of the matrix Ṁ − 2C [22]. Multiagent Communication Topology. 
Since the conception of our coordination algorithm is based on consensus strategies and concepts from graph theory, we present several basic properties of these tools. Let G = (V, E) be a digraph with N nodes, the set of nodes V = {1, 2, . . ., n}, and edges E ⊆ V × V. Each node is labeled by vi ∈ V and each edge is denoted by eij = (vi, vj). Neighbors of agent vi are denoted by Ni = {vj ∈ V : (vi, vj) ∈ E}. The adjacency matrix A = [aij] ∈ Rn×n of a weighted digraph is defined such that aij > 0 if eij ∈ E and aij = 0 otherwise. Agent i communicates with agent j if j is a neighbor of i or if aij ≠ 0. Note that an edge eij in a directed graph means that robot j can obtain information from robot i, but not necessarily vice versa. In contrast, in an undirected graph, pairs of nodes are unordered and an edge eij implies that robots i and j can get information from one another. The adjacency matrix of an undirected graph has the same meaning as that of the directed graph except that aij = aji. The degree matrix D of the digraph G = (V, E) is a diagonal matrix whose entries are the node degrees, and the Laplacian is L = D − A. In the undirected graph case, L is symmetric positive semidefinite. In the present topology, the edges represent bidirectional communication links: the network consists of a group of n manipulators exchanging information, which can be viewed as an undirected graph (see Figure 1). Tracking and Synchronization Errors. In this paper, we consider the synchronization of multiple robots following a common time-varying trajectory. We will design decentralized control laws for n robot manipulators such that all joint positions mutually synchronize and track a common desired trajectory. The control objective of the proposed synchronization controller scheme is to synchronize the ith joint position and velocity qi, q̇i to the state of any manipulator qj, q̇j. Besides, the controller is required to regulate the joint positions qj to track a desired trajectory qd. Specifically, the control torque for the ith robot must drive the tracking error to zero and, at the same time, synchronize the motions of the n robots in communication so that the synchronization error converges to zero. To this end, we define the measure of the position tracking error of the ith manipulator, where Λi is a diagonal positive definite matrix. Information on the vector e1i gives insight into the convergence of the joint positions to the desired trajectory. It is also required to know the performance of the controller, that is, to know how the trajectory of each robot manipulator converges with respect to the others. There are various ways to choose the synchronization error. For example, in [13] the authors include the error information of all systems involved in the synchronization. Our approach makes use of the cross-coupling technique to propose a feasible and efficient synchronization error, which consists of a measure of synchronization for each robot manipulator, where βij is a diagonal positive definite matrix that reflects the weighted communication among the robot network. Feedback Control Design. 
The objective of this paper is to design individual tracking controllers for the n manipulators such that they coordinate their motions and synchronously track a desired trajectory. To this end, we define the global error, which encompasses both the synchronization error and the trajectory tracking error for manipulator i. Under the above strategy, the motions of all manipulators are synchronized. The control of each manipulator considers the motion responses of the other manipulators for synchronization. It takes into account only the robots that exchange information with it. The objective is to design a control law such that the coupling errors, that is, the position errors, velocity errors, and synchronization errors, all converge to zero. For each manipulator, the control law τi is defined such that qd is a common trajectory reference to be tracked, which is a smooth time-varying trajectory whose first and second derivatives exist for all t ≥ 0, Kdi is a symmetric positive definite matrix, and Kij is a symmetric positive matrix that reflects the quality of the communication channels. Stability Analysis. Substituting (7) into (1) yields the closed-loop dynamics, which, using the expression of the synchronization error e2i and its first derivative, and after further calculation, result in equation (12), the closed-loop synchronized system for the ith manipulator. In the sequel we proceed to analyze the stability properties of the proposed synchronized control scheme and ultimately to show that the control goals are met, namely that the position errors, velocity errors, and synchronization errors all converge to zero. To prove the stability of the overall synchronized system, we use (12) to obtain the synchronized error dynamics. Note that Kc is a symmetric positive semidefinite matrix, since we have an undirected graph, that is, Kij = Kji. The synchronized error dynamics (14) is a linear time-invariant system described by a second-order linear differential equation. A sufficient condition for the error dynamics to be stable is that the matrices Kp and Kd − Kc are positive definite. In particular, the matrices Kdi can be chosen diagonal so as to satisfy this condition. To analyze the stability properties of the closed-loop error dynamics (19), we take a positive definite and radially unbounded Lyapunov function candidate and compute its derivative with respect to time. It follows by direct application of LaSalle's invariance principle that the origin (e, ė) = (0, 0) is globally asymptotically stable and ė → 0 as t → ∞. Referring to the expression of the global error (6), since ė = 0 we obtain (20). Setting εi = qi − qd, (20) can be written as (21). Our objective is to show that εi → 0 as t → ∞. To this end, we rewrite (21) in compact form with a system matrix A, set a Lyapunov function candidate v(t), and differentiate v(t) with respect to time. It follows by direct application of LaSalle's invariance principle that the origin is globally asymptotically stable. Consequently we obtain εi(t) → 0 as t → ∞. Then qi → qd and q̇i → q̇d as t → ∞. Referring to (20), we show that qi → qj as t → ∞. 
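As a numerical illustration of the tracking-plus-coupling structure analyzed above, the following MATLAB sketch simulates three simplified single-joint agents with identity inertia (i.e., double integrators) under a control of the form τi = q̈d − Kd(q̇i − q̇d) − Kp(qi − qd) + Σj Kij(qj − qi). This is only a toy stand-in for the full Lagrangian dynamics (1) and control law (7): the gains, initial conditions, and simplified dynamics are illustrative assumptions, intended to show how the coupling terms drive mutual synchronization while all agents track qd.

n  = 3;                          % number of agents (one joint each, for simplicity)
Kp = 4;  Kd = 3;                 % tracking gains
Kc = 1.5 * (ones(n) - eye(n));   % symmetric coupling gains Kij (undirected graph)

dt = 1e-3;  T = 10;  steps = round(T/dt);
q  = [0.5; -0.3; 1.0];           % arbitrary initial joint positions
v  = zeros(n, 1);                % initial joint velocities
qdes   = @(t) sin(t);            % common desired trajectory qd(t)
qddes  = @(t) cos(t);            % desired velocity
qdddes = @(t) -sin(t);           % desired acceleration

for k = 1:steps
    t        = (k-1) * dt;
    e        = q - qdes(t);
    edot     = v - qddes(t);
    coupling = Kc * q - sum(Kc, 2) .* q;   % equals sum_j Kij (qj - qi)
    tau      = qdddes(t) - Kd*edot - Kp*e + coupling;
    v        = v + dt * tau;               % identity-inertia dynamics: qddot = tau
    q        = q + dt * v;
end
% After T = 10 s, all entries of q and v should be close to qdes(T) and qddes(T).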
Coordination with Time Delays In this section, we study the coordination control problem taking into account time delays in the communication channels. As a first assumption, we suppose that data sent by the neighboring robots j ≠ i reach robot i after a time delay due to the short-range communication channels. To take this into account, we introduce in the coordination error expression a term τ representing this communication time delay. Therefore, the coordination error, in the time-delay context, is presented using the well-known classical time-delayed model of a multiagent network. Consequently, the controller implemented in each Lagrangian system in the network takes the corresponding delayed form. It will be shown that the behavior of the coordinated system under the effect of time delay changes significantly. Multiplying by M−1 and adding ė2i(t − τ) on both sides, then using the expression of the synchronization error e2i and its first derivative and carrying out further calculation, we obtain the synchronized error dynamics, where Kp, Kd, and Kc are the same matrices already defined (see Section 3.3). By the Leibniz formula, we have (34); substituting (34) into (33) and setting ē = [e, ė]^T, (35) can be written in the form (37), with β0 = [0, I; −Kp, −Kd + Kc] and β1 = [0, 0; 0, Kc]. To analyze the stability of the global system, we consider the following Lyapunov-Krasovskii functional (LKF), where P = P^T > 0, R = R^T > 0, and Z = Z^T > 0 are weighting matrices of appropriate dimensions. A straightforward computation gives the time derivative of v(t) along the solution of (37). Then, if the LMI ξ < 0 is satisfied, the derivative of the Lyapunov-Krasovskii functional is negative definite. In consequence the origin ē = 0 is asymptotically stable. This results in e → 0 and ė → 0 as t → ∞. The proof of asymptotic convergence of the coordinated tracking error ē is not sufficient to prove the convergence to zero of both errors e1 and e2. Our concern now is to show that coordination is successfully realized for a specific time delay τc. The proof pursues the same line of reasoning as the proof of Section 3.3. Consequently, we obtain the following equation derived from the global error expression. Rewriting all states of (42) into a compact representation and applying the Laplace transform leads to the transfer representation. If the characteristic equation P(s, τ): det|sI + Λ + e^(−τs) Kc| = 0 has all its zeros in the left half complex plane, then the system is stable and one can easily conclude the convergence of qi to qd. Since the original system, free from time delay (i.e., τ = 0), is stable and P(s, τ) is a continuous function of τ, then using the D-decomposition, the minimal positive solution of the corresponding crossing equation keeps all the zeros of the characteristic equation in the left half complex plane (a scalar illustration is sketched below). Therefore, if we select τ ∈ [0, τc], where τc = sup(τci) for all 1 ≤ i ≤ n, solutions of (44) converge to zero and consequently qi → qd, q̇i → q̇d, and qi → qj as t → ∞. Simulation Results To show the effectiveness of the proposed synchronizing controller we provide some simulation results. These simulations were performed for a network of three identical robot manipulators interconnected under a cooperative scheme as shown in Figure 2. 
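As a scalar illustration of the D-decomposition argument above, consider one eigen-direction of the characteristic equation det|sI + Λ + e^(−τs) Kc| = 0, i.e., s + λ + k·e^(−τs) = 0, with λ, k > 0 the corresponding eigenvalues of Λ and Kc. The MATLAB sketch below computes the smallest delay at which a root reaches the imaginary axis; the numerical values are arbitrary examples, and the scalar reduction is only meant to show how a critical delay τc would be obtained, not to reproduce the paper's multivariable computation.

lambda = 2.0;  k = 3.0;        % example eigenvalues of Lambda and Kc

if k <= lambda
    tauC = Inf;                % no imaginary-axis crossing: stable for any delay
else
    omega = sqrt(k^2 - lambda^2);        % crossing frequency
    tauC  = acos(-lambda/k) / omega;     % smallest delay placing a root at s = j*omega
    % Sanity check: the characteristic function should vanish at s = j*omega
    residual = abs(1i*omega + lambda + k*exp(-tauC*1i*omega));   % ~0 up to round-off
end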
Let the communication structure among the robots be described by an undirected, strongly connected graph topology as shown in Figure 3. We set the joint initial conditions (JICs), coupling gains, and control gains for the three robots as discussed below (see Table 1). Simulations are performed in Matlab/Simulink. Figure 4 illustrates the synchronization of robots that follow a common trajectory. This shows that the tracking and synchronization objectives are attained by the proposed controller. Figures 5 and 6 show, respectively, the convergence of the position errors to zero and the convergence of the synchronization errors to zero, explaining how the robots, while tracking the desired trajectory, synchronize their positions. The effect of time delays on the coordination of robots is shown next. First, the delay-free case is presented in Figure 7, in which it is shown how the three angular positions asymptotically synchronize. Next, we consider the time delay in communication. Synchronization while tracking a periodic trajectory is shown in Figures 8 and 9. From these figures, it is seen that the robots do not have the same starting positions. The speed of reaching agreement depends essentially on the communication time delays. Figures 10 and 11 illustrate that the behavior of the coordinated system changes significantly under the effect of time delay. Conclusion This paper has considered the synchronization problem in distributed multi-Lagrangian systems. The aim of this work was to design a decentralized controller which, when applied individually to each Lagrangian system, achieves synchronization in position and velocity. Reaching synchronization stability of highly nonlinear robot dynamics constitutes one of the main contributions of this paper. The proposed control law ensures synchronization of the robots' states while tracking a common desired trajectory. Another aspect of robot coordination and trajectory tracking control was also investigated. In the coordination strategy there are interconnections between practically all the systems, such that all systems have influence on the overall dynamics. The proposed algorithm works under a cooperative scheme in the sense that it does not require any explicit leaders in the team. The studied topology is connected under an undirected interaction graph. To deal with the time-delay problem in communication between robots, the proposed decentralized control guarantees that the information variables of each robot reach agreement even in the presence of communication delay. Illustrative examples have shown the effectiveness of the described strategy. Future work will address the coordination control of underactuated Lagrangian systems. Figure 3: The topology model of the robots in simulation. 4.1. Stability Analysis. Substituting (29) into (1) yields the delayed closed-loop dynamics, with N = [I, −I]. The time derivative of the LKF (38) can thus be bounded by v̇(t) ≤ δ^T ξ δ, where ξ = 2N^T PM + N^T RN − Q^T RQ + τ M^T ZM − (1/τ) T^T ZT. Then, if the LMI ξ < 0 is satisfied, the derivative of the Lyapunov-Krasovskii functional is negative definite. To ensure that ξ is negative definite, we select appropriate control gains Kp > Kp* and Kd − Kc > K* through processing Matlab's LMI solver such that
4,509.4
2011-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Chimeric cells of maternal origin do not appear to be pathogenic in the juvenile idiopathic inflammatory myopathies or muscular dystrophy Introduction Microchimeric cells have been studied for over a decade, with conflicting reports on their presence and role in autoimmune and other inflammatory diseases. To determine whether microchimeric cells were pathogenic or mediating tissue repair in inflammatory myopathies, we phenotyped and quantified microchimeric cells in juvenile idiopathic inflammatory myopathies (JIIM), muscular dystrophy (MD), and noninflammatory control muscle tissues. Method Fluorescence immunophenotyping for infiltrating cells with sequential fluorescence in situ hybridization was performed on muscle biopsies from ten patients with JIIM, nine with MD and ten controls. Results Microchimeric cells were significantly increased in MD muscle (0.079 ± 0.024 microchimeric cells/mm2 tissue) compared to controls (0.019 ± 0.007 cells/mm2 tissue, p = 0.01), but not elevated in JIIM muscle (0.043 ± 0.015 cells/mm2). Significantly more CD4+ and CD8+ microchimeric cells were in the muscle of patients with MD compared with controls (mean 0.053 ± 0.020/mm2 versus 0 ± 0/mm2p = 0.003 and 0.043 ± 0.023/mm2 versus 0 ± 0/mm2p = 0.025, respectively). No differences in microchimeric cells between JIIM, MD, and noninflammatory controls were found for CD3+, Class II+, CD25+, CD45RA+, and CD123+ phenotypes, and no microchimeric cells were detected in CD20, CD83, or CD45RO populations. The locations of microchimeric cells were similar in all three conditions, with MD muscle having more microchimeric cells in perimysial regions than controls, and JIIM having fewer microchimeric muscle nuclei than MD. Microchimeric inflammatory cells were found, in most cases, at significantly lower proportions than autologous cells of the same phenotype. Conclusions Microchimeric cells are not specific to autoimmune disease, and may not be important in muscle inflammation or tissue repair in JIIM. Introduction The role of microchimeric cells in health and disease has been controversial. Microchimeric cells are acquired during pregnancy, with the transfer of cells from fetus to mother or mother to fetus. Microchimeric cells have been documented to be elevated in the peripheral blood and affected tissues of patients with autoimmune diseases, such as systemic sclerosis [1], systemic lupus erythematosus [2], neonatal lupus [3], and juvenile idiopathic inflammatory myopathies (JIIM) [4][5][6]. Recently, microchimeric cells were found to be elevated in specific target tissues, such as the liver in hepatitis C infection [7] and tumors, such as HER2-positive breast cancer, cervical, lung and thyroid cancer, and melanoma [8][9][10]. However, not all studies have documented higher levels of chimeric cells in autoimmune conditions [11] or cancer [12]. This variability suggests that they might be recruited nonspecifically to sites of inflammation and tissue injury [13,14] or participate in tissue repair [8]. The JIIM are systemic autoimmune diseases characterized by chronic muscle inflammation and weakness. 
Juvenile dermatomyositis (JDM), the form of JIIM with characteristic photosensitive skin rashes, including Gottron's papules and heliotrope rash, is the most common of the JIIM and is thought to be mediated by CD4+ T cell, B cell, and dendritic cell attack on muscle capillaries, whereas juvenile polymyositis (JPM), the form of JIIM without characteristic rashes, is thought to be mediated by CD8+ T cells on myofibers [15][16][17]. We previously found elevated levels of maternal microchimeric cells in muscle biopsies and peripheral blood of boys with JIIM and characterized these cells as CD4+ and CD8+ peripheral T cells [4]. However, the phenotypes of the microchimeric cells in affected muscle tissues were not investigated. The current study analyzes the frequency of microchimeric cells within different inflammatory phenotypes in the muscle of JIIM and compares these findings with the nonautoimmune inflammatory muscle disorder muscular dystrophy (MD) [18] and with noninflammatory control (NIC) muscle tissue. We sought to determine whether microchimeric cells have a pathogenic or reparative role in JIIM. Patients All studies were performed with full Institutional Review Board approval from the National Institutes of Health and waived approval from the Institutional Review Board at Drexel University College of Medicine. All patients consented to the study. Muscle biopsies were obtained for diagnosis, and prior to initiation of therapy, from ten patients with JIIM (six JDM, four JPM), nine MD (eight Duchenne, one Becker dystrophy) and ten controls without inflammatory disease (four mitochondrial myopathies, six histologically normal) and analyzed by immunofluorescence for specific phenotypes and by fluorescence in situ hybridization (FISH) for maternal microchimeric cells. The ages of the JIIM patients ranged from 3 to 16 years [15,16], the patients with MD from 2 to 14 years [19], and the controls from 2 to 17 years. All tissues were paraffin embedded and derived from males and were cut at 5 μm. No patient had prior blood transfusions. The size of the tissue sample ranged from 9 to 96 mm2 in JIIM, 6 to 245 mm2 in MD, and 20 to 62 mm2 in the controls. Immunofluorescence/fluorescence in situ hybridization Before immunofluorescent staining was performed, one slide from each biopsy was stained with hematoxylin and eosin to verify the histological appearance of the tissue and the presence of inflammatory cells. Biopsies were selected based on infiltration density. The immunofluorescent assessment of the tissues was performed first, and all positive cells of each immunophenotype were documented. Tissues were stained for T cells (CD3, CD4, and CD8), T cell activation markers (CD25 and HLA Class II), memory T cells (CD45RO), naïve T cells (CD45RA), B cells (CD20) and dendritic cells (CD123 and CD83) by using antibodies from Neomarkers (Fremont, CA, USA) or Dako (Carpinteria, CA, USA). The secondary antibody carrying the Cy-2 conjugate (Jackson ImmunoResearch, West Grove, PA, USA) was used to detect the primary antibody, and sections were mounted with DAPI and viewed with a Nikon epi-fluorescent microscope with triple-band filter at 1000× magnification (Nikon Instruments, Melville, NY, USA). Positive cells were documented, and sections were processed for FISH analysis as we described previously [1]. Cells that had XX or XY nuclear probes were quantified. The sections were also assessed after FISH for other nuclei that were carrying XX probes but were negative for immunofluorescent stain. 
Muscle fiber nuclei were documented by morphological analyses of the muscle fiber based on the location and shape of the nucleus within the fiber. The myogenic origin was confirmed in 12 sections by actin staining. Personnel involved with muscle biopsy studies were blinded to patient diagnosis. Characterization of microchimeric or autologous cells was determined only when both probes were evident in the nucleus (XX for microchimeric cells and XY for autologous cells). We performed immunofluorescence prior to FISH and stained the protein green. Because some immunofluorescence signal can remain after the FISH procedure, we selected red for the X chromosome. This approach left no doubt about the presence of a microchimeric cell, even if residual immunofluorescence staining was present. Biopsy sections were selected to limit the number of overlapping nuclei in the inflammatory cell infiltrates, because overlapping nuclei could result in false assignment of microchimerism due to two X chromosome probes appearing to be in the same nucleus when they are in separate nuclei. Prior to making the final assessment as to whether a cell was microchimeric or not, the entire nucleus of the cell was assessed for the presence of other probes, i.e., possible nuclei lying underneath. Only when it was confirmed that no additional probes were present were the nuclei assigned as being microchimeric or autologous. Twelve (JIIM or MD) biopsies with many overlapping infiltrating cells without well-defined nuclei were excluded. With this careful approach to immunophenotyping and FISH we reduced the chance of over-identifying microchimeric cells due to overlapping cells and to fluorescent protein remaining after the phenotyping. Statistical analyses Results obtained by FISH were expressed as mean ± standard error of the mean. Differences in the frequency of positive microchimeric cells in the tissue were evaluated using Prism 5 software (GraphPad Software, San Diego, CA, USA). For nonparametric data, the Mann-Whitney rank sum test was used to compare quantities of microchimeric cells. A p value ≤ 0.05 was considered significant; due to small sample sizes, p = 0.051-0.06 was considered to demonstrate a trend toward significance. Are microchimeric cells enriched in inflammatory diseases and what are their phenotypes? Microchimeric cells were detected in nine of ten JIIM muscle biopsies, nine of nine MD biopsies, and eight of ten NIC biopsies (ten sections per biopsy). Of the total phenotyped and nonphenotyped microchimeric cells across all sections examined, the density was 0.043 ± 0.015 microchimeric cells/mm2 of muscle tissue in JIIM, 0.079 ± 0.024/mm2 of tissue in MD, and 0.019 ± 0.007/mm2 of tissue in NICs (Fig. 1a). MD biopsies had significantly more microchimeric cells/mm2 than controls (p = 0.01). There was a trend toward more microchimeric cells/mm2 of muscle tissue in MD patients compared to JIIM (p = 0.06), but there was no difference in the density of microchimeric cells between JIIM and controls. There were few differences in the phenotypes of the microchimeric cells and no differences in the concentration of microchimeric cells/mm2 of muscle tissue in CD3+, Class II+, CD45RA+ or CD123+ phenotypes in patients with JIIM, MD or the controls (Fig. 1b, e, g, h). There were significantly more CD4+ and CD8+ microchimeric cells in the muscle of patients with MD compared with controls (0.053 ± 0.020/mm2 vs. 0 ± 0/mm2 for CD4+ microchimeric cells [Fig. 1c, p = 0.003] and 0.043 ± 0.023/mm2 vs. 
0 ± 0/mm2 for CD8+ microchimeric cells [Fig. 1d, p = 0.025]). There was a trend toward more CD4+ microchimeric cells in MD biopsies compared with JIIM (0.053 ± 0.020/mm2 vs. 0.012 ± 0.006/mm2, p = 0.06); whereas the number of CD4+ microchimeric cells did not differ between JIIM and controls. The number of CD8+ microchimeric cells did not differ in the muscle tissue of patients with MD and JIIM, or JIIM and controls. MD muscle tissue had significantly more microchimeric CD25+ cells/mm2 of tissue (0.038 ± 0.014/mm2), whereas JIIM patients and controls did not have any detectable CD25+ microchimeric cells (Fig. 1f). None of the biopsies had microchimeric cells that were CD20+, CD45RO+ or CD83+ (not shown). Examples of phenotyped microchimeric cells are depicted in Fig. 2. Do microchimeric cells drive disease? For each phenotype we determined whether microchimeric cells were enriched as a percentage of all microchimeric cells, compared to autologous cells of that phenotype as a percentage of all autologous immunophenotyped cells. Muscle cells were excluded in this analysis. An enrichment of microchimeric cells compared to autologous cells could indicate their involvement in disease pathogenesis. In most cases, the proportion of microchimeric cells of a given phenotype compared to all the microchimeric immunophenotyped cells in the muscle tissue was significantly less than the proportion of autologous cells with that phenotype as a percentage of all autologous immunophenotyped cells (Table 1). In JIIM muscle, except for CD45RA+ and CD83+ cells, the proportion of microchimeric cells for each lineage as a proportion of all microchimeric cells was significantly less than the proportion of autologous cells of that same phenotype. The proportion of microchimeric cells that were CD45RA+ compared to all microchimeric cells was similar to, but not greater than, the proportion of autologous CD45RA+ cells compared to all autologous immunophenotyped cells. In MD biopsies, the proportion of microchimeric CD3+ and Class II+ cells as a percentage of all microchimeric cells was significantly less than their autologous counterparts, with a trend toward a lower frequency of microchimeric cells in CD8+ and CD123+ lineages compared to their autologous equivalents. In the control muscle tissue, significantly fewer microchimeric cells were found in CD3+, Class II+ and the CD25+ lineages compared to these autologous lineages as a proportion of the total autologous immunophenotyped cells, with a trend toward fewer microchimeric cells of the CD8+ and CD123+ lineages. Do microchimeric cells mediate tissue repair? Others have proposed that microchimeric cells might mediate tissue repair. In muscle diseases, they could contribute to regenerating myofibers. Thus, we hypothesized that in JIIM and MD the proportion of microchimeric myofiber nuclei would be enriched compared to autologous myofiber nuclei. We stained 15 sections for actin but found that the stain interfered with FISH, possibly due to the overabundance of actin within these muscle sections. Consequently, we quantified maternally derived myofiber nuclei based on the morphology of the nucleus and its location within the fiber (Fig. 2, bottom). The density of microchimeric myofiber nuclei was 0.01 ± 0.01 microchimeric cells/mm2 of muscle tissue in JIIM, 0.07 ± 0.03/mm2 in MD, and 0.02 ± 0.01/mm2 in controls. MD biopsies had significantly more microchimeric myofiber nuclei than JIIM (p = 0.018). 
The number of microchimeric myofiber nuclei did not differ between JIIM and controls or between MD and controls (p = 0.33 and p = 0.18, respectively). Like the phenotyped microchimeric cells, we found significantly fewer microchimeric muscle nuclei than autologous muscle nuclei in JIIM, MD and control biopsies. The proportion of microchimeric myofiber nuclei was significantly less than the proportion of autologous myofiber nuclei. The percentage of microchimeric myofibers compared with autologous myofibers from the same group was 0.6 ± 0.6% in JIIM (p = 0.002), 3.6 ± 1.2% in MD (p < 0.001) and 2.5 ± 1.4% in controls (p = 0.0014) (see Fig. 2 for representative microchimeric myofibers).

Discussion
Previously we found a higher frequency and number of microchimeric cells in the peripheral blood and muscle tissue of JIIM patients compared to noninflammatory or healthy control subjects, suggesting that microchimeric cells may play a role in the pathogenesis of JIIM [4,20]. One limitation in those studies was that we did not phenotype the microchimeric cells in the tissues; however, we did find microchimeric cells in the CD4+ and CD8+ lineages in peripheral blood [4]. The current study further investigated microchimeric cells in JIIM, compared with inflammatory but nonautoimmune muscle disease (MD) and with noninflammatory control muscle, and examined the immunophenotypes of microchimeric cells in muscle tissue. Our goals were to determine whether microchimeric cells play a pathogenic role in disease and/or whether they are involved in tissue repair. Persistent microchimeric cells have been found in patients with other inflammatory diseases, suggesting that these cells might be recruited nonspecifically to sites of inflammation [7-10, 13, 14]. Compared with our prior study, we found a similar frequency of JIIM muscle tissue samples containing microchimeric cells and a similar overall quantity of microchimeric cells in JIIM muscle tissue [4]. Here, we also saw a similar concentration of microchimeric cells in MD tissues, suggesting that this finding is not specific to autoimmune diseases but may be generally associated with overt inflammatory muscle disease. This suggests that the number of inflammatory microchimeric cells recruited to the site of inflammation does not depend on the total number of inflammatory cells present. MD is a genetic disease caused by mutations in dystrophin that induce muscle fiber damage, with resultant muscle necrosis and tissue inflammation [21], whereas the cause of JIIM is unknown but presumed to result primarily from an autoimmune-mediated destruction of myofibers and muscle capillaries [17]. In both diseases, tissue destruction is mediated by T and B cells, and in JDM by dendritic cells [22][23][24]. We were surprised to find microchimeric cells also in a high proportion of control muscle tissues, in contrast to our prior study [4]; however, in the prior study only one muscle section was examined, whereas in the present study ten sections from each patient's muscle tissue were examined. Microchimeric cells were not observed in every tissue section from a given patient and thus overall were considered to be a rare event. To characterize the phenotype of microchimeric cells in JIIM and MD, we stained muscle tissue for various immunophenotypes.
Overall, we detected few microchimeric cells within inflammatory phenotypes, and we did not find microchimeric cells in B cell or dendritic cell lineages, which are thought to be important in JIIM pathogenesis. Of the T cell immunophenotypes, generally the concentration of microchimeric cells was greater in MD tissues than in noninflammatory controls, and typically T cells or activated T cells were not elevated in JIIM muscle tissue. Microchimeric cells of a given phenotype did not occur more frequently than their autologous counterparts in JIIM or MD muscle tissue, compared to all microchimeric or immunophenotyped autologous cells. In terms of location, the endomysial and perivascular regions of the muscle showed similar numbers of microchimeric cells among the three groups. However, MD and JPM tissues were more likely to have microchimeric cells in the perimysium compared with controls. It is this sole finding that weakly suggests that microchimeric cells might play a pathogenic or reparative role. We also found microchimeric cells in noninflammatory tissues and in myofibers, not only in infiltrating inflammatory cells, suggesting that these cells are resident in tissues regardless of inflammation. Tissues with maternal microchimeric cells include autoimmune and nonautoimmune thyroid [25], pancreas in type I diabetes [26], neonatal lupus heart muscle [27], tonsils and adenoids [28], cutaneous inflammatory diseases [29] and inflammatory bowel disease [30]. Maternal microchimeric cells have also been reported in the corresponding normal tissues, including heart [27], tonsils/adenoids [28], skin [29] and bowel [30], albeit at lower levels than in diseased tissue of the same type, suggesting that a threshold number of microchimeric cells might be necessary to drive disease. However, in some instances microchimeric cells were found at comparable numbers in healthy tissues [6]. One hypothesis proposed by other investigators is that microchimeric cells might mediate tissue repair via microchimeric stem cells. Initially we stained for muscle fibers, but this staining procedure interfered with subsequent FISH. Therefore, we counted the numbers of microchimeric nuclei that were apparently myofibers, based on their morphological position and shape within the muscle fiber, and compared them to the numbers of autologous myofiber nuclei. We found significantly more microchimeric muscle nuclei in MD than JIIM muscle, but there were no significant differences between JIIM and controls. However, each disease class had significantly fewer microchimeric myonuclei than autologous muscle nuclei. One caveat is that we do not know how many myofibers were regenerated from autologous progenitors during the disease process [27], and this study was not designed to determine that. Overall, these results suggest that microchimeric myofibers are no more frequent than microchimeric inflammatory cells in diseased tissues, and that microchimeric myofiber nuclei resident in noninflammatory tissue are present at levels higher than those observed in JIIM. This result suggests that microchimeric cells are not enriched in muscle tissue in any of these disease conditions, and that they do not mediate tissue repair at an augmented rate compared to autologous myofibers. We stained one phenotype per FISH analysis for microchimeric cells in the tissue, but we did not stain for all cell phenotypes in the tissues. Consequently, we may have missed a microchimeric cell phenotype(s) that was noninflammatory, including muscle stem cells.
We did not analyze all inflammatory phenotypes, such as macrophages, that are frequent in the inflammatory infiltrates of JIIM and MD muscle [22,23], and some microchimeric cells did not match any of the phenotypes analyzed. The number of inflammatory cells in the JIIM samples was comparable to previously published data [22], and this suggests that the immunophenotyping was not underrepresenting inflammatory cell phenotypes. Furthermore, although we did not observe many differences in the phenotypic numbers of microchimeric cells between disease groups, only one or a small number of pathogenic microchimeric cells might be sufficient to cause disease; however, we believe this is unlikely as microchimeric cells were also present in the muscle tissue of noninflammatory controls. In addition, we did not examine the functionality of the microchimeric cells phenotyped and although microchimeric cells were found in inflammatory cells of the controls, they may have been quiescent, whereas in the inflammatory muscle diseases they may be activated, indicated by different cytokine profiles [31]. We did, however, investigate T cell activation markers, such as Class II, CD25, and memory T cell subsets, and did not detect more microchimeric cells in JIIM muscle compared to noninflammatory controls, and only MD had significantly more CD25 microchimeric cells. Finally, our sample sizes were relatively small and the study may have been underpowered to detect differences between groups. Murine studies suggest that maternal microchimeric cells are able to manipulate the fetal immune system and promote the development of various Regulatory T cells (Tregs) that are tolerant toward the noninherited maternal antigens [32]. Additional studies using inbred mice have also reported the trafficking of alloreactive T cells in offspring and that the maternal cells can influence the fetal response to targeting specific tissues [33]. These studies were elegantly performed in mice, and whether this same phenomenon occurs to the same extent during human gestation leading to increased risk for autoimmune diseases is yet to be proved. Currently, no studies have been performed to determine whether maternal microchimerism of cells that carry noninherited shared epitopes place the JIIM cohort at a greater risk for disease, like they apparently do for rheumatoid arthritis [34]. HLA-DRB1*0301 in linkage with HLA-DQA1*0501 confers the highest susceptibility for JIIM; however, in contrast to previous studies [31], we found that HLA-DQA*0501 was not significantly associated with microchimerism [35]. Maternal microchimeric cell transfer is a frequent occurrence during fetal development [36], and these cells most likely distribute throughout all the tissues and become embedded in the bone marrow with the ability to differentiate into inflammatory cells or stromal cells upon cell damage. Thus, the presence of maternal microchimeric cells in tissues is not surprising in light of the high number of normal samples that were positive in our study and in other studies [6,37]. Our data support the findings of Ye et al., who also reported their controls being positive for maternal microchimerism [6]. The presence of microchimeric cells in the control tissues without inflammation suggests that they are present either from fetal development if stromal in nature, or are inflammatory cells trafficking through the tissue. 
In contrast with Ye et al., we found some microchimeric T cells, albeit at very low percentages, in our JDM samples. However, the direct role of maternal microchimeric cells in human disease is still not clear, and the complex genetic background of humans may mean that the direct role of these cells in autoimmunity will never be conclusively demonstrated.

Conclusions
Overall, our results indicate that microchimeric cells likely do not play important pathogenic roles in autoimmune or inflammatory muscle disease, nor do they appear to be involved in mediating repair of muscle tissue. Once differentiated, autologous cells, along with their microchimeric counterparts, appear to be recruited to sites of inflammation in a similar manner. This study suggests that microchimeric cells are not part of the autoimmune pathogenesis in JIIM or the primary inflammatory process in MD.
5,303.2
2015-09-04T00:00:00.000
[ "Biology", "Medicine" ]
Focusing properties of elliptical mirror with an aperture angle greater than π
Abstract. An analytical apodization function of an elliptical mirror with an aperture angle greater than π is derived for the analysis of the focusing properties. The distribution of electric field intensity near the focal region is given using vectorial Debye theory. Simulation results indicate that a bone-shaped focal spot is formed under linearly polarized illumination, and a tight, circularly symmetric spot is generated under radially polarized illumination. Under radially polarized illumination, changing the eccentricity changes the focusing pattern: a greater eccentricity produces a spot that is tighter in the transverse direction but wider in the axial direction. Under radially polarized illumination, the transverse and axial full-width-at-half-maximum will be 0.382λ and 0.757λ, respectively, and the conversion efficiency of the longitudinal component can go beyond 99%, when the semi-aperture angle is 2π/3 and the eccentricity is 0.6. It can, therefore, be concluded that a tight focusing pattern with a strong and pure longitudinal field can be achieved under radially polarized illumination for particle acceleration, optical tweezers, and high-resolution scanning microscopy.

Introduction
Although aberrations such as coma and astigmatism, caused by assembly errors and the poor off-axis properties of these mirrors, have limited even their low-aperture applications in the past few decades, [4][5][6][7] with the fast development of precision engineering they have been developed and produced for high-aperture-angle applications. The aperture angle of an elliptical mirror in this article means the full angle subtended by the mirror at the focal spot. An elliptical mirror is free from chromatic aberration, and it can enable the aperture angle to reach π or even go beyond π, 2 which means tighter focal spots may be obtained. Compared with objective lenses, elliptical mirrors can easily be used to extend the aperture angle. What is more, using elliptical mirrors with aperture angles greater than π, the specimen can be illuminated from both the forward and backward directions. The focal spot of a focusing system with a high aperture angle is always tight, and therefore vectorial Debye theory and the apodization function are used to describe the focusing patterns of the elliptical mirrors. 1,8,9
The apodization function reveals the energy redistribution in the spherical wavefront on the exit pupil while the light is passing through, or being reflected by, a focusing element. The spherical wavefront can be taken as approximately equal to the wavefront over the aperture of a focusing element, but this approximation is valid only in the low-aperture case. When the aperture angle is high, the difference between the spherical wavefront and the wavefront over the focusing element's aperture cannot be neglected. The energy distribution is therefore quite different when the light is focused by a lens, a parabolic mirror, or an elliptical mirror in the high-aperture case. The focusing properties of a lens, a parabolic mirror, and an elliptical mirror with a high aperture angle, but less than π, have been discussed in Refs. 10, 7 and 8, and 1 and 3, respectively. An apodization function has been derived earlier in differential form for a focusing elliptical mirror, but it is not applicable for the analysis we intend to do in this article, because the aperture is now greater than π. Simulation based on that apodization function would also be time consuming, because the function is not analytic. Therefore, it is necessary to derive an analytic expression of the apodization function before the focusing properties of an elliptical mirror with an aperture greater than π can properly be analyzed.

Derivation of Apodization Function
The apodization function p(θ) is defined as the ratio of the exit amplitude to the incident amplitude on the mirror at focusing angle θ; if the energy loss caused by reflection or absorption is negligible, it can be expressed as in Eq. (1), where α is the divergence angle at F1 and θ is the convergence angle at F2, as shown in Fig. 1. Points F1 and F2 are the two conjugate foci of the elliptical mirror in an elliptical mirror-based system (EMBS). The position of focus F1 coincides with the focus of the lens. The plane wave converges to focus F1 after it passes through the lens. Then the divergent spherical wave propagates to the mirror surface. The wavefront before reflection is defined as S1, and the wavefront after reflection is defined as S2. Further, as shown in Fig. 1, the mirror surface is described by the ellipse equation, where a = |OV| and b = |OW| are the major and minor semiaxes of the ellipse and c = |OF1| = |OF2| = √(a² − b²). The relationship between α and θ used to calculate p(θ) in Eq. (1) is

α = arctan[(z0 − c) tan θ / (z0 + c)],  (2)

where z0 is the distance between M and F2 along the optical axis Z. Equation (1) is derived from the definition of the apodization function, and it is valid no matter whether the semi-aperture angle θ is greater than π/2 or not. However, as Eqs. (2) and (3) are not continuous at θ = π/2, Eqs. (1)-(3) cannot be used to describe the energy redistribution in the spherical wavefront on the exit pupil when the aperture angle (2θ) is greater than π. By introducing the eccentricity e of the ellipse, this difficulty can be overcome. Besides, the expression for the apodization function given in Eq. (1) is not analytic, and simulation based on this expression is limited in computation speed. An analytic expression of the apodization function must be derived for the analysis of the focusing properties of an elliptical mirror with an aperture angle greater than π.
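To make the geometry above concrete, the following is a minimal numerical sketch, not a reproduction of the paper's Eqs. (1)-(3): it parametrizes a semi-elliptical mirror, computes the divergence angle α at F1 and the convergence angle θ at F2 directly from the geometry, and then estimates an apodization function from the energy-conservation form p(θ) = √(sin α dα / (sin θ dθ)), which is an assumption here and may differ in detail from the derivation in the text.

```python
import numpy as np

# Hedged sketch: elliptical-mirror geometry with foci F1 = (-c, 0) and F2 = (+c, 0)
# on the optical (Z) axis. Angles are computed directly from the geometry rather
# than from the paper's Eqs. (2)-(3); the apodization form below is an assumed
# energy-conservation expression, not necessarily the paper's Eq. (1) or (9).
a, e = 1.0, 0.6                    # major semiaxis and eccentricity (example values)
c = e * a                          # focal distance |OF1| = |OF2|
b = a * np.sqrt(1.0 - e**2)        # minor semiaxis

t = np.linspace(1e-4, np.pi / 2, 4001)   # parameter along the semi-elliptical mirror
z, rho = a * np.cos(t), b * np.sin(t)    # mirror points M = (z, rho)

alpha = np.arctan2(rho, z + c)     # divergence angle of the ray F1 -> M
theta = np.arctan2(rho, z - c)     # convergence angle of the ray M -> F2 (can exceed pi/2)

# Energy conservation between the diverging wave at F1 and the converging wave at F2,
# assuming uniform illumination: |p(theta)|^2 sin(theta) dtheta = sin(alpha) dalpha.
dalpha_dtheta = np.gradient(alpha, theta)
p = np.sqrt(np.sin(alpha) * dalpha_dtheta / np.sin(theta))

print(f"theta_max = {np.degrees(theta.max()):.1f} deg "
      f"(pi - arctan(4/3) = {180 - np.degrees(np.arctan(4/3)):.1f} deg for e = 0.6)")
print(f"p(theta) ranges from {p.min():.3f} to {p.max():.3f}")
```

For e = 0.6 the printed maximum semi-aperture angle of the semi-elliptical mirror agrees with the value quoted later in the text (slightly larger than 126 deg), which is a useful sanity check on the geometry even though the apodization form itself is only an assumed stand-in.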
The eccentricity e and the major semiaxis a are usually used to define an ellipsoid. e is equal to c/a and, for an arbitrary ellipsoid, e is a positive number less than 1. By substituting e = √(a² − b²)/a, the geometric relations can be rewritten in terms of the eccentricity. As shown in Fig. 2, D1C1 and D2C2 are the two directrices of the ellipse. The eccentricity e is also equal to the ratio of the distance from any particular point on the ellipse to one of the foci (such as the line MF1) to the perpendicular distance from the same point to the corresponding directrix (line MD1), i.e., e = |MF1|/|MD1|. From the geometrical relationships in the triangles shown in Fig. 2, the angles α and θ can be expressed in closed form; by substituting these expressions (Eqs. (7) and (8)) into Eq. (1), the apodization function can be simplified. Equation (9) is the analytical expression of the apodization function for an elliptical mirror, and if θmax is the largest semi-aperture angle of a given elliptical mirror, the apodization function is continuous on the domain of definition [0, 2θmax], even when θmax ≥ π/2.

3 Focusing Patterns under Linearly Polarized Illumination
As shown in Fig. 3, the electric field of the incident light wave is linearly polarized along the X-axis, and after the light passes through the lens and is reflected by the mirror, the electric field distribution near the focus F2 can be described, following Ref. 1, by the three Cartesian components E_x, E_y, and E_z, where A is a constant that satisfies A = π/λ, and I0, I1, and I2 are intermediate variables used to keep the expressions for E_x, E_y, and E_z compact. They are given by

I_0 = \int p(\theta)\,\sin\theta\,(1+\cos\theta)\, J_0(k r_s \sin\theta_s \sin\theta)\, e^{-i k r_s \cos\theta_s \cos\theta}\, d\theta,
I_1 = \int p(\theta)\,\sin^2\theta\, J_1(k r_s \sin\theta_s \sin\theta)\, e^{-i k r_s \cos\theta_s \cos\theta}\, d\theta,
I_2 = \int p(\theta)\,\sin\theta\,(1-\cos\theta)\, J_2(k r_s \sin\theta_s \sin\theta)\, e^{-i k r_s \cos\theta_s \cos\theta}\, d\theta,   (11)

where (r_s, θ_s, φ_s) are spherical coordinates near the focus F2, and J_n is the Bessel function of the first kind of order n. Using the established expression for the electric field and letting the eccentricity e be 0.6, the light field near the focus F2 is obtained for a semi-aperture angle of 2π/3. As shown in Fig. 4, the electric field intensity in the focal plane mainly consists of components polarized along the X- and Z-axes. The maximum values of the field intensity in Figs. 4(a) and 4(c) are close to each other. The distribution of the total intensity splits into two peaks, since the two main components are distributed in different directions. The bone-shaped distribution makes the focusing pattern different from the Airy pattern in the transverse direction, which is obtained using a low-aperture focusing lens. The bone-shaped focusing pattern becomes more obvious as the aperture increases, which means the distance between the two peaks further increases. This means that the depolarization effect becomes significant in the transverse direction under high-aperture conditions. Meanwhile, the component of the electric field along the Z-axis is enhanced in intensity when the convergence angle increases.
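As an illustration of how the diffraction integrals of Eq. (11) can be evaluated, the sketch below computes I0, I1, and I2 numerically in the focal plane. The apodization function and the upper integration limit are placeholders (a uniform p(θ) up to an assumed θmax), and the comment describing how the integrals combine into field components follows standard vectorial Debye (Richards-Wolf) conventions, which may differ in constant factors and signs from the paper's expressions.

```python
import numpy as np
from scipy.special import jv

# Hedged numerical sketch of the Debye integrals I0, I1, I2 of Eq. (11).
# p(theta) is a uniform placeholder and theta_max is an assumed aperture;
# prefactors and signs of the field components are not taken from the paper.
lam = 1.0                      # wavelength (arbitrary units)
k = 2 * np.pi / lam
theta_max = 2 * np.pi / 3      # assumed maximum convergence angle (aperture > pi)
theta = np.linspace(1e-6, theta_max, 4000)
p = np.ones_like(theta)        # placeholder apodization p(theta)

def debye_I(n, r_s, theta_s):
    """I_n = integral of p sin(theta) g_n(theta) J_n(k r_s sin(theta_s) sin(theta))
    exp(-i k r_s cos(theta_s) cos(theta)) d(theta)."""
    g = {0: 1 + np.cos(theta), 1: np.sin(theta), 2: 1 - np.cos(theta)}[n]
    integrand = (p * np.sin(theta) * g
                 * jv(n, k * r_s * np.sin(theta_s) * np.sin(theta))
                 * np.exp(-1j * k * r_s * np.cos(theta_s) * np.cos(theta)))
    return np.trapz(integrand, theta)

# Focal-plane scan: theta_s = pi/2 places the observation point in the focal plane.
r = np.linspace(0.0, 2 * lam, 200)
I0 = np.array([debye_I(0, ri, np.pi / 2) for ri in r])
I1 = np.array([debye_I(1, ri, np.pi / 2) for ri in r])
I2 = np.array([debye_I(2, ri, np.pi / 2) for ri in r])

# In standard Richards-Wolf notation for x-polarized input, along the phi_s = 0 meridian,
# |Ex|^2 ~ |I0 + I2|^2 and |Ez|^2 ~ 4|I1|^2, up to constant factors and sign conventions.
print("on-axis |I0| =", abs(I0[0]))
```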
It can be seen from Fig. 5 that the ratios of the maximum intensity of component X to that of component Z versus the semi-aperture angle θ for e = 0.6 and e = 0.8 agree with each other, despite the different eccentricities. It should be noted that, in view of the fabrication technology, we only consider the semi-elliptical mirror. This means that the maximum semi-aperture angles θmax of elliptical mirrors with different eccentricities are different; for example, when the eccentricity e is 0.6, θmax = π − arctan(4/3), which is slightly larger than 126 deg. Similarly, when e is 0.8, θmax is slightly larger than 143 deg. For an elliptical mirror with e = 0.8, the electric field intensity of component Z can be enhanced until it finally exceeds the intensity of component X as the angle θ increases. Theoretically, θmax could exceed 143 deg for a full elliptical mirror with e = 0.8. In that case, the electric field intensity in the focal plane under linearly polarized illumination can still be calculated using Eqs. (9)-(11). However, the calculation is beyond the scope of this article, as such an elliptical mirror is difficult to fabricate. Figure 6 shows the axial distribution of the electric field intensity under the same conditions as in Fig. 4. It is clear that the spot is compressed in the axial direction as the semi-aperture angle increases.

Focusing Patterns under Radially Polarized Illumination
Since tight spots can be acquired under radially polarized illumination and the longitudinal component of the electric field near the focus is significant, [11][12][13][14] the focusing properties of elliptical mirrors under radially polarized illumination are discussed in the high-aperture case, when the aperture angle is greater than π. The electric field near focus F2 consists of radial and longitudinal components, and they can be expressed in cylindrical coordinates (ρ_s, θ_s, z_s) as in Eq. (12), where l(α) describes a Bessel-Gauss beam waist at the plane before the lens 15 and β is the ratio of the pupil radius to the beam waist before the aplanatic lens, which is set to 1 in the general sense. The numerical aperture of the lens in the EMBS is 0.95. If the eccentricity e is 0.6 and the semi-aperture angle is 2π/3, the focal distributions of the light field are as shown in Fig. 7. It is interesting to see that the distribution of the radial component is no longer doughnut-shaped, but splits into two separate doughnut-shaped areas when the aperture angle is greater than π. As shown in Fig. 8, the full-width-at-half-maximum (FWHM) of the point-spread function is nearly 0.38λ in the transverse direction and 0.76λ in the axial direction. The focal spot is tighter than the diffraction limit of a lens system, even when the numerical aperture of the lens is taken to be 1. The extent of compression of the focal spot benefits from a deep mirror, and so a deep elliptical mirror generates a spot tighter than that focused by a normal elliptical mirror. It can be seen from Table 1 that for a mirror with e = 0.6 the tightest spot in the transverse direction is obtained when the semi-aperture angle is between 110 and 115 deg, and for a mirror with e = 0.8 the same is obtained at nearly 110 deg. It is interesting to see that, compared with the variation tendency of the transverse FWHM, the axial FWHM decreases steadily as the aperture angle increases. Aperture extension causes the axial compression of the focal spot.
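The radial-polarization case discussed above can be sketched in the same way. The integrals below are standard Richards-Wolf-type expressions for a radially polarized input with a Bessel-Gauss pupil function (β = 1), of the kind used by Youngworth and Brown; they are a stand-in for the paper's Eq. (12), not a reproduction of it. The apodization is again a placeholder, so the printed width only illustrates the kind of computation behind the quoted FWHM values, and is not expected to reproduce Fig. 8 exactly.

```python
import numpy as np
from scipy.special import jv

# Hedged sketch of the focal-plane field for radially polarized illumination.
# Assumed standard integral forms (not the paper's Eq. (12)), in the focal plane z_s = 0:
#   E_r(rho) ~ integral of l(theta) p(theta) sin(theta) cos(theta) J1(k rho sin(theta)) d(theta)
#   E_z(rho) ~ integral of l(theta) p(theta) sin^2(theta)          J0(k rho sin(theta)) d(theta)
lam = 1.0
k = 2 * np.pi / lam
theta_max = 2 * np.pi / 3             # assumed semi-aperture angle
beta = 1.0                            # pupil-radius / beam-waist ratio, as stated in the text
theta = np.linspace(1e-6, theta_max, 4000)
p = np.ones_like(theta)               # placeholder apodization

# Assumed Bessel-Gauss illumination function, referenced to an assumed pupil edge.
sin_edge = np.sin(min(theta_max, np.pi / 2))
l = np.exp(-(beta * np.sin(theta) / sin_edge) ** 2) * jv(1, 2 * beta * np.sin(theta) / sin_edge)

rho = np.linspace(0.0, 1.5 * lam, 600)
Er = np.array([np.trapz(l * p * np.sin(theta) * np.cos(theta)
                        * jv(1, k * r * np.sin(theta)), theta) for r in rho])
Ez = np.array([np.trapz(l * p * np.sin(theta) ** 2
                        * jv(0, k * r * np.sin(theta)), theta) for r in rho])

I_tot = np.abs(Er) ** 2 + np.abs(Ez) ** 2
half = I_tot.max() / 2
# Crude FWHM estimate, assuming the peak of the total intensity lies on the axis.
fwhm = 2 * rho[np.where(I_tot < half)[0][0]]
print(f"illustrative transverse FWHM of the total intensity: {fwhm / lam:.2f} lambda")
```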
Another advantage of radially polarized illumination is that a high longitudinal polarization conversion efficiency can be achieved in the focal plane, 13,16 and the efficiency can be defined as in Eq. (13). Much work has been done on the realization of a strong longitudinally polarized focal field because of its applications in particle acceleration, optical tweezers, and high-resolution microscopy. Existing lens systems can be used to increase the conversion efficiency to nearly 80%, which means that the energy of the longitudinal component within the focal volume is nearly four times stronger than that of the radial component. It can be seen from Fig. 9 that the largest values of the ratios for different eccentricities are different, and the semi-aperture angles required to produce the tightest focal spot are not alike either. A deeper elliptical mirror with e = 0.8 can focus a stronger longitudinal field than a shallower elliptical mirror with e = 0.6. The four lines in Fig. 9 show the maximum and the integral values of |E_z|²/|E_r|² versus the semi-aperture angle, and their peaks are greater than 99, which means that the conversion efficiency defined in Eq. (13) can go beyond 99% when the aperture angles of elliptical mirrors with different eccentricities are greater than π. It can, therefore, be concluded that the elliptical mirror is a more effective focusing element for acquiring a strong and pure longitudinal field in the focal plane.
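One common way to write the longitudinal conversion efficiency referred to above, consistent with the statement that an energy ratio |E_z|²/|E_r|² greater than 99 corresponds to an efficiency beyond 99%, is the focal-plane energy fraction below. This is an assumed form, and the paper's Eq. (13) may differ in detail (for example, in the radius ρ0 of the region over which the energy is evaluated, which is an assumption here):

\eta \;=\; \frac{\int_0^{\rho_0} \left|E_z(\rho_s, z_s = 0)\right|^2 \,\rho_s \, d\rho_s}
{\int_0^{\rho_0} \left( \left|E_r(\rho_s, z_s = 0)\right|^2 + \left|E_z(\rho_s, z_s = 0)\right|^2 \right) \rho_s \, d\rho_s}

With this definition, an integrated energy ratio |E_z|²/|E_r|² of 99 in the focal plane gives η = 99/(99 + 1) = 99%, matching the way the ratio curves in Fig. 9 are interpreted.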
Conclusions
A concise analytic expression of the apodization function was derived for an elliptical mirror. The focusing properties under linearly and radially polarized illumination were studied using this apodization function when the aperture angle of the elliptical mirror is greater than π. It was found that a bone-shaped focal spot can be generated by focusing a linearly polarized beam and that a tight, circularly symmetric spot can be formed by focusing a radially polarized one. The conversion efficiency of the longitudinal field in an EMBS can go beyond 99% under radially polarized illumination, which means that a quite strong and pure longitudinal field near the focal region can be obtained. The nature of an elliptical mirror with two conjugate foci makes it attractive for scanning microscopy, optical tweezers, and particle acceleration.

Fig. 1 Reflection on an elliptical mirror. |OV| = a, |OW| = b, and |OF1| = |OF2| = c = √(a² − b²). z0 is the distance between M and F2 along the optical axis Z.
Fig. 2 Cross-section of the ellipsoid in the Y-Z plane.
Fig. 3 Schematic diagram of focusing by an elliptical mirror. φ is the angle between the X-Z plane and the meridian plane, and θ is the semi-aperture angle.
Fig. 4 Distribution of electric field intensity in the focal plane under linearly polarized illumination. (a)-(c) are the intensity distributions of E_x, E_y and E_z; (d) is the total field intensity distribution.
Fig. 5 Ratios of the maximum intensity of component X to the maximum intensity of component Z versus the semi-aperture angle θ.
Fig. 6 Axial distribution of the electric field intensity under linearly polarized illumination. (a) and (b) are the intensity distributions in the X-Z and Y-Z planes.
Fig. 7 Axial distribution of electric field intensity under radially polarized illumination. (a) and (b) are the intensity distributions of E_r and E_z; (c) is the total field intensity distribution.
Fig. 8 Point-spread function of an elliptical mirror-based system (EMBS) with e = 0.6 and θmax = 2π/3 in the transverse and axial directions under radially polarized illumination.
Fig. 9 Ratios of the longitudinal and radial components of the field intensity of elliptical mirrors for different eccentricities. The two components are compared by maximum and integral intensity in the focal plane, marked by data in black and red, respectively.
Table 1 Full-width-at-half-maximum (FWHM) in the transverse and axial directions.
3,806.6
2013-12-02T00:00:00.000
[ "Physics" ]
A Novel Suture-Based Vascular Closure Device to Achieve Hemostasis after Venous or Arterial Access While Leaving Nothing behind: A Review of the Technological Assessment and Early Clinical Outcomes Vascular hemostasis after venous and arterial access in cardiovascular procedures remains a challenge. As sheath size gets larger for structural heart and vascular procedures, no dedicated closure devices exist that can overcome all the challenges of achieving vascular hemostasis, in particular on the venous side. Efficiently and reliably ensuring hemostasis at the access point is crucial for enhancing the safety of a procedure. Historically, hemostasis relied on manually compressing venous access sites. However, the shift towards larger sheaths and the more frequent use of continuous anticoagulation has strained this approach. Achieving hemostasis solely through compression in these scenarios demands heightened vigilance and prolonged application, resulting in increased patient discomfort and extended immobility. Consequently, manual compression may consume more time for healthcare providers and contribute to bed occupancy in hospitals. This review article summarizes the development of the SiteSeal® Vascular Closure Device, a novel leave-nothing-behind approach to achieve hemostasis. The introduction of this technology has provided clinicians with a safer and more effective way to achieve immediate hemostasis, facilitate early ambulation, and enable earlier discharges with fewer access site complications compared with traditional manual compression. Introduction Recent advancements in percutaneous transvenous techniques for treating cardiac abnormalities, such as catheter ablations and structural heart and wireless pacemaker implantations, have transformed the landscape of cardiac care.The management approach for atrial fibrillation (AF) has notably transitioned from traditional medications and anticoagulants to procedures like ablation devices.This evolution has conversely led to a significant surge in the absolute number of procedures performed globally each year, reaching an industrial scale.Consequently, healthcare professionals are facing mounting pressure to streamline patient processing, aiming to mobilize and discharge them within hours post-procedure [1][2][3][4][5]. Several factors have contributed to the heightened importance of venous access site management in these procedures.These include the emergence of techniques necessitating larger venous sheaths, a rise in patients requiring long-term anticoagulation, and the expanding acceptance of older and more obese individuals for invasive interventions.The femoral vein is the primary choice for venous access due to its favorable characteristics, including its large size and consistent anatomy.It reliably accommodates multiple sheaths with outer diameters of up to 27 F in a single vessel [6][7][8].In contrast, veins in the upper body are infrequently utilized.Jugular access is uncomfortable for patients, while subclavian or axillary access carries risks such as hemothorax, pneumothorax, hematomas, pseudo-aneurysms, and potential nerve damage.Additionally, the veins in the arm are often too small for the required procedures. 
Traditional venous access site hemostasis has relied on manual compression, which remains the benchmark for assessing the efficacy of other hemostasis methods, especially for small-caliber venous sheaths. However, even with smaller sheaths, achieving hemostasis through compression alone can be time-consuming, taking up to 30 min, causing discomfort for patients, and placing additional burdens on medical staff. Moreover, the mandatory immobilization period of 4-8 h following manual compression adds to the cost and patient discomfort, as patients need to lie flat with no mobilization. This approach also carries risks, including the potential for deep vein thrombosis and bleeding complications stemming from incomplete control or vascular injury at the access site, leading to hematoma, arteriovenous fistulae, and pseudoaneurysm formation [9][10][11][12]. Figure-of-eight (F8) suture is an alternative approach for large-bore venous-access closure, including multiple sheaths in a single vessel [13,14]. Prior studies have evaluated the safety and efficacy of the F8 suture technique through venography or vascular ultrasound. Studies have shown that F8 achieved faster hemostasis, resulting in faster ambulation and a shorter overall hospital stay. Additionally, it led to significantly fewer access site complications compared to manual compression [15,16]. However, time to ambulation is still within hours (4-10), and time to discharge is typically 1 day [17][18][19]. In this paper, we describe a novel 'leave-nothing-behind' single vascular closure device (VCD) capable of enhancing patient mobility. The development, deployment technique, and clinical performance of this novel VCD will be discussed.

SiteSeal Vascular Closure Device
The SiteSeal® (EnsiteVascular LLC, Olathe, KS, USA) is an FDA market-cleared, simple-to-use, atraumatic vascular closure device that achieves hemostasis with minimal complications. SiteSeal is designed for every patient, regardless of shape, size, or calcification, ensuring the vessel is closed without leaving anything behind. The SiteSeal is a spring-loaded polypropylene device (Figure 1). The springs not only provide pressure during deployment to achieve hemostasis, but also help modulate heartbeat pressure variation without stopping blood flow through the vessel. Also included with the SiteSeal polypropylene device are a 2-0 Vicryl suture, hemostatic powder, a tincture of benzoin or mastisol, and elastic tape dressings for securing the device following deployment.

Deployment and Removal of the SiteSeal VCD
SiteSeal utilizes a number 2-0 Vicryl suture to make a Z-stitch to hold the SiteSeal device in place and close the venotomy site in a linear fashion. The Z-stitch is placed by entering the soft tissue at the skin insertion site of the sheath. The first entrance is approximately 1 cm east of the sheath, passing under the sheath and exiting 1 cm west of the sheath. The second entrance for the Z-stitch is 1 cm above the skin insertion of the sheath and 1 cm to the east. The needle then crosses up and over the sheath and back down into the soft tissue, exiting 1 cm west of the sheath. The Z-stitch suture forms a double half knot which, when closed, creates an 'X' over the venotomy site (Figure 2A). Hemostatic powder is placed around the sheath and the half-knot to minimize oozing from the suture holes (if there are any) following suture removal (Figure 2B).
The SiteSeal device is cocked by turning the crossbar horizontally and applying pressure, which results in the loading of the springs. It is then positioned over the sheath at the venotomy site with the incline plane facing north (Figure 2C). The sheath is then removed while the two loose suture ends are pulled tight against the sheath. The suture ends are pulled up through the design slots and the suture is tightly knotted (Figure 2D). The loaded springs are then released by turning the crossbar back to a vertical position. This action allows the pressure created by the Z-stitch to continue to fold the soft tissue surrounding the venotomy site, closing the opening in a linear fashion (Figure 2E). The roof is positioned onto the SiteSeal, and elastic tape is placed on top for stabilization (Figure 2F). An angiogram following the positioning of the SiteSeal demonstrates the closure of the venotomy site in a linear fashion. The removal of the SiteSeal is first achieved by removing the elastic tape and then uncapping the roof of the SiteSeal. The suture between the base and the crossbar is then cut, allowing for the removal of the SiteSeal. The next step is to remove the suture by cutting the long axis of the Z-stitch. Both ends of the Z-stitch are then removed.
Clinical Experiences with the SiteSeal™ VCD
From November 2022 to April 2024, 3628 SiteSeal VCDs were placed in 2227 patients. The device has been utilized by 29 physicians at 15 different clinical sites. A total of 1464 cases included its use for electrophysiology interventions. In this subset, 424 of the 1464 patients were female (29%), with a mean age of 62 ± 11 years and 58 ± 11 years for females and males, respectively. In total, 879 of the 1464 patients (60%) were treated with three sheaths in the right common femoral vein and one sheath in the left common femoral vein. The three sheaths in the right common femoral vein consisted of two small sheaths (up to 10 Fr.) and one medium sheath (12-16 Fr.). The sheath in the left common femoral vein consisted of a medium sheath. For these patients, the left and right groin were closed with one SiteSeal device per groin. In total, 512 of the 1464 patients (35%) were treated with two sheaths (one small and one medium sheath) in the right common femoral vein and two sheaths in the left common femoral vein. For these patients, the left and right groin were closed with one SiteSeal device per groin. And, in 73 of the 1464 patients (5%), two sheaths (one small and one large; 16-27 Fr.) were positioned in the right common femoral vein. For these patients, the right groin was closed with a single SiteSeal device. Overall, in patients treated with the SiteSeal device for electrophysiology intervention, there were 0 major complications and 2.3% minor complications, with zero reintervention procedures.

In addition to electrophysiology intervention, a total of 105 cases included its use for structural heart procedures. In this subset, 49 of the 105 patients were female (47%), with a mean age of 70 ± 12 years and 73 ± 12 years for females and males, respectively. The patients were treated using one large sheath (16-27 Fr.) in the right common femoral vein and subsequently closed with a single SiteSeal device. In this cohort of patients, there were 0 major complications and 2.7% minor complications, with zero reintervention procedures.

The final cohort of patients treated with the SiteSeal devices included patients undergoing coronary artery procedures, with a total of 668 involving its use. In this subset, 200 of the 668 patients were female (30%), with a mean age of 73 ± 12 years and 75 ± 13 years for females and males, respectively. The patients were treated using one small sheath (up to 10 Fr.) in the right common femoral artery and subsequently closed with a single SiteSeal device. In this cohort of patients, there were 0 major complications and 2.5% minor complications, with zero reintervention procedures.
The overall clinical outcomes have shown a faster patient turnaround, improved patient satisfaction, the alleviation of healthcare burdens, and a lower risk of infection. These can be attributed to reducing the time to ambulation, patients' ability to immediately elevate their head, and no restriction on leg movement following the deployment of the SiteSeal VCD. The design and delivery of the SiteSeal also leave nothing behind following positioning, reducing the risk of foreign-body reactions and any access site complications. The SiteSeal VCD has no French size limitations within the arterial or venous systems, enabling its broad utility for physicians. And, lastly, the use of a single SiteSeal VCD can close multiple sheaths in a single vessel, as well as large bore sheaths, accelerating patient time to discharge and improving patient turnover rates with fewer complications.

In a recent single-center, prospective, published clinical study, the clinical outcome of the SiteSeal was compared to two vascular closure devices: the Vascade, a collagen-plug closure device, and Perclose, a suture-based closure device. The mean age and BMI of the patients were 68 ± 11 years and 29.7 ± 6.2 kg/m², respectively. In total, 46 of the 138 patients were female (33%), and femoral vein access was obtained in all, bilaterally in 113 (82%), with a mean of 3 ± 1 access sites per patient, and a sheath size ranging from 6- to 23-Fr. Femoral arterial access was obtained in 14 (10%) cases, once per patient, with a sheath size ranging from 5- to 8-Fr. The results of this study showed no arterial access complications. On the venous side, zero (0/40) patients treated with the SiteSeal had hematomas, whereas both the Vascade group (3/57) and the Perclose group (1/41) had incidences of hematomas. In addition, the time to ambulation was less in the SiteSeal group (123 ± 11 min) compared to Vascade (139 ± 11 min) or Perclose (137 ± 34 min), although the differences did not reach significance [20].

Figure 3 shows the simple workflow of the SiteSeal placement in a clinical setting. The Z-stitch is placed, and a double-half knot is positioned at the venotomy site. Hemostatic powder is then placed at the knot and sheath location. The device is then positioned over the sheath with the incline plane facing the heart. Following the removal of the sheath, the operator pulls the suture ends tight and up through the designed slots, activating the spring mechanism. The operators then place a second device on the opposite groin, followed by placing elastic tape on top for stabilization. The tape, device, and sutures are then removed before patient ambulation.
Benefits of the SiteSeal™ VCD as Compared to Manual Compression and Figure-of-Eight Suture Techniques
While arterial access for percutaneous coronary interventions has shifted from predominantly femoral to predominantly radial due to equipment size reductions, the trend in transvenous interventions has been toward larger-diameter equipment. Consequently, upper limb veins are rarely adequate for these procedures. Traditional methods for achieving hemostasis at venous access sites typically involve manual compression. This approach is effective for smaller venous sheaths and serves as the benchmark for evaluating other hemostatic techniques. However, even with smaller sheaths, achieving hemostasis through compression alone can be time-consuming, often taking up to 30 min. This prolonged compression period can be uncomfortable for patients and adds to the workload of medical staff. Following manual compression, patients are typically required to remain immobilized for 4-8 h, which further contributes to the inconvenience and cost of the procedure [7,9,18,21]. Immobilization, aimed at reducing the risk of access site complications, leads to discomfort for patients, particularly in the form of back pain. Additionally, immobilization poses a well-documented risk factor for the development of deep vein thrombosis and embolism [22,23]. To overcome some limitations of manual compression after sheath removal in achieving post-procedure venous hemostasis, suture techniques such as figure-of-eight sutures are now widely used as an alternative. F8, along with modifications such as the F8 suture with a three-way stopcock, has been implemented primarily to minimize the time associated with manual compression [24,25]. Several authors have highlighted that achieving an optimal workflow is among the primary challenges faced by healthcare organizations [26,27]. Turnaround time is essential for optimizing processes and ensuring patient satisfaction [26][27][28]. Improvement in workflow in the catheterization laboratory is therefore desirable. The F8 suture technique has improved the workflow compared to manual compression; however, the rate of same-day discharges remains low, at 12.3% for F8 and 3.2% for manual compression [18]. Even in the case of the modified F8, which adds a three-way stopcock, there was no change in discharge time (1.2 ± 0.4 days vs. 1.3 ± 0.6 days; p = 0.232) [18].
The main goal of the SiteSeal VCD has been to address this shortcoming and to further improve workflow, allowing for more same-day discharges. The design and implementation of the SiteSeal accelerates the hemostasis process by applying the pressure load at the venotomy site using the loaded springs, tightening the Z-stitch and ultimately closing the vascular opening in a linear fashion. This action differs from F8 or manual compression due to the combination of the Z-stitch and the SiteSeal design. Specifically, the SiteSeal is designed to allow the Z-stitch to close the venotomy site in an east-west fashion, essentially closing the venotomy in a longitudinal manner. Furthermore, the placement of the SiteSeal applies pressure on the proximal side, controlling blood flow and allowing a platelet-plug formation on top of and distal to the venotomy. Additionally, the release of the springs (Figure 4) creates an increase in the Z-stitch tension, further closing the venotomy, and increases focal downward pressure. From our early clinical experiences with no major complications or need for re-intervention, the majority of patients have been discharged on the same day. In cases where patients were discharged the next day, none were related to the SiteSeal device; the delays were mainly due to patient choice (e.g., driving home late at night) or the physician's choice owing to significant associated comorbidities. In addition to improving the overall workflow, the SiteSeal VCD was also intended to increase patient comfort. Patients treated with F8 are required to lie flat for a minimum of 2 h in a supine position, after which they can elevate the head of the bed but still must rest in a supine position for another few hours. Patients undergoing manual compression experience longer durations. Patients treated with the SiteSeal VCD, however, can immediately elevate their head to 30 degrees with no restriction on leg movement. This allows for patient comfort and minimizes back and leg pain, as they can move around. Additionally, head elevation enables patients to eat and drink immediately, decreases the chance of aspiration, and allows for more efficient breathing.
The figure-of-eight technique is also susceptible to rebleeding due to loosening of the suture, and there is an inability to retighten the suture in the case of rebleeding [25]. Additionally, as there is no applied downward pressure, there is also potential for bleeding when the F8 is removed. The SiteSeal device crossbar design allows for the adjustment of pressure at the venotomy site. The design also enables medical staff to inspect the puncture site for bleeding or any signs of hematoma after the SiteSeal VCD has been implemented. In addition, because the crossbar design applies the additional load onto the venotomy site, the risk of the knot snapping at the last step, as observed in F8 sutures, is minimized. Finally, since the Z-stitch is pulled up through the design slots of the SiteSeal VCD, it is easier to cut and remove, whereas in F8 sutures the taut sutures often become buried in the pinched skin, making removal difficult and frequently leading to fragmentation [28,29]. It is noteworthy that other techniques, such as suture or collagen delivery devices, have also been shown to be effective for immediate femoral venous hemostasis. However, these devices are not applicable for every size of sheath (most of them are approved up to 12 Fr), have significant costs because they require multiple devices for multiple sheaths in a single vessel or for a large-bore sheath, and may cause clinical complications [18,30]. The SiteSeal is the only vascular closure device cleared by the FDA with no restriction on sheath size for both arterial and venous access. It can close multiple sheaths within a single vessel, as well as large bore sheaths, with one closure device for both arterial and venous access, while leaving nothing behind.

Conclusions
Effective venous hemostasis is essential for the safe performance of procedures for cardiac arrhythmias, left atrial appendage closure devices, wireless pacemakers, and other structural heart procedures. The SiteSeal VCD provides a safer and more effective way to achieve hemostasis. The SiteSeal can achieve immediate hemostasis, facilitate early ambulation, and enable earlier discharges with fewer venous access site complications compared with traditional manual compression. Furthermore, the workflow and efficiency are much improved compared to suture-based closing techniques such as the F8. Along with the novel closure of the venotomy, the ability of the patient to have an elevated head and freedom of leg movement improves the overall procedure efficiency, comfort, and safety.
Figure 1. Illustration of the SiteSeal VCD. The SiteSeal design includes a crossbar, springs, a suture slot, a notched slot, and an incline plane. The incline plane is designed to position the north side towards the heart on the proximal side of the venotomy site.
Figure 2. Step-by-step deployment of the SiteSeal VCD at a venotomy site. (A) A Z-stitch using a 2-0 Vicryl suture is created and a double half knot is formed, creating an 'X' over the venotomy site. (B) Hemostatic powder is then placed around the sheath and the half-knot. (C) The SiteSeal is then positioned over the sheath at the venotomy site with the incline plane facing north. (D) The sheath is then removed while the two loose suture ends are pulled tight against the sheath. The suture ends are pulled up through the design slots and tightly knotted. (E) Following unloading of the springs, the tension created by the Z-stitch continues to fold the soft tissue surrounding the venotomy site, closing the opening in a linear fashion. (F) This is followed by placing elastic tape on top of the SiteSeal device for stabilization.
Figure 3. Clinical images showing the placement of the SiteSeal VCD. Briefly, (A) a Z-stitch using a 2-0 Vicryl suture is created and a double half knot is formed, creating an 'X' over the venotomy site. (B) Hemostatic powder is then placed around the sheath and the half-knot. (C) The SiteSeal is then positioned over the sheath at the venotomy site with the incline plane facing north. The sheath is then removed while the two loose ends of the suture are pulled tight against the sheath. The suture ends are pulled up through the design slots and the suture is tightly knotted. Following the release of the loaded springs, the roof is placed on the SiteSeal and (D) the procedure is repeated on the opposite groin. (E) This is followed by placing elastic tape on top of both SiteSeal devices for stabilization. (F) The removal of both SiteSeals demonstrates no hematoma or bleeding at the venotomy site.
Figure 4. The beneficial design of the SiteSeal spring system. The release of the springs by the operator leads to an increase in the Z-stitch tension and increases focal downward pressure.
6,605.2
2024-08-01T00:00:00.000
[ "Medicine", "Engineering" ]
Design of a Web-Based Digital Learning Resource Center to assist online learning with mathematics content in primary schools

The implementation of online learning policy has become an alternative solution, but it also raises various problems for teachers, students, and the parents who accompany their children studying online from home. This is especially true for learning with mathematics content, which already carries a negative stigma. This development process aimed to assist teachers in the learning process by providing a digital learning resource center (LRC) that can be used by teachers, students, and parents. The main focus of this study is 1) the design of the digital LRC website, 2) the development of a digital LRC website that teachers can use in delivering the learning process, and 3) the provision of helpful guidance for teachers in developing digital learning by utilizing the digital LRC. The development of the website-based digital LRC is realized through a seven-step Software Development Life Cycle (SDLC) design model. This paper focuses only on Systems Analysis and Requirements and on describing the DLRC system design to be developed. The design resulting from this study is a website-based digital LRC that can be utilized by both teachers and students. Optimization of this digital LRC has the potential to reduce the time and effort teachers spend preparing for learning.

Introduction

The coronavirus pandemic emergency status announced by the WHO has had an impact on communities worldwide, including Indonesia. Apart from its impact on the community through the WFH (Work from Home) policy, under which all work must be done from home, the virus has also affected the educational process. With interaction between communities restricted, the Ministry of Education and Culture of Indonesia issued a policy dismissing in-person schooling and replacing learning activities with distance education, delivered both online and offline, called Learning from Home. The home learning that has been initiated can be an alternative solution with many advantages in facing the challenges of education during this pandemic [1,2]. Students in particular feel that online learning is less enjoyable and more difficult [3]. In the students' opinion, this happens because of changing learning patterns, increasingly irregular time settings, unsupportive devices and signal facilities, lack of interaction, and monotonous methods. Among the reasons for the difficulty of online learning during the COVID-19 pandemic, the monotonous method is one that can be prioritized for improvement [4]. One way to improve this is to increase the availability of digital learning resources so that teachers have options for providing varied learning [5]. Essentially, learning resources can be divided into self-designed learning resources and utilized learning resources [6,7]. Learning resources by design require the ability of teachers to develop learning resources both conceptually and technically.
In contrast, utilized learning resources do not require the teacher to develop learning resources from scratch; they only require the ability to select and modify learning resources deemed appropriate to meet learning needs. The use of media and learning resources, besides increasing learning motivation and student focus, is also able to enrich the learning offering [8,9]. In particular, self-designed learning resources are very suitable when there is spare time and abundant resources, while utilized learning resources are highly recommended when learning resources are needed in a short time and the resources required for development are limited [10]. In the context of digital learning resources, one of the prerequisites for development is adequate digital skills; if these competencies are not available, it is better to use utilized learning resources. Three important principles should be applied in online learning during the COVID-19 period: providing varied activities, keeping material duration short, and emphasizing the interaction process [11]. Teachers will find it easier to provide learning variation if they choose to use teaching materials by utilization. Utilized teaching materials have the advantage of providing variation because they are made by many different people, even though they share the same material orientation [10]. Materials that tend to be short make it easier for students to digest the learning, given the significant reduction in focus during online learning [12]. Teachers can focus more on building interactive learning environments if they have sufficient time, which can only happen if that time is not consumed preparing teaching materials. Early research results reveal the causes of the difficulty of online learning during the COVID-19 pandemic, and one of them is the inability of teachers to produce digital learning resources. This is one of the causes that can be prioritized for resolution, especially for mathematics instruction. Mathematics learning, on the other hand, faces quite a tough challenge: even in face-to-face learning, it tends to carry a negative stigma in elementary school and is regarded as a difficult subject [9]. This potential may increase with the limitations of online learning. It is therefore necessary to anticipate this by supporting teachers in preparing their instruction, especially for mathematics in elementary schools. The main focus of this paper is to explain the activities towards 1) developing a digital learning resource center website that teachers can use in delivering the learning process, and 2) providing an overview for teachers on developing mathematics learning by using the utilized digital learning resource center.

Research methods

The main focus of this study is 1) the design of the digital LRC website, 2) the development of a digital LRC website that teachers can use in delivering the learning process, and 3) the provision of helpful guidance for teachers in developing digital learning by utilizing the digital LRC. These objectives are pursued using Software Development Life Cycle methods, one of the most effective approaches for designing, building, and maintaining information systems [13].
The development of the website-based digital LRC is realized through the adoption of a seven-step Software Development Life Cycle (SDLC) design model: 1) Planning, 2) Systems Analysis and Requirements, 3) Systems Design, 4) Development, 5) Integration and Testing, 6) Implementation, and 7) Operations and Maintenance. This paper focuses only on the first three steps, from Planning and analyzing the system and its requirements to the system design. The ease of following its stages is the main reason for choosing SDLC as a design-and-development method. Apart from that, the system development life cycle (SDLC) focuses on data, as opposed to processing and functionality [14]. This matches the needs of the system, which relies on managing a large amount of data, while the functions of the DLRC system are quite simple. One of the results of the third phase is the categorization of users. There are three categories of DLRC system users: guest, contributor, and admin. The guest role has the authority to access material and download it directly or simply copy the link. Contributors have the authority of guests plus the ability to add teaching materials. Administrators have the authority of guests and contributors and are additionally able to manage users and all content. The permission map of each user can be seen in figure 1.

Discussion

What main abilities does the DLRC website have? Analyzing the main capabilities of the DLRC website is quite a dilemma. There is a desire to produce a perfect output, but due to the limitations of the current situation and conditions, the decision was made to keep the main features of the DLRC system very simple. The DLRC system that was built is expected to provide opportunities to share digital teaching materials that are free to use, together with mutual appreciation for teaching materials that have been developed and shared, so that through this sharing effort each user, especially teachers, can take advantage of a variety of shared teaching materials [10].

What resources does the teacher have? Teachers tend to use the combination of student and teacher books from the government without further development, so the possibility of variation in learning materials is quite small, especially for mathematics. Moreover, mathematics teachers tend to be more confident when delivering material directly and then reinforcing it through a direct mentoring process [9]. In general, therefore, teachers do not have many variations of teaching materials ready to be presented to support an ideal learning process.

What are the resources that can be utilized? The abundance of learning resources on the internet today is an undeniable fact. However, there is no validation of, or information about, whether the material is relevant to the needs of a lesson. Beyond determining relevance, there are also difficulties in finding teaching materials that can serve as candidates for selection. This creates a problem for the teacher in obtaining relevant and appropriate teaching materials from the internet. Meanwhile, in higher education institutions there are many assignments that culminate in students compiling teaching materials; regardless of the relevance of those assignments, they have the potential to be used by the teacher as alternative teaching materials [11].
At the very least, these teaching materials are well categorized in terms of content and designed for certain grade levels, so they are quite specific and tend to be easier to modify and adapt to each teacher's instructional needs.

How broad is the scope of DLRC content? The analysis of material coverage consists of two parts: first, the coverage of the content presented, and second, the scope of the types of teaching materials that will be accommodated. Given that elementary schools in Indonesia use integrated learning through thematic subjects [15], the content will not be specific to mathematics but will generally contain the themes determined by the national-level curriculum developer, from level one to level six.

The digital learning resource center system will be realized as a web-based system that provides access to content that can be used as ready-to-use or modifiable teaching materials. Content management and system design on this website put forward three main principles, namely open access, a combination of the repository and online-catalog principles, and the promotion of non-commercial use. The open-access, or free-to-access, principle is the main feature we want to present, so no user authentication is needed to access the available teaching materials. This is intended to improve system accessibility and to address the main problem being solved: facilitating efforts to share resources and providing teachers with the widest possible access to a variety of teaching materials. As another effort to support this idea, all files stored in the DLRC system will be registered under a CC BY-NC license, which allows users to remix, adapt, and build upon the learning materials non-commercially [16], provided the new learning materials acknowledge the author or contributor and remain non-commercial; users do not have to license their derivative works under the same terms.

The flow of access and interaction between the user and the system database follows a very simple procedure, as seen in figure 2. This simplicity is one realization of the open-access principle that the DLRC system is developing. Contributors and administrators, after authenticating via login, can directly access the database according to their capabilities and authority. Guest users are different: they generally do not need to authenticate via login to access a limited portion of the database, with "watch-only" permission for the available content. However, if a guest wants to use content available on the system on their own platform, by copying the link to the teaching material or by downloading it, further procedures are required. By giving a rating and commenting on each item of content to be used, guests are shown a link that can be copied or a button to download the desired teaching material.

Conclusion

The application of the three main principles of the website-based DLRC system, namely open access, the combination of the repository and online-catalog principles, and non-commercial use, is expected to help accommodate the needs of online learning in general and mathematics learning in particular. Free access was the main reason the system was built around a simple data flow.
However, interaction can still be seen in the adaptation of the repository and catalog concepts. Finally, the non-commercial materials have made the website-based DLRC rich in teaching and learning content, in particular by providing teaching materials that teachers in schools can use, so that teachers can focus more on developing varied and interactive learning and on presenting meaningful, high-quality learning activities for students.
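The role model described in the design section (guest, contributor, admin, plus the rate-and-comment gate for guest downloads) can be sketched in a few lines of code. This is a minimal illustration only; all names are hypothetical and do not reflect the actual DLRC implementation.

```python
# Minimal sketch of the DLRC role model: guests can view and, after rating and
# commenting, copy a link or download; contributors can also upload; admins can
# additionally manage users and all content. Names are hypothetical.

ROLE_PERMISSIONS = {
    "guest": {"view", "download", "copy_link"},
    "contributor": {"view", "download", "copy_link", "upload"},
    "admin": {"view", "download", "copy_link", "upload", "manage_users", "manage_content"},
}

def can(role: str, action: str) -> bool:
    """Check whether a role is allowed to perform an action."""
    return action in ROLE_PERMISSIONS.get(role, set())

def request_download(role: str, has_rated_and_commented: bool) -> str:
    """Guests must rate and comment before the download/copy-link buttons appear."""
    if not can(role, "download"):
        return "denied"
    if role == "guest" and not has_rated_and_commented:
        return "please rate and comment first"
    return "download link issued"

print(request_download("guest", has_rated_and_commented=False))        # gate applies
print(request_download("contributor", has_rated_and_commented=False))  # link issued
```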
3,117.8
2021-07-01T00:00:00.000
[ "Education", "Computer Science", "Mathematics" ]
Magnetic Nanoparticles

Magnetic nanoparticles are a class of nanoparticle that can be manipulated using magnetic fields. Such particles commonly consist of two components, namely a magnetic material, often iron, nickel, or cobalt, and a chemical component that has functionality, frequently with (bio)catalytic or biorecognition properties. Magnetic nanoparticles, magnetic nanorods, and other magnetic nanospecies have been prepared and used in many important applications. In particular, magnetic nanospecies functionalized with biomolecular and catalytic entities have been synthesized and extensively used for many biocatalytic, bioanalytical, and biomedical applications. Different biosensors, including immunosensors and DNA sensors, have been developed using functionalized magnetic nanoparticles for their operation in vitro and in vivo. Their use for magnetic targeting (drugs, genes, radiopharmaceuticals), magnetic resonance imaging, diagnostics, immunoassays, RNA and DNA purification, gene cloning, cell separation, and purification has been developed. Moreover, magnetic nano-objects of complex topology, such as magnetic nanorods and nanotubes, have been produced to serve as parts of various nanodevices, for example, tunable fluidic channels for tiny magnetic particles, data storage devices in nanocircuits, and scanning tips for magnetic force microscopes. The increasing number of scientific publications focusing on magnetic materials indicates growing interest in the broader scientific community (Figure 1).

This Special Issue covers all research areas related to magnetic nanoparticles, magnetic nanorods, and other magnetic nanospecies, as well as their preparation, characterization, and various applications, specifically emphasizing biomedical applications. The review articles written by the leading experts cover different subareas of the science and technology related to various magnetic nanospecies, touching upon the multifaceted area and its applications. The different topics addressed in this Special Issue will be of high interest to the interdisciplinary community active in the fields of nanoscience and nanotechnology. It is hoped that the collection of the different review articles will be important and beneficial for researchers and students working in various areas related to bionanotechnology, materials science, biosensor applications, medicine, and so on. Furthermore, the issue is aimed at attracting young scientists and introducing them to the field, while providing newcomers with an enormous collection of literature references.

Figure 1. The number of published papers mentioning "magnetic nanoparticles" derived from statistics provided by Web of Science. The search was performed for the key words "magnetic nanoparticles" in the topic. Note the dramatic increase of the publications related to magnetic nanoparticles (the statistics for 2019 were not complete).

The articles in this Special Issue cover the following specific subareas of the research field:

1. General Information: Preparation, Characterization, Modification, and Usage of Various Magnetic Nanoparticles and Nanorods

Advances in nanotechnology have led to the development of nanoparticle systems with many advantages due to their unique physicochemical properties. The review article by Katz [1] serves as a brief introduction to the research area and overviews the composition and synthetic preparation of various magnetic nanoparticles and nanorods (Figure 2). Another review by Antone et al. [2] focuses specifically on iron oxide nanoclusters and their preparation and use. A review by Socoliuc et al. [3] describes the design and synthesis of single- and multi-core iron oxide nanoparticles and provides an overview of the composition, structural features, surface, and magnetic characterization of the cores. Biomolecular functionalization of magnetic nanoparticles has allowed their numerous applications. Specifically, the modification of magnetic nanoparticles with cellulase enzyme is reviewed in the article by Khoshnevisan et al. [4].

Biomedical Applications of Magnetic Nanoparticles

The comprehensive review by Hepel [5] provides a very broad view of the use of magnetic nanoparticles for various applications in nanomedicine (Figure 3). Another review by Piñeiro et al. [6] concentrates on the use of magnetic nanoparticles in medical biosensing, theranostics, and tissue engineering. The use of iron oxide magnetic nanoparticles in pharmaceutical areas has increased in the last few decades. The article by Luciano Bruschi et al. [7] reviews conceptual information about magnetic nanoparticles, methods of their synthesis, properties useful for pharmaceutical applications, advantages and disadvantages, strategies for nanoparticle assemblies, and use in drug delivery, hyperthermia, theranostics, photodynamic therapy, and as antimicrobial substances. Biocatalysis and biomedical perspectives of magnetic nanoparticles as versatile carriers are highlighted in the review by Bilal et al. [8]. Another review article by Obaidat et al. [9] overviews the use of magnetic nanoparticles for hyperthermia, a non-invasive method that uses heat for cancer therapy, where high temperature has a damaging effect on tumor cells. Magnetic hyperthermia uses magnetic nanoparticles exposed to alternating magnetic fields to generate heat in local regions (tissues or cells). While this therapeutic method is highly important for cancer treatment, the paper is mostly focused on the physical properties of the magnetic nanoparticles and the intrinsic and extrinsic parameters required for their medical use. The implication of magnetic nanoparticles in cancer detection, screening, and treatment is reviewed in the article by Hosu et al. [10]. This review summarizes studies about the implications of magnetic nanoparticles in cancer diagnosis, treatment, and drug delivery, as well as prospects for future development and challenges of magnetic nanoparticles in the field of oncology. The review article by Stergar et al. [11] concentrates on potential biomedical applications of NiCu magnetic nanoparticles. While the most frequently used magnetic nanoparticles are composed of iron oxide (Fe3O4), NiCu magnetic nanoparticles, which are not common for biomedical applications, demonstrate some advantages due to their unique features. The article by Tada and Yang [12] is a review of iron oxide labeling and tracking of extracellular vesicles. Extracellular vesicles are essential tools for conveying biological information and modulating functions of recipient cells. Therefore, their visualization (imaging), particularly with magnetic nanoparticles, is highly important, and the reviewed method is expected to be applicable and useful in clinical analysis.

Biosensors Based on Magnetic Nanoparticles

Magnetic nanoparticles conjugated with various biomolecules offer a versatile approach to biosensors, particularly in biomedical applications (Figure 4), as discussed in the review article by Krishnan and Yugender Goud [13]. Another comprehensive review by Üzek et al. [14] focuses on optical biosensing systems based on magnetic nanoparticles. The optical biosensors built on the platform of biomolecular-functionalized magnetic nanoparticles are broadly categorized into four types commonly used in various bioanalytical applications: surface plasmon resonance (SPR), surface-enhanced Raman spectroscopy (SERS), fluorescence spectroscopy (FS), and near-infrared spectroscopy and imaging (NIRS). The use of biosensors based on magnetic nanoparticles specifically for food safety monitoring is highlighted in the review by Khan et al. [15]. Due to the expanding occurrence of marine toxins and their potential impact on human health, there is an increased need for tools for their rapid and efficient detection. The use of magnetic nanoparticles in marine toxin detection is explained in the review article by Gaiani et al. [16]. Magnetic Janus nanoparticles bring together the ability of Janus particles to perform two different functions at the same time in a single particle with magnetic properties enabling their remote manipulation, which allows directed movement and orientation. The article by Campuzano et al. [17] reviews the preparation procedures and applications in the (bio)sensing field of static and self-propelled magnetic Janus nanoparticles. The main progress in the fabrication procedures and the applicability of these nanoparticles are critically discussed, also giving some clues on challenges to be dealt with and future prospects.
2,846.2
2020-01-15T00:00:00.000
[ "Materials Science", "Physics" ]
Designing multiepitope-based vaccine against Eimeria from immune mapped protein 1 (IMP-1) antigen using immunoinformatic approach

Drug resistance against coccidiosis has posed a significant threat to chicken welfare and productivity worldwide, putting daunting pressure on the poultry industry to reduce the use of chemoprophylactic drugs and live vaccines in poultry to treat intestinal diseases. Chicken coccidiosis, caused by an apicomplexan parasite of Eimeria spp., is a significant challenge worldwide. Given the economic losses in production and the cost of preventing the disease, the development of cost-effective vaccines or drugs that can stimulate defence against multiple Eimeria species is imperative to control coccidiosis. This study explored Eimeria immune mapped protein-1 (IMP-1) to develop a multiepitope-based vaccine against coccidiosis by identifying antigenic T-cell and B-cell epitope candidates through immunoinformatic techniques. This resulted in the design of 7 CD8+ T-cell, 21 CD4+ T-cell and 6 B-cell epitopes, connected using AAY, GPGPG and KK linkers to form a vaccine construct. A cholera toxin B (CTB) adjuvant was attached to the N-terminal of the multiepitope construct to improve the immunogenicity of the vaccine. The designed vaccine was assessed for immunogenicity (8.59968), allergenicity and physicochemical parameters, which revealed a construct molecular weight of 73.25 kDa, a theoretical pI of 8.23 and an instability index of 33.40. Molecular docking simulation of the vaccine with TLR-5, with a binding affinity of −151.893 kcal/mol, revealed good structural interaction and stability of the vaccine construct. The designed vaccine is predicted to induce immunity and boost the host's immune system through the production of antibodies and cytokines, which are vital in hindering surface entry of parasites into the host. This is a very important step in vaccine development, though further experimental study is still required to validate these results.

The impact of coccidiosis on poultry production, and the evolution of drug resistance to known prophylactic drugs and to attenuated and non-attenuated vaccines, is currently felt globally and continuously inflicts grave economic loss on the poultry industry 1,2. Avian coccidiosis is a ubiquitous intestinal disease caused by Eimeria species, which are obligate intracellular apicomplexan protozoans [3][4][5]. This disease is characterised by the invasion of gut epithelial cells by Eimeria sporozoites, resulting in clinical symptoms such as malabsorption and increased vulnerability to other pathogen infections 6,7. Eimeria parasites have fairly large genomes, ranging in size from 55 to 60 Mbp and carrying 8000 to 9000 genes 8. The Eimeria genome includes the nuclear genome, which carries approximately 60 Mbp of DNA within 14 chromosomes of 1-7 Mb 5. Other genomes are the mitochondrial genome of ~6200 bp, a circular apicoplast genome comprising ~35 kb of circular extrachromosomal DNA, and a double-stranded RNA viral genome 9. This genome composition is highly complex and prone to rapid evolution due to the fast life cycle of Eimeria 10. Eimeria infection follows a direct faecal-oral life cycle, in which chickens ingest sporulated oocysts.

Methods and materials

The retrieval of Eimeria protein sequences and identification of conserved sequences from the genomic sequences.
The protein sequences of the Eimeria immune mapped protein 1 (IMP1) antigen from different Eimeria species (see Supplementary Table S1) were retrieved in FASTA format from the protein database of the National Centre for Biotechnology Information, NCBI (https://www.ncbi.nlm.nih.gov/protein). The obtained sequences were subjected to multiple sequence alignment (MSA) to identify conserved CD4 and CD8 chains capable of inducing immunological responses. MSA was performed using the CLUSTALW online server (http://www.genome.jp/tools-bin/clustalw) 28 with default parameters, and conserved regions matching the genome sequence that contained at least 15 amino acid residues were selected for further analysis.

Antigenicity and transmembrane prediction of conserved regions. The selected conserved regions were tested for antigenicity using the VaxiJen v2.0 server (http://www.ddg-pharmfac.net/vaxijen/VaxiJen/VaxiJen.html) 29, with the threshold set to 0.4. The sequences identified as probable antigens were selected and tested for transmembrane helix properties with the TMHMM v2.0 server (http://www.cbs.dtu.dk/services/TMHMM/), which was used to identify outer membrane protein sequences.

Identification of T-cell epitopes. Since no data are currently available in the immunoinformatic tools for the chicken alleles used in MHC-epitope binding prediction, human HLA alleles were selected and used as an alternative to the chicken MHC for both MHC-I and MHC-II epitope predictions 30,31. Although chickens have different MHC alleles, studies have shown that anchor residues in chicken BF haplotypes are similar to residues anchored on mammalian MHC, especially for peptides of 8-9 residues; hence, for this study, the MHC B locus was considered along with human alleles for the predictions 32.

Prediction of cytotoxic T-cell/CD8+ T-cell epitopes. The conserved sequences that fulfilled the transmembrane analysis were submitted to the NetCTL v1.2 tool to generate nonamers that can bind to major histocompatibility complex (MHC) class I HLA molecules and induce CD8+ T cells. The resulting nonamers were subjected to the IEDB analysis tool (http://tools.iedb.org/mhci/) to determine cytotoxic T-cell (CTL) epitopes 33. The nonamers were analysed using the Stabilized Matrix Method (SMM). The parameters selected for identifying MHC-I binding alleles included a peptide length of 9, an IC50 value < 250, and human as the MHC source species 21,34. The generated epitopes were examined for antigenicity, with a threshold of 0.5 as the main parameter; sequences above the threshold were selected as probable antigens. The CTL epitopes were further tested for immunogenicity using the MHC-I immunogenicity tool of IEDB (http://tools.iedb.org/immunogenicity/), which was used to identify sequences that can stimulate an immune response against parasites in a host (human or animal) 20.

Helper T-cell/CD4+ T-cell epitope prediction. The helper T-cell (HTL) epitopes were predicted using the IEDB MHC-II binding tool (http://tools.iedb.org/mhcii/). The prediction method used was SMM-align (stabilisation matrix alignment), the peptide length was set to 15, and the IC50 threshold was < 250 35.
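The screening criteria above (IC50 < 250 and a VaxiJen antigenicity score of at least 0.5) amount to a simple filter over the predicted epitopes. A minimal sketch follows; the records and IC50 values are placeholders rather than actual predictions from this study.

```python
# Minimal sketch of the epitope screening criteria described in the methods:
# keep predicted epitopes with IC50 < 250 nM and antigenicity score >= 0.5.
# The records below are placeholders, not actual predictions from the study.

CANDIDATES = [
    {"peptide": "FKISVFGFA", "ic50": 120.0, "antigenicity": 2.29},
    {"peptide": "AAAAAAAAA", "ic50": 900.0, "antigenicity": 0.80},  # fails the IC50 cutoff
    {"peptide": "LLLLLLLLL", "ic50": 50.0,  "antigenicity": 0.20},  # fails the antigenicity cutoff
]

def shortlist(candidates, ic50_cutoff=250.0, antigenicity_cutoff=0.5):
    """Return only the candidates passing both the binding and antigenicity filters."""
    return [c for c in candidates
            if c["ic50"] < ic50_cutoff and c["antigenicity"] >= antigenicity_cutoff]

for epitope in shortlist(CANDIDATES):
    print(epitope["peptide"])  # only FKISVFGFA passes both filters
```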
The SMM-align method also allowed identification of the MHC class II HLA alleles that bind to HTL epitopes. The identified HTL epitopes were subjected to the IFNepitope tool (http://crdd.osdd.net/raghava/ifnepitope/) to predict epitopes that could induce the cytokine IFN-γ. The method and parameters used for prediction were the SVM (support vector machine)-based method and the IFN-γ versus non-IFN-γ model. The identification of IL-4 inducers was achieved through the IL4pred tool (http://crdd.osdd.net/raghava/il4pred/). The resulting shortlisted epitopes were scrutinised using the same immunoinformatic tools as the CTL epitopes to determine their antigenicity, with a threshold set at 0.5.

B-cell epitope prediction. The B-cell epitopes were predicted using the antigenic and outer membrane protein sequences initially identified with the TMHMM and VaxiJen servers. These sequences were input to the ABCpred server (http://osddlinux.osdd.net/raghava/abcpred/) 36 to identify antigens that can induce antibodies and a B-cell response. This server uses an artificial neural network to predict B-cell epitopes. The window length selected for prediction ranged from 12 to 16, with a threshold value of 0.51. The epitopes that overlapped with the final CD8+ T-cell epitopes and were observed to be non-allergenic were considered for the final vaccine construct.

Conservancy and allergenicity test. The generated T-cell and B-cell epitopes identified as antigenic and immunogenic were tested for conservancy using the IEDB conservancy-across-antigen tool (http://tools.iedb.org/conservancy/) 37. The allergenicity of the conserved epitopes was determined with the AllerTop v2.0 tool; sequences identified as allergens were discarded, and only non-allergenic sequences were selected 24.

Epitope merging for generation of the multiepitope subunit vaccine. The prioritised CD8+, CD4+ and B-cell epitope candidates determined using the various immunoinformatic tools were joined together with an immunological adjuvant to form a multiepitope vaccine (MEV). The CD8+ T-cell epitopes were joined with AAY linkers, the CD4+ T-cell epitopes were linked by GPGPG linkers, and the B-cell epitopes were linked by KK linkers. An appropriate adjuvant was attached to the N-terminal of the vaccine with the EAAAK linker. These linkers provide extended flexibility to the peptides making up the vaccine, making it more stable. The addition of the adjuvant to the vaccine is crucial as it boosts the immunogenicity of the multiepitope construct 38. The adjuvant added to the selected T- and B-cell epitopes was a cholera toxin subunit B (CTB) sequence (accession no. ABV74245.1). CTB is the non-toxic component of the cholera toxin that forms pentamers with high binding affinity for the toxin's cell-surface GM1-ganglioside receptor of the gut mucosa 39. Gene expression profiling studies indicated that cholera toxin B stimulated TLR5 signalling pathway activation 40.

Antigenicity, allergenicity, solubility, and physicochemical properties assessment. The antigenicity of the final MEV sequence was tested using the VaxiJen v2.0 server (http://www.ddg-pharmfac.net/vaxijen/VaxiJen/VaxiJen.html). The predicted vaccine's antigenic nature ensured its ability to bind and interact with the receptor during the docking stage. AllerTop v2.0 was used to further determine whether the constructed vaccine is an allergen or non-allergen.
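For orientation, the epitope-merging step described above can be sketched as simple string assembly. The epitope and adjuvant sequences below are placeholders, and the exact junctions between the CTL, HTL and B-cell blocks are an assumption, since only the within-block linkers (AAY, GPGPG, KK) and the N-terminal EAAAK junction are specified in the text.

```python
# Minimal sketch of the epitope-merging step. Epitope strings and the CTB fragment
# are placeholders; only the linker motifs (EAAAK, AAY, GPGPG, KK) and the block
# order (adjuvant, CTL, HTL, B-cell) follow the description in the text.

CTB_ADJUVANT = "MKLKFGVFFTVLLSSAYAHG"  # placeholder fragment, not the actual ABV74245.1 sequence

ctl_epitopes   = ["FKISVFGFA", "FNISAFGFV"]  # CD8+ examples named in the results
htl_epitopes   = ["AAAAAKEEEEFKISV"]          # CD4+ example (placeholder)
bcell_epitopes = ["PAPAAKQQQQ"]               # B-cell example (placeholder)

def build_mev(adjuvant, ctl, htl, bcell):
    ctl_block = "AAY".join(ctl)     # CTL epitopes joined by AAY linkers
    htl_block = "GPGPG".join(htl)   # HTL epitopes joined by GPGPG linkers
    b_block   = "KK".join(bcell)    # B-cell epitopes joined by KK linkers
    # Adjuvant attached at the N-terminus via EAAAK; the linkers used between
    # blocks are an assumption, as the text does not spell out those junctions.
    return adjuvant + "EAAAK" + ctl_block + "GPGPG" + htl_block + "KK" + b_block

construct = build_mev(CTB_ADJUVANT, ctl_epitopes, htl_epitopes, bcell_epitopes)
print(len(construct), construct[:40])
```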
To assess the physicochemical parameters of the vaccine protein, the vaccine sequence was subjected to the ProtParam web server (https://web.expasy.org/cgi-bin/protparam/protparam/) of the Expert Protein Analysis System (ExPASy) 53 to calculate the number of amino acids of the vaccine, its molecular weight (kDa), theoretical isoelectric point (pI), estimated half-life, instability index, aliphatic index, and grand average of hydropathicity (GRAVY) 41.

Tertiary structure prediction, refinement, and validation. The tertiary structure of the designed MEV protein was predicted and generated using the RaptorX server (http://raptorx.uchicago.edu/). Since the designed vaccine was a novel protein without any known template, RaptorX was best suited for the structural predictions. Molecular refinement of the predicted vaccine tertiary structure was achieved using the GalaxyRefine server (http://galaxy.seoklab.org/cgi-bin/submit.cgi?type=REFINE) 27,42. The structure refinement was performed to improve the structural quality of the vaccine protein. The GalaxyRefine server predicted five refined models of the vaccine construct resulting from structural perturbations and relaxations. Of the refined models, model 1 was predicted by structure perturbation applied to clusters of the side chains, and models 2-5 were generated by more aggressive perturbations 43. All five refined models were further checked for GDT-HA, RMSD and MolProbity score, and the best-refined model was selected and validated. The validation of the selected refined tertiary structure of the designed vaccine protein was performed using PROCHECK, which generated a Ramachandran plot, and ProSA-web (https://prosa.services.came.sbg.ac.at/prosa.php) was employed for final validation, generating a Z-score for confirmation 44.

Molecular docking of the vaccine construct with Toll-like receptor 5. Docking of the vaccine with TLR5 (PDB ID: 3V44) was performed to check the vaccine's binding affinity and agonistic ability towards the receptor molecule. TLR5 was selected as the receptor to bind the designed vaccine because it has been reported to be an ortholog of TLRs found in humans and is highly expressed in cecal primary immune effector cells of infected chickens with mature natural flora 45,46. To start docking, solvent accessibility was calculated with the Naccess tool (http://wolf.bms.umist.ac.uk/naccess/) to obtain the active and passive residues of both the vaccine construct and TLR5, which were then used in the docking simulation. The PDB structure of TLR5, the active residues of the receptor and the sequence of the MEV were submitted to AttractPep (http://www.attract.ph.tum.de/services/ATTRACT/peptide.htm) 47, and the docking was completed using a locally installed ATTRACT package on the Centre for High-Performance Computing (CHPC) Lengau cluster. A total of 50 models were generated for the MEV_TLR5 complex. The model with the lowest energy that bound properly to the receptor was selected and visualised using the VMD 1.9.3 software (http://www.ks.uiuc.edu/Research/vmd/) 48. The UCSF Chimera v1.14 software (http://www.cgl.ucsf.edu/chimera/) 49 was later used to visualise the best-docked tertiary structure of the refined TLR-vaccine complex.

Molecular dynamics simulations and analysis. The docked complex of the MEV and TLR5 was further subjected to energy minimisation through molecular dynamics simulations (MDS) using the AMBER 14 package 50.
MDS was performed to evaluate complex stability and the interactions between the docked proteins 51. The input proteins were described using FF14SB 52, and topologies of the vaccine structure were generated using the LEaP module of AMBER 14. This was done by introducing ions (protons and Cl−) into an orthorhombic solvation box of TIP3P water molecules with an 8 Å buffer to neutralise the system 53. To minimise high-energy configurations in the protein, energy minimisation was performed to obtain the lowest energy of the protein 54. This step was initially performed with 10,000 steps (500 steepest-descent and 9500 conjugate-gradient steps), followed by full minimisation of 2000 steps. The system was gradually heated for 2 ns in a canonical ensemble (NVT) with a Langevin thermostat (from 0 to 300 K). The collision frequency applied to the system was 1.0 ps−1, with the density of the water system regulated with 4 ns of NPT simulation. The molecular dynamics production run was carried out for 100 ns in the NPT ensemble (constant number N, pressure P and temperature T), where equilibration of the entire system was reached at 300 K for another 2 ns at a pressure of 1 bar. After the molecular dynamics simulation, the PTRAJ and CPPTRAJ modules in AMBER 14 were used to analyse the root mean square deviation (RMSD) and root mean square fluctuations (RMSF).

In silico codon optimisation, cloning and expression of the vaccine construct. Codon optimisation of the multiepitope construct was achieved using the Java Codon Adaptation Tool (JCat: http://www.jcat.de/) 55. The optimisation was performed to obtain an improved nucleotide sequence adapted to the selected expression host (E. coli strain K12). The JCat adaptation depends on the codon adaptation index (CAI) and the percentage GC content of the improved sequence. The optimal CAI score of an optimised gene sequence ranges from 0.8 to 1.0, with a GC content of 30-70%, indicating improved expression of a gene in its corresponding organism without translation errors 51. The optimised nucleotide sequence of the vaccine was cloned and expressed in the E. coli (strain K12) host, where XhoI (CTCGAG) and BamHI (GGATCC) restriction sites were added to the 5' and 3' ends of the construct, respectively, prior to cloning. To clone the improved vaccine sequence into a suitable expression vector, the SnapGene Viewer v5.3 software (http://www.snapgene.com/) was used.

Immune simulation. The multiepitope peptide was subjected to an online in silico immune simulation server (C-ImmSim: http://kraken.iac.rm.cnr.it/C-IMMSIM/) to generate and evaluate the vaccine candidate's immune response 56. All simulation parameters were set at default, with a single injection and the vaccine-without-lipopolysaccharide (LPS) option selected. The vaccine was administered at three intervals of four weeks.

Results

The retrieval of Eimeria protein sequences and identification of conserved sequences from the genomic sequences. To design the multiepitope subunit vaccine, a total of 19 Eimeria immune mapped protein-1 antigen genomic sequences representing all Eimeria species were retrieved from NCBI. The conserved sequences of the genome were obtained through multiple sequence alignment using the CLUSTALW online server, where 22 unique and conserved sequences were selected.

Antigenicity and transmembrane prediction of conserved regions. All the selected conserved sequences were tested for antigenicity with default parameters and the threshold value set at ≥ 0.4 in the VaxiJen v2.0 server.
It was found that 16 sequences fulfilled the antigenicity criterion of the set threshold, with VaxiJen scores ranging from 0.4295 to 0.9046 (Table 1). The transmembrane analysis performed using the TMHMM v2.0 server detected 12 conserved sequences that exhibited exomembrane properties (Table 1).

Prediction of T-cell epitopes. Prediction of cytotoxic T-cell/CD8+ T-cell epitopes. The selected conserved sequences were subjected to the NetCTL v1.2 server, where a total of 577 receptor-specific immunogenic nonamers of CTL epitopes were found. The nonamers were subjected to the IEDB MHC-I prediction tool, where the SMM-based method and the IC50 value parameter < 250 were set to predict MHC-I binding alleles accurately. The prediction analysis detected about 214 CTL epitopes that interacted with one to eight MHC alleles under the set IC50 parameter. The selected epitopes were further tested for antigenicity with a threshold value set at ≥ 0.5 in the VaxiJen v2.0 server. A total of 77 CTL epitopes were detected to be antigenic in nature, with the epitope 'FKISVFGFA' having the highest VaxiJen score of 2.2931. The immunogenicity analysis performed using the IEDB tool identified 41 sequences as immunogenic. These epitopes were tested for conservancy, where 20 epitopes were noted to be conserved. The conserved epitopes also underwent allergenicity analysis using the AllerTop v2.0 server, where seven (7) CD8+ epitopes, AQEAAAAAA, EAAAAAAAA, FGFVAVTPP, FKISVFGFA, FNISAFGFV, FTSPAPAAK and KISVFGFAA, were found to be non-allergens and were regarded as the final predicted epitopes. A summary of the results obtained for the final predicted CD8+ T-cell epitopes, including IC50, antigenicity, immunogenicity and allergenicity scores, is presented in Table 2.

Prediction of helper T-cell/CD4+ (HTL) epitopes. The 12 conserved sequences that previously fulfilled the transmembrane analysis were subjected to the IEDB MHC-II binding tool to predict HTL epitopes and their respective HLA alleles. The VaxiJen server was applied to the detected HTL epitopes to test for antigenicity. A total of 103 epitopes that fulfilled both the IEDB tool and VaxiJen parameters (IC50 value < 250 and antigenicity score ≥ 0.5) were considered potential HTL epitopes. The identified HTL epitopes were assessed for conservancy, where 45 CD4+ T-cell epitopes were selected as conserved. The conserved epitopes were further subjected to the IFNepitope and IL-4pred immunoinformatic tools to identify HTL epitopes that can induce an immune response by producing signal cytokines, i.e., IFN-gamma inducers and interleukin inducers. A total of 25 CD4+ T-cell epitopes exhibited both IFN-gamma and IL-4 inducer properties, enhancing the immunogenic capacity of the potential vaccine. These epitopes were subjected to AllerTop v2.0 for allergenicity analysis, which revealed 21 epitopes to be non-allergenic, making them the final predicted HTL epitope candidates for vaccine development since they do not cause any allergic reactions in the host (Table 3). The epitope 'KEEEEFKISVFGFAA', despite having the highest antigenic VaxiJen score of 1.5250, was disregarded because the allergenicity analysis identified it as allergenic.

B-cell prediction. The B-cell epitopes were predicted using the ABCpred server, where the 12 conserved genomic sequences that passed the transmembrane property test were used as the input template.
A total of 71 B-cell epitopes were generated and further tested for antigenicity, allergenicity and conservancy. Of the generated epitopes, only 6 B-cell epitopes (Table 4) passed the criteria mentioned above.

Epitope merging for generation of the multiepitope subunit vaccine. The multiepitope vaccine (MEV) candidate against Eimeria was constructed by joining together the prioritised final predicted 7 CTL epitopes, 21 HTL epitopes and 6 B-cell epitopes using AAY, GPGPG and KK linkers; the adjuvant was attached at the N-terminal with the appropriate linker to enhance the immunogenicity of the construct. This resulted in a multiepitope vaccine containing 731 amino acid residues, which was further validated for antigenicity, allergenicity and physicochemical properties.

Antigenicity, allergenicity, solubility, and physicochemical properties assessment. The constructed MEV was subjected to various tools to validate its effectiveness as a vaccine. The vaccine was found to be antigenic, with a VaxiJen score of 0.6043, non-allergenic and immunogenic (score = 8.59968), making it a potentially good candidate to provoke an effective immune response in the host. The physicochemical properties evaluated using the ProtParam server revealed that the designed multiepitope vaccine had a molecular weight of 73.25 kDa, a theoretical pI of 8.23 and an instability index of 33.40.

Tertiary structure prediction, refinement, and validation. The tertiary structure of the designed multiepitope vaccine was predicted using the RaptorX server. Since the designed vaccine had no structural template, the RaptorX server was the suitable platform to generate the vaccine's tertiary structure. The server produced five different potential models, which were validated using PROCHECK. Model 4 was chosen from the produced models as the potential vaccine structure, with 79.8% of residues in the favoured region of the Ramachandran plot. However, the validation showed that the designed vaccine peptide had some missing residues; hence, the selected model was refined using the GalaxyRefine server. This server also produced five potential vaccine models with an improved number of residues allowed in the favoured region. The best model selected was Model 5 (Fig. 1a), which showed an improved percentage of 83.2% in the Ramachandran plot analysis when revalidated after refinement (Fig. 1b). Other favourable parameters obtained for the refined model included a GDT-HA score of 0.8684, an RMSD value of 0.626, a MolProbity score of 3.399, a clash score of 84.1, and poor rotamers of 3.8. The selected model was further validated using ProSA-web, where a Z-score of −10.48 was obtained. The Z-score suggests that the designed vaccine is of good quality, since it lies within the vicinity of PDB X-ray experimental structures (Fig. 1c). The structural quality of the vaccine construct, in addition to the physicochemical properties previously discussed, is displayed in Fig. 2.

Docking and molecular dynamics simulation of the vaccine construct with Toll-like receptor 5. The docking of the vaccine with the receptor TLR5 (PDB ID: 3V44) produced 50 docked complexes, and the produced models were visualised using the VMD software and Chimera to observe the vaccine's interaction with and binding to the TLR5 complex. The best-docked complex was selected with a binding affinity of −151.893 kcal/mol (Fig. 3a).
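Selecting the best pose from the 50 docked models reduces to taking the lowest (most negative) binding energy. A minimal sketch follows; the model names and all scores except the −151.893 kcal/mol value reported in the text are made up for illustration.

```python
# Minimal sketch of the pose-selection step: keep the docked model with the lowest
# binding energy. Model names and the other scores are hypothetical placeholders.

docked_models = [
    ("model_01", -98.4),
    ("model_07", -151.893),  # energy reported in the text for the selected complex
    ("model_23", -120.1),
]

best_model, best_energy = min(docked_models, key=lambda m: m[1])
print(best_model, best_energy)  # model_07 -151.893
```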
Figure 3c showed that the MEV from the present study bound to the binding domain of TLR5, comparable to the binding domain of Salmonella flagellin mapped on TLR5 (PDB ID: 3V47) in Fig. 3d, which is an indication of good interaction of the MEV with TLR5. Molecular dynamics (MD) simulation (Fig. 3b) was performed to assess the stability and binding of the docked complex using the RMSD and RMSF parameters. The structural flexibility of the MEV_TLR5 complex after MD simulation was demonstrated by the RMSD results (Fig. 4a), where the fluctuations of the backbone atoms of the MEV before docking were within 1.5 Å to 25.0 Å, whereas the fluctuations of the complex after MD simulation were within 1.5 Å to 15.0 Å. The binding of the MEV to TLR5 conferred more stability on the MEV system, as displayed in the figure. RMSF analysis (Fig. 4b) revealed side-chain atoms of the docked complex that exhibited high interaction between the vaccine and TLR5, with fluctuation regions at residues 50-180 and 510-660. The binding energy recorded from the MD simulation was −47.14 ± 0.46 kcal/mol, with contributions from other component energies such as van der Waals (−167.30 ± 0.54 kcal/mol), gas-phase energy (−665.32 ± 3.50 kcal/mol), electrostatic (−498.02 ± 0.54 kcal/mol), electrostatic contribution to the solvation free energy (639.55 ± 3.23 kcal/mol), non-polar solvation energy (−21.37 ± 0.08 kcal/mol) and solvation free energy (618.18 ± 3.16 kcal/mol). The energy decomposition of the interacting residues of both TLR5 and the MEV, with a cut-off of ≤ −1.0, is within the range of −6.106 to −1.222 kcal/mol, as presented in Table 5.

In silico codon optimisation and cloning of the vaccine construct. The XhoI and BamHI restriction sites were added to the final vaccine sequence, which was cloned into the expression vector petDEST42 (7630 bp construct) (Fig. 5).

Immune simulations for the vaccine construct. The results of the immune simulation revealed the different immune profiles produced by the vaccine, which was observed to induce an immune response through an increase in antibodies upon administration in the simulation. The administration of the vaccine candidate showed a significant increase in the tertiary immune response, exhibited by high levels of IgG1 + IgG2, IgG + IgM, IgG1 and IgG2 compared with the primary response represented by IgM (see Supplementary Fig. S1a). Exposure of the B-cell population to the vaccine showed constantly increased levels, developing memory cells that retain the vaccine's memory should the host encounter reinfection (Fig. 6b). The subsequent injections of the vaccine construct also exhibited a significant increase in cytokines such as IFN-gamma, TGF-b, IL-10, IL-23 and IL-12, indicating a good immune response (see Supplementary Fig. S1a) and correlating with the prediction of IFN-gamma-inducing epitopes in the vaccine. The decline in antigen level with each injection confirmed the presence of antibodies that effectively countered the possibility of an antigenic surge.

Discussion

Coccidiosis is a ubiquitous disease caused by Eimeria in livestock 7,57,58. This disease has caused substantial economic loss globally in the chicken industry due to the high mortality of chickens, leading to reduced productivity. Vaccination of poultry at an early age is crucial to curb the economic impact that coccidiosis currently poses to the poultry industry. Due to the parasite's rapid life cycle, consisting of multiple stages within a host, the complexity of Eimeria infections requires novel approaches in vaccine design to improve efficacy.
This study aimed to apply immunoinformatics and reverse vaccinology to generate a multiepitope vaccine candidate capable of inducing a targeted response against Eimeria. Advances in the immunoinformatic approach enable the prediction and selection of the specific immunogenic machinery of the parasite responsible for the disease rather than the inclusion of the whole cell, preventing undesirable immune responses that may result from enhanced allergenicity of the parasite towards a host, and reducing the time required for the design of the vaccine 22,23,39. The tools involved in epitope prediction are critically selected based on their accuracy to ensure the design of long-lasting immunogenic vaccines with high efficacy. The T-cell and B-cell epitope predictions were achieved through the IEDB servers and the ABCpred server, respectively. Several recent studies have implemented these tools to predict epitopes and develop multiepitope vaccines 17,34,36,51,59. Even though most prediction tools used to identify B-cell epitopes have low accuracy, ABCpred has been considered one of the most effective epitope prediction servers available, with an accuracy of 65.93% 26. To ensure the uniqueness and efficiency of the epitopes identified with the tools mentioned above, further analysis of the predicted epitopes (such as antigenicity, with an efficiency of 70-80%, allergenicity, conservancy, etc.) is required. While the accuracy of these tools ranges from ~65 to 88%, which is well above average, they produce predictive results that need further validation in the laboratory. Similar tools were used in the present study to predict and design a multiepitope vaccine.

In this study, the genomic sequence of the Eimeria IMP-1 antigen was exploited to predict and design an immunogenic multiepitope vaccine candidate. The inclusion of different epitopes in the vaccine design may potentially provide maximum protection against various Eimeria spp. strains. Song et al. 2 suggested that merging T-cell epitopes from different stages of the Eimeria life cycle could overcome the parasite's antigen complexity, which this study aimed to explore through designing a multiepitope vaccine. An effective vaccine should stimulate T cells (CTL or HTL) and B cells capable of inducing immune responses to eliminate any pathogenic antigens within a host. To achieve this, the conserved sequences of the IMP-1 antigen were selected and used to generate 28 T-cell epitopes (7 CTL and 21 HTL) and 6 B-cell epitopes that were 100% conserved, antigenic and immunogenic vaccine candidates. The use of T- and B-cell epitopes capable of eliciting a strong humoral and innate response in the vaccine construct ensured that the designed vaccine would confer double protection against the parasite 54. The CD4+ helper T-cell epitopes were further screened for IFN-γ- and IL-4-inducing properties. The presence of these inducers in a vaccine construct is crucial for initiating the production of cytokines (danger signals) that activate an immune-stimulatory response. Tang et al. 13 reported that IFN-γ and IL-2 can elicit lymphocytes and are involved in the Th1-mediated immune response and the Th2 immune response, respectively. Hence, the inclusion of epitopes capable of inducing cytokines was crucial for improving the efficacy of the vaccine. The final predicted T-cell (7 CTL and 21 HTL) epitopes and 6 B-cell epitopes were joined together by flexible linkers to form the vaccine construct of 731 aa.
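The ProtParam-style physicochemical screen applied to the construct can be reproduced locally with Biopython, assuming the final construct is available as a plain one-letter amino acid string; the sequence below is a short placeholder, not the actual 731-residue vaccine.

```python
# Minimal sketch of a local ProtParam-style screen using Biopython.
# `construct` is a short placeholder; the real 731-residue MEV sequence is not reproduced here.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

construct = "FKISVFGFAAAYFNISAFGFVGPGPGAAAAAKEEEE"  # placeholder, not the actual vaccine sequence

pa = ProteinAnalysis(construct)
report = {
    "length": len(construct),
    "molecular_weight_kDa": pa.molecular_weight() / 1000.0,
    "theoretical_pI": pa.isoelectric_point(),
    "instability_index": pa.instability_index(),
    "gravy": pa.gravy(),
}

# Stability criteria cited in the discussion: molecular weight < 110 kDa and instability index < 40.
report["meets_stability_criteria"] = (
    report["molecular_weight_kDa"] < 110 and report["instability_index"] < 40
)
print(report)
```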
An ideal vaccine also depends on the effectiveness of the adjuvant selected to improve the host immune response. In the current study, CTB adjuvant was added to the N-terminal of the vaccine peptide by EAAAK linker. CTB is the non-toxic component of the cholera toxin, which aids in the formation of a pentameric structure that binds to GM1-ganglioside receptors of the gut mucosa and controls the expression of receptors in gut tissue cells 39,60 . This toxin serves as a strong adjuvant when combined to other proteins and significantly elevates the ability of proteins to induce immune response after oral administration. Toxin derived adjuvant (such as CTB) has been used in several studies, where it was noted effective in enhancing the immunogenicity of multiepitope vaccines by inducing the production of immunoglobulins (IgG and IgA) 17,27,59,61,62 . The inclusion of non-toxic CTB to the designed vaccine enhances the efficacy of the potential peptides attached to it and promotes nasal or oral administration, and since the intestinal tissue damage observed in chickens infected by Eimeria results in severe dehydration and bloody diarrhoea, the combination of CTB and the multiepitope vaccine may counteract the damage caused by the parasite and reduce water loss by binding to epithelial receptors 63 . The constructed vaccine was found to be immunogenic with a score of 8.59968, antigenic with VaxiJen score of 0.6043 and non-allergenic, making it a good candidate for vaccine development. The vaccine consists of 731 amino acid residues with a molecular weight of 73.25 kDa with an instability index of 33.40, fitting the criteria that vaccine proteins with molecular weight < 110 kDa and instability index < 40 are relatively stable/ good vaccine candidates 22,27 . The evaluation of vaccine physiochemical properties further revealed the designed vaccine construct to be thermostable, acidic with a highly hydrophobic nature (GRAVY = 0.140); physiochemically fitting for production. Similar findings were observed by Ojha et al. 54 . The designed vaccine was subjected to structural validation using a Ramachandran plot and ProSA with scores of 83.2% and − 10.48, respectively. These scores indicated that the vaccine structure was of good quality, as the model lied in the vicinity of X-ray resolved structures in PDB. Toll-like receptors (TLRs) are highly conserved transmembrane proteins that play a vital role in detecting and reducing foreign molecule within host such as pathogens 15 . These receptors detect protozoan parasites through recognition and activation of conserved pathogen components such as pathogen-associated molecular patterns (PAMPs) 45 . Activation of PAMPS within host signals TLR to produce and regulate the expression of cytokines, i.e. interleukin (IL)-12, interferons (IFNs), tumour necrosis factor (TNF), etc 45 . Production of these cytokines initiates the host's innate immune system to develop an adaptive immune response crucial from host protection against infection 62 . In the present study, the designed vaccine was docked to the TLR5 receptor. This receptor has been previously reported as an effective inducer of innate immune response in chickens infected with Eimeria 64 . With the majority of apicomplexans using the gliding motility when invading the host, TLR5 in Eimeria infections is likely to be induced by the presence of flagellin in the intestinal microbial flora and can easily be detected in chickens with mature intestinal gut 13,59,65 . 
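The physicochemical screening reported above (molecular weight, instability index below 40, GRAVY) corresponds to standard ProtParam-style descriptors and can be reproduced programmatically. A minimal sketch is given below, assuming Biopython is available; the short sequence is a placeholder chosen for illustration, not the actual 731-residue construct.

```python
# Minimal sketch of the physicochemical screening step using Biopython's
# ProtParam module; `vaccine_seq` is a short placeholder, not the 731-aa construct.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

vaccine_seq = ("MIKLKFGVFFTVLLSSAYAHGTPQNITDLCAEYHNTQIYTLNDKIFSYTESLAGKREMAII"
               "TFKNGATFQVEVPGSQHIDSQKKAIERMKDTLRIAYLTEAKVEKLCVWNNKTPHAIAAISMAN")

analysis = ProteinAnalysis(vaccine_seq)
mw     = analysis.molecular_weight()    # Da
instab = analysis.instability_index()   # < 40 is usually taken as stable
gravy  = analysis.gravy()               # > 0 hydrophobic, < 0 hydrophilic
pi     = analysis.isoelectric_point()

print(f"molecular weight  : {mw / 1000:.2f} kDa")
print(f"instability index : {instab:.2f} ({'stable' if instab < 40 else 'unstable'})")
print(f"GRAVY             : {gravy:.3f}")
print(f"theoretical pI    : {pi:.2f}")
```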
The selection of the TLR5 as a receptor for the designed vaccine was informed by the fact that the receptor is both present in humans and chickens, where it responds to bacterial flagellin 66 . Though it is known that TLR5 responds to flagellin-like ligands, it has still been shown from gene expression profiling studies that TLR5 signalling pathway activation was detected after stimulation by cholera toxin B 40 . Moreover, it has also been confirmed that aside from flagellin, TLR5 also responds to profilin Figure 2. The structural quality of the vaccine construct (a) Aromatic amino acid residues (b) hydrogen bonds with donor acceptor atoms (c) charge distribution with the red, white and blue indicated negative, neutral and positive charged residues respectively (d) solvent accessibility surface from blue for exposed to green for buried and (e) the hydrophobic surface of the MEV with red and blue representing hydrophobicity and hydrophilicity respectively. www.nature.com/scientificreports/ protein from the apicomplexan parasite-Toxoplasma gondii 67 . Interestingly, this profilin protein has also been reported to be expressed in all the developmental stages of several Eimeria parasite species and is conserved 68,69 . Molecular docking of the vaccine candidate allowed analysis of binding interaction of vaccine to the receptor. The highest binding energy observed from the docking simulation revealed that the vaccine complex has a significant affinity to the receptor and can stimulate TLRs within the host leading to an improved immune response against Eimeria. Similar findings were obtained by Yin et al. 15 when they evaluated E. tenella IMP1 and flagellin (TLR5 agonist) as potential Eimeria vaccine candidate. They proved through vaccination of three-week-old AA broiler chickens that recombinant EtIMP1-flagellin fusion protein enhanced the immune response of chickens, making it an effective immunogen 15 . Molecular docking results were further validated using molecular dynamics simulation, where obtained RMSD, RMSF and energy decomposition analysis revealed significant interaction of the vaccine construct and the TLR5 51,54 . To assess the ability of the proposed vaccine to initiate immune response, the vaccine was subjected to an immune simulation using C-IMMSIM server. The in silico immune response results showed a consistent behaviour correlating with expected outcome from the host's immune system when induced by vaccination. The production of IgA upon initial administration of vaccine into the simulation stimulated an increase in all immunoglobulin response where elevated production of antibodies that aid in secondary and tertiary response (i.e., IgG1 + IgG2, IgG1 and IgG2) was observed, followed by a significant reduction in antigen (Ag) concentration (Fig. 6a). The administration of the vaccine was also noted to elevate the B-cell population, especially the memory B-cell (Fig. 6b). The gradual increase of B-cell memory cells and antibodies after subsequent exposure to the vaccine injections confirmed the vaccine's effectiveness to host when exposed for a duration of time, consistent with the vaccine's immunogenicity. Similar behaviour was observed in the T cytotoxic(TC) cells and T helper (TH) response, where memory cells were elevated (Fig. 6c,d). 
The CD8 + cytotoxic cells and CD4 + T helper cells are critical for anticoccidial immunity against avian coccidiosis, and it has been reported that increased populations of T cells are linked to increased production of pro-inflammatory cytokine interferon (IFN)-γ, capable of regulating and inhibiting the development of Eimeria parasite 70 . www.nature.com/scientificreports/ IFN-γ is an important cytokine in chickens known to enhance expression of MHC II antigen and is involved in Th1-mediated immune response 71 . The immune simulation showed elevated production of IFN-γ, TGF-β and IL-2 (see Supplementary Fig. S1a). The increased, continuous production of cytokines such as interferon (IFN)-γ and T-cells show the vaccine's potential to exert a protective effect against parasite infections since they are crucial for cellular immune response and anticoccidial immunity towards coccidiosis 19,72 . Zhang et al. 73 reported upregulation of TL5 in E. tenella infection, which triggered activation of pro-inflammatory cytokines IL-2 and IFN-γ. This supports immune simulation findings obtained in this study, where vaccine construct induced increased production of IFN-γ and IL genes (see Supplementary Fig. S1a). Similar findings were reported by Liu et al. 74 in a separate study, evaluating the efficacy of DNA vaccine (pVAX-Ea14-3-3) against Eimeria infection. The authors confirmed production of IFN-γ, IL-2 and IL-4 in high levels provided effective protection against Eimeria infection (E. acervulina, E. maxima and E. tenella). The elevated levels of IFN-γ enhances vaccine efficacy through induction of microbicidal effects, crucial in resolving parasitic infections. High levels of cytokines in the host activate microphages that inhibit and kill Eimeria while enhancing immune responses. As observed in this study, subsequent doses of vaccine activated the growth of macrophages (see Supplementary Fig. S1b) and cytotoxic T cells (Fig. 6c), which remained elevated throughout the simulation. Avian coccidiosis immunity is linked to innate and adaptive, where the innate immune response to Eimeria parasites is activated at different phases of the parasite life cycle and is facilitated by natural killer (NK) cells, dendritic cells, epithelial cells, heterophils, and macrophages 70 . Innate immune response against Eimeria infection is activated at the early stages of the infection and serves as the first line of defence, where it utilises immune receptors to detect and respond to parasitic infection 45 . The adaptive immune response is crucial for the prevention of host invasion and growth of the parasites. This mode of immunity in chickens is specific, encompassing both cellular and humoral immune mechanisms and is regulated by B-and T-cells. T-cells have been reported to produce cytokines such as interleukin (IL)-2, IL-4, IL-10, IL-12 and IL-18, tumour necrosis factor (TNF)-α and transforming growth factor (TGF)-β1-4, as a part of protective immunity against avian coccidiosis 75 . These findings correlate with our immune system simulation results (Fig. S1a), confirming the potential of the proposed multiepitope vaccine in inducing long-lasting humoral and cellular immune responses and protection against www.nature.com/scientificreports/ www.nature.com/scientificreports/ Eimeria infections. Previous literature suggests that in silico immune simulations can be consistent with the real immune response exhibited by the affected host against pathogens 56 . 
The simulation outcome of the present study is a crucial step in vaccine development and could provide reliable insight into the efficiency of the designed vaccine against Eimeria through the induction of a protective immune response. In addition, the optimized expression of the proposed vaccine in the E. coli K12 strain and its successful in silico cloning into the expression vector promise easier and more accurate large-scale production of the vaccine 76 . However, the findings obtained in this study still require further laboratory experiments for validation. Conclusion In the present study, a multiepitope vaccine was successfully designed using an immunoinformatic approach. The designed vaccine met all the parameters expected of a potential vaccine candidate and effectively induced an immune response, reflected in cytokine production, in the immune simulation. Based on this study, it may be more promising to focus on specific immunogenic regions of the parasite's proteins rather than on whole, large proteins, as this could reduce the parasite's antigenic complexity. Combining T-cell epitopes from different phases of the Eimeria life cycle may also confer effective protection against multiple Eimeria species, although this still requires further experimental validation. This approach would further minimise possible negative effects of using the parasite's whole genome, reducing the risk of reinfection. In the present study, a multiepitope vaccine candidate containing 7 CTL epitopes, 21 HTL epitopes, 6 B-cell epitopes and an adjuvant is predicted to significantly enhance the immune protection of the host. It can be concluded that the immunoinformatic approaches explored in the design of the vaccine candidate yielded promising results. Further studies and experimental validation of the results obtained and reported here are strongly recommended to confirm the safety and efficacy of the vaccine.
9,209.2
2021-09-14T00:00:00.000
[ "Medicine", "Biology", "Agricultural and Food Sciences" ]
Studies on the deposition of copper in lithium-ion batteries during the deep discharge process End-of-life lithium-ion batteries represent an important secondary raw material source for nickel, cobalt, manganese and lithium compounds in order to obtain starting materials for the production of new cathode material. Each process step in recycling must be performed in such a way contamination products on the cathode material are avoided or reduced. This paper is dedicated to the first step of each recycling process, the deep discharge of lithium-ion batteries, as a prerequisite for the safe opening and disassembling. If pouch cells with different states of charge are connected in series and deep-discharged together, copper deposition occurs preferably in the cell with the lower charge capacity. The current forced through the cell with a low charge capacity leads, after lithium depletion in the anode and the collapse of the solid-electrolyte-interphase (SEI) to a polarity reversal in which the copper collector of the anode is dissolved and copper is deposited on the cathode surface. Based on measurements of the temperature, voltage drop and copper concentration in the electrolyte at the cell with the originally lower charge capacity, the point of dissolution and incipient deposition of copper could be identified and a model of the processes during deep discharge could be developed. www.nature.com/scientificreports/ the batteries, opening the cells and separating the anode, cathode and separator foils. All further steps of the chemical-mechanical separation of the cathode material from the aluminum foil must also be designed in such a way that any degradation of the particles, for example, due to breakage, deposits on the surface, contamination with other battery components (e.g. particles of the aluminum or copper foils) or direct chemical attack on the material is prevented or minimized. The subject of this study is the first step of recycling, the deep discharge of the battery, which has an enormous impact on the functional recycling and reusability of the cathode material recovered. An unwanted side reaction that can occur during deep discharge is the deposition of copper on the cathode foil. Jo et al. found out in their investigations that an increasing copper content in the NMC leads to a loss of capacity of the battery 13 . They found a slightly lower discharge capacity at a copper content of 0.5…1.5 mol%. After 50 cycles, the capacity of the pure active material was 135.64 mAh g −1 . By comparison, the discharge capacity of the active material contaminated with 0.5, 1.5 and 2.5 mol% was 131.81, 129.17 and 85.2 mAh g −1 , respectively. The material contaminated with 2.5 mol% copper showed a slightly lower capacity than the pure active material only at low discharge rates (0.1 C). At high discharge rates (5 C), the capacity decreases by about 85% compared to cells with copper-free cathode material. Guo et al. connected four fully charged NMC-based batteries (state of charge: SOC = 100%) in series with a fully discharged battery (SOC = 0%) 14 . The current flow generated when the full battery is discharged forces a charge transport on the discharged cell, which, due to the absence of Li + , must be ensured by other charge carriers as an unwanted side reaction. They found that the voltage curve of the initially deeply discharged cell passes through a minimum at a SOC = − 11% (relative to the initially fully charged batteries) during further "discharging. 
" The time the voltage reaches the minimum was declared as the point at which the dissolution of the copper foil of the anode begins. According to Guo, copper is deposited on the cathode at the interface to the separator and leads to local short circuits from a SOC = − 13% onward, which increase in frequency up to SOC = − 20%, and the internal resistance of the cell asymptotically approaches a limit value. The process can be reversed as long as the dissolved copper (SOC > − 12%) has not yet been deposited on the cathode. These cells can be almost fully recharged and show only slight power losses. Cells that were discharged to a SOC ≤ − 14.5% and deposited copper could be charged but showed a significant self-discharge and depleting open-circuit voltage. Zheng et al. investigated the degradation mechanisms of LiFePO 4 cells as a function of overdischarge 15 . They found a correlation between the capacity loss of the cell and the value of the end-of-discharge voltage. They found that overdischarge to 0.5 V and 0 V lead to lower cycle performance in addition to serious capacity loss. Electrode impedance trends showed that the impedance of both electrodes increased, with the cell at 0 V exhibiting the highest values. Based on half-cell tests, the capacity loss could be related to the anode. Experiments on overdischarge were carried out by Fear et al. with NCA (nickel-cobalt-aluminum oxide) as the cathode material 16 . They divided the discharge into different phases using the first and second derivation of the voltage curve. They concluded that the oxidation of the copper of the carrier foil begins at the minimum voltage curve at − 1.5 V, where the first derivative is 0, and is followed by the dissolution of the copper after the NCA breakdown. Furthermore, they described the voltage rise after the minimum by the increasing potential of the cathode, since the overpotential for copper reduction is reduced and copper ions compete with the lithium ions to be reduced at the electrode surface, as already described by Kasnatscheew 17 . The internal resistance of the cell decreases due to internal short circuits and the voltage approaches − 0.23 V asymptotically. At this point, the copper bridges across the cell have grown sufficiently so that the cell behaves like a resistor in the circuit rather than an electrochemical system. Robles et al. performed long-term cycling to investigate the degradation mechanisms as a function of overdischarge 18 . Cells whose discharge voltage was 2.7 V exhibited a 20% capacity loss after 287 cycles. The capacity loss was attributed to the thickening of the SEI. If the discharge voltage is set to of 1.5 V, the capacity loss of 20% is reached after 120 cycles. In addition to SEI thickening, increased li-plating is responsible for this. If the final discharge voltage is set to 0 V or − 0.5 V, Li-plating, particle cracking, copper dissolution and the formation of copper bridges occur. Cells that were overdischarged to this − 0.5 V failed after only 14 cycles due to internal short circuits. The deep discharge of LiCoO 2 cells to 0 V leads to an increase of the anode potential to about 3.5 V, so that the copper of the anode foil is oxidized and, as a result, is deposited on the cathode foil. The investigations of Li et al. showed that the SEI decomposes to gaseous products (CO, CO 2 , CH 4 ) and the cells swell during deep discharge to 0 V 19 . Kasnatscheew et al. 
analyzed the interactions between the electrodes in a three-electrode Swagelok cell during overdischarge by determining the single-electrode potentials of anode and cathode to a reference electrode located in the system 17 . A characteristic potential plateau at about 3.56 V was found at the graphite electrode due to the oxidation of the copper. The constant anode potential after the beginning of the copper oxidation was interpreted to mean that this process continues during the entire remaining discharge phase. The timedelayed potential drop observed at the positive electrode was attributed to the competitive reaction between the conventional lithium plating reaction and the parasitic Cu plating reaction. He et al. cyclized different LiFePO 4 cells under overdischarge conditions (5, 10, 15 and 20% overdischarge) 20 . Under these conditions, for example, the cell cyclized to 120% depth of discharge failed on the second cycle. The oxidation and reduction potentials of Cu/Cu + and Cu + /Cu 2+ were determined vs. Li/Li + . They showed that there is a gradual formation of copper bridges, which lead to internal short circuits and result in considerable self-discharge. Hendricks et al. performed deep discharges to 0.5 V, 0.25 V and 0 V 21 . The electrodes were subsequently examined with XPS and XAFS. Copper was detected in all cells discharged below 0.5 V, which was attributed to the dissolution of the anode current collector. They suggested that the dissolution of the copper foil leads to a poorer adhesion of the anode material, which justifies a capacity loss, during 40 cycles, of 10%. Furthermore, they found that the deposition of copper species leads to a blockage of intercalation sites and thus also contributes to a capacity loss. The deposited copper species were identified as Cu 2 www.nature.com/scientificreports/ non-conductive and thus do not lead to internal short circuits. However, it was not been excluded that overdischarge into reversal could result in deposition of metallic copper. It is necessary for the best possible performance of the recyclate that the material for recycling is copper-free; this applies to both the active material and the electrolyte. It is, therefore, crucial to determine the point on the voltage curve during discharge where copper is present in the electrolyte and the point where copper is deposited. Discharge of individual cells. Deep discharges of single cells with different load resistors and, conse- quently, different discharge currents (0.5 … 200 A) to a final clamp voltage of ≈ 0 V were performed. Apart from temperature influences, the resistor over the discharge process can be regarded as almost constant (∆R ≈ 0 … 10%). The voltage drop across the internal resistance of the cell increases due to the low load resistors and the resulting high load currents, resulting in a lower clamp voltage under load. With a resistor of 41 mΩ, for example, an initial current of 80…90 A flows (Fig. 1a,b). Figure 1b shows the corresponding current behaviors at selected resistors (30…74 mΩ). The voltage plateau in Fig. 1a is explained by the fact that the cathode potential is nearly constant and the anode potential increases only slightly in the range up to approx. 20 Ah 19 . The height of the plateau results from the discharge current. The collapse of the clamp voltage at a discharged charge quantity of about 20 Ah results from the rapid increase of the anode potential (> 3 V vs. Li/Li + ) with a simultaneous decrease of the cathode potential (≈ 3.5 V vs. Li/Li + ). 
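The initial currents of 80…90 A quoted above for the 41 mΩ load resistor follow directly from Ohm's law applied to the loaded cell. The sketch below is a back-of-the-envelope estimate in which the open-circuit voltage and the internal resistance of the cell are assumed values chosen for illustration; only the load resistance is taken from the text.

```python
# Back-of-the-envelope estimate of the initial current for a resistive deep discharge.
# U_oc and R_int are assumptions for illustration; only R_load comes from the text.
U_oc   = 4.1      # V, assumed open-circuit voltage of a fully charged pouch cell
R_int  = 0.005    # Ohm, assumed internal resistance of the cell
R_load = 0.041    # Ohm, load resistor used in the experiment

I_init  = U_oc / (R_int + R_load)   # series circuit: cell + load resistor
U_clamp = I_init * R_load           # clamp (terminal) voltage under load

print(f"initial current      : {I_init:5.1f} A")   # roughly 89 A, within the quoted 80...90 A
print(f"clamp voltage (load) : {U_clamp:5.2f} V")
```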
When both potentials have reached the same value, the cell is potential-free (U CV = 0 V). Figure 1a shows that the temperature increases continuously during the discharge at the load resistor of 41 mΩ. The greatest increase is at the point where the clamping voltage collapses rapidly. This rapid temperature rise is associated with the collapse of the SEI 22 . The maximum temperature reached is ≈ 44 °C (Fig. 1a). By contrast, a maximum temperature of about 67 °C was measured when discharging at 13 mΩ. The discharge currents show the same curve as the clamp voltage during discharging. The current is dependent on the value of the resistor (Fig. 1b). Opening the cells. The batteries were then opened and the cathode foils analyzed for degradation and copper deposition. The most visually striking finding is that the plastic separator degrades massively during discharge with load resistors ≤ 10 mΩ. In some places, the aluminum oxide coating of the separator adheres so firmly to the cathode that the separator carrier foil can be removed from the aluminum oxide coating. In other places, the separator adheres so firmly to the cathode that it breaks when the separator is pulled off. The gas development during discharging can be followed by inflating the battery. The collapse of the SEI and the resulting gaseous reaction products 22 cause gas bubbles to form between the foils, which remain between the separator and the cathode foil. No degradation of the separator occurs only at these, typically round or oval places, so that no buildup can be detected there. After opening the cells, none of the foils showed visually recognizable copper deposits on the cathode foil. No deposition of copper independent of the discharge current can be detected by means of REM-EDX analysis. It should be noted that the chemical analysis shows no cobalt, manganese, nickel or copper could be detected in the electrolyte. The EDX analysis of the NMC also shows no measurable influence of the discharge current on the chemical composition of the NMC. Discharge of cells connected in series. Pouch cells with different state-of-charge (SOC 1 > > SOC 2 , see 2) were connected in series and discharged in a further series of experiments. The aim is to simplify the simulation of a battery stack containing batteries with different charge capacities, which can be caused, for example, by production differences (layer quality, capacity differences) or by uneven aging or the failure of individual cells. As a result, the cell with a lower charge capacity is already completely discharged after a shorter discharge time, while the cell with a higher charge capacity still supplies voltage, thus, forcing a current flow in the cell www.nature.com/scientificreports/ already discharged. The forced current flow causes complex chemical side reactions, such as the almost complete de-intercalation of the anode material, a collapse of the SEI at the anode and, in another process, the dissolution of the copper carrier foil at the carbon anode and the deposition of copper on the cathode 16,20 . Figure 3a shows the complete discharge curve and its derivative of a cell during overdischarge. The course of the discharge voltage curves is in very good agreement with the discharge voltage curves determined by Fear et al. The latter concluded from the voltage curves and their 1st and 2nd derivations that the oxidation of the copper foil does not begin until the voltage curve has reached its minimum 16 . 
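Locating the characteristic points of the discharge curve discussed here, namely the steepest voltage drop and the voltage minimum that Fear et al. use as the reference for copper oxidation, amounts to differentiating the sampled voltage with respect to the transferred charge. The sketch below does this on a synthetic stand-in curve; the shape and numbers are invented and would be replaced by the logged U(Q) data in practice.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic stand-in for a logged overdischarge curve: clamp voltage U (V)
# versus transferred charge Q (Ah). The shape and numbers are illustrative only.
Q = np.linspace(20.0, 28.0, 800)
U = 3.4 - 5.3 * sigmoid(3.0 * (Q - 24.5)) + 0.5 * sigmoid(4.0 * (Q - 27.0))

dU_dQ  = np.gradient(U, Q)     # first derivative of the voltage curve
i_drop = np.argmin(dU_dQ)      # steepest drop: end of delithiation / SEI collapse
i_min  = np.argmin(U)          # voltage minimum used as the copper-oxidation reference

print(f"steepest voltage drop at Q = {Q[i_drop]:.2f} Ah (dU/dQ = {dU_dQ[i_drop]:.2f} V/Ah)")
print(f"voltage minimum U = {U[i_min]:.2f} V at Q = {Q[i_min]:.2f} Ah")
```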
In order to provide analytical evidence of which point of the voltage curve marks the dissolution of the copper foil, one cell at a time was discharged to a defined voltage value, the electrolyte was removed and the cell was opened. The copper content in the electrolyte was quantitatively determined by a high-resolution continuum source atomic absorption spectrometer (Table 1). Table 1. The red squares in Fig. 3b show the copper concentration determined in the electrolyte during overdischarge. The voltage curve (blue line) was drawn in again for illustration. Figure 3c shows a cathode foil (tab. 2, cell 7) with a large area of deposits of copper with the typical "leopard skin" pattern and separator components (white deposits). Figure 3a clearly shows that copper is already detectable in the electrolyte at − 1.2 V and, thus, before the voltage minimum is reached. This is remarkable since, according to Fear et al., the electrochemical dissolution of the copper foil should only begin at the voltage minimum (here − 1.91 V) 16 . The amount of charge transferred at − 1.2 V of Q = 25.3 Ah corresponds approximately to the amount of charge that can be withdrawn from a cell that has been individually discharged to 0 V (Fig. 2). Since, from this point on, practically no Lithium-ions are available in the working cell as a charge carrier, the charge flow in the working cell forced by the driving cell leads to the dissolution of the copper of the anode carrier foil and the transport of copper ions to the cathode. At this point, no copper is yet detectable on the cathodes of the opened cell. Copper deposits are found both on the cathode and on the anode only at a potential of − 1.77 V (Q = 25.7 Ah) at the working cell. At the same time, an increased concentration of copper ions is detectable in the electrolyte. Copper deposits could also be detected on the surface of the graphite anode (Fig. 4a) only in the cell discharged to − 1.77 V (Q = 26.1 Ah), but not inside the graphite layer (Fig. 4b). Since there is no electron conduction between cathode and anode at this stage, this leads to the assumption that the first copper ions reaching the surface of the graphite are possibly reduced to metallic copper by reactive, reducing decomposition products from the collapse of the SEI. Once these are consumed, the copper deposited on the graphite, which is electronically connected to the copper carrier foil by the graphite, dissolves again in the further course of the discharge of the driver cell. After the collapse of the SEI, a further reduction of copper ions at the anode surface is no longer observed. In the further course of the overdischarge, the deposition of copper only takes place on the cathode surface. As the discharge proceeds to the voltage minimum, the amount of copper deposited on the cathode foil and the copper concentration in the electrolyte continue to increase. At Q ≈ 27 Ah, the clamping voltage of the work cell rises sharply from − 1.91 V to about − 1.5 V. The cause, according to other authors 14,16,18 , is assumed to be the first copper dendrites growing through the separator and leading to internal short circuits. The first copper dendrites, which grow through the separator and lead to internal short circuits, are suspected of being the cause. Apparently their initial conductivity is very limited, so that although the internal resistance of the cell decreases, further dissolution of the copper foil and further copper deposition on the cathode foil occurs. Thus, at this Table 1. 
Analysis of electrolyte (w Cu , w Ni,Mn,Co ) and visual inspection of anode (AF) and cathode (CF) foils, x → not present, ✓→ present; U End is the voltage drop across the battery at electrolyte withdrawal, Q is the amount of charge withdrawn from the circuit, cell 7 a.M. after the minimum, LOD Cu = 0.01 mg kg −1 , n.d. → not detectable. It follows that the current flow almost completely passes over copper dendrites, which have led to a cell short circuit from the cathode surface across the separator to the graphite layer. The increase in the voltage drop across the working cell at the end of the discharge then corresponds to a decrease in the internal cell resistance due to the electron conduction through the copper dendrites. In their studies, Hendricks et al. found that the deposited copper was non-conductive species 21 . However, they considered it possible that further overdischarge into reversal, as described in this and other work 14,16,18 , could result in deposition of metallic copper. Figure 5 shows an example of the top view and cross-section of a cathode foil taken from cell no. 7 after opening. It can be seen that the separator is very firmly attached to the graphite layer in places (Fig. 5a). Figure 5b shows the cross-section of the area marked in Fig. 5a, consisting of an Al carrier foil (D) coated on both sides with NMC (E), plastic separator foil (G) coated on both sides with Al 2 O 3 , and the graphite layer (H). The sharp boundary between the copper layer and the NMC layer shows that the copper is deposited on both sides and flat on the NMC surface of the cathode (Fig. 5b, layer "G"). Furthermore, the plastic membrane of the separator (G) and the Al 2 O 3 coating of the separator foil (F) are completely penetrated by deposited copper. The original Al 2 O 3 coating of the separator foil (in red) is still present but seems to be completely penetrated by copper. Figure 5b also shows that copper has grown dendritically from the surface of the NMC coating through the separator and into the carbon coating of the anode (H). In addition, no oxygen is detectable in the deposited copper by EDX (Fig. 5c), which supporting the theory that further overdischarge into reversal results in the deposition of metallic and thus conductive copper 14,16,18,21 . Finally, it should be noted that the chemical analysis shows that the cathode material is not affected by the discharge processes and no cobalt, manganese or nickel could be detected in the electrolyte (Table 1). Temperature characteristics. Figure 6a shows the voltage curve of the discharge of a working cell at a resistor of 58 mΩ and the temperature curves of the working cell and driver cell. The temperature in the working cell increases only slightly until the end of the capacity expected but very significantly as soon as the voltage drop occurs. This effect becomes obvious because, as shown in Fig. 6b, the strongest increase of the temperature corrected by the waste heat (Newton's cooling law) takes place at the point where the 1st derivative of the voltage function has the local minimum. The maximum temperature of about 50 °C is reached at the voltage minimum of the working cell. The temperature difference between the work cell and the driver cell is, therefore, considerable since the same current flows through both cells. Only when the voltage in the working cell rises again (Fig. 6b, dU/dQ = max), the 1st derivative of the corrected temperature drops to about 0 and the temperature of the cell decreases continuously. 
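The temperature "corrected by the waste heat (Newton's cooling law)" mentioned above can be reproduced by adding the estimated heat loss to the environment back onto the measured temperature trace. In the sketch below the temperature data, the ambient temperature and the cooling constant are all assumed placeholder values; in practice the cooling constant would be fitted from a separate cool-down experiment.

```python
import numpy as np

# Synthetic placeholder for the measured cell temperature (°C), logged every 5 s.
t      = np.arange(0.0, 600.0, 5.0)                  # s
T_meas = 22.0 + 28.0 * (1.0 - np.exp(-t / 180.0))    # made-up heating curve

T_amb = 22.0    # °C, ambient temperature (assumed)
k     = 0.002   # 1/s, Newtonian cooling constant (assumed; normally fitted from a cool-down)

# Newton's law of cooling: the loss-corrected heating rate is
#   dT/dt|corrected = dT_meas/dt + k * (T_meas - T_amb)
heating_rate = np.gradient(T_meas, t) + k * (T_meas - T_amb)

# Integrate the corrected rate to get the temperature rise without heat losses.
T_corr = T_meas[0] + np.cumsum(heating_rate) * 5.0

print(f"max measured temperature : {T_meas.max():.1f} °C")
print(f"max loss-corrected rise  : {T_corr.max():.1f} °C")
```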
It is plausible to assume that the maximum temperatures inside the cell are significantly higher and initiate the degradations there that cause the separator foil to adhere to the surfaces of the electrodes in a virtually non-detachable manner. High temperatures inside the cell are also likely to cause the viscosity of the electrolyte to decrease and the resulting gas bubbles to displace the electrolyte. Discussion Interpretation of the voltage curve. It has been shown that the deep discharge of a single cell does not lead to the dissolution and deposition of copper. If, on the other hand, at least two cells are connected in series in which one cell has a lower charge capacity than the other, deep discharge in this cell causes the copper carrier foil to dissolve and copper to be deposited on the surface of the NMC cathode. If the anode is completely delithiated, no lithium is available for further charge transport. The potential of the series-connected cells of higher charge capacity (SOC > 0) is the driving force to trigger the parasitic process of dissolving the copper in the cell with lower charge capacity, which is discharged earlier (SOC = 0) with further current flow (SOC < 0). Based on the present results, the following interpretation of the discharge curve is suggested (Fig. 7). The black area marks the area of delithiation and breakdown of the SEI, the green area, the copper dissolution and the red area, the internal short circuit through copper dendrites. The point at which the delithiation is finished as a dominant process is shown as a minimum in the derivative of the voltage curve. This is also the point where the temperature increase is greatest (Fig. 6b). It is assumed that the further, smaller drop in voltage or the subsequent increase in discharge (dV/dQ) is possibly due to the discharge of protic species or the residual water in the electrolyte at the anode 23,24 . This is accompanied by the gas formation, which leads to cell bloating. This process is determinant up to the point where the derivative forms a small plateau (Fig. 7, transition black-green). Following this plateau, the derivative increases more strongly again and the voltage drop proceeds with a smaller increase, which means that the charge transport is less inhibited. In this region, charge transport is carried out by copper ions, although it is still unclear whether this is Cu +16 or Cu 2+14 . The section marked in red is introduced with a steep rise and a subsequent maximum of the derivative function. From this point on, a large number of copper dendrites cause internal short circuits (see Fig. 5) and connect the surfaces of anode and cathode in an electrically conductive manner. Electrochemical processes take on a subordinate role from this moment. As shown in Fig. 6, no further significant heat generation is observed after this stage, which suggests that the electrical work performed is low and the cell behaves approximately as an ohmic resistor. Summary. It has been shown that the deep discharge of a single cell does not lead to the dissolution and deposition of copper. If, on the other hand, at least two cells are connected in series in which one cell has a lower charge capacity than the other, deep discharge in this cell causes the copper carrier foil to dissolve and copper to be deposited on the surface of the NMC cathode. If the anode is completely delithiated, no lithium is available for further charge transport. 
The potential of the series-connected cells of higher charge capacity (SOC > 0) is the driving force to trigger the parasitic process of dissolving the copper in the cell with lower charge capacity, which is discharged earlier (SOC = 0) with further current flow (SOC < 0). By measuring the copper concentration at various points of the discharge curve, analytical proof was provided of the point at which the dissolution of the current collector of the anode begins. Furthermore, it could be shown that the copper concentration in the electrolyte increases rapidly afterwards and converges towards a limit value of about 8 ppm in the investigated interval. Using SEM-EDX mappings, the penetration of the separator and thus the electrical contact between anode and cathode by the copper growing on the cathode surface could be shown. There is no effect on the functional recycling as long as copper ions are present in low concentrations (≤ 1 ppm) in the electrolyte. On the other hand, the deposition of copper on the cathode material over the entire surface leads to significant damage to the NMC recovered since the resulting loss of capacity does not allow for the economic reuse in recycled batteries. Discharge tests. Pouch cells with cathodes mainly consisting of NMC 622 and a remaining capacity of 20 Ah were used for the experiments. The deep discharge of individual cells was performed with the help of constant resistors (8 mΩ … 5.9 Ω); cell voltage and discharge current were recorded via a data log system (Arduino Uno, time interval of the measuring points = 2 s). The temperature was recorded with a temperature sensor www.nature.com/scientificreports/ TF-500 type K (PCE Instruments) and with a data logger PCE-T 390 (PCE Instruments) at intervals of 5 s during the deep discharge. Figure 1a shows the test arrangement for the deep discharge of two pouch cells connected in series. Both cells were previously charged to 100% capacity; cell 2 was then discharged via a resistor of 41 mΩ to a clamp voltage (U CV ) of 3 V, which corresponds to a capacity of 15%. A new pair of pouch cells was used for each deep discharge test in a series connection. If the voltage of the working cell in the deep discharge reached a defined value (U CV = -0.8, -1.2, -1.5, -1.77, -1.91, -1.0 V after the minimum), the test was stopped and the electrolyte was taken from the working cell immediately. The temperature of the working cell was recorded during deep discharge, as described above. Electrolyte analysis. The analysis of the electrolyte extracted was carried out using graphite furnace technology with a high-resolution continuum source atomic absorption spectrometer ContrAA 700 (Analytik Jena AG, Jena, Germany). Argon 99.999% (Westfalen AG, Münster, Germany) was used as the inert and purge gas. Characteristic absorption wavelengths were used for the evaluation of the elements nickel (232.0030 nm), cobalt (240.7254 nm), manganese (279.4817 nm) and copper (324.7540 nm). An equidistant 11-point calibration (0…0.067 mg kg −1 ) in triple determination was carried out regarding the quantification of the copper. The electrolyte sample was manually diluted by a factor of 42…43 (by mass) with deionized water (MilliPore, 18 MΩ cm) and subsequently diluted by a ContrAA 700 autodilution procedure. A calibration was not necessary for Ni, Co and Mn since no element signal could be detected in any of the electrolyte solutions. SEM-EDX analysis. All deeply discharged pouch cells were then opened. 
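The copper quantification described above, an 11-point calibration in the 0…0.067 mg/kg range followed by back-calculation through the manual dilution factor of 42…43, can be illustrated with a small calibration sketch. The absorbance values, the response slope and the sample reading below are invented placeholders, and the additional autodilution performed by the ContrAA 700 is ignored for simplicity.

```python
import numpy as np

# Hypothetical 11-point calibration: Cu standards (mg/kg) vs. absorbance (a.u.).
conc_std   = np.linspace(0.0, 0.067, 11)
absorb_std = 0.02 + 5.8 * conc_std          # made-up, ideally linear response

slope, intercept = np.polyfit(conc_std, absorb_std, 1)   # least-squares calibration line

absorb_sample   = 0.25                                    # a.u., placeholder reading
c_diluted       = (absorb_sample - intercept) / slope     # mg/kg in the diluted sample
dilution_factor = 42.5                                    # manual dilution, from the text
c_electrolyte   = c_diluted * dilution_factor             # mg/kg Cu in the electrolyte

print(f"diluted sample : {c_diluted:.4f} mg/kg")
print(f"electrolyte    : {c_electrolyte:.2f} mg/kg Cu")
```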
The morphological and elemental analyses of the dismounted cathode foils were performed using a scanning electron microscope (SEM, Zeiss EVO MA15) equipped with an energy-dispersive X-ray analysis system (EDX, AMETEK, 20 kV).
6,616.4
2021-03-18T00:00:00.000
[ "Materials Science", "Engineering" ]
Variation of the intrinsic rock properties on Hoek-Brown failure criterion parameters The Hoek-Brown (H-B) criterion is one of the most commonly used rock failure criteria in recent years. This criterion includes a constant parameter called m i which is a fundamental parameter for estimating rock strength. Due to the im portance of the m i parameter in the H-B criterion, it is necessary to conduct comprehensive studies on various aspects of the effect of this parameter on the behavior of rocks. Therefore, in this study, using numerical simulation of the Triaxial Compressive Strength (TCS) tests in PFC-2D code, the effects of microscopic properties of different rocks on the H-B parameter m i have been studied. Based on the results of this study, it was found that the effects of micro-parameters on the H-B parameter m i can be different depending on the type of rock, however this parameter has an inverse relationship to the micro-parameters of bond tensile strength and bond fraction of the rocks. Also, the m i parameter increases with an increase in the micro-parameters of the friction coefficient, the friction angle, the particle contact modulus, and the contact stiffness ratio of rocks. Introduction Between all of the failure criteria presented, the Hoek-Brown (H-B) (1980) empirical criterion is one of the most well-known. This criterion has become an indispensable tool for rock engineers due to its simplicity, proper compatibility to laboratory data, as well as its applicability to rock masses, and it has been used successfully in various aspects of rock engineering in recent years (Merifield et Depending on the application, different types of H-B criterion have been proposed. The mathematical equation for an intact rock is expressed as follows (Hoek & Brown, 1980a): (1) Where: σ 1 : the major principal stress, σ 3 : the minor principal stress or confining pressure, m i : H-B material constant, σ ci : the uniaxial compressive strength of the intact rock. According to the H-B criterion, one of the parameters that has a significant effect on the failure of rocks is m i , which is a dimensionless parameter. The strength parameter m i which is generally assumed to be a curve-fitting parameter to achieve the H-B failure envelope, can be determined by statistical evaluation of the results of experimental studies, linear or non-linear approaches (Hoek and Brown, 1980a;Hoek and Brown, 1980b;Hoek, 1983;Shah and Hoek, 1992;Colak and unlu, 2004), and its values are distributed from 7 to 35 depending on the rock material characteristics (Hoek, 2007). Due to its significance in H-B failure criterion, many researchers have dedicated their studies to parameter m i (Colak and Unlu, 2004 Material and methods In the current study, the mechanical properties of three rock types namely andesite, limestone, and sandstone have been used to conduct investigations. Numerical simulations have been performed using Particle Flow Code 2D (PFC-2D), which is one of the most popular software based on the Discrete Element Method (DEM). Due to its high capability to simulate the mechanical behavior of different rocks, PFC-2D has been considered by many researchers and has been used in a wide range of numerical studies in recent years. (Potyondy and Cundall, 2004;Calvetti, 2008 In these studies, the numerical samples follow the ISRM recommendations, and the diameter and height of the samples were considered to be 54 mm and 115 mm, respectively Figure 1. 
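For reference, the Hoek-Brown relation for intact rock that Eq. (1) refers to, written in the symbols defined above (with the intact-rock constants s = 1 and a = 0.5), is commonly given as:

```latex
% Hoek-Brown failure criterion for intact rock (Eq. 1), in the notation defined above
\sigma_1 = \sigma_3 + \sigma_{ci}\left( m_i \, \frac{\sigma_3}{\sigma_{ci}} + 1 \right)^{0.5}
```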
Before starting the analysis, it is essential to define the base values of the micro-parameters. The base values of the micro-parameters from each rock are given in Table 1, which is determined with a calibrating operation on the experimental data. For the calibration process, the mechanical behavior of the synthetic samples under the compression test was reproduced and compared with the experimental tests. In this paper, the TCS test has been used for calibrating the micro-properties of synthetic samples. Also, to ensure the accuracy of the numerical results, the failure envelopes of one of the rock samples has compared with laboratory tests, which is shown in Figure 2. Moreover, in Table 2, the values of rock strength parameters in both numerical and laboratory modes are compared with each other. According to Figure 2 and Table 2, the values obtained from numerical models are very close to the laboratory values in all three rock samples. Therefore, it can be concluded that the created models are sufficiently accurate to perform more investigations. Results and discussion After validation and ensuring the accuracy of the answer provided by numerical simulations, to study the effect of micro-parameters on the m i parameter, seven fac- tors including bond tensile strength, bond cohesion, the friction coefficient, the friction angle, bond fraction, the particle contact modulus (Ec) and the contact stiffness ratio (Kn/Ks) have been selected. Since in current research the effects of micro-parameters on three different rocks have been evaluated, the results have been presented in three sections, which are discussed as follows. Andesite The results of the studies about the behavior of m i parameter in andesite rock are shown in Figure 3. According to Figure 3, it can be seen that the value of m i decreases with an increase in the amount of bond tensile strength, which changes from 0 to 15 MPa. By increasing the amount of particles bond cohesion, which has varied from 10 to 40 MPa, the m i parameter decreased firstly (C=20 MPa), then increased slightly and the changes became small. With an increase in the amount of the friction coefficient, which changed from 0.2 to 0.8, m i also increased. The rate of these changes decreased between the values of 0.4 and 0.6 and then increased again as before. Changes in the values of the friction angle led to an increase in the m i parameter. The rate of these changes was smaller at lower angles. The m i parameter also decreased with an increase in the bond fraction which varied from 0.5 to 1. Moreover, as the stiffness ratio increased, the m i parameter increased linearly. Also, with an increase in the value of the modulus of elasticity, the m i parameter increased slightly. The effect of micro-parameters on the failure envelope of andesite rock are shown in Figure 4. According to Figure 4, it can be seen that the greatest impact of the tensile strength was on the minor stress. This micro-parameter also had an effect on the major stress of the rock, but its amount is small. The micro-parameters of cohesion, the friction coefficient and the friction angle caused the strength of the rock samples to increase. These micro-parameters only affected the major stress of the rock. Bond fraction lead to an increase in the value of the major and minor stress. The stiffness ratio and particle contact modulus also affected both stresses, but the greatest effect had occurred on the major stress. 
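Since m i is treated here as a curve-fitting parameter of the failure envelope, it can be recovered from pairs of confining pressure and peak strength by non-linear least squares. The sketch below illustrates only this envelope-fitting step with made-up triaxial data points and starting values; it is not the PFC-2D calibration procedure itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def hoek_brown(sigma3, sigma_ci, m_i):
    """Hoek-Brown failure envelope for intact rock (s = 1, a = 0.5)."""
    return sigma3 + sigma_ci * np.sqrt(m_i * sigma3 / sigma_ci + 1.0)

# Hypothetical triaxial results: confining pressure vs. peak axial stress (MPa).
sigma3 = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0])
sigma1 = np.array([80.5, 95.0, 117.5, 144.0, 172.0, 195.5])

(sigma_ci_fit, m_i_fit), cov = curve_fit(hoek_brown, sigma3, sigma1, p0=(80.0, 10.0))
print(f"sigma_ci = {sigma_ci_fit:.1f} MPa, m_i = {m_i_fit:.1f}")
```

With real TCS data, the diagonal of the covariance matrix gives a first indication of how well m i is constrained by the tested range of confining pressures.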
Sandstone The results of studies on sandstone samples are shown in Figures 5 and 6. Based on the results of Figure 5, it can be seen that increasing the amount of the micro-parameter of bond tensile strength leads to a decrease in the value of the m i parameter, so that its value has changed from 16 to 7, which reached about half of its value. By increasing the micro-parameter of bond cohesion, m i changed slightly at first (40 MPa < C < 60 MPa), particle contact modulus, the m i parameter first increased slightly (E c <14) and then it decreased again (E c >14). According to Figure 6, it can be seen that, same as andesite rock, the greatest effect of the micro-parameter of bond tensile strength was on the minor stress and its effect on the major stress was relatively smaller. The micro-parameters of bond cohesion, the friction coefficient and the friction angle only affected the major stress Rudarsko-geološko-naftni zbornik i autori (The Mining-Geology-Petroleum Engineering Bulletin and the authors) ©, 2021, pp. 73-84, DOI: 10.17794/rgn.2021.4.7 of the rock. The bond fraction and the particle contact modulus increased both the maximum and minimum stresses. Also, the stiffness ratio micro-parameter affected both major and minor stresses, but the greatest effect occurred on the minor one. Limestone The results of studies on limestone samples are shown in Figures 7 and 8. According to Figure 7, it can be seen that with an increase in bond tensile strength, the m i parameter decreased so that its value reached half of the original value (it changed from 32 to 15). With an increase in bond cohesion, the m i parameter first decreased (20 MPa < C < 40 MPa), then it increased (40 MPa < C < 65 MPa) and finally decreased again. Increasing the friction coefficient, which varied from 0.4 to 1, led to an increase in the m i parameter so that the value of m i changed from 12 to 18. The rate of changes in m i parameter were small at first (0.4 to 0.6), then these changes increased (0.6 to 0.8) and finally decreased again. As the friction angle increased, m i also increased. However, the rate of this changes was relatively low. By increasing the amount of bond fraction micro-parameter, which varied from 0.7 to 1, the m i parameter decreased. This reduction was linear and the m i changes ranged from 22 to 17. Increasing the stiffness ratio also led to an increase in the value of the m i parameter. The changes of the m i parameter were small at first (1 <K N /K S < 1.5), then increased (1.5 <K N /K S < 2) and finally became small again. Moreover, by increasing the amount of particle contact modulus, m i first increased slightly (E c <15), and then the m i value decreased again. According to Figure 8, it can be seen that similar to the earlier samples, the greatest effect of the bond tensile strength micro-parameter was on the minor stress and its effect on the major stress was relatively smaller. Increasing the micro-parameter of bond cohesion and bond fraction increased the major and minor stress of the limestone. The micro-parameters of friction coefficient and friction angle affected the major stress of the rock. Also, the effect of these micro-parameters on the minor one were very minimal. Increasing the micro-parameter stiffness ratio increased the major stress. This parameter also led to a reduction in minor stress, which is lower in higher values. 
Also, increasing the micro-parameter of the particle contact modulus increased both the major and minor stresses, with the changes in the minor stress being greater than those in the major stress. Conclusion In this paper, using numerical simulations of triaxial compressive strength tests in the PFC-2D code, the effects of rock micro-parameters on the H-B parameter m i were investigated. To perform the analyses, the mechanical behaviour of three different rock types (andesite, sandstone and limestone) was simulated, and the effects of the micro-parameters of bond tensile strength, bond cohesion, the friction coefficient, the friction angle, bond fraction, the particle contact modulus (Ec) and the contact stiffness ratio (Kn/Ks) were evaluated. According to the performed analyses, the results of this research can be summarized as follows:
• The m i parameter of the H-B criterion has an inverse relationship to the micro-parameters of bond tensile strength and bond fraction, such that increasing these micro-parameters reduces the m i parameter to about half of its original value.
• The effect of the bond cohesion micro-parameter on m i varies between rock types.
• The m i parameter increases with an increase in the micro-parameters of the friction coefficient, the friction angle, the particle contact modulus, and the contact stiffness ratio. However, these effects vary depending on the type of rock.
• The greatest effect of the bond tensile strength micro-parameter was on the minor stress, and its effect on the major stress was relatively smaller.
• The greatest effects of the friction coefficient and friction angle micro-parameters were on the major stress, and their effects on the minor stress were very small.
• Increasing the contact stiffness ratio micro-parameter reduces the minor stress in the different rocks.
2,829.4
2021-09-20T00:00:00.000
[ "Geology" ]
Towards Smart Campus Management: Defining Information Requirements for Decision Making through Dashboard Design : At universities worldwide, the notion of a ‘smart campus’ is becoming increasingly appealing as a response to the multitude of challenges that impact campus development and operation. Smart campus tools are widely used to support students and employees, optimise space use and save energy. Although smart campus tools are supposed to support campus managers in their decision-making processes, the use of the information delivered by smart campus tools and their application in organisational processes has received little attention. In this paper, we focus on the use of dashboards in the connection of IoT information to strategic decision-making processes in the management of university campuses. To this end, we developed a briefing approach for dashboards that expresses the needs of campus management and matches the structure of decision-making processes. In two cases, dashboards based on this approach were use-tested by stakeholders for defining information requirements for IoT applications. The results suggest that users are able to use dashboards for assessing portfolio performance and determining interventions. Through iteration the usability of the dashboard is improved and information requirements are refined, resulting in a brief for a campus management dashboard. The results suggest that the briefing approach can be used to determine IoT information requirements, though further research is required to study indications and contra-indications of the proposed method. Introduction At universities across the world, the notion of a 'smart campus' is becoming increasingly appealing as a response to the multitude of challenges that impact campus development and operation. Firstly, universities are faced with an increasingly uncertain demand for facilities, both qualitatively and quantitatively. A growing share of international students results in a more uncertain student influx [1] and a more diverse demand for student facilities and services on campus [2,3]. Furthermore, as securing research funding from public or private sources is increasingly competitive in 'academic capitalism' [4,5], there is competition for financial resources. This results in more temporary employment contracts and uncertainty in the demand for offices and laboratories. Secondly, the modernisation of many campuses is becoming pressing. Many campuses in Europe and the United States consist largely of ageing buildings that are often in need of renovation and therefore (re)investment [6,7]. Combined with reduced government funding, this leads universities to alternative financing models. Newell and Manaf [8] observe a tendency amongst five Australian universities to use different funding models for their investments such as leasing, debt funding, donations and private development. In the UK, universities have already invested significantly using, e.g., private bond issuing, commercial bank lending and loans from the European Investment Bank [9]. Put together, these challenges greatly increase the difficulty of strategic decision making in campus management. The combination of more ambitious goals and pressure on energy, financial and human resources drive universities to invest in efficient campus management, including by means of information, through smart tools. In previous research, the authors researched the use of smart campus tools in universities. 
Smart campus tools are defined as follows: "a smart campus tool is a service or product with which information on space use is collected real-time to improve utilization of the current campus on the one hand, and to improve decision making about the future campus on the other hand" [10]. Although there are many examples of smart campus tools available in both practice and literature, the utilization of information delivered by smart campus tools in organisational processes has received little attention [11]. In previous research we studied strategic decision-making processes in campus management and explored how information from the Internet of Things (IoT) can support them. The conclusion was that the IoT can deliver valuable information to the overview of real estate supply and its performance. As this overview normally requires information from many different sources, its creation tends to be very time-consuming. A more efficient and reliable alternative is to bring together data from various IoT applications, other databases and sources in a platform that supports automated production of overviews [11]. Based on that, the main objective of the present research is to develop an appropriate connection of IoT applications and their data to real-life decision-making processes. The paper reports on two cases (Radboud University and TU Delft) in which organisations are supported to determine the information needs for their decision-making processes by designing dashboards. In addition to the managerial results, the design outcomes (the dashboards) are also of interest for the case study organisations: they provide examples of the performance required in strategic decision making. Therefore, the secondary objective of this research is to design usable dashboards for campus managers, using the conceptual design in Figure 1 as a starting point. The main research question of this paper is thus: How can the information demand of campus management be matched to the capabilities of IoT applications, and optimally displayed in a dashboard? Design research is chosen as the strategy to answer the main research question, as the subject calls for an operational exploration of the fundamental principles and conditions of dashboards that contain information from the IoT. The dashboard designs presented in this paper express indicators and relations relevant to campus management, which are first designed, and then refined and tested together with users. The novelty of this research lies in this use of design research. To the best of the authors' knowledge, there is no research that fulfils the following conditions: (a) it discusses dashboard prototyping as a needs analysis method for IoT applications in campus management (see Section 2.2), and (b) the dashboard designs report a combination of indicators from the IoT and legacy systems related to all four stakeholder perspectives in campus management (see Section 3.1). The rest of this paper is structured as follows: first, Section 2 discusses the use of design research (2.1) and the use of dashboards and dashboard design for the purposes of this research (2.2), and introduces the cases (2.3). Then, Section 3 discusses the design principles of the dashboard (3.1), followed by the design outcomes (3.2) and then the determination of requirements through dashboard design (3.3). Finally, Section 4 concludes the paper. 
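The kind of IoT-fed overview referred to above, in which real-time space-use data are rolled up into utilisation indicators that a dashboard can display, can be illustrated with a small aggregation sketch. The sensor log, room capacities and the frequency and occupancy definitions below are simplified placeholders and are not the indicators or data used in the two case studies.

```python
import pandas as pd

# Hypothetical occupancy-sensor log: one head count per room per hour slot.
log = pd.DataFrame({
    "room":      ["A1", "A1", "A1", "B2", "B2", "B2"],
    "hour":      [9, 10, 11, 9, 10, 11],
    "headcount": [0, 12, 18, 25, 30, 0],
})
capacity = pd.Series({"A1": 24, "B2": 40}, name="capacity")  # seats per room

kpi = log.groupby("room").agg(
    frequency=("headcount", lambda s: (s > 0).mean()),   # share of hours in use
    mean_occupancy=("headcount", "mean"),
).join(capacity)
kpi["occupancy_rate"] = kpi["mean_occupancy"] / kpi["capacity"]  # use vs. seat capacity

print(kpi.round(2))
```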
Design Research Strategy In order to answer the main research question, design research was conducted as described in Van Aken [12,13], Hevner et al. [14] and Hevner [15]: prototypical dashboards were designed for specific campus questions, and the design process and the performance of the design results were studied. Figure 2 shows the parts of the research positioned in the framework of Hevner [15]. This framework consists of three cycles: - In the relevance cycle, a problem is formulated for which an artefact needs to be designed, along with the requirements to design and test the artefact; - In the design cycle, the researcher iterates between designing and testing the artefact that is designed to solve the research problem; - In the rigor cycle, the problem and the design outcomes are grounded in the scientific knowledge base. In this research, both cases formulate their own specific problems. The dashboard prototypes are designed in the design cycle and tested together with relevant stakeholders. By grounding the dashboard design in existing theory and research, the knowledge generated through the design outcomes in both cases can be added to the knowledge base. Accordingly, the design research leads to multiple design outcomes: an object design, a process design, and an implementation design (in accordance with Hevner [15]). In this research, those design outcomes are as follows: - The process design is the sequence of activities to realise the object design. The process design describes which steps should be taken to determine information requirements for campus decision making. Testing the process design is the main objective of this research. - The object design is the dashboard prototype. The dashboard is based on previous research, and is designed to support campus managers in determining the match between the demand for and supply of real estate and subsequent steps in making a campus strategy. The two resulting object designs and their usability are the secondary objective of this research. - The implementation design is a brief, which specifies (a) practical use requirements for the dashboard, (b) which information the dashboard needs to show to support the specific decision process and (c) which steps need to be taken to organise the dashboard accordingly. The implementation design thus reports the outcomes of the main and secondary objectives to each case. The research design of a case is shown in Figure 3. Following the client statement, which describes the problem faced by the case and its requirements for a solution, the authors design dashboard prototypes based on dashboard design principles (from the knowledge base). The results are tested in two workshops, which took place online (due to COVID-19 restrictions) with a group of stakeholders. In each case, six participants were selected in consultation with the client. These participants were professionals who were involved in strategic campus decision-making processes. The design of the dashboard prototypes was implemented in Microsoft Excel, a program (1) with sufficient facilities for combining various data sources and visualising data and (2) familiar to participants. The goal of the workshops was to determine the information requirements for the dashboard, moving from what is maximally possible (workshop 1) to what is required by the participants (workshop 2). Prior to the use of the dashboard in the first workshop, users were introduced to the dashboard through a presentation and a short instruction video. 
Observers recorded the interactions during the workshops, which were then coded and analysed in three ways: A1: The number of interactions with each indicator: this was used to select which indicators were actually required in the dashboard. A2: The quality of the interactions with each indicator: this was used to (a) determine if participants understood the contents of the dashboard and (b) identify opportunities to improve the dashboard. A3: The interventions determined by the participants on the basis of the dashboard: this was used to understand if participants could use the dashboard to complete the assignments. As Figure 3 shows, the outcomes of analyses A1 and A2 were used to refine the design of the dashboards. They were thus part of the process design, which was proposed and tested as the main objective of this paper. The dashboard designs and analysis A3 give information about the object designs and how they were used by participants, and were thus connected to the secondary objective of this paper. Figure 3. Research design for one case, displayed twice to show the relationship between the analyses and the main and secondary objectives. The resulting design brief answers the client statement. The analyses of the testing phase inform the knowledge base. A1, A2 and A3 denote the three analyses reported in the paper. Emphasis in bold denotes relevance to each objective. Dashboards and Dashboard Design The use of dashboard design in this research needs to be grounded from two perspectives. Firstly, dashboard design is one of several methods to determine information requirements, i.e., the main objective of this paper. Secondly, dashboards are one of several methods to present information in decision making in campus management, i.e., the secondary objective of this paper. First, this section discusses the use of dashboards as a means to present information in decision making, after which it moves to determining information requirements through their design. Dashboards are an increasingly popular instrument in the field of performance management [16,17]. Over time, dashboards have evolved from stand-alone displays of KPIs to interactive enterprise-wide decision support systems [17]. This is cause for some confusion: some distinguish dashboards as instruments for operational decision making from scorecards as instruments for strategic decision making [18], while others define a dashboard more broadly as an instrument to be tailored to a specific type of decision or objective [19,20]. This research uses a broader interpretation of dashboards, after Few: "a visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so the information can be monitored at a glance" [19]. This broader definition of dashboards requires further specification and alignment with their objective. Table 1 describes the characteristics of the dashboards designed in this research for the purposes of informing strategic decision-making processes in campus management. Dashboards can also be positioned against multiple criteria decision analysis (MCDA) approaches. Here, dashboards and MCDA approaches are seen as complementary rather than competing. MCDA deals with the structuring and solving of problems involving multiple criteria, such as the problems studied in the cases of this research. There is a broad range of MCDA approaches available, which have also been applied to problems in real estate management [21,22]. 
Following the results of our previous research, we focused on a specific activity in the decision-making process: the overview of the supply of real estate and its performance. Dashboards are well-suited to provide such an overview in a visual display, on a single screen. The objective of this overview was to create a basis for subsequent actions. In subsequent steps of this decision-making process (defining strategies and weighing and selecting strategies), MCDA approaches can be applied. A dashboard combining information from the IoT with other campus management indicators provides a reliable basis for MCDA modelling of decisions and their impact on the criteria displayed in the dashboard. Following the discussion about the use of dashboards to present information, the next issue is the use of dashboard design as a method to determine information requirements. Within information management this is related to the activity of requirements analysis for (information) systems development [23], also termed needs analysis or requirements engineering. The first step of requirements analysis is requirements elicitation, which concerns itself with gathering and organising information requirements from stakeholders [24]. The use of prototyping (in our case, dashboard design) is a common method to achieve this [24,25]. Other methods to elicit requirements are traditional techniques, e.g., surveys and interviews; group techniques, e.g., brainstorms and focus groups; or contextual and cognitive techniques [24,25]. Tuunanen et al. [24] review these techniques in order to find a method that (1) can reach a wide range of users, i.e., a community, and (2) has two-directional communication, allowing for interaction with and understanding of the users. In this research, the intended users of the dashboards are a small, homogeneous group; hence, their development does not have to involve many users. Furthermore, the real-time communication by IoT devices distributed in an environment affects the way users interact with it [25,26], which is another reason to use more interactive, two-directional elicitation methods such as prototyping and iterative design [26]. Case Descriptions Two case studies were included in this research: Radboud University (RU) and TU Delft (TUD). The case selection was based on the following reasons: - Both cases were included in previous research [11], in which the information requirements for their processes of creating a real estate strategy were studied; - Key stakeholders have indicated that it is difficult to produce an overview of their real estate portfolio and its performance for use in strategic decision making; - They have expressed a desire to make more decisions on a portfolio level, which would require such information; - Currently they do not have any IoT applications implemented but wish to do so in the future. In both cases, the dashboards display information derived from the available data on the real estate portfolio, complemented with fictive data where the sources would have been IoT applications. Further case-specific information on the use of the available data is given in the case descriptions. Radboud University Nijmegen Radboud University (RU) is a university with around 22,000 students located in Nijmegen, the Netherlands. The university has concentrated its activities on its campus, which was formerly an area on the periphery of Nijmegen but has now been enveloped by the city. 
At the start of 2020, the university established a new real estate strategy. The strategy focuses on sustainability and optimal use of the existing buildings on the one hand, and on further developing towards a livelier campus on the other hand. RU wants to accommodate growth maximally in the existing area and further increase the utilisation of the buildings. Rather than extending opening hours across the campus, it opts for a synergy of existing functions. A higher utilisation is achieved by implementing modern office concepts, improving the scheduling of education and implementing smart tools to show the available capacity within the existing spaces to the users. In this research, the university chose to focus the case on its study places. In the existing situation, there are many types of study places in the various buildings of the university. Students mostly use the study places of their own faculty and the library building. There is no overview of all the study places; furthermore, the management of the study places is organised in different ways. In the future the university wants to use all study places as flexible, shared facilities that can be used by any student at the university. At the time of the research, following the transfer of study place assets from the faculties to the department of campus and facilities, a project group was working towards a uniform way of managing them. This included stating the desired quality and quantity of study places, the use of personnel and the required finances. The RU campus has around 28 university buildings, six of which contain study places. Beyond their location, not much information on study places is available. The floor area per study place and costs of each building are known. However, the number and type of study places are not registered. In the dashboard, information is required at room level, including floor area, type, capacity and costs. Consequently, what was displayed in the dashboard prototypes had to be supplemented with hypothetical, plausible data, both for the real estate indicators and the information that would be delivered through IoT. This was not expected to influence the quality of the results. Even with fictive data, workshop participants could assess the performance of the real estate portfolio and define interventions based on that. Any deviation from reality would not impede utilization of the indicators included in the dashboard and, therefore, the workshops would still provide the envisaged feedback. TU Delft TU Delft (TUD) is a university with around 26,000 students located in Delft, The Netherlands. TUD houses its activities on its campus, located south of Delft's city centre. In 2019 the university's Campus and Real Estate (CRE) department established a new campus strategy, which focuses on optimal use of the existing facilities and resources to realise the university's ambitions and accommodate growth. The campus strategy includes the construction of new buildings in the south of the campus, intensifying the use of existing buildings in the middle of the campus, and disposition of buildings in the north of the campus. In this research, TUD chose to focus on dashboards for the whole portfolio and for separate buildings to be used in reporting and updating its campus strategy. A first version of this dashboard had been made to show the current performance of the portfolio and buildings; it would also serve as a basis for showing the expected performance resulting from the campus strategy. 
The main issue with these dashboards is how to provide an overview of a building or portfolio at a glance. Furthermore, the case offers the opportunity to further develop the first version of the existing dashboards and develop a vision on which information from IoT is valuable to include in those dashboards. There are around 60 buildings on the TUD campus. It was decided to focus on buildings wholly or partially used for academic purposes, which covered around 80 percent of the area in the portfolio. The floor area and space types were known for each space. The capacity was also largely known for each space. The number of users, quality, costs and energy use were known for each building. Space utilisation data was known per room for education spaces and study places, based on a 2019 survey. The dashboard was thus based on real data, with the exception of the information to be delivered by IoT. Therefore, in contrast to the first case, it was expected that the participants would frequently relate the information in the dashboard to their existing knowledge of the campus. Principles for Dashboard Design The design of the dashboards in this research is based on a knowledge base combining theories and instruments from corporate real estate management (CREM), building automation, the IoT and information management. The dashboard is further detailed using design principles for dashboards as outlined by Few [19]. Following the earlier definition by Few (see Section 1), there are several requirements for dashboards, just as for the dashboard of a car: a dashboard should not display all information, but only the information that is needed to perform a specific activity such as driving a car. This information is collected from multiple sources: a car dashboard obtains data from sensors in the tank, engine, transmission, etc., to report fuel levels, speed, rotations, etc. Finally, information is reported succinctly and meaningfully to the user, e.g., by showing a meter with thresholds for maximum speed or for fuel tank content, or simply by displaying an alert when a seatbelt is not used. From CREM several principles are drawn for a dashboard to be used in university campus management, based on Den Heijer [27]. These principles direct choices on which types of indicators to consider and which to omit (to avoid information overload), and how to report them. The principles are: 1. The dashboard reports on the process of adding value through real estate. Real estate is positioned as the input, the use of the real estate as the throughput, and the organisational performance as output; 2. The four stakeholder perspectives must be present in the dashboard. If a dashboard is tailored towards a specific group, the dashboard should include information on the other perspectives. The question is what the key indicators per perspective are; 3. Preferably, the indicators should be related to each other, e.g., euro/m², users/m², etc.; 4. The indicators in the dashboard are customised to the type of campus decision, and limited in number by the requirement to fit on a single screen; 5. The stakeholder perspectives are applicable on multiple abstraction levels: e.g., on the organisational level of the university, faculty or department and on the real estate level of a building portfolio, building or set of spaces. From the IoT, lessons with regards to the sensing of properties of the environment with various technologies are drawn [11,28]. 
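To make the CREM principles above (in particular principles 2 and 3) concrete, the sketch below derives mutually relatable ratio indicators (euro/m², users/m², kWh/m²) per building and flags poor performance against organisational norms. It is a minimal illustration in R; all data, column names and norm values are hypothetical and are not taken from either case.

# Minimal sketch (R) of principle 3: indicators defined as ratios so that they can
# be related to each other, grouped by stakeholder perspective. All data, column
# names and norm values below are hypothetical illustrations, not case data.
buildings <- data.frame(
  building      = c("A", "B", "C"),
  floor_area_m2 = c(12000, 8500, 6000),    # real estate input
  users         = c(900, 1100, 400),       # organisational demand
  annual_cost   = c(2.4e6, 1.9e6, 1.1e6),  # financial perspective
  energy_kwh    = c(1.6e6, 1.3e6, 0.7e6),  # sustainability perspective (fictive IoT feed)
  occupancy     = c(0.55, 0.78, 0.40)      # functional perspective (fictive IoT feed)
)

# Derived, mutually relatable indicators (euro/m2, users/m2, kWh/m2, m2/user).
buildings$cost_per_m2  <- buildings$annual_cost / buildings$floor_area_m2
buildings$users_per_m2 <- buildings$users / buildings$floor_area_m2
buildings$kwh_per_m2   <- buildings$energy_kwh / buildings$floor_area_m2
buildings$m2_per_user  <- buildings$floor_area_m2 / buildings$users

# Compare against organisational norms and flag poor performance, so that the
# dashboard can draw the user's attention to it (illustrative thresholds).
norms <- c(cost_per_m2 = 220, kwh_per_m2 = 150, occupancy = 0.60)
buildings$flag_cost      <- buildings$cost_per_m2 > norms["cost_per_m2"]
buildings$flag_energy    <- buildings$kwh_per_m2  > norms["kwh_per_m2"]
buildings$flag_occupancy <- buildings$occupancy   < norms["occupancy"]

print(buildings[, c("building", "cost_per_m2", "users_per_m2", "kwh_per_m2",
                    "occupancy", "flag_cost", "flag_energy", "flag_occupancy")])

In a dashboard of the kind designed here, such flags would drive the visual attention cues, while the underlying ratios keep the indicators comparable across buildings and stakeholder perspectives.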
The real-time data supplied by the IoT allows for better use of spaces on campus by users on a day-to-day basis. Furthermore, real estate managers can make better decisions about demand in the long term, when real-time data collection is used as a 'ground truth' [29,30] for actual space use. Previous research provides overviews of the management information that can be made available through IoT applications. From information management, lessons are drawn on the use of information technologies (IT), including the IoT, in order to deliver value in organisations. In previous research [11], process and information analyses were conducted for both cases presented here. These analyses match the demand for information from campus management and the supply of information from the IoT and other IT systems, and thus serve as a foundation for the information requirements to be satisfied in the dashboard. The information requirements for an overview of existing spaces include various space characteristics such as type, area, capacity, condition level and level of amenities. The IoT complements these with information on frequency and occupancy rates, user satisfaction, energy use and indoor environmental quality. These requirements are combined with the five principles from CREM to guide the conceptual design (see Figure 1). This conceptual design is the starting point of the cases: designing what is possible with IoT applications. Following that, the cases focus on selecting what is desired from IoT applications. After determining which information to display, the next issue is how to display it. Table 2 provides several considerations with regards to displaying information. Each property of a dashboard is matched with initial values for the real estate dashboards and matching indicators. The variations in timing depend on the type of information displayed. For the existing situation, the current performance of real estate indicators is shown. For IoT indicators this is the year-to-date performance. In addition, a comparison over the past five years is required because real estate indicators tend to change very slowly. The most important comparisons in the dashboard, aside from the comparison in time, are a comparison to norms determined by the organisation and a comparison across buildings. Visual indicators are used to draw user attention to poor performance. Finally, data on objects and past interventions are added to provide further context to the contents of the dashboards. (Table 2 fragments: indicators need not be binary, but too many distinct states become too complex; non-quantitative data, e.g., top 10 customers or issues to investigate; addition of interventions and object data to support the information in the dashboards.) An important choice drawn from Few [19] is the use of bullet graphs for clear visual communication. The advantage of bullet graphs is that they enable the display of performance on an indicator across multiple divisions of the portfolio and compared to values for poor, medium and good performance. Figure 4 shows an example of a bullet graph used in one of the dashboards in this research. The overlay of measurement on requirements makes it easier to discern which parts of the portfolio perform well and which do not. Radboud University The dashboard design for RU was determined by two information needs that had to be satisfied: (1) establishing the match between the demand for spaces and the supply of spaces and (2) identifying trends that may impact the future demand for spaces. 
This led to the initial division into two dashboards (Figures 5 and 6). Each dashboard initially contained eight indicators, four related to the provision of real estate and four related to space use: study places per student, average stay duration, the percentage of spaces that comply with the brief, user satisfaction, total costs per study place, occupancy, floor area per study place and energy use per study place. In the main dashboard, the performance on each indicator was visible for every type of study place and the whole portfolio. In the trends dashboard, the performance on the whole portfolio over the past five years was visible. In both dashboards, the user could navigate between viewing the performance at campus level and selecting a specific building. After the first workshop, the indicators stay duration and energy use were omitted as they were found to be of less importance in determining the performance of the study place portfolio (see Section 3.3.1). Furthermore, two other dashboards were made (see Appendix A): one in which the main dashboard displayed the performance per building rather than per type of study place, and another that offered a more detailed insight into the performance on four criteria. These were tested in workshop 2. The dashboard tested in workshop 2 complied with the requirements set in Section 3.1: (1) it positioned traditional real estate indicators in the top row as input and indicators based on information from the IoT below them as throughput; (2) it contained indicators in each stakeholder perspective; (3) it defined the indicators in such a way that their values could be related to each other; (4) it was customised for decisions on the study places of the university and (5) it reported on both a portfolio and a building level. Both the main dashboard and the alternative to the main dashboard were found to be useful by the participants. The additional dashboard was also found to be useful, but required further development and testing. TU Delft The dashboard design for TUD focused primarily on resolving the challenge of displaying the information in a clear way. Firstly, there was a challenge in separating information that could only be reported on a building level, i.e., costs and energy use, from information to be reported across the different space types of the building, i.e., education spaces, study places, offices and laboratories (and later meeting rooms). This led to the design of a dashboard showing the performance on the level of the whole portfolio or a selected building. The design of the dashboards was identical. To help navigate through the building dashboard, an overview was given of the buildings that required the most attention. Initially, the dashboard contained five building-level criteria (Figure 7): operating costs, depreciation costs, building efficiency, and energy use for heat and for electricity. For each space type, it contained six criteria: seats (or m²) per user, space utilisation in frequency and/or occupancy, quality, user satisfaction, floor area per seat and an indoor environmental quality score. After the first workshop, the indicators building efficiency, m² per seat and the indoor environmental quality score were omitted because they were deemed less important in determining the performance of the portfolio (see Section 3.3.1). A financial criterion was added to reflect the use of resources during the year: budget vs. expenditure. The type of office spaces was further distinguished into offices and meeting rooms. 
After these amendments, a trends dashboard was made to show the development in past years (see Appendix A). Finally, the overview to help navigate through the building dashboard was improved, based on feedback. In the first version, this overview included a ranking per space type to direct the user to the buildings requiring attention for each space type. This was adjusted to one overview with a list of the five buildings requiring the most overall attention. The dashboard tested in the second workshop is displayed in Figure 8. The dashboard tested in workshop 2 complied with the requirements set in Section 3.1: (1) it positioned traditional real estate indicators as input and indicators drawing information from the IoT below them as throughput per stakeholder perspective and space type; (2) it contained indicators from each stakeholder perspective; (3) it defined the indicators in such a way that their values could be related to each other; (4) it was customised for decisions on the buildings of the university and (5) it reported on both a portfolio and a building level. The main dashboard was found to be useful by the participants. The trends dashboard and the overview for navigation were not used sufficiently in the workshops to be evaluated thoroughly and require further development. Design Outcomes (Analysis A3) In each workshop, the participants were asked to complete two assignments using the dashboard: first, to assess the performance of the whole portfolio, and second, to determine interventions per building. This analysis discusses these interventions as the outcomes of using the dashboards. The proposed interventions for specific buildings were compared to initial conclusions drawn up by the main author. For each specific building, the three most important interventions were drawn up a priori and compared to the interventions proposed by the participants. Each intervention could occur multiple times across buildings, and it could be identified on separate occasions by participants, as there were three outcomes from workshop 1 and two outcomes from workshop 2. Table 3 lists the most important interventions drawn up in the RU case, the number of times they occur and to what extent these interventions were also defined by the participants. Each intervention could occur six times at most, as there were six buildings, which could potentially all require the same intervention. Then, the interventions determined by the participants were compared to the number of times these interventions could have been determined. Table 3 shows that participants were able to define multiple interventions. They were particularly focused on silent study places in workshop 1. In workshop 2, participants focused more on identifying qualitative interventions. Furthermore, the table shows that the participants identified five interventions that were not identified in the author's initial conclusions. The identification of these interventions shows an ability to combine the information from the dashboard with knowledge about the campus, the buildings and their users that is not contained in the dashboard: e.g., discussing how to redevelop quality requirements, sending students to other buildings or naming the planned disposition of a building as an intervention. Table 4 shows the results for the TUD case. Here, the number of possible occurrences of interventions was based on the buildings selected by the participants to study, as there were more than 40 buildings in the model. 
The selected buildings differed somewhat per workshop group. Similar to the first case, participants were able to define multiple interventions. The results show that participants were mainly focused on quantitative interventions (increase or reduction of a type of space), and less on qualitative interventions. Furthermore, the participants defined four additional interventions. These interventions and the additional comments revealed a need for more specific information on occupancy patterns, which could be delivered through drill-down dashboards (see case 1). Furthermore, they show the ability of participants to connect the information in the dashboards to existing knowledge of the portfolio, e.g., the current tenants' demands and satisfaction levels. Relative Importance of Indicators (Analysis A1) This analysis studies the use frequency of indicators during the assignments in order to determine which indicators to exclude from the dashboards. In each assignment, participants first completed the assignment and were then asked to state their conclusions. First, the number of mentions per indicator during the navigation was counted; then, the indicators were ranked from 1 to 8 based on those counts. The score indicates the average rank of each indicator during each workshop. The results of the workshops were averaged. Based on the average rank, indicators were categorised in terms of their importance and compared to the use of indicators mentioned by participants in their conclusions, also based on an average of counts. The outcomes of both cases were also compared to the performance on each indicator according to the dashboards (i.e., where the dashboards draw the user's attention). The comparisons showed that there was little to no relation between what the model drew attention to and what the participants looked at. This suggests that the participants of the workshop used the model based on their own expertise and not just on what the model indicated. This is a positive finding with respect to usability, which is the subject of the third analysis. The outcomes of the analysis for the RU case are reported in Table 5. In the first workshop, based on the use of the indicators in the assignments, study places/student, occupancy, compliance with the brief and user satisfaction were determined to be of high importance; floor area per place was of medium importance; costs, stay duration and energy use were categorised as low importance. The use of the indicators in formulating conclusions supported these findings. Based on these results, stay duration and energy use were omitted from the dashboard in the second workshop. Despite low importance, costs were not omitted, following the dashboard requirement of including information from each stakeholder perspective. The results of the second workshop were very similar to those of the first. Table 5. Use of the indicators during the assignments and in forming conclusions (case RU). Asterisks (*) denote instances in which the importance based on the conclusions deviates from the importance based on the assignments. The outcomes of the analysis for the TUD case are reported in Table 6. The table distinguishes building-level and space-type indicators because each space-type indicator was repeated per space type and was thus used much more frequently in the assignments. Consequently, these indicators were counted separately for each space type and averaged prior to their ranking. 
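Before continuing with the outcomes, a minimal illustration of analysis A1 may be helpful: the R sketch below counts indicator mentions per workshop group, ranks the indicators within each group and averages those ranks into an importance category. The observation counts, group names and cut-offs are hypothetical, and only four of the eight indicators are shown; the actual analysis was based on coded workshop transcripts.

# Minimal sketch (R) of analysis A1: counting indicator mentions per workshop
# group, ranking them within each group and averaging the ranks. The data are
# hypothetical illustrations, not the coded transcripts used in the cases.
mentions <- data.frame(
  group     = rep(c("group1", "group2", "group3"), each = 4),
  indicator = rep(c("places_per_student", "occupancy", "costs", "energy_use"), times = 3),
  count     = c(9, 7, 2, 1,
                8, 6, 3, 0,
                10, 5, 2, 1)
)

# Rank indicators within each group (rank 1 = most mentioned), then average
# the ranks across groups to obtain a score per indicator.
mentions$rank <- ave(-mentions$count, mentions$group, FUN = rank)
avg_rank <- aggregate(rank ~ indicator, data = mentions, FUN = mean)
avg_rank <- avg_rank[order(avg_rank$rank), ]

# Categorise importance from the average rank (illustrative cut-offs).
avg_rank$importance <- cut(avg_rank$rank, breaks = c(0, 1.5, 2.5, 4),
                           labels = c("high", "medium", "low"))
print(avg_rank)

In the cases, the same kind of average-rank score was then compared with the indicators that participants mentioned in their conclusions, which is what the asterisks in Tables 5 and 6 denote.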
The use of indicators in formulating conclusions deviated slightly from the assignments, especially for sustainability and user satisfaction. Based on the results, building efficiency and the indoor climate score were omitted because of low scores; additionally, m² per seat was removed to reduce the information load. On the other hand, sustainability remained in the dashboard following the requirement of including information from each stakeholder perspective. The results of the second workshop were similar to those of the first workshop, except for sustainability. Furthermore, given the feedback of some of the participants, adding the m² per seat indicator back to the dashboard should be considered. Table 6. Use of the indicators during the assignments and in forming conclusions (case TUD). Asterisks (*) denote instances in which the importance based on the conclusions deviates from the importance based on the assignments. In this analysis (A2), the quality of the use of indicators during the assignments was analysed. Based on observation, the use of an indicator was labelled as positive or negative. Positive uses, which suggest sufficient information quality and flow, included reacting to a positive or negative situation in the model, seeking relations between indicators, or seeking relations with the real-life context. Negative uses, which suggest insufficient information quality and flow, included ignoring the situation in the model, confusion about what is displayed, or a dead end (the user gets stuck due to a wrong interpretation of the model). Each of these uses was counted in the transcript of the workshop, with the relationships between indicators counted as 0.5 point per indicator and all other types of uses as 1 point. Ignorance of situations in the model was determined by comparing the points to which the model draws attention with whether the participants paid attention to those points. In both cases, the positive interactions during the first workshop greatly outnumbered the negative interactions: see Table 7. At RU the ratio was 6.1:1, at TUD 5.2:1. This analysis supports the initial observations made during the workshops, namely that participants were able to use the model well to complete the assignments and form conclusions. Between the cases a difference can be observed in how the model was used: at RU participants made sense of the information by reacting to what was in the model and relating indicators to each other, while at TUD participants made more connections between what was in the model and the situation in reality. This is thought to be the effect of using fictive data in the first case, which forced participants to focus on what was in the dashboard. The primary objective towards workshop 2 was to reduce the number of negative interactions by improving information quality. At RU there was some confusion about the definitions of study places per student, stay duration and occupancy. To resolve this, pop-ups giving the definitions were added next to each indicator. In addition, for study places per student and occupancy, a 'drilldown' dashboard was made that enabled the users to see the differences in performance between education weeks and exam weeks. At TUD, there was confusion with regards to the definitions of quality, user satisfaction and the indoor climate score. Here, pop-ups giving the definition of the latter two were added to remove confusion, while for quality a link led to the description of an existing framework for defining quality. 
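Before turning to the workshop 2 results below, the R sketch that follows gives a minimal illustration of the interaction scoring used in analysis A2: coded interactions are weighted (0.5 point per indicator involved for relations between indicators, 1 point for all other types) and summed into a positive-to-negative ratio. The interaction records and type labels are hypothetical; the real analysis scored coded workshop transcripts.

# Minimal sketch (R) of analysis A2: scoring coded interactions as positive or
# negative and computing their ratio. Records and type labels are hypothetical.
interactions <- data.frame(
  type = c("react_to_model", "relate_indicators", "relate_to_context",
           "ignore_alert", "confusion", "relate_indicators", "dead_end"),
  polarity = c("positive", "positive", "positive",
               "negative", "negative", "positive", "negative"),
  n_indicators = c(1, 2, 1, 1, 1, 2, 1)   # indicators involved in each interaction
)

# Weight: 0.5 point per indicator for relations between indicators, 1 point otherwise.
interactions$points <- ifelse(interactions$type == "relate_indicators",
                              0.5 * interactions$n_indicators, 1)

# Sum the points per polarity and report the positive:negative ratio.
scores <- tapply(interactions$points, interactions$polarity, sum)
ratio  <- scores["positive"] / scores["negative"]
cat(sprintf("positive:negative ratio = %.1f:1\n", ratio))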
As a result of these changes, in workshop 2 the ratio of positive to negative interactions increased at TUD from 5.2:1 to 8.7:1. At RU, the ratio decreased from 6.1:1 to 5.4:1. However, the decrease is due to one new participant, who participated only in workshop 2. If the group including this participant was excluded from the results, the ratio increased to 8.5:1. At RU, the confusion concerning indicators was reduced, which suggests that the adjustments to the model had an effect. However, the alerts for the cost indicator were fairly often ignored, which suggested further improvement to the information quality of this indicator is needed. At TUD, the confusion with regards to costs increased as well. This was largely due to the addition of another financial indicator between the first and the second workshop. Furthermore, participants indicated that, to be able to reach conclusions, they needed additional information on indicators such as quality and user satisfaction, despite clarity in their definitions. Here a similar 'drilldown' dashboard as in the first case would be useful. Conclusions The main question to be answered in this research was: How can the information demands of campus management be matched to the capabilities of IoT applications, and optimally displayed in a dashboard? This research question is connected to the main objective of this research (to develop a connection between IoT applications and real-life decision-making processes) and a secondary objective (to design usable dashboards for campus managers). With regards to the secondary objective, the results described the translation of various principles and the outcomes of process and information analysis into a conceptual design for dashboards. The designs for both cases were evaluated and were found to be compliant with the principles outlined in Section 3.1. Next, the results of analysis A1 showed that the participants made use of indicators in all four stakeholder perspectives to formulate different kinds of interventions (see analysis A3). These results show that it is possible to design usable dashboards for a portfolio of study places and for an entire real estate portfolio at a university, combining data from existing systems and data to be delivered by IoT, based on the combination of principles from various fields [11,19,23,27]. Additionally, the findings from analysis A2 suggest that involving participants in the design process improved the usability of the dashboards, as the refined dashboards resulted in a higher ratio of positive to negative interactions. This is supported by participants, who indicated that the workshops enabled them to learn how to use the dashboards and work with their information. Specifically, the introduction of the dashboard in the first workshop was appreciated. Analysis A2 also showed that for some indicators such as quality, user satisfaction, but also occupancy and m 2 per user, participants may require definitions and explanations. 'Drilldown' dashboards were proposed as a solution (case 1) for analysts to determine interventions with precision. With regards to the main objective, the results describe how the workshops resulted in the selection of indicators (analysis A1) and how improvements to the design resulted in improved usability in the second workshop (analysis A2). 
In the first case, the information requirements for the IoT were determined to be occupancy and user satisfaction; in the second case, the dashboard was required to include data on frequency and occupancy (depending on space type) and on user satisfaction. Next to the information requirements for the IoT, the design process also resulted in further information requirements. For example, in both cases requirements were formulated for the measurement and reporting of quality. The use of multiple workshops to test the dashboards, to assess which indicators are useful and if the total dashboard is still a good overview, helps with the selection of information. Prototyping (see Section 2.2) is thus found to be a suitable method for the purpose of this research, as suggested by [24][25][26]. In the process of dashboard prototyping, the number of iterations (workshops) is a factor to consider. Especially when there are many indicators involved and participants feel that one or more of the excluded indicators should be reconsidered, a third workshop is useful. It can also help to test different dashboard alternatives, including different indicators per stakeholder perspective. In the second case, a third workshop could have been used to specify the indicators per space type. However, more iterations may also result in loss of focus or confusion. In case 2, the addition of an indicator after the first workshop was found to result in confusion. Therefore, workshops should generally work towards the use of fewer indicators, the addition of previously removed indicators or specifying existing indicators. Finally, the results were used to develop design briefs, i.e., implementation designs. These design briefs covered the intended use of the dashboards, detailed definitions for each indicator, including information source, and procedures for addressing the complexity of acquiring the data and translating it to the information in the dashboard. Based on that and the existing situation, costs for acquiring and maintaining the data were estimated and a step-by-step plan was made for each organisation to realise the dashboard. In both cases, the design briefs were received positively by stakeholders and the client. Though the dashboards seem quite similar, the client statements and departure points of the cases were different, leading to different outcomes. At Radboud University the objective was to help the Campus and Facilities department to manage the portfolio of study places, following the recent transfer of ownership from the faculties to their department. The results showed that even when not much information is available, dashboard design helps to make decisions on structuring information and thus on data collection. The step-by-step plan thus comprised specific steps, e.g., the acquisition of IoT applications, making a policy detailing quality requirements and the data collection to monitor that policy. At TU Delft, the objective was to give the CRE department an overview of the portfolio and buildings for use in updating the campus strategy. Compared to the first case an initial design and more information were available. The results showed how dashboard design helps to consolidate information on both a building-level and space-type level in the same screen in a simple, usable way. 
In particular, this design showed how to organise information on a higher order: to help understand what part of the building or portfolio requires attention, how important that part is, and how comparisons across space types can be made. The step-by-step plan included more generic steps than in the previous case, e.g., decide per space type in which way to measure frequency/occupancy and determine how to measure quality across the portfolio. Within each step, more detailed decisions have to be made. In summary, the use of dashboard design shows several positive indications for determining IoT information requirements. The designed dashboards could be used by participants to complete the assignments, and led to several indications on how the designs may be further improved. Further research is needed to better understand how choices in the dashboard design affect results. This includes application of the dashboards in tactical and operational decision making. Funding: The two case studies reported in this paper are separately funded. The case study of Radboud University was funded through an agreement to conduct research related to (1) the development of the university's campus strategy and (2) the development of the information systems, which support campus decision making. The case study of TU Delft has been funded through an agreement with the university's Campus and Real Estate department and Executive Board to conduct research on several strategic themes related to the development and management of the campus. Institutional Review Board Statement: Given the relationship of the participants to the researchers and the nature of this research, this research was conducted in compliance with the university's ethical guidelines. Therefore, a review was not applicable. Informed Consent Statement: Informed consent was waived due to (a) minimal risk for subjects and (b) the fact that no personal data was collected or stored. Data Availability Statement: The data presented in this study are placed under embargo at https://doi.org/10.4121/13664213.v1, and available on request from the corresponding author. The data are not publicly available because the workshops contain potentially sensitive information.
11,068.4
2021-05-11T00:00:00.000
[ "Computer Science", "Engineering", "Education" ]
Blood biochemistry and haematology of migrating loggerhead turtles (Caretta caretta) in the Northwest Atlantic: reference intervals and intra-population comparisons We established reference intervals for blood biochemistry and haematology of loggerhead turtles captured off the Mid-Atlantic coast of the USA. This assessment of blood variables in healthy, wild loggerhead turtles allows for comparisons with turtles impacted by anthropogenic and environmental threats, as well as turtles sampled in different habitats and life stages. Introduction Establishing baseline blood biochemistry and haematology profiles, often in the form of reference intervals (RIs), is a common practice for evaluating the clinical health status of wild animals (Bolten and Bjorndal, 1992; Troiano et al., 1997; Samour et al., 1998; Christopher et al., 1999; Stamper et al., 2005; Hidalgo-Vila et al., 2007; Deem et al., 2009; Gelli et al., 2009; Delgado et al., 2011; Basile et al., 2012; Fazio et al., 2012; Lewbart et al., 2014; Muñoz-Pérez et al., 2017). As with human medicine, in veterinary diagnostic laboratories, RIs are typically established as the central 95% of the reference population with 90% confidence limits (CIs), thus creating a narrow range of expectations for clinically healthy animals (Lumsden and Mullen, 1978). RIs provide a clinical baseline that is useful for monitoring health trends in wild populations. For example, Christopher et al. (1999) documented biochemical and haematological RIs for desert tortoises, which provided a means for analysing differences between sexes, distinguishing seasonal influences on physiological condition, and assessing differences in foraging behaviour between tortoises at three geographic locations. Establishment of RIs also permits assessment of compromised health status due to anthropogenic or environmental disturbances (Kelly et al., 2015). Stacy et al. (2017) utilized previously established RIs and expert clinician-based assessments to characterize the health status of marine turtles impacted by the BP Deepwater Horizon oil spill. Physiological status of oiled turtles was monitored by documenting blood biochemistry and haematology throughout the rehabilitation period to assess the full breadth of impact of crude oil exposure and the likelihood of full recovery. Studies such as this provide insight into health problems that may occur in response to anthropogenic or environmental disturbances, and help clinicians and conservation managers provide well-informed response efforts for impacted animals (Stacy et al., 2017). Our study focused on establishing RIs for the Northwest (NW) Atlantic Distinct Population Segment (DPS) of loggerhead turtles (Caretta caretta), which is comprised of loggerhead turtles that inhabit waters on the eastern coast of the USA and Canada (Conant et al., 2009; Wallace et al., 2010). The NW Atlantic DPS is listed as threatened by the US Endangered Species Act (Conant et al., 2009) and endangered by the Canadian Species At Risk Act (Government of Canada, 2017). This population faces a number of threats such as fisheries bycatch (Brazner and McMillan, 2008; Haas, 2010; Murray, 2011; Murray and Orphanides, 2013), oil and gas exploration (Klima et al., 1988; Bolten et al., 2011), and climate change (Hawkes et al., 2007; Chaloupka et al., 2008). Fisheries interactions, in particular, have been highlighted as a potential source of mortality for loggerheads (Bolten et al., 2011). 
Even if turtles do not die as a direct result of entanglement or hooking in fishing gear, injuries sustained as a result of capture may result in sublethal impacts that could affect post-release behaviour and fitness (Lewison et al., 2004; Wilson et al., 2014). Previous studies have illustrated variation in blood chemistry between hand-caught and fisheries-caught loggerheads indicative of induction of a stress response and metabolic disturbances (Williard et al., 2015); however, additional data on natural variation in blood variables for healthy turtles are needed in order to appropriately assess impacts of fisheries interactions and other disturbances. Establishment of RIs for loggerhead turtles in the NW Atlantic DPS permits differentiation between healthy and unhealthy turtles and allows for clinically-based, comprehensive assessment and management of populations (Flint et al., 2010a). A small number of studies have provided biochemical and haematological RIs for NW Atlantic DPS loggerhead turtles in seasonal nearshore foraging habitats along the southeastern coast of the USA (Deem et al., 2009; Kelly et al., 2015). The primary goal of our research was to provide biochemical and haematological RIs for NW Atlantic loggerhead turtles during seasonal migrations in offshore habitats of the US Mid-Atlantic Bight (MAB) (Winton et al., 2018). The physiological status of marine turtles during migration at temperate latitudes may differ from that of turtles residing at lower latitude foraging grounds. Not only do migratory turtles experience high metabolic demands from continual swimming (Papi et al., 1997; Bowen et al., 2005), but the energetic demands of migration may occur in tandem with shifts in behaviour and environmental factors (Solow et al., 2002). Migrating loggerhead turtles exhibit a greater number of shorter duration dives compared with turtles at foraging grounds (Papi et al., 1997), which may reflect a decrease in food intake during directed long-distance movements. Furthermore, because loggerhead turtles are poikilothermic, their behaviour and metabolic function are impacted by the cooler water temperatures experienced at higher latitudes (Mrosovsky, 1980; Davenport, 1997). The creation of biochemical and haematological RIs for loggerhead turtles migrating through offshore habitats in the MAB provides a baseline for clinical health assessments and evaluation of physiological impacts of environmental disturbance, as well as a basis of comparison for the physiology of different behavioural states. It is widely recognized that establishment of blood chemistry RIs at the inter-population level for a given species is necessary in order to account for unique genetics, the variety of habitats encountered, and differences in behaviour (Hrubec et al., 2000; Flint et al., 2010b). Establishment of RIs at the intra-population level is warranted given the physiological adjustments that may occur while foraging in nearshore neritic habitats, migrating in pelagic waters, or nesting on land (Prange, 1976; Deem et al., 2009). The goals for our study were two-fold: (1) establish RIs for a broad range of blood variables for use in clinical health assessments, and (2) compare blood variables for loggerhead turtles residing in coastal foraging grounds and during migration to provide insight into the energetic and physiological status associated with different behavioural states. 
Turtle capture and sampling Turtles were sampled from May to June in 2011, 2012, 2013 and 2016 along the continental shelf off the Mid-Atlantic coast of the USA (36-39°N, 73-75°W; Figure 1). Individual loggerhead turtles were spotted at sea from aboard the F/V Kathy Ann, a 91 ft commercial scalloping vessel chartered for this research. To avoid startling the turtle, the research vessel remained situated at a distance, and a small, inflatable boat was deployed with a driver and a netter to capture the turtle. Personnel on board the research vessel maintained sight of the turtle and directed the small boat to a distance where the netter gained visual contact. The small boat then approached the turtle from behind to avoid startling the animal. When close enough, the netter quickly placed a large dip net in front of the turtle, allowing the turtle to swim forward into the net. After the turtle was netted, it was brought aboard the small boat and transported back to the research vessel. Of the 81 loggerhead turtles sampled, 73 were designated as large juveniles (58.1-80.0 cm SCL, N = 66) or sub-adults (80.1-87.0 cm SCL, N = 7) according to size classifications previously established (Crouse et al., 1987). Processing of each turtle involved the collection of a blood sample (see below), core body temperature (T) measurement via a soft thermocouple thermistor (Model 8402-00; Cole Parmer Instrument Co., Vernon Hills, IL) inserted 4-8 cm into the cloaca, and SCL_NT (straight carapace length, notch to tip) measurement using calipers. Satellite transmitters (GPS-Argos Satellite Relay Data Loggers; Sea Mammal Research Unit, University of St. Andrews, St Andrews, Fife, KY16 8LB, UK) were attached to the carapace of each turtle as part of a separate study of loggerhead turtle movements and behaviour (Winton et al., 2018); turtles tracked for ≥3 months by satellite telemetry were considered 'healthy' and were included in blood biochemistry analysis. Blood sample collection and handling Blood samples (12 ml) were obtained from the dorsal cervical sinus using a 1.5″ 20-gauge needle and a 12-ml syringe (Figure 2). The sample was immediately divided between green-top tube (GTT) vacutainers containing lithium heparin with no plasma separator. Subsamples were drawn from GTT vacutainers using a 1.5″ 20-gauge needle and a 1-ml syringe for analysis via an i-STAT Handheld point-of-care blood analyser (Abbott Point-of-Care Inc.; Princeton, NJ). In 2011, 2012 and 2013, additional subsamples were collected for manual determination of packed cell volume (PCV) by centrifugation in haematocrit tubes, determination of total solids by refractometer and preparation for veterinary diagnostic laboratory (VDL) analyses at IDEXX Reference Laboratories (Buzzards Bay, MA). For the latter, plasma (1 ml) was harvested by centrifugation of remaining blood in GTTs and frozen at −18°C, and 1 ml of whole blood in a small GTT was refrigerated. Both the plasma biochemical profile and complete blood count were assessed by VDL analysis within 8 days. Biochemistry, blood gas and haematology variables The i-STAT analyser was used in conjunction with three types of cartridges to measure blood variables. In 2011, CG4+ cartridges (pH, pCO2, pO2, HCO3−, TCO2, sO2, Base Excess, lactate) were loaded with a subsample of whole blood drawn from a GTT vacutainer. 
In 2012 and 2013, CG8+ cartridges (pH, pCO2, pO2, HCO3−, TCO2, sO2, Base Excess, haematocrit (Hct), haemoglobin (Hgb), sodium (Na), potassium (K), ionized calcium (iCa), glucose (Glu)) were loaded immediately with samples taken directly from turtles, and subsamples for CG4+ cartridges were prepared as in 2011. The i-STAT analysis in 2016 used CHEM8+ (TCO2, Hct, Hgb, Na, K, chloride (Cl), Anion Gap, iCa, Glu, blood urea nitrogen (BUN), creatinine (Crea)) and CG4+ cartridges, in that order, loaded as subsamples from GTT vacutainers. Previous work has suggested no significant difference between blood variables measured with different types of i-STAT cartridges (Lewbart et al., 2014). Thus, for blood variables with multiple i-STAT measurements, values derived from the first cartridge run were used for assessment of baseline blood biochemistry. The average time lag between blood collection and loading blood into i-STAT cartridges for analysis was 12 min (range 1-137 min). The VDL analysis provided additional biochemistry and haematology variables, including absolute eosinophils (ABS Eosinos), % monocytes, absolute monocytes (ABS Monos) and plasma protein (PP). A small number of VDL variables (Na, K, Glu) also were measured by i-STAT. Previous studies have concluded that differences between values derived from VDL analysers and i-STAT are not biologically or clinically significant (Wolf et al., 2008; Atkins et al., 2010). Thus, values obtained in the field using i-STAT were maintained for analysis. This choice minimized the potential for handling or storage effects on blood values. Haematocrit values were derived from i-STAT and VDL, and PCV was determined via manual centrifugation of Hct tubes (Supplementary Table S1). Previously published work has illustrated that Hct values provided by i-STAT are lower than values obtained manually by centrifugation of Hct tubes in loggerhead turtles (Wolf et al., 2008). Statistical analysis For a broad assessment of the data, descriptive statistics were calculated for size, core body temperature and blood variables across all four sampling years. To assess the time-sensitivity of blood gas measurements (pH, pCO2, pO2, TCO2, HCO3−), regression analysis was performed on the absolute difference in blood gas values measured by different cartridges (|CG8+ − CG4+|) against time elapsed (min) between cartridge loading (P ≤ 0.05) for the 2012 dataset. We used the Mann-Whitney U test to compare 21 blood variables for juvenile to sub-adult loggerheads in a nearshore foraging habitat (Kelly et al., 2015) and during migration (our study) to investigate significant differences between behavioural states. P-values were adjusted via the Holm-Bonferroni method (P ≤ 0.002). The influence of size (SCL_NT) and core body temperature (T) on individual blood biochemistry and haematology variables was assessed by creating a correlation matrix between all measured variables. Then the Spearman rank correlation coefficients and associated P-values for correlations with SCL_NT and T were extracted from the matrix. Associated P-values were adjusted via the Holm-Bonferroni method to reduce the chance of spurious correlations due to Type I error from multiple comparisons (for T and SCL_NT, P ≤ 0.001). All statistical analyses excluding calculation of RIs were conducted using Microsoft Excel and R v3.2.0 (The R Foundation for Statistical Computing, Vienna, Austria) through the RStudio interface (RStudio, Boston, MA, USA). Results The median SCL_NT for all turtles combined was 73.7 cm and values ranged from 54.9 to 100.8 cm. Median T was 19.6°C with values ranging from 12.3 to 25.3°C. 
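As a minimal sketch of the statistical procedures just described, the R code below computes a nonparametric reference interval (central 95% of values) with bootstrapped 90% confidence limits on its bounds, runs Mann-Whitney U tests (wilcox.test in R) with Holm-adjusted P-values, and calculates a Spearman rank correlation with body temperature. All values are simulated and the variable set is abbreviated; the code illustrates the procedures only, not the published results.

# Minimal sketch (R): reference interval, group comparison and correlation,
# using simulated values in place of the measured blood variables.
set.seed(1)
glucose_migrating <- rnorm(60, mean = 95, sd = 15)   # hypothetical migrating-turtle values
glucose_resident  <- rnorm(45, mean = 110, sd = 15)  # hypothetical resident-turtle values

# Reference interval: 2.5th and 97.5th percentiles of the reference sample.
ri <- quantile(glucose_migrating, probs = c(0.025, 0.975))

# 90% confidence limits on each RI bound via nonparametric bootstrap.
boot_bounds <- replicate(2000, {
  resample <- sample(glucose_migrating, replace = TRUE)
  quantile(resample, probs = c(0.025, 0.975))
})
ci_lower_bound <- quantile(boot_bounds[1, ], probs = c(0.05, 0.95))
ci_upper_bound <- quantile(boot_bounds[2, ], probs = c(0.05, 0.95))

# Mann-Whitney U tests (wilcox.test) comparing the two behavioural states for
# several variables, with Holm-adjusted P-values.
p_raw <- c(
  glucose = wilcox.test(glucose_migrating, glucose_resident)$p.value,
  sodium  = wilcox.test(rnorm(60, 150, 5), rnorm(45, 155, 5))$p.value,
  bun     = wilcox.test(rnorm(60, 35, 8),  rnorm(45, 45, 8))$p.value
)
p_adj <- p.adjust(p_raw, method = "holm")

# Spearman rank correlation of a blood variable with core body temperature.
temperature <- runif(60, 12, 25)
rho <- cor.test(glucose_migrating, temperature, method = "spearman")

print(round(ri, 1)); print(round(ci_lower_bound, 1)); print(round(ci_upper_bound, 1))
print(round(p_adj, 4)); print(rho$estimate)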
Summary information for each sampling year is presented in Table 1. Basic descriptive statistics (median and range) and RIs for blood biochemistry, blood gas and haematology variables are reported in Table 2. When the difference in blood gas values obtained with different cartridges (|CG8+-CG4+|) were plotted against time elapsed between cartridge loading, weak but statistically significant relationships were found for pCO 2 (P = 0.013, R 2 = 0.2134), TCO 2 (P = 0.011, R 2 = 0.2233), and HCO 3 − (P = 0.024, R 2 = 0.1810). The other blood gas variables did not demonstrate statistically significant relationships with time elapsed between loading cartridges. These statistically significant results provide support for utilizing the first cartridge run for analysis. The results from our study are presented alongside values for loggerhead turtles in nearshore foraging grounds (Kelly et al., 2015) to facilitate comparisons between different geographic locations (Table 3). Comparisons with additional studies are presented in Supplementary Table S2. The results of the Mann-Whitney U test indicated that 14 out of 21 blood variables were significantly different (P ≤ 0.002) between turtles resident in nearshore foraging habitats (SCL range 50.4-80.6 cm, Kelly et al., 2015) and migratory turtles in our study (SCL range 54.9-100.8 cm). Median values for PCV/Hct, TP, globulin, ABS Azuros, ABS Lymphs and UA were all higher in migrators while median values for Glu, Na, K, P, Cl, AST, ABS Monos and BUN were all lower in migrators compared with resident turtles. Discussion The migratory sample population in our study and resident loggerhead turtles sampled at seasonal (May-November) neritic foraging habitats in Core Sound, North Carolina (Kelly et al., 2015) are of similar size (juveniles to sub-adults) and likely both derived from the NWA DPS (Winton et al., 2018), thus permitting a comparison of RIs at the intrapopulation level during different behavioural and physiological states. Of the 21 blood variables included in our comparison, 14 variables showed statistically significant differences. Additionally, we found that T was significantly correlated with blood variables related to metabolic status. This helps validate the practice of considering ecological and biological processes when establishing RI values for a species. Given the time of year in which sampling occurred, juvenile and sub-adult loggerhead turtles sampled in our study likely were migrating from overwintering grounds in North Carolina or further south to seasonal foraging grounds at higher latitudes (Winton et al., 2018). These younger age classes are not undertaking migration for breeding and reproductive purposes, as the adults do, rather they are driven to migrate due to seasonality and spatiotemporal distribution of resources (Chambault et al., 2015). Previous studies have illustrated that marine turtles may show preference for specific foraging grounds over others and exhibit strong site fidelity to those foraging areas, with spatial ranges found to be as small as < 5 km 2 in some loggerhead populations (Thomson et al., 2012;Carman et al., 2016;Winton et al., 2018). Ceriani et al. (2014) identified that interindividual isotopic variance in loggerhead turtles may be reflective of differences in behavioural preference for specific migratory and foraging grounds rather than dietary trophic level or individual physiological variation as previously assumed (Vander Zanden et al., 2010). 
The loggerheads captured for our study were utilizing a major migratory corridor that has been documented in earlier studies for both juveniles and adults (Winton et al., 2018), but the physiological status of turtles along this migratory route had not been described previously. Understanding the migratory physiology of marine turtles, specifically juveniles and sub-adults, is a difficult endeavour given the logistic difficulties of locating and sampling healthy individuals, as well as the limited capacity for continued monitoring of turtles following initial capture and sampling. Our study provides the first documentation of blood chemistry and haematology for loggerhead turtles during northward spring (May-June) migrations in the NW Atlantic and, therefore, provides a unique opportunity to investigate the physiological status of this species in a temperate latitude offshore habitat. Furthermore, our data permit an assessment of the physiological differences between migratory and resident juvenile loggerhead turtles. Comparisons between these different behavioural states can provide insight regarding energetic status and whether or not juvenile turtles rely on (2011,2012,2013,2016), for all loggerhead turtles (Caretta caretta) included in data analysis. Sampling methodology is characterized by capture date, key physical attributes and clinical blood analysers used for assessment. SCL_NT and Temperature are presented as Median ± SD. *n = 12 Year Kelly et al. (2015) that was converted to the units used in our study. Please see Table 2 for sample sizes for each variable for migratory turtles. (Åkesson and Hedenström, 2007), as do other long-distance migrators. Additionally, information about metabolic demands and strategies, and how metabolism may be impacted by variable temperatures experienced over the course of migration, may be gained through explorations of blood biochemistry. Finally, assessments of health status may be facilitated by haematology data. Comparison of migrating vs. resident turtles Migratory turtles had significantly lower Glu, blood ions (Na, K, P and Cl) and BUN. Stamper et al. (2005) also noted a decrease in Glu, blood ions (Na, K, Ca and Cl) and BUN in loggerhead turtles migrating through Pamlico and Core Sound, NC in late fall compared with summer resident turtles at these sites, and hypothesized that the differences in these blood variables reflected a less active foraging pattern and decreased waste production in migrators. Adult female loggerheads are aphagic and rely on endogenous energy stores during their extensive breeding migrations (Bonnet et al., 1998), but much less is known regarding the foraging patterns of juvenile and sub-adult loggerheads during migration. Snover et al. (2010) noted that the diet of loggerhead turtles in neritic habitats is more nutrient dense than that in oceanic habitats. Foraging opportunities may be limited along offshore migratory routes, or juvenile to sub-adult turtles may prioritize travelling over foraging during directed long-distance movements. Interestingly, we found that UA was significantly higher in migrators compared with turtles in nearshore foraging habitats. 
Glomerular filtration rate (GFR) of UA remains constant for birds during long-distance migrations (Landys et al., 2005;Gerson and Guglielmo, 2013); if the same is true of migratory marine turtles, then increased production of UA due to an increase in protein catabolism, linked with unchanging UA clearance rates, would result in higher levels of plasma UA (Bairlein et al., 2015). Reliance on protein catabolism may increase during long-distance migrations as carbohydrate and lipid energy stores are depleted with high and continuous levels of energy expenditure (Martin et al., 2015). Furthermore, water produced from protein catabolism may help offset respiratory water loss during periods of sustained activity (Gerson and Guglielmo, 2011). The relative importance of different endogenous fuel stores in migrating turtles is a topic worthy of further investigation (Jenni and Jenni-Eiermann, 1998;Guglielmo, 2010;Bairlein et al., 2015). Although UA has traditionally been thought of as a metabolic end waste product that is not biologically useful (Keilin, 2008), more recent research has demonstrated beneficial antioxidant and neuroprotective effects from circulatory UA (Johnson et al., 2009;Álvarez-Lario and Macarrón-Vicente, 2010). These features of UA might be biologically significant for migratory animals should they incur oxidative and metabolic stress from extensive fuel usage and depletion (Skrip et al., 2015). We also documented lower levels of AST in migrators compared with resident turtles. Lower levels of AST are associated with uremia, the pathological condition of excessive nitrogenous waste in the blood (Warnock et al., 1974), and are generally correlated with higher levels of UA, BUN and P in human patients (Gao et al., 2000). In contrast, we observed significantly lower levels of BUN and P in migrators compared with resident turtles in our study. Migratory animals may have adaptations to regulate nitrogen metabolism during conditions of decreased food intake to allow for more efficient recycling of BUN for amino acid/protein synthesis, as has been documented for fasting, hibernating mammals (Stenvinkel et al., 2013). If marine turtles are capable of employing such mechanisms, this could explain the discrepancy in trends for UA and BUN levels between migratory and resident turtles. The median for PCV/Hct for migratory turtles in the Mid-Atlantic was significantly higher than that of resident turtles in Core Sound; however, migratory turtles also had a greater median SCL_NT, which could affect interpretation of the observed difference. As documented in previous studies (Frair, 1977;Frair and Shah, 1982;Bolten and Bjorndal, 1992;Osborne et al., 2010;Stacy et al., 2018), there is a positive correlation between body size and blood cell characteristics, including size and quantity of cells. That said, an increase in PCV/Hct also could provide migratory loggerhead turtles with enhanced capacity for oxygen delivery to support sustained, aerobic activity during long-distance migration (Krause et al., 2016). Total protein and globulin were higher in migratory loggerheads compared with nearshore residents. Markedly increased levels of TP and globulin have been documented in nesting marine turtles and are hypothesized to be indices of vitellogenesis and folliculogenesis (Casal et al., 2009), but this cannot explain the trends observed in juvenile and subadult turtles. 
Hyperproteinemia can occur in response to dehydration (Manning, 1998a, b), and this interpretation is supported by the higher PCV/Hct observed for migrating turtles; however, dehydration does not occur in avian longdistance migrators. The ability of birds to maintain water balance during migration, despite high levels of respiratory water loss associated with elevated metabolic rates, is due to increased water produced from protein catabolism (Gerson and Guglielmo, 2011). The significantly higher UA levels in migrating turtles suggests that protein catabolism may be occurring, but perhaps the resultant water production is not sufficient to offset sources of water loss during migration. Gicking et al. (2004) found that values for beta-globulin in Atlantic loggerheads were significantly higher in adult turtles compared with juveniles, so higher levels of globulin in migrators may simply reflect the larger size of migrators compared with resident turtles; however, Gicking et al. 9 (2004) also found significant differences between sexes, thus, determining the physiological basis for beta-globulin variance amongst age and sex classes requires further research. Migratory turtles had higher levels of ABS Azuros and ABS Lymphs, and lower levels of ABS Monos compared with turtles resident at nearshore foraging grounds. Stacy et al. (2011) recommends combining ABS Azuros and ABS Monos in all reptile taxa excluding snakes, as these leukocytes are morphologically, and likely functionally, similar in most reptile species; upon combining these two leukocytes the difference between migratory versus resident turtles is not statistically significant. Nevertheless, the difference in ABS Lymphs remains. Elevated levels of lymphocytes typically indicate inflammation or infection in reptiles (Stacy et al., 2011), but it is unclear why migratory turtles would be more prone to infection. Some work has demonstrated that migration may increase the risk of spreading infectious diseases due to anthropogenically created migratory stopover hotspots generated by habitat loss; however, other studies indicate that migration might offer an evolutionary benefit against accumulation of parasites due to spatiotemporal avoidance of areas with high infection potential, culling of infected individual through the process of migration, or recovery from infection during the process of migration (Shaw and Binning, 2016). An alternate way of looking at this result is that lower levels of lymphocytes documented in resident turtles compared with migrators may indicate that residents are experiencing immunosuppression due to increased glucocorticoid circulation in response to in-shore stressors (Aguirre et al., 1995;Milton and Lutz, 2003;Tarlow and Blumstein, 2007). In this case, the lymphocyte profile exhibited by migratory turtles from our study would be the non-pathologic immunological state. Temperature and size effects We found that T was significantly correlated with blood variables related to metabolic status. Both venous and arterial blood can reflect aspects of metabolic status, including metabolic acidosis (Brandenburg and Dire, 1998); we used venous blood in our study. Lactate was positively correlated with T (12.3-25.3°C) in migrating turtles; this is in contrast with previous findings for captive sub-adult loggerhead turtles, in which plasma lactate values were independent of T (15-30°C) until especially low temperatures were achieved (10°C), at which point an elevation in lactate occurred (Lutz et al., 1989). 
The positive correlation observed in our study may be due to stable lactate clearance times (Gerson and Guglielmo, 2013) combined with differences in metabolic demand and capacity at different temperatures for ectothermic turtles; increased anaerobic capacity at warmer temperatures could result in higher levels of circulating lactate, particularly in response to vigorous activity (Martin et al., 2015). Concurrent with the increase in lactate with T, we also observed a significant positive correlation between LDH and T. Enzymes, such as LDH, that catalyse intracellular biochemical reactions are released into the bloodstream due to cell turnover. If higher levels of enzyme are present in the cells, as may be expected with increased metabolic capacity at higher T, this will also be reflected by plasma levels of the enzyme. Metabolic pathways utilized for lactate clearance by migratory or endurance-exercised animals include the resynthesis of glycogen stores (gluconeogenesis) and direct lactate oxidation (Martin et al., 2015); both pathways utilize the LDH enzyme to convert lactate to pyruvate, which serves as substrate for subsequent biochemical reactions. If migratory turtles decrease food consumption, there may be a preference towards gluconeogenesis as a means to replenish glucose and glycogen stores, given the importance of these substrates for maintaining the vital functions of certain organs (Tavoni et al., 2013). Many ectothermic animals (most herpetofauna and fishes) store lactate intramuscularly for the synthesis of glycogen (Gleeson, 1996), and previous work with lizards has demonstrated that the primary fate of lactate produced during exercise is gluconeogenesis rather than direct oxidation (Gleeson and Dalessio, 1989). We found a significant negative correlation pCO 2 with T. This is in contrast to the results of Lutz et al. (1989), which reported a positive correlation between pCO 2 and T in loggerhead plasma. Lutz et al. (1989) interpreted this positive correlation as a reflection of maintenance of constant relative alkalinity of the blood at different temperatures. The discrepancy between previous laboratory studies and our results could be due to differences in metabolic demand and acidbase maintenance for migratory turtles. If anaerobic capacity increases with increasing T, as suggested by our LDH results and locomotory performance studies (Elnitsky and Claussen, 2006), this may result in higher circulating levels of lactate and potential disturbances to blood pH during periods of activity. Harms et al. (2003) found that metabolic acidosis associated with lactate accumulation due to capture stress could be mitigated via hyperventilation and a concurrent decrease in blood pCO 2 in loggerhead turtles. A similar phenomenon may occur with shifts between aerobic and anaerobic metabolic pathways as a result of variable activity intensity at higher temperatures. The positive correlation between plasma Na and T has been documented previously in loggerhead turtles (Lutz and Dunbar-Cooper, 1987). Hypernatremia may occur in association with dehydration (Morley, 2015), and the potential for dehydration in migrating turtles may increase with T as metabolic and respiratory rates increase. 
Conclusions and conservation implications As indicated by the number of significantly different blood variables between migratory and residential loggerhead turtles, the relevance of assessing this population during all its behavioural states is of great importance, particularly if blood variables are to be used for assessing physiological impacts of anthropogenic disturbances. As a case study, we can consider the physiological impacts for sea turtles that interact with fisheries. Loggerhead turtles that are a part of the NWA DPS are susceptible to pressures from the commercial gillnet fisheries (Murray and Orphanides, 2013) and the dredge and bottom trawl fisheries for scallops and fishes conducted in the Mid-Atlantic (Murray, 2011;Warden, 2011). Sea turtles entangled in fishing gear may struggle to reach the surface to breathe, and experience respiratory and metabolic disturbances due to prolonged submergence. Signs of respiratory and metabolic distress could be revealed by assessing blood gases, pH, bicarbonate and lactate (Williard et al., 2015). Furthermore, blood cell counts and enzyme profiles may provide insight into injuries sustained by the animal while entangled. Establishment of baseline RIs for migratory loggerheads in the Mid-Atlantic will facilitate future studies of the impacts of anthropogenic threats, such as fisheries interactions, on health status and post-release survival of loggerhead turtles in this region. Fisheries interactions are just one of many anthropogenic factors that may impact migratory marine vertebrates (Lennox et al., 2016). Climate change is of great concern, due to the potential effects on physiological and ecological aspects of migration. Shifts in thermal regimes have the potential to influence the energetic costs of migration, especially for poikilothermic animals like sea turtles. Direct effects of temperature on metabolic physiology of marine turtles (Davenport, 1997) have the potential to influence diving behaviour, which is often limited by thermo-and haloclines (Arendt et al., 2012;Chambault et al., 2015). Turtles may also be affected by projected changes in ocean currents, as cost-effective usage of passive transport may be important for documented resting behaviours exhibited by migrating loggerhead turtles at night (Dujon et al., 2014). Oceanographic changes may cause shifts in food type and availability which can influence rates of growth and development (Hawkes et al., 2009), and trophic mismatch between energy requirements and availability of suitable resources may become a factor that influences survivorship (Edwards and Richardson, 2004). Migrating sea turtles may lose ephemeral foraging patches, necessitating a change in behaviour to suit changing climate conditions. Identification of unique aspects of the biochemical and haematological profiles for sea turtles at the intra-population level allows more detailed and in-depth conservation efforts to be implemented through contextualization of the physiology of different behavioural states. By using RIs to provide a physiological basis for the behavioural state of migratory loggerhead turtles at present, clinicians and managers alike can make more confident conservation decisions in the future based on preserving the physiological migratory phenotypes that are currently expressed. Supplementary material Supplementary material is available at Conservation Physiology online.
7,040.6
2019-01-01T00:00:00.000
[ "Environmental Science", "Biology" ]
Monopole and vortex content of a meron pair We investigate the monopole and vortex content of a meron pair by calculating the points at which the transformation to the Laplacian Center Gauge is ill-defined and by studying the behavior of Wilson loops. These techniques reveal complementary aspects of the vortex and monopole structure, including the presence of closed monopole lines and closed vortex surfaces joining the two merons, and evidence for intersecting vortex surfaces at each meron. Introduction The QCD vacuum is characterized by two striking phenomena, the breaking of chiral symmetry and the confinement of color charge. Chiral symmetry breaking may be understood in terms of localized topological excitations of the gluon field and their associated quark zero-modes that produce a non-vanishing value of the chiral condensate. Classical instanton [1] solutions of the Yang Mills equations with topological charge Q = 1 and their quantum fluctuations provide a physical foundation for these topological excitations and thus a natural understanding of chiral symmetry breaking. In contrast, the mechanism for confinement is not presently well understood, and various pictures have been investigated to try to explain it in terms of relevant structures in the QCD vacuum. Various point-like solutions to the Yang Mills equations, which fall off at large distances in all space-time dimensions, have been considered. Although Q=1 instanton solutions provide an understanding of chiral symmetry breaking, in the dilute gas and instanton liquid approximations they do not lead to confinement [2]. Merons, topological charge 1 2 solutions found by De Alfaro, Fubini and Furlan [3], are more strongly disordering objects than instantons and were proposed as a mechanism for confinement by Callan, Dashen and Gross [4]. Fractons, also solutions of the Yang Mills equations of motion with fractional topological charge, appear on the four-dimensional torus, T 4 , when twisted boundary conditions are imposed [5]. The possible relevance of these objects to confinement was pointed out in [6], and a scenario for confinement based on the fractional charge solution found in reference [7], was proposed by González-Arroyo and Martínez [8]. One and two-dimensional structures in the QCD vacuum have also been considered as mechanisms for confinement. In the dual superconductor picture [9], the condensation of monopoles in the QCD vacuum leads to confinement. Monopoles are one-dimensional curves in space-time that appear in QCD as defects in the abelian gauges proposed by 't Hooft [10]. The gauge is fixed up to the Cartan subgroup of the gauge group and monopoles appear at points in space where this gauge fixing is ill-defined, leaving a gauge freedom larger than the abelian subgroup. In the vortex theory [11], confinement is due to the condensation of vortices. Vortices are two-dimensional surfaces carrying flux in the center of the SU(N) group, which means that a Wilson loop intersecting the surface of the vortex takes the value of one of the elements of the center of the group. Classical vortex solutions to the SU(N) Yang Mills equations have been found numerically [12]. The mechanism for chiral symmetry breaking and the alternative descriptions of confinement are not mutually exclusive -rather they are highly interrelated. 
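For reference, the center-flux property of vortices invoked here can be stated compactly (standard definitions, quoted as background rather than from this letter):

```latex
% Wilson loop and center flux (standard definitions, assumed for illustration)
W(C) \;=\; \frac{1}{N}\,\mathrm{Tr}\;\mathcal{P}\exp\!\Big(i\oint_C A_\mu\,dx^\mu\Big),
\qquad
W(C)\;\longrightarrow\; z\,W(C),\quad z\in Z_N ,
```

whenever the surface spanning C is pierced by a center vortex. For SU(2) the only nontrivial center element is −1, which is why the Wilson loops analysed below interpolate between +1 and −1 as they grow to enclose a vortex.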
The fact that the intersection of two vortices has topological charge 1/2 [13][14][15] provides a provocative connection between chiral symmetry breaking and confinement and suggests that the confinement properties of charge-1/2 merons may also be understood in terms of the intersections of vortices. In addition, as elaborated below, monopole lines lie on vortex surfaces, so that both structures coexist and may be studied simultaneously. In this picture, a meron pair corresponds to the intersection of two closed vortex sheets containing closed monopole loops and provides the simplest system in which one could explore this structure quantitatively. As the separation between the merons decreases to zero and they merge into an instanton, one would expect a vortex sheet and a monopole loop on it to shrink to a point at the center of the instanton [16,17]. A similar picture of the separation of an instanton into two fractionally charged objects connected by hedgehog world lines is given in reference [18]. In this article we investigate numerically the monopole and vortex content of a meron pair in SU(2) Yang Mills theory by calculating the points at which Laplacian Center Gauge fixing is ill-defined [19,20] and by calculating the behavior of Wilson loops. The monopole and vortex content of an isolated meron has already been studied analytically by Reinhardt and Tok [21] using Laplacian Center Gauge fixing and Wilson loops, and provides an essential foundation for the present work. Since their work, as well as that of others, has shown Laplacian Center Gauge fixing to be an imperfect tool, in this study we also explore the limitations of this tool as well as the physics of the QCD vacuum. The outline of this letter is the following. In section 2 we describe the meron pairs that we study and in section 3 we use Wilson loops to explore their vortex content. Section 4 presents the monopole and vortex content of these configurations determined from Laplacian Center Gauge defects and section 5 summarizes our conclusions. Merons [3] are solutions to the classical Yang-Mills equations of motion in four Euclidean dimensions, which for a single meron centred at the origin can be written as A a µ (x) = η aµν x ν /x 2 , where η aµν is the 't Hooft symbol. Using the conformal symmetry of the classical Yang-Mills action, it can be shown that in addition to a meron at the origin, there is a second meron at infinity, and these two merons may be mapped to arbitrary positions. The gauge field for the two merons [3] is obtained by such a conformal mapping. This gauge field for the meron pair has infinite action density at the positions of the two merons. To avoid the problem of these singularities, we use the following regularized expression [4], of the form A a µ (x) = η aµν x ν f (x 2 ). Here, the singular meron fields for √ x 2 < r and √ x 2 > R are replaced by instanton caps, each containing topological charge 1/2 to agree with the topological charge carried by each meron. We study the monopole and vortex content of this configuration by putting the gauge field on a lattice of size N t ×N 3 s . For details of the procedure for putting the meron pair on the lattice and relaxing it to a solution of the field equations, see reference [22]. In this article, we analyze four meron pair configurations obtained on N t ×N 3 s lattices with N s = 16, 24 and N t = 2N s . We study configurations with different cap sizes, c, distances between merons, d, and sizes of the lattice, N s .
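The displayed equations in the passage above did not survive extraction. A plausible reconstruction, assuming the standard de Alfaro-Fubini-Furlan meron and the Callan-Dashen-Gross instanton-cap regularization (an assumption to be checked against the original letter), is:

```latex
% Single meron at the origin (partner at infinity):
A^a_\mu(x) = \eta_{a\mu\nu}\,\frac{x_\nu}{x^2},
\qquad
% Meron pair mapped to positions z_1 and z_2:
A^a_\mu(x) = \eta_{a\mu\nu}\left[\frac{(x-z_1)_\nu}{(x-z_1)^2}
           + \frac{(x-z_2)_\nu}{(x-z_2)^2}\right],
% Regularized (capped) configuration for merons at 0 and infinity:
A^a_\mu(x) = \eta_{a\mu\nu}\,x_\nu
\begin{cases}
  \dfrac{2}{x^2+r^2}, & \sqrt{x^2}\le r,\\[4pt]
  \dfrac{1}{x^2},     & r\le \sqrt{x^2}\le R,\\[4pt]
  \dfrac{2}{x^2+R^2}, & \sqrt{x^2}\ge R .
\end{cases}
```

With this profile the gauge field is continuous at √x² = r and √x² = R, and each instanton cap carries topological charge 1/2, consistent with the statement in the text.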
We used a configuration with N s = 16, c = 4 and d = 10 (configuration I), and three configurations with N s = 24: one with c = 1 and d = 12 (configuration II), one with c = 5 and d = 12 (configuration III), and one with c = 1 and d = 16 (configuration IV). We have checked that the field strength from each of the lattice configurations has the essential properties described in reference [22] for the continuum field strength. We have also applied up to five cooling sweeps to the meron pair configurations in order to relax them close to lattice solutions, and checked that the monopole and vortex content for these meron pair configurations is independent of this cooling. Although we do not explicitly address Dirac zero modes in this work, note that the zero mode for a meron pair configuration has been calculated for a range of separations in reference [22] and displays two peaks at the positions of the merons. Vortex content from Wilson loops Wilson loops of increasing size, evaluated in a spatial plane centred on one of the merons, are shown in figure 1A for several configurations. We see that at short distance, the value of the Wilson loop goes from +1 towards the value −1, as for a single meron, and only changes this behavior at large distance where the contribution of the second meron starts to be significant, approaching the value +1 when the loop is bigger than the distance between the merons. We also see in figure 1A the effect of the cap size. The cap gives a characteristic size c to the meron, which is reflected in the distance one must go for the value of the Wilson loop to start to approach −1 and thus enclose the vortex flux. The results in the xt plane for the same configurations as in figure 1A are shown in figure 1B. We see that again at short distances, the Wilson loop goes from +1 to −1 as the size of the loop increases, and as r exceeds half the separation between merons, the Wilson loop begins to approach +1, which it will reach when both merons are included. Again, the loop must be larger than the cap size, c, to enclose all the vortex flux. As before, the results for the other two planes, yt and zt, are the same, and the other configurations show analogous behavior reflecting the other cap sizes and separations. The conclusion from this study of Wilson loops in a meron pair is that, like an isolated meron, a meron in a pair behaves like a source or sink for flux in non-trivial elements of the center of the group for all six planes defined by the Cartesian axes, and the size of the source or sink is of order of the cap size, c. Thus, each meron corresponds to the intersection of orthogonal pairs of vortices. Monopole and vortex content from Laplacian Center Gauge defects In this section, we present the monopole and vortex content of the meron pair configurations described in the previous section, as inferred from the points at which gauge fixing to the Laplacian Center Gauge is ill-defined. Fixing the gauge to Laplacian Center Gauge [19,20] involves the use of the two eigenvectors with lowest eigenvalues, ψ a 1 (n) and ψ a 2 (n), of the Laplacian operator constructed from the gauge field R ab (n, µ) in the adjoint representation of the gauge group. The lowest eigenvector, ψ a 1 (n), is rotated to the (σ 3 ) direction in color space. This step fixes the gauge up to the abelian subgroup of the SU(2) group. The U(1) abelian freedom is fixed by imposing the additional condition that the ψ a 2 (n) eigenvector is rotated to lie in the positive (σ 1 , σ 3 ) half-plane. After these two steps, the gauge is completely fixed up to the center degrees of freedom.
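As a toy illustration of the Wilson-loop measurement described above (not the authors' code; the link-array layout, lattice size and loop path are assumptions made for the example), a rectangular SU(2) Wilson loop can be evaluated from link matrices as follows:

```python
# Minimal sketch: trace of a rectangular SU(2) Wilson loop from link variables.
# links[t, x, y, z, mu] is a 2x2 complex matrix (hypothetical layout, mu = 0..3).
import numpy as np

def wilson_loop(links, origin, mu, nu, R, T):
    """(1/2) Re Tr of an R x T rectangular loop in the (mu, nu) plane."""
    site = np.array(origin)
    U = np.eye(2, dtype=complex)
    shape = links.shape[:4]

    def step(direction, forward=True):
        nonlocal U
        if forward:
            U = U @ links[tuple(site) + (direction,)]
            site[direction] = (site[direction] + 1) % shape[direction]
        else:
            site[direction] = (site[direction] - 1) % shape[direction]
            U = U @ links[tuple(site) + (direction,)].conj().T

    for _ in range(R):   # R steps forward along mu
        step(mu, True)
    for _ in range(T):   # T steps forward along nu
        step(nu, True)
    for _ in range(R):   # back along mu (daggered links)
        step(mu, False)
    for _ in range(T):   # back along nu
        step(nu, False)
    return 0.5 * np.trace(U).real

# Example with random placeholder links (not unitarized) on a tiny 4^4 lattice:
rng = np.random.default_rng(0)
links = (rng.normal(size=(4, 4, 4, 4, 4, 2, 2))
         + 1j * rng.normal(size=(4, 4, 4, 4, 4, 2, 2)))
print(wilson_loop(links, origin=(0, 0, 0, 0), mu=1, nu=0, R=2, T=2))
```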
Monopoles and vortices are found in Laplacian Center Gauge as defects of the gauge fixing procedure, which means we have to look at the points at which the gauge fixing prescription is ill-defined. The first step, rotation of the first eigenvector to the third direction in color space, is ill-defined if ψ a 1 (t, x, y, z) = 0. This defines lines in four-dimensional space and these lines are identified as monopole lines. The second step, rotation of the second eigenvector to the positive (σ 1 , σ 3 ) half-plane, is ill-defined at points at which the first and second eigenvectors are parallel. This condition defines surfaces in four-dimensional space and these surfaces are identified as vortex sheets. To fix to the Laplacian Center Gauge we use the algorithm presented in [23] to calculate the lowest eigenvectors of the Laplacian operator. We calculate the four eigenvectors with lowest eigenvalues, and find that the three lowest eigenvalues are degenerate. With two vectors chosen from these three, or from linear combinations of these three, we can fix the gauge to Laplacian Center Gauge. Note that because of the degeneracy in the lowest eigenvalues, the monopole and vortex content is ambiguously defined, and in this work we will consider all the different monopole and vortex patterns that may be obtained from the lowest eigenvectors. Before considering the monopole and vortex content of our meron pair configurations, it is useful to review the monopole and vortex content of two limiting cases, an instanton and a single meron. The eigenfunctions of the lowest state of the Laplacian for these two cases are known analytically [21]. To obtain the locus of all the points which can be monopoles or vortices for our meron configurations, we calculate the determinant of these three vectors, I, II and III, at each lattice point. First, if any of the vectors is zero (the condition for monopoles), the determinant vanishes. Second, if some linear combination of the vectors gives a zero vector (the condition for vortices), the determinant also vanishes. Finally, whether or not the determinant vanishes is unaffected by taking linear combinations of the three vectors. Hence, all the points of our meron configurations that can be monopoles or vortices are determined by the condition that the determinant vanishes. The result we obtain is the following. We find a region of the lattice in which the determinant is always positive and another in which it is always negative, the two regions separated by a three-dimensional volume in which the determinant vanishes and which therefore defines all positions that can be monopoles or vortices. We describe the shape of this volume by showing some of its two-dimensional sections. First, we show its temporal dependence. Consider the determinant as a function of x and t, with y and z fixed to the values that maximize the action density (figure 2).
Figure 2: Figure A shows the action density S(t,x,y,z) for the meron pair with d = 16 and c = 1 (configuration IV) as a function of x and t, with y and z fixed to the values that maximize the action density. Figure B shows the absolute value of the discriminant of the three lowest Laplacian eigenvectors, D(t, x, y, z), to the 1/4 power as a function of x and t, for the same meron pair configuration and values of the y and z coordinates used in figure A. Figure C shows the absolute value of D(t, x, y, z) to the 1/4 power as a function of x and y for z fixed to the value that maximizes the action density and t fixed to the midpoint between the two merons.
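A minimal numerical sketch of the determinant criterion just described (array names, shapes and the random placeholder fields are assumptions; in practice the eigenvectors come from the adjoint Laplacian of the gauge configuration):

```python
# Sketch: locate candidate monopole/vortex defects as sign changes of the
# determinant of the three (degenerate) lowest Laplacian eigenvectors.
import numpy as np

def defect_determinant(psi1, psi2, psi3):
    """psiK[t, x, y, z, a]: color-vector (a = 0, 1, 2) eigenvector fields.
    Returns D[t, x, y, z] = det(psi1, psi2, psi3) at every lattice site."""
    M = np.stack([psi1, psi2, psi3], axis=-2)   # shape (..., 3, 3)
    return np.linalg.det(M)

# Hypothetical eigenvector fields on a small lattice:
rng = np.random.default_rng(1)
shape = (8, 8, 8, 8, 3)
psi = [rng.normal(size=shape) for _ in range(3)]
D = defect_determinant(*psi)

# Candidate defect sites: where D changes sign along any lattice direction.
candidates = np.zeros(D.shape, dtype=bool)
for axis in range(4):
    candidates |= np.sign(D) != np.sign(np.roll(D, -1, axis=axis))
print("fraction of sites adjacent to a zero of D:", candidates.mean())
```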
In summary, the Laplacian Center Gauge analysis reveals closed monopole lines and closed vortex surfaces joining the two merons that intersect the merons in spatial planes. We note that the high symmetry of the background field produces a highly atypical situation including, for example, the intersection of monopole loops and a high degeneracy of equivalent solutions. It is possible that the introduction of a small perturbation would not only remove the intersections and degeneracy, but also produce a more generic situation of intersecting vortices. If this is not the case, more powerful techniques will be required to fully analyze the vortex structure. Finally, looking at the combination of the results we obtain for the meron pair from Wilson loops and Laplacian Center Gauge fixing, it is reasonable to conclude that in a pair as well as in isolation, a meron is a localized source of monopole trajectories and a localized object with topological charge 1/2 carrying center flux in six orthogonal space-space and space-time planes.
3,525.2
2002-02-21T00:00:00.000
[ "Physics" ]
Evaluation of the therapeutic response of hepatitis C in coinfected patients ( HIV / HCV ) : a study of cases from a hospital for chronic liver diseases in the Eastern Brazilian Amazon Introduction: The aim of this study was to evaluate the therapeutic response of hepatitis C in patients coinfected with human immunodeficiency virus (HIV-1). Methods: A retrospective study of 20 patients coinfected with HIV-1/HCV who were treated in the outpatient liver clinic at the Sacred House of Mercy Foundation Hospital of Pará (Fundação Santa Casa de Misericórdia do Pará FSCMPA) from April 2004 to June 2009. Patients were treated with 180μg PEG interferon-α2a in combination with ribavirin (1,000 to 1,250mg/day) for 48 weeks. The end point was the sustained virological response (SVR) rate (HCV RNA negative 24 weeks after completing treatment). Results: The mean age of the patients was 40±9.5 years, of which 89% (n=17) were male, and the HCV genotypes were genotype 1 (55%, n=11/20), genotype 2 (10%, n=2/20) and genotype 3 (35%, n=7/20). The mean CD4+ lymphocyte count was 507.8, and the liver fibrosis stages were (METAVIR) F1 (25%), F2 (55%), F3 (10%) and F4 (10%). The early virological response (EVR) was 60%, the end-of-treatment virological response (EOTVR) was 45% and the SVR was 45%. Conclusions: The median HCV viral load was high, and in 85% of cases in which highly active antiretroviral therapy (HAART) was used, none of the patients with F3-F4 fibrosis responded to treatment. Of the twenty patients treated, 45% achieved SVR and 45% achieved EOTVR. Studies that include cases from a wider region are needed to better evaluate these findings. INTRODUCTION With the introduction of highly active antiretroviral therapy (HAART) in 1996, there have been many important changes in the natural history of human immunodeficiency virus-1 (HIV-1) infection 1 .Studies show that the progression of liver disease caused by the hepatitis C virus (HCV) is more severe and progresses more rapidly in people coinfected with human immunodeficiency virus/hepatitis C virus (HIV/HCV) compared with those only infected with HCV 2 .This effect can be explained by the high viremia and cytotoxicity of HCV, resulting in accelerated fibrosis processes, increased risk of cirrhosis, increased morbidity and mortality due to terminal liver disease, earlier development of hepatocellular carcinoma and increased incidence of liver toxicity associated with the use of antiretrovirals 3 .HIV-coinfection is included among the factors that may contribute to the accelerated progression of liver disease in patients with hepatitis C 4,5 .The presence of HIV appears to alter the natural history of HCV infection in terms of its progression towards hepatic cirrhosis, hepatocellular carcinoma and liver failure 6 .In a cohort study of a group of coinfected patients, there was an increase in the progression of liver fibrosis and its evolution to cirrhosis compared with a monoinfected group of HCV patients 7 . 
These studies illustrate the importance of treating these coinfected patients. Recently, studies have shown that the successful treatment of hepatitis C drastically reduces the complications of pre-existing liver disease 8 . Treatment is especially recommended for patients with a high probability of achieving sustained virological response (SVR), for patients with genotype 2 or genotype 3 and for patients infected with genotype 1 who have a low viral load (less than 400,000 to 500,000 IU/ml) 9 . The treatment of choice for patients coinfected with HIV/HCV is PEG-interferon 10,11 in combination with ribavirin (RBV) at the same doses used in HIV-negative patients, independent of the viral genotype, with the presence of mild to severe fibrosis (F1 by the METAVIR or the Brazilian Society of Pathology classifications) 12 . The early virological response (EVR) and sustained virological response (SVR) rates are, however, lower in HIV-positive patients than in HIV-negative patients 13 . Case studies This was a retrospective study of patients with HIV-HCV coinfection using the research protocol containing demographic information, laboratory tests, histopathology of liver biopsy, early virological response, end-of-treatment virological response (EOTVR), sustained virological response (SVR) and non-responders to treatment with 180µg PEG-Interferon (PEG-IFN) α-2a in combination with ribavirin (1,000 mg for patients under 75 kg and 1,250 mg for patients ≥ 75 kg). Twenty patients being treated at the outpatient liver clinic at the Sacred House of Mercy Foundation Hospital of Pará (Fundação Santa Casa de Misericórdia do Pará - FSCMPA, Belém, State of Pará, Brazil) who met the indication criteria for treatment according to the Brazilian Ministry of Health (the inclusion criteria) were selected. EVR was considered to be a reduction of at least 2 log in the baseline levels of hepatitis C virus ribonucleic acid (HCV RNA) at week 12, and EOTVR and SVR were defined as undetectable serum levels of HCV RNA at weeks 48 and 72, respectively. The patients who participated in the study were 15 years of age or older, both male and female, serologically confirmed HIV carriers (ELISA + indirect immunofluorescence or western blot), positive for anti-HCV antibodies by enzyme-linked immunosorbent assay (ELISA) and confirmed by reverse transcription-polymerase chain reaction (RT-PCR). Samples were forwarded to the HIV reference services in the State of Pará. All of the patients were treated at the outpatient clinic of the FSCMPA hospital between April 2004 and June 2009. The following tests were performed: serum aminotransferase levels were assessed at the FSCMPA laboratory using an auto-analyzer. The HIV viral load and cluster of differentiation 4 (CD4)+ T-lymphocyte levels were assessed at the Central Laboratory of the State of Pará (Laboratório Central do Estado do Pará - LACEN-PA). The serological markers for viral hepatitis (HBsAg, anti-HCV) were used and molecular biology tests (HCV genotyping and HCV viral load) performed at the Laboratory of Serology and Molecular Biology, Hepatology Section, Evandro Chagas Institute. The patients received histological evaluations via liver biopsy prior to treatment (when indicated), which were performed at FSCMPA by professionals in the chronic liver diseases program in the aforementioned hospital as a routine service. METAVIR scores were used for staging fibrosis and hepatic inflammatory activity. The data were subjected
to statistical analysis. The analysis, organization and tabulation of the study data were performed using the software EPI INFO (version 6.2) and the software BioStat 5.0 (Ayres et al., 2008). The control group included 49 patients treated at FSCMPA who had HCV monoinfection and received PEG-interferon and ribavirin. Results Only twenty patients who were selected to receive treatment with 180μg PEG-IFN α-2a in combination with ribavirin were studied. Eighteen (90%) patients were male, 5/20 (25%) were married, 14/20 (70%) were single and 1/20 (5%) was divorced. All of the patients had CD4+ lymphocyte counts above 200 cells/mm 3 , 18/20 (90%) patients were on antiretroviral therapy and 7/20 (35%) patients had undetectable levels of serum HIV RNA. The biochemical data, molecular biology tests and stage of fibrosis based on the METAVIR classification are summarized in Table 1. Early virological response was observed in 12/20 (60%) patients, EOTVR in 9/20 (45%) patients and SVR in 9/20 (45%) patients. With respect to HCV genotype and SVR, there was no statistically significant correlation. When SVR was compared with HCV RNA levels, there was also no statistically significant correlation. For aminotransferase levels, only patients whose alanine aminotransferase (ALT) levels were above 2.5 times the upper limit of normal (ULN) responded to treatment (SVR), p<0.05 (Table 2). Discussion The reasons for treating hepatitis C in patients coinfected with HIV/HCV vary. In these patients, HAART is more hepatotoxic, the evolution of liver disease is faster and the risk of developing hepatocellular carcinoma is higher 14 . Successful treatment reduces the complications of liver disease. Unfortunately, many patients do not meet the prerequisites for inclusion in the scheme for HCV antiviral treatment. The virologic response rates found were similar to those of large studies, although with a smaller sample. The PRESCO study (Peginterferon Ribavirin España Coinfection Trial Study Group), with a larger sample of 389 patients, found an SVR rate of 50%, and the LAGUNO study (Infectious Diseases Service, Hospital Clinic, Barcelona, Spain; Montserrat Laguno), with a smaller sample of 52 patients, found an SVR rate of 44%. When ALT levels were evaluated, no patient had a normal ALT level, similar to the large studies. When ALT levels were grouped as greater than or less than 2.5 × ULN and related to SVR, a significant correlation was observed for patients with values above this limit. Coinfected HIV/HCV patients with HCV genotype 2 or 3, low HCV viral load, absence of liver cirrhosis, age younger than 40 years, elevated ALT levels, elevated CD4 counts, and low or undetectable plasma HIV RNA are the best candidates for HCV treatment 15 . The biochemical response in HCV-HIV co-infected patients is not a good marker of SVR. In the APRICOT (AIDS Pegasys Ribavirin International Coinfection Trial), RIBAVIC, LAGUNO and PRESCO studies [16][17][18][19] , patients were using the HAART scheme in more than 70% of the cases, the mean CD4+ lymphocyte counts were above 500 cells/mm 3 and the ALT levels were normal in only 16% of cases in the RIBAVIC study. In the RIBAVIC (Ribavirin Study Team, France) and PRESCO studies, advanced stages of fibrosis by the METAVIR classification (F3/F4) were reported in 28 and 39% of cases, respectively. These data are similar to the present study (FSCMPA), even with the smaller sample size. In a study using PEG-IFN and ribavirin in 68 coinfected patients, the researchers found an SVR rate of 35% 20 , which is
lower than the rate found in the present study. In the PRESCO study, in which 28% of the patients had F3-F4 fibrosis, the response rate at the end of treatment and the SVR were 67% and 50%, respectively, which are higher than in the present study; however, the sample size was much larger. The RIBAVIC study, which included 194 patients, reported that 39% of patients had F3-F4 fibrosis, with EOTVR and SVR rates of 35% and 27%, respectively. In the APRICOT study, which included 289 patients, 15% of whom had hepatic cirrhosis, the EOTVR and SVR rates were 49% and 40%, respectively. The SVR results reported in the present study were similar to those found in the LAGUNO study (Table 3). Studies have shown that the percentage of SVR is higher, in both monoinfected and coinfected patients, for patients infected with genotypes 2 and 3 and when pre-treatment HCV RNA levels are lower 21 . Patients coinfected with HIV/HCV have higher levels of HCV RNA 22 ; in the present study, only 11.2% of patients presented with HCV RNA levels below 400,000 IU/ml. Importantly, in the present study, patients with a stage of severe fibrosis (F3, F4) did not achieve a sustained virological response to treatment. Although the sample size in the present study was small, several studies have shown that one of the predictive factors for being non-responsive to treatment is an advanced stage of fibrosis 15 . Comparison with the control group shows that coinfected patients respond to anti-HCV therapy similarly to, although slightly worse than, monoinfected patients. It is important to select coinfected patients for anti-HCV treatment carefully so that they can achieve an SVR similar to that of monoinfected patients. Conclusions Patients coinfected with HIV/HCV can be treated following the indication and contraindication criteria. The virological responses, even with the small sample size in the present study, were similar to those reported in previous studies. A larger sample size is needed to improve this regional assessment.
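As a hedged illustration of how the response rates quoted above can be compared across studies (the Wilson interval and the two-proportion z-test are our choices for the example, not methods used by the authors; the PRESCO responder count is approximated from the reported 50% of 389 patients):

```python
# Sketch: 95% Wilson confidence interval for the SVR proportion reported above
# (9/20) and a simple two-proportion z-test against the PRESCO rate.
from math import sqrt
from scipy.stats import norm

def wilson_ci(k, n, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

def two_proportion_z(k1, n1, k2, n2):
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

print("SVR 9/20, 95% CI:", wilson_ci(9, 20))
# PRESCO: 50% SVR of 389 patients -> roughly 194 responders (an assumption).
print("vs. PRESCO:", two_proportion_z(9, 20, 194, 389))
```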
2,566.2
2013-01-01T00:00:00.000
[ "Medicine", "Biology" ]
Bit threads, Einstein’s equations and bulk locality In the context of holography, entanglement entropy can be studied either by i) extremal surfaces or ii) bit threads, i.e., divergenceless vector fields with a norm bound set by the Planck length. In this paper we develop a new method for metric reconstruction based on the latter approach and show the advantages over existing ones. We start by studying general linear perturbations around the vacuum state. Generic thread configurations turn out to encode the information about the metric in a highly nonlocal way; however, we show that for boundary regions with a local modular Hamiltonian there is always a canonical choice for the perturbed thread configurations that exploits bulk locality. To do so, we express the bit thread formalism in terms of differential forms so that it becomes manifestly background independent. We show that the Iyer-Wald formalism provides a natural candidate for a canonical local perturbation, which can be used to recast the problem of metric reconstruction in terms of the inversion of a particular linear differential operator. We examine in detail the inversion problem for the case of spherical regions and give explicit expressions for the inverse operator in this case. Going beyond linear order, we argue that the operator that must be inverted naturally increases in order. However, the inversion can be done recursively at different orders in the perturbation. Finally, we comment on an alternative way of reconstructing the metric non-perturbatively by phrasing the inversion problem as a particular optimization problem. Motivation Recent progress in the joint program on quantum information and holography has uncovered striking connections between entanglement and spacetime. Arguably, the most exciting discovery in this context, and the one which ignited most of the research in this field, was the proposal of Ryu and Takayanagi that relates the entanglement entropy of a region A in the boundary to the area of a minimal codimension-two bulk surface γ A [1], S(A) = Area(γ A )/(4G N ). (1.1) This formula was further generalized to a fully covariant setting in [2] and proved formally in [3,4] using the known entries of the holographic dictionary. The RT prescription (1.1) and its covariant version generalize in an elegant way the well-known Bekenstein-Hawking formula for black hole entropy and provide a natural way to interpret it directly in terms of a microscopic CFT description. Given its elegance and simplicity, entanglement entropy became a robust tool to investigate fundamental aspects in holography, ranging from the problem of bulk reconstruction [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21], to the emergence and dynamics of spacetime [22][23][24][25][26][27][28][29]. Recently, Freedman and Headrick proposed an alternative way to compute entanglement entropy that does not rely on bulk surfaces, but instead, is phrased in terms of a specific flow maximization problem [30]. More specifically, the new prescription states that S(A) = max v∈F ∫ A √h n·v , with F ≡ { v : ∇·v = 0 , |v| ≤ 1/(4G N ) } , (1.2) and can be shown to be equivalent to the RT formula through the continuous version of the max flow-min cut theorem of network theory. The maximization above is an example of a convex optimization program and, hence, the equivalence between (1.1) and (1.2) can also be proved using techniques borrowed from convex optimization [31].
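The max flow-min cut duality invoked above can be illustrated with a discrete toy example (a finite-graph analogue only, not the holographic computation; the graph and capacities below are invented for the illustration):

```python
# Toy illustration of max flow = min cut on a small directed graph.
import networkx as nx

G = nx.DiGraph()
# "A" plays the role of the boundary region, "Ac" of its complement; interior
# nodes mimic a discretized bulk, capacities mimic the 1/4G_N norm bound.
edges = [("A", "u", 3.0), ("A", "v", 2.0),
         ("u", "w", 2.0), ("v", "w", 1.0),
         ("u", "Ac", 1.0), ("w", "Ac", 3.0)]
for a, b, c in edges:
    G.add_edge(a, b, capacity=c)

flow_value, flow_dict = nx.maximum_flow(G, "A", "Ac")
cut_value, (reachable, non_reachable) = nx.minimum_cut(G, "A", "Ac")
print("max flow:", flow_value)   # equals ...
print("min cut :", cut_value)    # ... the min cut (here both are 4.0)
```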
Soon after this paper appeared, it was realized that various geometric problems could likewise be translated to the realm of convex optimization leading to interesting new results [32,33]. The connection with convex optimization has also helped uncover various properties of entanglement entropy from the bit thread perspective [34], as well as some generalizations and applications to other entanglement related quantities [35][36][37][38][39][40][41]. A complementary approach that departs from the realm of convex optimization was put forward in [42,43] and studies aspects of bit threads and entanglement by considering explicit constructions of max flows. This is the line of work that we will mostly follow in this paper. There is one crucial distinction between the two prescriptions to compute entanglement entropy that we believe deserves further investigation: while the minimal surface γ A is in most cases unique, the solution to the max flow problem v is highly degenerate. More specifically, it can be shown that v is uniquely determined only at the bulk bottle-neck γ A , but is highly non-unique away from it. This non-uniqueness raises the question: Out of the infinitely many thread configurations that could be associated with a boundary region, is there any meaningful separation or classification that could be associated with states of special "entanglement classes" in the dual field theory? JHEP01(2021)193 Intuitively, it would seem that this large degeneracy could indeed be associated to a choice of microstate (or a particular class of microstates) that give rise to the same amount of entanglement between the region A and its complement, 1 however, a precise version of this statement is not settled yet. On the other hand, one can try to exploit this nonuniqueness to gain new insights on the gravity side. The utility of the non-uniqueness property stems from the observation that, if a version of this statement is true (even if we do not know it yet), then a particular solution to the max flow problem v could potentially carry more information than the minimal surface γ A itself: it could encode in detail how the local correlations between the degrees of freedom in the region A and in its complement are distributed for a particular choice of microstate. If so, then, one could imagine that specific questions related to bulk reconstruction and the emergence of spacetime could be answered in a more efficient way by properly selecting a class of configurations/states adapted to the specific problem at hand. In this paper we will give some steps in this direction. Specifically, our main objective is to understand how the program of gravitation from entanglement [22][23][24][25][26][27][28][29] unfolds in the language of bit threads and to explore an alternative way of metric reconstruction based on this framework. The particular questions that we want to address are the following: • How are the metric and Einstein's equations encoded in generic thread configurations? • Can bulk locality be manifest in particular constructions? • Is it possible to reconstruct the bulk geometry from a max flow solution? • If so, how does the method compare to the ones based on RT surfaces? Following [22][23][24][25][26][27][28][29], we begin by considering these questions in a perturbative setting in which we study small deformations continuously connected to a reference state. 
An important motivation of such continuous construction comes from the study of the phase transition of RT surfaces that happens for disjoint regions as one varies their separation. It is known that, close to the phase transition, the RT surface can change from a connected to a disconnected configuration. Such jumps posit a puzzle to a potential quantum information interpretation of the RT surfaces from the bulk perspective, which is solved in the language of bit threads by imposing the additional property of being continuous across phase transitions [30]. Continuity is, then, a desirable feature of bit threads under continuous deformations. Before we study the above questions, let us review some of the features of the standard methods of metric reconstruction using RT surfaces [5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21], and explain potential advantages of studying this problem with bit threads. While there are other methods for bulk reconstruction, e.g. [45][46][47], our comments and comparisons refer only to approaches that make explicit use of RT surfaces. Quite generally, if one hopes for a reconstruction of JHEP01(2021)193 the metric everywhere in the bulk one must start with a sufficiently dense set of extremal surfaces that probe the full manifold M . This is in fact possible in some simple cases, at least for the subset of M that can be foliated by boundary anchored extremal surfaces. For static (2 + 1)−dimensional bulk geometries this was achieved in [9] starting from the full set of extremal surfaces associated with all CFT intervals, and using ideas from holeography [5][6][7][8]. More recently, it was shown that the same ideas could be extended to the time-dependent case in [18] and to higher dimensions [19], here by focusing on the subset of extremal surfaces associated with spherical regions (or topologically equivalent, in the approach of [20]). The problem of metric reconstruction using bit threads has a major advantage over the ones described above: it does not rely on the ability of the manifold M of admitting foliations by boundary anchored extremal surfaces. In fact, threads can probe regions in the bulk that extremal surfaces cannot, such as entanglement shadows near the vicinity of (spherical) black hole horizons [43]. It is important to point out that bulk shadows do not appear exclusively in cases where gravity is strong; one simple counterexample is the metric of a conical deficit geometry, which arises by the backreaction of a point particle in AdS [48]. Consequently, formulating the problem of metric reconstruction in the language of bit threads, even for the simpler case of perturbative states, is interesting on its own right. In particular, it will shed new light on the issue of emergence of spacetime from entanglement entropy [49,50], without resorting to other measures of entanglement such as entwinement [48]. Another important difference with respect to the problem of metric reconstruction using extremal surfaces is that the latter requires as a starting point the knowledge of a dense set of surfaces that probe the bulk geometry. While we can do the same in the language of bit threads, i.e., start from a dense set of thread configurations, the fact that one single solution to the max flow problem already probes the full bulk geometry presents us with an interesting possibility: we can start from a finite set of thread configurations, containing one, or possibly only a few solutions of the max flow problem. 
We will consider both approaches in this paper, and show that the explicit reconstruction is possible in both cases. In the remaining part of the introduction we will provide a quick guide to help navigate our paper and enumerate the most important findings of each section.

Road map and summary

We begin in section 2 with a short discussion of various topics that we constantly refer to in our paper. Most of this material is a review of previous works, covering known results about perturbations around AdS and the calculation of entanglement entropy, both in the language of extremal surfaces and bit threads. We also include a short analysis of bit threads in perturbative excited states in subsection 2.3.1, which is new. The main message of this analysis is that, to leading order in the perturbation, it is consistent to use the prescription (1.2) on a constant-t slice, even if the perturbation includes time dependence. In section 3 we study simple explicit realizations of max flows for bulk geometries that are perturbatively close to pure AdS. We begin in section 3.1 by discussing some general properties of these max flows: the boundary condition at the minimal surface and how this condition is sufficient to encode the first law of entanglement entropy. We then proceed to study two particular constructions in subsections 3.2 and 3.3, respectively. The first method that we consider is a generalization of the geodesic method developed in [43]. This method assumes a particular set of integral curves as a starting point, which we take to be the family of space-like geodesics that intersect the minimal surface orthogonally. Given this assumption, one then determines the norm by imposing the divergenceless condition, implemented through Gauss's law. We show that this construction works for geodesics of both the unperturbed and perturbed geometries under some mild assumptions. In subsection 3.3 we study a slightly more general method. Here, our starting point is to propose a family of level set functions for the flow and then determine its norm based on the divergenceless condition, now implemented directly by solving a differential equation. The flows constructed via this method are a generalization of the maximally packed flows presented in [42,43], where the level set functions are now arbitrary (not necessarily a nested set of minimal surfaces). This method is therefore fully non-perturbative and easily adapted to any boundary entangling region. Importantly, both constructions presented in section 3 assume as an input a solution to Einstein's equations in the bulk. Given an explicit metric, one can determine the norm of the vector field from the divergenceless condition, which requires an integration from the minimal surface (where the norm is known) to the points of interest. Such an integration generically introduces a nonlocal dependence on the background metric, which renders these methods unsuitable for addressing questions of bulk reconstruction. However, this also suggests a way around it. More specifically, since the nonlocality is introduced in both cases through the implementation of the divergenceless constraint, a construction that implements this condition in a background-independent way should be free of such nonlocalities, which is possible if we pose the question in the language of differential forms. Motivated by the above observation, we start subsection 4.1 by rewriting the bit thread framework in the language of differential forms.
We study in detail the case of perturbative states and show, in subsections 4.2 and 4.3, that the Iyer-Wald formalism provides a candidate for a thread perturbation which is explicitly local in the metric and furthermore connects the closedness condition with the linearized Einstein's equations. Further, we explore the problem of metric reconstruction in subsection 4.4 and show that it can be cast in terms of the inversion of a particular differential operator. We provide explicit inversion formulas for the case of spherical entangling regions in two distinct scenarios: i) assuming knowledge of a dense set of forms parametrized by their radii and centers, and ii) assuming knowledge of a finite set with a minimal number of forms. The second approach turns out to be very powerful; for instance, it suffices to have a single form to provide a full solution for the bulk metric in asymptotically AdS3 and AdS4 spaces, which we construct explicitly. We also show that the problem is well posed in higher dimensions, starting with a carefully selected finite set. We end the section with a detailed analysis of how to recover the time components of the metric via boosts and translations of the space-like hypersurface on which the threads are defined, and a thorough discussion on how to generalize the reconstruction problem to higher orders in the perturbation.

2 Preliminaries

In this section, we will start with a brief discussion of a number of topics that will be essential throughout the paper. We include this discussion for completeness. However, since most of this material is a review, it can be safely skipped by the cognoscenti.

Linear perturbations around AdS

Let us start by reviewing basic properties of linear perturbations around empty AdS. In Fefferman-Graham coordinates, any asymptotically AdS metric can be written in the form (2.1), where x^σ are boundary coordinates and z is the holographic coordinate. For concreteness we have assumed a Minkowski boundary geometry. With this parametrization, one can extract the expectation value of the stress-energy tensor from the asymptotic form of the perturbation. Plugging the above ansatz into the vacuum Einstein equations, we obtain the following expressions for the zz, zµ and µν components [22], respectively, where the box operator □ ≡ ∂_µ∂^µ is the standard wave operator in Minkowski space. Alternatively, one can write down the perturbation in terms of the Green's function G(x, z) of the graviton in empty AdS, given in (2.6). A somewhat useful expression can be obtained by expanding δg_µν in powers of z [51]. The strategy is to use the linear Einstein equations order by order in z to determine T^(n)_µν for n > 0 in terms of the expectation value of the stress-energy tensor T_µν. A simple calculation shows that the zz and zµ equations imply that T^(n)_µν is traceless and conserved for all n. Finally, the µν equations yield one remaining condition. It is convenient to go to momentum space, where ∂_z C_µν = 0. Moreover, since H_µν must be finite at z = 0, the only possibility is that C_µν = 0. This means that only the n = 0 term in (2.7) survives, while all other higher order terms vanish. This can also be seen from the recursive formula (2.9): for d = 2 we have that □T^(0)_µν = 0, therefore all n ≥ 1 terms vanish!
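To fix notation for what follows, our schematic reading of the Fefferman-Graham setup sketched above is (a hedged transcription: the AdS radius is set to one and the precise factor relating H_{\mu\nu} to \langle T_{\mu\nu}\rangle is an assumption, not a quote of equations (2.1)-(2.7)):

ds^2 = \frac{1}{z^2}\Big[ dz^2 + \big( \eta_{\mu\nu} + \lambda\, z^d H_{\mu\nu}(x,z) \big) dx^\mu dx^\nu \Big]\,, \qquad \langle T_{\mu\nu}(x) \rangle \propto H_{\mu\nu}(x, z=0)\,,

H_{\mu\nu}(x,z) = \sum_{n \ge 0} z^{2n}\, T^{(n)}_{\mu\nu}(x)\,, \qquad T^{(0)}_{\mu\nu} \propto \langle T_{\mu\nu} \rangle\,,

with the higher coefficients fixed recursively by the linearized equations, schematically T^{(n)}_{\mu\nu} \propto \Box\, T^{(n-1)}_{\mu\nu}. This is what makes the d = 2 case collapse to the n = 0 term: a traceless, conserved stress tensor in two dimensions obeys \Box T_{\mu\nu} = 0.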
The above analysis implies that, to linear order in the perturbation, the general solution for the metric in d = 2 is given by: Since the stress tensor should be traceless and conserved, the most general form it can take is the sum of right-moving and left-moving waves, Specific examples can be obtained by specifying the profiles of f(t − x) and g(t + x). In appendix A we will explore in detail the case corresponding to a local quench state.

Linear corrections to entanglement entropy

Entanglement entropy can be computed via the RT formula [1], or its covariant HRT version [2], where the minimality condition is replaced by extremality, (2.16). We are interested in computing the leading correction to entanglement entropy, assuming that the geometry is a small perturbation around AdS. At linear order in the expansion parameter, λ, entanglement entropy can in principle receive two types of contributions. To see this we can expand the area functional L and the embedding functions φ(ξ^i) parametrizing the codimension-two surface γ_A as follows, (2.18). This means that at this order, the embedding φ(ξ^i) can be taken to be the (unperturbed) embedding in pure AdS. This is a useful property, because there are many exact solutions for the embedding functions of various regions in empty AdS. For our purposes, it will suffice to recall the explicit embedding for spheres in empty AdS in Poincaré coordinates. We will make use of this expression in later sections when we discuss concrete realizations of perturbative bit threads.

Bit threads in dynamical scenarios

The original formulation of bit threads [30] is equivalent to the (non-covariant) RT formula [1], equation (2.15), so it only applies to situations with time reflection symmetry (e.g. spatial regions in static spacetimes). In this section we will explain one way to extend this prescription to fully dynamical cases and show that the formulation of [30] extends straightforwardly to the case of perturbative excited states. One way to include time dependence is by using the maximin reformulation of HRT [52]. To do so, we pick a particular Cauchy surface Σ that contains the boundary of the region, ∂A, perform the area minimization on it, and then maximize over all possible Σ. We can then use the standard bit thread prescription for each Cauchy surface Σ by maximizing the flux through the boundary region Σ ∩ D[A], and then maximizing over all Cauchy surfaces: Here, X(Σ) is the space of vector fields on Σ. We note that this formula was recently studied in the context of the membrane theory [40]. There also exists a fully covariant bit thread version of the correspondence [53], but we will not use it in this paper. A solution to the maximin prescription given by the left-hand side of (2.20) consists of a codimension-two surface γ_A that solves the two optimization steps. Such a solution would naturally be accompanied by a specific choice of a codimension-one hypersurface Σ on which γ_A is a minimal surface. However, in [52] it was shown that such Σ is highly non-unique away from the maximin surface γ_A. This fact was used in [52] to argue that one could pick a particular Σ that simultaneously contains the maximin surfaces of the various disjoint boundary regions required to prove the strong subadditivity property of holographic entanglement entropy.
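For completeness, the prescriptions referred to above as (2.15), (2.16) and (2.20) have the following standard schematic form (our transcription; prefactors and notation are assumptions rather than quotes):

S_A = \min_{\gamma_A \sim A} \frac{\mathrm{area}(\gamma_A)}{4G_N} \quad (\text{RT})\,, \qquad S_A = \underset{\gamma_A \sim A}{\mathrm{ext}}\; \frac{\mathrm{area}(\gamma_A)}{4G_N} \quad (\text{HRT})\,,

S_A = \max_\Sigma\, \min_{\substack{m \sim A \\ m \subset \Sigma}} \frac{\mathrm{area}(m)}{4G_N} = \max_\Sigma\, \max_{v \in X(\Sigma)} \frac{1}{4G_N} \int_{\Sigma \cap D[A]} \sqrt{h}\, n_a v^a \quad (\text{maximin})\,,

where the inner maximization over v ∈ X(Σ) is subject to ∇_a v^a = 0 and |v| ≤ 1 on Σ.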
Below, we will use this freedom to argue that, to first order in a general time-dependent perturbation of a static metric, one can always choose Σ to be the constant-t hypersurface Σ_0 associated with the unperturbed metric or, more generally, any space-like surface Σ_λ that is perturbatively close to it and passes through γ_A.

The case of perturbative excited states

Even though the choice of Σ is highly non-unique, it can be shown that not every slice that passes through γ_A is a good one. The reason is that γ_A is not necessarily minimal on such slices Σ. To see this, consider a null congruence shot out from γ_A. The surface γ_A is extremal, hence its expansion vanishes: θ = 0. However, the Raychaudhuri equation implies that dθ/dλ < 0 [52]. This means that in this case γ_A is a local maximum of area rather than a minimum and, by continuity, the same should hold for space-like surfaces Σ that are close enough to the null congruence. In the left panel of figure 1 we give an example to illustrate this fact. Notice that on one of these slices Σ the minimal area surface γ̃_A is not the same as the extremal surface γ_A. Therefore, finding a max flow in Σ is not equivalent to computing the entanglement entropy of region A. For the case of perturbative excited states, a natural candidate for a good Cauchy slice would be a slice Σ_λ that is perturbatively close to the t = t_0 hypersurface associated with the unperturbed metric, Σ_0. We can parametrize such a slice as t = t_0 + λ δt(z, x), with the constraint that δt(z, x) must vanish at γ_A. The question here is whether we can find surfaces on Σ_λ that are homologous to A but have smaller area than γ_A at order λ. Supposing there are such surfaces, we denote by γ̃^λ_A the one with the minimal area. However, we know that γ_A is a minimal area surface in the unperturbed background; therefore, by continuity, γ̃^λ_A → γ_A as λ → 0. Without loss of generality we can then parametrize such a surface with embedding functions as in (2.17). On the other hand, the calculation in (2.18) shows that corrections to the embedding do not affect the area at linear order. This means that area(γ̃^λ_A) = area(γ_A) + O(λ^2), so we can conclude that γ_A is a minimal area surface on any Σ_λ perturbatively close to Σ_0. We illustrate this result in the right panel of figure 1. This also implies that on any of these surfaces, and in particular on Σ_0, the solution to the max flow problem computes the entanglement entropy of region A, and hence all of them are equally good for the construction of bit thread configurations.

Figure 1. A solution to the maximin problem γ_A is naturally accompanied by a specific choice of a codimension-one slice Σ on which γ_A is a minimal area surface. Such a slice is highly non-unique; however, not all slices that pass through γ_A are allowed. Left: in this example Σ is perturbatively close to the null congruence shot out from γ_A. In this case the minimal area surface γ̃_A (orange curve) does not coincide with γ_A (red curve). Right: for perturbative excited states, it can be shown that γ_A is a minimal area surface on any slice Σ_λ that is perturbatively close to Σ_0. This means that we can pick any of these surfaces, and in particular Σ_0, to construct relevant bit thread configurations.
Simple realizations of perturbative bit threads Given the enormous simplification that happens at O(λ) from the point of view of the HRT prescription, we would like to study and understand the general properties of perturbative thread configurations based on the constructions developed in [43]. We will start by stating simple constraints that the O(λ) HRT surfaces induce on general bit threads, and then proceed with the specific constructions. We will show that these methods lead to thread configurations that successfully encode general properties of the CFT state and the bulk geometry, such as the first law of entanglement entropy and its relation to the (linearized) Einstein's equations, albeit in a highly nonlocal form. Along the way, we will state the precise problem of metric reconstruction that we look to solve and enumerate the challenges that these simple constructions face, leading to a quest for a new method that exploits bulk locality in a more explicit way. Generalities Let us begin by considering empty AdS d+1 in spherical coordinates. The geometry of a constant-t slice Σ is given by JHEP01(2021)193 The minimal surface γ A for a ball of radius R is given implicitly by and its outward-pointing unit normal vectorn at a point (r, z) on the minimal surface is: For simplicity we have omitted the angular coordinates, since both the minimal surface and the state are invariant under rotations. A simple realization of a vector field/thread configuration, v = |v|τ , based on geodesics is given by (see [43] for details) As a check, notice that this vector field (i) satisfies the divergenceless condition ∇ · v = 0 and (ii) is equal ton at the location of the minimal surface v| γ A =n. Combining these two, it immediately follows that the flux along any bulk surface Γ A homologous to A (not necessarily the minimal surface γ A ) yields the entanglement entropy of the ball (in units of 4G N ), We emphasize that while the minimal surface γ A is in most cases unique, the choice of vector field v is highly non-unique; it is uniquely determined only at the bottle-neck γ A . Next, we would like to find the perturbed vector field in a perturbatively excited state, i.e., a state with bulk metric g λ µν = g µν + λδg µν + O(λ 2 ) (satisfying Einstein's equations): at linear order in λ. While the perturbation in the vector field δv is on its own highly non-unique, any consistent realization must satisfy some nontrivial properties, including the first law of entanglement entropy in the CFT and the linearized Einstein's equations in the bulk. The problem that we want to address is the following: Given a consistent thread configuration for an excited state v λ , is it possible to reconstruct locally the bulk geometry at the same order in the perturbation? JHEP01(2021)193 A couple of comments are in order. First, note that we are focusing on excited states. While it is true that the same question would make sense even in the vacuum state, we recall that the bulk metric in this case is fixed by symmetries, rendering the problem exceptionally simple. Second, the non-uniqueness of v λ for a given metric indicates that the correspondence is not one-to-one. Even if we isolate a family of thread configurations that follow from the same bulk metric, the way they encode this information may be nonunique and, generically, highly nonlocal. 
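To make the setup concrete, the ingredients referred to above as (3.1)-(3.3) and (3.5) should take roughly the following form; this is our reconstruction (AdS radius set to one, with the orientation of \hat{n} chosen to point away from the homology region), not a verbatim quote of the paper's equations:

ds^2_\Sigma = \frac{1}{z^2}\big( dz^2 + dr^2 + r^2 d\Omega^2_{d-2} \big)\,, \qquad \gamma_A:\;\; r^2 + z^2 = R^2\,,

\hat{n} = \frac{z}{R}\big( r\, \partial_r + z\, \partial_z \big)\,, \qquad \int_{\Gamma_A} \sqrt{h}\, n_a v^a = \mathrm{area}(\gamma_A) = 4G_N\, S_A \quad \text{for any } \Gamma_A \sim A\,,

where the last equality uses ∇·v = 0 together with v|_{γ_A} = n̂.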
In the following, we will identify basic constraints that generic realizations of v_λ must satisfy and then study how the particular constructions of [43] encode the information about the bulk metric.

Boundary conditions for the perturbed threads

In order to find a solution v = |v| τ̂ for a thread configuration, we need to solve the divergenceless condition ∇ · v = 0 subject to the norm bound |v| ≤ 1. One way to proceed is to use the fact that the norm bound is saturated, |v| = 1, at the bottle-neck γ_A. In other words, we need to impose that at the minimal surface γ_A, v is equal to its unit normal, v^a_λ|_{γ_A} = n̂^a. (3.9) Notice that this does not uniquely determine the vector field everywhere in the bulk; intuitively, the ambiguity of the thread configuration away from γ_A corresponds to a choice of microstate in the dual CFT, such that all the macroscopic properties of the system are satisfied, including the entanglement entropy S_A. Let us now determine what (3.9) looks like in the perturbed geometry. Fortunately, at linear order in the perturbation the RT surfaces are unchanged, and we can use this to our advantage. This implies that at this order, the change in the normal vector is only induced by the change in the geometry. To see this, consider the metric on a constant-t slice Σ of the perturbed geometry, where We will keep the λ's explicitly throughout our calculations as a bookkeeping device (to count the order of the perturbations), but at the end we will set λ to unity. Also, for future reference, we give an explicit expression for the inverse metric at linear order in λ, where: As explained in the previous section, the embedding function (3.2) is not corrected at this order. Therefore its normal covector n̂_a remains the same, up to an overall constant N, (3.14). Ensuring that n̂ is properly normalized to one, we find that at linear order in λ: Finally, raising the index with the inverse metric, we find that For example, in d = 2, we find that: For d ≥ 3 we can obtain similar but lengthier expressions; for the sake of simplicity we will not transcribe them here. Finally, from (3.9) we find that at linear order in λ, our boundary condition at the bottle-neck γ_A is: We emphasize that this condition does not uniquely determine v_λ in the bulk, especially in regions far away from γ_A where v_λ is highly non-unique.

First law of entanglement entropy

Since v_λ is divergenceless, the flux across any bulk surface homologous to A is constant. Hence, the boundary condition (3.18) should be enough to demonstrate the first law of entanglement, provided we pick γ_A itself as our homologous region. To illustrate this, we can perform a simple analysis in d = 2 dimensions. The area element dS_{γ_A} in this case is given by The order O(λ) term gives the change in entanglement entropy, Finally, according to (2.7) we can expand H_xx(t, x, z(x)) as However, as emphasized in the previous section, for d = 2 only the n = 0 term survives. By the traceless condition we know that T_xx(t, x) = T_tt(t, x), so we arrive at the first law of entanglement entropy with the right modular Hamiltonian in 2d [23]. For d > 2 the proof is slightly more complicated, but it can be shown by working out the above expansions in momentum space and resumming the resulting series. We refer the reader to [51] for a detailed analysis in these higher dimensional cases.
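For reference, the first law that this argument establishes reads, in the standard Casini-Huerta-Myers form for a ball of radius R centered at x_c (the 2π normalization is the textbook one and is our assumption here, not a quote of the paper's equation):

\delta S_A = \delta \langle H_A \rangle\,, \qquad H_A = 2\pi \int_A d^{d-1}x\, \frac{R^2 - |\vec{x} - \vec{x}_c|^2}{2R}\, T_{tt}(\vec{x})\,.

In d = 2 the integral runs over the interval |x − x_c| ≤ R and, by tracelessness, T_tt = T_xx, which is the combination that appears in the expansion above.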
The crucial insight here is that any divergenceless vector field satisfying (3.18) will automatically encode the first law of entanglement entropy, which for arbitrary dimensions takes the form Since the first law of entanglement entropy has been shown to be equivalent to the bulk Einstein's equations at the linear level [23], then all consistent thread configurations should also encode them in some form. It remains to be seen how are the Einstein's equations encoded in the specific thread configurations, and how easy would be to recover the metric from particular constructions. Method 1: geodesic bit threads Following [43], we will now present simple methods to construct explicit thread configurations satisfying the boundary condition (3.18) for perturbative excited states. The first method consists on picking a family of integral curves with good properties, and then fixing the norm by ensuring that Gauss's law is satisfied everywhere. In the following, we will describe this construction in some detail and study how the information of the bulk metric is encoded in the resulting thread configuration. Integral curves A good family of integral curves must satisfy the following properties: 1. They must be orthogonal to the minimal surface γ A . 2. They must be continuous and not self-intersecting. 3. They must start and end at the boundary, or possibly at a bulk horizon. JHEP01(2021)193 Given a family with these properties, it is then straightforward to construct a divergenceless vector field with the desired boundary condition. There is a small caveat here, however: one can only check if the norm bound is satisfied |v| ≤ 1 a posteriori. One crucial result of [43] is that a thread construction based on space-like geodesics automatically satisfy the norm bound, provided that the metric background satisfies some simple geometric properties. This conclusion followed from a systematic analysis of geodesic foliations of an arbitrary Riemannian geometry, so it must also hold true for the case in consideration, i.e., for geometries dual to perturbative excited states. Therefore, our first candidate for the family of integral curves will be the space-like geodesics of the perturbed background. Corrected geodesics: let us consider the d = 2 and d > 2 cases separately. In [43] it was shown that space-like geodesics in an arbitrary (2 + 1)-dimensional (d = 2) background lead to a vector field satisfying the norm bound |v| ≤ 1, provided that the Ricci scalar on a constant-t slice (a Riemannian submanifold) is negative everywhere, i.e. We can check that this condition is indeed satisfied for the perturbative states that we are considering. Working in coordinates adapted to the geodesics, and using the same notation of [43], we will write the bulk metric as follows: where x labels different points along the minimal surface and λ is an affine parameter that runs along geodesics orthogonal to it. 4 The above metric is a solution of Einstein's equations: 5 where T µν is the bulk energy momentum tensor. A quick calculation shows that the induced Ricci on a constant-t slice is: hence, for negative cosmological constant Λ < 0, we have that R < 0 if and only if the local energy density is bounded from above: Since the kind of perturbations that we are considering are all vacuum solutions, i.e. we have T µν = 0, then we conclude that the corrected geodesics can indeed be taken as a good family of integral curves. 
4 This coordinate system does not need to foliate the full manifold; points that are not covered by these coordinates have by definition a vanishing vector field v = 0. 5 We have set 8πGN = 1 for simplicity. JHEP01(2021)193 For spheres in higher dimensional spaces (d > 2) the situation is a bit more complicated. Assuming that the state is invariant under rotations, we can pick a plane that intersects the origin and find the geodesics within this plane. Then we foliate the full spacetime by surfaces of revolution generated by rotating such geodesics along all possible angles. With this construction, the bulk metric can be written as After some algebra, one finds that the criterion (3.25) generalizes to [43] where R 2 is the induced Ricci on the auxiliary 2-dimensional metric defined in (3.30). On a pure AdS background, one finds that R 2 = −2, while the terms on the right hand side of (3.31) are strictly positive. This means that there is a finite gap, or in other words, that the bound is O(1) far from saturation. On the other hand, linear perturbations of the metric would lead to corrections on both sides of the equation but these corrections can only be of order O(λ). This means that for sufficiently small λ, the condition (3.31) will still hold true, regardless of the fluctuations. Similar arguments could be made for metrics that are perturbatively close to AdS but are not rotationally invariant, however, the analysis would be certainly more complicated. In these situations one would need to find corrected geodesics within infinitely many planes intersecting the origin and repeat the above steps. But, again, since the pure AdS case is far from saturating (3.31), the analysis at linear order would only lead to corrections of order O(λ), meaning that the bound would always be satisfied for sufficiently small λ. The above arguments show that the O(λ) geodesics are good candidates for integral curves for any number of dimensions. There is a slight technical problem, however: it is practically impossible to obtain closed expressions for the corrected geodesics in a generic perturbed background. In practice, rather than working with the corrected geodesics, it is more convenient to propose an alternative family of integral curves. In the following we will explore this possibility in more detail. Uncorrected geodesics: the corrected geodesics are far from saturating the bound (3.25) in d = 2 or, more generally, (3.31) in higher dimensions. Therefore, it is clear that a continuous family of curves that are perturbatively close to them will similarly do the job. The most natural and simplest candidate for this are the uncorrected space-like geodesics. To illustrate this point we will consider the d = 2 case, where we can make a precise analytic statement. In this case, the minimal surface (3.2) is given implicitly by We have added subindexes 'm' to point out that these coordinate points are on γ A . The geodesics in pure AdS are given by semicircles anchored at the boundary. These semicircles form a two-parameter family of curves and are defined implicitly by JHEP01(2021)193 where x s is the center of the circle and R s its radius. The tangent vector with unit norm at an arbitrary point is given bŷ where H(t, x) ≡ H xx (t, x). As expected, the tangent vector still points in the same direction but its normalization is corrected at leading order in the perturbation. Since the integral curves must be orthogonal to the minimal surface, we must enforce thatτ where the latter is given in (3.18). 
At order O(λ), this requirement leads to 6 In order to arrive to these expressions we have made use of the equation ( Next, we need to check if the proposed integral curves are properly nested [43]. In order to check this, we find the point x a at which they intersect A, 7 37) and the dual point xā at which the curves intersectĀ, One can check that self-intersection is avoided if and only if dx a /dx m > 0 and dxā/dx m < 0. A quick calculation leads to 6 With these definitions Rs can take negative values. We can take an absolute value of xm in the denominator of (3.35) to make Rs positive. However, allowing Rs to take any value will be useful below, in the definitions of xa and xā. 7 If we insist that Rs ≥ 0, these definitions for xa and xā would only be valid for xs ≥ 0, while for xs ≤ 0 one should interchange the two. JHEP01(2021)193 One can check that at order O(1) both conditions are satisfied, i.e., dx a /dx m > 0 and dxā/dx m < 0. At linear order in the perturbation, we get a term that does not have a definite sign (the last term in the square brackets), but one can always choose a small enough λ such that these inequalities are still satisfied. As an example, let us consider a plane wave, 8 The last term in the square brackets can become order O(1) if the frequency ω is large enough. To prevent this to happen, one must take λ 1/( ωR 3 ). If the background is decomposed in Fourier modes, then the maximum frequency will be the relevant one, and the above condition is replaced by This means that for smooth functions we can always find a small λ that satisfies the conditions. For sharply peaked functions this might not be the case, since the Fourier spectrum could contain arbitrarily high frequency modes. We will therefore restrict our attention to states with smooth stress energy tensor. Notice that this is not an important restriction. In CFT language, a state with a sharply peaked stress energy tensor will not be perturbatively close to the vacuum, and hence, the gravity dual would have important higher order contributions that we have ignored in the approximation of linearized gravity. Magnitude Given a set of integral curves the next step is to find the appropriate norm of the vector field |v λ |. We will denote X(x m , ξ) the proposed family of curves; x m labels points on the minimal surface and ξ is a parameter that runs along the curve. As explained above, the curves X(x m , ξ) can be the uncorrected geodesics. The parameter ξ can be taken as the proper length from the given point to the minimal surface. Following [43], we now fix the norm by implementing a version of Gauss's law for an infinitesimal cylinder enclosing each curve. 9 More specifically, we impose that the flux through an infinitesimal area element δA transverse to one of the threads is constant, where h ab = g ab −τ aτb is the projection of the metric on a plane orthogonal to the integral curve. Using the fact that at the minimal surface |v λ (x m , ξ m )| = 1, and letting δA → 0, we arrive to the following expression for the norm In this example even the second term in the square brackets can have a negative sign, but this problem goes away when one impose energy conditions. The last term, however, will still be indefinite after imposing energy conditions. 9 Alternatively, we could fix the norm by solving the first order differential equation for |v λ | resulting from the divergenceless condition, subject to the appropriate boundary condition at γA. 
This would be completely equivalent to the Gauss's law method described here, since the latter condition is the differential form of Gauss's law. However, since we have the explicit form of the integral lines, the Gauss's law turns out to be more convenient in this case, providing a final answer in closed form, as shown below in equation (3.43). JHEP01(2021)193 where ξ m is the parameter at which the curve intersects the minimal surface γ A . Notice that we do not need to verify whether the norm bound |v λ | ≤ 1 is satisfied everywhere. This is already guaranteed given our choice of integral curves and the argument based on the negativity of the scalar curvature presented in section 3.2.1. Reference [43] provides various explicit examples of geodesic flows constructed with this method, including the case of spherical regions in empty AdS, given in equation (3.4). In appendix A we complement this study by constructing a new explicit example, now for the case of the specific perturbative excited state corresponding to a local quench. We can now inquire about how the bulk metric and the Einstein's equations are encoded in this particular construction. Unfortunately, at this level we can already see that such information is encoded in the vector field v λ in a highly nonlocal fashion. On one hand, one needs to solve for the geodesics in the unperturbed background subject to a boundary condition that depends on a particular metric perturbation. And, on the other hand, the magnitude of the vector is found by transporting the boundary condition along the geodesic, ensuring that the vector field is divergenceless. This process is inherently nonlocal; in particular, the final result for |v λ | exhibits an explicit bilocal dependence on the metric perturbation, since it must be evaluated at the points labeled by ξ m and ξ. The latter parameter, in particular, encodes the proper distance between the point in consideration and the minimal surface γ A , which is nonlocal information on its own. These observations imply that it would be rather difficult to invert the problem and recover the metric from the resulting thread configuration. Similarly, the same remarks apply for the Einstein's equations: even though they are assumed as a starting point for this construction (the perturbations we consider are on-shell), they are ultimately encoded nonlocally in the resulting thread configuration. Method 2: level set construction The second method of constructing thread configurations consists on starting with a specific family of level set hypersurfaces and then building up a vector field that is orthogonal to them and, of course, divergenceless. This is a slight generalization of a method initially proposed in [43], as we will see below. In the following, we will spell out the details of the general construction for arbitrary metrics, and then specialize to the case of perturbative excited states, where the construction simplifies drastically. General metrics We begin by proposing a family of level set surfaces with the following properties: 1. They must contain the minimal surface γ A as one of its members. 2. They must be continuous and not self-intersecting. 3. They must not include closed bulk surfaces. Given a family with these properties, it is then straightforward to construct a divergenceless vector field with the desired boundary condition. 
We can understand this as follows: given JHEP01(2021)193 a family of level set hypersurfaces, one can first generate the corresponding integral lines by imposing that they must be orthogonal to each member of the family. Having the integral lines, then, the problem reduces to that of section (3.2) so we could follow the steps outlined there. This means that, in general, we can only check if norm bound is satisfied a posteriori. There is however, one clever exception to the rule. We can ensure that |v| ≤ 1 is satisfied everywhere by construction if we impose the following extra condition on the level set surfaces: 4. They must be homologous to A. If this condition is satisfied, then, the max flow-min cut theorem guarantees that |v| will be maximal at the bottle neck γ A . Since |v| γ A = 1 then, this implies that |v| ≤ 1 at any other member of the family. Notice that condition 4 is not a strict requirement, but a useful one. In fact, simple examples of vector fields generated by level sets that are not homologous to A are the maximally packed flows constructed in [43]. In that construction the level set surfaces were picked as a family of nested minimal surfaces, containing γ A as one of its members. The motivation there was to find a flow with maximal norm |v| = 1 in a given codimension-one region of the bulk, which was possible due to the nesting property of bit threads [30,31]. 10 For the purposes of this paper, however, we are not interested in the above requierement, so we can explore other possibilities. In the remaining part of this section we will in fact assume that the condition 4 is satisfied, so we do not have to deal with the norm bound. Let us now describe in detail the construction from level sets. To begin with, we need an efficient way to specify our level set hypersurfaces. In practice, we can do so by picking an appropriate scalar function ϕ(x i ) such that the ϕ = constant surfaces give us our desired level sets. We can then write the following equation At first glance, (3.44) seems more general than a gradient flow, but in fact it is not. In principle one could always redefine the scalar function ϕ →φ = ϕ Υ(ψ)dψ and therefore simply write v = ∇φ. However, the functionφ would not only encode information about the level sets, but also about the norm, so it would be extremely difficult to guess a good function that gives us our desired level sets and that also satisfies the divergenceless condition ∇ 2φ = 0. In practice, then, it is much easier to start with (3.44) and determine Υ(x i ) through the divergenceless condition. We emphasize that, in this scenario, the specific values of ϕ do not have a particular meaning and are in particular not related to the norm of v. The field ϕ here only determines the unit vector in τ = v/|v|, through One crucial observation that follows from the definition (3.44) is that the covector v a (i.e. v with lower index) only depends on the metric g ab through Υ(x i ). To make this point JHEP01(2021)193 self-evident, and in a form which is partially "independent" of the metric, we can write v a = Υ(ϕ, g)∂ a ϕ . (3.46) We will exploit this observation below, for the case of perturbative states. For now, let us notice that the boundary condition at the minimal surface implies: 47) or, equivalently, All we have left is to determine Υ away from the minimal surface, which can be done by imposing the divergenceless condition. 
Here we have two options: i) we can use Gauss's law as we did in section 3.2 or ii) we can directly attempt to solve ∇ · v = 0, which should give us a first order differential equation for Υ. As mentioned in section 3.2, the two methods are completely equivalent, since Gauss's law is the integral form of the divergenceless condition. However, since we do not have explicit expressions for the integral lines, then, the first option turns out to be more complicated in this case. 11 We therefore proceed by deriving a differential equation for Υ, which can be derived from ∇ · v = 0. Plugging (3.46) into this condition and massaging the equation leads to: As advertised, this is a first order differential equation for Υ in terms of the scalar field ϕ and the background metric g. Solving this equation subject to the boundary condition (3.48) would then give a unique solution for the vector field v. Perturbative excited states The above construction simplifies drastically for the case of perturbative excited states. In the following we will specialize to this situation and study in detail how the information about the bulk perturbation is encoded in the resulting thread configuration. For a metric of the form g λ ab = g ab + λδg ab we are only interested in obtaining the vector field v λ (3.8) to linear order in the perturbation around the zeroth order solution. Since the minimal surface γ A does not change at linear order in λ, a simple choice for the level sets consistent with all requirements would be to pick the same surfaces as for the unperturbed geometry. In this case we have that: In other words, with this choice of level sets, only the function Υ(x i ) gets corrected at linear order in λ, so the first correction of the vector field δv a turns out to be proportional to the zeroth order solution, JHEP01(2021)193 The function Ψ is determined at the minimal surface by the boundary condition |v λ | = 1. Expanding at linear order, we obtain Since the zeroth order term is already normalized to one, the terms inside the parenthesis must vanish. Using (3.51) we arrive to: In the last equality we have used the fact that δ(δ a b ) = δ g ab g bc = δg ab g bc + g ab δg bc = 0. We did this, because it would be particularly convenient to have an expression for the boundary condition of Ψ(ϕ, g λ ) in terms of the background v with upper index. Next we would like to determine the function Ψ away from the minimal surface, which can be done by imposing the divergenceless condition. Again, we proceed by deriving a differential equation for Ψ akin to (3.49). In order to do so, first notice that Taking the divergence of v λ and using the fact that ∇ · v = 0 (at zeroth order), we obtain: Taking the explicit variation of √ g λ and using (3.55) we obtain: where δg ≡ g ab δg ab . In summary, given a background metric g ab and a solution to the max flow problem v a , one can always solve the problem of maximizing the flux in a state where the metric g λ ab is perturbatively closed to the original one. Assuming that the level set surfaces remain the same in the perturbed geometry, the solution for the perturbation of v is given by equation (3.55), which is determined in terms of a scalar function Ψ and the metric perturbation δg ab . This function can be obtained by solving the first order differential equation (3.58) subject to the boundary condition (3.53). 
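As a minimal sketch of the type of transport equation involved (our reconstruction of the structure of (3.49), not a quote of it): writing v_a = Υ ∂_a ϕ, the condition ∇_a v^a = 0 becomes

\Upsilon\, \Box_g \phi + g^{ab}\, \partial_a \Upsilon\, \partial_b \phi = 0\,,

a first-order equation that transports Υ along the integral curves of ∇ϕ from its known value (3.48) on γ_A; the perturbative equation for Ψ has the same transport structure. The toy computation below checks the simplest instance of this level-set logic on a constant-t slice of AdS3 (the Poincaré upper half-plane): for the nested geodesics ϕ = x² + z², i.e. the maximally packed choice mentioned above, fixing Υ so that |v| = 1 everywhere indeed yields a divergenceless flow. The snippet is a self-contained sympy sketch with our own illustrative variable names, not code from the paper.

```python
# Toy check of the level-set construction on a constant-t slice of AdS3
# (Poincare upper half-plane, metric ds^2 = (dx^2 + dz^2)/z^2, AdS radius = 1).
# Level sets: phi = x^2 + z^2 -> nested semicircles centered on the boundary,
# which are geodesics of the slice.  Ansatz: v_a = Upsilon * d_a(phi), with
# Upsilon chosen so that |v| = 1; we verify that this flow is divergenceless.
import sympy as sp

x, z = sp.symbols('x z', positive=True)

g_inv = z**2          # inverse metric g^{ab} = z^2 * delta^{ab}
sqrt_g = 1 / z**2     # sqrt(det g) on the 2d slice

phi = x**2 + z**2
dphi = [sp.diff(phi, x), sp.diff(phi, z)]

# |d phi| and the choice Upsilon = 1/|d phi|, which gives |v| = 1 everywhere
norm_dphi = sp.sqrt(g_inv * (dphi[0]**2 + dphi[1]**2))
Upsilon = 1 / norm_dphi

# v^a = g^{ab} Upsilon d_b(phi)
v_up = [sp.simplify(g_inv * Upsilon * dphi[0]),
        sp.simplify(g_inv * Upsilon * dphi[1])]

# covariant divergence (1/sqrt g) d_a (sqrt g v^a) and the norm |v|^2
div_v = sp.simplify((sp.diff(sqrt_g * v_up[0], x)
                     + sp.diff(sqrt_g * v_up[1], z)) / sqrt_g)
norm_v2 = sp.simplify((v_up[0]**2 + v_up[1]**2) / g_inv)

print("|v|^2 =", norm_v2)   # expected: 1
print("div v =", div_v)     # expected: 0
```

In this special case the transport equation is solved trivially because the level sets are themselves minimal surfaces; for generic level sets Υ must genuinely be integrated inward from γ_A, which is precisely the source of the nonlocality discussed below.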
In retrospect, the only non-trivial input required for this kind of construction is the choice of background vector field v, which is in turn used as a seed for the perturbed solution v_λ. Specializing to spherical regions, one simple choice would be to pick v as a geodesic flow, which is known in closed form if the background metric is empty AdS. This background v is given explicitly in equation (3.4). It is easy to check that the level sets of this vector field are all homologous to A, as is shown in figure 2. Since this construction assumes that the level sets are kept fixed, this implies that any perturbative solution v_λ built up from this background field v will automatically respect the norm bound |v_λ| ≤ 1. In appendix A we present an explicit example of such perturbative solutions, for the case of a local quench.

Figure 2. Contour plot of the magnitude |v| of the geodesic flow given in (3.4), in d = 2 dimensions (i.e., empty AdS3). The contours correspond to the level set surfaces of v, which are all homologous to A and, in particular, include γ_A as one of its members. This implies that this vector field v can indeed be used as a seed to generate a good solution v_λ in a perturbative excited state.

Finally, we can comment on how the metric perturbation and Einstein's equations are encoded in this particular construction. Although the explicit use of the metric is reduced in comparison to the construction via integral curves, the last step in the level-set method introduces the same level of nonlocality. In particular, the way we fixed the scalar field Ψ was by solving the divergenceless condition (3.56). Even though this equation is local, the nontrivial boundary condition (3.53) introduces nonlocalities in the solutions, because the equation effectively transports information from γ_A to other regions in the bulk. From the Gauss's law perspective the situation is perhaps easier to understand. In that case, the final answer for v_λ exhibits an explicit bilocal dependence on the metric perturbation, through its magnitude (3.43). The way we solve for Ψ in this formalism is completely equivalent to that case, because Gauss's law is nothing but the integral form of the divergenceless condition. Hence, even though this construction seems particularly efficient for building up perturbative solutions v_λ, it ultimately contains the same kind of nonlocalities as the construction via integral curves. Consequently, the inversion problem of recovering the bulk metric and Einstein's equations is equally difficult in both constructions.

Bit threads and bulk locality

The simple perturbative realizations of bit threads of the previous section highlight the need for a bit thread construction that does not make explicit use of the metric. Fortunately, we know how to reformulate this formalism in a framework that makes background independence explicit: using the language of differential forms. The equivalence between divergenceless vector fields v and closed (d − 1)−forms w was already emphasized in [30] and was used in [31] to efficiently deal with some subtleties of the max flow problem for null intervals. In this section we will first break down this equivalence in detail, giving explicit formulas that translate various relevant expressions between the two languages. We then argue that the Iyer-Wald formalism provides us with a particular realization of the perturbed thread configuration δw that makes explicit use of bulk locality.
In particular, we show that the linearized Einstein's equations are explicitly encoded in this construction through the closedness condition, i.e., dδw = 0. We exploit this unique property of the Iyer-Wald construction to tackle the question of metric reconstruction and show that this problem can be phrased in terms of the inversion of a particular differential operator. Finally, we carry out the explicit inversion at linear order and discuss how to generalize our results to higher orders in the perturbation. Bit threads in the language of differential forms In the presence of a metric g ab , the explicit map between flows, i.e., divergenceless vector fields v and closed (d − 1)−forms w, is given by where w represents the Hodge star dual of w, defined via In the above formula ε a 1 ...a d represents the totally antisymmetric Levi-Civita symbol, with sign convention ε i 1 ...i d−1 z = 1. Furthermore, the indices of w are raised with the Riemannian metric g ab , and its determinant is denoted by g. At this point we can already notice an important difference between the two objects, namely that, while the notion of a flow requires a background metric, w can be defined independently of g ab . This will play a crucial role below, specifically, when we address the problem of metric reconstruction. Let us carry on with our analysis. The inverse of the map (4.1) can be stated in terms of the natural volume form , given by where a 1 ...a d is proportional to ε a 1 ...a d and normalized such that i 1 ...i d−1 z = √ g. In terms of , the (d − 1)−form w is given by 4) or in components, JHEP01(2021)193 Following standard manipulations one can relate the divergence of v a with the exterior derivative of w. 14 This formula shows explicitly the anticipated fact that divergenceless vector fields, or "flows", are mapped to closed (d − 1)−forms. The precise relation between the two is given by (4.4). Now, it is well known that k−forms have well defined integrals over k−dimensional hypersurfaces. Therefore it is convenient to write down an explicit formula for the restriction of w on a codimension-one surface Γ in terms of intrinsic geometric quantities of that surface. Such a formula can be derived using the fact that the volume d−form induces a volume (d − 1)−form˜ on Γ via where n is the unit normal to the surface. Contracting the last index of (4.7) with v and using (4.5) leads to an explicit expression for the form w evaluated at an arbitrary codimension-one surface Γ, with local unit normal n, in terms of the (d − 1)−form˜ This result is equivalently derived in the language of forms, using Stoke's theorem: This leads to which is equivalent to (4.10), given (4.8). With the ingredients described above, we are now in a position to translate the max flow-min cut theorem to the language of differential forms. First, we have (4.13) JHEP01(2021)193 The inequality here comes from the standard norm bound, |v| ≤ 1, which in terms of forms can be rewritten as (4.14) In short, equation (4.13) implies that, locally, the form w evaluated on any codimensionone hypersurface is bounded by the natural volume form defined on it˜ . The max flow-min cut theorem then implies that where W is the set of closed forms obeying the bound (4.14). This means that at the bottle-neck γ A , an optimal bit thread form w * should be equal to the volume form˜ , i.e., Finally, combining with the RT formula for entanglement entropy, (4.15) becomes which is the differential form version of the max-flow formula (1.2). 
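Stripped of index and orientation conventions (which are our assumptions here, not a quote of the paper's equations), the dictionary spelled out in this subsection can be summarized as

w = \iota_v\, \epsilon\,, \qquad (w)_{a_1 \cdots a_{d-1}} = v^b\, \epsilon_{b\, a_1 \cdots a_{d-1}}\,, \qquad dw = (\nabla_a v^a)\, \epsilon\,,

w\big|_\Gamma \le \tilde\epsilon\big|_\Gamma \;\Longleftrightarrow\; |v| \le 1\,, \qquad \max_{w \in W} \int_A w = \mathrm{area}(\gamma_A) = 4G_N\, S_A\,, \qquad w^*\big|_{\gamma_A} = \tilde\epsilon\,,

with ε the volume form of the slice, ε̃ the induced volume form on the given hypersurface, and W the set of closed (d−1)-forms obeying the bound; the identity dw = (∇_a v^a) ε follows from Cartan's formula, d(ι_v ε) = L_v ε.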
There are many situations in which one might want to define the threads in terms of forms w instead of vector fields v. In particular, this reformulation will prove extremely useful for the problem at hand, namely, for the study of perturbations around a given background and the corresponding solutions to the flow maximization problem.

The case of linear perturbations

Having understood how the bit thread formalism translates into the language of differential forms, it is now time to go back to our original problem. We will assume that the following data is given: a background metric g_ab on a manifold M with boundary ∂M, and an optimal flow v that maximizes the flux through a boundary region A. Using (4.5), this would also imply the knowledge of an optimal closed form w. In the following, we will consider the max flux problem in geometries that are perturbatively close to g_ab, i.e., g^λ_ab = g_ab + λδg_ab. We will denote a solution to the problem as w_λ, where w_λ = w + λδw. First, notice that the closedness condition implies We can also use the fact that the minimal surface γ_A does not change at first order in the perturbation, so γ^λ_A = γ_A. Since this is a bottle-neck for the flow, both v and w are fixed at its location. In particular, from (4.16) it follows that Then, given a max flow w for the unperturbed geometry, we are set to find a closed (d − 1)−form δw that satisfies the boundary condition (4.19) and is such that the norm bound constraint (4.14) holds everywhere in the bulk for the sum w_λ = w + λδw. For simplicity, let us introduce the following notation for the inner product, (4.20), and for its first order variation with respect to the metric. With these notations, the norm bound (4.14) at first order in λ is given by which looks more difficult to implement than its vector field counterpart. From (4.22) it is clear that the norm bound will typically depend on w, so a priori it seems unlikely that a generic δw obeying (4.18) and (4.19) could satisfy (4.22) independently of w. The task becomes even more intractable if one requires δw to be given in terms of a linear local functional of δg_ab and its covariant derivatives ∇_(a_1 · · · ∇_a_n) δg_ab (see however [56]). In the remaining part of this section we will show that, despite the above remarks, the Iyer-Wald formalism provides a concrete realization of such a perturbed form.

Iyer-Wald formalism and Einstein's equations

One of the crucial breakthroughs in the joint program of holography and quantum information is that the first law of entanglement entropy, together with the Ryu-Takayanagi formula, implies the linearized Einstein's equations in the bulk. This was originally proven using Hamiltonian perturbation theory [22]. In a beautiful paper [23], it was further shown that it is possible to make this connection more explicit by a proper implementation of the Noether charge formalism in the bulk, also known as the Iyer-Wald formalism. In this new language, the problem of linearized perturbations is cast in terms of differential forms, a more natural and elegant approach that bridges the CFT and bulk quantities in an efficient way. In this section we will briefly review the basic ingredients of [23], making the connection between entanglement entropy and Einstein's equations manifest. Later in section 4.3 we will show that the Iyer-Wald formalism provides us with a canonical choice for the differential form δw that solves the max flux problem in a perturbed geometry.
As a byproduct, we will show that such a canonical form will automatically encode (locally) the linearized Einstein's equations in the bulk which, in turn, will prove useful for the problem of metric reconstruction. Let us first state the problem that [22] sought to solve and then discuss the approach of [23]. In general quantum field theories (holographic or not), for small perturbations over a reference state, ρ = ρ (0) + λδρ, entanglement entropy satisfies the first law For generic CFTs, this setup can be conformally mapped to the case where A is a ball of radius R, centered at an arbitrary point x = x c , in which case [59,60] On the other hand, the left-hand side of (4.23) is computed via the Ryu-Takayanagi formula in the bulk. For ball shaped regions in pure AdS, or small perturbations around it, the RT surface γ A is given by a half hemisphere of radius R extended on the extra dimension, centered at x = x c and z = 0. The Ryu-Takayanagi formula then adopts the form where δ˜ is the variation of the natural volume form on the surface γ A . A further ingredient is the relation between the expectation value of the boundary stress tensor, appearing in the right-hand side of (4.23), and the fluctuations of the bulk metric δg µν . In the Fefferman-Graham gauge, where the latter is given by (2.1), the former can be identified as the first subleading (normalizable) mode in a near boundary expansion (2.2). Taking into account that the boundary field theory is conformal and that the stress tensor conserved, then, this identification imposes non-trivial boundary condition for the metric fluctuations H µν , (4.28) Using the above, the right-hand side of (4.23) becomes Similarly, evaluating the left-hand side of (4.23) using (4.27) yields JHEP01(2021)193 The first law (4.23), together with (4.29) and (4.30), then establishes a relation between integral functionals of H µν on A and on γ A . It turns out that this functional dependence in turn implies the Einstein's equations for H µν , linearized around pure AdS. This was shown originally in [22] by a direct comparison between the two sides of (4.23). From the gravitational perspective, (4.23) was then proven to be equivalent to the generalized first law of black hole thermodynamics applied to the bifurcate killing horizon of Rindler AdS [23]. This was made explicit by a clever implementation of the Noether's theorem in the bulk, using a formalism developed a couple of decades back by Iyer and Wald in [61]. In order to apply this formalism to the problem at hand, the key observation was that the RT surface for ball-shaped regions in pure AdS, or perturbations around it, coincides with the bifurcate horizon of the time-like killing vector with respect to which a notion of energy and entropy are possible. In fact, a specific conformal transformation (known as the CHM map [60]) maps the interior of the Rindler wedge associated with A to the exterior of an hyperbolic black hole in AdS, where the killing vector ξ coincides with the generator of time translations. Following Iyer and Wald [56,61,62] one then investigates the Noether's theorem for the Killing symmetry generated by ξ. This leads to the definition of a (d − 1)−form with zti 1 ···i d−1 = √ −G. As noted in [23], the form χ satisfies the following properties which can be more easily verified by evaluating (4.32) on a Cauchy hypersurface Σ, containing both γ A and A. 
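In the notation of [23], the objects referred to here have, schematically, the following structure (the explicit tensorial expression for χ and the normalizations are not reproduced from the paper and should be read as our assumptions):

\chi = \delta Q_\xi - \iota_\xi\, \theta(g, \delta g)\,, \qquad \int_{\gamma_A} \chi = \delta S_A^{\mathrm{grav}}\,, \qquad \int_A \chi = \delta \langle H_A \rangle\,, \qquad d\chi \approx 0 \;\; (\text{on shell})\,,

where Q_ξ is the Noether charge (d−2)-form associated with the Killing vector ξ and θ is the symplectic potential. When dχ = 0, Stokes' theorem applied to a Cauchy slice whose boundary is A ∪ γ_A equates the two integrals, which is precisely the first law δS_A = δ⟨H_A⟩.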
For instance, taking Σ to be the t = t 0 slice, one obtains the (d − 1)−form from which both equations in (4.34) follow trivially. The key point here is that χ is closed provided that the bulk equations of motion are satisfied. For instance, in the constant-t Cauchy slice used above one finds JHEP01(2021)193 where δE g tt is the tt component of the linearized Einstein's equations, and t is the induced volume form on Σ. Similarly, other components of the Einstein's equations are obtained by specializing to different Cauchy slices. Thus, provided that the metric perturbations satisfy these equations, the form χ is closed and the Stokes theorem implies the equality between the left and right equations (4.34). This equivalence also applies in the converse way, given the nonlocal form of (4.34) and the arbitrariness of R and x 0 . This concludes the proof of the statement that they were after, namely, that for theories where the Ryu-Takayanagi formula computes entanglement entropy, the first law of entanglement entropy in the CFT is equivalent to the Einstein's equations in the bulk, linearized over empty AdS. Method 3: canonical bit threads from Iyer-Wald Taking into account the nice properties of χ defined via the Iyer-Wald formalism, here, we propose that specializing this form to a spacelike hypersurface Σ containing both γ A and A can be taken as a canonical candidate for the perturbed thread (d − 1)−form 15 Given the integral properties of this form, it is straightforward to check that the flux through any surface homologous to A yields the change of the entanglement entropy in the perturbed state, δS A , as expected. Furthermore, this construction fully exploits the property of bulk locality, in particular, connecting the required closedness of δw with the linearized Einstein's equations via (4.36). We will see below that this property will play a very important role for the problem of bulk reconstruction. It only remains to be checked whether the norm bound constraint (4.14) is satisfied at the desired order (4.22) everywhere in the bulk. This condition will depend on the background form w and then might not hold in general. However, for our purposes it will suffice to find one w such that the combination w λ = w + λδw respects the bound for any perturbation. We will devote the remaining part of this section to check that this is indeed possible. To begin with, we note that the norm constraint in the form (4.22) is slightly more complicated than its equivalent in terms of vectors. Hence, we will first translate the form (4.37) into the language of flows and then check the condition in terms of the latter. For this purpose, we will need an explicit expression relating δv and δw in the presence of a perturbed metric g λ ab = g ab + λδg ab . In terms of the Levi-Civita symbol ε, the variation of (4.5) reads It is convenient to define a new vector field δv a Φ , given by JHEP01(2021)193 which is divergenceless with respect to the unperturbed metric g ab , i.e., 40) and is related to δw via its Hodge dual (again, with metric g ab ), The subindex Φ here highlights the fact that the flux of this vector field across any bulk surface homologous to A, computed with the original metric, equals the change in the entanglement entropy δS A . We emphasize that this vector field should not be thought as the variation of the flow v, but just as an auxiliary object. However, given a δv a Φ obtained from (4.41), we can easily recover the true variation of the flow δv a from (4.39). 
In the Fefferman-Graham gauge (2.1), the metric perturbation takes the form δg ij = z d−2 H ij (with δg zz = δg zi = 0) and δv a reads where H i i = δ ij H ij . Thus, δv a depends not only on δw but also on the background flow v a . In fact, the extra piece in (4.42) is precisely what is needed such that the divergence of v a taken with the full metric g λ ab vanishes at the desired order, Next, we need to make a choice for the background flow v in order to get an explicit δv a and test the norm bound |v λ | ≤ 1. Since the background v should already respect the bound |v| ≤ 1, it is clear that v λ can only exceed this bound by an amount of order O(λ). This can indeed be the case for bulk points that saturate the bound at leading order |v| = 1 (e.g., at the bottle-neck γ A ), or in their vicinity. On the other hand, points that are parametrically far from saturating the bound at leading order are safe, in the sense that we can always take λ to be arbitrarily small such that |v λ | = |v| + O(λ) ≤ 1. Given the above discussion, then, we should ideally pick a background flow v such that its magnitude decays rapidly away from the minimal surface γ A . Fortunately, we already know good examples of flows respecting this property, e.g., the so-called "geodesic flows" [43], which for spheres in empty AdS take the form (3.4). In the following we will take these background solutions and verify that the norm bound is satisfied at the desired order in the perturbation. First, notice that from (3.4), and using (4.41)-(4.42), it immediately JHEP01(2021)193 follows that By construction, v λ = v+λδv saturates the norm bound on γ A since the form w λ = w+λδw from which it is derived obeys the appropriate boundary condition for a max flow (4.16). We can check this explicitly: at γ A we have that ξ t = 0, so 16 This leads to the expected saturation at first order in λ, Away from γ A the norm bound is not guaranteed to hold, but since |v| decays as a power law, it would suffice to study |v λ | in a neighborhood of γ A . In order to see this in detail, we note that the level sets of the background flow v (depicted in figure 2) have the form where ∆ ∈ R, i.e., spheres with radius √ R 2 + ∆ 2 , centered at ( x c = x 0 , z c = −∆). It can be checked that, in the vicinity of the minimal surface (∆ 2 R 2 ) Since d > 1, then |v| < 1 for any ∆ = 0 as expected. Now, we want to check whether the norm bound is still satisfied for v λ at the leading order in the perturbation. More precisely, what we really want is to check that for a fixed λ, |v λ | ≤ 1 at linear order in λ for an arbitrary ∆. A short calculation shows that 16 A brief comment is in order. The expression for δva|γ A in (4.46) does not agree with the expected boundary condition at the bulk bottle-neck (3.18). The explanation of this mismatch is simple: the difference between the two vector fields is proportional to a vector that is tangential to the minimal surface δv a T = (δ a b − v a v b ) δv b so its first order contribution to the norm constraint vanishes, g ab v a δv b T = 0, because δvT is orthogonal to v. Therefore, δva|γ A in (4.46) is equally good as (3.18) to our order of approximation. JHEP01(2021)193 which after some algebra can be put in the following form: The parenthesis in the left-hand side of (4.51), or (4.50), can in fact be non-negative, which implies that the above inequality will not hold for arbitrarily small ∆. 
Nevertheless, it is interesting to estimate the order of magnitude of the potential violation of the norm bound constraint as it could still be consistent with our order of approximation. From (4.51) it follows that the norm bound can be violated provided that Plugging (4.52) back into (4.50), we observe that this would only lead to a violation of order O(λ 2 ). Since all of our analysis is at linear order in λ, we can safely ignore this issue. In other words, up to our order of approximation the norm bound is not violated and this means that the "canonical" thread configuration constructed from the Iyer-Wald formalism satisfies all the defining properties for a max flow. We relegate to appendix A the study of a explicit example of these canonical thread configurations. Finally, we can comment on how the information of the metric perturbation and the Einstein's equations are encoded in this particular thread construction. First, notice that the variation of the flow δv constructed from Iyer-Wald fully exploits the property of bulk locality. This is evident, since for this particular construction the divergenceless condition, ∇ λ · v λ , maps directly to the Einstein's equations, which are defined locally in the bulk. Moreover, the fact that the δv constructed here (4.45) can be written in terms of a linear local functional of δg ab and its derivatives, present us with an interesting possibility: we can use the information of this canonical solution to invert the problem and recover the bulk metric from it! We will explore this problem in more detail in the next subsection, and comment on the implications and possible generalizations to the full non-linear regime. Explicit reconstruction at linear order Our bit thread construction based on differential forms makes explicit use of the property of bulk locality, hence, it should be possible to invert the problem and recover the metric for a generic linear excitation of the boundary quantum state. In this section we will study this problem in detail. More specifically, we will consider a manifold M with boundary ∂M and a set of forms δw that encode the local pattern of entanglement of boundary regions. We will assume the knowledge of the zeroth order -pure AdS-metric g µν , which is otherwise fixed by conformal symmetry (i.e. kinematics), and set up the problem of how to reconstruct the metric perturbations δg µν from the above data. Our starting point is the knowledge of the change in the entanglement structure of the CFT, which in this case is encoded in the set of (d − 1)−forms δw. We emphasize that these canonical forms can be uniquely specified solely from CFT data. Given a perturbative excited state in the CFT, one can first evaluate the expectation value of the stress-energy JHEP01(2021)193 tensor T µν and thus the modular Hamiltonian H A associated with a spherical region A. This information can then be used as a boundary condition for δw on A ⊂ ∂M . For instance, specializing to a constant-t slice Σ, this yields 17 where¯ is the natural volume form in the boundary CFT. In fact, we can analytically continue this form to the whole boundary ∂M , so that 18 One way to see that this is consistent would be to conformally map the interior of the sphere to the exterior. Upon implementing this transformation one finds the same functional form for the modular Hamiltonian but integrated along x ∈ A c , hence, providing a boundary condition also at A c = ∂M \A. 
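As a concrete illustration of the boundary condition (4.54) on a constant-t slice, the sketch below (ours) takes d = 2, so that A is an interval of radius R, and assumes the standard ball modular-Hamiltonian weight, δw|_A(x) = 2π (R² − (x − x_c)²)/(2R) ⟨T_tt(x)⟩ times the boundary volume form. Integrating this density over A reproduces δ⟨H_A⟩ = δS_A, i.e. the boundary flux of the canonical form.

```python
import numpy as np

# Sketch of the boundary condition (4.54) on a constant-t slice for d = 2.
# Assumption (ours): the density multiplying the boundary volume form is the
# ball modular-Hamiltonian weight, so its integral over A gives delta<H_A> = delta S_A.
R, xc = 1.0, 0.0

def T_tt(x):                              # an illustrative, localized excitation
    return 1e-3 * np.exp(-(x - 0.3)**2 / 0.05)

x = np.linspace(xc - R, xc + R, 2001)
density = 2 * np.pi * (R**2 - (x - xc)**2) / (2 * R) * T_tt(x)

dS_A = np.trapz(density, x)               # boundary flux of delta_w through A
print(dS_A)
```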
With the above boundary condition, the full (d−1)−form in the interior of the manifold M is then uniquely determined if we assume bulk locality [56]. To see this, notice that the Iyer-Wald construction yields a form δw such that dδw = 0 on-shell, which is a local condition. If we want to maintain this condition, then, the only ambiguity in δw would be the addition of a term δw → δw + dC where C is a (d − 2)−form such that dC vanishes on ∂M . This is of course a gauge redundancy, which we fix by working in Fefferman-Graham coordinates. Therefore, the boundary condition (4.54) together with the condition of bulk locality are enough to uniquely specify the full (d − 1)−form on M . Before proceeding with the specifics of this analysis, let us first quickly review how the problem of metric reconstruction is normally addressed. In the usual HRT story, given background metric g µν , the change in the entanglement entropy of a region A at first order in the perturbation δg µν is given by 19 This means that δS A encodes information about the first order change in the trace of the induced metric h ij over the extremal surface γ A . On the other hand, the induced 17 Notice that a choice of boundary condition on A is equivalent to picking a specific entanglement contour in the dual CFT. We emphasize that this corresponds to focusing on a particular class of microstates with a given local entanglement pattern. Although this boundary condition is in general non-unique, (4.54) is the boundary condition singled out by the Iyer-Wald construction. 18 More covariantly, on a general Cauchy slice Σ containing ∂A, the boundary condition would be where N µ is a future pointing unit normal vector, and ζA is the conformal killing vector that generates D[A], Upon conformally mapping the causal development of the region D[A] to the hyperbolic cylinder H d−1 ×Rτ , it can be shown that ζA coincides with the time-like Killing vector 2πR ∂τ . 19 The analysis in terms of extremal surfaces can be done for general states, not necessarily perturbative. Here we are discussing only this simpler case to highlight an important difference with our approach. JHEP01(2021)193 metric on γ A depends on the bulk metric as well as the explicit embedding of γ A on the geometry. Therefore, by cleverly considering different boundary regions with extremal surfaces intersecting at a bulk point, one could access to the various components of the bulk metric at the given bulk point and hence derive an inversion formula for the metric perturbation δg µν . As is evident from the previous paragraph, the problem of metric reconstruction by extremal surfaces heavily relies on the possibility of foliating the full manifold M with boundary anchored extremal surfaces. In particular, we would necessarily need to start from a dense family of surfaces that pass through all (reachable) bulk points multiple times. While we can do this in the language of bit threads, i.e., start from a dense set of thread configurations, the fact that one single solution to the max flow problem already probes the full bulk geometry presents us with an interesting possibility: we can start from a finite set of thread configurations, containing one, or possibly only a few solutions of the max flow problem. The minimal number of thread configurations needed in such a set will generally depend on symmetry considerations as well as the number of dimensions, as will be discussed below. 
For the time being, let us summarize the two approaches to metric reconstruction that we can explore. For simplicity, we will frame the discussion by focusing on a constant-t slice Σ, so that we will aim to recover the spatial components of the metric δg ij . The δg tt and δg ti components can be recovered in a similar way, by choosing appropriate boosted slices Σ , as we will explain at the end of the section. The two methods that we will explore are: • Reconstruction from a dense set of thread configurations. Here we will assume knowledge of δw for all spheres in the CFT, with arbitrary radius R and center point x 0 . • Reconstruction from a minimal set of thread configurations. Here we will assume knowledge of δw for a few spheres with radius R (i) , and center point x (i) 0 , with i = 1, . . . , n. The precise value of n will be fixed so that the inversion problem is well defined. We will now discuss these two methods in detail. Reconstruction from a dense set of thread configurations. Given the infinite set of (d − 1)−forms δw encoding the canonical entanglement pattern of spheres of arbitrary radius R and center point at x 0 , on a constant-t slice Σ, our goal is to extract the components of the perturbed metric δg ij . We recall that, in the presence of a metric, the set of (d − 1)−forms δw define a set of covector fields δw a (R, x 0 , z, x), instead of the numbers δS(R, x 0 ), so it is clear that in this framework we have infinitely more information in comparison to the standard setup using extremal surfaces. Hence, we can expect to be able to reconstruct the metric in a more straightforward way. If the full metric is given in the Fefferman-Graham gauge (2.1), the components of the metric perturbation take the form δg ij = z d−2 H ij (with δg zz = δg zi = 0). In this gauge, JHEP01(2021)193 the components of the covectors δw a can be related to the metric perturbations as follows: These equations can be easily inverted using the dependence on R and x 0 of (4.56) and (4.57). In fact, there are infinitely many ways to invert these equations. The simplest way is to get rid of the derivative terms, so that we obtain a system of algebraic equations. However, we have several ways to accomplish this. Below we will discuss two different options. The first option is by evaluating both sides of (4.56) and (4.57) on the set of parameters (R, x 0 ) that satisfy ξ t (R, x 0 ) = 0, i.e., (4.59) Notice that the requirement given by (4.59) means that our reconstruction is limited to the points that are accessible via extremal surfaces. This means that this option is, in a sense, analogous to the metric reconstruction via the HRT prescription and does not exploit the full reach of the bit threads. We will continue for now and then explain an alternative that does not impose this limitation. Let us first analyze (4.56). From this equation, we can immediately find an algebraic expression that gives the perturbed trace H i i (z, x), (4.60) In fact, we can extract the trace H i i at a point (z, x) from (4.60) using any single covector with parameters (R, x 0 ) such that (4.59) is satisfied. This is an example of the nonuniquess of the inversion formulas. Notice that equation (4.60) provides the solution to the full inversion problem for d = 2, in which case (4.57) is identically zero. In fact, for d = 2 the only component of the metric perturbation that we need to solve for corresponds to H xx (z, x) which equals the trace (4.60). 
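The kinematic condition (4.59) is easy to implement in practice. Assuming the standard bulk extension of the CHM boost Killing vector at t = 0, ξ^t = (π/R)(R² − z² − |x − x_0|²) — a form not written out explicitly above, so it should be read as our assumption — the parameters (R, x_0) allowed for a given bulk point (z, x) are exactly those whose RT hemisphere passes through that point. A small helper in d = 2:

```python
import numpy as np

# Sketch of the kinematic condition (4.59).  Assumed (standard CHM form): at t = 0
# the bulk Killing vector has only a time component,
#     xi^t = (pi/R) * (R^2 - z^2 - |x - x0|^2),
# which vanishes exactly on the RT hemisphere of the ball (R, x0).
def xi_t(R, x0, z, x):
    return np.pi / R * (R**2 - z**2 - np.sum((np.atleast_1d(x) - x0)**2))

def spheres_through(z, x, n=5):
    """Parameters (R, x0) whose RT hemisphere passes through the bulk point (z, x)
    in d = 2: slide the center and fix R = sqrt(z^2 + (x - x0)^2)."""
    centers = x + np.linspace(-2.0, 2.0, n)
    return [(np.sqrt(z**2 + (x - x0)**2), x0) for x0 in centers]

z_pt, x_pt = 0.7, 0.2
for R, x0 in spheres_through(z_pt, x_pt):
    assert abs(xi_t(R, x0, z_pt, x_pt)) < 1e-12   # the point sits on the xi^t = 0 locus
print("all chosen (R, x0) satisfy condition (4.59) at the target bulk point")
```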
In higher dimensions, we can use (4.57) in addition to (4.56), and proceed in a similar way to extract the information of the individual components of the perturbed metric H i j (z, x). In order to do that, first replace the solution for the trace (4.60) in equation (4.57), so that the latter equation becomes: Further, for a given j and within the set of allowed parameters, we can take x j 0 = x j and x k 0 = x k for k = j. This leads to the following solutions for the diagonal and non-diagonal JHEP01(2021)193 components of the perturbation: which provides the solution to the full inversion problem for d > 2. As mentioned above, the method outlined in the previous paragraph is limited to the points that are accessible via extremal surfaces. An alternative way of inverting the system of equations in a less restrictive setting is the following. First notice the relations Using the above, one finds that from which one can easily invert the diagonal and off diagonal components of the perturbation for d > 2. 20 More explicitly, we obtain that which gives the solution to the full inversion problem for d > 2. On the other hand, for d = 2 we can simply use equation (4.60) or, alternatively, the second reconstruction method that we explain below. Notice that summing over j in equation (4.66) leads to a different expression for the trace of the perturbation as (4.60). This represents another example of the non-uniqueness in the inversion equations. Reconstruction from a minimal set of thread configurations. One may wonder whether the extended nature of the entanglement pattern information present in a single form δw(R, x 0 ; z, x) for fixed (R, x 0 ), or a finite number of forms n, could suffice to recover the components of the metric perturbations δg ij . Naively, the number of unknown variables that we need to solve for is d(d − 1)/2, which is the number of symmetric components of H ij ({i, j} run over d − 1 values). On the other hand, the number of equations that we would have at our disposal is given by nd, i.e., d components of a single covector δw a , times JHEP01(2021)193 the total number of forms n ∈ N. Assuming they are all linearly independent of each other, we have that for a fixed d, the minimum number of formsn that we would need is where • represents the ceiling function (i.e., the smallest integer greater or equal to its argument). For d = 2 and d = 3 (3d and 4d gravity respectively) we obtainn = 1, which means that we can hope to recover the full metric from a single form. In higher dimensions the problem would be underdetermined so we would need to increase the number of forms, although, to a finite number set by d. In the following, we will focus on the cases for which n = 1 but we will come back to the higher dimensional cases at the end of the section. We start with equation (4.56) and notice that its right-hand side can be rewritten as This equation can be in principle easily inverted for the trace H i i . However, we must proceed with some care because ξ t can attain a zero value in multiple points in the bulk. We refer the reader to appendix B for a detailed analysis of this equation, and we will simply state the final result here. The analysis is naturally split for bulk points x ∈ A c and x ∈ A (∀z). In the former case, ξ t never vanishes in the bulk, so the analysis is particularly simple. In this case we obtain This equation is also valid for x ∈ A provided that z 2 < z 2 * . For z ≥ z * the integrand in (4.70) has a double pole at λ * = z * /z (with λ * ∈ [0, 1]). 
This divergence can be removed by an appropriate regularization, e.g., using the principle value prescription. However, given the simplicity of our problem we can find the answer directly from (4.69) by taking one of the end points of the integral to be arbitrarily close to the zero locus of ξ t . After a series of careful manipulations explained in appendix B we arrive at a formula that is valid in the region x ∈ A and for all z: (4.71) This new integral seems to still have a single pole at λ = λ * . However, a close inspection shows that this term is proportional to which can be checked from (4.56). Therefore, the integral is manifestly finite. We note that, indeed, the naive principle value regularization of (4.70) results in (4.71), and likewise JHEP01(2021)193 there are many ways to derive (4.71) from (4.70). Perhaps the simplest way to do it is by slightly changing the integration contour: where ∈ R, and then letting → 0. It can be shown that this prescription is consistent with both (4.70) and (4.71) so it is valid everywhere in the bulk. Notice that in the above formulas we have not specified the number of dimensions d. This means that given the knowledge of a single δw(z, x) one can always find an inversion formula for the trace of the perturbation, given explicitly by (4.70)-(4.71) or its equivalent (4.73). We will now recover the other components of the metric for the cases d = 2 and d = 3. The d = 2 case: the inversion problem in d = 2 is exceptionally simple because in this case there is only one metric component to solve for. Therefore, the trace of the perturbation provides the full solution to the problem, i.e., H i i (z, x) = H xx (z, x). Nevertheless, we will analyze this case in some detail and check that our formulas for the trace ar consistent with the expected results. First, notice that in this case there are further simplifications that considerably reduce our problem: equation (4.57) vanishes exactly so δw x = 0 everywhere in the bulk. The closedness relation dδw = 0 then implies ∂ z δw z = 0, so δw z (z, x) = δw z (0, x). Using this fact, and applying the formula (4.73) which is valid everywhere in the bulk, we obtain This is precisely what is expected from the analysis of section 2.1, specifically from equation (2.13). For x ∈ A c , (4.70) coincides explicitly with (4.73) so the integral above is the same. For x ∈ A we can alternatively use (4.71). Notice that for d = 2 the integrand in (4.71) is identically zero since δw z (z, x) is constant so, in this case, the full result is given by the first term in (4.71). Indeed, this term coincides with (4.74), as expected. Again, since for d = 2 this is the only component of the perturbed metric that we need to solve for, then, equation (4.74) completes the inversion problem. The d = 3 case: to solve the inversion problem in d = 3 we need equations (4.57), in addition to (4.56). These equations involve derivatives with respect to the spatial coordinates ∂ i so, as a system of first order differential equations, we would need information about H ij at some fixed x i in order to have a well defined boundary value problem. We will deal with the choice of such boundary conditions below, but for now, let us explain how to setup the inversion problem. First, since equation (4.73) already provides the solution for the trace part of the metric, we can already replace this back into (4.57) and solve for the remaining metric components. 
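Before turning to the remaining components, the regularization used around (4.70)-(4.73) can be illustrated with a toy numerical example (ours): a principal-value type integral evaluated once by symmetric excision of the pole and once by an iε shift of the pole, keeping the real part. The smooth numerator g below is an arbitrary stand-in.

```python
import numpy as np
from scipy.integrate import quad

# Toy illustration (ours) of the pole prescription: the two regularizations of a
# principal-value integral agree up to O(delta, eps) corrections.
lam_star = 0.6
g = lambda lam: np.cos(3 * lam)           # arbitrary smooth numerator

def pv_excision(delta):
    left, _ = quad(lambda l: g(l) / (l - lam_star), 0.0, lam_star - delta)
    right, _ = quad(lambda l: g(l) / (l - lam_star), lam_star + delta, 1.0)
    return left + right

def pv_ieps(eps):
    # Re[ g / (lam - lam_star - i*eps) ] = g * (lam - lam_star) / ((lam - lam_star)^2 + eps^2)
    val, _ = quad(lambda l: g(l) * (l - lam_star) / ((l - lam_star)**2 + eps**2),
                  0.0, 1.0, limit=400)
    return val

print(pv_excision(1e-4), pv_ieps(1e-4))   # the two prescriptions agree
```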
We find it convenient to write the fluctuations as (4.75) so that h = H i i gives the trace while φ and χ are the two fields that we still need to solve for. Defining x 1 ≡ x and x 2 ≡ y, equations (4.57) then take the form: Thus, we have a system of two coupled partial differential equations of first order. It is possible to decouple these equations by taking one further derivative and combining appropriately the two equations. To do so, we first rewrite (4.76)-(4.77) as: (4.79) Equations (4.78) and (4.79) can now be combined as a pair of Poisson's equations: where ρ φ and ρ χ act as sources for φ and χ, respectively. These equations can be solved using standard Green's function methods. Assuming knowledge of the solutions at a closed surface ∂V, one can formally write the solution in the interior of the surface V as follows: where G( x, x ) is the Green's function for the Laplace operator in 2d, given by Next, we need to impose appropriate boundary conditions. To do so, let us start discussing the region z > R for which ξ t < 0 so that there will be no subtleties with poles in the integrals. We will consider a closed surface ∂V at infinity, so that dS =r r dθ , with r = | x | → ∞. We note that, normally, the integral over ∂V in 2d would give a finite contribution, assuming that the fields that we solve for are finite at infinity. This is because ∇ G( x, x ) · dS → constant at large r . However, in our particular problem, we JHEP01(2021)193 need to solve for the combination φ/ξ t , or χ/ξ t , which decay (at least) as 1/r 2 at large r . 21 This means that the surface integrals in (4.82) and (4.83) vanish, regardless of the specific values of φ and χ at r → ∞, and therefore, where the boundary conditions once again force c 0 = 0. 22 For the region z < R, we note that ξ t can become zero at multiple points in the bulk, so we would need to deal with potential divergences of the integrands. Following the calculation of the trace (explained in detail in appendix B), we can expect to get rid of these non-physical divergences by a simple regularization procedure. One easy way to do this is by adding a small imaginary part to z, (4.88) and at the end of the calculation let → 0. The integration region should now be free of singularities, so these formulas apply for all values of z. Hence, together with the trace formula (4.73), they provide a complete solution of the reconstruction problem in d = 3. Higher dimensional cases: as discussed above, by comparing the number of independent components of the perturbed metric H ij against the number of equations that we obtain from a single form δw, one can conclude that the minimum number of formsn that are in principle required to invert the problem is given by (4.68). However, this does not imply that any set ofn forms should lead to a well defined inversion problem since, depending on the choice, one could end up with a set of equations that are not completely linearly independent. In the following we will spell out the precise conditions that we must impose on a minimal set of forms and give a concrete example of how these conditions can be satisfied. First, notice that for a spherical region, a single form δw is parametrized by d real numbers P = {R, x 1 0 , · · · , x d−1 0 }. Moreover, one can easily check that for each choice of such numbers, the d components of the form δw a , with a ∈ {z, 1, . . . , d − 1}, are all linearly independent since each component involves different sets of metric components H ij . 
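Stepping back to the d = 3 inversion for a moment: the Green's-function step in (4.82)-(4.83) is straightforward to prototype numerically. The sketch below (our toy, with made-up source data) rebuilds a known field from its Laplacian by direct quadrature against G(x, x') = ln|x − x'|/(2π), regularizing the self-cell by the mean of ln r over a disk of equal area.

```python
import numpy as np

# Sketch of the Green's-function inversion used in (4.82)-(4.83): solve a 2d Poisson
# equation with G(x, x') = ln|x - x'|/(2*pi) by direct quadrature.
# Toy data (ours): u = exp(-r^2), rho = laplacian(u); we rebuild u from rho.
h = 0.05
ax = np.arange(-5, 5, h)
X, Y = np.meshgrid(ax, ax, indexing='ij')
R2 = X**2 + Y**2
rho = (4 * R2 - 4) * np.exp(-R2)          # analytic Laplacian of exp(-r^2)

def u_reconstructed(x0, y0):
    r = np.sqrt((X - x0)**2 + (Y - y0)**2)
    a = h / np.sqrt(np.pi)                # self-cell: mean of ln r over an equal-area disk
    lnr = np.where(r < a, np.log(a) - 0.5, np.log(np.maximum(r, 1e-300)))
    return np.sum(lnr * rho) * h**2 / (2 * np.pi)

for x0, y0 in [(0.0, 0.0), (0.5, -0.3), (1.0, 1.0)]:
    # values agree with exp(-(x0^2 + y0^2)) at the percent level on this grid
    print(u_reconstructed(x0, y0), np.exp(-(x0**2 + y0**2)))
```

We now return to the choice of a minimal set of forms in higher dimensions.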
Therefore, the task at hand reduces to finding a convenient set of parameters P k , with 21 To see this, notice that both φ and χ are components of the perturbation Hij, which can generally be written as (2.7), i.e., Hij ∼ z 2n T (n) ij . The leading order term gives the CFT stress-energy tensor, which scales as T ij , so these terms they decay faster at infinity. Hence, Hij scales like T (0) ij and φ/ξ t ∼ 1/r 2 and χ/ξ t ∼ 1/r 2 at large r (in the worst case scenario). 22 Notice that if c0 = 0, both φ ∼ r 2 → ∞ and χ ∼ r 2 → ∞ at large r, which would be unphysical. JHEP01(2021)193 k ∈ {1 , . . . ,n}, such that the individual components of each form are linearly independent across the set. More concretely, if we label then forms as δw k , then we would need to impose that for a fixed a the set {δw 1 a , . . . , δwn a } must be linearly independent. In order to visualize explicitly the dependence of each component δw a on the parameters {R, x 1 0 , · · · , x d−1 0 }, we rewrite equations (4.56) and (4.57) as follows: (4.90) In this way, we can express each component of the form as a linear combination of (d + 1) linearly independent functions with coefficients α l given by Note, however, that by choosing a set of parameters P we can only specify up to d of the above coefficients, while one of them will necessarily be determined in terms of the others. This is in fact not a problem. If we consider the set of forms δw k and repeat the above analysis, we find now that where k ∈ {1, . . . ,n}. We note that the number of coefficients α k l that we can fix for each k is larger than the total number forms that we have at our disposal, i.e., d > d− 1 2 . Therefore, we still have a lot of freedom on the choice of parameters P k to be able to make the set {δw 1 a , . . . , δwn a } linearly independent. One way to achieve this is by focusing only on a subspace of forms obtained by varyingn out of the d free coefficients of each δw k a , which we denote as β k l , while keeping the rest fixed. More explicitly, we can split the sum in (4.93) as where β k l is now an ×n matrix, with the coefficients that we vary, andβ k l denote the ones that we keep fixed. The condition for the above linear independence is then given by the non-vanishing of the determinant of the matrix β k l . For instance, we could take JHEP01(2021)193 and vary the parameters {R k , (x i 0 ) k } for i ∈ {1, . . . ,n − 1} such that det(β k l ) = 0 while keeping (x i 0 ) k fixed for i ∈ {n, . . . , d − 1}. More generally, notice that there can be many other possible choices. One one hand, the choice of the subset β k l ⊂ α k l can be arbitrary and, on the other hand, the remaining free parameters can be random; they must not necesarily be kept fixed. General time slices: recovering the full bulk metric. Let us now go back to the problem of recovering the full bulk metric δg µν . First, notice that when we picked a constant-t slice Σ, we were able to recover the components of the metric tangent to it, namely, δg ij . This means that we still need to find the time components, δg tt and δg ti , in order to complete the reconstruction problem. Below, we present a simple algorithm to recover these extra metric components. From the bulk point of view it is easy to see that δg tt and δg ti are, in fact, constrained from the equations of motion (2.4). 
Specifically, they can be determined from the zz and zµ components of Einstein's equations, These equations imply that δg tt must equal the spatial trace δg tt = δg i i , while δg ti can be determined from a first order equation ∂ t δg ti = ∂ j δg ij . This can be easily implemented in practice. However, the problem is that these particular bulk equations of motion are not known from the CFT perspective, so their use cannot be justified. Indeed, once the surface Σ is chosen to be a constant-t slice, the closedness condition dδw = 0 only encodes the tt component of the Einstein's equations, as explained at the end of section 4.2. One simple solution to this problem, is to pick a more general time slice Σ and repeat the reconstruction analysis outlined above. For simplicity, we will pick here a boosted slice parametrized by coordinates (t , x i , z ), but a similar analysis can be implemented from more general choices of Σ . We will denote the boosted coordinate x i = x and label the coordinates orthogonal to it as x j = y j for j = i. With this notation, a boost with rapidity β (which we can take to be arbitray) is given by the bulk transformations t = t cosh β + x sinh β , (4.97) x = x cosh β + t sinh β , (4.98) We can now perform the reconstruction analysis in this new slice, and recover the components δg i j on Σ . Indeed, a quick calculation shows that the components that we can recover are δg x x | β = sinh 2 β δg tt + 2 sinh β cosh β δg tx + cosh 2 β δg xx , (4.101) δg x y i | β = sinh β δg ty i + cosh β δg xy i , (4.102) JHEP01(2021)193 If this analysis is done for at least a few values of the rapidity β 1 = β 2 = 0, then it is clear that we will have enough information to recover the extra components δg tt and δg tx from linear combinations of δg i j | β i . This completes the reconstruction problem for the locus of all points in the intersection between the original slice Σ and the new slices Σ , as shown in left panel of figure 3. In particular, from the transformations (4.97)-(4.100), it follows that the constant-t slices parametrized by β intersect the surface Σ at the line 105) x = x cosh β + t sinh β + σ , (4.106) The new slices Σ , depicted in the right panel of figure 3, intersect the original slice Σ at so it is clear that, if we perform the reconstruction problem for a general slice Σ with arbitrary β and σ we would have enough information to reconstruct the full metric in Σ. Finally, note that the above algorithm has a trivial extension to generic choices of Σ. Thus, picking a family of slices Σ that foliates the full manifold M , and repeating the same analysis for each Σ gives a complete solution to the reconstruction problem in M . Going beyond linear order In the past section we have shown that the problem of metric reconstruction can be carried out explicitly at linear order for the case of perturbative excited states. This was accomplished using the canonical bit thread construction based on differential forms. But, can this methodology be generalized to the non-linear regime? To answer this question, we will start with a brief review of our findings and then discuss how the different aspects of our proposal can be generalized. To begin with the reconstruction problem, we assumed knowledge of a set of canonical forms that codify the entanglement pattern of subregions in the dual CFT. We recall that these canonical forms δw can be uniquely specified from CFT data. 
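Returning briefly to the boosted-slice step: once δg_xx is known from the β = 0 slice and δg_x'x'|_β has been reconstructed on two boosted slices, (4.101) becomes a 2×2 linear system for δg_tt and δg_tx. A minimal sketch, with made-up "measured" values standing in for the reconstructed data:

```python
import numpy as np

# Sketch of the linear-combination step around (4.101): recover delta_g_tt and
# delta_g_tx at a point from delta_g_xx (beta = 0) and two boosted reconstructions.
g_tt_true, g_tx_true, g_xx = 0.30, -0.10, 0.70   # made-up "true" components

def g_xpxp(beta):                                 # eq. (4.101)
    s, c = np.sinh(beta), np.cosh(beta)
    return s**2 * g_tt_true + 2 * s * c * g_tx_true + c**2 * g_xx

b1, b2 = 0.4, 1.1                                 # two nonzero, distinct rapidities
A = np.array([[np.sinh(b1)**2, 2 * np.sinh(b1) * np.cosh(b1)],
              [np.sinh(b2)**2, 2 * np.sinh(b2) * np.cosh(b2)]])
rhs = np.array([g_xpxp(b1) - np.cosh(b1)**2 * g_xx,
                g_xpxp(b2) - np.cosh(b2)**2 * g_xx])

g_tt, g_tx = np.linalg.solve(A, rhs)
print(g_tt, g_tx)                                 # recovers (0.30, -0.10)
```

Coming back to the canonical forms δw and how they are fixed by boundary data: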
Specifically, they are completely fixed by the boundary condition on ∂M (4.54), which is given in terms of the one-point function of the CFT stress-energy tensor T µν , and the requirement of bulk locality. In the presence of a metric, we found that one of these forms specifies a covector field that can be schematically written as ( δw) a = F bc a δg bc , (4.110) where F bc a is a specific linear differential operator. In low-dimensional cases, the equations resulting from a single form provide enough data to invert the problem, so that This kind of inversion works for d = 2 and d = 3, i.e., in AdS 3 and AdS 4 , respectively. For higher dimensional cases the problem becomes underdetermined, but it can be easily generalized by starting from a set of differential forms δW = {δw 1 , . . . , δw n }, that encode the entanglement pattern of a family of subregions in the CFT. In this case, we find that so, for large enough n one can always invert the system as The optimal value of n depends on the number of dimensions, and is given by (4.68). Of course, the larger the value of n, the more information that we have at our disposal, and the easier the inversion problem becomes. In fact, in the limit of n → ∞, or when the set of differential forms is dense enough, there is even enough maneuver to turn the inversion problem into a simple algebraic system of equations as shown explicitly in section (4.4.1). Let us now comment on what to expect in the non-linear regime. To start with, we can treat the problem perturbatively but extending the above results to higher orders in the JHEP01(2021)193 perturbation. In this case, the perturbation of the metric can be given as an expansion 23 g λ µν = g µν + λδg λ µν , δg λ µν ≡ δ (1) g µν + λδ (2) g µν + λ 2 δ (3) g µν + · · · , (4.114) and, similarly, any solution to the max-flow problem (4.115) On general grounds we expect that, in the presence of the metric (4.114), the change in the form δw λ should follow an equation similar to (4.110), i.e., ( δw λ ) a =F bc a δg λ bc , (4.116) but now withF bc a being a higher order differential operator. For example, at second order in λ we expect a second order differential operator, and similarly for higher orders. This introduces the standard non-uniqueness problem for the inversion of a non-linear operator. However, this issue can be circumvented by solving the reconstruction problem recursively in λ. To see this, notice that the different terms in (4.115) should depend on the different metric perturbations and their derivatives as follows: for j ∈ {1, . . . , i} and k ∈ {1, . . . , 1 + i − j}. In other words, for a given value of i, in the right-hand side of (4.117) we expect up to i th derivatives of δ (1) g µν but only 1 st derivatives of δ (i) g µν . Thus, if we solve for the metric at first order in λ, then we can reformulate the problem of bulk reconstruction at second order as a linear problem above the g µν +λδ (1) g µν solution. This also generalizes to higher orders in λ, so that the inversion problem at a given order can also be cast as a linear problem above one lower order. We should also comment on how the boundary condition (4.54) generalizes to higher orders in λ. At linear level, we saw that it is fully specified by the one-point function of the CFT stress-energy tensor T µν . However, at higher orders we would need to specify further data, e.g. [66]. 
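The recursive structure below (4.117) can be illustrated with a scalar toy model (ours): a nonlinear map w = F(g) = g + a g², standing in for (4.116), is inverted order by order in λ, each order requiring only a linear step on top of the lower-order solution.

```python
import numpy as np

# Toy illustration (ours) of the recursion below (4.117): each order in lambda is a
# linear inversion on top of the previously determined orders.
a, lam = 0.7, 1e-2
g_true = lam * 1.3 + lam**2 * (-0.4)             # the "metric" we seek
w = g_true + a * g_true**2                       # the nonlinear "CFT data"

w1 = 1.3                                         # O(lambda) data
w2 = -0.4 + a * 1.3**2                           # O(lambda^2) data

g1 = w1                                          # linear inversion at O(lambda)
g2 = w2 - a * g1**2                              # linear step on top of g1

g_rec = lam * g1 + lam**2 * g2
print(g_rec, g_true, w - (g_rec + a * g_rec**2)) # agreement up to O(lambda^3)
```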
A useful case study would be to consider the reconstruction problem at second order in λ, which was already worked out in [27] in the framework of extremal surfaces. This paper generalized the Iyer-Wald construction, and thus [23], to include second order variations in the metric, hence their construction can be used to obtain canonical thread configurations at second order in λ. More specifically, [27] focused on a class of CFT states which are expected to have a classical gravity description, and are defined by adding sources for primary operators to the Euclidean path integral defining the vacuum state [67][68][69], (4.118) 23 As shown in [64], when going to higher orders in λ it is convenient to work in the so-called Hollands-Wald gauge [65] so that the coordinate location of γA is fixed and L ξ A (g λ µν )|γ A = 0. In this gauge the argument presented in section 2.3.1 about the choice of Σ can be extended to higher orders in λ. JHEP01(2021)193 Among other things, their calculation led to a closed expression for the entanglement entropy in these CFT states at second order in the sources: 24 A (x 1 , x 2 ) andK (2) A (x 1 , x 2 ) are some specific kernels. A few comments are in order. First, notice that at this order the entanglement entropy depends on the specific CFT only through the central charges a * and C T . In fact, for theories that are dual to Einstein gravity, they must be proportional to each other, i.e., a * ∝ C T , so the answer depends only on one CFT parameter. Second, (4.119) correctly encodes the first-law of entanglement δS A = H A at linear order in λ but, in addition, it also contains information about the relative entropy in the excited state δ (2) A ), to second order in λ. Finally, note that equation (4.119) can now be used to specify a canonical boundary condition for δw λ in ∂M , and so, it should suffice to uniquely specify the full (d − 1)−form on M by further requiring bulk locality. This means that at second order in λ, we need also the one-point functions of all primary operators O α , in addition to stress-energy tensor. From the bulk perspective, [27] showed that the closedness of the Iyer-Wald form χ, related to δw λ through (4.37), encodes now the following data: at first order one recovers (4.36), which specialized to an arbitrary slice Σ leads to the linearized Einstein's equations E (1) ab = 0. At second order, one obtains where (E (2) ab ) grav is the second order Einstein tensor and T (2) ab is the second order stressenergy tensor associated with the bulk fields φ α dual to the CFT operators O α . Thus, for theories with a * = C T , a CFT state of the form (4.118) secretly encodes Einstein's equations (at least up to second order in the perturbation) with matter determined by the CFT one-point functions. Therefore, on general grounds we can expect that the inversion problem using the canonical bit thread prescription to be well defined at second order in λ. One can also go to higher orders in perturbation theory, however, it is clear that one would ultimately need infinitely more CFT data in the full non-linear regime, rendering the problem untractable. To see this, notice that to the i th order in the perturbation, the JHEP01(2021)193 relative entropy δ (i) S(ρ A ||ρ (0) A ) will generically involve i-point functions of the primary operators O α , so at the full non-linear level we would need to specify all correlation functions of the theory. Here we have two options to proceed. 
One could try to pursue the reconstruction problem in a theory with a know gravity dual, e.g., N = 4 SYM, and with some hard work, one should be able to recover the metric order by order in the perturbation. Alternatively, one could start with a generic CFT and try to constrain the structure of the CFT i-point functions such that the calculations match with the gravity side. Indeed, such constraints must start appearing for i ≥ 4, since these correlation functions are not fixed by conformal invariance. Another possibility would be to consider the full reconstruction problem without resorting to perturbation theory. 25 For example, given a CFT with a known holographic dual, one can start by computing entanglement contours [70,71] for several regions A i in a state of the form (4.118). These contours can then be used as boundary conditions in ∂M for a set of closed bulk forms w i that encode the entanglement pattern of the individual regions. Via the closedness condition, dw i = 0, and assuming further input such as bulk locality, one should then be able to reconstruct particular realizations of the set of forms w i on M that solve the max flow problem in the bulk. If this set is sufficiently dense, then, one could set up the problem of metric reconstruction as a particular optimization problem. More specifically, given a set of such (d − 1)− forms W = {w i }, the metric g ab should emerge as the minimal positive definite symmetric (0, 2) tensor for which the norm bound constraint holds for all the elements w i of the fundamental set W . It would be interesting to develop a more precise algorithm based on this optimization problem and understand how the Einstein's equations would emerge upon its implementation. Conclusions and outlook In this paper we developed a new framework for metric reconstruction based on the bit threads reformulation of entanglement entropy. Our work can be divided roughly into two main parts. In section 3 we explored simple constructions of perturbative thread configurations based on the general methods originally developed in [43] but expanded in this paper in various ways. We explored in detail two particular constructions, one that starts by specifying the class of integral curves and a second one that assumes a specific family of level set surfaces. We showed that both methods are efficient and can be easily implemented for the case of perturbative excited states, as we discuss in detail in the concrete example of a local quench in appendix A. However, we realized that both constructions encode the information about the bulk metric in a highly nonlocal way. This implies that these realizations are not particularly useful to tackle the question of metric reconstruction and highlights the necessity of reformulating bit threads in a language that makes background independence manifest. JHEP01(2021)193 Motivated by the above results, we started section 4 by reformulating the bit threads framework in terms of differential forms. We gave general formulas that translate the relevant equations of the standard description in terms of flows into this new language and studied in detail the case of perturbative excited states. We pointed out that the Iyer-Wald formalism provides us with a canonical choice for the perturbed thread configuration that makes explicit use of bulk locality. 
More explicitly, we showed that the Iyer-Wald construction yields a particular solution to the max flow problem in the bulk that can be uniquely determined from CFT data, and encodes the Einstein's equations in the bulk through its closedness condition. Assuming that a set of such forms is given, we then showed that the problem of metric reconstruction is equivalent to the inversion of a particular differential operator. We gave explicit inversion formulas for the case of 2d and 3d CFTs, and argued that the problem is also well possed in higher dimensions. Finally, we discussed the generalization of our results to higher orders in the perturbation and its relation to the full Einstein's equations. There are some open questions related to our work which we think are worth exploring: • Explicit inversion at higher orders: in section 4.4.2 we have sketched out how to generalize the metric inversion problem to higher orders in λ. We believe that this would be fairly straightforward to second order in the perturbation if one uses [27] as a starting point, while it would be illuminating to have explicit inversion formulas for the differential operator at this order. Generalizing this story to higher orders should be possible but may require some extra work. To some extent, this study would be even more rewarding since it could yield non-trivial constraints on the space of theories with classical gravity duals, specifically on the structure of their correlation functions. It would also be interesting to work out a more precise algorithm for metric reconstruction at the full non-linear level following the discussion at the end of section 4.4.2 and, in particular, understand how the Einstein's equations would emerge in this context. • More general states and entangling regions: in our work we have considered perturbations around the vacuum state and focused on spherical regions in the boundary theory. While it is true that in this setting one could in principle recover Einstein's equations systematically (order by order) and hence the bulk geometry, one could relax these two points, with the latter one being arguably the easiest of the two (at least for regions with local modular Hamiltonians). We note that substantial progress on generalizing the former using the Iyer-Wald formalism appeared in [72]. This paper also argued that doing the linear analysis for arbitrary states and shapes of the entangling region is sufficient to capture the full non-linear Einstein's equations in the bulk. It would be interesting to relax these conditions in our method of bulk reconstruction using bit threads, and try to make these statements a bit more precise in our context. It would also be interesting to explicitly start with a state for which the bulk geometry has entanglement shadows, and understand how our approach encodes information about these regions. JHEP01(2021)193 • Covariant bulk reconstruction: in this work we have incorporated time-dependence by combining the maximin prescription introduced in [52] and the non-covariant formulation of bit threads [30]. As explained in section 2.3.1, this was possible in virtue of various crucial simplifications of the perturbative setting that we consider. However, it should be possible to pose the same question in the fully covariant formulation of bit threads [53]. We believe that in this case, the full modular flow in the bulk should play a role, and could serve as a guide for constructing canonical bit thread configurations in other special cases. 
Related to this, it would be interesting to ask the question about bulk reconstruction in time-dependent situations that are not easily accessible to the perturbative setting we consider here, e.g., completely far-of-equilibrium states that could lead to black hole formation in the bulk. • Higher derivative theories: finally, we believe it would be worthwhile to explicitly generalize our method of bulk reconstruction to the case of higher derivative theories in the bulk. We point out that the program of gravitation from entanglement using extremal surfaces has been worked out in detail for these theories in [28], using the Iyer-Wald formalism both to first and second order in the state deformations around the vacuum. Likewise, the bit thread reformulation of entanglement entropy has already been generalized to the case of higher derivative gravities in [36], incorporating corrections to the local norm bound that depend on the specific theory. It would be interesting to see how our formulas are corrected if we turn on these extra gravity couplings. We hope to come back to some of these points in the near future. JHEP01(2021)193 In a general dimension, the metric perturbation can be expanded as H µν = 16πG N d z 2n T (n) µν . However, in d = 2 all n ≥ 1 terms vanish and we have, The stress tensor should be traceless and conserved, therefore, its general form is For general linear perturbations we can perform a Fourier decomposition, and take f (t + x) and g(t + x) to be an appropriate superposition of plane waves. However, for concreteness we will consider an example that is physically motivated: a local quench that arises by the insertion of a local primary operator. The stress tensor of a local quench is given by the sum of two shock waves [73,74], where the (small) parameter λ gives the total energy inserted and α acts as a UV regulator. As discussed in section 3.2 the unperturbed geodesics provide a family of integral curves that satisfy the criteria given in [43]. In pure AdS, the geodesics are semicircles anchored at the boundary. These semicircles form a two-parameter family of curves, defined by where x s is the center of the circle on the x-axis and R s is its radius. If we denote (x m , z m ) a point on the minimal surface γ A , we showed in section 3.2.1 that where H(t, x) ≡ H xx (t, x). Plugging A.6 and A.7 in A.5 we obtain a family of geodesics orthogonal to γ A , parametrized by the point x m ∈ [−R, R] on the minimal surface. Likewise, we can use the formula (3.43) to obtain the magnitude of the vector field. This calculation can be done following the examples of [43]. The final result for the integral curves and magnitude are plotted in figure 4. We can study the same example using the level set construction discussed in section 3.3. In this approach, given a solution to the max flow problem in the unperturbed geometry v, the solution for the perturbation of δv is where Ψ is a scalar function that is determined by solving the first order differential equation v · ∇Ψ + ∇ a (δg ab v b ) + 1 2 v · ∇(δg) = 0 , (A.9) with boundary condition In figure 5 we present the results obtained using this method. The third method explored in this paper, discussed in section 4.3, relies on the Iyer-Wald formalism to define a canonical bit thread perturbation. This approach uses the language of differential forms and relates a max flow vector field v, to an optimal closed form w. In a background that is perturbatively close to a given geometry, i.e. 
g → g + δg, the optimal closed form changes as w → w + δw. Since knowing w + δw determines the max flow v + δv, the problem amounts to finding δw. The Iyer-Wald formalism provides a form, χ, defined in (4.32), that can be taken as δw via δw = 4G_N χ. The form χ is closed when the equations of motion are satisfied. Having δw, it is straightforward to determine δv. However, the configuration should also satisfy the norm bound (4.22), and that is not guaranteed in general in this construction, since the condition depends explicitly on the metric. Nevertheless, the more detailed analysis performed in section 4.3 reveals that, up to our order of approximation, the norm bound is indeed satisfied. To illustrate this point we plot the norm for the same perturbation studied with the previous approaches. The resulting v and its norm are plotted in figure 6, which shows that for perturbatively small λ the norm bound is indeed satisfied. In figures 4-6 we have presented the perturbed vector field, v_λ = v + λδv, and its magnitude obtained using the three different methods developed in this paper. It is also interesting to look at the perturbation δv alone to gain insight into the time dependence of the local pattern of entanglement induced by the quench. For concreteness we only show results using the level-set method, which are presented in figure 7. In [73] it was conjectured that the quench insertion generates an entangled pair of wave packets that move in opposite directions (see figure 4 of [73] for a pictorial representation). However, it was recently shown in [74] that this intuition is only true if one includes the leading 1/N corrections coming from the entanglement of bulk fields. More specifically, that paper argued that at leading order in G_N the two wave packets are effectively unentangled, and only the quantum correlations between the degrees of freedom in each individual packet contribute to the total entanglement entropy. Remarkably, we can reach the same conclusion from the plots in figure 7, which exhibit the following features: i) two wave packets moving together with the shocks, i.e., in opposite directions at the speed of light, and ii) threads around each wave packet connecting degrees of freedom in their fronts with those in their tails. The fact that we do not see threads connecting the two wave packets implies that they are effectively unentangled at leading order in G_N, in agreement with the result of [74]. Moreover, the precise pattern of the threads explains why S_A peaks at t = R (see e.g. figure 7 of [74]): at this time most of the threads connect the degrees of freedom of A with those in its complement (recall that threads connecting points within A do not contribute to the entanglement entropy of the region). It will be very interesting to repeat this analysis for the case of a global quench, and to understand how the local pattern of entanglement evolves in time for cases that admit an entanglement tsunami interpretation [75,76] (large regions) and cases that do not [77,78] (small regions).
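As a check of the integral-curve construction used in the example above: the semicircles (A.5) orthogonal to γ_A can be parametrized by the point (x_m, z_m) where they cross the minimal surface. The explicit relations (A.6)-(A.7) are not reproduced above; one consistent choice, which we reconstruct here from the orthogonality condition and which should be read as our assumption, is x_s = R²/x_m and R_s = R z_m/|x_m|. The sketch verifies membership and orthogonality (the AdS_3 slice metric is conformally flat, so Euclidean angles suffice).

```python
import numpy as np

# Our reconstruction of the orthogonal-semicircle family in appendix A:
# circles centered on the boundary that cross gamma_A (radius R, centered at the
# origin) at right angles through the point (x_m, z_m) on gamma_A.
R = 1.0
for x_m in np.linspace(-0.9, 0.9, 7):
    if abs(x_m) < 1e-12:
        continue                                  # the curve through the tip is the z-axis
    z_m = np.sqrt(R**2 - x_m**2)
    x_s, R_s = R**2 / x_m, R * z_m / abs(x_m)     # assumed parametrization (cf. (A.6)-(A.7))
    assert np.isclose((x_m - x_s)**2 + z_m**2, R_s**2)   # passes through (x_m, z_m)
    t_gamma = np.array([-z_m, x_m])               # tangent to gamma_A at the crossing
    t_curve = np.array([-z_m, x_m - x_s])         # tangent to the orthogonal circle there
    assert abs(np.dot(t_gamma, t_curve)) < 1e-12
print("the semicircle family crosses gamma_A orthogonally")
```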
Coupled solitons in rare-earth doped two-mode fiber We present first ever analytical solutions for shape-preserving pulses in a Kerr nonlinear two-mode fiber doped with 3-level Λ atoms. The two modes are near-resonant with the two transitions of the atomic system. We show the existence of quasi-stable coupled bright-dark pairs if the group velocity dispersion has opposite signs at the two mode frequencies. We demonstrate the remarkable possibility allowed by the fiber dispersion for the existence of a new class of solutions for unequal coupling constants for the two modes. We present the conditions for existence and the analytical form of these solutions in presence of atomic detuning. We confirm numerically the analytical solutions for the spatio-temporal evolution of coupled solitary waves. © 2008 Optical Society of America OCIS codes: (270.5530)Pulse propagation and solitons; (190.4370) Nonlinear optics, fibers References and links 1. S. L. McCall and E. L. Hahn, “Self-induced transparency by pulsed coherent light,” Phys. Rev. Lett. 18, 908-911 (1967) 2. S. L. McCall and E. L. Hahn, “Self-induced transparency,” Phys. Rev. 183, 457-485 (1969). 3. G. L. Lamb, Jr., “Analytical Descriptions of Ultrashort Optical Pulse Propagation in a Resonant Medium,” Rev. Mod. Phys. 43, 99-124 (1971). 4. H. A. Haus, “Physical interpretation of inverse scattering formalism applied to self-induced transparency,” Rev. Mod. Phys. 51, 331-339 (1971). 5. A. I. Maimistov, A. M. Bhasrov, S. O. Elyutin, and M. Y. Sklyarov, “Present state of self-induced transparency theory,” Phys. Rep. 191, 1-108 (1990). 6. R. Grobe, F. T. Hioe, and J. H. Eberly, “Formation of Shape-Preserving Pulses in a Nonlinear Adiabatically Integrable System,” Phys. Rev. Lett. 73, 3183-3186 (1994). 7. J. H. Eberly, “Transmission of dressed fields in three-level media,” Quantum Semiclass. Opt. 7, 373-384 (1995). 8. A. Rahman and J. H. Eberly, “Theory of shape-preserving short pulses in inhomogeneously broadened three-leve media,” Phys. Rev. A 58, R805-R808 (1998). 9. J. H. Eberly and V. V. Kozlov, “ Wave Equation for Dark Coherence in Three-Level Media,” Phys. Rev. Lett. 88, 243604-1-243604-4 (2002). 10. G. Vemuri, G. S. Agarwal, and K. V.Vasavada, “Cloning, Dragging, and Parametric Amplification of Solitons in a Coherently Driven, Nonabsorbing System,” Phys. Rev. Lett. 79, 3889-3892 (1997). 11. D. P. Caetano, S. B. Cavalcanti, and J. M. Hickmann, “Coherent interaction effects in pulses propagating through a doped nonlinear dispersive medium,” Phys. Rev. E 65, 036617-1-036617-6 (2002). 12. S. E. Harris, J. E. Field, and A. Kasapi, “Dispersive properties of electromagnetically induced transparency,” Phys. Rev. A 46, R29-R32 (1992). 13. L. V. Hau, S. E. Harris, Z. Dutton, and C. H. Behroozi, “Light speed reduction to 17 metres per second in an ultracold atomic gas,” Nature, 397, 594-598 (1999). 14. G. P. Agarwal, “Nonlinear Fiber Optics,”2nd Ed., (Academic Press, San Diego CA 1995). 15. S. Trillo, S. Wabnitz, E. M. Wright, and G. I. Stegeman, “Optical solitary waves induced by cross-phase modulation,” Opt. Lett. 13, 871-873 (1988). (C) 2008 OSA 27 October 2008 / Vol. 16, No. 22 / OPTICS EXPRESS 17441 #100986 $15.00 USD Received 2 Sep 2008; revised 13 Oct 2008; accepted 13 Oct 2008; published 15 Oct 2008 16. Q. Yang, J. T. Seo, B. Tabibi and H. Wang, “Slow Light and Superluminality in Kerr Media without a Pump,” Phys. Rev. Lett. 95, 063902-1-063902-4 (2005). 17. M. Nakazawa, E. Yamada, and H. 
Kubota, “Coexistence of self-induced transparency soliton and nonlinear Schrdinger soliton,” Phys. Rev. Lett. 66, 2625-2628 (1991). 18. M. Nakazawa, E. Yamada, and H. Kubota, “Coexistence of a self-induced-transparency soliton and a nonlinear Schrdinger soliton in an erbium-doped fiber,” Phys. Rev. A 44, 5973-5987 (1991). 19. M. Nakazawa, Y. Kimura, K. Kurokawa, and K. Suzuki, “Self-induced-transparency solitons in an erbium-doped fiber waveguide,” Phys. Rev. A 45, R23-R26 (1992). 20. S. Ghosh, J. E. Sharping, D. G. Ouzounov, and A. L. Gaeta, “Resonant Optical Interactions with Molecules Confined in Photonic Band-Gap Fibers,” Phys. Rev. Lett. 94, 093902-1-093902-4 (2005). 21. S. Ghosh, A. R. Bhagwat, C. K. Renshaw, S. Goh, A. L. Gaeta, and B. J. Kirby, “Low-Light-Level Optical Interactions with Rubidium Vapor in a Photonic Band-Gap Fiber,” Phys. Rev. Lett. 97, 023603-1-023603-4 (2006). Introduction Coherent pulse propagation in atomic media has been one of the central issues of quantum optics since the pioneering work of McCall and Hahn [1,2] on self induced transparency (SIT).Initial studies on SIT focused on shape preserving pulses (e.g., solitons) in resonant two-level systems [3,4,5].A generalization to two pulses with frequencies tuned to the respective transitions of a 3-level Λ system led to the discovery of adiabatons [6].These were shown to be extremely sensitive to the input conditions.A thorough investigation of two pulse propagation in three level systems demonstrating the sech and tanh pulse pairs, was carried out by Eberly and coworkers [7,8,9].Efforts were made to extend the area theorem to three level systems [9].Soliton cloning and dragging using two lasers on the two transitions of a Λ system was another interesting discovery [10].Soliton cloning in a three level system coupled to a two-mode fiber was investigated numerically [11].Electromagnetically induced transparency (EIT) in three level systems opened the floodgates for unprecedented control on pulse velocity, with tremendous potentials for quantum information and many other applications [12,13].Slowing down and storing robust objects like solitons by manipulating the dispersion in the atomic systems is now an open possibility.In a somewhat different context, mainly for the demands of long-haul communication industry, solitons in Kerr nonlinear fibers were studied extensively [14].The existence of these solitons in a non-resonant nonlinear system depends on a fine balance between nonlinearity and dispersion.The dynamics of such solitons are governed by the nonlinear Schrödinger equation (NLS).Note that the physical origin of shape preserving pulses in this case is quite different from that of resonant nonlinearities, which is reflected by the fact that SIT-solitons are described by sine-Gordon equation.In the context of NLS-solitons there have been generalizations to two modes as well [15].These could be the two modes of two adjacent single mode fibers or the two orthogonal modes of a birefringent fiber.With opposing character of dispersion at the two mode frequencies, the system is known to possess coupled bright-dark states [15].Issues like control of pulse velocity in systems with non-resonant nonlinearity were also addressed [16].Delay and advancement of long pulses were observed experimentally.Delay of pulses were shown to be controllable by means of nonlinear coupling between different frequency components in a temporally nonlocal Kerr medium [16]. 
Considering the fact that research has progressed so much on the above two types of nonlinearity (i.e., resonant and non-resonant) separately, there is a need to combine the expertise and extract the best from each. Perhaps the first attempt was made by Nakazawa et al., who showed that both SIT and NLS solitons can coexist in a medium that has both types of nonlinearity [17,18,19]. Such a situation will typically correspond to a nonlinear fiber doped with, say, rare-earth materials. The urgency of such research can be appreciated easily in the light of some very recent experiments involving EIT phenomena in a fiber geometry [20,21]. The advantages of using the fiber, namely large optical depth and tighter confinement of the field, and hence higher power densities, are obvious. An optical depth in excess of 2000 was reported in a system where a photonic band-gap fiber was used, with the Rb atoms released by light-induced atomic desorption [21]. The tight confinement of the field allowed ultralow-level nonlinear optical interaction with control-field powers in the nanowatt regime.
Fig. 1. Schematic of the three-level Λ system interacting with two fields with Rabi frequencies 2G and 2g, respectively. The single-photon detuning is denoted by Δ.
Keeping in view the recent experiments and the theoretical trends, we focus on a two-mode fiber doped with three-level Λ atoms. The two modes are assumed to be near-resonant with the two relevant transitions of the Λ system (see Fig. 1). To the best of our knowledge, such a system has not been probed for stable propagation of pulse pairs. We derive the first analytical solutions for the combined effects of the three-level resonant nonlinearity and the non-resonant nonlinearity of the fiber. We thus considerably extend known works on solitons in three-level systems and in fibers [7,8,15]. We show that analytical solutions in the form of solitary waves are possible even in the presence of finite detuning. We make several important observations. The fiber parameters, namely the group velocity dispersion (GVD), determine the stability aspects of the pulse pair, while the delay is governed mainly by the 3-level system. We also predict a new class of solutions with allowance for a frequency shift. These solutions are characterized by a group velocity tunable by the GVD of the fiber. Most of the results are presented in analytical form, while the stability aspects are studied by direct integration of the propagation equations. We show that quasi-stable propagation of a bright-dark pair of solitons is possible if the modes are chosen on the positive and negative sides of null group velocity dispersion. We also look at various limiting cases in order to recover the earlier known results. The organization of the paper is as follows. In Section II we present the mathematical formulation and the analytical results. We also derive the conditions under which such solutions exist. Section III gives the results of numerical integration of the coupled atom-field system, focusing on the stability aspects of the solutions of Section II. Finally, in the conclusions, we summarize the main results.
Mathematical formulation
We consider a Kerr-nonlinear two-mode fiber doped with 3-level Λ atoms as shown in Fig. 1.
Many of the rare-earth elements can be well approximated by a Λ system, while the two modes of the fiber could be the orthogonally polarized modes of a birefringent fiber. The two modes of the fiber, assumed to be near-resonant with the transitions |1⟩ ↔ |2⟩ and |1⟩ ↔ |3⟩, respectively, are written in terms of slowly varying envelopes. Here E_i is the slowly varying envelope, ω_i is the carrier frequency, and k_i is the wave number of the respective field. We use the Schrödinger formalism for the medium to describe the dynamics of the population and polarization of the atoms. The probability amplitudes C_i(z,t) of the atomic levels |i⟩ for the Λ system within the rotating-wave approximation obey Eq. (2), where the dot denotes ∂/∂t. The Rabi frequencies 2g and 2G of the two field modes are related to the slowly varying amplitudes E_1 and E_2 through the transition dipole moment matrix elements d_ij. The single-photon detuning is denoted by Δ. The induced atomic polarization associated with the transition between levels |1⟩ and |2⟩ is described in terms of the linear and third-order nonlinear optical susceptibilities χ(1) and χ(3). The first term in the round bracket is responsible for self-phase modulation (SPM) and the second term leads to cross-phase modulation (XPM). The polarization P is a slowly varying function of both space and time coordinates because of its dependence on the slowly varying amplitude E_1. We use the nonlinear Schrödinger equations (NLS) to obtain the spatiotemporal evolution of the light pulses through the doped nonlinear dispersive medium. Taking the slowly varying envelope approximation and converting to equations for the Rabi frequencies of the fields, we obtain Eqs. (5) and (6), where η_i determines the coupling of the i-th mode to the atomic system. The SIT-NLS solitons of Nakazawa et al. [17] can be obtained by setting G to zero, reducing the problem to a single mode of the fiber interacting with a resonant two-level system. It is also clear from Eqs. (5) and (6) that one recovers the case considered by Trillo et al. [15] by setting both η's to zero. We use all these limiting cases as checks of the correctness of our numerical code (see Section III below). Since both the bare fiber and the bare 3-level system allow for sech-tanh pairs of pulses [7,8,15], we use a sech-tanh ansatz, Eqs. (8)-(11), for the solution of Eqs. (2), (5), and (6). In these equations σ determines the temporal width of the pulse and 1/(Kσ) gives the envelope velocity in the moving frame. The other constants are to be determined in a self-consistent fashion. The expression for C_3 is chosen in such a way that, in the remote past, all the population is in the ground state. Note that, in contrast to Refs. [7,8], we have allowed for the frequency shifts Ω_1 and Ω_2.
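For orientation, the generic shape of the sech-tanh ansatz just described can be sketched as follows, with τ the retarded time and ξ the moving-frame pulse variable built from σ and K; the explicit phase conventions here are an assumption of this illustration rather than a quotation of Eqs. (8)-(11):

\[
  \xi = \frac{\tau - K\sigma z}{\sigma}, \qquad
  g(z,\tau) = A\,\operatorname{sech}(\xi)\, e^{\,i\Omega_{1}\tau}, \qquad
  G(z,\tau) = B\,\tanh(\xi)\, e^{\,i\Omega_{2}\tau},
\]

so that 1/(Kσ) is the envelope velocity in the moving frame, as stated above. The atomic amplitudes C_1 and C_2 carry matching exponential factors, and C_3 is built so that C_3 → 1 (all population in the ground state) as ξ → −∞.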
The choice of the temporal exponential factors in Eqs. (8)-(11) is obvious and leads to the cancellation of the explicit time dependence, except for a problematic term i(Ω_1 − Ω_2) in C_2 arising from Eq. (2). This term can be eliminated under the condition Ω_1 = Ω_2. Henceforth, we will assume that the frequency shifts are identical for both fiber modes, i.e., Ω_1 = Ω_2 = Ω. Under this constraint the expressions for the amplitudes C_1 and C_2 take the forms given in Eqs. (12)-(14). The solutions (8), (9), and (12)-(14) are then substituted into Eqs. (2), (5), and (6), and the coefficients of sech, tanh, and sech·tanh are collected, yielding the self-consistency relations. The Bloch part yields two relations, Eqs. (15) and (16); Eq. (15) gives the relation between α and Δ. The coupled nonlinear Schrödinger part yields a further set of equations; in writing Eq. (18) we made use of Eq. (15). Equation (20) leads directly to the constraint (23). It will be shown later that for stable propagation of pulse pairs it is essential that β_1 = β_2, and thus Ω can be evaluated using Eq. (23). Equations (21), (22), and (16) lead to the important relation (25). Equation (25) is one of the central results; it must be satisfied by the nonlinearity and dispersion of the medium at the two mode frequencies. We show in the next section that this condition can be satisfied with the same or opposite signs of the group velocity dispersion. One of these solutions is quasi-stable against modulational instability while the other breaks down quickly. A close inspection of the argument of the pulse envelopes leads easily to the relation (26) satisfied by the group velocity. It is clear from Eq. (26) that for η_1 = η_2 = η (leading to Ω = 0), large detuning (a diminishing 3-level effect) leads to v_g → c, which holds for the bare fiber. The other extreme, namely Δ = 0, reproduces Eberly's result for perfect resonance, c/v_g = 1 + cη/A^2 [7]. It should be noted that for η_1 ≠ η_2, and hence Ω ≠ 0, the group velocity is affected by the fiber GVD β. We now demonstrate how the results of Nakazawa et al. [17,18] can be recovered from our general results. This case, along with a few other limiting cases, is discussed below in the context of testing our numerical code. We set G = 0 and η_2 = 0, η_1 = η, β_1 = β in order to recover the coexistence of SIT-NLS solitons. This yields the conditions (27). The conditions (27), along with the limiting forms of the solutions for g, C_1, and C_3, agree well with those of Nakazawa et al. Further, at perfect resonance (Δ = 0) and for Ω = 0, one has simple relations for the parameters.
Numerical results and discussion
In this section we present the numerical results obtained by integrating the full set of coupled Bloch equations (2) and NLS equations (5) and (6) in scaled variables. We use a combination of the Runge-Kutta method and the split-operator method to simulate the spatiotemporal evolution of the optical pulses and to delineate the effects of both the resonant and non-resonant nonlinearity of the medium. We use initial conditions given by our analytical solutions to check their stability. The atomic system is assumed to be prepared in the ground state. Before we present the numerical results, a discussion of the parameters leading to coupled solitons in a medium with both resonant and non-resonant nonlinearity is in order.
It was shown by Nakazawa et al. that choosing a realistic system of a nonlinear fiber doped with rare-earth materials for the coexistence of SIT and NLS solitons is extremely difficult [17,18]. The difficulty is associated with the fact that the power requirements for the N = 1 solitons in the two cases differ by orders of magnitude. Hence, in their experiment the NLS soliton was suppressed by choosing the fiber mode with null GVD [19]. We do not offer any resolution of this problem and use heuristic parameter values. Although our choice of parameters is somewhat removed from those of available realistic fibers and rare-earth atoms, our calculations clearly reveal the intricate interplay between the resonant and non-resonant nonlinearities. The presence of the fiber is shown to lead to hitherto unknown solutions for unequal couplings. Since we present analytical expressions for most of our results, the suitability of a given fiber or atomic species can be checked as fiber and materials technology develops further. In our numerical simulations, we consider the width of the pulse σ = 1. As a consequence of our choice of parameters, most of the results are in arbitrary units, and the units are suppressed in the plots. In order to ensure the correctness of our numerical code, we first studied various limiting cases. We start with the standard SIT solitons of a two-level system and verify the stable propagation of a bright soliton with area 2π, and the break-up of a pulse with area 4π into two 2π solitons (simulation results not shown). We next verify the case of nonlinear propagation of dark and bright solitons in a two-mode optical fibre [15] (not shown). The bright pulse can propagate without any distortion despite normal GVD when it couples to the dark soliton with anomalous GVD through cross-phase modulation. It is necessary to choose the dark and bright solitons on the two sides of null group velocity dispersion for stable soliton solutions. As the other limiting case we studied the system investigated by Nakazawa et al. [18]. In Fig. 2 we demonstrate the propagation dynamics of a SIT-NLS soliton in a resonant dispersive medium in the presence of group velocity dispersion and self-phase modulation. We show in Fig. 2(a) the stable propagation of a 2π SIT-NLS soliton. As can be seen from Fig. 2(b), in conformity with the earlier results, the input soliton with area 4π splits into two separate 2π solitons after traversing some distance into the medium. We next provide numerical results for the medium with competing resonant and non-resonant nonlinearities. Since distortionless propagation of the pulses is of utmost importance for any practical application, we first present the results pertaining to the stability of the pulses. In Figs. 3(a) and 3(b) we show the spatiotemporal evolution of the bright and dark solitons for η_1 = η_2 = 1 and Δσ = 0. We have chosen the intensities of the input bright and dark pulses as A^2 = 4 and B^2 = 3, respectively, such that they obey the self-consistency relation (16). As in the case of Trillo et al. [15], we choose the group velocity dispersion of the bright and dark pulses above and below the null dispersion.
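As a concrete illustration of the bare-fiber limiting case used as a code check above (η_1 = η_2 = 0, the coupled-NLS system of Trillo et al. [15]), the sketch below propagates a bright-dark pair with a standard split-step Fourier scheme. The normalization, sign conventions, grid sizes, and parameter values are illustrative assumptions and are not the values used for the figures.

import numpy as np

# Split-step Fourier propagation of two incoherently coupled NLS equations
# (SPM + XPM only, eta1 = eta2 = 0), in normalized units:
#   i dU/dz = (beta1/2) d^2U/dt^2 - (|U|^2 + 2|V|^2) U
#   i dV/dz = (beta2/2) d^2V/dt^2 - (|V|^2 + 2|U|^2) V
nt, t_max = 1024, 40.0
t = np.linspace(-t_max, t_max, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])     # angular-frequency grid

beta1, beta2 = 1.0, -2.5          # opposite signs of GVD for the bright/dark pair
A, B = 2.0, np.sqrt(3.0)          # illustrative amplitudes (A^2 = 4, B^2 = 3)
U = A / np.cosh(t)                # bright component
V = B * np.tanh(t)                # dark component

dz, nz = 1.0e-3, 2000
LU = np.exp(1j * (beta1 / 2) * w**2 * (dz / 2))       # half-step dispersion operators
LV = np.exp(1j * (beta2 / 2) * w**2 * (dz / 2))

for _ in range(nz):
    U = np.fft.ifft(LU * np.fft.fft(U))               # linear half step
    V = np.fft.ifft(LV * np.fft.fft(V))
    # nonlinear step: SPM + XPM phase rotation, evaluated with pre-step intensities
    U, V = (U * np.exp(1j * (np.abs(U)**2 + 2 * np.abs(V)**2) * dz),
            V * np.exp(1j * (np.abs(V)**2 + 2 * np.abs(U)**2) * dz))
    U = np.fft.ifft(LU * np.fft.fft(U))               # linear half step
    V = np.fft.ifft(LV * np.fft.fft(V))

print("peak |U|^2:", float(np.max(np.abs(U))**2), " min |V|^2:", float(np.min(np.abs(V))**2))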
Fig. 5. Spatio-temporal evolution of the (a) bright and (b) dark solitons in (i) a two-mode fiber (solid curves), (ii) a 3-level system (dash-dotted curves), and (iii) a doped fiber (dotted curves). Cases (ii) and (iii) are plotted for Δσ = 0 and 5, respectively. Case (ii) also corresponds to a doped fiber with Δσ = 0.
Keeping in mind condition (25), we have used β_1 = 1.0 for the bright pulse and β_2 = −2.5 for the dark pulse, respectively. It is clear from Fig. 3(a) that the bright soliton moves without any distortion even in the normal dispersion regime. This quasi-stable bright-soliton propagation is possible because it couples with the dark soliton via cross-phase modulation. After sufficient propagation distances, the dark soliton starts showing signs of modulational instability. In order to highlight the need for choosing opposite characters of GVD for the two modes for stable bright-dark pairs, we show in Fig. 4 the case where β_1 and β_2 were chosen to have the same sign, still consistent with Eq. (25). It is clear from the figure that both constituents of the pair disintegrate after propagating short distances. In order to appreciate the contributions of the constituent systems separately, and also to assess the effect of detuning, we undertake a detailed comparative study of the bare systems (fiber and 3-level atoms) with the doped fiber. The results for the spatio-temporal evolution of the bright and dark pulses are shown in Figs. 5(a) and 5(b), respectively. The solid and dash-dotted curves give the results for the bare fiber and a resonant (Δσ = 0) 3-level system, respectively. In fact, the latter curves are identical to those for the fiber doped with perfectly resonant atoms. The results for finite detuning, namely Δσ = 5, are intermediate between these two extremes (see the dotted curves in Fig. 5). For very large detuning, the doped-fiber results are the same as those for the bare fiber. It is thus clear that the delay aspects of the pulses in the doped fiber derive mainly from the strong dispersion in the atomic dopants. All these conclusions are consistent with Eq. (26). As mentioned earlier, for η_1 ≠ η_2 the fiber GVD can modify the group delay. This is shown in Fig. 6, where we have plotted Kσ (= 1/v_g − 1/c) as a function of the normalized detuning Δσ for η_1 = 1 and for two values of η_2, namely η_2 = 1 (solid curve) and η_2 = 2 (dashed curve). The plots are obtained by two different means, namely (a) by direct integration and interpreting the location of the peaks as in Fig. 5(a), and (b) by using Eq. (26). Both methods lead to the same curves. The results for bare 3-level atoms with η_1 = 1, η_2 = 2 are also given by the solid curve.
Fig. 6. Kσ (= 1/v_g − 1/c) for a doped fiber as a function of the normalized detuning Δσ for η_1 = 1 and for two values of η_2, namely η_2 = 1 (solid line) and η_2 = 2 (dashed line). The results for a 3-level system with η_1 = 1 and η_2 = 2 are also given by the solid curve. The other parameters are as in Fig. 5.
Both these cases, a bare 3-level system with unequal coupling constants and a doped fiber with equal coupling strengths, have no contribution from the rightmost term in Eq. (26). It is clear from this figure that the solutions with finite frequency shift (Ω ≠ 0 for η_1 ≠ η_2) can lead to larger delays.
Conclusions
In conclusion, we have studied the propagation of shape-preserving pulses in a two-mode fiber doped with Λ atoms. We have obtained the analytical form of the solutions, as well as the conditions for their existence with allowance for atomic detuning and frequency shifts. We have also simulated the spatio-temporal evolution of the pulses by means of a combined Runge-Kutta and split-step method. We show that the stability aspects of the pulses in the doped fiber are inherited from the fiber, while the delay aspects are governed by the atomic system. Our results clearly reveal the important role of group velocity dispersion in the stable propagation of these pulses. Quasi-stable propagation results when the two pulses are tuned to frequencies with opposite signs of GVD. The opposite case leads to a quick breakup of the pulses due to modulational instability. This is in tune with the findings of Trillo et al. [15]. We also reported a new class of solutions for unequal coupling strengths of the two modes. We presented a detailed study of the group delay with clear demarcation of the contributions from the fiber and the Λ atoms. Further, we demonstrated how other known results can be recovered from our general results, both analytically and numerically.
Fig. 2. (a) Propagation of sech pulses with area 2π at different propagation distances in a single-mode fibre coupled with resonant nonlinearity. (b) Breakup of a pulse with area 4π into two pulses with area 2π. Parameters used in the numerical simulation: group velocity dispersion β_1 = −0.5 and Kerr nonlinearity γ_1 = 1.
Fig. 4. Growth of instability for (a) bright and (b) dark solitons in a three-level medium in the presence of non-resonant nonlinearity and fiber dispersion. Parameters are the same as in Fig. 3, except that the group velocity dispersions β_1 (= −0.25) and β_2 (= −1.25) now have the same sign.
5,716.2
2008-10-27T00:00:00.000
[ "Physics" ]
Transient response of a liquid injector to a steep-fronted transverse pressure wave
Motivated by the dynamic injection environment posed by unsteady pressure gain combustion processes, an experimental apparatus was developed to visualize the dynamic response of a transparent liquid injector subjected to a single steep-fronted transverse pressure wave. Experiments were conducted at atmospheric pressure with a variety of acrylic injector passage designs using water as the working fluid. High-speed visual observations were made of the injector exit near field, and the extent of backflow and the time to refill the orifice passage were characterized over a range of injection pressures. A companion transient one-dimensional model was developed for interpretation of the results and to elucidate the trends with regard to the strength of the transverse pressure wave. Results from the model were compared with the experimental observations.
Introduction
Pressure gain combustion (PGC) research has been rapidly gaining attention as a potential means to produce thrust or generate power at higher efficiency than conventional constant-pressure combustion technology [1,2]. These transient devices rely on the detonative mode of combustion as opposed to deflagration at constant pressure, as is common in conventional combustion devices. For example, the rotating detonation engine (RDE) is currently receiving substantial attention as a candidate for future applications and serves as a major motivation for the present study. Under ideal operation, the annular RDE combustion chamber contains one or more azimuthally traveling detonation waves traversing the annulus at velocities approaching the Chapman-Jouguet (CJ) value, while regions downstream of the wave passage supply fresh propellants to support the perpetuation of the next incoming wave. Because detonation waves can travel at speeds well over 1000 m/s, injectors in these devices are subject to dynamic downstream pressures oscillating at kilohertz frequencies. Theoretically, detonation waves can produce pressure ratios in excess of 30 depending on reactant mixtures, and these strong waves create pressures that exceed the injection pressure such that backflow of propellants is a distinct possibility. Understanding the transient response of the injector under these conditions is crucial since reverse flow of the entire injector passage would bring combustion products into the manifold with potentially disastrous consequences. The transient injection flow recovery subsequent to wave passage must also be understood in order to assess injection and filling characteristics that prepare the combustible mixture for the next arrival of a detonation wave [3]. Unfortunately, only limited prior work has focused on injector response to violent events such as the passage of a detonation wave. Moreover, most prior research was motivated by injector response during combustion instabilities in constant-pressure combustion engines, not detonation engines. Nevertheless, the classic work dates back to the 1950s with Miesse [4] and Reba and Brosilow [5], whose efforts are also covered in some detail in the NASA Special Publications (SP Series) from the 1970s [6,7]. The Reba and Brosilow linear analysis describes the amplitude and phase shift of the response of a plain-orifice atomizer to an imposed pressure oscillation of arbitrary frequency. The relevant frequency here (injection frequency) is the rate at which fluid in the passage is replaced, i.e., v/L.
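To put the injection-frequency scale in concrete terms, the short sketch below evaluates the ideal Bernoulli velocity, the orifice transit time, and the injection frequency v/L for a water-fed plain orifice; the passage length and pressure drop are illustrative assumptions of the same order as the conditions studied later, not values taken from the references above.

import math

rho = 998.0     # water density, kg/m^3
dp = 20.7e3     # illustrative injection pressure drop, Pa (about 3 psi)
L = 5.0e-3      # illustrative orifice passage length, m

v = math.sqrt(2.0 * dp / rho)    # ideal Bernoulli velocity, m/s
tau = L / v                      # orifice transit time, s
f_inj = v / L                    # injection frequency, Hz

print(f"v = {v:.2f} m/s, tau = {tau * 1e6:.0f} us, v/L = {f_inj / 1e3:.2f} kHz")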
These classical studies show that the injection response to a small-amplitude sinusoidal pressure perturbation tends to roll off to low amplitudes when frequencies are, say, an order of magnitude higher than the injection frequency. MacDonald et al. [8] extended the Reba and Brosilow analysis to consider nonlinear pressure perturbations using an axisymmetric CFD approach. Their results show that the nonlinear response is less than the linear result except at frequencies in excess of 5 v/L. These authors also included the effect of the vena contracta pulsations that are attributed to hydrodynamic instability of this reentrant region near a sharp orifice inlet. Simple models have also been used to assess the influence of injector unsteadiness on the jet development outside of the orifice exit [9,10]. While this work helped to establish nonlinear effects of a finite-amplitude sinusoidal pressure disturbance, it does not address the nature of steep-fronted waveforms consistent with passing detonation events. In the more recent era, substantial efforts have been devoted to understanding the response of specific injector types including swirl injectors [11-14], shear coaxial injectors [15,16], and swirl coaxial injectors currently of interest for oxidizer-rich staged combustion cycles [17,18]. Brady also recently published a peripherally related study regarding line priming [19]. The RDE community has recently begun to produce fully coupled simulations in which dynamic injection is directly coupled to the detonation passage [20,21]. Results to date show that the very presence of injector and plenum dynamics will cause the detonation wave structures to be different from those obtained with ideal injectors. Unfortunately, past efforts [20-23] have all focused on injection of gaseous propellants; the challenges of two-phase and liquid injection schemes remain to be addressed. As there are currently no available studies of the response of liquid injectors to violent events such as a passing detonation, we began by exploring the range of conditions over which dynamic behavior was observed in the channel when exposed to a detonation event that was initiated at atmospheric conditions. At atmospheric initial pressure, dynamic backflow was observed at low/modest injection manifold pressures, typically about 0.07-0.2 atm gauge. Because the differential pressure created by a detonation event scales linearly with the ambient pressure, we would expect larger injector pressure drops to display the dynamic behavior uncovered at the ambient-pressure detonation conditions explored in this introductory work.
Experimental facility
To meet the objectives of the study, a cold-flow experiment was designed to represent a single injector element of an RDE. An oxygen/hydrogen "pre-detonator" (henceforth referred to as "predet") was developed to drive a detonation through an optically accessible test section, and a high-speed camera recorded the response of the liquid (water in this case) as the detonation wave passes. Figure 1 provides a schematic arrangement of the plumbing and instrumentation diagram (P&ID) for the device. We are indebted to the Detonation Engine Research Facility at AFRL for donating a functional predet that creates the desired DDT event [24]. Major elements in the P&ID include two fast-response solenoid valves (LeeCo models IEPA2411241H and IEPA2411141H), a mixing and ignition chamber, and a deflagration-to-detonation transition (DDT) tube.
Ignition was achieved using a remote-control automotive spark plug (NGK model ME 8). The DDT tube was a 6.4-mm (0.25 in.) stainless steel tube approximately 88 mm (3.5 in.) in length, with an inner diameter of 4.6 mm (0.18 in.). Its upstream end was tapped internally with 10-32 threads to a depth of approximately 25 mm (1 in.). Feed pressures for both hydrogen and oxygen were set to 1.38 MPa (200 psia). Due to the small size of the propellant feed lines (1.6 mm or 1/16 in. tubing), we did not possess the instruments capable of measuring the flow rates of each propellant. However, we approximated the equivalence ratio of the mixture to be two based on choked flow calculations. The predet was interfaced with the end of an expanding acrylic channel on the test article. The test article also included a modular injector insert and pressure instrumentation as highlighted in Fig. 2. The transition channel section was included to diffuse the detonation to the cross-sectional area required for the injector module, and a low expansion half-angle of 5° was employed to avert any potential flow separation; design details are included in [25]. Visual observations of wave structures in the near-orifice-exit region confirmed that the design was sufficient to maintain a planar wave at the test article. The transition section was used for all injector configurations studied, while the injector modules were replaced with the various designs for different tests. In all configurations, the predet effluent entered the transition section through a 6.4-mm (0.25 in.) compression tube fitting. A channel width of 4.6 mm (0.18 in.) was chosen to match the inner diameter of the DDT tube, and the height was machined to 13.7 mm (0.54 in.) such that a flat rectangular profile was obtained. A high-frequency pressure transducer was located on the channel sidewall at the same axial station as the injection site. We fabricated four different injector designs to assess the influence of injector length and the presence of an upstream plenum (countersink) on the overall response. Dimensions of each injector are provided in Table 1, and schematics of each of the four designs, designated long, medium, short, and plenum (L, M, S, and P, respectively) to represent their key features, are shown in Fig. 3. All injector designs featured a 90° conical transition between the plenum and injector passage to guide the flow. Based on earlier tests with a different design, a water outlet port was added directly across from the injector exit to prevent the accumulation of water within the detonation passage, since this could lead to inconsistent pressure conditions during the experiment. The steady-flow discharge coefficients of the injectors were also determined using the catch-and-weigh method. In the range of manifold pressures tested, all injectors had discharge coefficients consistent with the developing flow regime, such that their values were strongly dependent on the imposed pressure drop. The non-constant discharge coefficients were a source of uncertainty as discussed in Sect. 4. A chart of the discharge coefficients plotted against injector pressure drop is shown in Fig. 4. Rather modest pressure drops were employed in the study to assess regions where injection dynamics were important. Design P shows a significantly lower discharge coefficient than the other injectors, presumably due to a second vena contracta in the narrower plenum.
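The steady discharge coefficient obtained from a catch-and-weigh test follows directly from the measured mass flow and the ideal Bernoulli flow rate, Cd = mdot / (A * sqrt(2 * rho * dP)). A minimal sketch is given below; the orifice diameter, collected mass, collection time, and pressure drop are hypothetical numbers for illustration, not values from Table 1 or Fig. 4.

import math

rho = 998.0                 # water density, kg/m^3
d = 1.0e-3                  # hypothetical orifice diameter, m
area = math.pi * d**2 / 4.0

mass = 0.075                # collected mass, kg (hypothetical)
t_collect = 30.0            # collection time, s (hypothetical)
dp = 13.8e3                 # imposed pressure drop, Pa (about 2 psi, hypothetical)

mdot = mass / t_collect                             # measured mass flow rate
cd = mdot / (area * math.sqrt(2.0 * rho * dp))      # discharge coefficient
print(f"Cd = {cd:.2f}")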
Injector manifold pressure was measured using a 400 kPa (60 psia) GE Druck pressure transducer located approximately 8 cm (3 in.) upstream of the injector and was manually controlled using a needle valve 25 cm (10 in.) upstream of the pressure transducer. The predet was fed from 34.5 MPa (5000 psia) hydrogen and oxygen K-bottles, while nitrogen purge and deionized water were linked directly to the laboratory's supply lines. A high-speed data acquisition system was used to control the predet's solenoid valves and record the profile of the pressure wave at a sampling rate of 1 MHz. The pressure transducer used to capture the pressure signal was a Kulite transducer, flush-mounted as per manufacturer specification. Its diaphragm is protected by a built-in perforated screen that increases the sensor's rise time to approximately 20 µs, causing the pressure readings to be spread horizontally (temporally). However, according to the pressure-wave shape analytical study presented in the following section, the total delivered impulse, rather than the peak pressure amplitude, was the main driver of injector response. Therefore, we have reason to believe that the temporal skewing of the pressure signal had little impact on our analysis to follow. We performed five tests at each test condition to quantify repeatability and measured the backflow distance and refill time of the various injectors from the high-speed camera footage. Detailed information on the test hardware and facilities can be found in [25].
One-dimensional numerical model development
The nature of the interaction between a transverse pressure wave and a cylindrical liquid injection is highly three-dimensional, and accurate predictions of injector response will certainly require CFD studies in three dimensions. However, it is no trivial task to simulate 3-D transient two-phase flows, and parametric studies become impractical. Instead, we developed an unsteady, one-dimensional, lumped-parameter computational model to help interpret the experimental measurements and serve as a preliminary design and analysis tool. The model solves for the dynamic response of a column of liquid with density ρ and length L subjected to a highly transient downstream pressure disturbance as highlighted in Fig. 5. We consider a fixed manifold pressure, Pm, and an initial chamber pressure of P1. Additionally, x represents the column end location for the purposes of tracking its motion along the orifice passage. While injector flow dynamics have been of interest to the combustion stability and water hammer communities for many years, we have not found an analysis comparable to this simple approach in the existing literature. By and large, the combustion stability community has assessed transient response to sinusoidal waveforms using both linear [5,6] and nonlinear [8] models. With water hammer, finite wave speeds are employed as the applications typically stem from "long pipes." In this case, linear and nonlinear wave equations are employed to assess dynamics [26]. As an initial step, the column of liquid was modeled as a fixed mass, and a step change in pressure from P1 to P2 was considered. The downstream pressure P1 can be set to zero without loss of generality, i.e., we measure all pressure differences with respect to this initial gauge pressure. Applying Newton's second law to the liquid column with F = PA gives Eq. (1). The dynamic pressure term appearing in (1) stems from the fact that the entire column is moving prior to a disturbance in the downstream pressure.
Consequently, the entire manifold stagnation pressure has to be applied in order to stagnate the flow; the lower sign in the dynamic pressure term applies during backflow conditions. It becomes apparent from the above equation that for a plain orifice, the cross-sectional area does not play a role in the problem. During backflow, the entire column of liquid will be pushed upstream. Letting v1 represent the initial Bernoulli velocity of the flow prior to the disturbance, we have v1 = sqrt(2(Pm - P1)/ρ). Similarly, the Bernoulli velocity after the step change is v2 = ±sqrt(2|Pm - P2|/ρ). Here, v2 takes the positive sign when P2 < Pm. When P2 > Pm, the flow reverses and v2 takes on a negative value. Equation (1) is a nonlinear ordinary differential equation that is integrated numerically to give instantaneous v and x values using an explicit, second-order accurate (in time) method for computing the liquid-gas interface velocity, v. A second-order backward Euler differencing scheme is utilized to solve for the velocity at the current time level i in terms of quantities at time levels i - 1 and i - 2. The resulting difference approximation for the velocity at the current time level is given in Eq. (4). The scheme is started using first-order methodology in a standard implementation of the backward Euler scheme. Equation (4) can be numerically integrated in time to obtain the interface location. A timestep of 1 × 10^-8 s was shown to be sufficiently small to produce converged results (Fig. 6). Additional details of the numerical treatment can be found in [25]. The orifice transit time τ = L/v1 represents a fundamental quantity in describing the dynamic response of the column. We can also define a dimensionless pressure p = Pm/P2 that characterizes the strength of the imposed disturbance relative to the manifold pressure. Consider the case when P1 = 0 at t < 0 and p = Pm/P2 = constant when t > 0. This represents a step pressure change which drives the fluid flow to a different steady state. We can define a response time t_r as the time taken for the flow to reach 95% of the difference between v1 and v2, since an asymptotic behavior is expected as the flow approaches v2 and the driving acceleration diminishes. Figure 7 depicts the behavior of this response time over a wide range of pressure disturbance amplitudes.
Fig. 7. The shaded region indicates the range of typical pressure ratios in a realistic system.
When p ≫ 1, the imposed disturbance is a small fraction of the initial manifold pressure, and the response time tends to asymptote to t_r ≈ 3τ under these conditions. For a very strong disturbance such as that imparted by a detonation wave, p ≪ 1, the most rapid response is attained, with t_r < 2τ. When very weak disturbances are imposed (p ≈ 1), the orifice takes the longest to respond since the imposed forces are the smallest under these conditions. For pressure ratios consistent with detonation waves (indicated by the shaded region in Fig. 7), the results give response times in the range 2τ < t_r < 5τ, and the injector does respond on a timescale consistent with the time fluid spends in the passage under nominal injection conditions. While instructive, these results are of limited use since detonation events are highly transient, characterized by a steep-fronted pressure spike followed by a period of pressure decay.
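A minimal sketch of the fixed-mass step-change response discussed above is given below. It integrates an assumed form of Eq. (1), rho*L*dv/dt = (Pm - P2) - 0.5*rho*v*|v| (the signed dynamic-pressure term), with a simple explicit Euler update in place of the backward Euler scheme of the study, and reports the 95% response time in units of the transit time τ = L/v1; all parameter values are illustrative.

import math

rho = 998.0        # water density, kg/m^3
L = 5.0e-3         # orifice passage length, m (illustrative)
Pm = 20.7e3        # manifold gauge pressure, Pa (illustrative)
P2 = 10.0 * Pm     # downstream pressure after the step (p = Pm / P2 = 0.1)

v1 = math.sqrt(2.0 * Pm / rho)                                    # velocity before the step
v2 = math.copysign(math.sqrt(2.0 * abs(Pm - P2) / rho), Pm - P2)  # steady velocity after
tau = L / v1                                                      # orifice transit time

dt, t, v, t_r = 1.0e-7, 0.0, v1, None
while t_r is None and t < 0.1:
    a = ((Pm - P2) - 0.5 * rho * v * abs(v)) / (rho * L)   # assumed form of Eq. (1)
    v += a * dt
    t += dt
    if abs(v - v1) >= 0.95 * abs(v2 - v1):                 # 95% response-time criterion
        t_r = t

if t_r is not None:
    print(f"tau = {tau * 1e6:.0f} us, v1 = {v1:.2f} m/s, v2 = {v2:.2f} m/s, "
          f"t_r/tau = {t_r / tau:.2f}")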
For this reason, we consider a triangular-shaped pressure disturbance characterized by an instantaneous rise to a maximum pressure P2 followed by a linear decay in pressure. This profile will be more representative of the pressure created by the passage of a detonation wave. We can define τc as the duration of linear decay and the pressure impulse as the area under the triangular-shaped disturbance (I = 0.5 τc P2), where P2 is now the peak pressure. Here, we consider several P2 and τc combinations under the constraint that I = constant. Keeping the manifold pressure, and therefore the orifice transit time τ, constant, we can use τ to nondimensionalize τc. Making τc/τ = 1 and P2 = 10 Pm the baseline case, the value of τc/τ can be decreased while keeping I constant to make the pressure profile steeper; the cases considered are shown in Fig. 8. Figures 9 and 10 show velocity and position histories.
Fig. 9. Non-dimensional velocity versus non-dimensional time for various decay times and peak pressures.
Figure 9 shows varying degrees of initial deceleration since the peak pressure is now different for each of the cases. While there is a more violent velocity excursion for a high-amplitude short pulse as compared to a low-amplitude long pulse, the asymptotic behavior and overall response time vary little for the cases considered. This is a fundamental result that is important to system dynamics, as the shape of the imposed overpressure is of less concern than the overall impulse applied to the system. Figure 10 reinforces this conclusion: once the imposed impulse has been applied, all the cases tend to converge toward the same overall system response. The results of this study indicate that the horizontal skewing of pressure profiles due to the pressure transducer's rise time would not result in significant error. Figure 11 depicts the overall recovery time t_rec, defined as the time required for the injection speed to return to 95% of its original value. Results show only small variations in recovery time over the range of conditions considered. Thus, the orifice dynamic response will depend almost exclusively on the impulse generated by the wave. While the fixed-mass analysis highlights some top-level characteristics of importance, we desire a more accurate representation of the system to account for additional physical phenomena. For example, as the free surface propagates into the orifice passage, the gas has less liquid mass in the orifice passage to accelerate. We removed the fixed-mass constraint and included viscous effects to produce a variable-mass model for the injector. Flow beyond the inlet and exit of the injector was neglected by freezing x at 0 or L when it exceeded those values, such that only the mass within the injector channel was used for the calculations. The following diagram shows the control volume used in the calculations (Fig. 12). From (1), the acceleration due to the imposed pressure gradient in the axial direction is obtained, where the density is now the mass-weighted average of the liquid and gas present in the orifice. It is important to account for the combusted gas density here (obtained from NASA CEA [27]) so that the acceleration remains bounded when the entire injector is filled with combustion products. In a real flow, frictional loss is expected on the channel wall and is calculated using the Fanning friction factor, f, where Re = ρvD/μ is the Reynolds number. The Colebrook equation [28] for the turbulent regime requires f to be solved numerically.
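Because the Colebrook relation is implicit in f, it is typically solved by fixed-point iteration. The sketch below iterates the Darcy form and converts to the Fanning value (f_Fanning = f_Darcy / 4); the relative roughness and the example flow values are illustrative assumptions.

import math

def fanning_friction_factor(re, rel_rough=0.0, tol=1e-10, max_iter=100):
    # Colebrook (Darcy form): 1/sqrt(fD) = -2*log10(rel_rough/3.7 + 2.51/(Re*sqrt(fD)))
    # Returns the Fanning factor fD/4; valid for turbulent flow.
    if re < 4000.0:
        raise ValueError("Colebrook correlation assumes turbulent flow")
    x = 1.0 / math.sqrt(0.02)                 # initial guess for 1/sqrt(fD)
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return (1.0 / x_new**2) / 4.0

# Example: water at roughly 6 m/s in a 1 mm smooth passage (illustrative numbers)
rho, mu, v, d = 998.0, 1.0e-3, 6.4, 1.0e-3
re = rho * v * d / mu
print(f"Re = {re:.0f}, Fanning f = {fanning_friction_factor(re):.4f}")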
After f is obtained and the wall shear stress is computed, the net frictional force on the domain follows, where v is the interface velocity and A_wet is the wall area currently wetted by the fluid. Friction of the gaseous combustion products is neglected. The mass of liquid in the orifice is then the product of the orifice cross section, the liquid column length, and the liquid density, and the total mass of fluid in the injector channel follows accordingly. The net acceleration of the liquid column is the sum of the acceleration due to the pressure gradient and the deceleration due to friction. Numerical treatment of the variable-mass model remained the same as for (4), and the same timestep of 1 × 10^-8 s produced results that were insensitive to further refinements in timestep [25]. We also used empirical pressure data in this case to provide a direct comparison between the experimental measurements and analysis predictions. Results from this comparison are included in the following section. Figure 13 shows a macroscopic view of events as a detonation wave travels down the channel. The wave travels from top to bottom in the right half-plane, and water flows from left to right. The water jet is shattered into a fine mist by the passage of the high-amplitude detonation front. At the same time, the column of water in the injector is pushed back toward the plenum by the sudden spike in pressure. This series of images, taken at 12,000 frames per second (fps) with 304 by 512 pixels resolution, serves to provide an overall picture of the events during each test run. Subsequently, we reduced the viewing window significantly to enable higher frame rates (more than 80,000 fps) and thereby capture more detail regarding the injector's response. It should be noted that the injector shown here was from a prior experiment and was not of the same design as those used to generate the data presented in this article. However, the events occurring downstream of the injector face remain unchanged.
Pressure data
Over half of the total sets of pressure data that we collected showed unexpected excursions shortly after the passage of the pressure wave, and the occurrence is believed to be the result of thermal drift of the pressure sensor. In earlier tests with a different design, the pressure sensor was situated directly across from the injector orifice and was therefore continuously covered with a film of water. In those tests, pressure excursions were not observed in the data. When water was allowed to leave the detonation passage freely, the transducer was directly exposed to the hydrogen-oxygen flame during blowdown and therefore produced an increased output voltage. For instances where thermal drift was minimal, it was likely that water droplets had come to rest directly on the transducer face due to the splashing resulting from injection recovery or passage over-relaxation (where surrounding gas gets drawn back into the detonation channel). The pressure data containing thermal drift could not be used directly as input for the numerical analysis. However, analysis of all the pressure data revealed that the detonations produced by the predet were consistent in strength and profile. Figure 14 shows 23 sets of pressure data in which thermal drift was absent or minimal. The peak height, duration, and shape of the pressure signals were largely similar. From the figure, we see that the pressure ratios of the first peaks cluster around the value of three.
From later tests with a modified test article, we determined that the pressure waves traveled at speeds between 1200 and 1300 m/s. The pressure ratios and wave speeds both indicate that CJ detonation had not been achieved; instead, the reaction zone was decoupled from the shock. Presently, we have reason to believe that the second peaks correspond to the pressure rise due to the decoupled reaction zone. The next best option was to choose a set of data that was relatively unaffected by thermal drift and had peak values similar to the others, and use it as a single representative input for all computations. The justification for doing so lies in the demonstrated consistency of the measured pressure profiles. We classified the results of the experiments under three broad categories: complete backflow, partial backflow, and limited backflow. Complete backflow is defined by gaseous combustion products penetrating the entire length of the orifice and becoming trapped in the plenum. Partial backflow occurs when the gas-liquid interface propagates upstream into the injector passage with the gaseous phase occupying the entire cross section of the passage. Finally, limited backflow is characterized by the case where gas occupies just a fraction of the injector cross section near the exit plane. The absence of inversion of the liquid-gas interface is also a characteristic of limited backflow. Examples of each category are shown in the figures that follow.
Video data
Under very low-speed liquid injection conditions, combustion gases propagate up the injector passage all the way into the injection plenum as illustrated in Fig. 16 (i.e., a complete backflow situation). Here, we should point out that the injection pressure was very low (6.9 kPa or 1 psi), so this condition is likely not representative of high-pressure combustion conditions in a pressure gain device. Gas first enters the injector passage through the boundary layer on the upwind side of the orifice, and the liquid-gas interface appears tilted toward the upwind side of the injector passage due to this effect. For the test shown, the interface moves the entire length of the injection passage in about 300 µs with an average velocity of 12.7 m/s. Upon penetration into the orifice plenum, some of the gas remains trapped in the plenum due to buoyancy effects. The period between 450 and 517 µs is presumably a time when there is nearly no liquid in the orifice passage except for a small annular liquid region along the wall of the orifice, evident from the visible distortion close to the exit plane. At 517 µs, liquid surrounding the previously continuous column of gas pinches off the column at the orifice entrance and forms a new free surface. The process is apparent during video playback, but not easily seen in the still images. At 596 µs, the free surface becomes more visible as recovery begins. Note that the liquid-gas interface is tilted toward the downwind side of the orifice passage in the last frame; presumably, the upwind side of the passage recovers first during this highly transient process. Figure 17 depicts partial backflow that could presumably occur if a low/intermediate liquid feed pressure (soft injection system) is employed. As with the large backflow condition in Fig. 16, the liquid-gas interface is tilted toward the upwind side of the injector, and the interface inverts its tilt as the flow recovers and liquid pushes out the two-phase region. This interesting behavior appears consistently in the results and appears to be a fundamental multidimensional effect.
Further tests performed with the test article rotated 180 • such that the pressure wave traveled from bottom to top (not shown) displayed identical surface behavior, i.e., the surface tilts upwind during the backflow phase and downwind during recovery phase, indicating that gravity has little effect on the shape of the free surface. One can imagine the high-pressure gas first pushing into the upwind boundary layer in the orifice thereby leading to a tilted free surface. Similarly during the recovery phase, the downwind side of the orifice passage will experience higher-pressure gas conditions last and therefore might cause a delayed recovery relative to the upwind side of the passage. It is surprising that this multidimensional argument appears to hold even when the free surface is pushed a substantial distance upstream into the orifice passage. Here, the backflow duration is of the order of 400-500 µs. Flow recovery appears to occur over a similar time interval. Figure 18 shows a limited backflow situation that is perhaps most relevant to high-pressure operation and injection conditions. Here, the manifold pressure was sufficiently high to prevent the injector from checking off completely; this behavior might be characterized as a stiff injection system. The yellow dashed lines show the approximate edge of continued liquid flow from the downwind region of the orifice even while the upwind portions of the orifice were undergoing backflow. These dynamics tend to be more readily apparent in the video playback. Even though liquid flow persists at the injection plane, the flow rate is very low relative to the full-flow condition. The third image (middle) shows the injector in a state of backflow, and the fourth shows it in the process of recovery. In both of these images, the slope direction of the liquid-gas interface remained the same. In other words, the free surface tilt inversion does not tend to occur under limited backflow conditions. Measurements from video data We analyzed the video data to obtain backflow distance and time taken between the arrival of the detonation wave and refilling of the orifice with liquid. The backflow distance was defined to be the maximum displacement of the liquid-gas interface observed along the centerline of the injector, and refill time was defined as the time between the first observable arrival of the pressure wave and complete refilling of the injector with liquid. Under limited backflow conditions, portions of the orifice continue to flow in the positive direction (into the detonation channel) and the free surface may never cross the centerline, resulting in zero measured backflow distance. For this reason, the measurements are more qualitative, but have value for representing gross trends. Figures 19 and 20 show measured backflow distance and refill time as a function of dimensionless pressure drop P/P m for all injector types. Note that while the injector pressure drop is a dynamic value during tests, the P on the abscissa represents only the initial pressure drop. The interested reader may refer to [25] for the absolute manifold pressure values in each test. We conducted a total of five tests at each condition, and in general, the data show a very repeatable performance with the exception of a few outliers. In Fig. 19, all designs exhibit a nonlinear relation between backflow and pressure drop. 
As one might expect, the shortest injector (Design S) had the largest amount of backflow since it had the smallest amount of liquid mass in the passage to displace. Designs M and L then follow in decreasing order. While Designs P and S have the same orifice length and diameter, the plenum in Design P drastically reduced the amount of backflow. This suggests that at least some of the fluid in the plenum region is interacting with the pressure wave imparted by the detonation. The plenum design may therefore provide a mechanism to tune the injector's resistance to back-pressure disturbances, and it may be desirable to consider this design feature for PGC applications. We introduce the term "refill time" as the length of time for the injector to backflow and completely refill with liquid. Refill time is an important parameter as it is necessary to understand when liquid arrives into the chamber to support the next detonation wave passage. Figure 20 shows how the absolute refill time varies as a function of P/Pm. For the conditions tested, the refill times are all quite long relative to that which might be acceptable in an RDE application. As the orifice pressure drop/velocity is increased, refill time can be all but eliminated. Nevertheless, we investigated the soft injection conditions to assess regions where significant backflow and refill times exist. In general, the results show a surprising lack of sensitivity to injector design at low values of P/Pm. Designs M and S have refill times that almost overlap with each other, with Design L's data points also in close proximity. At higher P/Pm, Designs P and L have noticeably smaller refill times. The plenum appears to serve the role of isolating the impact of the dynamic response to the near-exit region. The relative insensitivity of backflow distance and refill time to injector design seems to suggest that the injector's response is a local phenomenon. In general, the static pressure at the injector exit is the same regardless of injector length, and the local flow processes appear to be primarily controlled by this parameter. However, the boundary layer thickness also plays a role, as it is evident that gas penetration begins in the upstream boundary layer region and can be more pronounced when the boundary layer is thicker. Figure 21 provides a schematic representation of the events based on this assertion and our observations.
Fig. 21. Schematic summarizing the observed behavior in the near-exit region of the orifice due to a passing strong pressure wave.
As a result, the injectors showed similar responses even though their lengths differed by up to a factor of two.
Comparison of predictions with measurements
We used the lumped-parameter model outlined in Sect. 3 to simulate the response of each injector type. Figure 22 provides a sample output from one of the simulations in response to the measured overpressure in Fig. 15. The lower left plot shows the acceleration on the liquid in the injector orifice caused by the pressure pulse. As expected, it has the same profile as that of the pressure signal. The velocity profile in the upper right shows how quickly the liquid-gas interface moves within the injector. Positive values indicate flow toward the detonation channel, and negative values signify backflow. The last and most important plot is the time history of the liquid-gas interface location. From this plot, the two parameters of interest are obtained: maximum backflow distance and recovery time.
These are the two measurable quantities from the experiments and are therefore the bases of comparison. We made comparisons between the lumped-parameter model and the experimental results for Designs L, M, and S. We chose to exclude Design P since the 1-D model did not include provisions to consider plenum geometry, and Design P deviated significantly from a plain orifice. Figure 23 shows model prediction errors for dimensionless backflow distance (as a percentage of orifice length) as a function of the injection Reynolds number computed based on the imposed pressure drop. Errors in predicted backflow in Fig. 23 are very large at low injection Reynolds numbers, and the backflow distance is vastly under-predicted by the simple model. The largest discrepancies occur at conditions far removed from those in a functioning RDE. The model has more reasonable agreement at higher Reynolds number injection conditions, but the overall backflow in these cases is also much smaller. The non-dimensional error in the refill time comparison is shown in Fig. 24. The one-dimensional model consistently under-predicts the refill time by about a factor of 1.75-2.25 (the actual refill time is 75-125% larger than the prediction). Clearly, these errors indicate that the 1-D treatment does not capture the fundamental physics at work. Three potential phenomena could play a role in the disparity: (i) Multidimensional flow effects. The high-speed imaging displays a surface that is far from planar. The interface tilts toward the passing wave during backflow and undulates toward a planar surface during the recovery phase. Clearly, a 1-D model cannot capture this complex multidimensional motion. (ii) Viscous effects. As the injection Reynolds numbers are very low in cases that display dynamic free-surface motion, boundary layers are thick and viscous forces play a large role. The dynamic surface shapes that are observed are clearly affected by this factor, as the interface pushes furthest into the low-momentum fluid in the boundary layer on the upstream side of the orifice; the simple model does not properly account for these physics. (iii) Dynamic manifold response. The model presumes a constant manifold pressure, whereas the manifold will also respond as the compression wave from the passing detonation travels forward into this region. Wave reflections within the manifold will create an expansion wave that will travel back downstream in an attempt to equilibrate pressures after the passage of the detonation. This wave will temporarily decelerate (or even stagnate) the interface until additional wave reflections serve to recover the initial manifold pressure prior to the violent event. For these reasons, a compressible treatment of the flow passage, with a full consideration of the manifold design, may be required to better replicate recovery time periods. While the recovery results display longer times than we might expect with the simple model, the injection conditions where dynamic surface behavior is observed occur at very low manifold pressures, and pressure drops more consistent with operational devices display minimal recovery times in our study. Study of these physics at high ambient pressures is necessary to confirm the expected behavior, i.e., minimal recovery time for "stiffer," high-pressure injectors. Unfortunately, orifice cavitation becomes a concern when replicating these high pressure drops with ambient back-pressure, and these concerns have limited the range of injection conditions that could reasonably be studied with this initial research effort.
Conclusions
We studied the transient response of a liquid injector subjected to a steep-fronted transverse pressure wave by exposing a single plain-orifice injector to a weak hydrogen-oxygen detonation in a transparent structure. Water was the injected fluid, and injection differential pressures of 6.9-34.5 kPa (1-5 psi) were used in injectors that varied in length from 3.81 to 7.62 mm (0.15-0.30 in.). High-speed videos and companion high-frequency pressure measurements provided simultaneous pressure and surface shapes during fluid backflow within the injector. We also created a one-dimensional flow model to assess its ability to predict the measured response. Results have shown that the behavior of the liquid is far from one-dimensional. Instead, the mechanism for backflow is complex because of the boundary layer dynamics that most likely play a major role in gas penetration, especially at low injector Reynolds numbers. Specifically, the detonation wave appears to first propagate into the injector along the boundary layer on the upwind side of the orifice. Because the high-pressure gas first propagates into this region, the free surface is tilted upwind during backflow. An interesting reversal of the surface tilt is observed during flow recovery, except in cases when the overall extent of backflow is limited. The maximum extent of backflow is strongly correlated with the injection pressure drop, and higher injection pressure drops can limit or eliminate backflow. In general, longer orifice passages tend to exhibit less backflow since a larger mass of liquid must be accelerated by the high-pressure gases. We uncovered an interesting behavior with a design that featured a small plenum behind the injection orifice. The reduced plenum diameter tended to limit backflow, and this design feature might offer a mechanism to control dynamic response in practical systems. While the backflow results exhibited significant trends with differing injector designs, the refill time (time for the free surface to return to the orifice exit plane) did not display a strong influence from injector design, and all concepts that we studied had similar behavior, at least at low injection pressure drops. At higher pressure drops more representative of actual operating conditions, refill time was shorter for the longer injection element and the design employing the narrower plenum feature. Once again, the plenum appears to offer features that might be desirable for operational devices. The experiments carried out at atmospheric pressure indicate that while the relatively low injection pressures were unable to prevent backflow from occurring, it only took approximately 20.7 kPa (3 psi) of pressure drop to resist the pressure wave to the point where only limited backflow occurred. Since it is impractical to completely eliminate backflow due to the scaling of detonation pressure with initial pressure, it is a likely scenario that we would want to design injectors to operate in the limited backflow regime. Lastly, the 1-D model shows some promise in the prediction of backflow distance at higher initial Reynolds numbers, but lacks accuracy in predicting refill time, whose prediction error did not appear to show any direct dependence on Reynolds number or injector length. Assessing the model in the higher Reynolds number/injection velocity conditions is desirable because orifice cavitation limited the range of injection velocities in this study.
Further investigations at higher chamber pressures are desirable, since they would permit higher injection velocities and thinner boundary layers, conditions under which the model may become more relevant.
9,329.6
2018-07-01T00:00:00.000
[ "Engineering", "Physics" ]
Ternary Complex of Transforming Growth Factor-β1 Reveals Isoform-specific Ligand Recognition and Receptor Recruitment in the Superfamily* Transforming growth factor (TGF)-β1, -β2, and -β3 are 25-kDa homodimeric polypeptides that play crucial nonoverlapping roles in embryogenesis, tissue development, carcinogenesis, and immune regulation. Here we report the 3.0-Å resolution crystal structure of the ternary complex between human TGF-β1 and the extracellular domains of its type I and type II receptors, TβRI and TβRII. The TGF-β1 ternary complex structure is similar to previously reported TGF-β3 complex except with a 10° rotation in TβRI docking orientation. Quantitative binding studies showed distinct kinetics between the receptors and the isoforms of TGF-β. TβRI showed significant binding to TGF-β2 and TGF-β3 but not TGF-β1, and the binding to all three isoforms of TGF-β was enhanced considerably in the presence of TβRII. The preference of TGF-β2 to TβRI suggests a variation in its receptor recruitment in vivo. Although TGF-β1 and TGF-β3 bind and assemble their ternary complexes in a similar manner, their structural differences together with differences in the affinities and kinetics of their receptor binding may underlie their unique biological activities. Structural comparisons revealed that the receptor-ligand pairing in the TGF-β superfamily is dictated by unique insertions, deletions, and disulfide bonds rather than amino acid conservation at the interface. The binding mode of TβRII on TGF-β is unique to TGF-βs, whereas that of type II receptor for bone morphogenetic protein on bone morphogenetic protein appears common to all other cytokines in the superfamily. Further, extensive hydrogen bonds and salt bridges are present at the high affinity cytokine-receptor interfaces, whereas hydrophobic interactions dominate the low affinity receptor-ligand interfaces. mal development, immune function, and carcinogenesis (1)(2)(3). TGF-␤s are the founding members of a highly diversified family of signaling ligands and receptors, known as the TGF-␤ superfamily. To date the superfamily consists of more than 30 growth factors and cytokines, including TGF-␤s, bone morphogenetic proteins (BMPs), activins, inhibins, nodal, Müllerian inhibiting substance, growth differentiation factors, and others (4). TGF-␤s and related factors signal through two single-pass transmembrane receptors, known as the type I and type II receptors. These two receptor types have the same overall domain structure, including an extracellular ligand-binding domain displaying a three-finger toxin fold, a single transmembrane helix, and a cytosolic serine-threonine kinase domain. Signaling is initiated by the ligand, which binds the receptor extracellular domains, bringing them together and triggering a phosphorylation cascade, whereby the type II phosphorylates the type I, and the type I phosphorylates Smads, the cytoplasmic effectors of the pathway (3). Specificities have been determined based on cell-based affinity labeling studies with radiolabeled ligands and have enabled the identification of major ligands for most receptors of the superfamily, including those specific for BMPs, TGF-␤s, activins, and Müllerian inhibiting substance (4). 
Structural studies of the BMP and TGF-␤ receptor extracellular domains complexed to their cognate ligands have revealed that although ligands and receptors of the different subfamilies share the same overall fold, they nevertheless bind and assemble their receptors in ways that are entirely distinct (5)(6)(7)(8)(9). The distinct mode of ternary complex assembly for BMP-2 and TGF-␤ underscores the complexity governing the ternary complex assembly. That also raises the question about which mode of type II receptor binding is realized for other cytokines in the superfamily and what are the critical factors determining receptor specificity and promiscuity. In addition, the cytokines outnumber their receptors in the family with at least 29 ligands in mammals signaling through seven type I and six receptors (3, 10 -12), raising the question of combinatorial ligand recognition in the superfamily. Further, little is known regarding the underlying mechanisms by which ligands of particular subfamilies induce their specific activities. Although the three TGF-␤ isoforms, TGF-␤1, -␤2, and -␤3, signal through the same receptors and share significant sequence (71-79% identity) and structural similarity (backbone root mean square deviations (r.m.s.d.) Ͻ 1.5 Å) (13)(14)(15)(16), they nevertheless carry out unique functions in vivo as shown by the severe yet distinct phenotypes of the isoform-specific TGF-␤ null mice (17)(18)(19)(20). These differences have been attributed to distinct patterns of expression (17)(18)(19)21), yet some evidence suggests that it might also be due to differences in the ligands themselves. For example, the addition of purified exogenous TGF-␤s has been shown to lead to different outcomes in a bilateral palatal shelf closing assay, with TGF-␤3 promoting complete fusion and TGF-␤1 and -␤2 promoting only partial fusion (22). The application of purified TGF-␤1 and -␤3 has been further shown to lead to dramatic differences in cutaneous wound healing, with TGF-␤3 preventing and TGF-␤1 promoting scarring (23). The objective of this study was to investigate the mechanism by which TGF-␤s bind and assemble T␤RI and T␤RII into a signaling complex, to define the structural principles that underlie TGF-␤ isoformspecific function through comparison with the TGF-␤3 ternary complex, and to define the structural principles governing the combinatorial ligand recognition among TGF-␤ superfamily receptors. Crystallization and Structure Determination-TGF-␤1 was first mixed with T␤RII and then with T␤RI at a molar ratio of ϳ1:4:4. The ternary complex was separated from the excess of T␤RI and T␤RII by size exclusion chromatography in 50 mM NaCl, 20 mM Tris-HCl, pH 8.0. Crystals of the TGF-␤1⅐T␤RI⅐T␤RII ternary complex were obtained by vapor diffusion in hanging drops at room temperature in 8 -15% polyethylene glycol 4000 -8000 at pH 6.0 -8.0. The complex crystals diffracted to 3.0 Å resolution, belonged to a triclinic space group P1 with cell dimensions a ϭ 37.70 Å, b ϭ 99.35 Å, c ϭ 102.7 Å, ␣ ϭ 64.01°, ␤ ϭ 84.47°, and ␥ ϭ 84.34°. The x-ray data sets were collected at the Southeast Regional Collaborative Access Team 22-ID Beamline at the Advanced Photon Source, Argonne National Laboratory. Supporting institutions are listed at the Southeast Regional Collaborative Access Team website. The data were processed and scaled with HKL2000 (43). 
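As a quick consistency check on the reported cell contents, the Matthews coefficient can be estimated from the published cell dimensions. The short sketch below uses the standard triclinic cell-volume formula and assumes a molecular weight of roughly 66 kDa for one 2:2:2 ternary complex (an estimate inferred from the chain lengths, not a value stated in the paper); the resulting solvent content of about 50% is typical for protein crystals and is consistent with two complexes per asymmetric unit.

import numpy as np

a, b, c = 37.70, 99.35, 102.7                   # cell edges, Angstrom
al, be, ga = np.radians([64.01, 84.47, 84.34])  # cell angles

# Volume of a triclinic unit cell
V = a * b * c * np.sqrt(1 - np.cos(al)**2 - np.cos(be)**2 - np.cos(ga)**2
                        + 2 * np.cos(al) * np.cos(be) * np.cos(ga))

mw_complex = 66000.0   # approximate Da per 2:2:2 ternary complex (assumed)
n_per_cell = 2         # two complexes per asymmetric unit; in P1 the cell is the asymmetric unit

vm = V / (mw_complex * n_per_cell)   # Matthews coefficient, Angstrom^3 / Da
solvent = 1.0 - 1.230 / vm           # Matthews (1968) solvent-content estimate
print("cell volume     = %.0f A^3" % V)
print("Matthews coeff. = %.2f A^3/Da" % vm)
print("solvent content = %.0f%%" % (100 * solvent))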
The structure was solved by the molecular replacement method using the structures of TGF-␤2 (Protein Data Bank code 2TGI), the extracellular domains of T␤RII (Protein Data Bank code 1M9Z), and BMPR-Ia (Protein Data Bank code 1REW) as search models. The solutions for TGF-␤1 and T␤RII were obtained with the program Phaser, and the solution for T␤RI was obtained using Evolutionary programming for molecular replacement (44,45). There are two ternary complexes in each asymmetric unit that were modeled using the programs O and COOT (46,47). The structure was refined using a maximum likelihood target function of CNS v1.1 (48) with a 2-fold noncrystallographic (NCS) constraint between the two ternary complexes. Final rounds of the refinement were carried out with phenix.refine (49) without NCS constraints. Several programs from CCP4 program suite were used throughout model building and refinement (50). Surface Plasmon Resonance-Binding studies were performed with a BIAcore 3000 instrument (GE Healthcare) and were analyzed using the software package Scrubber2 (Biologic Software). TGF-␤s were biotinylated and captured on carboxymethyl dextran (CM5) chips to which 5000 response units (RU) streptavidin had been covalently attached to all four flow cells using an amine coupling kit (GE Healthcare). TGF-␤2 was biotinylated in 25 mM MES, pH 4.8, by first activating it with a 10-fold molar excess of 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide hydrochloride (GE Healthcare) in the presence of a 25-fold molar excess of N-hydroxysuccinimide (GE Healthcare) and then by adding a 100-fold molar excess of amine-PEO 3 -biotin (Pierce). TGF-␤1 and -␤3 were biotinylated by first complexing the protein with an excess of T␤RII (4 equivalents) or T␤RI and T␤RII (4 equivalents each) followed by the addition of a 10-fold molar excess of sulfo-N-hydroxysuccinimide-LC-LC-Biotin (Pierce). Singly biotinylated TGF-␤1 and -␤3 were separated from doubly and multiply biotinylated forms by applying them to a Source S cation exchange column (GE Healthcare) in the presence of 30% isopropanol at pH 4.0. To ensure the reliability of the results, surface densities of captured TGF-␤s were kept at 50 -300 RU. T␤RII binding data for TGF-␤1 and -␤3 were collected using ligands that had been modified in the presence of T␤RII, whereas T␤RI binding data (both alone and in the presence of 4 M T␤RII) were collected using ligands that had been modified in the presence of both T␤RI and T␤RII. Binding assays were performed by injecting 2-fold serial dilutions of the receptor in duplicate or triplicate in HBS-EP buffer (GE Healthcare) at a flow rate of either 5 l/min (equilibrium experiments for interactions with slow association times) or 50 -100 l/min (equilibrium experiments for interactions with fast association times and kinetic experiments). The surfaces were regenerated by a brief injection of 10 mM glycine, pH 2.5 (30-s contact time at a flow rate of 50 -100 l/min). Instrument noise was removed by referencing the data against at least three blank buffer injections. A very small background signal, caused by the nonspecific absorption of the receptors to the surfaces, was removed by referencing the data against a flow cell containing only immobilized streptavidin. Equilibrium analyses were performed on steady state measurements using the equilibrium binding response near the end of the injection. Kinetic analyses were performed by global fitting with a simple 1:1 model. 
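For illustration, the equilibrium analysis just described amounts to fitting the steady-state response near the end of each injection to a 1:1 binding isotherm, Req = Rmax*C/(KD + C). A minimal sketch of such a fit is shown below; the concentration series and response values are hypothetical numbers chosen only to illustrate the procedure, not data from these experiments.

import numpy as np
from scipy.optimize import curve_fit

def req_1to1(conc, rmax, kd):
    # Steady-state response of a 1:1 interaction: Req = Rmax * C / (Kd + C)
    return rmax * conc / (kd + conc)

# Hypothetical two-fold dilution series of receptor (M) and equilibrium responses (RU)
conc = np.array([16, 8, 4, 2, 1, 0.5, 0.25]) * 1e-6
req  = np.array([62.0, 55.5, 45.8, 33.8, 21.9, 12.7, 6.9])

(rmax, kd), pcov = curve_fit(req_1to1, conc, req, p0=[70.0, 1e-6])
kd_err = np.sqrt(np.diag(pcov))[1]
print("Rmax = %.1f RU, Kd = %.2f +/- %.2f uM" % (rmax, kd * 1e6, kd_err * 1e6))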
Standard errors were obtained from the variation among the derived parameters from independent measurements using Origin Software. RESULTS AND DISCUSSION Structure of TGF-␤1⅐T␤RI⅐T␤RII Ternary Complex-The structure of a human TGF-␤1⅐T␤RI⅐T␤RII ternary complex was determined to 3.0-Å resolution (Table 1). One TGF-␤1 homodimer binds to two T␤RI and two T␤RII receptors forming a 2:2:2 complex (Fig. 1). Each asymmetric unit contains two complexes related by a NCS 2-fold symmetry. Electron densi-ties for all 112 residues in both chains of TGF-␤1 homodimer, for residues 20 -126 of T␤RII, and for residues 9 -85 of T␤RI were well defined except for two loop regions of T␤RI between residues 36 and 38 and between residues 64 and 71 that correspond to the tips of finger 2 and finger 3 of the three-finger toxin fold. All interface residues between the three components of the complex were well resolved. The overall assembly of the ternary complex is very similar to the reported structure of TGF-␤3⅐T␤RI⅐T␤RII complex (Protein Data Bank code 2PJY) with an r.m.s.d. of 1.14 Å for 357 C␣ atoms (supplemental Fig. S1) (7). In the center of the complex, butterfly-like shaped TGF-␤1 homodimer (TGF-␤1 A and TGF-␤1 B subunits) interacts with two T␤RII and two T␤RI receptors (Fig. 1). The structure of the TGF-␤ monomer was originally described as a slightly curved left hand with ␣1 and ␣3 helixes forming the thumb and the heel of the hand and two antiparallel ␤-sheets forming its four fingers (13). Both receptors T␤RI and T␤RII demonstrate three-finger toxin fold with some differences in the shape, length, and secondary structure of their first and third fingers. The ternary complex defines three pairwise interaction interfaces between TGF-␤1 and T␤RI, TGF-␤1 and T␤RII, and between T␤RI and T␤RII, burying 1518-, 940-, and 740-Å 2 solvent-accessible areas, respectively. There are no significant conformational changes in TGF-␤ and T␤RII upon complex formation. Superposition of the current receptor with the unbound T␤RII (Protein Data Bank code 1M9Z) resulted in an r.m.s.d. of 0.5 Å for 100 C␣ atoms. Similarly, the receptorbound TGF-␤1 can be superimposed with unbound TGF-␤1 (minimized average NMR structure, Protein Data Bank code 1KLC), TGF-␤2 (Protein Data Bank code 2TGI), and TGF-␤3 (Protein Data Bank codes 1TGK and 1TGJ), resulting in an r.m.s.d. of 0.9 -1.5 Å for 100 -109 C␣ atoms. Recognition of TGF-␤1 by the Type I Receptor-The type I receptor contacts both monomers of TGF-␤1, generating two primarily hydrophobic patches of the interface burying 370 and 1150 Å 2 of accessible-solvent area with TGF-␤1 A and TGF-␤1 B , respectively ( Fig. 2 and supplemental Table S1). The interface between TGF-␤1 A and T␤RI consists of Trp 30 , Trp 32 , Tyr 90 , and Leu 101 of the "palm" side of TGF-␤1 A fingers and Ile 54 , Pro 55 , and Phe 60 from T␤RI. This interface is well conserved in the structure of TGF-␤3 ternary complex as well as among the sequences of three TGF-␤ isoforms (Figs. 3A and 4A). The second patch consists of Ala 1 , Leu 2 , Asn 5 , and Tyr 6 from ␣1-helix and Ile 51 , Gln 57 , and Lys 60 from ␣3-helix of TGF-␤1 B contacting His 15 , Leu 16 , Lys 19 , Phe 31 , Ile 54 , and Val 61 from T␤RI with one hydrogen bond between the side chain of Tyr 6 of TGF-␤1 and His 15 of T␤RI. Interestingly, in the TGF-␤3 ternary complex structure, T␤RI is rotated ϳ10°away from TGF-␤ as calculated by program HINGE (Fig. 1C and supplemental Fig. 
S1B) (24), resulting in a partial solvent expo-sure and a 400-Å 2 reduction of buried solvent-accessible area at this interface between T␤RI and TGF-␤3 B . There is, however, one hydrophobic interaction between Thr 67 of TGF-␤3 B and Val 71 of T␤RI, which is absent in current TGF-␤1 complex because of a partial disorder of ␤4-␤5 loop around Val 71 of T␤RI. T␤RI and BMPR-Ia dock to their ligands at a closely related site (Site I) with a 10 -30°difference in their receptor orientations (Fig. 5A). However, both the identity and position of interface residues vary considerably between the type I receptors (Fig. 4), suggesting a promiscuous recognition at Site I. Nevertheless, no cross-reactivity was observed between BMPR-Ia and T␤RI (1,25). Previously (7), a prehelix loop region (between the ␤4-␤5 strands) was suggested to be partially responsible for T␤RI specificity. The current structure points to the new area of interactions between the N-terminal loop of T␤RI (␤1-␤2 loop) and the ␣1-helix of TGF-␤1. The corresponding loop in BMPR-Ia contains insertions that potentially would hinder BMPR-Ia binding to the ␣1-helix of TGF-␤ (Fig. 5A). This ␣1-helix is stabilized by a Cys 7 -Cys 16 disulfide bond in TGF-␤s but is either absent or disordered in the structures of all BMPs, activin, and growth differentiation factor 5 (5-6, 26 -29). Interestingly, ALK1 and ALK2 also have a short ␤1-␤2 loop that correlates with their permissive recognition of both TGF-␤s and BMPs. Recognition of TGF-␤1 by the Type II Receptor-The interactions between TGF-␤1 and T␤RII involve five TGF-␤1 residues (Arg 25 , His 34 , Tyr 91 , Gly 93 , and Arg 94 ) at the tips of its fingers and seven T␤RII residues (Phe 30 , Asp 32 , Ser 49 , Ile 50 , Ser 52 , Ile 53 , and Glu 119 ) on the base of the toxin-fold fingers of the receptor ( Fig. 2 and supplemental Table S1). Although Arg 94 and Tyr 91 of TGF-␤1 form hydrophobic contacts with Ile 50 , Ile 53 , and Phe 30 of T␤RII, the majority of the interactions at this interface are either salt bridges or hydrogen bonds ( Fig. 2 and supplemental Table S1). In particular, Arg 25 and Arg 94 of TGF-␤1 form salt bridges with Glu 119 and Asp 32 of T␤RII, respectively, that are also conserved in T␤RII⅐TGF-␤3 binary and T␤RI⅐TBRII⅐TGF-␤3 ternary complexes (Fig. 3C). A major structural difference between the TGF-␤ and BMP receptor complexes is the association with their respective type II receptors. T␤RII and ActRII not only bind to different sites on their ligands, Site IIa and IIb, respectively, they also use distinct residues for their ligand recognition (Figs. 4 and 5B). T␤RII contacts TGF-␤ via residues from ␤1and ␤2-strands. The secondary structure conformation of this receptor region is stabilized by a disulfide bond between Cys 38 and Cys 44 that is FIGURE 3. Structural superposition between TGF-␤1 and TGF-␤3 ternary complexes. A and B, interface contacts between T␤RI and TGF-␤1 A (A) and between T␤RI and the helix ␣1 of TGF-␤1 B (B). TGF-␤1 complex is colored in light and dark blue for TGF-␤1 A and TGF-␤1 B monomers and red for T␤RI, whereas TGF-␤3 complex is in gray. C and D, the T␤RII/TGF-␤ interface (C) and the T␤RI/T␤RII interface region (D) with TGF-␤1 complex colored in light blue for TGF-␤1, red for T␤RI, and yellow for T␤RII, whereas the TGF-␤3 complex is in gray. unique to T␤RII. The corresponding region in all other type II receptors, including BMPRII and ActRII, lacks both ␤1Ј, ␤1Љstrands and the Cys 38 -Cys 44 disulfide bond. 
It would result in a conformation that would sterically clash with their cytokines at Site IIa (Fig. 5, C and E). Therefore, we can conclude that Site IIa at the "tip of cytokine fingers" is unique and restricted only to TGF-␤ complex assembly. On the other hand, BMPs use their "knuckle site" of their fingers, Site IIb, to bind the ␤4 and ␤5 strands of their receptors. This region in T␤RII contains a unique 5-8-amino acid insertion forming a hook blocking its ␤4 and ␤5 strands from binding to Site IIb (Figs. 4B and 5D). Except T␤RII, other known type II receptors have significantly shorter ␤4-␤5 loops that adopt similar conformation (Fig. 4B), suggesting knuckle site on cytokine is a common type II receptor binding site in TGF-␤ superfamily. Moreover, the ␤4-␤5 region on type II receptors is responsible for their specificity. Here, as in the above mentioned type I receptor recognition, the shorter ␤4-␤5 loop is characteristic for promiscuous type II receptors, such as ActRII and ActRIIB, whereas the ␤4-␤5 loop in BMPRII, which is 3 residues longer, restricts its recognition to BMPs only. The type I and II receptors in TGF-␤ superfamily share a common three-finger toxin fold, yet have distinct binding sites, and do not appear to cross-react. Site I binding involves the residues on the first ␤1-␤2 loop of both T␤RI and BMPR-Ia. This loop is not only 7-13 residues shorter in all type I receptors ( Fig. 4) but is restrained by a conserved Cys 14 -Cys 17 disulfide bond unique to type I receptors. Likewise, Site IIb binding is supported by a conserved disulfide bond (Cys 72 -Cys 84 , ActRIIA numbering) between the ␤5 and ␤6 strands present in all type II receptors but absent in type I receptors (Figs. 4 and 5F). The corresponding region in type I receptors assumes a different conformation stabilized by a conserved disulfide bond (Cys 62 -Cys 76 ; Fig. 4C) present only in the type I receptors. Thus, the ligand recognition by the type I and II receptors of TGF-␤ superfamily appears to rely primarily on structural compatibilities, such as insertions and deletions at their receptor-ligand interfaces and unique disulfide bonds stabilizing specific secondary structure conformations. In general, insertions at the interface restrict the recognition, whereas deletions generate promiscuity. TGF-␤1, -␤2, and -␤3 Exhibit Distinct Receptor Preferences but Comparable Ternary Assembly Affinities-Despite high sequence and structure similarity among TGF-␤ isoforms (Fig. 4), they often cause distinct outcomes in embryonic and adult tissues (17,19,21,30). These differences in TGF-␤s actions might be caused by their distinct interactions with their receptors. TGF-␤1 and -␤3 have previously been shown to bind and assemble T␤RI and T␤RII into complexes in an ordered manner, first by forming a stable binary complex with T␤RII and then by recruiting T␤RI (31,32). Current structures of the ternary complexes support this interdependent manner of assembly (7). To examine the receptor binding properties of the TGF-␤s, we used surface plasmon resonance with TGF-␤s immobilized on the sensor surface between 50 and 300 RU. The initial measurements were aimed at defining the relative affinities of T␤RI and T␤RII for the three isoforms. The sensorgrams obtained upon injection of T␤RII over the TGF-␤1, -␤2, and -␤3 surfaces were characterized by relatively fast on and off rates and could be readily fit using a simple 1:1 binding model (Fig. 6A). 
The derived K D values were consistent with expectations, with both TGF-␤1 and -␤3 (K D values of 190 and 140 nM, respectively) having an affinity more than a 100-fold greater than TGF-␤2 (22.4 M) ( Table 2). Structurally, the T␤RII interface residues Arg 25 and Arg 94 of TGF-␤1 and -␤3 are replaced with lysines in TGF-␤2 (Fig. 4). This likely weakens TGF-␤2 binding to T␤RII because of a shorter lysine side chain and A, sequence comparison of several mammalian TGF-␤s, BMPs, activin, nodal and growth differentiation factor 5 (GDF5). The numbering is consistent with the sequence of TGF-␤1. B, sequence comparison of several type II receptors. The numbering is according to the T␤RII sequence. C, sequence alignment of several type I receptors. The numbering is consistent with the T␤RI sequence. The secondary structure elements are illustrated as arrows and cylinders for ␤-strands and ␣-helices, respectively. The residues involved in interactions in the TGF-␤1 and BMP-2 ternary complexes are highlighted cyan and magenta, respectively. Disulfide bonds critical for receptor/ligand specificity and compatibility and highlighted yellow, numbered, and connected by thick yellow lines. Secondary structure elements crucial for receptor/ ligand pairing are boxed. Arg 25 and Arg 95 are marked by stars. T␤RI, to our surprise, yielded detectable signals when injected over the TGF-␤2 and -␤3 but not TGF-␤1 surfaces (Fig. 6B). The TGF-␤2 sensorgrams could be readily fit to a simple 1:1 binding model with a K D of 11.2 M, whereas the TGF-␤3 sensorgrams could not. The steady state fitting of TGF-␤3 sensorgrams yielded a K D of 2.4 M ( Table 2 and supplemental Fig. S2). Interestingly, TGF-␤2 binds T␤RI with 2-fold higher affinity than T␤RII. To attempt to quantify the weak interaction between T␤RI and TGF-␤1, an equilibrium experiment was performed with a higher density TGF-␤1 surface (686 RU). However, the response was barely detectable over the range of T␤RI concentrations sampled (0 -16 M). An estimate based on equilibrium response indicates a K D greater than 70 M (35). In addition to the type I receptor affinity differences between TGF-␤1 and -␤3, the kinetic association and disassociation rates for TGF-␤3 are also slower (Fig. 6). This could contribute to the functional difference between the TGF-␤ isoforms. Thus, T␤RI also displays ligand preferences with TGF-␤3 Ͼ TGF-␤2 Ͼ Ͼ TGF-␤1. Although T␤RI binds TGF-␤3 tighter than TGF-␤1, the interface between T␤RI and TGF-␤3 is 400 Å 2 smaller than that between T␤RI and TGF-␤1 because of a 10°d ifference in T␤RI docking orientation. The structure shows that 8 of 13 TGF-␤ residues are conserved at the T␤RI interface among the three isoforms. Asn 5 , Ile 51 , Gln 57 , Lys 60 , and Gln 67 vary with Ile 51 and Gln 67 unique to TGF-␤1 and thus may contribute to its weaker T␤RI binding (Figs. 2D and 4A). Despite the fact that T␤RI forms a larger interface area than T␤RII with TGF-␤1 and -␤3, it binds both TGF-␤s weaker compared with T␤RII. Structurally, TGF-␤1 interacts with its type II receptor mostly through hydrogen bonds and salt bridges but with its type I receptor via primarily hydrophobic contacts ( Fig. 4 and supplemental Table S1). A smaller interface yet higher affinity interaction between T␤RII and TGF-␤1 suggests the importance of hydrogen bonds in achieving high receptor-ligand affinity. Similarly, BMP-2 forms 16 hydrogen bonds and salt bridges with its high affinity receptor BMPR-IA but mostly hydrophobic FIGURE 5. 
Structural determinants critical for receptor/ligand recognition in TGF-␤ superfamily. A, alignment of TGF-␤1 and BMP-2 and their type I receptors. TGF-␤1 is colored cyan. For clarity only one monomer of BMP-2 is shown in magenta. T␤RI and BMPR-Ia are in red and blue, respectively. ␤1-␤2 and ␤4-␤5 loops as well as disulfide bonds on receptor and ligand that are critical for receptor/ligand compatibility are marked. Receptor II binding sites in TGF-␤1 and BMP-2 ternary complex (Site IIa and Site IIb) are outlined as ovals. B, schematic representation of TGF-␤-type and BMP-type ternary complex with receptor I and II binding sites marked as Site I, Site IIa, and Site IIb. C, T␤RII in surface representation (yellow) with TGF-␤1 binding site painted cyan. BMPRII ␤1-␤2 loop is represented as a blue ribbon that blocks the binding site, thus prohibiting BMPRII from binding to Site IIa. D, ActRII in surface representation (gray) with BMP-2 contact area colored magenta. T␤RII unique extension of ␤4-␤5 loop (yellow ribbon) and unique conformation of ␤4-␤5 loop in type I receptors (red ribbon) prevent binding of type I receptors and T␤RII to Site IIb. E, surface representation of T␤RI (red) with TGF-␤1 contact area marked cyan. ActRII ␤1-␤2 loop is shown as a gray ribbon that blocks one of the binding sites illustrating impossibility of any type II receptor to bind to Site I. F, critical differences in the conformation between type I (red) and type II (gray) receptors. contacts with the low affinity ActRII. Because hydrogen bonds, in general, provide more specific receptor/ligand recognition than hydrophobic interactions, the dominance of hydrogen bonding interactions at the TGF-␤1/T␤RII and BMP-2/BMPR-IA interfaces is consistent with their higher affinity and thus the preferential recognition of type II receptor for TGF-␤s and type I receptor for BMPs. Hydrogen bonds and salt bridges also dominate the high affinity receptor ␣ chain binding to hematopoietic cytokines, such as interleukin-2 and -4, and van der Waal's contacts primarily occupy low affinity receptor-cytokine interfaces (36 -38). The subsequent binding studies were aimed at quantifying the extent to which one receptor type potentiated the binding of the other. To accomplish this, T␤RII was included in the buffer at a concentration of 4 M, whereas T␤RI was injected over a range of concentrations. The sensorgrams obtained were characterized by slow association and dissociation rates and could each be easily fit to a simple 1:1 binding model (Fig. 6C). The derived K D values were 70, 16, and 14 nM for TGF-␤1, -␤2, and -␤3, respectively (Table 2). Thus, higher T␤RI binding affinities were observed for all three TGF-␤ in the presence of T␤RII, reflecting their similar efficiencies in assembly of the ternary complexes. Except for the case of TGF-␤2 where 4 M T␤RII is 5-fold lower in concentration than their K D , the K D values measured for TGF-␤1 and TGF-␤3 reflect the binding of T␤RI to the T␤RII⅐TGF-␤ binary complex. Structurally, the cooperative receptor binding reflects the favorable contacts between T␤RI and T␤RII FIGURE 6. Surface plasmon resonance sensorgrams and kinetic fits for binding of the T␤RI and T␤RII extracellular domains to TGF-␤1, -␤2, and -␤3. A, sensorgrams obtained as T␤RII was injected. The traces correspond to triplicate measurements of 2-fold serial dilutions of the receptor over the concentration ranges shown. The surface densities were 185, 339, and 165 RU for TGF-␤1, -␤2, and -␤3, respectively. 
The red curves correspond to global fits of each data set to a 1:1 binding model using Scrubber 2 software. B, sensorgrams and kinetic fits obtained as T␤RI was injected. Surface densities were 242, 339, and 595 RU for TGF-␤1, -␤2, and -␤3, respectively. Sensorgrams obtained for TGF-␤3 indicated heterogeneity that could not be fit to a simple 1:1 model, and hence no fit is shown. C, sensorgrams and kinetic fits obtained as T␤RI was injected in the presence of 4 M T␤RII. The surface densities were 498, 339, and 595 RU for TGF-␤1, -␤2, and -␤3, respectively. The close resemblance between the TGF-␤1 and TGF-␤3 ternary complex structures is consistent with their similar binding affinities. However, there are some important differences with regard to assembly. Although TGF-␤1 and TGF-␤3 both bind T␤RII with high affinity, TGF-␤1 binds T␤RI much more weakly than TGF-␤3. This difference in T␤RI binding also persists in the context of the binary complexes, with the T␤RII⅐TGF-␤3 complex having a 5-fold greater affinity for binding and recruiting T␤RI compared with the T␤RII⅐TGF-␤1. To better describe the receptor preference and the cooperative contribution of individual receptors in TGF-␤ signaling complex assembly, we define the receptor preference as the ratio of the dissociation constants between the T␤RI and T␤RII for each TGF-␤. For example, TGF-␤3 binds to T␤RI and T␤RII with 2.4 and 0.17 M affinities, respectively, resulting in a 14-fold receptor preference for T␤RII. This T␤RII preference is estimated to be greater than 200 for TGF-␤1 and vanished completely (i.e. 0.5) for TGF-␤2. In other words, there is no preferential binding to the type II versus type I receptor in TGF-␤2 receptor recruitment. These results demonstrate that although all three TGF-␤s can effectively assemble their ternary complexes, their receptor preferences and the contribution of each receptor to the cooperative assembly appear distinct. TGF-␤1, because of its strong preference for binding T␤RII over T␤RI, assembles ternary complex in a prototypical manner, first binding T␤RII with high affinity (K D ϭ 190 nM) and then and only then binding and recruiting T␤RI (K D ϭ 70 nM). TGF-␤3 also preferentially binds T␤RII over T␤RI, although with less preference compared with TGF-␤1 (Table 2). Therefore, TGF-␤3 also likely binds and assembles its receptors in a largely prototypical manner. In contrast, TGF-␤2 displays no receptor preference, and both receptors contribute nearly equally to the assembly of its ternary complex. This suggests that TGF-␤2, instead of following the sequential receptor recruitment paradigm, engages either T␤RI or T␤RII and then recruits the complementary receptor or requires additional co-receptors to stabilize the binary cytokine-receptor complex. It is also possible that T␤RI and T␤RII associate into a preformed dimer, although no direct binding can be detected in solution between the two receptors. These results could explain the 100 -1000-fold lower potency of TGF-␤2 in inducing functional responses in cells lacking the TGF-␤ co-receptor betaglycan, because the fraction of ligand initially captured on the surface would expected to be lower for TGF-␤2 compared with TGF-␤1 and TGF-␤3 (primarily because of its lower affinity for T␤RII). 
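The receptor preference defined above is simply the ratio of the two dissociation constants measured for each isoform. A small illustration using the KD values quoted in the text (the TβRI value for TGF-β1 is a lower bound):

# Kd values (M) quoted in the text for each receptor ectodomain binding alone
kd = {
    "TGF-beta1": {"TbRI": 70e-6,   "TbRII": 190e-9},   # TbRI affinity reported only as > 70 uM
    "TGF-beta2": {"TbRI": 11.2e-6, "TbRII": 22.4e-6},
    "TGF-beta3": {"TbRI": 2.4e-6,  "TbRII": 0.17e-6},
}
for ligand, k in kd.items():
    preference = k["TbRI"] / k["TbRII"]   # > 1 means TbRII is preferred over TbRI
    print("%s: TbRII preference = %.1f-fold" % (ligand, preference))

This reproduces the roughly 14-fold TβRII preference of TGF-β3, the greater-than-200-fold preference of TGF-β1, and the essentially absent preference (about 0.5) of TGF-β2.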
Based on this, betaglycan likely functions as enhancer of cellular sensitivity to TGF-␤2 by its demonstrated ability to promote binding of TGF-␤2 to T␤RII (39); this in turn should endow TGF-␤2 with the capacity to bind and recruit T␤RI in a manner comparable with that of TGF-␤1 and -␤3. In summary, the structure of TGF-␤1⅐T␤RI⅐T␤RII ternary complex and its comparison with the TGF-␤3 ternary complex showed a common ligand recognition mode in the TGF-␤ family of cytokines by their receptors. In particular, the low affinity type I receptor interacts with both TGF-␤ and T␤RII but with slightly different orientations at site I in the two structures. Among the type I receptors, their ligand specificity appears to correlate with the lengths of their ␤1-␤2 ligand binding loops, with receptors with shorter ␤1-␤2 loop being more promiscuous. The high affinity T␤RII binds to a conserved site (IIa) at the tip of TGF-␤1 and TGF-␤3. The T␤RII binding site at the tips of the cytokine fingers is restricted to TGF-␤ only. All other type II receptors in TGF-␤ superfamily bind to the common site IIb at the knuckles of cytokine. As with type I receptors, the length of the ␤4-␤5 region is a determining factor for type II receptor specificity and promiscuity, whereby the receptors with shorter ␤4-␤5 loop display promiscuous ligand recognition. Hydrogen bonds and salt bridges rather than hydrophobic interactions appear to be critical for the high affinity receptor recognition, in this case T␤RII and TGF-␤1. Unlike TGF-␤1, both TGF-␤2 and TGF-␤3 exhibited significant affinities to T␤RI. Although all three TGF-␤s form their ternary receptor complexes equally well, the variations in the type I and II receptor preferences among the TGF-␤s likely modulate the kinetics of ternary complex assembly. As a result, TGF-␤2 likely recruits the type I and II receptors simultaneously rather than sequentially. The differences in the kinetic assembly of the type I and II TGF-␤ receptors suggest a potential functional variation among TGF-␤s with respect to cellular and tissue distributions of their receptors.
7,324.8
2010-03-05T00:00:00.000
[ "Biology" ]
Linearity Test with Unit Root in TV-ESTAR Framework This paper first proposes an F statistic, together with its limit distribution and critical values, for testing nonlinearity and structural change under a unit root within the TV-ESTAR model framework. The results show that the distribution of the F statistic is nonstandard. The paper then analyzes the finite-sample properties of the F statistic through Monte Carlo simulation and finds that it has better power than the kss statistic of Kapetanios et al. for testing a nonlinear unit root with structural change. Literature Review Since the 1980s, a large number of nonlinear econometric models, methods, and techniques have emerged. Because of the heterogeneity of economic agents, structural changes tend to be smooth rather than sudden, and the Smooth Transition Autoregressive (STAR) model has been widely adopted by the economics community and has become one of the most active research areas. In linearity testing under the unit root condition, one strand of research uses the lag of the variable or its difference as the transition variable to explore nonlinearity, reflecting that economic agents make different choices as historical information changes; the other strand uses time t as the transition variable, reflecting nonlinear structural changes in the behavior of economic agents over time. Kapetanios et al. (2003, hereinafter kss) proposed a unit root linearity test within the framework of the exponential smooth transition autoregressive (ESTAR) model. Kilic (2004) pointed out that the stationarity of the series should be considered in linearity tests of the STAR model. Sandberg (2008) took time t as the transition variable and discussed the linearity test of the TV-LSTAR model. Through a first-order Taylor expansion, he found that the Wald linearity test statistic follows a nonstandard distribution expressed as a functional of Brownian motion. His simulations show that when the series is a random walk, the critical value is higher than that of the standard chi-square distribution applicable under stationarity, but smaller than that of Kilic (2004). The simulations also show that when the linearity test is applied to a series whose data generating process is a unit root, the rejection frequencies of both Sandberg (2008) and Kilic (2004) exceed the nominal 5% significance level, with Kilic (2004) having the higher rejection frequency. Therefore, using the standard chi-square distribution may lead to wrong conclusions. Building on kss (2003), Kilic (2011) used ∆y as the transition variable and proposed a search over the parameter space to find the smallest t statistic for the ESTAR model unit root and linearity tests. The literature above shows that both nonlinearity (kss, 2003; Kilic, 2004, 2011) and structural change (Sandberg, 2008) may be present in unit root linearity testing. Campbell and Perron (1991) pointed out that if the deterministic trend term is specified incorrectly, the power of the ADF test is seriously affected. The unit root test under the ESTAR framework proposed by kss (2003) implicitly assumes, under the alternative hypothesis, that although the speed of mean reversion changes nonlinearly with the size of the deviation, the reversion speed for a given deviation is the same at all times; that is, the unit root and linearity tests are performed under the assumption of no structural change.
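For reference, the kss statistic discussed above is the t-ratio on the cubic term of a Dickey-Fuller-type auxiliary regression obtained from a first-order Taylor expansion of the ESTAR model. The sketch below (demeaned case, no augmentation lags, simulated data) illustrates the standard procedure; it is an illustrative re-implementation, not code from any of the cited papers.

import numpy as np

rng = np.random.default_rng(0)

def kss_stat(y):
    # t-statistic on delta in  dy_t = delta * y_{t-1}^3 + e_t  (demeaned data, no lags)
    y = y - y.mean()
    dy, x = np.diff(y), y[:-1] ** 3
    delta = (x @ dy) / (x @ x)
    resid = dy - delta * x
    s2 = (resid @ resid) / (len(dy) - 1)
    return delta / np.sqrt(s2 / (x @ x))

T = 200
walk = np.cumsum(rng.standard_normal(T))          # unit root null
estar = np.zeros(T)                               # globally stationary ESTAR alternative
for t in range(1, T):
    g = 1.0 - np.exp(-0.1 * estar[t - 1] ** 2)    # exponential transition function
    estar[t] = estar[t - 1] - 0.5 * g * estar[t - 1] + rng.standard_normal()

print("kss statistic, random walk:", round(float(kss_stat(walk)), 2))
print("kss statistic, ESTAR      :", round(float(kss_stat(estar)), 2))
# The statistics are compared against the nonstandard critical values tabulated in kss (2003).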
Sandberg (2008) considered time t as the conversion variable, but only pointed out that when the sequence is a unit root process, the standard chi-square distribution cannot be used, and did not study the relative effectiveness of the unit root linear test under the STAR framework and the TV-STAR framework. The ESTAR model is a very useful model. In this paper, structural changes are added based on kss (2003), and the TV-ESTAR model framework is constructed to analyze the nonlinearity and structural changes in the unit root linear test. The article is arranged as follows: The second part gives the TV-ESTAR model and the null hypothesis for testing whether the unit root has nonlinearity and structural changes; the third part introduces the TV-ESTAR model unit root linearity test method and derives the test statistic F And the limit distribution of F_s, and the critical value of the test statistic is simulated at the same time; the fourth part simulates the level and power of the statistic through the Monte Carlo experiment, and compares it with the linear test power of the kss statistic without structural changes, the fifth part is the conclusion. TV-ESTAR Model and Assumptions Assuming that the variables are non-stationary, the non-linearity and structure change tests are performed in the STAR model with structural changes, and TV-ESTAR formula (1) is given: , 1 , In order to facilitate the construction of statistics for structural changes and nonlinear tests, assume , , 1 0.5, 0,The null hypothesis for testing whether there is nonlinearity can be set toγ 0, and the null hypothesis for testing whether there is a structural change is γ 0 and is unrecognizable, the first-order Taylor expansion can be used to obtain equation (2): , , R γ , γ Is the remainder after Taylor expansion, its value is 0 under the condition that the null hypothesis is established, and it does not affect the asymptotic distribution. According to formula (2), the null hypothesis of linearity test and structural change becomes H0: β β 0, H01: β 0. Not rejecting H0, the sequence does not have nonlinearity, it is a linear unit root process. After rejecting H0, then rejecting H01, the model has both nonlinearity and structural mutation, the TV-ESTAR model is estimated. After rejecting H0, accepting H01, the model has only nonlinearity, and there is no structural change, which is the ESTAR model. If there is a sequence correlation, like kss (2003), equation (1) can be extended to obtain a higher-order test equation (3): Test Statistics for Unit Root Linearity and Structural Change To test H0,assume , ′ is ols of (5), , ′, then: Construct the F statistic as follows (Hamilton, 1999): s Is the residual variance estimated by the model's least squares, R is 2 2 Identity matrix, 2, , 0,0 ′ ,assume diag T , T ,we get: According to the central limit theorem of functionals and the continuous mapping theorem (Hamilton, 1999), there is (⇒ means weak convergence in distribution) under H0,s → σ ,Thus the F statistic can become: Therefore, the distribution of the F statistic of the unit root linear test in the TV-ESTAR model is obtained. For the H01 test, let 0,1 , 1, r,r 0, diag T , T , After calculating we get F ⇒ The above two test statistics both converge to the random functional of the Wiener process. 
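Because the limit distributions are nonstandard, the critical values must be obtained by simulating the statistics under the random-walk null, as is done for Tables 1 and 2 below. The sketch that follows illustrates the general procedure; since the exact regressor set of test equation (2) is not reproduced here, it uses y_{t-1}^3 and (t/T)*y_{t-1}^3 as stand-in regressors for the nonlinearity and structural-change terms (an illustrative assumption, not the paper's specification) and 10,000 replications rather than the 50,000 used for the tables.

import numpy as np

rng = np.random.default_rng(1)

def f_stat(y):
    # F statistic for H0: beta1 = beta2 = 0 in
    #   dy_t = beta1 * y_{t-1}^3 + beta2 * (t/T) * y_{t-1}^3 + e_t   (assumed regressor set)
    dy = np.diff(y)
    n = len(dy)
    X = np.column_stack([y[:-1] ** 3, (np.arange(1, n + 1) / n) * y[:-1] ** 3])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = (resid @ resid) / (n - X.shape[1])
    wald = beta @ np.linalg.inv(s2 * np.linalg.inv(X.T @ X)) @ beta
    return wald / X.shape[1]

reps, T = 10000, 200
stats = np.empty(reps)
for i in range(reps):
    y = np.cumsum(rng.standard_normal(T + 1))     # driftless random walk (the null DGP)
    stats[i] = f_stat(y)
print("simulated 10%/5%/1% critical values:", np.round(np.percentile(stats, [90, 95, 99]), 2))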
For the case of non-zero mean and deterministic trend terms, the asymptotic distribution of the test statistics is the same, but instead of using W(r) W r * is used, where W r * is the Brownian motion of de-mean and de-trend respectively (kss, 2003). For the equation (3) with sequence correlation, its limit distribution has not changed. Therefore, the critical value of the F statistic with sequence correlation will not change. Using Monte Carlo simulation test (50,000 times) to obtain F, the critical value of F_s statistic is shown in Table 1 and Table 2. Note: Case 1, Case2, Case3 correspond to zero mean, non-zero mean, non-zero mean and deterministic trend, respectively. Note: Same as Table 1 4 Limited Sample Size Test First, briefly examine the finite sample characteristics of the F statistics Case 1, Case 2, and Case 3. The data generation process is as shown in equation (18): μ ~iid 0,1 , ρ 0.5,0,0.5 , When the sample size is T=50, 100, and 200, repeat 10,000 times. When calculating the value of the F test statistic, the test models under different settings include the first-order lag of ∆y . Therefore, for the case of ρ=0, the test model is set too much, and the case of ρ=-0.5, 0.5 is tested The model is correctly set, the nominal significance level is set to 5%, and the test results are shown in Table 3. It can be seen that in the process of generating these data, as the sample size increases, the horizontal distortion is smaller. Limited Sample Power Test For the sake of simplicity, and because non-zero mean (no deterministic trend) is a common situation in practical applications, we only list the results of the simulation of case2. The sample size is T=100 and T=200, and the nominal significance level is set to 5%. Assuming that the structural change position parameter is in the middle of the sample point, the data generation process uses the TV-ESTAR formula (4), and different parameters are selected Value, take values for each group of parameters, repeat the simulation 10000 times, calculate the proportion of the rejection of the null hypothesis, that is, the rejection frequency. Since the data generation process is a non-linear model with structural changes, the higher the rejection frequency, Which indicates that the test is more effective, and for the sake of comparison, the rejection frequency obtained by using the kss test as a judgment is given at the same time. let γ 1, γ 0.1,0.5,1 , the combination of autoregressive parameters ρ1 and ρ2 is as follows ρ 0.05, 0.1, 0.3 , ρ 0.05, 0.1, 0.5, 0.9 ,T=100, c=50. From the simulation results (Table 4), it can be seen that when the given γ is constant, the absolute value of ρ , ρ is constantly increasing, and the power of all test statistics is constantly increasing, because this reflects the return of the sequence to the mean. The speed is getting faster and faster. When the absolute difference between the two parameters reflecting the structural change |ρ -ρ |is small, that is, the structural change does not exist or is not obvious, the power of the F test of the linear test is lower than the power of the kss test, and the difference of | ρ -ρ | keeps increasing, that is, the structure changes are becoming more and more obvious, and the power of F test is obviously better than that of kss test. For example, when γ =1, ρ =-0.05, and ρ =-0.9, the power of F statistic is 45% higher than that of kss statistic. 
The reason may be that the F statistic requires more parameters to be estimated and is computed under the assumption that a structural change is present; this over-parameterization reduces the test's power, but as the structural change becomes more pronounced, the advantages of the F test make up for this loss of power. For a given |ρ1 − ρ2|, as the nonlinear transition parameter γ increases, the power of both the kss test and the F test increases, but the power of the F test increases faster. The reason may be that the kss test is performed under the assumption of no structural change, and an increase in γ is effectively equivalent to a relative increase in the |ρ1 − ρ2| difference, which improves the power of the F statistic. In general, when the model exhibits obvious structural change and nonlinearity, the F test has better power than the kss test; when there is no structural change and the nonlinearity is very weak, the F test is inferior to the kss test. Even in the latter case, however, the F test appears to judge the presence of structural change more accurately. Therefore, in practical applications, starting from the linear model, there are two possible paths: 1) perform the kss test first and then the F test, which corresponds to a specific-to-general modeling strategy; 2) perform the F test first and then the F_s test, which corresponds to a specific-to-general-to-specific modeling strategy. Whichever path is taken, the structural-change statistic F_s must be checked, and one can proceed as follows: F_s is computed under the TV-ESTAR framework; if H01 is accepted, there is no structural change in the model and the kss test is used for the linearity test; if H01 is rejected, it is more accurate to use the F statistic for the linearity test. Conclusion This article discusses the linearity testing problem when the series is a unit root process under the TV-ESTAR framework. Structural change is added to the framework of kss (2003), allowing the same size of deviation from the mean to revert at different speeds in different periods. The modeling process requires a two-stage test: first test for the existence of nonlinearity, and then, after rejecting the null hypothesis of a linear unit root, test for the existence of structural change. Unlike the kss test, the F statistics proposed here test for nonlinearity and structural change based on a first-order Taylor expansion, and their limit distributions are derived. Since the limit distributions converge to random functionals of the Wiener process, the critical values of the test statistics are obtained by simulation. For the cases of non-zero mean and deterministic trend, the same de-meaning and de-trending approach as kss is used, and the corresponding critical values are also provided. If higher-order serial correlation is present, the limit distribution of the test statistic does not change, so the critical values remain applicable. Monte Carlo simulation of the finite-sample properties of the proposed statistics shows that when the data generating process is a nonlinear ESTAR model with structural change, the power of the F test is better than that of the kss linearity test.
3,192
2021-11-16T00:00:00.000
[ "Economics" ]
Sensitivity Analysis Techniques Applied in Video Streaming Service on Eucalyptus Cloud Environments ABSTRACT Nowadays, several streaming servers are available to provide a variety of multimedia applications, such as Video on Demand, in cloud computing environments. These environments have strong business potential because of the pay-per-use model, as well as the advantages of easy scalability and up-to-date packages and programs. This paper uses hierarchical modeling and different sensitivity analysis techniques to determine the parameters that have the greatest impact on the availability of a Video on Demand service. The results show that the distinct approaches provide similar sensitivity rankings, with specific exceptions. A combined evaluation indicates that system availability may be improved effectively by focusing on a reduced set of factors that produce large variation in the measure of interest. INTRODUCTION Cloud Computing may be defined as a model for enabling on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction (NIST, 2013). This resource set is typically utilized as a service model wherein the client only pays for what is consumed. Video on Demand (VoD) streaming services are run on cloud infrastructure platforms in order to offer cost savings, easy scalability, and high availability to users. To evaluate the availability of the service, hierarchical analytical models are employed to represent the architecture (Dantas et al., 2012). In this paper, we propose an availability model of a VoD service based on a Eucalyptus cloud environment for evaluating the sensitivity of the VoD service components. To guide the implementation of system improvements, Design of Experiments (DoE) and percentage difference were used to identify availability bottlenecks of the VoD service. The remainder of the paper is organized as follows: Section II introduces related work on system availability and sensitivity analysis; Section III introduces basic concepts of cloud computing technologies, video streaming, dependability modeling and analysis, and sensitivity analysis; Section IV presents the architecture of the system analyzed in this paper; Section V presents the availability model designed for the architecture; Section VI conducts a case study of the architecture analysis and presents the sensitivity analysis; finally, Section VII presents the conclusions of the study and suggests possible future work. RELATED WORKS Recent research has employed hierarchical modeling to represent cloud computing architectures, making it possible to compare different solutions and estimate dependability measures (Dantas et al., 2012; Chuob et al., 2011). Moreover, some works have also employed sensitivity analysis to identify the critical system components and thereby propose infrastructure improvements. Khazaei et al. (2012) integrated an availability model into overall analytical sub-models of a cloud system. Each sub-model captures a specific aspect of cloud centers. Key performance metrics, such as task blocking probability and total delay incurred on user tasks, are obtained. Chuob et al. (2011) proposed a private cloud, modeling the cloud computing environment based upon the Eucalyptus architecture. To understand the behavior of Eucalyptus, the Ubuntu Enterprise Cloud (UEC) was considered as the reference architecture for their cloud test bed environment.
With UEC architecture, it was addressed the availability of each component of the cloud base on Markov chain through the level analysis of hierarchical available model (HAM). Malik et al. (2013) provided a formal analysis modeling and verification of open source state-of-art VM-based cloud management plataforms to model and analyze the structural and behavioral properties of the systems have used high-Level Petri Nets. In (Araujo et al., 2014), an availability model of a digital cloud library through a OpenNebula cloud manager is proposed, using an hierarchical approach to model and evaluate the digital library environment. Measurements were performed to obtain the availability parameters of the library service deployed in a private cloud. In this paper, the authors proposed availability models applied to a cloud environment for VoD streaming service. Design of Experiments (DoE) and percentage difference were applied in order to find the bottlenecks of system availability. BACKGROUND This section presents the concepts which provide a background for this paper, including: cloud computing technologies, video streaming, dependability modeling and analysis and sensitivity analysis techniques. Cloud Computing and the Eucalyptus Platform A cloud computing system is comprised of a bundle of resources, such as hardware, software, development platforms and services, readily usable and accessible through the Internet (Armbrust et al., 2010). Services can be provisioned on different levels by cloud computing providers, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). In a cloud computing environment, virtual resources can also be dynamically allocated and resized to deal with a varying workload, enabling optimal use of physical resources. Cloud computing services are typically accessed through a pay-per-use model (Pereira, 2010) and the provider offers is determined and guaranteed by means of service level agreements. Eucalyptus is a software architecture based on Linux that leverage the implementation of private and hybrid IaaS clouds. This means that users can utilize their own collections of resources (hardware, storage and network) through a selfservice interface according to their needs. The Eucalyptus software framework is modular (Eucalyptus Systems, 2010), and consists of five high level components, each with its own Web Service: Cloud Controller (CLC), Cluster Controller (CC), Node Controller (NC), Storage Controller (SC), e Walrus (Eucalyptus Systems, 2010). Video Streaming Video streaming is a technology used in the transmission of digital multimedia contenton the Internet (Delgado et al., 2006). Streaming enables data to be sent and flow without the need to wait for content to load completely. This requires smaller network bandwidth and less storage space. As the multimedia data arrives, it is stored in a fast buffer before execution starts. Encoding, protocols and buffering mechanism are factors that might affect transmission of a video streaming (Diaz-Sanchez et al., 2011). The video streaming service works with the protocol Real Time Streaming Protocol (RTSP) which allows the control in the data transfer with real time properties. The RTSP makes the transfer possible on demand of real time data with audio and video (Valeriana and Marcelo, 2008). 
Models for Dependability Analysis In early 1980s Laprie coined the term dependability for encompassing concepts such reliability, availability, safety, confidentiality, maintainability, security and integrity etc. (Laprie, 1992;Laprie, 1995), whereas reliability of system at is the probability that the system performs its functions without failing up to time instant . Availability can be expressed as the ratio of the expected system uptime to the expected system up and downtime: There are several types of models that can be used for analytical evaluation of dependability. Reliability Block Diagrams (RBD), Fault Trees, Stochastic Petri Nets (SPN) and Continuous Time Markov Chains (CTMC) have been used to model fault-tolerant systems and evaluate various dependability measures. Sensitivity Analysis The Design of Experiments, known as DoE, is method used to perform sensitivity analysis. Through this method it is possible to assess the importance of each parameter of the system and furthermore it can be used to simultaneously determine the individual and interactive effects of many factors that may affect the output measures. In the DoE method, parameters are called factors and the value that is assigned to each factor is called a level. There are numerous types of DoE and among the most commonly used are: full factorial design, fractional factorial design and simple design (Jain, 1991). Sometimes the number of experiments required for a full factorial design is too large. This may happen if either the number of factors or their levels is large. It may not be possible to use a full factorial design due to the cost or the time required. In such cases, one can only use a fraction of the full factorial design (Mathews, 2004). Fractional factorial designs are commonly used to reduce the number of runs required to build an experiment (Mathews, 2004) In this study we adopted the fractional factorial design, because there are more than five variables that we need to analyze. For 2 designs with five or more variables consider using the fractional factorial designs to reduce the number of runs required by the experiment. SYSTEM ARCHITECTURE The system architecture is based on Eucalyptus cloud computing platform. A broad view of the components of the VoD service architecture is seen in Figure 1 (Melo et al., 2017). The VoD architecture is divided into two sides: client and the server. The physical structure is composed of three machines. One machine is used for frontend and two machines for the nodes. The client connects to the video streaming server through the Internet. A storage volume is allocated in the frontend for storing the collection of videos. A Virtual Machine (VM), running the Apache and VLC applications, is instantiated in the nodes. VLC provides the video streaming features, whereas Apache is responsible for hosting the service on a dedicated Web page. The user issues a request for displaying a video hosted on a specific web page. VLC, in turn, grabs the requested video from the remote storage volume and transmits the stream to the user. AVAILABILITY MODELS This section discusses the availability models employed to represent the redundant architecture evaluated by this research, from which the availability values were calculated. Such values were obtained by the hierarchical combinatorial method, which combines the system state representation of Markov chains with RBDs (Sahner and Trivedi, 1987;Kim et al., 2009), and is the method commonly employed to evaluate complex IT systems. 
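The steady-state availability relation and the RBD composition just described can be illustrated with a short sketch that computes component availabilities from MTTF/MTTR and combines them with the usual series and parallel formulas. The MTTF/MTTR values below are placeholders chosen for illustration, not the values of Table 1.

def availability(mttf, mttr):
    # Steady-state availability: A = MTTF / (MTTF + MTTR)
    return mttf / (mttf + mttr)

def series(*blocks):
    # Series RBD block: every component must be operational
    a = 1.0
    for x in blocks:
        a *= x
    return a

def parallel(*blocks):
    # Parallel RBD block: at least one component operational
    u = 1.0
    for x in blocks:
        u *= 1.0 - x
    return 1.0 - u

# Placeholder MTTF/MTTR values in hours (illustrative only)
a_hw  = availability(8760.0, 1.67)    # frontend hardware
a_os  = availability(2893.0, 0.25)    # operating system
a_clc = availability(788.4, 1.0)      # cloud manager services on the frontend
a_vol = availability(100000.0, 1.0)   # storage volume

a_frontend = series(a_hw, a_os, a_clc)
a_system   = series(a_frontend, a_vol)       # the Service block would be refined further by a CTMC
downtime_h = (1.0 - a_system) * 8760.0
print("frontend availability = %.5f" % a_frontend)
print("system availability   = %.5f (about %.1f h/yr downtime)" % (a_system, downtime_h))
# parallel() is shown for completeness; redundancy in this architecture is captured by the CTMC instead.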
The systems were modeled and evaluated with the SHARPE (Silva et al., 2012) and Mercury tools (MoDCS, 2017), which were specifically designed for the analysis of such models. Model for Architecture RBD and CTMC models were used to represent the subsystems of the architecture presented in Figure 1. These models are then combined, constituting a hierarchical model. The architecture can be divided into three subsystems: the Frontend, the Volume, and the Service. The Volume subsystem is allocated in the frontend for storing the collection of videos. The Service subsystem is further refined by a CTMC (see Figure 3), which allows computing the availability values that are then used at the top level of the RBD. A CTMC was proposed due to the interdependency between the system's components. In the top-level model, the service as well as the node subsystem infrastructure is represented by the Service RBD block. However, the availability of such a subsystem (service plus node subsystem infrastructure) cannot be properly represented by an RBD, since the node subsystem implements an active redundancy mechanism. Therefore, the Service RBD block is refined by the CTMC depicted in Figure 3, which represents the service availability of the node subsystem infrastructure. The CTMC comprises the states UUW, UDU, UUD, and UWU (service available), and the states DDW, DUW, DDD, DWU, DDU, DWD, and DUD (service unavailable). The notation for the states is based on the current condition of each component. The three letters are initialisms of the operating condition of the three components: the service, the first node, and the second node, respectively. The service may be up (U) or down (D). The NCs work alternately in warm standby mode: only one of them should be up (U) at any one time, whilst the other is either in warm standby (W) or down (D). In this model the initial state is UWU, where the service is available, the first node is in warm standby, and the second node is running. From this state it is possible to move to DWU (service failure), DWD (second node failure), or UDU (first node failure). From the DWU state (service down, first node in warm standby, and second node up), the states UWU (representing service repair), DWD (second node failure), or DDU (first node failure) may be reached. From state DWD three outcomes are possible: the failure of all system components (DDD), the initialization of the first node (DUD), or the repair of the second node (DWU). From state UDU (service and second node running), the possible outcomes are DDD (failure of all components), DDU, or UWU. The state DDU can lead either to the failure of all system components (DDD), to the repair of the first node to its waiting state (DWU), or to the instantiation of a new virtual machine with all system applications, making the service available again (UDU). In state DDD all system components are down; the service is unavailable and the two nodes are unavailable. From this state it is possible to reach two other states: repair of the first node (DUD) and repair of the second node (DDU). State DUD represents service unavailability, where the service and the second node are down, but the first node is up. From this state, three other states can be reached: failure of all components of the system (DDD), repair of the service (UUD), or repair of the second node to warm standby mode (DUW). Conversely, state UUD indicates system availability, where the service and the first node are up, but the second node is faulty.
From here, the following three states can be reached: DUD (service failure), UUW (repair of the node to warm standby mode), and DDD (since failure of the only functional node will automatically cause service failure too). In state DUW the system is unavailable due to service application failure, although the first node is up and the second node is in warm standby mode. From DUW the following states can be reached: DUD (failure of the warm standby node), DDW (first node failure), or UUW (service repair). With the service and first node operational, and the second node in warm standby, UUW indicates system availability. From this position in the model, the possibilities are warm standby failure (UUD), service failure (DUW), or first node failure, which would cause the service to become unavailable (DDW). The state DDW indicates system unavailability, with the service and first node down, and the second node in warm standby. From DDW, it is possible to reach three other states: failure of all the components of the system (DDD), initialization of the second node (DDU), or repair of the first node (DUW). System failure is an event that occurs when the provided service deviates from the intended service (Silva et al., 2012; MoDCS, 2017). The failure rate of the two active nodes is denoted λ, whilst μ denotes the node repair rate. A node in warm standby has failure rate λ_w, and the repair rate that returns it to standby is μ_w. A warm standby node is switched to available mode at the activation rate α. The failure rate of the service application is λ_0, while its repair rate is μ_0. The rate λ_0 was obtained as the inverse of the time to failure of the service module; to calculate this result, we used the CTMC model of Figure 3 and the Mercury tool (Silva et al., 2012; MoDCS, 2017). The repair of the service is considered to be the instantiation of a new virtual machine, including all the applications necessary for its operation (Apache and VLC). A closed-form equation for computing the availability of the complete service (A_s) can also be obtained, as demonstrated by Equation 2: A_s = A_fe × A_v × A_0. (2) A_fe (availability of the frontend) and A_v (availability of the volume) can be computed from the RBD of Figure 2, whilst A_0 is calculated from Equation 1. In this equation, A_fe, A_v, and A_0 correspond to the availability of the frontend, volume, and service, respectively. CASE STUDIES Three case studies were designed to analyze the system availability (Melo et al., 2017). The case studies can be summarized as follows: i) Case Study I: availability analysis of the architecture. ii) Case Study II: sensitivity analysis of all the system components (see Figure 2) to establish a ranking of the most important parameters in the video streaming service. iii) Case Study III: analysis of the behavior of the system using another sensitivity analysis technique, the percentage difference. Figure 1 depicts the architecture, which has the dedicated frontend machine and two machines for the nodes. Table 1 shows the values of mean time to failure (MTTF) and mean time to repair (MTTR) used in the model of the Frontend. Those values were obtained from (Kim et al., 2009; Dantas et al., 2012) and were used to compute the dependability metrics for the frontend, and subsequently for the whole system. This subsystem has an MTTF of 180.72 h and an MTTR of 0.96999 h. Case Study I The MTTF and MTTR values used for the Volume subsystem are also described in Table 1. Those values were obtained from Melo et al. (2017) and Kim et al. (2009).
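The warm-standby CTMC described above can be evaluated numerically by assembling its generator matrix and solving for the steady-state distribution. The sketch below encodes the state space and transitions exactly as described in the text; the rate values, however, are placeholders (the actual parameters of Table 2 are not reproduced here), and the symbols λ, μ, λ_w, μ_w, α, λ_0, μ_0 follow the conventional notation used above.

```python
import numpy as np

# Hypothetical rates (1/h); the paper's Table 2 values are not reproduced here.
lam   = 1 / 480.0   # active node failure rate (lambda)
mu    = 1 / 2.0     # node repair rate (mu)
lam_w = 1 / 960.0   # warm-standby node failure rate (lambda_w)
mu_w  = 1 / 2.0     # repair rate back to warm standby (mu_w)
alpha = 1 / 0.1     # activation rate of a warm-standby node (alpha)
lam0  = 1 / 140.0   # service (VM + Apache + VLC) failure rate (lambda_0)
mu0   = 1 / 0.5     # service repair rate, i.e. new VM instantiation (mu_0)

states = ["UWU", "DWU", "DWD", "UDU", "DDU", "DDD",
          "DUD", "UUD", "DUW", "UUW", "DDW"]
up_states = {"UWU", "UDU", "UUD", "UUW"}  # service available

# Transitions (from, to): rate, as described in the text.  UDU->DDD and UUD->DDD
# encode the convention that failure of the only active node also brings the
# service down.
T = {
    ("UWU", "DWU"): lam0, ("UWU", "DWD"): lam,   ("UWU", "UDU"): lam_w,
    ("DWU", "UWU"): mu0,  ("DWU", "DWD"): lam,   ("DWU", "DDU"): lam_w,
    ("DWD", "DDD"): lam_w, ("DWD", "DUD"): alpha, ("DWD", "DWU"): mu,
    ("UDU", "DDD"): lam,  ("UDU", "DDU"): lam0,  ("UDU", "UWU"): mu_w,
    ("DDU", "DDD"): lam,  ("DDU", "DWU"): mu_w,  ("DDU", "UDU"): mu0,
    ("DDD", "DUD"): mu,   ("DDD", "DDU"): mu,
    ("DUD", "DDD"): lam,  ("DUD", "UUD"): mu0,   ("DUD", "DUW"): mu_w,
    ("UUD", "DUD"): lam0, ("UUD", "UUW"): mu_w,  ("UUD", "DDD"): lam,
    ("DUW", "DUD"): lam_w, ("DUW", "DDW"): lam,  ("DUW", "UUW"): mu0,
    ("UUW", "UUD"): lam_w, ("UUW", "DUW"): lam0, ("UUW", "DDW"): lam,
    ("DDW", "DDD"): lam_w, ("DDW", "DDU"): alpha, ("DDW", "DUW"): mu,
}

idx = {s: i for i, s in enumerate(states)}
n = len(states)
Q = np.zeros((n, n))
for (src, dst), rate in T.items():
    Q[idx[src], idx[dst]] += rate
np.fill_diagonal(Q, -Q.sum(axis=1))   # generator matrix: rows sum to zero

# Steady state: pi Q = 0 with sum(pi) = 1 (solved as a least-squares system).
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

A0 = sum(pi[idx[s]] for s in up_states)
print(f"Service-level availability A_0 = {A0:.6f}")
```

The resulting A_0 is the value that the Service block contributes to the top-level RBD (Equation 2).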
Table 1 presents the parameters for the blocks of the RBD model for the structure shown in Figure 2. The MTTF and MTTR values of the Frontend and Volume modules are based on Dantas et al. (2012) and Melo et al. (2017). The availability of the Service module is computed from the CTMC depicted in Figure 3. Table 2 presents all values of the parameters used for computing the availability of the Service module. The values are based on the analyses shown in Dantas et al. (2012) and Melo et al. (2017). For the presented configuration of parameters, we find a value of 0.994401 for the availability of the video streaming system, without redundancy. This availability corresponds to about 49.05 hours of downtime per year, and therefore highlights the importance of searching for effective solutions to improve this system. Case Study II For 2^k designs with five or more factors, fractional factorial designs should be considered to reduce the number of runs required by the experiment (Mathews, 2005). Accordingly, we performed the analysis of a fractional factorial design of experiments to provide a view of the sensitivity of the video streaming service availability with respect to each parameter. This analysis was performed on the 10 parameters shown in the ranking based on partial derivatives. Two levels were considered for each parameter: the minimum and maximum values used in the graphical analysis. This two-level factorial experiment was evaluated according to the individual effects on system availability, and these values are shown in Table 4. The measures of interest were calculated with the values given in Table 1 and Table 3, and the sensitivity ranking of availability for all parameters of the streaming service is given in Table 4. The Minitab tool was used to calculate the sensitivity analysis (Minitab, 2000). The results are ordered according to absolute values. Negative values indicate that there is an inverse relationship between the parameter and system availability. For example, the sensitivity with respect to failure rates is negative due to the fact that as the failure rate increases (that is, the MTTF decreases), availability decreases. The effect indices in Table 4 indicate that the Frontend-related parameters have the largest effect values. Based on the results it can be concluded that the Frontend is the critical point of the video streaming system in terms of availability, and should therefore receive priority when improvements to the system are considered. Table 4 also shows that the node repair rate has the second smallest impact on system availability, with only one other parameter ranking lower. The ranking obtained from the design of experiments sensitivity analysis provides a direct view of the order of importance of all parameters. Figure 4 is a graphic representation of the system availability where the MTTF parameters for the Frontend were altered one at a time, whilst fixing all other parameters to the values given in Table 1 and Table 2. The plot confirms that increasing the times to failure of the Frontend modules results in increased availability. The need to implement improvements in the Frontend modules is confirmed by the ranking given in Table 4. An alternative method for reaching this conclusion would be to implement failover mechanisms in one or both of the components and evaluate the impact on system availability.
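The individual effects reported in Table 4 follow from standard two-level factorial analysis: for each factor, the effect is the difference between the mean response at its high level and at its low level. A minimal sketch is shown below; for brevity it uses a full 2^3 design with made-up factor names and availability responses, whereas the paper used a fractional design over 10 parameters evaluated in Minitab.

```python
import itertools
import numpy as np

# Hypothetical two-level (coded -1/+1) design for three factors and a made-up
# availability response; not the paper's measured data.
factors = ["mttf_frontend", "mttr_frontend", "mttf_service"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))
availability = np.array([0.9930, 0.9952, 0.9921, 0.9943,
                         0.9935, 0.9957, 0.9926, 0.9948])

def main_effect(levels: np.ndarray, response: np.ndarray) -> float:
    """Effect = mean(response | factor high) - mean(response | factor low)."""
    return response[levels == 1].mean() - response[levels == -1].mean()

effects = {name: main_effect(design[:, j], availability)
           for j, name in enumerate(factors)}

# Rank by absolute effect, as in Table 4 (negative values indicate an inverse
# relationship between the parameter and availability).
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>14s}: {eff:+.4f}")
```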
Figure 5 illustrates the system availability as a function of the MTTF of the Frontend module, and clearly demonstrates the impact that MTTF increases have on system availability. Increasing the MTTF of the Frontend results in a reduction in downtime of 44.80 hours per year. Case Study III This case study analyzes the behavior of the system using another sensitivity analysis technique, the percentage difference, with which we can also identify the components that most affect the availability of the system. The percentage difference is calculated using Equation 3, S_θ(Y) = (max{Y(θ)} − min{Y(θ)}) / max{Y(θ)}, (3) where max{Y(θ)} and min{Y(θ)} are the maximum and minimum output values, respectively, computed when varying the parameter θ over the range of its possible values of interest. If Y(θ) is known to vary monotonically, only the extreme values of θ (i.e., its minimum and maximum) need to be used to compute max{Y(θ)} and min{Y(θ)}, and subsequently S_θ(Y) (Jain, 1991). Table 5 presents the ranking of the sensitivity analysis in descending order. To generate the results of this ranking we used the input data from Table 3 and applied Equation (3). Table 5 gives the percentage difference of all system components calculated from the input parameter values given in Table 3. Table 5 indicates that the repair and failure rates of the Frontend module are the most important parameters with respect to availability. A refined analysis combining the two rankings (the 2^k DoE effects and the percentage-difference indices) may provide a reduced list of parameters which deserve the highest priority to improve system availability. We perform such a combined analysis by checking the parameters which appear among the first three positions of both rankings. The parameters which match this criterion are the failure and repair rates of the Frontend and of the service (λ_0 and μ_0). CONCLUSION This paper presented a sensitivity analysis of video streaming service availability based on hierarchical analytical models. Design of Experiments (DoE) and percentage difference were used to assess the impact of each input parameter. The results show that the system availability may be improved effectively by focusing on a reduced set of factors which produce large variation in steady-state availability. Most parameters ranked in the highest positions of the sensitivity rankings are related to the frontend component. This component should receive the highest priority to achieve effective improvements in system availability. We also performed a combined analysis of the two techniques, which is useful to reduce the list of parameters an analyst should focus on and to resolve possible conflicting results from distinct approaches. For future work, the authors propose to implement further sensitivity techniques, and also to extend the scope of the models to different topologies.
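Closing the VoD case studies, the percentage-difference index of Equation 3 can be computed with a few lines of Python. The sketch below uses illustrative parameter values rather than those of Table 3, and a simplified system-availability function built from Equations 1 and 2.

```python
import numpy as np

def percentage_difference(avail_fn, theta_min: float, theta_max: float, n: int = 25) -> float:
    """Percentage-difference index of Equation 3:
    S = (max{A(theta)} - min{A(theta)}) / max{A(theta)},
    evaluated by sweeping theta over its range of interest."""
    thetas = np.linspace(theta_min, theta_max, n)
    values = np.array([avail_fn(t) for t in thetas])
    return (values.max() - values.min()) / values.max()

# Hypothetical example: availability as a function of the frontend MTTF, with
# all other quantities fixed (values are illustrative, not from Table 3).
MTTR_FE, A_VOL, A_SERVICE = 0.97, 0.9999, 0.9995

def system_availability(mttf_fe: float) -> float:
    a_fe = mttf_fe / (mttf_fe + MTTR_FE)   # Equation 1
    return a_fe * A_VOL * A_SERVICE        # series RBD, Equation 2

print(f"S(MTTF_frontend) = {percentage_difference(system_availability, 90.0, 720.0):.4f}")
```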
5,264
2018-01-23T00:00:00.000
[ "Computer Science" ]
Multicolor fluorescence fluctuation spectroscopy in living cells via spectral detection Signaling pathways in biological systems rely on specific interactions between multiple biomolecules. Fluorescence fluctuation spectroscopy provides a powerful toolbox to quantify such interactions directly in living cells. Cross-correlation analysis of spectrally separated fluctuations provides information about intermolecular interactions but is usually limited to two fluorophore species. Here, we present scanning fluorescence spectral correlation spectroscopy (SFSCS), a versatile approach that can be implemented on commercial confocal microscopes, allowing the investigation of interactions between multiple protein species at the plasma membrane. We demonstrate that SFSCS enables cross-talk-free cross-correlation, diffusion, and oligomerization analysis of up to four protein species labeled with strongly overlapping fluorophores. As an example, we investigate the interactions of influenza A virus (IAV) matrix protein 2 with two cellular host factors simultaneously. We furthermore apply raster spectral image correlation spectroscopy for the simultaneous analysis of up to four species and determine the stoichiometry of ternary IAV polymerase complexes in the cell nucleus. Introduction Living cells rely on transport and interaction of biomolecules to perform their diverse functions. To investigate the underlying molecular processes in the native cellular environment, minimally invasive techniques are needed. Fluorescence fluctuation spectroscopy (FFS) approaches provide a powerful toolbox that fulfills this aim (Jameson et al., 2009;Weidemann et al., 2014;Petazzi et al., 2020). FFS takes advantage of inherent molecular dynamics present in biological systems, for example, diffusion, to obtain molecular parameters from fluctuations of the signal emitted by an ensemble of fluorescent molecules. More in detail, the temporal evolution of such fluctuations allows the quantification of intracellular dynamics. In addition, concentration and oligomerization state of molecular complexes can be determined by analyzing the magnitude of fluctuations. Finally, hetero-interactions of different molecular species can be detected by cross-correlation analysis of fluctuations emitted by spectrally separated fluorophores (Schwille et al., 1997). Over the last two decades, several experimental FFS schemes such as raster image (cross-) correlation spectroscopy (RI(C)CS) (Digman et al., 2005;Digman et al., 2009b), (cross-correlation) Number&Brightness analysis (Digman et al., 2008;Digman et al., 2009a), and imaging FCS (Krieger et al., 2015) have been developed, extending the concept of traditional single-point fluorescence (cross-) correlation spectroscopy (F(C)CS) (Magde et al., 1972). A further interesting example of FFS analysis relevant in the field of cell biology is represented by scanning F(C)CS (SF(C)CS). Using a scan path perpendicular to the plasma membrane (PM), this technique provides enhanced stability and the ability to probe slow membrane dynamics (Ries and Schwille, 2006), protein interactions (Ries et al., 2009b;Dunsing et al., 2017), and oligomerization at the PM of cells. FFS studies are conventionally limited to the analysis of two spectrally distinguished species due to (i) broad emission spectra of fluorophores with consequent cross-talk artifacts and (ii) limited overlap of detection/excitation geometries for labels with large spectral separation. 
Generally, only a few fluorescence-based methods are available to detect ternary or higher order interactions of proteins (Galperin et al., 2004;Sun et al., 2010;Hur et al., 2016). First in vitro approaches to perform FCS on more than two species exploited quantum dots (Burkhardt et al., 2005) or fluorescent dyes with different Stokes shifts excited with a single laser line in one- (Hwang et al., 2006) or two-photon excitation (Heinze et al., 2004;Ridgeway et al., 2012a), coupled with detection on two or more single photon counting detectors. Following an alternative conceptual approach, it was shown in vitro that two spectrally strongly overlapping fluorophore species can be discriminated in FCS by applying statistical filtering of detected photons based on spectrally resolved (fluorescence spectral correlation spectroscopy [FSCS]; Benda et al., 2014) or fluorescence lifetime (fluorescence lifetime correlation spectroscopy [FLCS]; Böhmer et al., 2002;Kapusta et al., 2007;Ghosh et al., 2018) detection. Such framework allows the minimization of cross-talk artifacts in FCCS measurements performed in living cells (Padilla-Parra et al., 2011). Recently, three-species implementations of RICCS and FCCS were successfully demonstrated for the first time in living cells. Schrimpf et al. presented raster spectral image correlation spectroscopy (RSICS), a powerful combination of RICS with spectral detection and statistical filtering based on the emission spectra of mEGFP, mVenus, and mCherry fluorophores (Schrimpf et al., 2018). Stefl et al. developed single-color fluorescence lifetime cross-correlation spectroscopy (sc-FLCCS), taking advantage of several GFP variants characterized by short or long fluorescence lifetimes (Štefl et al., 2020). Using this elegant approach, three-species FCCS measurements could be performed in yeast cells, with just two excitation lines. Here, we explore the full potential of FSCS and RSICS. In particular, we present scanning fluorescence spectral correlation spectroscopy (SFSCS), combining SFCS and FSCS. We show that SFSCS enables cross-talk-free SFCCS measurements of two protein species at the PM of living cells tagged with strongly overlapping fluorophores in the green or red regions of the visible spectrum, excited with a single excitation line. This approach results in correct estimates of protein diffusion dynamics, oligomerization, and interactions between both species. Further, we extend our approach to the analysis of three or four interacting partners: by performing cross-correlation measurements on different fluorescent protein (FP) hetero-oligomers, we demonstrate that up to four FP species can be simultaneously analyzed. We then apply this scheme to simultaneously investigate the interaction of influenza A virus (IAV) matrix protein 2 (M2) with two cellular host factors, the tetraspanin CD9 and the autophagosome protein LC3, co-expressed in the same cell. Finally, we extend RSICS for the detection of four molecular species and quantify, for the first time directly in living cells, the complete stoichiometry of ternary IAV polymerase complexes assembling in the nucleus, using three-species fluorescence correlation and brightness analysis. 
Results Cross-talk-free SFSCS analysis of membrane-associated proteins using FPs with strongly overlapping emission spectra and a single excitation wavelength To test the suitability of SFSCS to quantify interactions between membrane proteins tagged with strongly spectrally overlapping fluorophores, we investigated HEK 293T cells co-expressing myristoylated and palmitoylated mEGFP (mp-mEGFP) and mp-mEYFP. These monomeric FPs are anchored independently to the inner leaflet of the PM and their emission maxima are only ca. 20 nm apart (Figure 1-figure supplement 1). The signal originating from the two fluorophores was decomposed using spectral filters (Figure 1-figure supplement 2A) based on the emission spectra detected on cells expressing mp-mEGFP and mp-mEYFP separately (Figure 1-figure supplement 1). We then calculated autocorrelation functions (ACFs) and the cross-correlation function (CCF) for signal fluctuations assigned to each fluorophore species. Representative CFs for a typical measurement are shown in Figure 1A, indicating absence of interactions and negligible cross-talk between the two FPs. In contrast, we observed substantial CCFs when analyzing measurements on cells expressing mp-mEYFP-mEGFP heterodimers (Figure 1-figure supplement 3A). Overall, we obtained a relative cross-correlation (rel. cc.) of 0.72 ± 0.12 (mean ± SD, n = 22 cells) in the latter sample compared to a vanishing rel. cc. of 0.02 ± 0.04 (mean ± SD, n = 34 cells) in the negative control (Figure 1B). Comparison of two types of linker peptides (short flexible or long rigid) between mEGFP and mEYFP showed that the linker length slightly affected rel. cc. values obtained for heterodimers (Figure 1-figure supplement 3C). FPs linked by a short peptide displayed lower rel. cc., probably due to fluorescence resonance energy transfer (FRET), as previously reported (Foo et al., 2012). Therefore, unless otherwise noted, similar long rigid linkers were inserted in all constructs used in this study that contain multiple FPs (see Supplementary file 1a). [Figure 1 caption, panels D-F (partial): (D) Representative CFs obtained from SFSCS measurements on the PM of HEK 293T cells co-expressing mp-mApple and mp-mCherry2; solid thick lines show fits of a two-dimensional diffusion model to the CFs. (E) Relative cross-correlation values obtained from the SFSCS measurements described in (D) ('A + Ch2') or on HEK 293T cells expressing mp-mCherry2-mApple heterodimers ('Ch2-A'). (F) SNR of ACFs for mApple (light red) and mCherry2 (dark red), obtained from the SFSCS measurements described in (D), plotted as a function of the average ratio of detected mApple and mCherry2 fluorescence. Data are pooled from three (B) or two (E) independent experiments each; the number of cells measured is given in parentheses; error bars represent mean ± SD. Source data 1: relative cross-correlation and signal-to-noise ratios for two-species scanning fluorescence correlation spectroscopy measurements.] Overlapping fluorescence emission from different species detected in the same channels provides unwanted background signal and thus reduces the signal-to-noise ratio (SNR) of the CFs (Schrimpf et al., 2018). To assess to which extent the SNR depends on the relative concentration of mEGFP and mEYFP fluorophores, we compared it between measurements on cells with different relative expression levels of the two membrane constructs (Figure 1C).
While the SNR of mEGFP ACFs was only moderately affected by the presence of mEYFP signal (i.e., SNR ranging from ca. 2.5 to 1.0, with 90% to 10% of the signal originating from mEGFP), the ACFs measured for mEYFP showed strong noise when mEGFP was present in much higher amounts (i.e., SNR ranging from 2.5 to 0.2, with 90% to 10% of the signal originating from mEYFP). Next, we tested whether the same approach can be used for FPs with overlapping emission in the red region of the visible spectrum, which generally suffer from reduced SNR in FFS applications (Foust et al., 2019). Therefore, we performed SFSCS measurements on HEK 293T cells co-expressing mp-mCherry2 and mp-mApple. Also, the emission spectra of these FPs are shifted by less than 20 nm (Figure 1-figure supplement 1; spectral filters are shown in Figure 1-figure supplement 2B). Correlation analysis resulted generally in noisier CFs (Figure 1D) compared to mEGFP and mEYFP. Nevertheless, a consistently negligible rel. cc. of 0.04 ± 0.06 (mean ± SD, n = 24 cells) was observed. In contrast, a high rel. cc. of 0.78 ± 0.19 (mean ± SD, n = 18 cells) was obtained on cells expressing mp-mCherry2-mApple heterodimers (Figure 1E, Figure 1-figure supplement 3B). SNR analysis confirmed lower SNRs of the CFs obtained for red FPs (Figure 1F) compared to mEGFP and mEYFP, with values for mApple depending more weakly on the relative fluorescence signal than mCherry2 (i.e., ca. twofold change for mApple vs. ca. fourfold change for mCherry2, when the relative abundance changed from 90% to 10%). [Figure 2 caption: Diffusion and molecular brightness analysis for two-species scanning fluorescence spectral correlation spectroscopy (SFSCS) measurements at the plasma membrane (PM) of HEK 293T cells. (A) Diffusion times obtained from SFSCS measurements on HEK 293T cells expressing either influenza A virus (IAV) HA-mEGFP or mp-mEYFP separately (blue), or co-expressing both fusion proteins (red). (B) Normalized molecular brightness values obtained from SFSCS measurements on HEK 293T cells co-expressing mp-mEGFP and mp-mEYFP (blue), mp-2x-mEGFP and mp-mEYFP (red), or expressing mp-mEGFP alone (yellow); normalized brightness values were calculated by dividing the molecular brightness values detected in each SFSCS measurement by the average brightness obtained for mEGFP and mEYFP in cells co-expressing mp-mEGFP and mp-mEYFP. Data are pooled from two independent experiments for each sample; the number of cells measured is given in parentheses; error bars represent mean ± SD; statistical significance was determined using Welch's corrected two-tailed Student's t-test (****p<0.0001, ns: not significant). Source data 1: diffusion times and normalized molecular brightness values for two-species scanning fluorescence correlation spectroscopy measurements.] We furthermore verified that SFSCS analysis results in correct estimates of protein diffusion dynamics. To this aim, we co-expressed mEGFP-tagged IAV hemagglutinin spike transmembrane protein (HA-mEGFP) and mp-mEYFP. We then compared the diffusion times measured by SFSCS to the values obtained on cells expressing each of the two constructs separately (Figure 2A). For HA-mEGFP, an average diffusion time of 34 ± 9 ms (mean ± SD, n = 21 cells) was determined in cells expressing both proteins. This value was comparable to that measured for HA-mEGFP expressed separately (36 ± 8 ms, mean ± SD, n = 18 cells).
For mp-mEYFP, diffusion times of 9 ± 3 ms and 8 ± 2 ms were measured in samples expressing both proteins or just mp-mEYFP, respectively. In addition to diffusion analysis, we also analyzed the cross-correlation of HA-mEGFP and mp-mEYFP signal for two-species measurements, resulting in negligible rel. cc. values (Figure 2-figure supplement 1). Hence, SFSCS yielded correct estimates of diffusion dynamics and allowed to distinguish faster and slower diffusing protein species tagged with spectrally strongly overlapping FPs. Finally, we evaluated the capability of SFSCS to precisely determine the molecular brightness as a measure of protein oligomerization. We compared the molecular brightness values for mEGFP and mEYFP in samples co-expressing monomeric FP constructs mp-mEGFP and mp-mEYFP with the values obtained for cells co-expressing mp-2x-mEGFP homodimers and mp-mEYFP ( Figure 2B). From SFSCS analysis of measurements in the latter sample, we obtained a normalized molecular brightness of 1.64 ± 0.36 (mean ± SD, n = 21 cells) for mp-2x-mEGFP, relative to the brightness determined in the monomer sample (n = 19 cells). This value is in agreement with our previous quantification of the relative brightness of mEGFP homodimers, corresponding to a fluorescence probability (p f ) of ca. 60-75% for mEGFP . The p f is an empirical, FP-specific parameter that was previously characterized for multiple FPs . It quantifies the fraction of non-fluorescent FPs due to photophysical processes, such as transitions to long-lived dark states, or slow FP maturation and needs to be taken into account to correctly determine the oligomerization state of FP tagged protein complexes. As a reference for the absolute brightness, we also determined the relative molecular brightness of mEGFP in cells expressing mp-mEGFP alone, yielding a value of 1.03 ± 0.21 (mean ± SD, n = 22 cells). Additionally, the brightness values determined for mEYFP in both two-species samples were similar, with a relative ratio of 1.07 ± 0.18, as expected. This confirms that reliable brightness values were obtained and that dimeric and monomeric species can be correctly identified. In summary, these results demonstrate that SFSCS analysis of fluorescence fluctuations successfully separates the contributions of FPs exhibiting strongly overlapping emission spectra, yielding correct quantitative estimates of protein oligomerization and diffusion dynamics. Simultaneous cross-correlation and brightness analysis for three spectrally overlapping FPs at the PM In the previous section, we showed that SFSCS enables cross-talk-free cross-correlation analysis of two fluorescent species excited with a single laser line, even in the case of strongly overlapping emission spectra. To explore the full potential of SFSCS, we extended the approach to systems containing three spectrally overlapping fluorophores. We excited mEGFP, mEYFP, and mCherry2 with 488 nm and 561 nm lines simultaneously and detected their fluorescence in 23 spectral bins in the range of 491-695 nm. We measured individual emission spectra (Figure 1-figure supplement 1) for singlespecies samples to calculate three-species spectral filters (Figure 3-figure supplement 1), which we then used to decompose the signal detected in cells expressing multiple FPs into the contribution of each species. 
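The decomposition step relies on statistical spectral filtering, as introduced for FSCS and RSICS: normalized single-species emission patterns are combined into least-squares filter weights which, applied bin-wise to the detected photon counts, yield unbiased species-specific signals. The following sketch illustrates the standard filter construction with made-up eight-bin spectra for two overlapping fluorophores; the actual 23-bin patterns and the software used in the paper are not reproduced here.

```python
import numpy as np

def spectral_filters(patterns: np.ndarray, mean_counts: np.ndarray) -> np.ndarray:
    """Least-squares statistical filters (one row per species).

    patterns    : (n_species, n_bins) normalized emission patterns, rows sum to 1
    mean_counts : (n_bins,) average detected counts per spectral bin in the mixed sample
    Returns     : (n_species, n_bins) filter weights f such that
                  sum_bins f[k, bin] * counts[bin] estimates species k's photon count.
    """
    M = patterns.T                                   # (n_bins, n_species)
    D = np.diag(1.0 / np.clip(mean_counts, 1e-12, None))
    return np.linalg.inv(M.T @ D @ M) @ (M.T @ D)    # (n_species, n_bins)

# Hypothetical 8-bin emission patterns for two strongly overlapping fluorophores.
p_green  = np.array([0.05, 0.20, 0.30, 0.20, 0.12, 0.08, 0.03, 0.02])
p_yellow = np.array([0.01, 0.05, 0.15, 0.25, 0.25, 0.15, 0.09, 0.05])
patterns = np.vstack([p_green, p_yellow])

# Mixed-sample mean counts per bin (e.g., 600 'green' and 400 'yellow' photons).
mean_counts = 1000 * (0.6 * p_green + 0.4 * p_yellow)

F = spectral_filters(patterns, mean_counts)
print("Recovered species photon counts:", F @ mean_counts)  # approximately [600, 400]
```

Applying these weights to each time bin (or pixel) of a measurement produces one fluctuation trace per species, from which ACFs and CCFs can then be computed as usual.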
As a first step, we performed three-species SFSCS measurements on HEK 293T cells co-expressing mp-mEYFP with either (i) mp-mEGFP and mp-mCherry2 (mp-G + mp-Y + mp-Ch2) or (ii) mp-mCherry2-mEGFP heterodimers (mp-Ch2-G + mp-Y). Additionally, we tested a sample with cells expressing mp-mEYFP-mCherry2-mEGFP heterotrimers (mp-Y-Ch2-G). We then calculated ACFs for all three FP species and CCFs for all fluorophore combinations, respectively. In the first sample (mp-G + mp-Y + mp-Ch2), in which all three FPs are anchored independently to the PM, we obtained CCFs fluctuating around zero for all fluorophore combinations, as expected (Figure 3A). In the second sample (mp-Ch2-G + mp-Y), a substantial cross-correlation was detected between mEGFP and mCherry2, whereas the other two combinations resulted in CCFs fluctuating around zero (Figure 3B). In the heterotrimer sample, CCFs with a low level of noise and amplitudes significantly above zero were successfully obtained for all three fluorophore combinations (Figure 3C). From the amplitude ratios of the CCFs and ACFs, we then calculated rel. cc. values for all fluorophore combinations (Figure 3F). Low rel. cc. values were obtained for all fluorophore combinations that were not expected to show interactions, for example, 0.05 ± 0.08 (mean ± SD, n = 46 cells) between mEGFP and mEYFP signal in the first sample. It is worth noting that these values, albeit consistently negligible, appear to depend on the specific fitting procedure (see Figure 3-figure supplement 2 and Materials and methods for details). For mEGFP and mCherry2, similar rel. cc. values of 0.45 ± 0.06 (mean ± SD, n = 20 cells) and 0.56 ± 0.08 (mean ± SD, n = 17 cells) were observed in cells expressing mp-mCherry2-mEGFP heterodimers or mp-mEYFP-mCherry2-mEGFP heterotrimers, respectively. The minor difference could be attributed, for example, to the different linker peptides (i.e., a long rigid linker between FPs in heterotrimers and a short flexible linker in heterodimers), increasing the degree of FRET between mEGFP and mCherry2 in heterodimers and reducing the cross-correlation. The heterotrimer sample showed high rel. cc. values also for the other two fluorophore combinations: mEGFP and mEYFP (rel. cc. G,Y = 0.79 ± 0.12) or mCherry2 and mEYFP (rel. cc. Y,Ch2 = 0.57 ± 0.07). In addition to cross-correlation analysis, we performed molecular brightness measurements on samples containing three FP species. In particular, we compared molecular brightness values obtained by SFSCS on HEK 293T cells co-expressing homodimeric mp-2x-mEGFP, mp-mEYFP, and mp-mCherry2 (mp-2x-G + mp-Y + mp-Ch2) to the values measured on cells co-expressing the three monomeric constructs mp-mEGFP, mp-mEYFP, and mp-mCherry2 (mp-G + mp-Y + mp-Ch2). Whereas similar brightness values were obtained for mEYFP and mCherry2 in both samples, for example, relative brightness of 1.04 ± 0.23 for mEYFP and 1.03 ± 0.21 for mCherry2 (mean ± SD, n = 25 cells/n = 28 cells), a higher brightness of 1.70 ± 0.46 was measured for mEGFP in the first sample (Figure 3G). This value corresponds to a p f of ca. 70% for mEGFP, as expected. To confirm that absolute brightness values are not influenced by the spectral decomposition, we also determined the brightness of mEGFP in cells expressing mp-mEGFP alone (Figure 3G), resulting in values close to 1 (1.08 ± 0.23, mean ± SD, n = 28 cells).
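Both the relative cross-correlation and the normalized molecular brightness reported here derive from fitted CF amplitudes. The sketch below shows one common way to compute them; the exact definitions used in the paper are given in its Materials and methods, which are not part of this excerpt, and all amplitude and count-rate values are hypothetical.

```python
import numpy as np

def rel_cc(G0_cross: float, G0_auto1: float, G0_auto2: float) -> float:
    """Relative cross-correlation from fitted zero-lag amplitudes.
    Convention assumed here: CCF amplitude normalized to the larger of the two
    ACF amplitudes (other conventions normalize to one specific channel)."""
    return G0_cross / max(G0_auto1, G0_auto2)

def molecular_brightness(mean_count_rate_khz: float, G0_auto: float) -> float:
    """Brightness = count rate per particle; N is estimated as 1/G(0)
    (geometric gamma-factor corrections are omitted for simplicity)."""
    n_particles = 1.0 / G0_auto
    return mean_count_rate_khz / n_particles

# Hypothetical fitted amplitudes and count rates (not values from the paper).
G0_g, G0_y, G0_cross = 0.020, 0.025, 0.012
print(f"rel. cc. = {rel_cc(G0_cross, G0_g, G0_y):.2f}")

eps_sample  = molecular_brightness(mean_count_rate_khz=120.0, G0_auto=G0_g)
eps_monomer = molecular_brightness(mean_count_rate_khz=60.0,  G0_auto=0.020)
print(f"normalized brightness = {eps_sample / eps_monomer:.2f}")  # ~2 suggests dimers
```

In practice the normalized brightness is further corrected by the fluorescence probability p f of each FP before being interpreted as an oligomeric state.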
The IAV protein M2 interacts strongly with LC3 but not with CD9 Having demonstrated the capability of SFSCS to successfully quantify protein interactions and oligomerization, even in the case of three FPs with overlapping emission spectra, we applied this approach in a biologically relevant context. In more detail, we investigated the interaction of the IAV channel protein M2 with the cellular host factors CD9 and LC3. CD9 belongs to the family of tetraspanins and is supposedly involved in virus entry and virion assembly (Florin and Lang, 2018; Hantak et al., 2019; Dahmane et al., 2019). The autophagy marker protein LC3 was recently shown to be recruited to the PM in IAV-infected cells (see also Figure 3-figure supplement 4A,B), promoting filamentous budding and virion stability, thus indicating a role of LC3 in virus assembly (Beale et al., 2014). [Figure 3 caption (partial): (A-C) CFs obtained from three-species SFSCS measurements on HEK 293T cells co-expressing mp-mEGFP, mp-mEYFP, and mCherry2 (A), mp-mCherry2-mEGFP heterodimers and mp-mEYFP (B), or expressing mp-mEYFP-mCherry2-mEGFP heterotrimers (C), as illustrated in insets; solid thick lines show fits of a two-dimensional diffusion model to the CFs. (D) Representative fluorescence images of HEK 293T cells co-expressing CD9-mEGFP, LC3-mEYFP, and IAV protein M2-mCh2; spectral filtering and decomposition were performed to obtain a single image for each species; scale bars are 5 µm. (E) Representative CFs (green/yellow/red: ACFs for mEGFP/mEYFP/mCherry2; purple/blue/gray: CCFs calculated for the pairs mEGFP and mEYFP/mEGFP and mCherry2/mEYFP and mCherry2) obtained from three-species SFSCS measurements on HEK 293T cells co-expressing CD9-mEGFP, LC3-mEYFP, and M2-mCh2; solid thick lines show fits of a two-dimensional diffusion model to the CFs. (F) Relative cross-correlation values obtained from the three-species SFSCS measurements described in (A-C) and (E). (G) Normalized molecular brightness values obtained from three-species SFSCS measurements on HEK 293T cells co-expressing mp-mEGFP, mp-mEYFP, and mp-mCherry2 (blue), mp-2x-mEGFP, mp-mEYFP, and mp-mCherry2 (red), CD9-mEGFP, LC3-mEYFP, and M2-mCh2 (green), or expressing mp-mEGFP alone (yellow); normalized brightness values were calculated by dividing the molecular brightness values detected in each SFSCS measurement by the average brightness obtained for mEGFP, mEYFP, and mCherry2 in cells co-expressing mp-mEGFP, mp-mEYFP, and mp-mCherry2. Data are pooled from two independent experiments for each sample; the number of cells measured is given in parentheses; error bars represent mean ± SD. Source data 1: relative cross-correlation and normalized molecular brightness values for three-species scanning fluorescence correlation spectroscopy measurements.] SFSCS allows simultaneous analysis of protein-protein interactions for four spectrally overlapping FP species Having demonstrated robust three-species cross-correlation analysis, we aimed to further explore the limits of SFSCS. We therefore investigated whether SFSCS can discriminate differential interactions between four species, using the spectral emission patterns of mEGFP, mEYFP, mApple, and mCherry2 for spectral decomposition (Figure 1-figure supplement 1, Figure 4-figure supplement 1).
As a proof of concept, we performed four-species measurements on three different samples: (i) cells co-expressing all four FPs independently as membrane-anchored proteins (mp-G + mp-Y + mp-A + mp-Ch2), (ii) cells co-expressing mp-mCherry2-mEGFP heterodimers, mp-mEYFP, and mp-mApple (mp-Ch2-G + mp-Y + mp-A), and (iii) cells expressing mp-mEYFP-mCherry2-mEGFP-mApple hetero-tetramers (mp-Y-Ch2-G-A). We then calculated four ACFs, six CCFs, and rel. cc. values from the amplitude ratios of the ACFs and CCFs. For all fluorophore species, ACFs with amplitudes significantly above zero were obtained. ACFs calculated for mEGFP and mEYFP were characterized by a higher SNR compared to those for the red FPs mApple and, in particular, mCherry2 (Figure 4A-C). Noise levels of the CCFs were moderate (Figure 4D-F), yet allowing robust fitting and estimation of cross-correlation amplitudes. Based on the determined rel. cc. values (Figure 4G), the different samples could successfully be discriminated. In the first sample (mp-G + mp-Y + mp-A + mp-Ch2), negligible to very low values were obtained, that is, at maximum 0.11 ± 0.11 (mean ± SD, n = 12 cells) for mApple and mCherry2. In the second sample (mp-Ch2-G + mp-Y + mp-A), similarly low rel. cc. values were obtained for all fluorophore combinations, for example, 0.10 ± 0.10 (mean ± SD, n = 13 cells) for mApple and mCherry2, with the exception of mEGFP and mCherry2, showing an average value of 0.55 ± 0.13. For the hetero-tetramer sample, high rel. cc. values were measured for all fluorophore combinations, ranging from 0.42 ± 0.07 (mean ± SD, n = 15 cells) for mEGFP and mApple to 0.78 ± 0.08 for mEGFP and mEYFP. Notably, a significant rel. cc. of 0.53 ± 0.10 was also determined for mApple and mCherry2 signals, that is, from the CCFs exhibiting the lowest SNR. [Figure 4 caption (partial): (A-C) ACFs obtained from four-species SFSCS measurements on HEK 293T cells co-expressing mp-mEGFP, mp-mEYFP, mp-mApple, and mp-mCherry2 (A), mp-mCherry2-mEGFP heterodimers, mp-mEYFP, and mp-mApple (B), or expressing mp-mEYFP-mCherry2-mEGFP-mApple hetero-tetramers (C), as illustrated in insets; solid thick lines show fits of a two-dimensional diffusion model to the correlation functions (CFs). (D-F) SFSCS cross-correlation functions (CCFs) (dark blue/light blue/orange/yellow/red/dark red for CCFs calculated for mEGFP and mEYFP/mEGFP and mApple/mEGFP and mCherry2/mEYFP and mApple/mEYFP and mCherry2/mApple and mCherry2) from the measurements described in (A-C).] RSICS can be extended to simultaneous detection of four fluorophore species Having identified a set of FPs that is compatible with four-species SFSCS, we aimed to extend the recently presented RSICS method (Schrimpf et al., 2018) to applications with four fluorophore species being detected simultaneously. To test the effectiveness of this approach, we carried out measurements in the cytoplasm of living A549 cells co-expressing mEGFP, mEYFP, mApple, and mCherry2 in several configurations, similar to the SFSCS experiments presented in the previous paragraph. In more detail, we performed four-species RSICS measurements on the following three samples: (i) cells co-expressing free mEGFP, mEYFP, mApple, and mCherry2 (1x-G + 1x-Y + 1x-A + 1x-Ch2), (ii) cells co-expressing mCherry2-mEGFP and mEYFP-mApple heterodimers (Ch2-G + Y-A), and (iii) cells expressing mEYFP-mCherry2-mEGFP-mApple hetero-tetramers (Y-Ch2-G-A). Representative CFs obtained following RSICS analysis with arbitrary region selection (Hendrix et al., 2016) are shown in Figure 5.
In all samples, ACFs with amplitudes significantly above zero were obtained, with the highest noise level detected for mCherry2 (Figure 5A, C and E). A three-dimensional diffusion model could be successfully fitted to all detected ACFs. Detected CCFs showed the expected pattern: all six CCFs were indistinguishable from noise for the first sample with four independent FPs (Figure 5B), whereas large CCF amplitudes were obtained for the pairs mEGFP and mCherry2, as well as mEYFP and mApple, in the second sample (Ch2-G + Y-A) (Figure 5D). Also, significantly large amplitudes were observed for all six CCFs for the hetero-tetramer sample, albeit with different levels of noise. For example, the lowest SNR was observed in CCFs for mApple and mCherry2 (Figure 5F). [Figure 4 source data note: the online version of the article includes source data and figure supplements for Figure 4; Source data 1: relative cross-correlation values for four-species scanning fluorescence correlation spectroscopy measurements.] Cross-correlation and molecular brightness analysis via three-species RSICS provide stoichiometry of IAV polymerase complex assembly To test the versatility of three-species RSICS, we quantified intracellular protein interactions and stoichiometries in a biologically relevant context. As an example, we focused on the assembly of the IAV polymerase complex (PC), consisting of the three subunits polymerase acidic protein (PA) and polymerase basic proteins 1 (PB1) and 2 (PB2). A previous investigation using FCCS suggested an assembly model in which PA and PB1 form heterodimers in the cytoplasm of cells. These are imported into the nucleus and appear to interact with PB2 to form heterotrimeric complexes (Huet et al., 2010). Nevertheless, the previous analysis could only be performed between two of the three subunits at the same time. Also, the stoichiometry of the complex was reported only for one of the three subunits, that is, PA protein dimerization. Here, we labeled all three subunits using FP fusion constructs and co-expressed PA-mEYFP, PB1-mEGFP, and PB2-mCherry2 in A549 cells. We then performed three-species RSICS measurements in the cell nucleus, where all three proteins are enriched (Figure 6A). RSICS analysis was performed on an arbitrarily shaped homogeneous region of interest in the nucleus. We then calculated RSICS ACFs (Figure 6B), CCFs (Figure 6C), and rel. cc. values (Figure 6D) for the three fluorophore combinations. The determined rel. cc. values were compared to the values obtained on negative controls (i.e., cells co-expressing free mEGFP, mEYFP, and mCherry) and positive controls (i.e., cells expressing mEYFP-mCherry2-mEGFP heterotrimers) (Figure 6D). For the polymerase sample, high rel. cc. values were observed for all combinations: rel. cc. PB1-G,PA-Y = 0.93 ± 0.18 (mean ± SD, n = 53 cells), rel. cc. PB1-G,PB2-Ch2 = 0.47 ± 0.14, rel. cc. PA-Y,PB2-Ch2 = 0.39 ± 0.14. For the positive control, similar values were observed for mEGFP and mCherry2, rel. cc. G,Ch2 = 0.48 ± 0.11 (mean ± SD, n = 46 cells), whereas the values were higher than those measured for PCs for mEYFP and mCherry2, rel. cc. Y,Ch2 = 0.53 ± 0.11, and lower for mEGFP and mEYFP, rel. cc. G,Y = 0.65 ± 0.10. The lower average rel. cc. between PA-mEYFP and PB2-mCherry2 compared to the positive control indicates the presence of a minor fraction of non-interacting PA and PB2.
These proteins could be present in the nucleus in unbound form when expressed in higher amounts than PB1, since both PA and PB2 localize in the nucleus individually and were previously shown not to interact when both are present without PB1 (Huet et al., 2010). This explanation is supported by the correlation between rel. cc. PA-Y,PB2-Ch2 and the relative abundance of PB1-mEGFP (Figure 6-figure supplement 1A). Also, the observation that PB1 is only transported to the nucleus in complex with PA is confirmed by the lower concentration of PB1-mEGFP compared to PA-mEYFP in the nuclei of all measured cells (Figure 6-figure supplement 1A). Thus, the fraction of PB1-mEGFP bound to PA-mEYFP should be as high as in the positive control, for a 1:1 stoichiometry. The observation of a higher rel. cc. between mEGFP and mEYFP for the polymerase subunits indicates higher order interactions, that is, a stoichiometry higher than 1:1 (Kaliszewski et al., 2018). To quantify the stoichiometry of the PC directly, we analyzed the molecular brightness of RSICS measurements for all three fluorophore species. We normalized the obtained values to the average values determined by RSICS on cells co-expressing monomeric mEGFP, mEYFP, and mCherry2, measured on the same day. To test whether RSICS can be used to obtain reliable brightness/oligomerization values for all fluorophore species, we first performed control experiments on cells co-expressing either (i) 2x-mEGFP homodimers with mEYFP and mCherry monomers (2x-G + 1x-Y + 1x-Ch2) or (ii) the three homodimers 2x-mEGFP, 2x-mEYFP, and 2x-mCherry2 (2x-G + 2x-Y + 2x-Ch2). In the first sample, we observed an increased relative brightness of 1.67 ± 0.38 (mean ± SD, n = 34 cells) for mEGFP, whereas values around 1 were obtained for mEYFP and mCherry2. This confirmed the presence of mEGFP dimers as well as mEYFP and mCherry2 monomers in this control sample, as expected (Figure 6E). In the sample containing all three homodimers, increased relative brightness values were observed for all fluorophore species: 1.75 ± 0.37 (mean ± SD, n = 39 cells) for mEGFP, 1.77 ± 0.33 for mEYFP, and 1.61 ± 0.29 for mCherry2 (see Supplementary file 1b for data on day-to-day variations). These values indicate successful determination of the dimeric state of all three FP homodimers and are in good agreement with previous brightness measurements on homodimers of mEGFP, mEYFP, and mCherry2, corresponding to p f values of 60-75%. Next, we proceeded with the analysis of PC oligomerization. For each polymerase subunit, relative brightness values close to the values of homodimers were observed. Assuming p f values of 75%, 77%, and 61% (as calculated from the determined relative brightness values of homodimers) for mEGFP, mEYFP, and mCherry2, respectively, the relative brightness values measured for the three polymerase subunits are consistent with each subunit being present, on average, as a dimer within the complex. [Figure 5 caption (partial): RSICS measurements on A549 cells co-expressing mEGFP ('G'), mEYFP ('Y'), mApple ('A'), mCherry2 ('Ch2') (A, B), mCherry2-mEGFP and mEYFP-mApple heterodimers (C, D), or mEYFP-mCherry2-mEGFP-mApple hetero-tetramers (E, F); (G, H) relative cross-correlation values (G) and diffusion coefficients (H) obtained from the four-species RSICS measurements described in (A-F); data are pooled from two independent experiments; the number of cells measured is given in parentheses; error bars represent mean ± SD. The online version of the article includes source data and figure supplements for Figure 6: Source data 1 (title continued below).]
[Figure 6 source data 1: Relative cross-correlation, normalized molecular brightness values, and diffusion coefficients for three-species raster spectral image correlation spectroscopy measurements on influenza A virus complex and fluorescent protein hetero-oligomers in the nucleus of A549 cells.] [Figure 7 caption (partial): (A, B) ... in the x and y direction, respectively, across the three detection channels, as described in Materials and methods; (C) relative triple-correlation (rel.3C.) values obtained from the measurements described in (A, B); the number of cells measured is given in parentheses; error bars represent mean ± SD; statistical significance was determined using Welch's corrected two-tailed Student's t-test (****p<0.0001). Source data 1: relative triple-correlation values for triple raster image correlation spectroscopy analysis of influenza A virus polymerase complex in the nucleus of A549 cells.] Triple raster image correlation spectroscopy (TRICS) analysis provides direct evidence for assembly of ternary IAV polymerase complexes To directly confirm that IAV PC subunits form ternary complexes in the cell nucleus, we implemented a triple-correlation analysis (TRICS) to detect coincident fluctuations of the signal emitted by mEGFP-, mEYFP-, and mCherry2-tagged proteins. A similar analysis has previously been presented for three-channel FCS measurements (e.g., fluorescence triple-correlation spectroscopy [Ridgeway et al., 2012a], triple-color coincidence analysis [Heinze et al., 2004]), but was so far limited to in vitro systems such as purified proteins (Ridgeway et al., 2012a) or DNA oligonucleotides (Heinze et al., 2004) labeled with organic dyes. We performed TRICS on data obtained on cells co-expressing the PC subunits PA-mEYFP, PB1-mEGFP, and PB2-mCherry2, or cells co-expressing free mEGFP, mEYFP, and mCherry as a negative triple-correlation control. To evaluate ternary complex formation, we quantified the relative triple-correlation (rel.3C., see Materials and methods) for both samples from the amplitudes of the ACFs and triple-correlation functions (3CFs). Figure 7A and B show representative 3CFs for the negative control and the PC sample, respectively. For the negative control, we obtained rel.3C. values fluctuating around zero (Figure 7C), rel.3C. = −0.02 ± 0.54 (mean ± SD, n = 49 cells). In contrast, significantly higher, positive rel.3C. values were obtained for the polymerase samples, rel.3C. = 0.43 ± 0.38 (mean ± SD, n = 53 cells). The detection of ternary complexes is limited by non-fluorescent FPs, that is, only a fraction of ternary complexes present in a sample will emit coincident signals for all three FP species. In addition, imperfect overlap of the detection volumes for each channel will further reduce the fraction of ternary complexes that can be detected by TRICS. We therefore performed an approximate calculation of the expected rel.3C. value for a sample containing 100% ternary complexes, assuming a p f of 0.7 for each FP species and estimating the reduction due to imperfect overlap from the pair-wise rel. cc. values detected on the positive cross-correlation control (see Appendix 1, Section A1.3 for details). For a 2:2:2 stoichiometry, we obtained an estimated rel.3C. of 0.48, that is, only slightly higher than the average value determined experimentally for IAV PCs. Thus, we estimate that around 90% of PC subunits undergo ternary complex formation in the cell nucleus when all subunits are present.
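Conceptually, triple correlation extends pair correlation to a third-order coincidence analysis: a positive zero-lag third-order amplitude requires that all three filtered signals fluctuate together. The snippet below illustrates this principle with a simplified zero-lag estimator applied to synthetic traces; it is not the rel.3C. quantity defined in the paper's Materials and methods, which is based on the full 3CFs and ACF amplitudes.

```python
import numpy as np

def triple_correlation_amplitude(f1, f2, f3):
    """Zero-lag third-order correlation of three intensity traces:
    G3(0,0) = <dF1 * dF2 * dF3> / (<F1><F2><F3>).
    Positive only if all three signals fluctuate together."""
    d1, d2, d3 = (f - f.mean() for f in (f1, f2, f3))
    return np.mean(d1 * d2 * d3) / (f1.mean() * f2.mean() * f3.mean())

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical traces: a common "ternary complex" component plus independent noise.
common = rng.poisson(2.0, n).astype(float)
bound = [common + rng.poisson(3.0, n) for _ in range(3)]       # co-fluctuating channels
free  = [rng.poisson(5.0, n).astype(float) for _ in range(3)]  # independent channels

print("ternary sample :", triple_correlation_amplitude(*bound))   # clearly positive
print("negative ctrl  :", triple_correlation_amplitude(*free))    # fluctuates around zero
```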
Discussion In this work, we combine FFS techniques with spectral detection to perform multicolor studies of protein interactions and dynamics in living cells. In particular, we present SFSCS, a combination of FSCS (Benda et al., 2014) and lateral scanning FCS (Ries and Schwille, 2006). We show that SFSCS allows cross-talk-free measurements of protein interactions and diffusion dynamics at the PM of cells and demonstrate that it is capable of detecting three or four species simultaneously. Furthermore, we extend RSICS (Schrimpf et al., 2018) to investigate four fluorophore species and apply this approach to determine the stoichiometry of higher order protein complexes assembling in the cell nucleus. Notably, the technical approaches can be carried out on a standard confocal microscope, equipped with a spectral photon counting detector system. In the first part, we present two-species SFSCS using a single excitation wavelength and strongly overlapping fluorophores. Compared to the conventional implementation of FCCS with two excitation lasers and two detectors, two-species SFSCS has substantial advantages, similar to the recently presented sc-FLCCS (Štefl et al., 2020). Since it requires a single excitation line and is compatible with spectrally strongly overlapping FPs, it circumvents optical limitations such as imperfect overlap of the observation volumes. This is evident from higher rel. cc. values of 70-80% measured for mEGFP and mEYFP coupled in FP hetero-oligomers compared to 45-60% observed for mEGFP and mCherry2. Rel. cc. values around 70% are to be expected for the examined FP tandems even in the case of single-wavelength excitation, given that the p f for such fluorophores is indeed around 0.7 (Foo et al., 2012;) (see also SI, paragraph 1). On the other hand, in three-and four species measurements discussed below, FP pairs requiring two excitation wavelengths display the typical reduction of the rel. cc. due to imperfect optical volume overlap. For combinations of green and red FPs, rel. cc. values below 60% were also observed with single-wavelength excitation (Foo et al., 2012;Shi et al., 2009), indicating that overlap of both excitation and detection volumes (the latter requiring FPs with similar emission spectra) is required to maximize the achievable cross-correlation (Foo et al., 2012). Notably, two-species SFSCS can not only successfully discriminate between mEGFP and mEYFP, but is also applicable when using the red FPs mApple and mCherry2. These two FPs were successfully used in several FFS studies Foust et al., 2019;Sankaran et al., 2021), providing the best compromise between brightness, maturation, and photostability among available red FPs, which generally suffer from reduced SNR compared to FPs emitting in the green or yellow part of the optical spectrum Cranfill et al., 2016). In comparison to sc-FLCCS, it may be more robust to discriminate fluorophores based on spectra rather than lifetimes, which can be strongly affected by FRET (Štefl et al., 2020). The emission spectra of the FPs utilized in this study did not depend on cell lines or subcellular localization ( Figure 5figure supplement 1) and showed no (mEGFP, mEYFP) or little (mApple, mCherry2) variation with pH over a range of 5.0-9.2 ( Figure 5-figure supplement 2). For red FPs, specifically mApple, a red shift appeared at more acidic pH, in agreement with previous studies (Hendrix et al., 2008). 
This aspect should be considered for specific applications, for example, RSICS in the cytoplasm containing acidic compartments such as lysosomes. Generally, spectral approaches require accurate detection of photons in each spectral bin. A previous study using the same detection system reported intrinsic cross-talk between adjacent spectral bins (Foust et al., 2019). However, since the methodology presented here is based on temporal (SFSCS) or spatial (RSICS) correlation (both excluding the correlation at zero time or spatial lag), this issue can be neglected in our analysis. A major limitation of SFSCS is the reduced SNR of the CFs (see Figure 1, Figure 3-figure supplement 3) caused by the statistical filtering of the signal emitted by spectrally overlapping fluorophore species (see, e.g., Figure 4-figure supplement 1). This limitation applies to all FFS methods that discriminate different fluorophore species based on spectral (e.g., FSCS [Benda et al., 2014], RSICS [Schrimpf et al., 2018]) or lifetime patterns (e.g., sc-FLCCS [Štefl et al., 2020]). The increase in noise depends on the spectral (or lifetime) overlap of different species and is more prominent for species that completely lack 'pure' channels, that is, detection channels in which the majority of photons can be univocally assigned to a single species (Schrimpf et al., 2018). In sc-FLCCS, this issue particularly compromises the SNR of short lifetime species (Štefl et al., 2020) since photons of longer lifetime species are detected in all 'short lifetime' channels at substantial relative numbers. In these conditions, sc-FLCCS could not provide reliable results with sixfold (or higher) difference in relative protein abundance, even though the lower abundant protein was tagged with the brighter, longer lifetime FP (Štefl et al., 2020). Similarly in SFSCS, CFs corresponding to mEYFP or mCherry2 were most prone to noise ( Figure 1C and F) since all channels that contain, for example, mEYFP signal also contain mEGFP signal (Figure 1-figure supplement 1). In our experiments, cross-talk-free SFSCS analysis with two species excited with a single excitation wavelength could be performed for relative intensity levels as low as 1:10 (mEGFP/mEYFP) or 1:5 (mApple/mCherry2). In this range, SFSCS not only enabled the quantification of protein interactions via cross-correlation analysis, but also yielded correct estimates of protein diffusion dynamics and oligomerization at the PM. An improvement of the allowed relative concentration range can be achieved by using brighter or more photostable fluorophores, for example, organic dyes, compensating for reduced SNR due to statistical filtering. Alternatively, FP tags could be selected based on proteins' oligomerization state. For example, monomeric proteins exhibiting low molecular brightness should be tagged with fluorophores that are less prone to noise. It should be noted that the limitation of reduced SNR due to excess signal from another species also applies to conventional dual-color FCCS: bleed-through from green to red channels can be corrected on average, but reduces the SNR in red channels (Bacia et al., 2012), unless more sophisticated schemes such as pulsed interleaved excitation (Müller et al., 2005;Hendrix et al., 2013) are applied. 
Having demonstrated that two-species SFSCS is feasible with a single excitation wavelength in the green (mEGFP, mEYFP) or red (mApple, mCherry2) part of the visible spectrum, we finally implemented three-and four-species SFSCS as well as four-species RSICS. These extensions do not further compromise the SNR of CFs detected for mEGFP and mEYFP (see Figure 3-figure supplement 3A,B), but may additionally reduce the SNR of CFs corresponding to red FPs (in particular when mEGFP and/or mEYFP concentration is much higher than that of red FPs, Figure 3-figure supplement 3C). For this reason, three-and four-species analysis was restricted to cells with relative average intensity levels of 1:5 or less between species with adjacent emission spectra. In this range, the increase in noise due to statistical filtering was moderate, benefitting from the fairly large spectral separation of green/yellow and red emission (Figure 3-figure supplement 3). In addition, the higher molecular brightness of mApple (compared to mCherry2) compensated for the larger overlap of this FP with the tail of mEYFP emission. The excitation power for red FPs was generally limited by the lower photostability of mApple, which could be responsible for consistently lower rel. cc. values of mEGFP or mEYFP with mApple than with mCherry2. Nevertheless, four-species SFSCS and RSICS could successfully resolve different combinations of strongly overlapping FP hetero-oligomers, for example, a mixture of mEGFP-mCherry2 and mEYFP-mApple heterodimers, at the PM or in the cytoplasm of cells. To explore the interaction of four different FP-tagged proteins, four-species FFS may substantially reduce the experimental effort because all pair-wise interactions can be quantified in a single measurement (instead of six separate conventional two-species FCCS measurements). Yet, weak interaction of proteins, that is, a low amount of hetero-complexes compared to a high amount of unbound proteins, may not be detectable due to the large noise of the CCF in this case. The SNR might be further compromised by slow FP maturation or dark FP states, limiting the amount of complexes that simultaneously emit fluorescence of all bound FP species . Ultimately, the mentioned limitations currently restrict SFSCS and RSICS to four FP species. The approaches would thus strongly benefit from a multiparametric analysis. For instance, combining spectral and lifetime detection schemes would provide additional contrast for photons detected in the same spectral bin. This improvement could expand the range of detectable relative concentrations or might allow further multiplexing of FFS. Conventional two-color scanning FCCS has been previously applied to quantify receptor-ligand interactions in living zebrafish embryos (Ries et al., 2009b) and CRISPR/Cas9 edited cell lines to study such interactions at endogenous protein level (Eckert et al., 2020). SFSCS is thus directly applicable in the complex environment of living multicellular organisms. In this context, spectral information could be further exploited to separate low signal levels of endogenously expressed, fluorescently tagged proteins from autofluorescence background. As a first biological application of SFSCS, we investigated the interaction of IAV matrix protein M2 with two cellular host factors: the tetraspanin CD9 and the autophagosome protein LC3. 
We observed strong association of LC3 with M2, and consequent recruitment of LC3 to the PM (Figure 3-figure supplement 4), in agreement with previous in vitro and localization studies (Beale et al., 2014). Interestingly, molecular brightness analysis reported oligomerization (dimers to tetramers) of M2, but indicated a monomeric state of LC3 at the PM, that is, binding of LC3 to M2 in an apparent stoichiometry of 1:2 to 1:4. However, each M2 monomer provides a binding site for LC3 in the cytoplasmic tail (Claridge et al., 2020). A more detailed analysis of our data showed that in the analyzed cells (i.e., cells showing clear membrane recruitment of LC3, Figure 3-figure supplement 4A,B), the PM concentration of LC3 was on average only 30% compared to that of M2 (Figure 3-figure supplement 4C), although both proteins were expressed in comparable amounts in the sample in general. This suggests that not all potential binding sites in the cytoplasmic tail of M2 may be available to fluorescently tagged LC3, either due to binding of endogenous LC3, other cellular host factors, or steric hindrance. In contrast to the case of LC3, we did not detect significant binding of M2 with the tetraspanin CD9, a protein that was previously shown to be incorporated into IAV virions and supposedly plays a functional role during the infection process (Shaw et al., 2008;Hutchinson, 2014). Of note, we cannot exclude the possibility that the FP tag at the C-terminus of CD9 might hamper interactions with M2, in the specific case of M2-CD9 interaction being mediated by the C-terminal cytoplasmic tails of the two proteins. In future studies, the approach presented here may be used to further elucidate the complex interaction network of viral proteins, for example, matrix protein 1 (M1) (Hilsch et al., 2014), M2, HA, and neuraminidase, cellular host factors, and PM lipids (Bobone et al., 2017) during the assembly process of IAV at the PM of living cells (Rossman and Lamb, 2011). Finally, we demonstrated that RSICS allows the quantification of the stoichiometry of higher order molecular complexes, based on molecular brightness analysis for each FP species. As example of an application in a biological context, we determined the stoichiometry of the IAV PC. Our data provide strong evidence for a 2:2:2 stoichiometry of the PC subunits PA, PB1, and PB2, that is, dimerization of heterotrimeric PCs. Such interactions were previously proposed based on experiments in solution using X-ray crystallography and cryo-electron microscopy (Fan et al., 2019), co-immunoprecipitation assays (Jorba et al., 2008;Nilsson-Payant et al., 2018), as well as single-channel brightness analysis of FCCS data (for the PA subunit) (Huet et al., 2010). Intermolecular interactions in the PC are hypothesized to be required for the initiation of vRNA synthesis during replication of the viral genome (Fan et al., 2019;Chen et al., 2019). The results presented here provide the first quantification of these interactions in living cells and a direct estimate of the stoichiometry of PCs in the cell nucleus. The formation of ternary PC complexes in these samples could be extrapolated from the observed high rel. cc. values for all three pair combinations, indicating very low amounts of unbound PA, PB1, or PB2 and higher order interactions (see Appendix 1, Section A1.1 for additional details). 
Furthermore, this observation could also be directly confirmed by performing, for the first time in living cells, a triple-correlation analysis (TRICS), indicating the presence of a considerable amount of PA-PB1-PB2 complexes. It is worth noting, though, that the detection of coincident triple fluctuations is prone to considerable noise and thus still limited to molecular complexes present at low concentration and characterized by high molecular brightness for each fluorophore species (Ridgeway et al., 2012a; Ridgeway et al., 2012b). Of note, the RSICS approach presented here provides for the first time simultaneous information on molecular interactions, molecular brightness (and thus stoichiometry), diffusion dynamics, and concentration for all three complex subunits. This specific feature opens the possibility of a more in-depth analysis. For example, it is possible to quantify the relative cross-correlation of two subunits, e.g., PA and PB2, as a function of the relative concentration of the third subunit, for example, PB1 (Figure 6-figure supplement 1A). Similarly, molecular brightness and diffusion coefficients can be analyzed as a function of the abundance of each subunit (Figure 6-figure supplement 1B,C). With this approach, it is therefore possible to distinguish specific molecular mechanisms, such as inefficient PA-PB2 interactions in the presence of low PB1 concentration or efficient heterotrimer dimerization when all subunits are present at similar concentrations. The employed experimental scheme offers a powerful tool for future studies, exploring, for example, the interaction of the PC with cellular host factors or the development of inhibitors that could interfere with the assembly process of the complex, as a promising therapeutic target for antiviral drugs (Massari et al., 2021).

Limitations

We summarize in this section the main instrumental, conceptual, and sample-related limitations and requirements connected to the multicolor FFS approach employed in this work.

Instrumental limitations

To perform multicolor FFS, a spectral photon counting detector system is required. Alternatively, the same conceptual approach can be implemented based on detection of fluorophore lifetimes rather than emission spectra (Štefl et al., 2020). For both approaches, two excitation wavelengths are currently required for three- and four-species detection. As a consequence, the overlap of the excitation volumes of the two laser lines might be limited, thus reducing the maximum achievable rel. cc., as previously discussed for standard FCCS (Foo et al., 2012). For the instrumentation utilized in the present work, the time resolution for SFSCS was limited to 0.5 ms. However, RSICS can be applied to detect faster dynamics, as demonstrated by experiments on cytoplasmic proteins.

Conceptual limitations

FFS approaches generally require the proteins of interest to diffuse and thus cannot be applied in the case of immobile or strongly clustered targets (Ciccotosto et al., 2013). The statistical filtering of spectrally overlapping FP emission leads to increased noise of CFs. FPs lacking 'pure' channels, for example, mEYFP when co-expressed with mEGFP, are most compromised. As a consequence, the approach provides reliable results only in a certain range of relative protein abundance. For the presented three- and four-species SFSCS and RSICS experiments, relative signals were limited to 1:5 (i.e., range of 1:5 to 5:1).
The given ratios characterize the minimum acceptable signal ratio for spectrally neighboring fluorescent species, for the FPs utilized in this work. The set of FPs may be optimized for specific applications. The increase in noise as a result of filtering may prevent detection of weak protein interactions due to the low SNR of CCFs in this case. Furthermore, detection of co-fluctuations of three FP species based on triple correlation is prone to considerable noise and thus limited to the detection of molecular complexes present at low concentrations or characterized by high molecular brightness, as discussed previously for in vitro studies (Ridgeway et al., 2012a).

Sample-related limitations

To apply multicolor FFS, multiple FP species (e.g., FP-tagged proteins of interest) have to be expressed in the same cell, in relative amounts compatible with the ranges given above. Since tagging of proteins of interest with FPs is required (or other labels such as organic dyes, if the labeling ratio can be precisely determined), potential hindrance of protein interactions by the tags should be carefully evaluated. Typical measures consist of, for example, testing different positions for the tag in the protein of interest, trying different linkers with varying length and flexibility, using tags with smaller sizes, or bio-orthogonal labeling (Huang et al., 2014; Işbilir et al., 2021). The emission spectra of most FPs are typically well-defined, but might depend on physicochemical conditions (e.g., mApple showed red-shifted emission at more acidic pH). Differences between calibrated and actual spectra could induce errors in filtering and cause residual cross-talk between different FP species (Schrimpf et al., 2018). Therefore, the same optical components (e.g., filters, beam splitters) and experimental conditions (e.g., laser powers, sample media, dishes) should be used to calibrate the spectra. Due to lower photostability and quantum yield, red FPs suffer from reduced SNR and, thus, larger variation of parameter estimates compared to green FPs. This is most evident for mCherry2 in four-species applications. In addition, molecular brightness and cross-correlation analysis are compromised by FP maturation. Slow maturation will lead to an increased fraction of dark states, increasing the noise of CCFs and reducing the dynamic range for brightness analysis of protein oligomers (Foo et al., 2012). Cross-correlation analysis may be further affected by FRET between different FP species, potentially reducing experimental rel. cc. values (Foo et al., 2012). This should be carefully evaluated, for example, by analyzing molecular brightness values relative to monomeric references, for both the proteins of interest and the FP hetero-oligomers used to calibrate the maximum achievable rel. cc. FRET artifacts can be minimized using appropriate linkers, for example, rigid linker peptides, as presented here.

Conclusions

In summary, we present here three-species and, for the first time, four-species measurements of protein interactions and diffusion dynamics in living cells. This is achieved by combining and extending existing FFS techniques with spectrally resolved detection. The presented approaches provide a powerful toolbox to investigate complex protein interaction networks in living cells and organisms.
Materials and methods

Cell culture and sample preparation

Human embryonic kidney (HEK) cells from the 293T line (purchased from ATCC, Manassas, VA; CRL-3216TM) and human epithelial lung cells A549 (ATCC, CCL-185TM) were cultured in Dulbecco's modified Eagle medium (DMEM) with the addition of fetal bovine serum (10%), L-glutamine (2 mM), penicillin (100 U/mL), and streptomycin (100 µg/mL). Mycoplasma contamination tests and morphology tests were performed every 3 months and 2 weeks, respectively. Cells were passaged every 3-5 days, no more than 15 times. All solutions, buffers, and media used for cell culture were purchased from PAN-Biotech (Aidenbach, Germany). For the cloning of all following constructs, standard PCRs with custom-designed primers were performed, followed by digestion with fast digest restriction enzymes and ligation with T4-DNA-Ligase according to the manufacturer's instructions. All enzymes and reagents were purchased from Thermo Fisher Scientific. The plasmids GPI-mEYFP and GPI-EGFP were a kind gift from Roland Schwarzer. GPI-mEGFP was cloned by amplifying mEGFP from an mEGFP-N1 vector and inserting it into GPI-EGFP using digestion with AgeI and BsrGI. To generate GPI-mApple and GPI-mCherry2, mApple and mCherry2 inserts were amplified from PMT-mApple and mCherry2-C1, respectively, and inserted into GPI-mEYFP using restriction by AgeI and BsrGI. All plasmids generated in this work will be made available on Addgene.

Confocal microscopy system

SFSCS and RSICS were performed on a Zeiss LSM880 system (Carl Zeiss, Oberkochen, Germany) using a 40×, 1.2 NA water immersion objective. For two-species measurements, samples were excited with a 488 nm argon laser (mEGFP, mEYFP) or a 561 nm diode laser (mCherry2, mApple). For three- and four-species measurements, both laser lines were used. To split excitation and emission light, 488 nm (for two-species measurements with mEGFP and mEYFP) or 488/561 nm (for measurements including mCherry2 and mApple) dichroic mirrors were used. Fluorescence was detected in spectral channels of 8.9 nm width (15 channels between 491 nm and 624 nm for two-species measurements on mEGFP, mEYFP; 14 channels between 571 nm and 695 nm for two-species measurements on mCherry2, mApple; 23 channels between 491 nm and 695 nm for three- and four-species measurements) on a 32-channel GaAsP array detector operating in photon counting mode. All measurements were performed at room temperature.

Scanning fluorescence spectral correlation spectroscopy (SFSCS)

Data acquisition

For SFSCS measurements, line scans of 256 × 1 pixels (pixel size 80 nm) were performed perpendicular to the PM with 403.20 µs scan time. This time resolution is sufficient to reliably detect the diffusion dynamics observed in the samples described in this work (i.e., diffusion times ~6-60 ms). Typically, 450,000-600,000 lines were acquired (total scan time ca. 2.5-4 min). Laser powers were adjusted to keep photobleaching below 50% at maximum for all species (average signal decays were ca. 10% for mEGFP, 30% for mEYFP, 40% for mApple, and 20% for mCherry2). Typical excitation powers were ca. 5.6 µW (488 nm) and ca. 5.9 µW (561 nm). Spectral scanning data were exported as TIFF files (one file per three spectral channels), imported, and analyzed in MATLAB (The MathWorks, Natick, MA) using custom-written code (Dunsing and Chiantia, 2021).
Data analysis

SFSCS analysis followed the scanning FCS scheme described previously (Ries and Schwille, 2006), combined with spectral decomposition of the fluorescence signal by applying the mathematical framework of FLCS and FSCS (Benda et al., 2014; Böhmer et al., 2002). Briefly, all scan lines were aligned as kymographs and divided into blocks of 1000 lines. In each block, lines were summed up column-wise and across all spectral channels, and the lateral position with maximum fluorescence was determined. This position defines the membrane position in each block and was used to align all lines to a common origin. Then, all aligned line scans were averaged over time and fitted with a Gaussian function. The pixels corresponding to the PM were defined as pixels within ±2.5 SD of the peak. In each line and spectral channel, these pixels were integrated, providing membrane fluorescence time series F_k(t) in each spectral channel k (m channels in total). These time series were then temporally binned with a binning factor of 2 and subsequently transformed into the contributions F_i(t) of each fluorophore species i (i.e., one fluorescence time series for each species) by applying the spectral filtering algorithm presented by Benda et al., 2014:

F_i(t) = Σ_k f_k^i F_k(t).

Spectral filter functions f_k^i were calculated based on reference emission spectra p_k^i that were determined for each individual species i from single-species measurements performed on each day using the same acquisition settings:

f_k^i = ([M^T D^-1 M]^-1 M^T D^-1)_ik.

Here, M is a matrix with elements M_ki = p_k^i and D is a diagonal matrix with elements D_kk = <F_k(t)>. In order to correct for depletion due to photobleaching, a two-component exponential function was fitted to the fluorescence time series for each spectral species, F_i(t), and a correction formula was applied (Ries et al., 2009a). Finally, ACFs and pair-wise CCFs of the fluorescence time series of species i and j were calculated as follows using a multiple tau algorithm:

G_i,j(τ) = <δF_i(t) δF_j(t + τ)> / (<F_i(t)> <F_j(t)>), with δF_i(t) = F_i(t) − <F_i(t)>.

To avoid artifacts caused by long-term instabilities or single bright events, CFs were calculated segment-wise (10-20 segments) and then averaged. Segments showing clear distortions (typically less than 25% of all segments) were manually removed from the analysis. A model for two-dimensional diffusion in the membrane and Gaussian focal volume geometry (Ries and Schwille, 2006) was fitted to all CFs:

G(τ) = (1/N) (1 + τ/τ_d)^(-1/2) (1 + τ/(S² τ_d))^(-1/2).

To ensure convergence of the fit for all samples (i.e., ACFs and CCFs of correlated and uncorrelated data), positive initial fit values for the particle number N and thus G(τ) were used. In the case of uncorrelated data, that is, for CFs fluctuating around zero, this constraint can generate low but positive correlation amplitudes due to noise. This issue can be circumvented, if needed, by selecting adaptive initial values, for example, obtaining the initial amplitude value from averaging the first points of the CFs (see Figure 3-figure supplement 2). To calibrate the focal volume, point FCS measurements with Alexa Fluor 488 (Thermo Fisher Scientific) dissolved in water at 20 nM were performed at the same laser power. The structure parameter S was fixed to the average value determined in calibration measurements (typically between 4 and 8). From the amplitudes of ACFs and CCFs, rel. cc. values were calculated for all cross-correlation combinations:

rel. cc. = max[G_i,j(0)/G_i(0), G_i,j(0)/G_j(0)],

where G_i,j(0) is the amplitude of the CCF of species i and j, and G_i(0) the amplitude of the ACF of species i.
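For illustration, a minimal Python sketch of the decomposition and correlation steps described above is given below. The actual analysis was performed with custom MATLAB code using a multiple-tau algorithm and segment-wise averaging; the plain lag-loop correlation estimator and the max-based rel. cc. convention used here are simplifying assumptions, and the filter matrix is assumed to have been computed from reference spectra (e.g., as in the earlier filter sketch).

```python
# Minimal sketch of SFSCS spectral decomposition, correlation, and rel. cc.
# (not the authors' MATLAB pipeline; fitted amplitudes should normally come
# from the diffusion-model fits described in the text).
import numpy as np

def decompose(channel_traces, filters):
    """channel_traces: (n_channels, n_times) membrane fluorescence F_k(t).
    filters: (n_species, n_channels) spectral filter weights f_k^i.
    Returns species traces F_i(t), shape (n_species, n_times)."""
    return filters @ channel_traces

def correlation(f_i, f_j, max_lag):
    """G_ij(tau) = <dF_i(t) dF_j(t+tau)> / (<F_i> <F_j>), tau = 1..max_lag-1
    (the zero-lag point is omitted, as in the spectral correlation analysis)."""
    di, dj = f_i - f_i.mean(), f_j - f_j.mean()
    n = len(f_i)
    g = [np.mean(di[: n - lag] * dj[lag:]) for lag in range(1, max_lag)]
    return np.array(g) / (f_i.mean() * f_j.mean())

def rel_cc(g_cross0, g_auto_i0, g_auto_j0):
    """Relative cross-correlation from (fitted) zero-lag amplitudes; the
    max-based convention is one common choice and an assumption here."""
    return max(g_cross0 / g_auto_i0, g_cross0 / g_auto_j0)
```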
The molecular brightness was calculated by dividing the mean count rate detected for each species i by the particle number N_i determined from the fit: B_i = <F_i(t)>/N_i. From this value, an estimate of the oligomeric state ε_i was determined by normalizing B_i by the average molecular brightness B_i,1 of the corresponding monomeric reference and, subsequently, by the fluorescence probability p_f,i, that is, ε_i = (B_i/B_i,1 − 1)/p_f,i + 1, as previously derived. The p_f was previously characterized for several FPs (for example, ca. 60% for mCherry2). The SNR of the ACFs was calculated by dividing ACF values by their variance and summing over all points of the ACF. The variance of each point of the ACF was calculated in the multiple tau algorithm (Wohland et al., 2001). To ensure statistical robustness of the SFSCS analysis and sufficient SNR, the analysis was restricted to cells expressing all fluorophore species in comparable amounts, that is, relative average signal intensities of less than 1:10 (mEGFP/mEYFP) or 1:5 (mApple/mCherry2, three- and four-species measurements).

Raster spectral image correlation spectroscopy (RSICS)

Data acquisition

RSICS measurements were performed as previously described (Ziegler et al., 2020). Briefly, 200-400 frames of 256 × 256 pixels were acquired with 50 nm pixel size (i.e., a scan area of 12.83 × 12.83 µm² through the midplane of cells), 2.05 µs or 4.10 µs pixel dwell time, 1.23 ms or 2.46 ms line time, and 314.57 ms or 629.14 ms frame time (corresponding to ca. 2 min total acquisition time per measurement). Samples were excited at ca. 5.6 µW (488 nm) and 4.6 µW (561 nm) excitation powers, respectively. Laser powers were chosen to maximize the signal emitted by each fluorophore species while keeping photobleaching below 50% at maximum for all species (average signal decays were ca. 10% for mEGFP, 15% for mEYFP, 40% for mApple, and 25% for mCherry2). Typical counts per molecule were ca. 25 kHz for mEGFP (G), 15-20 kHz for mEYFP (Y), 20-30 kHz for mApple (A), and 5-10 kHz for mCherry2 (Ch2). To obtain reference emission spectra for each individual fluorophore species, four image stacks of 25 frames were acquired at the same imaging settings on single-species samples on each day.

Data analysis

RSICS analysis followed the implementation introduced recently (Schrimpf et al., 2018), which is based on applying the mathematical framework of FLCS and FSCS (Benda et al., 2014; Böhmer et al., 2002) to RICS. Four-dimensional image stacks I(x, y, t, k) (time-lapse images acquired in k spectral channels) were imported into MATLAB (The MathWorks) from CZI image files using the Bioformats package (Linkert et al., 2010) and further analyzed using custom-written code (Dunsing and Chiantia, 2021). First, average reference emission spectra were calculated for each individual fluorophore species from single-species measurements. Four-dimensional image stacks were then decomposed into three-dimensional image stacks I_i(x, y, t) for each species i using the spectral filtering algorithm presented by Schrimpf et al., 2018 (following the mathematical framework given in the SFSCS section). Cross-correlation RICS analysis was performed in the arbitrary region RICS framework (Hendrix et al., 2016). To this aim, a polygonal region of interest (ROI) was selected in the time- and channel-averaged image frame containing a homogeneous region in the cytoplasm (four-species measurements on FP constructs) or nucleus (three-species measurements on polymerase complex and related controls) of cells.
This approach allowed us to exclude visible intracellular organelles or pixels in the extracellular space, while including all pixels containing signal from the nucleus of cells. In some cells, nucleus and cytoplasm could not be clearly distinguished. In these cases, all pixels were selected and minor brightness differences between cytoplasm and nucleus, previously found to be ca. 10%, were neglected. Image stacks were further processed with a high-pass filter (with a moving four-frame window) to remove slow signal variations and spatial inhomogeneities. Afterwards, RICS spatial ACFs and pair-wise CCFs were calculated for each image stack and all combinations of species i, j (e.g., G and Y, G and Ch2, Y and Ch2 for three species), respectively (Schrimpf et al., 2018; Hendrix et al., 2016):

G_i,j(ξ, ψ) = <δI_i(x, y) δI_j(x + ξ, y + ψ)> / (<I_i> <I_j>).

ACF amplitudes were corrected as described in Hendrix et al., 2016 to account for the effect of the high-pass filter. A three-dimensional normal diffusion RICS fit model (Digman et al., 2005; Digman et al., 2009b) for Gaussian focal volume geometry (with particle number N, diffusion coefficient D, waist ω0, and structure parameter S as free fit parameters) was then fitted to both ACFs and CCFs:

G(ξ, ψ) = (1/N) (1 + 4D(τ_p ξ + τ_l ψ)/ω0²)^(-1) (1 + 4D(τ_p ξ + τ_l ψ)/(S² ω0²))^(-1/2) exp[−δs²((ξ − ξ0)² + ψ²)/(ω0² + 4D(τ_p ξ + τ_l ψ))],

where τ_p, τ_l denote the pixel dwell and line time and δs the pixel size. The free parameter ξ0 (starting value = 13 pixels) was used to determine which CCFs were too noisy (i.e., ξ0 > 4 pixels) to obtain meaningful parameters (typically in the absence of interaction). For ACF analysis, ξ0 was set to 0. To remove shot noise contributions, the correlation at zero lag time was omitted from the analysis. From the fit amplitudes of the ACFs and CCFs, rel. cc. values were calculated:

rel. cc. = max[G_i,j(0,0)/G_i(0,0), G_i,j(0,0)/G_j(0,0)],

where G_i,j(0,0) is the amplitude of the CCF of species i and j, and G_i(0,0) the ACF amplitude of species i. In the case of non-meaningful convergence of the fit to the CCFs (i.e., ξ0 > 4 pixels), the rel. cc. was simply set to 0. To ensure statistical robustness of the RSICS analysis and sufficient SNR, the analysis was restricted to cells expressing all fluorophore species in comparable amounts, that is, relative average signal intensities of less than 1:6 for all species (in all RSICS experiments). The molecular brightness of species i was calculated by dividing the average count rate in the ROI by the particle number determined from the fit to the ACF: B_i = <I_i>/N_i. The p_f was calculated from the obtained molecular brightness B_i,2 of FP homodimers of species i: p_f = B_i,2/B_i,1 − 1.

TRICS analysis

TRICS was performed using three-dimensional RSICS image stacks I_i(x, y, t) detected for three species i. First, the spatial 3CF was calculated:

G_3C(ξ1, ψ1, ξ2, ψ2) = <δI_1(x, y) δI_2(x + ξ1, y + ψ1) δI_3(x + ξ2, y + ψ2)> / (<I_1> <I_2> <I_3>),

where ξ1, ξ2 denote spatial lags along lines and ψ1, ψ2 along columns of the image stacks. Contributions from δI triplets containing at least two intensity values from the same pixel position were not included in the calculation in order to avoid shot-noise artifacts (since all channels are detected here by the same detector). From the resulting four-dimensional matrix, a two-dimensional representation was calculated by introducing coordinates a, b for the effective spatial shift between signal fluctuations evaluated for the two-species combinations. The four-dimensional triple-correlation matrix was transformed into a two-dimensional representation G_3C(a, b) by rounding up a and b to integer values and averaging all points with the same rounded spatial shift.
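A minimal, deliberately slow Python sketch of the spatial triple correlation for a single set of lags is shown below (the original analysis used custom MATLAB code). It only illustrates the exclusion of triplets sharing a pixel position; frame averaging, the high-pass filter, and the (a, b) binning, whose worked examples follow in the next paragraph, are omitted and the direct pixel loop is an assumption made for readability rather than speed.

```python
# Direct (slow) sketch of one lag value of the spatial triple correlation,
# following the description above; triplets in which any two of the three
# pixel positions coincide are excluded to avoid shot-noise artifacts.
import numpy as np

def triple_corr(i1, i2, i3, xi1, psi1, xi2, psi2):
    """i1, i2, i3: 2D frames (same shape) of the three spectrally decomposed
    species; (xi, psi) are pixel lags along lines and columns, respectively."""
    d1, d2, d3 = (im - im.mean() for im in (i1, i2, i3))
    rows, cols = i1.shape
    acc, count = 0.0, 0
    for y in range(rows):
        for x in range(cols):
            x2, y2 = x + xi1, y + psi1
            x3, y3 = x + xi2, y + psi2
            if not (0 <= x2 < cols and 0 <= y2 < rows and
                    0 <= x3 < cols and 0 <= y3 < rows):
                continue
            # skip triplets where any two pixel positions coincide
            if (x, y) == (x2, y2) or (x, y) == (x3, y3) or (x2, y2) == (x3, y3):
                continue
            acc += d1[y, x] * d2[y2, x2] * d3[y3, x3]
            count += 1
    return acc / count / (i1.mean() * i2.mean() * i3.mean())
```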
For example, for a one-pixel shift along a line in one FP channel and a one-pixel shift along a column in the third FP channel (i.e., ξ1 = 1, ψ1 = 0, ξ2 = 0, ψ2 = 1), a = b = 1. G_3C(1,1) also includes in its averaged value the other seven correlation values corresponding, for example, to (ξ1 = 0, ψ1 = 1, ξ2 = 1, ψ2 = 0), (ξ1 = 1, ψ1 = 0, ξ2 = 0, ψ2 = −1), etc. As a further example, G_3C(2,0) includes and averages only the two correlation values corresponding to ψ1 = ψ2 = 0 (i.e., no shift along columns) and ξ1 = −ξ2 = ±1 (i.e., a one-pixel shift along a line, in opposite directions for the two channels). Note that the combinations (ψ1 = ψ2 = 0, ξ1 = ±2, ξ2 = 0) and (ψ1 = ψ2 = 0, ξ1 = 0, ξ2 = ±2) would also result in a = 2 and b = 0, but these values were not included since they refer to a correlation between identical pixel positions (e.g., ξ2 = 0, ψ2 = 0) between two FP channels and would be influenced by shot-noise artifacts (see above). To determine the triple-correlation amplitude G_3C(0,0), the closest points (e.g., G_3C(1,1), G_3C(1,2), G_3C(2,1), G_3C(2,2), G_3C(3,0)) of the two-dimensional triple correlation were averaged as a (slightly underestimated) approximation of the amplitude value at (0,0). Note that we chose not to include G_3C(2,0) because this point is the average of only two possible spatial shift combinations, resulting in large statistical noise. Also, the point G_3C(0,3) was not included since it refers to shifts along columns (i.e., the slow scanning direction), which, in turn, are characterized by a steeper decrease in amplitude. Finally, for best visualization, G_3C is plotted for a and b values ≥ 1 (see Figure 7 and Appendix 1-figure 2). To account for the reduction of the triple-correlation amplitude due to the high-pass filter, an empirical correction was applied based on simulated triple-correlation amplitudes with different sizes F of the moving window (see Appendix 1, Section A1.2 and Appendix 1-figure 1). Notably, applying this empirical correction to the auto- and cross-correlation amplitudes confirmed the previously introduced correction formula (see Appendix 1-figure 1; Hendrix et al., 2016). The triple-correlation amplitude is related to the number of triple complexes N_3C (Heinze et al., 2004; Palmer and Thompson, 1987):

G_3C(0,0) ∝ N_3C / (N_1 N_2 N_3),

where N_i is the total number of proteins detected for species i. In analogy to the rel. cc., a relative triple correlation rel.3C. is defined, quantifying the fraction of triple complexes relative to the total number of proteins of the species that is present in the lowest concentration:

rel.3C. = N_3C / min(N_1, N_2, N_3).

Statistical analyses

All data are displayed as scatter dot plots indicating mean values and SDs. Sample size is given in parentheses in each graph. Statistical significance was tested using Welch's corrected two-tailed Student's t-test in GraphPad Prism 7.0 (GraphPad Software) and p-values are given in figure captions.

Appendix 1

A1.1 Is pair-wise cross-correlation analysis sufficient to detect ternary interactions?

Generally, pair-wise cross-correlation analysis can only detect pair-wise interactions between fluorescently tagged protein species. To understand whether this analysis is sufficient to indicate the presence of heterotrimeric protein complexes for the specific case reported in this work, we investigated brightness and rel. cc. data obtained by RSICS measurements of IAV PC proteins in more detail.
For all three protein species (PA-mEYFP, PB1-mEGFP, PB2-mCherry2, referred to here simply as A, B, and C), normalized brightness values close to the values of FP homodimers were observed in this work. As a simple approximation, we assume therefore that each species, independently of its participation in hetero-complexes, is either (i) exclusively dimeric or (ii) present as a well-defined mixture of monomers and homotrimers. For the latter case, the fraction of monomers (f_1,i) and trimers (f_3,i) for each species i can be calculated from the average molecular brightness <ε>_i, where ε_1,i and ε_3,i denote the molecular brightness of monomers and trimers, respectively. We then calculate the maximum rel. cc. amplitudes that can be expected in the presence of optimal pair-wise interactions, while still assuming a negligible concentration of complexes containing A, B, and C. Generally, the ACF and CCF amplitudes for multiple populations (i.e., complexes of species i and j with variable stoichiometry) are calculated as follows (Kim et al., 2005):

G_i,j(0) = Σ_k ε_k,i ε_k,j C_k / (V_eff [Σ_k ε_k,i C_k] [Σ_k ε_k,j C_k]),

where ε_k,i and ε_k,j denote the molecular brightness of species i and j (assumed here to be the same for all species) for population k, present at a concentration C_k in the effective volume V_eff; for i = j this expression reduces to the ACF amplitude. For the sake of simplicity, we discuss here only two simple possible scenarios for the two mixtures discussed above (i.e., each PC protein being present exclusively as homodimers or as a mixture of monomers and homotrimers), in the absence of complexes containing all three PC subunits:

1. Homodimers interacting with homodimers of the other species (i.e., AA-BB, AA-CC, BB-CC).
2. Monomers and oligomers interacting (exclusively) with monomers or oligomers of the other species (i.e., A-B, A-C, B-C, AAA-BBB, AAA-CCC, BBB-CCC).

The two scenarios evaluated here correspond to configurations with the highest possible pair-wise correlations (in the absence of complexes containing A, B, and C), still compatible with an average oligomerization value of 2. For the two scenarios, we calculate ACF and CCF amplitudes according to the formulas given above, assuming the same total concentration for all species and replacing the concentrations by the derived relative fractions of monomers and oligomers. For each scenario, we determine rel. cc. values from the ratio of CCF and ACF amplitudes. Finally, we extend our calculations by considering incomplete maturation of FP tags based on the fluorescence probability p_f. For simplicity, we assume the same p_f for each FP species, in agreement with the similar p_f values of ca. 60-75% observed here for mEGFP, mEYFP, and mCherry2. We use a binomial model for the relative occurrence of different subpopulations in each species. For example, actual trimers give rise to a fraction f_k of fluorescent trimers (k = 3), dimers (k = 2), or monomers (k = 1), with a relative occupancy of

f_k = (3 choose k) p_f^k (1 − p_f)^(3−k).

The obtained rel. cc. values for all models are given in Appendix 1-table 1 for p_f = 1 or p_f = 0.7. For comparison, we also calculated rel. cc. values of the positive control, that is, the maximum pair-wise rel. cc. for 1:1 stoichiometry heterodimers (A-B/A-C/B-C) or 1:1:1 stoichiometry heterotrimers (A-B-C), resulting in values of 1 (for p_f = 1) and 0.7 (for p_f = 0.7). Experimentally, this control would also account for suboptimal overlap of the detection volumes for each FP combination, which we neglected here for simplicity. In the absence of ternary hetero-interactions, the determined rel. cc. values are at maximum 59% of the rel. cc.
of the positive control (i.e., 0.59 for p_f = 0.7 for scenario 1). Higher normalized values (up to 1.19, see Appendix 1-table 1) can be obtained only in the presence of hetero-complexes involving all three PC subunits, which we calculated for comparison for the two mixtures (i.e., AA-BB-CC, or A-B-C in mixtures with AAA-BBB-CCC) and both p_f values.

Appendix 1-table 1. Relative cross-correlation (rel. cc.) values (here, the same for all channel combinations) for pair-wise or ternary interactions of three-species mixtures. Values in brackets for p_f = 0.7 give rel. cc. values normalized to that of the positive control (i.e., the pair-wise rel. cc. for 1:1 stoichiometry).

Of note, in our experiments, rel. cc. values > 0.7 (relative to the positive control) were observed for all pair-wise interactions between PC subunits (detected average pair-wise rel. cc. values normalized to the positive control were 0.71 for B-C, 0.97 for A-C, and 1.43 for A-B, see Figure 6D). As shown based on the different binding models, such high pair-wise rel. cc. values are only possible if ternary complexes are present. Thus, by combining molecular brightness and cross-correlation analysis, we conclude that PC proteins form a substantial amount of ternary complexes in the nucleus of cells.

A1.2 TRICS analysis of simulated three-species RICS data

To evaluate the performance of TRICS, we first analyzed simulated RICS data. We ran Monte Carlo simulations of three-species RICS for either (i) three independently diffusing species A, B, C or (ii) a heterotrimeric species (e.g., A-B-C complexes). Two-dimensional diffusion and image acquisition were simulated with the following parameters: diffusion coefficient D = 1 µm²/s (set to be the same for all species), N = 1000 particles (for each species), waist ω0 = 0.2 µm, pixel size δs = 0.05 µm, pixel dwell time τ_p = 2 µs, 256 × 256 pixels, 100 frames. RICS ACFs, CCFs, and the TRICS 3CF were calculated. To correct for the reduction of the triple correlation due to the high-pass filter (with a filter size of F frames), an empirical correction was applied. To this aim, the variance and third central moment of a series of 10^5 random numbers, sampled from a Poissonian distribution (with mean f_0 = 10), were calculated within windows with variable size ΔF (Appendix 1-figure 1). The empirical function f_i(ΔF) = f_0 ((ΔF − 1)/ΔF)^b_i was fitted to the variance (i = 2) and third central moment (i = 3). For the variance and third central moment, b_2 = 1.0 and b_3 = 3.4 were obtained, respectively. Thus, the reduction of variance and third central moment for a given value F can be corrected using the factor (ΔF/(ΔF − 1))^b_i. For the variance, the determined value b_2 is in agreement with a previously discussed correction (Hendrix et al., 2016), which was used here to correct experimental ACFs and CCFs. To test whether 3CFs can be effectively corrected with the obtained (ΔF/(ΔF − 1))^b_3 factor, 3CFs were calculated with variable F (in the range 2-16) and the amplitude values determined without or with the correction. In the latter case, fairly constant 3CF amplitudes were obtained, agreeing with the 3CF amplitude calculated without the high-pass filter (data not shown). Exemplary 3CFs for the two simulated scenarios are shown in Appendix 1-figure 2. As expected, the rel.3C. values are close to 100% in the case of heterotrimers and 0% in the case of independently diffusing monomers. The slight underestimation of the rel.3C.
for heterotrimers is likely due to the approximated interpolation of the amplitude value from only the first five points of the 3CF.

Appendix 1-figure 1. Effect of the high-pass filter on the calculation of variance and third central moment of random numbers sampled from a Poissonian probability distribution. Variance (f_2, blue circles) and third central moment (f_3, blue circles) were calculated with a moving average (window size ΔF) for a set of 10^5 random numbers drawn from a Poissonian distribution with average 10. An empirical function (red solid line) of the form f_i(ΔF) = f_0 ((ΔF − 1)/ΔF)^b_i was fitted to the variance (f_2) and third central moment (f_3), and used to correct for the undersampling effect. The corresponding values after applying the empirical correction are shown as blue circles in the panels labeled as 'corrected.'
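The undersampling analysis of Appendix 1-figure 1 can be illustrated with the following Python sketch, which draws Poissonian samples, computes variance and third central moment within windows of size ΔF (non-overlapping blocks are used here for simplicity instead of a moving average), and fits the empirical correction; the random seed, block scheme, and fit starting values are assumptions.

```python
# Sketch of the empirical high-pass correction described above: b2 and b3 are
# obtained by fitting f0*((dF-1)/dF)**b to windowed variance / third moment.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import moment

rng = np.random.default_rng(0)
samples = rng.poisson(lam=10, size=100_000)

window_sizes = np.arange(2, 17)
var_w, m3_w = [], []
for dF in window_sizes:
    n = (len(samples) // dF) * dF
    blocks = samples[:n].reshape(-1, dF)
    var_w.append(np.var(blocks, axis=1).mean())          # biased (undersampled) variance
    m3_w.append(moment(blocks, moment=3, axis=1).mean())  # biased third central moment

def model(dF, f0, b):
    return f0 * ((dF - 1) / dF) ** b

(f0_2, b2), _ = curve_fit(model, window_sizes, var_w, p0=[10, 1])
(f0_3, b3), _ = curve_fit(model, window_sizes, m3_w, p0=[10, 3])
print(f"b2 ~ {b2:.1f}, b3 ~ {b3:.1f}")   # values around 1.0 and 3.4 are expected

# Correction factor applied to a 3CF computed with a high-pass filter of size F:
F = 4
correction_3cf = (F / (F - 1)) ** b3
```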
Chemical composition tuning induced variable and enhanced dielectric properties of polycrystalline Ga2−2xWxO3 ceramics

We report on the tunable and enhanced dielectric properties of tungsten (W) incorporated gallium oxide (Ga2O3) polycrystalline electroceramics for energy and power electronic device applications. The W-incorporated Ga2O3 (Ga2−2xWxO3, 0.00 ≤ x ≤ 0.20; GWO) compounds were synthesized by the high-temperature solid-state chemical reaction method by varying the W-content. The fundamental aspects of the dielectric properties in correlation with the crystal structure, phase, and microstructure of the GWO polycrystalline compounds have been investigated in detail. The detailed study ascertains the W-induced changes in the dielectric constant, loss tangent (tan δ), and ac conductivity. It was found that the dielectric constant increases with the addition of W in the system as a function of temperature (25°C-500°C). The frequency dependence (10² to 10⁶ Hz) of the dielectric constant follows the modified Debye model with a relaxation time of ∼20 to 90 μs and a spreading factor of 0.39 to 0.65. The dielectric constant of GWO is temperature independent almost until ∼300°C, and then increases rapidly in the range of 300°C to 500°C. The W-induced enhancement in the dielectric constant of GWO is fully evident in the frequency and temperature dependent dielectric studies. The frequency and temperature dependent tan δ reveals the typical behavior of relaxation losses in GWO. A small polaron hopping mechanism is evident in the frequency dependent electrical transport properties of GWO. The remarkable effect of W-incorporation on the dielectric and electrical transport properties of Ga2O3 is explained by a two-layer heterogeneous model consisting of thick grains separated by very thin grain boundaries, along with the formation of a Ga2O3-WO3 composite, which is able to account for the observed temperature and frequency dependent electrical properties of GWO. The results demonstrate that the structure, electrical, and dielectric properties can be tailored by tuning the W-content in the GWO compounds.

INTRODUCTION

The effects of W incorporation into Ga2O3 are not well understood at this time. Therefore, understanding the W-mixed Ga2O3 ceramics could be useful to predict the surface/interface diffusion and electrical properties of reaction compounds (if any) in device applications involving W-Ga2O3 contacts. 46,47 Also, GWO bulk ceramic materials with controlled structure and properties may be useful as target materials for high-quality thin film deposition using physical vapor deposition. In view of all these considerations, the dielectric properties of GWO bulk ceramics were investigated as a function of W-concentration and under variable conditions. The present work may also contribute to the understanding of the effect of sintering behavior and chemical composition on the properties of mixed oxides or ceramic solid solutions based on Ga, that is, Ga2−xMxO3, where M represents a different cation. In fact, as widely reported in the literature, there have been significant efforts with a focus toward understanding various growth mechanisms and/or elucidating the effect of processing conditions on the synthesis of intrinsic and doped bulk Ga2O3 ceramics for a range of optoelectronic applications. It was reported that the sintering temperature strongly influences the microstructure and final density of sintered Ga2O3 ceramics. 48
The Ga2O3 ceramic targets with high density were obtained by using Ga2O3 micro-particles with uniform particle size, which shows promising applications for optoelectronic devices. 48 Similarly, the γ-Ga2O3-Al2O3 solid solutions derived from the reaction mixtures exhibit high catalytic activities for the selective reduction of NO using methane as the reducing agent. 49 Annealing-controlled growth optimization has been shown to convert La-doped GaOOH into Ga2O3 nanostructures, which has an influence on the luminescence of La-doped Ga2O3 nano-spindles. A mixture of gallium oxide and silicon powders has been subjected to a vapor-liquid-solid or vapor-solid process to produce Si-doped Ga2O3, which is sensitive to the UV/blue intensity ratio and eventually causes a decline in it. 50 Spectacular cactus-like nanostructures of Ga2O3 were grown to investigate their field emission properties, reported for the first time in view of possible applications. 51 Ga2O3 transparent ceramics prepared by the ceramic method with a controlled density and morphology were shown to exhibit excellent photoluminescence properties, which can mainly be attributed to the recombination between donors and acceptors. Ga2O3 transparent ceramics have promising applications as transparent conductive materials and inorganic scintillators. 52 Similarly, the growth mechanisms for synthesizing rod-shaped Ga2O3 structures by calcination pointed to a noticeable change in the porosity and pore distribution, 53 which is essential to understand the inclusion of impurity materials for a given targeted application. Therefore, we believe that the present work on sintered W-mixed Ga2O3 ceramics may also contribute to further advancements in the field, especially in light of existing efforts on the large class of mixed oxides and solid solutions based on Ga2O3.

Materials and ingredients

After carrying out the stoichiometric calculations, Ga2−2xWxO3 (GWO) compounds were synthesized by the conventional solid-state reaction method. The WO3 concentration was varied over 0 ≤ x ≤ 0.20. The precursors Ga2O3 (99.99% purity) and WO3 (99.9% purity) were procured from Sigma-Aldrich. We adopted the previously established procedures and methods to synthesize all the GWO compounds. 42 Briefly, to prepare a selected GWO composition, the precursors were weighed in stoichiometric proportions according to the calculations. An agate mortar, well suited to the quantity of powders used, was employed to pulverize the powders with acetone as the wetting medium. This method gives a homogeneous mixture of the GWO compounds.

Calcination and sintering

The homogeneous mixture of GWO powders was then transferred into crucibles and calcined at 1050°C for 12 hours and at 1150°C for 12 hours in a muffle furnace. Each calcination was followed by intermediate grinding to ensure and assist the solid-state reaction. After the final calcination, the powders were thoroughly ground to enhance sinterability. Polyvinyl alcohol (PVA) was added at this step to give binding strength to the pulverized powder, which was then pelletized into circular discs of 8 mm diameter and ∼1 mm thickness. A uniaxial hydraulic press was used to apply a load of 1.5 tons for this process. These green pellets were then sintered at 1250°C for 6 hours with a ramp rate of 5°C/min, and binder burnout of the pellets was ensured by holding at an intermediate temperature (500°C) for 30 minutes.
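As an illustration of the batching arithmetic implied by the nominal composition, the short Python sketch below computes precursor masses assuming that Ga2−2xWxO3 corresponds per formula unit to (1 − x) Ga2O3 + x WO3 (which balances Ga, W, and O); the batch mass is arbitrary, the molar masses are standard values, and the actual weighing protocol is that of Reference 42.

```python
# Hedged sketch of the stoichiometric calculation for Ga2-2xWxO3 batching:
# one formula unit is taken as (1 - x) Ga2O3 + x WO3.
M_GA2O3 = 187.44   # g/mol
M_WO3 = 231.84     # g/mol

def precursor_masses(x, batch_mass_g=5.0):
    """Return (m_Ga2O3, m_WO3) in grams for a batch of Ga2-2xWxO3."""
    m_formula = (1 - x) * M_GA2O3 + x * M_WO3   # mass per formula unit
    n = batch_mass_g / m_formula                # moles of formula units
    return n * (1 - x) * M_GA2O3, n * x * M_WO3

for x in (0.05, 0.10, 0.15, 0.20):
    m_g, m_w = precursor_masses(x)
    print(f"x = {x:.2f}: Ga2O3 = {m_g:.3f} g, WO3 = {m_w:.3f} g")
```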
The crystal structure, surface chemistry, and chemical composition of all the synthesized GWO compounds were thoroughly established previously. 42

Dielectric measurements

Dielectric measurements were performed using a HIOKI IM3536 LCR meter. Circuit corrections were made to ensure the accuracy of the readings taken on the LCR meter. The sample pellets were prepared by fine polishing and coating with silver paste on both sides prior to measurement. The capacitors were fabricated using the GWO as the dielectric, while silver (Ag) served as the metal electrodes of the capacitor. The silver-coated GWO pellets were cured at 90°C for 2 hours to ensure the proper functioning of the electrodes. The capacitance, dielectric dissipation (tan δ), and inductance data were collected between 1 kHz and 1 MHz (at 125 frequencies) and over a temperature range of 30°C to 500°C (in steps of 10°C). It is well known that the electrical energy stored in a capacitor is a function of capacitance, which is determined by the capacitor geometry and the dielectric constant of the oxide. 47,[54][55][56] Thus, the dielectric constant of a material represents the charge storage capacity when a potential is applied to it. 47,[54][55][56] It is calculated from the capacitance, which is given 47,56 by C = ε ε0 A/d, where ε is the dielectric constant of the material under investigation, C represents the capacitance, A the area of the capacitor's plates, d the distance between the capacitor's parallel plates, and ε0 the dielectric constant of free space.

Dielectric constant - frequency dependence

The sintered GWO pellets were subjected to a frequency sweep ranging from 1 kHz to 1 MHz and the data obtained are shown in Figure 1. As can be seen from Figure 1, the dielectric constant (ε′) shows comparatively higher values in the lower frequency range. However, as the frequency increases, ε′ decreases. It decreases until it plateaus out with the increase in frequency. This behavior of ε′ with respect to frequency is typical for all the GWO compounds. The dielectric dispersion of any dielectric material is a complicated function of the frequency of the applied electric field. It also depends on the microstructure (e.g., grain size) of the material system under investigation. As reported elsewhere, the incorporation of W into Ga2O3 results in microstructural changes. 42 Initially, the pristine Ga2O3 exhibits a rod-shaped structure. The addition of small quantities of W into the system changes the morphology to spherical, and as the W concentration increases, the grains become faceted with square or hexagonal features. The facets even exhibit twin lamellae, which are unique to the GWO system. Overall, the grain growth is abnormal and is attributed to the unreacted phase that is accumulated at the grain boundaries. Also, the presence of twin lamellae is due to the WO3, which enhances the diffusion of vacancies. This abnormal grain growth with twin lamellae contributes to the increase in grain boundary area, assisted by the unreacted WO3 aggregated at the grain boundaries. This overall phenomenon contributes to the increased resistance at the grain boundaries, which impairs conductivity. It is evident from Figure 1 that the real part of the dielectric constant, ε′, shows the usual behavior with alternating frequency, which can be explained by addressing the polarization sources in the system. 57,58 At the lower end of the frequency range, ε′ assumes higher values throughout the sweep profile.
This is due to the fact that ionic, space charge, and grain boundary polarization contribute to the higher values of ε′ at lower frequencies. 58,59 The presence of space charge polarization at the grain boundaries may generate a potential barrier. Then, an accumulation of charge at the grain boundary occurs, leading to higher values of the dielectric constant. 59,60 However, it may be noted that as the frequency increases, the ε′ values decrease rapidly. This can be attributed to the fact that the species contributing to the polarization phenomenon lag behind the applied voltage in the high frequency domain. This typical dielectric behavior can be elaborated by the dispersion due to Maxwell-Wagner polarization. 61 Figure 2 shows the dielectric dispersion behavior for GWO materials using the modified Debye function as shown in the following equation 59,60,62,64:

ε′(ω) = ε′∞ + (ε′0 − ε′∞) / [1 + (ωτ)^(2(1−α))],    (1)

where ε′(ω) gives the permittivity, (ε′0 − ε′∞) gives the dielectric relaxation strength, ε′0 represents the low frequency (static) permittivity, and ε′∞ represents the high frequency permittivity. ω is the angular frequency, which is derived from the linear frequency (f) of the applied electric field as ω = 2πf. τ represents the Debye average relaxation time, while α denotes the spreading factor of the actual relaxation time about the mean value. The intrinsic parameters of the GWO compounds were determined using Cole-Cole plots, 65 as can be seen in Figure 3. The values for the spreading factor α and relaxation time τ were obtained by plotting ln[(ε′0 − ε′)/(ε′ − ε′∞)] as a function of ln ω, with only the real part of the dielectric dispersion in context. In other words, the pioneering work of Cole-Cole with the standard procedure 65 was adopted to fit the experimental data, based on the real part of the dielectric constant instead of the complex part, and to obtain information on the dielectric relaxation behavior. 60

FIGURE 2. Real part of the dielectric constant fitted to the modified Debye function. The plots are for different compositions of GWO.

As reported elsewhere, such analyses and procedures were found to be quite useful to understand the dielectric relaxation behavior in complex ceramics or chemical compounds with multivalent cations present. 54,55,60 The values obtained for α and τ were used to fit the data by computing the Debye function mentioned in Equation (1) and fitting it with the experimentally measured values of ε′ at room temperature. As can be seen from Figure 2, the experimental and calculated values show a good agreement, which further corroborates the validity of the modified Debye function in claiming the multiple-ion contribution to the relaxation process. The α and τ values determined are tabulated in Table 1. It can be noted that these values are in reasonable agreement with doped semiconductors and complex ceramic compounds. 54,55,60

Dielectric constant - temperature dependence

The variation of ε′ with temperature is shown in Figure 4. The data shown are for GWO compounds with variable W-content and measured at different frequencies. At lower frequencies (1 kHz-10 kHz), polycrystalline Ga2O3 shows a single relaxation peak at ∼400°C. The peak intensities fade at higher frequencies, with no relaxation peaks at f = 100 kHz to 1 MHz. Almost all the GWO composites show a relaxation peak, and the dielectric constant follows an increasing monotonic function. The high intensity relaxation peaks observed in Ga2O3 are due to the conduction between the grains and grain boundaries.
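Returning briefly to the frequency-domain analysis above, a hedged Python sketch of the linearized modified-Debye fit is given below; the exact expression is the paper's Equation (1), and the real-part form assumed here, as well as the estimation of ε′0 and ε′∞ from the extreme measured values, are illustrative assumptions rather than the authors' procedure.

```python
# Sketch of the linearized modified-Debye analysis: assuming
# eps'(w) = eps_inf + (eps_s - eps_inf) / (1 + (w*tau)**(2*(1 - alpha))),
# the plot of ln[(eps_s - eps')/(eps' - eps_inf)] vs ln(w) is linear with
# slope 2*(1 - alpha), from which alpha and tau are obtained.
import numpy as np

def fit_modified_debye(freq_hz, eps_real):
    eps_real = np.asarray(eps_real, dtype=float)
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    eps_s, eps_inf = eps_real.max(), eps_real.min()   # crude limit estimates
    mask = (eps_real < eps_s) & (eps_real > eps_inf)  # avoid log of zero
    y = np.log((eps_s - eps_real[mask]) / (eps_real[mask] - eps_inf))
    slope, intercept = np.polyfit(np.log(w[mask]), y, 1)
    alpha = 1 - slope / 2                 # spreading factor
    tau = np.exp(intercept / slope)       # relaxation time in seconds
    return alpha, tau
```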
As the W concentration in Ga2O3 increases, more and more W6+ ions are introduced into Ga2O3, altering the grain morphology from rod shaped (un-doped Ga2O3) to spherical, which provides a larger area for conduction and hence the reduced intensity of the relaxation peaks. Note that the dielectric properties of ceramics are highly dependent upon the microstructure, defect structure, type of ionic dopants, and temperature. 4,55,60,65,66 In fact, realizing that the energy storage properties mainly depend on the defect chemistry of the dielectric, several research groups have recently paid attention to tailoring multilayered materials for significant enhancement of energy storage performance by regulating the dielectric contrast between adjacent layers. 4,66 In the present case, as the W6+ substitution increases in Ga2O3, the microstructural properties are altered, as evident in our previous work as well as the TEM analyses discussed in subsequent sections. Owing to the smaller ionic radius of W6+, when it substitutes for Ga3+ (larger ionic radius), there is a shrinkage of the unit cell volume. This shrinkage results in enhanced charge carrying capability, which in turn reduces the hopping distance. At x = 0.10, the dielectric relaxation is not noticeable, but as the concentration increases, the low intensity relaxation peaks resurface. This phenomenon is attributed to the undissolved WO3 agglomerated at the grain boundaries, increasing the resistance at the interface. The abnormal grain growth arising due to vacancy-assisted enhanced mass transport also contributes to the twin lamellae, which explains the resurfacing relaxation peaks at x > 0.10. Thermally enhanced charge carrier mobility and improved hopping are generally observed at higher temperature. Thus, an increase in dielectric polarization contributes to the increased ε′ values.

FIGURE 5. Temperature dependence of ε″ in GWO compounds as a function of frequency at different W concentrations.

The temperature dependence of the corresponding imaginary part of the dielectric constant for GWO compounds is presented in Figure 5. The data presented are for GWO compounds with variable W-content and measured at various frequencies. It may be noted that as the WO3 concentration increases in the system, the value of ε″ also increases. It can be seen that there is a shift in the dielectric values due to the change in temperature. This is due to the increase in charge carrier mobility and the enhanced hopping rate at increased temperature. Lower temperatures do not support this phenomenon, hence the lower values of ε″. The frequency dependence of the corresponding imaginary part of the dielectric constant for GWO compounds is presented in Figure 6. The data presented are for GWO compounds with variable W-content and measured at various temperatures. It is evident that the ε″ values tend to be generally higher at lower frequencies but decrease rapidly with increasing frequency. On the other hand, ε″ is seen to increase with increasing temperature. Also, the data clearly indicate that the temperature dependence of ε″ is strongly dependent on the frequency of the measurements. These variations can be understood if we consider the microstructure variation and interfacial contributions in the W-doped Ga2O3 materials.
FIGURE 6. The dependence of ε″ in GWO compounds as a function of W concentration at a particular frequency.

The contribution of the interfacial losses and the loss from electrical conductivity (as discussed in the subsequent sections) is generally dominant at lower frequencies; however, these factors become negligible at higher frequencies. [55][56][57][58][59] Thus, the observed decrease in the imaginary part of the dielectric constant at higher frequencies can be attributed to the rapidly fading contributions from the interfacial as well as grain conductivity mechanisms. However, the large values observed at lower frequency are mainly due to the W-doping induced complex chemistry of Ga2O3 in terms of multiple valence cations, vacancies, and grain boundary defects. 55,57-59

Dielectric loss

The commonly referred dielectric loss factor is defined as the ratio of the imaginary part of the dielectric constant to the real part, and it is given by tan δ = ε″/ε′, where δ is the phase difference between current and voltage of the applied electric field. 57,63 As shown in Figure 7, the loss factor with reference to change in temperature indicates that Ga2O3 assumes a lower value at lower frequencies until temperatures of 350°C to 420°C. Then the loss tangent rises exponentially toward the higher range of measuring temperatures. This is true for the higher frequency range as well, indicating a dormant behavior of the participating species. Ga2O3 is known to have intrinsic oxygen defects leading to space charge polarization, which exists during the entire temperature sweep. As the temperature increases, this effect becomes predominant and the loss tan δ rises exponentially, as seen in Figure 7. Figure 8 shows the variation of tan δ with temperature at different frequencies; it may be noted that as the concentration of WO3 in the system increases, tan δ increases. As reported elsewhere, a mixture of W4+ and W6+ valence states is present at lower W concentrations. 67 This gives rise to a charge imbalance, which explains the substitution of Ga3+ ions, with free W ions available for conduction, reducing the dielectric loss at lower concentrations. With the increase in WO3 content, the aggregation of the mixed phase may give improved mass-transport-assisted Ga3+ substitution. This may finally result in a certain amount of dopant infusion, with an additional amount left to agglomerate.

AC conductivity

Figure 9 shows the ac conductivity of GWO samples as a function of ln ω. It is evident from the figure that the conductivity increases with the increase in frequency. It can also be seen that the conductivity increases with the increase in W content in the GWO system. This is attributed to electron hopping between various cations. It is also influenced by the increase in frequency, which increases the hopping rate and hence improves the ac conductivity. To further expand the understanding of the transport mechanism in the GWO dielectrics, (σ_ac − σ_dc) has been plotted on a log scale against log ω², as shown in Figure 10. The plots obtained validate the polaron hopping conduction mechanism involved in all the GWO dielectric materials. The polaron is often described, in the context of a deformable polar medium, as a self-stabilized electronic charge. The lattice-distortion-induced slow motion of the polarons is known as polaron hopping. 55,68,69 As can be seen in Figure 10, the W-doped Ga2O3 ceramics exhibit a linear growth before achieving saturation at higher frequency.
Intrinsic Ga 2 O 3 has the lowest magnitude as compared to other doped GWO samples which forms a cluster as the dopant level increases. The underlying mechanism behind this can be explained the following equation 68,69 : AC conductivity F I G U R E 10 The log( ac − dc ) vs log 2 plots for GWO dielectric materials. The data shown are for GWO dielectrics with variable W-concentration. The initial linear plots are evident of small polaron hopping mechanism (see text) operative in the electrical transport properties of GWO dielectrics. The elevated set of data with a different linear behavior and slope can be also noted for W-doped vs intrinsic Ga 2 O 3 where, is angular frequency and represents the average relaxation time. It is worth noting that for conduction occuring in a localized neighborhood with a small polaron hopping, 2 2 < 1, the log ( ac − dc ) vs log 2 will always represent a linear behavior which is clearly evident in Figure 10. While this observation clearly supports the fact that the electrical transport mechanism in GWO materials is based on the polaron hopping among the localized sites, the W-induced distortion of the lattice is the source of such localization of charge carriers leading to the polaron formation. Furthermore, metal-doping induced lattice distortion leading to such localization of charge carriers leading to polaron formation and polaron-hopping in the localized states facilitated electrical conduction mechanism was noted and reported in some of the doped ferrite semiconductors. 55,70-72 Proposed mechanism and model Finally, based on the observations made from previously reported structural details 42,67 and the present work on frequency and temperature dependent dielectric constant, dielectric loss, and ac electrical transport analysis, the effect of W-doping on the electrical conduction mechanism and dielectric properties of Ga 2 O 3 can be modelled simply from the microstructure and heterogeneity perspective. First of all, it must be emphasized that crystal structure, phase and microstructure analyses using X-ray diffraction (XRD), scanning electron microscopy (SEM) indicate that the W-doping induced changes are significant. 42,67 As reported previously, XRD analyses of GWO reveal the formation of a solid solution at lower concentrations W (x ≤ 0.10) while unreacted WO 3 secondary phase formation occurs at higher concentrations (x > 0.10). Insolubility of W at higher concentrations (x ≥ 0.15) leading to a Ga 2 O 3 -WO 3 composite formation is attributed to the difference in formation enthalpies of respective oxides that is, Ga 2 O 3 and WO 3 . Furthermore, the surface chemistry and electronic structure analyses using X-ray photoelectron spectroscopy (XPS) analyses also supported the formation of Ga 2 O 3 -WO 3 composite formation. 67 However, XPS studies reveal the lower valence state (W 4+ ) formation for GWO compounds with lower concentration of W, Thus, the structural and chemical analyses strongly support the idea of an heterogeneous and electronically differently characterized bilayer system for W-doped Ga 2 O 3 . Such simple two-layer or heterogeneous model can be formulated to account for the frequency and temperature dependent dielectric properties and electrical conduction mechanism in GWO materials. Formation of grain-interior and grain-boundary in GWO dielectric can be treated as a heterogeneous system as schematically presented in Figure 11. 
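As a concrete illustration of the grain/grain-boundary picture shown schematically in Figure 11, the following sketch evaluates a series two-layer (Maxwell-Wagner type) model. The layer permittivities, conductivities, and thickness fractions used here are illustrative placeholders, not values fitted to the GWO data.

```python
import numpy as np

EPS0 = 8.854e-12  # F/m

def two_layer_permittivity(freq_hz, eps_g, sigma_g, frac_g, eps_gb, sigma_gb):
    """Effective complex permittivity of a grain / grain-boundary stack in series
    (Maxwell-Wagner interfacial polarization picture).
    eps_*   : relative permittivity of grain (g) and grain boundary (gb)
    sigma_* : conductivity of each layer in S/m
    frac_g  : thickness fraction of the grains (boundaries take 1 - frac_g)."""
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    eps_grain = eps_g - 1j * sigma_g / (omega * EPS0)
    eps_bound = eps_gb - 1j * sigma_gb / (omega * EPS0)
    eps_eff = 1.0 / (frac_g / eps_grain + (1.0 - frac_g) / eps_bound)
    return eps_eff.real, -eps_eff.imag                 # eps', eps''

freqs = np.logspace(3, 6, 4)
ep, epp = two_layer_permittivity(freqs, eps_g=10.0, sigma_g=1e-4, frac_g=0.95,
                                 eps_gb=30.0, sigma_gb=1e-9)
for f, a, b in zip(freqs, ep, epp):
    print(f"{f:9.0f} Hz  eps'={a:7.1f}  eps''={b:7.1f}")
```

With a comparatively conductive grain interior and a thin, resistive boundary layer, this toy model reproduces the qualitative behavior argued for above: a large low-frequency ε′ driven by charge accumulation at the boundaries, collapsing toward the grain value as the frequency increases.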
The proposed model contains grains of Ga 2 O 3 and the secondary phase of WO 3 , which may be nucleating and located more at the grain boundaries. We hypothesized the variation in dielectric constant is a result of the formation of heterogeneous system based on the Ga 2 O 3 -WO 3 composite with the gradual increase in W content. Although the electrical response of the grain and grain boundaries is entirely different, the grains are essentially separated by a thin layer of grain-boundaries. It is evident from Figure 11B,C that the effect of W-content on the dielectric constant of GWO samples is remarkable. The dielectric constant increases with increasing W-content and is attributed to the lattice distortion of the intrinsic Ga-oxide which is in-turn the result of enhanced atomic polarizability. Additionally, formation of a small amount of WO 3 leads to the heterogeneous system which presents itself as WO 3 phase at the grain boundaries leading to the accumulation of charges at the grain boundaries. The resulting interfacial polarization further contributes to an increase in dielectric constant. This would also explain the ac conductivity of the GWO dielectrics. At lower frequencies, the grain boundaries are highly F I G U R E 11 A, Proposed heterogeneous, two-layer model of the GWO dielectric. The chemical identification of the grain and grain boundaries are as indicated. B, Variation of the real part of the dielectric constant with W-concentration. The data shown is at room temperature. C, Variation of the real part of the dielectric constant with W-concentration. The data shown is at the highest temperature of the measurement (500 • C). It is evident from the data shown in B and C, that the effect of W-doping on the dielectric constant is significant active and the frequency of electron hopping frequency between the metal ions of variable valence states is observed to be at a lower level. This explains the lower conductivity of materials at lower frequency. However, as the frequency of the applied field increases, the GWO grains (interior) become more active. These highly active grains facilitate electron hopping between the same metal ions of variable valence state and, thereby, increases the hopping frequency. As a result, the electrical conductivity increases gradually with increasing frequency. This is clearly seen in frequency dependent electrical characteristics (Figure 10) of GWO dielectric materials. Having understood about the dielectric properties of GWO materials, it is imperative to shed light on their potential benefits, at least in the context of utilizing Ga 2 O 3 based mixed oxides. Ga 2 O 3 doping or mixed oxides are commonly used to design novel dielectric materials, particularly those ceramic compositions without any volatile elements such as Li or Na, for modern wireless communication devices, such as cellular phones, resonators, filters and oscillators in microwave integrated circuits. [73][74][75] Therefore, in addition to the traditional properties and applications, the present work on the dielectric properties of Ga 2 O 3 and Ga 2−2x W x O 3 ceramics may be useful and provide a road map when considering the mixed oxide ceramics based on Ga 2 O 3 for designing such materials. CONCLUSIONS A wide range of capacitor devices were fabricated using the tungsten doped Ga 2 O 3 (GWO) as dielectric and silver (Ag) as the metal electrodes of the capacitor. 
The electrical performance of capacitor devices based on W-doped Ga 2 O 3 (Ga 2−2x W x O 3 , 0.00 ≤ x ≤ 0.20; GWO) dielectric materials was evaluated by comprehensively studying the dielectric properties and electrical transport mechanisms. The W concentration strongly influences the dielectric properties of GWO. The effect of W-doping is, however, most pronounced at higher W concentrations, where secondary phase formation leads to a Ga 2 O 3 -WO 3 composite. At f = 1 kHz and room temperature, varying the W content (x) from 0.0 to 0.2 increases the GWO dielectric constant from ∼9 to ∼90. A further enhancement of the dielectric constant due to W-doping was evident in the dielectric studies as a function of temperature, from 25 °C to 500 °C. While the dielectric constant is temperature independent up to ∼300 °C, the rapid increase beyond 300 °C indicates a contribution from interfacial polarization to the enhanced dielectric constant. A modified Debye model accounts for the frequency-dependent (10 3 -10 6 Hz) variation of the dielectric constant of the GWO dielectrics; fitting the experimental data to this model yields relaxation times of ∼20 to 90 μs and spreading factors of 0.39 to 0.65. We propose a two-layer heterogeneous model, consisting of thick grains separated by very thin grain boundaries together with the formation of a Ga 2 O 3 -WO 3 composite, to explain the remarkable effect of W incorporation on the dielectric and electrical transport properties of Ga 2 O 3 . The results demonstrate that the structure and the electrical and dielectric properties can be tailored by tuning the W content in the GWO compounds. We believe that this fundamental study of the dielectric properties as a function of frequency, temperature, and W concentration could motivate further studies of doping effects in wide-band-gap semiconductors and could help in designing dielectric materials for electronic, optoelectronic, and high-power electronic device applications. In addition to these traditional device applications, the present work on the dielectric properties of Ga 2 O 3 and Ga 2−2x W x O 3 ceramics may also provide a road map when considering Ga 2 O 3 -based mixed oxide ceramics for advanced communications.
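To illustrate the fitting procedure summarized above, the sketch below fits one commonly used form of the modified Debye expression with a spreading factor α, ε′(ω) = ε∞ + (εs − ε∞)/[1 + (ωτ)^(2(1−α))]. Both this functional form and the synthetic data are assumptions for illustration only; the exact expression and data used in the analysis are not reproduced in this excerpt.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_debye(omega, eps_inf, eps_s, tau, alpha):
    """One common 'modified Debye' form with spreading factor alpha:
    eps'(w) = eps_inf + (eps_s - eps_inf) / (1 + (w*tau)**(2*(1-alpha)))."""
    return eps_inf + (eps_s - eps_inf) / (1.0 + (omega * tau) ** (2.0 * (1.0 - alpha)))

rng = np.random.default_rng(1)
freq = np.logspace(3, 6, 40)                       # 1 kHz to 1 MHz
omega = 2.0 * np.pi * freq
# Placeholder dispersion data standing in for a measured eps'(f) curve
eps_real = modified_debye(omega, 9.0, 60.0, 5e-5, 0.5) + rng.normal(0, 0.3, freq.size)

p0 = (eps_real.min(), eps_real.max(), 1e-5, 0.5)   # rough starting guesses
bounds = ([1.0, 1.0, 1e-7, 0.0], [1e3, 1e4, 1e-2, 1.0])
popt, _ = curve_fit(modified_debye, omega, eps_real, p0=p0, bounds=bounds)
eps_inf, eps_s, tau, alpha = popt
print(f"tau = {tau * 1e6:.1f} us, spreading factor alpha = {alpha:.2f}")
```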
6,805.4
2020-12-08T00:00:00.000
[ "Materials Science" ]
Solving constrained engineering design problems with multi-objective artificial algae algorithm Abstract Introduction Optimization problems with multiple objectives are called multi-objective optimization problems (MOOPs), and simultaneous optimization of these objectives is called multiobjective optimization (MOO).Real-world problems are generally in the type of NP-hard MOOPs.NP-hard means that it cannot be proven that there is a solution in polynomial time, or that the algorithms that can solve it efficiently are not known [1]; examples of these problems are found in engineering design, product and process design, land-use planning, management science, economics etc.In MOOPs, objective functions are generally inversely proportional.In other words, obtaining a satisfactory solution for an objective function result in a poor solution for the other objective function.Thus, it is not possible to obtain a global best solution as in single-objective problems, but instead, a solution set consisting of the best solutions is obtained. Engineering design problems are one of the most important real-world problems, many of which are constrained problems [44].Since optimal solutions of engineering design problems are difficult to find, metaheuristic algorithms are used in most studies [40], [45], [46]. The test set consists of 14 problems, two unconstrained and twelve constrained.The MOAAA was compared with NSGA-II, MOCell, MOVS, IBEA and PAES algorithms.The results obtained showed that the MOAAA performed better than the comparison algorithms on the used problem set. The study is organized as follows.In section 2, MOOPs, the concept of Pareto and performance metrics are presented.In section 3, AAA is summarized.The MOAAA is explained in detail in section 4. Section 5 shows the results and performance analysis of the algorithms used on the problems.Finally, section 6 details conclusion and recommendations for further studies. The major addition of the research The MOAAA is a recently proposed technique for the solution of MOOPs.The MOAAA was first tested on unconstrained MOOPs and produced successful results.EPSILON and iv. IGD.The obtained metric results show that the MOAAA is generally superior to the comparison algorithms. 2 Multi-objective optimization, the pareto theorem and performance metrics Multi-objective optimization problems MOOPs are mathematically defined as follows.[28,47]: Where is the number of functions, is the number of inequality constraints, is the number of equality constraints, and is the number of decision variables.Also, assigns the ℎ inequality constraint, ℎ assigns the ℎ equality constraint, is a candidate agents in the search area, is lower bound and is upper bound of the ℎ decision variable. Pareto theorem In the MOOPs, since functions are generally inversely proportional to each other, numerous solutions that produce different values for different objective functions are formed.Researchers generally use the Pareto Theorem to determine the best ones by comparing these solutions.Where is the ideal set of solutions -Those inside the search space that do not violate the problem constraints-and , ∈ , the Pareto Theorem consists of the following 4 rules [47]: 1. Pareto-dominance: If solution A is not poorer than solution B for any objective and is better at least in one objective, it dominates solution B and it is denoted as ≺ .For a minimization problem, the mathematical notation of this rule is given in Equation (2). 
If solutions A and B produce better values than each other in any objective, these are named as non-dominated solutions. 2. Pareto-optimal (PO): If no element in Q dominates solution A, it means that A is a PO solution. 3. Pareto-optimal-set (PS): A set of position vectors in the search area of PO solutions in Q. 4. Pareto-optimal-front (PF): A set of position vectors in the objective space of solutions in PS. Performance metrics In MOO, Pareto-optimal-fronts generated by the algorithms (Pareto front-estimated, ) are expected to ideally estimate the true Pareto-optimal-fronts (Pareto-front-true, ).The mathematical formulas of the performance metrics are given below [47], [48]. Artificial algae algorithm (AAA) The AAA was inspired by the behaviors of the real algae, such as turning towards the source of light, growth by photosynthesis, reproduction by mitosis after reaching a sufficient size and adaptation to medium for survival.AAA was initially applied to solve single-objective problems and achieved quite successful results.In Figure 1, the main steps of the AAA are given; detailed information about the AAA can be obtained from [50]. Multi-objective artificial algae algorithm (MOAAA) New multi-objective algorithms are proposed by rearranging metaheuristic algorithms that succeeded in the field of singleobjective optimization using suitable strategies (Pareto-based, decomposition-based etc.).Unfortunately, implementation of these strategies is not sufficient for the success of the new algorithms because, while single-objective algorithms are intended to find a single point that produces the best solution, multi-objective algorithms are intended to find the best set of solutions (Pareto-front).Therefore, the ability of singleobjective algorithms to distribute solutions needs to be improved.The arrangements for using the artificial algae algorithms in solving multi-objective problems are explained below. Non-domination rank & Crowding-distance strategies Non-domination rank (NDR): Non-dominated solutions were mentioned while explaining Pareto-dominance in section 2.2.When non-domination rank (NDR) strategy is used with NSGA-II [3], each set consisting of non-dominated solutions is indicated by a different number.The first front (FR1) in the population contains the best solution and is called the Paretofront (PF).Selecting solutions with a small front number as the parent solution, or transferring them to the next generation, contributes to the convergence performance of the algorithm. Crowding-distance (CRD): The NDR information is not sufficient to choose a solution from two coexisting solutions in the same front.Therefore, Deb et al. proposed the CRD strategy to determine which choice of solution will have a valuable contribution to the diversity.The CRD is calculated as follows: i. The values obtained for each objective function by each solution on the same front are sorted in ascending order, ii. The CRD values of the outermost solutions are assigned as infinite, iii. The CRD values of the solutions in between are calculated for each objective function by normalizing the difference between the two closest neighboring solutions.The CRD value of a non-extreme solution, with the total of M objective functions, is calculated as in Eq. 9. In MOAAA, NDR and CRD are used in two cases: i. When the parent algal colony cells are selected using binary tournament selection, ii. 
When the best N of the 2N solutions-the main population (N) and the child solutions (N)-is transferred to the next generation at the end of each iteration. Calculation of quality ranking (QR) Real algae grow by photosynthesizing as they approach a light source.The algae that are closer to the light source will grow more because the rate of photosynthesis will increase.This is modeled in AAA as follows: the algal colony is initialized with the same sizes (Greatness) (Algorithm-1 Step 2), the sizes are increased at every iteration according to the values of Greatness and the objective function (Algorithm-1, Step 3).The values of the objective function are normalized in the calculateGreatness(Gi, f(Xi)) function.A scaler value representing the quality of the solutions is needed for the normalization, so using the objective vector in MOO is not applicable.Calculation of quality ranking (QR), which calculates the quality rankings of the solutions according to NDR and CRD values, is proposed to overcome this problem.In this calculation, QR values of the extreme solutions in the first front are assigned as 1, and other solutions are sorted in descending order according to CRD values and their QR values are increased by 1.The same procedure is repeated for all remaining solutions, where the QR value of the extreme solutions in the second front is 1 more than the highest QR value in the first front.Figure 2 gives an example of how the QR values are calculated. Figure 2. Calculation of QR value. Polynomial mutation (PM) In the original AAA, the evolutionary process and adaptation allow the failed algal colony cells to be influenced by the most successful colony cells to go to better locations.While the strategies comparing the failed solutions to the most successful one have a positive contribution to the convergence process, they have a negative effect on diversity performance in multiobjective optimization algorithms.Therefore, polynomial mutation [51], which contributes to the diversity, is added in MOAAA instead of the evolutionary process and adaptation.Main steps of the MOAAA is given in Figure 3. Experimental environment This study was carried out in the jMetal 4.5 environment, which is a multi-objective optimization software package coded in Java.While NSGA-II, PAES MOCell and IBEA algorithms used in the study were available in the jMetal package, MOVS and MOAAA were coded by the authors.Engineering design problems and the KITA problem were also coded by the authors and added to the package.Since the size of the problems was small (between 2 to 7 dimesions), the maximum function evaluation numbers (maxFES) of the algorithms were kept relatively low.All operations were repeated 50 times for 4000 maxFES.A parameter analysis study was also carried out in order to determine the optimal values of the K and le parameters used in the MOAAA for the problem set in this study.In the original AAA algorithm, parameter K was used as 2 and parameter le as 0.3.In this study, the K parameter was kept constant at 2, the le parameter was increased by 0.1 between 0.1 and 1, and 10 different MOAAA versions were run on the problem set with 50 repetitions.The results of the obtained solution sets in EPSILON metric were ranked by Friedman test.The obtained results showed that the parameter le obtained the most successful results for the value of 0.3.Secondly, the parameter le was kept constant at 0.3 and the parameter K was tested by increasing it from 1 to 5. 
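As a compact illustration of the Pareto-dominance rule of Section 2.2 and of the crowding-distance steps i-iii outlined above, a minimal Python sketch is given below; the objective vectors are placeholders for one non-dominated front of a minimization problem.

```python
import numpy as np

def dominates(a, b):
    """Pareto-dominance for minimization: `a` is no worse than `b` in every
    objective and strictly better in at least one."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return bool(np.all(a <= b) and np.any(a < b))

def crowding_distance(front):
    """Crowding distance of one non-dominated front: sort per objective,
    pin the extreme solutions to infinity, and sum the normalized gap
    between the two nearest neighbours for the interior solutions."""
    front = np.asarray(front, float)
    n, m = front.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(front[:, j])
        values = front[order, j]
        dist[order[0]] = dist[order[-1]] = np.inf
        span = values[-1] - values[0]
        if span > 0 and n > 2:
            dist[order[1:-1]] += (values[2:] - values[:-2]) / span
    return dist

# Small two-objective example
front = [(0.1, 0.9), (0.3, 0.6), (0.5, 0.4), (0.9, 0.1)]
print(dominates((0.1, 0.9), (0.2, 0.9)))   # True: first vector dominates the second
print(crowding_distance(front))            # extremes are inf, interior values are finite
```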
Obtained results showed that K parameter gives the most successful results for 2 values.The population was taken as 100 for all algorithms; other parameters are set to default values as given in Table 3.The Pareto-front-true (PFt) solutions for the problems were obtained to calculate these indicators.PFt solutions of the problems in the jMetal package were taken from the software website [53].PFt solutions of the problems coded by the authors were obtained by merging the Pareto-front-estimated (PFe) solutions that were generated by solving each problem 50 times for all algorithms and separating the non-dominated solutions (maximum 500 solutions) within these.The values obtained by the algorithms for the four performance metrics are given in Tables 4 to 7. The values are expected to be high for HV and low for the others.For readability, the two best results are highlighted in dark gray (the best) and light gray (the second best). The SPREAD metrics in Table 5 show that the MOCell is the most successful algorithm by taking first or second place in 13 (9+4) problems.The MOAAA, NSGA-II and PAES takes first or second place in 10 (4+6), 4 (1+3) and 1 (0+1) problems respectively. The IGD metrics in Table 7 show that, the MOAAA is the most successful algorithm by taking first or second place in 9 (7 + 2) problems.NSGA-II, MOCell and MOVS takes first or second place in 8 (3+5), 5 (3+2) and 5 (1+4) problems respectively.In Table 8, the Friedman test [54] results, which compare the average rankings of the algorithms, are given for each performance metric.In the Friedman test, it is desirable that the average ranking is high for HV and low for others.According to the results, the MOAAA has the best ranking for all metrics except the SPREAD metric.The MOAAA has the second-best ranking for the SPREAD metric.The NSGA-II has the secondbest ranking of all the metrics, except for SPREAD.The MOCell has the best ranking for SPREAD. Conclusions and recommendations In this study, the performance of the MOAAA, a recently proposed multi-objective optimization algorithm, has been tested for constrained benchmarks and engineering design problems.The test set consisted of 14 well-known problems. The values obtained for HV, SPREAD, EPSILON and IGD from the MOAAA test set are compared with the well-known NSGA-II, PAES, MOCell, IBEA algorithms and the recently proposed MOVS algorithms.When the Friedman test, which compares the average rankings of the algorithms was applied, it was observed that MOAAA had the best ranking of all metrics, except for SPREAD.Furthermore, when the Pareto fronts and the boxplots were analyzed, it is seen that MOAAA was a consistent and stable algorithm that successfully estimated the Pareto fronts.Finally, the Wilcoxon rank sum test showed that MOAAA is a unique algorithm that produces statistically significant results different to the compared algorithms.The results show that MOAAA is an alternative method that generates successful results in solving real-world multiobjective problems. In further studies, researchers could suggest modifications to enhance the distribution performance-SPREAD-of the MOAAA, or use MOAAA in solving discrete, dynamic or hybrid multiobjective real-world problems. Author contribution statement In this study, ÖZKIŞ focused on forming the idea, conducting experimental studies and evaluating the results; BABALIK, on the other hand, contributed to the review of the literature, spelling and checking the article in terms of content. Figure 1 . 
Figure 1.Main steps of the AAA. , SPREAD, EPSILON and HV quality indicators were used to compare the performances of the algorithms.Table 3. Parameter values of algorithms used in runs. Distribution of over .The quality of the algorithms was determined by comparing the solutions they produced.When the solutions produced by two different algorithms are similar, it is not possible to distinguish the better one by observation.Therefore, researchers have developed mathematical performance metrics that calculate convergence and diversity of the solutions.Some of these metrics calculate either convergence or diversity, while others calculate both.The inverted generational distance (IGD), Hypervolume (HV), EPSILON and SPREAD metrics used in this study are explained below: ]: It evaluates the quality of distribution of the objective function vectors in .It is calculated by using the distance ( (, )) of the vectors in to each other and to extreme solution vectors ( 1 , 2 , … , ) in .It is used to compute the diversity performance of a , • EPSILON[48]: It evaluates the minimum distance which is needed for converting every solution in with a view to it is able to dominate the of the problem, • IGD [48, 49]: It is used to measure the average from to . 5 Results and performance analysis 5.1 The test set [52]test set consists of 14 different multi-objective optimization problems: 7 benchmarks and 7 engineering designs.While twelve of these problems have various constraints, 2 of them are unconstrained problems.The number of objective, constraint and decision variables of the problem set are given in Table2.The mathematical formulas of the problems are given in [1],[11],[27],[28],[52]. Table 2 . Benchmarks and engineering desing problems. Table 4 . HV metrics of the algorithms. Table 5 . SPREAD metrics of the algorithms. Table 6 . EPSILON metrics of the algorithms. Table 7 . IGD metrics of the algorithms. B: Best.S: Second best. Table 8 . 12e average rankings of the algorithms for metrics.The Wilcoxon's rank sum test was applied at a 95% confidence level to show whether the metric values obtained by the MOAAA are statistically different to the other algorithms or not.If the p (probability) value is smaller than 0.05, it is denoted by "+" and shows that MOAAA statistically differs from the other algorithms.The Wilcoxon rank sum test results in Tables 9 to12show that results produced by the MOAAA are statistically different compared to the other algorithms.Estimated Pareto fronts (PFe) obtained by the algorithms are compared with the real Pareto fronts (PFt) in Figure4.When the figures are examined, it is observed that the proposed MOAAA generally estimates the PFt more successfully than the other algorithms.The box plot showing the results of the problems executed 50 times for each metric is given in Figure5.When the box plot is examined, it is seen that MOAAA is a successful algorithm that produces robust results in solving the multi-objective optimization problems in the test set. Table 9 . Wilcoxon's rank sum test results for HV metric. Table 10 . Wilcoxon's rank sum test results for SPREAD metric. Table 11 . Wilcoxon's rank sum test results for EPSILON metric. Table 12 . Wilcoxon's rank sum test results for IGD metric.
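The statistical comparison used in the results section, a Friedman ranking over matched runs followed by pairwise Wilcoxon rank-sum tests at the 95% confidence level, can be reproduced with standard SciPy routines as sketched below. The metric arrays are synthetic placeholders standing in for the 50-run EPSILON values of each algorithm.

```python
import numpy as np
from scipy.stats import friedmanchisquare, ranksums

rng = np.random.default_rng(0)
# Placeholder EPSILON-metric values over 50 independent runs for three algorithms
moaaa  = rng.normal(0.050, 0.01, 50)
nsga2  = rng.normal(0.060, 0.01, 50)
mocell = rng.normal(0.065, 0.01, 50)

# Friedman test across the matched runs (lower metric values are better here)
stat, p = friedmanchisquare(moaaa, nsga2, mocell)
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.3g}")

# Pairwise Wilcoxon rank-sum tests; '+' marks a statistically significant difference
for name, other in (("NSGA-II", nsga2), ("MOCell", mocell)):
    _, p_val = ranksums(moaaa, other)
    flag = "+" if p_val < 0.05 else "-"
    print(f"MOAAA vs {name}: p = {p_val:.3g} ({flag})")
```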
3,707.2
2023-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Super-virial Hot Phase in Milky Way Circumgalactic Medium: Further Evidences Recent discoveries of a super-virial hot phase of the Milky Way circumgalactic medium (CGM) has launched new questions regarding the multi-phase structure of the CGM around the Galaxy. We use 1.05 Ms of archival Chandra/HETG observations to characterize highly ionized metal absorption at z=0 along the line of sight of the quasar NGC 3783. We detect two distinct temperature phases with T$_1 = 5.83^{+0.15}_{-0.07}$ K, warm-hot virial temperature, and T$_2=6.61^{+0.12}_{-0.06}$ K, hot super-virial temperature. The super-virial hot phase coexisting with the warm-hot virial phase has been detected in absorption along only two other sightlines and in one stacking analysis. There is scatter in temperature of the hot as well as warm-hot gas. Similar to previous observations, we detect super-solar abundance ratios of metals in the hot phase, with a Ne/O ratio 2$\sigma$ above solar mixtures. These new detections continue the mystery of the mechanism behind the super-virial hot phase, but provide evidence that this is a true property of the CGM rather than an isolated observation. The super-virial CGM could hold the key to understanding the physical and chemical history of the Milky Way. INTRODUCTION The circumgalactic medium (CGM) plays an important role in the structure and evolution of spiral galaxies, surrounding the stellar and gaseous components of the galactic disk (Tumlinson et al. 2017).As the gas from the intergalactic medium (IGM) accretes onto the galaxy, it is shock heated to the virial temperature of the galaxy (Oppenheimer et al. 2016).Stellar and AGN (active galactic nucleus) feedback processes expel gas from the disk of the galaxy and into the CGM, bounded by the galaxy's virial radius, or beyond to the IGM.These mechanisms enrich, excite, and mix the CGM, where the gas cools and falls back down onto the disk of the galaxy, providing new material for star formation to take place.Understanding these processes, the global structure, and the interactions between the galaxy, the CGM, and the IGM has become a topic of great discussion in observational (Tumlinson et al. 2017;Mathur 2022) and theoretical (Oppenheimer et al. 2016;Corlies & Schiminovich 2016) work and the field has shown great development. Extending out to the Milky Way's virial radius of about 220 kpc, the CGM contains a considerable amount of mass and may account for the missing baryons and metals in the Galaxy (Gupta et al. 2012).The majority of the matter in the CGM is shown to be in the warm-hot phase at the virial temperature of ∼10 6 K, and it is diffuse, extended, massive, and anisotropic (Snowden et al. 2000;Henley et al. 2010; ★ E-mail<EMAIL_ADDRESS>& Shelton 2013;Gupta et al. 2012Gupta et al. , 2014Gupta et al. , 2017;;Nicastro et al. 2016;Gatuzz & Churazov 2018;Kaaret et al. 2020). More recent discoveries revealed a hotter, super-virial temperature phase along several different lines of sight in both X-ray absorption (Das et al. 2019a(Das et al. , 2021;;Lara-DI et al. 2023), and X-ray emission (Das et al. 2019b;Bhattacharyya et al. 2022;Bluem et al. 2022;Gupta et al. 2021Gupta et al. , 2023) ) with temperatures up to ∼10 7 K. Simultaneous analysis in emission and absorption yield a multi-phase model with temperature disagreement toward the same direction (Das et al. 2019a(Das et al. ,b, 2021;;Bhattacharyya et al. 
2022).This indicates that absorption and emission spectroscopy probe different regions of the CGM, with emission probing denser regions and absorption probing more diffuse gas.Extensive all-sky emission analysis has shown that the hot phase is universal, seen all across the Galaxy (Bluem et al. 2022;Gupta et al. 2023).However, we cannot assume that the ubiquity of this phase in emission can be similarly applied to absorption.The super-virial hot phase has only been detected in absorption along two individual sightlines (Das et al. 2019a(Das et al. , 2021) ) and one stacking analysis (Lara-DI et al. 2023).The stack of observations indicates a universality of the super-virial phase, but we do not yet have enough information to claim the true distribution of the highly-ionized CGM.Thus, we aim to answer this question by adding to the collection of absorption observations in a new direction and drawing a more clear picture for the homogeneity or anisotropy of the super-virial hot phase. In this paper we analyze CGM absorption towards NGC 3783, a bright quasar away from the direction of the Galactic Center ( = 287.46• , = +22.95• ).The previous single-sightline CGM absorption papers (Das et al. 2019a(Das et al. , 2021) ) used featureless blazars as their background continuum, but NGC 3783 is a Seyfert galaxy meaning that is has a more complicated spectrum of its own (Krongold et al. 2003).Due to the source's low redshift of z=0.0097, the intrinsic warm absorbers (WA) features are very close to the CGM absorption lines that we are interested in observing.The signal from the WAs is significantly stronger and broader than the CGM signal, making it challenging to separate the two.However, the spectral resolution of Chandra gratings is sufficiently high such that we can distinguish the WA features from the z=0 lines, allowing us to probe the Milky Way CGM.Section 2 of this paper outlines the data analysis of grating spectra from Chandra and XMM-Newton, revealing the multi-phase properties of the highly ionized CGM along this line of sight toward NGC 3783.Section 3 outlines our initial motivation for the twophase model used in Section 4, which describes the spectral modeling conducted to determine the physical and chemical properties of the distinct phases.Section 5 discusses the implications of our analysis, the remaining open questions, and a prediction for the upcoming XRISM mission.We finally summarize our results in Section 6. Chandra NGC 3783 is a bright quasar that has been repeatedly observed by X-ray telescopes.We analyze the archival Chandra data to study the z=0 Milky Way CGM absorption of the quasar emission in the wavelength range of 4-22 Å, probing absorption lines of H-like and He-like ions of sulfur (S), silicon (Si), magnesium (Mg), neon (Ne), and oxygen (O). Chandra Data Reduction We extract the data necessary for this analysis from the Chandra public data archive, taken with the high energy transmission grating (HETG) spectra on ACIS-S.Of the ten available ACIS-HETG archived observations of NGC 3783, we choose to include eight 1 .The two omitted observations were taken while NGC 3783 was obscured in 2016 (Kaastra et al. 2018), decreasing the signal-to-noise (S/N) to a value below the required level for this analysis. 
We follow the standard data reduction tools and threads for grating spectra in CIAO 2 (Chandra Interactive Analysis of Observations).This produces a source spectrum, background spectrum, response matrix (RMF) and auxiliary response file (ARF) for each ObsID used.We consider only the first-order spectrum of the mediumenergy grating (MEG) arm of HETG, combining the positive and the negative first order spectra of all ObsIDs using the command combine_grating_spectra.We then combine the resulting positive stack and negative stack spectra to obtain one spectrum representing the sum of the first-order spectra of eight observations.The total exposure time for these observations is 1.05Ms. All relevant spectral analysis is conducted with Xspec in Heasoft 6.30.1 3 . Line Fitting The Chandra observations of NGC 3783 span 13 years of observation, making global continuum fitting difficult due to the flux variations and the presence of the warm absorbers in the source over that time.Instead, we simplify our analysis by fitting a local continuum around each absorption line of interest, listed in Table 1, within a region of about ±0.25 Å around the expected central wavelength.Each local continuum is fit with a simple power law, with the normalization and the photon index as free parameters.We do not include a commonly used neutral absorber from the disk of the Galaxy (TBabs in Xspec) in our model because there is no significant contribution to the continuum within the small range of wavelengths we consider.We place no constraints on either power law parameter to begin with, allowing Xspec to choose the best values.The presence of a strong absorption line pulls the continuum down to nonphysical levels when all parameters are free to vary.Thus we fix the photon index to the best fit value while the normalization is adjusted by eye to fit the local continuum more accurately. After determining the continuum parameters, we add a Gaussian absorption profile to the model with a negative normalization.We set a variable central wavelength for each of the lines, within a small range of possible values due to the presence of nearby strong quasar warm absorber features.We also fix the Gaussian width to be 10 −5 Å because these lines are expected to be unresolved by Chandra/HETG. The resulting best fit equivalent width (EW) of each absorption line is shown in Table 1 and the Gaussian profiles are shown in Figure 1.Several of the absorption lines are noticeably shifted from their expected wavelength, particularly Ne ix K and O viii, shown by the vertical dashed line in each panel in Figure 1.We assume this to be insignificant due to the fact that the observed wavelength shift is within ∼1 MEG wavelength resolution element of 23 mÅ. We detect z=0 absorption lines of O vii K, O vii K, O viii K, Ne ix K, and Ne x K with 4.2, 2.3, 2.5, 4.1, and 4.5 significance respectively.We also detect Ne ix K with 2.7 significance, but there is some contamination from Fe L-shell features from the intrinsic WA within 0.04 Å of the central Ne ix K wavelength.For this reason, we choose not to include it in our subsequent analysis.However, our detection is included in Figure 1.We provide 3 upper limits on the EW of S xvi K, Si xiv K, Si xiii K, Mg xii K, and Mg xi K, since they were not significantly detected in this data set (<2).This high-significance detection of Ne x is strong evidence in itself for the presence of super-virial hot gas. 
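As a simplified, non-Xspec illustration of the local continuum plus Gaussian absorption-line fitting described above, the sketch below fits a power-law continuum with a negative Gaussian and converts the result into an equivalent width. The wavelength grid, flux values, and free line width are illustrative assumptions; in the actual analysis the Gaussian width and photon index are held fixed and the fit is performed in Xspec.

```python
import numpy as np
from scipy.optimize import curve_fit

def line_model(lam, norm, slope, amp, lam0, sigma):
    """Local power-law continuum with a Gaussian absorption line subtracted."""
    continuum = norm * lam ** (-slope)
    gauss = amp * np.exp(-0.5 * ((lam - lam0) / sigma) ** 2)
    return continuum - gauss

def equivalent_width(norm, slope, amp, lam0, sigma):
    """EW in the same units as lam: Gaussian area divided by the continuum level."""
    cont_at_line = norm * lam0 ** (-slope)
    return amp * sigma * np.sqrt(2.0 * np.pi) / cont_at_line

# Placeholder spectrum around the Ne x rest wavelength (about 12.134 A)
rng = np.random.default_rng(2)
lam = np.linspace(11.9, 12.35, 200)
flux = line_model(lam, 1.0, 1.2, 0.08, 12.134, 0.01) + rng.normal(0, 0.005, lam.size)

p0 = (1.0, 1.0, 0.05, 12.13, 0.01)                 # rough starting guesses
popt, _ = curve_fit(line_model, lam, flux, p0=p0)
print(f"EW = {equivalent_width(*popt) * 1e3:.1f} mA")
```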
We are able to isolate only the Milky Way CGM absorption for fitting and modeling due to the high spectral resolution of Chandra. Figure 2 shows the separation of the z=0 CGM absorption line from the quasar warm absorber (WA) line for Ne x.The source NGC 3783 is at z=0.0097, but the Ne x WA line is blueshifted from the quasar to due to the outflow velocity (Krongold et al. 2003), with the observed WA redshift z=0.0075.This results in a wavelength separation of 0.09 Å between the z=0 line and the intrinsic WA line.All other lines are similarly separated, allowing us to minimize the contamination from the WA features in our analysis, even for such a small redshift.Any other possible contamination from other WA lines (i.e.Fe L-shell or Ca xvi transitions) is negligible, except in the case of Ne ix K as discussed above. O vii K was previously detected in Gupta et al. (2012) in their analysis of the same Chandra archival data of NGC 3783 to study the virial warm-hot gas of the Milky Way CGM.Their measured EW for this lines is 14.4 ± 2.5 mÅ while our measurement in this analysis gives an EW of 19.79 +4.03 −4.70 .The difference in detection significance is likely due to our choice of a simple background continuum while Several of the lines are redshifted or blueshifted from their expected rest wavelengths, rest (vertical dashed line), but the offset is consistent with rest within one resolution element.Ne ix K is detected, but is contaminated by a Fe L-shell WA feature at ∼ = 11.51,making it difficult to model in our later analysis.The equivalent widths and implied column densities of each of the lines are shown in Table 1.Gupta et al. (2012) chose a more complicated model to fit the continuum.However, there is still agreement between the two values within 1.The other lines detected in Gupta et al. ( 2012) also shows agreement with our own measurements.Fang et al. (2015) analyzed XMM-Newton observations of a collection of quasars to study O vii K CGM absorption, obtaining an EW of 23.30 ± 6.65 mÅ and 19.82 ± 6.61 mÅ with two different evaluation methods along the NGC 3783 sightline.Our results remain consistent with these previous detections, providing more agreement and confidence in our new results. XMM-Newton We briefly look at the XMM-Newton Reflection Grating Spectrometer (RGS) observations of NGC 3783 from the XMM-Newton Science Archive by stacking the observations of the unobscured quasar with a sufficient exposure time for the best possible S/N4 .We reduce the raw data with XMM-Newton Scientific Analysis System (SAS) and stack the resulting RGS1 and RGS2 spectra using the command rgscombine.The resulting combined observations have a total exposure time of 316 ks.We fit the O vii K line with a two-Gaussian model, one at the O vii K wavelength, and one centered at 21.72 Å, consistent with the model in Fang et al. (2015) with EW = 23.81+7.21 −6.26 mÅ.The spectrum around the O vii K line is shown in the left panel of Figure 3, normalized by the power law + Gaussian continuum.The S/N is still too low to make any new detections, so we instead overlay our Chandra best fit on the XMM-Newton data for the Ne x line, shown in the right panel of Figure 3.This model has all the same parameters as the Chandra model, but with the power law normalization adjusted to account for continuum variability between the two observations.We see that the Ne x line strength in XMM-Newton is similar to that in the Chandra observations. 
Although we were unable to confirm our other detections or fit any new absorption lines, there is agreement between the two data sets in the existence of highly ionized metal absorption in the CGM.The O vii K model is an independent fit to the XMM-Newton data, with a power law + Gaussian (centered at 21.72 Å) continuum consistent with Fang et al. (2015).The Ne x model is the best fit Chandra model in Figure 1 with an adjusted continuum normalization.The S/N in the XMM-Newton data is significantly lower than the Chandra data, so we do not expect an independent measurement of Ne x, or other absorption lines.However, the O vii K Chandra model is consistent with the XMM-Newton data and the previous measurements. MODEL-INDEPENDENT RESULTS With our detections of highly ionized Ne and O absorption, we want to estimate the temperature of the gas containing these metals in comparison with the known warm-hot virial phase of the Milky Way CGM.The temperature can be determined based on the EW and column densities (N) measured from the spectroscopic analysis in Section 2. We use the linear part of the curve of growth to determine the column densities of all of the unsaturated detected lines, given by Equation 1. where is the column density of the ion in cm −2 , is the equivalent width of the -th ion in cm, is the dimensionless oscillator strength of the given transition from the -th to -th atomic level, and is the wavelength of the absorption line in cm.The column densities calculated in this manner for each absorption line are shown in Table 1.Based on our measured EWs for O vii and Ne ix lines, we see that O vii K and Ne ix K are both saturated.When calculating the column densities for these lines, we combine the information from the K and K lines as previously done in Williams et al. (2005), Gupta et al. (2012), Das et al. (2021), andMathur (2022).These values are quoted in Table 1.This method gives column densities larger than would be calculated with the linear curve of growth analysis. In combination with the ionization fractions of each of the ions as a function of temperature ( ion (T), taken from CHIANTI Atomic Database 5 (Dere et al. 1997;Del Zanna et al. 2021)), we determine the approximate temperatures of the gas containing these metals.This can only be done for the cases where we have detections of two ions of the same species (O vii, O viii, Ne ix, and Ne x).The ratio of the ionization fractions of the same species as a function of temperature is equal to the ratio of the column densities previously calculated.For example: 5 https://www.chiantidatabase.org/ For simplicity, we assume that all of the Ne ix and Ne x are produced by one temperature component while all of the O vii and O viii are produced by another.Previous observations show that the ions are mixed between phases (Das et al. 2019a(Das et al. , 2021)), but we follow this simple assumption solely for the purpose of informing our formal model in Section 4. 
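The column-density and temperature estimates described above can be sketched as follows, using the optically thin curve-of-growth relation equivalent to Equation 1 and a simple ratio match against tabulated ionization fractions. The oscillator strength and wavelength shown are standard atomic values quoted approximately, and the ionization-fraction curves are illustrative stand-ins; in the paper the fractions come from the CHIANTI database.

```python
import numpy as np

CURVE_OF_GROWTH_CONST = 1.13e20   # cm^-2, for EW and wavelength in Angstrom

def column_density(ew_mA, f_osc, lam_A):
    """Optically thin (linear curve-of-growth) column density:
    N = 1.13e20 * EW[A] / (f * lambda[A]^2)  [cm^-2]."""
    return CURVE_OF_GROWTH_CONST * (ew_mA * 1e-3) / (f_osc * lam_A ** 2)

# Example: an O viii line near 18.97 A with f ~ 0.42 and a 10 mA equivalent width
print(f"N(O viii) ~ {column_density(10.0, 0.42, 18.97):.2e} cm^-2")

def temperature_from_ratio(n_low, n_high, logT_grid, frac_low, frac_high):
    """Find the log T at which the tabulated ionization-fraction ratio matches
    the measured column-density ratio, N_low/N_high = f_low(T)/f_high(T)."""
    model_ratio = np.asarray(frac_low) / np.asarray(frac_high)
    target = n_low / n_high
    idx = np.argmin(np.abs(np.log10(model_ratio) - np.log10(target)))
    return logT_grid[idx]

# Placeholder ionization-fraction curves (in practice interpolated from CHIANTI)
logT = np.linspace(6.0, 7.0, 101)
f_ne9  = np.exp(-((logT - 6.35) / 0.25) ** 2)
f_ne10 = np.exp(-((logT - 6.75) / 0.25) ** 2)
print("log T ~", temperature_from_ratio(2.0e16, 1.0e16, logT, f_ne9, f_ne10))
```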
Under this assumption, we calculate the approximate temperatures of this two-phase CGM model.The O vii/O viii ratio implies log(T 1 ) = 6.23 ± 0.05 K.The Ne ix/Ne x ratio implies log(T 2 ) = 6.55 +0.04 −0.05 K.Because there is no 1 overlap in temperatures of these phases, we see that there are indeed two distinct phases along this line of sight, one near the virial temperature (O vii/O viii) and one at super-virial temperature (Ne ix/Ne x).Without modeling, there is no way to know how much each temperature component contributes to the overall EW of the lines, but this estimation gives us enough evidence to conclude that two distinct phases of the CGM are necessary to simultaneously produce the observed amount of metal absorption. With these model-independent temperature calculations, we can estimate the total column density of both Ne ix and O vii, which gives an absolute abundance ratio estimation of Ne/O = 1.24 +1.10 −0.52 , compared to a solar ratio of Ne/O = 0.23 ± 0.03.This implies the existence of Ne abundances up to 2 above the solar prescriptions of Asplund et al. (2021) along the sightline of NGC 3783. PHASE MODELING We use the hybrid-ionization model PHASE (models of collisionally ionized gas perturbed by photoionization by the meta-galactic radiation field, at a given redshift; Krongold et al. (2003), Das et al. (2019b), Das et al. (2021)) to model the detected absorption lines in Chandra.As discussed above, we do not include the Ne ix K line because the proximity and potential contamination of the Fe line does not allow for accurate modeling.The parameters in PHASE are temperature T, hydrogen column density N H , redshift, non-thermal velocity, ionization parameter, and the abundances relative to solar of various metals (including Ne and O, detected in our analysis).We set the ionization parameter in all of our models to be = 10 −3.9 , the lowest value allowed by PHASE.At this value, the photoionization is negligible, and the model is effectively a collisional ionization model only, as expected for the CGM.The non-thermal velocity is set to be zero for all absorption lines and the redshift is set based on the existing wavelength offset of the Gaussian best fit for each of the lines as discussed in Section 2.1. We choose a two-component PHASE model with a power law continuum (powerlaw * PHASE 1 * PHASE 2 ) to account for the different temperature phases determined above.Due to the low redshift of NGC 3783, the quasar warm absorbers have a small wavelength separation from the CGM absorption lines, creating added difficulty in a global PHASE fit.Additionally, since there is no appropriate global continuum fit used in this analysis as discussed above, we limit our fitted wavelength range to be small such that only the z=0 absorption line itself is included.This completely removes the effects of the warm absorbers and other emission or absorption lines that are beyond the focus of this analysis.We set the local continuum levels to be the same as the chosen parameters in the Gaussian best fit, ultimately only allowing N H , T, and Ne abundance to vary in PHASE. 
Solar Relative Abundances In the first fit, we fix the relative abundances of all metals in both of the defined PHASE components to be solar, ab(X) = 1 for all elements X.We call this our "solar" model.The only metals that are detected in our analysis are O and Ne, but PHASE considers the nondetection of the included S, Si, and Mg lines in the fitting process.Each of the lines are fit simultaneously with interdependent variable parameters.Only the local continuum and redshift are unique to the line. The resulting best-fitted parameters are listed in Table 2, with a final fit 2 /dof = 81.17/121.Most notable is the determined temperatures of log(T) 1 = 5.83 +0.15 −0.07 K and log(T) 2 = 6.61 +0.12 −0.06 K, consistent with a two-phase CGM model with a virial and super-virial phase, as expected.Including only PHASE 1 , 2 /dof = 96.61/121,and including only PHASE 2 , 2 /dof = 103.50/121,showing that a two-temperature component model is statistically the best fit to the data.With this model, PHASE fits the O lines appropriately, giving EWs consistent with our Gaussian fits.However, the model fits the Ne x absorption line poorly, with an EW of 1.85 +0.55 −0.83 mÅ, underestimating the strength of the Ne x line by about 3 from the Gaussian model.This difference indicates a non-solar mixture of metals, since even with a higher temperature, PHASE cannot replicate the amount of Ne x present under the assumption of solar abundance ratios.The full solar fit is shown with a dotted line in Figure 4.Note the underproduction of Ne x in the top left panel. Because the Ne x is underestimated in PHASE 2 with a solar model, our model suggests a non-solar Ne/O abundance ratio to account for the differences in Ne and O absorption.A variation in any other PHASE parameters is not sufficient. Non-Solar Relative Abundances Allowing super-solar abundance ratios, as necessitated by the results of the solar model, gives the "non-solar" model quoted in Table 2.We freeze the temperatures at their best-fitted values in the solar model, instead varying the metal abundances for both PHASE components.The ionization parameter, redshift, and non-thermal velocity are still fixed to be as defined in the solar model, and N H remains variable. PHASE assumes a solar absolute metallicity when determining the value of N H , but we do not measure a true metallicity along this line of sight.Therefore, we refer to all metal abundances as relative ratios.In our model, the Ne abundance varies relative to O, which is held fixed at solar metallicity.The O lines were fit well in our solar model, so we take this as a good approximation.We do not have enough information to be able to vary both ab(Ne) and ab(O) relative to solar in PHASE as these are the only detected metals. The resulting best fit has 2 /dof = 75.48/121,a significant improvement over the solar model.PHASE 1 , dependent on Ne ix, requires no change in Ne abundance, fit well with ab(Ne) 1 = 1.00 +2.49 .We fixed the abundance to be solar or above, so there is no 1 lower limit on this value.However, PHASE 2 , defined largely by the overabundant Ne x, requires ab(Ne) 2 = 5.51 +7.47 −2.84 (or [Ne/O] = 0.75 +0.38 −0.30 following Asplund et al. ( 2021)).This is ∼2 above solar abundances.When we force Ne/O to be the same between the two fit components, we get ab(Ne) = 3.59 +3.16 −1.64 , consistent within 1 with our best fit.There must be super-solar Ne abundances in the super-virial hot phase of the CGM along the line of sight of NGC 3783. 
Allowing N H , T, and ab(Ne) to vary simultaneously from our best-fit values results in parameters within 1σ of those quoted in Table 2 (the lower limit of ab(Ne) 1 is pegged at 1). Thus our choice to freeze the temperatures in PHASE 2 does not significantly impact our results. The non-solar model for each of the detected lines is shown in Figure 4, with the contributions of the PHASE components highlighted in different colors. The top panel shows the spectrum folded with the line spread function of Chandra/HETG, while the bottom panel shows the unfolded spectrum. We see that all of the Ne x and O viii absorption comes from the hot phase, while almost all of the O vii comes from the warm-hot phase. Ne ix Kα is split between both phases. Here, the Ne x line is well fit with the high Ne abundance, consistent with the Gaussian fit. The Ne ix Kβ line was not included in the PHASE-modeled data, but the resulting predicted EW for Ne ix Kβ is consistent with our Gaussian fit measurement. The contributions of the two temperature components to the EW are shown in Table 1. With this consistency, we can conclude that Ne ix Kβ is likely not contaminated by the local Fe lines, but their proximity still complicates the independent modeling. At fixed T and fixed ab(O), N H is anti-correlated with ab(Ne) in the hot phase (PHASE 2 ). The hot phase is characterized dominantly by Ne x and O viii. The O lines are sensitive to changes in N H , not to changes in ab(Ne), while Ne is sensitive to changes in both parameters. Leaving log(N H ) 2 and ab(Ne) 2 free, PHASE chooses a combination that best fits both ions. Increasing N H decreases the Ne abundance required to fit Ne x, but also increases the strength of the O viii line. We have no prior knowledge of the expected hydrogen column density of the super-virial hot phase, so we can make no distinction between the values chosen in the solar and non-solar models. If, however, N H is higher than our best-fit modeled value (still assuming fixed T and solar ab(O)), this would imply a lower Ne/O, only 1σ super-solar, than currently predicted. We find no similar correlation between the temperatures of PHASE 1 and PHASE 2 , nor between the temperature and Ne/O of PHASE 2 . DISCUSSION We find that two distinct temperature phases coexist in the Milky Way CGM along the line of sight of NGC 3783, at temperatures at and above the virial temperature, with an overabundance of Ne in only the hot phase. This is the fourth detection of the super-virial hot phase and the third detection of super-solar abundance ratios in absorption from the CGM. An in-depth discussion of the possible explanations for the super-virial temperatures and metal enrichment within the CGM has already been outlined in Das et al. (2019a) and Das et al. (2021), thus we will only reiterate the important and relevant information here.
Implications for CGM Structure We have now detected the super-virial phase in absorption on four accounts, three individual sightlines with high S/N data and one stack of low S/N data towards other sightlines.Each of these studies has detected super-virial temperatures based on the presence of different highly ionized (H-like) metals, e.g., O viii, Ne x, Si xiv, S xvi.Even with this now seemingly universal property, there is still significant scatter in temperature within the super-virial phase and the wellunderstood virial phase as depicted in Figure 5.In these observations, we see three distinct phases in temperature-sub-virial, virial, and super-virial-with almost an order of magnitude variation in the super-virial phase and about a factor of 3 scatter in the virial phase.This temperature scatter has also been observed in emission (Das et al. 2019b;Gupta et al. 2023;Bhattacharyya et al. 2022).The consistent observation indicates that regions of gas in the CGM can be much hotter than the volume-filling virial phase, but is not homogeneously mixed or heated. In a stack of 46 quasar sightlines, Lara-DI et al. (2023) detect Si xiv and S xvi associated with the super-virial hot component of the Milky Way CGM at temperatures of T ≈ 10 7.5 K (temperature determination described in Lara-DI et al. 2023, in preparation).However, they do not detect any Ne x in their resulting spectrum, which has been necessary to constrain the hot temperature in the other studies, including this one.The main reason that Ne x is detectable is because of an overabundance of Ne, increasing the strength of the faint lines.This may be indicative of inhomogeneous mixing of metals, with some regions of the CGM being more Ne-rich than others. The CGM of the Milky Way is not uniform, it is patchy in chemical composition and temperature.Galactic feedback and outflow processes cause heating and turbulence in the CGM.In turn, turbulence can affect the heating, cooling, and mixing of the CGM gas.What we observe is the complex effects of all of these interactions, which combine to create the multi-phase structure of the highly ionized CGM. There are still many open questions regarding this phase-structure and the mechanisms powering the CGM.What processes are able to sustainably heat the gas to such hot temperatures observed?We see that the virial and super-virial phases coexist along the line of sight, but where does the hotter gas reside along that line?Given the likely solar Ne/O in the warm-hot component, but significantly super-solar Ne/O in the hot phase, we speculate that the two phases might not be co-spatial, and might arise from different physical processes.For example, the hot phase may occupy the extra-planar region of the Galaxy, while the warm-hot phase may fill the extended region out to the virial radius.Of course, adding more lines of sight to this group of analysis will allow us to probe more of the hot phase structure around the CGM and explore more of the metal mixtures that exist around the Galaxy. 
This super-virial phase and non-solar chemical composition is still new.However, in the absence of non-detections of the super-virial phase, we are confident in its universal presence in the Milky Way CGM.In their analysis of 3-D hydrodynamical simulations, Vijayan & Li (2022) were able to reproduce the hot multi-phase gas in the CGM.They find that, in the case of high star formation rates, an asymmetric distribution of temperature and metallicity exists inside and outside the produced biconical outflow, which may relate to the wide range of temperatures (Figure 5) and metal ratios in our observations.The temperature that we measure along the sightline of NGC 3783 (log(T)= 6.61 regions in Vijayan & Li (2022) and is a direct result of the stellar feedback in the galaxy.The combination of theory, simulations, and observations will allow us to shed light on the processes governing the Milky Way CGM.Understanding the nature of the super-virial hot phase, and the amount of baryonic matter and metals contained in it, is important for understanding the CGM formation, and thus, galactic evolution. XRISM Prediction With the upcoming XRISM mission, new doors are opening in the field of X-ray astronomy and an understanding the capabilities of the instruments in achieving our science is necessary.XRISM could be used to study the Milky Way CGM in greater depth, helping us understand the underlying mechanisms causing the observations we have obtained thus far.With the planned observations of NGC 3783, the results of this analysis could be confirmed and expanded with the better spectral resolution and higher effective area of XRISM.We simulate XRISM/Resolve 7 eV resolution observations of NGC 3783 based on our non-solar PHASE model to explore the potential results of new observations under the same analysis. The source is inherently variable, meaning that the observed flux changes over time, influencing the quality of the received data.We simulate an observation with 200 ks of integration time at the measured flux of the Chandra data (1.72 +0.005 −0.003 × 10 −11 erg/cm 2 /s between 0.295 and 2.4 keV).The resulting spectrum is shown in Figure 6 (middle row).We bin the spectrum for visual effect, but the quality of the data at 200 ks is roughly the same as 1 Ms of Chandra data, with an EW limit of 1.82 mÅ. However, at energies below about 0.7 keV (wavelengths at which the O lines can be detected), the effective area of XRISM/Resolve is below 100 cm 2 .This means that we are not able to detect O absorption lines with the same level of significance as Chandra.The 200 ks XRISM simulation measures O vii K with an EW of 27.31 +7.41 −7.37 mÅ, which has a lower significance (3.71) and larger uncertainties than our Chandra measurement.Thus, at longer wavelengths, Chandra is preferred when compared to a shorter XRISM observation. Archival ROSAT observations of NGC 3783 obtained a flux up to 5.66 × 10 −11 erg/s/cm 2 (Salvato et al. 
2018) in the range 0.1-2.4 keV, approximately 3 times higher than our Chandra observations. This would be equivalent to observing for 3 times longer, yielding a spectrum with a higher S/N and more significant detections of the absorption lines of interest. We simulate another XRISM spectrum, this time with 600 ks of exposure time, to predict the results from a higher-flux observation, also shown in Figure 6 (bottom row). With the longer exposure time, many of the absorption lines would be detected with more significance, which can be noticed by eye in the spectrum. This would allow for more precise measurements of line strengths, metal abundances, and temperature in the CGM. We suggest that NGC 3783 be monitored over time (with Swift, for example) and observed with XRISM at a period of high flux. This would allow us to detect more of the lines predicted by PHASE and confirm the existence of the super-virial-temperature gas with different metals. Our PHASE model predicts many additional lines that could potentially be detected with as little as 200 ks of integration time. All of the lines with PHASE-predicted equivalent widths within the wavelength range of XRISM/Resolve are listed in Table A1. Detections of additional z=0 metals in the spectrum would allow us to better constrain the CGM metal mixture along this line of sight, giving us more than just Ne and O to add to our understanding of the thermal and chemical structure of the CGM. CONCLUSIONS In this paper, we present deep Chandra/HETG observations of NGC 3783 that show z=0 metal line absorption from the Milky Way CGM. • We fit the continuum around each of the absorption lines with a simple power law and add a Gaussian profile to determine the EW of each detected absorption line. We find agreement when comparing to previous analyses of Chandra (Gupta et al. 2012) and XMM-Newton (Fang et al. 2015) data of NGC 3783. • We use PHASE to model the spectrum properly and determine the temperatures of the two detected CGM phases to be log(T1) = 5.83 (+0.15/-0.07), a warm-hot virial temperature, and log(T2) = 6.61 (+0.12/-0.06), a hot super-virial temperature (T in K). • We detect a super-solar Ne abundance of ab(Ne)2 = 5.51 (+7.47/-2.84) relative to solar, or [Ne/O] = 0.75 (+0.38/-0.30), which is ∼2σ above solar, in the hot phase, and an abundance consistent with solar in the warm-hot phase. However, if the total N_H is higher than predicted, our models call for a Ne abundance closer to solar, with ab(Ne)2 ≈ 2. • We make predictions for upcoming XRISM/Resolve observations and find that the same quality of data can be achieved with XRISM in only 200 ks, compared to 1 Ms with Chandra. We also suggest that, in order to maximize CGM absorption line detections, NGC 3783 be observed in a high-flux state. In conjunction with the previous absorption observations of the Milky Way CGM hot phase, this analysis shows that the hot super-virial phase can be seen around the Galaxy and provides confidence in the multi-phase structure of the CGM. However, as discussed in Section 5, there remains variation even within the super-virial phase. Determining the underlying mechanisms behind the existence of the hot phase will require more observational and theoretical work in the development of this field. Table A1 lists the PHASE-predicted lines and their associated uncertainties. We only include those in the wavelength range of XRISM, as the Resolve instrument will provide the best opportunity to observe these lines.
The quoted column densities are the totals for each ion. They have not been calculated with Equation 1, as in Table 1, but are simply an output of PHASE. Table A1. Predicted equivalent widths and column densities for all absorption lines in our non-solar 2-component PHASE model within the wavelength range of XRISM/Resolve. The quoted EW is the total EW for each of the lines, i.e., the sum of both phase contributions. N_tot represents the total column density of the ion. Many of these lines should be detectable with XRISM when NGC 3783 is in a high-flux state. Figure 1. Best-fit Gaussian profiles for the Chandra/HETG data for each of the detected z ≈ 0 absorption lines. All spectra are folded with the line spread function of Chandra/HETG and normalized by the best-fitted local continuum. Several of the lines are redshifted or blueshifted from their expected rest wavelengths λ_rest (vertical dashed line), but the offset is consistent with rest within one resolution element. Ne ix Kα is detected, but it is contaminated by an Fe L-shell WA feature at λ ≈ 11.51 Å, making it difficult to model in our later analysis. The equivalent widths and implied column densities of each of the lines are shown in Table 1. Figure 2. Chandra/HETG spectrum around the z=0 wavelength of the Ne x line, showing the separation between the z=0 CGM absorption and the intrinsic z=0.0075 warm absorber (WA) feature of NGC 3783. The z=0 Ne x Gaussian profile is plotted in red. The wavelength separation between the CGM Ne x line and the WA line is 0.09 Å, which allows us to isolate the z=0 Ne x absorption in the Milky Way from the WA. Figure 3. Normalized XMM-Newton spectra for O vii Kα (left) and Ne x (right) with the best model plotted in red. The O vii Kα model is an independent fit to the XMM-Newton data, with a power law + Gaussian (centered at 21.72 Å) continuum consistent with Fang et al. (2015). The Ne x model is the best-fit Chandra model in Figure 1 with an adjusted continuum normalization. The S/N in the XMM-Newton data is significantly lower than in the Chandra data, so we do not expect an independent measurement of Ne x or other absorption lines. However, the O vii Kα Chandra model is consistent with the XMM-Newton data and the previous measurements. Figure 4. Non-solar PHASE models for the detected absorption lines in the Chandra spectrum. The spectrum folded with the line spread function of Chandra/HETG is shown in the top row and the unfolded spectrum is shown in the bottom row. Both spectra have been normalized by the best-fitted local continuum. The best-fitted two-T PHASE model is shown in black solid lines, with the separate contributions from the lower temperature PHASE component, PHASE 1 (red), and the higher temperature PHASE component, PHASE 2 (blue). All of the Ne x and O viii is produced by the super-virial temperature component, while all of the O vii is produced by the virial temperature component. The solar fit is also plotted with a dotted black line, showing that solar abundances poorly fit the Ne x line, indicating the necessity for a non-solar Ne/O abundance in the hot component.
Figure 5. A summary of the current absorption detections of the warm (sub-virial), warm-hot (virial), and hot (super-virial) phases of the Milky Way CGM. The width of the band represents the 1σ uncertainty (3σ for Mrk 421) on the measured temperature. The coexistence of three distinct phases of the highly ionized CGM has been observed along three individual sightlines and in a stack of 46 low-S/N sightlines. There is large scatter, across orders of magnitude in temperature, even within each phase. [1] Das et al. (2021), [2] Das et al. (2019a), [3] Lara-DI et al. 2023, in preparation. Figure 6. Simulated XRISM/Resolve 7 eV resolution observations showing predicted detections of Ne lines along the NGC 3783 line of sight. Top: the PHASE model used to simulate the XRISM spectra. Middle: 200 ks integration time. Bottom: 600 ks integration time, representing an observation at 3 times the flux of the middle panel. Both spectra are folded with the line spread function of XRISM/Resolve. An observation at a time when NGC 3783 has a higher flux will increase the S/N in the data, increasing the significance of detections and allowing us to measure new lines not currently detected in Chandra data. Table 1. Observed ions in the CGM with their corresponding rest wavelength λ_rest, rest energy E, oscillator strength f, fitted wavelength λ_obs, measured equivalent width EW, and associated column density N. EW_T1 and EW_T2 denote the equivalent width contributions of the lower and higher temperature gas phases, respectively, determined by the non-solar two-component PHASE model. The errors quoted are 1σ uncertainties for the parameter. 3σ upper limits are quoted for lines with less than 2σ detections. Table 2. PHASE best-fit model parameters and their 1σ uncertainties for all detected CGM absorption lines, with both solar and non-solar abundance ratios. The values with no quoted errors were fixed during the modeling and thus have no associated uncertainty. The EWs of the resulting absorption lines of the non-solar model are shown in Table 1.
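As a rough illustration of how such XRISM/Resolve predictions can be made, the Python sketch below estimates an n-sigma equivalent-width detection limit from the continuum counts per resolution element and converts a measured EW into an ion column density via the linear part of the curve of growth. The numerical inputs (continuum count density, resolution element, oscillator strength) are illustrative placeholders and are not values from this analysis.

```python
import numpy as np

def ew_detection_limit(counts_per_angstrom, resolution_A, n_sigma=3.0):
    """Approximate n-sigma minimum detectable EW (Angstrom) for an unresolved
    absorption line, assuming Poisson noise on the local continuum:
    EW_min ~ n_sigma * dlambda / sqrt(counts per resolution element)."""
    counts_per_res_el = counts_per_angstrom * resolution_A
    return n_sigma * resolution_A / np.sqrt(counts_per_res_el)

def column_density(ew_A, wavelength_A, f_osc):
    """Linear curve-of-growth conversion, N = 1.13e20 * EW / (f * lambda^2),
    with EW and lambda in Angstrom and N in cm^-2."""
    return 1.13e20 * ew_A / (f_osc * wavelength_A**2)

# Illustrative example near the Ne x line (~12.13 A); a 7 eV resolution at
# ~1 keV corresponds to roughly 0.08 A.
dlam = 12.13 * 7.0 / 1022.0                     # resolution element in Angstrom
ew_lim = ew_detection_limit(counts_per_angstrom=2.0e4, resolution_A=dlam)
print(f"3-sigma EW limit ~ {ew_lim * 1e3:.2f} mA")
# Hydrogenic Ly-alpha oscillator strength ~0.42 used only as an example value.
print(f"N(Ne x) for a 5 mA line ~ {column_density(5e-3, 12.13, 0.42):.2e} cm^-2")
```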
9,469.6
2023-11-13T00:00:00.000
[ "Physics" ]
Dosimetric evaluation of synthetic CT image generated using a neural network for MR-only brain radiotherapy Abstract Purpose and background The magnetic resonance (MR)-only radiotherapy workflow is motivated by the increasing use of MR images for the identification and delineation of tumors, while the fast generation of a synthetic computed tomography (sCT) image from an MR image for dose calculation remains one of the key challenges to the workflow. This study aimed to develop a neural network to generate sCT for the brain site and to evaluate its dosimetric accuracy. Materials and methods A generative adversarial network (GAN) was developed to translate T1-weighted MRI to sCT. First, a "U-net" shaped encoder-decoder network with some image translation-specific modifications was trained to generate sCT; then the discriminator network was adversarially trained to distinguish between synthetic and real CT images. We enrolled 37 brain cancer patients who underwent both CT and MRI for treatment-position simulation. Twenty-seven patients' paired 2D T1-weighted MR images and rigidly registered CT images were used to train the GAN model, and the remaining 10 patients' pairs were used to evaluate the model performance through the metric of mean absolute error. Furthermore, the clinical Volumetric Modulated Arc Therapy (VMAT) plan was calculated on both the sCT and the real CT, followed by gamma analysis and comparison of dose-volume histograms. Results On average, only 15 s were needed to generate one sCT from one T1-weighted MRI. The mean absolute error between synthetic and real CT was 60.52 ± 13.32 Hounsfield units over 5-fold cross validation. For the dose distributions on sCT and CT, the average pass rates of gamma analysis using the 3%/3 mm and 2%/2 mm criteria were 99.76% and 97.25% over the testing patients, respectively. For the dose-volume histogram parameters of both the target and the organs at risk, no significant differences were found between the two plans. Conclusion The GAN model can generate synthetic CT from a single MRI sequence within seconds, and state-of-the-art accuracy of CT number and dosimetry was achieved. | INTRODUCTION The traditional radiotherapy workflow relies on computed tomography (CT) images for anatomy acquisition, tumor/organ delineation, patient positioning, and dose calculation. In the past two decades, magnetic resonance imaging (MRI), as a complementary modality to CT, has been increasingly used in clinical routine because it provides superior soft-tissue contrast, especially for the brain and pelvis. In addition, the workflow in which CT images are replaced with MRI at each step of the radiotherapy chain, the so-called MR-only workflow, is of growing interest. The MR-only workflow is reported to be advantageous, as it can avoid the registration error between CT and MRI, reduce inter- and intra-observer contouring variation, lower the cost of radiotherapy, improve radiotherapy accuracy, reduce the patient exposure to ionizing radiation, 1-10 etc. The key challenge to the MR-only workflow is to extract electron density information from MRI for radiation dose calculation. Unlike the CT number, which can be directly converted to electron density, the pixel value in MRI only represents the magnetic relaxation time of tissue, which has no direct correlation with electron density. However, the tissue relaxation time can be converted first into a CT number and then into electron density, and the conversions can be categorized into three approaches.
11 The first approach, in general, is to assign bulk densities to different tissues in the MRI, which can be inaccurate and labor-intensive because of the manual contouring of tissues. The second approach is to establish a CT number for each MRI voxel by aligning the voxel to an atlas with a known correlation between MRI voxel location and the corresponding CT number. The third approach is pixel-wise conversion, which establishes a correlation between the pixel values of MRI and CT by training through machine learning. Among these approaches, neural networks, as a specific method of machine learning, stand out for their high accuracy and automation, and they are considered a promising method for a clinical MRI-only radiotherapy workflow. Deep convolutional neural networks (DCNNs) have been reported successful in a wide range of medical applications. Several studies utilized convolutional neural networks to perform the synthesis of CT from a variety of MRI sequences. Han 12 and Liu 13 applied u-net 14 based networks to convert MRI to sCT pixel by pixel. The encoder-decoder architecture in their networks enables the learning of a hierarchy of features from MRI through a downsampling process; those features at various resolutions are then combined to generate a high-resolution CT image through an upsampling process. Besides, the generative adversarial network (GAN) tailored for image-to-image translation has been applied to the translation of MRI to CT. [15][16][17][18][19] The U-net based networks contain only the generator of the CT image, while the GAN contains an additional adversarial network, the discriminator, which competes with the generator to distinguish generated CT images from real CT. Although the deep learning-based methods mentioned above have achieved state-of-the-art performance, there are still many factors, for example, MRI sequence, registration method, and loss function, worth investigating, since many of them can influence the results. In this study, we aimed to develop a GAN model to translate a clinical standard MRI to synthetic CT, and to evaluate its accuracy in terms of image pixel values and clinical radiotherapy dosimetry. 2.A | Patient data collection Thirty-seven brain cancer patients who had undergone external radiotherapy from July 2019 to April 2020 in our department were enrolled in this study. 2.B | Generative adversarial network A conditional generative adversarial network similar to "pix2pix" was adopted here. Two networks, namely a generator and a discriminator, comprise the model. The paired MRI and CT images of each patient were fed into the generator to learn the mapping from MRI to CT, so that the generator can generate an sCT from an input MRI. The discriminator was then trained to compete with the generator and distinguish the sCT from the corresponding real CT as well as possible. Through the training of the generator and the adversarial training of the discriminator, the network converges to its best performance. The detailed architectures of the generator and discriminator are illustrated in Figs. 1(a) and 1(b), respectively. We adopted a "U-net" shaped encoder-decoder network as the generator. For the encoder, we have five convolutional layers with a filter size of 4 × 4 and a stride of 2 to downsample the input 2D MRI slices from size 512 × 512 to 16 × 16. Each convolutional layer was followed by batch normalization and a leaky rectified linear unit (Leaky ReLU).
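To make the encoder description concrete, the following is a minimal sketch of one such downsampling step, written in PyTorch purely for illustration; the framework and channel widths used in the original work are not specified in the text, so the numbers below are assumptions.

```python
import torch.nn as nn

def down_block(in_ch, out_ch):
    """One encoder step: 4x4 strided convolution, batch norm, LeakyReLU (halves H and W)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

# Illustrative channel progression; only the five-layer, stride-2 structure
# follows the description in the text.
encoder = nn.Sequential(
    down_block(1, 64),     # 512 -> 256
    down_block(64, 128),   # 256 -> 128
    down_block(128, 256),  # 128 -> 64
    down_block(256, 512),  # 64  -> 32
    down_block(512, 512),  # 32  -> 16
)
```

Stacking five of these stride-2, 4 × 4 convolution blocks reduces a 512 × 512 input to 16 × 16, matching the description above.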
For the decoder, a mirrored upsampling process with skip connections to the corresponding encoder layers decodes the low-resolution feature maps into a 2D synthetic CT. The features from each encoder layer were copied and concatenated with the corresponding features before each deconvolution layer except the first and last one. Dropout layers were applied after the first three batch normalizations in the decoder network to improve network generalization. 20 Compared to the original U-Net, the total number of convolutional layers was reduced from 19 to 11. Another modification to U-Net is that all pooling and unpooling layers were replaced by convolutional and deconvolutional layers, because fractionally strided convolutional layers can be trained to produce dense high-resolution feature maps, while unpooling layers use memorized pooling indices from max-pooling layers to produce sparse high-resolution feature maps. 21 The discriminator network consisted of five convolutional layers with a filter size of 4 × 4 and a stride of 2. The concatenation of the input MRI and the synthetic or real CT was fed to the first convolutional layer. A leaky ReLU followed each convolutional layer except the last one, which was followed by a sigmoid function to output a score map of shape 1 × 32 × 512 for distinguishing between synthetic CT and real CT. The loss function used in the generator network was the mean absolute error (MAE), as defined in Section 2.D, representing the pixel-wise difference between synthetic and real CT. For the discriminator, we adopted the least-squares loss function, since it strongly penalizes fake samples far from the decision boundary and improves the stability of the learning process. 22 The loss term can be expressed as L_D = 1/2 E_{x,y}[(D(x, y) - 1)^2] + 1/2 E_y[D(G(y), y)^2], where D and G represent the discriminator and generator, respectively, x, y represent the pair of real CT and MRI, and G(y) is the output of the generator, namely the synthetic CT. The network weights were initialized using the Xavier scheme 23 and updated using the ADAM algorithm 24 with a fixed learning rate of 0.0002. The batch size was set to 20 to make the best use of video memory, and around 32,000 steps (720 epochs) were needed for each training run to converge. The training was performed on a 64-bit Windows workstation with an Intel Core i7 CPU and an NVIDIA GeForce GTX Titan X graphics card with 12 GB of RAM. 2.D | Evaluation of synthetic CT For each testing patient, the mean absolute error (MAE) of the pixel values within the patient body contour between the sCT and the real CT was calculated as MAE = (1/N) Σ_{i=1}^{N} |sCT(i) - CT(i)|, where N is the number of pixels within the body contour. The peak signal-to-noise ratio (PSNR) was also evaluated as PSNR = 10 log10(MAX^2 / MSE), where MAX stands for the maximum signal value of the real CT, and MSE stands for the mean square error, calculated as MSE = (1/N) Σ_{i=1}^{N} (sCT(i) - CT(i))^2. Figure 2 shows the comparison of the MRI, synthetic CT, planning CT, and difference map for one example patient. A good visual result of CT synthesis by the GAN is seen, except for some blurry areas in the vicinity of the interface between the skull and brain tissue. 3.B | Dosimetric comparison between synthetic and real CT For each testing patient, a VMAT radiotherapy plan was optimized on the planning CT and then calculated again on the corresponding sCT. The comparison of the dose distributions is summarized in Table 1. No significant differences were found for either the target or the OARs. The comparison of the DVHs for one example patient is illustrated in Fig. 5. Conversely, a VMAT plan was also optimized on the synthetic CT following clinical protocols, then transferred to and recalculated on the planning CT,
and the gamma analysis showed pass rates of 99.96% and 97.99% with the 3 mm/3% and 2 mm/2% criteria, respectively, which is close to the comparison result obtained when the VMAT plan was optimized on the planning CT, as stated above. CONFLICT OF INTEREST The authors have no relevant conflict of interest to disclose.
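For readers unfamiliar with the gamma analysis used above, the following simplified sketch (a brute-force, global 2D implementation with no sub-pixel interpolation, and not the clinical tool used in this study) illustrates how a 3%/3 mm pass rate can be computed:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd_percent=3.0, dta_mm=3.0, cutoff=0.1):
    """Simplified global gamma analysis on 2D dose grids with identical geometry."""
    dd = dd_percent / 100.0 * dose_ref.max()        # global dose-difference criterion
    search = int(np.ceil(2 * dta_mm / spacing_mm))  # search window in pixels
    ny, nx = dose_ref.shape
    gammas = []
    for j in range(ny):
        for i in range(nx):
            d_ref = dose_ref[j, i]
            if d_ref < cutoff * dose_ref.max():     # skip low-dose region
                continue
            g2_min = np.inf
            for dj in range(-search, search + 1):
                for di in range(-search, search + 1):
                    jj, ii = j + dj, i + di
                    if 0 <= jj < ny and 0 <= ii < nx:
                        dist2 = (dj * dj + di * di) * spacing_mm ** 2
                        diff2 = (dose_eval[jj, ii] - d_ref) ** 2
                        g2_min = min(g2_min, dist2 / dta_mm ** 2 + diff2 / dd ** 2)
            gammas.append(np.sqrt(g2_min))
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

# Hypothetical usage with two toy dose grids at 2 mm pixel spacing:
ref = np.random.default_rng(0).random((50, 50)) + 1.0
ev = ref + 0.01
print(f"pass rate: {gamma_pass_rate(ref, ev, spacing_mm=2.0):.1f}%")
```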
2,360.4
2021-02-01T00:00:00.000
[ "Medicine", "Physics" ]
Boron Nitride Nanosheet–Magnetic Nanoparticle Composites for Water Remediation Applications The combination of 0D nanoparticles with 2D nanomaterials has attracted a lot of attention in recent years due to the unique multimodal properties of the resulting 0D-2D nanocomposites. In this work, we developed boron nitride nanosheets (BNNS) functionalized with manganese ferrite magnetic nanoparticles (MNPs). The functionalization process involved attachment of MNPs to exfoliated BNNS by refluxing the precursor materials in a polyol medium. Characterization of the produced BNNS-MNP composites was carried out using powder X-ray diffraction, transmission electron microscopy, vibrating sample magnetometry, Fourier transform infrared spectroscopy, and X-ray photoelectron spectroscopy. The adhesion of MnFe2O4 magnetic nanoparticles onto the BNNS remained unaffected by repeated sonication and heating in a furnace at 400 °C, underscoring the robust nature of the formed bond. FTIR spectra and XPS deconvolution confirmed the presence of strong bonding between the BNNS and the MNPs. Membranes were fabricated from the BNNS and the BNNS-MnFe2O4 nanocomposites for evaluating their efficiency in removing the methylene blue dye pollutant. The membranes have been characterized by scanning electron microscopy, Brunauer–Emmett–Teller surface area analysis, and mercury intrusion porosimetry. The effectiveness of dye removal was monitored using ultraviolet–visible spectroscopy. The BNNS-MnFe2O4 nanocomposite membranes exhibited enhanced MB capture compared to membranes made from pure BNNS alone. The recyclability assessment of BNNS-MnFe2O4 demonstrated exceptional performance, retaining 92% efficiency even after eight cycles. These results clearly demonstrate the high potential of these magnetic nanocomposites as reusable materials for water filtration membranes. Furthermore, the introduction of magnetic functionality as part of the membrane brings an exciting opportunity for in situ magnetic heating of the membrane, which shall be explored in future work. INTRODUCTION In the modern world, environmental pollution poses a significant and pressing challenge, encompassing various forms, such as water pollution. Numerous man-made chemicals exhibit remarkable resistance to breakdown in the environment by natural means, leading to their role as environmental pollutants. This category includes pesticides, herbicides, pharmaceuticals, oils, polycyclic aromatic hydrocarbons, and artificial dyes. 1 Among these, synthetic dyes find extensive usage across diverse industries such as paper, plastic, leather, and textiles. A majority of synthetic dyes possess inherent toxicity and show formidable resistance to degradation due to their intricate molecular structures. Consequently, they are classified as hazardous organic compounds in the environment, with methylene blue exposure having been reported to cause unwanted symptoms. 2 The conspicuous and undesirable visibility of even trace amounts of these dyes in water accentuates the issue. Thus, the appropriate disposal of synthetic dyes remains a subject of environmental concern. Removal of these dyes from the environment is critical, and many research technologies have this as a main goal. 3 Adsorption of pollutants onto suitable sorbents is an efficient route to remove these pollutants from the environment, with boron nitride being a potentially promising candidate.
4 Boron nitride (BN) is a material with several crystalline polymorphs, such as the cubic, wurtzite, and hexagonal (h-BN) phases. h-BN has the moniker "white graphene" because of its structural similarity to graphene, as it is composed of 2D layers of hexagonal rings of alternating B and N atoms, creating a honeycomb-type structure equivalent to graphene. Bulk h-BN can be exfoliated in water to produce individual 2D boron nitride nanosheets (BNNSs). 5 BNNSs have unique properties such as high surface area and high thermal stability; they are chemically inert and stable against oxidation. These unique properties make the BNNS an attractive material for applications such as pollutant removal, 3 lubricants, 6 sensing applications, 7 and superhydrophobic coatings. 8 Magnetic nanoparticles are another material with unique properties, namely their magnetic functionality and large surface area. Magnetic nanoparticles include a broad range of materials such as the spinel ferrite class (e.g., Fe3O4, MnFe2O4, and CoFe2O4). Magnetic nanoparticles and their composites have found application in targeted drug delivery, 9 magnetic resonance imaging (MRI) diagnostics, 10 magnetic heating in cancer hyperthermia therapy, 11 and data storage. 12 MNPs can be prepared by numerous methods, which include coprecipitation, 13 thermal decomposition, 14 and thermal synthesis. 15 Boron nitride nanosheet-magnetic nanoparticle (BNNS-MNP) nanocomposites are a relatively new type of 2D-0D composite material, with few papers published in this area. One of the reasons for this is the great challenge of attaching magnetic nanoparticles to the boron nitride sheets. Boron nitride, as mentioned above, is chemically inert and so is not amenable to functionalization. As a result, very harsh methods have previously been employed to add functional groups to the BNNS. One report involved reacting h-BN in the presence of di-tert-butyl peroxide, which decomposes at 120 °C by homolytic fission to produce oxygen radicals, which then attack the BN sheets to produce tert-butyl functionalized BNNSs. Further reaction with piranha solution produced hydroxyl-functionalized nanosheets (HO-BNNS). 16 Another report involved reacting the BN with 5 M NaOH solution at 120 °C for 48 h. These harsh reaction conditions created hydroxyl functional groups on the BN. 17 There is a report of attaching Fe3O4 to the surface of the BNNS by in situ coprecipitation, but the TEM images in that article are inconclusive and unclear regarding the attachment and coverage of magnetic material. 18 In another paper, an aerogel of BN with Fe3O4 was reported. 19 This aerogel showed the ability to remove both organic dyes and toxic metal ions from water, but again, the TEM images showed some MNPs to be separate from the BN material.
In this work, we have functionalized BNNSs with MnFe 2 O 4 magnetic nanoparticles resulting in the BNNS coated with the MNPs with no separate MNPs in the samples.We demonstrated that high coverage of BNNS with MNPs, can be achieved through a process devoid of harsh chemical conditions.Our method is reproducible and effective in coating the BNNS with the spinel ferrite to make the BNNS-MNP nanocomposites.Then, the produced BNNS-MnFe 2 O 4 nanocomposites were used to prepare new membranes, which were tested for nanofiltration applications.The membranes exhibited remarkable efficiency (over 99% retention before saturation) in eliminating MB dye pollutants from water, while also displaying the qualities of recyclability and sustained removal efficiency across multiple cycles.To the best of our knowledge, this composite material has not been reported in the literature to date.The added magnetic functionality brings with it the potential for inductive magnetic heating regeneration of the membrane to burn off and remove any adsorbed organic pollutant, an application which our group is currently exploring. BNNS Preparation and Characterization. As mentioned earlier, h-BN is a layered material with a honeycomb-like structure, and in order to obtain individual nanosheets, it must first be exfoliated.Liquid-phase exfoliation was chosen for this work, as it is an easy and low-cost method of producing exfoliated 2D materials without the use of complex equipment or hazardous chemicals.We have followed the procedure, which has been previously reported by our group on BN exfoliation in water to produce BNNS. 5After the exfoliation of h-BN in water, a milky white suspension was obtained.The product was characterized by transmission electron microscopy and scanning electron microscopy (TEM and SEM), X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, and X-ray photoelectron spectroscopy (XPS). TEM images were taken on Lacey carbon grids, with TEM and SEM images of a blank grid shown for reference (Figure S1).TEM and SEM images of the BNNS nanosheets on the Lacey carbon grids are shown (Figure 1), where we can see the exfoliated h-BN sample containing many thin layers of nanosheets with some aggregation occurring due to the drying process.The sheets vary from around 100−2000 nm in diameter.This large size distribution is common for liquidphase exfoliation. 20Flake size distributions for the BNNS were calculated from 50 nanosheets in the TEM images using ImageJ software.A size distribution chart is shown in the Supporting Information (Figure S2). The XRD pattern of the BNNS has the characteristic BN peaks at 2θ°with corresponding hkl planes at 26.7°(002), 41.6°(100), 43.8°(101), 50.1°(102), 55.0°(004), 71.3°( 104), 75.9°(110), and 82.2°(112) as can be seen (Figure 2).This matched to the PDF database (PDF 034-0421) for h-BN showing that the h-BN has not changed its crystal structure with the exfoliation process.Thus, the BNNS retain the crystalline nature of the bulk h-BN, that is similar to previous reports. 21 The Scherrer equation (eq 1) gives a relation between the peaks full width half maximum (FWHM) and the crystallite size in the material. 
22 In this relation, L = Kλ / (B cos θ) (eq 1), where L is the crystallite size, K is the Scherrer constant taken as 0.9, λ is the X-ray wavelength, θ is the diffraction angle, and B is the FWHM broadening of a peak in the XRD data. From this equation, it can be seen that as the crystallite size L increases, the line broadening B decreases. Each of the peaks in the XRD data was analyzed using the Scherrer equation (Table S1), giving an average of 23.6 nm. There is some discrepancy between the crystallite sizes for individual peaks and the size of the flakes from TEM, but this can be understood as these are 2D flakes, not spherical particles. The Williamson-Hall method (eq 2) of crystallite-size analysis takes into account the contributions to broadening from both size and local strain, 23 B cos θ = Kλ/L + 4ε sin θ (eq 2), where ε is the strain. The Williamson-Hall method was also applied to the XRD data (Figure S3), and it gave a crystallite size of 8.3 nm, which differs from the Scherrer result. We can see that strain has a negligible influence (≈0%), with the broadening coming from size. This discrepancy in size between the two methods can again be rationalized by the BNNS not being spherical but anisotropic 2D flakes. In the XPS survey scan of the BNNS samples (Figure S4), there are peaks corresponding to the B 1s and N 1s core levels at 190.90 and 398.35 eV, respectively, when calibrated to the adventitious C 1s peak at 285.00 eV, 24 which is similar to our previous result. 25 C and O are common impurities in XPS samples exposed to the atmosphere. The C impurity, known as adventitious carbon, can be used for calibration of the peak positions. 26 When constrained to keep the FWHM the same for the deconvoluted peaks, the B 1s peak can be seen to be a combination of two separate peaks in the high-resolution scan of this region (Figure 3). These peaks have previously been assigned to B−N for the major peak, at a binding energy of 190.90 eV, and B−O for the minor peak, at 191.90 eV. Here, the O possibly comes from both terminal oxygens 27 on the edges of the BN sheet and O adsorbed on the BN sheets, 28 originating from −OH and H2O. FTIR data (see Figure S5 in the Supporting Information) showed peaks for both the powder BN and the exfoliated BNNS at 755 cm−1 and at 1300 cm−1, similar to previously reported results where these peaks have been attributed to the out-of-plane B−N−B and the in-plane B−N−B vibrations, respectively. 16 2.2. Preparation of BNNS-MNP Nanocomposites. BNNS-MNP nanocomposites have been synthesized using protocols previously developed by our group. 25 We prepared BNNS with MnFe2O4 nanoparticles on the surface of the nanosheets (Scheme 1). In all cases, the procedure involved transfer of the exfoliated BNNS to ethylene glycol for a solvothermal-type reaction to form the MNPs in situ on the BNNS. A molar ratio for BN:MnFe2O4 of 1:0.05 was chosen, giving a ratio of one metal atom per six BN rings, as this ratio was found to give good coverage without separate MNPs being observed. The synthesis was carried out multiple times, consistently demonstrating high reproducibility. In the synthesis, MnCl2·4H2O and FeCl3·6H2O serve as precursors for forming the magnetic spinel MnFe2O4, with the O coming from the hydrated water on the metal chlorides, present in the basic conditions as OH−. These OH− ions form metal hydroxides, which then further react to form the spinel.
29 EtONa acts as a base, forming ethanol from the H+ of the water. Ethylenediamine is used as a surfactant, as it improves surface coverage. We found that this gave robust attachment of the MNPs to the BNNS: even after repeated cycles of sonication and magnetic extraction, the MNPs did not separate from the BNNS. The BNNS-MNP nanocomposites and the corresponding membranes have been characterized by SEM, TEM, FTIR, VSM, XRD, XPS, MIP, and BET techniques. The TEM and SEM images of the BNNS-MnFe2O4 nanocomposite are shown in Figure 4. These images confirm the excellent coating of the BNNS with the MnFe2O4 MNPs, with no MNPs found separated from the sheets. As mentioned previously, the nanosheets varied from around 100−2000 nm in diameter. For the BNNS-MnFe2O4 nanocomposite, the TEM and SEM images show only the BNNS with the MNPs distributed over the surface. The MNPs range in size from approximately 20 to 80 nm. The particle size distributions of the MNPs were determined by analyzing 100 particles in TEM images using ImageJ software. A size distribution diagram is shown in the Supporting Information (Figure S6). In the course of SEM analysis, an energy-dispersive X-ray (EDX) detector linked to the SEM was employed to conduct elemental analysis of the BNNS-MnFe2O4 sample. This was done to elucidate the ratio of elements in the sample and to perform elemental mapping, showing where the MnFe2O4 particles are located in relation to the BNNS. For EDX quantification (Figure S7), a substantial sample layer was applied onto carbon stubs, followed by a wide-field scan to minimize variations in local coverage. According to the EDX quantification results, the atomic percentages of Mn:Fe:O:B:N were found to be 1.73:4.52:17.45:38.03:38.27. The ratio of B/N was approximately 1, as expected, while the ratio of Mn:Fe was, at 0.4, slightly lower than the theoretically expected value of 0.5. This may indicate that some of the Mn(II) ions were washed away in the reaction. The ratio of the metal ions to the O ions (Fe+Mn:O) was 0.36, which is lower than the theoretically expected value of 0.75. This would indicate the presence of adsorbed H2O molecules and terminal oxygen atoms on the surface of the nanocomposite. The Mn:Fe:N ratio measured 0.16, surpassing the anticipated value of 0.02. This suggests that during the magnetic extraction process some of the BN might have been removed. Elemental maps for the BNNS-MnFe2O4 sample are shown in Figure 5. The images show that the Mn, Fe, and O are evenly distributed on the BNNS, giving a good coating of the BN sheet. This supports the SEM and TEM results above (Figure 4), proving that the 2D BNNS are well coated with 0D MNPs. XRD of these samples was used for identification of the constituent materials, because BN and the ferrite spinels have well-defined crystal structures. XRD of the nanocomposites showed peaks for BN with smaller peaks for the various spinels in each of the scans. The boron nitride peaks (e.g., 26.7° (002)) are much larger than the MnFe2O4 peaks (e.g., 35.0° (311)) because of the ratio of BN to MNPs in the sample. The PDF-2004 database was used for identifying the phases in the samples.
Application of the Scherrer equation to the peaks in this two-phase sample gave an average size of 9.02 nm for the MnFe2O4 MNPs and 33.85 nm for the BNNS (see Table S2 in the Supporting Information). The Williamson-Hall method applied to this two-phase system gave crystallite sizes of 11.6 nm for the MnFe2O4 MNPs and 21.0 nm for the BNNS, with strain contributing <1% in both cases (Figures S9 and S10). The discrepancy between the BNNS flake sizes and the crystallite sizes here can be rationalized by the BNNS not having an isotropic morphology; the result is therefore an average of the lateral sizes and the thickness of the flakes. For MnFe2O4, the crystallite size is in good agreement with previous work, as the polyol synthesis is known to give particles that are composed of grains 30 that form the spherical nanoparticle, and the XRD result gives the size of these grains. VSM analysis of the BNNS-MnFe2O4 sample gave a magnetization value of 13.1 Am2 kg−1 at 1 T. There was no residual magnetism when the field was 0 T, with the shape of the magnetization curve indicative of superparamagnetic behavior, as there was no coercivity or remanence at 0 T and the magnetization had not saturated at 1 T, as can be seen in Figure S11 in the Supporting Information. FTIR analysis of the samples (Figure S12 in the Supporting Information) gave peaks at 540 cm−1 corresponding to the metal−O stretch of the manganese ferrite. 31 The peaks observed at 755 and 1300 cm−1 experienced slight shifts to 780 and 1330 cm−1, respectively. This alteration aligns with the anticipated outcome when the ferrite interacts with the BN flakes. This result is also in line with our previous work, where a similar shift was seen for BNNS-Fe3O4 and BNNS-CoFe2O4. 25 These shifts indicate new interactions, which affect the out-of-plane and the in-plane vibrations of the BNNS. Previous work has looked at the binding of iron oxides 18 and Fe ions 32 to the surface of BN. In the Fe ion study, the group proposed the formation of borazine-metal complex bonds. 32 The emergence of these new interactions is expected to lead to alterations in the IR spectra of BN, corresponding to the observations witnessed in this case. The investigation of iron oxide using DFT calculations demonstrated the optimized binding between iron oxide and the BN sheet, aiming to enhance contaminant adsorption. 18 Within the XPS survey spectrum of the sample (Figure S4), distinct peaks representing the B 1s, N 1s, Fe 2p, Mn 2p, and O 1s elements are evident, confirming the elemental composition of the sample. Additionally, a peak corresponding to adventitious C 1s suggests the presence of minor impurities resulting from atmospheric exposure. The adventitious C 1s peak was used for calibration.
26 In the high-resolution B 1s XPS scan (Figure 7A) of the BNNS-MnFe2O4 sample compared with the BNNS sample, we can see a clear difference in the peak position, with a slight asymmetry in the BNNS-MnFe2O4 compared to the BNNS. Deconvolution of the B 1s peak of the BNNS-MnFe2O4 sample (Figure 7B) shows it to be a combination of 4 peaks, with the FWHM constrained to be the same for the 4 deconvoluted peaks. We see a shift to a lower binding energy compared with the pure BNNS for the B−N peak, at a position of 190.64 eV, and for the B−O peak, at 191.64 eV. Deconvolution shows that the peak also contains 2 smaller peaks, at 191.87 and 191.28 eV, which can be attributed to the formation of B−Fe and B−Mn interactions, respectively. The shifting of these B 1s contributions toward higher binding energies upon formation of the B−metal interactions, at 191.87 and 191.28 eV respectively, indicates that there is a net movement of electron density from the B sites to the metal centers. The peak position for the B−Mn interaction is at a lower binding energy than the B−Fe contribution, while in our previous paper the B−Co contribution was at a higher binding energy with respect to the B−Fe interaction. 25 This can be attributed to the changing electronegativity of Mn (1.55), Fe (1.83), and Co (1.91), with Co showing the largest shift. A decrease in the FWHM relative to that of the pure BNNS (1.23 versus 1.32) suggests reduced chemical disorder at the B sites. This shift indicates that, as the MNPs adhere to the BNNS through the suggested B−metal bond formation, electron density shifts from B to the metal centers. Simultaneously, there is a movement of electron density onto the remainder of the sheet, likely due to back-donation from the bulk of the ferrite nanoparticles. To find the optimal ratio of BNNS:MnFe2O4, different ratios of BNNS to MnFe2O4 were used in the synthesis, and TEM images were then taken to assess the coverage. The sample analyzed above had a BNNS:MnFe2O4 molar ratio of 1:0.05. Additional nanocomposites were created at ratios of 1:0.1 and 1:0.01 to assess the impact on coverage and magnetic properties. The coverage of the MNPs on the BNNS was found to depend on the BNNS to MnFe2O4 ratio, with the larger molar ratio giving higher coverage and the lower ratio giving less coverage. In the high-ratio sample, it was found that some separate MNPs formed apart from the BN sheets. The TEM images (Figure S13) verified extensive coverage at the 1:0.1 ratio, while the other two ratios exhibited diminishing coverage. Across the varying molar ratios, the size of the MNPs on the surface remained consistent, measuring approximately 20−80 nm, as depicted in the images. The BNNSs were of similar sizes, from 100 to 2000 nm, in all the samples. The magnetic properties were also dependent on the molar ratio of the nonmagnetic BNNS to the magnetic MnFe2O4, with the magnetization being 25, 13, and 4 Am2 kg−1 for the 0.1, 0.05, and 0.01 ratios, respectively. The results of these observations are presented in Table 1.
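A quick sanity check on the trend in Table 1 can be made by estimating the MnFe2O4 mass fraction implied by each molar ratio and scaling an assumed nanoparticle magnetization. The saturation value used below (50 Am2 kg−1) is only a typical, assumed figure for nanoscale MnFe2O4 and is not a measurement from this work:

```python
# Molar masses (g/mol)
M_BN = 24.82          # boron nitride formula unit
M_MnFe2O4 = 230.63    # manganese ferrite formula unit

def composite_magnetization(molar_ratio, sigma_mnp=50.0):
    """Estimate composite mass magnetization from the BN:MnFe2O4 molar ratio.
    sigma_mnp is an assumed magnetization for the MnFe2O4 nanoparticles in
    Am^2/kg; nanoscale ferrites typically fall below the bulk value, so this
    number is illustrative only."""
    mass_frac = molar_ratio * M_MnFe2O4 / (M_BN + molar_ratio * M_MnFe2O4)
    return mass_frac * sigma_mnp, mass_frac

for x in (0.10, 0.05, 0.01):
    sigma, f = composite_magnetization(x)
    print(f"BN:MnFe2O4 = 1:{x:<5} MNP mass fraction = {f:.2f}  sigma ~ {sigma:.1f} Am^2/kg")
```

With that assumption, the estimate gives roughly 24, 16, and 4 Am2 kg−1 for the 1:0.1, 1:0.05, and 1:0.01 ratios, in reasonable agreement with the measured 25, 13, and 4 Am2 kg−1.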
Membrane Preparation and Surface Area Analysis.BNNS and BNNS-MnFe 2 O 4 membranes were produced by filtering an aqueous suspension through a 0.45 μm polyvinylidene fluoride (PVDF) filter using a fritted glass filtering apparatus.This method resulted in the formation of nanosheet membranes on the PVDF substrate.To illustrate the layering, SEM images of the edge profile of each membrane were captured.Figure 8 presents these SEM images depicting (A) BNNS and (B) BNNS-MnFe 2 O 4 in a 1:0.05 ratio.These images showcase the maintained excellent coverage of BNNS coated with MNPs during the membrane formation, demonstrating that the MNPs remained intact without detachment or washing away throughout the process. BET (Brunauer−Emmett−Teller) analysis provides quantitative data on the specific surface area, with porosity distribution over the range 0−50 nm for solid materials.High pressure MIP (mercury intrusion porosimetry) is a pore size and pore volume analysis technique with a range of 10− 10,000 nm.Both methods are suitable for a wide range of particulate and nonparticulate materials.BET and MIP analysis was carried out in close conjunction with adsorption analysis, as adsorption performance is closely related to the specific surface area and pore size distribution of a material. 33The surface area was calculated from the linear region of the adsorption branch of the isotherm (P/P 0 = 0.1−0.3),while the Barrett−Joyner−Halenda (BJH) method was used to calculate pore size and pore volume from the desorption branch of the isotherm.The BET and MIP analyses were carried out on the BNNS-MnFe 2 O 4 with a ratio of 1:0.05. The samples' surface area depends on the distinct structure of the nanosheets, made up of numerous layers separated by narrow voids.These flakes generally range from 100 to 2000 nm in width, with corresponding minute void spaces between them.Notably, hysteresis appeared in the adsorption− desorption patterns (type IV) for both the uncoated BNNS sample (Figure S14) and the MnFe 2 O 4 -coated BNNS sample (Figure S15 in the Supporting Information). The BET analysis in the linear region of the isotherm gave a surface area of 32.9 m 2 /g for the BNNS and a surface area of 46.3 m 2 /g for the BNNS-MnFe 2 O 4 .BNNS-MnFe 2 O 4 has higher surface area relative to the uncoated BNNS which can be attributed to the additional surface area contribution of the MnFe 2 O 4 nanoparticles.This value was higher than our previously reported surface area values for BNNS-Fe 3 O 4 (38.8 m 2 /g) and BNNS-CoFe 2 O 4 (34.6 m 2 /g). 25 The BJH pore size distributions of the coated and uncoated samples showed similar characteristics; however, the BNNS− MnFe 2 O 4 had stronger volume adsorption in the region corresponding to pores <5 nm.For uncoated BNNS (Figure S16), the associated pore volume for pores <5 nm was only 3.9% of the total.By contrast, for BNNS−MnFe 2 O 4 (Figure S17 in the Supporting Information), the pore volume was 11.5% of the total.This is believed to be due to light agglomeration of MnFe 2 O 4 nanoparticles, resulting in the creation of additional nanometer-scale void spaces, beyond those which are inherently present in the uncoated sample.The results of surface area and porosimetry analysis are summarized in the Table 2. From MIP analysis for BNNS (Figure S18 in the Supporting Information), a total intruded volume of 0.58 cc/g is observed. 
The intrusion of mercury corresponds to the filling of void spaces between the BN flakes. A major peak is observed at approximately 100 nm. The intrusion curve then plateaus before a second sharp intrusion occurs at approximately 20 nm. This peak is due to the void spaces between the smallest BN flakes, those that are sub-100 nm in width. Clearly, the peak at 100 nm is by far the more prominent one, indicating that the sample is dominated by larger pore spaces, with a subset of smaller pore spaces also present. For BNNS-MnFe2O4 (Figure S19 in the Supporting Information), the total intruded volume is 0.70 cc/g. A bimodal pore size distribution is again observed; however, this time the second peak dominates the distribution, at the expense of the first. This is due to the presence of large numbers of MnFe2O4 nanoparticles, with void spaces in between. The pre-existing void spaces (ca. 20 nm) between the smallest sub-100 nm flakes detected for BNNSs are also detected for BNNS-MnFe2O4; they manifest as a broad shoulder on the curve, as they are drowned out by the much larger peak at 50−60 nm. The peak for the void spaces between the largest BN flakes is shifted to the left, indicative of larger void spaces between the flakes relative to uncoated BNNS. The understanding is that the nanoparticulate coating adheres to the surfaces of the flakes, and these attached nanoparticles act to maintain a distance between flakes which might otherwise come into direct contact. This view is supported by the fact that BNNS-MnFe2O4 has a larger total intruded volume. 25 To test the BNNS-MnFe2O4 membranes for the retention of MB, membranes with a ratio of 1:0.05 were prepared. A PVDF membrane served as the base for supporting the BNNS-MnFe2O4 membrane during filtration. When used on its own, without BNNS-MnFe2O4, for filtration, the PVDF membrane, positioned on the fritted glass filtering apparatus, exhibited a minimal capability to retain MB, in line with expectations. Through a calculation involving successive 20 mL portions of the 21.9 μM MB solution until reaching saturation, it was determined that the PVDF membrane, weighing 125 mg, adsorbs 0.1 mg of MB, resulting in an adsorption capacity of 2.5 mg per gram of the membrane. On average, the PVDF membrane measured 0.105 mm in thickness and weighed 125 mg. The preparation of the BNNS and BNNS-MnFe2O4 membranes for testing is described in the experimental section. This gave a membrane with an area of 0.001018 m2 (36 mm diameter). For each of the membranes, the BNNS membrane and the BNNS-MnFe2O4 membrane, a mass of 40 mg of the material was used to create the membrane. The thickness of the membranes, as measured using a micrometer screw gauge, was 0.050 mm for the BNNS and 0.053 mm for the BNNS-MnFe2O4. Flow rates for the prepared membranes were measured at one bar pressure using Millipore water (MP H2O) filtered through the membrane five times, for three different membranes each of the BNNS and the BNNS-MnFe2O4 (see Table S3 in the Supporting Information). The PVDF membrane exhibited a flow rate of 26,700 L m−2 h−1, surpassing the value specified by the membrane manufacturer.
34 When paired with PVDF, the BNNS membrane achieved a flow rate of 620 ± 53 L m−2 h−1, while the BNNS-MnFe2O4 membrane combined with PVDF showed a flow rate of 404 ± 33 L m−2 h−1. These reductions in flow rate indicate smaller pores in the BNNS-MnFe2O4 nanocomposite, consistent with the BET analysis above, where a higher percentage of the porosity was below 5 nm and the nanocomposite had a larger surface area per gram. These membrane characteristics are summarized in Table S4 in the Supporting Information. The prepared membranes were tested for the retention of MB dye in aqueous solution. The % removal for each aliquot can be calculated via the Beer−Lambert law from ultraviolet−visible (UV−vis) spectroscopy analysis of the filtrate after each 20 mL aliquot. This was done for the BNNS and the BNNS-MnFe2O4 (Table S5 in the Supporting Information). The membranes retained >99% of the MB dye prior to becoming saturated, after which MB was found in the filtrate. For the testing, a 40 mg membrane of each material was made on the PVDF support. Subsequently, 20 mL portions of a 21.9 μM MB solution underwent filtration via vacuum filtration using the prepared membranes, resulting in an approximate effective membrane pressure of 1 bar. For the BNNS membrane, 20 mL took 114 s, while for the BNNS-MnFe2O4 membrane, 20 mL took 175 s. The filtrate initially was clear but began to let the MB through as the membrane became saturated with successive additions of the MB solution. The outcomes were assessed using UV−vis spectral analysis for the BNNS membrane, as visualized in Figure 9A. Notably, in the first few filtrates, over 99% of the dye was successfully removed. However, by the fifth run, the BNNS membrane started allowing MB to pass through as it reached saturation. As filtration continued up to the tenth run, the filtrate approached the concentration of the original MB solution prior to filtration. At this stage, the membrane had become saturated, hitting its adsorption capacity limit. The same procedure was used to quantify the results for the BNNS-MnFe2O4 membrane (Figure 9B) with successive 20 mL aliquots of the 21.9 μM MB solution. Here, the membrane began to let dye through on the sixth aliquot and did not reach saturation until the twelfth aliquot.
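The quantification just described can be reproduced with a few lines of Python. The molar absorptivity below is an assumed, literature-style value for MB at ~664 nm (the actual calibration used in this work is not stated), and the absorbance list is purely hypothetical:

```python
import numpy as np

EPSILON_MB = 7.4e4         # assumed molar absorptivity of MB (L mol^-1 cm^-1)
PATH_CM = 1.0              # quartz cuvette path length (cm)
C0_MOLAR = 21.9e-6         # feed concentration (mol/L)
V_ALIQUOT_L = 0.020        # 20 mL aliquots
MW_MB = 373.9              # g/mol
MEMBRANE_MASS_G = 0.040    # 40 mg membrane

def filtrate_concentration(absorbance):
    """Beer-Lambert law: c = A / (epsilon * l)."""
    return absorbance / (EPSILON_MB * PATH_CM)

def cumulative_uptake(absorbances):
    """Cumulative MB captured (mg per g of membrane) after successive aliquots."""
    c_out = np.array([filtrate_concentration(A) for A in absorbances])
    removed_mol = (C0_MOLAR - c_out) * V_ALIQUOT_L
    removed_mg = removed_mol * MW_MB * 1e3
    return removed_mg.cumsum() / MEMBRANE_MASS_G

# Hypothetical absorbance readings for successive filtrates (illustrative only)
example_A = [0.01, 0.01, 0.02, 0.05, 0.3, 0.8, 1.2, 1.5, 1.6, 1.62]
print(cumulative_uptake(example_A))
```

Each 20 mL aliquot of the 21.9 μM feed carries 0.164 mg of MB, matching the calculation in the following paragraph.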
As mentioned earlier, the PVDF membrane captures 2.5 mg g −1 .Therefore, this value has to be subtracted from the calculated values of the BNNS and BNNS-MnFe 2 O 4 as these membranes had PVDF membranes as supports.This equates to values of 24.25 mg g −1 for BNNS and 30.50 mg g −1 for the BNNS-MnFe 2 O 4 .This result clearly demonstrates that the addition of the MnFe 2 O 4 MNPs to the BNNS gives an increase of 26% in adsorbent capacity.BN adsorption materials with various morphologies have been tested before by various groups for the removal of MB from aqueous solution.These results are summarized and compared to previously reported results for BN based membranes below in the Table 3. This work in combination with our previous work 25 show that MNP can be attached to the BNNS directly without the need for organic linkers.The BNNS and the BNNS-MNP nanocomposites can be used for water filtration, with the addition of the MNP to the BNNS surface increasing the adsorption capacity when MnFe 2 O 4 MNPs are used.Incorporating MNPS into the BNNS, thus introducing a magnetic aspect, opens up the potential for on-site magnetic inductive heating of the membrane.This feature holds promise for membrane regeneration once it reaches saturation or becomes blocked. Testing the BNNS-MnFe 2 O 4 Composites for Regeneration and Recycling.Adsorption of a pollutant onto an adsorbent removes the pollutant from solution but also creates waste in the form of the saturated adsorbent.Recyclability of an adsorbent is essential for sustainable pollutant removal applications. 41The BNNS-MnFe 2 O 4 nanocomposites were tested for recyclability to see if they could be used for filtration applications and then recycled to be used again for the same application without a loss of activity.This testing involved passing the MB solution through a prepared membrane in successive 20 mL portions until the membrane reached saturation.Subsequently, the membrane underwent methanol and water rinses to eliminate excess MB.Following this, the membrane was sonicated in acetone to detach the nanocomposite from the PVDF support.The material was then magnetically separated from acetone and subjected to a 400 °C furnace treatment to eliminate any remaining MB.Afterward, the nanocomposite was sonicated in water to form a new membrane from the same material, which was reused for filtering the MB solution.This recycling process was repeated eight times, and the percentage of MB removed was determined using UV−vis analysis based on the Beer−Lambert law for each cycle.A plot illustrating the adsorption of MB per gram of adsorbent (mg/g) against the recycle number is presented (Figure S21 in Supporting Information).As can be seen, there is not much variability between runs with the second run showing the maximum capture at 31.0 mg/g.There is a general trend of decreasing adsorption but the final two runs show the same capture efficiency at 28.8 mg/g.The same material was used each time and so there was a small loss of material in the process of recycling the membrane.This loss of material was quantified (Table S6), and this was taken into account when calculating the MB saturation adsorption value.TEM and SEM images of the recycled nanocomposites were taken after the 8 recycles to see if the nanocomposite had changed significantly (Figure S22 in Supporting Information).As can be seen in the images, there is no change to the nanocomposite with the MNPs still attached to the surface of the BNNS.TGA of the nanocomposite was performed prior to MB 
adsorption and with MB adsorbed (see Figures S23 and S24 in Supporting Information).As can be seen from the TGA curves, there is a large change in mass up to 200 °C which can be attributed to removal of adsorbed water.After this, there is the burning off of any organic pollutants.For the MB adsorbed nanocomposite, there is a larger percent change of 94.8% compared to 97.5%.This agrees with our previous analysis where the adsorbed MB makes up 3% of the mass (30.5 mg/ g).The attachment of the MnFe 2 O 4 magnetic nanoparticles to the surface of the BNNS causes an improvement in the adsorption efficiency of 26% compared to the BNNS alone.The excellent performance regarding the adsorption of MB and recyclability shows that this material has potential application in a water purification system.The addition of the magnetic functionality to the BNNS gives the possibility of magnetic inductive heating regeneration of the membrane.Further studies will be performed to test this as a potential property of this nanocomposite.EDX analysis was positioned on Lacey carbon TEM grids, mounted on a holder within the SEM, focusing specifically on individual MNP coated BN flakes.UV−vis spectra were gathered using the Agilent Cary 60 spectrophotometer, covering a range from 1100 to 190 nm and a quartz cuvette with a path length of 1 cm.For pXRD analysis, the Bruker D2 Phaser second generation powder sample X-ray machine was utilized, equipped with monochromatic high-intensity Cu Kα radiation (λ = 0.15406 nm).The XRD data collected were background subtracted, spanning 2θ angles from 15°to 85°.Magnetization measurements of the dry products were obtained using an in-house assembled VSM at room temperature, applying a field up to 1.1 T. Calibration of the VSM was carried out by employing a pure nickel sample with a known mass.Nickel, being a ferromagnetic material, possesses a recognized magnetic moment of 55.4 Am 2 kg −1 in an external field of 1 T at room temperature.FTIR spectra were acquired using a PerkinElmer spectrum 100, fitted with a diamond window covering a range from 4200 to 250 cm −1 .For the BET surface area analysis, a Nova 2400e surface area analyzer (Quantachrome, UK) employing nitrogen gas as the adsorbate was utilized.Prior to analysis, the sample underwent a degassing process at 200 °C under vacuum for 1 h.XPS measurements were conducted utilizing an Omicron EA 125 Energy Analyzer, employing a monochromated Al K-alpha source at 1486.7 eV.High-resolution core level XPS scans were performed with a pass energy of 20 eV, using high magnification mode and entrance and exit slits of 6 and 3 mm, respectively, resulting in an overall source and instrument resolution of 0.6 eV.MIP (Mercury Intruded porosimetry) analysis was conducted using an Autoscan-33 porosimeter (Quantachrome, UK). Preparation of BNNSs. BNNSs were synthesized from bulk BN using a method described in our previous publication. 5A mixture of 300 mg of bulk BN powder and 100 mL of ultrapure water was placed in a 150 mL round-bottom flask.The solution underwent 24 h of sonication using a Wise Clean WUC-A03H operating at 40 kHz with an output of 124 W. Subsequently, this solution was directly employed for transfer into ethylene glycol. 
3.4. Preparation of EtONa. An 85 mL quantity of anhydrous ethanol (1.47 mol) was degassed and placed under argon in a 250 mL round-bottom flask. Subsequently, 5.87 g of sodium hydroxide (0.15 mol) was introduced, and the mixture was stirred magnetically under argon until fully dissolved. Following this, 26.45 g of 300-mesh molecular sieves was added, and the RBF was sealed under an argon atmosphere. The solution was left undisturbed for 48 h. The liquid phase was separated from the molecular sieves under an argon atmosphere and then distilled to eliminate excess ethanol, resulting in a dry white powder of sodium ethoxide (EtONa). Ethylene glycol (100 mL), pre-degassed, was added to form a solution with a concentration of 0.1 g/mL. 3.5. Transfer of BNNSs from Water to Ethylene Glycol. 300 mg of BNNSs (boron nitride nanosheets) dispersed in 100 mL of water was combined with 120 mL of ethylene glycol in a 500 mL round-bottom flask and stirred using a magnetic stirrer. The water was distilled off and collected; the distillation was stopped when 100 mL of water had been collected. Afterward, the solution was cooled to room temperature and subjected to 1 h of sonication to evenly disperse the nanosheets throughout the ethylene glycol. 3.6. Preparation of BNNS-MnFe2O4. 100 mg of BNNSs, equivalent to 4 mmol of BN, was dispersed in 40 mL of ethylene glycol within a 100 mL round-bottom flask. Subsequently, a mixture of FeCl3·6H2O (0.109 g, 0.40 mmol), MnCl2·4H2O (0.039 g, 0.2 mmol), and ethylenediamine (0.40 mL, 6.0 mmol) was added, and the solution was sonicated for 30 min in an open-air environment. To this, a solution of EtONa (0.55 g, 8 mmol) in 5.5 mL of ethylene glycol was introduced. The resultant solution was mechanically stirred for 30 min at room temperature to ensure thorough mixing before undergoing reflux for 16 h in an open-air setup. After cooling to room temperature, the particles were separated magnetically, washed twice with water (100 mL each) and ethanol (100 mL each), and finally stored in 100 mL of ethanol. 3.7. Preparation of Boron Nitride Nanosheet−Magnetic Nanoparticle Nanocomposite Membranes. A solution of BNNS-MnFe2O4 (40 mg) in ultrapure water (100 mL) underwent a 2 h sonication process to ensure complete dispersion of the material. Subsequently, the solution was filtered using a PVDF 0.45 μm filter on a fritted glass setup to form the membrane. The freshly formed membrane, with the PVDF filter in place on the fritted glass apparatus, was immediately used for filtration without allowing it to dry out. However, for measurements pertaining to mass and thickness, a newly prepared membrane was allowed to dry before conducting these assessments. 3.8. Testing of the Membrane for Extraction of Dye. A new membrane of 40 mg, composed of the specified material (BNNS or BNNS-MnFe2O4), was created. A solution containing MB (20 mL, 21.9 μM) was passed through the membrane, and the resulting filtrate was collected for subsequent UV−vis measurements. This process was iterated with equal volumes of solution until the membrane reached saturation.
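The UV−vis quantification used in this dye-extraction test can be sketched in a few lines. This is only an illustration, not the authors' analysis script: the 1 cm path length, the 20 mL portion volume, the 21.9 μM feed concentration, and the 40 mg membrane mass come from the text, whereas the molar absorptivity of MB, its molar mass, and the absorbance readings are assumed placeholder values.

```python
# Illustrative sketch: convert filtrate absorbances to MB concentrations via the
# Beer-Lambert law (A = epsilon * c * l) and accumulate the mass of MB captured
# per gram of membrane over successive 20 mL portions.
MB_MW = 319.85          # g/mol for methylene blue chloride (literature value, assumption)
EPSILON = 7.4e4         # L mol^-1 cm^-1 near 664 nm (literature-range assumption)
PATH_CM = 1.0           # quartz cuvette path length quoted in the text
FEED_UM = 21.9          # feed concentration in micromolar
PORTION_L = 0.020       # 20 mL portions
MEMBRANE_G = 0.040      # 40 mg membrane

def captured_mg_per_g(filtrate_absorbances):
    total_mg = 0.0
    for absorbance in filtrate_absorbances:
        filtrate_um = absorbance / (EPSILON * PATH_CM) * 1e6        # mol/L -> umol/L
        removed_um = max(FEED_UM - filtrate_um, 0.0)
        # umol/L -> mol/L -> mol in the portion -> g -> mg
        total_mg += removed_um * 1e-6 * PORTION_L * MB_MW * 1e3
    return total_mg / MEMBRANE_G

# Hypothetical absorbance readings for successive portions (placeholders only).
print(captured_mg_per_g([0.002, 0.004, 0.010, 0.350, 1.200]))
```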
3.9. Testing the BNNS-MnFe2O4 for Regeneration and Recycling. A used BNNS-MnFe2O4 membrane was rinsed with methanol and water until the filtrate became clear. Subsequently, the membrane was sonicated in a small quantity of acetone to detach it from the PVDF support. The material was separated magnetically from the acetone and subsequently subjected to a 400 °C furnace treatment. The resulting material was sonicated to redisperse it in ultrapure water (100 mL). This solution, containing the material, was then used to generate a new membrane and repeat the experiment. CONCLUSIONS Thus, we have prepared a new BNNS functionalized with MnFe2O4 magnetic spinel ferrite nanoparticles. The TEM and SEM images for this sample showed good coverage at a BNNS:MnFe2O4 molar ratio of 1:0.05. FTIR and XPS analyses indicated the formation of B-metal bonds between the MnFe2O4 and the BNNS. The magnetic properties of the BNNS-MnFe2O4 nanocomposite were sufficiently good that the sample could be extracted from solution with a permanent neodymium magnet. Surface area analysis and high-pressure mercury intrusion porosimetry were performed on the BNNS and BNNS-MnFe2O4 nanocomposites. These showed that the surface area increased upon the addition of the MNPs to the BNNS surface. The BNNS and BNNS-MnFe2O4 membranes were evaluated for their efficiency in removing MB dye from water. We conducted filtration experiments using MB solution and quantified the membranes' retention ability through UV−vis spectroscopy analysis. Both membranes retained over 99% of MB until saturation. The BNNS-MnFe2O4 membrane captured 26% more methylene blue than the BNNS membrane, with adsorption calculations quantifying this result. Comparisons with prior research demonstrated the effectiveness of the BNNS-MnFe2O4 membrane in MB removal. The nanocomposites were tested for their ability to be recycled without a loss of efficiency. The nanocomposite was recycled eight times and demonstrated consistent performance for MB removal, with a slight decrease over runs. The attachment of the MnFe2O4 nanoparticles improved the adsorption efficiency compared to the BNNS alone. The combination of high MB adsorption and recyclability suggests great potential for water purification systems, and the addition of magnetic functionality for inductive heating regeneration holds promise for future exploration.
Figure 1. (A) TEM and (B) SEM images of BNNS on a Lacey carbon TEM grid.
Figure 2. XRD pattern of exfoliated BNNS, showing the hkl planes for the major peaks.
Scheme 1. Synthetic Outline for the Creation of the BNNS-MnFe2O4 Nanocomposites
Figure 5. EDX maps of N, B, Mn, Fe, and O showing the detected X-rays from the sample, with the SEM electron image of the sample. From this analysis, we can clearly see that MnFe2O4 coats the BN sheet.
Figure 7. High-resolution XPS spectra: (A) B 1s of BNNS and the BNNS-MnFe2O4 composite; (B) B 1s of the BNNS-MnFe2O4 composite showing fitted and deconvoluted peaks. Note the improvement of the fwhm from Figure 3, indicating that there is less chemical disorder of the B sites in this composite.
Table 1. Summary of Product Characteristics for Different Molar Ratios of BNNS:MnFe2O4
Table 2. Summary of the BET Surface Area and BJH Pore Volume Data
Testing the Membranes for Filtration Applications.
In our previous work, we had tested the BNNS membrane and the BNNS-Fe3O4 and BNNS-CoFe2O4 membranes for the removal of methylene blue (MB) from aqueous solution.
3.1. Materials.
The TEM instrument employed for this study was the JEOL 2100, operating at 200 kV. Meanwhile, the SEM utilized was the Zeiss Ultra Plus SEM, capable of an accelerating potential ranging from 30 to 1 keV. To capture the images, the SEM was operated within a range of 15−2 keV. EDX analysis was conducted on the SEM, employing an Oxford Instruments 80 mm2 XMAX EDX detector while operating the SEM at 15 keV.
Table 3. Adsorption Capacities for Various BN-Based Materials from Previous Publications and Our Work
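To make the comparison underlying Table 3 reproducible, the PVDF-baseline subtraction described earlier can be written out explicitly. The numbers below are the ones quoted in the text (2.5 mg g−1 for the bare PVDF support and corrected capacities of 24.25 and 30.50 mg g−1); the uncorrected values are simply back-calculated here and are shown only as an illustration.

```python
# Sketch of the PVDF-baseline correction and the relative capacity gain
# reported for BNNS vs BNNS-MnFe2O4 (values quoted in the text).
PVDF_BASELINE = 2.5          # mg/g captured by the bare PVDF support

corrected = {"BNNS": 24.25, "BNNS-MnFe2O4": 30.50}   # mg/g after subtraction
raw = {name: value + PVDF_BASELINE for name, value in corrected.items()}

gain = (corrected["BNNS-MnFe2O4"] - corrected["BNNS"]) / corrected["BNNS"]
print(raw)                                                       # back-calculated uncorrected capacities
print(f"capacity increase from MnFe2O4 decoration: {gain:.0%}")  # ~26%
```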
10,418.8
2024-01-15T00:00:00.000
[ "Environmental Science", "Materials Science", "Chemistry" ]
Fermionic steering is not nonlocal in the background of dilaton black hole We study the redistribution of the fermionic steering and the relation among fermionic Bell nonlocality, steering, and entanglement in the background of the Garfinkle-Horowitz-Strominger dilaton black hole. We analyze the meaning of the fermionic steering in terms of the Bell inequality in curved spacetime. We find that the fermionic steering, which is previously found to survive in the extreme dilaton black hole, cannot be considered to be nonlocal. We also find that the dilaton gravity can redistribute the fermionic steering, but cannot redistribute Bell nonlocality, which means that the physically inaccessible steering is also not nonlocal. Unlike the inaccessible entanglement, the inaccessible steering may increase non-monotonically with the dilaton. Furthermore, we obtain some monogamy relations between the fermionic steering and entanglement in dilaton spacetime. In addition, we show the difference between the fermionic and bosonic steering in curved spacetime. I. INTRODUCTION Einstein-Podolsky-Rosen (EPR) steering, first discussed by Schrödinger [1] in his response to the EPR paper, is a remarkable feature of nonlocality in quantum theory, wherein one party (Alice) can remotely steer another distant party (Bob) by her choice of measurements.EPR steering can be viewed as a quantum correlation (quantum resource) between quantum entanglement and Bell nonlocality, since it requires quantum entanglement as a fundamental resource for steering remote states, while EPR steering is not always sufficient to violate Bell inequality.Research on the relationship between Bell nonlocality, EPR steering, and quantum entanglement has made some progress [2][3][4], but it remains an open question.Unlike Bell nonlocality and quantum entanglement, EPR steering has a unique asymmetry, that is, one party can steer the another but not vice versa.Because of its asymmetric properties, EPR steering has potential applications in quantum secret sharing [5][6][7][8], quantum networks [9], and quantum key distribution [10,11]. String theory is a promising candidate for a unified theory between general relativity and quantum mechanics.Unlike general relativity, the string theory predicted that the existence of the dilaton fields changes the properties of black holes [12][13][14][15].The dilaton black holes are formed by gravitational systems coupled to Maxwell and dilaton fields, i.e., the Garfinkle-Horowitz-Strominger (GHS) dilaton black hole [13,14].The Hawking effect [16] of the GHS black hole relies not only on the mass of the black hole, but also on its dilaton field, since the latter is also the source of gravity.The Hawking effect for the dilaton black hole on quantum correlation, quantum coherence, and entropic uncertainty relation has been widely studied [17][18][19][20][21][22][23][24][25][26][27][28][29].However, the relationship between quantum correlations is still unclear in the background of the GHS dilaton black hole.Therefore, studying the relationship between quantum steering, Bell nonlocality, and quantum entanglement in dilaton spacetime is one of the motivations for our work. 
Another motivation for our work is to investigate the redistribution of the fermionic steering and the difference between the bosonic and fermionic steering in the background of the dilaton black hole.Based on these motivations, it is assumed that our model involves three fermionic modes: the first mode A is observed by Alice at the asymptotically flat region; the second mode B is observed by Bob who hovers near the event horizon of the dilaton black hole; the third mode B observed by Anti-Bob is restricted by the event horizon.By calculating the fermionic steering in dilaton spacetime, we find that the A → B fermionic steerability is always larger than the B → A fermionic steerability, but the A → B bosonic steerability is always smaller than the B → A bosonic steerability [22].Furthermore, the fermionic steering can always survive, while the bosonic steering suffers a sudden death in curved spacetime.We also find that the dilaton gravity can redistribute the fermionic steering, but cannot redistribute Bell nonlocality, which indicates that the physically inaccessible steering is not nonlocal.Finally, we obtain the relationships between the fermionic steering and entanglement in dilaton spacetime.Therefore, we can understand another type of quantum correlation through one type of quantum correlation in dilaton spacetime. Our paper is organized as follows.In Sect.II, we introduce the quantification of quantum steering and the Clauser-Horne-Shimony-Holt (CHSH) inequality.In Sect.III, we discuss the quantization of the Dirac field in the background of the GHS dilaton black hole.In Sect.IV, we study the redistribution of the fermionic steering in dilaton spacetime.In Sect.V, we obtain the monogamy relations between the fermionic steering and entanglement in dilaton spacetime. Finally, the Sect.VI is devoted to the conclusion. II. QUANTIFICATION OF QUANTUM STEERING AND CHSH INEQUALITY Bell nonlocality, quantum steering, and quantum entanglement have a strict hierarchy for the mixed states.Interestingly, Bell nonlocality can be indirectly detected by the notion of quantum steering [2], and quantum steering can be indirectly detected by the concept of quantum entanglement [3,4].We consider the density matrix of the X-state ρ x as where ρ ij is the real element satisfying ρ ij = ρ ji .As we all know, quantum entanglement of the bipartite states can be effectively identified by the concurrence.The concurrence of the X-state ρ x given by Eq.( 1) can be specifically shown as [30] C For a general bipartite state ρ AB shared by Alice and Bob, the steering from Bob to Alice can be witnessed if the density matrix τ AB defined as is entangled, where ρ A = Tr B (ρ AB ) and I is the two-dimension identity matrix [4,31].Similarly, we can witness the steering from Alice to Bob when the density matrix τ BA defined as is entangled, where ρ B = Tr A (ρ AB ). 
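The display equations referenced in this section did not survive text extraction. For orientation, the following block restates the standard two-qubit X-state and its concurrence, together with one commonly used steering-witness construction of the type the text describes. The witness expression in particular should be read as an assumption about the missing equations, not as a verbatim reconstruction of the paper's Eqs. (1)-(4).

```latex
% Standard two-qubit X-state and its concurrence; tau_AB is one commonly used
% witness construction consistent with the definitions in the text (assumed form).
\begin{align}
\rho_{x} &=
\begin{pmatrix}
\rho_{11} & 0 & 0 & \rho_{14}\\
0 & \rho_{22} & \rho_{23} & 0\\
0 & \rho_{32} & \rho_{33} & 0\\
\rho_{41} & 0 & 0 & \rho_{44}
\end{pmatrix},
\qquad \rho_{ij}=\rho_{ji},\\
C(\rho_{x}) &= 2\max\bigl\{0,\; |\rho_{14}|-\sqrt{\rho_{22}\rho_{33}},\;
                               |\rho_{23}|-\sqrt{\rho_{11}\rho_{44}}\bigr\},\\
\tau_{AB} &= \frac{1}{\sqrt{3}}\,\rho_{AB}
           + \Bigl(1-\frac{1}{\sqrt{3}}\Bigr)\,\rho_{A}\otimes\frac{I}{2}.
\end{align}
```

With this reading, B to A steering is witnessed whenever τ_AB is entangled, and τ_BA is obtained by exchanging the roles of A and B (ρ_B ⊗ I/2 in place of ρ_A ⊗ I/2), matching the statement in the text.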
Through simple calculations, the matrix τ AB of the X-state ρ x can be expressed as with a = or where Using a similar method, we find that the steering from Alice to Bob can be witnessed via one of the inequality or According to the inequality, the steerability from Bob to Alice S B→A is found to be By exchanging the mode A and the mode B, we can obtain the steerability S A→B as The factor 8 √ 3 guarantees that the steerability of the maximally entangled state is 1.The Bell inequality can be violated in quantum mechanics, which means that quantum mechanics cannot be redefined as a local realist theory.As we all know, the typical Bell inequality is the CHSH inequality.To study the relationship between quantum steering and Bell nonlocality, we use the CHSH inequality to verify the nonlocality of quantum steering.The Bell operator for the CHSH inequality can be defined as where a, a ′ , b, and b ′ are unit vectors in R 3 , and σ = (σ 1 , σ 2 , σ 3 ) denotes the vector of Pauli matrices.To detect the Bell nonlocality of a state ρ, we employ the CHSH inequality and its expression of inequality for the two qubits to test local-realistic theories reads Thus, the requirement to violate the CHSH inequality is B(ρ) > 2, and the violation of this inequality implies the nonlocality of the state.We need to find the maximal Bell signal B(ρ), which for two-qubit systems can be equivalent to where K i and K j are the eigenvalues of the real symmetric matrix K(ρ) = T T ρ T ρ , and T = (t ij ) represents the correlation matrix with t ij = Tr[ρσ i σ j ].For the two-qubit X-state, K 1 , K 2 , and K 3 take the specific form As K 1 is greater than K 2 , we can represent the maximal Bell signal as with [32,33].Note that the maximal violation of the CHSH inequality for certain states is 2 √ 2, and this bound can only be obtained by the maximally steered states.We will use Eqs.( 14), (15), and ( 17) to judge whether quantum steering is nonlocal in the dilaton black hole. III. QUANTIZATION OF DIRAC FIELD IN DILATON BLACK HOLE Let us now introduce the massless Dirac equation in a general background spacetime [34,35] where γ a denotes the Dirac matrices, Γ µ = 1 8 [γ a , γ b ]e ν a e bν;µ is the spin connection coefficient, and the four-vectors e µ a represents the inverse of the tetrad e a µ .Note that the e a µ is defined by g µν = η ab e a µ e b ν with η ab = diag(−1, 1, 1, 1).The thermal Fermi-Dirac distribution of particles of the GHS dilaton black hole with the Hawking temperature T = 1 8π(M −D) has been computed [36][37][38].It is well known that the presence of such radiation is called the Hawking effect.The metric for the GHS dilaton black hole reads [39,40] where M and D represent the mass and the dilaton of the black hole, respectively.Throughout this paper, we set = G = c = κ B = 1 for convenience.In addition, the dilaton D and the mass M should satisfy the relationship D < M. 
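Two of the formulas garbled above have standard forms that are worth restating, consistent with the definitions given in the text: the maximal CHSH value is the usual Horodecki expression in terms of the eigenvalues K_i of K(ρ), and the Hawking temperature is the expression the text quotes for the GHS black hole.

```latex
% Horodecki-type maximal CHSH value implied by the text's definition of K(rho),
% and the Hawking temperature of the GHS dilaton black hole quoted above.
\begin{align}
t_{ij} &= \mathrm{Tr}\!\left[\rho\,(\sigma_{i}\otimes\sigma_{j})\right], \qquad
K(\rho) = T_{\rho}^{T} T_{\rho},\\
B_{\max}(\rho) &= 2\sqrt{K_{1}+K_{2}},
\qquad K_{1}\ge K_{2}\ \text{the two largest eigenvalues of } K(\rho),\\
T_{H} &= \frac{1}{8\pi\,(M-D)} .
\end{align}
```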
To separate the Dirac equation in the following discussion, we utilize a tetrad as where f = (r−2M ) r and r = r − 2D.According to Eq.( 18), the massless Dirac equation in the GHS dilaton black hole can be specifically represented as If we use , we can solve the Dirac equation near the event horizon of the black hole.For the exterior region and interior region of the event horizon, we obtain the following positive frequency outgoing solutions [42-44] where O = t − r * , J is a four-component Dirac spinor, and k is the wave vector that can be used to label the modes.Using Eq.( 22) and Eq.( 23), the Dirac field can be expanded as where σ = (in, out), âσ k and bσ † k are the fermion annihilation and antifermion creation operators acting on the quantum state, respectively.The annihilation operator and creation operator satisfy the canonical anticommutation relations âout k , âout † Usually, we call the dilaton modes Ψ ± σ,k .The complete basis for positive energy modes, i.e., the Kruskal modes introduced by Domour-Ruffini [30], can make analytic continuations of Eqs.( 22) and (23).We can also use the Kruskal modes to expand the Dirac field where ĉσ k and dσ † k are the fermion annihilation and antifermion creation operators acting on the Kruskal vacuum, respectively.It can be seen from Eq.( 24) and Eq.( 25) that the Dirac field is decomposed into the Kruskal and dilaton modes, respectively.Thus, we can obtain the Bogoliubov transformations between Kruskal and dilaton modes.Using the Bogoliubov transformations, we can get the relations between the Kruskal and dilaton operators that take the forms Since the GHS dilaton black hole can be divided into physically accessible and inaccessible regions, the ground and only excited states in Kruskal spacetime become the two-mode squeezed state in the dilaton black hole.After properly normalizing the state vector, the Kruskal vacuum and only excited states in dilaton spacetime read where {|n out } and {|n in } correspond to the orthonormal bases for the outside and inside regions of the event horizon, respectively. IV. REDISTRIBUTION OF FERMIONIC STEERING IN DILATON SPACETIME We initially assume that Alice and Bob share a maximally entangled state in the asymptotically flat region of the dilaton black hole, which can be written in the following form where the modes A and B are observed by Alice and Bob, respectively.Then, Bob hovers near the event horizon of the dilaton black hole and Alice continues to stay at the asymptotically flat region.Therefore, Bob can detect a thermal Fermi-Dirac distribution of particles, meaning that his detector is found to be excited.We can use the dilaton modes of Eq.( 27) to rewrite the initial entangled state Here, the physically inaccessible mode B is observed by Anti-Bob inside the event horizon. The density matrix of quantum state φ AB B in the orthonormal basis {|0, 0, 0 , |0, 0, 1 , |0, 1, 0 , |0, 1, 1 , |1, 0, 0 , |1, 0, 1 , |1, 1, 0 , |1, 1, 1 } can be expressed as A. Physically accessible steering Because the exterior region of the black hole is causally disconnected from the interior region, Alice and Bob cannot approach the mode B. 
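Before evaluating the accessible steering, it helps to restate the states involved. The squeezed-state expansion and the initial entangled state referred to above were lost in extraction; the block below gives the forms commonly used for Dirac fields in this background, with the squeezing angle fixed by the Hawking temperature. These should be read as assumptions consistent with the surrounding text, not as the paper's exact equations.

```latex
% Commonly used dilaton-mode expansion for the fermionic Kruskal states and the
% maximally entangled initial state (assumed forms for the missing equations), with
% cos r = (e^{-8\pi\omega(M-D)}+1)^{-1/2} and sin r = (e^{8\pi\omega(M-D)}+1)^{-1/2}.
\begin{align}
|0\rangle_{K} &= \cos r\,|0\rangle_{\mathrm{out}}|0\rangle_{\mathrm{in}}
              + \sin r\,|1\rangle_{\mathrm{out}}|1\rangle_{\mathrm{in}},
\qquad
|1\rangle_{K} = |1\rangle_{\mathrm{out}}|0\rangle_{\mathrm{in}},\\
|\phi\rangle_{AB} &= \tfrac{1}{\sqrt{2}}\bigl(|0\rangle_{A}|0\rangle_{B}
                   + |1\rangle_{A}|1\rangle_{B}\bigr),\\
|\phi\rangle_{AB\bar{B}} &= \tfrac{1}{\sqrt{2}}\Bigl[
     |0\rangle_{A}\bigl(\cos r\,|0\rangle_{B}|0\rangle_{\bar{B}}
                      + \sin r\,|1\rangle_{B}|1\rangle_{\bar{B}}\bigr)
   + |1\rangle_{A}\,|1\rangle_{B}|0\rangle_{\bar{B}}\Bigr].
\end{align}
```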
The physically accessible information is encoded in the mode A described by Alice and the mode B described by Bob.Taking the trace over the mode B inside the event horizon, we obtain a mixed density matrix of Alice and Bob in the basis {|00 , |01 , |10 , |11 }.Employing Eqs.( 11) and ( 12), quantum steering from Alice to Bob and quantum steering from Bob to Alice are found to be S > 0 indicates the presence of quantum steering, and S = 0 indicates the absence of quantum steering.Using Eq.( 17), the maximal Bell signal can be writen as We plot the A → B steering, B → A steering, and the maximal Bell signal as a function of the dilation D and plot the relationship of quantum steering between modes A and B, as shown in Fig. 1.It can be seen from Fig. 1(a) and (b) that the fermionic steering monotonically decreases to a fixed value with increasing dilaton D, while the bosonic steering first irreversibly degenerates and then undergoes sudden death under the influence of the dilaton [22].We find that the fermionic steerability from Alice to Bob is always larger than the fermionic steerability from Bob to Alice, while the bosonic steerability from Alice to Bob is always smaller than the bosonic steerability from Bob to Alice in dilaton spacetime.If we use the steerability from Alice to Bob over the steerability from Bob to Alice for relativistic quantum information tasks, we should use the fermionic steering rather than the bosonic steering.These results suggest that the fermionic steering contrasts sharply with the bosonic steering due to the difference between the Fermi-Dirac statistic and the Bose-Einstein statistic in dilaton spacetime.From Fig. 1(a) and (b), we can also see that the fermionic steering does not depend on the frequency of the modes ω in the limit of an extremely dilaton black hole (D → M).In other words, the fermionic steering is not affected by the frequency in the limit of D → M. In Fig. 1(c), we also find that B(ρ AB ) is equal to 2 for D → M, indicating that there is no Bell nonlocality in quantum state ρ AB .This means that the fermionic steering cannot be considered to be nonlocal in the extreme dilaton black hole.In Fig. 1(d), each circle represents an observer, and the arrows connecting two observers describe the bipartite steering relationship between them.Based on the characteristics of steering, quantum steering in the bipartite system includes two-way steering, one-way steering, and no-way steering.It is easy to find that the physically accessible steering between Alice and Bob is a two-way steering in curved spacetime. B. Physically inaccessible steering Next, we will explore the steering between Alice and Anti-Bob, and the steering between Bob and Anti-Bob under the influence of the dilaton.Since Anti-Bob is inside the event horizon of the black hole, we refer to this type of quantum steering as a "physically inaccessible steering". Firstly, we study the fermionic steering between Alice and Anti-Bob.Tracing over the mode B observed by Bob, we can get the density matrix between the modes A and B as Using Eqs.( 11), (12), and ( 17), the fermionic steering and the maximal Bell signal between Alice and Anti-Bob can be written as and The influence of the dilaton D of the black hole on the fermionic steerability and the maximal Bell signal between Alice and Anti-Bob for different frequencies ω are plotted in Fig. 2 Specially, for the non-dilaton and extreme dilaton black holes, quantum steering is independent of the frequency ω.In Fig. 
2(c), we find that B(ρ AB ) is always less than or equal to 2, which means that there is no Bell nonlocality between Alice and Anti-Bob.Two different possibilities of bipartite steering between Alice and Anti-Bob are depicted in Fig. 2(d).According to the steering asymmetry of Alice and Anti-Bob, the fermionic steering exhibits unique directionality, which may lead to one-way steering, that is, Alice can steer Anti-Bob, but Anti-Bob cannot steer Alice. Note that the dilaton for the sudden birth of steering we can obtain S A→ B > 0 and S B→A = 0, which corresponds to one-way steering in Fig. 2(d). For D 0 < D ≤ 1, we find S A→ B > 0 and S B→A > 0, so the steering between Alice and Anti-Bob is a two-way steering. In addition, we also study the fermionic steering between Bob and Anti-Bob.We trace over the mode A and then get the density matrix for the modes B and The fermionic steering and the maximal Bell signal between Bob and Anti-Bob can be expressed as and In Fig. 3, we plot quantum steering and the maximal Bell signal between Bob and Anti-Bob as a function of the dilaton D, as well as its relationship in dilaton spacetime.From Fig. 3(a), we can see that the fermionic steering S B→B first increases from zero to the maximum and then suffers sudden death with the growth of the dilaton D. In Fig. 3(b), we can find that the fermionic steering S B→B is always equal to zero.The dilaton for the maximal steering is and the dilaton for the sudden death of quantum steering is In Fig. 3(c), we can also find that the maximal Bell signal B(ρ AB ) is always less than 2, indicating that there is no Bell nonlocality between Bob and Anti-Bob. C. Asymmetry of quantum steering Unlike quantum entanglement and Bell nonlocality, quantum steering has asymmetric.Therefore, we distinguish quantum steering into three cases: (i) the first one corresponds to no-way steering, showing that the state is nonsteerable in any direction; (ii) the second case is two-way steering, that is to see, the state can be steerable in both directions; (iii) the third case is one-way steering, indicating that the state is steerable only in one direction.The last case reflects the asymmetric nature of quantum steering.To measure the degrees of asymmetry in the dilaton black hole, we define the steering asymmetries between the modes A and B, A and B, or B and B as Fig. 4 shows how the dilaton D of the black hole influences the steering asymmetry for different frequencies ω.It can be seen that, as the dilaton D increases, the steering asymmetry S ∆ AB increases, while the steering asymmetry S ∆ A B increases from zero to the maximum and then decreases to a fixed value, and the steering asymmetry S ∆ B B suffers sudden death.In Fig. 4(a), we find that the steering asymmetry between Alice and Bob has the case of two-way steering, which means that they can steer each other.Fig. 4(c) presents quantum steering in one direction (one-way steering), indicating that quantum steering between B and B is completely asymmetric.From Fig. 4(b), we can also find that maximal steering asymmetry between A and B shows the the transformation between one-way steering and two-way steering.In other words, for the point D = D 0 , the system ρ A B is experiencing a transformation from one-way steering to two-way steering in dilaton spacetime.In addition, the steering asymmetry is also independent of the frequency ω for the non-dilaton and the extreme dilaton black hole. V. 
MONOGAMY RELATIONS BETWEEN QUANTUM STEERING AND ENTANGLEMENT IN DILATON SPACETIME In the previous section, we have studied the relationship between quantum steering and Bell nonlocality in detail, but the relationship between steering and entanglement is still not very clear in dilaton spacetime.As is well known, quantum steering is an intermediate form of quantum correlation between Bell nonlocality and quantum entanglement and a good connection between Bell nonlocality and quantum entanglement [45].Therefore, we try to find the relationships between Our model involves three observers: Alice, Bob, and Anti-Bob.Here, Alice stays stationary at an asymptotically flat region, Bob hovers near the event horizon of the dilaton black hole, and Anti-Bob is restricted by the event horizon of the black hole.We get that the fermionic steering between Alice and Bob decreases to a fixed value with the dilaton, while the bosonic steering suffers sudden death under the influence of the dilaton [22].In addition, the fermionic steerability from Alice to Bob is always larger than the fermionic steerability from Bob to Alice, whereas the bosonic steerability is the opposite of the fermionic steerability.These different properties between the fermionic and bosonic steering originate from the difference in statistics.We find that the physically accessible steering in the extremely dilaton black hole cannot be affected by frequency.Interestingly, the physically accessible steering in the extremely dilaton black hole has no Bell nonlocality, meaning that quantum steering cannot be considered nonlocality.We also find that quantum steering between Alice and Bob is a two-way steering in dilaton spacetime. We also investigate the properties of the physically inaccessible steering in dilaton spacetime. It is shown that the dilaton gravity can redistribute the fermionic steering, but cannot redistribute Bell nonlocality, meaning that the physically inaccessible steering cannot be considered to be nonlocal.We find that, as the dilaton increases, the steering between Alice and Anti-Bob increases monotonously, while the steering from Bob to Anti-Bob increases non-monotonically.When the steering from Anti-Bob to Alice experiences a sudden birth with the dilaton, we obtain the maximum steering asymmetry that indicates the transformation between one-way steering and two-way steering in dilaton spacetime.When the steering from Bob to Anti-Bob experiences a sudden death with the dilaton, it shows the transformation between one-way steering and no-way steering.Finally, we obtain some monogamous relations between the fermionic steering and entanglement in dilaton spacetime.Therefore, we can indirectly understand the redistribution of quantum steering by understanding the redistribution of quantum entanglement in curved spacetime. FIG. 1 : FIG. 1: (a), (b), and (c) the fermionic steerability and the maximal Bell signal between Alice and Bob as a function of the dilaton D, for different ω.We set the parameter M = 1.(d) an example of relationship of quantum steering between Alice and Bob. (a), (b), and (c).And the illustration in Fig.2(d) depicts the relationship of quantum steering between Alice and Anti-Bob.From Fig.2(a) and (b), we can see that the fermionic steering between Alice and Anti-Bob can be generated by gravitational effect, and the B → A steering undergoes the sudden birth with the dilaton D. 
In addition, the A → B steerability is always larger than the B → A steerability in dilaton spacetime.We also see from Fig.2(a) and (b) that, for the given dilaton D, the steering between Alice and Anti-Bob decreases monotonically with the increase of the ω. FIG. 2 : FIG. 2: (a), (b), and (c) the fermionic steerability and the maximal Bell signal between Alice and Anti-Bob as a functon of the dilaton D, for different ω.We set the parameter M = 1.(d) an example of relationship of quantum steering between Alice and Anti-Bob. Fig. 3 ( d) shows two relationships of quantum steering between Bob and Anti-Bob.The conditions of its relationship are given as follows: (i) the condition 0 < D < D 2 means one-way steering S B→ B > 0 and S B→B = 0; (ii) the condition D 2 ≤ D ≤ 1 implies no-way steering S B→ B = 0 and S B→B = 0.We compare Figs.1-3 in order to relate the two fundamental physical phenomena of quantum steering and Bell nonlocality.We conclude that the fermionic steering cannot be considered nonlocality in the extreme dilaton black hole.From Figs.2-3, we can find that the physically inaccessible steering is also not nonlocal.We can also find that quantum steering and the maximal Bell signal are not affected by the frequency of the fermionic field for the extreme dilaton black hole.In other words, quantum steering and the maximal Bell signal for different frequencies approach the same values in the extreme dilaton black hole. FIG. 3 : FIG. 3: (a), (b), and (c) the fermionic steerability and the maximal Bell signal between Bob and Anti-Bob as a functon of the dilation D, for different ω.We set the parameter M = 1.(d) an example of relationship of quantum steering between Bob and Anti-Bob. FIG. 4 : FIG. 4: The steering asymmetry S ∆ AB , S ∆ A B , and S ∆ B B as a function of the dilaton D, for different ω.We set
5,677
2023-11-15T00:00:00.000
[ "Physics" ]
Influence of Exoskeleton Use on Cardiac Index This study aims to assess the whole-body physiological effects of wearing an exoskeleton during a one-hour standardized work task, utilizing the Cardiac Index (CI) as the target parameter. N = 42 young and healthy subjects with welding experience took part in the study. The standardized and abstracted one-hour workflow consists of simulated welding and grinding in constrained body positions and was completed twice by each subject, with and without an exoskeleton, in a randomized order. The CI was measured by Impedance Cardiography (ICG), an approved medical method. The difference between the averaged baseline measurement and the averaged last 10 min was computed for the conditions with and without an exoskeleton for each subject to result in ∆CIwithout exo and ∆CIwith exo. A significant difference between the conditions with and without an exoskeleton was found, with the reduction in CI when wearing an exoskeleton amounting to 10.51%. This result corresponds to that of previous studies that analyzed whole-body physiological load by means of spiroergometry. These results suggest a strong positive influence of exoskeletons on CI and, therefore, physiological load. At the same time, they also support the hypothesis that ICG is a suitable measurement instrument to assess these effects. Introduction According to the ASTM International Technical Committee on Exoskeletons and Exosuits (ASTM F48), "an exoskeleton is defined as a wearable device that augments, enables, assists, or enhances motion, posture, or physical activity" [1]. Exoskeletons can be grouped by the way they create support, the supported body region, and their use case [1][2][3]. Exoskeleton support can be either passive, with force created by springs, levers, or elastic elements, or active, with force created by a power source. Another possibility is a mix of passive and active elements [4]. The supported body region could be any region of the body, although the most common are back, shoulder, and lower-limb exoskeletons [5]. The development and research on exoskeletons for the industrial use case has recently gained great momentum [2,3,[5][6][7] due to the prevalence of work-related musculoskeletal diseases (MSD) resulting from heavy physical work [8]. MSD can lead to a decreased quality of life, increase in sick leave, incapacity to work, and therefore results in the absence of skilled workers. Industrial exoskeletons for back and shoulder support have been shown to reduce the muscle activity and perceived strain in the target body region as well as the overall perceived strain [9][10][11][12][13][14]. These effects of exoskeletons on the target region are the focus of a majority of the research [2]. Effects on whole-body unloading have not yet been studied sufficiently, and the results are inconsistent [15][16][17]. While a reduction in heart rate from using an exoskeleton could not be shown for either upper or lower body exoskeletons, a reduced metabolic cost could be found for upper-body exoskeletons [6,18]. This paper focuses on assessing the whole-body physiological effects of exoskeletons instead of isolated areas. It is indispensable to show the effect of assistance systems Hearts 2022, 3 118 such as exoskeletons on cardiovascular load. One useful approach to study the effect of exoskeletons on whole-body loading, measured in oxygen consumption, was made by Knott et al. [19]. 
We want to build on this approach but utilize more realistic work scenarios [20] as well as a measurement method which is easier to use, more portable, and more comfortable for the worker. Thus, this paper aims to demonstrate the physiologically relieving effects of exoskeletons in a realistic work task by means of hemodynamics. Hemodynamic parameters describe the dynamics of blood flow, while oxygen is transported by blood. This means that oxygen consumption and the dynamics of blood flow are interrelated. According to the literature, the Cardiac Output (CO) is very closely correlated to the maximum oxygen consumption (Pearson's Correlation of r = 0.88-0.92) [21]. The regulatory system of the cardiac pump function is designed to meet the body's demand for oxygen at all times. CO represents a parameter that is indicative of acute physical stress, especially during moderate workload [22]. Stegemann describes the increase in CO as a consequence of sympathetic tone caused by physical work as a function linear to oxygen uptake [23,24]. To date, the conventional invasive methods for determining CO include cardiac catheterization and arterial punctures using the Fick's method or the dye dilution method, according to Hamilton. The standard invasive procedures also include pulmonary arterial thermodilution. The invasive nature of these procedures makes them unsuitable for use outside medical diagnostics. Likewise, non-invasive Doppler echocardiography or esophageal Doppler monitoring offers few possibilities of application in everyday work due to the need for a sonograph or an oesophageal catheter and the prerequisite of a resting patient [21,25,26]. Using spiroergometry, CO can be computed by calculating the arteriovenous oxygen difference, but it cannot directly be measured. Regardless of its accuracy, the use of this method is also not practical in an industrial setting due to the need for a breathing mask. The target value CO can be determined non-invasively by Impedance Cardiography (ICG) and shows a high correlation with conventional methods. Lorne et al. [27] could show a Pearson Correlation of r = 0.84 between the oesophageal doppler procedure and ICG in a study with 32 subjects. A similar result was found by Scherhag et al. [28] when comparing ICG to pulmonary arterial thermodilution (ATD). A Pearson correlation of r = 0.83 at rest and r = 0.85-0.87 during exercise was shown in a study with 20 patients. These results were confirmed by Yung et al. [29], who verified the correlation between ICG and ATD (Pearson's r = 0.80) as well as the correlation between ICG and the Fick method (Pearson's r = 0.84) in a population of 20 subjects. Therefore, we hypothesize that the effect of exoskeletons can not only be examined by means of oxygen consumption but also by means of hemodynamics, utilizing the CO, measured by ICG, as the main parameter to describe acute physical stress. Participants and Ethical Approval A total of N = 42 subjects participated in the study. The following criteria were considered: Inclusion criteria: • Trained professional welder • Professional welding experience • physically healthy Exclusion criteria: acute or chronic diseases All subjects were professionals with welding experience. All subjects were healthy, had no contraindicating musculoskeletal or cardiovascular diseases and gave written informed consent to participate in the study. N = 39 of total N = 42 participant datasets could be used for this evaluation. 
Subject 0305, 0308, and 0323 could not be taken into account due to weak signals of the ICG derivation, see Table 1. For these subjects an elevated BMI and body-fat percentage lead to a base-impedance that was outside of the working range of the system. The average age of the study population was 23.3 years. 36 of the participants were male (92.3%), 3 were female (7.7%). Body mass index (BMI) averaged at 26. The experiment received prior approval by the ethical committee of the University of Stuttgart on 20 September 2021 with the protocol code Az. 21-018. Experimental Design In order to ensure a safe experimental procedure, welding simulators "Soldamatic" from the company Seabery (Seabery Soluciones, Huelva, Spain) as well as grinding simulators designed by the Institute of Industrial Engineering and Management at the University of Stuttgart and the Fraunhofer IPA were used. These simulators accurately mimic the task of welding a seam and reworking the piece with an angle grinder under laboratory conditions. Standard DIN EN ISO 9606-1 for welding education served as a basis for the simulated workplaces, allowing to define real processes under authentic framework conditions. DIN EN ISO 9606-1 describes and defines welding in constrained positions. Since it is the welding process with the highest industrial impact, the metal active gas (MAG) welding process was chosen for this study. The following welding positions for this experiment were defined in cooperation with the SLV Nord welding research institute, Hamburg: 1. PF Position-vertical uphill (workpiece located in front of the body, end position slightly below eye level) (see Figure 1). 2. PE Position-overhead (workpiece positioned above head, approximately 300 mm in front of the eyes) (see Figure 2). Each subject welded a 250 mm seam in PF position, moving the welding torch along the workpiece with a speed of 3.5 mm/s, followed by simulated grinding in this position. For the grinding task, the prepared grinder is moved up and down along the simulated weld until a time of 20 s is reached. This is an inherent part of the realistic welding task and simulates the physical load arising from the weight of the grinder as well as the grinding pressure. This procedure was repeated 10 times. Directly following, each subject completed the same process in the PE position (10 times welding and grinding of the seam). The total time for the workflow was approx. one hour. Each study participant completed the defined workflow twice resulting in a randomized crossover study design: group one started with an exoskeleton and group two started without an exoskeleton. The assignment of subjects to the groups was randomized. 20 of the participants started without exoskeletons (51.3%), and 19 participants started with exoskeletons (48.7%). The subjects were given at least one hour to rest between the runs. Target Ratio In order to investigate the influence of the exoskeletons used on the hemodynamics during the defined one-hour activity, the Cardiac Index (CI) was observed. The CI is a normalized value, which is calculated by dividing the CO by the body surface area (BSA). The normalized CI has the unit L/min/m 2 and provides comparability among the test subjects [22]. Target Ratio In order to investigate the influence of the exoskeletons used on the hemodynamics during the defined one-hour activity, the Cardiac Index (CI) was observed. The CI is a normalized value, which is calculated by dividing the CO by the body surface area (BSA). 
The normalized CI has the unit L/min/m2 and provides comparability among the test subjects [22]. The selection of exoskeletons was randomly determined. In order to be market-neutral and not to create a competitive advantage, no manufacturer-selective evaluation was conducted. This is possible because the design, the point of force application at the upper arm, and the supporting force of all three exoskeletons are similar. We distributed the exoskeletons randomly to the subjects and tried to keep the model use as equally as possible among the test persons as shown in Figure 3. For the measurements, two pairs of electrodes were placed at the neck and the thorax. An additional Arterial Compliance Modulation (ACM) sensor was placed at the earlobe. Hemodynamics and cardiac conduction were recorded over the complete duration of the trial. A baseline measurement was taken before the start of each trial. The subjects were finally prepared for the run and standing in front of their working space. For the run with exoskeleton, the baseline measurements were taken with the exoskeleton on. Grinding Simulator In a grinding experiment 35 N of axial force were measured in order to grind a welded seam appropriately. Based on this finding a grinding simulator was developed by the author to ensure appropriate force on the workpiece of 35 N in z-direction. This working point is based on a welding experiment conducted and analyzed in preparation of this study. The welding parameters in accordance with DIN EN ISO 9606-1 were physically welded in an internal test workshop and processed using an angle grinder. The forces occurring during grinding were determined and simulated using force transducers. The constructed test stand consists of a force-absorbing linearly mounted plate of polyoxymethylene that provides visual LED feedback when a force of 35 N is reached in z-direction (see Figure 4).
The operating point is not rigid and leaves around 5 mm space for movement as in real grinding. The counterpart is a commercially available angle grinder. Its functionality is deactivated by isolating the power connector. The grinder is combined with a dummy cutting disc manufactured of polyoxymethylene with the identical dimensions of a 125 mm cutting disc. Data Analysis. All data were analyzed using Minitab statistics software, version 20.1.2 (64 bit). For the statistical analysis, the last 10 min of each one-hour trial are used and compared with a previous baseline measurement that was taken directly before the start of the trial. Afterwards, the differences between the baseline measurement and the last 10 min of each trial were investigated as illustrated in Figure 5. Therefore, the distributions of the two samples, consisting of the differences between the averaged baseline measurement and the averaged last 10 min of all subjects (∆CI with exo and ∆CI without exo), were tested with an Anderson-Darling test. Though the samples are non-normally distributed, we decided to use parametric statistics because, on the one hand, we obtained continuous data and, on the other hand, the sample size of N = 39 is large enough that the robustness of the methods is given. As Rasch and Guiard [30] as well as Gangestad and Thornhill [31] describe, this method is reasonable under the given conditions. The Levene method was used to analyze the variances of the two samples. The comparison of the variances was visualized using interval plots associated with the confidence interval of both samples. Finally, an unpaired t-test was used to examine significant differences of the samples in their mean values.
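The statistical pipeline just described can be illustrated with a short script. The study used Minitab; the following is only a minimal Python/SciPy sketch of the same steps, and the per-subject CI averages shown are hypothetical placeholders, not study data. The CI itself is CO divided by body surface area, for which a standard BSA formula (not specified here) would additionally be needed.

```python
# Minimal sketch (not the authors' Minitab workflow): per-subject deltaCI and the
# normality / variance / mean comparisons described in the Data Analysis section.
import numpy as np
from scipy import stats

def delta_ci(ci_baseline, ci_last10):
    """Averaged last-10-min CI minus averaged baseline CI, per subject."""
    return np.asarray(ci_last10) - np.asarray(ci_baseline)

# Hypothetical per-subject averages in L/min/m^2 (placeholders only).
d_with = delta_ci(ci_baseline=[3.1, 2.9, 3.3], ci_last10=[3.6, 3.4, 3.9])
d_without = delta_ci(ci_baseline=[3.0, 3.0, 3.2], ci_last10=[3.9, 3.8, 4.2])

# Normality (Anderson-Darling), equality of variances (Levene), unpaired t-test.
print(stats.anderson(d_with, dist="norm").statistic)
print(stats.levene(d_with, d_without))
print(stats.ttest_ind(d_with, d_without, equal_var=True))

# Relative reduction as reported: absolute mean difference over the mean last-10-min
# CI without exoskeleton (0.365 / 3.470, i.e., ~10.5% in the paper).
print(f"{0.365 / 3.470:.1%}")
```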
Distribution of the Two Samples ∆CI with exo and ∆CI without exo. Both samples, ∆CI with exo (p < 0.005) and ∆CI without exo (p < 0.005), are non-normally distributed. As described in Section 2.5, parametric statistics are used despite the non-normal distribution of both samples. The method is robust against asymmetric distribution and is more powerful than nonparametric statistics. Analysis of Variances of the Two Samples ∆CI with exo and ∆CI without exo. The analysis of variance showed a similar scatter of the sample with the exoskeleton and the sample without the exoskeleton (see Figure 6). The Levene test resulted in a non-significant difference in the standard deviation of the two samples. Analysis of Means of the Samples ∆CI with exo and ∆CI without exo. An unpaired t-test was used for the comparison of means of the samples ∆CI with exo and ∆CI without exo. The t-test resulted in a significant difference between the two samples with a p-value of p = 0.000. The estimated absolute difference between the means of the two samples, ∆CI absol., amounts to 0.365 L/min/m2 (see Figure 7). Compared to the average CI of the last 10 min of the sample without exoskeletons (x̄ L10m, without exo, total = 3.470 L/min/m2), this corresponds to a 10.51% reduction in CI. An overview of the individual results for each subject can be found in Table A1 (Appendix A). Figure 7. Estimation for differences of ∆CI absol. and the associated 95% confidence interval. Discussion. The analysis of variance using the Levene test shows no significant difference between the samples and therefore indicates a similar standard deviation. It can be assumed that wearing an exoskeleton does not influence the accuracy of the measurement method and does not produce outliers or measurement errors. Thus, both samples can be compared in a comparison of means. The samples ∆CI with exo and ∆CI without exo showed a significant difference (t-test, p = 0.000) in means. This means that wearing an exoskeleton for the defined task leads to a significant decrease in the CI. Due to the correlation of CI to oxygen consumption [21], a significant reduction in the physiological load can be concluded.
A reduction in CI of 0.365 L/min/m 2 when wearing an exoskeleton compared to the same one-hour task without using an exoskeleton indicates reduced physical stress. The reduction amounts to a 10.51% decrease in CI from wearing one of the available upper-body exoskeletons. Therefore, wearing an exoskeleton during a one-hour simulated welding task, consisting of in front of the body and overhead welding and grinding, reduces the acute physical stress by over 10%. A reduction in CI of more than 10% with the aid of an exoskeleton is consistent with the results of Schmalz et al. [32], where a reduction in the oxygen consumption during a defined task of up to 12% was found with the use of an upper-body exoskeleton. Considering the size of the population, the representative range of age, and BMI, but especially the use of different exoskeleton systems, which were not considered selectively, the result is very powerful. Several previous findings on the relieving effects of exoskeletons during physical work can be confirmed by these results. These results should further be confirmed in different standardized working scenarios as well as with a broader range of exoskeletons, i.e., a logistics task with back supporting exoskeletons or overhead assembly with upper-body exoskeletons. As described in the introduction, modern impedance cardiographic measurements are highly reliable [27][28][29], and there is a linear relationship between CO and oxygen consumption [21,22]. Considering these two factors, the results found here indicate a strong benefit of exoskeletons regarding the metabolic relief and indicate that ICG is a highly suitable method to study these effects. Nonetheless, the scaling of the shown reduction in CI has to be clarified. Further studies are needed to verify the relationship between the measured hemodynamics and the oxygen consumption and put them in relation to the absolute performance. Furthermore, the acceptable range regarding health and safety has to be clarified. The idea of using ICG for the evaluation of exoskeletons or even for performance physiological determination of work-related loads brings comfortable advantages. It seems to be promising based on the obtained evidence. However, the correlations must be specified and further confirmed in extended studies. Conclusions The effects of exoskeletons on different physiological, biomechanical, and subjective parameters have been increasingly studied in the past few years. Relieving effects have been shown for subjective effort, muscle activity in the target region, and joint moments. However, these studies mostly focus only on the target area of the exoskeleton and provide little to no understanding of the effects on the whole body. Whole-body relief has been shown through reduced energy expenditure or reduced heart rate, whereas these results are not consistent between studies. As heart rate does not seem sensitive enough to accurately analyze the effects of exoskeletons and spiroergometry is not suitable during real working scenarios, further methods to investigate whole-body loading are necessary. As several studies have shown a clear correlation between oxygen consumption and CO [22][23][24], which can reliably be measured by ICG [27][28][29], we hypothesized this to be an adequate parameter and corresponding measurement instrument to investigate the effects of exoskeletons. 
We set up a one-hour standardized work task which included overhead welding and grinding, using several different exoskeletons for overhead work, and measured the CO and the CI. Results showed a significant reduction in the CI when wearing an exoskeleton, which amounted to an over 10% reduction compared to not wearing an exoskeleton. Therefore, we can confirm the relieving effects of exoskeletons on the cardiovascular system as well as conclude the suitability of the CI as a parameter to study these effects. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper. Data Availability Statement: The data that support the findings of this study are available on request from the corresponding author, M.S. The data are not publicly available due to the privacy of research participants. Conflicts of Interest: The authors declare no conflict of interest. Table A1. Averaged CI at baseline (x̄ BL), last ten minutes (x̄ L10m), and difference ∆CI, with and without exoskeleton, for all subjects.
5,685.4
2022-10-02T00:00:00.000
[ "Engineering", "Medicine" ]
Predicting cotton fiber properties from fiber length parameters measured by dual-beard fibrograph Cotton fiber properties, although strongly influenced by plant growth conditions, are largely dictated by the cotton variety; therefore, certain inherent associations exist among these properties. Previous studies examined the mutual influences of cotton properties (e.g., fiber maturity on strength), but latent associations between fiber length and other important properties (e.g., fineness, maturity and strength) have not been explored. This paper attempted to investigate these relationships, and to create regression models to predict the fiber properties from the length parameters so that an overview on cotton quality can be provided when only length measurements are available. We collected 100 cotton samples as a training set and 17 extra samples as a testing set, and measured the fiber length parameters using the dual beard fibrograph and the seven other fiber properties (strength, elongation, micronaire, nep, fineness, immature fiber content, and maturity ratio) using the High Volume Instrument and Advanced Fiber Information System. We then performed the correlations, multicollinearity, regression and clustering analyses on the fiber properties. It was found that the fiber length parameters had moderate associations (0.3<|r|<0.7) with the seven properties, and the prediction errors for the training set varied from 2.25% (maturity ratio) to 14.36% (nep). The Bland–Altman analysis proved that for all the seven properties, more than 94.9% of the predicted and actual points were within the 95% agreement limits and without systematic biases. The regression models based on the five cotton clusters consistently lowered the prediction errors through the optimally aggregated fiber properties. The comparable results were obtained from the testing set, which demonstrated the good generalization power of the prediction models. Introduction The dual-beard fibrograph (DBF) was developed to measure the fiber length distribution (FLD) of cotton (Jin et al. 2018;) and to calculate a set of length parameters defined by the industry, such as the upper half mean length (UHML), short fiber content (SFC), and uniformity index (UI), derived from the FLD (Zhou and Xu 2021). These length parameters are evaluated separately in the current classification system for cotton quality assessment (Cotton Incorporated 2018). For instance, UHML is reported in both 100ths and 32nds of an inch (or 25.4 mm), while length uniformity is classified into five levels based on different thresholds of UI (Cotton Incorporated 2018). Multiple length parameters can also be used together to perform a multivariate classification for evaluating length uniformity . Physical properties of cotton fibers, such as fiber length, fineness, maturity and strength, are largely dictated by the cotton variety or genetic makeup, but are susceptible to environmental conditions (e.g., temperature, water and nutrient) experienced by the plant (Basra and Saha 2020;Cotton Incorporated 2018;Seagull 2001). From an epidermal cell to a mature fiber, the fiber development takes four distinctive phases, including initiation (3 days before anthesis to 3 days post anthesis−DPA), elongation (3 to 25 DPA), secondary wall thickening (15 to 45 DPA), and maturation (45 to > 50 DPA) (Seagull 2001;Qin et al. 2011;Basra and Saha 2020;Wilkins and Jernstedt 2020). 
At the end of the development, a fiber can increase its length by 4,000-5,000 times, its diameter by 2-3 times, and its wall volume by more than 10,000 times (Seagull 2001). Each phase involves specific biological processes and directly impacts the final physical properties of fibers. However, considerable overlapping occurs between the 2nd (elongation) and 3rd (secondary wall thickening) phases, allowing the length, diameter, and wall thickness of a fiber to grow simultaneously in a period varying from 5 to 10 DPA (or 10-20% of the entire DPA) (Seagull 2001; Ryser 2020). Changes in the fiber length and diameter of a cotton variety (G. hirsutum, variety MD51 Ne) have been recorded from 5 DPA to 50 DPA, and Hernandez-Gomez et al. (2015) measured the fiber cell wall thickness of four cotton species (PimaS7, Fm966, Krasnyj and JFW15) on the 10th, 17th and 25th DPA. Both studies revealed that the fiber length, diameter and wall thickness had upward trends with the increase of DPA, even though they changed at different rates. Thus, some inherent associations may exist among these fundamental properties, which in turn can influence other properties, such as maturity, fineness and strength. To help cotton breeders select cotton varieties having the desired properties, Mangialardi et al. (1990) assessed the effect of the fiber fineness, maturity, and strength on the number of neps and concluded that neps were most highly correlated with maturity and fineness, with correlation coefficients of −0.30 and −0.40, respectively. A low but significantly positive correlation was also found between the nep count and fiber strength (van der Sluijs et al. 2016). Montalvo et al. (2004) used a near-infrared (NIR) instrument to analyze fineness, maturity and micronaire of cotton, and found a significant relationship between fiber strength and reflectance (Rd), micronaire, and moisture. In a subsequent study, Montalvo et al. (2005) further reported that cotton micronaire had linear relationships with the cotton fineness and maturity ratio, with R-squared values of 0.88 and 0.87, respectively. Kim et al. (2019) established relationships between the maturity and strength-related properties. These studies demonstrated that cotton maturity is positively correlated with the bundle fiber strength and elongation; moreover, maturity is a major factor that dictates fiber strength. Among the fiber properties, the cotton length is considered the most crucial attribute impacting the yarn spinning efficiency and quality (Thibodeaux et al. 2008). Krifa (2016) studied the influences of the cotton maturity, fineness, and strength on the fiber length distribution and observed that immature and weak cottons exhibited a unimodal length distribution, whereas mature and strong cottons tended to show a bimodal length distribution. However, few studies have explored the quantitative relationships between cotton fiber length and other properties. In a previous study (Zhou et al. 2021), we demonstrated a new imaging approach, DBF, which measures the FLD of a cotton sliver clamped randomly and combed to form two tapered beards. DBF can mitigate the fiber entanglement and alignment problems encountered in single-beard measurements (Jin et al. 2018; Breuer and Farber 2008; Pabich et al. 2010) and can output reliable and comprehensive length parameters derived from FLDs (Zhou and Xu 2021).
Based on the correlation analysis on the length parameters (Zhou and Xu 2020), we found three major parameters, UHML, SFC and UI, were most effective for characterizing cotton length attributes when used collectively. Here, UHML represents the average length of the longer half of the fibers, SFC refers to the number of fibers shorter than 12.7 mm (or 0.5 in), and UI indicates the overall length uniformity (Cotton Incorporated 2018). Together, these parameters provide a holistic view of the cotton length quality . Figure 1 shows the FLDs of two distinct cottons and their properties, including the UHML obtained from DBF, strength obtained from the High Volume Instrument (HVI), and maturity ratio obtained from the Advanced Fiber Information System (AFIS). The two FLDs follow approximately normal distributions, but Cotton 1 represents a distribution with a high long-fiber content and Cotton 2 has a high short-fiber content. Thus, their UHMLs are markedly different (30.41 mm vs. 25.58 mm). Because of the overlap between the 2nd (elongation) and 3rd (secondary wall thickening) phases, the co-developments of fiber length, diameter and thickness can cause inherent associations among these properties. It is likely that in the overlap range (5 to 10 DPA), a longer fiber has more DPA to thicken its secondary wall (daily layering of cellulose) than a shorter fiber, which leads to a more mature and stronger fiber. This association is evidenced by Cotton 1 and Cotton 2 in this case. Overall, Cotton 1 has a higher strength and maturity ratio than Cotton 2. Therefore, cotton fiber length has a potential to be a useful factor for estimating other physical properties of cotton. In a preliminary study, we examined the associations between the length parameters and other properties of 100 cotton samples through a multivariate regression analysis. The length parameters, UHML, SFC and UI, were measured separately by DBF, HVI and AFIS, and used as the input variables. The other properties, including strength and maturity ratio, were taken out of the HVI and AFIS measurements, and used individually as the dependent variable. As shown in Table 1, the correlation coefficients (|r|) between the fiber properties and the (UHML, SFC, UI) of the three methods were in a moderate range (0.3<|r|< 0.7) (Ratner 2009). There was no particular advantage or disadvantage for any of these methods when their length measurements were used to estimate the associations with the other fiber properties. In this study, we will focus on the use of the length measurements from DBF for predicting other important fiber properties to provide an overview on cotton quality without the HVI and AFIS measurements. This study attempts to expand the utilization of these three length parameters to evaluate other important properties of cotton, including its strength, maturity, fineness and nep, and to add a useful function to DBF for the quick assessment of cotton fiber properties as soon as a dual-beard sample is scanned on DBF. By incorporating measurements from different fiber testing methods, we first explore the associations between the cotton fiber length parameters measured by DBF and the other physical properties obtained from HVI and AFIS, and then establish models for estimating these properties based on the length parameters. 
The outcomes of this study can enrich the understanding of the relationship among various cotton properties and provide a fast and efficient means for the comprehensive evaluation of cotton quality when only fiber length measurements are available. This is particularly useful when the HVI or AFIS testing is not attainable. Materials and methods In this study, we used two batches of U.S. upland cotton samples provided by the Fiber and Biopolymer Research Institute, Texas Tech University (FBRI-TTU). The first batch contained 100 samples to be used as the training set, and the second batch contained 17 samples to be used as the testing set. Each cotton sample was divided into three specimens that were tested separately by AFIS (Uster AFIS PRO 2, Knoxville, TN) and HVI (Uster HVI 1000, Knoxville, TN) at FBRI-TTU, and by DBF in our research lab at the University of North Texas (UNT). For each specimen, one replica was used for the AFIS and HVI testing, and three replicas were used for the DBF testing. The three testing methods yielded a large set of the measurements of cotton properties related to the fiber length, strength, elongation, maturity, fineness, and nep. In the subsequent analyses, the three major length parameters obtained from DBF, i.e., UHML, SFC, and UI, were used as the input (or independent) variables, and the seven other fiber properties, i.e., the strength, elongation, and micronaire (MIC) from HVI and nep, fineness, immature fiber content (IFC), and maturity ratio (MR) from AFIS, were used individually as the dependent variable. Table 2 lists the basic statistics of the properties of the 117 cotton samples measured using DBF, HVI, and AFIS. It can be seen that the selected cotton samples cover a wide range for each of these properties, which is important for examining associations among the properties. Correlation, agreement, and hypothesis tests were performed on the test data for the 117 cotton samples to examine the associations between the cotton properties measured by HVI and AFIS with the length parameters measured by DBF to build prediction models for the fiber properties. To circumvent the estimation of errors of the models caused by the variability of individual cottons, a clustering analysis was conducted, in which cotton samples with high similarity in length parameters were grouped into the same cluster, and a few distinct clusters were created to represent collective cotton features based on the cluster centroids. Regression models based on the cluster centroids may better reflect the inherent associations of fiber properties. In the multivariate regression analysis, we used R-squared (R 2 ) or the correlation coefficient (r = ± √ R 2 ) to measure the correlation or association between two variables, the F-test to report how well a regression model fits the data, and the t-test (or coefficient analysis) to analyze the significance of each independent variable in the model. When |r| is between 0.3 and 0.7, a moderate (either positive or negative) correlation exists between the variables (Ratner 2009). The significance level was set at α = 0.05. If the p-value in the F-test is below α = 0.05, the regression model is considered to be statistically significant for predicting the dependent variable, i.e., a good fit with the input data. If the p-value in the t-test is below α = 0.05, the coefficient of a term (an input variable) is deemed to be significant, i.e., the contribution of the variable to the model is important. 
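As a minimal sketch of this regression workflow (not the authors' code), the R², F-test, and per-coefficient t-tests can be obtained with statsmodels; the data file and column names below are assumed placeholders for the DBF length parameters and one HVI property.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data file with columns UHML, SFC, UI, Strength, ... (placeholders).
df = pd.read_csv("cotton_properties.csv")

X = sm.add_constant(df[["UHML", "SFC"]])           # Model 1: UHML and SFC as predictors
y = df["Strength"]                                 # one dependent variable at a time

model = sm.OLS(y, X).fit()
print("R-squared:", round(model.rsquared, 3))      # correlation between actual and predicted values
print("F-test p-value:", model.f_pvalue)           # overall significance of the regression model
print(model.pvalues)                               # t-test p-value per coefficient (alpha = 0.05)
```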
The variance inflation factor (VIF) was calculated to measure the multicollinearity among the input variables in a multivariate regression. In general, a VIF greater than 10 indicates a high correlation or low independence between variables (Wu 2020). A more conservative level of 2.5 is often used for more constrained applications (Mueller et al. 2016). In this study, we set the VIF threshold to 2.5. In addition, Bland-Altman analysis (Bland and Altman 1986) was performed to assess the agreement between the predictions and the actual measurements of fiber properties. SPSS and Excel were used to perform the abovementioned statistical analyses. Results and discussion Associations between fiber length parameters and other fiber properties of cotton When UHML, SFC, or UI is used as the input variable in a linear regression analysis, their correlation coefficients (r) with the seven properties can be calculated individually, and the results are listed in Table 3. The obtained r values range from low to moderate, and some of the p-values are above 0.05 (the numbers in italics). Thus, these three parameters are not ideal for creating single-variate regression models to predict these seven properties. Before using combinations of UHML, SFC, and UI together as the input variables in a linear regression model, we must verify the correlation, significance tests, and multicollinearity associated with the models. Table 4 lists the linear regression statistics between the length parameters (input/independent variables) and the HVI, strength, elongation, and MIC (dependent variables). Here, R 2 indicates the correlation between the actual and predicted values of one of the dependent variables, the F-test shows the overall effectiveness of the regression model, the t-test is the coefficient analysis to verify the contribution of each input variable to the model, and VIF measures the multicollinearity of the input variables in the model. With regard to strength, the R 2 value of Model 1 is 0.295 (equivalently, |r| = 0.543), showing that 29.5% of the overall variance of the strength can be explained by UHML and SFC, and the small p-values (< 0.05) in both the F test and t-test verify that UHML and SFC contribute significantly to the model. A small VIF of 1.964 (< 2.5) indicates no collinearity between UHML and SFC. Therefore, there is sufficient evidence that UHML and SFC can be used to predict strength. The same result can be obtained for Model 2, in which UHML and UI are the input variables. However, in Model 3, R 2 is 0.278, and SFC and UI have high collinearity because the VIF (3.904) is above the threshold (2.5). In Model 4, the three input variables (UHML, SFC, and UI) generate the same correlation as in Model 1 (R 2 = 0.295, |r| = 0.543), but they do not pass the significance tests (p-values > 0.05), and multicollinearity exists among the three variables (VIFs > 2.5). Thus, a model whose input variables include the pair of (SFC and UI) should be excluded from strength prediction. Because UHML and SFC in Model 1 exhibit a slightly stronger association with strength than UHML and UI in Model 2, UHML and SFC were selected as predictors for strength. With regard to MIC, almost the same results as those in the above analysis for strength can be derived. Model 1 shows the best correlation (R 2 = 0.495, or |r| = 0.704), i.e., the strongest association between the pair of (UHML, SFC) and MIC among the four models, and it passes the significance tests (p-values < 0.05) and the VIF check (VIF < 2.5). 
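A sketch of the VIF check with the 2.5 threshold used here is shown below; as above, the file and column names are assumed placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("cotton_properties.csv")          # hypothetical data file
X = sm.add_constant(df[["SFC", "UI"]])             # e.g., Model 3: SFC and UI as predictors

for i, name in enumerate(X.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X.values, i)   # VIF of each predictor in the design matrix
    flag = "collinear" if vif > 2.5 else "ok"      # threshold used in this study
    print(f"{name}: VIF = {vif:.3f} ({flag})")
```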
Although Model 4 has a correlation equivalent to that of Model 1, it has multicollinearity problems, as indicated by the high VIF values. In terms of elongation, Model 4 yields the highest R² (R² = 0.180 or |r| = 0.424), but its VIFs are all above the threshold (multicollinearity) and the p-values (0.149 and 0.682) of two independent variables (SFC and UI) are greater than 0.05 (insignificant contributions). Therefore, Model 1 (i.e., UHML and SFC) was chosen for the elongation prediction. Table 5 summarizes the linear regression statistics of the fiber properties for both the training and testing sets (a total of 117 samples). Combining the criteria for the correlation, p-value, and VIF for selecting a reliable regression model, we can see that Model 1 is an optimal regression model for predicting all four dependent variables. In short, Model 1, consisting of UHML and SFC, generates moderate associations with the seven properties (0.3 < |r| < 0.7), and both parameters contribute significantly to the model without collinearity. Although Model 4 takes advantage of the three parameters and provides equivalent correlations, it has excessive multicollinearity (VIFs > 2.5). Predictions of the Fiber Properties of Cotton Samples According to the R² values in Tables 4 and 5, the two predictors (UHML and SFC) show moderate associations with strength, elongation, MIC, nep, fineness, IFC, and MR. This is because there is a considerable overlap between the elongation and thickening phases, which allows a cotton fiber to grow its length, diameter and wall thickness simultaneously and thus produces certain associations among these properties. In turn, fiber diameter and wall thickness can impact fineness, maturity, strength, and other properties. However, environmental factors, such as plant nutrients, climate, insects, and boll populations, can also greatly influence the growth of a cotton plant and alter these inherent relationships. Thus, only moderate associations can be expected among the fiber properties. Nevertheless, the length parameters still provide certain clues to other fiber properties, and thus can be used to estimate these properties when their testing equipment (e.g., HVI or AFIS) is not available. The prediction models for the fiber properties, based on the above multivariate linear regression analysis of the training set (100 cotton samples), are given by Eqs. (1)-(7); for example, Eq. (2) for elongation is Elongation = −0.368 × UHML − 0.083 × SFC + 18.141. While R² indicates how well a linear model fits the dependent variable, the mean absolute error (MAE) or root mean square error (RMSE) also provides a measure of the goodness-of-fit between the predicted and actual values. Table 6 lists the mean values and MAEs of the seven dependent variables for the training set (100 cotton samples). Relative errors (%MAE), calculated as (MAE/mean × 100%), are also included in parentheses in the table. The %MAEs of the training set vary from 2.25% (for MR) to 14.36% (for nep). The %MAEs in the predictions of MR, strength, MIC, and fineness (note that MIC is a measure combining both maturity and fineness) are less than 4%. The high error of nep prediction is related to the high variability of nep measured by AFIS (see Table 1). The performances of the prediction models shown in Eqs. (1)-(7) were verified using the testing set (Table 7). It can be seen that the errors of the prediction models for the testing set all increased when they were applied to the 17 new samples.
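As a small illustration of the %MAE metric, the elongation model quoted above (Eq. 2) can be applied to a few samples; the input and reference values below are placeholders, not measured data.

```python
import numpy as np

uhml = np.array([30.4, 25.6, 28.1])                # UHML in mm (placeholder values)
sfc = np.array([2.1, 8.3, 4.7])                    # SFC in % (placeholder values)
elong_actual = np.array([6.9, 8.8, 7.6])           # hypothetical reference elongation values

elong_pred = -0.368 * uhml - 0.083 * sfc + 18.141  # Eq. (2) quoted in the text

mae = np.mean(np.abs(elong_pred - elong_actual))   # mean absolute error
pct_mae = 100 * mae / elong_actual.mean()          # relative error: %MAE = MAE / mean * 100
print(f"MAE = {mae:.3f}, %MAE = {pct_mae:.2f}%")
```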
Note that these 17 samples were tested using HVI, AFIS, and DBF at different times from the 100 samples in the training set, and thus their measurements could potentially deviate from those of the training set. The models were able to maintain the relative errors of the training set of under 20%, demonstrating a certain level of generalization. In addition to the correlation analysis for independent and dependent variables, Bland-Altman analysis was used to assess the agreement between the predicted and actual measurements of the dependent variable. Table 8 summarizes the Bland-Altman analysis results for the seven fiber properties of the 117 samples. The 95% limits of agreement are bias ± 1.96 SD, where the bias is the difference between the means of the predicted and actual measurements and SD is their standard deviation. Figure 2 shows the Bland-Altman plots of the seven fiber properties to visualize the mapping of individual cotton samples in comparison with the 95% limits of agreement. First, all seven agreement tests are significant (or strong) because the p-values are < < 0.05. Second, the biases of the seven predicted properties are close to zero, and thus there is no systematic bias between the two sets of data. Third, as shown in Fig. 2, the number of outliers, i.e., the points outside the limit lines, in each of the seven plots is ≤ 6, meaning that more than 94.9% (= 1 -6/117) of points are in agreement at α = 0.05. Predictions of Fiber Properties based on cotton clusters The fiber properties of cotton inherently possess a great amount of variability, resulting in only moderate correlations and certain estimation errors in their regression models, as shown above. To circumvent the model estimation errors caused by cotton variability, we can group cotton samples that have high similarity in the length parameters into the same cluster, thus forming a limited number of distinct cotton clusters. Then, the centroids of the clusters can be used to represent categorical features of cotton that may more reliably reflect the inherent associations of cotton properties. In this section, we apply clustering analysis to the data points in the space of (UHML, SFC), and then create prediction models based on the clusters. K-means clustering is the most commonly used method for partitioning data points into K clusters (Li et al. 2012). The within-cluster sum-of-squares (WCSS) is a metric used to evaluate the variability of the data within each cluster and can be used to determine the optimal number of clusters . In this study, we found that K = 5 was the optimal number for partitioning the 100 samples in the training set. Figure 3 shows the classified cotton samples in the five clusters in the (UHML, SFC) space. It is clear that the five clusters are sharply separated in the (UHML, SFC) space, with Cluster 1 being far from the other clusters. Table 9 lists the sizes and mean properties of the five clusters. From Clusters 1 to 4, the mean UHML shows an increasing trend, as do the mean strength, MIC, fineness, and MR. This means that longer fibers are more likely to have higher strength, fineness, and maturity. The mean SFC and IFC values change in the same direction. Compared to Cluster 4, Cluster 5 has a similar UHML but significantly higher SFC, representing a group with both long and short fiber contents (UHML = 29.46 mm, SFC = 6.02%). The mean UHML and SFC represent the centroids of the clusters. 
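A minimal sketch of this clustering step, assuming scikit-learn and with random placeholder data standing in for the (UHML, SFC) pairs of the training samples, is shown below; the within-cluster sum-of-squares (inertia) is scanned over K to locate the elbow, and the centroids of the chosen partition are then available for the cluster-based models.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(28, 2, 100), rng.normal(5, 2, 100)])  # placeholder (UHML, SFC) pairs

# WCSS (inertia) for K = 1..10 to locate the elbow; the study found K = 5 optimal.
wcss = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in range(1, 11)]
print([round(w, 1) for w in wcss])

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
print("cluster centroids (UHML, SFC):")
print(km.cluster_centers_)                         # centroids used to refit the regression models
```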
We can reconstruct the linear regression models using the cluster centroids to avoid variations in individual points. Equations (8)-(14) are the linear prediction models for the seven properties (strength, elongation, …, MR) based on the centroids. The R² values of the linear regressions are all significantly improved compared to those of the models without clustering. The correlations between the length parameters and properties increase to a high level (0.707 < |r| < 0.990). It can be seen that the cluster-based models reduce the prediction errors for the first six properties and yield the same error for MR as the model without clustering. The two sets of errors are highly correlated but not significantly different (R² = 0.997, p-value = 0.718 > 0.05). However, across the seven fiber properties, the %MAEs with clustering are consistently lower than those without clustering as shown in the figure. Thus, the centroid-based models can improve the accuracy of the fiber-property predictions. Because cotton samples possess high between- and within-bale variability (Gourlot et al. 2012) and the test results are highly susceptible to instrument makes and models (Hunter 2003), most fiber measurements suffer from poor repeatability/reproducibility. It was reported that the differences of the HVI measurements between two separate labs could be as high as 21.7% for MIC, 18.0% for strength and 35.5% for elongation (Hunter 2003). This high variability certainly impacts the accuracy of the above prediction models, causing some fiber properties, e.g., elongation, MIC and nep, to suffer relatively high %MAEs, as shown in Fig. 4. Conclusion This study thoroughly examined the associations of fiber length parameters measured by DBF with other fiber properties measured by HVI and AFIS for cotton fibers, and created prediction models to estimate these fiber properties based on the fiber length distributions generated by DBF. In the study, we collected 117 cotton samples, which were separated into a training set (100 samples) and testing set (17 samples), and used the three testing methods (DBF, HVI, and AFIS) to generate the comprehensive fiber property measurements for the two sets of samples. We then conducted regression analysis, hypothesis testing, Bland-Altman analysis, and clustering on the two sets of data to assess the correlations, multicollinearity, agreement, and clusters of the fiber properties. It was found that the fiber length parameters had moderate associations (0.3 < |r| < 0.7) with the seven other properties (strength, elongation, micronaire, nep, fineness, immature fiber content, and maturity ratio), and the prediction errors of the training set varied from 2.25% (for MR) to 14.36% (for nep). The Bland-Altman analysis confirmed that for any of the seven properties, there was no systematic bias between the actual and predicted values, and more than 94.9% of the predicted values were in agreement with the actual values (α = 0.05). The regression models based on cotton cluster centroids consistently lowered the prediction errors for all of the properties. The analyses using the testing set showed that the prediction models generated results comparable to those for the training set, demonstrating a certain level of generalization for new samples.
The prediction models established in this study make it possible to assess the fiber strength, maturity, fineness, and other important properties when their testing conditions are not available, and to provide an inexpensive method for a quick overview of cotton quality using only two fiber length parameters (UHML and SFC).
6,171.2
2023-01-03T00:00:00.000
[ "Physics" ]
Effect of the Subsampling Ratio in the Application of Subagging for Multivariate Calibration with the Successive Projections Algorithm This article studies the effect of the subsampling ratio in the subagging approach for multiple linear regression with variable selection by the successive projections algorithm. For this purpose, investigations are presented involving simulated data as well as the determination of moisture and protein in wheat and of distillation temperatures (T10 and T90), specific mass, and sulphur in diesel by near-infrared spectrometry. In terms of prediction ability and sensitivity to noise, the best results were obtained for subsampling ratios around 40%. Introduction The successive projections algorithm (SPA)1 was developed to select subsets of variables with small multicollinearity for use in multiple linear regression (MLR) models. MLR-SPA has been employed, for example, in spectrometric determination of solubility of solids in beers,2 glucose in human blood,3 quality parameters of vegetable oils,4 phenolic compounds in sea water,5 sulphur in diesel,6 and various other applications. A graphical user interface for MLR-SPA is publicly available at http://www.ele.ita.br/~kawakami/spa. In addition to new analytical applications of MLR-SPA, several works have also been conducted on the implementation aspects of the algorithm itself. Gains in parsimony were achieved, for example, by identifying variables that can be removed from the model without compromising its prediction ability.7 Improvements were also obtained by exploiting the correlation with the dependent variable in the projections phase of MLR-SPA.8 More recently, a modification was proposed to deal with the presence of unknown interferents in the samples to be analyzed.9 Improvements concerning computational issues have also been reported.10,11 In this context, it has been shown12 that MLR-SPA results can be improved by using a statistical technique called subagging (subsample aggregating). Such a technique consists of combining different models obtained as the result of a process of subsampling.13 In the MLR-SPA-subagging case, the subsampling procedure can be regarded as a random splitting of the modelling data into calibration and validation sets. In a subsequent work, this approach was adapted for use in a calibration transfer framework.14 For this purpose, transfer samples were inserted in the validation set formed at each subsampling iteration. An important factor, which was not addressed in the previous MLR-SPA-subagging works,12,14 concerns the choice of the subsampling ratio employed in the calibration/validation splitting of the modelling samples. In both investigations,12,14 this ratio was arbitrarily set to approximately 60% (number of calibration samples/number of calibration plus validation samples). The present paper investigates the effect of the subsampling ratio on the resulting MLR-SPA-subagging model. For this purpose, case studies involving simulated data, as well as near-infrared (NIR) spectrometric determination of moisture and protein in wheat and T10, T90, specific mass and sulphur in diesel, are presented. The results are evaluated in terms of the prediction ability and sensitivity to spectral measurement noise.
MLR-SPA In the original implementation of MLR-SPA,1 it is assumed that the N_mod samples available for modelling purposes have been divided into calibration and validation sets, with N_cal and N_val samples, respectively (where N_mod = N_cal + N_val). The instrumental response data of the calibration set are then arranged in a matrix X_cal (N_cal × K), where N_cal and K denote the number of samples and variables, respectively. A series of projection operations involving the columns of matrix X_cal is then employed to form K chains with M variables each, where M = min(N_cal − 1, K). The first element of the kth chain corresponds to variable x_k. Each subsequent element in the chain is selected in order to display the least collinearity with the previous ones. Subsets of variables extracted from the chains are then evaluated on the basis of the prediction ability of the resulting MLR models in the validation set. The best subset of variables is then chosen according to a suitable performance criterion, such as the root-mean-square error of validation. Finally, a statistical hypothesis test is employed to remove variables from this subset without compromising the prediction ability of the MLR model.7 MLR-SPA-subagging The MLR-SPA outcome depends on the choice of calibration and validation sets from the available samples. In the MLR-SPA-subagging approach,12 this aspect is exploited to generate a pool of different MLR models which are then aggregated into an ensemble model. Each individual model is obtained by randomly splitting the modelling samples into calibration and validation sets and then applying MLR-SPA. At the end, model aggregation is carried out by co-averaging the individual model predictions as ŷ_av = (1/P) Σ_(n=1..P) ŷ^(n) (1), where ŷ_av and ŷ^(n) denote the predictions of the ensemble model and the nth individual model, respectively. In what follows, the number of subsampling iterations (i.e., the number of aggregated models) will be denoted by P, as in equation 1. Typically, it has been found12 that the MLR-SPA-subagging procedure tends to converge after the aggregation of approximately P = 30 individual models. It is worth noting that the co-averaging procedure expressed in equation 1 can also be reformulated in terms of the regression coefficients of the MLR models (equations 2 and 3), i.e., the ensemble prediction can be written as a single MLR model whose coefficients are the co-averaged coefficients of the individual models, b_k^av = (1/P) Σ_(n=1..P) b_k^(n) (4). If a certain variable x_k was not selected by MLR-SPA for inclusion in a particular individual model, the corresponding regression coefficient b_k is set to zero in that model. For example, assume that K = 3 variables are available for selection and that P = 2 individual models are obtained in the MLR-SPA-subagging procedure. Furthermore, suppose that variables x_1 and x_3 are selected in the first MLR-SPA model and variables x_2 and x_3 are selected in the second MLR-SPA model. In this case, these models could be expressed as ŷ^(1) = b_1^(1) x_1 + b_3^(1) x_3 (5) and ŷ^(2) = b_2^(2) x_2 + b_3^(2) x_3 (6). Equations 5 and 6 can be rewritten as ŷ^(1) = b_1^(1) x_1 + b_2^(1) x_2 + b_3^(1) x_3 (7) and ŷ^(2) = b_1^(2) x_1 + b_2^(2) x_2 + b_3^(2) x_3 (8), where a null regression coefficient was assigned to variables x_2 and x_1 in the first and second models, respectively. The co-averaging procedure can then be employed as in equation 4 with b_2^(1) = 0 and b_1^(2) = 0. Within this context, an important design parameter is the subsampling ratio q, defined as q = N_cal/(N_cal + N_val) × 100% = N_cal/N_mod × 100% (9). For illustration, Figure 1 depicts the MLR-SPA-subagging procedure for two different subsampling ratios, namely q = 50% (Figure 1a) and q = 70% (Figure 1b).
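The overall subagging procedure can be sketched as below. Note that the SPA variable-selection step is replaced here by a simple correlation-based stand-in, so this illustrates only the random calibration/validation splitting and the co-averaging of equations 1 and 4, not the actual MLR-SPA algorithm; the function name, selection rule, and parameters are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def subagging(X_mod, y_mod, P=30, q=0.6, n_vars=5, rng=np.random.default_rng(0)):
    """Sketch of subagging: random cal/val splits, one MLR per split, co-averaged coefficients."""
    N_mod, K = X_mod.shape
    N_cal = int(round(q * N_mod))                   # subsampling ratio q = N_cal / N_mod
    b_av = np.zeros(K)
    b0_av = 0.0
    for _ in range(P):
        idx = rng.permutation(N_mod)
        cal, val = idx[:N_cal], idx[N_cal:]         # random calibration/validation split
        # stand-in for MLR-SPA: keep the n_vars columns best correlated with y on the
        # calibration set (in MLR-SPA the validation set would guide subset selection)
        corr = np.abs([np.corrcoef(X_mod[cal, k], y_mod[cal])[0, 1] for k in range(K)])
        sel = np.argsort(corr)[-n_vars:]
        mlr = LinearRegression().fit(X_mod[cal][:, sel], y_mod[cal])
        b = np.zeros(K)
        b[sel] = mlr.coef_                          # unselected variables get a null coefficient
        b_av += b / P                               # equation 4: co-averaged coefficients
        b0_av += mlr.intercept_ / P
    return b0_av, b_av                              # ensemble model: y_hat = b0_av + X @ b_av
```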
As mentioned above, in previous works concerning the use of MLR-SPA-subagging,12,14 the subsampling ratio q was arbitrarily set to approximately 60%. However, it may be argued that better ensemble models could be obtained by using a different choice for q, which motivates the present investigation. Simulated data set Simulated spectra were generated as in a previous work involving MLR-SPA.7 For this purpose, a linear relation was assumed between the matrix X of instrumental responses and the matrix Y of analyte concentrations, X = YW + N (10), where N is a noise term. Three analytes (termed A, B, and C) and K = 300 spectral variables were considered. Matrix W (3 × 300) contains the proportionality coefficients between the analyte concentrations and the instrumental responses. The W-values adopted in this simulated study are presented in Figure 2a. A total of 200 spectra were generated by using a matrix Y (200 × 3) with concentration values randomly distributed in the range 1-10 (arbitrary units). Gaussian noise with zero mean and standard deviation of 0.1 was added to all spectra as in equation 10. The resulting spectra are presented in Figure 2b. The overall set of N_tot = 200 spectra was divided into a modelling set with N_mod = 100 samples and a prediction set with N_pred = 100 samples (N_tot = N_mod + N_pred) by applying the Kennard-Stone (KS) algorithm19 to the matrix X (200 × 300) of instrumental responses. The modelling set was employed in the MLR-SPA-subagging procedure, as described in the previous section. The prediction set was only employed to evaluate and compare the performance of the resulting models. Wheat data set This public data set consists of NIR diffuse reflectance spectra of N_tot = 100 wheat samples, along with reference values of moisture and protein content.15,16 The spectra were acquired in the range 1100-2500 nm with a 2 nm resolution, resulting in 701 spectral variables. Figure 3 shows the NIR spectra of the 100 wheat samples. As can be seen in Figure 3a, the spectra display baseline shifts, which were eliminated by using a first-derivative Savitzky-Golay filter with a 2nd-degree polynomial and a 21-point window.17 The resulting derivative spectra are shown in Figure 3b. Finally, the number of variables was reduced by discarding those for which the maximum signal intensity over all derivative spectra did not exceed 2% of the maximum signal intensity in the overall data set.18 The resulting spectra comprised K = 652 variables. The overall set of N_tot = 100 wheat samples was divided into a modelling set with N_mod = 70 samples and a prediction set with N_pred = 30 samples (N_tot = N_mod + N_pred) by applying the KS algorithm to the matrix X (100 × 652) of derivative spectra. Diesel data set This data set, which comprises 170 diesel samples collected from gas stations in the city of Recife (Pernambuco State, Brazil), was employed in a previous MLR-SPA-subagging study.12
The reference values for sulphur content, specific mass, and distillation temperatures (T10 and T90) were obtained according to the ASTM (American Society for Testing and Materials) D4294-90, 4615, and D86 methods, respectively. NIR spectra in the range 885-1600 nm were acquired using an FT-NIR/MIR spectrometer (Perkin Elmer GX) with an optical path length of 1.0 cm and a spectral resolution of 2 cm⁻¹. Systematic variations in the baseline were circumvented by using derivative spectra calculated with a Savitzky-Golay filter (2nd-order polynomial, 11-point window). As a result, the number of spectral variables was K = 1431. The original and derivative NIR spectra of the 170 diesel samples are presented in Figures 4a and 4b, respectively. The overall set of N_tot = 170 diesel samples was divided into a modelling set with N_mod = 85 samples and a prediction set with N_pred = 85 samples (N_tot = N_mod + N_pred) by applying the KS algorithm to the matrix X (170 × 1431) of derivative spectra. Evaluation of the MLR-SPA-subagging models The MLR-SPA-subagging models were obtained for nine different subsampling ratios, namely q = 10%, 20%, ..., 90%. It is worth noting that such percentages are expressed in terms of the N_mod modelling samples, as indicated in equation 9. In each case, the ensemble models were evaluated in terms of predictive ability and sensitivity to instrumental noise. The predictive ability was assessed by calculating the root-mean-square error in the prediction set (RMSEP) as RMSEP = sqrt[(1/N_pred) Σ_(i=1..N_pred) (y_i − ŷ_i)²] (11), where y_i and ŷ_i are the reference and the predicted values of the property under consideration for the ith prediction sample. Sensitivity to instrumental noise was taken into account as suggested elsewhere20-22 by calculating the 2-norm of the regression vector (||b_av||), which is defined as ||b_av|| = sqrt[Σ_(k=1..K) (b_k^av)²] (12), where b_k^av denotes the regression coefficient associated with variable x_k in the ensemble model. In fact, by following the demonstration provided by Pinto et al.,20 it can be shown that s_ŷav = s_noise ||b_av||, where s_noise is the standard deviation of the instrumental noise (assumed to be homoscedastic and uncorrelated across the model variables) and s_ŷav is the standard deviation of the error in the ensemble model predictions resulting from the propagation of the instrumental noise. Ideally, improvements in the ensemble model should provide reductions in both RMSEP and ||b_av||. It is worth noting that MLR-SPA-subagging has a stochastic nature due to the random subsampling operations. Therefore, given a certain subsampling ratio q and number of iterations P, the RMSEP and ||b_av|| values may vary for different realizations of the MLR-SPA-subagging procedure. For this reason, in order to assess the dispersion of the results, a Monte Carlo simulation23 was carried out by calculating the average and standard deviation of the results over several realizations. In the present work, n_MC = 25 realizations were employed in the Monte Carlo simulation. Software All calculations were carried out using the Matlab 2009b software. The subsampling operations in the MLR-SPA-subagging procedure were performed by using random permutations with the "rand" Matlab routine.
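A sketch of this evaluation loop (RMSEP from equation 11, ||b_av|| from equation 12, averaged over Monte Carlo realizations) is given below; it assumes the `subagging` helper sketched earlier and uses NumPy rather than the Matlab routines employed in the paper.

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Equation 11: root-mean-square error in the prediction set."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def evaluate(X_mod, y_mod, X_pred, y_pred_ref, q, P=30, n_mc=25):
    """Average RMSEP and ||b_av|| over n_mc Monte Carlo realizations of subagging."""
    rmseps, norms = [], []
    for seed in range(n_mc):                        # Monte Carlo realizations
        rng = np.random.default_rng(seed)
        b0, b = subagging(X_mod, y_mod, P=P, q=q, rng=rng)
        y_hat = b0 + X_pred @ b
        rmseps.append(rmsep(y_pred_ref, y_hat))
        norms.append(np.linalg.norm(b))             # equation 12: noise-sensitivity metric
    return np.mean(rmseps), np.std(rmseps), np.mean(norms), np.std(norms)
```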
Simulated data set Figure 5 presents the results obtained for analytes A and B with a fixed subsampling ratio (q = 70%) and a number of iterations P ranging from 1 to 50. The results for analyte C were similar to those obtained for analyte A and are thus omitted for brevity. As can be seen, there is a marked improvement in RMSEP and ||b_av|| for both analytes as the number of iterations P is increased. However, the gains become marginal after P = 30, which is in agreement with the findings reported elsewhere.12 The average results obtained for different subsampling ratios are shown in Figure 6. In this case, the number of iterations was set to P = 30, because the improvements for more iterations were marginal as discussed above. The error bars for RMSEP and ||b_av|| correspond to standard errors, which were calculated as the standard deviation s divided by the square root of the number of Monte Carlo realizations n_MC (i.e., s/√n_MC). In the ||b_av|| × RMSEP plots presented in Figure 6, the best results are those situated closest to the origin (small values for both ||b_av|| and RMSEP). In this sense, Figure 6a indicates that appropriate subsampling ratios for determination of analyte A range from 20% to 40%. Within this range, the changes in ||b_av|| are minor and the difference between the smallest and largest RMSEP values is not significant according to an F-test at the 95% confidence level. Other values for q (10% and 50-90%) lead to an increase in both ||b_av|| and RMSEP. The same comments regarding the choice of q can be applied to analyte B (Figure 6b). It is worth noting that the RMSEP and ||b_av|| results are generally worse for analyte B as compared to analyte A. Such a finding can be ascribed to the fact that the spectrum of analyte B is strongly overlapped by the two other analytes (A and C, as seen in Figure 2a). Wheat data set Figure 7 presents the protein and moisture results obtained for a fixed subsampling ratio (q = 70%) and a number of iterations P ranging from 1 to 50, as in Figure 5. As in the simulated case study, a marked improvement in RMSEP and ||b_av|| for both protein and moisture can be observed as the number of iterations is increased up to P = 30. In this case, it is worth noting that the results are statistically unstable for P < 10, as indicated by the large standard deviation values at the beginning of the curves. The average results obtained for different subsampling ratios with P = 30 are shown in Figure 8. As can be seen in Figure 8a, appropriate subsampling ratios for moisture determination range from 30 to 70%. Within this range, the changes in ||b_av|| are minor and the difference between the smallest and largest RMSEP values is not significant according to an F-test at the 95% confidence level. Smaller values for q (10 and 20%) lead to a noticeable increase in RMSEP, whereas larger values for q (80 and 90%) result in substantially larger ||b_av|| values. In the protein case (Figure 8b), the best RMSEP results were obtained for q ranging from 40 to 90%. By taking the ||b_av|| criterion into account, the best choice becomes q = 40%.
It is worth noting that the RMSEP values obtained for q = 10 and 20% were significantly larger than those obtained with the other subsampling ratios. In the protein case, for instance, the RMSEP for q = 10% was twice the value obtained for q ranging from 40 to 90%. Such a result for q = 10 and 20% may be ascribed to the small number of samples N_cal employed in the calibration of each individual MLR-SPA model, which limits the maximum number M of spectral variables that can be selected, as M = min(N_cal − 1, K). This handicap was particularly adverse in the case of protein, because the bulk protein content in wheat involves a complex mixture of several components. Therefore, MLR-SPA models with few variables may not be able to capture the various vibrational phenomena involved in the NIR analysis of protein content. On the other hand, the largest subsampling ratios (80 and 90%) yielded models with considerably high ||b_av|| values. In this case, since more calibration samples were employed, MLR-SPA was able to include a larger number of spectral variables in each individual model, which resulted in an increase in ||b_av||. Therefore, although suitable RMSEP values were obtained, the resulting MLR-SPA-subagging models are more sensitive to noise in the spectral measurements. This feature would compromise prediction accuracy if the models were applied to new measurements with a lower signal-to-noise ratio, as illustrated elsewhere.20 In view of the above discussions, by taking into account the results of ||b_av|| and RMSEP for both properties, the most suitable subsampling ratios would be in the range 40-60%. Diesel data set As in the two case studies above, a marked improvement in RMSEP and ||b_av|| was observed for all diesel properties as the number of iterations was increased up to P = 30. Therefore, the corresponding graphs are omitted for brevity. The average results obtained for different subsampling ratios with P = 30 are shown in Figure 9. Again, the worst results in terms of either RMSEP or ||b_av|| were always obtained for the extreme values of q (10 and 90%). Overall, the best results for these two metrics were obtained for q ranging from 30 to 50%. Conclusions This paper was concerned with the effect of the subsampling ratio on MLR-SPA-subagging models. For this purpose, investigations involving simulated data, as well as near-infrared spectrometric determination of moisture and protein in wheat and distillation temperatures (T10 and T90), specific mass and sulphur in diesel, were carried out. The results were evaluated in a multi-criterion framework by considering prediction ability (RMSEP) and sensitivity to spectral measurement noise (||b_av||). In view of these metrics, it was found that 30 subsampling iterations were sufficient to obtain convergence of the MLR-SPA-subagging procedure, which is in agreement with the findings of a previous study.12
The best results were obtained for subsampling ratios in the range 20-40% (simulated data), 40-60% (wheat) and 30-50% (diesel). Therefore, q = 40% is found to be an appropriate compromise choice. In terms of the number N_cal of calibration samples, these q percentages correspond to 20-40 samples (simulated data), 28-42 samples (wheat) and 26-43 samples (diesel). The smaller number of calibration samples required for the simulated dataset can be ascribed to its fewer sources of variability as compared to the wheat and diesel datasets, which involve actual physical/chemical phenomena. It is worth noting that the range of N_cal values indicated above for these real-life datasets is in agreement with the guidelines of the ASTM E1655-05 standard,24 which recommends the use of at least 24 calibration samples. Figure 2. (a) Pure spectra for analytes A, B, C and (b) mixture spectra. Figure 3. (a) Original and (b) derivative spectra of the wheat samples. Figure 4. (a) Original and (b) derivative spectra of the diesel samples. Figure 5. MLR-SPA-subagging results as a function of the number of iterations: RMSEP for (a) analyte A and (b) analyte B, ||b_av|| for (c) analyte A and (d) analyte B. The solid and dashed lines represent the average result and the ±1s boundaries obtained from n_MC = 25 Monte Carlo realizations. Figure 6. ||b_av|| versus RMSEP for (a) analyte A and (b) analyte B using different subsampling ratios. Standard errors are indicated by horizontal and vertical bars. Figure 7. MLR-SPA-subagging results as a function of the number of iterations: RMSEP for (a) moisture and (b) protein, ||b_av|| for (c) moisture and (d) protein. The solid and dashed lines represent the average result and the ±1s boundaries obtained from n_MC = 25 Monte Carlo realizations. Figure 8. ||b_av|| versus RMSEP for (a) moisture and (b) protein using different subsampling ratios. Standard errors are indicated by horizontal and vertical bars.
4,774
2011-11-01T00:00:00.000
[ "Physics" ]
Signaling Complexity Measured by Shannon Entropy and Its Application in Personalized Medicine Traditional approaches to cancer therapy seek common molecular targets in tumors from different patients. However, molecular profiles differ between patients, and most tumors exhibit inherent heterogeneity. Hence, imprecise targeting commonly results in side effects, reduced efficacy, and drug resistance. By contrast, personalized medicine aims to establish a molecular diagnosis specific to each patient, which is currently feasible due to the progress achieved with high-throughput technologies. In this report, we explored data from human RNA-seq and protein–protein interaction (PPI) networks using bioinformatics to investigate the relationship between tumor entropy and aggressiveness. To compare PPI subnetworks of different sizes, we calculated the Shannon entropy associated with vertex connections of differentially expressed genes comparing tumor samples with their paired control tissues. We found that the inhibition of up-regulated connectivity hubs led to a higher reduction of subnetwork entropy compared to that obtained with the inhibition of targets selected at random. Furthermore, these hubs were described to be participating in tumor processes. We also found a significant negative correlation between subnetwork entropies of tumors and the respective 5-year survival rates of the corresponding cancer types. This correlation was also observed considering patients with lung squamous cell carcinoma (LUSC) and lung adenocarcinoma (LUAD) based on the clinical data from The Cancer Genome Atlas database (TCGA). Thus, network entropy increases in parallel with tumor aggressiveness but does not correlate with PPI subnetwork size. This correlation is consistent with previous reports and allowed us to assess the number of hubs to be inhibited for therapy to be effective, in the context of precision medicine, by reference to the 100% patient survival rate 5 years after diagnosis. Large standard deviations of subnetwork entropies and variations in target numbers per patient among tumor types characterize tumor heterogeneity.
Keywords: molecular target, precision medicine, chemotherapy, RNA-seq, interactome INTRODUCTION Statistical and epidemiological data indicate that cancer is a growing global health problem. The World Health Organization (WHO) predicts an estimated 27 million new cases of cancer worldwide by 2030. Cancer initiation and progression involves genetic and epigenetic changes that reprogram complex regulatory circuits. Within this context, Hanahan and Weinberg (Hanahan and Weinberg, 2011) characterized 10 consensus processes, called cancer hallmarks, which are representative of oncogenesis. Traditionally, a protocol of chemotherapy is considered beneficial for an entire patient subpopulation with common tumor traits and is, therefore, referred to as one-size-fits-all. However, molecular diversity increasing with tumor development promotes therapy resistance (van Wieringen and van der Vaart, 2011; Banerji et al., 2015). Moreover, chemotherapy drugs may result in harmful side effects for patients due to their low selectivity that adversely affects both tumor and normal cells (Siegel et al., 2012). Thus, the process of therapeutic target identification is complex and implies the recognition of molecular differences between tumor and healthy cells, most of them based on gene regulation. Accordingly, the profile of up-regulated genes in tumor tissues is used in a personalized (individualized) medicine approach. Personalized medicine is expected to bring higher benefits to patients. The development of personalized medicine is directly related to high-throughput technologies that became available in recent years. High-throughput techniques, such as RNA sequencing, are important tools for the characterization of tumor and control cells. These techniques allow a better understanding of tumor biology and demonstrate that each tumor is unique. Many efforts are being made to identify new targets that could assist in individual treatment. An approach recently used was the identification of five specific therapeutic targets for each of seven breast cell lines (Carels et al., 2015a). This strategy combines protein-protein interactions (PPI) and RNA-seq data to infer the topology of the regulatory network for each cell line. Three concepts were considered in this approach: i) a vertex with a high expression level is more influential than a vertex with a low expression level; ii) a vertex with a high connectivity level (hub) is more influential than a vertex with a low connectivity level; and iii) a protein target must be expressed at a significantly higher level in tumor cells than in control cells to reduce harmful side effects to the patient after its inhibition. It is worth mentioning that each combination of targets that most closely satisfied these conditions was specific for its respective cell line.
This approach was validated experimentally in vitro in MDA-MB-231 (a triple-negative cell line of invasive breast cancer) (Tilli et al., 2016) and showed that the inactivation, by interference RNA, of the five top-ranked targets identified for this cell lineage resulted in a significant reduction of cell proliferation, colony formation, cell growth, cell migration, and cell invasion. Inactivation of these targets in other cell lines, such as MCF-7 (non-invasive breast cancer) and MCF-10A (control), showed little or no effect, respectively (Tilli et al., 2016). In addition, the effect of joint target inactivation was greater than the one expected from the sum of individual target inhibitions, which is in line with the buffer effect of regulatory pathway redundancy in tumor cells. Inactivating multiple hubs may be necessary to shut down alternative pathways that maintain the tumor malignancy. Other authors have also shown that the use of combined drugs is more efficient than monotherapies (Fumiã and Martins, 2013). The analysis of signaling pathways as networks has been widely used to explore the synergistic effect of targeting multiple proteins and for identifying new targets for cancer treatment. Topological measures regarding node centrality, degree, and path metrics have been used in the identification of regulation patterns and new potential targets for cancer treatment (Schramm et al., 2010;Peng et al., 2014). For instance, Azevedo and Moreira-Filho (Azevedo and Moreira-Filho, 2015) used node degree and betweenness centrality measures to explore the synergistic effects of potential target combinations to overcome chemotherapy resistance to temozolomide in glioma. The network robustness after node removal was assessed considering the following network parameters: diameter, shortest path length, size, and the clustering coefficient. Winterbach et al. (Winterbach et al., 2013) reviewed network metrics and types to discuss their applications and limitations in descriptive and predictive network analyses. The signaling network of a biological system is scale-free (Albert et al., 2000), which means that few proteins have high connectivity values and many proteins have low connectivity values. As a consequence, the inhibition of proteins with high connectivity values has a greater potential for network disruption than randomly selected proteins (Albert et al., 2000). The impact of node removal can also be evaluated by the use of Shannon entropy, which has been proposed as a network complexity measure and applied by many authors to determine a relationship between network entropy and tumor aggressiveness. Breitkreutz et al., for instance, found a negative correlation between the entropy of networks composed by genes documented in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database considering cancer types and their respective 5-year survival (Breitkreutz et al., 2012). Other studies adapted the Shannon entropy formula to combine a unique signaling network and multiple transcriptome data related to the considered phenotypes. Wieringen and Vaart (van Wieringen and van der Vaart, 2011) found that, when considering transcriptome data, the entropy level of cancer samples is higher than that of normal samples. The same behavior was found considering tumor stages, where more advanced stages were characterized by higher entropy than the earlier ones (Breitkreutz et al., 2012;Winterbach et al., 2013;Banerji et al., 2015). 
The Shannon entropy is calculated according to formula 1 below (Shannon, 1948) and allows the quantification of the information content associated with the likelihood that a given vertex may have a given connectivity value in the considered network. The Shannon entropy (H) is given by the formula H = −Σ_k p(k) log2 p(k), where p(k) is the probability that a vertex with a connectivity value (k) occurs in the analyzed network. Since entropy is an extensive thermodynamic function of states, it should not be normalized for network size. In this report, we considered the concepts of connectivity hubs, up-regulated genes, and Shannon entropy in order to assess tumor complexity and to infer a personalized medicine approach. We used the transcriptome data from tumors and their paired non-tumoral tissue, considered as control samples, to determine their up-regulated genes, construct their corresponding subnetworks, and calculate their respective entropy. We performed this exercise individually for the data collected from 475 patients distributed among nine cancer types. The results confirmed the existence of a negative correlation between the entropy of a tumor's PPI subnetwork and the corresponding survival rate using data from bench experiments (The Cancer Genome Atlas, TCGA). We also propose a method to infer the suitable number of targets for inhibition according to the 100% patient survival 5 years after treatment. This method concerns the number of connectivity hubs that should be inactivated in a tumor to lower its subnetwork entropy to a level that maximizes patient survival. To our knowledge, this is the first report aiming at the application of Shannon entropy, transcriptome data, and individual signaling subnetwork mining in the design of a personalized approach for cancer therapy. Gene Expression Data The gene expression data were obtained as RNA-seq files in their version 2 (Illumina Hi-Seq), available for tissues affected by cancer or not (paired tissues), from TCGA (https://cancergenome.nih.gov/), accessed in February 2016. Version 2 gives gene expression values for 20,532 genes referred to by GeneSymbol, calculated by RNA-seq through expectation maximization (RSEM) (Li and Dewey, 2011) and normalized according to the upper quartile method. The 9,190 genes for which the equivalence between GeneSymbols and UniProtKB accessions could be obtained went through further analysis. This equivalence list is available in Supplementary Table 1. The data selection followed two criteria: i) for each cancer type, a minimum of 30 patients with paired samples (control and tumor samples from the same patient) was required to satisfy statistical significance; and ii) the tumor sample had to be from a solid tumor. The data used in this work included 475 paired samples, shown in Table 1. The cancer molecular subtypes could be determined based on the following references: (Guo et al., 2016;The Cancer Genome Atlas Research Network, 2016). However, the number of paired samples in each subtype did not reach the threshold of statistical significance (n = 30), and they were therefore not considered in this paper. Identification of Hubs in Up-regulated Genes of Tumors To identify genes that were significantly differentially expressed in the tumor samples of patients, we subtracted gene expression values of control samples from their respective tumor paired sample. The resulting values were called differential gene expression. 
Negative differential gene expression values indicated higher gene expressions in control samples, while positive differential gene expression values indicated higher gene expressions in tumor samples. We analyzed the frequency distribution of differential gene expressions of 9,190 genes for each patient. The relative frequencies obtained, represented by y i , were transformed using the relationship y' i = log10 (y i + 1) and approximated to a Gaussian distribution by best fitting in GraphPad Prism software with a 95% level (Carels et al., 2015a;Carels et al., 2015b). We considered the area under the Gaussian curve to determine the one-tail threshold values that would limit p-values ≤ 0.05. The up-regulated genes were those with expression values above the one-tail threshold. This analysis was performed for each patient individually in R. In a subsequent step, the PPI subnetworks were inferred for the proteins identified as products of up-regulated genes. We only considered up-regulated genes since they are those representing the tumor phenotype and the inhibition of the proteins they encode is expected to minimize potential toxic side effects to patients. The subnetworks were obtained by comparing these gene lists with the human interactome. The human interactome was obtained from the intactmicluster.txt file (version updated December 2017) accessed on January 11, 2018, at ftp://ftp.ebi.ac.uk/pub/databases/intact/ current/psimitab/intact-micluster.txt. We excluded incomplete and non-human interactions from this file, and the resulting file presented 151,631 interactions among 15,526 human proteins with UniProtKB accessions. These data can be retrieved from Supplementary Table 2. We used the PPI subnetworks of up-regulated genes from each patient to identify the node degree of each protein through automated counting of their edges. These values were used to calculate the Shannon entropy of each PPI subnetwork as explained in Section "Shannon Entropy" below. In parallel, we selected the 10 proteins with the highest degree (hubs) for each patient (top-10 proteins), and we validated the five most frequent hubs among them for each tumor type regarding their biological relevance as targets through literature searches. Finally, we characterized the up-regulated genes from the MDA-MB-231 breast cancer cell line as described in Ref. (Carels et al., 2015a). Those genes were used in the PPI subnetwork construction, which was performed with the interactome and the methodology described above. The resulting subnetwork was used for Shannon entropy analysis as a reference to the extension of cell line inferences to tumor tissues as presented in this report. Shannon Entropy The Shannon entropy was calculated with formula 1, where p(k) is the probability of occurrence of a vertex with a rank order k (k edges) in the subnetwork considered. The subnetworks were generated automatically from gene lists found to be up-regulated in each patient and the cell line MBA-MD-231 as described in the previous section. All operations were performed using Perl codes that can be obtained upon request. The Shannon entropy was also used to assess the relevance of treatment directed against connection hubs by comparing the decrease in subnetwork entropy induced by hub removal with that obtained by random target selection. We randomly removed five nodes from the network and calculated the resulting entropy. This process was repeated 1,000 times for each patient to build an empirical distribution of entropies. 
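The degree-based entropy of formula 1 and the hub-versus-random removal comparison described above can be illustrated with a short script. This is a minimal sketch, not the authors' Perl implementation: it assumes the networkx and numpy packages, the Barabási–Albert toy graph merely stands in for a patient's subnetwork of up-regulated genes, and only the 5-node and 1,000-repetition settings are taken from the text.

```python
# Sketch of formula 1 (degree-distribution Shannon entropy, in bits) and of the
# comparison between removing the most connected hubs and removing the same
# number of randomly chosen nodes. The toy graph below is a hypothetical
# stand-in for one patient's subnetwork of up-regulated genes.
import random
from collections import Counter

import networkx as nx
import numpy as np


def shannon_entropy(graph: nx.Graph) -> float:
    """H = -sum_k p(k) * log2 p(k), with p(k) the fraction of vertices of degree k."""
    degrees = [d for _, d in graph.degree()]
    counts = Counter(degrees)
    n = len(degrees)
    probs = np.array([c / n for c in counts.values()])
    return float(-(probs * np.log2(probs)).sum())


def entropy_after_removal(graph: nx.Graph, nodes) -> float:
    reduced = graph.copy()
    reduced.remove_nodes_from(nodes)
    return shannon_entropy(reduced)


def hub_vs_random(graph: nx.Graph, n_targets: int = 5, n_draws: int = 1000, seed: int = 0):
    """Entropy after removing the top hubs versus an empirical distribution of
    entropies after removing the same number of randomly selected nodes."""
    rng = random.Random(seed)
    hubs = [node for node, _ in sorted(graph.degree(), key=lambda kv: kv[1], reverse=True)[:n_targets]]
    h_hubs = entropy_after_removal(graph, hubs)
    h_random = [entropy_after_removal(graph, rng.sample(list(graph.nodes()), n_targets))
                for _ in range(n_draws)]
    return h_hubs, float(np.mean(h_random)), float(np.std(h_random))


if __name__ == "__main__":
    upregulated_subnetwork = nx.barabasi_albert_graph(300, 2, seed=1)  # toy scale-free subnetwork
    h_hubs, h_rand_mean, h_rand_sd = hub_vs_random(upregulated_subnetwork)
    print(f"entropy after hub removal:    {h_hubs:.3f}")
    print(f"entropy after random removal: {h_rand_mean:.3f} +/- {h_rand_sd:.3f}")
```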
Next, we compared the entropy found after hub removal with the distribution of entropies found after removal of nodes selected at random (see Supplementary Table 3). Overall Survival The 5-year survival rates of the tumor tissues were inferred based on the overall survival (OS) data available from The Cancer Genome Atlas Clinical Cata Resource (TCGA-CDR) (Liu et al., 2018), which contains curated clinical and survival data from TCGA patients whose purpose was to eliminate incomplete survival (follow-up) information. Table S1 of Liu et al. (Liu et al., 2018) has two columns, "OS" and "OS.time, " that were used in GraphPad Prism software for survival curve analysis, indicating death/event as 1 and censored data as 0. This analysis resulted in survival rates corresponding to days to "death/last follow-up" for each cancer type (Supplementary Table 4). The survival rate found over 5 years (in days) was used to represent each cancer type. Finally, to determine each patient survival rate, we retrieved the number of days to "death/last follow-up" from each patient (Supplementary Table 5) and searched for its respective survival rate in Supplementary Table 4. These data were used to calculate the correlation between survival rate and entropy considering patients with the same tumor type. Average Target Number Per Tumor Tissue We analyzed the subnetwork entropies of up-regulated genes associated with each cancer type and their respective 5-year survival rate. We performed a Kruskal-Wallis test and a Wilcoxon signed-rank test to determine if all samples had the same entropy average and, if not, which pairs had significantly different averages. We also analyzed the correlation coefficient between the averages of entropies per tumor type and their respective 5-year survival rate and the fitted linear regression (see Supplementary Table 6). The correlation obtained between subnetwork entropies from up-regulated genes and survival rates allowed the inference of the approximate number of targets to be inactivated for each patient. The 20 proteins from the up-regulated subnetwork with the largest connection counts were called top-20 targets. In order to mimic the effect of inhibiting top-1 to top-20 targets, we excluded each target from the patient subnetwork of up-regulated genes. The Shannon entropy was calculated for the resulting subnetworks, and the suitable number of hubs for inactivation was found when the entropy of top-n subnetworks was equal to or less than the entropy that would correspond to the 100% survival rate (see Supplementary Table 7). In addition, we performed the same experiments considering the suitable number of hubs for inactivation and the entropy found for patients in a given tumor type. In this case, we selected tumor types with at least three patients with "death/last follow-up" in each survival rate interval (100%-81%, 80%-61%, 60%-41%, 40%-21%) to calculate the entropy and survival rate averages. Only LUAD and LUSC satisfied these requirements (Supplementary Table 5). Identification of Up-regulated Hubs in Patients We identified 273 proteins among all hub combinations of top-10 proteins for each patient. From those proteins, 112 (41.0%) were patient specific, 143 (52.4%) were specific for one cancer type, and only 16 (5.9%) were found in combinations over every cancer type. Furthermore, only four patients shared the same top-10 combination, two from BRCA and two from PRAD. 
This means that 99% of patients had a unique combination of top-10 hubs, even if some hubs could be found conserved across a significant part of the patient population. This property can be related to the variation in the number of connections for each hub according to the patients' subnetwork of up-regulated genes and to the variation of hubs that are up-regulated from one tumor to the other. The hub combination for each patient and their respective connection number in each subnetwork are given in Supplementary Table 8. The five most frequent hubs among patients from each cancer type are listed in Table 2 in association with their cancer hallmarks and the respective literature reference. The heat shock protein AB1 (HSP90AB1) was identified in 65% of all patients and is the most common hub identified among all 475 patients. High expression of HSP90AB1 has been associated with aggressive phenotypes in HER2-negative breast cancer (Cheng et al., 2012), and it is also identified as a hub target for MDA-MB-231, a triple-negative breast cell line. Its inhibition in combination with four other hubs decreased cell growth, proliferation, migration, and invasion in vitro (Tilli et al., 2016). Table 2 also shows two other heat shock proteins: HSPA5 and HSPB1. Since heat shock proteins function as chaperones, they are essential for cell maintenance and survival. Their relationship with tumor development is associated with their ability to stabilize mutant proteins resulting from increased genomic instability, proteins that would otherwise be degraded without the chaperones' assistance (Haase and Fitze, 2016). Specifically, HSPA5 has been identified in 38.7% of patients belonging to all nine cancer types, while HSPB1 has been identified in 27.2% of patients distributed among eight cancer types. The latter protein has been described as responsible for cancer escaping cell death and has been proposed as a specific biomarker for monitoring ovarian cancer patients' response to chemotherapy (Sun et al., 2015;Stope et al., 2016). Fibronectin 1 (FN1) is found in 64% of patient combinations belonging to all nine cancer types analyzed. FN1 has been widely described in tumor progression and is a member of multiple hallmarks, such as cell adhesion, invasion, migration, growth, and cell death escape (Soikkeli et al., 2010;Wang et al., 2017). Its high expression level has been associated with increased aggressiveness in thyroid cancer (Sponziello et al., 2016), and its expression in renal clear cell carcinoma is associated with a higher disease-related mortality rate (Steffens et al., 2012). Furthermore, its inactivation through microRNA has inhibited papillary thyroid carcinoma progression (Ye et al., 2017). The tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activation protein zeta (YWHAZ) was identified in 55.9% of all patients' combinations from all nine cancer types. This protein is present at high levels in different cancer cells and is associated with tumor cell proliferation, cell invasion and migration, and drug resistance (Nishimura et al., 2013;Hong et al., 2018;Deng et al., 2019). This protein target has prognostic potential, and its overexpression is associated with short OS time in non-squamous-cell lung carcinoma (Deng et al., 2019); its knockdown has suppressed tumorigenesis in ovarian cancer cells (Hong et al., 2018). 
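As an illustration of how the hub frequencies quoted above (e.g., HSP90AB1 in about 65% of patients) can be tallied from the per-patient top-10 hub lists, a minimal sketch follows; the `top10_by_patient` mapping and its entries are hypothetical placeholders, not data taken from Supplementary Table 8.

```python
# Tally how often each protein appears among patients' top-10 hubs, overall and
# per cancer type, mirroring the construction of Table 2. Input data are
# illustrative placeholders only.
from collections import Counter, defaultdict

top10_by_patient = {
    ("BRCA", "patient_01"): ["HSP90AB1", "FN1", "YWHAZ", "HSPA5", "..."],
    ("LUAD", "patient_02"): ["HSP90AB1", "HSPB1", "FN1", "..."],
    # ... one entry per patient (475 in the study)
}

overall = Counter()
per_type = defaultdict(Counter)
for (cancer_type, _), hubs in top10_by_patient.items():
    overall.update(set(hubs))
    per_type[cancer_type].update(set(hubs))

n_patients = len(top10_by_patient)
for protein, count in overall.most_common(5):
    print(f"{protein}: {100 * count / n_patients:.1f}% of patients")
```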
It is interesting to note here that we found some common targets between tumors and cell lines, which is expected since the origin of malignant cell lines is a tumor sample. However, the treatment of tumors is more complex because it is heterogeneous and formed by several cell lines. For this reason, the protein targets identified for each patient are encoded by up-regulated genes identified by comparison to their paired control since the RNA-seq of a tumor cannot differentiate the various cell lines that compose this tumor. In order to assess the effect of inhibiting selected hubs on tumor subnetwork entropy, we compared it to the effect that would be obtained inhibiting randomly selected targets in these subnetworks. Indeed, the inactivation of five hubs had a significantly higher effect in decreasing entropy than the inactivation of five targets selected at random. One example for each cancer type is given in Figure 1, and all results are given in more detail in Supplementary Table 3. These results indicate that the entropy increase in cancer is driven mainly by hubs , which has the corollary of an increase in the number of alternative pathways in more aggressive tumors. The effect of disrupting tumor subnetworks by hub inactivation is similar to the one obtained with the subnetwork of MDA-MB-231 (Figure 1). Indeed, as discussed in the introduction, the simultaneous inactivation of the top-5 hubs identified for this cell line resulted in significant reduction of tumor activity without any side effects to a non-tumoral cell line used as a control (Tilli et al., 2016). This result suggests that our strategy should be as successful in heterogeneous tumor samples as it was in cell lines. In this sense, this report is a generalization to tumors of our entropy-based strategy formally established with cell lines. Since such a strategy is not obvious a priori, it may be considered as a significant progress for translational medicine because it enables to us infer rational strategies for cancer therapies by personalized approaches. Tumor Entropy and Its Correlation With Overall Patient Survival Rate We considered the subnetworks of up-regulated genes to calculate the Shannon entropy relative to the tumor sample of each patient and used the averages to represent each cancer type. The number of genes up-regulated in tumors varied from patient to patient, but the average number for each cancer type was between 250 and 450. A supplementary table shows the number of up-regulated genes and the entropy calculated for each patient (see Supplementary Table 6). The entropy found for the subnetworks of up-regulated genes was used to analyze tumor complexity and its relationship with OS rate. The OS data available in ref. (Liu et al., 2018), which contains curated clinical and survival data from TCGA patients, were used to infer the 5-year survival rate for each cancer type (for more details, see "Materials and Methods"). The non-parametric Kruskal-Wallis test was performed to assess whether the entropy averages were the same for all cancer types. We found a chi-squared value of 94.9, degrees of freedom = 8, and a p-value < 2.2e−16, which refutes the null hypothesis of average entropy equality and indicates that at least one cancer type has an average significantly different from another one (Figure 2A). 
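A minimal sketch of the statistical comparisons reported in this section, assuming pandas and SciPy, is given below; the `entropies` data frame (columns cancer_type, entropy, survival_rate) is a hypothetical stand-in for the per-patient values of the Supplementary Tables, and the rank-sum call is used here to approximate the pairwise Wilcoxon comparison.

```python
# Kruskal-Wallis test across cancer types, pairwise Wilcoxon rank-sum
# comparisons, and the linear regression of mean entropy on 5-year survival.
from itertools import combinations

import pandas as pd
from scipy import stats


def entropy_statistics(entropies: pd.DataFrame):
    # Kruskal-Wallis: are the entropy averages the same across cancer types?
    groups = [g["entropy"].values for _, g in entropies.groupby("cancer_type")]
    _, kw_p = stats.kruskal(*groups)

    # Pairwise Wilcoxon rank-sum comparisons between cancer types.
    pairwise = {}
    for (name_a, grp_a), (name_b, grp_b) in combinations(entropies.groupby("cancer_type"), 2):
        _, p = stats.ranksums(grp_a["entropy"], grp_b["entropy"])
        pairwise[(name_a, name_b)] = p

    # Regression of mean entropy per cancer type on its 5-year survival rate.
    per_type = entropies.groupby("cancer_type").agg(
        mean_entropy=("entropy", "mean"), survival=("survival_rate", "first"))
    fit = stats.linregress(per_type["survival"], per_type["mean_entropy"])
    return kw_p, pairwise, fit.slope, fit.rvalue, fit.pvalue
```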
The Wilcoxon pairwise comparison test confirmed that cancer types with large differences in survival rates have significant average entropy differences (Figure 2B). For instance, Figure 2B shows that thyroid cancer (THCA) and prostate cancer (PRAD), the cancer types with the highest survival rates, have significantly different entropy averages in comparison to almost all other cancer types, with the exception of LIHC. In addition, LUSC also has significantly different entropy averages when compared to BRCA and LIHC (Figure 2B). The p-values found for each pairwise analysis can be found in Supplementary Table 9. The relationship between tumor entropy and 5-year survival rates was characterized by a negative correlation (r = −0.68, p-value = 0.043), and the fitted regression line (Y = −0.004X + 2.507) had a slope within a 95% confidence interval from −0.00800 to −0.00017 (Figure 2C). These results indicate that the regression line was significantly different from the horizontal (slope = 0.00000). These results were based on patient data and are in agreement with those proposed elsewhere (Teschendorff and Severini, 2010;van Wieringen and van der Vaart, 2011;Breitkreutz et al., 2012;West et al., 2012;Banerji et al., 2015), indicating that the subnetwork entropy of tumor tissues increases together with tumor aggressiveness. The linear regression indicates a common trend between malignancy and entropy for each tumor. However, the standard deviations associated with each tissue's averages indicate a variation in entropy between patients with the same cancer type, which means that, despite sharing some common molecular features, each patient has his/her own tumor complexity and aggressiveness. The relationship between tumor entropy and survival rates also holds when considering patients from the same tumor type. For LUAD, we found a significant negative correlation of −0.98 between entropy average and survival rate, with a significant p-value = 0.02 and a slope in the 95% confidence interval from −0.005 to −0.001. For LUSC, we found a negative correlation of −0.70, with a nonsignificant p-value = 0.29 and a slope in the 95% confidence interval from −0.015 to 0.007. The lack of statistical significance is most likely due to the small sample size (n = 4). All these data can be seen in more detail in Supplementary Tables 5 and 6. The limitation in the tissue diversity of our report is justified by statistical constraints. However, this tissue diversity covers the whole range of tumor aggressiveness. The small slopes found in the regression analyses (across patients with tumors in multiple tissues and within the set of patients with tumors in the same tissue, such as LUAD and LUSC) can be explained by the use of Shannon entropy as a complexity measure, whose values vary in the first decimal, between 2.0 and 2.5, for patients between 35% and 100% survival, on average. This approach requires a significant difference of survival rates between patients with different tumor types or patients within one tumor type to reveal a significant difference in their respective entropies. For this reason, it was not possible to analyze patients within PRAD and THCA. Both included patients with high survival rates, between 100.0% and 93.0% for PRAD, and from 99.7% to 89.6% for THCA. 
Furthermore, the other tissues (kidney renal clear cell carcinoma -KIRC, kidney renal papillary cell carcinoma -KIRP and stomach adenocarcinoma -STAD) had only three survival rate intervals available, which is not enough to generate a statistically significant regression line. Moreover, correlations between subnetwork size and entropy (r = −0.12, p-value = 0.74) or between subnetwork size and survival rates (r = 0.48, p-value = 0.18) were not large enough and not even statistically significant to suggest any linear relationship. Many efforts were made for stratifying patients based on their molecular subtypes that would suggest better patient treatment ( Research Network, 2015;Guo et al., 2016;The Cancer Genome Atlas Research Network, 2016). Unfortunately, due to the small number of patients (n~3) with paired samples characterized in each subtype and the few subtypes described in each tumor type (~2), we could not explore the potential differences in entropy and survival rates at the subtype level. Identification of Hubs Whose Inhibition Could Benefit Patients Our results suggest a prognostic potential of the entropy measure: the higher the entropy, the worse the prognosis. This relationship was previously described for the patient outcome according to breast cancer major subtypes and lung cancer . Therefore, reducing the subnetwork entropy by inhibiting the most connected hubs encoded by up-regulated genes should improve the patient's prognosis. As indicated above, each tumor has a specific entropy value around the mean of its corresponding tissue. For this reason, the question is how much one should decrease the entropy value on a case-to-case basis in order to improve the patient's prognosis and treatment benefit. In this context, we hypothesized that decreasing the entropy value to the one corresponding to 100% survival rate, we would increase the patient's prognosis by a signaling network restructuring specific to tumor tissues, which is in line with the in vitro validation performed earlier (Tilli et al., 2016). When we compared the tumor entropies found for each patient to the entropy corresponding to 100% survival, we found out that 318 of them would need at least one hub inhibition. The remaining patients showed tumor entropies lower than the one expected for 100% survival. This unexpected behavior could be explained by two biological facts: signaling heterogeneity, which may occur in early tumor stages when the difference in cell differentiation between normal and tumor tissues is still very small, and sampling heterogeneity, when surrounding tumor tissues, considered as control samples in this work, are invaded by cancer stem cells (Lau et al., 2017). The entropy of cancer stem cells has been previously described as higher than that of differentiated normal or tumor cells due to their larger signaling pathway heterogeneity (Banerji et al., 2013), which may explain why the resulting entropy for these patients is lower than the one expected for 100% survival rate. The Kruskal-Wallis test and a Wilcoxon pairwise comparison considering the number of targets indicated for inhibition show a significant difference between tissues averages ( Figure 3A and Supplementary Table 10). The relationship between the number of targets and 5-year survival rates was characterized by a negative correlation (r = −0.77, p-value < 0.011) and had a slope = −0.067 within a 95% confidence interval from −0.114 to −0.020 ( Figure 3B). 
This result suggests that, on average, cancer types with lower survival rates need a higher number of hubs for inhibition (around eight) than cancer types with higher survival rates (around four hubs). Yet, the selection of as many targets as necessary to decrease the tumor entropy level to that of 100% patient survival showed that patients from all nine cancer types needed the inhibition of target combinations varying from 1 to 20 hubs. The effect on entropy and the number of targets to be inhibited for each patient is given in Supplementary Table 7. DISCUSSION Many efforts were made to better understand cancer through Shannon entropy approaches. We confirmed here the existence of a significant negative correlation between the entropy of up-regulated gene subnetworks and patients' survival rate in real cases, which suggests a larger ability of aggressive tumors to switch among alternative pathways and to overcome environmental challenges such as drug treatments. The Relevance of Targeting Connection Hubs The decrease of entropy values after hub inactivation was significantly higher than the decrease of entropy after inactivation of targets selected at random, which confirms the benefit of a targeted attack on scale-free systems as shown by Albert et al. (Albert et al., 2000). The success of this approach has been validated in cell lines in vitro (Tilli et al., 2016). The relevance of targeting hubs allows us to investigate the quality (nature) and the quantity (number) of hubs that should be taken into consideration to rationally design a drug cocktail for a patient with a given gene expression profile. For this purpose, we investigated the number of connectivity hubs indicated for inhibition according to tumor aggressiveness using Shannon entropy. The benefit of characterizing a topology using entropy is based on the fact that this measure is invariant with respect to network size since the entropy calculation involves probabilities rather than absolute values. This concept is important in the context of this study because the subnetworks investigated were varying in size even if the criteria for their construction were kept identical from one patient to another. Shannon Entropy and Tumor Aggressiveness Our results are in accordance with previously published works that also describe a negative correlation between tumor entropy and aggressiveness. For instance, Breitkreutz et al. considered entropy as a tumor complexity measure and found an r 2 = 0.7 between the Shannon entropy of tumors and the 5-year survival rate after treatment, which means that 70% of the variance among the 14 samples considered was well represented by linear regression (Breitkreutz et al., 2012). Despite these outstanding results, the reference to KEGG regulatory pathways (http://www.genome. jp/kegg/) can be argued as non-representative of the real cases since these pathways include only a few vertices, ranging from 25 to 50, and are constructed as a consensus based on the data from many patients. Obviously, tumors are more complex and may present variations of the KEGG patterns of regulatory pathway dysregulation between patients with the same cancer type. In addition, entropy was used as a measure of system randomness (Teschendorff and Severini, 2010), system disorder (West et al., 2012), and heterogeneity in order to quantify the network signaling rather than network topology . All these works found higher entropies for cancer samples than control samples. 
Larger entropies were also associated with metastatic compared to non-metastatic tumors and also with advanced stages compared to early tumor stages (van Wieringen and van der Vaart, 2011). All these methodologies considered a unique signaling network with different interaction weights assigned according to the phenotypic expression data. In a similar way, we took expression data into consideration in our analysis but through the determination of expression thresholds above which genes would have to be considered as up-regulated. This binary approach of incorporating expression data into PPI directly affects the features of subnetworks regarding their topology. Identification of Molecular Targets We found that despite using the same protocol to select up-regulated genes and the same interactome, all subnetworks had different sizes and hub profiling. The variation of Shannon entropy among patients of the same tumor type indicates how much the individual profiling of molecular targets is recommended for rational therapeutic design. As shown here, the transcriptome and PPI data currently available allow the development of personalized medicine, which offers the possibility of a rational therapeutic approach based on individual molecular data, i.e., maximizing the patient benefit of therapy by designing the best combination of targets to be inhibited through personalized cocktails of drugs and/or biopharmaceuticals. Furthermore, the inhibition of hubs preferentially expressed in tumor samples would minimize overlapping toxicity effects. The reference to the entropy corresponding to 100% patient survival enabled us to explore putative individual therapeutics as well. Despite the general trend of higher entropy associated with aggressive tumors, each patient would need the inhibition of a different hub combination to reach the entropy level corresponding to 100% patient survival. These ideas were first proposed through bioinformatic inference with cell line expression data (Carels et al., 2015a) and then validated in vitro (Tilli et al., 2016). Here, we extended these concepts to their application to patient data as an initial step toward translational medicine. Our approach was shown to be robust once the hubs identified were validated in the literature as key players in the processes known to be key for tumor development. Future Direction The large target number between 10 and 20 necessary in some patients to reduce the entropy of the tumor tissue to the level corresponding to the 100% patient survival probability may be a consequence of our approach. The effects of hub inactivation were analyzed considering a static network, in which we had to implement all interventions in order to reach the desired state. However, the signaling network is dynamic due to regulatory interactions between proteins and genes, and it is possible that the inhibition of a smaller target set may trigger a cascade effect resulting in irreversible tumor cell death as suggested by Tilli et al. (Tilli et al., 2016). In this context, each gene expression profile of the signaling network may be interpreted as a state. The set of all states represents a continuous and multidimensional state space defined by the number of genes analyzed and their expression. Some states may define a cell phenotype or correspond to cell differentiation. Those states can be called attractors and are characterized as an equilibrium point in phase space. 
However, due to the stochastic nature of gene regulation, other states might result in the same phenotype as the attractor, and for this reason, they form the basin of attraction (Huang et al., 2009). Once we consider cancer as resulting from attractors in the phase space of cellular dynamics, therapy should lead the signaling network toward a new basin of attraction of active cell death (Cornelius et al., 2013;Huang et al., 2009). The analysis of the signaling network state space should allow the identification of the basin related to the desired attractor, optimizing the number of targets indicated for treatment. Moreover, this strategy would also allow the identification of an order of priority for therapeutic interventions required to reach the basin of attraction related to the desired state (Cornelius et al., 2013). Also, we assumed here that any target can be inhibited in different ways, using drugs, aptamers, interference RNA, or other methodologies that will have different consequences for patients depending on their off-target activity. This exercise also assumes that the inhibition of any target combination is possible, without considering interactions between drugs or any other side effect of a pharmacokinetic nature (differences in drug metabolism among human haplotypes). Therefore, our approach still requires a case-by-case examination according to additional layers of complexity associated with the dynamics of signaling networks and methods for target inactivation. CONCLUSION Topological measures of PPI networks bring useful information for personalized treatment of cancer. Among the measures of node and path metrics, we focused this study on the application of Shannon entropy to subnetworks of tumors' up-regulated genes. The results of our analysis in this paper show the following: (i) As proposed by Albert et al. (Albert et al., 2000), our experiment showed that removing the most connected targets is more effective than removing targets at random, whatever their connectivity degree. (ii) The gross approximation by Breitkreutz et al. (Breitkreutz et al., 2012) is confirmed using interactome and RNA-seq data of real tumors, but the slope of the regression line obtained is lower than that published by these authors. This shows the need for large changes in network complexity to observe a difference in survival rates. (iii) As expected from the intra-tumor heterogeneity in cell line composition, we found a large standard deviation of entropy by tissue. The highly personalized molecular profile of tumors justifies an individual diagnostic and therapeutic (theranostics) approach in order to reduce toxic side effects of treatment to patients. (iv) When considering 100% survival as the goal of the treatment, the negative correlation indicates that aggressive tumors will need a larger set of therapeutic agents (drugs and/or biopharmaceuticals) than benign ones, on average, to reduce the entropy of the subnetwork of up-regulated genes and achieve a higher life expectancy. DATA AVAILABILITY STATEMENT All data sets generated and analyzed for this study are included in the manuscript/Supplementary Files. AUTHOR CONTRIBUTIONS AC participated in the conception, design, analysis, and interpretation of the work. JT participated in the conception, design, and drafting of the work. FS participated in the design of the work and substantially revised it. 
NC participated in the conception, design, analysis, and interpretation of the work and substantially revised it. All authors participated in the report writing and approved the final version. FUNDING This study was supported by a fellowship from Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ) to AC. JT acknowledges research support from Natural Sciences and Engineering Research Council (NSERC) (Canada), the Alberta Cancer Foundation, and the Allard Foundation.
9,646
2019-10-21T00:00:00.000
[ "Medicine", "Computer Science", "Biology" ]
Multi-criteria collaborative filtering recommender by fusing deep neural network and matrix factorization Recommender systems have been an efficient strategy to deal with information overload by producing personalized predictions. Recommendation systems based on deep learning have accomplished magnificent results, but most of these systems are traditional recommender systems that use a single rating. In this work, we introduce a multi-criteria collaborative filtering recommender by combining deep neural network and matrix factorization. Our model consists of two parts: the first part uses a fused model of deep neural network and matrix factorization to predict the criteria ratings and the second one employs a deep neural network to predict the overall rating. The experimental results on two datasets, including a real-world dataset, show that the proposed model outperformed several state-of-the-art methods across different datasets and performance evaluation metrics. only on interactions between users and items such as ratings [3]. Matrix Factorization (MF), is the most popular CF techniques, which maps users and items into a joint latent space, using a vector of latent features to represent a user or an item [4]. Then the interaction of a user on an item is obtained from the inner product of their latent vectors. Although MF is effective, the simple choice of the interaction function is insufficient to model the complex relation between the users and items. Deep Learning (DL) has achieved immense results in many research fields, such as speech recognition, natural language processing, computer vision, and recently in recommender systems, where Deep Neural Networks (DNNs) have proved their capability of modeling nonlinear users and items relationships and approximating any continuous function [5]. Therefore, it is convenient to fuse the DNN with MF to formulate a more general model that makes use of both the nonlinearity of DNN and the linearity of MF to enhance recommendation accuracy. The objective of this paper is just to propose a multi-criteria collaborative filtering recommender by fusing DNN and MF, to improve collaborative filtering performance. Our model consists of two parts: in the first part, we get the users and items' features and feed them as an input to a fused model of a DNN and MF to predict the criteria ratings, and we use a deep neural network to predict the overall rating in the second part. By doing experiments on two datasets including a real-world dataset, it can be observed that our proposed model achieves significant improvement compared to the other state-of-the-art methods. The main contributions of this work are as follows: 1. We present a multi-criteria collaborative filtering recommender by fusing DNN and MF, to combine the non-linearity of DNN and linearity of MF in a multi-criteria RS for modeling user-item latent structures. 2. We do comprehensive experiments on two datasets including a real-world dataset to show the efficiency of our model and the importance of using multi-criteria and deep learning in collaborative filtering. The rest of this paper is organized as the following: In Sect. 2, we survey the related work. We will give a detailed overview of the system in Sect. 3. Section 4 presents the experimental evaluations and discussion and in Sect. 5, we provide a brief conclusion and potential future work. Related work Multi-criteria recommendation techniques can be divided into two general classes: memory-based and model-based methods. 
In memory-based methods, the similarity can be computed in two ways: the first approach calculates the similarity on each criteria rating separately using the traditional way and then aggregates the values into a single similarity using an aggregation method such as average [6], worst-case [6], and weighted sum of individual similarities [7]. The second approach uses the multidimensional distance metrics (Euclidean, Manhattan, and Chebyshev distance metrics) to calculate the distance between multi-criteria ratings [6]. Model-based methods use the user-item ratings to learn a predictive model and later use this model to predict unknown ratings, like Probabilistic Modeling [8,9], Multilinear Singular Value Decomposition (MSVD) [10], Support Vector Regression (SVR) [11], and aggregation-function-based [6], this approach assumes that there is a relation between the item's overall rating and the multicriteria ratings and it is not independent of other ratings. Many efforts have been made to improve MF, Koren [12] merged MF and neighborhood models. Wang et al. [13] combined MF with topic models of item content. Rendle [14] introduced Factorization Machines that combine Support Vector Machines with factorization models. He et al. [15] proposed Neural Matrix Factorization (NeuMF) model that changed the linearity nature of MF by combining it with Multi-Layer Perceptron (MLP). There are very few researches on applying deep learning to Collaborative Filtering model. Salakhutdinov et al. [16] proposed restricted boltzmann machines model to learn the item ratings correlation, thereafter Georgiev et al. [17] expanded the former work by adding both user-user and item-item correlations. Ouyang et al. [18] used autoencoder in autoencoder-based collaborative filtering to model users ratings on items. He et al. [15] introduced neural collaborative filtering model that uses MLP to learn the interaction function. Nassar et al. [19] presented deep multi-criteria collaborative filtering (DMCCF) model which is the only attempt in applying deep learning and multi-criteria to collaborative filtering. The model follows the aggregation-function-based approach, where they used a deep neural network to predict the criteria ratings and another DNN to learn the relationship between the criteria ratings and the overall rating in order to predict the overall rating. Method The proposed model is based on the model that Nassar et al. [19] presented. It contains three steps: a. Predict criteria ratings r 1 , r 2 , . . . , r k using a DNN. b. Learn aggregation function f , which represents the relationship between the criteria ratings and the overall rating using a DNN. c. Predict overall ratings using the predicted criteria ratings and the aggregation function f . In our model, we used a fused model of DNN and MF to predict the criteria ratings r 1 , r 2 , . . . , r k in the first step, while in the second step, we kept using a DNN to learn aggregation function f , as illustrated in Fig. 1. Criteria ratings model This model is used to predict the criteria ratings for a user on an item. In this model, we will fuse MF and DNN as He et al. [15] proposed in their NeuMF framework. MF that applies a linear kernel to model the latent feature interactions, and DNN that uses a nonlinear kernel to learn the interaction function from data. MF We can use the item ID or user ID as a feature since it is unique, but the ID is a categorical feature. 
Therefore, we converted the IDs into embedding vectors initialized with random values that are adjusted to minimize the loss function during model training. In the input layer, the input vector x is given by x = v_u ⊙ v_i, where v_u and v_i are the user and item embedding vectors and ⊙ is the element-wise product of vectors. The formula of the output layer is as follows: ŷ = a_out(w^T x), where a_out and w^T are the activation function and weights of the output layer. DNN The input vector x is formed by concatenating the user and item embedding vectors v_u and v_i. This is followed by a number of dense Rectified Linear Unit (ReLU) layers; we chose the ReLU activation function because it is the most efficient [20]. Its formulation is ReLU(x) = max(0, x). The output of a hidden layer l is formulated as z_l = ReLU(W_l z_{l−1} + b_l), where W_l and b_l are the weights and bias of layer l. In the model output layer, we predict the user criteria ratings r_1, r_2, ..., r_k from the combined MF and DNN outputs (Eqs. (9) and (10)). Overall rating deep neural network The overall rating DNN is used to learn the aggregation function f, which represents the relationship between the overall rating r_0 and the criteria ratings r_1, r_2, ..., r_k, in order to predict the overall rating r_0 = f(r_1, r_2, ..., r_k). In the input layer, the input vector is the criteria ratings r_1, r_2, ..., r_k for user u and item i, as shown in Fig. 2. We normalize the continuous features r_1, r_2, ..., r_k because DNNs are sensitive to the scaling and distribution of the inputs [21]. The normalization of a sample r_i is calculated as (r_i − m_i) / s_i, with m_i the mean of the training samples for rating r_i and s_i the standard deviation of the training samples for rating r_i. The input vector then becomes the vector of these normalized criteria ratings. This is followed by a number of hidden layers, where the output of a hidden layer is given again by Eq. (5). In the output layer, we predict the overall rating r_0 using Eq. (6). We used the Adam optimizer [22] and the MAE loss function in both parts. Recommendation After model training is finished, where each part was trained individually and independently of the other, we can use the model to predict the user's overall rating on new items. The recommendation process happens as shown in Fig. 1; for each user u and item i pair we: a) Get the user ID and the item ID, and use them as an input for the Criteria Ratings model, to compute the criteria ratings r'_1, r'_2, ..., r'_k. b) Normalize the previously computed criteria ratings r'_1, r'_2, ..., r'_k and use them as an input for the Overall Rating DNN, to compute the overall rating r'_0. c) Recommend items to the user as in traditional recommender systems using the overall rating r'_0. Results and discussion Dataset We evaluated our model on two multi-criteria rating datasets, a real-world TripAdvisor dataset and a Movies dataset. TripAdvisor dataset A multi-criteria rating dataset for hotels. It includes an overall rating and seven criteria ratings (Value, Rooms, Location, Cleanliness, Check in/front desk, Service, and Business service); the ratings range between 1 and 5. Table 1 demonstrates the statistics of the dataset and Table 2 the distribution of the different criteria ratings and the overall rating. Movies dataset A multi-criteria rating dataset for movies, available on GitHub. It contains four criteria ratings and an overall rating; the ratings range between 1 and 13. Tables 3 and 4 demonstrate the dataset statistics and the distribution of the different criteria ratings and the overall rating. Evaluation To evaluate the performance of our model we used the same metrics used in [19]. a. 
Mean Absolute Error (MAE) [23], computed as MAE = (1/M) Σ |r_ui − r̂_ui|, where r_ui is the true rating of user u for item i, r̂_ui the predicted rating, and M is the size of the test set. b. F-measure (F1 and F2) [23], computed as F_β = (1 + β²) · P · R / (β² · P + R) for β = 1 and β = 2, where P is the precision and R the recall. c. Fraction of Concordant Pairs (FCP) [24], computed as FCP = n_c / (n_c + n_d), where n_c^u = |{(i, j) : r̂_ui > r̂_uj and r_ui > r_uj}| is the number of concordant pairs for user u, n_d^u the number of discordant pairs for user u is calculated in a similar way, and n_c and n_d are their sums over all users. d. Mean Reciprocal Rank (MRR) [25], based on the rank of the first relevant item, averaged over all users. Settings We used Keras with TensorFlow as a backend to implement our model. Criteria Ratings Model Settings We conducted several experiments to find the optimal parameters for the DNN and MF. • For the DNN, we randomly initialized the parameters using a normal distribution with a mean of 0 and a standard deviation of 0.05. We used the Adam optimizer with a 0.001 learning rate and parameter values as provided in [22]. For the TripAdvisor dataset, we used a batch size of 512 and set the number of epochs to 2. We set the user and item embedding vector sizes to 64, and we selected [128 → 64 → 32 → 16 → 8] hidden layers. For the Movies dataset, we used a batch size of 64 and set the number of epochs to 4. We set the user and item embedding vector sizes to 32, and we selected [64 → 32 → 16 → 8] hidden layers. • For MF, we tried to find the optimal embedding vector size, as shown in Fig. 3. For the overall rating DNN, we initialized the parameters randomly like the previous DNN, using a normal distribution with a mean of 0 and a standard deviation of 0.05. We also used the Adam optimizer with a 0.001 learning rate and the same parameter values. For the TripAdvisor dataset, we set the number of epochs to 50, and for the Movies dataset, we set the number of epochs to 100, while for both datasets, we set the batch size to 512. We used [64 → 32 → 16 → 8] hidden layers. Finally, in the output layer, there is 1 neuron for the overall rating. Results We used the fivefold cross-validation method to split the data randomly into a 20% test set and an 80% training set including a 1% validation set. We repeated this process 5 times and calculated the mean value of each metric. We compared the performance of our model to the DMCCF model [19], then to a single DNN that predicts the overall rating directly, where the finest results of this DNN were acquired with the settings shown in Table 5. In addition, we compared our model to a number of well-known single-rating recommendation methods such as SVD [26], SVD++ [12], Baseline Estimates [27], and SlopeOne [28]. This comparison was done on the overall rating. The results are illustrated in Table 6. We can see that our model achieves the best performance on both datasets, significantly outperforming the DMCCF model, the single DNN, and the other state-of-the-art methods on all the evaluation metrics. This indicates the high expressiveness of our model, obtained by fusing the non-linear DNN and linear MF models to capture the user-item interactions. According to the results, in MAE, our model excelled the other compared methods. F1 and F2 of our model are better than those of the other methods. Our model surpasses the other models in FCP and it also exceeds them at MAP. In MRR, our model is the best. Conclusion and future work In this paper, we proposed a multi-criteria collaborative filtering recommender by fusing DNN and MF. The model consists of two parts: in the first part, we get the users and items' features and feed them as an input to a fused model of a DNN and MF to predict the criteria ratings, and in the second part, we use a DNN to predict the overall rating. 
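For concreteness, the following Keras sketch shows one possible reading of the two-part architecture with the TripAdvisor layer sizes quoted above; the way the MF and DNN branches are merged, the layer names, and the function names are illustrative assumptions rather than the authors' exact implementation.

```python
# Simplified sketch of the two-part model: a criteria-ratings network fusing an
# MF branch (element-wise product of embeddings) with a DNN branch, and a
# separate DNN mapping normalized criteria ratings to the overall rating.
import tensorflow as tf
from tensorflow.keras import layers, Model


def build_criteria_model(n_users, n_items, n_criteria=7, emb_dim=64):
    user_id = layers.Input(shape=(1,), name="user_id")
    item_id = layers.Input(shape=(1,), name="item_id")

    # MF branch: element-wise product of user and item embeddings.
    mf_u = layers.Flatten()(layers.Embedding(n_users, emb_dim)(user_id))
    mf_i = layers.Flatten()(layers.Embedding(n_items, emb_dim)(item_id))
    mf_vector = layers.Multiply()([mf_u, mf_i])

    # DNN branch: concatenated embeddings passed through a stack of ReLU layers.
    dnn_u = layers.Flatten()(layers.Embedding(n_users, emb_dim)(user_id))
    dnn_i = layers.Flatten()(layers.Embedding(n_items, emb_dim)(item_id))
    x = layers.Concatenate()([dnn_u, dnn_i])
    for units in (128, 64, 32, 16, 8):
        x = layers.Dense(units, activation="relu")(x)

    # Fuse the two branches and predict the k criteria ratings.
    fused = layers.Concatenate()([mf_vector, x])
    criteria = layers.Dense(n_criteria, name="criteria_ratings")(fused)
    model = Model([user_id, item_id], criteria)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mae")
    return model


def build_overall_model(n_criteria=7):
    criteria_in = layers.Input(shape=(n_criteria,), name="normalized_criteria")
    x = criteria_in
    for units in (64, 32, 16, 8):
        x = layers.Dense(units, activation="relu")(x)
    overall = layers.Dense(1, name="overall_rating")(x)
    model = Model(criteria_in, overall)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mae")
    return model
```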
We then demonstrated the efficiency of the proposed model: the experimental results showed that it significantly outperformed the other methods on both datasets for all the evaluation metrics, which indicates that applying deep learning and multi-criteria ratings to collaborative filtering is a successful approach that can be further enhanced with different deep learning techniques or more complex models. In future work, we will study the use of different deep learning techniques, such as Autoencoders, Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN), in recommendation systems and attempt to further improve the performance of our model. We will also try other feature representation methods, in particular to address the cold start problem by using user and item content features.
3,553.2
2020-05-24T00:00:00.000
[ "Computer Science" ]
Evaluation of Rock Bolt Support for Polish Hard Rock Mines . The article presents different types of rock bolt support used in Polish ore mining. Individual point resin and expansion rock bolt support were characterized. The roof classes for zinc and lead and copper ore mines were presented. Furthermore, in the article laboratory tests of point resin rock bolt support in a geometric scale of 1:1 with minimal fixing length of 0.6 m were made. Static testing of point resin rock bolt support were carried out on a laboratory test facility of Department of Underground Mining which simulate mine conditions for Polish ore and hard coal mining. Laboratory tests of point resin bolts were carried out, especially for the ZGH Bolesław, zinc and lead “Olkusz – Pomorzany” mine. The primary aim of the research was to check whether at the anchoring point length of 0.6 m by means of one and a half resin cartridge, the type bolt “Olkusz – 20A” is able to overcome the load.The second purpose of the study was to obtain load – displacement characteristic with determination of the elastic and plastic range of the bolt. For the best simulation of mine conditions the station steel cylinders with an external diameter of 0.1 m and a length of 0.6 m with a core of rock from the roof of the underground excavations were used. Introduction In Poland the underground exploitation of ore deposits is carried out in two regions. Wherein the primary source of lead are zinc-lead ores deposits of Mississippi Valley type in the Silesian-Cracow region and copper ore deposits in the Fore-Sudetic Monocline. The exclusive primary sources of zinc in Poland are zinc and lead ore deposits of Missisippi Valley type in the Triassic dolomites of the Silesian-Cracow region. In the last year there were the following deposits operated in the Olkusz region, i.e. Olkusz, Pomorzany and Klucze I [4,5]. It is worth mentioning that the mentioned deposits are explored in the only one underground zinc and lead mine "Olkusz-Pomorzany" belonging to ZGH Bolesław S.A. in Bukowno. Copper ores are excavated in the Fore-Sudetic Monocline deposits, which contains many accompanying minerals. Currently underground exploitation is carried out in three mines "Rudna", "Polkowice -Sieroszowice" and "Lubin" of Legnica-Głogów Copper District (LGOM), that belong to KGHM Polish Copper SA. The basic type of rock mass reinforcement method for both preparatory and operational excavations in underground metal ore mines, both in Poland and in different countries across the world, is the expansion shell or adhesive-bonded rock bolt [3,6,8]. The annual consumption of rock bolt supports in Polish ore mining is more than 3 million units (Tab. 1). It is worth mentioning that, exploitation of zinc and lead in "Olkusz -Pomorzany" mine is carried out at depth of about 100 m below ground surface whereas exploitation of copper in Legnica-Głogów Copper District (LGOM) mines is carried out at depth over 1000 m up to 1200 m below ground surface. 
Therefore in the "Olkusz -Pomorzany" mine there are only rigid bolts used with satisfactory strength and only failing, when taking over the load near the limit of strength of bolt rod, with relatively small deformation capacity, whilst in the copper mines there are used rigid and yielding bolts capable of carrying significant deformation, with relatively low carrying capacity and also more and more there are used energy-absorbing of dynamic load bolts, characterized by high load carrying capacity and the ability to carry relatively high displacement and deformation. Choice of rock bolt support The variety of conditions in which the rock bolt support is used makes it suitable for securing the stability of the excavation or as reinforcement of existing rock bolt support [3]. Choice of rock bolt support in "Olkusz -Pomorzany" mine is based on the determination of roof's class which is determined on the basis of the results of geomechanical tests and on the basis evaluation of weakness factor of rock mass "c" carried out by an expert [4]. There are five classes of the roof, for which a specific parameters are determined for application of rock bolt support; way of fixing (point or continuous), the mechanism of action (resin, expansion or friction), the length of bolts, spacing in rows, with the additional security and, above all, cross-sectional shape of excavation (Tab. 2). Selection of excavation support in copper ore mining is performed by the Head of the Mining Works Department on the basis of knowledge of geological and mining conditions (Tab. 3). In practice, this insight is implemented by determining the roof class, in accordance with the instruction [2,10] used by the KGHM, developed by an expert. The classification of the roof rocks is based on the hierarchy of parameters' validity, which basically determines the stability of the roof of mining excavations. These parameters include: roof delamination (divisibility in the vertical direction), density of mineralized fractures in the roof of excavations, degree of fault, average throw of fault, tensile strength. Individual parameters are obtained as a result of analysis of drill cores, analysis of drilling hole walls by means of endoscopes, analysis of exposures, and research on geomechanical properties of rocks. In addition, correction coefficients have been introduced to determine the rock of the roof, taking into account the impact of the open width of the working space and the method of roof management. Furthermore, when selecting the rock bolt support, the compressive strength, intended use projected dimensions of the excavation and the experience in the application of rock bolt support are taken into account. In the case of long-term excavations, grouted-monted bolts are preferable, more resistant to corrosion and do not require obtaining and then maintaining a high pre-tension. In mining fields, where the period from the completion of the excavation to its liquidation is relatively short, expansion bolts are more often used, which due to the less time consuming construction, allow for greater bolting efficiency [1]. Examples of bolting diagrams for zinc and lead and copper ore mining are shown in Figures 1 and 2. Laboratory test facility Static tests of point resin rock bolts were carried out at the laboratory test facility in simulated mining conditions, in particular for the zinc and lead "Olkusz -Pomorzany" mine. Studies were carried out using a consistent methodology. 
In the static load mode, the rock bolts were stretched and broken by the use of maximum force resulting from the power of the pump. Displacement and force sensors were connected to a QuantumX MX840 universal measuring amplifier via 15-pin plugs. During the tensile process the results of force and displacement measurements were recorded continuously by a program specialized in the field of measurement technology -known as "CATMAN-EASY" [9]. Resin rock bolts type "Olkusz 20A" used in the laboratory tests constitute the basis of mining excavation supports in underground zinc and lead "Olkusz -Pomorzany" mine. Rock bolt type "Olkusz 20 A" was modified for the needs of the conducted experience and consists of a ribbed bar with a 1.6 m length which has a diameter of 20 mm, at one end has a thread with M20 (Fig. 3a) and circular shaped bearing plate with a diameter of 150 mm and a thickness of 6mm (Fig. 3c). The height of the bearing plate was 24.8 mm, made of steel grade St3SAL or 18G2A. The other end of the rod is angled 30-35 in order to improve mixing of the resin. Additionally bolt was equipped with connecting sleeve (Fig. 3d) and threated extension (Fig. 3b). This configuration made it possible to install the bolt without a nut. Support's rods are made of stainless steel "EPSTAL", B500SP steel grade, which is characterized by the following parameters: yield strength (Re) of 500 MPa to 625 MPa; ratio of tensile strength to yield strength (Rm/Re) from 1.15 to 1.35; percentage elongation (A5) a minimum of 16% and the percentage elongation at maximum force (Agt) equal to at least 8%. In order for laboratory studies to best satisfy mining conditions, rock cores obtained from underground excavations were used (Fig. 4). Rock cores (dolomites) had holes with a diameter 0.028 m (Fig. 4a) and characterized compressive strength up to 100 MPa. Bolt was installed into core by means of one and a half resin cartridge with a length of 0.6 m. Resin cartridge type "Lokset" has a length 0.4 m and diameter 0.24 m (Fig. 4b). It consists of binding compound, a hardener and a plastic coating. In addition, there is an overlay at one end of the cartridge, the task of which is to keep the cartridge in the hole so that it does not fall out. Load-displacement characteristics of point resin rock bolt The aim of this study was to obtain load-displacement characteristics of point resin rock bolt on the basis of which it will be possible to determine individual phases of load and deformation of the bolt. During tests the results of force and displacement characteristics were recorded continuously and visualization and evaluation of the measurement were tracked on-going. In addition, after the tests were completed, reports documenting the results of the measurements were created and then stored as an ASCII file extension. Examples of the characteristic parameters obtained in the tests are shown in Figure 5. The obtained characteristics were divided into five phases, visible in Figure 5a (A, B, C, D, E). The marked area A together with the area C constitutes the whole and shows a similar growth character of the pull out force, relative to the displacement. The growth trend is disturbed by slipping the bearing plate -phase B, which was observed during the test. For a maximum tensile force equal 153 kN, the displacement was 45 mm. A sudden drop of load (phase D) was caused by pulled out the bolt. The next phase E is the total extension of the bolt from the hole. 
The load-displacement characteristic of Olkusz-20A bolt number 2 (Fig. 5b) was likewise divided into five characteristic parts. In phase A, the tensile force curve follows the reference tensile characteristic of the bolt. In the initial phase B, a flattening of the curve occurs, caused by indentation of the bearing plate, which was observed during the test. Then a slight increase is noticeable, followed by a drop in load with increasing displacement. This is due to the bolt being drawn out of the hole, which was also noticed during the test. In phase C, the tensile force increases up to the critical value of 162 kN, at which the bolt breaks at the thread location (Fig. 5c). Phase D represents the breaking of the bolt, while phase E shows the extraction of the broken part. Conclusions At present, the rock bolt support in the underground zinc and lead "Olkusz-Pomorzany" mine is installed in bolt holes made by means of self-propelled drilling-bolting machines of the SWK-2 Hz and Robolt 320-22C types. Drilling bolt holes is one of the most dangerous operations, as it is done under an unsupported excavation roof. In excavations in which the roof is very fragile, manual bolting is also used. In 2017, about 123,000 resin rock bolts of the "Olkusz 16A" type were installed by manual bolting and about 7,000 resin rock bolts of the "Olkusz 20A" type were installed with self-propelled drilling-bolting machines. For bolts of the "Olkusz 16A" type, holes are drilled with a diameter of 35 mm. For fixing the rod, at least two "Lokset" resin cartridges with a diameter of 30 mm and a length of 600 mm are used, for which the gel time is 2 minutes. For bolts of the "Olkusz 20A" type, holes with a diameter of 33 mm are drilled in the roof. For fixing the rod, typically four "Lokset" resin cartridges are used, with a diameter of 24 mm and a length of 400 mm, for which the gel time is 30 seconds. According to the Polish Standard [7], the capacity of resin bolts in the underground exploitation of lead and zinc ore deposits should be at least 90 kN. Control of the bolting consists of pull-out tests with a load of up to 40 kN. In the laboratory tests, the point resin rock bolts were fixed over a length of 0.6 m. Based on the obtained characteristics, it can be stated that the support carried the minimum required load. This means that a modification of the bolting technology can be proposed, namely point anchoring instead of anchoring over the entire length of the bolt hole. Some restrictions result from geological conditions: it should be remembered that in the "Olkusz-Pomorzany" mine, the roofs of excavations (rooms) very often consist of oxidized and strongly fractured ore-bearing dolomites.
2,956.8
2018-01-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Survey of Distances between the Most Popular Distributions : We present a number of upper and lower bounds for the total variation distances between the most popular probability distributions. In particular, some estimates of the total variation distances in the cases of multivariate Gaussian distributions, Poisson distributions, binomial distributions, between a binomial and a Poisson distribution, and also in the case of negative binomial distributions are given. Next, the estimations of Lévy–Prohorov distance in terms of Wasserstein metrics are discussed, and Fréchet, Wasserstein and Hellinger distances for multivariate Gaussian distributions are evaluated. Some novel context-sensitive distances are introduced and a number of bounds mimicking the classical results from the information theory are proved. Introduction Measuring a distance, whether in the sense of a metric or a divergence, between two probability distributions (PDs) is a fundamental endeavor in machine learning and statistics [1]. We encounter it in clustering, density estimation, generative adversarial networks, image recognition and just about any field that undertakes a statistical approach towards data. The most popular case is measuring the distance between multivariate Gaussian PDs, but other examples such as Poisson, binomial and negative binomial distributions, etc., frequently appear in applications too. Unfortunately, the available textbooks and reference books do not present them in a systematic way. Here, we make an attempt to fill this gap. For this aim, we review the basic facts about the metrics for probability measures, and provide specific formulae and simplified proofs that could not be easily found in the literature. Many of these facts may be considered as a scientific folklore known to experts but not represented in any regular way in the established sources. A tale that becomes folklore is one that is passed down and whispered around. The second half of the word, lore, comes from Old English lār, i.e., 'instruction'. The basic reference for the topic is [2], and, in recent years, the theory has achieved substantial progress. A selection of recent publications on stability problems for stochastic models may be found in [3], but not much attention is devoted to the relationship between different metrics useful in specific applications. Hopefully, this survey helps to make this treasure more accessible and easy to handle. The rest of the paper proceeds as follows: In Section 2, we define the total variation, Kolmogorov-Smirnov, Jensen-Shannon and geodesic metrics. Section 3 is devoted to the total variation distance for 1D Gaussian PDs. In Section 4, we survey a variety of different cases: Poisson, binomial, negative-binomial, etc. In Section 5, the total variation bounds for multivariate Gaussian PDs are presented, and they are proved in Section 6. In Section 7, the estimations of Lévy-Prohorov distance in terms of Wasserstein metrics are presented. The Gaussian case is thoroughly discussed in Section 8. In Section 9, a relatively new topic of distances between the measures of different dimensions is briefly discussed. Finally, in Section 10, new context-sensitive metrics are introduced and a number of inequalities mimicking the classical bounds from information theory are proved. The Most Popular Distances The most interesting metrics on the space of probability distributions are the total variation (TV), Lévy-Prohorov, Wasserstein distances. 
We will also discuss Fréchet, Kolmogorov-Smirnov and Hellinger distances. Let us remind readers that, for probability measures P, Q with densities p, q, We need the coupling characterization of the total variation distance. For two distributions, P and Q, a pair (X, Y) of random variables (r.v.) defined on the same probability space is called a coupling for P and Q if X ∼ P and Y ∼ Q. Note the following fact: there exists a coupling (X, Y) such that P(X = Y) = TV(P, Q). Therefore, for any measurable function f , we have P( f (X) = f (Y)) ≤ TV(X, Y) with equality iff f is reversible. In a one-dimensional case, the Kolmogorov-Smirnov distance is useful (only for probability measures on R): 's, and Y has a density w.r.t. Lebesgue measure bounded by a constant C. Then, Kolm(P, Q) ≤ 2 CWass 1 (P, Q). Here, Wass 1 (P, Let X 1 , X 2 be random variables with the probability density functions p, q, respectively. Define the Kullback-Leibler (KL) divergence ). The total variance distance and the Kullback-Leibler (KL) divergence appear naturally in statistics. Say, for example, in the testing of binary hypothesis H 0 :X ∼ P versus H 1 :X ∼ Q, the sum of errors of both types as the infimum over all reasonable decision rules d: Moreover, when minimizing the probability of type-II error subjected to type-I error constraints, the optimal test guarantees that the probability of type-II error decays exponentially in view of Sanov's theorem where n is the sample size. In the case of selecting between M ≥ 2 distributions, The KL-divergence is not symmetric and does not satisfy the triangle inequality. However, it gives rise to the so-called Jensen-Shannon metric [4] JS(P, Q) = D(P||R) + D(Q||R) with R = 1 2 (P + Q). It is a lower bound for the total variance distance 0 ≤ JS(P, Q) ≤ TV(P, Q). The Jensen-Shannon metric is not easy to compute in terms of covariance matrices in the multi-dimensional Gaussian case. The proof is sketched in Section 6. The upper bound is based on the following. Proposition 2 (Pinsker's inequality). Let X 1 , X 2 be random variables with the probability density functions p, q, and the Kullback-Leibler divergence KL(P X 1 ||P X 2 ). Then, for τ(X 1 , X 2 ) = TV(X 1 , X 2 ), Proof of Pinsker's inequality. We need the following bound: If P and Q are singular, then KL = ∞ and Pinsker's inequality holds true. Assume P and Q are absolutely continuous. In view of (7) and Cauchy-Schwarz inequality, To check (12), define [Mark S. Pinsker was invited to be the Shannon Lecturer at the 1979 IEEE International Symposium on Information Theory, but could not obtain permission at that time to travel to the symposium. However, he was officially recognized by the IEEE Information Theory Society as the 1979 Shannon Award recipient]. For one-dimensional Gaussian distributions, In the multi-dimensional Gaussian case, Next, define the Hellinger distance and note that, for one-dimensional Gaussian distributions, For multi-dimensional Gaussian PDs with δ = µ 1 − µ 2 , In fact, the following inequalities hold: where dx. These inequalities are not sharp. For example, the Cauchy-Schwarz inequality immediately implies τ(X, Y) ≤ 1 2 χ 2 (X, Y). There are also reverse inequalities in some cases. Proposition 3 (Le Cam's inequalities). The following inequality holds: Therefore, by Cauchy-Schwarz: where ∆ is small enough. Let r < d and A be r × d semi-orthogonal matrix AA T = I r . Define τ := τ(AX, AY). Then, Proof. In view of Le Cam's inequalities, it is enough to evaluate η 2 . 
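For ease of reference, the conventional forms of the quantities introduced above are collected below; the survey's own normalisations may differ by constant factors, so these should be read as the standard textbook definitions and bounds rather than as the paper's exact equations.

```latex
\[
\mathrm{TV}(P,Q) = \sup_{A}\bigl|P(A)-Q(A)\bigr| = \tfrac12\!\int |p-q|\,d\nu ,
\qquad
\mathrm{Kolm}(P,Q) = \sup_{x\in\mathbb{R}}\bigl|F_P(x)-F_Q(x)\bigr| ,
\]
\[
\mathrm{KL}(P\,\|\,Q) = \int p\,\log\frac{p}{q}\,d\nu ,
\qquad
H^2(P,Q) = \tfrac12\!\int\bigl(\sqrt{p}-\sqrt{q}\bigr)^2 d\nu ,
\]
\[
\text{(Pinsker)}\quad \mathrm{TV}(P,Q) \le \sqrt{\tfrac12\,\mathrm{KL}(P\,\|\,Q)},
\qquad
\text{(Le Cam)}\quad H^2(P,Q) \le \mathrm{TV}(P,Q) \le H(P,Q)\sqrt{2-H^2(P,Q)} .
\]
```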
Note that all r eigenvalues of [Ernst Hellinger was imprisoned in Dachau but released by the interference of influential friends and emigrated to the US]. Bounds on the Total Variation Distance This section is devoted to the basic examples and partially based on [5]. However, it includes more proofs and additional details ( (1), Pois(1 + a)). (a) Note that the upper bound becomes useless for p 2 − p 1 ≥ 0.07; (b) blue and orange curves -exact TV distance: the blue curve works for 1 ≤ λ 2 λ 1 ≤ 2 and the orange curve for 2 ≤ λ 2 λ 1 ≤ 4. Note that the linear upper bound (red curve) is not relevant and the square root upper (green curve) bound becomes useless for λ 2 λ 1 ≥ 4. Proposition 4 (Distances between exponential distributions). (a) Let X (23) Given y > 0, the area of an Proof. Let us prove the following inequality: where p = p 1 , p + x = p 2 and q = 1 − p. By concavity of the ln, given p ∈ (0, 1) and This gives the bound np 1 ≤ l as follows: (32) On the other hand, as h(0) = 0 and h (x) = ln(1 + x/p − ln(1 − x/q) ≤ 0; this implies the bound l ≤ np 2 . Indeed: The rest of the solution goes in parallel with that of Proposition 5. Equation (27) is replaced with the following relation: if S n (p) ∼ Bin(n, p); then, In fact, iterated integration by parts yields the RHS of (35) the LHS of (35). Proposition 7 (Distance between binomial and Poisson distributions Alternative bound The stronger form of (39): where S n (u) ∼ Bin(n, u) and Suppose r d, and we want to find a low-dimensional projection A ∈ R r×d , AA T = I r of the multidimensional data X ∼N(µ 1 , Σ 1 ) and Y ∼N(µ 2 , Σ 2 ) such that TV(AX, AY) → max. The problem may be reduced to the case µ 1 = µ 2 = 0, Σ 1 = I n , Σ 2 = Σ, cf. [6]. In view of (44), it is natural to maximize where g(x) = 1 x − 1 2 and γ i are the eigenvalues of AΣA T . Consider all permutations π of these eigenvalues. Let Then, rows of matrix A should be selected as the normalized eigenvectors of Σ associated with the eigenvalues γ i . . For the optimization procedure in (47), the following result is very useful. Estimation of Lévy-Prokhorov Distance Let P i , i = 1, 2, be probability distributions on a metric space W with metric r. Define the Lévy-Prokhorov distance ρ L−P (P 1 , P 2 ) between P 1 , P 2 as the infimum of numbers > 0 such that, for any closed set C ⊂ W, where C stands for the -neighborhood of C in metric r. It could be checked that ρ L−P (P 1 , P 2 ) ≤ τ(P 1 , P 2 ), i.e., the total variance distance. Equivalently, where P (P 1 , P 2 ) is the set of all jointP on W × W with marginals P i . Next, define the Wasserstein distance W r p (P 1 , P 2 ) between P 1 , P 2 by In the case of Euclidean space with r(x 1 , x 2 ) = ||x 1 − x 2 ||, the index r is omitted. Total Variation, Wasserstein and Kolmogorov-Smirnov distances defined above are stronger than weak convergence (i.e., convergence in distribution, which is weak* convergence on the space of probability measures, seen as a dual space). That is, if any of these metrics go to zero as n → ∞, then we have weak convergence. However, the converse is not true. However, weak convergence is metrizable (e.g., by the Lévy-Prokhorov metric). The Lévy-Prokhorov distance is quite tricky to compute, whereas the Wasserstein distance can be found explicitly in a number of cases. Say, in a 1D case W = R 1 , we have Theorem 5. For d = 1, Proof. First, check the upper bound W 1 (P 1 , Then, in view of the Fubini theorem, For the proof of the inverse inequality, see [8]. Proposition 10. For d = 1 and p > 1, Proof. 
It follows from the identity The minimum is achieved forF(x, y) = min[F 1 (x), F 2 (y)]. For an alternative expression (see [9]): , ϕ and Φ are PDF and CDF of the standard Gaussian RV. Note that, in the case µ X = µ Y , the first term in (74) vanishes, and the second term gives We also present expressions for the Frechet-3 and Frechet-4 distances All of these expressions are minimized when Cov(X j , Y j ), j = 1, . . . , d are maximal. However, this fact does not lead immediately to the explicit expressions for Wasserstein's metrics. The problem here is that the joint covariance matrix Σ X,Y should be positively definite. Thus, the straightforward choice Corr(X j , Y j ) = 1 is not always possible; see Theorem 6 below and [10]. [Maurice René Fréchet (1878-1973), a French mathematician, worked in topology, functional analysis, probability theory and statistics. He was the first to introduce the concept of a metric space (1906) and prove the representation theorem in L 2 (1907). However, in both cases, the credit was given to other people: Hausdorff and Riesz. Some sources claim that he discovered the Cramér-Rao inequality before anybody else, but such a claim was impossible to verify since lecture notes of his class appeared to be lost. Fréchet worked in several places in France before moving to Paris in 1928. In 1941, he succeeded Borel at the Chair of Calculus of Probabilities and Mathematical Physics in Sorbonne. In 1956, he was elected to the French Academy of Sciences, at the age of 78, which was rather unusual. He influenced and mentored a number of young mathematicians, notably Fortet and Loève. He was an enthusiast of Esperanto; some of his papers were published in this language]. Wasserstein Distance in the Gaussian Case In the Gaussian case, it is convenient to use the following extension of Dobrushin's bound for p = 2: For simplicity, assume that both matrices Σ 2 1 and Σ 2 2 are non-singular (In the general case, the statement holds with Σ −1 1 understood as Moore-Penrose inversion). Then, the L 2 −Wasserstein distance W 2 (X 1 , where (Σ 1 Σ 2 2 Σ 1 ) 1/2 stands for the positively definite matrix square-root. The value (78) is achieved when Note that the expression in (79) vanishes when Σ 2 1 = Σ 2 2 . 1 ρ ρ 1 and ρ ∈ (−1, 1). Then, Note that, in the case Proof. First, reduce to the case µ 1 = µ 2 = 0 by using the identity W 2 2 (X 1 , Note that the infimum in (19) is always attained on Gaussian measures as W 2 (X 1 , X 2 ) is expressed in terms of the covariance matrix Σ 2 = Σ 2 X,Y only (cf. (81) below). Let us write the covariance matrix in the block form where the so-called Shur's complement S = Σ 2 2 − K T Σ −2 1 K. The problem is reduced to finding the matrix K in (80) that minimizes the expression subject to a constraint that the matrix Σ 2 in (80) is positively definite. The goal is to check that the minimum (81) is achieved when the Shur's complement S in (80) equals 0. Consider the fiber σ −1 (S), i.e., the set of all matrices K such that σ(K) It is enough to check that the maximum value of tr(K) on this fiber equals Since the matrix S is positively defined, it is easy to check that the fiber S = 0 should be selected. In order to establish (82), represent the positively definite matrix where the diagonal matrix D 2 r = diag(λ 2 1 , . . . , λ 2 r , 0, . . . , 0) and λ i > 0. Next, U = (U r |U d−r ) is the orthogonal matrix of the corresponding eigenvectors. We obtain the following r × r identity: It means that Σ −1 The matrix O r parametrises the fiber σ −1 (S). 
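Two closed-form results used repeatedly in this part of the discussion are standard and can be stated compactly: the quantile representation of the one-dimensional Wasserstein distances and the quadratic Wasserstein (Fréchet) distance between Gaussian laws. The forms below use the usual notation with covariance matrices Σ1 and Σ2 (the survey writes them as squared matrices) and are the conventional statements, not verbatim reconstructions of the paper's displayed equations.

```latex
\[
W_1(P_1,P_2) = \int_{\mathbb{R}} \bigl|F_1(x)-F_2(x)\bigr|\,dx ,
\qquad
W_p(P_1,P_2)^p = \int_0^1 \bigl|F_1^{-1}(t)-F_2^{-1}(t)\bigr|^p \,dt ,
\]
\[
W_2^2\bigl(N(\mu_1,\Sigma_1),\,N(\mu_2,\Sigma_2)\bigr)
  = \|\mu_1-\mu_2\|^2
  + \operatorname{tr}\!\Bigl(\Sigma_1+\Sigma_2
      - 2\bigl(\Sigma_1^{1/2}\Sigma_2\,\Sigma_1^{1/2}\bigr)^{1/2}\Bigr).
\]
```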
As a result, we have an optimization problem in a matrix-valued argument O r , subject to the constraint O T r O r = I r . A straightforward computation gives the answer tr[(M T M) 1/2 ], which is equivalent to (82). Technical details can be found in [11,12]. Remark 3. For general zero means RVs X, Y ∈ R d with the covariance matrices Σ 2 i , i = 1, 2, the following inequality holds [13]: Example 5 (Wasserstein-2 distance between Dirac measure on R m and a discrete measure on R d ). Let y ∈ R m and µ 1 ∈ M(R m ) be the Dirac measure with µ 1 (y) = 1, i.e., all mass centered at y. Let x 1 , . . . , x k ∈ R d be distinct points, p 1 , . . . , p k ≥ 0, p 1 + . . . + p k = 0, and let µ 2 ∈ M(R d ) be the discrete measure of point masses with µ 2 (x i ) = p i , i = 1, . . . , k. We seek the Wasserstein distanceŴ 2 (µ 1 , µ 2 ) in a closed-form solution. Suppose m ≤ d; then, noting that the second infimum is attained by b = y − k ∑ i=1 p i Vx i and defining C in the last infimum to be Let the eigenvalue decomposition of the symmetric positively semidefinite matrix C be C = QΛQ T with Λ = diag(λ 1 , . . . , λ d ), and is attained when V ∈ O(m, d) has row vectors given by the last m columns of Q ∈ O(d). Note that the geodesic distance (7) and (8) between Gaussian PDs (or corresponding covariance matrices) is equivalent to the formula for the Fisher information metric for the multivariate normal model [15]. Indeed, the multivariate normal model is a differentiable manifold, equipped with the Fisher information as a Riemannian metric; this may be used in statistical inference. Example 6. Consider i.i.d. random variables Z l , . . . , Z n to be bi-variately normally distributed with diagonal covariance matrices, i.e., we focus on the manifold M diag = {N(µ, Λ) : µ ∈ R 2 , Λ diagonal}. In this manifold, consider the submodel M * diag = {N(µ, σ 2 I) : µ ∈ R 2 , σ 2 ∈ R + } corresponding to the hypothesis H 0 : σ 2 1 = σ 2 2 . First, consider the standard statistical estimatesZ for the mean and s 1 , s 2 for the variances. Ifσ 2 denotes the geodesic estimate of the common variance, the squared distance between the initial estimate and the geodesic estimate under the hypothesis H 0 is given by which is minimized byσ 2 = s 1 s 2 . Hence, instead of the arithmetic mean of the initial standard variation estimates, we use as an estimate the geometric mean of these quantities. Finally, we present the distance between the symmetric positively definite matrices of Then, the distance is defined as follows: In order to estimate the distance (93), after the simultaneous diagonalization of matrices A and B, the following classical result is useful: Context-Sensitive Probability Metrics The weighted entropy and other weighted probabilistic quantities generated a substantial amount of literature (see [16,17] and the references therein). The purpose was to introduce a disparity between outcomes of the same probability: in the case of a standard entropy, such outcomes contribute the same amount of information/uncertainty, which is appropriate in context-free situations. However, imagine two equally rare medical conditions, occurring with probability p 1, one of which carries a major health risk while the other is just a peculiarity. Formally, they provide the same amount of information: − log p, but the value of this information can be very different. The applications of the weighted entropy to the clinical trials are in the process of active development (see [18] and the literature cited therein). 
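A common concrete choice for the distance between symmetric positive definite matrices discussed above is the affine-invariant (geodesic) metric. The Python sketch below implements that choice together with its equivalent expression through the generalized eigenvalues obtained after simultaneous diagonalization of the two matrices; whether this coincides exactly with the paper's definition (93) cannot be confirmed here, so it should be read as an assumed form.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, eigh

def affine_invariant_distance(A, B):
    """Geodesic distance d(A,B) = || log(A^{-1/2} B A^{-1/2}) ||_F
    between symmetric positive definite matrices (assumed form)."""
    A_inv_sqrt = np.linalg.inv(np.real(sqrtm(A)))
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(np.real(logm(M)), "fro")

def affine_invariant_distance_eig(A, B):
    """Equivalent form via the generalized eigenvalues of (B, A):
    d(A,B)^2 = sum_i log(lambda_i)^2, i.e. the quantity obtained after
    simultaneously diagonalizing A and B."""
    lam = eigh(B, A, eigvals_only=True)
    return np.sqrt(np.sum(np.log(lam) ** 2))

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.5, -0.2], [-0.2, 2.5]])
print(affine_invariant_distance(A, B))      # the two forms agree
print(affine_invariant_distance_eig(A, B))
```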
In addition, the contribution to the distance (say, from a fixed distribution Q) related to these outcomes, is the same in any conventional sense. The weighted metrics, or weight functions, are supposed to fulfill the task of samples graduation, at least to a certain extent. Let the weight function or graduation ϕ > 0 on the phase space X be given. Define the total weighted variation (TWV) distance τ ϕ (P 1 , P 2 ) = 1 2 sup Similarly, define the weighted Hellinger distance. Let p 1 , p 2 be the densities of P 1 , P 2 w.r.t. to a measure ν. Then, Conclusions The contribution of the current paper is summarized in the Table 1 below. The objects 1-8 belong to the treasures of probability theory and statistics, and we present a number of examples and additional facts that are not easy to find in the literature. The objects 9-10, as well as the distances between distributions of different dimensions, appeared quite recently. They are not fully studied and quite rarely used in applied research. Finally, objects [11][12] have been recently introduced by the author and his collaborators. This is the field of the current and future research.
4,860.8
2023-03-01T00:00:00.000
[ "Mathematics" ]
Creative synthesis of the technical solutions in the field of industrial processes When a technical problem is approached, as a direct consequence of analyzing certain distinct existing solutions, an improved solution of the problem could be elaborated. Over the years, some methods able to help the designers were identified and developed. The purpose of this paper was to take into consideration and to develop a succinct comparison of some creative considered methods of solving research problems in the field of industrial processes. The first group of analyzed design method was based on the initial use of the so-called ideas diagram. The results of the analysis were used to propose solutions concerning a device for investigation of the effects developed on sharp peaks by the electrical discharges. The axiomatic design was applied as a creative design method in the case of equipment for the investigation of degradation in time of the subsystems of the computers. The reverse engineering method was considered in the case of a device for supporting the tablet in a convenient position for the user. A final succinct comparison of the three ways of synthesizing an improved solution of a technical problem from the field of industrial processes highlights the essential characteristics of the approached methods. Introduction The concept of "technique" comes from the Greek language and one of its meanings refers to the assembly of production tools and knowledge applied to produce goods. In the field of manufacturing engineering, there are many technical problems for which technical solutions must be found. A technical solution must be an adequate answer to a technical problem. A general conventional classification of the technical problems shows that there are: 1. Strategic problems, whose solutions could involve high material values and/or significant people groups; when such a problem is approached, it is important to find many solutions and subsequently to select the most convenient one, eventually by consulting other specialists; 2. Tactic problems, specific to the current situations, when a solution must be found in short time, and whose result does not involve high material values and/or large people groups. The above-mentioned considerations are valid inclusively in the case of the engineer specialized in the field of manufacturing engineering when solutions of constructive or technological problems must be identified. In the two situations (strategic problems and tactical problems), creative or routine solutions could be identified and applied. When the time for solving the approached problem is short or the solution must be found in the shortest time, usually the routine solutions could be preferred. When there is a higher duration in which a solution must be identified, the designer could try to propose a creative solution. To efficiently search and identify creative solutions, over the times the specialists invested their efforts in defining and promoting methods able to use in a large extent the technical creativity. In this way, simpler or more complex methods for creative solving of the technical problems were proposed. Essentially, such creative methods for solving technical problems could be methods based essentially on the intuition and logical intuitive methods, respectively. As intuitive creative methods, one could consider the classical brainstorming, the lateral thinking method, lotus flower method, analogy method, advocate's method etc. 
Among the logical intuitive methods, the following methods could be considered: ideas diagram method, TRIZ method, axiomatic design, the method of the generalized object of the technical creation etc. When considering the way in which a creative solution could be found, the researchers have distinct opinions. Thus, Harvey appreciated that the groups have significant creative capacities by which they could generate radical creative solutions and it is necessary to combine the groups cognitive, social and environmental resources so that finally success solutions could be identified [1]. Hoeltzel and Cheng analyzed the possibility of using knowledge-based approaches to synthesize mechanisms in a creative way. They concluded that in comparison with other design strategies, the knowledge-based computer-aided design tools could ensure higher efficiency in generation of solutions able to meet the initially assumed functional requirements [2]. The research presented in this paper aimed to take into consideration some design methods considered to be helpful in identifying and developing improved solutions for industrial processes or equipment. Methods based on the initial use of the ideas diagram One of the methods able to facilitate the synthesis of creative solutions for technical problems is the so-called ideas diagram methods. This method uses a graphical representation that along horizontal line includes the subassemblies identified for a possible solution of the approached technical problem. Subsequently, along vertical lines, distinct versions for each subassembly are mentioned. Both at the end of the horizontal lines and of the vertical lines, interrogation signs are also included, to highlight the idea that other subassemblies or versions of a certain subassembly could be subsequently identified. One considers that in comparison with other creative methods, the ideas diagram method suggests an open character of searching the problem various solutions, just by using these interrogation signs [3]. As one could notice, this ideas diagram method is practically a possible first stage in searching new or improved solutions for the approached problems; the applying of other methods could ensure proper conditions for identification of the wanted creative solutions. To illustrate some aspects corresponding to the possible generation of creative solutions by the initial use of the ideas diagram method, one could consider the simplified diagram presented in figure 1. This ideas diagram was elaborated in the case of equipment for studying the effects of the electrical discharges on the sharp conical peaks that belong to test samples made of various electroconductive materials [4]. In the stages found after the elaboration of the graphical representation that corresponds to the ideas diagram, all the possible solutions could be highlighted, by means of the morphological method or the lexical graphical method [5][6]. In the case of the problem approached in this paper, one preferred to elaborate a morphological table in which the versions of each subassembly are symbolized by indexes added to the distinct subassemblies versions A, B, and C. In table 1 all these versions were thus mentioned. A solution A2B1C3 includes practical the version 2 for the subassembly A, version 1 for the subassembly B and version 3 for the subassembly C. One could notice that the higher the number of the subassemblies and the number of the subassemblies versions are, the higher the number of the possible solutions is. 
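Because the number of combinations grows multiplicatively with the number of subassemblies and of versions per subassembly, even a small ideas diagram is conveniently handled programmatically. The short Python sketch below merely illustrates this combinatorics for the morphological table discussed next (three versions of subassemblies A and B, four of C); the version labels and the three solutions retained after the global evaluation follow the text, while everything else is illustrative.

```python
from itertools import product

# Versions of each subassembly, as symbolised in the morphological table.
versions = {
    "A": ["A1", "A2", "A3"],
    "B": ["B1", "B2", "B3"],
    "C": ["C1", "C2", "C3", "C4"],
}

# Every possible solution combines one version of A, one of B and one of C.
solutions = ["".join(combo) for combo in product(*versions.values())]
print(len(solutions))          # 3 * 3 * 4 = 36 possible solutions

# Shortlist retained after the global evaluation, as reported in the text.
shortlist = ["A2B1C1", "A3B3C2", "A1B1C2"]
assert all(s in solutions for s in shortlist)
```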
In the case of the approached problem, with three versions each for subassemblies A and B and four versions for subassembly C, the total number Nt of possible solutions is Nt = 3 × 3 × 4 = 36. Analyzing 36 solutions in detail could require a long time, which is inconvenient. To diminish the number of solutions to be analyzed in detail, several distinct methods could be used (global evaluation, the successive sequential method, the method of dividing into morphologies, the method of simple randomization, the method of weighted randomization etc.). Using the method of global evaluation, the solutions A2B1C1, A3B3C2 and A1B1C2 were considered to present innovative aspects and were selected for detailed analysis. To develop such a detailed analysis, various other methods could be applied (the value analysis method, the method of utilities, the AHP method etc.). To continue the creative synthesis in the case of the equipment for studying the behavior of sharp conical peaks under the action of electrical discharges, the method of the matrix with double entries was preferred. With this aim in view, table 2 was elaborated. In this table, the three previously selected possible solutions were listed both along the first row and the first column. Subsequently, each possible solution was compared with the other two solutions, a score of 1-0 being assigned when the first solution is considered better, 0-1 when the second solution is considered more convenient, and 0.5-0.5 when the two solutions are considered equivalent. When applying the more laborious method of imposed decision, initial evaluation criteria are established and subsequently each version is compared with the other versions by means of each of the considered criteria [3,5]. Axiomatic design The first considerations concerning axiomatic design were published by professor Nam Pyo Suh in 1978. Subsequently, many other researchers have contributed to promoting and developing the theory and applications of axiomatic design. Essentially, within axiomatic design, two axioms are considered. In accordance with the first axiom, the functional requirements that correspond to the problem to be solved must be independent. The second axiom states that, among many solutions to a problem, one must select the solution that requires the minimum information content [7,8]. Certain successive stages are specific to the application of axiomatic design. As a succinct example of solving a design problem by means of the axiomatic design principles, the need for equipment for investigating the degradation in time of computer subsystems will be approached. To define the so-called customer needs, one could consider that the above-mentioned equipment is necessary for doctoral research. In this way, the functional requirement of zero order could be formulated: FR0: design equipment able to highlight the changes in operating conditions for certain computer subsystems when taking into consideration less adequate operating conditions.
If this functional requirement of zero order is deeper analyzed, the functional requirements of the second first order could be identified: FR1: ensure the change of the operating temperature; FR2: ensure the change of cooling air flow rate; FR3: ensure the deposition of a dust layer on the computer subsystem housing; FR4: ensure an adequate space where the investigated computer subsystem and various sensors could be placed; FR5: measure the computer unit performances. To each of the functional requirement of the first order, adequate solving means must be found; these solutions are called design parameters (DPs). In the case of the equipment to be defined, such design parameters of first order could be: DP1: subsystem to controlled change of the temperature; DP2: Subsystem for controlled change of the cooling air flow rate; DP3: dust layer adhered to the investigated computer subsystem housing; DP4: recipient; DP5: computer system and software for the evaluation of the computer system performances. Aiming at dividing the functional requirements of first order, the following functional requirements of second order could be formulated: As one can see, the identification of the functional requirements and design parameters of zero, first and second order needs a periodical successive approaching both of functional requirements domain and design parameters domain; this activity constitutes the so-called zigzagging between the two domains. A more suggestive image concerning the functional requirements, design parameters and the correlations derived from the first axiom could be obtained by including this information in a table that represents also a decision matrix (Table 2). Taking into consideration this simplified approach of some stages that correspond to the axiomatic design principles, the solution presented in figure 3 was gradually defined. The examination of the information included in table 3 shows that there are functional requirements that are met by the same design parameters; the temperature controller is used both for measuring the temperature, comparing the real temperature with the pre-established temperature and activating the electrical resistance. This fact highlights that possibilities of improving the proposed designed solution. Reverse engineering Most authors define reverse engineering by considering a concept proposed by Chikofsky and Cross in a paper published in 1990 [9], but there is the possibility of using the concept some years before. They referred to the concept of reverse engineering as a process of analyzing a subject-system to establish its components and relations among the components and to create representations of the system in another way and using a high level of abstracting. This method is also called back engineering and is used in mechanical, electronic, chemical, software engineering. At present, the reverse engineering is considered as a scientific method of analyzing a technology, equipment etc. to understand how they were designed or function and thus identify the ways of improving and increasing the performances of the analyzed system. To succinctly illustrate the stages of applying reverse engineering, the solution of a book holder is subsequently analyzed. 1. Identification of the product to be analysed. As above-mentioned, the case of a book holder will be taken into consideration. As an initial solution, the main image presented in the U.S.A. patent no. 5,351,927 [10] will be used ( fig. 4). 2. 
Investigation or disassembling of the information concerning the structure and the way of using the original system. One could notice here that the original system includes a proper book holder placed at the end of a vertical telescopic tubular column. This column is attached to a base component that includes 3 rollers designed to provide a displacement of the book holder assembly in a horizontal plane. Some cylindrical joints facilitate the angular positioning of the proper book holder. Fig. 3. Solution for equipment aiming to test the computer system components. system. Taking into consideration the information achieved by the investigation of the solution included in the U.S.A. patent no. 5,351,927, the solution presented in figure 5 was gradually developed [11]. As someone could see, the proposed version of the book holder uses a flange piece by which it is fixed to the wall and to which book support is attached by means of a telescopic joint and a spherical joint, respectively. To attach the book to the proper book support, a cover of transparent material is used. The position of the cover could be changed for distinct books thicknesses. To ensure the adequate illumination of the book, a led type lamp could be clamped at the end of the flexible tube mounted on the proper book support. As the main improvement of this solution in comparison with the initially analyzed book holder, one could take into consideration the possibility of attaching the book holder to a wall and the inclusion of a light source. 4. Practical generation of a new product could be appreciated as one of the final stages of applying the reverse engineering method. Use of the accumulated information to generate a modified version of the original As one can see, the proposed solution does not copy the components of the initial system and, in this way, the copyright laws are not violated. The main advantages of the reverse engineering method derive from the diminishing of the time for documentation and for generation of the wanted solution; practically, the attentive examination of the wanted object existing version could directly suggest certain possibilities of improving the initial solution to a creative designer. As a disadvantage, one could take into consideration the possible psychological obstacle represented by the existing solution, which can prevent the designer from seeing how other solutions, quite different from the original solution, could be designed. Succinct comparative analysis Some succinct details concerning the use of certain creative methods in solving the problems of synthesizing improved equipment were previously presented. There are also other creative methods susceptible to be applied in synthesizing technical solutions intended to be used within various processes. Each researcher or designer could select the method he considers the adequate for solving his technical problem. To develop a short comparison of the above-mentioned creative methods, some evaluation criteria were identified. Thus, one took into consideration the method complexity level (one considers that a method involving many stages is more complex and time-consuming), the necessity to consider an initial possible known (existing) solution, the presence of stages when a retaking of the synthesizing process is necessary. In this evaluation, three qualitative criteria (high, medium and low) were used; one could accept that in establishing these qualitative criteria some subjective consideration could be present (Table 4). 
In fact, as above mentioned, over the times, each researcher or designer could establish his own way of efficient creative developing of new or improved equipment or processes, but it is important to have information concerning the existing creative methods that could be used to solve the technical problems or just to combine the characteristics or stages of distinct methods to more efficiently solve the assumed problems. Conclusions The problem of using design methods able to usually lead to new or improved solutions of equipment or processes has captured the interest of the researchers from the field of design activities. Over the years, many methods of creative solving of the technical problems were proposed and promoted. Three of these methods were taken into consideration in the research presented in this paper to be applied in finding a solution for a certain problem. Thus, some methods based essentially on the use of ideas diagram were used to identify a device for studying the behavior of conical peaks under the action of the electrical discharges, while some principles of the axiomatic design were applied to develop equipment for the investigation of the computer subsystems in less convenient operating conditions. In a third example, the reverse engineering method was applied to find a solution for a book holder. A succinct comparison of these creative methods of solving technical problems was achieved by considering some qualitative criteria. One concluded that the use of one of the presented creative methods or even of other creative methods depends on the researcher abilities, knowledge and confidence in the possible success of using a certain creative method or a group of creative methods. Another conclusion takes into consideration the necessity that, when the designer has a certain professional experience, he could elaborate a first solution without a deep investigation of the available literature, so that the designer could not be influenced by the existing solutions. Only in a second stage, a comparison of his solution with the solutions found in the available literature could contribute to the improving of his initial version of solution.
4,176.2
2019-01-01T00:00:00.000
[ "Engineering" ]
Retinal Ganglion Cells: Global Number, Density and Vulnerability to Glaucomatous Injury in Common Laboratory Mice How many RBPMS+ retinal ganglion cells (RGCs) does a standard C57BL/6 laboratory mouse have on average and is this number substrain- or sex-dependent? Do RGCs of (European) C57BL/6J and -N mice show a different intrinsic vulnerability upon glaucomatous injury? Global RGC numbers and densities of common laboratory mice were previously determined via axon counts, retrograde tracing or BRN3A immunohistochemistry. Here, we report the global RGC number and density by exploiting the freely available tool RGCode to automatically count RGC numbers and densities on entire retinal wholemounts immunostained for the pan-RGC marker RBPMS. The intrinsic vulnerability of RGCs from different substrains to glaucomatous injury was evaluated upon introduction of the microbead occlusion model, followed by RBPMS counts, retrograde tracing and electroretinography five weeks post-injury. We demonstrate that the global RGC number and density varies between substrains, yet is not sex-dependent. C57BL/6J mice have on average 46K ± 2K RBPMS+ RGCs per retina, representing a global RGC density of 3268 ± 177 RGCs/mm2. C57BL/6N mice, on the other hand, have on average less RBPMS+ RGCs (41K ± 3K RGCs) and a lower density (3018 ± 189 RGCs/mm2). The vulnerability of the RGC population of the two C57BL/6 substrains to glaucomatous injury did, however, not differ in any of the interrogated parameters. Introduction Being part of the central nervous system (CNS), and alongside its accessibility, the retina is considered a highly valuable tissue to study neurodegenerative diseases. It is currently viewed as a window to the brain, allowing a non-invasive and early detection of neurodegenerative injury signs in various CNS diseases, even in those that are not primarily associated with visual system deficits, e.g., Alzheimer's and Parkinson's disease [1,2]. One important type of neuron that resides in the retina is the retinal ganglion cell (RGC), whose axon connects our eye to our brain. These RGCs are under attack in common CNS disorders [3], including the highly prevalent glaucoma [4], and their loss often leads to vision impairment or even blindness. The retinal cell population of a common laboratory mouse (Mus musculus, C57BL/6J substrain) was first scrutinized by Jeon et al. in 1998 [5]. Historically, estimations on the RGC number in rodent species were reported using post-mortem axon counts or via retrograde tracing experiments, both exploiting the fact that the RGCs are the only afferent neurons of the retina. These methods represented the most straightforward way to assess RGC numbers before the identification of RGC markers. Nowadays-and as RGCs occupy the innermost retinal layer within the retina-RGCs can be easily assessed on (entire) wholemount retinas via immunohistochemistry with RGC markers or via murine reporter lines, both in combination with standard epifluorescence microscopy. A reporter line that specifically labels the RGC population in the murine retina is the VGLUT2-IRES-Cre × THY1-STOP-YFP mouse, as introduced by the Sanes lab [6]. These mice have been increasingly used to isolate RGCs via fluorescence-activated cell sorting (FACS) [6][7][8]. Panmarkers that specifically label RGCs include tubulin beta-3 chain (TUBB3) [9], brain-specific homeobox/POU domain protein 3A (BRN3A) [10] and RNA binding protein with multiple splicing (RBPMS) [11]. 
Following the identification of these RGC markers, the development of (semi) automated RGC counting algorithms on retinal wholemounts was fostered, e.g., for BRN3A [10,12,13] or RBPMS [13,14] labeling. We recently developed a deep learning tool for the automated detection and quantification of murine RBPMS-immunopositive (RBPMS + ) RGCs, called RGCode-short for Retinal Ganglion Cell quantification based On DEep learning [14]. Compared to manual counting on frames, fully automated counting of entire retinal wholemounts promotes scientific rigor as it allows for higher throughput, total blinding to experimental groups and reducing both bias and inter-/intra-operator variability. Additionally, it may also facilitate inter-study comparisons of RGC density data, e.g., between different mouse strains or different glaucoma models. As only a limited number of retinas was used to set up RGCode, we aimed to run a bigger pool of retinas through the tool to assess the RBPMS + RGC population in common laboratory mice, i.e., C57BL/6J and -N mice. This allowed us to deduce the definitive number of RGCs in widely used laboratory mice and interrogate possible substrain-and sex-related differences in RGC counts/densities. Both substrainand sex-related differences are important issues raised by many research groups, yet still repeatedly causing problems in the field. In addition to interrogating the global RGC count/density between C57BL/6J and -N mice, we also included a comparative analysis of retinal layer thickness via optical coherence tomography (OCT) and RGC functioning via electroretinography (positive scotopic threshold response or pSTR measurements). The abundant and widespread use of C57BL/6J and -N mice also implies that they are bred at various locations across the globe, including vendors and independent academic colonies. This most likely introduces heterogeneity between mice from the same substrain, yet bought from a different supplier and/or bred at a different location for several generations, e.g., between European and American mice. For example, Jeon et al. reported a difference in the total number of cells in the ganglion cell layer of American versus European C57BL/6J mice [5]. In the glaucoma research field, there have been some problems with adopting the popular experimental microbead occlusion model in geographically dispersed research groups, allegedly due to differences between American versus European mice. C57BL/6N mice are known to harbor a mutation (Rd8) that introduces mild photoreceptor degeneration [15][16][17][18][19], possibly rendering their retinas more prone to glaucomatous injury. Mattapallil et al. reported the presence of the Rd8 mutation in all interrogated C57BL/6N cohorts, each bought from American vendors [16], yet much less is known about European C57BL/6N mice. For this reason, we also assessed whether the RGCs of European C57BL/6J and -N mice harbor a different vulnerability to glaucomatous injury. Experimental Animals Within this study, 10-13-week-old C57BL/6J (JAX stock #000664, KU Leuven's breeding colony, Belgium, originally acquired via Charles River Laboratories, France, the European supplier of Jax ® mice) or C57BL/6N (JAX stock #005304, acquired from Charles River Laboratories, Italy) mice of either sex were used and housed under standard laboratory conditions. All experiments were approved by the Institutional Ethical Committee of KU Leuven and were in accordance with the European Communities Council Directive of 22 September 2010 (2010/63/EU). 
Glaucoma Model The microbead occlusion model was used to induce a glaucomatous-like injury in the eyes of C57BL/6J and-N mice, according to the protocol of Ito and Belforte et al. [20] and described in more detail in [21]. Briefly, 2 µL of magnetic microbeads (Dynabeads™ M-450 Epoxy, ThermoFisher Scientific, Waltham, MA, USA) was intracamerally injected and manually repositioned with a handheld magnet towards the iridocorneal angle under general anesthesia (isoflurane, Iso-Vet 1000 mg/g, Dechra, Northwich, UK). Mice were euthanized five weeks post-microbead occlusion. Retrograde Tracing, Electroretinography and Optical Coherence Tomography To retrogradely trace the RGCs, a foam was soaked with hydroxystilbamidine (OHSt, 4%, Life Technologies, Carlsbad, CA, USA) dissolved in saline with 10% demethylsulfoxide (Sigma-Aldrich, Saint Louis, MO, USA). Six days before euthanasia, this foam was placed on top of the superior colliculus after aspirating the overlying cortex, according to the protocol of [22] and described in more detail in [21]. For this surgical procedure, mice were sedated via an intraperitoneal mixture of medetomidine and ketamine (1 mg/kg, Domitor, Pfizer, New York City, NY, USA and 75 mg/kg, Anesketin, Eurovet, Bladel, The Netherlands), which was reversed with a subcutaneous injection of 1 mg/kg atimapezol (Antisedan, Pfizer). Functioning of RGCs was studied via the positive scotopic threshold response (pSTR), as described previously [7,21]. Briefly, mice were dark adapted overnight, one day before euthanasia. The next day, and upon pupil dilation, responses to 50 dim white light flashes (0.0001 cd·s/m 2 ) were recorded in a dark room (Celeris, Diagnosys, Lowell, MA, USA) under general anesthesia (Cfr. mixture above). The amplitude was defined as the difference between the peak amplitude of the positive wave (pSTR) and the baseline signal, whereas the latency time was defined as the time between the flash onset and the occurrence of this peak amplitude of the pSTR. Next, retinal layers were imaged via spectral domain spectral domain optical coherence tomography (OCT, Envisu R2210, Bioptigen, Morrisville, NC, USA). The thickness of each layer was measured at 16 different locations across the retinal area and averaged per mouse via the InVivoVue Diver 3.0.8 software (Bioptigen), all as described previously [21]. Imaging, RGC Counting and Statistics Retinas were imaged with an upright, wide-field epifluorescence microscope (Leica DM6, Wetzlar, Germany). Via the Las X Navigator, each retina was outlined using a 5× objective, followed by tile scanning of the entire wholemount with a 20× objective. Imaged retinas were uploaded in the RGCode tool without any image preprocessing. RGCode is a fully automated deep learning tool that outlines the retinas and counts the RGCs, rendering information about the global RGC number and density per wholemount. RGCode was originally set up to detect RBPMS + RGCs, yet as RBPMS and OHSt are both cytoplasmic labels and thus render a similar signal, RGCode was retrained to count OHSt + RGCs. As such, both RBPMS + and OHSt + RGCs were automatically quantified via RGCode. A detailed description of this tool can be found in [14], and the tool can be downloaded via https://gitlab.com/NCDRlab/rgcode. Graphs and statistical parameters were extracted from Prism (GraphPad, San Diego, CA, USA, v9.3.1). Statistical significance was set to p ≤ 0.05 for all analyses, and statistical tests are provided in the figure legends. 
Data are reported as mean ± SD in the text and visualized as mean ± SEM in the figures. No Sex-Related Differences in Retinal Area, Global RGC Number or Density To evaluate whether the differences between C57BL/6J and -N mice could be sex-dependent, the data were split according to their sex. Notably, no differences in retinal area, RGC count or RGC density between female versus male mice were observed for any of the studied substrains (Figure 2a-c, Table 1). Mild Photoreceptor Layer Thinning in C57BL/6N Mice but No Difference in RGC Functioning between C57BL/6 Substrains To study the differences in RGC count and density in more depth, the thickness of the retinal layers was studied via OCT (Figure 3a,b), and RGC functioning was interrogated via pSTR measurements (Figure 3c). Modest retinal layer thinning was observed in the photoreceptor layer of C57BL/6N mice compared to C57BL/6J mice, corresponding to a thinning of 7.60 ± 4.22%. The thickness of other retinal layers as well as the total neuroretina did not differ between C57BL/6J and -N mice (Figure 3b, Table 2). RGC functioning was also not found different between the two substrains, both in terms of pSTR amplitude and latency time (Figure 3c).
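The pSTR amplitude and latency compared above follow the definitions given in the methods: the peak of the positive wave minus the baseline signal, and the time from flash onset to that peak. A minimal Python sketch of this computation is given below; the sampling rate, search window and synthetic trace are illustrative assumptions and not details of the Celeris acquisition or of the actual analysis.

```python
import numpy as np

def pstr_metrics(trace_uv, fs_hz, flash_onset_s, baseline_s=0.02):
    """Amplitude (peak of the positive wave minus baseline) and latency
    (flash onset to peak) of a pSTR trace. Assumes the trace starts at t = 0
    and the flash occurs at `flash_onset_s`; window choices are illustrative."""
    t = np.arange(trace_uv.size) / fs_hz
    baseline = trace_uv[t < flash_onset_s][-int(baseline_s * fs_hz):].mean()
    post = (t >= flash_onset_s) & (t <= flash_onset_s + 0.2)   # search window
    i_peak = int(np.argmax(trace_uv[post]))
    amplitude = trace_uv[post][i_peak] - baseline
    latency = t[post][i_peak] - flash_onset_s
    return amplitude, latency

# Example with a synthetic averaged response standing in for the 50 dim flashes.
fs = 1000.0
t = np.arange(0.0, 0.5, 1.0 / fs)
trace = 8.0 * np.exp(-((t - 0.17) ** 2) / (2 * 0.02 ** 2)) + np.random.normal(0, 0.3, t.size)
amp, lat = pstr_metrics(trace, fs, flash_onset_s=0.05)
print(f"pSTR amplitude: {amp:.1f} uV, latency: {lat * 1000:.0f} ms")
```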
No Substrain-Dependent Differences in RGC Vulnerability to Glaucomatous Damage In addition to strain- and sex-dependent differences in RGC number/density, we evaluated strain-dependent differences in the susceptibility of RGCs to glaucomatous injury. For this purpose, the most widely employed experimental glaucoma model, i.e., the microbead occlusion model, was used. The effect of glaucomatous injury on RGC numbers was evaluated via RBPMS labeling (Figure 4a), retrograde tracing with OHSt (Figure 4b) and pSTR measurements (Figure 4c), all five weeks after the induction of the glaucomatous pathology. No difference in the susceptibility of the RGC population was noted between C57BL/6J and -N mice in any of the studied parameters (Figure 4a-c). Mean loss of RBPMS + RGCs in C57BL/6J was on average 9.94 ± 7.10% versus 8.35 ± 5.93% in C57BL/6N mice. The average loss of OHSt + RGCs was estimated at 12.36 ± 10.02% and 18.38 ± 6.36% RGCs in C57BL/6J and -N mice, respectively. Last, C57BL/6J and -N mice also showed a similar decline in pSTR amplitude: 25.79 ± 23.15% versus 21.93 ± 21.84%, respectively. The Total Number of RGCs in C57BL/6J and -N Mice The total number of RGCs in standard laboratory mice is reckoned to range between 40,000 and 60,000 cells, representing ± 1% of the total retinal cells. With the help of our freely available deep learning model to count RGCs on RBPMS-stained wholemounts-RGCode [14]-we studied differences in retinal area, global RGC count and -density in two C57BL/6 substrains, i.e., C57BL/6J and -N. The average RGC numbers and standard deviations reported here as assessed via the pan-RGC marker RBPMS highly correspond to other studies, in which the RGC number was determined via axon counts, retrograde tracing or BRN3A immunolabeling [5,12,[23][24][25][26][27][28][29][30][31] (Table 3). Similarly, the total retinal area obtained in our study is similar to previous observations, which reported an average retinal area of 14.6 ± 0.9 mm 2 in C57BL/6 mice [23]. The computed global densities of naive C57BL/6J and -N retinas are also in line with previous reports. RGC densities in C57BL/6 mice are usually estimated to be around 3000 RGCs/mm 2 , calculated after retrograde tracings [23,26,32] or with manual counts of RGCs on wholemount retinas after RBPMS immunolabeling [33,34]. Substrain-Dependent Differences in Retinal Area, RGC Count and -Density Nowadays, RGC numbers/densities in murine models are estimated via RGCs somas counts on retinal wholemounts instead of axonal counts on optic nerve cross sections (Cfr. Introduction).
However, most research groups still manually count RGCs on retinal sections or on small sampling areas from retinal wholemounts, not covering the entirety of the retina. The global RGC number/density is then approximated via area calculations, which are rough estimates as the RGC density greatly varies between the central and peripheral retina. Our automated approach, i.e., the deep learning tool RGCode, enables the quantification of entire retinal wholemounts, providing a precise assessment of the entire RGC population. In addition to these differences in the employed technique to assess the RGC number, variations in RGC number/density estimates could be explained by (sub)strain differences. Even between cohorts of an identical (inbred) mouse strain, differences are noted, especially when the mice are bought from different suppliers and thus bred at different locations, as evidenced by [5,24]. C57BL/6J and -N mice are, by far, the most commonly used inbred laboratory mice in neuroscience. While originally derived from the same parental mice, comparative genome sequencing has identified apparent genetic differences between C57BL/6 substrains [36-40]. In ophthalmologic research, the use of C57BL/6J mice is preferred over C57BL/6N, as the latter universally harbor the Rd8 mutation in the Crb1 gene. This mutation is associated with mild photoreceptor disorganization and degeneration, which worsens upon aging [15-19]. Reported ophthalmologic dissimilarities between both substrains include differences in retinal organization, visual acuity (optomotor response) [38], number of retinal vessels, occurrence of white spots (fundus endoscopy) [15,16,38,41], response to circadian disruption [42] and expression of pro-inflammatory markers [43,44]. Notably, both the retinal area and RGC number were found to be significantly lower in C57BL/6N mice compared to C57BL/6J mice in our study. The lower RGC number was, however, not proportional to the smaller retinal area, as the total RGC density was also significantly lower in C57BL/6N mice. The reported differences in global RGC count are in accordance with the study of Williams et al. in 1996, who compared the RGC axon number of different inbred and outbred laboratory mouse strains [24]. The authors did not compare C57BL/6J and -N mice, but they did show a remarkable difference (±17%) in RGC number between C57BL/6J cohorts originating from two different Jackson Laboratory colonies (Bar Harbor, ME, USA). Of note, various reports mention the use of "C57BL/6" mice but do not always specify the substrain and/or breeder (Table 3). As evidenced by our findings and by others, genetic background effects could be a confounding factor in any study. Hence, we urge breeders as well as scientists to thoroughly document any information regarding the experimental mice to guarantee the reliability and reproducibility of the research data. No Sex-Related Differences in Retinal Area, Global RGC Number or Density Gender differences are widely known to affect disease prevalence and accompanying treatments, including the well-known and persisting gender bias in clinical research [45]. In glaucoma, gender is an acknowledged risk factor, with a higher incidence in women [46,47]. Despite all this knowledge, many animal studies use mixed-sex cohorts, and little attention, especially in the field of glaucoma, has been paid to how male and female mice respond differently in preclinical studies.
In humans, gender-related differences were found both on a structural (OCT of retinal layers) [48,49] and functional (electroretinography) [50,51] level. Sex-related differences in the visual system of mice are also noted, including differences in contrast sensitivity [52], divergent age-related changes in retinal gene expression [53] and accelerated degeneration in female retinal degeneration models [54-56]. In C57BL/6N mice, Rd8 lesions are also more common in male versus female mice [15]. In our study, the difference in RGC count and density between C57BL/6J and -N mice could, however, not be explained by sexual dimorphism, as no significant differences in retinal area, global RGC count or density between female and male mice were found. This finding is also in accordance with the Williams study, which also did not detect sex differences in RGC number [24]. Hence, mixed-sex cohorts of C57BL/6J or -N mice can be used in the study of RGC number/density, yet one should always bear in mind that responses to any injury model and/or therapy could differ in male versus female mice in such preclinical studies. Mild Photoreceptor Layer Thinning in C57BL/6N Mice but No Difference in RGC Functioning between C57BL/6 Substrains Building on the finding that the RGC density differs between C57BL/6 substrains, we evaluated retinal layer thickness via OCT and RGC functioning via pSTR measurements. In line with reports showing identical retina-wide functioning via full-field flash electroretinography between wildtype and Rd8 mice [19,57], we did not observe a difference in RGC functioning between both substrains. The global thickness of the (neuro)retina was unaltered, while thinning of the photoreceptor layer was apparent in C57BL/6N mice. This thinning can probably be attributed to the Rd8 mutation that primarily affects the photoreceptors in C57BL/6N mice [15], although we did not verify the presence of the Rd8 mutation in our mouse cohort. Our reported values for the thickness of each retinal layer in C57BL/6J and -N mice via OCT imaging highly correspond to those reported by Moore et al. [15] and Ferguson et al. [58], respectively. No Substrain-Dependent Differences in RGC Vulnerability to Glaucomatous Damage We previously showed a difference in RGC susceptibility to glaucomatous damage in pigmented (C57BL/6N) versus albino (CD-1) mice [59]. However, in the same study, we did not observe a difference between wildtype and genetically modified C57BL/6N mice, the latter being albino C57BL/6N-Tyr C mice with a single homozygous Cys103Ser mutation. In the current study, we evaluated the intrinsic vulnerability of RGCs to glaucomatous damage upon microbead occlusion in two commonly used laboratory mouse strains, i.e., C57BL/6J and -N mice. Interestingly, and although the C57BL/6N mice possess a mutation that is associated with retinal degeneration, no differences in RGC loss, axonal transport loss or loss of RGC functioning were observed. This finding is in accordance with other studies reporting no difference in susceptibility of C57BL/6J and -N mice to retinal damage, e.g., after autoimmune optic neuritis [19], laser-induced choroidal neovascularization [44] or light-induced apoptosis [60]. Of note, all parameters were evaluated at five weeks post-microbead occlusion, as significant RGC loss was detected from this sampling time point on. Compared to the average reduction of RBPMS+ cells, the loss of OHSt+ cells was slightly higher.
This marked difference denotes the percentage of RGCs that are disconnected from their target area, yet still alive, and/or the occurrence of retrograde transport losses in the microbead occlusion model. However, comparing structural with functional RGC loss revealed that functional deficits precede structural ones. The decline in pSTR peak amplitude was proportionally more than twice as large as the reduction in RBPMS+ cell number. Hence, the pSTR seems to be a more sensitive measure to evaluate the effect of mild ocular hypertension on RGCs as compared to RBPMS immunolabeling, as also discussed in [21]. A last discussion point we would like to briefly highlight is the occurrence of a contralateral effect after a unilateral injury, also referred to as the mirror effect. Various reports denote responses in the contralateral, uninjured eye after unilateral optic nerve injury, including molecular changes, neuroinflammation and even cell death, often proportional to the severity of the retinal insult [61-68]. In addition, in pressure-dependent glaucoma models, a bilateral glial response has been previously denoted upon unilateral injury, e.g., in the episcleral vein cauterization model [69], the laser photocoagulation model [70] and the microbead occlusion model [71]. In the microbead occlusion model, the Calkins lab reported a redistribution of astrocyte-derived metabolites from unstressed (contralateral) to stressed (microbead occluded) optic nerves [71]. In our study, however, we did not observe anatomical (RBPMS density) or functional (pSTR) differences between naïve and contralateral eyes five weeks after unilateral microbead occlusion (data not shown). Conclusions In the search for neuroprotective strategies, the quantification of RGC numbers offers a measurable end point to determine the degree of protection. In that context, knowing the total RGC number and/or global densities in standard laboratory animals is a prerequisite, alongside the use of proper control mice. In this report, we documented the global, normative RGC numbers/densities of the two most commonly used laboratory mice in neuroscience, i.e., C57BL/6J and -N mice aged ± 3 months, by automated counting of RBPMS+ cells on entire wholemount retinas via the deep learning tool RGCode. We highlighted differences in RGC numbers/densities between (European) C57BL/6J and -N mice. Once more, this study provides a valuable warning to the vision science community to be mindful when choosing control mice, i.e., using controls with an identical genetic background and preferably even littermates, as well as to provide detailed descriptions of the experimental mice in any research communication. Although we did not detect sexual dimorphism in RGC number/density, nor any substrain-dependent differences in RGC vulnerability to glaucomatous damage, we wish to advise researchers to always validate whether sex or genetic differences are a confounding factor in their study.
5,775
2022-08-29T00:00:00.000
[ "Biology" ]
Crystal structure of SARS-CoV-2 main protease in complex with protease inhibitor PF-07321332 experiment; Cloning, protein expression, and purification of SARS-CoV-2 M pro The cell cultures were grown and the protein was expressed according to a previous report (Jin et al., 2020). The cell pellets were resuspended in lysis buffer (20 mM Tris-HCl pH 8.0, 150 mM NaCl, 5% glycerol), lysed by high-pressure homogenization, and then centrifuged at 25,000 g for 30 min. The supernatant was loaded onto a Ni-NTA affinity column (Qiagen, Germany) and washed with lysis buffer containing 20 mM imidazole. The His-tagged M pro was eluted with lysis buffer containing 300 mM imidazole. The imidazole was then removed through desalting. Human rhinovirus 3C protease was added to remove the C-terminal His tag. SARS-CoV-2 M pro was further purified by ion exchange chromatography. The purified M pro was transferred to 10 mM Tris-HCl pH 8.0 through desalting and stored at -80 °C until needed. Crystallization, data collection, and structure determination PF-07321332 was synthesized by Prof. Ma's lab. SARS-CoV-2 M pro (6 mg/ml) was incubated with 1 mM PF-07321332 for 1 hour at room temperature and the complex was crystallized by the hanging-drop vapor diffusion method at 20 °C. The best crystals were grown using a well buffer containing 0.1 M MES pH 6.0, 5% polyethylene glycol (PEG) 6000, and 3% DMSO. The cryo-protectant solution was the reservoir solution with 20% glycerol added. X-ray data were collected on beamline BL19U1 at Shanghai Synchrotron Radiation Facility (SSRF) at 100 K and at a wavelength of 0.97852 Å using a Pilatus3 6M detector. Data integration and scaling were performed using the program XDS (Kabsch, 2010). The structure was determined by molecular replacement (MR) with PHASER (McCoy et al., 2007) and Phenix 1.19.2 (Liebschner et al., 2019) using the SARS-CoV-2 M pro structure (PDB accession number: 6LU7) as a search template. The model from MR was subsequently subjected to iterative cycles of manual model adjustment with Coot 0.8 (Emsley et al., 2010) and refinement was completed with Phenix REFINE (Afonine et al., 2012). The inhibitor PF-07321332 was built according to the omit map. The phasing and refinement statistics are summarized in Table S1. Coordinates and structure factors have been deposited in the PDB with the accession number 7VH8. Intact protein analysis 1 μL of PF-07321332 (10 mM in DMSO) was added to 50 μL of the proteins (1 mg/mL). The mixtures were kept at room temperature for 30 min. Liquid chromatography-mass spectrometry (LC-MS) analyses were performed in positive-ion mode with an Agilent 6550 quadrupole-time-of-flight (QTOF) mass spectrometer (Santa Clara, CA) coupled with an Agilent 1260 high-performance liquid chromatograph (HPLC; Santa Clara, CA) for detecting the molecular weight of intact proteins. The samples were eluted from a Phenomenex Jupiter C4 300 Å LC column (2×150 mm, 5 μm) over a 15 min gradient from 5% to 100% acetonitrile containing 0.1% formic acid at a flow rate of 0.5 mL/min. The acquisition method in positive-ion mode with Dual Agilent Jet Stream electrospray used a capillary temperature of 250 °C, a fragmentor voltage of 175 V, and a capillary voltage of 3000 V. Mass deconvolution was performed using Agilent MassHunter Qualitative Analysis B.06.00 software with the BioConfirm workflow. Tandem mass spectrometry analysis The samples were precipitated and redissolved in 8 M urea, and then digested for 16 h at 25 °C by chymotrypsin at an enzyme-to-substrate ratio of 1:50 (wt/wt).
The digested peptides were desalted and loaded onto a homemade 30 cm-long pulled-tip analytical column (ReproSil-Pur C18 AQ 1.9 μm particle size, Dr. Maisch GmbH, 75 μm ID × 360 μm OD) connected to an Easy-nLC1200 UHPLC (Thermo Scientific) for mass spectrometry analysis. The elution gradient and mobile phase constitution used for peptide separation were as follows: 0-1 min, 5%-8% B; 1-114 min, 8-35% B; 115-116 min, 35-50% B; 116-120 min, 60-100% B (mobile phase A: 0.1% formic acid in water and mobile phase B: 0.1% formic acid in 80% acetonitrile) at a flow rate of 300 nL/min. Peptides eluted from the LC column were directly electro-sprayed into the mass spectrometer with the application of a distal 1.8-kV spray voltage. Survey full-scan MS spectra (from m/z 300-1500) were acquired in the Orbitrap analyzer (Eclipse) with resolution r = 120,000 at m/z 400. The dynamic exclusion time was set at 30 seconds. One acquisition cycle included one full-scan MS spectrum followed by MS/MS events within a 3 s cycle time, sequentially generated on the most intense ions selected from the full MS spectrum at a 30% normalized collision energy. The acquired MS/MS data were analyzed against a UniProtKB E. coli database (released on Nov. 11, 2016) supplemented with the nsp5 sequence, using Proteome Discoverer 2.4. In order to accurately estimate peptide probabilities and false discovery rates (FDR), we used a decoy database containing the reversed sequences of all the proteins appended to the target database. The FDR was set at 0.01. The mass tolerance for precursor ions was set at 20 ppm. Chymotrypsin was defined as the cleavage enzyme and the maximal number of missed cleavage sites was set at 4. Protein N-terminus acetylation, methionine oxidation and covalent compound binding were set as variable modifications. The modified peptides were manually checked and labeled. The synthesis of PF-07321332 The starting material (amine salt 1) was prepared according to the literature procedure (Venkatraman et al., 2006).
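As a brief aside on the target-decoy strategy described above: appending reversed (decoy) sequences to the search database lets one estimate the false discovery rate as, roughly, the number of decoy hits divided by the number of target hits above a score threshold. The Python sketch below is illustrative only; the scores, counts and threshold are hypothetical and unrelated to the actual Proteome Discoverer workflow.

```python
def estimate_fdr(target_scores, decoy_scores, threshold):
    """Rough target-decoy FDR: decoys passing / targets passing the score cut-off."""
    targets = sum(s >= threshold for s in target_scores)
    decoys = sum(s >= threshold for s in decoy_scores)
    return decoys / targets if targets else 0.0

# Hypothetical peptide-spectrum-match scores, for illustration only
target_scores = [35.2, 28.9, 41.7, 19.3, 52.1, 33.0]
decoy_scores = [12.4, 8.1, 21.5, 9.9, 30.2, 7.7]

print(estimate_fdr(target_scores, decoy_scores, threshold=20.0))  # 0.4 for these toy scores
```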
1,223.6
2021-10-22T00:00:00.000
[ "Chemistry", "Physics" ]
An Image Processing Approach based on GNU Image Manipulation Program GIMP to the Panoramic Radiography : We have recently proposed, in a paper published by the International Journal of Sciences, the use of some freely available image processing tools to enhance images of the fundus of the eye. In particular, we have discussed the use of GIMP, the GNU Image Manipulation Program, and of some wavelet filters and fractional gradient methods from other image processing programs. Here we propose GIMP to enhance the images given by panoramic radiography. This approach can produce an output image, which helps detecting faint details in such radiographic plates, which are quite important for dental treatments. Some case studies are discussed. Introduction Medical imaging is a sub-discipline of the biomedical engineering, a new and rapidly evolving interdisciplinary field, which aims filling the gap between biology and several disciplines of engineering and applied science.It is therefore a branch of the applied science, mainly oriented to the medicine for both diagnostic and therapeutic purposes.In particular, the medical imaging is devoted to the study of noninvasive techniques that aim obtaining images of some internal aspects of the body.Among the wellknown techniques of medical imaging, we have radiology, ultrasonography and magnetic resonance.These techniques comprise both technical aspects of data acquisition and problems connected with diagnostic interpretation. Imaging and image processing turn out to be valuable means to infer some properties of biological structures from the corresponding observed signals [1].Sometimes, the raw images obtained from diagnostic equipment need an improvement.There are many resources useful for processing images, most of them freely available and quite friendly to use, that can help the user to separate the objects, relevant to the given study, from the background of the image.In a recent paper [2] for instance, we have proposed the use of some tools to enhance images of the fundus of the eye.In particular, we have discussed the use of GIMP, the GNU Image Manipulation Program, and the use of wavelet filters and fractional gradient tools from other image processing programs [3,4].Here we propose GIMP to enhance the images given by panoramic radiography.This approach can produce an output image, which helps the detection of faint details in such radiographic plates.Some case studies are discussed. GIMP software GIMP is a free and open-source software used for processing images and for free-form drawing.It is useful for resizing, cropping and converting images between different formats.It is also collecting several specialized tasks.GIMP is designed to be augmented with plug-ins and extensions, which can improve its functionality. 
According to the GIMP user manual, any image can be edited, considering it made of many layers in a stack.A GIMP image then is as a stack of transparencies, where each transparency is a layer.Each layer in an image is made of several channels.In RGB images, there are normally three channels, each consisting of a red, green and blue channel.Colour sublayers look like slightly different grey images; when put together, they make a complete image.A fourth channel can exist, the alpha channel, which measures the opacity of the image.A toolbox allows accessing the tools available for image editing.Among them, we find filters and brushes, as well as transformation, selection, layer and masking tools.For what concerns colours and grey-tones of images, we can adjust brightness and contrast, and also change them with the Curves tool.Gradients are also integrated into the toolbox: there are a number of default gradients included with GIMP, such as Laplace and Sobel, suitable for edge detections.Moreover, GIMP has more than a hundred of standard effects and filters, including those supporting sharpening and blurring of images.A part of them we can find in the GIMP Auto submenu.This submenu contains operations which automatically adjust the distribution of grey-tones, without requiring any input from the user.We can "Stretch Contrast", "Stretch HSV" and "Normalize" the histogram (the reader can find more details in the tutorials at http://www.gimp.org). GIMP supports importing and exporting with a large number of different file formats, the GIMP's native format is designed to store all information GIMP can contain about an image.The software supports image formats such as BMP, JPEG, PNG, GIF and TIFF.Other formats with read/write support include PostScript documents and X bitmap images.It can import Adobe PDF documents and the raw image formats used by many digital cameras, but cannot save to these formats. Panoramic radiography The panoramic radiography of dental arches is a procedure that produces an image of the teeth, upper and lower jaws and jawbones on a single image.To obtain the projection of the dental arches, which are curvilinear structures, it is necessary to use X-ray techniques based on a rotating tube about the patient's head.Let us note that panoramic radiography is a form of tomography and that these techniques can be compared [5].In panoramic radiography, images of multiple planes are recorded to have a composite final image.Maxilla and mandible are into focus, whereas the structures that are superficial or deep inside are blurred. The panoramic radiography is an essential element in oral radiology and dentistry [6][7][8].Its principle was described in 1922, but first commercially available machines are of the early 1960s [9].Today, such radiographic devices, which are fundamental for an initial assessment of the state of teeth prior to a dental treatment, are rather common.After a panoramic image, the dentist can perform a more targeted intraoral radiography.Panoramic plates are also useful to evaluate the state of dentition in individuals in the age of development, to highlight any irregular tooth or impacted teeth and bone lesions.Moreover, they reveal inflammatory problems or cystic tumours. 
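Although all of the processing in this paper is carried out interactively in GIMP, the two operations used most often below, Sobel edge detection and automatic contrast stretching, are easy to reproduce programmatically. The Python sketch that follows (using NumPy and SciPy, which are independent of GIMP) illustrates the same ideas on a grayscale radiograph loaded as a 2-D array; it is a conceptual companion to the GIMP tools rather than the workflow actually used in the paper.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(img):
    """Gradient magnitude from horizontal and vertical Sobel filters."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)

def stretch_contrast(img):
    """Linearly rescale grey tones to 0-255 (comparable to GIMP's 'Stretch Contrast')."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9) * 255.0

# Example on a synthetic 'radiograph' (a random array standing in for real data)
radiograph = np.random.rand(256, 256)
edges = stretch_contrast(sobel_edges(radiograph))
inverted = 255.0 - edges   # grey-tone inversion, as used for some of the figures
```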
In the Figure 1, we can see in the upper panel, a basic panoramic radiograph image.The image is a courtesy of the Coronation Dental Specialty Group on Wikipedia.In the lower panel of the same figure, we can see the output of Sobel filter for edge detection of GIMP.This filter is detecting the edges of objects in the image: in this manner the endodontic, periodontal and coronal-radicular details are enhanced.Let us stress that a sequence of panoramic images can be interesting for the recording the evolution of teeth and the planning of their treatments.This is important, in particular for teeth that have suffered an endodontic treatments, such as root canal treatments, in their medical history [10]. In the Figure 2 we can see how the panels of Figure 2 look like after an inversion of colour tones made by GIMP.In the lower panel of Figure 2, we can see a clear enhancements of the details of the roots of teeth.Panoramic images are also used for the mixed dentition, as shown in the Figure 3 (courtesy Coronation Dental Specialty Group on Wikipedia): in it, we can see the wisdom teeth buds.The image is processed using the Sobel filter: the result is shown in the lower panel with grey-tones inverted. Here in the following, we discuss some case studies where specific tools of GIMP are applied to enhance the details in images. Cysts of the jaws Bones of jaws, mandible and maxilla, are the bones of human body with the highest prevalence of cysts.Since these cysts rarely cause any symptoms [11], most are discovered during the panoramic radiography.In the X-ray images, cysts appear as radiolucent dark areas, that is, areas permitting the passage of radiant energy that have radiopaque white borders [12].In the Figure 4 in fact, we see a cyst with the borders enhanced by the Sobel filter of GIMP.Let us emphasize that, with GIMP, we can select an area and process just the selected region. Stafne defects A Stafne defect, also known as Stafne bone cyst, is a depression of the mandible on the lingual surface, that is, the side nearest the tongue.The Stafne defect is thought to be a normal anatomical variant and does not represent a pathologic lesion.This defect is usually discovered by chance during routine dental radiography [13,14].In the Figure 5 we can see a panoramic radiograph showing Stafne defect in the right mandible, below the inferior alveolar nerve canal.The image is a courtesy of the Coronation Dental Specialty Group on Wikipedia.After selecting the part of the image containing the defect, we can apply GIMP Auto submenu.It contains operations which automatically adjust the distribution of grey-tones, without requiring any input from the user.Here we show the results of "Stretch Contrast", "Stretch HSV" and of "Normalize".Using these automatic filters we can have some information on the density of bone near and inside the defect. Pattern of mental nerve As discussed in References 15 and 16, the pattern of mental nerve into the mental foramen is an important pre-surgical landmark in mandibular premolar regions.As panoramic radiographs are routinely used in pre-surgical evaluation, the researchers in [16], undertook a study to evaluate the reliability of panoramic X-ray machines for determination of the location of mandibular foramen.The study revealed that the most common pattern of entry of mental nerve was a straight one (about 79% of the total radiographs examined). 
The researchers in [16] tell that panoramic radiography may not be a very reliable imaging modality for identifying the presence of an anterior loop of the nerve, a condition which needs to be determined before planning the surgical procedures.In any case, we can try to enhance the images of nerve and mental foramen with GIMP.In Figure 6, we can see the mandibular canal after applying the GIMP Auto submenu "Equalize", "White Balance" and "Stretch Contrast".In the Figure 7, we can see an example of image enhancement obtained by GIMP Retinex.This tool improves visual rendering of an image when lighting conditions are not good.The algorithm, which is the root of Retinex filter, the MultiScale Retinex with Color Restoration algorithm, is inspired by the eye biological mechanisms to adapt itself to these conditions.The result of Retinex filter can be adjusted selecting different levels, scales and dynamics.In the Figure 8, we have another example of the use of Retinex. Retinex can be applied to the whole image, as proposed in the Figure 9: the enhancement of the mental canals in the output image is evident. Conclusion In this paper we have proposed the use of an image processing program to enhance the images obtained by means of X-ray panoramic radiography.Let us remark that several other researches on image processing for panoramic radiography are available in literature.For instance, image processing can be used to obtain a panoramic X-ray device suitable for complete maxillofacial diagnoses, extending therefore the diagnostic coverage of panoramic images [17].In [18], the image processing is used to enhance the images, when a reduction of the radiation dose is required, and in [19], an appropriate approach for the robust estimation of noise statistic in dental panoramic X-rays images is given.Here, aiming to reach a large audience, we have proposed the use of a program, the GIMP, which is freely available on the web.In the paper we have shown that GIMP has several tools which can be quite useful to enhance and investigate the details of panoramic images.Some of them are able of automatically adjust the distribution of grey-tones, without requiring any input from the user.These are, of course, the simplest to use in preliminary investigations.In the case of the analysis of mental canal and foramen, Retinex seems to be the tool providing the best results.In them, the pattern of mandibular canal and the mental foramen are important landmarks.We can try to enhance the related images using GIMP; here an example from the panoramic image of Figure 5, a courtesy of the Coronation Dental Specialty Group on Wikipedia.We can see the mandibular canal.After selecting the part of the image containing this canal, we can apply the GIMP Auto submenu.Here we used "Equalize", "White Balance" and "Stretch Contrast". Figure 1 - Figure 1 -In the upper panel, we can see a basic panoramic radiograph showing impacted wisdom teeth in a 16 year old.The image is a courtesy of the Coronation Dental Specialty Group on Wikipedia.In the lower panel, we can see the image obtained using the Sobel filter for edge detection of GIMP. Figure 2 - Figure 2 -In the upper panel, we can see the panoramic radiograph of Figure 1 with inverted color tones, as obtained by GIMP.The same for the lower panel, showing the edge detection.Note the details of the roots of teeth. 
Figure 3 - In the upper panel we can see a panoramic radiograph showing the mixed dentition of a nine year old child (courtesy Coronation Dental Specialty Group on Wikipedia). We can also see the wisdom teeth buds. The image is processed using the Sobel filter. The final result is proposed with the grey-tones inverted. Figure 6 - Panoramic radiographs are routinely used in pre-surgical evaluation. In them, the pattern of the mandibular canal and the mental foramen are important landmarks. We can try to enhance the related images using GIMP; here is an example from the panoramic image of Figure 5, a courtesy of the Coronation Dental Specialty Group on Wikipedia. We can see the mandibular canal. After selecting the part of the image containing this canal, we can apply the GIMP Auto submenu. Here we used "Equalize", "White Balance" and "Stretch Contrast". Figure 7 - To enhance the pattern of the mandibular canal and to see the mental foramen, we can use another GIMP tool, the "Retinex" tool. Retinex improves the visual rendering of an image when lighting conditions are not good. The algorithm at the root of the Retinex filter, the MultiScale Retinex with Color Restoration algorithm, is inspired by the biological mechanisms of the eye to adapt itself to these conditions. In this figure, Retinex is applied to the whole image (middle) and to a part of it (bottom). The original image (top) is that of Figure 1, a courtesy of the Coronation Dental Specialty Group on Wikipedia. Figure 8 - Another image of the mandibular canal and mental foramen. We can see the different results obtained by the GIMP Auto submenu for "Equalize" and "White Balance", and by the "Retinex" GIMP tool. The original image is a courtesy of Wikipedia User Werneuchen. Figure 9 - This is the panoramic image of Figure 1, as we can see it after applying the "Retinex" GIMP tool.
3,274.6
2015-05-30T00:00:00.000
[ "Computer Science", "Engineering" ]
Cognitive decline monitoring through a web-based application Cognitive decline usually begins after individuals reach maturity and becomes more evident in late adulthood. Rapid and constant cognitive screenings allow early detection of cognitive decline and motivate individuals to participate in prevention interventions. Due to accelerated technological advances, cognitive screening and training are now available to the layperson using electronic devices connected to the internet. Large datasets generated by these platforms provide a unique opportunity to explore cognitive development throughout life and across multiple naturalistic environments. However, such data collection mechanisms must be validated. This study aimed to determine whether the data gathered by commercial visuospatial and phonological working memory tests (CogniFit Inc., San Francisco, USA) confirm the well-established argument that age predicts cognitive decline. Data from 3,212 participants (2,238 females) who were 45 years old or older were analyzed. A linear regression analysis explored the relationship between age and working memory while controlling for gender, sleep quality, and physical activity (variables that are known to affect working memory). We found that age negatively predicts working memory. Furthermore, there was an interaction between age and gender for visuospatial working memory, indicating that although male participants significantly outperformed females, the relationship between age and working memory differs for females and males. Our results suggest that the computerized assessment of visuospatial and phonological working memory is sensitive enough to predict cognitive functions in aging. Suggestions for improving the sensitivity of self-reports are discussed. Further studies must explore the nature of gender effects on cognitive aging. Introduction Age-associated cognitive decline refers to the non-pathological deterioration of cognitive processes through aging, usually related to fluid intelligence - such as memory, processing speed, and reasoning - but not necessarily crystallized abilities - like phonological and numerical abilities or general knowledge (Deary et al., 2009; Murman, 2015, but see Buades-Sitjar et al., 2022). Most people present a relatively slow deterioration that is detectable at the decade timescale, and those whose cognitive abilities deteriorate faster are more likely to have a type of neuropathology (Hayden et al., 2011). In addition, research has found that cognitive function is associated with health-related quality of life in older people (Kazazi et al., 2018). In the preservation of these functions, there may also be mediating factors such as those underlying cognitive reserve (Adam et al., 2013; Guzmán-Vélez and Tranel, 2015; Alvares Pereira et al., 2022). Therefore, cognitive functioning has become a priority for interventions that target the wellbeing of elderly individuals.
Among the cognitive abilities that undergo decline with age, working memory has gained considerable attention in recent years.Working memory refers to the multi-component cognitive system that allows individuals to temporarily hold and manipulate information necessary to perform complex cognitive tasks (Baddeley, 2017).This multi-component system consists of a phonological loop -that stores and rehearses verbal informationa visuospatial sketchpad -that stores and manipulates visual and spatial information and a central executive -that coordinates both.Working memory is crucial in everyday tasks such as language, problem-solving, and decision-making.Studies have shown an inverse relationship between age and working memory, demonstrating a decline in working memory with increasing age (Bopp andVerhaeghen, 2005, 2020;Pliatsikas et al., 2019).Furthermore, Pliatsikas et al. observed an interaction between age and gender, with men exhibiting a more pronounced decline in working memory than women as a function of age.Finally, evidence suggests that phonological working memory resists aging, while age affects visuospatial working memory more (Kumar and Priyadarshi, 2013). According to the International Psychogeriatric Association, the inclusion criteria for age-associated cognitive decline are (1) selfperception of cognitive decline for at least 6 months and (2) low scores at least one standard deviation below age and education in cognitive tests.When using these criteria, the prevalence of agerelated cognitive decline is greater than 20% in people older than 64 and increases linearly with age (Hänninen et al., 1996;Busse et al., 2003;Schönknecht et al., 2005).In addition, conversion rates from age-associated cognitive decline to dementia -a pathological form of cognitive decline-are around 30%, as reported in various longitudinal studies (Ritchie et al., 2001;Busse et al., 2003). Being able to track age-associated cognitive decline could help identify those individuals at risk for dementia.Identifying them at the pre-clinical stage is a priority for health professionals, given that early detection allows clinicians to design better interventions and prevention strategies (Cruz-Oliver and Morley, 2010).However, there are multiple challenges associated with such identification.One of these is that cognitive functions usually change gradually and continuously across the lifespan.Severity might be among the few differentiating factors between non-pathological age-related cognitive decline and early onset of dementia (Murman, 2015).Salthouse (2019) indicates that the timing of cognitive decline onset is critical to determine the best course of action.Describing typical cognitive aging, including age-associated cognitive decline, is crucial to detect such onset.To understand the heterogeneity of cognitive aging across individuals, Tucker-Drob (2019) recommends framing it as a continuous model that involves analyzing changes in cognitive trajectories over time.Even though gold-standard diagnostic tools for dementia (like the Mini-Mental State Test) are essential, relevant, and valid, a continuous model that constantly addresses cognitive functioning could detect changes in earlier stages, providing opportunities for prevention and early interventions (Grodstein, 2012). 
Continuous monitoring of cognitive functions can be achieved through computerized assessments (see Asensio and Duñabeitia, 2023), which enable the collection of vast amounts of data from a single patient, overcoming the impracticalities related to having multiple face-to-face interviews.Allard et al. (2014) suggest that measuring cognition over time on mobile devices effectively complements traditional testing.Given the latest advances in artificial intelligence, massive amounts of data can be quickly processed and analyzed, contributing to dementia diagnosis and treatment (Khodabandehloo et al., 2021;Revathi et al., 2022).CogniFit (CogniFit Inc., San Francisco, USA) is a digital health platform offering computerized cognitive assessments and interventions for users, researchers, and practitioners.Assessments and interventions are available for desktop and laptop computers and mobile devices like tablets and smartphones.CogniFit's general cognitive assessment (CAB) is an online neurocognitive assessment that aims to evaluate cognitive functions in the following domains: attention, perception, memory, executive functions, coordination, and physical, psychological, and social wellbeing.This instrument has been widely used for cognitive assessment (Yaneva and Mateva, 2017;Sánchez-Sansegundo et al., 2021;Berbegal et al., 2022;Tapia et al., 2022Tapia et al., , 2023aTapia et al., , 2023b;;Duñabeitia et al., 2023). Given the availability of large databases from CogniFit users, with diverse naturalistic environments, as opposed to controlled research environments, in the current study, we investigated the validity of user-generated data to capture cognitive decline across the lifespan.Furthermore, with the technology available for continuous monitoring of cognitive functions across different ages, this study aimed to determine whether the data gathered by commercial visuospatial and phonological working memory tests confirm the well-established argument that age predicts cognitive decline.10.3389/fnagi.2023.1212496 2. Materials and methods Participants The dataset for this analysis was extracted from a more extensive CogniFit user database of 27,619 users.To ensure the dataset was appropriate for analysis, we excluded participants who were younger than 45 years of age.This resulted in a final sample of 3,212 participants (2,238 women; mean age = 54.99;SD = ±8.49). No further exclusion criteria were applied as the study aimed to explore user-generated data. Visuospatial working memory Participants completed the CogniFit game "Glowing Circles" to measure this ability.This task is a variant of the well-known Corsi test, a widely used neuropsychological assessment tool for evaluating visuospatial working memory (Kessels et al., 2000).In the Corsi test, an experimenter taps a series of squares on a board, and participants must reproduce the same tapping sequence.Similarly, in the glowing circle task, participants are presented with a sequence of glowing circles that they must reproduce in the same order. 
In this study, participants were exposed to 10 circles on the screen in a determined set of coordinates. On each trial, the computer flashed some of these circles for 2 s, and participants had to click them in the same order. The task was presented in two blocks of different interstimulus intervals. The first block had no interstimulus interval, while the second block had an interstimulus interval of 4 s. Each block had nine trials. The first block started with two flashing circles, while the second block started with the maximum correct trials from the first block minus 3. Each subsequent trial in a block added an extra flashing circle. Figure 1 contains a visual representation of the task. If a user responded incorrectly to one trial of a given difficulty level, another trial of the same difficulty level would be presented. The task would stop upon incorrect completion of two trials of the same difficulty level. Data are presented in z-scores, obtained by standardizing accuracy rates using the following formula: z = (x − µ) / σ, where z is the z-score, x is the raw accuracy data, µ is the mean (average) of the distribution, and σ is the standard deviation of the distribution. The z-scores were calculated using reference databases with a sample size of 915,400 users. Phonological working memory Participants completed the CogniFit game "The Numbers" to measure this ability. This task is a variation of the Wechsler Digit Span test, which measures phonological short-term and working memory (Wechsler, 1981). In this task, the participant is presented with a progressively longer series of numbers that must be recalled in the same order they were presented. In this study, participants were presented with a large circle in the middle of the screen displaying a random set of numbers sequentially for 1 s, followed by a 1-s delay. Once the set of numbers was presented, ten smaller circles appeared around the central circle, each containing a number from 0 to 9 arranged clockwise. Participants were expected to click on the numbered circles in the correct order to reproduce the series that was shown previously. After each successful trial, a number was added to the series of numbers. The block had nine trials. As in the case of the preceding task, here, participants also had two attempts to complete a trial of a given difficulty level. The task was discontinued after two consecutive errors in trials of the same difficulty level. Figure 2 contains a visual representation of the task. Data are presented in z-scores, obtained by standardizing accuracy rates. The z-scores were calculated using reference databases with a sample size of 883,192 users. Sociodemographic and wellbeing questionnaire The wellbeing questionnaire asks participants about their physical, psychological, and social wellbeing. The questionnaire also asks about more general issues outside these themes, such as manual dominance. Given that exercise and sleep quality are well-known modulators in the relationship between age and cognitive functioning, we included the questions that explored them in our analysis. In this sense, we included the yes/no questions (1) Do you get 7-9 h of sleep each night? and (2) Do you exercise often? In terms of potential cognitive reserve factors, considerations included whether the participant is currently engaged in a profession (Are you presently employed?) and whether they are proficient in multiple languages [Do you speak two (or more) languages fluently?].
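A minimal sketch of this standardization step in Python follows. In practice the reference mean and standard deviation come from CogniFit's normative databases of several hundred thousand users; the numbers below are placeholders chosen only to make the example runnable.

```python
def z_score(raw_accuracy, ref_mean, ref_sd):
    """Standardize a raw accuracy value against a reference distribution: z = (x - mu) / sigma."""
    return (raw_accuracy - ref_mean) / ref_sd

# Placeholder reference values; the real norms come from CogniFit's databases
ref_mean, ref_sd = 0.62, 0.14
print(z_score(0.55, ref_mean, ref_sd))   # -> -0.5, i.e. half an SD below the reference norm
```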
Data analysis plan To conduct our analyses, we used the software RStudio, version 4.2.1 (R Core Team, 2021). To fulfill the purpose of the study, which was to explore the relationship between working memory and age, we conducted a linear regression for each dependent variable (visuospatial and phonological working memory test scores). The independent variable of interest was age, with physical activity and sleep duration included as control variables, together with an age × gender interaction. Concerning cognitive reserve factors, supplementary analyses of covariance (ANCOVA) were conducted with gender and cognitive reserve variables as fixed effects and age as a covariate, to determine whether these factors nullified the main effects examined in this study. Post hoc analyses were conducted using the Bonferroni multiple comparisons test in cases where significant interactions were found. Results Before individually assessing each task, we ran a Pearson's correlation analysis to confirm that the participants' scores in the phonological and visuospatial working memory tasks were significantly correlated (r = 0.36, p < 0.001), as expected for two tasks that measure partially overlapping cognitive skills. First, we examined the relationship between visuospatial working memory, age, gender, sleep duration, and physical activity. Age negatively predicted visuospatial working memory, male participants outperformed female participants, and there was a significant interaction between age and gender, indicating that the difference between genders was more marked at younger ages, with older participants showing negligible differences depending on their gender. See Figure 3 for a visual representation.
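The authors fit these models in R; purely as an illustration of the model structure (a working memory z-score regressed on age, control variables and an age × gender interaction), an equivalent specification can be sketched with Python's statsmodels formula interface. The column names and data frame below are hypothetical and do not correspond to CogniFit's actual export format.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per participant, with columns
# vswm_z (visuospatial WM z-score), age (years), gender ('F'/'M'),
# and the yes/no self-reports sleep_ok and exercise.
df = pd.read_csv("cognifit_export.csv")   # hypothetical file name

# Linear regression with an age x gender interaction plus control variables,
# mirroring the structure of the analysis described above
model = smf.ols("vswm_z ~ age * gender + sleep_ok + exercise", data=df).fit()
print(model.summary())
```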
We also found a gender effect showing that male participants significantly outperformed female participants in visuospatial working memory tasks but not phonological working memory, confirming the results of previous studies (Fournet et al., 2012; Morais et al., 2018). A possible explanation for gender differences in visuospatial working memory is the reported visuospatial advantage for male individuals (Voyer et al., 2017). This result, however, contradicts Choi et al. (2014), who found gender differences in phonological working memory in elderly individuals, in which male participants outperformed female participants. However, although significant, the explained variance of gender was negligible. In addition to finding a gender effect, we also found an age × gender interaction for the visuospatial working memory tasks. Male participants showed a steeper decline than female participants. There is conflicting evidence about this age × gender interaction, as there are studies that confirm our results (Lipnicki et al., 2013; McCarrey et al., 2016; Pliatsikas et al., 2019) while other studies show that women tend to have a faster cognitive decline than men (Zhang, 2006; Lin et al., 2015; Levine et al., 2021). The literature suggests that the effects of age-related neuroendocrine changes on cognitive performance should be studied separately by sex (Hampson, 1990), given the relevant impact of testosterone levels on memory, executive functions, and spatial task execution (Holland et al., 2011), as well as the influence of estrogens on verbal fluency, spatial tasks, and memory in women (Hampson, 1990). Hormonal levels not only change across the lifespan but also exhibit cyclical patterns. We included sleep quality and exercise in the model since these variables have been shown to affect cognitive performance (Angevaren et al., 2010; Lee et al., 2015; Brocklebank et al., 2021; Joo et al., 2021; Song and Park, 2022). However, we did not find an effect of these variables when they were considered. This does not imply that sleep or exercise do not contribute to cognitive health but that the instruments used to measure them might not have been sensitive enough to capture the effect. They were measured with a single yes/no question that does not fully describe the nature of these lifestyle choices. Further data collection could contemplate using a validated psychometric instrument to better explore the nuances of sleep quality and physical exercise to evaluate their effect on older individuals' cognitive functioning. In addition, online computerized cognitive assessment platforms could also integrate physiological data obtained by wearable devices that continuously track sleep and physical activity. More accurate and dynamic data will improve any cognitive evaluation platform's predictive power.
Similarly, additional analyses were conducted exploring the interaction of factors considered relevant to participants' cognitive reserve.In this regard, we observed that the effects of participants' age and gender persisted on their scores, both in visuospatial and phonological working memory.Furthermore, we noted a protective effect of participants' professional activity on their scores in both explored domains of working memory.This supports the notion that continuous occupational engagement helps maintain cognitive function (Adam et al., 2013;Alvares Pereira et al., 2022).Likewise, we observed an effect of fluency in other languages on phonological working memory.In this regard, the evidence suggests that bilingualism contributes to preserving cognitive function (Bialystok et al., 2007;Guzmán-Vélez and Tranel, 2015;Antón et al., 2019;Bialystok, 2021). The present study has several limitations that should be considered before its interpretation.The first limitation is that the evaluations were conducted in an uncontrolled and naturalistic environment, so researchers did not have control over the conditions in which these data were gathered.The second limitation is that the data presented here comes from a crosssectional design, which does not account for an actual cognitive decline, but describes lower performance with increasing age.Further studies should consider collecting multiple data points of users to characterize cognitive function change across time and evaluate the effect of exposure to the tests (Salthouse, 2019).This is especially important when discriminating between ageassociated cognitive decline and dementia, given that one of their differentiating factors is the rate at which deterioration occurs (Deary et al., 2009).Also, future investigations should delve more extensively into the relationship between occupational activity, bilingualism, and visuospatial and phonological working memory, especially considering the interactions highlighted in the supplementary analyses of this study. Notwithstanding these limitations, this study is relevant given the large sample tested and the need for developing robust platforms that continuously monitor aging individuals' cognitive function and lifestyle characteristics.Such platforms have the potential to characterize better dynamic cognitive functions as well as the potential buffering mechanisms to prevent pathological cognitive decline by providing individualized, targeted strategies to approach age-associated cognitive decline.More research needs to be conducted to better characterize individual variability and factors that predict cognitive decline in the aging population (Erickson et al., 2022).If used as a recurrent evaluation tool, technological tools, such as app-based screeners, could better characterize these risk factors and eventually identify individuals at risk for developing pathological cognitive decline.Such an approach opens doors to a new potential for intervention and the use of artificial intelligence trained to recognize patterns in large amounts of data.Artificial intelligence could aid in creating individualized intervention plans well-suited to contexts, risk factors, lifestyles, and needs, contributing to the wellbeing of the aging population.
4,338.6
2023-10-05T00:00:00.000
[ "Computer Science", "Medicine", "Psychology" ]
Diagnostics of condition of soil lying in the foundation of sewer collectors and metro tunnels by crosshole sounding method in urban underground space The non-destructive geophysical crosshole sounding method can be used to control and monitor the condition of natural and improved soils. This work studies the application of the crosshole sounding method to determine the condition of soils lying in the foundations of sewer collectors and tunnels. The investigation uses the crosshole sounding apparatus APZ-1 developed by LLC "Geodiagnostika". The study considered the results of crosshole sounding of soils at the site of the reconstruction of the Church of The Icon of the Mother of God "Joy of All Who Sorrow" and on the section of buried metro tunnels between "Lesnaya" and "Ploschad Muzhestva" stations in St. Petersburg (Russia). The author postulates that crosshole sounding provides the most accurate information about the foundation soils for making a decision on strengthening them for trouble-free operation of sewer collectors and tunnels. Introduction The conditions of underground construction in St. Petersburg (until 1991, Leningrad) are unique in the large volume of unstable soft soils occurring at depths of up to 100 meters below the ground surface. The risks of the construction and operation of underground structures in soft saturated soils are associated with various construction incidents, including disruption of the continuity of fences, breakthroughs of groundwater into the working areas, and decompaction of the foundation soils. A huge accident occurred in St. Petersburg in 1995 between the metro stations "Lesnaya" and "Ploschad Muzhestva", after which sections of the tunnels were flooded. The urgency of the problem of controlling and monitoring the condition of soils, slurry walls, jet grouting, frozen ground and grout curtains in the underground space is determined by the requirements of accident-free policies for underground works as well as by the need to maintain the safety of buildings during their operation and service life. Since 1955, the All-Russian Scientific Research Institute of Exploration Methods (VITR), and from 2006 LLC "Geodiagnostika" (www.geodiagnostics.ru), have investigated the relationship between the parameters of elastic wave propagation in soils and have performed tests to investigate the continuity of different ground improvement measures, including ground freezing and various types of grouting (jet grouting, compaction grouting, etc.), for the purpose of ensuring their effectiveness. Results of the study have been used as a basis for the development of new technologies for control and monitoring of the condition of natural and improved soils by the geophysical crosshole sounding method. The aim of the research was to obtain the dependencies between the parameters of elastic waves and the modulus of elasticity (deformation) of soil. The ultimate objective of the research is the creation of such control and monitoring technologies. 1. Physical backgrounds of crosshole sounding method Soils (rocks) differ in their physical and mechanical properties. Elastic wave parameters (speed, sound pressure and spectrum) depend on the properties of the environment they propagate in. The change of elastic wave parameters during crosshole sounding is the physical background for determining the condition of natural soils and of soils improved using grouting methods, jet grouting, soil freezing, etc. [1-4]. Many earlier studies have demonstrated a correlation between elastic wave parameters and the strength and elasticity of materials [5-7].
Tunnels and sewer collectors are a "foreign" body embedded in the soil massif. Propagation of elastic waves in soil mass is defined by general acoustic rules. If there is a different body in the path of the elastic wave in the tested soil mass, then an "acoustic shadow" is observed in the form of a sharp decrease of speed, sound pressure and frequency of the elastic wave impulse. The physics of the acoustic shadow phenomenon is based on reflection of the elastic wave on the boundary between the tunnel surface and the soil mass and on the elastic wave's rounding (diffraction) from the obstacle with extension of its propagation path. The occurrence of the "acoustic shadow" phenomenon when sounding between two holes is a physical background for determining the location of underground tunnels. Methodology for crosshole sounding from holes or embedded pipes The author has been working on developing a new scientific field "Seismoacoustic diagnostics of the condition of man-made and natural soil massifs" that considers soils and artificial objects arranged in the underground space (tunnels, fences, piles) as an object of diagnosis and elastic waves transmitted through or reflected from an object as carriers of a diagnostic information [2]. Research methods included crosshole sounding from observation holes drilled on both sides of sewer collector or tunnels and comparison of measured parameters of elastic wave with a table of diagnostic indications of different soil conditions ( Figure 1). The distance between the observation holes was 10 -50 m. Diagnostic parameters showing the condition of materials are the speed of elastic wave (arrival time), the acoustic pressure range and the elastic wave pulse spectrum ( Figure 2). The spectrum is calculated in a digital signal processing program using the fast Fourier transform (FFT) method. The FFT parameters are selected based on the recording parameters. The speed of the longitudinal elastic wave is the key diagnostic parameter. Additional advantages of the crosshole testing method are the following: a possibility to implement crosshole tomography due to the higher number of differently directed rays passing through the crosshole area. At present, author has accumulated a fairly large amount of production data on the change in the parameters of an elastic wave during propagation in natural and artificial soils [1,4]. The state of the soil is characterized by a certain combination of parameters (diagnostic feature) of the elastic wave impulse, which makes it possible to accurately diagnose the location of the soil in the section. An increase in the speed of an elastic wave indicates a decrease in the plasticity of sandy-clayey soils (in the direction from fluid to solid) and an increase in the strength of artificial soils. The shift of the spectrum towards high frequencies indicates the presence of water saturation in sandy soils and an increase in fluidity and thixotropy of sandy-clay soils (sandy loam, loam, clay). The decrease in the amplitude of the elastic wave is associated with the attenuation of the elastic wave and is caused by inhomogeneity, decompaction, porosity, lack of groundwater or stratification of soils. Apparatus APZ-1 ( Figure 3) is designed for measuring propagation time, amplitude and impulse frequency of elastic waves in formations between the source and receiver to determine elasticity characteristics of the area. 
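Before turning to the APZ-1 hardware in more detail, the following Python sketch illustrates how the three diagnostic parameters described above (speed from the first-arrival time, peak sound pressure, and the pulse spectrum via an FFT) could be extracted from a single recorded trace. It uses synthetic data, an assumed 20 m hole spacing and an assumed 100 kHz sampling rate; it is not the APZ-1 processing software.

```python
import numpy as np

fs = 100_000                      # sampling rate in Hz (assumed)
distance_m = 20.0                 # assumed distance between observation holes
t = np.arange(0, 0.05, 1 / fs)    # 50 ms record

# Synthetic received pulse: a damped 3 kHz wavelet arriving after ~12.5 ms
arrival = 0.0125
trace = np.exp(-((t - arrival) * 800) ** 2) * np.sin(2 * np.pi * 3000 * (t - arrival))
trace[t < arrival] = 0.0

# 1) Speed of the elastic wave from the first-arrival time
first_arrival = t[np.argmax(np.abs(trace) > 0.05)]
velocity = distance_m / first_arrival            # ~1600 m/s for this synthetic pulse

# 2) Peak sound pressure (relative amplitude of the pulse)
peak_amplitude = np.abs(trace).max()

# 3) Spectrum of the pulse via the fast Fourier transform
spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(trace.size, d=1 / fs)
dominant_freq = freqs[np.argmax(spectrum)]       # ~3000 Hz here

print(velocity, peak_amplitude, dominant_freq)
```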
The equipment received a metrological calibration certificate from the All-Russian Research Institute of Metrology named after D. I. Mendeleev (St. Petersburg). The basic relative error of the time measurement is ±3%. The operating principle of the hardware system consists of emitting an acoustic wave, receiving the impulse after it has passed through the studied area (concrete, soil, soil-cement, etc.), recording the impulse on the PC hard drive, measuring the impulse parameters by the operator, processing the observation results and assessing the state of the propagation area according to the impulse parameters. To excite the elastic-wave impulse, the transmitter triggers the current impulse generator, cable and electric spark source by an electrohydraulic shock. The metering system consists of receivers of elastic waves, cables and a PC-based set of hardware and software tools. The software consists of the Windows operating system, command-mode software and the WinPOS digital signal processing program. Additional software can be used for digital processing of the measurements, including tomographic imaging of the crosshole space. The maximum sounding distance of no less than 100 m is achieved in hard monolithic rocks such as granite, silicified sandstone and quartzite.

Results of crosshole sounding of soils at the site of the reconstruction of the Church of The Icon of the Mother of God "Joy of All Who Sorrow"
The Church of The Icon of the Mother of God "Joy of All Who Sorrow" was built in St. Petersburg in the period 1893-1896. In 1933 the church was blown up and dismantled. On the site of the church, only the basement and the foundation had survived. In 2016, the reconstruction of the church began. The problem for the construction of the new church building was the sewer collector laid under the foundation of the church in 1973. To develop a design solution for strengthening the foundation of the church, it was necessary to examine the soils under the collector. Owing to the presence of the collector and the foundation, it was impossible to take soil samples under the collector by drilling. To obtain information on the condition and deformation modulus of the soil at the base of the collector, the crosshole sounding method was used in 2016. Crosshole sounding was carried out between observation holes 1, 2, 3, 4 and 5, drilled from the ground surface to a depth of 15 m (Figure 4). The holes were encased with steel pipes and filled with technical water. The measuring interval of the transducers along the hole was 0.5-1 m. To study the relationship between the speed of the elastic wave and the modulus of elasticity of soils, we used data from laboratory studies of soils at the construction sites of metro mine shafts and the sewage system in St. Petersburg, where crosshole sounding was carried out. The experimental calibration dependence between the speed of the elastic wave V (m/s) and the elastic modulus E (MPa) was approximated by a function of the form E = 0.025·V − 33.1 MPa. The calculated modulus of elasticity of the soft loam at the base of the sewer collector is 6.2 MPa, of the plastic loam 10.3 MPa, and of the hard-plastic loam 13 MPa.

4.2. Results of the investigation of the geological structure and position of the buried tunnels on the section between the "Lesnaya" and "Ploschad Muzhestva" metro stations
In St. Petersburg, near the square "Ploschad Muzhestva", there are two buried metro tunnels in the underground space. The tunnels, 6 m in diameter, were laid in 1974-1975 across the valley of the ancient bed of the river Neva and operated for 20 years.
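As a quick numerical check on the church-site calibration dependence quoted above, a minimal sketch is given below. The wave speeds used in the example are back-computed from the quoted moduli purely for illustration; they are not measured field values.

```python
def elastic_modulus_mpa(velocity_m_s):
    """Empirical calibration from the text: E [MPa] = 0.025 * V [m/s] - 33.1.

    Valid only within the velocity range covered by the laboratory calibration data;
    negative results simply mean the speed lies below the calibrated range.
    """
    return 0.025 * velocity_m_s - 33.1

# Back-of-the-envelope check against the moduli quoted for the collector base
# (soft loam ~6.2 MPa, plastic loam ~10.3 MPa, hard-plastic loam ~13 MPa).
# The speeds below are inverted from those moduli, i.e. illustrative, not measured.
for label, v in [("soft loam", 1572), ("plastic loam", 1736), ("hard-plastic loam", 1844)]:
    print(f"{label}: V = {v} m/s -> E = {elastic_modulus_mpa(v):.1f} MPa")
```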
In December 1995, the tunnel sections between the "Lesnaya" and "Ploschad Muzhestva" stations were disconnected from the rest of the metro network and deliberately flooded after sand and water had penetrated into the tunnels. In 2012, Geodiagnostika LLC performed crosshole sounding to determine the state of the soil and the position of the buried tunnels (Figure 7).

Figure 7. The relationship between the speed of the elastic wave and depth in vertical planes crossing (1) and not crossing (2) the buried tunnels at section PK183 between the "Lesnaya" and "Ploschad Muzhestva" metro stations in St. Petersburg.

The position of the tunnels in the underground space is established on the basis of the "acoustic shadow" phenomenon: a sharp decrease in the speed of the elastic wave (Figure 7), in the sound pressure and in the vibration frequency along the sounding rays crossing the tunnel. The upper buried tunnel is located in the depth interval 64-70 m; the lower buried tunnel is located in the depth interval 76-82 m. The depth measurement error is ±1 m. In vertical planes that do not intersect the tunnels, the values of the speed of the elastic wave, the sound pressure and the vibration frequency along the acoustic rays correspond to natural soils. According to the complex of acoustic features, the geological section is divided in depth into the following parts (Figure 7). The groundwater level is at a depth of 5 m. Water-resistant soil layers occur at depths of 22-26 m (layered loams) and 32-46 m (loams of solid consistency). The water-resistant interlayers differ in their characteristics. The loams in the depth interval of 22-26 m show lower values of the speed, sound pressure and frequency of the elastic wave, which indicates layering and a mainly plastic consistency. The loams in the depth interval of 32-46 m show an increased speed of the elastic wave together with a decreased sound pressure and frequency, which indicates a mainly hard-plastic consistency. Water-saturated and fluid soils (sand and sandy loam) occur in the depth intervals of 5-22, 26-32 and 46-86 m and stand out by a relatively low wave speed at high values of the sound pressure and frequency (5000-7000 Hz) of the elastic wave. Strikingly, the lower tunnel was built in water-bearing sands that do not appear to have sufficient bearing capacity. One of the causes of the loss of continuity of the tunnel walls and the penetration of fluid soils into the tunnels in 1995 could be a decrease in the density of the soils at the base of the tunnels.

Conclusions
The research supports the following conclusions. 1. Crosshole sounding technology provides diagnostics of the condition of soils lying in the foundations of sewer collectors and metro tunnels. 2. The main diagnostic parameters of the crosshole sounding method for determining the type and condition of soils are the speed, sound pressure and spectrum of elastic waves. 3. Monitoring the condition of the foundation soils of sewer collectors and tunnels in the most critical sections by crosshole sounding from observation holes would allow timely assessment of the need for soil improvement measures, ensuring continuity of facility operations.
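As a small illustration of the tunnel-location procedure described in Section 4.2, the sketch below flags contiguous depth intervals where the wave speed along a sounding ray drops sharply relative to the background profile. The drop ratio and the synthetic numbers are illustrative assumptions, not the field data.

```python
import numpy as np

def shadow_intervals(depths_m, velocities_m_s, background_m_s, drop_ratio=0.7):
    """Return (top, bottom) depth intervals where the elastic-wave speed falls
    well below the background measured along rays that do not cross the tunnels."""
    depths = np.asarray(depths_m, dtype=float)
    v = np.asarray(velocities_m_s, dtype=float)
    anomalous = v < drop_ratio * np.asarray(background_m_s, dtype=float)
    intervals, start = [], None
    for i, flag in enumerate(anomalous):
        if flag and start is None:
            start = depths[i]
        if not flag and start is not None:
            intervals.append((start, depths[i - 1]))
            start = None
    if start is not None:
        intervals.append((start, depths[-1]))
    return intervals

# Synthetic example: a tunnel between 64 m and 70 m should appear as one
# contiguous low-velocity ("acoustic shadow") interval.
depths = np.arange(0, 100, 2)
v_background = np.full(depths.shape, 1600.0)
v_along_ray = v_background.copy()
v_along_ray[(depths >= 64) & (depths <= 70)] = 900.0
print(shadow_intervals(depths, v_along_ray, v_background))
```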
3,005.8
2021-01-01T00:00:00.000
[ "Geology" ]
Node Scheduling for AF-Based Over-the-Air Computation Over-the-air computation is a promising technique for efficiently aggregating data in sensor networks. This method requires that signals from all nodes arrive at the sink aligned in signal magnitude, which faces the reliability issue, especially in times of channel fading. To solve this problem, in this letter, we propose an amplify-and-forward based relay, Coherent Relay with Node Scheduling (CohR-NS), where a relay node is used to help forward signals of multiple nodes. Relay transmission power (TP) increases with the number of nodes using the relay, which is a bottleneck. We investigate how relay TP changes with relay position, and under the constraint of relay TP, study (i) how to select nodes to use relay when not all nodes requiring a relay can be supported simultaneously, (ii) how to select more nodes to use relay so as to reduce node TP, when there is a surplus in relay TP. We formulate this as an ILP (integer linear programming) problem, propose an efficient heuristic method, and confirm its effectiveness by simulation evaluation. AirComp requires that signals from all nodes arrive at the sink simultaneously and be aligned in signal magnitude. However, this is difficult when some nodes face channel fading. Transmission power (TP) control [6], [7], which preamplifies signals, helps to partially solve this problem, but it alone cannot well deal with deep fading. Among the methods [8] to improving the reliability of AirComp, one is CohRelay, an amplify-and-forward (AF) based relay method [9], where one relay node is used to help multiple nodes, forwarding their signals to the sink. It was shown that the relay TP increases with the number of nodes using relay, which becomes the bottleneck of the system. But the constraint of relay TP is not explicitly considered. In this letter, we propose a Coherent Relay with Node Scheduling (CohR-NS) method to optimize computation mean squared error (MSE) of the AF-based AirComp. We start with the optimal solution where all nodes transmit their signals directly. Then, we use the relay to help nodes whose signal magnitudes are misaligned, and further study how to schedule fewer or more nodes to use the relay, according to relay TP. To the best of our knowledge, this is the first work on node scheduling for AirComp. The contribution of this letter is three-fold, as follows: • We investigate how relay TP changes with relay position, which enables new policies for node scheduling. • We formulate node scheduling as an ILP problem and propose an efficient heuristic method. • We analyze and discuss the effectiveness, complexity, and optimality of the proposed scheduling method. Simulation evaluations confirm that the proposed method effectively reduces MSE meanwhile suppresses node TP. II. RELAY MODEL FOR AIRCOMP We consider a task of data aggregation in a sensor network consisting of K nodes, a sink d and a relay r, as shown in Fig. 1. Sink d collects data from all nodes and computes a function of these data, sum as an example in this letter. Nodes near the sink directly transmit their signals to the sink, but nodes far from the sink may experience deep channel fading. A This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ relay r is used to help these nodes, by forwarding an extra copy of their signals. In the transmission, both relay r and nodes face the constraint of maximal transmission power. 
Therefore, all nodes are divided into two groups. A node k ∈ N d will only directly transmit its signal to sink d while a node k ∈ N r will use relay r, besides the direct transmission. Here, N r ∪ N d = {1, 2, . . . , K }, and N r ∩ N d is empty. All nodes, relay r and sink d use a single antenna. The extension of this model to support multiple antennas at the sink is left as future work. We assume that (i) channel coefficients, h k ,r ∈ C (C is the set of complex numbers) from node k to relay r, h k ,d ∈ C from node k to sink d, and h r ,d ∈ C from relay r to sink d, are constant within the period of transmission, and (ii) sink d knows all the channel coefficients. This assumption on the availability of channel coefficients is common in previous studies on AirComp [6], [7]. The whole transmission is divided into two slots, analogy to the conventional unicast AF method [10]. In the first slot, all nodes in N r simultaneously transmit their signals to relay r, and node k transmits a signal x k ∈ C, which has zero mean and unit variance (E{|x k | 2 } = 1, |x k | ≤ v ), using a Tx-scaling factor b k ,1 . After applying a Rx-scaling factor a r ∈ C, the computation result at relay r is In the second slot, all nodes, including both nodes in N r and those in N d , simultaneously transmit their signals to sink d, and node k uses a Tx-scaling factor b k ,2 . Meanwhile, relay r also forwards the signals (y r ) received in the first slot, using a Tx-scaling factor b r . Then, after applying a Rx-scaling factor a d ∈ C, the computation result at sink d is Here, n r and n d are additive white Gaussian noise with zero mean and variance being σ 2 . This result can be rewritten as where a r = a d h r ,d b r a r is the equivalent Rx-scaling factor for the relayed signals. The computation MSE is computed as the expectation of squared difference between y d and k x k , with respect to random signals (x k ) and noises (n d , n r ), as follows: where it is assumed that signals are uncorrelated and independent of noises. Because it is possible to adjust the phase of b k ,1 and b k ,2 to ensure that a r h k ,r b k ,1 and a d h k ,d b k ,2 are positive real numbers, for simplicity, it is assumed that d all belong to R + , the set of positive real numbers, in the analysis. The instantaneous TP, |b k ,i x k | 2 , i = 1, 2, should be no more than P , the maximal TP. Let P max denote P /v 2 . Then, we have |b k ,i | 2 ≤ P /v 2 = P max . A node k ∈ N d only transmits in the second slot, b k ,1 = 0 and |b k ,2 | 2 ≤ P max . A node k ∈ N r transmits its signal twice, and the overall TP constraint, |b k ,1 | 2 + |b k ,2 | 2 ≤ P max , is also applied. At the relay node, the power to transmit a signal x k is The selection of N r should both ensure k ∈Nr TxR k ≤ P max and minimize MSE. Then, the problem is how to find optimal parameters (a r , a d , b k ,1 , b k ,2 ) and node scheduling (N r ) that minimize MSE, under the power constraint, as follows: III. SCHEDULING NODES TO USE RELAY To solve the problem in (6), it is necessary to first decide N r , which is called node scheduling. A. Initializing N r Without relay, a r = 0, all nodes directly transmit their signals to the sink, and MSE is computed as follows, where b k and a are the Tx-and Rx-scaling factors, respectively. Then, the optimal Rx-scaling factor a = a 0 is computed by considering misalignment in signal magnitude [6], [7]. A straightforward relay method is to let nodes whose signals are misaligned form a set N r and the other nodes form a set N d . B. 
Deciding Node Transmission Power When using a relay, node k divides its TP into two parts, b k ,1 and b k ,2 , and transmits its signal twice. Sink d receives two copies of the same signal (one directly and the other via relay). With properly set parameters, the two copies are in phase and add constructively. This is called coherent relay, which helps to achieve larger signal magnitude with the same overall power. The signal magnitude is a sum of two parts, , so that the overall magnitude approaches 1.0. To effectively use node TP, it is necessary to properly compute b k ,1 and b k ,2 , under the constraint |b k , and the equality holds if and only if With |b k , In the basic method, with a d fixed to a 0 and the initial N r , other parameters a r , b k ,1 , b k ,2 will be computed by minimizing MSE r in (4). This method only tries to fill the gap in the magnitude of misaligned signals, and is called CohR-ZF (zeroforcing). It is not necessarily optimal. In addition, γ k (P max ) may still be less than 1 because of the constraint of relay TP. C. Node Scheduling Now look back at the TP consumed at the relay for a node k ∈ N r in (5). There are two cases to be considered. 1) Case (i): When relay r is close to node k while far from sink d, h r ,d is small and h k ,r is large, which leads to large TxR k . The number of nodes that can be helped by the relay is small. So it is possible that not all nodes in N r can be helped. We compute the reduced MSE for x k as the difference of MSE before and after using relay, The MSE for x k after using relay is 0, if γ k (P max ) is no less than 1.0. The required relay TP TxR k is computed by (5). Here both MSE k and TxR k are non-negative. A flag I k is used to denote the node scheduling. I k = 1 indicates that node k ∈ N r uses the relay, and 0 otherwise. It is expected that MSE is reduced to the largest extent with the limited relay TP. Then, the node scheduling problem in case (i) is defined as follows: Then, all nodes with I k = 1 in N r forms a new N r . This is an ILP (integer linear programming) problem [11], which is NP-hard. Here we consider a heuristic method. Because relay TP is limited, it is expected that a large MSE k is achieved at the cost of a small TxR k . Hence, we compute as the metric for evaluating the priority of a node using relay. Nodes with low priority will be removed from N r to N d if the overall relay TP, k ∈Nr TxR k , is greater than the constraint. 2) Case (ii): When relay r is close to sink d, but far from node k, h r ,d is large and h k ,r is small. So TxR k tends to be small. In such cases, there is a surplus in relay TP after helping nodes whose signals are misaligned in the direct transmission. Nodes in N d do not need help to reduce MSE, but the TP at nodes may be large. Then, the remaining relay TP can be used to reduce node TP. By using a small amount of relay TP, we wish to reduce more node TP. Because the relay TP is limited, it is also necessary to decide which nodes to move from N d to N r first. By using relay, the TP saved at node k is the difference before and after using relay, TxP k = b 2 k −(b 2 k ,1 +b 2 k ,2 ) ≥ 0, while the TP consumed at relay for node k is TxR k . Using I k to indicate whether node k ∈ N d uses the relay, node scheduling is defined as follows: Then, all nodes with I k = 1 in N d are moved to N r . This is also an ILP problem. Here, we consider a heuristic method, and compute as the metric. 
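Both scheduling cases above rank candidate nodes by the benefit obtained per unit of relay power spent: the MSE reduction per TxR_k in case (i) and the node-power saving per TxR_k in case (ii). Under that reading, a minimal greedy sketch of the knapsack-style selection is shown below; the function and variable names, and the toy numbers, are illustrative rather than taken from the paper.

```python
def greedy_schedule(benefit, relay_cost, relay_budget):
    """Greedy heuristic for the node-scheduling ILP.

    benefit[k]    : MSE reduction (case i) or node-TP saving (case ii) if node k uses the relay
    relay_cost[k] : relay transmission power TxR_k needed to forward node k's signal
    relay_budget  : maximal relay transmission power P_max

    Nodes are ranked by benefit per unit of relay power and admitted while the
    relay power budget allows, mirroring the priority metrics described in the text.
    """
    order = sorted(range(len(benefit)),
                   key=lambda k: benefit[k] / relay_cost[k] if relay_cost[k] > 0 else float("inf"),
                   reverse=True)
    selected, used = [], 0.0
    for k in order:
        if used + relay_cost[k] <= relay_budget:
            selected.append(k)
            used += relay_cost[k]
    return selected, used

# Toy example: 5 candidate nodes, relay power budget of 1.0 (normalized units).
sel, used = greedy_schedule(benefit=[0.30, 0.10, 0.25, 0.05, 0.20],
                            relay_cost=[0.50, 0.10, 0.60, 0.05, 0.30],
                            relay_budget=1.0)
print(sel, used)
```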
Obviously, a node with larger η k (η k > 1) should be given higher priority to use the relay, unless the constraint of relay TP is reached. The heuristic method to the ILP problem is not necessarily optimal. For the cases where |N r | or |N d | is no more than 16, we confirmed by brute force search that the heuristic metric in (13) is optimal with a probability 97.8% and the heuristic metric in (15) is optimal with a probability 74.6%. D. Whole Algorithm In order to use relay to reduce MSE more aggressively, it is necessary to adjust both a d and a r . But separately adjusting a d and a r may over reduce the noise and greatly increase node TP [9]. To focus on minimizing signal distortion, we choose to keep noise power fixed. In other words, a 2 d + (a r ) 2 = a 2 0 . Then, when switching a node from direct transmission to using relay, its signal magnitude, is a decreasing function of a d if h k ,r > h k ,d , which is the typical case. So signal distortion |1 − min(γ k (P ), 1)| decreases with a d until it reaches 0. If node TP is large enough, magnitudes of signals in N d remain 1.0, while the overall MSE decreases. But this requires larger node TP for the direct transmission, and more nodes will be removed from N d to N r . Under the constraint of relay TP, some nodes cannot use the relay, and the overall MSE will increase again. Therefore, we can gradually decrease a d from a 0 while increase a r from 0, so that MSE may reach a minimum. Although this is not necessarily the global minimum, it is so under most cases, and in the case of a local minimum, its difference from the global minimum is small. Invoke ProcRelay (N r , h k ,r , h k ,d , a r , a d ), get b k ,1 8: Compute TxR k for k ∈ N r by (5) 9: if ( k ∈Nr TxR k >P max ) then Dec nodes in N r 10: Compute ξ k for k ∈ N r by (13) 11: Sort nodes in N r in increasing order of ξ k 12: return a r , a d , MSE(a r , a d ) 29: end procedure 30: procedure PROCRELAY (N r , h k ,r , h k ,d , a r , a d ) 38: return MSE r 39: end procedure The whole process of finding optimal parameters and the corresponding computation MSE is described in Algorithm 1. With K nodes, the complexity is O(K 2 ) in AirComp. In Algorithm 1, finding the initial value a 0 by AirComp (line 2) takes the same computation. The computation costs of computing TxR (lines 7-8), decreasing nodes in N r (lines 9-12) and moving nodes from Simulation scenario: 50 nodes (×) randomly distributed in a 400m × 200m area, 1 sink (♦, (100, 100)) and 1 relay ( , (250, 100)). Node deployment changes per evaluation. IV. SIMULATION EVALUATION Here, we evaluate the proposed method (CohR-NS), comparing it with the AirComp method [6] that only exploits the direct link, the SimRelay (when a node uses a relay, its direct transmission is neglected), and CohRelay method in [9]. SimRelay and CohRelay are modified so that the overall relay TP is no more than the constraint. The basic method only involving Sections III-A and III-B is named as CohR-ZF. The method that iterates over all possible a d to find global minimal MSE is called CohR-Opt. Fig. 2 shows the simulation scenario. 50 sensor nodes are randomly and uniformly distributed in a rectangle area (400m × 200m). The frequency is set to 2.4GHz. A hybrid free-space/two-ray path loss model is used and path loss is 80dB at a distance of 90m. Each link experiences independent block Rayleigh fading (channel gains are the same in two slots). It is assumed that the relay-sink link does not experience fading. 
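A sketch of how per-link channel gains for one deployment in the stated scenario could be generated is given below. Only the 80 dB loss at 90 m, the 2.4 GHz setting, block Rayleigh fading on node links and a fading-free relay-sink link are taken from the text; the specific breakpoint-style parameterization of the "hybrid free-space/two-ray" model (exponent 2 up to the 90 m anchor, 4 beyond it) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def path_loss_db(d):
    """Hybrid free-space / two-ray style model anchored so that PL(90 m) = 80 dB.
    Exponent 2 up to the anchor distance and 4 beyond it -- a plausible reading,
    not necessarily the paper's exact parameterization."""
    d = np.maximum(d, 1.0)
    pl_anchor, d_anchor = 80.0, 90.0
    near = pl_anchor + 20.0 * np.log10(d / d_anchor)
    far = pl_anchor + 40.0 * np.log10(d / d_anchor)
    return np.where(d <= d_anchor, near, far)

def channel_gain(d, fading=True):
    """Complex channel coefficient: sqrt(linear path gain) times Rayleigh fading."""
    amp = 10.0 ** (-path_loss_db(d) / 20.0)
    if fading:
        h = (rng.standard_normal(d.shape) + 1j * rng.standard_normal(d.shape)) / np.sqrt(2)
    else:
        h = np.ones(d.shape, dtype=complex)
    return amp * h

# One deployment: 50 nodes in a 400 m x 200 m area, sink at (100,100), relay at (250,100).
nodes = rng.uniform([0, 0], [400, 200], size=(50, 2))
sink, relay = np.array([100.0, 100.0]), np.array([250.0, 100.0])
h_kd = channel_gain(np.linalg.norm(nodes - sink, axis=1))    # node -> sink (fading)
h_kr = channel_gain(np.linalg.norm(nodes - relay, axis=1))   # node -> relay (fading)
d_rd = np.array([np.linalg.norm(relay - sink)])
h_rd = channel_gain(d_rd, fading=False)                      # relay -> sink (no fading)
```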
As for the power setting, P max = 15dB, and P max · E{|x k | 2 } corresponds to 5dBm. When receiving signals, both sink d and relay r first amplify each signal to around the noise level (the strength of all signals is much greater than that of noise) for the A/D conversion and then amplify the signal in the digital domain to E{|x k | 2 } = σ 2 = 1 without affecting signal to noise ratio. In the evaluation, we mainly use MSE and average power as the metrics. Average power is computed as the ratio of the overall power (TP of all nodes and the relay plus receive power of the relay, for which, it is assumed that the relay consumes the same power for receiving and transmission for simplicity) to the number of nodes. The simulation is run for 500 times in the MATLAB environment and the average results are presented. First we fix the sink position to (100, 100) and change the relay position along the line Y=100, from (125, 100) to (400, 100). The relay-sink distance changes from 25m to 300m accordingly. Fig. 3(a) shows the computation MSE in different methods. AirComp has the largest MSE, while SimRelay and CohRelay reduce MSE by using the relay. But it is obvious that their performance degrades when relay-sink distance increases. In comparison, CohR-ZF helps to reduce MSE in the long distance range, while CohR-NS further achieves the least MSE in the whole range. We also confirmed that the degradation of MSE in CohR-NS in comparison to CohR-Opt is only 0.05% and 0.28%, nearly optimal, when relay-sink distance is 150m and 200m, respectively. There is a tradeoff between MSE and power consumption, and a low computation MSE is usually achieved at the cost of increased power. As shown in Fig. 3(b), average power increases a little in CohR-NS, compared with that in CohRelay. But it is still less than that in AirComp in the typical distance ranges, because using coherent relay helps to reduce node TP. Fig. 3(c) shows the number of nodes requiring or using relay in different methods. Unsurprisingly, when the relaysink distance increases, more power is required for helping each node, and the number of nodes using relay decreases in all relay methods. But more nodes can use the relay in CohR-NS than in other methods. It is this aggressive relay policy that helps CohR-NS to reduce MSE. Next we evaluate the impact of the number of nodes in the network. The relay-sink distance is fixed to 150m, but the number of nodes is changed from 20 to 500. Computation MSE and average power are summarized in Table I. It is clear that MSE increases with the number of nodes, because it becomes difficult to align signal magnitude when there are more nodes. As a result, signal magnitude actually decreases, so nodes with large channel gain can save power, and the average power decreases. Compared with CohRelay, MSE reduction in CohR-NS is more than twice the increase in average power, which confirms that the proposed method is effective in reducing MSE while suppressing the increase of power consumption. V. CONCLUSION AirComp as a promising data aggregation method for future sensor networks faces the reliability issue. To address this problem, this letter enhances the AF based relay method for AirComp and proposes a new method for node scheduling, considering the constraint of relay TP and node TP. Simulation evaluations confirm that the proposed method is more effective than previous methods in reducing computation MSE meanwhile suppressing the increase of power consumption, and scales better with the number of nodes. 
In the future, we will further study the relay selection problem.
4,765.6
2022-09-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Molecular identification of the Danzhou chicken breed in China using DNA barcoding Abstract Mitochondrial cytochrome C oxidase subunit I (COI) has been used as a DNA barcode to identify population genetic diversity and distinguish animal species as it is variable enough to distinguish between species, yet suitably conserved. A new native chicken breed, named Danzhou chicken was discovered in Hainan, China in 2014, although identification is difficult by morphological examination alone. The mitochondrial COI genes of six chicken breeds, including four local and two imported breeds (Danzhou, Wenchang, Bawang, Beijing-You, Hy-Line Brown, and Ross) were compared and assessed in terms of their efficacy for DNA barcoding. The results showed that the number of COI gene variants in Danzhou chickens was less than those of other breeds, except Bawang and the genetic structure was relatively stable. The Kimura 2-parameter genetic distance between Danzhou chickens and the five other breeds was from ∼0.001 to 0.734. The genetic distance of the six breeds was ∼0.001–0.339, with that of Danzhou being the highest (0.339). Danzhou chickens clustered with Bawang and Wenchang chickens in the phylogenetic tree due to geographic closeness. Danzhou chickens could be identified more accurately using COI barcoding. Multiple molecular markers combined with morphological differences were more persuasive for identifying species. Introduction Danzhou chicken is a small type of local variety in Hainan island of China, a good genetic resource in Hainan. Danzhou chickens have a series of characteristics and advantages, such as strong adaptability, disease resistance, crude feeding resistance, strong wildness, low-fat content, and delicious taste. They have the potential to develop into a high-quality chicken breed. The body weight, body size, living habits, and carcass quality of Danzhou chickens has been previously studied. However, Danzhou chickens cannot be accurately identified by morphological characters alone. Molecular methods for species identification are generally considered to be more accurate and efficient. Therefore, we aimed to determine an effective molecular-level method to identify Danzhou chickens. DNA barcoding technology has been used for identification and classification of different taxa and has successfully identified new species and varieties (Hebert et al. 2004;Hajibabaei et al. 2006;Lane et al. 2007). Mitochondrial DNA sequences, including mitochondrial cytochrome oxidase I (COI), have many advantages as molecular markers. They are variable but sufficiently conserved, of appropriate sequence length, general primers for amplification and sequencing are available, molecular markers of COI can quickly identify species as a DNA barcode, and have been widely used in vertebrates and invertebrates classification, species identification, genetic diversity, and molecular evolutionary studies (Liu et al. 2001;Hebert et al. 2004;Donald et al. 2005;Wood et al. 2007;Ståhls and Savolainen 2008;Zhen et al. 2015;Wang et al. 2017;Tan et al. 2018). Hebert et al. (2003) first proposed to use a specific segment of the first half of the COI gene as a DNA barcode (Hebert et al. 2003). DNA barcodes can effectively differentiate 98% of the marine fish from freshwater fish (Ward et al. 2005). A DNA barcode based on the COI gene has been used for species identification in chicken varieties (Yap et al. 2010;Yacoub et al. 
2015), including the introduced and Chinese breeds that have a certain phylogenetic distance (Gao et al. 2007). In this study, Danzhou and five other chicken breeds were identified using the COI gene as a DNA barcode to evaluate the differential expression. Animal experiments and DNA extraction A total of 315 Danzhou chickens, three local breeds, and two introduced breeds were used in the present study CONTACT Zhen Tan<EMAIL_ADDRESS>(Supplementary Table 1). Wenchang is the most typical local breed on Hainan Island. Bawang belongs to the red jungle fowl, Hainan subspecies. Beijing-You is a representative chicken breed in China. Ross is a commercial egg-laying breed from the UK. Hy-Line Brown is a well-known egg-laying breed from the USA. In this study, blood samples from each adult were collected from farms in collection localities Supplementary Table 1, cryopreserved, and transported to the laboratory for analysis. Then DNA was extracted from by TIANGEN blood DNA kit following the manufacturer's instructions and the molecular markers were evaluated by DNA barcode technology (Hebert et al. 2004). Chicken cytochrome oxidase PCR amplification The primers used in the experiment were designed according to the COI gene sequence of Chinese red raw chicken Cox1 (AP003322) published in GenBank: forward: 5 0 -GCACAGGATGGACAGTTTAC-3 0 , reverse: 5 0 -ATAGCATAGGGG GGTCTCAT-3 0 . The primer was synthesized by Shanghai Bioengineering Co., Ltd. (Shanghai, China). The PCR product was 651 bp in length. The PCR reaction system contained about 50 ng genomic DNA, 2 ml primers (20 molL À1 ), 2 ml dNTP mixture (2.5 mmol L À1 ), 16.75 ml d 2 H 2 O and 0.5 ml PFU enzyme (Shanghai Bioengineering Company). PCR products were analyzed by electrophoresis on 2% agarose gels. Then, PCR products were sent to Shanghai Bioengineering Co. Ltd. for two-way sequencing. Comparative analysis of chicken COI gene sequences The sequences were read by Chromas Software, checked and proofread manually and corrected by bidirectional sequencing. The obtained sequences were compared with DNA MAN5.2.2 software (Lynnon Biosoft., USA) and the DNA barcode sequence of red raw chicken mitochondrial COI gene (AP003322) in GenBank was used as the standard for data analysis. The genetic distance was calculated by Kimura 2parameter method in MEGA5.0 software and then the phylogenetic tree was constructed using the neighbor-joining method (Kumar et al. 2008). Gel electrophoresis of DNA and PCR products The blood DNA molecular weight of the six breeds of chickens was about 650 bp (Supplementary Figure 1) and verified by real-time PCR that the molecular weight of each breed chicken as about 650 bp (Supplementary Figure 2) proved that DNA was successfully extracted and could be sequenced and followed up. Specific sites of the COI sequences of the six chicken breeds There were 105 single nucleotide polymorphisms on the COI gene sequences and the variation points of the six chicken breeds were quite different (Supplementary Table 2). Among them, Danzhou had the largest number of variation points (36). The number of variation points in Bawang was the least (6). Analysis of genetic distance between populations of the six chicken breeds Kimura's 2-parameter genetic distance and the net genetic distance of COI gene sequences between Danzhou and the five other breeds is shown in Table 1 and ranged from $0.001 to 0.339. The genetic distance of Wenchang was 0. The genetic distance of Danzhou was 0.339. 
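The pairwise distances reported in this study are Kimura 2-parameter (K2P) distances computed in MEGA; the same quantity can be reproduced directly from aligned COI sequences. A minimal sketch follows, assuming the two sequences are already aligned and of equal length, with simplified handling of gaps and ambiguous bases; the toy fragments are not real COI data.

```python
import math

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned, equal-length sequences.

    d = -1/2 * ln(1 - 2P - Q) - 1/4 * ln(1 - 2Q),
    where P and Q are the proportions of transitions and transversions.
    Sites containing anything other than A/C/G/T in either sequence are skipped.
    """
    transitions = transversions = compared = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue
        compared += 1
        if a == b:
            continue
        same_class = ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES)
        if same_class:
            transitions += 1
        else:
            transversions += 1
    P, Q = transitions / compared, transversions / compared
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)

# Toy example with two short, already-aligned fragments (not real COI data):
print(round(k2p_distance("ATGCGTACGTTAGC", "ATGCGCACGTTGGC"), 4))
```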
The genetic distances of the other four varieties were all small, within the range of $0.001-0.003. The intrabreed genetic distances were $0.001-0.734. The genetic distance between the two foreign breeds and domestic breeds was relatively large. Apart from Danzhou chickens, the genetic distances between the other three breeds and the two foreign species were large (all >0.7). This was probably due to hybridization and geographical origin. The genetic distance of the two foreign breeds was also large and the genetic distance of Hy-Line Brown was from $0.002 to 0.727; that of Ross was from $0.002 to 0.730. The net genetic distance of the six breeds was from $0.001 to 0.459. Phylogenetic tree of Danzhou and five other breeds of chicken The hybridization of Danzhou chicken was obvious in the phylogenetic tree (Figure 1). Some sequences were clustered with Bawang and Wenchang and the reference sequence from NCBI. The other sequences were clustered with Hy-Line Brown. Beijing-You, Wenchang, Bawang, Hy-Line Brown, and Ross were generally clustered on their respective branches. Discussion Morphological characteristics and DNA barcoding are two of the main methods used for species identification. The most commonly used molecular markers for DNA barcoding are COI genes, which are located in the mitochondrial genome (mtDNA). The COI gene has been used for identification of different vertebrates and invertebrates (Tautz et al. 2002;Hebert et al. 2003;Kress et al. 2005;Ward et al. 2005;Zhen et al. 2015;Wang et al. 2017;Tan et al. 2018), including chickens (Gao et al. 2007). Danzhou chickens are of small size, have a 'beard', yellow feet, and hemp feather. There are morphological differences between Danzhou chickens and the other breeds, yet difficult to distinguish from others accurately. Danzhou chicken population could be evaluated more comprehensively by changing the information of the heterotopic points. In this way, can we objectively understand the specific information and characteristics of Danzhou chickens. This study measured six breeds of chicken, Danzhou, Wenchang, Bawang, Beijing-You, Hy-Line Brown, and Ross, and a total of 315 individual COI gene sequences, which were found to have a 651 bp variable region, 105 mutation loci. The main types of variation were inversion and transformation. Danzhou chickens had specific variations that provided more reliable information for evaluating Danzhou chickens at the molecular level. Therefore, DNA barcoding could effectively identify and distinguish chicken breeds and strains. These results were consistent with those of a previous study (Bondoc et al. 2015). Danzhou chickens had a relatively low mutation rate and low genetic diversity, but the largest genetic distance (0.339). This was not entirely related to the formation of stable chicken breeds in Danzhou, which might be due to the lack of selective breeding in Danzhou chickens, consistent with the result of Yacoub et al. (Yacoub et al. 2015). The genetic distance of the other five breeds was relatively small indicating that the selection intensity in these breeds was higher Note: the upper triangle is the net genetic distance (Da) and the lower triangle is Kimura 2-parameter genetic distance. than that in Danzhou chickens. This was partly due to the initial breeding of Danzhou chickens. The genetic distance between Danzhou chickens and Bawang, Wenchang, and Beijing-You chickens was 0.242, 0.242, and 0.244, respectively. 
The genetic distance between Danzhou chickens and Ross and Hy-Line Brown chickens was 0.488 and 0.487, respectively. The genetic distance from the foreign breeds was larger than from the local breeds indicating that Danzhou chickens were the most estranged from the two introduced breeds and relatively closely related to the three domestic breeds. The phylogenetic tree showed that Danzhou chickens clustered with Wenchang, Bawang, and Hy-Line Brown chickens. Danzhou chicken was a low-intensity variety. The COI gene isolated from Danzhou, Wenchang, and Bawang chickens had similar sequences and insertion locations, the result of many hybridization events or recent differentiation. Relationships between geographical distribution, Danzhou, Bawang, and Wenchang chickens are from the Hainan area and a close genetic relationship between these breeds. Also, Hainan island is a relatively closed natural environment. Under such circumstances, for Danzhou chicken, which has a relatively low selection intensity, it is inevitable that they have a certain relationship with the Bawang and Wenchang chickens in Hainan. However, there were a series of crossovers between Danzhou chickens and the foreign Hy-Line Brown breed since its introduction to Hainan. This suggests that there were genetic exchanges between Danzhou chickens and Wenchang, Bawang, and Hy-Line Brown chickens. In general, Danzhou chickens are used for breeding. The crossover between local and foreign chicken breeds was expected. Firstly, the six chicken breeds in this study have a large feeding range in Hainan. Due to geographical proximity and frequent trade, different chicken breeds inevitably experience different degrees of hybridization. Secondly, the relative conservatism of the COI gene means that genetic differences between breeds are relatively small. To evaluate the breeding potential of Danzhou chickens and other breeds, it is necessary to determine differences in morphology, growth performance, and cytology between them. Molecular aspects provide a molecular basis for the selection and breeding of Danzhou chicken. There may be gene exchanges between Danzhou chickens and Wenchang, Bawang, and Hy-Line Brown chickens in Hainan province. Hainan Island is a relatively closed natural environment; therefore, the close relationship between Danzhou, Bawang, and Wenchang chickens is understandable. Since Danzhou chickens have relatively weak choices, let alone live in a relatively closed natural environment. In addition, our results indicate that there may have been a crossover between Danzhou chickens and imported Hy-Line Brown chickens. This might be due to crossbreeding when the Hy-Line Brown breed was introduced to Hainan. Conclusions In this study, specific sequences of the mitochondrial COI gene were used as DNA barcodes for different breeds of chicken. It was found that this method can preliminarily assess the differences between Danzhou chickens and other breeds. Moreover, Danzhou chickens were identified and evaluated at the molecular level after morphological studies, which played an important role in improving the breeding of Danzhou chickens and the results of the present study will contribute to research on Danzhou chicken breeding strategies in the future. According to the results of this study, the mitochondrial COI, a specific segment of a specific gene, is used as the basis for DNA barcoding, based on its sequence polymorphism, specific site, and specific haplotype and sequence clustering results. 
Molecular identification of varieties is therefore feasible and can also be used for genetic diversity studies in local chickens, but standard DNA barcoding alone cannot effectively distinguish chicken breeds with only small differences. DNA barcoding using the COI gene has the advantages of convenience, speed, economy, and accuracy in identifying different chicken breeds. However, the number of samples analyzed in this experiment and the number of sequences generated are limited. Therefore, further research on additional breeds is required, and the number of samples sequenced needs to be expanded to verify the applicability of DNA barcoding.
3,077.2
2019-07-03T00:00:00.000
[ "Biology" ]
Semantic Localization System for Robots at Large Indoor Environments Based on Environmental Stimuli. In this paper, we present a new procedure to solve the global localization of mobile robots called Environmental Stimulus Localization (ESL). We propose that the presence of common facts on the environment around the robot can be considered as stimuli for the procedure. The robust performance of our approach is supported by two concurrent particle filters. A primary particle filter estimates and tracks the robot position, while a secondary filter is fired by environmental stimuli, helps to reduce the influence of measurement errors and allows an earlier recovery from localization failures. We have successfully used this method in a 5000 m2 real indoor environment using as inputs the available environment information from a Geographical Information System (GIS) map, the robot’s odometry and the output of an algorithm for the perception of facts from the environment. We present a case study and the result of different tests, showing the performance of our method under the influence of errors in real applications. Introduction Robot navigation in indoor environments constitutes the main challenge of search and rescue activities at urban locations (Urban Search and Rescue (USAR)) or, less dramatically but in a more usual fashion, at domestic or surveillance tasks. Disaster evacuation missions at an office building (for example an epidemiologic accident, bacteriologic attack, hazardous material leaking, etc.) require that the navigation task specification manages symbolic information with semantic meaning, like "the victim is found in room 424". In this way, recent advances in Building Information Modelling (BIM) and Geographical Information System (GIS) suggest that the robot can makes use of the advanced semantic and geometric information included in an intelligent map, instead of the 2D geometric information of a map (mostly obtained with SLAM (Simultaneous Localization And Mapping approaches) [1,2]. So, it is clear that it would be quite useful for the robot to use the information from teh GIS systems directly. It can be also stated that tasks like rescue require that the robot can boast the global localization in order to provide an answer to questions like "where is the victim?" or "how to reach him?". Moreover, in USAR tasks, time is an important factor and the use of a robot can be restricted by the time that robot would need to explore and build a map. Proposals like [3] try to reduce it using emergency maps to support the SLAM strategy. Mur-Artal et al. [4] propose a visual inertial ORB-SLAM (ORB denotes an Oriented fast and Rotated BRIEF fact detection procedure) that is able to close loops in real time and can reuse the map, but it needs to perform the mapping task. Our work tries to avoid this situation drastically with an original global location approach that combines a dual particle filter (PF) and a semantic map with natural landmarks obtained through GIS and BIM (Building Information Modelling) technologies In this context, several algorithms approach this problem using grid-based Markov filters [5] or multi-hypothesis Kalman filters [6], but the most widely studied and tested algorithms are based on particle filters [7]. More recently, the approach provided by [8] uses particle filter to perform the localization task from MEMS (MicroElectroMechanical Systems) and camera devices, but it requires loop-use and the availability of previous image-formatted maps. 
These filters, like other global localization algorithms, use the movement model and external perceptions to determine the robot's localization (position and orientation). Modeling all the movements of a mobile robot with accuracy is very difficult because they depend on uncontrollable factors (shocks, slippages, etc.). In these situations, as it happens in the kidnapped robot problem, the PF can fail because all particles would be in the wrong place. Several authors have proposed modifications to the Monte-Carlo localization algorithm (MCL) [9], trying to resolve these situations. It requires that the number of added particles is large enough to be likely to hit the actual robot location. If this method is used for global localization in a very large environment, the number of particles required may be too high, or the filter may take a long time to converge to the correct position. An improvement of this method is the SRL (Sensor Resetting Localization) algorithm [10], based on the addition of particles according to the probability distribution provided by sensor readings. While this method can improve the convergence of particle filters and solve the kidnapped robot problem in a very efficient way, it can also cause unwanted side effects when dealing with persistent erroneous measurements, like false positives detecting beacons, or errors in the map, like a beacon placed in a wrong position. This paper describes a new localization method called Environmental Stimuli Localization (ESL), based on a dual particle filter. It consists of two PF running in parallel using the same data input: a primary or main filter and a secondary filter. The primary PF estimates and tracks the robot position, while the secondary filter has a short lifetime, has its own shooting and resetting conditions based on environmental stimuli and allows an earlier recovery from localization failures. In Section 3.2 this method and its implications will be explained in detail. As we will show, our method is also robust against persistent errors in perceptions, errors in the map or unmodelled robot movements. There exist several works [11][12][13] where the dual particle filter idea is included, and more concisely at [14] it is applied to location tracking. All of them are based on the management of different state definitions at each filter where there are couplings of the respective prediction models. The data exchange between the filters is done through the shared parameters, or variables, of the models. In the opposite, at our proposal, both filters, Long-Term (LT) and Short-Term (ST) ones, consider the same state, that is, the robot position and orientation, and the same observations, doors or natural facts. They will have different firing conditions and execution (or duration) times. The data exchange mechanism is also established in an asynchronous way determined when the convergence of the ST filter is suitable. In this way, this is a new approach, as far as we know. Another key aspect of this paper is the selection of the observation model. External measurements required by global localization algorithms are usually obtained by means of laser range finders [15], wireless transceivers [14,16] or cameras using pattern-detection algorithms [17,18]. In most cases, the implementation of these solutions requires a previous work on the whole robot environment, which implies taking images, mapping from laser readings, or inserting custom markers. 
The use of inherent marks in the environment avoids this stage of preprocessing. Indoor, these marks could be rooms, windows, doors, walls, etc., while outdoors monuments, buildings, traffic signs, etc. could be considered. These marks should be matched with those extracted from the map of the robot's navigation environment. In this work, we will present the application of the proposed ESL method and the observation model for a real mobile robot with wheels that travels through an interior environment and how it is able to work fine in large surface buildings (our campus). Among the distinguishing facts in indoor, we decided to use doors as external landmarks. Their main advantages are that they are quite abundant, since doors can be found in all rooms, and they are easier to detect than other objects since most of them share many common features. The size, shape and color of doors is fairly standard, especially within the same building, they are usually on the ground or over a doorstep, they usually have a frame, etc. Other objects, such as windows, present larger variations in shape, location, colors and sizes. Halls and walls are elements that have been widely used for localization tasks, but their detection is often achieved by means of a laser and it is more difficult to achieve with cheaper sensors like cameras. To detect doors, we use a method developed at our research group [19] that works well at a rate higher than 20 Hz and no environmental intervention is required. By means of a large number of experiments, our procedure has been proven to be very useful for confidently detecting nearby doors. Test scenarios included doors with different morphological characteristics as size, color, texture, shape, double and single doors, etc., objects with similar characteristics to the doors to test false positive and different livings spaces, as narrow and wide corridors, hallways, hall, among others. Semantic facts, as doors at a building, are specifically represented at BIM and GIS systems. Our global localization solution does not require a manual pre-processing phase and it makes use of semantics facts automatically extracted from a GIS map, e.g., the doors localization on a given floor of the building. Currently, public and private institutions have services that maintain up-to-date GIS maps of indoor and outdoor areas, which can be downloaded or consulted online. The spatial databases that store GIS maps allow, among other advantages [20], to make queries on a wide variety of spatial operations For example, by means of an automatic procedure, an on-line query to the building's PostGIS database, we can obtain the location of the doors that are at a given distance, taking as origin the actual robot location. Hence, since it is possible to take advantage of this information to feed localization algorithms and is also possible for robots to detect doors, windows or other elements of those maps, it is feasible to implement global localization algorithms for a whole organization in a short time, without having to preprocess the environment or create custom maps. When PFs are considered for global localization in large environments, one of the problems is that the number of particles required can be too high to run the algorithm in real-time. 
Without a proper method to initialize the position of the particles, in order to effectively sample the correct position of the robot, many thousands of particles can be required even for relatively small environments, just a few hundreds of square meters. USAR evacuation mission against an accident or hazardous material attack is posed at higher education buildings like the building at the Science campus of the University of Salamanca where our research group is located. The whole set of buildings in our University has an extension of many thousands of square meters, hence the number of particles required, using known approaches, could prevent the development of real-time global localization solutions. We will show how our proposed algorithm is able to deal with this problem. It is able to provide correct results in real-time doing global localization in a real environment using a low number of particles, even less than 1000, for a surface of more than 5000 m 2 that corresponds to the Science Faculty building of our University where the robotic system was deployed. The paper is organized as follows. First, we consider the main problems related to particle filters when they are applied to mobile robots. This analysis will be useful to contextualize the improvements of our approach. In Section 3 we present our proposal that, as a main result, improves the success rate of global localization tasks when working in large environments with a reasonable particle number and with erroneous maps or measurements. To prove it, we will present the real scenario where our localization tests are performed and, based on it, we will show the results of different tests and a case of study that evaluates the effectiveness of our method. Particle Filters Applied to Robot Localization in Real Large Contexts In this section, common problems that usually appear doing global localization of mobile robots in real environments will be discussed. Sometimes, sensors produce unexpected measurements, which are not related to the information provided by our maps. When creating a map from laser measurements, objects such as furniture or paper bins, will appear in the generated map. If someone moves these objects, measurements from certain places won't be as expected and therefore, localization algorithms will perform worse. When using computer vision techniques, as in our case to detect doors, these are especially prone to generate erroneous measurements. Depending on factors such as light, decoration, obstacles or the perspective of the camera, it is possible that certain objects are incorrectly detected, resulting in false positives or false negatives. Within each of these two types of errors, we can differentiate two subtypes: persistent errors and short term errors. By short-term errors we mean those that happen during short periods of time (a few seconds) because of very specific circumstances of lighting, perspective, etc. that confuse the detection algorithm. By persistent errors we mean those that are caused by the presence of objects very similar to those we want to detect. Such objects cause errors in the detection algorithm that remain until they leave the robot's field of view. The effect of these unexpected measures on the probability density will be a probability decrease at the real robot location and a probability increment at locations that maximize p z t |x t is referred as the state or robot localization). Once unexpected readings disappear, the filter tends to converge again to the correct position. 
However, we should consider that these filters only have a finite number of particles and that, due to the resampling phase, they coalesce in places where the probability is higher. If during the time that wrong measurements are being received, all particles in the correct place disappear, the filter will not be able to recover and will fail in its estimation of the robot localization. If erroneous measurements appear at the initialization phase of a filter and it is based on the probability distribution provided by sensor readings, the consequences can be fatal. It may be that, from the beginning, there are no particles placed near the robot's real location or that, in the early stages of the filter, when there are a lot of places to explore and very few particles in each one, all particles near the real robot position disappear due to its worse matching with the erroneous measurements. Less severe cases in which particles disappear from the right place can be usually solved by increasing the number of particles, by slowing down the convergence of the filter or by running the resampling phase only in some iterations of the filter. However, these solutions are not always feasible. Increasing substantially the number of particles is not possible when we have a limited computing power, as in embedded systems, or when the localization must be performed on a very large map. However, there are optimizations that can help us to mitigate this problem like the usage of performance improvements [21] or adaptive sample sets [22]. Slowing down the convergence of the filter is not acceptable if you want to have a reliable estimation as soon as possible. In some cases, when using global localization modules, we may want to obtain a fast estimation more than a reliable estimation which requires too much time. In real applications, if we have a relatively accurate estimation of the robot's position, we can activate the planning and navigation algorithms, so that the robot can start moving. While moving, the localization algorithm can continue working and finding the most accurate position, if the previous one was a wrong one. The omission of the resampling phase in some of the iterations of the filter has a dual effect [23]. On one hand, it can prevent the disappearance of the particles located in the correct area. On the other hand, especially in the first iterations, it may cause the number of particles to track each possible robot position to become too small to successfully track the robot movement. If we want to avoid this, we must increase the number of particles according to the time we omit the resampling and to the dispersion of the initial probability distribution of the robot state. When these problems cannot be fixed with these three approaches, we must restart the particle filter or add particles regardless of the normal execution. One possible solution is to reset the filter when we realize all the particles of the filter are at the wrong places. This can be detected when the sum of the particle weights before normalization is below a threshold. However, when using low accuracy sensors in environments with a lot of marks, it is very difficult for this solution to work properly in practice. If we set the threshold too low, the time to detect the localization failure can be too long because of casual coincidences between real and detected marks. 
Otherwise, if the threshold is too high, it is possible to reset the filter because of sensor errors even when it contains the correct robot location. It is, therefore, necessary to be very conservative when using this solution. The addition of particles in the filter regardless of the normal execution is done in methods like SRL or Mixture-MCL [9] to enhance the performance of particle filters, but it has severe problems when dealing with unexpected sensor measurements, errors in the map or unmodeled robot movements. Proposed ESL Method for Global Localization Our approach is inspired by the natural behavior of humans trying to determine their own location. At any moment, a person can have a reasonable certainty of being in a specific location but, when a significant fact is perceived in the environment that conflicts with his estimation (maybe a monument or anything familiar to the person), if it is important enough, it can create a doubt and the person can start evaluating a second guess. Both the current estimation and the second guess can be evaluated in parallel and, if the doubt becomes important enough based on the next perceptions, the person can change his mind about his estimated position. The core components of our proposed ESL localization method is detailed in Figure 1 where the architecture of our approach is presented. We propose the usage of a double concurrent particle filter and a GIS map as the sources of information, the output of a door detector [19] and the odometry of the robot. However, ESL filter doesn't have any dependency on those sources of information, so our method should be easily applicable using others. In order to explain how the ESL filter works, we will start showing how its state is evaluated. This evaluation is a key element in defining what the final output of our localization method is, and also the interaction between the two particle filters that run concurrently. This interaction allows the ESL filter to react to stimuli provided by the environment and, at the same time, it dramatically reduces the negative impact of sensor misreading, unmodeled robot movements or map errors. It will also be demonstrated that the number of particles required to get successful results using ESL is highly reduced compared to other approaches, so it is suitable to be applied in huge real environments. Evaluation of Particle Filters In order to save computing resources, ESL uses a simple clustering algorithm to group individual particles. Starting with an empty set of clusters, it iterates over the whole set of particles. When a particle is closer than a threshold to the center of an existing cluster, the particle is added to this cluster, its probability is added to the probability of this cluster, and the cluster center is updated to stay in the weighted mean position of the particles it contains. Otherwise, a new cluster is created with the same center and probability as the particle. Resulting clusters can be observed in Figure 2. Using these clusters as input, the convergence of the filter is estimated using the entropy H t that is defined as wherep C t represents the probability of each cluster C. When H t is high, the filter is too dispersed and needs to continue working to provide a reliable output. When H t is low, the convergence of the filter is high, with few clusters accumulating most of the probability. This value is essential for two different purposes: managing the interaction between the LT and the ST particle filters and providing a reliable output. 
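Following the description above, the convergence measure is the Shannon entropy of the cluster probabilities, H_t = -Σ_C p_t^C ln p_t^C. A minimal sketch of the greedy clustering and entropy evaluation is shown below; the distance threshold, data layout and toy numbers are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def cluster_particles(positions, weights, radius):
    """Greedy clustering described in the text: a particle joins the first cluster
    whose (weighted-mean) center is closer than `radius`, otherwise starts a new one."""
    centers, probs = [], []
    for p, w in zip(positions, weights):
        for i, c in enumerate(centers):
            if np.linalg.norm(p - c) < radius:
                new_prob = probs[i] + w
                centers[i] = (c * probs[i] + p * w) / new_prob  # weighted-mean center
                probs[i] = new_prob
                break
        else:
            centers.append(np.array(p, dtype=float))
            probs.append(w)
    return np.array(centers), np.array(probs)

def filter_entropy(cluster_probs):
    """H_t = -sum_C p_C * ln(p_C): low entropy means the filter has converged."""
    p = np.asarray(cluster_probs, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# Toy check: particles split between two spots give a higher entropy than one tight cluster.
rng = np.random.default_rng(1)
pos = np.vstack([rng.normal([0, 0], 0.2, (50, 2)), rng.normal([10, 0], 0.2, (50, 2))])
w = np.full(100, 1.0 / 100)
centers, probs = cluster_particles(pos, w, radius=1.0)
print(len(centers), round(filter_entropy(probs), 3))
```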
Concurrent Dual Particle Filter. When we consider the information provided by the environment (in our case, the doors in front of the robot at a short distance), due to the similarities and symmetries that appear in real environments, it is not possible to determine the localization of the robot based on the first detected doors. In these cases, if the available number of particles is low and the accuracy of the sensors is not high enough, it is probable that a single PF converges towards a wrong position. For that reason, it is important to take advantage of significant information that can appear in front of the robot at any time. We refer to this event as a stimulus in our approach. In our case, that significant information can be an accumulation of doors within a few meters or a distribution of doors that is unlikely at other points of the map. Since significant information, or stimulus, about the environment can be detected by the robot at any moment, and not necessarily in the initial state, we propose a second particle filter (the Short-Term PF, or ST). The ST filter is started every time the environment provides significant information about the robot position. As can be seen in Figure 1, we propose the use of two particle filters running concurrently. The LT filter represents the certainty of being at each location of the map, so the location of the cluster with the highest probability (p_C^t) in the LT filter is considered the output of our algorithm when H_t is below a certain threshold. The ST filter is intended to react to extra information provided by the robot's environment, the stimuli, and to provide that second guess, in this case a probability distribution, to the LT filter when the number of external measurements is higher than a given threshold. This environmental stimulus will be referred to as the firing event of the ST filter, defined as Inf_t ≥ Inf_T (2), where Inf_T is the selected threshold and Inf_t is the number of external observations z_t^(j) received at time t (3). In our case, the external observations z_t^(j) are the doors in front of the robot detected by the method proposed by Fernandez-Carames et al. [19], so Inf_t is the number of detected doors. Just after the firing event, the ST filter is initialized according to the probability distribution provided by the sensors (Section 3.3.1). In this sense, our ESL approach is related to the SRL or Mixture-MCL proposals. The main difference is that, in those approaches, new particles are immediately introduced into the filter and, as we commented in Section 1, that causes severe problems when particles are added based on wrong external measurements. In our ESL proposal, the new particles are evaluated in the ST filter, without affecting the stability of the LT filter or the output of our algorithm. To evaluate the quality of the particles in the ST filter, we use the filter entropy H_t, defined in Equation (1). When that value is low enough, the particles in the ST filter participate in the resampling phase of the LT filter, taking a low percentage of the available probability. After that, the ST filter is reset, waiting for the next firing event based on an environmental stimulus. We therefore propose the reset condition H_t < H_T, where H_T measures the maturity of the ST filter. This value should be low enough to ensure that the filter has received enough external information to be able to converge, grouping most of its particles in a few clusters with high probability. In our case, the value H_T was experimentally determined.
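The interaction between the two filters can be summarized in the sketch below. The filter objects and their predict/correct/entropy/clusters/inject_clusters methods are assumed interfaces rather than the paper's API, and the default thresholds simply echo values reported later in the case study (firing when more than one door is detected, H_T = 3, 20% of the probability reserved for ST clusters).

```python
import random

def hybrid_resample(lt_particles, st_clusters, n_particles, share=0.2, pos_noise=0.3):
    """Resample the LT particle set, reserving `share` of the new particles for
    positions drawn around the mature ST clusters; an LT filter's inject_clusters
    method could be implemented along these lines.

    lt_particles: list of (x, y, weight); st_clusters: list of {"center", "prob"}.
    """
    n_from_st = int(round(share * n_particles)) if st_clusters else 0
    n_from_lt = n_particles - n_from_st
    new = []
    weights = [w for _, _, w in lt_particles]
    for x, y, _ in random.choices(lt_particles, weights=weights, k=n_from_lt):
        new.append((x, y, 1.0 / n_particles))
    if st_clusters:
        probs = [c["prob"] for c in st_clusters]
        for c in random.choices(st_clusters, weights=probs, k=n_from_st):
            cx, cy = c["center"]
            new.append((cx + random.gauss(0.0, pos_noise),
                        cy + random.gauss(0.0, pos_noise),
                        1.0 / n_particles))
    return new

class ESLController:
    """Fires, matures and resets the ST filter around a stable LT filter."""

    def __init__(self, lt, make_st, inf_threshold=2, h_maturity=3.0, st_share=0.2):
        self.lt = lt                        # long-term particle filter
        self.make_st = make_st              # factory: observations -> new ST filter
        self.inf_threshold = inf_threshold  # Inf_T: doors needed to fire the ST filter
        self.h_maturity = h_maturity        # H_T: ST entropy required before merging
        self.st_share = st_share            # LT probability reserved for ST clusters
        self.st = None

    def step(self, odometry, doors):
        self.lt.predict(odometry)
        self.lt.correct(doors)
        if self.st is not None:
            self.st.predict(odometry)
            self.st.correct(doors)
            if self.st.entropy() < self.h_maturity:
                # Mature second guess: inject ST clusters into the LT resampling,
                # then reset the ST filter and wait for the next firing event.
                self.lt.inject_clusters(self.st.clusters(), share=self.st_share)
                self.st = None
        elif len(doors) >= self.inf_threshold:
            # Firing event (Inf_t >= Inf_T): start evaluating a second guess.
            self.st = self.make_st(doors)

    def estimate(self, h_output=1.0):
        """Most probable LT cluster center, reported only when the LT entropy is low."""
        if self.lt.entropy() < h_output:
            return max(self.lt.clusters(), key=lambda c: c["prob"])["center"]
        return None
```

Keeping the ST particles out of the LT filter until the ST entropy drops is what decouples the measurements that created them from the measurements that will later judge them.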
Thanks to this hybrid resampling phase on the LT filter, some particles from the high-probability clusters of the ST filter start being evaluated in the LT filter. This is a key difference between ESL and SRL or Mixture-MCL. In those methods, added particles are introduced into the filter based on a single observation (which could be wrong) and are immediately evaluated with highly correlated sensor readings, giving the new particles a clear advantage over the others and worsening the negative effects of measurement errors. With the ESL method, the new particles have already been evaluated in the ST filter over a period of time and are the result of the convergence of the ST filter. When these new particles start participating in the LT filter, the current sensor readings are, with high probability, much less correlated with the ones used to initialize them, owing to the time elapsed and the robot movement during the maturation of these particles in the ST filter. These two factors reduce the amount of noise introduced into the LT filter and protect it against particles generated from wrong external measurements or errors in the map. Of course, with the ESL method it is still possible to introduce wrong particles, generated by wrong external measurements, during the initialization of the ST filter but, in that case, those wrong particles stay isolated in the ST filter without affecting the stability of the LT one. When they are finally sampled by the LT filter, the external measurements used to evaluate them are much more independent of the measurements that generated them, so it is very unlikely that the wrong particles still score better with the current measurements than the correct ones in the LT filter. Therefore, when the LT filter is wrong and a few correct ST particles are added, they are likely to make the LT filter converge to the correct localization. When the ST filter only adds wrong particles, it is highly unlikely that they compromise the stability of the LT filter and cause it to fail. As we will show in the next section, the ESL approach has been successfully implemented and tested in a large real environment with a short response time and a very high success rate. Additional Improvements in Standard Phases of Particle Filters. In order to improve our proposal, we have also introduced additional optimizations in the standard phases of particle filters applied to robot localization. In the initialization phase, we reduce the number of particles required to obtain successful results by using the probability distribution provided by the sensors. Additionally, we skip the resampling phase in some iterations of the filter to reduce the adverse effect of wrong external measurements. We explain these improvements in detail in the next sections. Initialization Phase. In order to locate the robot in a short period of time, the filter must sample the correct position, and that usually requires a huge number of particles to explore a large surface (in our case, over 5000 m^2). As the number of particles required by a filter usually increases with the map dimensions, the computational power required to run the filter in real time also increases. When the available hardware cannot meet these requirements, additional solutions are needed to achieve successful results with a reduced set of particles. The most widely used improvement in this sense is to select the initial particle set based on the probability distribution generated from the sensor readings, p(x_t | z_t); a sketch of our door-pair variant of this idea is given below, and the details follow.
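A hedged sketch of this sensor-driven initialization as used with door pairs and the GIS map (described in the next paragraphs); the PostGIS table and column names are assumptions for illustration, and only the three WHERE conditions reflect the query described in the text.

```python
import math
import random

# Illustrative PostGIS-style query for candidate door pairs; "doors", "id" and
# "geom" are assumed names, while the three conditions mirror the described query.
DOOR_PAIR_QUERY = """
SELECT ST_X(a.geom), ST_Y(a.geom), ST_X(b.geom), ST_Y(b.geom)
FROM doors a, doors b
WHERE a.id < b.id
  AND ST_Distance(a.geom, b.geom) > %(distance)s - %(error)s
  AND ST_Distance(a.geom, b.geom) < %(distance)s + %(error)s;
"""

def candidate_poses(door_pairs, lateral_offset):
    """Each map door pair yields two mirrored robot-position candidates, placed
    at the measured lateral offset on either side of the segment joining them."""
    poses = []
    for ax, ay, bx, by in door_pairs:
        mx, my = (ax + bx) / 2.0, (ay + by) / 2.0
        ux, uy = bx - ax, by - ay
        norm = math.hypot(ux, uy) or 1.0
        px, py = -uy / norm, ux / norm  # unit vector perpendicular to the pair
        poses.append((mx + px * lateral_offset, my + py * lateral_offset))
        poses.append((mx - px * lateral_offset, my - py * lateral_offset))
    return poses

def init_particles(poses, n_particles, sigma=0.5):
    """Spread the particle budget as Gaussian clouds around the candidate poses."""
    particles = []
    for _ in range(n_particles):
        x, y = random.choice(poses)
        particles.append((x + random.gauss(0.0, sigma),
                          y + random.gauss(0.0, sigma),
                          1.0 / n_particles))
    return particles
```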
This initialization improvement was originally proposed as part of the SRL algorithm, in which the particles are placed in the locations that maximize p(x_t | z_t). Since we use a GIS map as a source of information, we use it to initialize the filter according to the probability distribution generated by the sensor readings. Our external landmarks are the doors perceived by our robot. To take advantage of them, only at the initialization phase of both filters, we group all detected doors into pairs, calculate the relative distance of each pair, and search the GIS map for pairs of doors with the same distance between them, in order to achieve successful results with a reduced set of particles. Thanks to the use of a GIS map (stored in a PostGIS database), we obtain these pairs with a simple spatial query (Figure 3), where distance is the distance between the two doors detected by the robot and error is the estimated error of the measurement. The first condition in the WHERE clause prevents each pair of doors from being evaluated twice (in a different order), while the second and third conditions select pairs of doors according to their distance. Once we have the candidate door pairs, the robot's detection is applied inversely over each door pair to calculate from which points of the map those doors could have been seen in that way. Applying this method, each door pair provides two candidate positions for the robot on the map. To finish the initialization, the particles of the filter are distributed among those places, following a Gaussian distribution around each point. This way, the number of particles required to obtain a successful result does not depend on the extension of the map, but on the number of candidate positions, which is limited by the number of doors in the map and is much lower than its surface area. Therefore, the required number of particles is significantly reduced thanks to an initialization based on the instantaneous probability distribution provided by the sensors and our map. This significantly reduces the time needed to locate the robot globally. Resampling Phase. The resampling phase of particle filters is intended to explore more efficiently the places of the map with a higher probability of being the actual position of the robot. However, it can also remove all particles from areas that have a low, but non-zero, probability of being the current robot location. If all particles close to the actual robot position disappear, the filter will fail unless further actions are taken. An improvement to mitigate this undesired effect is to omit the resampling phase in some iterations. When the resampling is omitted, we must propagate the weights to the next iteration (Equation (5)) so as not to lose the information acquired during the correction phase. This modification has already been proposed in [23] to prevent the loss of diversity in the positions of the particles and is implemented in the localization algorithm of the Carnegie Mellon Robot Toolkit (CARMEN), providing good results. In our algorithm, this improvement has also proved to be helpful in mitigating the negative effect of short-term measurement errors. Results and Discussion. For the tests, the Morlaco robot, designed at the Robotics and Society Group of the University of Salamanca (GroUsal), was used.
It was equipped with a Sick LMS 200 laser range finder and an inexpensive Webcam Pro 9000 monocular camera (Logitech, Lausanne, Switzerland). Both sensors were connected to a Lenovo ThinkPad Edge laptop with an Intel Core i3 U380 CPU with two 2.54 GHz cores and 2 GB of RAM (Lenovo, Beijing, China), where CARMEN was running. Our proposed USAR task takes place in an office building where a hazardous material leak, epidemiological accident or bacteriological attack has occurred and the building structure was not damaged. In this scenario, the Morlaco robot carried out typical Search and Rescue activities. As Morlaco has heat, sound and video sensors, it could wander through the building in a victim-search task and, finally, answer questions like "Where is the victim?" or "How can the victim be reached?". The tests were carried out on the second floor of the Faculty of Sciences of the University of Salamanca, which has an extension of about 5000 m^2 and a significant number of doors. Figure 2 shows the environment where the experiments were performed. It is a large environment in which other approaches fail due to the very large number of required particles. As a working procedure, we moved the robot around different locations of the floor, saving all robot perceptions and odometry data using the logger tool from CARMEN. We also provided the initial position of the robot to CARMEN to let it successfully track its movement and thus have an accurate estimate of the actual position of the robot to compare with the result of our algorithm. Later, using the tool that CARMEN provides to replay logs, we ran multiple tests, changing different configuration values and comparing the results. In order to ease the understanding of the proposed ESL method, we first show a case study in detail. Later, our method is compared to an SRL filter working in the same circumstances. Finally, the number of particles required by our approach is compared to that of a random initialization, and the results of 50 different tests of the ESL method in our working environment are shown. Case Study. Among all the tests we made, we are going to focus on one of them. This case study is especially interesting because it shows the behavior of our concurrent particle filter in difficult circumstances. The performance of the filter can be observed in the video (http://gro.usal.es/Localization.mp4), where the estimated position of the Morlaco robot is shown alongside the images of the camera used for the detection of doors (stimuli). The video shows the robustness of the method in situations where no facts are detected (the doors are marked as small black circles) or even false facts are detected (false positives and negatives) due to perception errors or inaccuracies in the map, so the behavior of the procedure is shown in a realistic fashion. If there are no doors in the robot's surroundings for a long period of time, the ST filter will not be fired and only the LT filter will keep running; that is, the existing particles will be propagated using the odometry, although the uncertainty will increase. However, as soon as a stimulus, such as doors, appears, the ST filter will be fired and, in a short period, it will introduce some high-quality particles (with small uncertainty) into the LT filter, which will then reach a more satisfactory state with better particles.
In this way, the global localization process is clearly enhanced. It would be useless to compare localization algorithms in an ideal use case in which the map is perfect and the external measurements are accurate, because all methods should succeed. Instead, during our case study, inaccurate perceptions force our double filter to demonstrate that it can provide good results even in that hard scenario. The ST filter is reset several times according to our method (eight times can be observed in the video). Due to inaccuracies in the external measurements, the ST filter succeeded in some of the iterations and failed in others, but it never caused problems to the LT filter, which remained stable once the correct position of the robot was found. Figure 2 shows the state of our concurrent particle filter just after the first resampling phase involving both filters. The big circles with a transparent background and different sizes and colors are the clusters of particles of each filter (red for the LT filter and green for the ST filter). The radius of each circle is related to the sum of the probabilities of the particles in the cluster (p_C). The actual location of the robot at that moment was in the main horizontal corridor, near the fourth office from the left side, in the middle of the screenshot. The actual position is displayed as a big blue circle and the estimated position is shown as a big yellow circle, both overlapping in Figure 2. In Figure 4, we can see the temporal evolution of our solution. The upper graph corresponds to the accuracy of both particle filters (the probability accumulated near the correct location). The red line is associated with the LT filter and the green line corresponds to the ST filter. The second plot corresponds to the entropy H_t (Equation (1)) for both filters. The third graph represents the number of detected doors Inf_t (Equation (3)). It can be observed that most of the time a small number of facts are detected (one or even zero), but several times the firing event is met. The last graph shows the total distance traveled by the robot, so we can observe the amount of time and distance needed to obtain a correct localization. Firing events and resetting conditions are tagged in Figure 4 with vertical lines. All of them are also labeled in the video to give a complete understanding of the filter behavior. As can be seen in the video and in Figure 4, only a few meters are needed (about 8 m, t = 35 s) to detect the correct position with a normalized accuracy (Equation (7)) close to 1, as will be demonstrated later with a large set of executions. It is also very relevant that the correct localization is successfully maintained over time, even in situations where no information is available or the ST filter is wrong. As can be observed, the initialization of the LT filter (t = 4 s) puts some particles in the correct position at the beginning of our test. In the vicinity of t = 18 s (about 4 m), the first firing event (Inf_t > 1) starts the ST filter. After 7.5 m (t = 54 s), the entropy of the ST filter becomes low enough to trigger the resetting condition. We have experimentally determined that a reasonable threshold is H_T = 3. The moment of the first resetting condition is spatially represented in Figure 2. When particles of the ST filter are sampled in the resampling phase of the LT one (we reserve 20% of the probability for them), the entropy of the LT filter rises a bit and, very quickly, the LT filter converges to the correct position.
About 7 s later, the firing event is met again, so the ST filter starts a new iteration. In order to evaluate the robustness of the method, we define the plausibility V of the filter in terms of two per-cluster quantities: the normalized accuracy of the cluster, A_C = 1/(D_C + ε), i.e. the inverse of the distance from the center of the cluster to the correct position, where D_C is the distance from the cluster center to the robot position and ε is a small number greater than 0 that avoids singularities; and the normalized size of the cluster, N_C, the fraction of the filter's probability accumulated in the cluster. In Figure 5, each cluster is represented by its normalized size N_C on the horizontal axis and its normalized accuracy A_C on the vertical axis. Clusters located at the actual position of the robot are green and the others are red. The different plots show the state of the clusters (lost or not) for both filters when the resetting condition is met and the LT filter takes particles from the ST one. Each row of plots corresponds to a different resetting condition (timestamps in the video). In each row, the first plot represents the state of the LT filter before the hybrid resampling, the second one the state of the ST filter, and the third one the state of the LT filter after the hybrid resampling. As can be observed at the first timestamp, the value of V for the filter is not very high. This situation is also described in Figure 4. At the same time, the ST filter has recovered enough information from the environment to place particles at the correct location (a green cluster, although without the majority of the particle population). The result of the first resetting process is shown in the upper-right plot of Figure 5a: the LT filter has incorporated new particle clusters that let it explore places in the map that otherwise would not be sampled anymore, thereby helping it to recover from errors without compromising its stability. In the next resetting condition (timestamp 2), the few correct particles added by the ST filter (at the first timestamp) have evolved with the subsequent observations and have managed to bring all particles of the LT filter to the correct localization. Although the ST filter added wrong particles to the LT filter at the second (more than 60% wrong particles) and third (almost all wrong particles) resetting conditions, the LT filter discarded them before the next iteration. So, we can see how the addition of wrong particles from the ST filter does not cause any problem for the LT one, mainly because it is very hard for wrong particles in the ST filter, added on the basis of wrong external measurements from the past, to compete with correct particles in the LT filter when they are evaluated with independent external measurements (obtained several meters away from the original ones). In these two situations, the entropy of the LT filter increases slightly, but the filter estimate remains robust over time and discards the incorrect ST filter particles in a short time. Comparison of ESL with SRL. In order to prove the feasibility of our ESL approach, we have compared it with the performance of an SRL implementation in the same scenario. Figure 6 shows how our proposal correctly detects the position of the robot at a moment of the test at which SRL is unable to do so. The complete execution can be observed in a video (http://gro.usal.es/ComparisonWithSRL.mp4).
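Complementing the Figure 5 analysis above, here is a brief sketch of the two per-cluster quantities it uses; the exact aggregation of A_C and N_C into the plausibility V is not reproduced, and epsilon is an assumed small constant.

```python
import math

def cluster_accuracy(center, true_position, epsilon=1e-3):
    """Normalized accuracy A_C = 1 / (D_C + epsilon), D_C being the distance
    from the cluster center to the actual robot position."""
    d_c = math.hypot(center[0] - true_position[0], center[1] - true_position[1])
    return 1.0 / (d_c + epsilon)

def cluster_size(cluster, clusters):
    """Normalized size N_C: the cluster's share of the filter's total probability."""
    return cluster["prob"] / sum(c["prob"] for c in clusters)
```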
The continuous addition of particles based on the probability distribution provided by the sensors helps the SRL filter to recover from errors (like our ST filter), but it also destabilizes it when there are perception errors or inaccuracies in the map. As a consequence, SRL loses track of the correct position of the robot several times, while our ESL keeps it very accurately during the whole test. So, our method retains the main advantage of SRL but avoids its main drawback. In Figure 6, it can be seen that the robot in SRL is wrongly placed at a different position (bottom right), while in our ESL the robot is "strongly" placed (the red circle, in the middle, is large). Number of Particles. Based on the previous case study, we ran several tests using ESL with different numbers of particles to determine how many are needed to obtain successful results and how many to obtain repeatable results across tests. During the initialization phase of the LT and ST filters, the number of candidates for the current localization of the robot, based on the detected doors and the GIS map, is between 199 and 375 in our case study. For that reason, we started using 400 particles in each filter. With a lower number of particles, it would not be possible to sample all candidate localizations, not even with one particle each. We ran 10 tests with 400, 800, 1200, 2500 and 5000 particles per filter, using in all cases the same odometry and perception data as in our case study. The obtained results can be found in Table 1. Repeatability is assessed by executing each initialization 10 times. Very low indicates, in a qualitative way, that the clusters obtained are completely different from one execution to another and the results vary so much that the localization is not credible. Low indicates that the clusters do not present such a big variation, but the result has lower quality. The high value is assigned when the main clusters appear in every execution and, finally, at very high, no difference between clusters can be appreciated. From Table 1, it can be concluded that when 5000 particles are used, the localization result can be trusted with great confidence. Based on the previous case study, we have also run several tests to compare the ESL initialization with a random initialization using different numbers of particles (Table 2). We started with 5000 particles because, due to the extension of our environment (about 5000 m^2), it would be very unlikely to sample the correct position with a lower number of particles in a random initialization. Using 5000 particles, all 10 of our tests failed. Table 2 shows that by using 50,000 particles we obtained a 20% success rate. With 100,000 particles, our CPU was already at 100% usage, but 60% of the results were correct. Using 200,000 particles, the delay in data processing was clearly noticeable; probably for that reason, the number of successful tests was reduced to 30%. Tests with 500,000 particles made it clear that the hardware was unable to process the data fast enough, and they provided wrong results in all cases. To obtain at least a 60-70% success rate, we needed 400 particles per filter using our ESL initialization method, and 100,000 using a random initialization. The ESL method thus reduced the number of required particles by a factor of around 250. Our method also made it possible to obtain successful and repeatable results in all executions in our environment, while this was impossible for our hardware using a random initialization.
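The success-rate tests reported in the next subsection score each run with a combined position and orientation error, counting 20 degrees of heading deviation as 1 m and declaring success below 2 m of error; a minimal sketch of that metric (function names chosen for illustration):

```python
import math

def pose_error(estimated, actual):
    """Combined error: Euclidean distance plus 1 m per 20 degrees of heading error.
    estimated, actual: (x, y, heading_in_degrees) tuples."""
    d = math.hypot(estimated[0] - actual[0], estimated[1] - actual[1])
    dtheta = abs((estimated[2] - actual[2] + 180.0) % 360.0 - 180.0)
    return d + dtheta / 20.0

def is_success(estimated, actual, threshold=2.0):
    """A test counts as successful when the combined error is below 2 m."""
    return pose_error(estimated, actual) < threshold
```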
Success Rate. In order to determine the success rate of the ESL method, we carried out 50 different tests on our case study, so that possible failures in different circumstances could be repeated and analyzed. During the tests, the Morlaco robot was moved between different places in our environment. We used 5000 particles, since our hardware can still run the algorithm in real time with that number. We stored the error of the position estimated by the ESL algorithm at each moment. In order to take into account not only the position of the robot but also its estimated orientation, we counted every 20 degrees of deviation as 1 m of error. Figure 7a shows the error of all tests (in meters) over a displacement of 12 m. We can see that, as the robot travels more meters, more tests start providing successful results. For better readability, Figure 7b shows the number of tests that provide a successful result (less than 2 m of error) versus the distance traveled by the robot. In just 4 m, the robot was already correctly localized in 50% of the tests. After 9 m, our method was providing the correct position of the robot in 90% of the tests. After 12 m, 98% of the tests were successful. Since only one test failed after 12 m, we investigated it to find the cause. When we repeated that test using the same data, we saw that the first initialization failed due to false positives in the door detection algorithm. Some meters later, the ST filter was able to start sampling the correct position and sent some correct particles to the LT filter but, due to symmetries in the environment and inaccurate door detections, another position accumulated more probability. The problem persisted for several meters in which the robot was unable to perceive any door, up to the 12 m mark that we considered the end of the test. Allowing the test to continue, as soon as the robot started detecting new doors, the localization algorithm started providing the correct localization, in this case after a total movement of 18 m. So, even in cases like this one, with very adverse circumstances, our method was finally able to provide the correct localization of the robot as soon as it perceived significant information from the environment. We consider the fact that the robot achieves a global localization in 90% of the tests with displacements of less than 9 m to be very satisfactory, given the large workspace (5000 m^2); any human in the same scenario would have a similar impression. On the other hand, the measurement accuracy is supported by the door detection precision which, although influenced by the observation angle, is around 1 cm [19]. Conclusions. In this work, we have proposed a new method for indoor robot localization based on two concurrent particle filters, one of which reacts to additional information provided by the environment (stimuli). The localization can be related to a BIM approach, so this method can be quite useful in search and rescue missions or domestic task completion. The proposed method has been applied to the localization of a real robot working in a large and complex environment. Intense experimental work has been carried out with realistic tests in which the robot follows different paths. In 90% of the tests, after 9 m of travel, our method provided the correct position of the robot, despite persistent errors in perceptions, errors in the map or unmodeled robot movements.
There are several reasons that explain and justify the results obtained by this algorithm. The first is the importance of the filter initialization. If we want to use a reduced number of particles in a large map, we must rely on the sensor measurements to place our particles in the most probable locations. If these measurements are erroneous, the initialization will be erroneous too, and the filter will probably fail. The stimuli-based reinitialization of our ST filter gives the LT filter more chances to find the correct location of the robot. The second is the independence between the initialization data of the ST filter and the correction data in the LT filter after the hybrid resampling (usually far enough apart in time and distance), which prevents the extra particles coming from the ST filter from destabilizing the LT one. In methods like SRL or Mixture-MCL, the new particles are added directly to the filter and are corrected using the same data used to generate them or, at least, data with a strong conditional dependence on it (because they start being evaluated immediately after the addition). This gives new particles an advantage over the others, worsening the negative effects of measurement errors. With our method, even if the added particles were initialized based on wrong measurements, they do not cause the LT filter to converge to new erroneous locations and, at the same time, ESL still has the advantages of SRL or Mixture-MCL: fast recovery from errors or unmodeled robot movements. The robustness of our ESL approach has been proved in a large real indoor environment. As future work, we could consider the use of several secondary (ST) filters whose stimuli are fired from different information sources or different kinds of characteristics or facts, such as doors, columns or any other features that can be contained in a GIS map. In this way, in environments where the information is repetitive (in fact, poor), like a corridor with symmetrically distributed doors or with few doors, we could use the other stimuli that do provide useful information, so that the particles have higher quality. An open problem would be the priority to assign to these secondary filters if several of them are fired at the same time. Conflicts of Interest: The authors declare no conflict of interest.
11,834.2
2020-04-01T00:00:00.000
[ "Computer Science", "Engineering" ]
GRADIENT BOUNDS FOR STRONGLY SINGULAR OR DEGENERATE PARABOLIC SYSTEMS. We consider weak solutions u : Ω_T → R^N to parabolic systems of the type u_t − div A(x, t, Du) = f, as in (1.1) below. Introduction. In this paper, we are interested in the regularity of weak solutions u : Ω_T → R^N of strongly degenerate or singular parabolic systems of the type ∂_t u^i − div A^i(x, t, Du) = f^i for i ∈ {1, . . ., N} in Ω_T = Ω × (0, T). (1.1) Here and in the following, Ω is a bounded and open subset of R^n (n ≥ 2), T > 0 and N ≥ 1. The exact assumptions on the vector field A = (A^1, . . ., A^N) : Ω_T × R^{Nn} → R^{Nn} and the vector-valued inhomogeneity f = (f^1, . . ., f^N), N ≥ 1, will be discussed in detail later. As our main result, we will show that weak solutions are locally Lipschitz-continuous in the spatial directions, i.e. that the spatial gradient Du is locally bounded. The chosen terminology can be illustrated by the model system of equations. In fact, in the prototypical system we have in mind, the vector field A is given by A(Du) = (|Du| − 1)_+^{p−1} Du/|Du| for some p > 1. First, we note that any time-independent 1-Lipschitz continuous function u is a solution of the homogeneous model system. Accordingly, well-established methods from regularity theory, using the second weak spatial derivatives of u, cannot be utilized. We will circumvent this difficulty by approximation. Before we specify in detail the structure of the considered system of equations, we briefly discuss some results already available in the literature. So far, most progress has been made for the associated elliptic, i.e. time-independent, problem. L. Brasco [4] proved the Lipschitz continuity of weak solutions. A. Clop, R. Giova, F. Hatami and A. Passarelli di Napoli [9] generalized this result to systems (N ≥ 2). The two previous works arise from the study of strongly degenerate functionals. These can be regarded as asymptotically convex functionals, i.e. functionals having a p-Laplacian-type structure only at infinity. This class of functionals has been widely investigated since the local Lipschitz regularity result by Chipot and Evans [8]. In particular, we mention generalizations allowing super- and sub-quadratic growth [22,25,29], as well as lower order terms [28]. Extensions to various other contexts can be found in the non-exhaustive list [12]-[14], [16]-[20], [32]. The first result concerning gradient continuity was achieved by F. Santambrogio and V. Vespri [31]. The authors restricted themselves to equations (rather than systems) and to the dimension n = 2, and proved that a gradient-dependent function of the form g(Du) is continuous whenever g is continuous and vanishes on the set of degeneracy, i.e. on the unit ball in the case of the prototypical equation. Subsequently, M. Colombo and A. Figalli [10,11] extended this result to arbitrary dimensions n ≥ 2. A similar result in the vectorial setting was recently shown by V. Bögelein, F. Duzaar, R. Giova and A. Passarelli di Napoli [6]. L. Mons [27] extended this result to systems satisfying more general structural conditions. By contrast, little is known so far for the time-dependent problem. The first author [2] succeeded in showing a fractional higher differentiability result. Furthermore, A. Gentile and A. Passarelli di Napoli, as well as the first author and A. Passarelli di Napoli, obtained higher differentiability results in [21] and [3], respectively. In this paper we prove the boundedness of the spatial gradient under quite general assumptions on the vector field A.
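As a side note to the introduction above, the following short computation, assuming the prototypical vector field written there (itself an inference from the stated properties rather than a quoted formula), verifies the claim that time-independent 1-Lipschitz functions solve the homogeneous model system.

```latex
% Assumed prototype: A(\xi) = (|\xi|-1)_+^{p-1}\,\xi/|\xi|, with A(0):=0 and p>1.
% If u = u(x) is 1-Lipschitz and time-independent, then |Du| \le 1 a.e., so
\[
  A(Du) \;=\; \bigl(|Du|-1\bigr)_+^{\,p-1}\,\frac{Du}{|Du|} \;=\; 0
  \qquad \text{a.e. in } \Omega_T ,
\]
\[
  \partial_t u - \operatorname{div} A(Du) \;=\; 0 - 0 \;=\; 0 ,
\]
% hence u is a weak solution of the homogeneous system; in particular the
% operator provides no ellipticity on the degenerate set \{|Du| \le 1\}.
```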
In particular, this can be seen as a parabolic analogue of Brasco's Lipschitz-continuity result. Our main result reads as follows; for notation and definitions we refer to Section 2. In the subcritical case 1 < p ≤ 2n/(n + 2), the weak solutions to (1.1) may not be bounded (see [15], and note that the p-Laplacian satisfies our growth assumptions). Therefore, the extra assumption u ∈ L^∞(Q_R(z_0), R^N) in Theorem 1.1 (b) is natural. Note that some widely degenerate systems can be interpreted, similarly to functionals, as asymptotically regular systems. Therefore, in the special case that A(x, t, Du) ≡ A(Du) ≈ |Du|^{p−2} Du for |Du| ≫ 1 and p > 2n/(n + 2), we recover the results obtained in [24,5]. Finally, we explain the choice of an auxiliary dimension parameter ñ, which equals n for n ≥ 3 and 2 + β for n = 2, for some β > 0. If n = 2 and 1 < p < 2, we choose β in such a way that 0 < β < 4(p − 1)/(2 − p). (1.4) This choice ensures that in the case n = 2 we have p > 2ñ/(ñ + 2) for any p > 1. Indeed, this condition will be decisive in carrying out the proof in Subsection 6.3. 1.1. Structural conditions on the vector field A. To specify the structure of the vector field A : Ω_T × R^{Nn} → R^{Nn}, we consider a function F = F(x, t, s), defined for (x, t) ∈ Ω_T and s ≥ 0, which is convex with respect to the s-variable and vanishes for all s ∈ [0, 1]. Furthermore, we shall assume that the partial map s → F(x, t, s) is in C^1(R_+) ∩ C^2(R_+ \ {1}) for almost every (x, t) ∈ Ω_T, while for every (t, s) ∈ (0, T) × [0, ∞) the map x → F(x, t, s) is differentiable almost everywhere. We additionally suppose that there exist an exponent p > 1 and some positive constants L, C_1 > 1 and K such that, for all s > 1 and for almost every x, y ∈ Ω and t ∈ (0, T), the function F satisfies the growth assumptions (H1)−(H4). Then, the vector field A : Ω_T × R^{Nn} → R^{Nn} is defined by A(x, t, ξ) := (∂_s F(x, t, |ξ|)/|ξ|) ξ. Note that for the prototypical case, in which F involves a coefficient a : Ω_T → R_+ such that x → a(x, t) is a Lipschitz continuous function bounded from below by a positive constant, one can easily deduce that the growth conditions (H1)−(H4) are fulfilled. If, furthermore, a(x, t) ≡ 1, then we recover the vector field in (1.2). 1.2. Strategy of the proof. Our approach is inspired by [9,12,16]. The proof will be achieved by a parabolic Moser iteration technique. However, the implementation is quite subtle due to the degeneracy of the differential operator. Since weak solutions may not be twice weakly differentiable with respect to the x-variable, we approximate the original system (1.1). The approximation is chosen in such a way that the regularized vector field A_ε, ε > 0, satisfies standard p-growth assumptions. However, proceeding at this point with a standard Moser iteration, the constants would blow up as ε ↓ 0.
This problem will be overcome by the choice of an appropriate test function in the derivation of the Caccioppoli-type inequality. At this point it is helpful to realize that, for δ > 0 large enough and |Du_ε| > 1 + δ, the p-growth conditions of the approximating systems are satisfied independently of ε. Therefore, we choose a test function vanishing on the set {|Du_ε| ≤ 1 + δ}. We note that the choice of such a test function requires the existence of second weak spatial derivatives, which is the reason why we had to introduce the approximating solutions. This test function allows us to obtain a Caccioppoli-type inequality, and in turn we set up a kind of Moser iteration scheme. Note that, similarly to the treatment of the parabolic p-Laplacian, in the Moser iteration we have to distinguish between three different regimes of the parameter p. The first one is the superquadratic case p ≥ 2, the second one is the subquadratic supercritical case 2n/(n + 2) < p < 2, and finally we have the subcritical case 1 < p ≤ 2n/(n + 2). In the last case, we need to ensure that the approximating solutions u_ε are also uniformly bounded with respect to ε. This is achieved by a maximum principle. In all three cases, we prove the boundedness of the spatial gradient Du_ε of the approximating solutions together with quantitative estimates. Here it is worthwhile to note that our quantitative estimates are uniform in ε. Since the approximating solutions converge strongly in L^p to the original solution, the statement of Theorem 1.1 follows after passing to the limit as ε ↓ 0. 1.3. Plan of the paper. In Subsection 2.1, we introduce the notation adopted throughout the paper. In Subsections 2.2 and 2.3, we recall some basic facts on the function spaces used and on the regularization in time. In the following Subsections 2.4 and 2.5, the approximating vector field is defined and its growth properties are obtained; in doing so, we distinguish between the subquadratic and superquadratic cases. Subsection 2.6 is devoted to some algebraic inequalities needed for our purposes. In Section 3, we define the approximating problems and prove a strong convergence result in L^p. As a by-product, we obtain the uniqueness of the weak solutions to a Cauchy-Dirichlet problem associated with system (1.1). In Section 4, we establish a maximum principle for solutions to the approximating problem. This allows us to show that the approximating solutions are essentially bounded, provided that the original solution is itself essentially bounded. In Section 5 we lay the groundwork for Section 6, where we prove local L^∞-bounds for the spatial gradient of the approximating solutions. The proof is divided into several subsections. In Subsection 6.1 we establish a suitable Caccioppoli-type inequality, from which we derive a reverse Hölder-type inequality in Subsection 6.2. In these subsections, we need to address separately the three regimes of the parameter p referred to above. For this reason, the Moser iteration procedure is split into two steps: in Subsection 6.3 we deal with the supercritical case, while in Subsection 6.4 we consider the subcritical one. Finally, in Section 7 we give the proof of Theorem 1.1. Acknowledgements. Part of this paper was written while P.
Ambrosio was visiting the University of Salzburg in November and December 2022.He is thankful to Verena Bögelein and the host institution for their kind invitation and constant support.The authors gratefully acknowledge fruitful discussions with Verena Bögelein and Antonia Passarelli di Napoli, who provided valuable comments and suggestions during the preparation of the manuscript.This work has been partially supported by the FWF-Project P36295-N.P. Ambrosio is a member of the GNAMPA group of INdAM that partially supported his research through the INdAM−GNAMPA 2024 Project "Fenomeno di Lavrentiev, Bounded Slope Condition e regolarità per minimi di funzionali integrali con crescite non standard e lagrangiane non uniformemente convesse" (CUP: E53C23001670001). Preliminaries 2.1.Notation and essential tools.In this paper we shall denote by C or c a general positive constant that may vary on different occasions, even within the same line of estimates.Relevant dependencies on parameters and special constants will be suitably emphasized using parentheses or subscripts.The norm we use on the Euclidean spaces R k will be the standard Euclidean one and it will be denoted by |•|.In particular, for the vectors ξ, η ∈ R N n , we write ξ, η for the usual inner product and |ξ| := ξ, ξ 1 2 for the corresponding Euclidean norm.For points in space-time, we will frequently use abbreviations like z = (x, t) or z 0 = (x 0 , t 0 ), for spatial variables x, x 0 ∈ R n and times t, t 0 ∈ R. We also denote by B ̺ (x 0 ) = {x ∈ R n : |x − x 0 | < ̺} the n-dimensional open ball with radius ̺ > 0 and center x 0 ∈ R n ; when not important, or clear from the context, we shall omit to denote the center as follows: B ̺ ≡ B ̺ (x 0 ).Unless otherwise stated, different balls in the same context will have the same center.Moreover, we use the notation for the backward parabolic cylinder with vertex (x 0 , t 0 ) and width ̺.We shall sometimes omit the dependence on the vertex when all cylinders occurring in a proof share the same vertex. For a general cylinder Q = B × (t 0 , t 1 ), where B ⊂ R n and t 0 < t 1 , we denote by where 1 denoting the standard, non-negative, radially symmetric mollifier in R n+1 .Obviously, here the function υ is meant to be extended by the k-dimensional null vector outside Q. In this work, we define a weak solution to (1.1) as follows: We conclude this first part of the preliminaries by recalling the following iteration lemma, which is a standard tool for "reabsorbing" certain terms and can be found, for example, in [23,Lemma 6 for all ρ 0 ≤ ρ < r ≤ ρ 1 , for some σ > 0, ϑ ∈ [0, 1) and a non-negative constant C.Then, there exists a constant κ ≡ κ(σ, ϑ) > 0 such that for all ρ 0 ≤ ρ < r ≤ ρ 1 we have Orlicz spaces. Here we recall some basic properties of Orlicz spaces that will be needed later on (for more details, we refer to [1]).Let Ψ : [0, ∞) → [0, ∞) be a Young function, i.e.Ψ(0) = 0, Ψ is increasing and convex.If Σ is an open subset of R k , we define the Orlicz space L Ψ (Σ) generated by the Young function Ψ as the set of the measurable functions v : for some λ > 0. When equipped with the Luxemburg norm The Zygmund space L q log α L(Σ), for 1 ≤ q < ∞, α ∈ R (α ≥ 0 for q = 1), is defined as the Orlicz space L Ψ (Σ) generated by the Young function Ψ(s) ≃ s q log α (e+ s) for every s ≥ s 0 ≥ 0. Therefore, a measurable function v on Σ belongs to L q log α L(Σ) if Moreover, we record that for the function holds for every v ∈ L q log α L(Σ) and some θ = θ(q) > 0 (see [9]). Steklov averages. 
In this section, we recall the definition and some elementary properties of Steklov averages.Let us denote a domain in space time by Q ′ := Ω ′ × I, where Ω ′ ⊆ Ω is a bounded domain and I := (t 1 , t 2 ) ⊆ (0, T ).For every h ∈ (0, where k ∈ N, the Steklov average [v] h (•, t) is defined by for x ∈ Ω ′ .This definition implies, for almost every The proof of the following result is straightforward from the theory of Lebesgue spaces (see [15,Lemma 3.2]). For further needs, we now recall the well-known Steklov average formulation of (1.1) in 2.4.Approximation of the function F for 1 < p ≤ 2. As already mentioned, we assume that for almost every (x, t) Moreover, we notice that in the case 1 < p < 2 both bounds for ∂ ss F (x, t, •) in (H 3 ) blow up as s → 1 + .This very singular behavior of F must be avoided, since we need to use the second derivative ∂ ss F to establish a local bound for the spatial gradient of the weak solutions to (1.1).Therefore, for 1 < p < 2 and for almost every (x, t) ∈ Ω T we approximate the partial map s → F (x, t, s) by smoothing it around the singularity of ∂ ss F (x, t, •), in such a way that the resulting approximation F ε (x, t, •) coincides with F (x, t, •) outside a small neighborhood of the singularity s = 1.Thus, in this section we will assume that 1 < p ≤ 2, unless otherwise stated.For ε ∈ 0, 1 2 we define the function where η 1 ∈ C ∞ 0 ((−1, 1)) denotes the standard, non-negative, radially symmetric mollifier in R. Keeping this definition in mind, in what follows we will show that an approximation of F (x, t, •) is given by Fε (x, t, s) := F (x, t, v ε (s) + 1), ε ∈ 0, 1 2 .Firstly, we need to prove that Fε satisfies non-degenerate growth conditions for 0 < ε < 1 2 .Therefore, we begin our analysis by studying the growth of this function. By assumption, we know that for almost every (x, t) ∈ Ω T the map s → Fε (x, t, s) belongs to C 2 (R + ).For later purposes, we now note that one can easily check that for all s ≥ 0 and for some universal constant C > 0. Furthermore, from the growth assumption (H 1 ) and from definition (2.3) we can deduce that for all s ≥ 1 + 2ε and for almost every (x, t) ∈ Ω T .As for the derivatives of Fε with respect to the s-variable, for almost every (x, t) ∈ Ω T we have and and from assumptions (H 2 ) and (H 3 ) it follows that ∂ s Fε , ∂ ss Fε ≥ 0.Moreover, combining (2.5), (2.6), (H 2 ) and the fact that v ε (s) ≤ max{2ε, s − 1} ≤ s for every s > 1 and every ε ∈ 0, 1 2 , for almost every (x, t) ∈ Ω T we obtain for any s > 0 and any ε ∈ 0, 1 2 .As for the second derivative ∂ ss Fε , combining (2.5), (2.7), (H 2 ) and (H 3 ), for every s > 0 and for almost every (x, t) ∈ Ω T we find Now, using the fact v ε (s) ≤ 2ε for s < 1+2ε, we can estimate the second term on the right-hand side of (2.8) as follows: (2.9) In order to deal with the first term on the right-hand side of (2.8), we distinguish between the cases 1 < s < 1 + 2ε and s ≥ 1 + 2ε.In the first case, we have v ε (s) ≥ ε and therefore we get (2.10) If, on the other hand, s ≥ 1 + 2ε, then we have v ε (s) = s − 1 due to (2.4).In this case, using that (s − 1) 2 ≥ ε 2 4 (1 + s 2 ) we obtain (2.11) Joining estimates (2.8)−(2.11),for every s > 0 and for almost every (x, t) ∈ Ω T we then have where c ≡ c(p, C 1 ) > 0. This concludes the necessary growth estimates of Fε .Now, in order to prove that the function Fε is indeed a good approximation of F , it remains to analyze the behavior of Fε as ε ց 0. To this end, we notice that (2.4) immediately implies for s / ∈ (1, 1 + 2ε). 
(2.12) Furthermore, for s ∈ [1, 1+2ε] we can estimate the difference of these two derivatives as follows: Let us explicitly observe that the last term tends to zero as ε ց 0. To ensure the convergence result of Lemma 3.4 below, we need to accelerate the rate of convergence.For this reason, for any 1 < p < 2 we now define the new approximation Collecting the above conclusions, we note that F ε has the following properties: Lemma 2.4.For every ε ∈ (0, 2 1−p ), almost every (x, t) ∈ Ω T and every s ∈ R + 0 , we have However, notice that 2 1−p ≥ 1 2 whenever 1 < p ≤ 2, which implies that the previous lemma holds for any ε ∈ (0, 1/2). 2.5.Approximation of the vector field A. With the approximation (2.14) of F and Lemma 2.4 in mind, we can define for every ε ∈ (0, 1/2) the vector field (2.15) We thus approximate the structure function A by means of the vector fields A ε , in order to be allowed to apply some results from the theory of singular or degenerate parabolic systems to the weak solutions of problem (P ε ), introduced in Section 3. Therefore, we now need to check whether A ε fulfills non-degenerate growth conditions.This is what we will do hereafter. From the growth conditions of ∂ s F ε and from the structure of the approximation, for any p > 1 and for any ε ∈ (0, 1 2 ) we immediately obtain for every ξ ∈ R N n and for almost every (x, t) ∈ Ω T .As for the spatial gradient of A ε , by the assumption (H 4 ) we have (2.17) for every ξ ∈ R N n , for every ε ∈ (0, 1 2 ) and for almost every (x, t) ∈ Ω T .Now we want to examine the structure of D ξ A ε (x, t, ξ).To this end, for any ξ ∈ R N n \ {0} and for any ε ∈ (0, 1 2 ) we define the bilinear form and observe that The next lemma provides the relevant ellipticity and boundedness properties of the bilinear form A ε (x, t, ξ).Lemma 2.5.Let 1 < p < ∞ and ε ∈ (0, 1 2 ).Then, there exists a positive constant (2.19) ) Proof.From (2.15) it follows that In order to prove the assertion, we distinguish between two cases.When ∂ s h ε (x, t, |ξ|) < 0, from Lemma 2.4 and the growth assumption (H 2 ) we obtain where we have applied the inequality for s > 0 and p > 2. This proves the upper bound in (2.19), and the one in (2.20) is an immediate consequence.Moreover, using the Cauchy-Schwarz inequality, we get thus proving the lower bound in (2.19).To obtain the lower bound in (2.20), we observe that for some positive constant c ≡ c(p, C 1 , δ). In the case ∂ s h ε (x, t, |ξ|) ≥ 0, applying the Cauchy-Schwarz inequality again, from Lemma 2.4 and the growth condition (H 3 ) we get where we have used the inequality (s − 1) p−2 , which holds for every s ≥ 0 and every p > 2. To obtain the lower bound in (2.19), we neglect the term ∂ s h ε (x, t, |ξ|) and use the fact that for every p > 1. 
Finally, to establish the bounds in (2.20), one can argue as above.This time, for |ξ| ≥ 1 + δ 2 we are allowed to use the growth assumptions (H 2 ) and (H 3 ) also in the case 2 |ξ| 2 by zero from below and make use of After doing this, we get the desired estimates by means of the following inequalities, which hold for and (2.22) From the previous lemma, it follows that the bilinear form (λ, ζ) → A ε (x, t, ξ)(λ, ζ) defines a scalar product on the Euclidean space R N n 2 .As for the modulus of A ε , we get the following result: Lemma 2.6.Let 1 < p < ∞ and ε ∈ (0, 1 2 ).Then, there exists a positive constant C ≡ C(p, C 1 , ε) such that, for any ξ ∈ R N n \ {0} and any λ, ζ ∈ R N n 2 , we have Now, by (H 2 ) and (H 3 ), in the case p > 2 we get In the case 1 < p ≤ 2, we use the estimates from Lemma 2.4 to find that We thus obtain the first conclusion of this lemma.Finally, due to equality (2.12), if |ξ| ≥ 1 + δ 2 we only need to use the assumptions (H 2 ) and (H 3 ) together with the inequalities (2.21) and (2.22) to obtain from (2.23) These inequalities conclude the proof. 2.6.Algebraic inequalities.In this section, we gather some relevant algebraic inequalities that will be needed later on.We start with an elementary assertion, which will be used in the Moser iteration. Lemma 2.7. (2.25) Proof.The proof of the second inequality is similar to the one of Lemma 2.3. in [7].Thus we only prove the first inequality: For δ ∈ (0, 1] we define the auxiliary function The following two lemmas are concerned with auxiliary estimates for the functions A ε and G δ defined above. Lemma 2.8.Let 1 < p < ∞, δ > 0 and ε ∈ (0, 1 2 ).Then, there exists a positive constant Proof.Here we argue as in [6,Lemma 2.8].The inequality on the left-hand side of (2.20) implies that It remains to estimate the integral in the right-hand side of (2.26).To this end, we distinguish whether or not this implies a bound from below in the form Thus, we obtain that In the case | ξ| > |ξ|, we estimate |ξ s | from below as follows Therefore, for s ∈ 3|ξ|+1+δ/2 4|ξ| , 1 we get which yields Using the previous lemma, we can easily achieve the following result: Then, for every ν > 0 and almost every x ∈ Ω we have Therefore, the claimed inequality immediately follows from Lemma 2.8, by the positivity of the right-hand side. Thus, we only need to consider the case where either |ξ| > 1 + δ or | ξ| > 1 + δ.In order to do this, we first assume that |ξ| ≥ | ξ|.Note that this implies while in the case 1 < p < 2 we use Young's inequality with exponents where we have used the fact that 0 < ε < Absorbing the last term on the right-hand side into the left-hand side, we obtain the desired result. A family of regularized parabolic problems In this section, we let u be a weak solution of (1.1) and introduce the approximating Cauchy-Dirichlet problem (P ε ), where ε ∈ (0, 1 2 ) is the approximation parameter.Denoting by u ε the unique weak solution of this problem, we will prove the strong convergence G δ (Du ε ) → G δ (Du) in L p as ε → 0. As an easy consequence of this convergence, we then establish the uniqueness of weak solutions to an initial-boundary value problem associated with (1.1). 
To set up the approximating problem, for ε ∈ (0, 1 2 ) we define the truncated vector-valued function f ε by (3.1)Moreover, we consider a space-time cylinder Q ′ := Ω ′ × I, where Ω ′ ⊆ Ω is a bounded domain and I := (t 1 , t 2 ) ⊆ (0, T ).In what follows, it will suffice to assume that In this framework, we identify a function as a weak solution of the Cauchy-Dirichlet problem if and only if u ε is a weak solution of (P ε ) 1 and, moreover, Remark 3.2.We know that the regularized parabolic system (P ε ) 1 fulfills standard p-growth conditions by virtue of (2.18) and Lemma 2.5.The existence of a unique weak solution to (P ε ) can be inferred from the classic existence theory, cf.[26, Chapter 2, Theorem 1.2 and Remark 1.2].By a difference quotient method, one can show that Du ε is locally bounded and that u ε admits second weak spatial derivatives in L 2 loc , see [15,Chapter 8].For this to hold in the subcritical case 1 < p ≤ 2n n+2 , we additionally have to require that u ε belongs to L r loc (Q ′ , R N ), where r ≥ 2 satisfies n(p − 2) + rp > 0 (see again [15,Chapter 8]).Since, in the subcritical case, we always assume that u ε ∈ L ∞ loc (Q ′ , R N ), the latter requirement is trivially fulfilled.Theorem 3.3.Let p > 1 and u ε be the weak solution of problem Moreover, assume that u ε satisfies the requirements of Remark 3.2 in the case We now prove the following strong convergence result: be a weak solution of (1.1) and assume that is the unique energy solution of problem (P ε ).Then, there exists a constant C ≡ C(p, n, δ) ∈ (0, min{ 1 2 , ( δ 4 ) p−1 }) such that for every ε ∈ (0, C), the estimate holds for some positive constant C ≡ C(p, C 1 , δ).In particular, this estimate implies that Proof.We observe that (u ε − u) ∈ L p (I; W 1,p 0 (Ω ′ , R N )) by the lateral boundary condition.Unfortunately, we cannot test systems (1.1) and (P ε ) 1 with the function u ε − u, since weak time derivatives might not exist.Therefore, we resort to the equivalent Steklov averages formulations of (1.1) and (P ε ) 1 , thus obtaining for every t ∈ I = (t 1 , t 2 ), for every h ∈ (0, t 2 − t 1 ) and every φ ∈ W 1,p 0 (Ω ′ , R N ) ∩ L 2 (Ω ′ , R N ).Then, for a fixed time slice Ω ′ × {t} we can choose the function φ(•, t) defined by as a test function in (3.4).For any fixed τ ∈ I, the term involving the time derivatives yields for every h ∈ (0, τ − t 1 ).Now we pass to the limit as h → 0. Using Lemma 2.3 and taking into account the growth conditions of A ε and the fact that u ε = u on Ω ′ × {t 1 } in the L 2 -sense, from (3.4) and (3.6) we obtain the following inequality for every τ ∈ I. Taking the supremum over τ ∈ I, we thus obtain We now apply Young's inequality with exponents (2, 2) to control the last integral as follows Joining the two previous estimates, we get In order to estimate the first integral on the right-hand side of (3.7), we now distinguish whether or not p ≥ 2. If p ≥ 2, then we have which implies In what follows, we will denote by C a general positive constant that only depends on p, C 1 and δ.Using the previous estimate, Lemma 2.9 with ν := n 2(n+2) < 1 2 and Young's inequality with exponents (p, p p−1 ), we have where we have also exploited the facts that ε < ε 1−ν and ε 1−ν < ε ν , since 0 < ε < 1 2 .In the case 1 < p < 2, we need to use (2.13) and (2.14), which imply for every ξ ∈ R N n \ {0} and for almost every (x, t) ∈ Ω ′ × I. 
Using the above estimate with ξ = Du and arguing as in the case p > 2, we now obtain from (3.7) that for any 0 < ε < min{ 1 2 , ( δ 4 ) p−1 }.Now, from the proof of [6, Lemma 2.3] we have that Using this information to estimate the right-hand side of both (3.8) and (3.9), for every p > 1 we get At this point, notice that since ν := n 2(n+2) > 0. Also, recall that ε is small enough to have 0 < ε < min{ 1 2 , ( δ 4 ) p−1 }.From this and from (3.10) it follows that there exists a constant C ≡ C(p, n, δ) ∈ (0, min{ 1 2 , ( δ 4 ) p−1 }) such that, for every ε ∈ (0, C), we have This allows to absorb the second term on the right-hand side of the last estimate into the left-hand side, so that we finally obtain the desired conclusion. We are now in a position to prove the uniqueness of weak solutions to the Cauchy-Dirichlet problem where Theorem 3.5.Let Ω ⊂ R n be a bounded domain and let f ∈ L 2 (Ω T , R N ).Then, the Cauchy-Dirichlet problem (3.11) admits at most one weak solution. )) be a weak solution of (3.11) and let be the unique weak solution to (P ε ) with Q ′ := Ω ′ × I = Ω T .Then, by Lemma 3.4, for every δ > 0 there exists a constant C ∈ (0, min{ 1 2 , ( δ 4 ) p−1 }) such that for any ε ∈ (0, C) we have for some positive constant C that is independent of ε.Notice that the right-hand side of (3.12) converges to zero as ε → 0 + .Moreover, we have Combining estimates (3.12) and (3.13), we infer that This implies the uniqueness of the weak solutions to problem (3.11), by virtue of the uniqueness of limits in Lebesgue spaces and the uniqueness of the energy solutions to problem (P ε ). Maximum principle for the homogeneous system In this section, we want to establish a maximum principle for the homogeneous system in the case 1 < p ≤ 2n n+2 .For the proof, we need to assume that f = 0 and note that the assumed boundedness of u ε in Remark 3.2 is implied by the maximum principle.Therefore, this assumption is not restrictive whenever f ≡ 0. By (3.1), we then have f ε = 0 for every ε ∈ (0, 1 2 ).Keeping in mind the notation introduced in Section 3, we now set and denote by w the N-dimensional vector whose components are all equal to k.Notice that w is a trivial solution of system (1.1) with f = 0.Moreover, one can easily check that, for every i ∈ {1, . . ., N} and for almost every t ∈ I := (t 1 , t 2 ), we have Using the Steklov averages formulations of (P ε ) 1 and (1.1) with w instead of u, and arguing as in the proof of Lemma 3.4 with (4.1) in place of (3.5), one can easily obtain for every τ ∈ I. Since A ε (x, t, •) is a monotone vector field and Dw = A ε (x, t, Dw) = 0, we can omit the latter integral, thus obtaining Therefore, for every i ∈ {1, . . ., N} and every ε ∈ (0, 1 2 ), we have Similarly, but using the function 1), we get u i ε (x, τ ) ≥ −k for any i ∈ {1, . . ., N}, for any ε ∈ (0, 1 2 ) and for almost every (x, τ ) ∈ Q ′ .Thus, we can conclude that for all ε ∈ 0, 1 2 . Weak differentiability Here we derive some higher differentiability results that will be useful in the following.These results involve the spatial gradient of the weak solution to problem (P ε ) with To begin with, for each γ ≥ 0 and a > 0 we consider the function Φ γ,a (w) := w 2 (a + w) γ−2 , w ≥ 0. (5.1) For this function, one can easily check that from which we can immediately deduce for every w ∈ (0, a).Using these results, we obtain the following lemma. 
Lemma 5.1.Let p > 1, ε ∈ (0, 1 2 ), γ ≥ 0 and a > 0.Moreover, assume that Notice that for m ∈ R + the function Φ γ,a,m (w) := min{Φ γ,a (w), m}, w > 0, is Lipschitz continuous.By Theorem 3.3 (see also Remark 3.2 for the case 1 < p ≤ 2n n+2 ), we have that Then, by virtue of the chain rule in Sobolev spaces, we obtain This is sufficient to prove the assertion.Moreover, we have Now we focus on the map Using the notation (2.1), we obtain the following result: Proof.Integrating by parts, for 0 < ̺ ≪ 1, for every φ ∈ C ∞ 0 (Q R (z 0 )) and every i ∈ {1, . . ., n} we obtain In order to pass to the limit as ̺ ց 0 under the integral signs, we need to estimate the above integrands.From (2.16) it follows that for some positive constant c ≡ c(p, C 1 , ε).As for the first integrand on the right-hand side of (5.4), thanks to (2.17) we get Moreover, by Lemma 2.6 we obtain where C ≡ C(p, C 1 , ε) > 0. Now we use a well-known result on the convergence of mollified functions to deduce that ), as ̺ → 0 + .Combining this with estimates (5.5)−(5.7)and applying the Generalized Lebesgue's Dominated Convergence Theorem (see [30,page 92,Theorem 17]) to both sides of (5.4), we find that ) and every i ∈ {1, . . ., n}.This implies the assertion.In addition, from (5.8) we obtain that for every i ∈ {1, . . ., n}. For further needs, we now introduce the auxiliary function H λ : R N n → R + defined by where λ > 0 is a parameter.For this function we record the following result, whose proof is omitted, since it is similar to that of Lemma 5.1. Local boundedness of Du ε As before, by u we denote a weak solution of (1.1) and we let u ε be the unique weak solution of the regularized problem Our aim in this section is to establish local L ∞ estimates for Du ε with constants independent of ε.We will achieve this result by using the Moser iteration technique, which is based on Caccioppolitype inequalities.We will obtain this kind of inequalities by first differentiating the system of differential equations, and then testing the resulting equation with a suitable power of the weak solution itself.The groundwork for doing this has been laid in Section 5. To move forward, we now fix δ > 0 and γ ≥ 0.Moreover, to shorten our notation we set and we drop the subscript ε for the weak solution u ε and the subscripts γ, a for the function Φ γ,a defined in (5.1).Therefore, from now on we will simply write u and Φ in place of u ε and Φ γ,a respectively, unless otherwise stated.To simplify our notation even more, we additionally introduce the function P : Q R (z 0 ) → R + 0 defined by P := (|Du| − a) + , and its "mollified" version with an intentional abuse of the notation (2.1) on the left-hand side of (6.1).By the elementary properties of Sobolev functions, the weak spatial gradient of P is given by Step 1: Caccioppoli-type inequalities.In order to prove the local boundedness of Du, we now test the weak formulation of system (P ε ) 1 with the function D i φ̺ , where ) is a non-negative cut-off function that will be specified later.We thus obtain At this stage, we fix ).With such a choice of ψ and integrating by parts, for the term involving the time derivative we get Now, for γ > 0 we estimate the inner integral from above and below as follows: and Recalling that ∂ t χ ≤ 0, ∂ t ω ≥ 0 and χ, ω ≥ 0, and plugging the two previous inequalities into (6.4),we obtain Inserting this into (6.3) and letting ̺ ց 0 yields the following inequality where the terms J 0 -J 9 are defined by For convenience of notation, we now abbreviate . 
In what follows, we will estimate each of the last nine terms above.We first prove that J 2 is non-negative, thus we can drop it in the following.We limit ourselves to dealing with the case p > 2, since for 1 < p ≤ 2 the same result follows in a similar way.For p > 2 we have where δ mk and δ ℓj denote the Kronecker delta.Using the above equality and the expression for D|Du|, we then obtain almost everywhere in the set Q r ∩ {|Du| > a}.Now, the Cauchy-Schwarz inequality implies that Combining this with the fact that ∂ s F (x, t, |Du|), ∂ ss F (x, t, |Du|) ≥ 0, from (6.6) we infer that almost everywhere in Q r ∩ {|Du| > a}.Furthermore, we obviously have Thus, taking into account that ψ ≥ 0 and Φ ′ (w) = (a + w) γ−3 w(γw + 2a) ≥ 0 for every w ≥ 0, we arrive at the desired conclusion, that is J 2 ≥ 0. We now deal with the terms J 1 and J 3 .We let 0 < ε < min{ 1 2 , ( δ 4 ) p−1 }.By Lemma 2.5 we have Using the Cauchy-Schwarz inequality together with Young's inequality and again Lemma 2.5, we get where where, in the last line, we have used that Now we estimate the second integral on the right-hand side of (6.10).Using Young's inequality with exponents (2, 2) as well as inequalities (5.2) and ( 5.3) with w = P , for every ζ > 0 we obtain where the last inequality is due to the fact that for every p > 1 on the set Q r ∩ {a < |Du| < 2a}.We now recall that, by virtue of Theorem 3.3, we have At this point, joining estimates (6.10)−(6.13),we obtain We now turn our attention to J 7 and J 9 .Using (2.17) and Young's inequality, we find Similarly, we have Finally we estimate J 8 .Using estimate (2.17) and equality (6.2), we obtain Now we deal with the second integral on the right-hand side of (6.17).Using the fact that |Du| ≤ 2P in Q r ∩ {|Du| ≥ 2a}, the inequality (5.3) with w = P as well as Young's inequality, we get We now estimate the first term on the right-hand side of (6.17) by resorting to the same procedure leading to (6.13).Thus, for every σ > 0 we have Letting σ ց 0 in the last inequality, we then obtain At this point, combining estimates (6.17)−(6.19),we find Now, observe that from inequality (6.5) it follows that since J 2 ≥ 0. Plugging estimates (6.7)−(6.9),(6.14)−(6.16) and (6.20) into (6.21) and choosing κ = 1 4c 2 (n+4) , we arrive at where c ≡ c(n, p, δ, C 1 , K) > 1. At this stage, we perform a particular choice of the function χ.For a fixed time τ ∈ (t 1 −r 2 , t 1 ) and ϑ ∈ (0, t 1 − τ ), we define the Lipschitz continuous function χ by Therefore, we have Hence, letting ϑ ց 0 and taking the supremum over τ ∈ (t 1 − r 2 , t 1 ), estimate (6.22) turns into In summary, we have so far obtained the following result. 
holds true for some constants In what follows, we will write again u and Φ in place of u ε and Φ γ,a respectively, unless otherwise specified.First, we estimate the second integral on the left-hand side of (6.23) from below.To this end, we note that From this identity, we infer where, in the second to last line, we have used the fact that Using estimate (6.24) in combination with Young's inequality, we then obtain from which we deduce Now we consider the case 1 < p ≤ 2n n+2 .This implies Our goal now is to reduce the exponent of H δ 2 (Du) in the second integral on the right-hand side of (6.23).To this end, we observe that In addition, for |Du| > a we have Using the above identity to estimate the last integral of (6.27) and taking into account that η ∈ C ∞ 0 (B r (x 1 )) and ω is independent of the x-variable, we obtain ˆQr(z1) (6.30) Combining (6.23) and (6.29), we obtain in the case 1 < p ≤ 2n n+2 the following inequality For convenience of notation, we now use the indicator function to indicate that the terms multiplied by it need to be taken into account only in the subcritical case 1 < p ≤ 2n n+2 .Integrating (6.25) over Q r (z 1 ) and combining the resulting inequality with (6.31) in the case 1 < p ≤ 2n n+2 , after some algebraic manipulation, for every p > 1 we get the following estimate: Proposition 6.2 (Caccioppoli-type inequality).Let p > 1, γ > 0, δ > 0, b = 1 + δ and 0 < ε < min{ 1 2 , ( δ 4 ) p−1 }.Moreover, let p be defined as in (6.30) and assume that is the unique energy solution of problem (P ε ) with Q ′ = Q R (z 0 ) ⋐ Ω T and u a weak solution of (1.1), satisfying the additional assumption of Remark 3.2 if 1 < p ≤ 2n n+2 .Then, for any parabolic cylinder Q r (z 1 ) ⋐ Q R (z 0 ) with r ∈ (0, 1) and any cut-off functions η ∈ C ∞ 0 (B r (x 1 ), [0, 1]) holds true for some constant k > 1 depending on n, p, δ, C 1 and K. and the Sobolev embedding theorem on the time slices B t := B r (x 1 ) × {t} for almost every t ∈ (t 1 − r 2 , t 1 ), we obtain where C S ≡ C S (N, n, p) ≥ 1.Now, using the definitions of H δ and b, the properties of ω and η, and applying Proposition 6.2, we can estimate the second mean value on the right-hand side of (6.32) as follows: where c ≡ c(n, p, δ, C 1 , K) > 1 and Inserting the preceding inequality into (6.32),integrating with respect to time over (t 1 − r 2 , t 1 ) and using Proposition 6.2 again, we obtain where For a fixed µ > 0 that will be chosen later, we now split the cylinder Q r (z 1 ) as follows where θ ≡ θ(n) > 0. Joining (6.33) − (6.35) and (6.38), after some algebraic manipulation we arrive at where At this point, we consider estimate (6.33) in the case f L n+2 log α L(Qr(z 1 )) = 0 and the above inequality if f L n+2 log α L(Qr(z 1 )) > 0. In the latter case, choosing µ such that from (6.39) we obtain where the constants on the right-hand side are of the type for some constant C ≡ C(N, n, n, α, p, δ, C 1 , K) > 1.Moreover, from (6.33) it immediately follows that inequality (6.40) holds true also when f L n+2 log α L(Qr(z 1 )) = 0. Let us now consider the case 1 < p ≤ 2n n+2 .Arguing exactly as above, this time we arrive at where the constants C 4 and C 5 are of the same type as in (6.41). 6.3. 
Step 3: the iteration for p > 2n n+2 .Thanks to Proposition 6.3, we can now start the Moser iteration procedure.Once again, we shall keep both the assumptions and the notations used in Propositions 6.2 and 6.3.At this step, we assume that p > 2n n+2 .By virtue of (1.3) and (1.4), this implies that p > 2n n+2 .We define by induction a sequence {γ k } k∈N 0 by letting γ 0 = 0 and Notice that γ k > 0 for every k ∈ N, since p > 2n n+2 .Furthermore, this sequence diverges to +∞ as k → ∞ and by induction we have In addition, one can easily check that .43) Now, for k ∈ N 0 and s ∈ (0, 1) we set Note that this is possible since Using equality (6.43) and replacing γ, η and ω with γ k , η k and ω k respectively, we obtain from (6.42) the following recursive reverse Hölder-type inequality which holds for any k ∈ N 0 .Exploiting the properties of η k and ω k as well as the definitions of r k , Q k , b, H δ and p, we obtain for every k where, in the last line, we have used the fact that (6.45) In addition, one can easily deduce that for any k ∈ N 0 .Joining the last four inequalities and using the fact that r k = 2r k+1 −sr ≤ 2r k+1 and r k ≤ r for every k ∈ N 0 , we find for any k ∈ N 0 , where C ≡ C(N, n, n, α, p, δ, C 1 , K) > 1 and Θ ≡ Θ(n, α) > 0. To shorten our notation, we now set so that the last inequality turns into for k ∈ N 0 .Iterating the above estimate, we obtain for any k ∈ N.This inequality can be rewritten as follows: (6.50) Next, we observe that where One can easily check that p − ϕ ≥ 0 and ϕ ≤ p whenever p > 2n n+2 .Using this information and the fact that c s > 1, we can apply inequality (2.24) to obtain the following estimate: Now we use that 1 < γ j + p ≤ p( n+2 n ) j for any j ∈ N 0 to estimate the following product by means of (2.24) and (2.25): for any k ∈ N, with c ≡ c(N, n, n, α, p, δ, C 1 , K) > 0. Since the constant c is independent of k ∈ N, recalling (6.48), (6.51), (6.52) and (6.53) and passing to the limit as k → ∞ in both sides of ( , where, in the second to last line, we have applied Young's inequality with exponents ( ϕ 2−p , ϕ ϕ−2+p ).Note that the preceding inequality holds for any s ∈ (0, 1).Hence, we can absorb the essential supremum on the right-hand side using Lemma 2.2 with ρ 0 = sr and ρ 1 = r.This yields ess sup . We have thus proved the following result, which ensures the desired local L ∞ -bound of Du ε for all p > 2n n+2 . Theorem 6.4.Let p > 2n n+2 , δ > 0 and 0 < ε < min{ 1 2 , ( δ 4 ) p−1 }, where n is defined according to (1.3) − (1.4).Moreover, assume that is the unique energy solution of problem (P ε ) with Q ′ = Q R (z 0 ) ⋐ Ω T and u a weak solution of (1.1).Then, for any parabolic cylinder Q r (z 1 ) ⋐ Q R (z 0 ) with r ∈ (0, 1) and any s ∈ (0, 1), we have ess sup Step 4: the iteration for 1 < p ≤ 2n n+2 .We now start the iteration procedure for 1 < p ≤ 2n n+2 , keeping both the assumptions and the notations used in Propositions 6.2 and 6.3.Note that, by the choice of β in (1.4), the condition 1 < p ≤ 2n n+2 implies that n ≥ 3, and therefore n = n.We will however restrict ourselves to the case f = 0. 
We again define by induction a sequence {γ k } k∈N 0 by letting γ 0 = 0 and This sequence diverges to +∞ as k → ∞ and, by induction, we have Moreover, one can easily check that which holds for any k ∈ N 0 .Exploiting the properties of η k and ω k as well as the definition of Q k , for every k ∈ N 0 we get r 2 Qr(z 1 ) where, in the last line, we have used (6.45).Combining the two previous inequalities with (6.44) and (6.46), and using the fact that r k ≤ 2r k+1 and r k ≤ r for every k ∈ N 0 , we find for any k ∈ N 0 , where C ≡ C(N, n, α, p, δ, C 1 , K) > 1 and Θ ≡ Θ(n, α) > 0. To shorten our notation, we now set (6.63) Thus, recalling the definition (6.48), the preceding inequality turns into for k ≥ 0. Note that the constant c ε depends on ε and r through the quotient Iterating the above estimate, we obtain for any k ∈ N.This inequality can be rewritten as follows: Now we use that 1 < γ j + p = p( n+2 n ) j for any j ∈ N 0 to estimate the following product by means of (2.24) and (2.25): Finally, let us consider the case f = 0 and 1 < p ≤ 2n n+2 .Arguing as above, but this time using Theorem 6.5 instead of Theorem 6.4, we get ess sup |Du| ≤ ess sup This concludes the proof. 6 .57) Combining estimates (6.54)−(6.57)and recalling the definitions of c s and c r in (6.49), we obtain from (
12,315.8
2023-12-21T00:00:00.000
[ "Mathematics" ]
Dissimilar effects of stereoisomers and racemic hydroxychloroquine on Ca2+ oscillations in human induced pluripotent stem cell‐derived cardiomyocytes Abstract All currently employed pharmaceutical formulations of hydroxychloroquine (HCQ) sulfate are a racemate, consisting of equal parts mixture of two stereoisomers: R(−)HCQ and S(+)HCQ sulfates. The aims of the current study were first, to obtain and characterize pure HCQ enantiomers. The separation and purification of free base HCQ enantiomers from the racemate form were performed using semi‐preparative chiral high‐performance liquid chromatography. Second, we compared the pharmacological properties of both optical isomers and racemic mixture on the intracellular Ca2+ oscillations employing an in vitro model of human cardiomyocytes derived from induced pluripotent stem cells (iPSCs). The results of the pharmacological investigations indicate that the racemic and pure stereoisomer forms of HCQ sulfate exhibited a dose‐dependent inhibition of spontaneous Ca2+ oscillations (as measured from their frequency and Ca2+ peak widths) in cardiomyocytes 5–45 min following exposure. In addition, the concentration‐response relationships for all three compounds indicated that the rank order of potency (IC50) was R(−)HCQ >racemic HCQ >S(+)HCQ for the frequency of the Ca2+ oscillations and width of Ca2+ peaks for all time points examined. These studies indicate that both R(−) and S(+) stereoisomers exhibit differing pharmacological actions on hiPSC cardiomyocytes, with the former effecting a greater potency on cell Ca2+ handling. | INTRODUCTION The current COVID-19 pandemic brought several older drugs back to the spotlight because of their potential antiviral activity. One of the most controversial drugs from this group includes hydroxychloroquine (HCQ) which had been recognized for several years as an effective agent in treating malaria, rheumatoid arthritis, and other autoimmune disorders (Schrezenmeier & Dörner, 2020;Shukla & Wagle, 2019). However, after HCQ began to be employed in the prophylaxis and/or treatment of COVID-19 in a larger number of patients, its use had been reported to be associated with significant pro-arrhythmic activity (Bauman & Tisdale, 2020;Oscanoa et al., 2020;Sinha & Balayla, 2020). This was more apparent when administered with other QT prolonging drugs or taken in higher doses when compared to that of the typical anti-malaria dosage. Although these adverse effects could be multifactorial, they are most likely associated with the block of several voltage-gated ion channels. These include both the delayed rectifier K + channels (hERG or K IR ) coded by the ether a-gogo-related gene and Kir2.1 potassium channel (product of KCJN2 gene) (Tamargo et al., 2004). Other channels that could be targeted by HCQ include voltagegated Ca 2+ (Ca V 1.2), Na + (Na V 1.5), and K + (K V 4.3 and K V 7.1) channels, albeit at higher concentrations (Ballet et al., 2022;Capel et al., 2015;Grilo et al., 2010;Okada et al., 2020). Additionally, Capel et al. (2015) reported that HCQ blocked the pacemaker currents (I f , "funny" current) in isolated guinea pig myocytes. All currently used pharmaceutical formulations of HCQ sulfate are a racemate, consisting of approximately equal parts mixture of two stereoisomers: S(+)HCQ and R(−) HCQ sulfates. 
It has been demonstrated previously that chirality can influence the pharmacokinetic and pharmacodynamic properties of HCQ enantiomers (Brocks & Mehvar, 2003;Ducharme et al., 1994;Jia et al., 2022;Miller & Ulrich, 2008;Tett et al., 1994;Wainer et al., 1994), but to a lesser degree with the latter. There is a scarcity of information detailing whether the HCQ enantiomers exhibit differences regarding their effects on cardiac muscle given their differing pharmacological block of K + channels resulting from their enantiomeric nature (Ballet et al., 2022;Li et al., 2020). If, in fact, such difference could be established, then the less cardiotoxic enantiomer would be a more likely suitable and effective drug when used as the sole agent in the pharmaceutical formulations of HCQ, when compared with the current racemic formulations as previously described (D'Acquarica & Agranat, 2020;Lentini et al., 2020). Human induced pluripotent stem cell (iPSC)-derived in vitro model systems have recently emerged as a physiologically relevant and highly reproducible option for testing cardiomyocyte Ca 2+ handling properties in drug development (Burnett et al., 2021;Satsuka & Kanda, 2020). The iPSC-derived cardiomyocytes are a particularly attractive in vitro model as they form a synchronously beating monolayer that can be used to reliably reproduce drug-associated cardiac physiology phenotypes using a fast-kinetic fluorescence assay capable of monitoring of Ca 2+ changes . Of particular interest is the ability to employ this cardiomyocyte cell type to test for the potential of several compounds to induce cardiac arrhythmias as a result of interfering with cardiomyocyte repolarization, that is, the in vitro equivalent to clinical QT prolongation. Therefore, the main aims of the current study were, first, to obtain and characterize pure forms of HCQ enantiomers. Second, we compared the pharmacological properties of these optical isomers using this in vitro human cell system. | Preparation of free base racemic HCQ HCQ sulfate powder (racemate form) was purchased from AK Scientific, Inc., (Union City, CA). Prior to the separation of the optical isomers, HCQ sulfate was converted into the free base. In short, a 10% NaOH solution was added dropwise to a stirred solution of racemic HCQ sulfate (1.0 g, 2.3 mmol) in 10 mL of H 2 O at 0°C and the reaction mixture was stirred for 1 h at room temperature. The formed free base HCQ racemate was then extracted with ethyl acetate (3 × 15 mL). The combined organic extracts were dried with sodium sulfate and concentrated to obtain the free base HCQ racemate oil (750 mg) which had a pale-yellow appearance. This product was used for chiral chromatography separation without further purification. | Separation of HCQ enantiomers by chiral high-performance liquid chromatography (HPLC) Separation of both optical isomers from racemic sulfatefree HCQ was achieved using chiral HPLC. We first employed an analytical column (Enantiocel R G1-5, 250 × 4.6 mm) to establish and optimize the separation conditions. To obtain larger amounts (i.e., >1.0 g), the Enantiocel G1-5 semi-preparative HPLC column (250 × 10 mm, 5 μm) was employed. Both columns were obtained from ColumnTek, LLC (State College, PA). The isocratic HPLC system consisted of Water 410 pump, Rheodyne manual loop injector, and multi-length Water detector at 240 nm. The mobile phase consisted of tertbutylmethyl-ether and ethanol (HPLC grade, both from Sigma-Aldrich Inc.) 
at a ratio (v/v) of 90:10 with the addition of 0.2% of diethylamine (HPLC grade, Sigma-Aldrich Inc.). The flow rate was 1 mL/min for the analytical column and 4 mL/min for the preparative column at room temperature. The fractions corresponding to two distinct HPLC peaks were collected and stored at −20°C. After combining fractions from several HPLC runs, the HPLC phase was evaporated with a rotary evaporator to an amorphous, semi-solid substance for both fractions consisting of a free base of pure HCQ isomers. In our initial experiments, it was determined that the amorphous material was not suitable for the synthesis of HCQ sulfate. Therefore, the product was cleaned by adding water, which caused precipitation of water-insoluble precipitate that was subsequently diluted in pure ethanol. The concentrated sulfuric acid (27 mg, 1 mmol) was added at 0°C of optically pure HCQ R(−) or S(+) enantiomer (100 mg, 0.28 mmols) solution in absolute ethanol, and the reaction mixture was stirred for 12 h at room temperature. After complete conversion, the precipitates of corresponding HCQ salts were filtered off and washed with diethyl ether to yield the corresponding optically pure HCQ sulfate (100 mg) as a white solid. The analysis of both enantiomers (R(−)-and S(+)-HCQ sulfate) was performed using both analytical HPLC on Enantiocel column, magnetic resonance spectroscopy, and circular dichroism (CD) spectra to identify the purity of enantiomers. | Cardiomyocyte cultures Human iPSC-derived cardiomyocytes were obtained from Caucasian donor cells (gender of the donor was not provided by manufacturer of the cells) with no known disease-related (iCell™ cardiomyocytes, FujiFilm Cellular Dynamics International). Both plating and maintenance media were also provided by FujiFilm Cellular Dynamics. Cardiomyocytes were plated and maintained according to the manufacturer's recommendations as described previously . Cardiomyocytes were plated onto 0.1% gelatin coated 96 multi-well plates wells at a density of 50,000 cells/well in maintenance media (final volume, 200 μL). The cardiomyocytes were stored in a humidified incubator (5% CO 2 /95% air) at 37°C for 10 days. The well media were replaced every 48 h. Synchronous beating of the cardiomyocytes was evident 5-7 days after plating. Fluorescence experiments were performed 10 days after plating. At this time, the cardiomyocytes demonstrated regular synchronous beating (Video S1, Control Cardiomyocytes Play Speedx4.mp4 [figshare.com]). Ca 2+ fluxes In this set of experiments, Ca 2+ ion flux assay was initially optimized for the high throughput screening in 96-well plates using the FlexStation 3 multi-mode microplate reader (37°C) and EarlyTox Cardiotoxicity Kit (both from Molecular Devices). We measured the HCQ-mediated changes in both the concentration-dependent widening of Ca 2+ peaks and in the cell-beating of the cardiomyocytes. On the day of experiment, the cells were loaded with Ca 2+ indicator dye (CardioTox) for 1 h with a final volume of 100 μL at 37°C. Prior to exposure to the HCQ compounds, baseline (control) Ca 2+ levels were first acquired. Thereafter, the plate reader's robotic arm system applied the compound of interest. Only one concentration per well was performed. Fluorescence readings were acquired 5, 15, 30, and 45 min following application of the compounds. 
Fluorescence measurements of the CardioTox dye (excitation at 485 nm, emission at 535 nm) were acquired at about three frames per sec (3 Hz), 0.388 s interval for three wells simultaneously, and up to 78 frames per well were collected during a total period of 30 sec. The selected sampling rate of the continuous fluorescence signal exceeded the Nyquist frequency (1.5 Hz) calculated for the highest expected Ca 2+ oscillation frequency observed in the cardiomyocytes (20/min or 0.33 Hz). Based on Nyquist theorem, the sampling rate exceeding Nyquist frequency is required for presumed maximum frequency of the sample signal provided that a sampling rate was performed at a sufficiently high rate. The plates were returned to the incubator between fluorescence measurements. In some cases, we calculated the concentration required for complete cessation of the Ca 2+ oscillations. | Data analysis The primary outcome for the analyzed data was represented by peak Ca 2+ oscillations frequency (i.e., beats per minute) and width of the recorded Ca 2+ peak at 10% of the maximum amplitude, evaluated during at least a continuous 30 s period of recording during selected time points after drug administration. The HCQ concentrationresponse curves for each HCQ compound were fit to the Hill equation: where I is the % inhibition, I MAX is the maximum inhibition of the compound, IC 50 is the half-inhibition concentration, [compound] is the HCQ concentration and nH is the Hill coefficient. The IC 50 value was calculated separately for each HCQ sulfate isomer (or racemate) and both outcome parameters recorded and represent drug concentration (in M) at which 50% decrease in the beat frequency, maximum amplitude, and peak width occur. | Statistical analysis For statistical analysis of the non-linear fits of averaged (over 30 s observation period) measured parameters versus log of compound concentrations were determined by extra sum-of-squares F test for nested models employing statistical calculator from Prism version 9.5 (GraphPad). In addition, the statistical significance of differences between inhibitory effects of investigated compounds on Ca 2+ oscillations at different time points were analyzed using independent ANOVA test. | RESULTS The separation and purification of free base HCQ enantiomers from the racemic form were performed using semi-preparative chiral HPLC (Figure 1). In the first set of experiments, we employed an analytical chiral HPLC column to separate the racemic compound in order to collect the stereoisomers. The results from this trial separation, as shown in Figure 2a, indicate that we could successfully obtain both HCQ enantiomers. We next employed a semi-preparative column that would yield larger quantities (i.e., greater than 1000 mg) of both free base R(−) and S(+)HCQ. The plot shown in Figure 2b indicates that we were successful in obtaining sufficient quantities of both enantiomers. Athough the peak height for the S(+) compound was taller than that obtained for the R(−) enantiomer, the latter exhibited a wider peak, suggesting that the area under the curve (AUC) for both species would be comparable. Thereafter, each enantiomer compound was transformed into crystalline, water-soluble sulfate salts. The analysis of the chiral identity and purity of the HCQ stereoisomers were obtained using analytical chiral HPLC of each fraction (Figure 2c), NMR spectra, and CD spectra ( Figure 3). The results from these three processes indicated that the purity of the isolated stereoisomers exceeded 99.5%. 
In the next set of experiments, cardiomyocyte contraction rate and pattern were characterized by measuring changes in intracellular Ca 2+ measured with the Ca 2+ sensitive dye. In the initial experiments, we found that HCQ concentrations greater than 300 μM led to extreme and erratic changes in Ca 2+ levels within seconds of exposure (data not shown). Video S2 shows contraction and relaxation of cardiomyocytes 5 min after exposure to 400 μM racemic HCQ (Plus HCQ 0.4 mM Play Speedx4.mp4 [figshare.com]). Full concordance between mechanical contraction and Ca 2+ signal was demonstrated by adding a high dose of HCQ, which inhibited completely the Ca 2+ oscillations and cardiomyocyte contractions in an identical period. Further, these effects were irreversible and, consequently, we did not employ HCQ concentrations greater than 300 μM. Figure 4a shows representative recording of the baseline (control) spontaneous Ca 2+ oscillations in cardiomyocytes prior to exposure to HCQ compounds. The frequency of these oscillations ranged from 17 to 23 waves (or beats) per min (18.9 ± 2.0, mean ± SD, n = 3). Following the addition of the control buffer, we did not observe overt changes in the frequency during the entire recording period (45 min Tables 1 and 2. It can be observed that the R(−)HCQ isomer exerted the highest effect on both parameters 45 min post-compound exposure with an IC 50 of 8.3 μM and 9.5 μM for the Ca 2+ oscillations and Ca 2+ wave width, respectively. Figure 8 also illustrates that there was a time-dependent leftward shift of the fits resulting from increased Ca 2+ wave width. The absolute values of the Ca 2+ oscillation frequency are shown in Tables 3-6 at times following the exposure of the cells to the compounds. The absolute values for each compound concentration were compared using one-way ANOVA test and p < 0.05 was considered statistically significant. The results indicate that the oscillation frequency decreased F I G U R E 1 Schematic representation of chemical structure and the separation of racemic hydroxychloroquine (HCQ) into its optical isomers. | DISCUSSION There are several known ways of obtaining pure optical isomers of HCQ. The synthetic one, patented in 1994 (US Patent 5,314,894 by Blaney et al., 1994;Stecher et al., 1994), is relatively time-consuming, expensive, requires specialized chemical laboratory, and is more suitable for the synthesis of a larger amount of HCQ. There have been also described some new pathways of stereo HCQ synthesis, which possibly can make the synthetic process easier (Ni et al., 2022). We also attempted to obtain pure isomers of HCQ using previously described kinetic resolution of racemic HCQ (Craiga & Ansari, 1993); however, the obtained optical isomers were not sufficiently pure for use in subsequent pharmacological studies. We used, consequently, chiral HPLC columns to separate isomers after F I G U R E 2 Chiral high-performance liquid chromatography (HPLC) separation of free base racemic hydroxychloroquine (HCQ) on the analytical HPLC column (a), serial injections used for fraction collections on semi-preparative HPLC column (b), and HPLC image of purified free bases of R(−)HCQ and S(+)HCQ after injection of pure optical isomers onto analytical chiral HPLC column (c). Note that the recordings were performed with different amplification of the signal and recording speed. 
F I G U R E 3 Circular dichroism (CD) spectra of R(−)hydroxychloroquine (HCQ) (2nd-peak on high-performance liquid chromatography (HPLC) separation, orange trace) and S(+)HCQ sulfate (first peak on HPLC separation, blue trace). Data are presented as molecular ellipticity difference (delta epsilon in M −1 cm −1 ) for 1 mM concentrations of stereoisomers in water versus wavelength (nm) (for details of the method see Okuom et al., 2015). The R(−) enantiomer has a positive cotton effect and the S(+) enantiomer has a negative cotton effect centered between 200 and 290 nm. injection of racemic HCQ free base, similar to methods described previously (Ballet et al., 2022;Li et al., 2020;Xiong et al., 2021). The resolution of racemic HCQ into two optical isomers was complete and of excellent quality from both analytical and semi-preparative columns, which allowed for the collection of fractions from both chromatographic peaks for further synthesis of HCQ sulfate enantiomers. The results of further chemical analysis of both isomers indicated that both isomers exhibited >99.5% purity and were suitable for pharmacological testing in cardiomyocytes. In addition, we were able to design a method of conversion of the lipid-soluble HPLC output into water-soluble and crystalline sulfate salts. The results of pharmacological investigations showed that HCQ sulfate (both racemate and stereoisomers) produced a dose-dependent inhibition of spontaneous Ca 2+ oscillations in human cardiomyocytes over the investigated time of 5-45 min (i.e., a decrease in Ca 2+ oscillation frequency and an increase in the width of Ca 2+ peaks). In addition, based on the results of the dose-response analysis, the potency of HCQ to inhibit these oscillations was dependent on the optical F I G U R E 4 Effect of R(−) hydroxychloroquine (HCQ) on Ca 2+ oscillations and Ca 2+ wave width. Sample baseline Ca 2+ oscillations acquired from a single microplate well in human induced pluripotent stem cell (iPSC)-derived cardiomyocytes 5 min after the application of control buffer (a), 30 μM R(−)HCQ (b), 100 μM R(−)HCQ (c) and 300 μM R(−) HCQ (d). Note that oscillation frequency decreased, and wave width increased with HCQ concentrations greater than 100 μM. F I G U R E 5 Effect of S(+) hydroxychloroquine (HCQ) on Ca 2+ oscillations and Ca 2+ wave width. Sample baseline Ca 2+ oscillations acquired from a single microplate well in human induced pluripotent stem cell (iPSC)-derived cardiomyocytes 5 min after the application of control buffer (a), 30 μM S(+)HCQ (b), 100 μM S(+)HCQ (c) and 300 μM S(+) HCQ (d). Note that oscillation frequency decreased, and wave width increased with HCQ concentrations greater than 100 μM. configuration of HCQ, with a "pro-arrhythmic" potency magnitude of R(−)HCQ >racemic HCQ >S(+)HCQ. Identical potency differences were obtained for all methods evaluating Ca 2+ oscillations frequency and width of Ca 2+ peaks, as well as all investigated time intervals. The longer time of incubation with HCQ resulted in increased inhibitory effects on Ca 2+ oscillations. Based on the obtained results, it can be concluded that the Ca 2+ oscillations reflect pro-arrhythmic effect of different optical formulations of HCQ and that S(+)HCQ represent less pro-arrhythmic potential compared with R(−) HCQ, with racemic formulations representing intermediate potency. The present study aimed to compare the cardiotoxic properties of HCQ enantiomers without further dissecting its molecular basis. 
Drug-induced long QT syndrome is an established cardiac side effect of a wide range of medications and represents a significant concern for drug safety. The rapidly and slowly activating delayed rectifier K + currents, mediated by channels encoded by the human ether-a-go-go-related gene (hERG) and inwardly rectifying Kir2.1, are two main K + currents responsible for F I G U R E 6 Effect of (±) hydroxychloroquine (HCQ) racemate on Ca 2+ oscillations and Ca 2+ wave width. Sample baseline Ca 2+ oscillations acquired from a single microplate well in human induced pluripotent stem cell (iPSC)-derived cardiomyocytes 5 min after the application of control buffer (a), 30 μM (±)HCQ (b), 100 μM (±)HCQ (c) and 300 μM (±)HCQ (d). Note that oscillation frequency decreased, and wave width increased with HCQ concentrations greater than 100 μM. F I G U R E 7 Racemate and enantiomeric hydroxychloroquine (HCQ) concentration-response relationships of human cardiomyocytes 5-(a), 15-(b), 30-(c), and 45 min (d) following HCQ exposure. Each data point represents the (mean ± SD) oscillation frequency (beats/ min) from three independent experiments with 6-8 wells per concentration. The smooth curves were obtained by fitting the data points to the Hill equation. The IC 50 values are listed in Table 1. ventricular repolarization. Several recent studies employing HEK cells or human-derived cardiomyocytes (Ballet et al., 2022;Thomet et al., 2021;Wang et al., 2021) have demonstrated block by HCQ at μM concentration range of both of these K + channels. However, the block of Kir2.1 currents was greater (i.e., lower potency) than that observed for hERG channel currents (Ballet et al., 2022). Like the present study, the R(−)HCQ enantiomer exerted a more potent inhibition than S(+)HCQ for both hERG and Kir2.1 channels, with an enantiomeric separation of 2-to 4-fold (Ballet et al., 2022). In subsequent experiments in the rabbit Purkinje fibers, the authors demonstrated different electrophysiological properties for both HCQ enantiomers, with R(−)HCQ prominently depolarizing the membrane resting potential and inducing autogenic activity, while S(+)HCQ primarily increased the action potential duration. Nevertheless, the authors concluded that the chirality of HCQ does not substantially influence the arrhythmogenic properties in vitro. The results from the current study indicate that with a similar HCQ concentration range and similar enantiomeric separation (2-to 4-fold), both enantiomers of HCQ (R>S) produced significant pro-arrhythmic activity by altering the Ca 2+ oscillations in human cardiomyocytes, commonly known accepted as the model for pre-clinical studies of potentially pro-arrhythmic drugs. One explanation for this difference could be that Ballet et al. (2022) observed single channel data, whereas in our model we registered a more complex, end-effect (i.e., Ca 2+ oscillations) which includes HCQ-induced alterations of many channels in cardiomyocytes. In addition, their data on Purkinje cells were obtained using the rabbit model which may display some interspecies differences from our human cardiac cells. It should also be mentioned that another report lent support to the assumption that the S(+)HCQ enantiomer is less potent than the R(+) enantiomer in blocking hERG channels in transfected cells (Li et al., 2020). Although no effects of HCQ (up to 90 μM concentrations) were observed for Na V 1.5, Ca V 1.2, K V 4.3 or K V 7.1 channels (Ballet et al., 2022), both Thomet et al. (2021) and Wang et al. 
(2021) observed block of these channels by HCQ. The latter two studies did not indicate whether HCQ was racemic. Also, experimental conditions may help explain this discrepancy. The results obtained in this study should be also discussed with respect to previously described data about circulating levels of HCQ after administration of therapeutic doses (i.e., 400 mg daily PO). The previously reported average HCQ plasma concentrations were in the range 0.5-1 μM (Carlsson et al., 2020). The in vitro plasma protein binding studies of HCQ enantiomers indicate that the free fraction of R(−)HCQ (i.e., responsible for the pharmacodynamic effect of HCQ) was about two times F I G U R E 8 Racemate and enantiomeric hydroxychloroquine (HCQ) concentration-response relationships of human cardiomyocyte Ca 2+ peak width for R(−)HCQ (a), S(+)HCQ (b), and ±HCQ compounds. The plots in each panel denote the (mean ± SD) Ca 2+ peak width 5-, 15-, 30-, and 45 min after HCQ compound application. Each data point was acquired from three independent experiments with 6-8 wells per concentration. The smooth curves were obtained by fitting the data points to the Hill equation. The IC 50 values are listed in Table 2. T A B L E 1 IC 50 values for frequency of Ca 2+ oscillation response calculated from concentration-response curves for both responses (Ca 2+ peak width and frequency of Ca 2+ oscillations). Statistical comparison of dose-response curves indicate that all curves were different within each dose-response group at time intervals at p < 0.001 (extra sum-of-squares F test). higher compared to S(+)HCQ, at 63% and 36%, respectively (McLachlan et al., 1993). The described data clearly indicate that the reported therapeutic plasma concentrations of HCQ are in sub-micromolar range in humans, and thus well below concentrations of HCQ effective in our in vitro model, in which concentrations of either HCQ enantiomers or its racemate did not produce any effects on Ca 2+ oscillations at <10 μM. It should be mentioned, however, that in specific circumstances (e.g., doses of HCQ >400 mg daily, long QT syndrome, electrolyte imbalance, concomitant use of drugs prolonging QT interval), HCQ and/or enantiomers can produce pro-arrhythmic effects at a lower concentration range. Based on our results, in such situations, the therapeutic use of S(+)HCQ could be more beneficial than the use of racemate compound. In addition, the therapeutic use of HCQ in doses higher than those prescribed for antimalaria prophylaxis/treatment was advocated based on recent data obtained from several clinical trials. Although the use of HCQ remains a questionable topic for the scientific community, based on their ability to suppress in vitro replication of several coronaviruses, it is nevertheless T A B L E 2 IC 50 values for Ca 2+ peak width response calculated from concentration-response curves for both responses (Ca 2+ peak width and frequency of Ca 2+ oscillations). Statistical comparison of dose-response curves indicate that all curves were different within each doseresponse group at time intervals at p < 0.001 (extra sum-of-squares F test). clinically employed to treat COVID-19 in some countries due to the lack of availability of more adequate medication. The hypothesis that HCQ could enhance patients' clinical outcomes with SARS-CoV-2 is confirmed. Few reports indicate that it has some activity against viral infection (Das et al., 2021). 
If this is the case, the availability and therapeutic use of S(+)HCQ instead of racemic (±)HCQ could be beneficial from the safety point of view. This hypothesis should be, however, confirmed in further clinical studies comparing pharmacological activity and side effects associated with the use of HCQ racemate and its S(+) enantiomer. This seems to be particularly important in light of some recent and still controversial observations indicating in vitro differences in anti-COVID virus efficiency between HCQ enantiomers (Li et al., 2020;Ni et al., 2022). Time of recording post-compound exposure R(−)HCQ (M) S(+)HCQ (M) (±)HCQ (M) Although the primary goal of the study was to compare the pro-arrhythmic effects of enantiomers of HCQ in human iPSC cardiomyocytes, there are some limitations. That is, we did not examine the mechanism (i.e., ion channel activity) of the observed differences in the pharmacological profile of the HCQ enantiomers. The identification of the actual cell surface target could be further explored in the subsequent studies. Another limitation of the study has been the selection of only one method (i.e., Ca 2+ sensitive fluorescent dye) for the observed effects of HCQ enantiomers on the Ca 2+ handling in the cardiomyocytes. The other limitation of the study involves rather slow capturing rate of the Ca 2+ oscillations. In summary, we found a significant degree of enantiomeric separation between R(−) HCQ and S(+)HCQ in their pro-arrhythmic activity on Ca 2+ fluxes in a human iPSC cardiomyocyte model. In both evaluated parameters (Ca 2+ oscillation frequency and Ca 2+ peak width), the effects of the R(−) isomer were most potent. Considering that therapeutic oral doses of racemic HCQ produce submicromolar free plasma concentrations of both enantiomers, the present data agree with previous findings of low arrhythmogenic events during HCQ therapy. However, caution should be exercised when racemic HCQ is prescribed at higher doses, especially in patients with heart conditions characterized by congenital long QT interval and an electrolyte imbalance. It can be speculated that in such situations the use of S(+)HCQ would produce T A B L E 5 Effect of different optic isomers of hydroxychloroquine (HCQ) or racemate on the Ca 2+ oscillation frequency in human cardiomyocytes 30 min after administration (n = 3 independent experiments). One-way ANOVA analysis of frequency values obtained for each isomer (p < 0.05). a higher therapeutic window compared with the use of HCQ racemate. Concentration AUTHOR CONTRIBUTIONS Piotr K. Janicki and Victor Ruiz-Velasco: Conceptua lized the study idea. Piotr K. Janicki, Victor Ruiz-Velasco, Amandeep Singh, and Arun K. Sharma: Designed the study protocols; Piotr K. Janicki and Victor Ruiz-Velasco: Collected, analyzed, and interpreted data; and Piotr K. Janicki, Victor Ruiz-Velasco, and Arun K. Sharma: Prepared the manuscript and figures. All authors approved the manuscript in its final version. FUNDING INFORMATION This work was funded by a Research Allocation Panel grant from the Penn State College of Medicine, Department of Anesthesiology and Perioperative Medicine to PKJ and VR-V.
6,617.8
2023-07-01T00:00:00.000
[ "Biology" ]
Two decades of Helicobacter pylori : A review of the Fourth Western Pacific Helicobacter Congress 1Division of Gastroenterology, McGill University, Montreal, Quebec; 2Division of Gastroenterology, McMaster University, Hamilton, Ontario; 3Department of Physiology, University of British Columbia, Vancouver, British Columbia; 4Department of Laboratory Medicine Pathology, University of Toronto, Toronto, Ontario; 5Department of Medical Microbiology and Immunology, University of Alberta, Edmonton, Alberta Correspondence: Dr Carlo A Fallone, Division of Gastroenterology, Room R2.28, McGill University Health Center – Royal Victoria Hospital, 687 Pine Avenue West, Montreal, Quebec H3A 1A1. Telephone 514-843-1616, fax 514-843-1421, e-mail<EMAIL_ADDRESS>Received for publication April 17, 2002. Accepted June 13, 2002 CA Fallone, N Chiba, A Buchan, B Su, D Taylor. Two decades of Helicobacter pylori: A review of the Fourth Western Pacific Helicobacter Congress. Can J Gastroenterol 2002;16(8):559-563. ANIMAL MODELS Several animal models continue to be developed for the study of Helicobacter pylori-associated diseases.Bjorkhölm (USA) described a novel approach to identifying virulenceassociated genes using the germ-free transgenic mouse and microarray technology.The Mongolian Gerbil model was used by Takshi (Japan) to determine that the vacA s2/m2 genotype is a very important virulence factor in the development of a gastric mucosal lesion.Using a mouse model, Rourke (USA) determined that the progression of mucosaassociated lymphoid tissue lymphoma depends on both bacterial virulence factors and how long the infection had been present.Interestingly, other Helicobacter species were found to colonize mouse models.Wadström (Sweden) found that 75% of eight well-known mouse strains were positive for Helicobacter typhlonicus, Helicobacter rodentium, Helicobacter mesocricetorum and H pylori using polymerase chain reaction (PCR)-denaturing gradient gel electrophoresis and DNA sequencing. GENOMICS AND MOLECULAR STUDIES Falkow (USA) presented an excellent overview of customdesigned gene chips to quantify changes in H pylori gene expression.Alterations in gene transcripts between different strains of the bacterium and in response to coculture with a human gastric cell line (AGS) were outlined.Custom-designed mouse and human gene chips (Stanford Genome Centre, USA) track how mammalian gene expression patterns are affected.The mouse model used BALB/C mice infected with the Sydney strain and the human model used AGS cells infected with a virulent strain of H pylori.The data from the human gene chip experiments identified the small GTPase, cdc42, as a major target of H pylori. Falkow then discussed how H pylori affected cell to cell adhesion molecules.A substrain of H pylori capable of attaching to the polarized epithelial cell line, MDCK, was isolated.The attachment sites of these bacteria correlated with the distribution of the tight junction protein, ZO-1. In the same session, Trust (USA) discussed how designer drugs are generated using genomes.The genomes of 61 bacteria are available, and these data can be mined to identify bacteria-specific genes.A subset of essential genes can be determined, and targets for those genes developed.Screening of the H pylori genome unfortunately has not yet identified any obvious drug targets. 
In a free paper session, the use of new genetic technologies was discussed.S Hjalmarsson (Sweden) dealt with pyro sequencing that gives fast, accurate sequence data on 20-to 25-base pair sections of genes.Further information on the technology can be found at www.pyrosequencing.com.Wadstrom (Sweden) discussed the use of protein chip technology to monitor changes in protein profiles.Blomstergren (Sweden) presented ongoing studies of the total genome analysis of three H pylori strains.His data so far indicate that the cagA gene is highly variable and that changes in the mRNA are translated into changes to the final protein structure.Wu (Taiwan) examined the effect of deleting the flgK gene encoding a flagellar protein homologous to the Escherichia coli hook-associated protein.Deletion had no effect on either adhesion or urease production.The bacteria colonized mice but at a lower density than wild-type bacteria.Buchan (Canada) discussed the use of gene arrays to dissect the differences between primary gastric epithelial cells and commonly used human gastric cell lines infected with H pylori on human gene expression.None of the three human gastric cell lines examined (AGS, NCI-N87 and MKN45) were found to be good models of the normal cells.Infection with H pylori modulated the gene expression and cellular location of proteins involved in cell to cell adhesion and intracellular vesicular trafficking pathways. In a related session, Blaser (USA) dealt with the use of genetic screens (guanine to cytosine ratio, trinucleotide difference index) to identify variable regions in the H pylori genome.These techniques confirmed the existence of multiple regions of genetic variation that show geographical strain differentiation.In individuals infected with H pylori, the bacterium mutates over time.Coevolution of the babA and babB genes was documented in a study of infected individuals in Holland over a seven-year period. The considerable genomic diversity among strains of H pylori was discussed by Taylor (Canada).This diversity has been demonstrated by various molecular typing techniques, direct sequencing and, recently, microarray analysis.Macrodiversity (diversity in gene order) probably occurs by inversion and/or transposition within plasticity zones of low guanine and cytosine content (35%) relative to the rest of the genome (39%).It could account for loss of the cagA pathogenicity island (PAI), resulting in the selection of a less virulent strain.Microdiversity in H pylori is highlighted by the considerable nucleotide divergence due to synonymous substitutions (15% to 21% for H pylori genes compared with 1% to 2% for genes from Salmonella typhimurium strains).In addition, at least 27 genes contain simple nucleotide repeats, including outer membrane proteins, lipoprotein synthesis enzymes and DNA restriction/modification systems, where frameshift mutations are easily generated by replication slippage.H pylori also lacks a methyldirected mismatch DNA repair system, which could increase the frequency of mutation.Such genetic changes control membrane lipid composition and Lewis antigen synthesis genes, the expression of which is known to vary during the course of infection and likely optimize the survival of H pylori in the stomach. 
HOST BACTERIAL INTERACTIONS Apart from changing itself, H pylori may alter and regulate its environment.Mobley (USA) discussed how this bacterium induces both subtle and dramatic changes in its environment, allowing the organism to persist, in most cases, for decades.These changes include the modulation of the acquired as well as innate immune responses, including alterations in cytokine levels.Cytoskeletal rearrangements in the epithelium are induced by injecting cagA into the Fallone host cell via a type IV secretion system synthesized by the bacterium.The pH is modulated by the combination of bacterial urease activity and vacA-mediated urea permeation through the epithelium.Finally, gene expression patterns of nonparietal cells are altered by the presence of the bacterium.The response to colonization by H pylori is usually compatible with a lengthy host-parasite relationship; however, overt damage can result from this interaction.The mechanisms for a number of these host-parasite interactions are now better understood and offer targets for both therapeutic intervention and prevention by vaccination. A free paper session was held on the host immune response.Mitchell (Australia) demonstrated that the cytokine response (interferon gamma in particular) to H pylori in subjects from developed and developing countries may differ.Thus, geography may add to the diversity encountered in all aspects of the host-bacterial interaction.Two interesting talks on the adhesion of H pylori to the host cells via toll-like receptors (TLR) presented contradictory results.Su (Canada) demonstrated that TLR4 might act as a receptor for H pylori. TLR4 RNA is expressed and upregulated by gastric epithelial cells in response to H pylori infection.Protein expression of TLR4 is also increased in this setting.In contrast, Bäckhed (Sweden), who compared TLR mRNA expression from different cell lines, was unable to demonstrate any TLR4 expression in primary gastric epithelial cells and concluded that TLRs are not involved in gastric mucosal recognition of H pylori. H PYLORI AND PEDIATRICS Megraud (France) presented results of a multicentre European Study of over 500 children aged two to 17 years undergoing endoscopy.He confirmed the appropriateness of the 13 carbon urea breath test as a diagnostic test in children (sensitivity 96%, specificity 95%).Serology was not as good, and urine antibody detection was poor.Stool antigen testing results, shown to be excellent in adults (Sheu, Taiwan), were pending in this European pediatric study. Several studies from Aboriginal communities (known to have a high prevalence of H pylori) were also presented, including one from a Northern Manitoba Community (Song, USA), which demonstrated the presence of H pylori via PCR from saliva and nipple samples, supporting the theory of oral-oral transmission. There were few treatment studies in the pediatric population, but Casswall (Sweden) presented results of a large study of 131 patients aged 10 to 21 years using a single daily dose regimen of lansoprazole, azithromycin and tinidazole for six days, achieving 93% compliance.The intention to treat eradication rate, however, was disappointing (63%). 
H PYLORI AND GASTRIC CANCER Several advances have occurred in the realm of gastric cancer research as it relates to H pylori in recent years.Given the high rate of this disease in Asia and Europe, there are many scientific contributions from these continents.Although a specific symposium was dedicated to gastric cancer in Asia, the theme of gastric cancer kept reappearing throughout much of the meeting.One of the most important recent developments has been the results of a study by Uemura et al (1), who demonstrated that gastric cancer was associated with H pylori; none of the uninfected patients developed cancer, whereas all of the patients who developed gastric cancer were infected with H pylori.In addition, those who were infected but whose H pylori was eradicated did not develop gastric cancer, although the duration of follow-up in this subgroup of patients was much smaller.A further advancement was the development of an excellent Mongolian gerbil model used to study the effects of p53 gene expression in the development of gastric cancer.In addition, Sipponen (Finland) described a panel of blood tests, incorporating several markers of atrophic gastritis, including H pylori serology, pepsinogen ratios and gastrin 17 levels.Computer-calculated risks of having atrophic gastritis using these results demonstrated a good correlation with the histological diagnosis of atrophic gastritis, with a sensitivity and specificity of 80% to 90%.He concluded that this test could perhaps be used to detect those who may be at higher risk of subsequently developing gastric cancer. In a free paper session, Van Doorn (the Netherlands) identified that, in H pylori-infected Portuguese patients, both vacA and cagA genotypes were associated with an increased risk of gastric carcinoma (odds ratio [OR] 6.7 to 17) and that patients with interleukin (IL)-1 polymorphisms such as IL-1β-511 T carriers and IL-1RN*2 also had an increased risk (smaller OR of 3.3).However, what was particularly striking was that those with both virulent H pylori genotypes in association with human IL-1 polymorphisms had a dramatically increased risk of gastric cancer, suggesting a synergistic interaction between bacterial virulence and host genetic susceptibility (OR as high as 108, 95% CI 10 to 1148 if vacA s1 and IL-1 polymorphism existed).This may partly explain why only some patients develop gastric cancer. 
There was interest in elucidating the role of cagA+ strains as a marker of increased gastric cancer risk.PAI leads to nuclear factor kappaB and IL-8 secretion, which may play a central role in host response to H pylori infection.CagA is one of the markers of the PAI.Wu (United Kingdom) reported that patients who were seropositive for cagA had a significantly increased risk of distal gastric cancer (OR 2.1, 95% CI 1.1 to 9.3) but no influence on junctional (gastric cardia and esophageal adenocarcinoma) cancer.Previous studies have suggested that patients infected with a cagA-positive strain may be protected against junctional cancers.At this meeting, a recurring warning was that the traditional cagA ELISA serological tests may not be sufficiently sensitive or specific enough and that cagA determination through PCR or immunoblotting was preferred (Shimoyama, Japan; Engstrand, Sweden).Engstrand presented data on 298 Swedish patients with gastric carcinoma and found that 76% were positive for H pylori by ELISA, whereas 88% of those who were negative on ELISA were positive for cagA by immunoblot.This indicates that, unless researchers also look for other markers of H pylori, the gastric cancer risk associated with H pylori could be underestimated. H PYLORI AND ASSOCIATION WITH OTHER CLINICAL DISEASES H pylori is associated with peptic ulcer disease, mucosa-associated lymphoid tissue lymphoma and gastric cancer.Controversial associations include functional dyspepsia, gastroesophageal reflux disease (GERD) and nonsteroidal anti-inflammatory drug (NSAID)-induced ulcer disease. The role of H pylori in predicting endoscopic findings in patients with uninvestigated dyspepsia from the Canadian Adult Dyspepsia Empiric Therapy -Prompt Endoscopy (CADET-PE) study was presented by Chiba (Canada).In the 1013 patients with available H pylori status, H pylori prevalence was 30% and increased with age.Overall, clinically significant findings were observed in 58% of all patients.Gastric and duodenal ulcers were rare (3% or less for each), but both were more common if H pylori was present (5.6% versus 2.0% and 6.6% versus 1.3%, respectively; each P<0.002).Erosive esophagitis was the most common overall finding.The prevalence of esophagitis was less common in H pylori-positive (36%) than in H pylori-negative (46%) patients (P<0.002).The results of this study suggest that, while endoscopic findings are common, the vast majority would be appropriately treated with empirical acidsuppressive therapy and prompt endoscopy would not necessarily alter initial management. While it is well accepted that H pylori and NSAIDs are the most important causes of duodenal ulcer disease, it is also recognized that there is an increasing population that appears to be negative for H pylori through traditional testing of gastric biopsies.An interesting study demonstrated that H pylori was present in the duodenum of 6.8% of patients who did not have H pylori in their gastric specimens.This report suggests that additional duodenal biopsies may help decrease the prevalence of apparently H pylori-negative ulcer subjects (Kullavanijaya, Thailand). 
Hawkey et al (2) found that those who were infected with H pylori experienced greater healing of NSAIDinduced gastric ulcer.However, Chan et al (3,4) showed that the eradication of H pylori reduced the risk of ulcer development.With these somewhat discrepant results and observations, Huang et al (5) reported a recent meta-analysis that demonstrated that H pylori increased the risk of NSAID-associated ulcer and bleeding.This was in keeping with data presented at this meeting of a single centre experience of 125 patients in Thailand (Mahachai) in which upper gastrointestinal bleeding was seen more frequently in NSAID users who were co-infected with H pylori than in those who were not co-infected with H pylori.Although H pylori and NSAIDs are the most important risk factors for ulcer and bleeding, 27% of bleeding subjects in one study (Ootani, Japan) had neither risk factor. The effect of H pylori on GERD is still controversial, with some studies suggesting an increased incidence of GERD after H pylori eradication (6,7) and more recent publications (8,9) suggesting that eradication does not worsen GERD.Fallone (Canada) presented the results of an interim analysis demonstrating that GERD patients who were negative for H pylori had more severe GERD, as determined by the Spechler Gastrointestinal Reflux Disease Activity Index, and used proton pump inhibitors more often than infected patients.Other validated GERD severity scores, including questionnaires and 24 h pHmetry, demonstrated similar trends.These results suggest that H pylori infection results in less severe GERD and is in keeping with the CADET-PE data showing that H pyloripositive patients had a lower prevalence of endoscopic esophagitis than uninfected patients. TREATMENT AND ANTIBIOTIC RESISTANCE Koga (Japan), presented a novel finding using probiotics to eradicate H pylori.He found that Lactobacillus gasseri OLL2716 (LG21) eradicated H pylori in mice.In humans, it was shown to decrease urea breath test 13 carbon levels at 24 weeks.In an in vitro study, Vilaichone (Thailand) found that Lactobacillus acidophilus had an inhibitory effect on H pylori in 87% of patients tested.These agents certainly require further study, but it is very exciting to see an agent as innocuous as yogourt work in the treatment of H pylori infection. A key predictor of eradication failure is antibiotic resistance.Clancy (Australia) showed that one-third to one-half of those who had failed to respond to therapy were infected with H pylori that had been shown to be resistant to metronidazole or clarithromycin when these antibiotics were used as part of triple therapy.Host factors also played a part in treatment failure, including longer symptom duration, regular alcohol ingestion and low levels of IL-4 secretion.A high prevalence of clarithromycin resistance (40%) was reported in an urban area in central Italy (Toracchio).This rate is much higher than that cited in Canada and the United States (1% to 5%), or in other parts of Europe (1% to 25%). 
With regard to the causes of metronidazole resistance, Taylor (Canada) demonstrated that mutations in the frxA gene only occurred in metronidazole-resistant strains, although not as frequently as mutations of the rdxA gene, which encodes a reduced nicotinamide adenine dinucleotide phosphate nitroreductase. Often, double rdxA and frxA mutations were found. Denaturing high performance liquid chromatography (dHPLC) is a new method for detecting resistance-associated mutations and was compared with other PCR-based methods (Tuazon, USA). Although dHPLC can detect mutations at A2142 or A2143 in 23S ribosomal RNA genes, it cannot identify which position is mutated. Nevertheless, the results are available within a couple of hours, and dHPLC has great potential as a screening technique, although it has yet to be used directly with biopsy material.

A Belgian team (Lamy) reported that their routine clinical strategy since 1988 has been to perform H pylori antibacterial susceptibility testing and provide these results to general practitioners before eradication therapy is given. Even with this strategy, imidazole resistance increased from 16.9% to 20.7%, clarithromycin resistance increased from 6.7% to 14% and multiresistance increased from 1.5% to 2.8%. However, their resistance rates were lower than for the rest of Belgium, where imidazole resistance ranges from 20% to 40%, macrolide resistance from 3% to 25% and multiresistance from 5% to 10%. They believed that their practice influenced general practitioners' choices of eradication therapy - an important consideration because the overall rates of resistance are gradually rising.

VACCINES
A vaccine against H pylori has been sought for many years. In a symposium dedicated to the topic, Tetlin (USA) first described how the Institute for Genomic Research makes use of genomic data for the development of novel vaccine candidates. There are currently 61 complete microbial genomes available (see www.tigr.org), allowing comparisons and alignments to be performed at the DNA and protein levels. Surface-exposed lipoproteins have been used as vaccine targets for certain organisms such as Neisseria meningitidis. Rapuoli (Italy) subsequently discussed the H pylori vaccine. Because of the association of H pylori virulence factors with peptic ulcer and gastric cancer, Chiron (USA) developed a vaccine using the cagA, vacA (vacuolating cytotoxin) and NAP (neutrophil activating protein) components of H pylori. This is different from other vaccines, which are based on the urease enzyme (Aebisher, Germany). In mice, the Chiron vaccine eradicated H pylori in 80% of animals and prevented reinfection. Malfertheiner (Germany) described the results of a phase I clinical trial with this vaccine in healthy German volunteers. The Chiron cagA, vacA, NAP vaccine elicited both a significant immunoglobulin G antibody response and a cellular immune response in terms of cytokine profile. A mild skin reaction was noted, which was also present in controls and is related to the use of aluminum hydroxide as the adjuvant. Occasional headache and slight fever, but no severe adverse effects, were also noted. Results of additional trials for eradication and protection from H pylori infection in humans are eagerly awaited.
Finally, Graham (USA) discussed the pharmacoeconomics of an H pylori vaccine. Although the incidence of H pylori is decreasing in developed countries, the high prevalence in developing countries, the distinct association with the development of gastric cancer and the increase in antibiotic-resistant strains are likely to make an H pylori vaccine a useful adjunct to antibiotic-based therapies for eradication and protection.

CONCLUDING REMARKS
One highlight of the meeting was a lively debate on the future of H pylori research. Specifically, the motion that "the H pylori bubble had burst" was defended by Graham, Blaser and Mobley (USA), who agreed that the pharmaceutical industry had substantially reduced, if not eliminated, funding for H pylori research and that interest in the field was at an all-time low because the pharmaceutical industry had mistakenly concluded that the most important areas had already been studied. On the other hand, Lee (Australia), Falkow (USA) and Forman (United Kingdom) opposed this motion. They argued that 40% of the world population was still infected by this gastric carcinogen and, as indicated by Hu (China) earlier in the conference, H pylori remains a major problem in areas such as China and India. The pharmaceutical industry should realize that there are markets other than the Western world. In addition, this organism is a good model for advancing science in general, and for studying the epidemiology, transmission and pathogenesis of gastric cancer.

This conference demonstrated that many questions about H pylori remain unanswered. Appropriately, it was Barry Marshall (Australia), the man who started it all, who summarized it best when he stated that many lessons remain to be learned about the organism, which in turn can teach us an enormous amount about the human body.
4,849.6
2002-08-01T00:00:00.000
[ "Biology", "Medicine" ]
The EADGENE and SABRE post-analyses workshop
From the EADGENE and SABRE Post-analyses Workshop, Lelystad, The Netherlands, 12-14 November 2008.
Address: 1INRA AgroParisTech, Animal Genetics and Integrative Biology, Populations Statistics Genomes, 78350 Jouy-en-Josas, France, 2Aarhus University, Faculty of Agricultural Sciences, Department of Genetics and Biotechnology, P.O. Box 50, DK-8830 Tjele, Denmark, 3INRA, UMR444 Laboratoire de Genetique Cellulaire, F-31326 Castanet-Tolosan, France, 4Sigenae UR875 Biometrie et Intelligence Artificielle, INRA, BP 52627, 31326 Castanet-Tolosan Cedex, France and 5Roslin Institute and R(D)SVS, University of Edinburgh, Roslin, EH25 9PS, UK

Background
Analysis of genome-wide gene expression using DNA microarrays has become pervasive in almost all areas of biology. The area of biology addressed by this workshop is gene expression studies in livestock, looking at transcriptomic differences between treatments as well as genotypes and combinations of these. Two years ago, we organized a workshop to discuss the best approaches to analyze two-colour DNA microarray data in our area of research, and the outcomes of that workshop have been published in four open access publications [1][2][3][4]. While there is currently a reasonable amount of consensus on the statistical analysis of a microarray experiment (i.e. getting a gene list), the subsequent analysis of the gene list is still an area of much confusion for many scientists. During a three-day workshop in November 2008, we discussed five aspects of these so-called post analyses of microarray data: 1) re-annotation of the probe set on DNA microarrays, 2) pathway analyses to identify significantly affected biological processes from microarray results, 3) reverse engineering of regulatory networks from microarray results, 4) the integration of gene expression studies with QTL detection studies and 5) the prediction of phenotypic outcomes using gene expression results.

Prior to the workshop, we distributed two sets of data to the workshop participants. The first set of gene expression data deals with experimental challenge of chickens with two types of Eimeria. This experiment is described in some detail in one of the summary papers [5], while the actual data is available from ArrayExpress (http://www.ebi.ac.uk/microarray-as/ae/) under accession number E-MEXP-1972. The second experiment deals with the transcriptomic effects of adrenocorticotropic hormone (ACTH) treatment in two breeds of pigs. These gene expression results are available from Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo, GSE8377 - DH06 Adrenal ACTH Sus scrofa).

Re-annotation of microarray probe set
Up-to-date annotation and target specificity are essential for functional analysis of microarray data. Three annotation pipelines were used to re-annotate 791 selected probes from the chicken microarray [6][7][8] and were subsequently compared [9]. The main difference between annotation pipelines came from the thresholds that were applied in order to link a probe to a certain type of annotation. It was recommended to have flexible thresholds in order to evaluate the effect of stringency and strike the right balance between reliability and coverage of the annotation.

The application of pathway analyses
Several conceptually different analytical approaches, using both commercial and publicly available software, were applied by the participating groups to interpret the affected probes from the chicken experiment [10][11][12][13][14][15].
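Most of the pathway tools compared in this exercise ultimately rest on an over-representation test: asking whether a pathway's genes appear among the differentially expressed genes more often than expected by chance. The minimal Python sketch below (not taken from the workshop papers; all counts are hypothetical) shows the core hypergeometric calculation; in practice the resulting p-values would also be corrected for multiple testing across all pathways tested.

# Over-representation test for a single pathway (hypothetical counts).
from scipy.stats import hypergeom

M = 12000  # annotated genes represented on the array (hypothetical)
n = 250    # genes annotated to the pathway of interest (hypothetical)
N = 400    # differentially expressed genes (hypothetical)
k = 25     # differentially expressed genes that fall in the pathway (hypothetical)

# Probability of seeing k or more pathway genes among N draws without replacement.
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value: {p_value:.3g}")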
A total of twelve pathway related software tools were tested on the chicken data. The main focus of the approaches was to utilise the relation between probes/genes and their gene ontology and pathways to interpret the affected probes/genes. The lack of a well annotated chicken genome did limit the possibilities to fully explore the tools. The main results from these analyses showed that the biological interpretation is highly dependent on the statistical method used but that some common biological conclusions could be reached [5]. Reverse engineering of regulatory networks Graphical Gaussian models, as implemented in the R library GeneNet, were applied to 85 gene transcripts from the chicken experiment that were selected for their significance and lack of missing data. While a large number of significant relationships (edges) were found between these 85 genes, they could not be confirmed using pathway analyses because of limited annotation [16]. Integration of microarrays with QTL results Using the pig experiment, three groups evaluated different ways to link the gene expression results to QTL results: 1) co-location between differentially expressed genes and QTL results from the same experiments [17,18], 2) colocation between differentially expressed genes and QTL from the public domain, and 3) overlap between genes and QTL regions at the Pathway level: genes and QTL may not co-locate but differentially expressed genes hare enriched pathways with genes in the QTL region [19]. Because the pig has only a preliminary draft genome sequence, comparative mapping approaches were also used to compare QTL locations and differentially expressed genes. Because of very limited annotations, no meaningful pathway comparisons could be made. Phenotypic prediction from microarray data The pig data has two treatments and two genotypes. In order to predict these grouping using the microarray data the authors used a Random Forest approach and also compared the classical Partial Least Squares regression (PLS) with a novel approach called sparse PLS [20]. All methods performed well on this data set. The sparse PLS outperformed the PLS in terms of prediction performance and improved the interpretability of the results. Both approaches are well adapted to transcriptomic data where the number of features is much greater than the number of individuals. Only a small number of genes (<20) was required to give perfect prediction of the four groups. Take home message The central theme of the meeting was the lack of annotation. This was not in terms of bioinformatics tools to link sequences between species but a clear lack of knowledge regarding gene function. This was not specific for livestock species and considerable efforts are required before pathway based approaches will really come to fruition. In this context, there is a clear benefit for methods that do not require any level of annotation such as reverse engineering of networks and phenotypic prediction from microarray data. One challenging opportunity is to catalogue this level of experimental annotation (e.g. 'up-regulated after infection with Eimeria') as an alternative means to derive functional links over time.
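As an illustration of the phenotypic prediction step summarized above, the sketch below classifies samples into the four treatment-by-genotype groups with a Random Forest and with ordinary PLS regression on one-hot encoded labels. It is only a schematic reconstruction: the data are simulated stand-ins for the real ACTH pig expression matrix (GEO GSE8377), and scikit-learn does not provide the sparse PLS variant used in the workshop, so plain PLS is shown instead.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelBinarizer

rng = np.random.default_rng(0)
X = rng.normal(size=(24, 5000))                               # 24 arrays x 5000 probes (simulated)
y = np.repeat(["A_ctrl", "A_acth", "B_ctrl", "B_acth"], 6)    # hypothetical group labels

# Random Forest: direct multi-class prediction, assessed by cross-validation.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("Random Forest CV accuracy:", cross_val_score(rf, X, y, cv=4).mean())

# PLS: regress one-hot class membership on expression, assign the class with the largest score.
Y = LabelBinarizer().fit_transform(y)
pls = PLSRegression(n_components=3).fit(X, Y)
pls_accuracy = (pls.predict(X).argmax(axis=1) == Y.argmax(axis=1)).mean()
print("PLS training accuracy:", pls_accuracy)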
1,384.6
2009-07-16T00:00:00.000
[ "Biology" ]
A trapped ultracold atom force sensor with a $\mu$m-scale spatial resolution We report on the use of an ultracold ensemble of $^{87}$Rb atoms trapped in a vertical lattice as a source for a quantum force sensor based on a Ramsey-Raman type interferometer. We reach spatial resolution in the low micrometer range in the vertical direction thanks to evaporative cooling down to ultracold temperatures in a crossed optical dipole trap. In this configuration, the coherence time of the atomic ensemble is degraded by inhomogeneous dephasing arising from atomic interactions. By weakening the confinement in the transverse direction only, we dilute the cloud and drastically reduce the strength of these interactions, without affecting the vertical resolution. This allows to maintain an excellent relative sensitivity on the Bloch frequency, which is related to the local gravitational force, of $5\times10^{-6}$ at 1\,s which integrates down to $8\times10^{-8}$ after one hour averaging time. Introduction Atom interferometry has led to extremely sensitive and accurate inertial sensors such as gravimeters [1][2][3], gradiometers [4] or gyrometers [5]. These sensors are of great interest to perform tests of fundamental physics such as measuring fundamental constants [6][7][8][9], testing the equivalence principle [10][11][12][13], detecting gravitational waves [14,15] or probing short range forces [16][17][18]. Trapped atom interferometers in particular, allowing for longer interrogation times and thus for a better measurement sensitivity without increasing the interrogation spatial area, are paving the way for much more compact sensors. Moreover they provide better spatial resolution when compared to free falling atoms which is a key feature for short range forces measurements. In this context, using as a test mass atomic clouds featuring at the same time the smallest size, for better spatial resolution, and the largest number of atoms, for optimized signal to noise ratio, leads to a regime of high densities, where atomic interactions become an important issue. Such interactions induce mechanisms that can be either detrimental (inhomogeneous dephasing, collisional shifts [19,20]) or favorable (spin-self rephasing [21,22], spin squeezing [23], entanglement [24][25][26]) to the measurement and can lead to complex spin dynamics during the interferometric sequence [27]. In the experiment reported here, where tens of thousands ultracold 87 Rb atoms are trapped in only a few wells of a shallow 1-D vertical optical lattice, we explore high atomic densities ranging from 10 10 to 10 12 atoms/cm 3 . An interferometer in a Ramsey-Raman configuration [28,29] allows to probe the energy difference between adjacent lattice sites consequently providing a local measurement of the vertical force. Far from any source mass, the sensor measures the Earth gravitational acceleration. Close to the surface of the retroreflecting mirror creating the lattice potential, this sensor will allow for a very sensitive measurement of short range forces, and more specifically of the Casimir-Polder force with an expected relative uncertainty better than the percent. In a previous study [30] we have demonstrated a relative sensitivity of 3.9 × 10 −6 at 1 s in the measurement of the Bloch frequency, which got later improved down to 1.8 × 10 −6 at 1 s [31], comparable to the best ever reported sensitivity for a trapped cold atom force sensor of 1.5 × 10 −6 at 1 s [12]. 
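As a quick sanity check on the quoted performance figures, the 1 s sensitivity and the value reached after one hour are consistent with plain white-noise averaging (sigma scaling as the inverse square root of the averaging time); the one-liner below assumes that scaling, which is not stated explicitly in the text.

# White-noise averaging check: 5e-6 at 1 s should integrate to ~8e-8 after 3600 s.
sensitivity_1s = 5e-6
print(sensitivity_1s / 3600 ** 0.5)   # -> about 8.3e-08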
Yet, this interferometer had been performed with a large and diluted cloud loaded into the lattice from an optical molasses. With an atomic cloud of about 2 mm size, spreading over thousands of wells, this interferometer did not offer the spatial resolution required for short range measurements. In this paper, we report on the use of all-optical evaporative cooling to reach a thousand time better spatial resolution, while preserving high sensitivity. The size of the atomic sample is reduced to the order of a few microns and the temperature is also decreased from ∼ 2 µK to ∼ 100 nK. With atomic densities in the 10 10 to 10 12 atoms/cm 3 range, inhomogeneous dephasing induced by interactions constitute a limitation for the coherence of the atomic ensemble. In the following, we report on the precise evaluation of its impact onto the measurement sensitivity, and demonstrate that it can be mitigated by optimizing the experimental parameters, without compromizing the spatial resolution. After a brief description of the experimental setup and measurement principle, we present, for different trap parameters, contrast measurements as a function of the number of atoms, and show that coherence loss can be mitigated by reducing the confinement in the horizontal directions. We finally show how the sensitivity of the measurement can be optimized. For that purpose, we carry out a detailed analysis of all relevant sources of noise and finally determine the optimal interferometer parameters. Principle The system has been described in detail in previous work [28][29][30]32,33]. 87 Rb ultra-cold atoms are trapped in a shallow 1D vertical optical lattice. This system features localized pseudo-eigenstates, which compose the so-called ladder of Wannier-Stark states [34,35] |W S m , where the quantum number m is an index labeling the lattice sites. The energy separation between two consecutive Wannier-Stark states is simply given by the difference in gravitational potential between two consecutive wells : where λ l = 532 nm is the lattice laser wavelength and ν B ≈ 568.5 Hz is the Bloch frequency [36], m Rb is the atomic mass, g the acceleration of gravity. Taking into account the two internal states |f = |5 2 S 1/2 , F = 1, m F = 0 and |e = |5 2 S 1/2 , F = 2, m F = 0 , one obtains two Wannier-Stark ladders of states separated by an energy hν HFS where ν HFS ≈ 6.835 GHz is the hyperfine structure frequency. The WS ladders for the two sets of eigenstates |f, W S m and |e, W S m are shown in figure 1. For shallow lattice depths (U 0 < 10 E rec , where E rec is the recoil energy of a lattice photon), the atomic wave function spans across several wells, allowing for a laser induced coherent tunneling between different lattice sites. Resonant two-photon Raman transitions, using two counter-propagating beams, can be performed to couple the two states |g, W S m and |e, W S m+∆m in the same well (∆m = 0) or in different wells (∆m = 0). When the frequency difference between the two Raman lasers yields the resonance condition ν Raman = ν HFS + ∆m × ν B , we selectively address transitions between states separated by ∆m wells of the lattice (see figure 1). The coupling is optimized by tuning the lattice depth to adjust the WS state delocalization for a chosen separation ∆m [28,33]. Atomic source The lattice is loaded with ultracold atoms produced according to the following preparation sequence. First, 1.5 × 10 9 atoms are trapped in a 3 dimensional magnetooptical trap (3D-MOT), loaded within 600 ms from a 2D-MOT. 
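The printed form of the energy-splitting relation referred to in the Principle section above did not survive extraction. Under the standard relation h nu_B = m_Rb g lambda_l / 2 (adjacent wells are separated by half the lattice wavelength), the quoted Bloch frequency can be checked numerically, as in the short sketch below; the local value of g is an assumption.

# Numerical check of the Bloch frequency nu_B ~ 568.5 Hz quoted above,
# assuming h * nu_B = m_Rb * g * lambda_l / 2.
h = 6.62607015e-34                      # Planck constant (J s)
m_rb = 86.909180 * 1.66053907e-27       # 87Rb mass (kg)
g = 9.81                                # local gravitational acceleration (m/s^2, assumed)
lambda_l = 532e-9                       # lattice laser wavelength (m)

nu_B = m_rb * g * (lambda_l / 2) / h
print(f"nu_B = {nu_B:.1f} Hz")          # -> ~568 Hz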
The cloud is then compressed for 100 ms in a dark MOT, by lowering the intensity of the repumper laser and increasing both the magnetic field gradient and the detuning. After turning off the Figure 1: Ladders of Wannier-Stark states for a two level atom. We consider here the example of the two hyperfine ground states of an alkali atom, separated in frequency by ν HFS . Adjacent wells are separated by the Bloch frequency ν B . Raman laser pulses allow to couple neighbouring states, with Rabi frequencies Ω ∆m , which depend on the absolute distance between the wells. MOT, about 10 7 atoms are transferred into an optical dipole trap, predominantly in the |F = 1 state. Two beams of a high power Yb fiber laser at 1070 nm are crossed in the horizontal plane, with an angle of 43 • . They are switched on 100 ms before the MOT is turned off and focused onto the atoms with 50µm and 70µm radii at 1/e 2 and maximum powers of respectively 10 and 20 W. The power is then exponentially ramped down to 0.12 and 0.23 W within 1.25 s, which yields fast evaporative cooling. A sample of ∼ 10 5 atoms at a temperature of ∼300 nK is obtained. The final phase space density is 0.5, close to degeneracy. At the beginning of the evaporation, a vertical bias field of about 70 mG is applied and the atoms are optically pumped into the |F = 1, m F = 0 state with 70 % efficiency with a 1.2 ms long pulse of horizontally linearly polarized light tuned on the |F = 1 → |F = 0 . We attribute this imperfect pumping to the absorption of the pumping light by the optically thick sample of atoms. In order to purify the polarization of the sample, a sequence of microwave and pusher pulses is then used at the end of the evaporation. A first microwave pulse transfers the atoms from |F = 1, m F = 0 into |F = 2, m F = 0 and a subsequent 12 ms long pulse of optical pumping heats up the atoms remaining in |F = 1 , which escape from the trap. A second microwave pulse transfers the atoms from |F = 2, m F = 0 back into |F = 1, m F = 0 . The small fraction of atoms remaining in |F = 2 (about 3%) is finally pushed and more than 99.5% of the atoms are in the state |F = 1, m F = 0 which is insensitive, to first order, to magnetic fields. The atoms are then adiabatically transferred, within 100 ms, into the vertical optical lattice created with a retro-reflected laser beam at 532 nm. With a 500 µm waist and a power of 6 W, the maximal lattice depth is 7 E rec . This standing wave being blue-detuned, the atoms are not trapped in the transverse direction. We therefore superimpose a red-detuned progressive wave created by a laser beam at 1064 nm, with a waist ranging from 150 to 300 µm and a power of up to 2 W, to constrain the atoms at the maximum intensity of the lattice. Finally, we end up in this trap with a maximum number of atoms of a few 10 4 at transverse temperatures in the range 50-200 nK, depending on the power and waist of the radial confinement laser, which corresponds to atomic densities in the 10 11 − 10 12 at/cm 3 range (about three orders of magnitude higher than in previous configurations [30], where the cloud was loaded from an optical molasses). Interferometer schemes and measurement We describe in this section our interferometer geometry. Atoms, initially in the state |F = 1, m F = 0, W S m , are coupled to the state |F = 2, m F = 0, W S m+∆m via a twophoton Raman transition (see section 2.1). The counterpropagating Raman lasers are phase locked onto an ultra low noise reference oscillator. 
They have σ + -σ + polarizations, identical waists of about 1 mm and powers of a few mW. They are detuned from the D2 transition by 300 GHz to avoid loss of coherence due to spontaneous emission and to reduce differential light shift (DLS) inhomogeneities. Typical Rabi frequencies are of the order of Ω ef f ∼ 2π × 25 Hz, corresponding to a π/2 pulse duration τ π/2 ∼ 10 ms. A sequence of two π/2 pulses, acting as a beamsplitter and a recombination pulse, and separated by a free evolution time T , allows to create a Ramsey-Raman interferometer [29] with separated spatial states (see figure 2b). The phase shift of the interferometer is proportional to the difference in energy between the two states separated by ∆m wells: where ν Raman is the detuning between the two Raman lasers. This interferometer is at the same time a clock and an inertial sensor: it is sensitive to inertial effects through the dependence of this phase on the Bloch frequency and to clock shifts through its dependence on the hyperfine frequency. When used as a force sensor, it will thus be biased by frequency shifts of the clock transition, such as due to the DLS induced by the trapping lasers [20,30] and the cold collision shift induced by atomic interactions [37]. More, dephasing due to frequency shift inhomogneities lead to contrast loss at long interrogation times and degrades the sensitivity of the force measurement. To suppress, at least partially, the impact of these frequency shifts, we realize a symmetric version of the interferometer, as displayed in figure 2c, where two additional microwave π pulses are inserted in between the two Raman pulses in order to exchange the two internal states without changing the external states, so that the spatially separated wavepacket now spend the same time in each of the two hyperfine states. This symmetrization procedure reduces dephasing, which improves the contrast [31], and makes the interferometer phase independent of the hyperfine structure splitting. This phase is now given by: where ν MW is the frequency of the microwave source. Clock type Ramsey interferometers can also be performed, with no separation, using only two MW pulses. The interferometer phase shift is then: We can also symmetrize this interferometer with a third MW pulse as shown in figure 2a. This configuration is used to separate noise contributions from the Raman laser contribution. It is insensitive to the clock transition and the interferometer fringes are recorded by scanning the phase of the last MW pulse. (a) Symmetrized Ramsey-MicroWave Interferometer (SRMWI). We exploit the state labelling of the two-photon Raman transitions [29,38] to read out the interferometer phase. Fluorescence measurements of the populations N 1 and N 2 in the two internal states |F = 1 and |F = 2 allow to extract the interferometer phase via the measurement of the transition probability given by where C is the contrast of the interferometer. This normalized detection scheme makes the transition probability insensitive to atom number fluctuations. The measurement of the Bloch frequency ν B is realized with a symmetrized Ramsey-Raman interferometer (see figure 2c). A digital lock is performed [29] to operate at mid-fringe of the central fringe of the interferometer, where the sensitivity is maximal. We interleave frequency measurements with separations of ±∆m wells (which have identical Raman coupling, but different Raman resonance conditions). 
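The inline expressions for the Ramsey-Raman interferometer phase and for the transition probability appear to have been dropped during extraction. A plausible reconstruction, following standard Ramsey conventions rather than the authors' exact notation, is:

% Ramsey-Raman phase accumulated over the free evolution time T, for a separation of \Delta m wells
\Delta\phi = 2\pi \left( \nu_{\mathrm{Raman}} - \nu_{\mathrm{HFS}} - \Delta m\, \nu_B \right) T
% Transition probability inferred from the two detected populations
P = \frac{N_2}{N_1 + N_2} = \frac{1}{2}\left[ 1 - C \cos(\Delta\phi) \right]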
Measurements of the Bloch frequency are finally derived from the difference of frequencies between the two interleaved configurations. This measurement procedure allows suppressing common mode systematic frequency shifts and improving the long term stability. Optimization of the sensitivity From equation (3) and (5) and for ∆φ → π 2 (where the sensitivity is maximal), the relative sensitivity on the Bloch frequency at 1 s measurement time is given by: for a separation of ∆m wells, with σ P the standard deviation of shot to shot fluctuations of the transition probability and T p = 2.3 s the dead time, corresponding to the preparation and detection phases. The larger the separation ∆m, the better the sensitivity, but increasing the separation requires to lower the lattice depth to optimize the Raman coupling and reduce the coupling inhomogeneities. For separations ∆m ≥ 7, lattice depth below 1.3 E rec are required, for which the number of atoms drops drastically, due to the exponential scaling of losses via Landau Zener tunelling [39], resulting in a detrimental increase in detection noise. We therefore use in the following a maximum separation of ∆m = 6. To optimize the coupling, the lattice laser power is then set to 1.2 W which corresponds to a depth of 1.9 E rec . The three other parameters impacting the sensitivity in equation (6) are C, σ P and T . For small enough interferometer phase noise, σ P will be dominated by detection noise, which depends only on the number of detected atoms. As for the contrast, it decreases with interrogation time due to the finite coherence time of the interferometer. Optimizing the sensitivity thus results from a compromize between good contrast -but poor scale factor -at small T and large scale factor -but poor contrast -for large T . In our case, as we show later, the contrast decay rate depends on the number of atoms, which makes the optimum search multi-parameter. Contrast decay To determine the contrast loss rate, we record the interferometer fringes by increasing T while scanning the phase difference between the two laser pulses, with microwave and Raman frequencies fixed on resonance. Such a fringe decay signal is displayed on figure 3a. The contrast as a function of time is extracted from the Ramsey fringes using the standard deviation of a sinusoidal function over one period and fitted with the function C(T ) = C 0 e −γT , where γ is the contrast decay rate. the duration of the second MW pulse during the preparation (see section 2.2). We verified that, for each set of transverse confinement parameters,this selection method leaves the temperature and cloud volume unchanged, ensuring that the density scales linearly with the number of atoms. We observe an increase of the contrast decay rates with N at , which indicates that the dephasing due to atomic interactions is not completely suppressed by the symmetrization. The effect of the symmetrization for this Ramsey-Raman interferometer with seperated arms is in noticeable contrast with the behaviour observed in Ramsey-MW interferometers (with no spatial separation), for which, in such regimes of densities, exchange collisions have been shown to lead to spin synchronization and spin self rephasing (SSR) [21] and where special attention must be paid, when symmetrizing the interrogation pulses sequence, to the joint effects of SSR and the symmetrization pulses [27]. 
Here, contrary to [21,27], the two wavepackets being spatially separated (∆m = 0), the exchange collision rate and hence the SSR are reduced. We thus observe neither the extended coherence times of [21,27] nor any non-monotonic behavior of the contrast due to the symmetrization pulses. Fitting the data from figure 3b with the function γ(N at ) = γ 0 + α N at allows to distinguish interaction effects from other decoherence sources, such as related to the trapping lasers. The extrapolated dephasing rate γ 0 for N at = 0 can be attributed to imperfect suppression of DLS (due for instance to laser intensity and pointing fluctuations, or to temperature changes caused by heating of the atomic cloud or residual evaporation) and to inhomogeneities in the parasitic dipolar forces if the atoms are not perfectly placed at the waist of the transverse confinement laser [30]. Given that interaction effects scale linearly with density, α is expected to be inversely proportional to the volume of the atomic cloud. The decoherence induced by the interactions, the DLS and the parasitic dipolar force all depend on the power P and the waist w of the transverse confinement laser. Decreasing the power of the transverse confinement laser allows to reduce these three deleterious effects together. Nevertheless lowering the trap depth below ∼ 1µK reduces drastically the number of trapped atoms. Consequently, we chose, for a given trap depth, to increase the waist and the power of the beam in order to dilute the cloud further and decrease significantly the effect of the interactions. The blue points and the red crosses in figure 3b correspond to measurements done with the same waist of 150 µm and different powers of respectively 613 and 215 mW. The trap is deep enough in both cases for all the atoms to be transferred from the dipole trap after the evaporation. As expected, both the offset γ 0 and the slope α are both reduced. The black squares correspond to a waist of 300 µm, for which the density (for a given number of atoms) is much lowered, thus corresponding to a reduced slope α. We repeated such measurements for different intensities, in the 6-60 W/mm 2 range, and waists in the 150-300 µm range. Figure 4 displays the parameter α extracted from these fits, as a function of the transverse trap frequency ν r . Assuming that the atoms are loaded adiabatically in the transverse potential and well within the harmonic regime (the trap depth ranges from 5 to 20 times the atomic cloud temperature), the volume of the trap, and thus α, is expected to scale linearly with ν r . For low trap depths, the harmonic approximation is not valid anymore which explains why α does not increase linearly at small ν r . In light of this analysis, we choose for the rest of the study to increase the waist of the transverse confinement laser from 150 µm to 300 µm. The transverse size of the cloud is then increased from 36 µm to 74 µm and the density is divided by a factor 4. The contrast decay rate, at large atom number, is reduced by approximately a factor 4. With 15000 atoms and a power of 1.36 W, the coherence time of the atomic sample is finally 1/γ = 2.5 s. Fluctuations of the transition probability To optimize the relative sensitivity (6) and choose the optimal parameters for the symmetrized Ramsey-Raman interferometer, we need to characterize the shot to shot fluctuations of the transition probability σ P . 
We quantify below the contributions to σ P arrising from detection noise σ det (which depends only on the number of atoms), and from the interferometer phase noise σ φ (which depends on the interferometer duration T). Taking those two contributions into account, σ P is expressed as: Detection noise The detection noise σ det is given by: where the first contribution is related to electronics noise (such as related to digitization noise, background light or voltage noise of the transimpedance circuit), the second to quantum projection noise and the third to technical noise (such as related to normalization noise or detection laser intensity and frequency noise). A detailed characterization of our detection scheme, with the atoms prepared in equal superposition of the |F = 1 and |F = 2 states using a single π 2 MW pulse, gives a = 58 and b = 10 −3 . Phase noise To determine the interferometer phase noise, we operate the interferometer at mid fringe and calculate the Allan standard deviation of the measured transition probability. To separate phase noise from detection noise, we then (quadratically) substract this latter contribution (estimated from (8) at a given atom number). The results, displayed in figure 5, show an increase of the measured interferometer phase noise as a function of T . To obtain an analytic expression and take this increase of the phase noise with T into account in the estimation of the relative sensitivity (see next section), we perform a linear fit to the data with the law σ φ = σ φ 0 + k T . We find σ φ 0 = (56 ± 7) mrad and k = (21 ± 6) mrad/s. The dispersion of the data could be due to non stationarity of the measured noise, the graph collecting measurements realized over several days. We verify that using a quadratic fit to the data or even a simple time-independant average of the phase noise does not change significantly the optimal parameters of the interferometer derived in the following. Expected relative sensitivity Now that we have determined how the paramaters C, σ φ and σ det depend on T and N at , we can evaluate the expected sensitivity for different trap geometries and number of atoms. We use equations (6), (7) and (8), the fit from figure 3b for the expression of C(N at , T ) and the fit from figure 5 for the expression of σ φ (T ). Figure 6 displays calculated short term relative sensitivities σν ν as a function of T for three numbers of atoms and two IR transverse waist sizes (left: smaller waist = 150 µm, right: larger waist = 300 µm). As expected, we find optimal interferometer times, which result from the compromize discussed above. This optimal interferometer time, and the corresponding optimal sensitivity, depends on the number of atoms. Increasing the IR transverse size improves the optimal sensitivity, and reduces the variation of the optimal interrogation time with the number of atoms. Finally, optimizing on both parameters at a time, we find best sensitivities of the order of ∼ 5 × 10 −6 at 1 s for T ranging from 3 to 3.5 seconds and for a number of atoms lying in between 10 000 and 60 000. (6) and (7), where the contrast, the detection noise and the phase noise are extracted from measurements (see text). Left: Transverse confinement laser waist = 150 µm -Right: Transverse confinement laser waist = 300 µm. 
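To make the optimization described above concrete, the sketch below evaluates an expected 1 s sensitivity as a function of T and atom number. The functional forms of equations (6) to (8) did not survive extraction, so the mid-fringe Ramsey scaling, the quadrature sum for sigma_P and the form of the detection noise are assumptions; C0, gamma0 and alpha are illustrative placeholders (chosen so that 1/gamma ~ 2.5 s and C ~ 0.3 at T = 2.7 s for 15 000 atoms, as quoted in the text), while a, b, sigma_phi0, k, T_p and nu_B take the values given above. With these inputs the optimum lands near T ~ 3 s and a few times 10^-6, the same order as the reported result.

import numpy as np

nu_B, delta_m, T_p = 568.5, 6, 2.3       # Bloch frequency (Hz), well separation, dead time (s)
a, b = 58, 1e-3                          # detection-noise parameters quoted in the text
sigma_phi0, k = 56e-3, 21e-3             # phase-noise fit (rad, rad/s)
C0, gamma0, alpha = 0.9, 0.1, 2e-5       # contrast model, illustrative placeholders

def rel_sensitivity_1s(T, N_at):
    C = C0 * np.exp(-(gamma0 + alpha * N_at) * T)            # exponential contrast decay
    sigma_det2 = a / N_at**2 + 1.0 / (4 * N_at) + b**2       # assumed form of eq. (8)
    sigma_phi = sigma_phi0 + k * T                           # fitted phase noise
    sigma_P = np.sqrt(sigma_det2 + (C / 2 * sigma_phi)**2)   # assumed form of eq. (7)
    # assumed form of eq. (6): mid-fringe Ramsey response, scaled to 1 s of averaging
    return sigma_P * np.sqrt(T + T_p) / (np.pi * C * delta_m * nu_B * T)

T_grid = np.linspace(0.5, 5.0, 451)
for N_at in (5_000, 15_000, 60_000):
    s = rel_sensitivity_1s(T_grid, N_at)
    i = int(np.argmin(s))
    print(f"N_at = {N_at}: best {s[i]:.2e} at T = {T_grid[i]:.2f} s")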
Results We then performed interferometer measurements for various T and number of atoms, in the range 50 ms to 4 s and 1500 to 30000 atoms and found an optimal sensitivity of 5 × 10 −6 for T = 2.7 s, N at = 15000 and C = 0.3, close to the expected values. A typical fringe pattern for such a set of parameters is shown in figure 7. The corresponding temperature cycles result in alignement and polarization fluctuations whose main effects are atom number and atomic density fluctuations on the one hand, and Raman coupling fluctuations on the other hand. The solid blue line is the differential measurement corresponding to the half difference between interleaved ∆m = -6 and +6 measurements. This method rejects common mode frequency shifts and supresses the drifts discussed above. The relative stability of the differential measurement decreases down to 8 × 10 −8 after one hour of integration. To compare with earlier results [31], the measurement with a cloud loaded from a molasses (much lower density and spatial resolution of about 2 mm) is also presented (dashed red line). It reached a short term relative sensitivity of 1.8 × 10 −6 at 1 s. This best sensitivity was achieved thanks to a shorter dead time (no evaporation stage), and to a negligible impact of interactions as already explained. Limitations To better understand the limits in the sensitivity, we have performed an exhaustive analysis of the impact of all noise sources. The different contributions are listed in The main contribution is the detection noise and has been described in section 3.2. Vibrations Since we are probing the vertical potential, our sensor is sensitive to inertial noise in this direction. The acceleration due to the vibration of the lattice retroreflecting mirror is measured with a seismometer placed at the top of the vacuum chamber, next to the mirror. The velocity signal is acquired during the interferometer and weighted by the transfer function of the interferometer. We calculate frequency fluctuations due to the vibrations of σ ν,vib ∼ 2.65 mHz. Phase noise of the reference signal A crystal oscillator is used as a reference for the frequency of the microwave source and for the frequency difference betwen the Raman lasers. We use an ultra low phase noise oven controlled crystal oscillator (O-CDFF28ISN-R-10MHz/100MHz from NEL Frequency Controls, Inc.). To avoid long term drifts, the quartz is locked onto an ultra stable reference signal distributed in the laboratory, with a bandwidth of the order of 0.1 Hz. The power spectral density of phase fluctuations of our quartz oscillator was measured, and the impact of its phase noise onto the symmetrized Ramsey-Raman interferometer phase noise was calculated, using the formalism of the sensitivity function [40,41]. For our interferometer parameters (T p = 5 s and T = 2.7 s), we obtain a shot to shot frequency fluctuation due to our reference signal of σ ν,ocxo = 1.32 mHz. This contribution is not directly measurable since we cannot seperate it from other noise contributions such as the trapping laser fluctuations. Trapping lasers The trapping laser (lattice laser and transverse confinement laser, see section 2.2) intensity fluctuations induce differential light shift fluctuations which increase with the laser power. To evaluate their impact, we realize a symmetrized Ramsey-MW interferometer with T = 3 s using only MW pulses (see figure figure 2a) in order to be insensitive to Raman coupling fluctuations. 
We measure a shot to shot frequency noise of 2.3 mHz with 17000 atoms. We then substract the detection noise and the noise due to the reference signal calculated above, we then find a noise of 1.7 mHz that we attribute to the trapping laser intensity fluctuations. Raman laser Raman lasers can bring additional contributions to the noise budget : laser phase noise (especially outside the bandwidth of the PLL) and differential light shift fluctuations. To evaluate their impact, we compared the sensitivities of symmetrized Ramsey interferometers with MW pulses and with Raman pulses, and found no difference. We thus found that the Raman lasers do not add a significant noise contribution. Lattice depth When driving the symmetrized Ramsey-Raman interferometer (see figure figure 2c), the Raman coupling between the ground and excited state depends on the lattice depth [33]. Thereby, lattice laser fluctuations (intensity, waist, pointing), as well as pointing instabilities of the transverse confinement laser, induce coupling fluctuations through the depth variations seen by the atoms. Even if the phase of the interferometer is in principle insensitive to coupling fluctuations, these modify the contrast and offset of the fringe pattern, and hence the transition probability. We will thus in the following determine the amplitude of depth fluctuations, and quantify their impact on the measurement of the transition probability. First, we drive a π/2 Raman pulse on the ∆m = 6 transition at a lattice depth of ∼ 2.5 E rec , away from the optimal depth of 1.9 E rec and where the coupling varies linearly with the depth [28,31]. We first determine this slope by measuring the change of transition probability when deliberately varying the depth from 2.3 to 2.7 E rec . Then we set the depth to 2.5 E rec and measure the fluctuations of the transition probability. This last measurement, combined with the slope, allows to evaluate the amplitude of shot to shot depth fluctuations of about 1%. This results in relative coupling fluctuations as low as σ Ω /Ω ∼ 5 × 10 −4 when the depth is adjusted for optimal coupling. The impact of such coupling fluctuations is calculated to be negligible (on the order of 10 −5 Hz shot to shot fluctuations). Conclusion The quadratic sum of these different contributions is close to the fluctuations σ P measured with the optimized set of parameters. The slight difference could be explained by non-stationarity in the noise, by additional light shift induced by stray light from the dipole trap laser or eventually by an unidentified source of noise. Prospect for short range forces measurement Operating this sensor close to the lattice retro-reflecting mirror will allow to probe for short range forces. The atoms will be moved in the vicinity of this surface by the mean of a moving lattice. The resolution could be improved further by selecting the atoms in a single Wannier-Stark state, by lifting the degeneracy between neighbouring transitions. This could be realized for instance by manipulating Zeeman states in a magnetic field gradient [42], by applying an additional light shift or force using an appropriately shaped optical potential [43], or even using the Casimir-Polder interaction itself when very close to the surface of the mirror. The number of atoms will then be reduced, thus increasing detection noise and degrading the sensitivity, as presented on the left plot of figure 9, assuming a selection of 10% of the initial sample. 
Nevertheless, improving the detection noise down to the quantum projection noise level would allow to maintain the sensitivity, as shown on the right plot of figure 9. Spatial resolution We define the spatial resolution of our sensor as the time averaged standard deviation of the atomic position distribution along the vertical axis σ z . As the resolution of our imaging system is about 4.5 µm, we cannot measure this distribution purely optically. We will thus infer the spatial resolution from the determination of the mean atomic density and radial size of the sample. The mean density will be deduced from the measurement of the frequency shift induced by cold collisions in a standard Ramsey-MW interferometer performed in the trap. It is given by [37]: where a 11 and a 22 are the relevant scattering lengths. Atomic distribution in a periodic potential A knowledge of the atomic distribution is necessary to link the measured mean density to σ z . To determine this distribution, we first model the state of the atomic sample after the evaporation as a statistical mixture of minimal wavepackets, of temperature 300 nK, distributed in a Gaussian distribution, of rms radius σ z,DT . The initial quantum state in the lattice is then obtained by projecting these wavepakets into the subbasis of the WS eigenstates of the fundamental band of the lattice. The statistical mixture of projected wavepackets leads to the atomic distribution displayed as a blue line in the left part of figure 10. There, the initial Gaussian distribution with σ z,DT = 2.8 µm is displayed as a red line. The lattice depth is 1.9 E rec . The position distribution in the lattice is not stationary, but evolves periodically at the Bloch frequency. The corresponding evolution of σ z is displayed at the right of figure 10. Having determined the distribution, we can now calculate the mean linear vertical density and the rms size σ z . We finally find the following relationship between these two quantities (at the lattice depth of 1.9 E rec ): κ =n z σ z = 0.34 (10) We find that this product does not depend on the initial cloud size σ z,DT as long as this size is larger than two lattice sites. More, its value varies by less than 3% during a Bloch oscillation, since the size increases while the density decreases. The mean density we measure is in fact averaged over a Bloch period, thus corresponding to a time averaged size. For the example given in figure 10, the size σ z oscillates between 2.8 and 3.2 µm (right). The trap being harmonic in the tranverse directions, the vertical size of the cloud can now be deduced from the average densityn 3D : Results With a waist of 300 µm and a power of 1.36 W for the transverse confinement laser, we measure with our imaging system a transverse size of σ r = (74 ± 5) µm. The collisional frequency shift is measured to be (25 ± 5) mHz for 30000 atoms, from which we deduce, with equation 9, a density ofn = (5 ± 1)×10 10 at/cm 3 . The corresponding vertical size, derived from equation 11, is then σ z = 3.0 ± 0.7 µm, at our lattice depth of 1.9 E rec . This corresponds to an initial size σ z,DT of 2.8 µm as chosen in figure 10. Here, the confinement in a shallow lattice increases the parameter κ (of equation 10) by a factor 1.2 only, with respect to the harmonic case where κ harmo = 1 2 √ π . For deeper lattices, the effective volume is significantly smaller and the density increases. At 10 E rec , we calculate κ to increase by a factor of about 2. 
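The printed relation between the measured mean density and the vertical size (equation 11) is also missing from the text above. Assuming a Gaussian transverse profile of rms radius sigma_r, a natural form is n_3D = kappa N / (4 pi sigma_r^2 sigma_z); the short check below, using the numbers quoted above, recovers the reported sigma_z of about 3 µm. This form is an assumption made for illustration, not the authors' stated equation.

import numpy as np

N_at = 30_000          # atom number used for the collisional-shift measurement
kappa = 0.34           # n_z * sigma_z at 1.9 E_rec (from the text)
sigma_r = 74e-4        # transverse rms size (cm)
n_3d = 5e10            # measured mean density (at/cm^3)

sigma_z = kappa * N_at / (4 * np.pi * sigma_r**2 * n_3d)   # assumed form of eq. (11)
print(f"sigma_z = {sigma_z * 1e4:.1f} um")                 # -> ~3.0 um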
The size σ z = 3 µm determined here is the one for which we obtained the best sensitivity. Evaporating further down resulted in a drastic loss of atoms and a degradation of sensitivity. With a different dipole trap geometry (with waists of 25 and 200 µm), we obtain smaller sizes of about 1 µm, but with higher densities in the ∼ 10 12 at/cm 3 range, resulting in a degraded optimal relative sensitivity.

Impact of the cloud center oscillations
In principle, the spatial resolution determined above is degraded by the periodic motion of the center of the cloud (Bloch oscillations in position). At a depth of 1.9 E rec , the amplitude of this oscillation is about 0.6 µm. When averaged over one cycle, this results, in our case, in a negligible increase (less than 1%) of the width of the position distribution. This change would be more pronounced for smaller initial clouds. For example, in the limit of a single wavepacket at 300 nK, we calculate an increase of 15%.

Delocalization and Casimir-Polder measurement
We now discuss the impact of the delocalization of the wavefunction on the measurement of the Casimir-Polder (CP) interaction. The shifts of the energy levels of the WS ladder differ from the simple expression of the CP potential at each lattice site, and need to be precisely calculated. Such calculations were performed in [44,45], where the influence of the presence of the surface and of the CP interaction on the eigenstates and eigenenergies of the problem has been numerically evaluated. In principle, comparing the results of [44] with the measurements to come will require a precise knowledge of the distribution of the WS states initially populated, except if only a single WS state is populated or selectively interrogated. The wavefunction will nevertheless remain spatially delocalized at shallow lattice depths (at 1.9 E rec , the rms size of the WS states is 3 lattice sites). The impact of this delocalization on the phase shift of the interferometer could be reduced, for instance by increasing the lattice depth during the free evolution period.

Conclusion
We have demonstrated a quantum force sensor which combines a very high spatial resolution of 3 µm with a very high sensitivity of 5 × 10 −6 , and where interaction-induced dephasing is efficiently mitigated by diluting the atomic sample in the transverse direction. To establish a figure of merit for our atomic interferometer, we define the variable η = (σ ν /ν) σ z = (σ F /F) σ z , where σ ν /ν is the relative sensitivity at 1 s (see equation (6)), F is the measured force corresponding to the frequency ν and σ z is the width of the Gaussian vertical atomic distribution, assimilated to the spatial resolution. With a size of σ z = 3 µm (see section 5) and our best sensitivity at 1 s of 5 × 10 −6 , we reach a value of η = 1.5 × 10 −11 m. The force sensor described in [46] is based on the measurement of Bloch oscillations of 88 Sr atoms trapped in a vertical lattice. A resolution of σ z = 12 µm is achieved, combined with a relative sensitivity of 5 × 10 −6 , thus corresponding to η = 6 × 10 −11 m (see Table 2).

Table 2: Comparison between a selection of trapped force sensors.
            Resolution   Relative sensitivity on g   η                Relative sensitivity on Casimir-Polder
This work   3 µm         5 × 10 −6 at 1 s            1.5 × 10 −11 m   1% after 5 s averaging time
[12]        37 µm        1.5 × 10 −6 at 1 s          5.6 × 10 −11 m   N/A
[46]        12 µm        5 × 10 −6 at 1 s            6 × 10 −11 m     N/A
[47]        2.4 µm       N/A                         N/A              10% after 10 min averaging time
In [12], a state-of-the-art relative sensitivity of 1.5 × 10 −6 is obtained with the same experiment but the measurement is based on the delocalization of the wave-packet and thus can't achieve the same resolution, the cloud width reaches 37 µm corresponding to η = 5.6 × 10 −11 m. Well resolved measurements of the Casimir-Polder (CP) potential using ultracold atoms were performed in [47]. In this experiment, the CP potential is derived from the shift in the oscillation frequency of the center of mass of a BEC trapped in a magnetic field in the vicinity of a surface. A Thomas-Fermi radius of σ z = 2.4 µm is achieved and the sensitivity on the frequency shift allows to reach a relative sensitivity on the CP potential at 6µm from the surface of about 10%. This uncertainty is obtained after more than ten minutes measurement time. In comparison, the sensitivity we have demontrated would allow for the determination of the CP potential at the same distance of 6 µm with an uncertainty of the order of 1% in a single shot measurement. This combination of a very high sensitivy on a force measurement and of a very high spatial resolution makes this sensor a perfect device for short range forces measurements such as the CP force as mentionned above or for the search of a deviation to the gravitational potential at short range [17,48].
9,452.8
2018-07-16T00:00:00.000
[ "Physics" ]
Malaysian views on COVID-19 vaccination program: a sentiment analysis study using Twitter ABSTRACT INTRODUCTION In December 2019, China was hit by a sudden outbreak of COVID-19 caused by the SARS-CoV-2 virus. The World Health Organization (WHO) labeled it a pandemic due to its severe and widespread nature, which can lead to severe pneumonia, respiratory failure, and death. The entire world, including Malaysia, has been affected by this pandemic. To combat the spread of COVID-19, a global effort has been made to develop and test vaccines. Furthermore, one of the best methods for lowering the prevalence of infectious diseases is vaccination. Numerous vaccines have been developed and approved in a short time frame to counter the pandemic. One example is the Pfizer/BioNTech vaccine, which was the first to be approved for widespread use in the United Kingdom on December 2, 2020, less than a year after the pandemic was declared. However, a sizable portion of people express hesitation and even hostility toward vaccination [1], [2]. This hesitation mainly stems from public concern about the vaccination in terms of its: i) health risk, ii) cultural acceptance, iii) religious acceptance, iv) economic growth, and v) political stand [3]. This hesitation and reluctance shape the way individuals perceive the risk of getting infected, as well as how they view the gravity of the infection, which in turn leads to a low acceptance rate of the vaccine [4], [5]. The reluctance to get vaccinated could have a significant and far-reaching impact on the acceptance of COVID-19 vaccines by people in the community, as it poses a threat not only to the hesitant individual but to the entire community. Delays and rejections would make it impossible for communities to reach the required level of vaccine uptake necessary for herd immunity to be achieved [6]. Currently, the focus is on developing a vaccine to protect the population from COVID-19, but it is important for stakeholders to be prepared for the next challenge, which is ensuring the vaccine is accessible and accepted by the public. LITERATURE REVIEW Over the last few years, as the COVID-19 pandemic spread globally, COVID-19 vaccine-related issues have received increased public attention, especially relating to the public's hesitation to be vaccinated. The COVID-19 pandemic has raised public concern about vaccine hesitancy, which can be broken down into three main reasons: i) evaluating the risks and benefits of vaccines, ii) lack of knowledge and awareness, and iii) influence of religious, cultural, gender, and/or socio-economic factors [7]. This hesitancy is a result of poor health literacy, thus leading to a low acceptance of the COVID-19 vaccines [8], [9]. Another major reason leading to the limited uptake of vaccines is the impact of social media, especially the usage of Twitter [10]. An extensive literature review has shown that social media, particularly Twitter, is an excellent channel for expressing emotions, perspectives, and viewpoints [11].
Furthermore, Twitter is a social media platform where people can openly share their opinions. Twitter provides a place for individuals to honestly communicate their ideas in real time, with over 100 million active users and up to 500 million tweets generated every day [12]. Twitter is also a useful tool for evaluating the true public mood, since users may express themselves freely and at ease, in contrast to traditional face-to-face interviews. Additionally, data collection for studies involving opinion analysis is facilitated by Twitter's application programming interface (API) and open database access [13]. Previous research has shown that the public's regular use of Twitter during COVID-19 boosted health awareness [14] and the execution of appropriate health safety measures during the pandemic. As a result, several government organisations have started using tweets to manage crises and deliver real-time updates [15], [16]. Additionally, prior research has shown that the messages in these tweets (such as opinions) can help the responsible authority gain a high-level grasp of the actual situation, particularly during the COVID-19 pandemic [17]. Since these beliefs and related ideas such as sentiments, attitudes, and emotions are fundamental to human activity, applying sentiment analysis to tweets could show how the general public feels about the COVID-19 vaccination [14], [15]. The process of analysing the opinions, feelings, and sentiments expressed in words or sentences is known as sentiment analysis, sometimes called opinion mining [18], [19]. Sentiment analysis has grown in favour in the medical industry as a useful method for determining people's views toward vaccinations, immunisation, and public health in general [13]. According to Hussein et al. [20], sentiment analysis of tweets about the COVID-19 vaccination could be a valuable tool for policymakers and governments, as it enables them to keep track of public opinion and make informed decisions. According to Rosis et al. [21], communicating important measures against COVID-19, such as getting vaccinated, wearing masks, practicing social distancing, and maintaining personal hygiene, has greatly contributed to controlling the spread of the virus. Twitter, as one of the prominent social media platforms, plays a significant role in raising awareness of these crucial measures. Public perception of and attitude towards the pandemic are crucial in developing effective strategies to combat it [21], [22]. In this regard, the analysis of social media provides valuable information for health professionals and government officials in their decision-making processes. Based on the paragraphs above, this study aims to gain deeper insights into what people are thinking and feeling regarding COVID-19 vaccination by examining tweets. Furthermore, this study collects tweets using keywords related to vaccines and health concerns post-vaccination to gain insight into public perception and assist policymakers in planning the vaccination effort and health measures. By analyzing the Twitter data, healthcare professionals and policymakers can gain an understanding of how the public is reacting to the COVID-19 vaccine during the pandemic. The study also hopes to shed light on people's views on health guidelines for COVID-19 prevention after receiving the vaccine.
METHOD In order to attain the goal of the study, the machine learning life cycle (MLLC) method was selected. This technique is a highly effective method with broad applications and has been shown to produce results with superior accuracy compared to approaches that involve human intervention, which tend to have a lower accuracy rate [23], [24]. The MLLC method consists of seven steps: i) data collection, ii) data preparation, iii) data cleaning, iv) data analysis, v) model training, vi) model testing, and vii) implementation. These steps are discussed briefly in the following sub-sections. Data gathering Data gathering, the first stage of the machine learning life cycle, aims to identify and gather data related to the problem. Identifying different data sources, such as files, databases, the internet, and mobile devices, is part of this process. The hashtags "#COVID-19Vaccine", "#AstraZeneca", "#Sinovac", "#Pfizer", and "#VaccineSideEffect" were used to collect data for this study from the Twitter platform. An open-source tool called StartBot was used to gather information going back to 2002. In order to create a cohesive dataset that is used in the following stages, the process entails identifying multiple data sources, gathering data, and integrating data from different sources. Data preparation After collecting data, the next step is to plan for the following stages, which includes data preparation. This process involves organizing the data in an appropriate location and preparing it for use in machine learning training. This includes randomly shuffling the data's ordering, extracting tweet details such as the hashtag, username, user handle, date of posting, tweet text, retweet counts and like counts, and saving them in an Excel file for faster access during the project. Data exploration and data pre-processing are two procedures that fall under this category. Data exploration is used to understand the type of data being dealt with, identifying the features, format, and quality of the data. This step helps in identifying correlations, general patterns, and outliers in the data. The next phase is pre-processing the data for analysis. This dataset contains only tweets concerning the COVID-19 vaccine expressed in English. Data cleaning The act of cleaning and turning raw data into a usable format is known as data wrangling. It is the process of cleaning the data, selecting the variables to use, and changing the data into a suitable format for analysis in the following phase. It is one of the most crucial phases in the entire procedure. To overcome quality concerns, the data must be cleaned. Not all of the gathered data is necessarily useful, as part of it may be irrelevant. Missing values, duplicate data, invalid data, and noise are all problems that might arise in real-world applications. As a result, cleaning the data involves a variety of filtering approaches. These issues must be identified and resolved, since they might have a detrimental impact on the quality of the final product. The text cleaning in this project may be done using Python code that removes numbers, stickers, old-style retweet markers ('RT'), hashtags, punctuation, and stop words.
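The study's actual cleaning code is not reproduced in this excerpt; the following is a minimal sketch of the cleaning steps described above (removing numbers, the 'RT' marker, hashtags, punctuation, and stop words). The use of NLTK's English stop-word list is an assumption, and the example tweet is invented.

```python
# Minimal sketch of the tweet-cleaning steps described above (numbers, 'RT', hashtags,
# punctuation, stop words). NLTK is an assumed source for the stop-word list.
import re
import string
from nltk.corpus import stopwords  # requires: nltk.download("stopwords")

STOP_WORDS = set(stopwords.words("english"))

def clean_tweet(text: str) -> str:
    text = re.sub(r"^RT\s+", "", text)             # old-style retweet marker
    text = re.sub(r"@\w+|https?://\S+", "", text)  # user handles and links
    text = re.sub(r"#\w+", "", text)               # hashtags
    text = re.sub(r"\d+", "", text)                # numbers
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [w for w in text.lower().split() if w not in STOP_WORDS]
    return " ".join(tokens)

print(clean_tweet("RT @user: Got my 2nd #Pfizer dose today!!! 100% fine"))
# -> "got nd dose today fine"
```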
Data analysis The data has now been cleansed and prepared and is ready to be analysed. This process entails choosing analytical methodologies, creating models, and analysing the results. The goal of this stage is to create a machine learning model that will study the data using a variety of analytical approaches and then evaluate the results. This stage involves categorising each tweet as positive, negative, or neutral. To analyse the data in this study in the context of Malaysian views on the COVID-19 vaccine, polarity and subjectivity were estimated. The stage begins with determining the issue type, after which machine learning techniques such as classification, regression, cluster analysis, association, and others are chosen. The model is then built using the data that has been prepared, and the model is subsequently evaluated. As a result, during this stage, the prepared data is taken and the model is developed using machine learning methods. The Naïve Bayes approach was utilised to create the sentiment classifier used for emotion identification of Malaysian perspectives on COVID-19 vaccination. Model training The following stage is to train the model; in this phase, the model must be trained to increase its performance in order to achieve a better solution to the problem. It employs a variety of machine learning methods to train the model using datasets. A model must be trained for it to comprehend the numerous patterns, rules, and characteristics. A dataset is utilised in this project to train the model using the Naïve Bayes technique in the scikit-learn Python module. Model testing The machine learning model may be tested once it has been trained on a specific dataset. The assessment of the correctness of the model during this stage is done by feeding it a test dataset. The percentage of correctness of the model is determined by testing it against the project's or problem's requirements. The project must go through the testing model phase in order to assess the accuracy of the sentiment analysis. Section 4 provides detailed calculations and explanations on how to determine the accuracy score, precision, recall, and F1-score. However, testing is usually done to see whether the suggested design fits the initial set of business requirements. Testing may be repeated to look for mistakes, defects, and interoperability issues. Verification and validation are two further aspects of this phase that help to assure the program's success. Implementation The focus of this study was to conduct a comprehensive analysis of sentiment analysis techniques applied to Twitter data, aiming to provide insights into their effectiveness and performance. Given the scope and depth of the analysis conducted, the decision was made to defer the implementation stage to future research, allowing for a more thorough investigation of real-world deployment challenges and considerations.
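A minimal sketch of the pipeline described above follows: polarity-based labelling of tweets and a Naïve Bayes classifier built with scikit-learn, the module the study names for model training. TextBlob is an assumed choice for estimating polarity (the study does not name a library), and the tweets and variable names are illustrative only.

```python
# Minimal sketch: polarity labelling followed by a Naive Bayes sentiment classifier in
# scikit-learn. TextBlob is an assumed choice for polarity; the tweets are invented.
from textblob import TextBlob
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

def label_by_polarity(text: str) -> str:
    polarity = TextBlob(text).sentiment.polarity  # -1 (negative) .. +1 (positive)
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"

cleaned_tweets = ["got my pfizer dose today feeling great",
                  "worried about vaccine side effects",
                  "vaccination centre opens next week"]
labels = [label_by_polarity(t) for t in cleaned_tweets]

X_train, X_test, y_train, y_test = train_test_split(cleaned_tweets, labels,
                                                    test_size=0.3, random_state=42)
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on the held-out tweets
```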
RESULTS AND ANALYSIS This section demonstrates the outcomes and interpretation of the research, structured as: i) interface, ii) process of gathering tweets, iii) findings of the sentiment analysis, and iv) evaluation of the sentiment analysis classifier model. The interface design illustrates how users can interact with the sentiment analysis system and interpret the results effectively, enhancing usability and accessibility. The detailed exposition of the tweet gathering process provides a clear understanding of the data collection methodology, addressing potential biases and limitations in the dataset. Additionally, the presentation of the sentiment analysis findings delves into nuanced insights derived from the analysis, shedding light on patterns, trends, and potential applications. Lastly, the evaluation of the sentiment analysis classifier model encompasses rigorous quantitative metrics and qualitative assessments, establishing a comprehensive assessment of its performance and generalizability. Interface of the sentiment analysis application The web dashboard shown in Figure 1 incorporates dynamic visualizations that allow users to interact with the sentiment analysis results in real time, enabling the exploration of sentiment trends across different time periods, user demographics, and tweet characteristics. By utilizing Tableau's robust features, the dashboard provides an intuitive and comprehensive representation of the sentiment analysis outcomes, enhancing the accessibility of insights and aiding decision-making processes for various stakeholders. Tweets gathering process This section describes the process of collecting tweets for the dataset. The data was obtained from Twitter using five specific hashtags: i) '#COVID19Vaccination', ii) '#AstraZeneca', iii) '#Pfizer', iv) '#Sinovac', and v) '#VaccineSideEffects'. The process started by utilizing the StartBot open-source program to extract tweets from Twitter. Afterwards, the dataset underwent data preparation and cleaning, as outlined in the methodology section. During the data cleaning step, any unimportant punctuation, stop words, and sentences were removed from the tweets column. Figure 2 illustrates the sample code employed in the data cleaning. Sentiment analysis results The analysis of the cleaned dataset revealed that most of the COVID-19 data was neutral in tone, while only a small proportion had a negative sentiment (see Figure 3). This suggests that most people have a positive attitude towards the vaccination program. This finding is reinforced by the daily updates on the Malaysian COVID-19 website, which provides information about the progress of the vaccination program. The study presents a word cloud to showcase the public's opinions about COVID-19. The word cloud, displayed in Figure 4, focuses specifically on the topic of COVID-19 vaccinations. The results show that the Pfizer vaccine is the most talked about and tweeted vaccine on Twitter, as it is highlighted in a larger font size and in bolded letters.
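The word cloud in Figure 4 renders term frequency as font size. A minimal sketch of how such a visualization can be produced is given below; the wordcloud package is an assumed choice (the study does not name its plotting tool), and the input text is invented.

```python
# Minimal sketch of generating a word cloud where more frequent terms (e.g. a heavily
# discussed vaccine brand) appear in a larger font. The wordcloud package is assumed.
import matplotlib.pyplot as plt
from wordcloud import WordCloud

cleaned_text = "pfizer dose pfizer booster side effects sinovac pfizer astrazeneca vaccine"
cloud = WordCloud(width=800, height=400, background_color="white").generate(cleaned_text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```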
Evaluation of the sentiment analysis model (classifier) To assess the performance of the sentiment analysis classifier, four evaluation metrics are used: precision, recall, F1-score, and accuracy. These metrics are commonly used in the evaluation of classification models. The results of the evaluation are depicted in Figures 5 and 6. Based on Figure 5, the accuracy of this study stands at 97.3%. This can be considered a decent level of accuracy, as any value above 70% is considered a good model when evaluating sentiment analysis performance [25]. The classification report in this study, depicted in Figure 6, shows the: i) precision, ii) recall, and iii) F1-score of the study's results. The precision score of 93% indicates that the model is effective in identifying genuine positive outcomes among all predicted positive results. The recall score of 94% shows that the model recovers the large majority of the true instances in the dataset. The F1-score of 94% is a high score, which confirms that this model is a good and reliable model for use in sentiment analysis. CONCLUSION The research aimed to provide a comprehensive understanding of Malaysians' perceptions of the COVID-19 vaccination through sentiment analysis. To achieve this, data was collected from Twitter and analysed to determine the sentiment behind the tweets. The results of the study showed that Malaysians have a generally neutral perception of the COVID-19 vaccination. This study highlights the importance of understanding public perception and sentiment towards a critical issue like the COVID-19 vaccination program. The findings can be used to inform and guide healthcare professionals, policymakers, and the public in making informed decisions regarding the COVID-19 vaccine. This study can also be used as a foundation for future research in the field of sentiment analysis, with the potential for improvement and expansion. Further, the findings of this study suggest that the general perception of COVID-19 among Malaysians is neutral. The increasing number of vaccinations being administered is evidence of this. While some individuals remain skeptical of the COVID-19 vaccine, awareness about it is growing. The word cloud analysis shows the frequency with which the COVID-19 vaccine is being discussed on Twitter. The accuracy of this study is 94%, which demonstrates its effectiveness in achieving its overall aim. Limitations: this study faced several limitations in its implementation. Although the automated method can recognize and analyze text in various contexts, it has difficulty understanding complex language features such as sarcasm, irony, negations, jokes, and exaggerations. This can lead to incorrect sentiment classification, as the system is not able to grasp the intended meaning behind the words. For example, the word "sad" may be classified as negative, but in the context of "I was not sad," the sentence should be classified as positive. Similarly, an automated sentiment analysis tool may not be able to detect sarcasm, such as in the statement "I'm really loving the enormous pool at my hotel!" accompanied by a picture of a small pool. This highlights the challenges faced by sentiment analysis tools in accurately analyzing sentiment in complex language.
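The reported accuracy, precision, recall, and F1-score can be obtained from a classification report; a minimal sketch using scikit-learn is shown below. The labels are invented purely to demonstrate the call, not the study's data.

```python
# Minimal sketch of producing accuracy, precision, recall, and F1-score with scikit-learn.
# precision = TP / (TP + FP), recall = TP / (TP + FN), F1 = 2 * P * R / (P + R),
# each computed per class and then averaged in the report.
from sklearn.metrics import accuracy_score, classification_report

y_true = ["neutral", "positive", "negative", "neutral", "positive", "neutral"]
y_pred = ["neutral", "positive", "negative", "neutral", "neutral",  "neutral"]

print("accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, digits=2))
```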
Future research: this study has the potential for further development and improvement. One potential enhancement is to implement real-time updates of the sentiment analysis results for tracking Malaysians' emotions. This would allow users to track citizens' emotions without having to manually extract new tweets. Additionally, the study could incorporate a sentiment classifier correction tool to address typos and misspelled words in the dataset and in new tweets, which could lead to improved data quality and a higher overall accuracy of the classifier. The study could also incorporate user engagement elements such as photos, videos, additional buttons for navigation, and notifications to inform users when the data is ready for analysis and evaluation. Moving forward, the study aims to add new algorithms to improve the accuracy of sentiment analysis and to continue making advancements in this field of study.
4,127.2
2024-02-01T00:00:00.000
[ "Medicine", "Political Science", "Computer Science", "Sociology" ]
Electronic Effects in Oxidation Reactions Utilizing Dinuclear Copper Complexes with the Bis[3-(2-hydroxybenzylideneamino)phenyl] Sulfone Ligand Copper acetate and the ligands bis[3-(3-tert-butyl-2-hydroxy-5-methoxybenzylideneamino)phenyl] sulfone and bis[3-(3,5-di-tert-butyl-2-hydroxybenzylideneamino)phenyl] sulfone were reacted to form the complexes with 2:1 copper:ligand ratio, Cu2[B(t-Bu)(OMe)BAPS](μ-OCH3)2 (4), and with 2:2 copper:ligand ratio, Cu2[B(t-Bu)2BAPS]2 (5), respectively. Structures of 4 and 5 were determined based on IR, UV-Vis, and FAB-MS data in comparison with previously characterized related copper complexes. The two complexes 4 and 5 were utilized in the oxidation of the substrates 2,4- and 2,6-di-tert-butylphenol (dtbp) at -50 °C with H2O2 in CH2Cl2. The coupling products are preferred in both cases. For 2,4-dtbp, yields of 4,600% and 7,200% of 3,3',5,5'-tetra-tert-butyl-2,2'-biphenol (based on the amount of complex) were achieved with the use of 4 and 5, respectively. For 2,6-dtbp, yields of 1,900% and 400% of 3,3',5,5'-tetra-tert-butyl-4,4'-biphenol were realized utilizing 4 and 5, respectively. These results show that the methoxy groups activated the complex. Based on low-temperature UV-vis results, a μ-η2:η2-peroxo or a μ-hydroperoxo intermediate was possibly formed by the reaction of 4 with the H2O2. This effected the oxidation of the 2,4- and 2,6-dtbp substrates but also resulted in the attack of other complexes, which acted as substrates. A proposed oxidation mechanism using complex 4 and related complexes is presented. INTRODUCTION Different metal ions play central roles in various coordination complexes in the facilitation or inhibition of chemical reactions essential in life processes. Some of the notable critical processes requiring metal ions include respiration, metabolism, development, signal transduction, and photosynthesis. The biomimetic chemistry of metal-containing proteins and enzymes has produced a variety of synthetic systems that contributed to our understanding of how proteins function. More importantly, the role of various metalloenzymes and their synthetic analogs as essential catalysts in oxidation reactions for laboratory and industrial use is indispensable (Weiner et al. 2010, Tolman et al. 2006, Punniyamurthy et al. 2005, Gamez 2004, Sima'ndi 2003, Funabiki 1997). It is very challenging to study metalloenzymes because of their huge sizes, and as such, small-molecule models have been found to be useful (Fenton 1995). The reactivity of the metalloenzyme lies mainly in the active site, i.e. the metal centers and the immediately surrounding ligands. Several factors control the coordination of metalloproteins and metalloenzymes. Among these factors are steric and electrostatic repulsion (Panek 2009), electronic structures and metal orbital energies (Kleifeld 2003), and the constraints imposed by proteins (Estiu 2004). Bioinorganic complexes, furthermore, are susceptible to changes in acid-base reactivity and oxidation-reduction potentials (Panek 2009). Synthetic models of various metalloenzymes have been successful in the deduction of the structure and understanding of the mechanisms of chemical reactions mediated by these naturally occurring bioinorganic complexes. Copper-containing enzymes and proteins constitute a general class of important bioinorganic complexes.
Proteins containing copper ions at the active site are mainly involved as redox catalysts in a range of biological processes, such as electron transfer, dioxygen transport and oxidation of various bio-substrates (Gamez 2004). One of the most studied enzymes in biomimetic chemistry is galactose oxidase, GOase. Galactose oxidase is a fungal enzyme, produced by Dactylium dendroides, that has found its main use in the quantitative determination of galactose in blood and other biological fluids. First isolated in 1959, this type-2 copper site containing enzyme selectively catalyzes the two-electron oxidation of galactose and primary alcohols to the corresponding aldehydes with the simultaneous reduction of molecular oxygen to hydrogen peroxide (Tolman 2006, Baron et al. 1994). The first effective bio-inspired catalyst for galactose oxidase was described in 1998: a Cu(II) species, namely [Cu(II)BSP], where BSP symbolises a salen-type ligand with a binaphthyl backbone and thioether functions, able to catalyse the oxidation of benzylic and allylic alcohols under dioxygen at room temperature (Stack et al. 1998). Tyrosinase (E.C. 1.14.18.1; monophenol monooxygenase) is another copper-centered protein, widely distributed throughout the phylogenetic scale from bacteria to mammals (Ikehata 2000), that acts as an oxygenase and effects phenol conversion to catechol and eventually to quinone (Fenton 1995, Kitajima & Moro-oka 1994). These reactions are important because of their tremendous potential in bioremediation (Vinci et al. 2004). Since there is no crystal structure of the active site of tyrosinase, various reactivity and spectroscopic data have been used in the elucidation of its structure and activity (Solomon et al. 2001). Structurally, tyrosinase shares the same type 3 copper site with hemocyanin, and the spectroscopic characteristics of hemocyanin are similar to those of tyrosinase (Decker and Tuczek 2000). Several models of tyrosinase have been made using different ligand systems (Lewis & Tolman 2004). In the present study, we have utilized a series of bis[3-(2-hydroxybenzylideneamino)phenyl] sulfone ligands to model tyrosinase, and this included the complexes with the unsubstituted (1), tert-butyl-substituted (2) and chlorine-substituted (3) ligands, shown in Scheme 1. To look further into electronic and steric effects, we now report the preparation of a complex with a methoxy-substituted (4) ligand. This has resulted in an increase in reactivity of the copper complex. Another complex with additional tert-butyl substituents (5) was prepared, resulting in a 2:2 Cu:ligand complex. We also present a proposed mechanism of the oxidation process using our small-molecule models. MATERIALS AND METHODS The triethylamine, THF and diethyl ether used for the syntheses of the salicylaldehyde starting materials were freshly distilled prior to use. All other starting materials and solvents used were purchased commercially and used as received. Proton NMR spectra and infrared (IR) spectra were measured using a 500 MHz JNM-LA500 and a Shimadzu FTIR-8300, respectively. UV-vis spectra were recorded on a Shimadzu MultiSpec-1500 at room temperature with CH2Cl2 as solvent. For low-temperature UV-vis measurements, an Alpha Engineering Ltd. CS-96-2 cryostat and a Model TC 22-A thermo controller were utilized. High-resolution and electron-ionization mass spectrometric measurements were carried out on a JEOL JMS-SX102A.
For the complex, fast atom bombardment (FAB) with p-nitrobenzyl alcohol (NBA) as matrix was utilized. Elemental analyses were obtained with the use of a Perkin-Elmer 2400-II, and magnetic susceptibility measurements were done by way of the Faraday method using a CAHN 1000 Electrobalance and Hg[Co(SCN)4] as reference compound. Preparation of Cu2[B(t-Bu)2BAPS]2 (5). The ligand BH(t-Bu)2BAPS (0.14 g, 0.20 mmol) was dissolved in THF (20 mL) and methanol (20 mL). This solution was added to copper acetate monohydrate (0.16 g, 0.80 mmol) in MeOH (30 mL). After overnight stirring, the brown powder of 5 was collected. This was recrystallized by diffusion of ether into a dichloromethane solution of 5. The yield of the brown crystals was 0.30 g, 50%. Oxidation of 2,4- and 2,6-Di-tert-butylphenol (dtbp) Using the Complexes and H2O2. In DCM or THF (15 mL) were dissolved the substrates 2,4- or 2,6-dtbp (0.21 g, 1.0 mmol) and the complexes 4 and 5 (0.010 mmol), respectively. This was then cooled for 2 h at -50 °C in a constant-temperature bath. After this, 5 drops of 30% (aq.) H2O2 were added and the solution was stirred for 24 h. Normal work-up was done with 10% HCl and extraction with DCM (30 mL). This was repeated three more times (40 mL each). The combined DCM solution was dried with MgSO4, filtered, and the solvent removed by rotary evaporation. The reaction mixture for 2,4-dtbp was separated by column chromatography. For 2,6-dtbp, the product components were isolated using a JAI LC-908 preparative HPLC after a prior column chromatography to remove the reactant. Low-Temperature UV-Vis Studies. UV-Vis spectra were recorded on a Shimadzu MultiSpec-1500 with an Alpha Engineering Ltd. CS-96-2 cryostat and Model TC22-A thermo controller. A solution of the complex in THF was prepared. The solution was cooled at −50 °C for two hours, followed by the addition of 30% H2O2. The spectra were obtained at certain time intervals (see Supporting Information). Figure 2 shows the synthesis of 5. The ligand BH(t-Bu)2BAPS (1 equivalent) was dissolved in a 1:1 THF:methanol solvent system and added to excess copper acetate monohydrate (4 equivalents) in MeOH, in a similar manner as in the preparation of 1. The product, however, is a 2:2 copper:ligand complex. Characterization. Complexes 4 and 5 were analyzed by FT-IR spectroscopy, UV-Vis spectroscopy and FAB-MS. Selected peaks and their assignments are shown in the supporting information. Based on the analogous preparation and the similarities of the FT-IR, UV-Vis and FAB-MS spectra, we propose that the structure of complex 4 (Figure 1) is similar to that of complexes 1-3. Complex 1 has been crystallographically characterized. The structure of complex 5 is strongly supported by the molar absorptivity values from the UV-vis results and the 1484 m/z signal from the FAB-MS spectrum. The Oxidation Reactions. Complexes 4 and 5 were utilized in the oxidation of 2,4- and 2,6-di-tert-butylphenol (Figures 3 and 4). The results are compared with those of the related complexes 1-3, along with copper(II) acetate monohydrate, in Tables 1 and 2. Looking at the results from Tables 1 and 2, in general, it can be said that in CH2Cl2 there is a preference for the coupling products, both diphenoquinone and biphenol (Entries 1, 3 and 7 of both Tables). In THF, there is an increase in the production of quinone (Entries 1-2, 3-4 and 7-8 of both Tables). The very low yield for 3 in CH2Cl2 is caused by its insolubility at low temperature (Entries 5-6 for both Tables).
This is the same case with 5 in THF. Complex 3 dissolves in THF, but still, the yields are low. This may be the effect of the electron-withdrawing Cl atoms on the reactivity of these copper complexes. It was hoped that 4, with electron-donating methoxy groups (Entries 7-8 for both Tables), would result in higher yields, but the results show otherwise. Complex 5 showed the best yield of 7,200% (Entry 10, Table 1) for 2,4-dtbp oxidation, based on the amount of complex. With 2,6-dtbp as substrate, the yield using 5, compared with the yields using 1-2, is contrasting and may indicate a different reaction mechanism for 5, especially since the structure of 5 is different from that of 1. Figure 3. Oxidation of 2,4-di-tert-butylphenol using 1-5. Low-Temperature UV-vis Studies. The results of the low-temperature UV-vis measurements for the reaction of complex 4 with H2O2 are shown in Figures 5 and 6, divided into Stages 1 and 2, respectively. The low-temperature UV-Vis studies for complexes 1, 2, and 3 were compared as previously reported. The reaction was divided into two stages to monitor the possible formation of intermediates, as indicated by the changes in the absorptivity of the complex and in the corresponding spectra. Figure 5. UV-Vis spectra for the reaction of 4 and H2O2 in CH2Cl2 at -50 °C (Stage 1). An upward arrow indicates increasing absorbance and a downward arrow indicates decreasing absorbance at that particular wavelength. After the addition of H2O2, the peaks at 297 and 444 nm start to decay (Figure 5). This is accompanied by the growth of the peak at 398 nm (Stage 1). This 398 nm peak reaches a maximum after approximately 0.7 h and afterwards starts to decay (Figure 6, Stage 2). The final spectrum (after ~2 h since H2O2 was added) is very different from the starting spectrum, indicating the decomposition of 4. The growing peaks in Figure 5 are also close to those of the hydroperoxodicopper(II) complex that utilized the ligand XYL-H or α,α'-bis[N,N-bis[2-(2-pyridyl)ethyl]amino]-m-xylene, 395 nm (ε = 8000 M−1 cm−1) and 620 nm (ε = 450 M−1 cm−1); thus, this intermediate cannot be discounted (Karlin et al. 1987, Karlin et al. 1988). However, these peaks are also similar to those of another known intermediate, the bis μ-oxo species (Stack et al. 2004). The UV spectral range for this is λmax = 297-435 nm, ε = 9-23 mM−1 cm−1, and λmax = 393-494 nm, ε = 0.3-28 mM−1 cm−1.
This is probable because of interconversion between µ-η2:η2-peroxo and bis µ-oxo intermediates (Cahoy et al. 1999, Tolman et al. 2004, Stack et al. 2006). It was expected that 4, with electron-donating methoxy groups, would be more reactive than 1 and 2, since the use of 3 with electron-withdrawing Cl atoms resulted in very low yields in THF. The intermediate 7 may preferentially attack the salicyl moiety of 4 or 7 instead of the phenol, despite the great amount of the latter, and cause decomposition of the complex. The methoxy groups of the salicyl moiety activate each of the two aromatic rings and make them more susceptible to attack by 7 than the phenol substrate. After the reaction, 7 can go back to the catalytically inactive form, react with H2O2 to form the peroxo complex, and subsequently, once again, react with the substrate or with complex 4, which is now dwindling in amount. This causes the low yield using 4 relative to 1 and 2 (Kitajima et al. 1994). The previous argument can be related to the UV-vis spectra of the reaction of 4 and H2O2 at -50 °C. At the start, there is the formation of the peroxo intermediate (Figure 8). After approximately two hours, still at -50 °C, the spectrum changes into something totally different from the original complex. This is thought to be the decomposition of 4. The hydroxylation of a ligand's arene rings has been observed in synthetic systems using dinucleating ligands with meta-xylyl spacers, which, due to their bridging position, are predisposed toward this intramolecular reaction (Karlin et al. 1987). A substantial electronic effect on the rate of decomposition was observed, with electron-withdrawing groups slowing down the rate of decomposition and electron-donating methoxy groups making the decomposition rate so fast that no intermediate could be observed. This bolsters the idea of 7 attacking 4. Bis µ-oxo intermediates are also capable of hydroxylating arenes. The flexibility of the C-S-C angle of the ligands may allow it to contract to a value low enough to permit the formation of the bis µ-oxo adduct. The occurrence of the attack was tested by reducing the concentration of 4 in the oxidation of external substrates. When 0.5 mol% of 4 (in CH2Cl2) was used, the yield of the biphenol, based on the complex amount, went up to 4,600%. There were fewer chances of 7 attacking the other complexes 4 or 7, and increased chances of it reacting with the more numerous phenol substrates. Low-temperature and room-temperature UV-vis analysis of the oxidation of 2,4-dtbp was performed, and for several hours there was no change in the intensity of the peak at 444 nm, and no peak at 398 nm was observed. There was an increase in the intensities of the peaks at 230-300 nm, and this may indicate the formation of the biphenol. After some time, the 444 nm peak intensity slowly goes down. The period when there was no change in the 444 nm peak corresponds to the reaction of 7 with the substrate. When most of the substrate has reacted, 7 is able to react with other 4 or 7, resulting in the decrease in intensity of the 444 nm peak, indicative of complex decomposition. TLC analysis of the reaction mixture shows a big spot at an Rf where the coupling product is found. With 4 as catalyst, the yield when 2,6-dtbp is used is lower, since the reactivity of this substrate, compared to 2,4-dtbp, is lower. Intermediate 7 reacts more readily with other 4, resulting in the low yield.
Complexes 1 and 2 as substrates are not as reactive as 4 and are able to react more with 2,6-dtbp (after the formation of the µ-η2:η2-peroxo intermediate), thus the higher yield. A mechanism is proposed to summarize the oxidation reactions stated above. In the mechanism, two paths are available, and both are accessible inasmuch as both intermediates 8 and 10 are possibly formed, both the coupling product and quinone are produced, and different amounts of products are obtained in CH2Cl2 and THF. In path A, a µ-η2:η2-peroxo intermediate, 8, is formed, which reacts with two molecules of the phenol to produce the coupling products. If the biphenol is produced, this can be oxidized by 8 to the diphenoquinone. After 8 has abstracted a hydrogen atom, a µ-hydroxo intermediate, 9, is formed, which can once again react with H2O2 to form 8 for the oxidation path C. Intermediate 9 can also react with H2O2 to form the µ-hydroperoxo intermediate 10, by which the phenol radical is oxidized to quinone. Intermediate 10 can be accessed from the starting complexes through path B. The phenoxy radical can also be obtained by the reaction of phenol with the OH radical coming from the µ-hydroperoxo intermediate. After the reaction of 10 with the phenoxy radical, intermediate 11 is formed, which can react with phenol to produce the phenoxy radical. At the same time, 11 is converted to 9 to complete another cycle. CONCLUSION In summary, the ligand bis[3-(2-hydroxybenzylideneamino)phenyl] sulfone and its derivatives were prepared and used for the syntheses of complexes 4-5. These complexes have been found to be able to oxidize ortho-substituted phenols. Bulky alkyl groups in the 3-position of the salicyl moiety of the ligand increase the catalytic activity of the complexes and enhance the yields of the oxidation products. The two complexes 4 and 5 were utilized in the oxidation of the substrates 2,4- and 2,6-di-tert-butylphenol (dtbp) at -50 °C with H2O2 in CH2Cl2. For 2,4-dtbp, yields of 4,600% and 7,200% of 3,3',5,5'-tetra-tert-butyl-2,2'-biphenol were achieved with the use of 4 and 5, respectively. For 2,6-dtbp, yields of 1,900% and 400% of 3,3',5,5'-tetra-tert-butyl-4,4'-biphenol were realized utilizing 4 and 5, respectively. These results show that the methoxy groups activated the complex. Based on low-temperature UV-vis results, a µ-η2:η2-peroxo or a µ-hydroperoxo intermediate was possibly formed by the reaction of 4 with the H2O2. A mechanism was proposed to summarize these oxidation reactions mediated by complex 4. ACKNOWLEDGEMENT We would like to thank the Japan Ministry of Education, Science, Sports and Culture (Monbusho) for financial support to AMG, and we also thank Dr. Y. Inomata for her help with the measurements of magnetic moments.
4,489.8
2014-10-20T00:00:00.000
[ "Chemistry" ]
Organopolymer with dual chromophores and fast charge-transfer properties for sustainable photocatalysis Photocatalytic polymers offer an alternative to prevailing organometallics and nanomaterials, and they may benefit from polymer-mediated catalytic and material enhancements. MPC-1, a polymer photoredox catalyst reported herein, exhibits enhanced catalytic activity arising from charge transfer states (CTSs) between its two chromophores. Oligomeric and polymeric MPC-1 preparations both promote efficient hydrodehalogenation of α-halocarbonyl compounds while exhibiting different solubility properties. The polymer is readily recovered by filtration. MPC-1-coated vessels enable batch and flow photocatalysis, even with opaque reaction mixtures, via "backside irradiation." Ultrafast transient absorption spectroscopy indicates a fast charge-transfer process within 20 ps of photoexcitation. Time-resolved photoluminescence measurements reveal an approximately 10 ns lifetime for bright valence states. Ultrafast measurements suggest a long CTS lifetime. Empirical catalytic activities of small-molecule models of MPC-1 subunits support the CTS hypothesis. Density functional theory (DFT) and time-dependent DFT calculations are in good agreement with experimental spectra, spectral peak assignment, and proposed underlying energetics. Supplementary Setup for hydrodehalogenation reactions. a Photoreactor setup for parallel batch reactions with reaction vessels suspended above the stir plates by an elastic cord. The reactor was equipped with a thermometer, which confirmed that the temperature with the heat from the lights and stir plates was consistently 37 °C. b Overlapping MPC-1-0 absorption and blue LED emission spectra indicating the suitability of the selected irradiation source. Steady-state absorption was measured on a Cary Bio 50 UV-Vis spectrometer (Agilent Technologies) with a quartz cuvette in chloroform using the chloroform-soluble component of MPC-1-0 which passed through a cotton plug. The ideality of the different MPC-1 preparations was first assessed using NMR and GPC. A comparison of 1H NMR data is provided in Supplementary Figure 12 and an analysis of the MPC-1-2 spectrum is presented in Fig. 2a. GPC results are summarized in the Supplementary Information. To elucidate the possibility of structural defects in MPC-1 chains, a series of DFT frequency calculations was conducted in conjunction with FTIR measurements, with the focus being nitrile stretching modes present in various environments due to possible polymer chain defects. Geometry optimizations and frequency calculations were carried out at the B3LYP/6-31G(d) level of theory. With the measured peaks from CHR1 and CHR3, Gaussian fits were carried out to locate the maximum of each peak. Using these maxima and the predicted vibrational frequencies of the isolated chromophore models, a scaling factor was deduced to adjust the transition frequencies predicted by DFT for all investigated defect models. The overall scaling factor for all transitions was chosen to be 0.9503, while the precompiled correction factor for B3LYP/6-31G(d) is reported as 0.960 [1]. A depiction of the predicted modes for all investigated structures is shown in Supplementary Figure 17. The transitions predicted for type-2 defect models (Supplementary Figure 17n, gray sticks) occur at a slightly lower frequency (~2231-2237 cm−1) than the observed peak center. These transitions are not far enough away from the peak center to conclusively assess the presence or absence of this type of defect.
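The scaling-factor procedure described above can be illustrated with a minimal sketch: fit a Gaussian to a measured nitrile-stretch peak, take its center, and divide by the corresponding unscaled calculated frequency. The numbers and function names below are illustrative, not the study's data or code.

```python
# Minimal sketch: deduce a DFT frequency scaling factor by fitting a Gaussian to a measured
# nitrile-stretch peak and comparing the fitted maximum with a calculated harmonic frequency.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, width):
    return amplitude * np.exp(-((x - center) ** 2) / (2 * width ** 2))

# Synthetic "measured" spectrum around a nitrile stretch near 2230 cm^-1 (illustrative).
x = np.linspace(2180, 2280, 400)
y = gaussian(x, 1.0, 2229.5, 6.0) + 0.01 * np.random.default_rng(0).normal(size=x.size)

popt, _ = curve_fit(gaussian, x, y, p0=[1.0, 2230.0, 5.0])
measured_center = popt[1]

calculated_harmonic = 2346.0  # illustrative unscaled B3LYP/6-31G(d) frequency
scaling_factor = measured_center / calculated_harmonic
print(f"fitted peak center: {measured_center:.1f} cm^-1, scaling factor: {scaling_factor:.4f}")
# The deduced factor is then applied to all predicted transitions of the defect models.
```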
However, an argument can be made that the presence of type-2 defects in abundance would shift the overall nitrile peak center to a lower frequency and skew the peak to a less symmetrical shape. Supplementary Note 2 Use of the Frenkel-Davydov exciton model In general, the electronic states of a homodimer (X-X) or a heterodimer (X-Y) with similar chromophores can be well described by the Frenkel-Davydov exciton model [2][3][4][5][6]. In this model, the ground-state electronic wavefunction, Ψ_gg, is approximated by the product of the two chromophores' ground-state wavefunctions |X⟩ and |Y⟩: Ψ_gg = |X⟩·|Y⟩. The lowest two valence states (VSs) are associated with the promotion of an electron from HOMO to LUMO of each chromophore. The HOMO and LUMO of the two chromophores within the investigated heterodimer DXY are depicted in Figure 5b. Wavefunctions of the first two excited-state diabats of the dimer can therefore be approximated as Ψ_eg = |X*⟩·|Y⟩ and Ψ_ge = |X⟩·|Y*⟩, where |X*⟩ and |Y*⟩ are the electronic wavefunctions of the excited chromophores and "g" and "e" denote "ground" and "excited", respectively. The energy separation between the two electronically excited eigenstates of the dimer is determined by the unperturbed energies of the diabats and their coupling strength; it can be approximated using second-order perturbation theory. In the limit of degenerate states, the wavefunctions of the eigenstates are linear combinations of the diabatic wavefunctions with equal weight, Ψ_VS± = (Ψ_eg ± Ψ_ge)/√2, where the "±" sign indicates the symmetry of the overall wavefunction. In addition to neutral excitons (vide supra), charge-transfer (CT) excitons are also incorporated into the heterodimer model. Wavefunctions of the CT diabats may be phenomenologically expressed as Ψ_ac = |X^a⟩·|Y^c⟩ and Ψ_ca = |X^c⟩·|Y^a⟩, where "a" and "c" denote "anion" and "cation", respectively. Similar to the valence states, vibronic and other coupling mechanisms including spin-orbit interactions mix the two CT diabats to form adiabatic CT states (CTSs), the wavefunctions of which at the degenerate limit may be written as Ψ_CT± = (Ψ_ac ± Ψ_ca)/√2. Transitions from the ground electronic state (gg) to the VSs, "ge" and "eg", are expected to dominate the steady-state optical absorption spectrum, while transition dipole moments between "gg" and the CTSs are expected to be small because they involve promotion of an electron from the HOMO of X to the LUMO of Y or vice versa. Indeed, calculated oscillator strengths for the electronic transitions confirm these predictions (Supplementary Table 10). Supplementary Note 3 Ground-state geometries Cartesian coordinate information of the ground-state optimized structures of X, Y, and DXY is listed below. An electron transfers from HOMO to HOMO-1 (Supplementary Figure 5a), or from LUMO+1 to LUMO (Supplementary Figure 5b), to create the CTS1 equivalent, at which a partial positive charge is localized on the Y part of the DXY dimer and a partial negative charge is localized on the X part of the DXY dimer. Finally, Supplementary Figure 6 shows a comparison between the maps of electrostatic potentials (MESP) of the GS, VS1, and CTS1 geometries; the charge distribution is consistent with our current model of eventual charge transfer from the Y-chromophore unit to the X-chromophore unit within the DXY dimer, either through the VS1 state or the VS2 state. Supplementary Note 5 Steady-state absorption/photoluminescence acquisition and decomposition analysis Spectral shape and position were highly similar for all excitation energies explored for both MPC-1-1 and MPC-1-2.
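The two-state picture above can be made concrete numerically: diagonalizing the 2x2 Hamiltonian of the valence diabats yields, at the degenerate limit, equal-weight ± combinations split by twice the coupling. The sketch below uses illustrative energies and coupling, not the MPC-1 values.

```python
# Minimal sketch of the two-state Frenkel-Davydov picture: diagonalize the 2x2 Hamiltonian
# of the valence diabats (eg, ge). Energies and coupling are illustrative numbers only.
import numpy as np

E_eg, E_ge = 3.10, 3.10   # diabatic excitation energies (eV), degenerate limit
J = 0.05                   # excitonic coupling (eV)

H = np.array([[E_eg, J],
              [J,    E_ge]])
energies, states = np.linalg.eigh(H)

print("adiabatic energies (eV):", energies)   # E0 -/+ J, i.e. a splitting of 2J
print("eigenvectors (columns):")
print(states)  # at degeneracy: equal-weight (1/sqrt(2)) combinations of eg and ge
```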
To better understand the influence of chromophores X and Y on the energetics of the absorption (Abs) and photoluminescence (PL) spectra (Fig. 5), a decomposition routine was applied to both the MPC-1-1 and MPC-1-2 spectra. However, the intensity of the PL signal is determined not only by the transition dipole moment (µ), which the TD-DFT calculations predict, but also by the transition energy (ν). Explicitly, the PL signal is proportional to ν^3. Experimentally obtained PL spectra were therefore scaled by 1/ν^3 (eV^-3) and then normalized to the maximum, while Abs spectra were normalized to the maximum, prior to the decomposition analysis. Supplementary Note 6 Transient absorption kinetics analysis Assuming all processes follow first-order kinetics, and since charge-transfer rates are much faster than photoluminescence rates, the reaction mechanism was approximated to be of a consecutive nature. The four kinetic traces extracted from the TA spectra were fitted simultaneously with an IRF-convoluted sum of exponential growths and decays. Initial guesses of all parameters were approximated through a trial-and-error process while simulating and visually assessing the fit lines. Both samples were fitted with the same initial guesses to avoid any bias, and the fit was repeated multiple times to ensure consistent results. Supplementary Note 10 Triplet state investigation Triplet-state involvement in MPC-1 photophysics was considered. Initially, a series of DFT calculations at the B3LYP/6-31G(d) level of theory in chloroform was employed. Following geometry optimization of the heterodimer DXY to its singlet ground-state equilibrium geometry, triplet-state vertical excitation calculations located the lowest triplet state at 2.08 eV (595 nm). However, attempting to optimize the DXY geometry to target the lowest triplet state failed to converge. Experimentally, an attempt was made to detect any phosphorescence emission from MPC-1-2 dissolved in chloroform under ambient conditions, as well as after degassing by purging the solution with N2 gas for >20 minutes. These measurements utilized a phosphorescence detection scheme available in the spectrofluorometer (LS 55, PerkinElmer). During an acquisition cycle in this scheme, a single monochromatic excitation shot with a full width at half maximum of ~20 µs is sent to the sample, followed by monochromatic detection with a photomultiplier tube detector. The detection scheme is controlled by two variables: delay (the waiting time between the release of the excitation shot and the start of the detector signal accumulation) and gate (the time window during which the detector accumulates the signal). Both variables can be adjusted in 10 µs steps. The resultant spectra from these experiments are summarized in Supplementary Figure 74. Comparison between ambient and degassed conditions should reveal phosphorescence emission through an increase in intensity in the wavelength range corresponding to the triplet-state energy. However, no change was observed (Supplementary Figure 74a-b). Also, delay times larger than 0 µs should better isolate the singlet-related emission from the triplet-related emission, due to the significant disparity between triplet-state lifetimes (typically > 1 µs) and singlet-state lifetimes (typically < 100 ns). However, no change in intensity or spectral shape was observed when changing the delay time (Supplementary Figure 74c-d). It is worth noting here that any acquisition with a delay > 30 µs did not result in any meaningful intensities.
Similarly, a longer acquisition window, i.e., gate width, should accumulate higher intensity corresponding to delayed triplet-related emission, yet no change in intensity or spectral shape was observed (Supplementary Figure 74e-f). The spectra presented in Supplementary Figure 74 show no indication of triplet-state population; however, our instrument's temporal resolution is relatively low (namely, 20-30 µs). By comparison, ladder-type polymers with a higher conjugation degree than MPC-1 were reported to have phosphorescence lifetimes on the order of 10s to 100s of microseconds, and their oligomer subunits lifetimes on the order of 100s of milliseconds [26][27][28]. Another argument can be made from the emission decay we measured with TRPL from the VS1 state, which showed a single exponential decay of ~10 ns without any longer components detected; this is consistent with photoluminescence lifetimes deduced from time-correlated single photon counting measurements for a single-chromophore-type polymer similar to MPC-1-2 [29]. Also, the < 20 ps charge-transfer rate between the vibronically coupled states VS1 and CTS1 leaves little chance for the slow, spin-forbidden intersystem crossing process (typically > 10s of ns) to occur from VS1 to any available triplet state. Therefore, we concluded that triplet-state involvement in the photophysical and catalytic processes of MPC-1 was minimal. An alternative synthetic procedure using the environmentally benign aqueous PS-750-M surfactant was also employed [30]. The vessel was shaken briefly to more evenly incorporate material on the walls of the vessel, and then the vessel was placed in a sand bath heated to 90 °C to stir at 800 rpm. The mixture was monitored every 5 minutes by briefly removing the vessel and shaking it so as to evenly incorporate material on the walls of the vessel; changes in the coating left on the vessel walls and in the solution appearance and viscosity were observed over time. At 50 minutes the liquid had become too viscous to effectively coat the vessel walls when shaking. At 60 minutes the stir bar was observed to intermittently slow, struggling against the increased viscosity. At 70 minutes the viscosity had become sufficiently high that the stir bar was whipping tiny bubbles into the reaction mixture, and at this point the vessel was removed from heating and allowed to cool to room temperature. After cooling, the vessel contents solidified. The vessel was then unsealed and 4 mL deionized water was added, which caused the solidified polymer to break free from the walls. The vessel was resealed and the contents were subjected to stirring at 1700 rpm while heating at 110 °C in a sand bath for 10 minutes, with the stirring being intermittently turned on and off to help break the stir bar free from the polymer. The suspension was then stirred vigorously at room temperature for 5 minutes. The supernatant liquid and some of the suspension were removed by syringe and transferred to a test tube. The reaction vessel was then thrice rinsed by stirring with 2 mL portions of deionized water, which were also transferred into the test tube. The test tube was centrifuged, and its supernatant liquid was removed. The test tube contents were then twice admixed with 2 mL deionized water, centrifuged, and likewise separated from the supernatant liquid. The test tube contents were dissolved in chloroform and transferred back into the reaction vessel.
Additional chloroform was added to bring the total volume added to 10 mL, and then 10 mL deionized water was also added. The mixture was stirred, allowed to settle, and the chloroform layer was syringe-transferred to a test tube. The aqueous layer was then extracted with 2 x 1 mL chloroform to complete the transfer. The combined chloroform layers were then mixed with 1 mL deionized water, and the aqueous layer was transferred to the reaction vessel. Subsequently, the combined chloroform layers were combined with 10 mL methanol, which caused the majority of the polymer to precipitate. The test tube was subjected to centrifugation and the supernatant liquid was decanted away. The centrifuge pellet was dissolved in chloroform, transferred to a glass storage vial, subjected to rotary evaporation under reduced pressure to remove the chloroform, and then placed in a vacuum oven. The oven was evacuated to 28 in Hg and then heated to 130 °C and allowed to sit for 14 h. The vacuum was then re-established to 28 in Hg (from 15 in Hg) and the oven was allowed to sit for 2 h, at which point the vacuum was briefly re-established once more, the oven was vented, and the sample was allowed to cool. Obtained 369 mg of translucent material. A stir-bar-equipped 20-mL vial was charged with 3,3,3',3'-tetramethyl-1,1'-spirobiindane-5,5',6,6'-tetraol (2 equiv., 12.0 mmol) and potassium carbonate (4 equiv., 24.0 mmol). The vessel was fitted with a rubber septum and thrice evacuated/argon-backfilled before adding 18 mL dry DMF by syringe. 1,2-Dibromoethane (1 equiv., 6.0 mmol) was added by syringe, and then the septum punctures were covered with electrical tape and the septum was wrapped with PTFE tape. The mixture was allowed to stir at room temperature for 5 min before it was placed to stir in a sand bath pre-heated to 100 °C. After 12 h, the vessel was allowed to cool to room temperature. The reaction mixture was combined with 20 mL ice-cold deionized water and extracted with ethyl acetate (3x25 mL). A stir-bar-equipped 4-mL vial was charged with 2,3,5,6-tetrafluoroterephthalonitrile (1 equiv., 0.019 mmol), S3 (2.2 equiv., 0.042 mmol), and flame-dried potassium carbonate (4.5 equiv., 0.086 mmol). The vessel was fitted with a rubber septum and thrice evacuated/argon-backfilled before adding 200 µL dry DMF by syringe. The septum puncture was sealed with electrical tape and the septum was wrapped with PTFE tape. The vessel was placed to stir in a reaction block pre-heated to 90 °C for 2.5 h. The vessel was allowed to cool to room temperature. The reaction mixture was diluted with 0.5 mL ice-cold water and extracted (3x0.5 mL ethyl acetate). Combined organic layers were dried over sodium sulfate. Solvent was removed under reduced pressure. Crude material was purified by flash chromatography (hexanes/ethyl acetate). Obtained product as 33 mg (98%) yellow solid, Rf 0.55 to 0.36 (7:3, hexanes/ethyl acetate). 1H NMR (400 MHz, CDCl3) δ 6.88-6.71 (m, 6H), 6.71-6.56 (m, 2H), 6.55-6.35 (m, 6H), 6.34-. A spin-vane-equipped 3-mL test tube was charged with 2,3,5,6-tetrafluoro-4-((1,3,5-trimethyl-1H-pyrazol-4-yl)sulfonyl)benzonitrile (1 equiv., 0.030 mmol), S3 (2.2 equiv., 0.066 mmol), and potassium carbonate (4.5 equiv., 0.135 mmol). The vessel was fitted with a rubber septum and thrice evacuated/argon-backfilled before adding 300 µL dry THF by syringe. The septum puncture was sealed with electrical tape and the septum was wrapped with PTFE tape. The vessel was placed to stir in a sand bath pre-heated to 65 °C for 10 h.
After TLC indicated that the reaction was still not complete, the solvent was removed under reduced pressure, the vessel was thrice evacuated/argon-backfilled, and 300 µL dry DMF was added by syringe. The vessel was placed to stir in a sand bath pre-heated to 90 °C for 2 h. The vessel was allowed to cool to room temperature. The reaction mixture was diluted with 300 µL chloroform and admixed with ca. 0.5 g crushed ice. The organic layer was set aside and the aqueous layer extracted (3x300 µL chloroform). Combined organic layers were washed (1x1 mL ice-cold water, 1x1 mL brine) and then dried over sodium sulfate. Solvent was removed under reduced pressure. Crude material was purified by flash chromatography (hexanes/DCM). Obtained product as 30 mg (52%) yellow solid, Rf 0.21 to 0.03 (7:3, hexanes/ethyl acetate).
3,957.8
2019-04-23T00:00:00.000
[ "Chemistry", "Materials Science", "Environmental Science" ]
An exploration of EEG features during recovery following stroke – implications for BCI-mediated neurorehabilitation therapy Background Brain-Computer Interfaces (BCI) can potentially be used to aid in the recovery of lost motor control in a limb following stroke. BCIs are typically used by subjects with no damage to the brain; therefore, relatively little is known about the technical requirements for the design of a rehabilitative BCI for stroke. Methods 32-channel electroencephalogram (EEG) was recorded during a finger-tapping task from 10 healthy subjects for one session and 5 stroke patients for two sessions approximately 6 months apart. An off-line BCI design based on Filter Bank Common Spatial Patterns (FBCSP) was implemented to test and compare the efficacy and accuracy of training a rehabilitative BCI with both stroke-affected and healthy data. Results Stroke-affected EEG datasets have lower 10-fold cross-validation results than healthy EEG datasets. When training a BCI with healthy EEG, average classification accuracy of stroke-affected EEG is lower than the average for healthy EEG. Classification accuracy of the late-session stroke EEG is improved by training the BCI on the corresponding early stroke EEG dataset. Conclusions This exploratory study illustrates that stroke and the accompanying neuroplastic changes associated with the recovery process can cause significant inter-subject changes in the EEG features suitable for mapping as part of a neurofeedback therapy, even when individuals have scored largely similarly on conventional behavioural measures. It appears such measures can mask this individual variability in cortical reorganization. Consequently, we believe motor-retraining BCIs should initially be tailored to individual patients. Background Brain-computer interfaces (BCI) have been suggested as a means by which neuro-rehabilitation following stroke may be enhanced [1][2][3][4][5][6]. EEG-based BCI in particular are the focus of current endeavours. As many stroke patients suffer complete paralysis of a limb, this non-invasive physiological measurement modality provides a means through which brain activity associated with motor control can be monitored, even in the absence of the normal behavioural information provided by the movement itself. It is conjectured that rehabilitation therapy may be effectively administered to patients incapable of movement through the provision of feedback on their attempt to move as determined by the BCI. Communication and control BCIs based on motor paradigms typically aim to decode EEG patterns to allow a user to learn to control an external device, such as a computer or motorised wheelchair, in the absence of motor control [7][8][9][10]. In this study, however, we are interested in overt and attempted movement of a subject, as the ultimate goal of the rehabilitative BCI modality considered here is to train a patient to regain control of the affected appendage. A rehabilitative BCI in this context should encourage and reward the subject for attempted movement, to encourage positive neuroplastic changes in the brain and facilitate recovery of motor control [11]. Such an approach is subtly different from the motor imagery BCI paradigm also applied in stroke rehabilitation.
Under the motor imagery paradigm the BCI is used to provide feedback to the patient on their engagement with motor imagery tasks. Motor imagery requires that the patient engage in a mental rehearsal of the targeted movement without attempting to actually execute the movement. It has been suggested that such an approach can supplement conventional therapy for certain patient groups [12]. The work here is predicated on the attempted movement paradigm, which seeks to provide positive reinforcement feedback in response to successful engagement of the patient's motor networks associated with the targeted motor task. It is speculated that such an approach can help reduce the possibility of the learned non-use phenomenon through the delivery of contingent rewards, a form of neurofeedback therapy [13]. There are significant engineering obstacles to the achievement of such a goal, however. These challenges are, in many respects, similar to those encountered by researchers attempting to make BCI more usable for healthy subjects for the purposes of communication and control. Conventional BCI design requires attention to usability issues such as reducing setup complexity by minimising the number of electrodes required, reducing training time and lightening the cognitive workload associated with operation. These aspects must be satisfied while maintaining useful function, not an easy task as it requires maintenance of robust performance in the face of poor instrumentation setup, artefact-inducing subject movement and other detrimental factors. These problems present an even greater barrier to adoption of this technology when considered in the context of stroke rehabilitation due to the impact of the condition on the abilities of the user. A typical stroke sufferer for whom this technology is potentially useful will obviously have very limited ability to manipulate a device precisely and accurately onto their head unaided, and therefore any solution must be tolerant to such setup errors. In addition, it is well established that stroke sufferers fatigue very easily [14][15][16][17], and therefore in order to maximize therapy during a session minimal (and ideally zero) time should be lost to training the classifier. Finally, as stroke is an injury to the brain, the stereotypical patterns of brain activity upon which conventional BCI paradigms rely are not guaranteed to manifest themselves conventionally in response to movement intentions, and therefore it is not clear how best the BCI should use the signals presented by the user. To compound this latter aspect further, it is not clear how the EEG of the recovering brain will resolve over time, which has ramifications in terms of pattern recognition and subsequent interpretation. The study reported in this paper is focussed on this latter aspect. We perform a comparative exploratory analysis of the reliability and stability of motor-related EEG features in stroke subjects from a machine learning perspective. We wish to explore whether such features are sufficiently universal that machine learning parameters trained using healthy subjects can be used for stroke-affected patients, and further whether these remain useful and valid during the critical period of recovery bridging the sub-acute to chronic phases. If a BCI trained with healthy stereotypical data provides sufficiently good performance with stroke sufferers, then such a deployment paradigm would make BCI for stroke rehabilitation far more practical in a clinical setting.
If, on the other hand, the stroke-affected EEG presents sufficiently differently from healthy EEG, or if it changes over time, the practical application of BCI in such a context will require more sophisticated design from a machine learning perspective. The study here attempts to shed some light on this pragmatic issue. Subjects Fifteen subjects in total participated in the study. Ten subjects were healthy while five were stroke patients. The healthy subjects (8 men and 2 women, mean age 57.2 ± 17.6 years) each participated in one recording session. The stroke subjects (3 men and 2 women, mean age 59.0 ± 9.4 years) participated in two recording sessions. The average time from stroke to the first recording session was 22.2 ± 12.9 days. The average time between first and second session was 190.6 ± 26.1 days. Stroke patients were recruited from the Adelaide & Meath Hospital, Dublin while control subjects were recruited from the National University of Ireland Maynooth. Inclusion criteria for the stroke patients are detailed elsewhere [18] and summarised here as: Patients must (1) be cognitively high functioning, (2) be able to give informed consent and follow experimental instructions, (3) not suffer from a visual field defect or visual neglect, and (4) have upper limb motor paresis in either their dominant or non-dominant hand. When possible, the Mini Mental State Exam (MMSE) was used to ensure absence of serious cognitive impairment in the stroke patients. One subject was unable to conduct this test at the time of the first trial due to stroke-induced expressive dysphasia, severely affecting their ability to produce speech. This subject was included in the study following demonstration of cognitive requirements and consultation with the patient's stroke physician. The Kapandji finger opposition test [19] was used to determine motor ability in the stroke-affected hand. This test involves the subject attempting to touch the thumb on their stroke-affected hand to 10 points on the same hand in order from points 0 to 10, as shown in Figure 1. Four of the stroke subjects scored at least 6/10, meaning they were able to perform finger tapping with all of their digits. One subject had minimal motor ability in their stroke-affected hand and scored 0/10. Demographic information of the stroke patients, including the MMSE and Kapandji scores at the times of both trials, is shown in Table 1. Demographic information of the control subjects is shown in Table 2. The locations of brain tissue damage due to stroke were varied, including both cortical and subcortical bilateral tissues: the left and right posterior parietal cortex, left frontoparietal cortex, right temporoparietal areas, right medial temporal lobe, left thalami and internal capsules, periventricular white matter lesions and centrum semiovale lesions. In all cases, the stroke was ischemic in nature. Subject-specific lesion information can be found in Table 3. In accordance with ethical requirements, participants were provided with a verbal as well as a written description of this research. Subjects provided written consent to the conduction of the experiment and the publication of their details. In the cases of two stroke patients who were unable to give written consent due to their stroke, verbal consent was accepted.
Ethical approval for the experiments was granted by the SJH/AMNCH Research Ethics Committee of the Adelaide & Meath Hospital, Dublin and by the Ethics Committee of the National University of Ireland Maynooth. The experiments were conducted at the Adelaide & Meath Hospital, Dublin. Experimental setup and motor paradigm During a recording session, the subject was seated in a comfortable chair in front of a laptop computer for instruction presentation. The subject was asked to follow on-screen instructions to perform finger-tapping while the words "Move your fingers" were displayed and to rest their hand while the word "Relax" was displayed. Before the first instruction, the screen read "The experiment will begin shortly" while after the final instruction, the screen read "Experiment now over. Please stay still". The healthy subjects were instructed to perform the task with their dominant hand, while the stroke subjects were instructed to use their stroke-affected hand. Stroke subjects took part in two recording sessions: an "early" session, which took place up to 6 weeks following stroke onset, and a "late" session, which took place roughly 6 months after the early session. This period of time between early and late sessions was chosen such that spontaneous recovery processes would have had time to run their course. Healthy subjects only participated in one recording session. The finger-tapping task involved repeatedly touching the thumb to the tips of their 2nd to 5th digits on the same hand at a self-paced speed. During his early recording session, subject S3 was unable to touch his thumb to any other digit yet still had some movement. In this case, the subject still attempted to overtly perform the task. A session consisted of 20 activation trials and 20 rest trials, beginning with activation and alternating until all 40 trials had been completed. Each trial lasted 10 seconds with no rest time between trials (see Figure 2). Subject S3 reported being fatigued during his first recording session and completed only 32 trials. EEG data acquisition EEG data was acquired using a BioSemi ActiveTwo system (BioSemi B.V., Amsterdam, Netherlands) providing 32 Ag/AgCl electrodes positioned according to the 10/20 system. The system also recorded analogue event signals received from the presentation laptop. All data was acquired at a sample rate of 1024 samples per second. EEG data analysis Recorded EEG data was processed off-line in Matlab 7 (MathWorks, Natick, Massachusetts, USA) using a combination of scripts from EEGLAB [20], Rasmussen and Williams' GPML code [21] and custom code. We implemented an off-line BCI based on Filter Bank Common Spatial Patterns (FBCSP) [22] as illustrated in Figure 3. FBCSP is an adaptation of the Common Spatial Patterns (CSP) algorithm [23]. The general steps of FBCSP are: 1. Filter the EEG into a number of frequency ranges. 2. Apply the CSP algorithm separately to each frequency range, decompose the EEG and perform feature extraction. In our case we used Marginal Relevance (MRelv) to rank our features and Gaussian Process Classification (GPC) to classify the selected features. More detail on CSP, MRelv and GPC is provided in the following sections. Two types of BCI training were explored in our investigation: individual and grouped. For individual BCI training, only one EEG dataset was used to train the BCI ("Train EEG").
Training the BCI in this way involved obtaining the CSP filters, ranking and selecting the CSP features and using them to train the classifier, all from a single EEG dataset. Another EEG dataset was used to test the BCI ("Test EEG"). This involved filtering the EEG, applying the trained CSP model, selecting the same features as before and classifying the features with the classifier trained beforehand. For grouped BCI training, all event data from a subset of datasets was used to obtain a general CSP model. As before, this model is applied to all of the training data, the resulting CSP filters are ranked, selected and finally used to train the classifier. This general CSP and feature selection model is then applied to other individual EEG datasets and the resulting CSP features are classified. Datasets are identified primarily by Subject ID (Tables 1 and 2). Stroke subjects took part in an "early" (E) and a "late" (L) session and the datasets from these sessions are labelled accordingly. Therefore, healthy subject datasets are labelled H1-H10, early stroke datasets are labelled S1E-S5E and late stroke datasets are labelled S1L-S5L. Raw data was both visually inspected and analysed for abnormally high signal power to check for any movement artefact that may have affected the trial data. None was found and so no trials were rejected. Pre-processing In the case of subjects who performed the finger-tapping task with their left hand, their EEG data was initially mirrored in the sagittal plane in order to more accurately compare their dominant-hand EEG patterns with subjects who performed the task with their right hand.
Table 3 (lesion locations): S2: Left parietal infarction, left thalami and internal capsule infarcts. Periventricular deep white matter change. Bilateral lacunar infarcts in the centrum semiovale and basal ganglia. 1.5 cm acute infarct in left centrum semiovale. S3: Area of acute infarction adjacent to the body of the right lateral ventricle involving the right centrum semiovale. S4: Right posterior parietal and temporoparietal regions. Background periventricular ischemic changes involving left frontal parietal region. S5: Medial right temporal lobe focal infarct. Periventricular deep white matter ischemic disease.
Raw EEG data was temporally filtered with a filter bank made up of 9 frequency ranges. A zero-phase 4th-order Butterworth filter was used to filter the EEG signals into the frequency ranges 4-8, 8-12, 12-16, 16-20, 20-24, 24-28, 28-32, 32-36 and 36-40 Hz. This filtered EEG was then separated into windowed time segments for each trial of rest and activity. Segments began 2 seconds following the trial onset and lasted 6 seconds as shown in Figure 2. Common spatial patterns The CSP algorithm [23,24] was then applied to the segments of EEG data from each frequency range. The CSP algorithm produces a set of spatial filters which, when used to decompose the EEG signals, generates signals whose variances can be used to discriminate optimally between two classes of activity:
Z_{b,i} = W_b^T E_{b,i},
where E_{b,i} ∈ R^{c_t × t} is the ith trial EEG measurement from the bth frequency range, W_b ∈ R^{c_t × c_t} is the CSP projection matrix for the bth frequency range, Z_{b,i} ∈ R^{c_t × t} is the ith trial of spatially filtered EEG signals for the bth frequency range, c_t is the total number of channels, t is the number of time samples per trial and T denotes the transpose operator.
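As an illustration of the pre-processing and CSP steps described above, the following is a minimal Python/NumPy sketch, assuming the EEG is held as a (channels, samples) array and trial onsets are known in seconds; the band edges, filter order and window timing follow the values in the text, while the function names and the eigendecomposition route are illustrative only (the original analysis was performed in Matlab with EEGLAB and custom code).

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh

FS = 1024                                            # sample rate used in the study
BANDS = [(lo, lo + 4) for lo in range(4, 40, 4)]     # 4-8, 8-12, ..., 36-40 Hz

def filter_bank(eeg):
    """Zero-phase 4th-order Butterworth filtering of a (channels, samples) array
    into the nine frequency ranges listed above."""
    filtered = []
    for lo, hi in BANDS:
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="bandpass")
        filtered.append(filtfilt(b, a, eeg, axis=1))  # filtfilt gives zero phase
    return filtered

def segment(eeg, onsets_s, t_offset=2.0, t_len=6.0):
    """Cut one 6-s window per trial, starting 2 s after each trial onset."""
    n = int(t_len * FS)
    starts = [int((t + t_offset) * FS) for t in onsets_s]
    return np.stack([eeg[:, s:s + n] for s in starts])

def csp_projection(trials_a, trials_b):
    """CSP projection matrix from two lists of (channels, samples) trial segments.
    Rows of the returned matrix are spatial filters, ordered by discriminability."""
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)           # generalized eigenproblem
    order = np.argsort(vals)[::-1]           # most discriminative filters first
    return vecs[:, order].T
```

The generalized eigendecomposition is one common way of obtaining the CSP filters; whitening-based formulations give equivalent results up to scaling.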
The feature of an individual trial of data is the logarithm of the proportional variance of one trial compared to all other trials, within each frequency range. Feature extraction and forming of the feature matrix proceed as:
v_{b,i} = log( diag(Z_{b,i} Z_{b,i}^T) / tr(Z_{b,i} Z_{b,i}^T) ),   v_i = [v_{1,i}, ..., v_{b_t,i}],
where v_i ∈ R^{1 × (b_t · c_t)} is the features for each frequency range ordered into a single feature vector for each trial, V ∈ R^{i_t × (b_t · c_t)} is the full feature set for all trials, y ∈ R^{i_t × 1} is the true class label vector, i_t is the total number of trials, b_t is the total number of frequency ranges and c_t is the total number of channels. Marginal relevance Only a selection of CSP features are used for classification training and testing. During our BCI training, CSP features to be used as features for classifier training are ranked and a number of the highest-ranked CSP features are selected. For this study, we chose to use Marginal Relevance (MRelv) as our feature ranking method. The MRelv score for each feature in a feature set is the ratio of their between-group to within-group sum of squares. This idea underpins statistical methodologies such as ANOVA and is explained in more detail elsewhere [25], where it was used to screen out features when a large number of spurious features are present. The spatial filters for each CSP channel (rows of W) have a corresponding spatial filter within the same filter set at a mirrored location within the filter set W. Using both filters together offers the best classification results; for example, the 1st and last rows of W should be used together, or the 3rd and 3rd-from-last rows. Accordingly, we selected the four highest-ranked features [22] and their corresponding features for classifier training. This feature selection was then used during the subsequent BCI testing stage. Gaussian process classification The Gaussian process (GP) model is an example of the use of a flexible, probabilistic, non-parametric model with uncertainty predictions. It fits naturally in the Bayesian modelling framework in which, instead of parameterising a mapping function f(x), a prior is placed directly on the space of possible functions f(x) which could represent the nonlinear mapping from input vector x to output y. Its use and properties for modelling are reviewed in [26,27]. Various applications (e.g. [28,29] in medicine and bioengineering fields) have exploited different properties of GP models for regression problems. In the field of geostatistics GP regression models are used for probabilistic analysis of data and are more commonly known under the term "Kriging". A GP is a generalization of the Gaussian probability distribution. Besides regression, GP models can also be used for probabilistic classification [27,30,31]. In the case of classification the output data, y, are no longer connected simply to the underlying function, f, as in the case of regression, but are discrete. Since the classification is binary, variable y can have one value for one class and another for the other class, e.g. y ∈ {1, −1}. The classification of a new data point x* involves two steps instead of one. In the first step, a latent function f, which models qualitatively with a GP model how the likelihood of one class versus the other changes over the x axis, is evaluated. In the second step, the output of the latent function f is squashed onto the range [0, 1] using any sigmoidal function, π(f) = prob(y = 1|f). This means that the squashed output of the GP model represents the probability of a data point belonging to one of two types.
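A compact sketch of the MRelv ranking and the probabilistic classification stage is given below, assuming the CSP feature matrix V (trials × features) and binary labels y are already available. The study used the GPML Matlab toolbox; scikit-learn's GaussianProcessClassifier with an RBF kernel is used here only as a stand-in, and the pairing of each selected feature with its mirrored CSP partner is not shown.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def marginal_relevance(V, y):
    """MRelv score per column of V: ratio of between-group to within-group
    sum of squares, computed from the binary labels in y."""
    scores = np.empty(V.shape[1])
    grand_mean = V.mean(axis=0)
    for j in range(V.shape[1]):
        between = within = 0.0
        for cls in np.unique(y):
            x = V[y == cls, j]
            between += len(x) * (x.mean() - grand_mean[j]) ** 2
            within += ((x - x.mean()) ** 2).sum()
        scores[j] = between / within
    return scores

def classify_with_gpc(V_train, y_train, V_test, threshold=0.5):
    """Train a GP classifier on the selected features and return the squashed
    class probabilities together with hard labels from a 0.5 decision threshold."""
    gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
    gpc.fit(V_train, y_train)
    prob = gpc.predict_proba(V_test)[:, 1]
    return prob, (prob >= threshold).astype(int)
```

In use, the columns with the highest MRelv scores (plus their mirrored partners) would be selected from V before calling classify_with_gpc, mirroring the feature-selection step described above.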
The result then, after classification, is that each event is assigned a probability value in the range [0, 1], where a score of 0 indicates complete confidence that the event belongs to one class and a score of 1 indicates complete confidence that an event is of the other class. In practice, the majority of events take intermediate values. We applied a decision threshold of 0.5 to the probability scores to determine which class an event had been classified as belonging to by GPC. Analyses The first analysis carried out was 10-fold cross-validation on each dataset. Trials were split into 10 subgroups, separated in temporal order. Nine of the subgroups were used for: (1) training the FBCSP model, (2) selection of the top-ranking features using MRelv and (3) training the GPC model. For the remaining subgroup, the previously created FBCSP model was applied with the same features selected as determined by MRelv, and those features were then classified by the GPC model. This was then repeated for each of the 10 subgroups in a dataset. The purpose of this analysis is to establish the consistency of the EEG responses and the classification features derived during processing. A poor average classification result would indicate that the responses recorded in a dataset were inconsistent and thus possibly unsuitable for deriving a general response. Individual BCI training was carried out using individual healthy datasets. Each dataset, including the healthy ones, was then tested using each of the trained BCIs. This resulted in a set of one-on-one BCIs, where one subject's EEG patterns were classified against each of the healthy subjects' EEG patterns. Although this is an atypical BCI modality approach, it allows us to see how the classification rates vary between subjects. Grouped BCI training was also carried out using all of the healthy datasets. A general BCI was trained from the 10 healthy subjects. Each stroke subject dataset was individually tested against this general BCI. This is a common implementation for communications BCI and so is useful for our investigation. Furthermore, these classification results are useful for comparison to the individual BCI results obtained earlier. Similarly, we carried out Leave-One-Out Cross-Validation on the healthy datasets. All but one healthy dataset were grouped to train a BCI and the excluded healthy dataset was tested against this model. This was then repeated for each healthy dataset. These classification results are useful for comparison with the stroke-affected results. Individual BCI training was performed for each subject where the early dataset was used to train the BCI and the same subject's late dataset was then tested on that BCI. The comparison of the results from this analysis with the results from training the BCI on healthy EEG patterns is important to our investigation. Another result of interest is the frequency ranges of selected CSP features for each dataset. To investigate this, the frequency ranges of the selected CSP features were recorded. For each group of Healthy, Early Stroke and Late Stroke, we obtained a histogram of selected frequency ranges to see which were favoured and highlight any differences between groups. Single dataset 10-fold cross-validation The classification results following 10-fold cross-validation on each dataset are shown in Table 4.
8/10 healthy subject datasets scored 100% and the remaining 2/10 scored 97.5%, while only 5/10 stroke datasets scored 100% and the remaining 5/10 scored between 85% and 97.5%. There is no distinction between the early/late stroke datasets, as 2/5 early stroke datasets scored 100% while 3/5 late stroke datasets scored 100%. We tested a range of k values for k-fold cross-validation of k = 2, 4, 6... 16. We saw no significant changes in these results compared to k = 10. Individual healthy dataset models applied to all data A table of individual classification accuracies when training the models and classifier on each healthy dataset and then testing on all other datasets is presented in Table 5. Wilcoxon Rank Sum tests were used to evaluate statistical differences between these classification results for the Healthy, All Stroke, Early Stroke and Late Stroke groups. There were significant differences found between Healthy and the stroke groups. Grouped healthy dataset model applied to stroke data Classification accuracies of each stroke dataset when the BCI is trained on all of the healthy EEG datasets grouped together are presented in Table 6. Wilcoxon Signed Rank tests were used to test for statistical significance in the change in classification accuracy when using grouped healthy datasets to train the BCI as compared to the average results when using individual healthy datasets to train BCIs. We found no significant change (p > 0.05) in classification accuracy for datasets S1E, S1L, S2L, S3E and S4E. We found significant increases (p < 0.05) in classification accuracy for datasets S3L, S4L, S5E and S5L and a significant decrease (p < 0.05) for dataset S2E. Leave-one-out cross-validation of healthy datasets Classification accuracies of each healthy dataset when the BCI is trained on all other healthy datasets grouped together are presented in Table 7. Wilcoxon Rank Sum tests were used to evaluate statistical differences between these classification results for the Healthy, All Stroke, Early Stroke and Late Stroke groups when grouped healthy datasets were used to train the BCI. There was a significant difference found between Healthy (Median = 94) and All Stroke (Median = 70) (Z = 3.04, p < 0.05, r = 0.68), and between Healthy (Median = 94) and Early Stroke, calculated when very few data points were available. These between-group significant difference results are the same as those obtained when using individual BCIs trained on healthy datasets. Early Stroke datasets used to classify corresponding Late Stroke datasets Classification results of Late Stroke datasets when training with the corresponding Early Stroke dataset are shown in Table 8. Classification accuracy of the five Late Stroke datasets ranged from 62.5% to 95% with a median of 75.0%. We can compare these classification accuracy results to those obtained when training the BCI on individual healthy datasets and those obtained when training on grouped healthy datasets. Wilcoxon Signed Rank tests were used to compare these longitudinal classification results to those obtained when using BCIs trained on individual Healthy datasets. A significant (p < 0.05) increase was seen for S1L, S2L and S3L. There was no significant change (p > 0.05) found for S4L and S5L. Comparing the longitudinal classification accuracies to those obtained when training the BCI on grouped healthy datasets, we see that S1L improved from 62.5% to 82.5%, S2L improved from 55% to 72.5%, S3L improved from 87.5% to 95%, S4L reduced from 65% to 62.5% and S5L reduced from 90% to 75%.
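For readers who wish to reproduce this style of comparison, the two tests used above map directly onto functions in scipy.stats, as in the short sketch below; the accuracy values are invented for illustration and are not the values reported in Tables 5-8.

```python
from scipy.stats import ranksums, wilcoxon

# invented per-dataset accuracies (percent), for illustration only
healthy_acc = [100, 97.5, 95.0, 92.5, 95.0, 90.0, 94.0, 96.0, 93.0, 94.0]
stroke_acc = [70.0, 62.5, 87.5, 65.0, 90.0, 82.5, 72.5, 95.0, 62.5, 75.0]

# unpaired between-group comparison (Wilcoxon rank-sum test)
z_stat, p_between = ranksums(healthy_acc, stroke_acc)

# paired comparison of the same datasets under two training schemes
# (Wilcoxon signed-rank test); values again invented
acc_individual_training = [62.5, 55.0, 87.5, 65.0, 90.0]
acc_grouped_training = [82.5, 72.5, 95.0, 62.5, 75.0]
w_stat, p_paired = wilcoxon(acc_individual_training, acc_grouped_training)
```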
A table of collated classification results of each BCI training method for each stroke dataset is presented in Table 9. Frequency ranges of selected CSP features Presented in Table 10 are the frequency ranges of the CSP features selected for classifier training for each full dataset. We also present a corresponding histogram of this data grouped for Healthy, Stroke Early and Stroke Late datasets in Figure 4. This histogram suggests that, for healthy EEG, CSP features in the 16-20 Hz and 20-24 Hz ranges are most frequently selected. Early stroke datasets display some of the healthy datasets' preference for selection of features in the 16-24 Hz range; however, there is also increased selection of features in the 8-16 Hz range. Late stroke datasets appear to shift towards further selection of CSP features in lower frequency ranges, with a noticeable increase in selection in the 4-16 Hz range and a relative decrease in selection from 16 Hz upwards. Discussion Our first analysis result following 10-fold cross-validation demonstrated that stroke-affected EEG is more likely to contain misclassified individual trials than healthy EEG. We can speculate on possible reasons why this is so. For example, it is possible that EEG patterns from a stroke-affected brain are more variable and less stable than those from a healthy brain, even if the stroke subject is consistent in their motor task. Given that flawless classification is not always possible even with healthy subjects engaging successfully in a motor task, it is not unreasonable to expect similar or even worse consistency in stroke-affected brains. It is also possible that these misclassifications are due to the subject mis-performing the task. A lapse in concentration on the part of the subject, a restless hand movement, an involuntary leg twitch or possibly the effects of fatigue could reasonably cause a change in the event-related EEG, confounding the efforts of the classifiers. We make the assumption that each subject performed the task correctly and to the best of their ability. Visual supervision of the subjects did not reveal any movement incidents and neither did our artefact analysis of trial data. One aspect of recording experimental data with stroke subjects is that minimization of preparation time and set-up is important to reduce the likelihood of a subject becoming fatigued and being unable to complete the task. Therefore, screening for extraneous muscular artefact based on recording activity of other peripheral muscle groups with, for example, electromyography (EMG) would add substantially to the instrumentation set-up burden as well as risk further discomfort of the stroke patient. Incidentally, subject S3 reported being fatigued during their early experimental session, resulting in only 36 out of the potential 40 trials being completed. Dataset S3E also scored the 2nd-lowest 10-fold cross-validation classification rate of all datasets at 93.75%. This may suggest a link between fatigue and a low k-fold cross-validation result, but note that the lowest-scoring dataset was S4L, where no fatigue was reported. This illustrates the difficulty of describing the processes which underlie the variable EEG features identified. Three options for training a BCI were analysed. The first two, training on healthy EEG and testing on stroke EEG, represent zero-training BCI methods, an important consideration for stroke rehabilitation BCI.
In one case, we trained a BCI for each healthy dataset and in the second, we trained a single BCI on all healthy datasets grouped together. This latter method is similar to a general BCI used for communication and control. The former method, however, provides more information relating to individual training and testing datasets. We can see, for example, in Table 5 that dataset S1E was classified quite well with the healthy dataset H3 (97.5%) yet was classified poorly with the healthy dataset H6 (47.5%). These cross-dataset EEG classifications are important because the reasons for such varying classification successes may be important for advancing rehabilitation BCI and our understanding of stroke-affected EEG, yet these are not results that we would see if we restricted ourselves to the more typical general BCI method. A full investigation into these potential reasons is beyond the scope of this study yet may be very interesting and useful future work. Although training numerous BCIs on each healthy EEG dataset is useful for exploring aspects of stroke-affected EEG for BCI, this does not represent a real-world implementation of a zero-training BCI. This is the purpose of the single general BCI trained on grouped healthy EEG datasets. With this, we can see how well individual stroke EEG patterns would be classified in a zero-training scenario. We find that classification rates differ significantly from the average classification rates of the individual BCIs in 5/10 stroke datasets (S2E, S3L, S4L, S5E and S5L), with 4/5 displaying an improvement. We can see how classification rates of subjects' EEG patterns change from the early session to the late session. Some subjects see a marked increase (S3 and S5), some see little change (S2 and S4) and one sees a marked decrease (S1). These results suggest that the EEG signal space related to the motor task alters significantly over time, in at least some stroke cases. The results of accuracy measurements reported here may be useful in characterizing the change in EEG activity patterns during the recovery phase following stroke and so may potentially be used as a measurement of the magnitude of neuroplasticity and compensatory changes in the brain's motor networks. We suspect that these changes are due to numerous unmeasured factors, such as lesion location, patient physical rehabilitation or the patient's typical use of the stroke-affected hand. Although we have a measure of each subject's Kapandji score relating to their hand movement capabilities, we have not attempted to relate this to a subject's classification accuracies or their change in classification accuracies over time. While Kapandji score, or other measures of stroke-affected movement, may be related, we do not have a large enough dataset to attempt to make a connection. For the late stroke datasets, we can compare classification results for the third scenario where the BCI has been trained on that subject's own EEG recorded 6 months previously. Table 9 presents the classification results of all three methods and shows us that for 4/5 late stroke datasets, training the BCI on the early stroke dataset provides the best classification accuracy. There are some interesting points of discussion here regarding whether to train a rehabilitation BCI on healthy EEG patterns or a subject's own previously recorded EEG patterns. Firstly, using the healthy EEG datasets, classification results are lower.
This may lead to frustration for the stroke patient, resulting in non-compliance with rehabilitation BCI therapy, even though the EEG patterns the patient must generate reflect those typical of healthy cortex. Secondly, training on the early stroke EEG patterns could potentially result in a less frustrating experience and better engagement from the patient, improving their rehabilitation outcomes. Unfortunately, as we have seen in Figure 4, the early stroke EEG patterns are not characteristic of EEG from a healthy brain, reflecting, most likely, the network pathophysiology resulting from the stroke. It seems that the advantages and disadvantages of training a rehabilitative BCI on either general healthy EEG or a subject's own earlier recorded EEG will have to be considered. We feel that this will be an important question to be answered for this field of research. Presented in Figure 5 and Figure 6 are CSP plots of the highest-ranked CSP features for both classes of activity for all datasets. Unfortunately we have far too few datasets for these plots to provide more than a qualitative analysis of the differences between stroke-affected and healthy CSP plots. It appears that there is more left/right asymmetry in the common spatial patterns of healthy datasets than stroke-affected datasets. As the differences between the two groups are not strong enough to draw any conclusions, we instead feel that these plots suggest that stroke-affected CSP plots are not dissimilar to healthy CSP plots. Perhaps with a much larger dataset a thorough analysis of the differences between stroke-affected and healthy CSP plots would be possible. The decision to record a session of EEG activity to train a BCI for each subject may also depend on a trade-off between improved classification accuracy and any possible negative effects of subjecting a stroke patient to an EEG recording session. Possible negative effects include anxiety (as many stroke patients are elderly and may have apprehension about participating in an EEG recording session), loss of therapy time (as time spent training leads to a reduction in time spent using the BCI in a therapeutic mode) and fatigue (because a stroke patient may become fatigued as a result of training, leaving little energy for the therapeutic interaction). In patients where the above factors are prevalent, the BCI may have to be trained using healthy data. The disadvantage of this approach from a therapy perspective is that the inferior performance of the classifier may lead to frustration on the part of the patient and a potential rejection of the therapy. Given the changes in the EEG pattern in stroke compared to the stereotypical patterns for healthy subjects, and their evolution over time, it is clear that there is considerable scope for improved machine learning techniques which can work from short session data and continually adapt to the user. There is some recent work in this area for healthy subjects using passive movement approaches [32] and data space adaptation techniques [33]. However, we wish to remark here that it is incredibly important to note the tension between using machine learning to adapt the interface to the EEG patterns on one hand and forcing the patient to adapt to a classifier which is targeting the appropriate cortical networks for healthy movement on the other.
To understand this somewhat subtle point, it is worth noting that natural recovery in stroke is often suboptimal (spasticity, abnormal muscle synergies, etc.) and these neurological symptoms can be related to pathophysiological motor and compensatory networks that have arisen from the reorganization process. It is these changes which are most likely reflected in the EEG measurements reported here. If a machine learning algorithm consistently adapts to the patient to optimize communication with the feedback interface, the therapy may well lead to reinforcement of these maladaptive changes. It may be better that the patient adapts to a classifier which is set up to expect EEG features more typically associated with engagement of those areas of cortex associated with healthy movement. The catch is that such a classifier may be far too frustrating to use, and therefore some trade-off between encouraging engagement and directing recovery will have to be met for an effective BCI instrument in this use-case scenario. This issue should be contrasted with the corresponding case for communicative BCI, which instead adapts to whatever aspects of a subject's EEG are under volitional control, requiring less adaptation on the part of the user. In terms of the machine learning options, Gaussian Process classification was our chosen method because, as an alternative to the more commonly used method of Naive Bayesian classification for BCI, GP classification makes no assumptions about the underlying class boundary between regressors, including allowing for non-linear class boundaries. As we are working with stroke-affected EEG, we feel that this is a more robust classification method to use when we are uncertain of the class space. At the other extreme, neural networks would provide the most detailed class boundary. However, GP classification requires optimization of relatively few parameters compared to neural networks. We see this as an advantage over both Naive Bayesian classification and neural networks. We wish to explore this method further as part of our ongoing investigation into its usefulness for BCI applications. Finally, GP classification produces more information than that reported in this study and this could potentially be used for gaining deeper insight into the variability of the features. As stated in the description, GPC does not simply return a binary class membership but a probability of class membership. We applied a decision threshold to this probability, but the probabilities themselves are information that could potentially be explored in depth. After initial investigations, we found that there is a notable difference in the variance of class membership probabilities for stroke patients compared to healthy subjects. This is an investigation that we are carrying out presently.
Figure 5 Stroke CSP plots. Plots of the highest-ranking common spatial patterns (columns of W^{-1}) for each stroke dataset along with the frequency range the CSP plot belongs to.
Figure 6 Healthy CSP plots. Plots of the highest-ranking common spatial patterns (columns of W^{-1}) for each healthy dataset along with the frequency range the CSP plot belongs to.
Conclusions Rehabilitative BCIs must take into account the difference in EEG patterns between healthy subjects and stroke-affected subjects in order for the system to be effective and to aid in recovery.
The ideal scenario of a zero-training rehabilitative BCI is possible using healthy EEG, but the classification accuracy is lower than for healthy subjects, which could be excessively frustrating for patients. Classification accuracy of stroke EEG is improved significantly through subject-specific BCI training sessions, even ones recorded 6 months prior; however, this comes with a cost in terms of loss of rehabilitation time and potential over-adaptation to the user, which may be detrimental in terms of optimal recovery. It is clear that a rehabilitative BCI must have different technical requirements from those of a communication and control BCI, and these differences must be considered when developing the appropriate machine learning scheme for this use case.
9,286.2
2014-01-28T00:00:00.000
[ "Medicine", "Engineering" ]
Pattern Discovery in White Etching Crack Experimental Data Using Machine Learning Techniques: White etching crack (WEC) failure is a failure mode that affects bearings in many applications, including wind turbine gearboxes, where it results in high, unplanned maintenance costs. WEC failure is unpredictable as of now, and its root causes are not yet fully understood. While WECs were produced under controlled conditions in several investigations in the past, converging the findings from the different combinations of factors that led to WECs in different experiments remains a challenge. This challenge is tackled in this paper using machine learning (ML) models that are capable of capturing patterns in high-dimensional data belonging to several experiments in order to identify variables influential to the risk of WECs. Three different ML models were designed and applied to a dataset containing roughly 700 high- and low-risk oil compositions to identify the constituting chemical compounds that make a given oil composition high-risk with respect to WECs. This includes the first application of a purpose-built neural network-based feature selection method. Out of 21 compounds, eight were identified as influential by models based on random forests and artificial neural networks. Association rules were also mined from the data to investigate the relationship between compound combinations and WEC risk, leading to results supporting those of previous analyses. In addition, the identified compound with the highest influence was proved in a separate investigation involving physical tests to be of high WEC risk. The presented methods can be applied to other experimental data where a high number of measured variables potentially influence a certain outcome and where there is a need to identify variables with the highest influence. The dataset was split into two subsets, one containing only high risk oils and one containing only low risk oils. Rules were mined from each of the two subsets. Two main findings were obtained from this analysis. The first finding was that high risk oils were more heterogeneous than low risk oils. In other words, high risk oils were much more likely to contain more than two compounds as compared to low risk oils, which almost always contained a maximum of two compounds, as shown in Figure 4. This was indicated by the significantly lower number of association rules obtained from low risk oils (seven rules) compared to high risk oils (62 rules). Several root cause investigations of WEC failures found that lubricants and their components, e.g., additives, can play an important role in leading to WECs [22,24,25]. In one investigation, an experiment was performed with a lubricant composed of only a base oil (no additives), resulting in no WEC failure even after 1000 h of testing, while another experiment with a lubricant containing over-based calcium sulfonates as rust preventers and short-chain zinc dithiophosphates as antiwear additives resulted in WEC failure after 40 h of testing [22]. Paladugu et al. also performed life tests on cylindrical roller thrust bearings in different oils [26]. A so-called 'WEC critical oil' with additives resulted in premature bearing failure within 5% of the lifetime of another bearing that was lubricated with a mineral oil containing no additives [26]. These results not only implicate the so-called 'WEC critical oil', but also indicate that oil additives may have an influence on risk of WECs.
Similarly, several other investigations used a specific oil, containing additives, to successfully promote WEC failure [1,10,21,23,27], the most recent of which is the investigation by Gould et al., where lubricant additives were systematically varied to study the effect of different additive combinations on bearing time until failure [24]. The investigation found that the lubricant containing zinc dialkyl-dithiophosphate (ZnDDP) led to WECs sooner than any other tested lubricant under the test conditions [24]. While WECs were produced under controlled conditions in several investigations in the past, converging the findings from the different combinations of factors that led to WECs in different experiments remains a challenge. This challenge could be addressed using machine learning (ML) algorithms that are able to discover patterns in high-dimensional data belonging to several experiments. However, ML algorithms are often criticized for a lack of transparency. Transparency into the drivers of accuracy of ML algorithms is crucial if such algorithms are to be used to identify root causes from experimental data. This paper addresses these issues by first developing machine learning models that are able to learn patterns from experimental data and demonstrate high skill in identifying risky variable combinations from different experiments. The developed models are then further tested following a technique designed to reveal the inner workings of the models driving the accuracy of their judgements. More specifically, the models were tested to identify which variables are important for the performance of the models and to what extent, relative to one another. In order to train and assess the models in identifying risky conditions with respect to WECs from previous experiments, a dataset containing roughly 700 high- and low-reference oil compositions was used. The data was provided by Schaeffler on the condition that the identities of the constituting oil compounds remained anonymized. The dataset was compiled based on physical tests and chemical simulations performed by Schaeffler in collaboration with 4LinesFusion, a supplier of industrial analytics solutions [28,29]. Three data analysis methods were designed and applied to the dataset to identify patterns among high-reference oil compositions, leading to knowledge of the constituting chemical compounds that make a given oil composition high-reference with respect to WECs. The methods presented in this paper can be applied to other experimental data where a high number of measured variables influence a certain outcome and where there is a need to identify variables with the highest influence. Since this is a common objective of many root-cause investigations in tribology, the authors aim to support the efforts of a large audience in the field of tribology with the outcomes of this paper. Data Description Roughly 700 low- and high-reference oil compositions were present in the available dataset. More specifically, 352 oil compositions were present which were identified by Schaeffler and 4LinesFusion to be low risk with respect to WECs. Additionally, 327 oil compositions identified to be high risk with respect to WECs were present in the dataset. Eight oil compositions were identified as medium risk. These compositions posed a significant class imbalance in the dataset due to their considerably lower number of examples in the available dataset compared to the number of examples of high and low risk oil compositions.
Such a pronounced class imbalance can negatively impact the performance and accuracy of the developed ML models later on [30]. Therefore, the 8 oil compositions were neglected in the subsequent analyses. From here on, the low- and high-reference oil compositions are simply referred to as low- and high-risk oils. The oil compositions contained either 1 or 2 additives in addition to the base oil. Additives and base oils, from here on referred to as compounds, were anonymized by compound identification numbers (IDs), e.g., c1, c2, or c21. In total, 21 compound identification numbers were present in the dataset. For clarity, Table 1 shows two oil compositions from the dataset. The oil compounds selected for this investigation were used in bearing lubricants in several test benches by project partners to instigate WEC failure. Bearings in wind turbine gearboxes as well as other industrial applications suffer from costly, unplanned maintenance due to WECs [6][7][8][9][10][11]. In addition, oil additives have been shown to influence the risk of WECs [22,24,25]. Therefore, there is high interest in identifying the degree to which the selected oil compounds influence WEC risk. Methods Overview Three methods were used to discover patterns in the available data. First, models based on random forests and artificial neural networks were trained and tested to identify oil compounds that influence the risk level of a given oil composition with respect to WECs. In addition, association rule mining was utilized to investigate the relationship between compound combinations and WEC risk, leading to results supporting those of previous analyses. The methods are explained in more detail in the following subsections. Random Forests In order to discover the pattern in the available data and correctly classify the WEC risk level of a given oil composition using the percentages of its constituting compounds as input, a random forest (RF) model was developed. The available data of 679 oils, including their respective constituents' percentages and their risk classification (high or low), were used to train and test the RF models. The random forest [31] model relies on the collective ability of multiple weak classifiers (decision trees) to learn to approximate a function. In this case, the desired function should output the risk level of a given oil composition (high or low) using the percentages of the 21 possible compounds contained in the oil as input variables. Since a random forest is no more than an ensemble of decision trees, Figure 1 illustrates how an example decision tree would classify a given oil based on its constituting compounds. Starting from the root of the tree at the top of the figure, a given oil either follows the left or right path depending on its percentage of c9. It then follows the appropriate path depending on its percentage of c6 or c3 to the so-called leaves of the decision tree, illustrated as pie charts in Figure 1. After a number of oil compositions go from the root of the tree to one of the four leaves depending on their constituting compounds, each leaf would have a ratio of high and low risk oil compositions as shown in the figure. This process is referred to as training the decision tree. In this example tree, 90% of the oil compositions that made it to the leftmost leaf are low risk oils. If a new oil composition with unknown risk level reaches the leftmost leaf, then the decision tree estimates with 90% probability that the new oil composition is low risk with respect to WECs.
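To make the tree walk described above concrete, the following is a small sketch of fitting a single decision tree on a toy compound-percentage table with scikit-learn; the compound percentages and the rule that c9 drives the label are invented for illustration and do not come from the Schaeffler dataset.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# toy data: rows are oils, columns are percentages of compounds c1..c21 (invented)
rng = np.random.default_rng(seed=0)
X = rng.uniform(0.0, 5.0, size=(40, 21))
y = (X[:, 8] > 2.5).astype(int)        # pretend the c9 percentage drives the risk label

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# an unseen oil ends up in one leaf; the class fractions in that leaf
# play the role of the pie charts in Figure 1
new_oil = rng.uniform(0.0, 5.0, size=(1, 21))
p_high_risk = tree.predict_proba(new_oil)[0, 1]
```

The leaf class fractions returned by predict_proba correspond directly to the "90% low risk" reading of the leftmost leaf in the example.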
A random forest contains a number of such decision trees with different numbers of branches and different splitting criteria at each branch to collectively reach a more accurate classification. To develop a random forest, some design parameters, so-called hyperparameters, need to be decided by the investigator in a process of tuning the RF to reach optimal performance. Some of the most influential hyperparameters on RF performance are [32]: • Sample size: the size of the sample selected from the total number of oils to be the training data for each tree in the random forest. Decreasing this value will most likely result in less accurate predictions by the individual trees. However, increasing this value can also result in overfitting, where the RF achieves significantly higher performance on the training data, but performs poorly on the test data, i.e., new oil compositions with unknown risk levels. • Number of tried features at each split (from here on referred to as ftry): the number of randomly selected candidate variables, in this case compound IDs, for each split in a given decision tree in the RF when growing it. A split in a decision tree is every point when a given oil either follows a right or left path. For example, in Figure 1, the first split is performed according to the percentage of c9 in the oil. If two variables are tried with an ftry = 2, then the variable that best splits high and low risk oil compositions is selected. For example, if c1 and c2 are tried and c1 results in a split with the right side of the split containing only high risk oils and the left side containing only low risk oils, and c2 results in a mixture of high and low risk oils on both sides of the split, then c1 is chosen. This is because the split according to the percentage of c1 in the oils, in this example, results in a purer separation of high and low risk oils compared to c2. If ftry is equal to 3, then three compound IDs are instead evaluated at each split. Similar to sample size, decreasing ftry results in worse performance by the individual trees, but increasing it can result in overfitting. Much like the case with sample size, the right balance needs to be found where the highest performance by the RF is reached. • Node size: the minimum number of oils in a terminal node of any tree in the RF. Without going into more details, the typically used value for classification problems is 1, which was the value chosen for developing the RF in this investigation since it generally provides good results [32]. When attempted, increasing the node size did not lead to higher accuracy. Probst et al. provide more details on random forest hyperparameters as well as some best practices for tuning RF models [32]. In addition, the pioneering paper by Breiman [31] provides more information about random forests. The number of trees in the random forest is also a design decision when developing a random forest. The degree of influence of this hyperparameter is controversial with the research consensus favoring setting it to a computationally feasible large number [32,33]. In this investigation, increasing the number of trees above 500 trees did not lead to higher accuracy, so the number of trees was set to 500. In order to identify the optimal ftry and sample size values, hyperparameter tuning was performed by trying different combinations of the two hyperparameters and assessing the performance of the resulting random forests. 
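Before turning to the tuning outcome, the sketch below shows how the three hyperparameters discussed above map onto a concrete implementation. The study used an R implementation; scikit-learn is used here only as an approximate analogue, and the parameter values are the ones quoted in the text rather than recommendations.

```python
from sklearn.ensemble import RandomForestClassifier

# rough scikit-learn counterparts of the hyperparameters discussed above:
#   n_estimators     ~ number of trees (500 in the text)
#   max_features     ~ ftry, candidate compounds tried at each split
#   max_samples      ~ sample size, oils drawn (with replacement) for each tree
#   min_samples_leaf ~ node size (1 for classification)
rf = RandomForestClassifier(
    n_estimators=500,
    max_features=5,
    max_samples=53,
    min_samples_leaf=1,
    bootstrap=True,
    random_state=0,
)
# rf.fit(X_train, y_train); rf.score(X_test, y_test)
```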
Ultimately, the combination resulting in the random forest with the least classification error was selected. In case of ties, the combination requiring less computational effort was selected. Since there were only 21 compound IDs present in the data, ftry values could only range from 1 to 21. For the sample size, it was decided to try the range from 1 to 469 with steps of 26, since the training set contained 475 oils. All possible ftry values were tried. The combination resulting in the best performance was ftry = 5 and sample size = 53. This combination led to a 10-fold cross validation accuracy of 98.51%. The R package by Meyer et al. was utilized for tuning the hyperparameters of the random forest models in this paper [34]. It is worth noting that the dataset was initially split randomly into a 70%/30% split before tuning the RF, and the tuning was performed using only the 70% set (containing 476 oils). Ten-fold cross validation (CV) was used to estimate the error of each RF model. The benefit of this method is that it allows for testing the machine learning algorithm with the chosen hyperparameters on every available oil in the dataset. Therefore, this was the method of choice for validating the generalizability of all machine learning algorithms in this investigation. For a more detailed explanation of how 10-fold cross validation was applied in this investigation, the reader is referred to publication [28]. The following steps provide an overview of the analysis performed on the oil compositions using random forests: 1. Splitting the 679 available oil compositions randomly into two smaller datasets: 70% of the oils are selected as the training set and 30% are selected as the test set. 2. Hyperparameter tuning: different combinations of sample size and ftry are used to train a random forest model using the training set. Ten-fold cross validation is used to estimate the classification performance of each resulting random forest model. The combination resulting in the top performance is identified as the optimal combination. 3. Developing a tuned RF classifier: the optimal hyperparameter combination is used to develop an RF classifier, trained using the training set. 4. Testing the tuned RF classifier: the developed tuned RF classifier is tested on the test set to verify its accuracy on unseen data. 5. Reaching a more representative estimate of model accuracy: the optimal hyperparameter pair is used to perform 10-fold cross validation on all 679 oils. This is done to reach an estimate of accuracy that involves testing every available oil rather than only the 30% of the available oils in the test set. After developing a random forest classifier to accurately classify the WEC risk level of oil compositions, the focus shifted to revealing the inner workings of the developed ML model and gaining an understanding of what drives the accuracy of its classifications. In other words, the task was identifying which compound IDs had an influence on the WEC risk of a given oil composition and to what extent.
This was achieved by following the Boruta algorithm [35]; 21 randomly shuffled versions (so called shadows) of the compounds were added to the data, and a statistical test was used to iteratively remove the compounds proven to be less important in WEC risk classification than the random shadows. A compound was considered unimportant if, on average over several iterations, it was found to be less important than the most important shadow compound. Each shadow was a randomly shuffled copy of one of the 21 compound identification numbers present in the dataset. Kursa and Rudnicki also provide more details on the Boruta algorithm and the calculation of the importance values [35]. Artificial Neural Networks Artificial neural network (ANN) models were trained to classify the WEC risk (high or low) of an oil, taking the identities of its constituting chemical compounds and their respective percentages as input. The available dataset of oil compositions and their risk classification were used to train and test the ANN models. Similar to the random forest model, developing an ANN model involved selecting and tuning hyperparameter values to improve model accuracy. Eleven neural networks were developed, gradually increasing the 10-fold cross validation classification accuracy on unseen test oils to 99.8% by tuning the hyperparameters of the networks [28]. Changing the following hyperparameters proved most influential on model performance: the number of hidden layers, the number of nodes per layer, the types of activation functions, the type and parameters of regularization, the type of loss function, and the parameters of the optimizer function. The network delivering the highest accuracy of 99.8% contained 3 hidden layers with L2 regularization applied only after the first hidden layer to help prevent overfitting. Ng provides more details on L2 regularization [36]. The 3 hidden layers contained 19, 15, and 9 nodes, respectively. The activation function used after every hidden layer was the leaky rectified linear unit (leaky ReLU) [37] as a countermeasure against the vanishing gradient problem. The adaptive moment estimation (Adamax) optimization function [38] was used to optimize the neural network during training with the exponential decay rates for the first and second moment estimates set to 0.93 and 0.98, respectively, and the learning rate set to 0.0018. Categorical cross entropy was used as the loss function. Finally, the output layer consisted of two nodes corresponding to high or low risk with respect to WECs. Softmax [39] was used as the activation function following the output layer in order to facilitate the determination of the target classification, high or low risk, of a given oil composition. ANN models, through the process of training, approximate a desired function taking in the available input and producing the desired output. In this case, the input was the composition of each oil under investigation with respect to the 21 possible compounds in the dataset; i.e., there were 21 input variables. The output was the WEC risk level of the lubricant, high or low risk. More complex problems require more complex neural networks, and the aforementioned hyperparameters allowed for modularity in constructing the ANN model to meet the required complexity of the problem at hand. The process of tuning the hyperparameters involved iterations of trial and testing guided by previous experience and domain knowledge. 
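For reference, the architecture described above corresponds roughly to the following Keras sketch (written in Python/TensorFlow, whereas the study used the R interface to Keras [43]). The layer sizes, activations, optimizer settings, and loss follow the text; the L2 penalty weight, training epochs, and batch size are not reported in the paper and are therefore placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_wec_classifier(n_compounds: int = 21) -> tf.keras.Model:
    """Approximate reconstruction of the best-performing network described above."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_compounds,)),            # 21 compound percentages
        # Hidden layer 1: 19 nodes, L2 regularization (penalty weight is a placeholder).
        layers.Dense(19, kernel_regularizer=regularizers.l2(1e-3)),
        layers.LeakyReLU(),
        # Hidden layers 2 and 3: 15 and 9 nodes, leaky ReLU, no regularization.
        layers.Dense(15),
        layers.LeakyReLU(),
        layers.Dense(9),
        layers.LeakyReLU(),
        # Output layer: two nodes (high / low WEC risk) with softmax.
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adamax(
            learning_rate=0.0018, beta_1=0.93, beta_2=0.98),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Illustrative usage (epochs and batch size are not reported in the study):
# model = build_wec_classifier()
# model.fit(X_train, y_train_onehot, epochs=200, batch_size=32)
```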
Schmidhuber provides a more detailed overview of artificial neural networks and deep learning [40]. The neural networks were tested further to identify the most influential oil compounds in terms of risk of WECs. Twenty-one compound identities were present in the available data, so 10-fold cross validation was performed 21 times on each network architecture. Each time, a network with the previously selected hyperparameters was trained on correct data but tested on data in which the information of one of the compounds was shuffled. If the average 10-fold cross validation classification accuracy did not decrease appreciably (i.e., remained above 97%) as a result of distorting the data of a compound, then it was concluded that this compound was not influential in classifying the WEC risk of an oil. As far as we know, this method of feature selection was developed during this investigation, inspired by the fundamental idea of the Boruta analysis. Additional work will be done to test its capabilities with different datasets before the release of a more detailed publication concerning the method. For ease of reference, this method is named Neural Network-based Feature Selection (hereafter referred to as "NN-based FS"). Association Rule Mining In addition to identifying individual oil compounds that are influential to WEC risk classification, an analysis was performed to investigate the relationship between frequently occurring combinations of compounds in the available data and WEC risk. The motivation behind this analysis came from previous investigations, such as [5], which concluded that certain additive combinations resulted in WECs, while others did not. The algorithm used to perform this task is called the Apriori algorithm [41]. The algorithm searches for frequently occurring sets, or combinations of compounds, in the oil dataset in an unsupervised manner based on user-defined minimum criteria. The minimum criteria ensure a standard for the quality of rules, with quality referring to the strength of the identified associations and their frequency of occurrence in the dataset. The algorithm then generates association rules based on the identified frequent sets that shed light on which compounds are likely to occur together with which other compounds or groups of compounds. For example, the two association rules shown in Table 2 describe the likelihood of finding c12 in an oil that already contains c16 (rule number 1) and the likelihood of finding c12 in an oil that already contains the compound combination of c8 and c9 (rule number 2). The four main metrics used to describe the likelihood of a given association rule can also be used, for example, by the user to define the minimum criteria that filter out rarely occurring associations or association rules with low confidence. Several metrics are used to describe the likelihood or the quality of a given association rule. The metrics of the rules in Table 2 are listed in Table 3. These metrics are explained below [42]: 1. Support: the proportion of oils in the dataset that contain all the compounds in a given association rule. For example, the support of rule number 1 from Table 3 is calculated by dividing the number of oils containing both c16 and c12 in the dataset (36 oils) by the total number of oils in the dataset (679 oils); 36/679 = 0.0530. 2. Confidence: the support of the rule divided by the proportion of oils that contain the compound(s) on the left hand side (LHS) of the rule.
Using rule number 2 from Table 3 as an example, confidence is calculated by dividing the support of the rule (0.0133) by the proportion of oils in the dataset that contain both c8 and c9 (0.0133), which equals 1. This essentially means that all oils that contain both c8 and c9 also contain c12. 3. Lift: the confidence of a rule divided by the proportion of oils in the dataset that contain the compound(s) on the right hand side (RHS) of the association rule. This metric indicates how surprising an association rule is given the expected probability of finding the RHS compound(s) in an oil in the dataset. For instance, rule number 3 from Table 3 has a lift of almost 1, which indicates that the probability of finding c16 in any oil in the dataset is almost identical to the probability of finding c16 in an oil that already contains c3. This means that the association suggested by rule number 3 is weak. In contrast, rule number 2 from Table 3 has a lift of 5.853, which indicates that the association indicated by the rule is strong. For rule number 1, the lift value is below 1, which means that an oil containing c16 is less likely to also contain c12 than a randomly chosen oil from the dataset, i.e., the association is negative. 4. Count: the number of oils in the dataset that contain all the compounds in a given association rule. Using rule number 2 from Table 3 as an example, the count is 9, which means that the number of oils in the dataset that contain the combination of c8, c9, and c12 is 9. Hahsler et al. provide more details on the Apriori algorithm used in this investigation and the metrics of association rules [42]. The defined minimum criteria for this investigation were chosen to be a confidence of 50% and a support of 0.1%, since confidence and support are the best-known constraints for this algorithm [42]. After identifying association rules from the oils dataset using these criteria, the focus shifted to the goal of identifying and comparing association rules from low risk oils and high risk oils. This was performed by splitting the dataset into two datasets consisting of low risk oils and high risk oils, respectively, and mining each of these two datasets for association rules using the same minimum criteria. Finally, the generated rules and their respective metrics were compared to investigate the relationship between compound combinations and WEC risk.
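A compact sketch of this mining step is given below. It uses the Python mlxtend package rather than the R arules package by Hahsler et al. [44] that was actually used, and it assumes a hypothetical one-hot DataFrame `oils` with one boolean column per compound ID plus a risk label; the thresholds are the minimum support of 0.1% and minimum confidence of 50% stated above.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

def mine_rules(presence: pd.DataFrame) -> pd.DataFrame:
    """Mine association rules with the minimum criteria used in the study:
    support >= 0.1% and confidence >= 50%."""
    frequent = apriori(presence, min_support=0.001, use_colnames=True)
    rules = association_rules(frequent, metric="confidence", min_threshold=0.5)
    # Keep the metrics discussed in the text; the count metric is simply
    # rules["support"] multiplied by the number of oils in the mined subset.
    return rules[["antecedents", "consequents", "support", "confidence", "lift"]]

# `oils` is a hypothetical DataFrame: boolean columns c1..c21 plus a "risk" column.
# rules_all  = mine_rules(oils.drop(columns="risk"))
# rules_high = mine_rules(oils[oils["risk"] == "high"].drop(columns="risk"))
# rules_low  = mine_rules(oils[oils["risk"] == "low"].drop(columns="risk"))
# The number of rules (len(rules_high) vs. len(rules_low)) and the per-rule
# metrics can then be compared between the two risk classes.
```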
Random Forests An RF model was used to classify the risk level of oil compositions after the dataset was randomly split into training (70%) and testing (30%) sets. The accuracy on the test set was 99.03% with two misclassifications. The chosen combination, which led to the previously mentioned 99.03% accuracy value, was ftry = 5 and sample size = 53. Using these hyperparameters, 10-fold cross validation was applied to the entire dataset, leading to a slightly more pessimistic but more representative estimated accuracy of 98.51%. The Boruta algorithm [35] was used to identify important compounds for classifying the WEC risk levels of oils. Broadly speaking, the importance of a given variable, in this case a compound, relates to the potential loss in accuracy if that variable were excluded from the input. Kursa and Rudnicki provide more details on the definition of importance in the context of the Boruta algorithm [35]. This led to the identification of eight significantly important compounds: c16, c9, c6, c21, c14, c7, c8, and c11. As shown in Figure 2, 13 of 21 compounds were found to be important. However, the last eight compounds on the right hand side of the figure (shown in a circle) were clearly significantly important in comparison with the other compounds. Shown in the figure are also the mean, minimum, and maximum importance values of the shadows, indicated as s.Mean, s.Min, and s.Max, respectively. The unimportant compounds were those with less importance than s.Max; therefore, they are arranged to the left of s.Max in Figure 2. Artificial Neural Networks As mentioned earlier, ANN models were trained to classify the WEC risk (high or low) of an oil. Eleven neural networks were developed, gradually increasing the 10-fold cross validation classification accuracy to 99.8% by altering the network architectures [28]. In addition to increasing the number of hidden layers and adjusting the number of nodes per layer, using the leaky rectified linear unit (leaky ReLU) activation function and the adaptive moment estimation (Adamax) optimizer proved useful in increasing model accuracy. The top performing ANN consisted of three hidden layers. The R package by Allaire and Chollet was utilized to implement the neural network algorithms [43]. The ANN models were tested further to identify the most influential oil compounds for WEC risk following the NN-based FS method described in the methods section. As shown in Figure 3, eight compounds were found to be influential.
Figure 3 shows box plots where the average of each box represents the 10-fold cross validation accuracy for the respective compound ID. In order to verify the importance of the eight identified important compounds shown in Figure 3, a new ANN classifier was developed. The classifier was trained to classify WEC risk level of oils based only on the data of the eight identified important compounds. In other words, the input to the new classifier did not include the composition data of the remaining 13 compounds available in the dataset. The developed classifier was able to achieve 10-fold CV accuracy of 98.5% [28]. Association Rule Mining The Apriori algorithm was used to investigate the relationship between compound combinations and WEC risk. As discussed in the methods section, the minimum criteria implemented to mine association rules were a confidence of 50% and a support of 0.1%. Twenty-two rules were mined using these criteria, and afterwards the available dataset was split into two datasets: a dataset containing only high risk oils and one containing only low risk oils. Rules were mined from each of the two datasets separately using the same minimum criteria resulting in 62 rules from the high risk set and only seven rules from the low risk set. The R package by Hahsler et al. was utilized to implement the Apriori algorithm [44]. Two main findings were obtained from this analysis. The first finding was that high risk oils were more heterogeneous than low risk oils. In other words, high risk oils were much more likely to contain more than two compounds as compared to low risk oils, which almost always contained a maximum of two compounds, as shown in Figure 4. This was indicated by the significantly lower number of association rules obtained from low risk oils (seven rules) compared to high risk oils (62 rules).
The second finding from the resulting association rules was that the occurrence of certain compound groups in low risk oils was different compared to that of high risk oils. This finding is clearly visible in Table 4, which lists association rules mined from the high risk oils dataset that were not in common with the association rules mined from the low risk oils dataset, as well as their respective metrics. Despite relaxing the minimum criteria of low risk oils to attempt to extract more, albeit weaker, association rules in common with the ones mined from high risk oils, many association rules from high risk oils were still unique to high risk oils. This further supported the finding that the occurrence of compound combinations was significantly different in high and low risk oils. Table 4 lists a selection of these rules with confidence above 80%. Table 4. Association rules from high risk oils (minimum confidence of 80%). Discussion The similar results of different methods help verify the validity of the results reached using the aforementioned data analyses. The important compounds found using ANNs through the newly developed neural network-based feature selection algorithm (NN-based FS) and those found using random forests through the Boruta algorithm are listed in Table 5 in the order of importance.
Comparing the identified important compounds from the two methods, it becomes clear that the two results are in agreement, with a slight difference in the order of importance towards the relatively less important compounds. This helps validate the two results, since two different methods led to an almost identical conclusion. Table 5. Identified important compounds. Neural Network-based Feature Selection (NN-based FS): c16, c9, c6, c21, c7, c14, c11, c8. Boruta [35]: c16, c9, c6, c21, c14, c7, c8, c11. A significant observation was made after reaching the order of important compounds listed in Table 5 using the NN-based FS method. Based on chemical domain knowledge, if those compounds were to be ordered based on their respective ability to release hydrogen, that order would match the order of importance identified using the NN-based FS method, as listed in Table 5. This indicates that the results of this investigation are in agreement with previous investigations [2,45], which found the release of hydrogen and its diffusion into the bearing steel to be a driver of WEC formation. As for the investigation of the relationship between combinations of compounds and WEC risk, two important observations are visible from the resulting association rules. Firstly, the compound associations listed in Table 4, which were found only in high risk oils, had one thing in common: they almost always, with one exception, contained one or more of the top three important compounds identified by the other analyses as influential to WEC risk classification. This further supports the results of the other analyses that pointed to these compounds as risky. In addition to the first observation, the fact that low risk oils generally contain fewer compounds than high risk oils, as shown in Figure 4, indicates a possibility that oils with more compounds may be more likely to result in WEC failure compared to oils with fewer compounds. It may also be the case that having more compounds in an oil increases the likelihood that a high risk compound is present in the oil. Future investigations might use this observation as a starting point to examine these possibilities. A possibility still remains that certain combinations of compounds that are not risky on their own may become risky when combined. Since the compounds in the association rules in Table 4 are not even weakly associated in low risk oils yet strongly associated in high risk oils, they may be, pending further investigations, risky combinations with respect to WEC failure. This investigation shows the applicability of data analytics approaches to phenomena where several factors are suspected of having an influence on a certain outcome. With the help of these methods, it is possible to identify the truly influential factors among a number of candidate factors. An investigation [24] involving a number of tests with different oils led to a conclusion consistent with the results of the data analyses presented in this paper. The completed data analyses on the available dataset pointed to c16 as the most important oil compound for classifying the WEC risk of an oil. The project partner Schaeffler also reported that several tests pointed to c16 as a high risk oil compound with respect to WECs. This agreement between the results of the analysis performed and the results from the project partner corroborates the findings and applications presented in this paper.
Conclusions This paper presented applications of three machine learning techniques to tackle the challenge of pattern discovery in high-dimensional data belonging to multiple experiments on WEC bearing failure. This includes the first application of the purpose-built Neural Network-based Feature Selection (NN-based FS) method. The main conclusions are as follows: 1. It is possible to consolidate findings from multiple experiments using the presented ML models to discover patterns and conduct root-cause analyses on WECs using only historic data from previous experiments. 2. It is possible to reach said patterns via ML models while maintaining transparency into the drivers of accuracy of the ML models using the techniques presented in this paper. 3. The presented techniques are able to identify patterns to classify a given oil composition as high- or low-risk with respect to WECs with high accuracy using data from previous experiments. 4. The presented techniques are able to identify oil compounds that influence WEC risk using data from previous experiments. 5. NN-based FS was developed and applied during this investigation as a method of feature selection based on neural networks. Since this is the first application of the method, the authors aim to test its capabilities with different datasets before releasing a more detailed publication on the method.
10,557.4
2019-12-14T00:00:00.000
[ "Engineering", "Materials Science", "Computer Science" ]
Conservation Hospice: A Better Metaphor for the Conservation and Care of Terminal Species The extinction crisis creates a need to increase conservation funding and use it more efficiently. Most conservation resources are allocated through inefficient political processes that seem ill equipped for dealing with the crisis. In response, conservation triage emerged as a metaphor for thinking about the optimization of resource allocation. Because triage operates primarily as a metaphor, not a means for allocating resources, its metaphorical implications are of particular importance. Of particular concern, the triage metaphor justifies abandoning some species while acquiescing to inadequate conservation funding. We argue conservation hospice provides an alternative medical metaphor for thinking about the extinction crisis. Hospice is based on the underlying principle of caring for all (species) and places particular emphasis on expected survival time, symptom burden and relief, treatments, ability to "stay at home" (i.e., in situ conservation), and maintaining support for related species and landscapes. Ultimately, application of hospice principles may be ethically obligated for a society that accepts the idea that at least some organisms are intrinsically valuable and may help place emphasis on resource allocation issues without providing implicit justification for abandoning species to extinction. INTRODUCTION The existence of an unprecedented human-caused extinction event (Barnosky et al., 2011;Ceballos et al., 2015) exacerbated in recent decades by climate change (Thomas et al., 2004) is well established. Considered in combination with insufficient resources (McCarthy et al., 2012), the rapidly accelerating extinctions highlight a need for hyper-efficient resource use. Unfortunately, conservation resources are often allocated through relatively inefficient processes (Ando, 1999) that can be driven by ideology (Wallace, 2003), values (Karns et al., 2018), and ultimately, voting. Some conservation experts see triage as a rational response to this context. Conservation triage borrows from triage in medicine to suggest that rapid calculations (e.g., optimization and utility functions) about the likelihood of extinction and, sometimes, the value of a species can guide resource allocation toward saving species in the most efficient manner possible (Bottrill et al., 2008). The medical metaphor, however, extends beyond the uncontested idea of efficiency to imply a need for abandoning expensive and potentially doomed species as a means to provide adequate resources to species with better prognoses and less expensive treatments (Jachowski and Kesler, 2009). Decisions on how triage should be implemented are high stakes, but Gerber's (2016) research suggests this is precisely why they are useful. For example, although endangered species conservation funding in the United States is positively related to success in recovering many species (Miller et al., 2002), some more futile efforts spend well in excess of recovery plan targets without curbing population declines. Gerber found that eliminating the budget surpluses (i.e., all spending in excess of recovery plan recommendations) for 50 such species would fully fund conservation for more than 180 other endangered species. Conservation triage, however, faces several criticisms from the conservation community.
First, conservation triage simply is not used to allocate most conservation resources, so its impacts stem less from improving efficiency than from changing how people think about conservation funding and dying species. Regarding the latter, conservation triage suggests resource allocation is so urgent that decisions must be made before changes to resource availability are effected, and this erroneously implies current socioeconomic contexts are static sideboards for resource use (Vucetich et al., 2017). With sufficient political will, conservation funding could change by orders of magnitude in a relatively short time, thus providing sufficient resources for protecting all biodiversity (Parr et al., 2009), particularly if support is shifted from other domains (e.g., military spending). Further, the traditional conservation triage metaphor is biased against species that occur at low densities and that inherently cost more to protect or recover than others (Noss, 1996), and efforts to protect those species may be precisely the ones that push innovation and public awareness forward in ways that promote additional resources for conservation in general (Pimm, 2000). The universality of equal and high value of human life undergirding the triage concept does not apply in wildlife conservation contexts, in which no universally accepted valuation of species exists (Vucetich et al., 2017). Finally, and somewhat ironically, the pragmatic appeal of efficient resource use falls short. Wilson and Law (2016) convincingly argue that public dialog and debate over how resources are used to save species is essential for triage to be used within a "wider system of care," and we suggest conservation hospice intuitively provides insights for such a system of care. Conservation hospice may provide a "third way" for thinking about the management of terminal species. The construct may also provide crucial insight into the large number of conservation-dependent species (Goble et al., 2012) that will go extinct rapidly without perpetual anthropogenic interventions. We intentionally use the "third way" label popularized by Bill Clinton and Tony Blair in reference to developing pragmatic solutions to left- and right-wing political gridlock because conservation hospice attempts to reconcile similarly divergent perspectives. Dame Cicely Saunders is credited with creating the modern concept of hospice in the 1950s (Clark, 1998). Insights from the movement Saunders started may help conservation biologists think more constructively about classifying and caring for dying species. Although wildlife conservation differs from emergency medicine in key ways (Vucetich et al., 2017), conservation has striking similarities with more traditional medical care, and in both cases, those with terminal prognoses often receive less attention than they should. In this essay, we suggest hospice care may offer some valuable insight for wildlife conservation during the ongoing anthropogenic mass extinction. We begin by outlining ways wildlife conservation might learn from hospice. Prior application of hospice constructs to the management of landscapes being lost to salinization provides a precedent for the extension from human medicine to care for nature (Hobbs et al., 2003). CONSERVATION HOSPICE Conservation hospice avoids tacit acceptance of resource constraints as justification for abandoning species to extinction.
Rather, the fundamental underlying principle of hospice care means even doomed species merit some level of care and associated resources. Those resources, however, would be allocated even in cases in which species extinction was acknowledged to be more likely than recovery. Although this model may seem radical, hospice is already applied to numerous species. Arguably, the list includes many species with high extinction risk and those defined as "conservation reliant," including cheetah (Acinonyx jubatus) (Ginsberg, 2017), rhinoceros (Rhinocerotidae) (Haas and Ferreira, 2015), polar bear (Ursus maritimus) (Hunter et al., 2010), and snow leopard (Panthera uncia) (Johansson et al., 2016). Acceptance of resource scarcity has little bearing on conservation hospice because caring for species that our collective actions have harmed is a socially just response to those harms, especially in cases in which they cannot be reversed. Since its inception in medicine, the field of hospice evolved to provide a host of principles for selecting candidates for care, identifying their needs, and meeting them. Given the disciplinary depth and clearly established three-stage process, it is unfortunate when hospice is erroneously equated to managing pain for a dying patient. As with the medical context of hospice, describing the population considered for care provides a first step for conservation of dying species. Common attributes used for patient selection by hospice experts include expected survival time, symptom burden, treatments, do-not-resuscitate status, quality of life, and wishes of patients (Kaasa and Loge, 2003;Gómez-Batiste et al., 2012). Although the latter two categories are difficult to apply in biodiversity conservation contexts, the previous four are directly relevant. Expected survival time certainly relates to wildlife conservation, although time scales in conservation are longer than for human hospice, which uses categories ranging from weeks to a year (Kaasa and Loge, 2003). In wildlife conservation, time scales of concern may range from 1 year to 50 or more (Brooks et al., 1999). The relative place on this temporal continuum may provide practitioners guidance for how conservation "treatments" should be applied. For example, species or populations with projected extirpation or extinction being evaluated on a decadal scale (vs. years) may warrant more expensive and labor-intensive treatments because longer persistence of the species may allow for political, economic, or technological innovations that could change a terminal prognosis. This dynamic may be evident in a comparison of the Northern white rhinoceros (Ceratotherium simum cottoni) and polar bear. The former was cared for with relatively small allocation of resources despite being functionally extinct; the remaining population consisted of two related females (Groves et al., 2010), whereas Derocher (2010) and others advocate large-scale resource use, policy change, and technological innovation to save the polar bear from longer term extinction threats posed by climate change. The concept of symptom burden also relates to wildlife conservation. The ultimate symptoms of concern would be small and declining populations, and these symptoms derive from many causes, some of which are clearly treatable (e.g., poaching), whereas others are less so (e.g., sea level rise).
Population levels relative to thresholds for genetic bottlenecks or long-term increases in extinction risk can help classify the symptom burden of a potentially dying species even if the thresholds are variable depending on attributes of populations, including how long-lived individuals are (Flather et al., 2011;Shoemaker et al., 2013). Species on low-lying islands carry among the highest symptom burdens because they fill niches that do not exist on adjacent continental areas (Raia and Meiri, 2006;Losos and Ricklefs, 2009) and face complete loss of habitat from sea level rise. Symptom burden for species, however, exists on a continuum. For example, the vaquita (Phocoena sinus) population decreased from approximately 150 in the early 2000s to <20 in a 15-year period (Jaramillo-Legorreta et al., 2007), and conservation solutions are relatively expensive because they require creating major changes in lucrative fisheries (Dunch, 2019). Conservation triage likely would not allocate resources to conservation in this context. A hospice model rooted in respect for the species' intrinsic value, however, would support the current model of allocating resources to managing the symptom burden, perhaps allowing solutions to the primary threat of bycatch in nets to emerge soon enough to save the species. Treatments also clearly relate to classifying and managing a dying species. Some treatments, such as prescribed fire, have well-defined impacts on endangered species persistence in a landscape and have clearly established costs (James et al., 1997). Other treatments, such as releasing sterile coyotes (Canis latrans) and red wolf-coyote hybrids as placeholders to buffer further red wolf (Canis rufus)-coyote hybridization (Gese and Terletzky, 2015), may be more experimental in nature (Bohling et al., 2016) and only considered when delaying extirpation of a population or if the species is critical ecologically, economically, or culturally. Perhaps surprisingly, the do-not-resuscitate status is emerging as relevant to thinking about hospice for wildlife species in part because advances in biotechnology have made de-extinction possible (Sherkow and Greely, 2013;Shapiro, 2015). Wildlife conservation introduces hospice issues that are different from human contexts. Foremost among these is the complex context for determining who decides whether a species merits hospice and what criteria are used in said decisions. Although society generally accepts the intrinsic worth of all humans, that judgment is less universal for other species (Bruskotter et al., 2019). If, however, we adopt the idea that unique species have intrinsic value, then it follows that those species have a right to be treated with respect for their welfare regardless of their future viability or the values they provide to ecosystems and people (Vucetich et al., 2015). Whether one chooses to intervene with hospice care depends upon the criteria one adopts for intervention, and these are likely to differ from the criteria used in human cases, but conservation hospice would highlight the need to publicly consider and debate the criteria rather than relegate their determination to modelers and the principle of efficiency (Wilson and Law, 2016).
Another unique attribute of hospice care decisions for wildlife conservation is that, unlike dying people, dying species can be preserved after they are extinct in the wild via captive breeding facilities, and genetic engineering seems likely to render de-extinction more pragmatic in the near future (Valdez et al., 2019). Gene banking might be seen as one form of conservation hospice, but likely not a pragmatic one because the practice may render losing the in situ conservation battle more psychologically acceptable (Valdez et al., 2019). In human hospice contexts, admission for hospice is followed by identification of outcomes of care. Such outcomes typically focus on quality of life and patient wishes, which include but are not limited to the ability to stay in the home, symptom relief, building and maintaining support systems for individuals and their families, respecting cultural context, and developing synergies with therapies designed to prolong life (Kaasa and Loge, 2003;Connor, 2008;WHO, n.d.). In the conservation context, this requires identifying outcomes other than perpetual persistence of the species for which to manage. For example, "promoting staying at home" reflects the priority given to in situ conservation or conserving species in natural habitats as long as possible (Primack, 2012) and suggests, among other things, that species maintained through assisted migration are not likely to be managed with hospice care. Building support systems for individuals and their families has equally intuitive applications to hospice care for dying species. Species are given hospice care because they have intrinsic value, but the impacts of their losses on ecological and social structures will affect the well-being of remaining species after the dying species becomes extinct. Working to maintain or restore the integrity of ecosystems upon which a dying species relies could both delay its extinction as well as benefit other intrinsically valuable species, including humans. Thus, protection of ecosystem functions needed to care for a dying species represents a therapy with synergies linked to persistence of other species relying on the same ecosystems. If the dying species provides important ecosystem functions, however, hospice care providers considering impacts on "family" must also consider replacement of populations (when possible) that fill the same ecological niche previously occupied by a dying species. In medical contexts, practitioners are guided by care for the patient, understanding conditions for withholding and withdrawing treatment, maintaining communication and trust, and understanding and respecting cultural contexts (Danis et al., 1999). In contexts of species management, understanding and respecting cultural contexts may be critical yet overlooked. Local people valuing species for historic, religious, or other cultural reasons may justify continuing hospice treatments longer than would be dictated by models rooted in economic efficiency. In addition to understanding human reliance on such species (Joint Secretariat, 2015), study of local traditional knowledge of tribal nations, for example, could provide a better understanding of the threats faced by these species and, thus, help delay extinctions. For example, First Nations in Canada act as stewards and advocates for conservation of species that have high cultural value, such as the eulachon (Thaleichthys pacificus; Moody, 2008;Eckert et al., 2018). 
When species are culturally significant, local populations might be motivated to act as caregivers for these populations, thus providing benefits to both the species facing low survival probability as well as the human populations with the strongest connections to the species in question. CONCLUSION Perhaps the most valuable attribute of a conservation hospice construct is providing a framework for constructive thinking about conservation of dying species. Triage advocates often claim a severe form of conservation pragmatism is both necessary and realistic, but we suggest assuming resources saved by abandoning doomed species will be allocated to the easiest-to-save species in an efficient manner is neither pragmatic nor realistic. People will demand resources to save tigers and polar bears until the last one disappears independent of any optimization function generated by scientists. Why squander that demand in the name of efficiently allocating declining resources? Caring for charismatic species, even when they appear doomed, may prove essential to turning the tide of declining conservation funding. Just as hospice care demonstrates reverence for life, caring for doomed species demonstrates respect for the intrinsic value of wildlife and reflects the importance of welfare considerations in conservation. Adopting a conservation hospice approach would require more practitioners interested and engaged in ethical, cultural, and social dynamics of conservation, just as hospice required new kinds of healthcare workers (Connor, 2008), but this need is well established within the conservation community already (Bennett et al., 2017). Hospice patients live longer than others with similar symptoms (Connor et al., 2007), so caring for doomed species might even allow them to persist until "miracle cures, " such as reasonable levels of conservation funding emerge. AUTHOR CONTRIBUTIONS All authors contributed to developing this perspective piece, and did so with contributions reflecting author order.
3,838.8
2020-06-17T00:00:00.000
[ "Philosophy", "Environmental Science" ]
Nuclear cross sections in $^{16}\text{O}$ for $\beta$ beam neutrinos at intermediate energies The nuclear cross sections for charged lepton production induced by $\beta$ beam neutrinos (anti-neutrinos) in $^{16}$O have been presented at intermediate energies corresponding to the Lorentz boost factor $\gamma<250 (150)$. The calculations for quasi-elastic lepton production includes the effect of Pauli blocking, Fermi motion and renormalization of weak transition strengths in the nuclear medium. The calculations for the inelastic lepton production is done in the $\Delta$ dominance model. The renormalization of $\Delta$ properties in a nuclear medium and final state interactions of pions with the final nucleus are taken into account. The results may be useful in performing feasibility studies for the future CERN-FREJUS base line neutrino oscillation experiments. Neutrino experiments done with atmospheric [1], accelerator [2], reactor [3] and solar [4] neutrinos provide evidence for neutrino oscillations. In a three flavor oscillation scenario for the Dirac neutrinos, the three neutrino masses (m i , i=1,2,3) and mixing angles θ ij (i =j=1,2,3) and a CP violating phase δ have to be determined. The present experiments provide limits on ∆m 2 12 , θ 12 , ∆m 2 23 and θ 23 , while the mixing angle θ 13 is poorly determined and the δ phase is still unknown. In addition, the hierarchal structure of ∆m 2 ij and the absolute scale of neutrino masses have also to be determined. The high precision neutrino experiments to be performed in the future are expected to improve the present limits on the various parameters of three flavor neutrino oscillation phenomenology. For the purpose of future long base line neutrino oscillation experiments, new sources of neutrino beams like neutrino factories [5], superbeams [6] and β-beams [7] have been proposed. One of these sources, the β-beams, provide a source of pure single flavor, well collimated and intense neutrino(antineutrino) beams with a well defined energy spectrum obtained from the β-decay of accelerated radioactive ions boosted by a suitable Lorentz factor γ. The radioactive ion and the Lorentz boost factor γ can be properly chosen to provide the low energy [8]- [13], intermediate and high energy [14]- [21] neutrino beams according to the needs of a planned experiment. In the feasibility study of β-beams, 6 He ions with a Q value of 3.5 MeV and 18 Ne ions with a Q value of 3.3 MeV are considered to be the most suitable candidates to produce antineutrino and neutrino beams [22]. The possibility of accelerating these ions using the existing CERN-SPS, upto its maximum power enabling it to produce beta beams with γ= 150 (250) for 6 He( 18 Ne) ions has been discussed in the literature [15], [18] which may be used to plan a base line neutrino experiment at L=130 km to the underground Frejus laboratory with the 440 kT water Cerenkov detector [14]- [16]. The feasibility of such an experimental setup and its response to β-beam neutrinos corresponding to various values of the Lorentz boost factor γ has been studied by Autin et al. [22]. In the range of high γ, this provides greater sensitivity to the determination of the mixing angle θ 13 and the CP violating phase angle δ [17]. In addition, such a facility is also expected to provide the low energy neutrino nuclear cross sections corresponding to very low γ, which may be useful in calibrating various detectors planned for the observation of supernova neutrinos [8], [12] and neutrinoless double β-decay [13]. 
In this letter, we discuss the nuclear response for the β-beam neutrinos (antineutrinos) of intermediate energy corresponding to the various values of γ discussed in the literature. In particular, we study the neutrino nucleus interaction cross sections in 16 O for β-beam neutrino(antineutrino) energies corresponding to the Lorentz boost factor γ in the range of 60< γ <250 (150). The energy spectrum of β-beam neutrinos(antineutrinos) from 18 Ne( 6 He) ion source in the forward angle(θ = 0 o ) geometry, corresponding to the Lorentz boost factor γ is given by [10]: In the above equation b = ln2/m 5 e f t 1/2 and E e (= Q − E ν ), p e are the energy and momentum of the outgoing electron, Q is the Q value of the beta decay of the radioactive ion A(Z, N ) → A(Z ′ , N ′ ) + e − (e + ) +ν e (ν e ) and F (Z ′ , E e ) is the Fermi function. In Fig. 1, we show the representative spectra for neutrinos(antineutrinos) corresponding to the Lorentz boost factor γ =250 (150). In this energy region the dominant contribution to the charged lepton production cross section comes from the quasielastic reactions. However, the high energy neutrinos corresponding to the tail of an energy spectrum, specially for higher γ (see Fig.1), can contribute to the inelastic production of charged leptons through the excitation of the ∆-resonance. In addition to the genuine inelastic production of the charged leptons which will be accompanied by the pions, the neutral current induced inelastic production of π 0 without any charged lepton in the final state can mimic the quasielastic production of charged leptons in which one of the photons from the π 0 decays is misidentified as a signature of the quasielastic electron production. We, therefore, study the quasielastic and the inelastic production of charged leptons induced by the charge current. We also study the neutral current induced production of π 0 which gives major contribution to the background of the electron production in the quasielastic reactions induced by neutrinos and antineutrinos. The cross section for quasielastic charged lepton production for the process ν e + 16 O → e − + 16 F ⋆ is calculated in a local density approximation using the standard model Lagrangian for the weak interaction using Budd, Bodek and Arrington (BBA03) [23] weak nucleon axial vector and vector form factors with M A =1.05 GeV and M V =0.84 GeV. The Fermi motion and Pauli blocking effects in nuclei are included through the imaginary part of the Lindhard function for particle hole excitations in the nuclear medium. The renormalization of weak transition strengths, which are quite substantial in the spin-isospin channel, are calculated in the random phase approximation(RPA) through the interaction of p-h excitations as they propagate in the nuclear medium using a nucleon-nucleon potential described by pion and rho exchanges. The effect of Coulomb distortion of the electron in the field of the final nucleus is also taken into account by using a local version of the modified effective momentum approximation [24]- [25]. The details of the formalism and the relevant expressions for the cross section are given in refs. [25]- [26]. The cross section for inelastic charged lepton production for the process ν e (ν e ) + 16 O → e − (e + ) + π α + X, where α is the charge state of the pion, is calculated in the ∆ dominance model using a local density approximation. 
The sequential production of pions through the excitation of the ∆ resonance and its subsequent decay into pions through the ∆ → Nπ process is considered. The ∆ resonance is described by a Rarita-Schwinger field, and the matrix element for the ∆ excitation is written using the weak N∆ transition form factors, which are determined from the analysis of the data available on the photo-, electro- and neutrino-excitation of the ∆ resonance. CVC, along with the experimental data on the electromagnetic excitation of the ∆, is used to determine the vector form factors, while the hypothesis of PCAC, along with the experimental data on the neutrino excitation of the ∆ from ν_µ-d reactions, has been used to determine the axial-vector form factors. The matrix elements and the form factors have been discussed in refs. [27]-[28]. The effect of the nuclear medium on the width and mass of the ∆ is included in a model where the self-energy of the ∆ in the nuclear medium is calculated in a local density approximation [27], [29]. The final state interaction of pions with the final nucleus is described by using the energy-dependent pion absorption probabilities provided by Vicente Vacas [30]-[31]. This formalism is also applied to calculate the neutral current induced π^0 production process, i.e. ν_e + 16O → ν_e + π^0 + X, in order to study the major source of background to the quasielastic charged lepton events [16]. The numerical calculations of the total cross section σ(E_ν) in 16O have been made using the 3-parameter Fermi density ρ(r) of ref. [32], with c = 2.608 fm, z = 0.513 fm and w = −0.051, and the results have been presented in Fig. 2 and Fig. 3 for the neutrino and the antineutrino reactions. We see from Fig. 2 that for the neutrino reactions the charged lepton production is dominated by quasielastic production, and the inelastic charged lepton production becomes comparable only around E_ν ∼ 1.5 GeV. The neutral current inelastic production of π^0 is small, about 12-15% of the quasielastic charged lepton production in the energy range 0.8 GeV < E_ν < 1.0 GeV. Therefore, the background to the quasielastic lepton events due to the neutral current π^0 production is expected to be important only at high γ (for example γ = 250), where it could be around 15% for average neutrino energies E_ν ∼ 1.0 GeV. Qualitatively similar results are obtained for the antineutrino reactions and are shown in Fig. 3. We would like to emphasize that nuclear medium effects play an important role in reducing the cross sections, especially for quasielastic charged lepton production in the low energy region. For example, we find that with the Pauli blocking and the Fermi motion of the nucleons in the nucleus the cross sections are reduced from the free case by 30% at E_νe = 200 MeV, 15% at E_νe = 400 MeV, 10% at E_νe = 750 MeV and around 9% at E_νe = 1.0 GeV. When the RPA correlations in the nuclear medium are also taken into account there is a total reduction of 60% at E_νe = 200 MeV, 40% at E_νe = 400 MeV, 26% at E_νe = 750 MeV and around 23% at E_νe = 1.0 GeV. In the case of inelastic charged lepton production, we find that when the nuclear medium modification effects on the ∆ properties are taken into account, the cross section is reduced by around 15% for energies E_νe = 0.5-1.0 GeV as compared to the cross sections calculated without the medium modification effects.
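Both the quasielastic and inelastic calculations above use the 16O density with c = 2.608 fm, z = 0.513 fm and w = −0.051. Since the explicit expression is not reproduced in this extracted text, the sketch below assumes the standard 3-parameter Fermi form and normalizes it to A = 16.

```python
import numpy as np
from scipy.integrate import quad

# Minimal sketch: a 3-parameter Fermi (3pF) density for 16O with the quoted
# parameters, normalized to A = 16 nucleons.  Assuming the standard 3pF form
# rho(r) = rho0 * (1 + w*r^2/c^2) / (1 + exp((r - c)/z)) is our reading, since
# the equation itself is missing from the extracted text.

C_FM, Z_FM, W = 2.608, 0.513, -0.051   # fm, fm, dimensionless (from the text)
A = 16

def profile(r):
    return (1.0 + W * (r / C_FM) ** 2) / (1.0 + np.exp((r - C_FM) / Z_FM))

# Normalize so that the volume integral of rho equals A.
norm, _ = quad(lambda r: profile(r) * 4.0 * np.pi * r**2, 0.0, 20.0)
rho0 = A / norm

print(f"rho0 = {rho0:.4f} nucleons/fm^3, rho(0) = {rho0 * profile(0.0):.4f}")
```

With these parameters the central density comes out near the nuclear saturation value, a quick consistency check on the quoted numbers.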
When the final state interaction of pions with the residual nucleus is taken into account there is a further reduction in the cross section, which leads to a total reduction of around 40% for neutrino energies E_νe = 0.5-1.0 GeV. Similar results are also obtained for the neutral current π^0 production. The effect of the nuclear medium on the neutrino-induced quasielastic production of leptons from 16O has also been studied by other authors; the present results are consistent with those of refs. [34] and Marteau [35], but are smaller than the results of Maieron et al. [37]. In the case of inelastic lepton production induced by charged currents in this energy region, the effect of the nuclear medium and final state interactions has been studied earlier by some authors [27], [35], [38], and [39]. The results presented here for nuclear medium effects in the total cross sections corresponding to the charged current inelastic lepton production and the neutral current pion production are consistent with our earlier results [27] and with those of Marteau [35] and Kim, Schramm and Horowitz [38]. We cannot compare our results to those of Paschos and collaborators [39], because they present results for the momentum and angular distributions and do not report results on total cross sections. We have also calculated the total cross sections for the coherent production of charged and neutral current pions in this energy region and find that they are quite small compared to the incoherent production cross sections presented here. This has been discussed in refs. [40]-[41], where the theoretical results for the coherent production of pions have been found to be in reasonable agreement with the preliminary experimental results reported by the K2K [42] and MiniBooNE [43] collaborations. Therefore, coherent pion production is not expected to give any significant contribution to the number of charged lepton events in this energy region. In order to estimate the relative contributions of the quasielastic and the inelastic production of charged leptons, and also the background to the quasielastic events due to the neutral current induced neutral pion production at a far detector in a baseline experiment, we have calculated the flux-averaged cross section σ, defined as the cross section weighted by the β-beam spectrum, for the neutrino and the antineutrino energies. This is relevant for the future CERN-Frejus baseline experiments which can be done with the present CERN SPS and which have been discussed in the literature [15], [17]. The forward-angle approximation for the neutrino flux used in equation (2) to calculate the total cross section is quite good for a far detector, especially for higher values of the Lorentz factor γ. Quantitatively, we find that the contribution to the total cross section from the non-zero-θ flux, i.e. Φ_lab(E_ν, θ ≠ 0), is about 5% for γ = 60 and reduces to less than 1% for γ = 250. In Tables I and II, we show the results of the flux-averaged cross section σ for the neutrino and antineutrino reactions for various values of the Lorentz boost factor γ, where we can see the relative contributions of the cross sections for quasielastic and inelastic production of leptons, along with the cross sections for the neutral current induced production of neutral pions, which is the major source of background to the quasielastic events at intermediate energies. To summarize, we have presented in this letter the numerical results for the charged current lepton production induced by β-beam neutrinos (antineutrinos) in 16O, calculated in a local density approximation which takes into account the nuclear medium effects.
The calculations have been done for the quasielastic production of charged leptons using the RPA and for the inelastic production of charged leptons using the ∆ dominance model. The renormalization of the ∆ properties in the nuclear medium is included through the self-energy of the ∆ in the nuclear medium, calculated in a local density approximation. The neutral current induced neutral pion production, which constitutes the major background to the quasielastic charged lepton events, is also calculated in this model. We would like to thank M. J. Vicente Vacas for providing the pion absorption probabilities and H. Arenhoevel for reading the manuscript. The work is supported by the Department of Science and Technology, Government of India under the grant DST Project No. SP/S2K-07/2000. One of the authors (S. Ahmad) would like to thank CSIR for financial support. Table I: Cross sections σ_νe averaged over the β-beam neutrino spectrum for various Lorentz boost factors γ (column I) and the corresponding average neutrino energies (column II). Columns III and IV give the total cross sections for the quasielastic and the inelastic charged lepton production processes, and column V gives the total cross section for the inelastic neutral current production of π^0 (σ_νe in 10^−). Table II: Cross sections σ_ν̄e averaged over the β-beam antineutrino spectrum for various Lorentz boost factors γ (column I) and the corresponding average antineutrino energies (column II). Columns III and IV give the total cross sections for the quasielastic and the inelastic charged lepton production processes, while column V gives the total cross section for the inelastic neutral current production of π^0.
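For reference, the flux-averaged cross sections of Tables I and II are spectrum-weighted averages; a minimal sketch of that weighting, with toy placeholder functions rather than the paper's spectra and computed cross sections, is:

```python
import numpy as np

# Minimal sketch of the flux averaging behind Tables I and II:
# sigma_bar = sum(Phi(E) * sigma(E)) / sum(Phi(E)) on a uniform energy grid.
# phi and sigma below are toy placeholders, not the paper's beta-beam spectrum
# or computed cross sections.

def flux_averaged_xsec(phi, sigma, e_min, e_max, n=4000):
    e = np.linspace(e_min, e_max, n)
    w = phi(e)
    return np.sum(w * sigma(e)) / np.sum(w)

toy_phi = lambda e: e**2 * np.exp(-e / 0.3)           # peaked toy flux, E in GeV
toy_sigma = lambda e: 1.0e-38 * np.minimum(e, 1.0)    # toy cross section, cm^2
print(flux_averaged_xsec(toy_phi, toy_sigma, 0.0, 3.0))
```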
3,687.6
2006-03-01T00:00:00.000
[ "Physics" ]
Research on the eLoran Differential Timing Method An enhanced long-range navigation (eLoran) system was selected as the backup of Global Navigation Satellite Systems (GNSS), and experts and scholars are committed to improving the accuracy of the eLoran system so that it approaches that of GNSS. A differential method, eLoran differential timing technology, is applied to the eLoran system; it has previously been used in maritime applications of eLoran. In this study, an application of eLoran differential timing technology in a terrestrial medium is carried out. Based on the eLoran timing service error, the correlation of the timing service error is analyzed quantitatively in theory to obtain the action range of a difference station on the ground. The results show that, to satisfy a timing accuracy of 100 ns, the action range of an eLoran difference station on land needs to be less than 55 km. Therefore, the eLoran differential method is proposed: at the difference station, the theoretical calculation is combined with the measurement of the signal delay to obtain the difference information, which is sent to the users to adjust the predicted delay and improve the eLoran timing precision. The experiment was carried out in the Guanzhong Plain, and the timing error of the user decreased from 394.7287 ns (pre-difference) to 19.5890 ns (post-difference). The proposed method is found to effectively enhance the timing precision of the eLoran system within its scope of action. Introduction After delaying the installation of enhanced long-range navigation (eLoran) stations for a certain period of time, the United States announced, on 4 December 2018, that eLoran stations would be used as a backup for Global Navigation Satellite Systems (GNSS). At the same time, a high-precision terrestrial time service system is being developed in China. The effective radiated power of eLoran is in the range of 100 to 1000 kW, and it is nearly impossible to jam over large areas with such high power levels. This is the reason why eLoran is used as an alternative to the GNSS system. Unfortunately, eLoran is less accurate than GNSS. Several studies have been conducted to improve the precision of the eLoran system to match that of the GNSS system. Many experts and scholars focus on the prediction of signal propagation delay, especially the additional secondary factor (ASF) under special topographic conditions [1][2][3][4]. By using an ASF grid, the accuracy of the eLoran system can be improved [5][6][7][8]. To develop the ASF grid, surveys have been conducted to obtain the ASF values. However, investigating and measuring the ASF over an entire area is expensive. Moreover, the ASF depends on the ground conductivity along the propagation path of the signal, and any change in conductivity due to weather and season will affect the ASF. To supply the real-time variation of the ASF, the eLoran differential timing method has been proposed, especially for maritime applications, in which numerous experiments and data analyses have been performed [5,6]. In this study, the eLoran differential timing service is analyzed theoretically, and the method is applied mainly in a terrestrial medium. The correlation of the eLoran timing service error is analyzed quantitatively in theory, and the action range of the difference station on the ground is calculated for an accuracy of 100 ns. Based on the correlation of the errors, the eLoran differential timing service is proposed.
Then an experiment is carried out to test and verify the eLoran differential timing method by conducting a survey in the Guanzhong Plain of Shaanxi Province. eLoran Signal eLoran is a low frequency (100 kHz) terrestrial navigation and timing system. The transmitters are organized into chains, with one master station and several secondary stations in each chain. The master station transmits nine pulses and the secondary stations transmit eight pulses at a time. Figure 1 illustrates the shape of the signal; the red dot represents the standard zero-crossing (SZC), which is the point at which the signal is tracked by an eLoran receiver and which is used to calculate the time-of-arrival. Figure 1. The enhanced long-range navigation (eLoran) pulse shape. The red dot represents the standard zero-crossing (SZC), which is the point at which the signal is tracked by an eLoran receiver and which is used to calculate the time-of-arrival. eLoran Propagation Delay The eLoran signal is transmitted radially from the transmitter to the receiver and travels parallel to the surface of the earth. During this propagation, the signal does not travel at the speed of light but is slowed by the atmosphere and the surface of the earth. The time taken for the eLoran signal to reach the receiver is called the propagation time (T_p), which is shown in Equation (1). If the signal propagated in an infinite air medium, the time taken to reach the receiving antenna from the transmitting antenna would be the primary factor (PF). Owing to the dielectric properties of the earth surface, the signal travels relatively slowly compared to that in the atmosphere; this delay is termed the additional secondary factor (ASF). The refractive index of the atmosphere n_s implies that the speed of the signal is a fraction slower than the speed of light in vacuum. The PF is related to the distance S, as expressed in Equation (2) [9,10], where C is the speed of light in a vacuum, equal to 0.299792458 km/µs, S is the distance of signal propagation, and n_s is the atmospheric refraction index near the ground, which varies with temperature, humidity and air pressure. The earth surface media (seawater and land) delay the signal transmission, and the time taken by the eLoran signal in this process is the ASF. With a decrease in the electrical conductivity of the surface, a higher proportion of the signal penetrates the ground and the wave-front propagates more slowly. The calculation formula of the ASF is shown in Equation (3) [9,10],
where ω is the angular velocity in rad/s, W is the signal attenuation function, which is related to the signal frequency f, the distance of signal propagation d, and the electrical conductivity σ and dielectric constant ε of the signal propagation path, and arg W is the phase of W in rad. Principle of eLoran Timing Service The transmitter broadcasts the timing signal, which propagates in the channel and arrives at the receiver. In the receiver, the signal is processed and demodulated to obtain the one-pulse-per-second (1PPS) signal, and then the receiver's local 1PPS signal is synchronized with the received 1PPS. Specifically, the group time pulse (GTP) signal received by the eLoran receiver is compared with the local 1PPS of the receiver, and then the local 1PPS signal is adjusted according to the offset ∆τ [11][12][13]. This principle is illustrated in Figure 2. After adjusting the local 1PPS pulse of the receiver to be ahead of 1PPS (UTC), the local 1PPS of the receiver is used as the opening signal of the counter, and the GTP signal demodulated by the receiver is used as the closing signal. Subsequently, the phase difference N is measured between these two pulses as shown in Equation (4),
where t_0 is the transmission delay of the system obtained from the demodulated message, t_p is the propagation delay calculated from the positions of the transmitter and receiver, and t_r is the receiver delay, which is calibrated in advance. Then, the offset of the local 1PPS signal ∆τ is obtained using Equation (5). The errors of N, t_0, t_p and t_r are transferred to ∆τ. The time difference N is measured by the internal counter of the receiver, and an error is introduced in this procedure. t_0 can be calibrated accurately, and the error after calibration is approximately 30 ns. t_p is calculated using Equations (1)-(3), and the deviation from the true value is approximately 500 ns. t_r is measured with an error of less than 20 ns. The majority of the timing error is due to the errors in t_0, t_p and t_r, which will be addressed in a future study. Error Analysis of eLoran Timing Service From the source to the destination, the signal passes through three systems: the transmitter, the propagation channel, and the receiver. During this process, the entire propagation delay is composed of the transmission delay, the signal propagation delay and the signal receiving delay, as shown in Figure 3. The signal transmission delay error, the signal propagation delay error, and the signal receiving delay error make up the eLoran timing error. eLoran Transmission Delay Error The eLoran signal transmission error includes the synchronization error between the local time in the broadcast station and UTC (NTSC), the transmitter channel delay error and the antenna phase center error. (1) Synchronization error between the local time in the station and UTC (NTSC): The broadcasting system of eLoran is equipped with an independent atomic clock, which is used as the time reference and is kept synchronous with UTC (NTSC) through a communication link. The synchronization error is generally less than 5 ns. (2) Transmitter channel delay error: The transmitter channel delay error comprises the signal coding delay error and the delay error in the signal modulation, which can be controlled to within 30 ns.
(3) Error of antenna phase center: In theory, the phase center of the antenna should be consistent with its geometric center, but a change in the signal carrier frequency can offset the phase center, which leads to uncertainty in the antenna phase center. In the 100 kHz carrier frequency band, the phase center fluctuation of the antenna is less than 5 ns. The estimated eLoran transmission delay error is calculated from these three contributions. eLoran Propagation Delay Error The true value of the signal propagation delay is shown in Equation (6). However, it is difficult to obtain the true values of n_s, σ and ε. In the eLoran timing service, calculation is generally used to obtain the signal propagation delay. In the calculation, the value of n_s is taken as 1.000315 for the international standard atmosphere, and the quantization values n_s', σ' and ε' replace the true n_s, σ and ε. This is the main reason for the signal propagation delay error. For Equation (7), the deviation from Equation (6) is the propagation delay error shown in Equation (8), where ∆PF represents the error of the PF and ∆ASF is the error of the ASF. At present, the error of the eLoran signal propagation delay is better than 500 ns. eLoran Receiving Delay Error The receiving delay is the time from the moment when the signal reaches the receiving antenna to the moment when the receiver outputs 1PPS [14][15][16]. The receiver delay error includes the thermal noise of the receiver and the measurement error introduced during the receiver delay measurement [17,18]. After calibrating the receiver's delay, the receiving delay error can be controlled to within 20 ns. All the sources of error in eLoran timing are listed in Table 1. Correlation Analysis of eLoran Timing Error The differential method used in the eLoran timing service is similar to GPS local difference technology. As in GPS differential technology, the error correlations between the difference station and the user in an area lay the foundation of the difference technology. The timing error correlation between the difference station and the user in an area is analyzed in turn below. In Figure 4, A is the difference station and B represents the users. On the two paths OA and OB, the weather, geology and geography are similar, which is the precondition of the difference method.
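Before turning to the error correlations, a minimal sketch of the timing chain described above may help. The explicit forms of Equations (4), (5) and (7) are not reproduced in this extracted text, so the relations used below, a predicted delay t_p = n_s*S/C + ASF and an offset ∆τ = N − (t_0 + t_p + t_r), are assumptions consistent with the surrounding description, and all numbers are illustrative.

```python
# Minimal sketch (not the paper's code) of the eLoran timing chain.
# Assumed relations: predicted delay t_p = n_s*S/C + ASF (our reading of
# Equations (2) and (7)) and delta_tau = N - (t0 + tp + tr) (our reading of
# Equations (4) and (5)).

C_KM_PER_US = 0.299792458      # speed of light in vacuum, km/us (from the text)
N_S = 1.000315                 # standard-atmosphere refractive index (from the text)

def predicted_propagation_delay_us(distance_km: float, asf_us: float) -> float:
    """Predicted delay: primary factor n_s*S/C plus a modelled ASF, in us."""
    return N_S * distance_km / C_KM_PER_US + asf_us

def local_1pps_offset_us(n_us: float, t0_us: float, tp_us: float, tr_us: float) -> float:
    """Offset of the receiver's local 1PPS: delta_tau = N - (t0 + tp + tr)."""
    return n_us - (t0_us + tp_us + tr_us)

# Illustrative numbers (assumptions, not measured values):
tp = predicted_propagation_delay_us(distance_km=300.0, asf_us=4.4)
dt = local_1pps_offset_us(n_us=1040.0, t0_us=30.0, tp_us=tp, tr_us=2.0)
print(f"t_p ~ {tp:.3f} us, delta_tau ~ {dt:.3f} us")
```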
Correlation of eLoran Transmitting Delay Error Transmission delay error is one of the key errors that need to be corrected. The difference station and nearby users receive signals from the same transmitting station. Regardless of the locations of the difference station and the user, the influence of the transmission error on the difference station and on the user is the same: if the error is 10 ns, it induces a 10 ns timing error and a 3 m pseudo-range error for both the users and the difference station [19]. Correlation of eLoran Propagation Delay Error The propagation delay of the eLoran signal comprises the PF and the ASF. Although these two factors are indistinguishable in a propagation delay measurement, their physical properties vary considerably. Therefore, the error correlation of the PF and the error correlation of the ASF are discussed separately. Error Correlation of PF e_M = PF_M − PF'_M is the PF error of the difference station and e_U = PF_U − PF'_U is the PF error of the user, and the difference between the two is analyzed using Equation (9). First, assuming that the difference between the distances of the two signal propagation paths is ∆d, and that the amount of ∆n_s is not more than the change in n_s within one year in the area, such that ∆n_s is not greater than 0.000060 [20,21], the difference between the primary delay errors caused by n_s is then expressed as follows. Error Correlation of ASF e_M = ASF_M − ASF'_M is the ASF error of the difference station and e_U = ASF_U − ASF'_U is the ASF error of the user, and the difference between the two is analyzed using Equation (10). ASF_M is the ASF with the true σ and ε for the difference station, and ASF'_M is the ASF with the quantization values σ' and ε' for the difference station.
ASF_U is the ASF with the true σ and ε, and ASF'_U is the ASF with the quantization values σ' and ε' for the user. According to radio wave propagation theory, the calculation of the ASF is quite complex, as shown in Equation (3). However, if Equation (3) is used, it is very inconvenient to study the error correlation of the ASF in Equation (10). The relationship between the ASF and the distance d can be simplified by using another algorithm for convenient engineering calculation, but the residual error must meet the conditions of the military standard for long-wave ground wave propagation [5]. When the signal propagation distance is greater than 100 km, the ASF and the distance d are almost linearly related, and the relationship between the ASF and the distance is fitted with a linear polynomial ASF = a × d + b [22]. First, using Equation (3), the ASF is calculated for 0 < d ≤ 2000 km and the relation between the ASF and d is obtained on a typical propagation path. Second, the relationship between the ASF and the distance d is fitted by the least squares algorithm using the calculated data. Finally, if the fitting residual error meets the conditions of the military standard, the fitting is considered reasonable. The fitting results are listed in Table 2, and the limit error set by the military standard is presented in the last column. The fittings on five types of paths satisfy the condition, but the fitting residual errors of the seawater and average land paths exceed the military standard (Table 2). In Figure 5, the fitting curve of the sea surface is shown at the top and that of the average land at the bottom. The red line represents the result of the calculation and the blue line represents the fitting result. The maximum residual error appears near 100 km. With increasing distance, the fitting results improve. The distance between the difference station and the eLoran broadcast station is generally greater than 100 km. Therefore, for the sea surface and average land paths, it is also reasonable to fit the relationship between the ASF and d with the polynomial ASF = a × d + b when the distance is more than 100 km. For a typical signal propagation path, the value of the electrical conductivity has a certain range. For example, in the third column of Table 3, it can be seen that the electrical conductivity of seawater is between 3 and 7, which are the minimum and maximum values, respectively. When the maximum value of σ is considered, the corresponding fitting parameters are a and b, and when the minimum value of σ is considered, the corresponding fitting parameters are a' and b'. The variation in the ASF due to the uncertainty in σ arising from seasonal and weather effects is smaller than the variation in the ASF brought about by going from the maximum to the minimum of σ.
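A minimal sketch of this linearization step, fitting ASF = a × d + b by least squares, is given below. The ASF samples are synthetic placeholders, not values computed from Equation (3) on a real conductivity path.

```python
import numpy as np

# Minimal sketch of the linear ASF fit discussed in the text: fit placeholder
# ASF(d) samples for d > 100 km with a first-order polynomial by least squares
# and inspect the residual against a tolerance.

d_km = np.linspace(100.0, 2000.0, 200)
asf_samples = 2.5 + 0.0145 * d_km + 0.05 * np.sin(d_km / 300.0)   # toy data (assumption)

a, b = np.polyfit(d_km, asf_samples, 1)       # least-squares linear fit
residual = asf_samples - (a * d_km + b)

print(f"a = {a:.5f} us/km, b = {b:.3f} us, max |residual| = {np.abs(residual).max():.3f} us")
```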
Based on the simplified relationship between the ASF and the distance d, the ASF error comparison between the user and the difference station is calculated using Equation (11). Including both the PF and the ASF, the entire propagation delay error of the user and the difference station is compared using Equation (12). Correlation of Receiving Delay Error The delay error of the receiver is primarily due to the thermal noise of the receiver equipment, which has no correlation [19,23], but we can reduce the error by measuring the relative delay of the receivers [24]. The residual error is less than 5 ns after relative delay measurement and calibration. Correlation of the Entire Timing Error After the correction by the difference station, the residual error for the user is shown in Table 4. The question is how far the user can be from the difference station while still meeting the timing accuracy of 100 ns. Using Equation (13), this distance is calculated and listed in Table 5. On a dry ground path, the effective range is approximately 55 km, which is the minimum range. To achieve 100 ns precision, the effective range of the difference station on land should be less than 55 km, under the condition that the distance from the difference station to the broadcasting station is greater than 100 km.
Table 4. Residual error after correction.
  Error item                 Residual error post-difference (µs)
  Transmitting delay error   0
  Propagation delay error    D × ∆n_s/C + (a − a') × D
  Receiving delay error      0.005
  Sum                        D × ∆n_s/C + (a − a') × D + 0.005
The eLoran Difference Timing Method Within 55 km of the difference station, the timing errors are strongly correlated, and the offset between the timing errors of the difference station and of the user is not more than 100 ns. Therefore, we can package the timing error measured at the difference station into a differential correction and send it to the surrounding users to correct their propagation delays and improve the timing accuracy. At the difference station, the differential message is calculated by combining the measurement and the calculation of the signal propagation delay, and the differential correction model is generated and sent to the user. The users apply the differential correction to modify the predicted value of the propagation delay and improve the accuracy of the time service. The calculation and prediction of the signal propagation delay follow Equation (7). In a region, a site is selected as the difference station for the surrounding users. At the difference station, it is important to measure the signal propagation delay in order to calculate the difference message. Measurement devices such as an eLoran monitoring receiver, a time interval counter, data acquisition equipment and a UTC (NTSC) reference are used. Without UTC (NTSC), a local timescale synchronized to UTC (NTSC) is also sufficient. At the difference station, first, the position coordinate of the monitoring receiver antenna is calibrated accurately in order to calculate the distance from the transmitter to the receiver. Second, the signal propagation delay T_p is calculated using Equation (7). Then, the monitoring receiver receives the eLoran signal and demodulates the 1PPS, which is compared with UTC (NTSC) through the time interval counter to obtain the time difference M. Finally, the time difference M is collected to calculate M − T_p, which forms the differential model and is transmitted to the surrounding users in real time.
For the user, with the precise position of the receiver antenna, the great-circle distance from the user to the broadcast station is calculated. The signal propagation delay T_p is calculated according to Equation (7), and then the correction is applied according to the received difference information to obtain a more accurate signal delay and synchronize the 1PPS to UTC (NTSC). The principle of the eLoran differential time service is shown in Figure 6. Verification of eLoran Differential Timing Method An experiment is carried out to verify the result of the difference technology. In the experiment, a survey is carried out at the difference station and at the user site, but with different purposes: the measurement at the difference station is used to calculate the differential correction model, and the test at the user site is used to verify the effect of the difference technology. The offset between the differential result and the measured value of the delay is used to evaluate the effect of the timing differential method. The schematic of the test equipment is shown in Figure 7.
The BD receiver is used as the standard time and frequency source, which has been synchronized to UTC (NTSC) in advance. The time interval counter is used to measure the signal delay, which is collected by the data acquisition system. The measurement is carried out in the Guanzhong Plain in Shaanxi Province. The area has wet ground, with a conductivity σ = 0.01 S/m and a dielectric constant ε = 30. The basic information on the test points is shown in Table 6. Taking Wu Gong (WG) as the difference station and Mei Xian (MX) as the user, the distance between WG and MX is 44.8930 km. The distribution of the test points is shown in Figure 8; Pu Cheng (PCH) is the eLoran broadcast station. The measurement data at WG and MX are shown in Figure 9. From the changing trend of the curves, the variation of the propagation delay at MX coincides with that at WG over time. At WG, the measurement data and the theoretical calculation results are combined to compute the difference information.
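A minimal sketch of how the difference information computed at WG could be applied at MX is given below; the function and variable names are ours, not the paper's, and the numbers are illustrative rather than measured values.

```python
# Minimal sketch (not the paper's code): the difference station broadcasts
# M - Tp (measured minus predicted delay); the user adds this correction to
# its own predicted delay before steering its 1PPS.

def corrected_user_delay_us(tp_user_predicted, m_station_measured, tp_station_predicted):
    """User propagation delay after applying the differential correction (us)."""
    correction = m_station_measured - tp_station_predicted
    return tp_user_predicted + correction

# Illustrative numbers (assumptions): a ~0.4 us common prediction error is
# removed, in line with the ~394.7 ns -> ~19.6 ns improvement reported below.
print(corrected_user_delay_us(tp_user_predicted=1425.300,
                              m_station_measured=1470.510,
                              tp_station_predicted=1470.115))
```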
In the calculation and forecasting of the difference model, the least squares principle is used for the difference station. The sliding window is 600 s and the forecast time is 300 s, with a fitting order of 1; in other words, the data from the previous 10 min are used to forecast the following 5 min. As shown in Figure 10, the standard error is reduced from 32.7356 ns pre-correction to 18.9630 ns post-correction. The average of all the errors is 394.7287 ns pre-difference, and the average error is 19.5890 ns after correction. The accuracy is improved by an order of magnitude. This proves that the timing service accuracy of eLoran is improved by the difference method within the action range of the difference station. Conclusions First, the eLoran signal and the propagation delay on the path were introduced, and then the eLoran timing error was analyzed. Second, the correlation of the eLoran timing error was analyzed quantitatively in theory, and it was found that the action range of the difference station on land should not be more than 55 km in order to achieve 100 ns accuracy. Finally, based on the correlation between the user and difference station timing errors, an eLoran differential timing technology was proposed. At the difference station, the theoretical calculation method is combined with the measurement of the signal delay to obtain the offset, which is the common error for the difference station and the users. The offset data are then processed to obtain the difference information, which is sent to the users so as to adjust the propagation delay and improve the timing precision. To validate the correctness and validity of the proposed method, experimental verification was carried out in the Guanzhong Plain. In the experiment, the measurements for the users and the difference station were carried out synchronously to verify the effect of the differential technology. The timing precision of the users is improved from 394.7287 to 19.5890 ns, which shows that this method can significantly improve the timing accuracy of the system.
Although the action range of the difference station on land is smaller than that in maritime applications, the method can effectively enhance the timing precision of the eLoran system within its scope of action.
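For reference, the 55 km action range quoted above follows from the residual-error budget of Table 4. The sketch below solves that budget for the station-user distance D; the spread of the fitted ASF slope (a − a') is an illustrative assumption, since the values of Table 5 are not reproduced in this extracted text.

```python
# Minimal sketch of the range estimate behind the 55 km figure: solve
# D*dn_s/C + (a - a')*D + 0.005 <= 0.100 us for D, using Table 4's residual
# model.  dn_s = 6e-5 comes from the text; DA_US_PER_KM is an illustrative
# assumption for the ASF-slope spread (a - a').

C_KM_PER_US = 0.299792458
DN_S = 6.0e-5                    # annual spread of the refractive index (from the text)
DA_US_PER_KM = 1.5e-3            # assumed spread of the ASF slope a - a' (illustrative)

budget_us = 0.100 - 0.005        # 100 ns target minus 5 ns receiving-delay residual
d_max_km = budget_us / (DN_S / C_KM_PER_US + DA_US_PER_KM)
print(f"maximum difference-station range ~ {d_max_km:.0f} km")
```

With the assumed slope spread this gives a range of roughly 55 km, matching the figure quoted in the paper; a different path type would change DA_US_PER_KM and hence the result.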
8,988
2020-11-01T00:00:00.000
[ "Physics" ]
Technical Note: Preliminary estimation of rockfall runout zones Rockfall propagation areas can be determined using a simple geometric rule known as the shadow angle or energy line method, based on a simple Coulomb frictional model implemented in the CONEFALL computer program. Runout zones are estimated from a digital terrain model (DTM) and a grid file containing the cells representing rockfall potential source areas. The cells of the DTM that are lower in altitude and located within a cone centered on a rockfall source cell belong to the potential propagation area associated with that grid cell. In addition, the CONEFALL method allows estimation of the mean and maximum velocities and energies of blocks in the rockfall propagation areas. Previous studies indicate that the slope angle of the cone ranges from 27° to 37°, depending on the assumptions made, i.e. slope morphology, probability of reaching a point, maximum run-out, and field observations. Different solutions based on previous work and an example of an actual rockfall event are presented here. Introduction Rockfall hazard is delicate to assess because it is very difficult to predict the exact trajectory of any block of rock. The propagation of uncertainties is comparable to that occurring in the trajectory prediction of a billiard ball after several collisions (Ruelle, 1987). Rockfall hazard mapping requires definition of the run-out distance and the area which can be reached by blocks, i.e. the propagation area. The CONEFALL method described in this paper is based on a simple frictional model assuming that the rockfall propagation areas can be modelled by analogy with a block sliding along a slope (Heim, 1932). Its aim is to obtain a fast estimation of the potential of rockfall prone areas at a regional scale based on the "shadow angle" approach or, in other words, the energy line angle method (Onofri and Candian, 1979; Toppe, 1987; Wieczorek et al., 1999; Lied, 1977; Evans and Hungr, 1993; Corominas, 1996; Jaboyedoff and Labiouse, 2003). CONEFALL has already been used by other authors to assess rockfall hazard (Aksoy and Ercanoglu, 2006; Ghazipour et al., 2008). In this paper, it is assumed that the source areas are known. They can be defined by different methods (Aksoy and Ercanoglu, 2006; Jaboyedoff and Labiouse, 2003; Loye et al., 2009). The software can be found as supplemental material on the NHESS website or on the web site http://www.quanterra.org/softs.htm; the code is available on request. Predicting the rockfall runout distance and propagation areas, i.e. the areas potentially under the threat of rockfall, is still a challenge. Various solutions exist, ranging from the observed location of existing fallen blocks to 3-D kinematics modelling (Descoeudres and Zimmermann, 1987; Spang, 1987; Stevens, 1996; Guzzetti et al., 2002; Lan et al., 2007). Run-out distance estimations need calibrations based on direct observations, for which the reliability depends on the quantity and frequency of rockfall. This is also true for source areas. As a consequence, the more transparently the observations can be made, the better the calibration is. Historically, simple models were first developed for very large rockfalls, i.e. rock avalanches.
Heim (1932) pointed out that for such deposits the angle (Fahrböschung, γ) between the line joining the top of the source cliff and the tip of the deposit follows a power law of the landslide volume (Scheidegger, 1973). Heim made the analogy with a mass moving along the topography, dissipating energy by friction. The friction can be linked to an apparent friction angle equivalent to γ. This principle was modified and applied to rockfalls without volume dependency, using a predefined angle of the line joining the source to the stopping point of blocks (φ_p). φ_p ranges from 22° to 37° depending on the assumptions made (Fig. 1a) and on field evidence (Wieczorek et al., 1999; Evans and Hungr, 1993; Toppe, 1987; Onofri and Candian, 1979). Such a model can be quickly applied to large areas, where a preliminary investigation of the potential rockfall propagation areas is needed starting from known source areas, or to very large rockfalls based on the Heim theory, which relies on the estimation of a friction angle depending on volume. CONEFALL permits application of this shadow angle principle with gridded DTMs that can be used in a geographical information system (GIS). This method can be applied to prioritize detailed investigations based on a global risk analysis by crossing objects at risk with potential rockfall areas. Previous work on rockfall trajectory modelling The principles of rockfall trajectory modelling were stated by Ritchie (1963), who completed experimental studies of rockfall trajectories and came up with a classification scheme for rockfall. Rockfall trajectories can be calculated using classical kinematics equations (Piteau and Clayton, 1978; Azimi et al., 1982; Descoeudres and Zimmermann, 1987; Guzzetti et al., 2002). The loss of energy at the impact points is commonly modelled using coefficients of restitution, which depend on a number of factors, including the mass, shape and velocity of the boulder. Coefficients of restitution are usually expressed as the ratio of the velocity (or ratio of energy) before and after the impact, possibly separately for the normal and tangential velocity components. Sliding and rolling can be added to the simulation. The blocks are simulated either as points (lumped mass) or as rigid bodies. This means that either the trajectories depend on the block shapes, or the mass is concentrated in a point and the influence of shape is represented by additional random parameters (Stevens, 1996). For most of the parameters used in the impact calculation, a stochastic component can be added to obtain more realistic results and to reflect the fact that parameters vary along a slope (Scioldo, 2001; Dudt and Heidenreich, 2001; Guzzetti et al., 2002; Crosta and Agliardi, 2004; Dorren et al., 2006). It must be noted that observations and modelling in three dimensions indicate that the trajectories can spread around the steepest slope direction in a range of ±20° (Agliardi and Crosta, 2003). Modelling has shown that the spreading increases with a smaller, i.e. more accurate, DTM grid size (Agliardi and Crosta, 2003). But this can also be achieved by increasing the variability of the DTM altitude on a coarser mesh, introducing a statistical distribution for the altitude variability (Crosta and Agliardi, 2003; Jaboyedoff et al., 2006). Using GIS, Van Dijke and Van Westen (1990), Heinimann et al. (1998) and Dorren and Seijmonsbergen (2003) simulated rockfall paths starting from source cells and moving to the next one by choosing the nearest neighbouring cell with the lowest elevation.
The maximal run-out distance and velocity are computed using an analogy to a sliding coefficient (Scheidegger, 1973). In that case, if a digitized map containing data on the superficial geology exists, the sliding coefficients can be changed depending on the geological type. Potential rockfall propagation areas are then simulated assuming that the falling rocks follow water flow paths, also limiting the runout distance by using the energy line. This leads to results similar to kinematics modelling (Utelli, 1999). The disadvantage of these methods is that small topographic or morphological irregularities may affect the rockfall trajectory substantially. Menéndez Duarte and Marquinez (2002) used GIS to determine the watersheds below rockfall sources, identified as propagation zones (rockfall basins). Using the energy line angle (φ_p), Onofri and Candian (1979) observed that 50% of blocks are stopped for φ_p > 33.5°, 72% for φ_p > 32°, and 100% for φ_p > 28.5°. Toppe (1987) developed a similar approach for the maximum runout distances in snow avalanche and rockfall hazard mapping, using a φ_p deduced from the parameters extracted from fitting the slope profile with a parabola. Toppe (1987) indicates that 50% of the rockfall boulders are stopped before 45° and 95% for φ_p > 32°. Gerber (1994) gave three substratum-dependent limits resulting in 100% of blocks being stopped: 33°, 35° and 37°. The energy line angle φ_p can also be calculated from the bottom of the cliff or the top of the talus slope, instead of from the rockfall source area (Lied, 1977; Hungr and Evans, 1988; Evans and Hungr, 1993) (Fig. 1a). In that case, it is called the shadow angle principle. This assumption can be used when the slope profile contains a slope break creating boulder rebounds where most of the kinetic energy is lost; this then represents a new start of the fall and bounce along the slope (Evans and Hungr, 1993). Lied (1977) found that all the blocks are stopped at an angle φ_p of 28°-30°. The term "shadow angle" is preferably used when the limit is set from the bottom of the cliff, even if the concept of the energy line can still be applied, because it is assumed that most of the kinetic energy is lost after the first rebound (Jaboyedoff and Labiouse, 2003). Evans and Hungr (1993), using numerous case studies, set a shadow angle (φ_p) of 27.5°. These limits are obviously "average" maximum runout points. Statistically, a small percentage of blocks can go beyond the φ_p limit, depending on the slope land cover type. Evans and Hungr (1993) indicate that φ_p can be as low as 24°, whereas Wieczorek et al. (1999) found a lower value of 22° for the Yosemite valley, in central California.
Theoretical background of CONEFALL CONEFALL basically uses the principle of Heim (1932), modified and applied to rockfall. Following Heim (1932), the run-out distance (L) of a rock avalanche can be estimated using the intersection with the topography of a line starting from the top of the rockfall scar with a slope such that tan γ = z/L, z being the difference in elevation between the top and the bottom. In the rockfall version the source cliff is replaced by the location of the rockfall source, and the angle γ is replaced by a fixed limiting value φ_p (Onofri and Candian, 1979). The energy loss along a complex rockfall trajectory depends on several different mechanisms, but the average rockfall energy dissipation can be modelled by friction instead of punctual losses of energy at impact points, sliding and rolling. This concept can be used because, statistically, the energy loss along a slope can be assumed linear on average, which leads in fact to a normal distribution of the block stopping distances around a mean value (Jaboyedoff and Pedrazzini, 2010). This justifies the concept of assuming threshold limits for propagation. The energy balance of a rockfall boulder starting from an elevation H is given by Equation (1) (Heim, 1932; Scheidegger, 1973; Evans and Hungr, 1993), where m is the mass of the block, g is the gravity acceleration, x the horizontal coordinate, h(x) the elevation of the topographic surface at point (x, h(x)), v(x) the velocity at point x, and µ the mean kinetic coefficient of friction (Fig. 1b). Rotational energy is not considered for the sake of simplicity. Rearranging terms to estimate the stopping-point horizontal distance x_stop by setting v(x) = 0, and using µ = tan φ_p, we obtain Equation (2). Hence, the boulder stops where the line from the rockfall source area with a slope equal to φ_p intersects the topographic surface. This line is the energy line. Equation (2) provides a physical meaning to φ_p as a mean kinetic coefficient of friction. From Eq. (1) we can estimate the boulder velocity for any x-position (Eq. (3)). From Eq. (3) it can be seen that if v(x) is constant, the topographic slope α is equal to φ_p. Hence, where α > φ_p the boulder accelerates, and where α < φ_p, the boulder decelerates. Assuming that h is the difference in altitude between the energy line and the topography, Eq. (3) can be rearranged to obtain Eq. (4). The cone method overestimates the lateral extension of the propagation zone because a cone possesses a wider aperture than the observed spreading. As indicated previously, in the case of a regular topography the trajectories are spread out in a range of around ±20° on both sides of the steepest slope direction. Depending on the local morphology, this aspect must be taken into consideration. As shown above, several authors (see Sect. 2) give limits for φ_p linked to the percentage of blocks stopped before this point (Fig. 2).
It is important to know whether an extreme limit exists. Theoretically the answer is no, because, assuming that on average a boulder behaves like a mass sliding along the topography, the distribution of the φp values of the block deposition or stopping points is a Gaussian function. If the path is divided into small segments, each of them has a random value of tan φp around a mean value, and by the central limit theorem (Jaboyedoff and Pedrazzini, 2010) the resulting sum is distributed according to a Gaussian distribution. In practice, φp values of 27° to 37° are usually used for rockfalls, but φp can be much lower (10-15°) in the case of rock avalanches. If all the cells of potential rockfall source areas are used, the angle φp must be set to a value close to the angle of repose of the talus slope: around 36-37° for De la Noe and De Margerie (1888), 32-38° for Evans and Hungr (1993), and ranging from 26° to 41° for Jomelli and Francou (2000) (Fig. 1a). Theoretically, 35° is the upper limit for a pile of spheres (Réka et al., 1997). This last value is also very close to one of the most common friction angles in rock mechanics. It can be assumed that highly inclined talus slopes are caused by boulders with special shapes (like bricks, which can form vertical walls). When moving down a talus with a slope angle of 35°, a boulder will keep a roughly constant velocity. Thus, at the bottom of the slope, the block will move beyond the limit defined by the 35° cone slope before losing its energy completely, and a slope angle lower than 35° is necessary to stop the block. Thus, 33-35° is a well-founded limiting range for φp in order to predict the most common distant trajectories of blocks. Moreover, the starting point of the boulder must have a slope steeper than 35°; otherwise the block will not start to move. For very large rockfall phenomena, the equation of Scheidegger (1973) may be used to estimate γ. Cone method implementation CONEFALL estimates the potential rockfall propagation area using a DTM and a grid file containing all the rockfall source areas as input data. It calculates the rockfall propagation area for each rockfall source cell. The routine detecting whether a DTM cell is located below the energy line, i.e. in the propagation area, is equivalent to checking whether the cell is located within a cone. This rather simple rule can be implemented by checking if h(x, y) ≤ H − √(x² + y²) tan φp, where x and y are the horizontal distances of the DTM point to the source cell (the apex of the cone), h(x, y) is the elevation of the DTM cell, H is the altitude of the rockfall source point, and (π/2 − φp) is the angle of aperture of the cone (Figs. 1b and 3). Note that some non-continuous areas can be obtained by this simple method. They can be avoided using the intersection of these results with a random walk algorithm (Gamma, 2000; Horton et al., 2008).
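A minimal grid version of this cone test is sketched below. The function only handles the single-source case and uses a synthetic DTM; the grid spacing, the source location and the φp value are placeholders for illustration.

```python
import numpy as np

def cone_propagation(dtm, cell_size, src_rc, phi_p_deg):
    """Mark DTM cells lying inside the cone of slope phi_p below a source cell.

    dtm       : 2-D array of elevations
    cell_size : grid spacing in metres
    src_rc    : (row, col) of the rockfall source cell
    phi_p_deg : energy line angle in degrees
    Returns a grid with 1 inside the propagation area and -1 outside.
    """
    tan_phi = np.tan(np.radians(phi_p_deg))
    rows, cols = np.indices(dtm.shape)
    r0, c0 = src_rc
    dist = cell_size * np.hypot(rows - r0, cols - c0)  # horizontal distance to the apex
    cone = dtm[r0, c0] - dist * tan_phi                # elevation of the cone surface
    inside = dtm <= cone                               # cell below the energy cone
    return np.where(inside, 1, -1)

# tiny synthetic hillside: a steep upper face above a gentler lower slope (illustrative)
ny, nx = 40, 60
col = np.arange(nx)
profile = np.where(col < 30, 800.0 - 8.0 * col, 560.0 - 1.0 * (col - 30))
dtm = np.tile(profile, (ny, 1))

out = cone_propagation(dtm, cell_size=10.0, src_rc=(20, 2), phi_p_deg=33.0)
print("cells in the propagation area:", int((out == 1).sum()))
```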
Various options for (a) the source and (b) the propagation areas are available in CONEFALL. (a) It is possible to use either all the cells contained in the source areas or only those belonging to the edges of these areas. Both methods define identical propagation areas, as the cones from the cells belonging to the upper border of a source area include all the cones from the lower cells of this source area. This option greatly reduces the computing time (by a factor of 10-100, depending on the source surface areas). Another similar option is to select automatically only the cells defining the lower edges of the source areas. This option is used for the "bottom of the cliff" method of Evans and Hungr (1993). In order to detect the cliff bottoms, the cells of the full source areas are divided into three different types: edge cells, inside cells, and non-source cells. Using the DTM, the slope of each edge cell is computed. If the slope dips towards a non-source cell, then the edge cell is defined as a bottom-of-cliff cell (Fig. 4a, b). If a source area is not a convex polygon or if it contains non-source cells, the algorithm may generate artefacts. To correct inconsistencies produced by this automatic procedure, a tool to correct the source files manually has been implemented (Fig. 4c). (b) In CONEFALL, the main parameter controlling the propagation is the cone angle, which has a fixed value. Without further indication, the lateral propagation (dispersion) is defined by the intersection of the cone with the topography, but the dispersion can also be limited using an azimuth and a tolerance angle from the source cell. Different types of outputs can be generated by CONEFALL. The main type is a grid containing the zones where rockfall boulders can propagate. In this grid, the value 1 indicates that the cell lies inside at least one propagation cone, and the value -1 indicates that the cell is outside any propagation area. For each cell of the computed propagation area, it is also possible to count the number of contributing source cells by counting the number of cones that include the propagation cell. This yields information on the zones that can be affected by the greatest number of blocks. The output file is then a grid of integers. Note that this counting is strongly dependent on the type of source area used, i.e. border, bottom or entire source area. The best option is to use a complete source area, that is an entire cliff, in order to get a count representative of the size of the contributing area. In addition, using Eq. (3) CONEFALL can produce maps of maximum or mean velocities and energies for each cell within the propagation area. A velocity correction factor, f_v, may take a value other than 1. Assuming E_rot/E_tot is the ratio of rotational energy to total kinetic energy, and using Eq. (4), the translational velocity v_t can be expressed by v_t = f_v √(2 g Δh), with f_v = √(1 − E_rot/E_tot). Assuming that rotational energy represents around 20% of the total kinetic energy of a boulder (Gerber, 1994), f_v is then set to √0.8 ≈ 0.9. This factor can be determined using field observations to obtain a more precise estimation of rockfall translational velocities. Similar considerations hold for the estimation of rockfall energies, except that a mean block mass must be determined. Mean values of velocities or energies are usually computed using the full source areas. For the maximum values, it is possible to use only the edges of the source areas without changing the final results. Other configurations are left to the preferences of the user.
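The per-cell cone count and the f_v-corrected translational velocity described above can be sketched as follows, reusing the same cone test as in the previous snippet; the synthetic DTM, the three source cells and the default f_v = 0.9 are illustrative assumptions.

```python
import numpy as np

G = 9.81

def count_and_velocity(dtm, cell_size, sources, phi_p_deg, f_v=0.9):
    """Count contributing source cells and estimate translational velocities.

    sources : list of (row, col) source cells
    f_v     : velocity correction factor, sqrt(1 - E_rot/E_tot)
    Returns (count grid, grid of maximum translational velocity in m/s).
    """
    tan_phi = np.tan(np.radians(phi_p_deg))
    rows, cols = np.indices(dtm.shape)
    count = np.zeros(dtm.shape, dtype=int)
    v_max = np.zeros(dtm.shape)

    for r0, c0 in sources:
        dist = cell_size * np.hypot(rows - r0, cols - c0)
        dh = dtm[r0, c0] - dist * tan_phi - dtm            # energy line height above ground
        inside = dh >= 0.0
        count += inside                                    # one more contributing cone
        v = f_v * np.sqrt(2.0 * G * np.clip(dh, 0.0, None))  # v_t = f_v * sqrt(2 g dh)
        v_max = np.maximum(v_max, np.where(inside, v, 0.0))

    return count, v_max

# example with a small synthetic slope and three source cells (values are illustrative)
col = np.arange(60)
dtm = np.tile(np.where(col < 30, 800.0 - 8.0 * col, 560.0 - (col - 30.0)), (40, 1))
count, v_max = count_and_velocity(dtm, 10.0, [(18, 2), (20, 2), (22, 2)], 33.0)
print("max count:", count.max(), " max velocity [m/s]:", round(float(v_max.max()), 1))
```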
The CONEFALL software has been written in Microsoft Visual Basic© 6, first under Windows 98 and later under Windows XP. The program can handle two types of input grid files, ArcGIS (*.ASC) files and Surfer 6.0 (*.GRD) ASCII files (Golden, 2002). The DTM and the rockfall source areas are provided as grid files and must both have the same geographical coordinates and the same number of rows and columns. The rockfall source cells are coded by integers (0-359°) and the other grid cells must be set to -1. Depending on the type of analysis, the output file contains integer or floating point values. Cells outside the propagation areas have values of -1. Each computer run or project can be saved and loaded (file menu) in a project text file (*.PRC) that contains all the necessary filenames and computation options. Applications The rock instability of "les Crétaux", located near Sion (Switzerland), is used as an example. In August 1985, a rockfall occurred from an altitude of about 1400 m to the valley floor, at an altitude of about 450 m (Fig. 5). About fifty blocks reached the vineyards of the Rhône valley. The total rockfall volume was estimated at about 800 000 m³, and single boulders ranged in size from 0.03 m³ to 80 m³ (Descoeudres, 1990; Rouiller, 1990; Labiouse and Descoeudres, 1999). In the area, different investigations of the rockfall trajectories were carried out using the 3-dimensional rockfall simulation code EBOUL (Descoeudres and Zimmermann, 1987; Dudt and Heidenreich, 2001). The upper part of the study area is a steep scree slope containing moved masses and small cliffs. The middle part is a hard rock slope mixed with scree, which ends in an alluvial fan mainly occupied by vineyards. The upper slope gradient is about 40°, making the use of the upper limit of the φp angle (35°) a good approximation. Lower values of φp would represent a more elastic terrain. This implies that the lower part slows down the blocks rapidly because of the large plasticity of the soil. Using all the rockfall source cells, the velocity and energy are computed assuming a rock mass of 3200 kg (corresponding to a rock of more than 1 m³). The rockfall volume was selected so that our simulations could be compared to the simulation produced by Jaboyedoff et al. (2005). Figure 6 shows that the rockfall blocks are all located within the 35° cones. The most distant point is located at 37° from the top of the cliff. This shows that the 35° limiting angle is consistent with a scree-like topography. In the present case, all the results of the model are identical regardless of whether all the source cells are used or not. The lateral extension of the zone of propagation is larger than the observed spread of rockfall blocks (Fig. 6). However, in comparison to trajectory simulations (see Jaboyedoff et al., 2005), the difference in spread is small, but the simulations indicate a maximum run-out distance for a few boulders further than the 35° slope cones. It must be noted that if the number of contributing source cells in the propagation area is used (Fig. 6), all the boulders are included in the area with more than 45 contributing cones, for a total number of 50 source cells. The zone of 50 contributing cones includes 30 boulders out of a total of 31. This gives a clear indication of the most rockfall-prone area.
The maximum total kinetic energy of a 3200 kg boulder is estimated at 4200 kJ, which is slightly higher than the results of the simulations performed with EBOUL (see Jaboyedoff et al., 2005). The location of the maximum kinetic energy is not centred on the main channel of trajectories defined by the topography but on a side which is seldom reached by boulders (Fig. 7). This is one of the limitations of the cone method. In reality, rockfall boulders are slowed down in this region, because the impacts required to make a block turn in this direction are highly energy dissipative, owing to the local topography and the superficial material. Following the same argument, the maximum total kinetic energy is estimated at 3500 kJ, and the maximum mean translational velocity (using a velocity factor of 0.9) is around 42 m s⁻¹ (150 km h⁻¹). This is in agreement with the observed maximum translational velocity obtained by Descoeudres (1990) from the analysis of a video record. Using the "bottom of the cliff" method (Evans and Hungr, 1993) and selecting an angle of 27.5°, a wider area of propagation is obtained, compatible with the extreme boulders simulated with EBOUL (see Fig. 12 in Jaboyedoff et al., 2005) (Fig. 6). To better constrain the potential propagation area, the dispersion is limited by an azimuth and a lateral tolerance angle of 315° ± 20°. The propagation area of the "bottom of the cliff" model nevertheless appears to be greatly overestimated. This is because the morphology in our example is not a cliff-and-slope as in the Rocky Mountains, where Evans and Hungr (1993) developed their method. The 27.5° area of propagation, compared with 20 000 trajectories simulated by EBOUL (Jaboyedoff et al., 2005), contains 99.8% of the simulated block stopping points. Nonetheless, the application of the cone model can also be refined. If we consider that a boulder loses most of its energy at the toe of a cliff or of a channel, or at the greatest change in slope angle, corresponding to the apex of the cone between the hard rock and the scree slope, a φp angle of 27.5° can be used by analogy, assuming that the apex of the cone is the location of strong energy loss. The computation of a cone centred at this apex shows that all the observed boulders are included in the obtained propagation zone (Fig. 6). CONEFALL has also been applied at a regional scale to the Canton de Vaud (Switzerland), a 3200 km² region. To identify the rockfall source areas, slope angle thresholds (from 47° to 54°) were applied according to the local geology and a slope angle histogram analysis (Loye et al., 2009). The φp angle was set to 33° in order to be conservative. In addition, on the plain (a significant zone with slope angles below 11°), the propagation area was limited to strips of 100 m along the flanks of the valley. The results of this regional study have been shown to be consistent with field observations (Jaboyedoff et al., 2008) (Fig. 8).
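For the regional application, the source cells are derived from a slope-angle threshold on the DTM. A hedged sketch of this preprocessing step is given below; the threshold, grid spacing and synthetic elevations are placeholders and not the values actually derived by Loye et al. (2009) from the slope-angle histograms.

```python
import numpy as np

def source_cells_from_slope(dtm, cell_size, threshold_deg):
    """Flag potential rockfall source cells as cells steeper than a slope threshold.

    The slope is estimated from centred finite differences of the DTM.
    Returns a grid with 1 for source cells and -1 elsewhere (the -1/1 coding
    here is only for the example, not the CONEFALL source-cell coding).
    """
    dz_dy, dz_dx = np.gradient(dtm, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return np.where(slope_deg >= threshold_deg, 1, -1)

# synthetic hillside: a steep upper face above a gentler talus (illustrative only)
col = np.arange(80)
dtm = np.tile(np.where(col < 25, 900.0 - 12.0 * col, 600.0 - 2.0 * (col - 25.0)), (50, 1))
sources = source_cells_from_slope(dtm, cell_size=10.0, threshold_deg=50.0)
print("number of source cells:", int((sources == 1).sum()))
```

The grid returned here would then be passed, together with the DTM, to the cone computation sketched earlier.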
Discussion and conclusions The cone method can be useful and efficient compared to other methods for several reasons. First, kinematics models need detailed knowledge of the field characteristics. The distance of propagation of a rockfall is sensitive to the coefficients of restitution at impact points. As a consequence, a detailed field survey must be performed to get suitable impact parameters, and the results must very often be adjusted to be in agreement with field evidence. For large and rapid surveys, it is not possible to collect all the required field data for kinematics-based modelling. In such cases the energy line cone method is more suitable. In 3 dimensions, the method of the energy line angle leads to a cone that defines "lines of energy". Nevertheless, the cone method must be applied carefully because the energy line angle varies greatly according to various authors (Fig. 1). In addition, in our experience the results obtained indicate that cones with a φp angle from 33° to 35° provide good estimations of propagation zones and energies in alpine areas. For a high vertical cliff with a scree slope at its toe, the Evans and Hungr method (i.e., the "bottom of the cliff" or shadow angle approach) provides better estimates of the propagation zones. Further refinements of the cone method require the introduction of energy lines along the flow paths. This can be performed with algorithms similar to those used for debris flows, using the D∞ flow path and flow dispersion (Holmgren, 1994; Horton et al., 2008; Blahut et al., 2010; Kappes et al., 2011), adding a threshold for the velocity (Horton et al., 2008) or a maximum energy loss from one pixel to another. Going beyond this general statement, the example of les Crétaux shows that energy line angle methods can be applied in several ways with finer tuning. First, the 50 boulders are included in the 35° cone centred on the main instability. However, this is not always sufficient to include the extreme run-out blocks. In the simulations performed with EBOUL, 1.75% of the blocks are beyond the 35° energy line angle limit and 0.25% of them propagate further than the 27.5° limit defined from the top of the source area (see Jaboyedoff et al., 2005). This is consistent with observations, considering that only 50 boulder endpoint locations are recorded and that the 27.5° limit is very close to the extreme runout. In addition, by using an adapted "bottom of the cliff" 27.5° method, placing the source at the main change in slope angle, i.e. at the location where the rebounds dissipate the maximum of energy, it is possible to obtain all the observed boulders inside the limit. This example shows that the CONEFALL method cannot be applied blindly: the geomorphology and the goal of a study directly influence the design of the chosen parameters, i.e. rockfall source areas, bottom of cliff or other morphological arguments. φp must then be chosen either using previous studies or deduced from local field observations.
The CONEFALL method is a good way to get first estimations of rockfall propagation zones, velocities or energies. At a regional scale the software should be used only as a preliminary mapping tool to delineate rockfall-prone areas, using the simple binary option outlining the areas that can be affected by falling boulders, or by counting the number of contributing cones. Continuous variables, such as energy or velocity, should only be used when the morphology has first been inspected carefully to ensure a correct analysis (selecting among the different possibilities for the φp angle or for the location of the source cells). It must be noted that the shadow angle approach is strongly dependent on the slope morphology. If detailed information is available at regional scale, it is possible to use more sophisticated 3-D models based on trajectory modelling (Guzzetti et al., 2003; Crosta and Agliardi, 2004; Frattini et al., 2008; Dorren et al., 2006). For rock avalanches, CONEFALL may be employed using the lateral extension and φp angles calculated from the relationship between the φp angle and the volume after Scheidegger (1973). Finally, CONEFALL appears to be a suitable standalone solution to perform fast studies of rockfall propagation at regional scale, and such a model can be easily implemented in a GIS environment with programming capabilities. Fig. 1a. Energy line used for the cone method from the top or the bottom of a cliff (shadow angle), according to various authors (modified after Crosta et al., 2001). Fig. 1b. Variables used to calculate velocities and energies based on the energy line concept. The example uses the most distant block to define φp and estimate Δh, which is used to calculate the velocity v = √(2 g Δh). This illustrates the tailor-made possibilities of the cone method. Fig. 2. Relationships between energy line angles and block distributions according to various authors. The gray curve is a fit of all results (except the point of Toppe, 1987 at 45°). The obtained value is φp = 34° with a standard deviation of 1.62°. Fig. 3. Principle of the cone method, with cells as source areas. The resulting zone is the surface delineated by the envelope of the individual cones of all source pixels. Fig. 4. (a) Illustration of the procedure of border identification. Any pixel that has a neighbouring point at the locations 1, 2, 3 or 4 with at least one blank pixel (no cliff) is a border. To extract the bottom of the cliff, the space is divided into 3 types of pixels: inside cliff (light grey), border (dark grey) and outside (white). (b) To identify a bottom pixel, the normal vector N to the pixel is estimated, and if it is located above its x-y-0 component the pixel is designated as belonging to the bottom of the cliff. (c) Different possibilities for the modification of cliff areas. Fig. 6. The area in black represents the source cells. The yellow to red scale is the count of the number of source points potentially contributing to the rockfall propagation zone. The black dashed line indicates the cone taken at the bottom of the source area in black with φp = 27.5° and limits equal to 315° ± 20°. The black line indicates the cone with an apex taken at the greatest change in the slope angle, with φp = 27.5°. (DTM reproduced with the permission of the Swiss Federal Service of Topography, BA034918.) Fig. 7.
Computations of energies and velocities for 3200 kg blocks using the Surfer program. (DTM reproduced with the permission of the Swiss Federal Service of Topography, BA034918.) Fig. 8. View of the rockfall indicative map of the Canton de Vaud based on the CONEFALL method with φp of 33°, displayed in Google Earth (see Loye et al., 2009 for a detailed description of the map characteristics).
7,420
2011-03-15T00:00:00.000
[ "Geology" ]
The Concept and Security Analysis of Wireless Sensor Network for Gas Lift in Oilwells Pipelines, wellbores and ground installations are permanently controlled by sensors spread across the crucial points in the whole area. One of the most popular techniques to support proper oil drive in a wellbore is a Gas Lift. In this paper we present the concept of using a wireless sensor network (WSN) in oil and gas industry installations. Assuming that the Gas Lift Valves (GLVs) in a wellbore annulus are sensor controlled, the proper amount of injected gas should be provided. In a ground installation, the optimized amount of loaded gas is a key factor in efficient oil production. This paper considers the basic foundations and security requirements of a WSN dedicated to Gas Lift installations. Possible attack scenarios and their influence on the production results are shown as well. Introduction Oil and gas production is based on fundamental physical properties. The pressure difference between the ground installation (wellhead) and the well bottom pushes the fluid uphill, towards the surface. The density of the oil has an influence on the production results. One of the most popular techniques to reduce oil density in a wellbore is a Gas Lift. In the USA about 10 percent of wells use an intermittent or permanent installation providing natural gas through valves into the wellbore. Gas Lift systems are divided into two sections: underground and surface. In the underground area, Gas Lift Valves (GLVs) provide the natural gas into the wellbore. They are set in producers' labs to open under the dedicated pressure differences. Inside a GLV, orifices feed gas into the tubing, causing a higher pressure difference between the wellhead and the bottom hole. To change the properties, the valves must be retrieved, and this is a time consuming and expensive method. On the surface, especially for a multiwell installation, the gas distribution should take optimization problems into consideration. The proper amount of injected gas has an influence on the production results [!]. In the oilwell industry, temperature and pressure gauges are commonly used. Unfortunately, these devices cannot be placed across the whole wellbore, so Distributed Temperature Sensing (DTS) systems are used [']. For the survey, thin fiber optic cables made of silicon dioxide with an amorphous solid structure are used. Physical properties such as temperature or pressure can affect the characteristics of light transmission in the cable. In a fiber, when a photon interacts with the crystal it is annihilated. Rayleigh scattering is described as the annihilation of a photon in contact with the crystal and the creation of another photon [#]. The energy balance between the scattered photons is not equal, and this can be observed in the spectrum as the Stokes and anti-Stokes lines. The DTS systems have a few drawbacks. The fiber optic cable can be damaged, and fixing the problem by replacing or splicing the cable takes an unnecessary amount of time. The temperature data usually show small disturbances in the places where the cable is clamped to the wellbore. This makes automatic analysis tough to perform, as such artefacts can be confused with genuine effects or unexpected leaks. DTS demands equipment for the data collection and a big spool of fiber optic cable. All these things have to be shipped to provide the proper data acquisition. The main contribution of this paper is to propose the concept of a monitoring system based on a Wireless Sensor Network.
The presented idea is a novel one and has not been discussed in the literature. The traditional Gas Lift properties are set in laboratories prior to the well completion. Once the calculation is wrong, it is nearly impossible to improve the system without decompletion [!]. A wireless sensor network can be responsible for GLV communication, to manage the proper lift. Every wellbore in an oilfield may be connected to the wireless gas distributing system. These facts motivate the idea of a dynamic WSN dedicated to gas lift managing and monitoring. Another contribution of this paper is to show the security requirements and the potential impact of attacks on a WSN for Gas Lift in Oilwells. Security attacks on the WSN can damage the system by changing the temperature and pressure data. As a result, the production can be totally corrupted or disturbed. Once the attack reaches a safety valve, the results are highly dangerous, even devastating. At the surface part of the completion, any attack deregulates the gas diversion. For the study of the potential impact of attacks on the WSN monitoring system, the authors extended the simulation model with a single-phase annulus calculation [!]. In the literature, different network topologies for Wireless Sensor Networks can be found [1,5,11]. The network topologies can include various types of nodes which have different functions. The most common are: • sensing node - it samples a kind of data (i.e. temperature, acceleration, etc.); • head node - it collects data from sensing nodes and forwards it to the gateway; • gateway (base station) - this node is directly connected to a server and forwards the received data to it (i.e. via the serial port). One can enumerate the main types of topology as follows: • star topology, consisting of the gateway node and sensing nodes only; • tree topology, consisting of all three types of nodes, where head nodes stand between the sensing nodes and the gateway; • mesh topology, similar to tree topology, but the head nodes can be connected to other head nodes directly. In this paper we consider a sensor network which is built of three kinds of sensors: sensing nodes (GLVs), head nodes and the gateway. The network topology is shown in Fig. 1. The distance between the sensors is estimated as 50 m, and since the well depth usually reaches a few kilometres, this is a tree topology with only one gateway (a minimal sketch of such a node layout is given after the description of the automation levels below). The working sensors transmit the data to the gateway sensor, which is situated at the wellhead. At each Gas Lift point, a pressure sensor is responsible for the data survey. The other sensors are needed for the communication between the GLVs and the gateway on the surface. The linear structure of the sensor deployment in the wellbore keeps the monitoring more accurate. In the completion, many additional and safety devices are fitted. By putting sensors in the proper places, not only can the gas lift be controlled, but safety issues can also be reduced. To estimate the exact sensor distribution, further studies are needed. Currently, the assumption has been made that a sensor is placed every 50 meters of measured depth. Meshing is not necessary; the spacing should be constant, and it is completion dependent. A WSN in a wellbore may be a part of monitoring only, or a decisive system which takes responsibility for the production process. Regarding the system automation, three levels of autonomy are considered: full, semi-full, none.
Full automation is when the GLVs are gas lifted under the set pressure circumstances. The temperature in the wellbore is checked by sensors and the results are sent to the system, which is responsible for the pressure calculation. If the pressure conditions are adequate, the algorithm makes a decision about which valves should stay open. Once the gas is lifted, the algorithm checks the conditions and reduces the injection to avoid turbulent flow. The amount of gas in this wellbore is a variable which is a function of the whole amount of gas in the system. Safety procedures are controlled by the sensor, though they are wireline connected, which provides an additional control barrier. Semi-full automation makes an exception for the safety procedures, while the gas injection is controlled by the WSN. The unautomated system is being developed to check the physical conditions by monitoring and control only. It creates the option to analyze the production process and to compare it with the classic form of well managing. Security Requirements Analysis Realization of electronic processes requires the fulfilment of many technological standards, so while designing such systems one has to take care of different security services. A large list of them is presented and discussed in the articles [5,12]. Realization of a WSN for Oilwells requires the fulfilment of several security services. In this section we enumerate the most important ones and provide a justification for our choice. • Confidentiality of data. The oilfield data has priority regarding data confidentiality. A few reasons exist in the background. First of all, no one is interested in uncovering the crucial production data. In some cases the oil production has to be kept at the proper level, which does not necessarily mean the highest one. This is strongly demanded as far as financial and other external reasons are concerned. • Integrity of data. One of the most important threats is to capture the sampled data in the network and modify them. This can damage the production or destroy safety procedures. • Authorization of nodes. Another attack can be focused on impersonation of the sensing nodes. In that situation the sensing data can be modified, too, and the production can also be damaged. • Availability of nodes. This service is especially important for sensing nodes, because only one node can be located in a GLV. The head nodes are not as crucial as the sensing nodes because they can be replicated. Modelling the Impact of the Data Modification Attack for WSN for Gas Lift in Oilwells Assuming that a malicious sensor changes the data in the system, a few cases have been analyzed here. An important threat occurs once the attack distorts the pressure measurements in the annulus. As a result, the GLV changes the lifting sequence, causing undesirable situations such as slug flow, production problems, or even a temporary block of the fluid drive. The presented results were simulated using the model which was meant to estimate the production results and the thermodynamic conditions in a wellbore. There are many factors determining the production calculations, such as: well trajectory, reservoir data, casing and tubing data, flow types, heat transfer, bubble point pressure, viscosity, velocity, density, volume factor, compressibility, heat capacity, temperature, and pressure. The drive in the tubing is usually calculated from different approaches to Darcy's Law and its extensions.
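To make the node layout concrete, the sketch below builds the linear tree topology described in the previous section for a given well depth, with one node every 50 m and the gateway at the wellhead. The well depth, the GLV depths and the rule assigning the sensing role to the node nearest each GLV are assumptions for the example, not a design taken from the paper.

```python
from dataclasses import dataclass

SPACING_M = 50.0  # assumed distance between neighbouring nodes in the wellbore

@dataclass
class Node:
    depth_m: float
    role: str              # "gateway", "sensing" (at a GLV) or "head" (relay only)
    hops_to_gateway: int

def build_well_topology(well_depth_m: float, glv_depths_m: list[float]) -> list[Node]:
    """Linear tree topology: one node every SPACING_M metres from the wellhead
    (gateway) down to the well depth; the node closest to each GLV acts as a
    sensing node, all the others only relay data towards the gateway."""
    nodes = []
    n_levels = int(well_depth_m // SPACING_M) + 1
    for i in range(n_levels):
        depth = i * SPACING_M
        if i == 0:
            role = "gateway"
        elif any(abs(depth - d) <= SPACING_M / 2 for d in glv_depths_m):
            role = "sensing"
        else:
            role = "head"
        nodes.append(Node(depth, role, hops_to_gateway=i))
    return nodes

# a 3 km well with GLVs at three arbitrary depths (values are placeholders)
topology = build_well_topology(3000.0, glv_depths_m=[900.0, 1800.0, 2700.0])
print("total nodes:", len(topology))
print("sensing nodes:", sum(n.role == "sensing" for n in topology))
print("max hops to the gateway:", max(n.hops_to_gateway for n in topology))
```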
The Darcy equation presented by Peaceman [10] is formulated in terms of the following quantities: g_i - molar flow, ϕ - molar density of the j-phase, ξ - phase saturation, V_b - phase molar volume, x_ij - flux vector, P - pressure, n_p - phase number, ϱ - density, λ - mobility ratio. To the simulation model based on Equation (1), the authors added single-phase gas correlations dedicated to the annulus, based on the Fanning correlation [3]: ΔP_f = 2 f ρ v² L / (g_c D) (Eq. 2), where: f - Fanning friction factor (a function of the Reynolds number), ρ - density, v - average velocity, L - length of the pipe section, g_c - gravitational constant, D - inside diameter of the pipe. The single-phase friction factor clearly depends on the Reynolds number, which is a function of the fluid density, viscosity, velocity and pipe diameter. The friction factor is valid for single-phase gas or liquid flow, as their very different properties are taken into account in the definition of the Reynolds number, N_Re = ρ v D / µ (Eq. 3), where: ρ - density, µ - viscosity. This correlation has been chosen to study the pressure difference between the tubing and the annulus, to estimate the condition for the gas lift. Scenarios In this article we consider three scenarios which simulate the impact of an attack in which the working sensors are impersonated. During the attacks the measurement values are modified; the modifications differ and depend on the gas lift. These data are crucial for the system which controls the production. This analysis shows the impact level of the attack and can be useful when constructing the WSN architecture for oilwells. We assume that six sensing nodes are installed, located at six depths: 1000 ft, 2000 ft, 3000 ft, 4000 ft, 4500 ft and 5000 ft. The pressure calculation in the annulus is simplified to a single phase. In the first scenario, the attack changes the pressure values in the range of gas lifting between 2 and 3 MMScf/D (Fig. 2). The system identifies a gas lift which drops to 1 MMScf/D. The deeper the attack takes place, the bigger the observed production drop. Scenario 2 Another simulation shows the attack for the lift between 3 and 4 MMScf/D (Fig. 3). Regarding productivity, this is the most desirable amount of gas in the annulus. The system identifies that gas is not distributed to this wellbore. Without the malicious changes, more than 3000 BPD would be expected. Meanwhile, the highest drop is observed if the sensor data is overwritten at 4000 ft. The third scenario, combining the threat with a high amount of gas involved, leads to interesting results. Once the malicious sensor changes the results by adding too much gas to the system, the production stops completely. In this case the attack took place in the range from 4.5 to 5.5 MMScf/D (Fig. 4). Lifting more gas into the system than is expected causes a significant reduction of the oil density in the wellbore. The weight of the column above the GLV pushes the drive towards the bottom, almost stopping the production. Conclusions In this paper we have shown the concept of a Wireless Sensor Network architecture which can be applied in oilfields, especially for Gas Lift monitoring and management. The importance of the security of the oil production is beyond doubt, and this paper represents the very first approach to this problem. We present the main security services which must be guaranteed for the presented idea. A simulation model showing the effects of a potential attack inside the network, which can destabilize the production process, was also presented. It has been shown that the more gas is in the annulus during the lift, the higher the risk of stopping the production.
Furthermore, the deeper the GLV operates, the greater the effect the attack has. The information about the threat level and the potential impact of attacks can be used for introducing adaptable security [6,7] and Quality of Protection analysis. In future work we would like to analyse the security mechanisms which should be applied to satisfy the enumerated security requirements. Next, a performance analysis will be prepared in terms of the influence of the used security mechanisms on the lifetime of the WSN. To achieve this, the QoP model will be prepared in the Quality of Protection Modelling Language (QoP-ML) [8] and simulated in the AQoPA tool [14].
3,326.4
2014-01-01T00:00:00.000
[ "Computer Science" ]
Mitoxantrone and Mitoxantrone-Loaded Iron Oxide Nanoparticles Induce Cell Death in Human Pancreatic Ductal Adenocarcinoma Cell Spheroids Pancreatic ductal adenocarcinoma is a hard-to-treat, deadly malignancy. Traditional treatments, such as surgery, radiation and chemotherapy, unfortunately are still not able to significantly improve long-term survival. Three-dimensional (3D) cell cultures might be a platform to study new drug types in a highly reproducible, resource-saving model within a relevant pathophysiological cellular microenvironment. We used a 3D culture of human pancreatic ductal adenocarcinoma cell lines to investigate a potential new treatment approach using superparamagnetic iron oxide nanoparticles (SPIONs) as a drug delivery system for mitoxantrone (MTO), a chemotherapeutic agent. We established a PaCa DD183 cell line and generated PANC-1SMAD4 (−/−) cells by using the CRISPR-Cas9 system, differing in a prognostically relevant mutation in the TGF-β pathway. Afterwards, we formed spheroids using PaCa DD183, PANC-1 and PANC-1SMAD4 (−/−) cells, and analyzed the uptake and cytotoxic effect of free MTO and MTO-loaded SPIONs by microscopy and flow cytometry. MTO and SPION–MTO-induced cell death in all tumor spheroids in a dose-dependent manner. Interestingly, spheroids with a SMAD4 mutation showed an increased uptake of MTO and SPION–MTO, while at the same time being more resistant to the cytotoxic effects of the chemotherapeutic agents. MTO-loaded SPIONs, with their ability for magnetic drug targeting, could be a future approach for treating pancreatic ductal adenocarcinomas. Introduction Pancreatic cancer is the seventh leading cause of death in the world, with an increasing incidence [1,2]. The most prevalent tumor type among pancreatic cancers is pancreatic ductal adenocarcinoma (PDAC), with an overall 5-years survival rate of 11%, and it is expected to become the second leading cause of cancer death by 2030 [3]. Its poor prognosis is strongly associated with the time point of diagnosis, especially when the carcinoma has already spread beyond the pancreas [4]. Recent genomic analyses identified 32 mutated genes in pancreatic adenocarcinoma, which are divided into four subgroups correlating with histopathological characteristics, namely immunogenic, pancreatic progenitor, squamous and aberrantly differentiated endocrine/exocrine tumors [5]. The genetic drivers of PDAC are predominantly mutations in the well-known cancer genes KRAS, CDKN2A, TP53 and SMAD4, besides other genes mutated at a lower prevalence [6]. SMAD4 mutations are correlated with worse clinical outcomes and these tumors are more prone to metastasize at a higher incidence in metastatic recurrence [7,8]. SMAD4 is capable of enabling gene transcription and tumor suppression via the TGF-β signaling pathway. Thus, it can control tumor development by facilitating growth arrest and an induction of apoptosis. The poor survival rate (5-year survival < 11%) is not only due to the lack of early detection and delayed presentation caused by non-specific symptoms, but also because of inadequate therapies. Currently, surgical resection is the only treatment that offers a potential cure for pancreatic cancer. Supportive therapy through the application of chemotherapeutic agents may improve survival rates. There is some evidence that promises further improvement in survival with the administration of chemoradiotherapy in the neo-adjuvant setting [2,9]. 
Future treatment of this refractory disease may involve immunotherapy utilizing checkpoint inhibitors, protein, or whole-tumor-cell vaccines and nanoparticle-based drugs [9,10]. Nanoparticles, in particular superparamagnetic iron oxide nanoparticles (SPIONs), have become a new agent for treatment and diagnosis in cancer [11][12][13][14]. SPIONs can not only be used for imaging and for diagnostics of diseases, but also for enhancement of the accumulation and release of drugs at the pathological site, thereby increasing therapeutic efficacy and reducing the occurrence of side effects by decreasing their localization in healthy tissues [15,16]. Due to their magnetic properties, they can be controlled by an external magnetic field, allowing targeted delivery of therapeutics. This procedure, known as magnetic drug targeting (MDT), is characterized by high efficiency and effectiveness, as well as reduced side effects [17]. There are already several studies on the use of SPIONs for the diagnosis of pancreatic cancer using multimodal imaging, such as magnetic resonance imaging (MRI) and magnetic particle imaging (MPI) [18,19]. In contrast, there is only a limited amount of literature on the therapy of pancreatic cancer using functionalized SPIONs. For instance, the effects of SPIONs coated with curcumin and multifunctionalized magnetic iron oxide nanoparticles containing an anti-CD47 antibody, as well as the chemotherapeutic agent, gemcitabine, have been the subjects of former research [20,21]. However, MTO, a cytostatic drug used for cancer therapy and multiple sclerosis treatment with a potency up to 20,000 times higher than that of gemcitabine, has shown promising preclinical results for the treatment of pancreatic cancer [22][23][24][25]. Nevertheless, MTO did not show a significant response rate in a phase II clinical study including patients with advanced pancreatic carcinoma, as one of the main therapeutic limitations were dose-dependent hematologic side effects [26]. Since MTO not only effectively disrupts DNA synthesis and DNA repair, but also induces immunogenic cell death, the multiple effects of MTO combined with the targeting capacity of SPIONs might be a future approach for the therapy of pancreatic ductal adenocarcinoma [27,28]. The still predominantly used in vitro method to develop new drugs or agents for cancer therapy is a two-dimensional (2D) cell culture. However, this does not reflect tumor biology adequately due to unlimited amounts of oxygen and nutrition, as well as unphysiological changes in cell morphology [29]. Experiments performed with 3D cell cultures provide more accurate data on tumor characteristics, cell-to-cell interactions, drug sensitivity and metabolic profiles, and are more similar to the complex microenvironment cells experience in vivo [29][30][31]. In this study, we generated 3D spheroids of three pancreatic ductal adenocarcinoma cell lines (PaCa DD183 from primary tumors, wild-type PANC-1 and PANC-1 SMAD4 (−/−) ) by seeding the trypsinized cells into an agarose-coated 96-well plate. Free MTO was tested in for its efficacy to reduce the viability of cells and growth of tumor spheroids depending on their SMAD4 mutation status. Furthermore, we investigated the efficacy of SPIONs loaded with MTO in comparison with its unbound counterpart. Interestingly, we demonstrated an increased uptake of MTO, as well as SPIONs loaded with MTO, in SMAD4-mutated spheroids. 
In contrast, SMAD4-mutated cells showed a significantly worse response rate to the effects of MTO and SPIONs bound MTO. Synthesis of Superparamagnetic Iron Oxide Nanoparticles (SPIONs) Lauric acid (LA)-coated SPIONs were synthesized according to the protocol of Zaloga et al., 2014 [32]. Briefly, Fe (II) chloride and Fe (III) chloride were dissolved in ultrapure water and co-precipitated by stirring at 80 • C in alkaline media, under an argon atmosphere. After coating with lauric acid, the suspension was homogenized for 30 min at 90 • C and dialyzed several times (MWCO of 10 kDa) with ultrapure water. Subsequent coating with a protein corona of human serum albumin (HSA) was performed according to the protocol of Zaloga et al., 2016 [33]. Briefly, HSA solution (Recombumin Elite, 10% w/v, Albumedix, Nottingham, UK) was sterilized using a 0.22 µm filter, transferred into dialysis bags (MWCO 10 kDa) and dialyzed against 4.5 L of ultrapure water (4 water changes, 5 h). After concentration by tangential flow ultrafiltration [34], the HSA solution was stirred at room temperature and lauric-acid-coated SPIONs were added dropwise through a 0.8 µm filter. Finally, excess HSA was removed by tangential flow ultrafiltration before the SPION solutions were filtered using a 0.22 µm sterile filter. MTO loading of SPIONs was performed as reported by Zaloga et al., 2016 [33]. Briefly, 900 µL of SPIONs (4.84 mg Fe/mL) was vortexed with 100 µL mitoxantrone solution (2 mg/mL) and incubated for 5 min, to obtain a SPION MTO Iron Quantification of Nanoparticles The iron content of the SPIONs was determined using an Agilent 4200 microwave plasma-atomic emission spectrometer (MP-AES) (Agilent Technologies, Santa Clara, CA, USA). The SPION suspension (50 µL) was dissolved in 50 µL 65% HNO 3 , and incubated for 10 min at 95 • C. Prior to analysis, the sample was further diluted in 1900 µL ultrapure water. A commercial iron solution was used as an external standard. Dynamic Light Scattering (DLS) and Zeta Potential Measurements A Malvern Zetasizer (Malvern Instruments, Worcestershire, UK) was used to determine the zeta potential and the hydrodynamic size of the particles in water. Particles were diluted to an absolute iron content of 50 µg/mL and measured in triplicate at 25 • C. Generation of Pancreatic Carcinoma Spheroids Cellular spheroids were produced in 96-well plates coated with 50 µL of 1.5% agarose. Briefly, cells were washed with PBS and detached from the cell culture surface using trypsin/EDTA. After dilution in 10 mL cell culture medium, the number of viable cells was analyzed by an automatic cell counter (MUSE ® Cell Analyzer, Merck-Millipore, Billerica, MA, USA). Agarose-coated wells were equipped with 150 µL medium and subsequently seeded with 50 µL (300,000 viable cells/mL) cell suspension, and incubated for 72 h at 37 • C. Determination of Spheroid Growth by Transmission Microscopy On day three, five and seven after seeding, the spheroid size was determined by analyzing microscopy images using an Axiovert 40 CFL Microscope and the Axio Vision SE64 Rel4.9 software (Zeiss, Jena, Germany). The areas of the 2D projections of the spheroids were determined by the ImageJ software (National Institutes of Health, Bethesda, MD, USA). Experiments were conducted at least four times, with nine replicates each. Microsoft Excel was used for statistical analysis. Hematoxylin/Eosin Staining of Spheroid Cryosections Five representative spheroids from each cell line were selected for cryosection preparation. 
After removing the cell culture medium, the spheroids were carefully embedded in CryoGlue embedding medium. Twenty minutes later, spheroids were frozen at −20 • C for at least 24 h, before preparation of 10 µm sections using an MNT microtome (SLEE medical GmbH, Mainz, Germany). The sections were then transferred to slides until histological staining. For the staining, cryosections were washed with distilled water for 5 s before staining with Hematoxylin Gill III for 6 min. Subsequently, the sections were placed in 0.1% HCl for 5 s and rinsed with tap water for 3 min. Counterstaining was performed with 0.5% eosin for 6 min, followed by rinsing with tap water and embedding with DPX mounting medium. The sections were imaged and analyzed using an Axiovert 40 CFL Microscope and the Axio Vision SE64 Rel4.9 software (Zeiss, Jena, Germany). Mitoxantrone Treatment of Spheroids Seventy-two hours after seeding pancreatic cells in 96-well plates, spheroids were treated for an additional 4 days with various MTO concentrations (0, 0.5, 5, 10, 20 and 47.8 µg/mL) under standard cell culture conditions. Bright-field images were taken after three, five and seven days using an Axiovert 40 CFL Microscope and the Axio Vision SE64 Rel4.9 software (Zeiss, Jena, Germany). The areas of the 2D projections of the spheroids were determined by the ImageJ software (National Institutes of Health, Bethesda, MD, USA). Experiments were conducted four times, with technical triplicates. Microsoft Excel was used for statistical analysis. Treatment of Pancreatic Spheroids with SPION, SPION MTO and MTO Pancreatic spheroids were treated with SPION (478 µg/mL Fe), SPION MTO (20 µg/mL) and free MTO (20 µg/mL) in the same MTO and iron concentration, 72 h after seeding. Spheroids were incubated for an additional 96 h under standard cell culture conditions, followed by imaging using a Zeiss microscope (Axio Observer Z.1, Zeiss, Jena, Germany). Dissolving of Spheroids to Single-Cell Suspensions Spheroids were harvested from 96-well plates and pooled in 15 mL cell culture flasks (Falcon, Sarstedt, Nümbrecht, Germany). The supernatant was removed, and the spheroids were washed with 500 µL PBS for at least 1 min. Afterward, the spheroids were incubated with 150 µL trypsin for 20-30 min. Spheroids were dissolved by repeated pipetting. Cell culture medium (750 µL) was added to neutralize the trypsin, and the suspension was centrifuged at 300× g for 5 min before carefully removing the supernatant. The resulting cell pellet was finally resuspended in 250 µL of medium. Analysis of Viability and Cell Death Phenotype in Flow Cytometry To determine viability, 50 µL of the single-cell suspensions were incubated with 250 µL of the staining mixture for 20 min at 4 • C. One milliliter of the staining solution contained 10 µg Hoechst 33342, 25 ng AxV-FITC and 66.6 ng PI per mL Ringer's solution. The fluorescence intensity was analyzed with a Gallios flow cytometer (Beckman Coulter, Fullerton, CA, USA). Excitation of FITC and PI was obtained at 488 nm; FITC fluorescence was verified with the FL1 sensor (525/38 nm band pass filter, BP); and the PI fluorescence with the FL3 sensor (620/30 nm BP). The MTO fluorescence was excited at 638 nm and recorded by the FL7 sensor (725/20 nm BP). Excitation of the Hoechst 33342 fluorescence was obtained at 405 nm and recorded by the FL9 sensor (430/40 nm BP). Electronic compensation was used to eliminate fluorescence bleed-through. Data analysis was conducted with the Kaluza software Version 1.2 (Beckman Coulter). 
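A short piece of bookkeeping makes the loading step from the methods concrete: mixing 900 µL of SPION suspension (4.84 mg Fe/mL) with 100 µL of MTO solution (2 mg/mL) fixes the iron and MTO concentrations of the loaded suspension and hence the nominal MTO-to-iron ratio. The snippet below only restates this arithmetic with the numbers quoted in the text; it is not part of the authors' protocol.

```python
# Arithmetic for the SPION-MTO loading step described in the methods
# (volumes in mL, concentrations in µg/mL; values quoted from the text).
v_spion, c_fe = 0.9, 4840.0   # 900 µL of SPIONs at 4.84 mg Fe/mL
v_mto, c_mto = 0.1, 2000.0    # 100 µL of mitoxantrone at 2 mg/mL

v_total = v_spion + v_mto
fe_final = c_fe * v_spion / v_total    # iron concentration of the loaded suspension
mto_final = c_mto * v_mto / v_total    # nominal MTO concentration of the suspension

print(f"loaded suspension: {fe_final:.0f} µg Fe/mL and {mto_final:.0f} µg MTO/mL")
print(f"nominal loading: {1000 * mto_final / fe_final:.1f} µg MTO per mg Fe")
```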
Physicochemical SPION Characterization The physicochemical properties greatly influence not only the interaction between nanoparticles and cells, but also the final quality of the loaded particles. Therefore, the quality and reproducibility of particle synthesis was controlled by measurements of particle size and zeta potential using dynamic light scattering (DLS) ( Table 1). When freeze fracture transmission electron microscopy (TEM) was performed, we found that SPIONs formed multicore particles, of which the aggregate size was comparable to the size measured by dynamic light scattering (DLS). Every individual particle had a diameter of approximately 10 nm [33]. The zeta potential of SPIONs in ultrapure water showed a negative value of −21 mV. In water, the SPIONs exhibited hydrodynamic sizes of 65 nm. The absence of larger aggregates indicated high colloidal stability. In addition, previous studies demonstrated high colloidal stability of all particles in FCS-containing media, indicating the formation of a stabilizing protein corona and a successful binding of MTO to the particle surface [33,39]. In serum-containing medium and whole blood, even in the presence of a magnetic field, no permanent agglomerations were detected [40]. Thus, the physicochemical data suggest the suitability of the nanoparticles for subsequent in vitro experiments. MTO was freshly loaded onto the SPIONs before every experiment. Binding experiments indicated adsorption of 97.6 ± 0.1% of 500 µg MTO adsorbed to 1 mL of SPIONs (2 mg/mL) after 5 min equilibrium. The release of MTO from the particles incubated in RPMI-1640 medium was determined via dialysis and a magnetic assay setup. After 72 h, 11.6 ± 0.1% and 23.7 ± 0.4% were released from the former or the latter, respectively. Thus, MTO was released rather slowly from the particles, most likely by diffusion [33,39]. Generation of 3D Spheroids of Different PDAC Cell Lines We used various PDAC cells lines to form spheroids. PaCa DD183 and PANC-1 originated from a primary PDAC [41]. Furthermore, the established cell line PANC-1 with intact SMAD4, as well with knocked-out SMAD4 (see Appendix A), was investigated. It is known that the SMAD4 gene is involved in gene regulation and tumor suppression via the TGF-β pathway. Furthermore, mutations are correlated with worse clinical outcomes and connected to higher metastasis rates [7,8]. After evaluating the optimal culture condition for each pancreatic ductal adenocarcinoma cell line, 15,000 cells were seeded for spheroid generation into agarose-coated 96-wellplates ( Figure 1). After spheroid structures were formed, spheroid size was determined microscopically after 3, 5 and 7 days of incubation (Figure 1a). Analysis of transmission microscopy images revealed large cell-line-dependent size differences during the initial phase of spheroid formation, with PANC-1 SMAD4 (−/−) producing the smallest and PANC-1 the largest spheroids (Figure 1c). In addition, the images and image analysis showed that between the third and seventh day, the size of all spheroids decreased significantly, indicating an increase in spheroid density [42]. After seven days, the edges of all spheroids, except those formed by PaCa DD183 cells, finally began to fray. Cryosections prepared on the third, fifth, and seventh day and subsequently stained with hematoxylin/eosin (HE) to visualize necrotic areas, revealed no evidence of necrotic cores (Figure 1b). 
This is consistent with previous reports, which demonstrated that mass transport in small 3D cell aggregates below 400-500 µm is sufficient to maintain cell viability through adequate diffusion of nutrients, oxygen and metabolic wastes [43][44][45]. Impact of Free MTO on PDAC Spheroid Growth The effect of MTO treatment on the growth and viability of PDAC cell spheroids was examined after the addition of free MTO (0, 0.5, 5, 10, 20 and 47.8 µg/mL) on the third day of spheroid formation and during the following four days (Figure 2). Bright-field images on days 3, 5 and 7 confirmed the size reduction of untreated spheroids, as shown in Figure 1. Compared with the size of untreated spheroids at days 5 and 7, treatment with 20 µg/mL and 47.8 µg/mL MTO resulted in a significant concentration-dependent increase in the size of PaCa DD183 and PANC-1 SMAD4 (−/−) spheroids, indicating an effect on cell cohesion at higher MTO concentrations (Figure 2a,b,e,f). Interestingly, the size of PANC-1 spheroids increased sharply already at 5 µg/mL and 10 µg/mL MTO, whereas the increase was less pronounced after incubation with 20 µg/mL and 47.8 µg/mL MTO, indicating a very rapid cell death at MTO concentrations of 20 µg/mL or higher (Figure 2c,d).
Figure 2. On day three, cell culture medium was supplemented with MTO (0, 0.5, 5, 10, 20 and 47.8 µg/mL, respectively). Growth progression was normalized to the spheroid size at day 3. Data are expressed as the mean with standard deviation (n = 4, with three replicates each). Statistical significances between MTO-free and MTO-treated spheroids are indicated with *, ** and ***. The respective confidence intervals are * p ≤ 0.05, ** p ≤ 0.0005 and *** p ≤ 3 × 10−9, respectively, and were calculated via the Student's t-test. Abbreviations: MTO, mitoxantrone. Impact of Free MTO on Cell Viability within PDAC Spheroids At day seven, we analyzed the viability of the spheroids via flow cytometry (Figure 3a,b) to get a clearer insight into the mechanism of death induced after 96 h incubation with MTO. As they are indicators of apoptosis and necrosis, we evaluated the cells for their phosphatidylserine exposure using Annexin V-FITC (Ax), and for their plasma membrane integrity using propidium iodide (PI). Ax−PI− cells are considered viable, Ax+PI− apoptotic and PI+ cells necrotic. The applied gating function, shown exemplarily for PaCa DD183 cells in Figure 3a, was used to analyze cell morphology, phosphatidylserine exposure, plasma membrane integrity and MTO uptake into the cells. PaCa DD183 cell spheroids revealed a dose-dependent increase in apoptotic and necrotic cells with increasing MTO concentrations. At the highest MTO concentration (47.8 µg/mL), the number of viable cells decreased to 20.1 ± 6.7% (Figure 3b). In contrast to PaCa DD183, the sensitivity of PANC-1 cell spheroids to MTO was much more pronounced. A concentration of 5 µg/mL reduced the viability to 2.2 ± 0.7%, compared to 70.4 ± 6.0% in PaCa DD183 cells. Thereby, the percentage of apoptotic cells increased to 92.4 ± 1.3%, while PaCa DD183 cells showed only 17.2 ± 2.5% apoptotic cells in the presence of 5 µg/mL MTO. Interestingly, PANC-1 cells with a mutated SMAD4 gene were much less sensitive to MTO than the original PANC-1 cells. Thus, the use of 5 µg/mL MTO resulted in only 33.4 ± 14.8% apoptotic and 2.8 ± 2.3% necrotic cells, and only at the highest MTO concentration (47.8 µg/mL) did the cells reach approximately the same values that were already reached in the PANC-1 cells at 5 µg/mL. Impact of Particle-Bound MTO and Free MTO on PDAC Spheroids Based on the cytotoxicity data obtained with free MTO on PDAC spheroids, 20 µg/mL was selected as the MTO concentration for the comparison of effects achieved in free or nanoparticle-loaded form. Spheroids treated with non-loaded SPIONs (478 µg/mL Fe) or H2O served as the controls. After dissociation of spheroids into single cells, analysis of viability was performed by flow cytometry (Figure 4a-c).
In PaCa DD183 cell spheroids, treatment with free MTO and SPION-bound MTO resulted in similar cytotoxicity, with comparable numbers of viable cells (34.7 ± 1.8% and 38.8 ± 10.2%, respectively). There were also no significant differences in PANC-1 viable cells between treatment with free and bound MTO, although the effect on PANC-1 cells was much stronger (8.9 ± 3.2% and 3.6 ± 1.8%, respectively). PANC-1 SMAD4 (−/−)-derived spheroids similarly showed no significant differences in the number of viable cells after MTO or SPION-MTO treatment (23.3 ± 0.6% and 16.0 ± 1.0%, respectively). However, the proportion of viable cells was higher than in PANC-1 spheroids, confirming the decreased sensitivity to toxic substances of cells with downregulated or absent SMAD4 expression, as shown in Figure 3. In PaCa DD183 and PANC-1 cells, we observed a decrease in the percentage of necrotic cells after SPION-MTO treatment compared to free MTO, which might be an indication of a slightly lower toxicity of SPION-bound MTO. However, this was not the case for PANC-1 SMAD4 (−/−) cells, which revealed slightly more necrosis when treated with SPION-MTO in comparison to the free drug. To find an explanation for the cell-specific differences in MTO effects, the cellular MTO amount was determined by analyzing the fluorescence intensity (Figure 4d-f). The increased cytotoxicity of MTO toward PANC-1 cells compared with PaCa DD183 cells correlated with an increase in cellular MTO concentration. In PANC-1 cells, the fluorescence intensity upon treatment with free MTO or SPION-MTO was 2.4- or 6.2-fold higher than in PaCa DD183 cells, respectively. Although the number of viable cells in PANC-1 SMAD4 (−/−) spheroids was higher than in spheroids derived from PANC-1 cells, the amount of MTO detected in PANC-1 SMAD4 (−/−) cells was 3.3-fold (free MTO) and 1.2-fold (SPION-MTO) higher than in PANC-1 cells. This indicates a higher uptake of free and particle-bound drug into the SMAD4-deletion mutant cells.

Discussion

Since all solid tumors occur only as 3D structures in vivo, 3D cell culture systems may provide a way to bridge the gap between 2D cell cultures and the in vivo setting. In particular, analyzing the effects of drugs in cells grown in a monolayer cannot reflect the complexity of the 3D structure of a tumor [29]. Usually, in vivo studies are performed using animal models to overcome these problems. However, as the enormous number of mice and other animals used for research should be drastically reduced, in vitro testing systems need to be adapted to better approximate the real situation, which could lead to a reduction in animal testing according to the 3Rs principle (replace-reduce-refine). Testing pancreatic ductal adenocarcinoma presents a particular challenge, as this tumor entity is characterized by hypoxic conditions and a dense, specialized extracellular matrix/desmoplastic stroma. Two-dimensional cell culture systems cannot replicate these features. Moreover, it has been shown that culturing human pancreatic adenocarcinoma cells in monolayers leads to a transition from epithelial cells to mesenchymal phenotypes [46].
Although it has already been shown by others that PDAC cell lines are able to form 3D spheroids, our results demonstrate that we could consistently reproduce tumor spheroids from the PaCa DD183 and PANC-1 cell lines, as well as from the newly generated SMAD4 knock-out cell line PANC-1 SMAD4 (−/−), providing a working platform for drug testing. Even though solid spheroids started to form after approximately 24 h, our goal was to obtain tightly packed tumor spheroids. Due to the 3D structure of the spheroid, the inner core has a limited amount of resources such as oxygen and nutrients available, while metabolic waste needs to be transported out of the spheroid. Thus, the 3D model comes closer to reality than a 2D cell culture. One limitation of the 3D spheroid model used is the monoculture of PDAC cells. In future studies, this model can be further developed by co-culturing cancer-associated fibroblasts and immune cells to better resemble the characteristic dense stroma of PDACs [29,47]. After establishing highly reproducible tumor spheroids from the PaCa DD183, PANC-1 and PANC-1 SMAD4 (−/−) tumor cell lines, we tested their resistance to mitoxantrone. In PaCa DD183 and PANC-1 SMAD4 (−/−) spheroids, MTO treatment resulted in a significant concentration-dependent increase in the area size (Figure 2a,b,e,f). Surprisingly, treatment of PANC-1 spheroids with MTO caused an increase in spheroid area at concentrations of 5 or 10 µg/mL MTO, while at lower and higher concentrations, the compact structure of the spheroids was not lost (Figure 2c,d). Whilst this non-dose-dependent behaviour is unusual at first, it might be explained by the fact that at concentrations of 5 and 10 µg/mL, apoptosis is massively induced in the spheroids, as detected by Ax/PI staining (Figure 4). Apoptosis is accompanied by the controlled shrinkage and condensation of the cell body and degradation into multiple individual apoptotic bodies, which might explain the loss of coherence and thus the increase in area size. In contrast, MTO concentrations of 20 µg/mL might cause an immediate loss of plasma membrane integrity in PANC-1 spheroids due to an overload of toxic MTO, resulting in a higher percentage of necrotic cell death (Figure 3). As this process is very quick, there might be no time for controlled degradation, and therefore a reduced loss of coherence. Furthermore, our results demonstrate how genetically diverse PDAC tumor cell spheroids can respond differently to a given chemotherapeutic agent. PaCa DD183 and PANC-1 originated from a primary PDAC [41]. In the PANC-1 SMAD4 (−/−) cell clone, SMAD4 expression was knocked out using CRISPR/Cas9. It was previously pointed out that PANC-1 tumor spheroids harboring KRAS and p53 mutations are more resistant to chemotherapy and radiotherapy than 2D cell cultures [48][49][50][51]. Therefore, we were interested in whether knocking out SMAD4 in the PANC-1 cell line, a loss that has been found to be associated with a worse prognosis in pancreatic carcinoma, would affect resistance to MTO [52][53][54]. Interestingly, the uptake of free MTO and SPION-bound MTO was increased in PANC-1 SMAD4 (−/−) cells. These findings correspond with the increased uptake of SPIONs in PANC-1 SMAD4 (−/−) cells in comparison to PANC-1 cells, demonstrated by our research group and reported by Friedrich et al. [35]. Nevertheless, these cells showed a higher rate of viability compared to wildtype SMAD4 cells.
Even though increased chemoresistance has been shown for other chemotherapeutic substances [55], to our knowledge this is the first study demonstrating an increased resistance to MTO in PANC-1 spheroids due to SMAD4 knockout. We chose MTO and MTO-loaded SPIONs in this study because of their high chemotherapeutic potency and their capability of inducing immunogenic cell death by the release of damage-associated molecular patterns. At the same time, MTO is quite stable, making it excellent for analysis even in complex media [22][23][24][56]. Our data point out that MTO-loaded SPIONs, despite their slightly impaired penetration in comparison to unbound MTO, reliably induce cell death in pancreatic carcinoma spheroids. Further research is needed to determine whether MTO is released from the particles or whether the particles penetrate into the spheroid together with their chemotherapeutic cargo. In previous investigations with spheroids from HT-29 colon carcinoma cells, using fluorescence microscopy, we observed a delayed penetration of MTO into the tumor spheroids when applied as SPION-MTO [57]. We hypothesize that the MTO must have been bound to the SPIONs at least initially, otherwise we would not have seen differences in uptake velocity and amount. The cytotoxic potential of MTO comes along with severe dose-dependent side effects, which are the main therapeutic limitation for its use in patients [26]. As demonstrated previously by our research group, MDT using MTO-loaded SPIONs could be a way to increase intratumoral cytotoxicity while at the same time reducing systemic side effects [16,58,59]. It has been shown earlier that an immunogenic cell death inducer conjugated to nanocarriers can induce a significant tumor reduction in pancreatic carcinomas [60]. Our group demonstrated previously that the loading of MTO onto SPIONs does not influence the capability of the drug to induce immunogenic cell death in tumor cells [27]. We also showed previously that the chemotherapeutic drug MTO can be accumulated in the tumor region while the immune system is spared from the toxic effects of the drug [16]. The conservation of the immune system and the simultaneous induction of immunogenic cell death in the tumor region might improve therapeutic efficacy in PDAC. Another possible future therapeutic application is the ability of SPIONs to locally induce hyperthermia through alternating magnetic field-induced movement [61]. This could lead to increased anti-tumoral activity, as it has been shown that mild hyperthermia can increase the cytotoxic effects of MTO [62,63]. With further research, combining the previously described features of SPIONs and MTO could open up possibilities for new, more effective treatment approaches for pancreatic carcinomas.

Conclusions

Our study demonstrated that 3D PDAC cell spheroids are a reproducible tool for in vitro assays. We found that nanoparticle-bound MTO induced cell death in 3D PDAC cell spheroids. Furthermore, knocking out SMAD4 in PANC-1 cell spheroids led to an increased uptake of unbound and bound MTO. Interestingly, these cells nevertheless demonstrated an increased resistance to the cytotoxic effects of free and bound MTO. This highlights the importance of the SMAD4 mutation for future treatment attempts. Potentially, a deeper understanding of genetic key mutations such as SMAD4 could enable individualized treatment with improved clinical outcomes. Further research is needed to evaluate the potential of SPIONs for MDT in the field of pancreatic carcinomas.
Locally increasing the intratumoral drug concentration, as well as reducing systemic side effects via MDT, could be a future approach to enhance the therapeutic options for patients with pancreatic carcinomas.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data are available on reasonable request from the corresponding author.

We performed a genetic knock-out of the SMAD4 gene in PANC-1 cells using the CRISPR-Cas9 system [64]. Seventy-two hours after transfection, the positive cells were extracted from the culture by FACS sorting based on the green GFP signal and seeded into 96-well plates for single-clone selection. These single clones were evaluated for successful knock-out of the SMAD4 gene by Western blot. For clones 1 and 2 originating from knock-out by the sgRNA1 pair, complete homozygous absence of SMAD4 could be shown, while the other sgRNA pair appeared to produce a truncated form of SMAD4. We therefore selected clone sgRNA1-1 (PANC-1 SG1-SC1) for further experiments.
High-Grade Cervical Intraepithelial Neoplasia (CIN) Associates with Increased Proliferation and Attenuated Immune Signaling

Implementation of high-risk human papilloma virus (HPV) screening and the increasing proportion of HPV-vaccinated women in the screening program will reduce the percentage of HPV-positive women with oncogenic potential. In search of more specific markers to identify women with a high risk of cancer development, we used RNA sequencing to compare the transcriptomic immune profile of 13 lesions with cervical intraepithelial neoplasia grade 3 (CIN3) or adenocarcinoma in situ (AIS) and 14 normal biopsies from women with detected HPV infections. In CIN3/AIS lesions as compared to normal tissue, 27 differentially expressed genes were identified. Transcriptomic analysis revealed significantly higher expression of a number of genes related to proliferation (CDKN2A, MELK, CDK1, MKI67, CCNB2, BUB1, FOXM1, CDKN3), but significantly lower expression of genes related to a favorable immune response (NCAM1, ARG1, CD160, IL18, CX3CL1). Compared to the RNA sequencing results, good correlation was achieved with relative quantitative PCR analysis for NCAM1 and CDKN2A. Quantification of NCAM1-positive cells with immunohistochemistry showed an epithelial reduction of NCAM1 in CIN3/AIS lesions. In conclusion, NCAM1 and CDKN2A are two promising candidates to identify women who are at high risk of developing cervical cancer and in need of frequent follow-up.

Introduction

Nearly all cervical cancers are caused by a persistent infection with high-risk human papillomavirus (HPV) [1]. During their lifetime, the vast majority of sexually active women incur an HPV infection, but less than 10% develop a persistent infection associated with a higher risk of developing cervical cancer [2]. Invasive cervical cancer is preceded by a [...]

Women with CIN3/AIS biopsies had more severe cytology diagnoses, including both high-grade squamous intraepithelial lesion (HSIL) and atypical squamous cells cannot exclude HSIL (ASC-H), and were more frequently infected with the most oncogenic HPV genotypes (HPV16 or HPV18) as compared to women with normal biopsies (Table 1). A similar trend for the genotypes was observed in the index biopsies, but was not significant.

Figure 1. Overview of the index biopsies and additional cervical samples, before and after the index biopsies, from the women included in the study.

Table 1. Summary of clinicopathological characteristics before and after the index biopsy for women with normal and CIN3/AIS biopsies (n = 27). The number of cases in each group is given, followed by the percentage for each group in parenthesis.
Furthermore, women with CIN3/AIS lesions were all treated, resulting in a significantly shorter time of HPV persistency, and overall they had a shorter follow-up time from the first HPV-positive and/or abnormal liquid-based cytology (LBC) to the last follow-up sample (Table 1). No statistical difference in age was detected between women with normal and CIN3/AIS biopsies (Table 1). HPV testing of DNA isolated from the normal biopsies resulted in five HPV-negative, seven HPV16-positive and two HPV52-positive samples (Table 1). Three biopsies were concordant with HPV16 positivity in both the LBC and the biopsy, while two women were possibly concordant, with non-HPV16/18 positivity in the LBC and HPV52 in the biopsy. All the CIN3/AIS biopsies showed the genotypes detected in the corresponding LBC samples, apart from two samples diagnosed with HSIL without HPV testing.

Gene Expression Using the Oncomine Immune Response Research Assay

All 27 biopsies were subjected to RNA sequencing with the targeted amplicon-based Oncomine™ Immune Response Research Assay [18], analyzing gene expression from 398 genes. Fold changes in gene expression between normal and CIN3/AIS biopsies, together with p-values and false discovery rate (FDR)-adjusted p-values, were calculated by ANOVA using TAC 4.0.2. Twenty-seven genes matched the criteria of fold change > |2| and p < 0.05 and were defined as differentially expressed genes (DEGs) (Figure 2A). To validate the differential gene expression, qPCR was performed for two selected genes (CDKN2A and NCAM1), and the normalized Cq method (2^−ΔCq) was used to calculate the relative gene expression in normal and CIN3/AIS biopsies [19]. Spearman's rho correlation between the RNA sequencing data and the qPCR results showed good correlation for both selected genes (CDKN2A, r = 0.84, p < 0.001 and NCAM1, r = 0.86, p < 0.001). Of the 27 DEGs, 16 genes were upregulated, while 11 genes were downregulated in CIN3/AIS lesions compared to normal cervical tissue (Table 2 and Figure 2A). Unsupervised hierarchical clustering of the 27 DEGs yielded a heatmap with two main clusters (Figure 2B). Cluster 1 contains all normal lesions in addition to one CIN3 lesion. Cluster 2 contains only CIN3/AIS lesions. No correlation to HPV genotype was detected within the clusters. Principal component analysis (PCA) of the 27 DEGs also revealed two distinct clusters according to the conditions, persistent CIN3/AIS and normal tissue (Figure 2C).
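As a minimal illustration of the DEG criteria applied above (|fold change| > 2 and p < 0.05), the sketch below filters a per-gene results table. The column names and the toy table are placeholders rather than the actual TAC 4.0.2 export; only the CDKN2A and NCAM1 fold changes are taken from the results reported in this study, and the p-values are invented for demonstration.

```python
import pandas as pd

def select_degs(results, fc_col="fold_change", p_col="p_value",
                fc_cutoff=2.0, p_cutoff=0.05):
    """Apply the DEG criteria from the text: |fold change| > 2 and p < 0.05."""
    mask = (results[fc_col].abs() > fc_cutoff) & (results[p_col] < p_cutoff)
    return results.loc[mask]

# Illustrative toy table; fold changes for CDKN2A and NCAM1 follow the paper,
# the housekeeping entry and all p-values are invented for the example.
toy = pd.DataFrame({
    "gene": ["CDKN2A", "NCAM1", "HOUSEKEEPER"],
    "fold_change": [10.7, -10.4, 1.1],
    "p_value": [0.001, 0.002, 0.80],
}).set_index("gene")
print(select_degs(toy))
```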
The tumor markers CDKN2A and KRT7 were the most upregulated genes, with fold changes of 10.7 and 4.8, respectively, while markers for proliferation (MELK, CDK1, MKI67, CCNB2, BUB1, FOXM1, CDKN3) had fold changes spanning from 2.0 to 2.7 (Table 2). Gene set enrichment analysis (GSEA) also revealed an enrichment of genes associated with cell cycle processes and proliferation in persistent CIN3/AIS (Table 3). All genes with downregulated expression in persistent CIN3/AIS were related to a favorable immune response (Table 2). The commonly used marker for natural killer (NK) cells, NCAM1, and the macrophage marker, ARG1, had the lowest fold changes of −10.4 and −11.3, respectively (Table 2).

Table 3. Top-ranked C5 Gene Ontology gene set collection for human from gene set enrichment analysis (GSEA) comparing normal (n = 14) versus CIN3/AIS (n = 13) lesions. ES (enrichment score), NES (normalized enrichment score), nominal p-value and FDR (false discovery rate) are included.

Ingenuity Pathway Analysis

Ingenuity Pathway Analysis (IPA®, Qiagen, Redwood City, CA, USA) was performed to test relationships between up- and downregulated genes and pathways relevant for the CIN3 phenotype relative to normal lesions. IPA was performed by selecting the feature "the relationship between cellular infiltration and proliferation of epithelial cells". The predicted model indicates modulation of the Myc-Max-MXD1 network of transcription factors. Myc was predicted to be activated, while the Myc antagonist MXD1 was predicted to be inhibited (Figure 4). Altogether, the model predicts stronger regulation of genes modulated by Myc. Two additional transcription factors, FOXM1 and JUN, play important roles in the model (Figure 4). FOXM1 shows a 2.3-fold upregulated expression in CIN3/AIS lesions (Table 2) and was predicted to activate Myc, among others. FOXM1 is also modeled to regulate JUN, an activator of several molecules and an inhibitor of ARG1, the most downregulated mRNA in CIN3/AIS lesions (Table 2).
Discussion

Unnecessary diagnostic biopsies, and sometimes even cone excisions, during follow-up of persistent HPV infections and normal biopsies can be stressful for the affected women and are time-consuming for both gynecologists and pathologists. An improved toolbox for better identification of women with increased risk for progression and cervical cancer development is warranted. The main objective of the present study was to compare the expression of key genes associated with immune control between normal and CIN3/AIS biopsies, while also including genes related to proliferation, cell cycle checkpoints and cytokine signaling. A deeper understanding of the mechanisms involved in persistent HPV infections and the development of cervical high-grade lesions will strengthen the chances of identifying biomarkers with the ability to stratify between persistent HPV infections with and without the potential for developing into progressive high-grade lesions. In total, 27 genes were differentially expressed between normal and CIN3/AIS biopsies; 16 genes were upregulated, while 11 genes were downregulated in CIN3/AIS lesions. The Oncomine™ results were verified by qPCR for two genes (CDKN2A and NCAM1), and the differential expression of NCAM1 was confirmed at the protein level, strengthening the validity of our results. Overall, our Oncomine™ results show that CIN3/AIS lesions, as compared to normal cervical tissue, had higher expression of genes related to proliferation and tumor markers and lower expression of genes related to a favorable immune response, including T-cell activation, regulation and differentiation of immune cell infiltration. GSEA also identified increased proliferation in the CIN3/AIS lesions. The hierarchical clustering and PCA results show good separation in gene expression between normal and CIN3 biopsies.
In the hierarchical clustering, normal and CIN3/AIS biopsies separate into two clusters, except for one CIN3 biopsy clustering together with the normal biopsies. A sub-cluster of three CIN3/AIS lesions resembles normal biopsies in gene expression related to proliferation, while the expression of genes related to the immune response was in line with the CIN3/AIS biopsies. NK cells play a key role as the first line of host defense against virus-infected cells [20] and are involved in the expansion of T- and B-lymphocytes important for immune surveillance of HPV infections [21]. Interestingly, the Oncomine results showed downregulation of several immune-related genes in CIN3/AIS lesions compared to normal lesions. For instance, NCAM1 was more than 10-fold downregulated in CIN3/AIS lesions. Besides being a marker for NK cells, NCAM1 is expressed by various T cells and dendritic cells (DCs), both important for immune surveillance of HPV infections [22]. Our IHC results showed, however, no differences in the total number of NCAM1-positive cells between normal and CIN3/AIS biopsies, but within the epithelium of normal biopsies, a significantly higher number of positive cells with strong expression of NCAM1 was found. This could explain the overall downregulated gene expression of NCAM1 in CIN3/AIS biopsies. Lima et al. identified two distinct phenotypes of NCAM1-positive NK cells, NCAM1hi versus NCAM1lo, and related NCAM1hi to a subset of activated circulating NK cells [23]. This is in agreement with Reiners et al., who connected reduced cytotoxicity of NK cells to downregulated expression of NCAM1 [24]. Increased numbers of circulating NCAM1-negative NK cells have also been found in patients with various virus infections and among the elderly, but not in healthy individuals [25][26][27][28]. Altogether, these studies emphasize the association between NCAM1 expression and the effector function of NK cells and support decreased activity of NK cells in the CIN3/AIS lesions in our study. This assumption was further strengthened by the decreased expression of CX3CL1 in the CIN3/AIS biopsies. CX3CL1 is expressed on mature DCs and, by promoting activation of NK cells, plays a crucial role in the NK/DC interaction [29,30]. In addition, CD160, an NK receptor, was also downregulated in the CIN3/AIS lesions. CD160 and IL18, which was likewise downregulated in the CIN3/AIS lesions, are both essential for NK cell-mediated IFN-γ production [31,32]. The production of IFN-γ by activated NK cells promotes a cytotoxic T-cell response and the development of memory T-cells involved in adaptive immunity [21]. In summary, our results suggest that in CIN3/AIS lesions the NK cells are less effective, lacking the ability to migrate into the epithelium [22], which in turn hampers their capability to activate T-cells. A weakened immune response in CIN3/AIS lesions is further supported by the strong downregulation of ARG1 in the CIN3/AIS biopsies. ARG1 is expressed by macrophages and downregulates immunosuppressive cytokine production (e.g., IL-4, IL-5 and IL-13) by T-helper 2 (Th2) cells in response to infectious disease [33]. Reduced ARG1 in CIN3/AIS lesions is likely a sign of chronic inflammation and a downregulated immune response, promoted by Th2 cell types as a consequence of persistent and unresolved HPV infections [34]. Further weakening of the adaptive immune system is also suggested by the decreased expression of the key inflammatory chemokines CCL5 and CXCL11 in the CIN3/AIS lesions.
Together with CXCL10 and CXCL9, they are essential for the recruitment of effector CD4+ T-helper cells, CD8+ cytotoxic T-cells and NK cells [35,36]. Altogether, our results imply that the CIN3/AIS lesions tend to have an immune-suppressive environment. The tumor marker CDKN2A had the highest fold change in CIN3/AIS as compared to normal biopsies. This gene encodes the tumor suppressor p16INK4a, a cyclin-dependent kinase inhibitor regulating the G1/S transition. p16INK4a becomes overexpressed as a result of inactivation and degradation of the cell-cycle regulatory retinoblastoma protein (pRb) by the HPV E7 oncogene [37]. The E2F group of transcription factors, normally bound and inactivated by pRb, can then bind the promoter region, facilitating the increased expression of p16INK4a. Another gene with significantly upregulated expression in CIN3/AIS lesions was MKI67, a marker for cell proliferation. MKI67 and p16INK4a are commonly used as immunohistochemical markers in CIN diagnostics and are associated with the grade of dysplastic changes in the epithelium [6,7], but their ability to predict the outcome of an HPV infection or a CIN lesion is limited [6]. Interpretation of immunohistochemical biomarkers is subjective and in need of fine-tuning to reach acceptable sensitivity and specificity levels [38,39]. Other genes identified in this study as overexpressed in CIN3/AIS lesions compared to normal cervical tissue are also associated with tumor development (KRT7, TNFRSF18 and TNFRSF4). The tumor marker KRT7 has been found to be upregulated in a small population of squamo-columnar (SC) junction cells, as compared to other squamous and columnar cells of the transformation zone. SC junction cells are described as having a unique expression profile with embryonic characteristics involved in cervical carcinogenesis [40]. TNFRSF18 and TNFRSF4 belong to the tumor necrosis factor receptor superfamily [41]. They are both expressed on regulatory T lymphocytes (Tregs), an immunosuppressive subset of T cells, and promote chronic inflammation associated with HPV persistence and cancer development, suppressing antitumor effects through suppression of effector cytotoxic T cells (CTLs) [42]. Zhao et al. identified TOP2A, CDK1, BUB1 and CCNB2 as candidate biomarkers for cervical cancer progression in a study using Gene Ontology analysis and the Gene Expression Omnibus database [43]. All four genes showed significantly higher expression in CIN3 as compared to normal biopsies in the current study. Specifically, TOP2A, a promoter of epithelial-mesenchymal transition (EMT), has been associated with increased invasive properties of cancer cells and has been suggested as a prognostic factor for cervical cancer [44]. Furthermore, TOP2A combined with CDKN2A mRNA expression showed high sensitivity and specificity for HSIL in LBC samples [45]. KIAA0101 and FOXM1 are both known for their oncogenic properties. The transcription factor FOXM1 is a driver of angiogenesis, invasion and metastasis [46] and is associated with poor prognosis in early-stage cervical cancer. KIAA0101, a promoter of microvascular invasion by inducing EMT and a critical target of FOXM1 [47], is involved in cell proliferation, cell survival and DNA repair and functions as a potential oncogene. The IPA diagram presented in this study illustrates how FOXM1 and the proto-oncogenic AP-1 transcription factor subunit Jun, in addition to the Myc/Max/MXD1 network of transcription factors, play important roles in the expression of the DEGs identified in the current study.
The proto-oncogenic transcription factor Myc activates genes related to proliferation such as CDKN2A, CDK1, MKI67, MMP9, BUB1 and CCNB2, while the expression of the non-inflammatory genes NCAM1 and CXCL10 is inhibited. By antagonizing Myc-mediated transcriptional activation, MXD1 (Mad1) acts as a modulator, inhibiting activation of genes related to proliferation and transformation, such as TOP2A, BUB1 and CCNB2 [48], all of which were found to be upregulated in CIN3/AIS in this study. Furthermore, Jun acts as a fine-tuner of macrophage activation by inhibiting ARG1 [49], which was found to be upregulated in the normal biopsies. Collectively, the differentially expressed genes in the CIN3 biopsies suggest an environment with increased proliferation, increased plasticity, invasive properties and reduced immunological activity. A limitation of the study is the technique (macrodissection) used for the collection of tissue for RNA isolation. The density and number of cell types in the epithelial and stromal compartments differ in the isolated mRNA. Normal biopsies have a limited number of proliferating cells in the epithelium, and CD34+ fibrocytes are widely distributed in the stroma [50]. The CIN3/AIS biopsies, however, have high numbers of proliferating cells in the epithelium, and many different immune cells can be observed in the stromal compartment, while a decreasing density of CD34+ fibrocytes has been observed [6,7,50]. In addition, the mRNA isolated from the epithelium and stroma is pooled. Furthermore, Oncomine only includes a subset of 398 immune-related genes; thus, other transcriptomic differences between normal and CIN3/AIS biopsies will not be revealed. Lastly, the moderate number of observations included hampers the strength of the results. A higher number of samples would, for instance, increase the chance of detecting the potential impact of HPV genotypes. Strengths of the current study include that the two compared cohorts were distinct. Women with normal biopsies remained normal/CIN1 throughout the observation period. The CIN3/AIS biopsies were from otherwise healthy women with first-time, histologically confirmed (p16/Ki67-supported) CIN3/AIS, evaluated by an experienced pathologist. The CIN3/AIS diagnoses were all confirmed in the cone excisions. Furthermore, the unique characteristics of this study make it highly relevant as a starting point for the development of prognostic tools for high-grade CIN. We have identified differentially expressed RNA transcripts related to proliferation and immune responses in HPV-positive women with normal or CIN3/AIS biopsies. The DEGs are all potential prognostic biomarkers for high-grade CIN and cervical cancer development. NCAM1 together with CDKN2A are the most promising candidates, given the highest fold changes between the groups. However, they need to be validated retrospectively in larger independent cohorts of HPV-positive women with normal or CIN3 cervical lesions using techniques already established in clinical laboratories, such as IHC and qPCR. The development of robust prognostic biomarkers would not only benefit a large number of women by providing better cervical cancer prevention, but also reduce the burden on hospital staff and enable more efficient budgeting.
Biological Material

From March 2015 until June 2018, formalin-fixed, paraffin-embedded (FFPE) biopsies from a cohort of 355 women attending the Norwegian cervical cancer screening program (NCCSP) were prospectively collected at the Gynecologic outpatient clinic, Stavanger University Hospital (SUH), Stavanger, Norway, and stored in the biobank "General biobank for cervical cancer and high-grade Cervical Intraepithelial Neoplasia" (2016/805/REC). Written informed consent was obtained at inclusion. Twenty-seven women were selected from the biobank for the current study based on their biopsy diagnosis and screening history. Fourteen women had biopsies scored as normal, 12 had CIN3 and one had an AIS biopsy (Figure 1). The follow-up screening history during a median observational period of 1198 (414-2199) days was retrieved from the laboratory data system at the Pathology Department at SUH (Unilab 700). The screening history included the number and results of (1) [...]

RNA/DNA Isolation

The RecoverAll Total Nucleic Acid Isolation kit (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA) was used for simultaneous RNA and DNA isolation, performed according to the manufacturer's protocol. For CIN3/AIS biopsies, nucleic acids were isolated from 5-10 sections of 4-5 µm from the FFPE tissue block, macrodissected from the most severely dysplastic area of the epithelium and the adjacent stroma. For normal biopsies, nucleic acids were isolated from 2-5 sections of 4-5 µm comprising all the biopsies in the FFPE tissue block. To quality-assure the normal and CIN3/AIS diagnoses, sections adjacent to those used for isolation were hematoxylin/eosin (HE) stained and examined by the pathologist.

HPV Testing

DNA isolated from the biopsies was used for genotyping with the InnoLipa HPV detection system (Fujirebio Europe N.V., Gent, Belgium). InnoLipa is an automated line probe assay, based on the reverse hybridization principle, for the detection of 32 different genotypes, including high-risk, low-risk and probably high-risk HPV. HPV testing of ThinPrep LBC follow-up samples was performed on the fully automated cobas 4800 system (Roche Molecular Diagnostics, Pleasanton, CA, USA). The cobas 4800 simultaneously detects 14 HPV genotypes, including specific identification of HPV16 and HPV18.

Functional RNA Quantification

RNA quantification assays were used to quantify the exact amplifiable RNA concentrations from the FFPE samples. A one-step real-time quantitative PCR (RT-qPCR) procedure was applied (LightCycler® 480 System, Roche Diagnostics, Rotkreuz, Switzerland) to measure the RNA concentration with TaqMan Fast Advanced Master Mix together with a TaqMan probe specific for the housekeeping gene GUSB (both Thermo Fisher Scientific). A target gene standard curve was generated by creating a fourfold dilution series (range 0.05-50 ng/µL) of a commercially available standard (HL-60, 100 ng/µL total RNA). The RNA concentrations were calculated by comparing the mean Cq of triplicates measured for test samples to the Cq measured for the different HL-60 dilutions of the standard curve.

RNA Reverse Transcription

The SuperScript VILO cDNA Synthesis Kit (Thermo Fisher Scientific) was used for reverse transcription of 10 ng total RNA, as calculated by the functional RNA quantification.
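The standard-curve calculation described above can be illustrated with a short sketch: Cq is assumed to be linear in log10(concentration) over the dilution series, and the mean Cq of a sample's triplicates is converted back to a concentration from the fitted line. All Cq values in the example are invented placeholders, not measured data from this study.

```python
import numpy as np

# Fourfold HL-60 dilution series (ng/uL) and hypothetical measured Cq values.
standard_conc = np.array([50.0, 12.5, 3.125, 0.78, 0.195, 0.05])
standard_cq = np.array([22.1, 24.2, 26.3, 28.4, 30.5, 32.6])

# Linear fit: Cq = slope * log10(conc) + intercept
slope, intercept = np.polyfit(np.log10(standard_conc), standard_cq, deg=1)

def concentration_from_cq(cq_triplicate):
    """Estimate amplifiable RNA concentration from the mean Cq of triplicates."""
    mean_cq = np.mean(cq_triplicate)
    return 10 ** ((mean_cq - intercept) / slope)

sample_cq = [27.0, 27.2, 26.9]   # hypothetical triplicate Cq of a test sample
print(f"{concentration_from_cq(sample_cq):.2f} ng/uL")
```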
Next Generation Sequencing (NGS)

The Oncomine™ Immune Response Research Assay (Thermo Fisher Scientific) targets and quantifies the expression levels of a panel of 398 immune-related genes, including 10 housekeeping genes, using Ion Torrent (Thermo Fisher Scientific) next generation sequencing (NGS). Automated library preparation of 32 samples, each containing 10 ng/µL cDNA, was performed in batches of 4 libraries of 8 samples, according to the manufacturer's protocol, on the Ion Chef System. The library concentrations were measured using the Ion Library TaqMan Quantitation Kit (Thermo Fisher Scientific), and the Ion OneTouch™ 2 System was used to prepare the enriched, template-positive Ion PI™ Ion Sphere™ Particles (ISPs). In total, 10 µL from each library, diluted 1/6 with Tris low-EDTA (Low TE) buffer, were combined. The combined libraries were further diluted to 100 pM and used as a template in the emulsion PCR reaction on the Ion OneTouch™ 2 Instrument, using the Ion PI™ Hi-Q™ OT2 200 Kit (Thermo Fisher Scientific). Quality control of template-positive ISPs was performed on a Qubit™ 2.0 Fluorometer using the Ion Sphere™ Quality Control Kit, before enrichment of ISPs was performed using the Ion OneTouch™ ES instrument. Target sequencing was performed on an Ion Proton instrument using the Ion PI™ Hi-Q™ Sequencing 200 chemistry and an Ion PI™ chip (Thermo Fisher Scientific). Subsequently, the sequencing results were downloaded to the Affymetrix Transcriptome Analysis Console (TAC) (Thermo Fisher Scientific) for further data analysis of the 27 samples used in the current study.

Transcriptomic Analysis

Mean housekeeping-gene-scaled log2 count data for the 398 genes in the Oncomine Immune Response panel were obtained from the Torrent Suite™ Software (Thermo Fisher Scientific). TAC 4.0.2 was used to analyze differential expression between the two experimental groups. An exploratory grouping analysis provides gene-level analysis from small starting material, and the workflow involves a two-step process: the first step generates clusters from the data, followed by expression analysis to generate fold changes, p-values and false discovery rate (FDR)-adjusted p-values. TAC is based on analysis of variance (ANOVA) and fits a linear model to each probe set independently of the others. We applied a one-way ANOVA comparison and, since the number of samples analyzed is small, the eBayes analysis corrects the variance of the ANOVA with an empirical Bayes approach that uses information from all probe sets to yield an improved variance estimate. Unsupervised hierarchical clustering of the 27 genes differentially expressed between normal and CIN3 lesions was performed with the ClustVis web tool (https://biit.cs.ut.ee/clustvis/, accessed on 26 July 2021). The heatmap was generated by applying correlation distance measures and average linkage. The IPA analysis revealed relationships between upstream regulators and differentially expressed genes in the dataset using z-score predictions of activation (z-score ≥ 2) or inhibition (z-score ≤ −2). The PCA was generated with variance scaling applied to rows. Singular value decomposition with imputation was used to calculate the principal components. The ClustVis program was used for visualizing the PCA, illustrating the variability in gene expression of the 27 DEGs according to the conditions normal or CIN3/AIS, and also according to HPV genotypes.
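A minimal sketch of the clustering and PCA settings described above (correlation distance with average linkage, and unit-variance scaling before SVD-based PCA) is given below. The expression matrix is a random stand-in, since the 27-DEG data are not reproduced here, and this is not the ClustVis implementation itself.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Random stand-in for a 27-DEG x 27-sample expression matrix.
rng = np.random.default_rng(1)
expr = rng.normal(size=(27, 27))        # rows: genes (DEGs), columns: samples

# Correlation distance between samples = 1 - Pearson correlation,
# clustered with average linkage as in the ClustVis settings.
corr = np.corrcoef(expr.T)
dist_condensed = 1.0 - corr[np.triu_indices_from(corr, k=1)]
tree = linkage(dist_condensed, method="average")
clusters = fcluster(tree, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(clusters)[1:])

# PCA of samples after scaling each gene to unit variance (row variance scaling).
scaled = StandardScaler().fit_transform(expr.T)   # samples x genes
scores = PCA(n_components=2).fit_transform(scaled)
print("PC1/PC2 of first sample:", scores[0])
```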
Relative Quantitative Real-Time PCR (qPCR)

qPCR was used to validate the gene expression of CDKN2A and NCAM1 from the NGS analysis. The qPCR was performed on the LightCycler® 480 System with cDNA from 14 normal (Group 1) and 13 CIN3/AIS (Group 2) biopsies. Validation of CDKN2A was performed on cDNA synthesized for the NGS analysis, while new cDNA was synthesized for the validation of NCAM1. The qPCR was assessed using TaqMan Fast Advanced Master Mix (Life Technologies) and TaqMan probes targeting CDKN2A (hs00923894) and NCAM1 (hs00941830). Based on results from the NGS analysis, the housekeeping gene ABCF1 (hs01073518) was chosen as the internal reference.

Immunohistochemistry of NCAM1

Optimization of antigen retrieval and antibody dilution for CD56 was performed prior to the IHC analysis. For all samples, 2 µm thick paraffin sections, adjacent to an HE-stained section, were mounted on Superfrost Plus slides (Menzel, Braunschweig, Germany) and incubated for one hour at 60 °C before being placed in the Dako Omnis autostainer (DAKO Agilent, Santa Clara, CA, USA). After deparaffinization and rehydration, antigen retrieval was performed with EnVision Flex (EnV Flex) high-pH Tris buffer (DAKO Agilent) at 97 °C for 30 min. The slides were incubated for 20 min with the primary antibody against CD56, diluted 1:50 (rabbit monoclonal anti-human MRQ-42; Cell Marque, Rocklin, CA, USA), followed by peroxidase treatment for 3 min. The immune complex was further visualized by incubation with EnV Flex+ rabbit linker for 10 min, EnV Flex horseradish peroxidase (HRP) for 20 min, and finally EnV Flex substrate for 5 min. Sections were counterstained with hematoxylin, followed by dehydration in graded ethanol, and finally mounted manually. The number of positive cells in the epithelium, the stroma or both was assessed by counting the number of positive cells in two neighboring fields of vision (40× objective, 0.52 mm, numerical aperture 0.65) ≈ 1.0 mm². In CIN3 biopsies, the most severely dysplastic area with the most intensive p16 staining was interpreted. In normal biopsies, an area containing clusters of NCAM1-positive cells was interpreted. The extent and degree of immunopositivity were assessed by consensus scoring by two observers using the same microscope.

Statistics

All probability values were two-sided and considered statistically significant if <0.05. For categorical variables, the correlation between groups was assessed using Pearson's χ² or Fisher's exact test, as appropriate. For continuous variables, the Mann-Whitney U test and the independent-samples t-test were applied. The correlation between the mean housekeeping-gene-scaled log2 NGS data and the relative gene expression of CDKN2A and NCAM1 (also designated CD56 in NCBI Gene) by qPCR was investigated using Spearman's rho correlation coefficient.
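The validation workflow above combines the normalized Cq method with a rank correlation against the NGS data. The sketch below shows both steps under stated assumptions: ABCF1 is the reference gene, as in the text, but all Cq values and NGS numbers are invented placeholders for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

def relative_expression(cq_target, cq_reference):
    """Relative expression by the normalized Cq method, 2**(-dCq),
    using ABCF1 as the internal reference gene (as in the text)."""
    dcq = np.asarray(cq_target) - np.asarray(cq_reference)
    return 2.0 ** (-dcq)

# Hypothetical Cq values for one target gene across four biopsies.
cq_cdkn2a = np.array([24.1, 23.5, 29.8, 30.2])
cq_abcf1 = np.array([25.0, 25.3, 25.1, 24.9])
rel_expr = relative_expression(cq_cdkn2a, cq_abcf1)

# Spearman correlation with matched NGS log2 values (also invented here).
ngs_log2 = np.array([9.8, 10.1, 4.2, 3.9])
rho, p = spearmanr(ngs_log2, rel_expr)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```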
An independent-samples t-test was conducted to compare the number of NCAM1-positive cells per 1.0 mm² counted in CIN3/AIS and normal biopsies. Fisher's exact test was performed to compare weak versus strong expression of NCAM1-positive cells in the epithelium and the stroma of normal and CIN3/AIS biopsies. All statistical analyses, if not otherwise indicated, were performed using IBM SPSS Statistics, version 26 (SPSS Inc., Chicago, IL, USA).
Perpendicular Magnetic Anisotropy in Heusler Alloy Films and Their Magnetoresistive Junctions

For the sustainable development of spintronic devices, a half-metallic ferromagnetic film needs to be developed as a spin source exhibiting 100% spin polarisation at its Fermi level at room temperature. One of the most promising candidates for such a film is a Heusler-alloy film, which has already been proven to achieve half-metallicity in the bulk region of the film. The Heusler alloys have predominantly cubic crystalline structures with small magnetocrystalline anisotropy. In order to use these alloys in perpendicularly magnetised devices, which are advantageous over in-plane devices due to their scalability, lattice distortion is required, introduced by atomic substitution and interfacial lattice mismatch. In this review, recent development in perpendicularly-magnetised Heusler-alloy films is overviewed and their magnetoresistive junctions are discussed. In particular, focus is given to binary Heusler alloys formed by replacing the second element in the ternary Heusler alloys with the third one, e.g., MnGa and MnGe, and to interfacially-induced anisotropy achieved by attaching oxides and metals with different lattice constants to the Heusler alloys. These alloys can improve the performance of spintronic devices with higher recording capacity.

Introduction

Since the discovery of giant magnetoresistance (GMR) by Fert [1] and Grünberg [2] independently, magnetoresistive (MR) junctions have been used widely in many spintronic devices [3,4], e.g., as a read head in a hard disk drive (HDD) [5] and as a cell in a magnetic random access memory (MRAM) [6]. The maximum GMR ratio achieved in a [Co (0.8)/Cu (0.83)]60 (thickness in nm) junction was reported to be 65% at 300 K [7]. Here, the MR ratio is determined by MR = (RAP − RP)/RP (Equation (1)), where RP and RAP represent the resistance measured for parallel and antiparallel configurations of the ferromagnet magnetisations, respectively. In parallel, tunnelling magnetoresistance (TMR) [8] has been observed at room temperature (RT) by utilising an oxide barrier instead of a non-magnetic spacer [9,10], and its ratio was improved very rapidly to 81% in a Co0.4Fe0.4B0.2 (3)/Al (0.6)-Ox/Co0.4Fe0.4B0.2 (2.5) (thickness in nm) junction at RT [11]. By replacing amorphous AlOx with epitaxial MgO [12,13], as theoretically predicted [14,15], a 604% TMR ratio has been achieved in a Co0.2Fe0.6B0.2 (6)/MgO (2.1)/Co0.2Fe0.6B0.2 (4) (thickness in nm) junction at RT [16]. Such a drastic increase in the TMR ratio has, for example, increased the areal density of HDDs by almost four times over the last decade [3]. For further improvement in HDD and MRAM, it is critical to satisfy two criteria: (i) a low resistance-area product (RA) and (ii) perpendicular magnetic anisotropy. The low RA is important to reduce power consumption and the resulting unfavourable side effects, such as Joule heating and possible damage to spintronic devices. The perpendicular anisotropy is essential to achieve faster magnetisation switching [17,18] and to minimise stray fields from an MR junction and the associated cross-talk between the junction cells for MRAM. The recent development in MR ratios and RA is summarised in Figure 1. Figure 1 also includes the target requirements to achieve 1 Gbit MRAM, 10 Gbit MRAM and 2 Tbit/in² HDD [19].
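As a quick numerical check of the MR definition above, a small sketch is given below. The resistance values are illustrative placeholders chosen so that the ratios reproduce the quoted 65% GMR and 604% TMR figures; they are not measurements from the cited junctions.

```python
def mr_ratio(r_parallel: float, r_antiparallel: float) -> float:
    """MR ratio as in Equation (1): (R_AP - R_P) / R_P, in percent."""
    return (r_antiparallel - r_parallel) / r_parallel * 100.0

# Hypothetical resistances in arbitrary units.
print(mr_ratio(1.0, 1.65))   # ~65%, comparable to the Co/Cu GMR multilayer
print(mr_ratio(1.0, 7.04))   # ~604%, comparable to the CoFeB/MgO/CoFeB MTJ
# With a fully half-metallic electrode, one spin channel is blocked in the
# antiparallel state, so R_AP diverges and the MR ratio tends to infinity.
```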
For the 1 Gbit MRAM, the junction cell diameter (fabrication rule) should be <65 nm, with RA < 30 Ω·µm² and an MR ratio > 100% [19]. For the 10 Gbit MRAM, the cell diameter should be <20 nm, with RA < 3.5 Ω·µm² and an MR ratio > 100%. Here, a low RA is required to satisfy impedance matching [20] with the transistor attached to each MRAM cell, and a large MR ratio is essential to maintain a signal-to-noise ratio that allows a read-out signal voltage to be detected with a small applied current. In order to achieve these requirements, intensive research has been performed on CoFeB/MgO/CoFeB junctions. As shown as open triangles with a blue fit in Figure 1, in-plane CoFeB/MgO/CoFeB magnetic tunnel junctions (MTJs) have successfully satisfied the requirement for the 10 Gbit MRAM by achieving RA = 0.9 Ω·µm² and TMR = 102% at RT [21]. Later, a perpendicularly-magnetised MTJ (p-MTJ) also achieved the requirement for the 1 Gbit MRAM, with RA = 18 Ω·µm² and TMR = 124% at RT [22], although further improvement is required for the 10 Gbit MRAM target. Such MTJs will replace the current-generation 256 Mbit MRAM with perpendicular magnetic anisotropy produced by Everspin [23]. For the 2 Tbit/in² HDD, on the other hand, these MTJs cannot be used, as the RA requirement is almost one order of magnitude smaller than that for the 10 Gbit MRAM [24]. One approach uses nano-oxide layers (NOL), which restrict the current paths perpendicular to the GMR stack by oxidising part of the Cu or Al spacer layer [25]. In a Co0.5Fe0.5 (2.5)/Al-NOL/Co0.5Fe0.5 (2.5) junction, RA = 0.5~1.5 Ω·µm² and MR = 7~10% at RT have been achieved. These values are below the requirement for the 2 Tbit/in² HDD, and hence further improvement in GMR or TMR junctions is crucial.

Figure 1. Relationship between magnetoresistance (MR) and resistance-area product (RA) of magnetic tunnel junctions (MTJs) with CoFeB/MgO/CoFeB (blue triangles), nano-oxide layers (NOL, green squares) and Heusler alloys (red circles) with in-plane (open symbols) and perpendicular magnetic anisotropy (closed symbols), together with that of giant magnetoresistive (GMR) junctions with Heusler alloys (orange rhombi). The target requirements for 2 Tbit/in² hard disk drive (HDD) read heads as well as 1 and 10 Gbit magnetic random access memory (MRAM) applications are shown as purple and yellow shaded regions, respectively.
Heusler-Alloy Junctions

For the further improvement of MR junctions to meet the requirements for 10 Gbit MRAM and 2 Tbit/in² HDD, a half-metallic ferromagnet needs to be developed to achieve 100% spin polarisation at the Fermi energy at RT, leading to an infinite MR ratio according to Equation (1). The half-metallicity is induced by the formation of a bandgap in only one of the electron-spin bands. There have been five types of half-metallic ferromagnets theoretically proposed and experimentally demonstrated to date: (i) oxide compounds (e.g., rutile CrO2 [26] and spinel Fe3O4 [27]); (ii) perovskites (e.g., (La,Sr)MnO3 [28]); (iii) magnetic semiconductors, including zinc-blende compounds (e.g., EuO and EuS [29], (Ga,Mn)As [30] and CrAs [31]); and (iv) Heusler alloys (e.g., NiMnSb [32]). Magnetic semiconductors have been reported to show 100% spin polarisation due to the Zeeman splitting of their two spin bands; however, their Curie temperature is still below RT [33]. Low-temperature Andreev reflection measurements have confirmed that both rutile CrO2 and perovskite La0.7Sr0.3MnO3 possess almost 100% spin polarisation [34]; however, no experimental report has proved the half-metallicity at RT. As the most promising candidate for RT half-metallicity, the Heusler alloys have been studied extensively, as detailed in the following sections [35][36][37].

Crystalline Structures

Since the initial discovery of ferromagnetism in the ternary Cu2MnAl alloy, consisting of non-magnetic elements, by Heusler in 1903 [38], the Heusler alloys have been studied for various applications, including magnetic refrigeration [39] and shape memory [40]. The Heusler alloys are categorised into two types: full- and half-Heusler alloys, with the forms X2YZ and XYZ, respectively, where X and Y are transition metals and Z is a semiconductor or a non-magnetic element. Figure 2a shows a schematic crystalline structure of the full-Heusler alloy in the perfectly ordered L21-phase.
By mixing Y and Z, the alloy forms the partially-mixed B2-phase, while further mixing among X, Y and Z produces the fully-disordered A2-phase. By replacing half of the X atoms with Y-site atoms, Y atoms with Z-site atoms and Z atoms with X-site atoms, inverse Heusler alloys in the D03-phase can be formed. The removal of half of the X atoms produces the half-Heusler alloys in the C1b-phase. Additionally, some of the constituent atoms can be replaced with other atoms, allowing their crystalline and magnetic properties, such as lattice constants, magnetic moments and magnetic anisotropy, to be controlled. Due to these complicated crystalline structures, the Heusler alloys require very high temperatures (typically >1000 K in bulk form and >650 K in thin-film form) for their crystallisation [41]. This prevents the Heusler alloys from being used in spintronic devices.
Recently, layer-by-layer growth in the Heusler alloy (110) plane (see Figure 2b) has been reported to decrease the crystallisation energy, i.e., the annealing temperature, by over 50% [42]. A similar crystallisation process has been demonstrated at higher temperature to uniformly crystallise the Heusler-alloy films [43]. Magnetic Properties The robustness of the half-metallicity depends on the size and definition of the bandgap formed in one electron-spin band in the vicinity of the Fermi energy. The bandgap is formed by the strong d-band hybridisation between the two transition metals X and Y, according to ab initio calculations [34]. Typically, a bandgap of 0.4~0.8 eV is expected to be formed at 0 K [36]. At a finite temperature, however, the bandgap becomes smaller and the edge of the gap becomes poorly defined. The bandgap has been measured by detecting the photon absorption of circularly-polarised infrared light with energy corresponding to the bandgap [44]. The other advantage of the Heusler alloys is the controllability of their magnetic properties, such as their saturation magnetisation and Curie temperature. The total spin moments per Heusler-alloy formula unit (f.u.) (Mt) have been reported to follow the generalised Slater-Pauling curve as Mt = Zt − 24 (full-Heusler) and Mt = Zt − 18 (half-Heusler), where Zt is the total number of valence-band electrons (see Figure 3) [45]. Atomic substitution of any constituent atom in the Heusler alloys can continuously change their magnetic moments and allows the alloys to be customised for a specific application. There are over 2500 combinations that form Heusler alloys [36], among which a few tens of alloys have been reported to become half-metallic ferromagnets according to theoretical calculations. Atomic substitution further increases the applicability of the alloys for custom design.
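The Slater-Pauling relation quoted above lends itself to a one-line estimate of the expected moment. The following is a minimal sketch, not code from the review; the valence-electron counts in the lookup table are standard textbook values assumed here for illustration.

    VALENCE = {"Co": 9, "Fe": 8, "Mn": 7, "Ni": 10, "Cr": 6,
               "Al": 3, "Si": 4, "Ga": 3, "Ge": 4, "Sb": 5}

    def slater_pauling_moment(formula, full_heusler=True):
        """Predicted spin moment in Bohr magnetons per formula unit."""
        z_t = sum(VALENCE[element] * count for element, count in formula)
        return z_t - 24 if full_heusler else z_t - 18

    # Full-Heusler examples: Co2MnSi -> 5, Co2FeSi -> 6 (mu_B per f.u.)
    print(slater_pauling_moment([("Co", 2), ("Mn", 1), ("Si", 1)]))
    print(slater_pauling_moment([("Co", 2), ("Fe", 1), ("Si", 1)]))
    # Half-Heusler example: NiMnSb -> 4 mu_B per f.u.
    print(slater_pauling_moment([("Ni", 1), ("Mn", 1), ("Sb", 1)], full_heusler=False))

With the same counts, Co2MnGe evaluates to 5 µB/f.u., close to the experimentally estimated 5.1 µB/f.u. mentioned later for epitaxial films.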
Tunnelling Magnetoresistive Junctions (1) Co2(Cr,Fe)Z A pioneering work on a Heusler-alloy junction was carried out by Block et al. [46]. They reported a large negative MR ratio at RT in the quaternary full-Heusler Co2Cr0.6Fe0.4Al alloy, which experimentally demonstrates the controllability of the magnetic properties of the alloys by substituting their constituent elements. They report 30% MR at RT with pressed powder compacts, which act as a series of MTJs. The Co2(Cr,Fe)Al alloys have then been used in MTJs in their polycrystalline form. A MTJ with the structure Co2Cr0.6Fe0.4Al/AlOx/CoFe shows 16% TMR at RT [47], which was later improved up to 19% at RT by barrier optimisation [48]. The half-metallicity of the Co2Cr1−xFexAl full-Heusler alloys has been found to be robust against atomic disorder using first-principles calculations by Shirai et al. [52]. In the Co2CrAl alloys, the atomic disorder between Cr and Al, which eventually deforms the crystalline structure from L21 into B2 at a disorder level of 0.5, maintains a very high spin polarisation (P) of 97% for L21 and 93% for B2. The Co-Cr type disorder, however, destroys the half-metallicity rapidly, i.e., P falls to zero at a disorder level of 0.4 and Mt becomes 2.0 µB/f.u. at full disorder. For the Fe substitution x with Cr, a high P above 90% is calculated to be maintained up to x = 0.35. Similarly, the CrFe-Al type disorder preserves the spin polarisation and the magnetic moment above 80% and 3.7 µB/f.u., respectively, up to a disorder level of 0.5, while the Co-CrFe disorder eliminates P at a disorder level of 0.3. These findings may explain the decrease in the measured TMR ratios compared with the theoretically predicted value, due to interfacial disorder. Strain also affects the half-metallicity in the Co2CrAl alloy, according to calculations [53]. P stays at 100% in the lattice strain range between −1 and +3%, and remains higher than 90% up to +10% strain. The bandgap is also maintained against the strain and is maximised under +3% strain. P also remains at 100% against tetragonal distortion in the range of ±2%, which is a great advantage for epitaxial growth on a GaAs substrate [54] and other seed layers. Unlike Co2CrAl, Co2FeAl is not theoretically predicted to be half-metallic [50].
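Equation (1) is not reproduced in this excerpt; assuming it is the standard Julliere expression relating the TMR ratio to the spin polarisations P1 and P2 of the two electrodes, TMR = 2·P1·P2/(1 − P1·P2), the connection between the calculated polarisations quoted above and the measured TMR ratios can be sketched as follows. The polarisation values in the loop are illustrative only, not data from the review.

    def julliere_tmr(p1, p2):
        """TMR ratio from electrode spin polarisations (Julliere form, assumed here)."""
        denominator = 1.0 - p1 * p2
        if denominator <= 0.0:
            return float("inf")  # ideal half-metallic electrodes, P1 = P2 = 1
        return 2.0 * p1 * p2 / denominator

    # Illustrative (hypothetical) polarisation values only:
    for p in (0.5, 0.8, 0.93, 0.97):
        print(f"P = {p:.2f}  ->  TMR = {100 * julliere_tmr(p, p):.0f}%")
    print("P = 1.00  ->  TMR diverges (infinite MR ratio)")

The divergence at P1 = P2 = 1 is the infinite MR ratio expected for ideal half-metallic electrodes; interfacial disorder that lowers P quickly brings the predicted TMR down towards the measured range.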
Epitaxial Co2FeAl films have nevertheless been grown on GaAs(001) with the relationship Co2FeAl(001)[110]||GaAs(001)[110]. Accordingly, an epitaxial full-Heusler Co2FeAl film with the L21 structure has also been applied in a MTJ but shows only 9% TMR at RT [54]. These small TMR ratios may be caused by selective oxidation at the interface between the Heusler films and the oxide barriers. The TMR ratios have been increased to 330% at RT (700% at 10 K) with RA = 1 × 10³ Ω·µm² in a Co2FeAl/MgO/Co0.75Fe0.25 MTJ by utilising the ∆1-band connection between Co2FeAl and MgO [55]. Using a MgAlOx barrier instead of MgO to maintain the ∆1-band connection and to improve the lattice matching with B2-Co2FeAl, the TMR ratios were increased to 342% at RT (616% at 4 K) with RA = 2.5 × 10³ Ω·µm² [56]. The departure of the TMR ratios from the theoretically predicted, almost infinite value may also be due to interfacial atomic disorder caused by the presence of the light element aluminium. By replacing half of the Al with Si in Co2FeAl to stabilise the crystallisation, MTJs with an oriented MgO barrier have achieved TMR ratios of 175% at RT when using B2-Co2FeAl0.5Si0.5 [57]. Using L21-Co2FeAl0.5Si0.5, TMR ratios of 386% at RT and 832% at 9 K with RA = 80 × 10³ Ω·µm² were later reported [58]. The decrease in the TMR ratio with increasing temperature is much faster than the T^3/2 temperature dependence of the magnetisation, suggesting that a small fraction of atomically disordered phases cannot be ignored in the spin-polarised electron transport at finite temperatures [59]. The elimination of such disordered interfacial phases would improve the TMR ratios further and realise half-metallicity at RT. Theoretical calculations suggest that interface states within the half-metallic bandgap, formed at the half-metal/insulator interfaces, prevent highly spin-polarised electron transport [60]. This is because the tunneling rate is slower than the spin-flip rate, and therefore the interface states for the minority spins are effectively coupled to the metallic spin reservoir of the majority-spin states. In order to avoid such spin-flip scattering, a sharp interface without interface states is crucially required. (2) Co2MnZ Another pioneering work on the growth of full-Heusler alloy films was performed for a Co2MnGe/GaAs(001) hybrid structure by Ambrose et al. [61]. They achieved an epitaxial Co2MnGe film with a slightly enhanced lattice constant compared with the bulk. Mt is estimated to be 5.1 µB/f.u., which agrees almost perfectly with the bulk value and the value theoretically predicted from the generalised Slater-Pauling curve. Consequently, systematic studies have been widely carried out on Co2Mn-based full-Heusler alloys to realise RT half-metallicity: Co2MnAl [62,63], Co2MnSi [64,65], Co2MnGa [66], and Co2MnSn [64]. For example, an epitaxial Co2MnAl film has been grown on a Cr buffer layer by sputtering with the crystalline relationship Co2MnAl(001)[110]||Cr(001)[110]||MgO(001)[100] and the B2 structure [60]. For Co2MnSi, the L21 structure has been deposited using both dc magnetron sputtering [67] and MBE [68]. Calculations imply that the induced strain can control the half-metallicity in the Co2MnZ alloys. For Co2MnSi, for example, a lattice compression of 4% increases the bandgap by 23%, and a similar behaviour is expected for the other alloy compounds [69].
Similarly, a ±2% change in the lattice constant preserves the half-metallicity in the Co2MnZ alloys [33]. A MTJ with an epitaxial L21-Co2MnSi film has been reported to show very large TMR ratios of 70% at RT and 159% at 2 K with RA = 10⁶ Ω·µm² [70]. These values are the largest TMR ratios obtained in a MTJ employing a Heusler-alloy film and an AlOx barrier, and are purely induced by the intrinsic P of the Heusler electrodes. Similarly, a MTJ with Co2MnAl/AlOx/CoFe shows 40% TMR at RT [63], followed by a further improvement up to 61% at RT (83% at 2 K) [71]. All of these Heusler films in the MTJs have been reported to have the B2 structure. By comparing the TMR ratios at RT with those at low temperature, the TMR ratios are found to show very weak temperature dependence, similar to that observed for a conventional metallic MTJ. On the contrary, a MTJ with a highly ordered Co2MnSi film shows strong temperature dependence: 33% at RT and 86% at 10 K [72], and 70% at RT and 159% at 2 K [70]. Such a rapid decrease in the TMR ratio with increasing temperature is similar to that observed in MTJs with Co2(Cr,Fe)Al. (3) Ni2MnZ Even though Ni2MnZ alloys are not predicted to become half-metallic ferromagnets by calculations, detailed studies of their epitaxial growth on GaAs and InAs have been reported by Palmstrøm et al. [79]. By using a Sc0.3Er0.7As buffer layer on GaAs(001), both Ni2MnAl [80] and Ni2MnGa [81,82] films are epitaxially grown with the crystalline relationship Ni2MnGa(001)[100]||GaAs(001)[100] [83]. All the films are slightly tetragonally elongated along the plane normal compared with the bulk values, due to the minor lattice mismatch with the semiconductor substrates. First-principles calculations demonstrate that a broad energy minimum of tetragonal Ni2MnGa can explain the stable pseudomorphic growth of Ni2MnGa on GaAs despite a nominal 3% lattice mismatch [84]. (4) Half-Heusler After the first theoretical prediction of the half-metallicity of the half-Heusler NiMnSb alloy [30], this alloy has been intensively investigated to confirm its half-metallicity experimentally. Mt and the bandgap are calculated to be approximately 3.99 µB/f.u. and 0.5 eV [85], respectively, resulting in a calculated spin polarisation of 99.3% [86]. Epitaxial NiMnSb(001) growth on GaAs(001) has also been studied systematically by van Roy et al. [87]. An epitaxial half-Heusler NiMnSb film was first used as an electrode in a MTJ, showing 9% TMR at RT [88]. Tunnelling Magnetoresistive Junctions By replacing Y atoms with X atoms, binary Heusler alloys can be formed. For example, Mn3Ga shows ferrimagnetic behaviour in the tetragonal D022-phase with perpendicular magnetic anisotropy, as schematically shown in Figure 4a,b. The ferrimagnetic Mn3Ga has been reported to possess a large uniaxial anisotropy of 1 × 10⁷ erg/cm³ [99] and a high Curie temperature of around 770 K [100]. Mn3Ga has been used in a MTJ consisting of Mn3Ga/MgO/CoFe, which has shown 9.8% TMR at 300 K with a perpendicular anisotropy of 1.2 × 10⁷ erg/cm³ [101]. The TMR ratio was then improved to 40% at RT by adjusting the Mn-Ga composition, for a MTJ consisting of Mn0.62Ga0.38 (30)/Mg (0.4)/MgO (1.8)/CoFeB (1.2) (thickness in nm) (see Figure 4c) [102]. This improvement may be due to the perpendicular anisotropy of 5 × 10⁶ erg/cm³ obtained in a similar MTJ [103], which is almost the same as that of the film reported above.
However, the MTJ has an RA of 20 × 10³ Ω·µm², which requires further reduction for spintronic device applications. By inserting Co2MnSi between Mn-Ga and MgO, the perpendicular anisotropy of the Mn-Ga layer can induce perpendicular anisotropy in the half-metallic Co2MnSi layer, which is expected to achieve a large TMR ratio. Experimentally, TMR ratios of 10% at RT and 65% at 10 K have been achieved [105], which are smaller than those of the Mn-Ga/MgO/Mn-Ga junctions described above. Additionally, the Co2MnSi magnetisation is in tilted states during the reversal process, which makes the TMR curves not well defined. Similar to the CoFeB/MgO/CoFeB systems, as described in Section 1, perpendicular anisotropy has been induced by attaching a MgO tunnel barrier. In a p-MTJ consisting of Co2FeAl/MgO/Co0.2Fe0.6B0.2, a TMR ratio of 53% has been reported at RT (see Figure 5) [106]. By inserting a 0.1-nm-thick Fe (Co0.5Fe0.5) layer between the MgO and Co0.2Fe0.6B0.2 layers, the TMR ratio was significantly enhanced to 91% (82%), due to the improved interface. The corresponding RA is 1.31 × 10⁵ Ω·µm². By further improving the MTJ quality, consisting of Co2FeAl (1. A perpendicularly magnetised seed layer has also been used to induce perpendicular anisotropy onto the Heusler-alloy films. For example, a MTJ stack with L10-CoPt/Co2MnSi/MgO/FePt has been demonstrated [108], as similarly reported for conventional CoFeB/MgO/CoFeB junctions. Giant Magnetoresistive Junctions Recently, body-centred cubic (bcc) seed layers have been used to minimise the interfacial mixing with the face-centred cubic (fcc) Heusler-alloy layer. For a bcc vanadium seed layer, X-ray analysis shows that 25-nm-thick vanadium introduces a strong (110) orientation in the Co2FeSi Heusler alloy [109]. The B2-texture of the Co2FeSi is found to match that of the vanadium, proving that the texture is defined by the seed layer. Reduction of the Co2FeSi thickness is found to result in a reduction in the strength of the in-plane anisotropy, as expected from the cubic nature. Since the perpendicular magnetic anisotropy (PMA) is induced at the interface between the Co2FeSi and vanadium, a second vanadium interface was added and found to increase the observed PMA. Further reduction in the thickness of the Co2FeSi layer leads to an increase in the PMA, with 4-nm-thick Co2FeSi exhibiting a strong PMA (see Figure 6a).
Here, the magnetic moments of the Co2FeSi layers all fell short of the bulk value, with a saturation magnetisation (MS) of 700~800 emu/cm³. This may indicate magnetic dead layers at the interfaces due to roughness or intermixing, or could be due to a lack of full L21-ordering resulting in a drop in the net moment. Vanadium and tungsten are similar materials in that both are transition-metal elements that crystallise in a bcc structure. They have similar lattice parameters of aV = 0.3030 and aW = 0.31648 nm, leading to 3.3% and 17% strain in Co2FeSi, respectively. Tungsten, however, has a much lower bulk resistivity of 5.6 × 10⁻⁶ Ω·cm [110], less than a third of the value for vanadium of 1.9 × 10⁻⁵ Ω·cm [111]. As such, tungsten should give similar if not superior results to vanadium as a seed layer. Accordingly, tungsten layers of 10~20 nm were deposited under 5-nm-thick Co2FeSi, resulting in the (110) texture in Co2FeSi, as similarly observed for the V-seeded samples. However, the W seed layer is found to be heavily oxidised [112]. X-ray reflectivity (XRR) indicates a smooth film with a low interfacial roughness of 0.4 nm for W/WOx/Co2FeSi, which is comparable with 0.5 nm for V/Co2FeSi. The sample with a 20 nm W/WOx seed layer exhibited clear in-plane anisotropy with a typical out-of-plane hard-axis loop. The value of the anisotropy is low at only 1.58 × 10⁴ erg/cm³. This low value is due to the low MS of ~400 emu/cm³. The 10-nm-thick W/WOx sample, however, exhibited a strong PMA in the Co2FeSi layer.
In an attempt to improve the quality of the tungsten seed layers, high-temperature growth was utilised. The substrate is preheated to 673 K before deposition of 20 nm of tungsten. The resulting film shows a drastic reduction in oxidation with strongly crystallised tungsten. However, there is a lack of global texture, as demonstrated by the multiple phases of tungsten. Scherrer analysis of the (110) peak gives an approximate crystallite size of 9 nm. The magnetisation of the W/Co2FeSi sample is measured to be 400 emu/cm³ with a perpendicular anisotropy of 8 × 10⁵ erg/cm³. These properties are summarised in Table 1. Towards Device Implementation Since the crystalline plane induced by the bcc seed layers is (110), which is a favourable orientation to promote layer-by-layer crystallisation, low-temperature crystallisation has been demonstrated with PMA [115]. Samples consisting of W (10)/Co2FeAl0.5Si0.5 (12.5)/W (1.2)/Co2FeAl0.5Si0.5 (2.5)/Ta (2) (thickness in nm) have been deposited with pre-growth heating at 300 ≤ T ≤ 370 K. Increasing the temperature is found to cause a large increase in the crystallinity along the W(110) direction. As the heating time is increased, the position of the peak relaxes towards the bulk location, as shown in Figure 7a, corresponding to a change in lattice spacing of ∆d = (0.0053 ± 0.0001) nm out-of-plane, i.e., a change in strain of ∆s = (−2.4 ± 0.1)%. The position of the Heusler-alloy peak is not changed by increased deposition heating time; however, the intensity of the reflection increases significantly, indicating increased crystallisation, as expected. Magnetic characterisation of the samples is performed under both in- and out-of-plane fields. All of the samples with heated substrates showed perpendicular anisotropy. Figure 7b shows the coercivities (HC) and saturation magnetisations (MS) of the samples. HC and MS both increase monotonically with substrate temperature, in agreement with the XRD data. The increased moment is due to the increased crystallisation of the material. After T = 305 K (30 s), the loop squareness decreases from MR/MS = 1, but remains high (>0.8) up to T = 370 K (120 s). MS reaches almost 1060 emu/cm³, which is almost 85% of the theoretically predicted value, and is ideal for device implementation due to the low-temperature crystallisation. Due to band-structure matching, silver makes an ideal conduction layer for Heusler-alloy CPP-GMR devices. A 3-nm-thick layer of Ag was deposited into a device structure of Si sub./W (10)/Co2FeAl0.5Si0.5 (12.5)/Ag (3)/Co2FeAl0.5Si0.5 (5)/Ru (3), where thicknesses are in nm. These were patterned using e-beam lithography into elliptical devices with dimensions from (1000 × 500) nm² to (150 × 100) nm². The ∆R vs. field response of these devices under a perpendicular applied field is shown in Figure 8, where a small but distinct GMR of 0.03% is observed at room temperature. The shape of the MR curve matches that of the hysteresis loop for the sample, where domain rotation occurs towards the antiparallel state, followed by a rapid nucleation reversal. This explains the asymmetry of the GMR peak, with a slow approach to a high-resistance state but a rapid return to the low-resistance state at a definite field.
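The strain and crystallite-size figures quoted in this subsection follow from two standard X-ray relations, Bragg's law for the lattice spacing and the Scherrer equation for the crystallite size. A minimal sketch is given below; the wavelength assumes Cu K-alpha radiation, and the peak positions and widths are hypothetical placeholders rather than values from the review.

    import math

    WAVELENGTH_NM = 0.15406  # Cu K-alpha, assumed

    def d_spacing(two_theta_deg):
        """Bragg's law: d = lambda / (2 sin(theta))."""
        theta = math.radians(two_theta_deg / 2)
        return WAVELENGTH_NM / (2 * math.sin(theta))

    def scherrer_size(two_theta_deg, fwhm_deg, k=0.9):
        """Scherrer equation: t = K * lambda / (beta * cos(theta)), beta in radians."""
        theta = math.radians(two_theta_deg / 2)
        beta = math.radians(fwhm_deg)
        return k * WAVELENGTH_NM / (beta * math.cos(theta))

    # Out-of-plane strain from the shift of a peak relative to its bulk position
    # (hypothetical 2-theta values in degrees):
    d_film, d_bulk = d_spacing(40.1), d_spacing(40.3)
    print(f"strain = {(d_film - d_bulk) / d_bulk * 100:+.2f} %")
    print(f"crystallite size = {scherrer_size(40.1, 0.9):.1f} nm")

With these placeholder numbers the Scherrer estimate comes out on the order of the 9 nm quoted above for the tungsten (110) peak.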
Materials and Methods Epitaxial Heusler-alloy films have been deposited using ultrahigh vacuum (UHV) sputtering or molecular beam epitaxy (MBE) with precise control of compositions to satisfy their stoichiometry. For UHV sputtering, the compositions of the targets need to be carefully optimised, or combinatorial sputtering needs to be employed.
For UHV MBE, simultaneous deposition is typically used on a single-crystal substrate. Polycrystalline Heusler-alloy films, on the other hand, have been grown by a high-target-utilisation sputtering system (HiTUS) [116]. For both types of films, substrate heating is often utilised to assist the crystalline formation of the Heusler alloys. Here, sputtering delivers almost three orders of magnitude more energy to the deposited material than UHV MBE, allowing the deposited films to be atomically well mixed and to form complex crystalline structures, as described in Section 2.1.1. The deposited films have been characterised structurally and magnetically. The crystalline structures of the films are determined by X-ray diffraction (XRD, Rigaku, Tokyo, Japan) together with chemical composition analysis, such as energy-dispersive X-ray spectroscopy (EDX) and electron energy loss spectroscopy (EELS). Cross-sectional transmission electron microscopy (TEM, JEOL, Tokyo, Japan) is also used to investigate the atomic ordering and interfacial structures of the films. The magnetisation loops of the films are measured using a vibrating sample magnetometer (VSM, MicroSense, Lowell, MA, USA) or similar methods at elevated temperatures. Temperature-dependent electrical resistivity measurements can also reveal the detailed scattering mechanisms caused by defects in the films [52]. The half-metallicity can be determined by point-contact Andreev reflection (PCAR) [32] and infrared photoexcitation [42]. Additionally, X-ray magnetic circular dichroism (XMCD) with synchrotron radiation can reveal the spin and orbital moments of the constituent atoms [34]. The optimised Heusler-alloy films can be used as ferromagnetic electrodes in TMR and GMR junctions. The TMR junctions can be characterised using current-in-plane tunneling (CIPT) [117], which provides accurate TMR ratios. The GMR junctions can also be analysed by a conventional four-terminal method in a current-in-the-plane (CIP) configuration, although the GMR signal in this geometry is more than one order of magnitude smaller than that in a CPP configuration. Therefore, the films are required to be patterned into nanometre-scale pillar junctions: the TMR or GMR stacks are patterned into nanopillars by electron beam lithography (EBL) and Ar-ion milling, followed by insulator deposition to isolate the pillars. To prepare a sample for electrical measurement, the top and the bottom of the pillar are connected to large contact pads via two-step lithography: smaller contacts are first fabricated by EBL, and then the large contact pads are made by optical lithography. Conclusions The importance of the development of half-metallic ferromagnetic films for room-temperature operation has been increasing significantly. Among the candidates, Heusler-alloy films have the greatest potential and have attracted intensive attention. Even though the bulk Heusler alloys have already been proven to be half-metallic, the film form still suffers from interfacial atomic disorder against the neighbouring tunnelling barrier or non-magnetic spacer in magnetic tunnel or giant magnetoresistive junctions, respectively. For further improvement, the optimisation of growth conditions and the selection of better seed or barrier/spacer layers are crucial. Such improvements can also induce perpendicular magnetic anisotropy for device miniaturisation.
MgO- or bcc-seed-induced perpendicular anisotropy may enable the Heusler-alloy films to satisfy the requirements for the next-generation spintronic devices.
10,382.4
2018-01-01T00:00:00.000
[ "Materials Science", "Physics" ]
Bimetallic MOFs-Derived Hollow Carbon Spheres Assembled by Sheets for Sodium-Ion Batteries Metal-organic frameworks (MOFs) have attracted extensive attention as precursors for the preparation of carbon-based materials due to their highly controllable composition, structure, and pore size distribution. However, there are few reports of MOFs using p-phenylenediamine (pPD) as the organic ligand. In this work, we report the preparation of a bimetallic MOF (CoCu-pPD) with pPD as the organic ligand, and its derived hollow carbon spheres (BMHCS). CoCu-pPD exhibits a hollow spherical structure assembled by nanosheets. BMHCS inherits the unique hollow spherical structure of CoCu-pPD, which also shows a large specific surface area and heteroatom doping. When using as the anode of sodium-ion batteries (SIBs), BMHCS exhibits excellent cycling stability (the capacity of 306 mA h g−1 after 300 cycles at a current density of 1 A g−1 and the capacity retention rate of 90%) and rate capability (the sodium storage capacity of 240 mA h g−1 at 5 A g−1). This work not only provides a strategy for the preparation of pPD-based bimetallic-MOFs, but also enhances the thermal stability of the pPD-based MOFs. In addition, this work also offers a new case for the morphology control of assembled carbon materials and has achieved excellent performance in the field of SIBs. Introduction In recent years, with the wide application and rapid development of portable electronic devices and electric vehicles, the demand for secondary batteries has gradually increased, and the green renewable resources have become a research hotspot [1]. The high cost and shortage of lithium resources limit the large-scale application of lithium-ion batteries (LIBs) [2,3]. As an alternative, sodium-ion batteries (SIBs) will be more competitive in the future large-scale energy storage market due to their abundant sodium resources, low cost, similar working principles to LIBs, high safety performance, and wide operating temperature range [4]. However, the relatively large size of sodium ions also poses a great challenge to the structural stability of sodium storage materials, and will lead to slow electrochemical reaction kinetics, affecting the sodium storage capacity and rate performance [5,6]. Therefore, the design and preparation of electrode materials is the key to promote the development of SIBs. As an important part of SIBs, the anode material has an important influence on the performance of SIBs [7]. To date, the reported anode materials have mainly focused on carbon-based materials, alloy-based materials, and metal compounds [8]. Among them, carbon-based materials have good cycle stability, electrical conductivity, abundant raw materials, and simple preparation methods, making them a standout candidate for SIBs anode materials [9]. MOFs have the advantages of multiple designability (structure, composition, pore size) and a large specific surface area [10], and are considered as good precursors and templates for the preparation of carbon materials with tunable structures and high specific surface areas [11,12]. The MOFs-derived carbon material inherits the advantages of the MOF precursor while enhancing its poor electrical conductivity and improving the abundant reaction sites; MOFs-derived carbon materials are widely regarded as a new material with excellent electrochemical activity and have attracted much attention in the field of sodium storage [13]. 
Xu et al., synthesized a series of metal-organic frameworks (MOFs) using terephthalic acid as the organic ligand, and prepared selenide/carbon composites by pyrolysis as anode materials for SIBs. Among them, the uniform peapod-like Fe 7 Se 8 @C nanorods have a high specific capacity (218 mA h g −1 after 500 cycles at 3 A g −1 ) and the porous NiSe@C spheres still have a specific capacity of 160 mA h g −1 after 2000 cycles at the current density of 3 A g −1 [14]. Ge et al., prepared MOFs-derived core/shell structured CoP@C polyhedrons anchored on 3D reduced graphene oxide networks for SIBs, which exhibited excellent cycling stability and high rate performance [15]. Zou et al., synthesized Fe-MOF with Fe(NO 3 ) 3 ·9H 2 O and fumaric acid, and obtained rod-like Fe 7 S 8 /C composites by vulcanization, which showed a specific capacity of about 500 mA h g −1 after 100 cycles at a current density of 0.1 A g −1 [16]. In recent years, more and more MOFs-derived carbon materials have been used as anode materials for SIBs, but the organic ligands of MOFs are mainly concentrated in imidazoles and carboxylic acids. There are few reports of MOFs with p-phenylenediamine (pPD) as the organic ligand, which may be related to the unstable skeleton structure and the inability to maintain the morphology during pyrolysis [17]. Many studies have shown that, compared with monometallic MOFs, bimetallic MOFs contain two metal active sites, which can not only tune the morphology and structure of MOFs, but also increase the complexity of the structure where the structures support each other during the pyrolysis process, which is beneficial to improve the stability of the material skeleton structure [18][19][20]. In addition, bimetallic MOFs-derived carbon materials possess more exposed active centers, good stability, and electrical conductivity, enabling them to have broader applications in electrochemical energy storage and conversion [21]. In this paper, the bimetallic MOF (CoCu-pPD) was synthesized from Co(NO 3 ) 2 ·6H 2 O, Cu(NO 3 ) 2 ·3H 2 O and pPD as raw materials, and N, O co-doped carbon material (BMHCS) was obtained by a simple pyrolysis process, showing the morphology of hollow spheres assembled by nanosheets. It has been found that the introduction of Cu 2+ not only tunes the morphology but also enhances the framework stability of the main structure of Co-pPD. The BMHCS well inherits the hollow spherical morphology of the precursor and has a high specific surface area and heteroatom doping. When used as the anode material of SIBs, it shows a higher sodium storage capacity than Co-pPD-derived carbon nanosheets (MCNS) and exhibits high cycling performance (the capacity of 306 mA h g −1 after 300 cycles at the current density of 1 A g −1 , with the cycle retention rate of 90%) and rate performance (the sodium storage capacity of 260 mA h g −1 at a high current of 5 A g −1 ). This is attributed to the fact that the hollow spherical structure assembled by nanosheets is beneficial to the stability of carbon materials during sodium storage, and the heteroatom doping, high specific surface area, and abundant pore structure can provide more surface defects and active sites for carbon materials, which increase sodium storage capacity. 
Synthesis of MOFs and Derived Carbon Materials Synthesis of MOFs: 1.75 g of cobalt nitrate hexahydrate (Co(NO3)2·6H2O, Shanghai D&B Biological Science and Technology Company, China, 98%) and 0.65 g of p-phenylenediamine (pPD, Macklin, 99%) were dissolved in absolute ethanol (30 mL), respectively. After mixing the two reactants, 1.45 g of cupric nitrate trihydrate (Cu(NO3)2·3H2O, Beijing Tongguang Fine Chemical Company, China, 99.5%) was added and stirred uniformly. Then, the mixture was loaded into a hydrothermal reactor for a solvothermal reaction in a forced-air oven (reaction at 150 °C for 24 h). After being cooled to room temperature, the precipitate was washed several times with ethanol and dried in a vacuum oven to obtain the final product, denoted as CoCu-pPD. With the other experimental conditions unchanged, the product obtained without adding Cu(NO3)2·3H2O was denoted as Co-pPD. This is a common solvothermal method for preparing MOFs; for example, Guan et al. used a solvothermal method to grow a bimetallic (NiCo) MOF on an NF surface [22]. Synthesis of the MOFs-derived carbon materials: The CoCu-pPD and Co-pPD were placed into a tubular carbonization furnace under N2 flow, and the temperature was increased from room temperature to 700 °C at a rate of 5 °C/min and kept for 2 h; the products are denoted as CoCu/C and Co/C, respectively. The obtained carbonized products were acid-washed with concentrated HCl solution to obtain pure carbon materials, denoted as BMHCS and MCNS, respectively. Analysis and Measures The morphologies and internal structures of the MOFs and carbon materials were observed by scanning electron microscopy (SEM, Supra-55) and transmission electron microscopy (TEM, Hitachi HT-7700). The crystal structures of the MOFs and carbon materials were determined by X-ray diffraction (XRD, Rigaku D/max2500B2+/PCX system). Thermogravimetric analysis (TG, Netzsch STA 449C) was used to investigate the structural changes of CoCu-pPD and Co-pPD during pyrolysis and to compare the framework stability of the two MOFs. The specific surface area and pore size distribution of BMHCS and MCNS were obtained using a nitrogen adsorption and desorption test (ASAP 2020), based on the Brunauer-Emmett-Teller (BET) model and nonlocal density functional theory (NLDFT), respectively. The defect sites and surface chemical elements of the BMHCS and MCNS were analyzed by Raman spectroscopy (Labram Aramis) and X-ray photoelectron spectroscopy (XPS, Escalab 250), respectively. Electrochemical Measurements The BMHCS or MCNS, acetylene black, and polyvinylidene fluoride were mixed in a weight ratio of 8:1:1, and an appropriate amount of N-methylpyrrolidone was dripped into the mixture to form a slurry. Subsequently, the slurry was coated on copper foil, dried in vacuum at 120 °C, and then cut into 12 mm circular electrode sheets. The SIBs were assembled using CR2025 cells in an argon-filled glove box (Unilab M Braun). Sodium metal was used as the counter electrode, glass fiber was used as the separator, and 1 M NaSO3CF3 dissolved in diglyme (100 vol%) was used as the electrolyte. Cyclic voltammetry measurements were performed using an electrochemical workstation (Zennium, Zahner) at different scan rates (0.2, 0.5, 0.7, 1, 2, 5 mV/s) with a voltage range of 0.01-3 V.
The galvanostatic charge-discharge tests were carried out using a LAND battery testing system (LAND-CT2001A) with a voltage range of 0.01-3 V (the current density was set to 1 A g−1 for 300 cycles to test the cycling performance, and to 0.05, 0.1, 0.2, 0.5, 1, 2, 5 A g−1 for 10 cycles to test the rate performance). The AC impedance measurements were performed using an electrochemical workstation (Zennium, Zahner, Germany) with an alternating voltage amplitude of 5 mV and a frequency range of 100 kHz to 10 mHz. The Performance of the Co-pPD and CoCu-pPD Co-pPD exhibits a morphology of randomly stacked sheets (Figure 1a), while the bimetallic MOF (CoCu-pPD), formed by the coordination reaction of Co2+, Cu2+ and pPD, exhibits a spherical morphology assembled from nanosheets (average particle size: 620 nm), as illustrated in Figure 1b [23], and the hollow structure is further identified. The elemental mapping results in Figure 1f indicate uniform distributions of C, N, O, Co, and Cu elements on the CoCu-pPD, providing evidence that the structure and composition of the MOF are uniform. The crystal structures of the two MOFs were characterized by XRD. Both the XRD patterns of Co-pPD and CoCu-pPD show sharp peaks, which are different from the characteristic peaks of pPD (PDF#31-1832), indicating that the two kinds of MOFs are highly crystalline (Figure 2). Although the framework structures in the two patterns are not identical, the main crystal structures are similar, and the addition of Cu2+ makes the crystallinity of the product MOFs higher. In addition, the multiple small peaks between 60° and 80° indicate that the coordination form of CoCu-pPD is more complex. In summary, the addition of Cu2+ can significantly change the degree of crystallization, as well as part of the coordination, but does not significantly change the crystal structure of the MOFs. As shown in Figure 3, it can be seen from the TG curves that the weight loss of Co-pPD mainly goes through two stages. The mass loss in the first stage (250-300 °C) is mainly due to the volatilization of water and ethanol solvent molecules adsorbed in the MOF structure during the preparation and storage process [24]. The second stage, between 450 and 500 °C, is mainly caused by the decomposition of the MOF. The weight-loss behavior of CoCu-pPD is similar to that of Co-pPD, but the decomposition temperature range in the second stage is 530-620 °C, approximately 100 °C higher than that of Co-pPD, indicating that the thermal stability of CoCu-pPD is better than that of Co-pPD. In addition, the final mass loss of CoCu-pPD (39%) is also smaller than that of Co-pPD (58%) under the same pyrolysis conditions. This is consistent with many reports that bimetallic MOFs can exhibit high thermal stability due to the synergistic effect of the two metals [25][26][27]. The Performance of MCNS and BMHCS Both MOFs maintain their morphology after high-temperature carbonization (Co/C and CoCu/C), as shown in Figure S1. The Co-pPD-derived carbon nanosheets (MCNS) have a randomly stacked sheet-like morphology similar to Co-pPD, as illustrated in Figure 4a,b, and no obvious crystallites can be seen in the HAADF-STEM image (Figure 4c), indicating that the crystal structure of the precursor is destroyed.
The SEM image (Figure 4d) shows that the CoCu-pPD-derived hollow carbon spheres (BMHCS) maintain the morphology of the precursor and still display spheres assembled from sheets (average particle size: 550 nm). They are not only spheres assembled from sheets but also hollow, as can be recognized from the TEM image (Figure 4e). This means that CoCu-pPD has good framework stability, and the morphology and hollow structure of the precursor are still maintained after carbonization and acid washing. In addition, the metal elements in BMHCS have been removed, and only three elements (C, N, O) are uniformly distributed on the surface of the sample (Figure 4f). The Raman spectra of MCNS and BMHCS are displayed in Figure 5c,d; they can be fitted with four peaks, in which the peaks at 1348 cm−1 and 1585 cm−1 correspond to the D peak of amorphous carbon and the G peak of the graphitic structure, respectively [28]. It is found that ID (the peak area of the D peak) of both carbon materials is larger than IG (the peak area of the G peak), and the ID/IG value of MCNS (2.21) is larger than that of BMHCS (1.51), indicating that MCNS is more amorphous and contains more defects than BMHCS. The defects of MCNS and BMHCS provide more sodium storage sites and facilitate the transport of Na+ during charging and discharging. The elements and their contents in the carbon materials were analyzed by XPS and the results are shown in Figure S2a,b and Table S1. In the survey spectra of MCNS and BMHCS, in addition to the C peak, there are also N and O peaks. There is little difference in the amount of N atoms contained in MCNS and BMHCS, as displayed in Table S1, but the content of O atoms in BMHCS is lower, which may be related to the adsorption of O-containing molecules during the preparation process. To further reveal the chemical state of each element in the carbon materials, the C1s, N1s, and O1s spectra were deconvoluted. The C1s spectra (Figure S2a1,b1) of the two carbon materials can be divided into C=C/C-C (284 eV), C-O/C-N (286 eV), and C=O (289 eV) peaks. The peaks around 398, 400, and 401 eV in the N1s spectra (Figure S2a2,b2) represent N-6 (pyridinic nitrogen), N-5 (pyrrolic nitrogen), and N-Q (graphitic nitrogen), respectively, and the peaks at 531, 532, and 534 eV in the O1s spectra (Figure S2a3,b3) correspond to C=O, C-OH/C-O-C, and COOH, respectively [29]. The peak-splitting results of the two carbon materials are the same, but the content of N-5 and N-6 in BMHCS is higher than that in MCNS (Table S1). N-5 and N-6 can make the adjacent C atoms act as active centers, which is conducive to the adsorption of Na+, thereby improving the electrochemical performance of the carbon materials [30,31]. In addition, oxygen-containing functional groups can also increase the interlayer spacing and defect sites of carbon materials, increasing the sodium storage capacity and cycle performance [32]. The N2 adsorption and desorption curves of MCNS and BMHCS (Figure 6a) are both type IV isotherms, in which the rapidly increasing part of the adsorption amount at low relative pressure represents the existence of micropores, and the hysteresis loop represents the existence of mesopores [33,34]. As shown in Figure 6b, MCNS and BMHCS contain both micropores and mesopores, and the pore size is mainly concentrated within 15 nm.
The total pore volumes of the two carbon materials are similar (both close to 0.9 cm³ g−1), but the specific surface area of BMHCS (418 m² g−1) is larger than that of MCNS (386 m² g−1). The large specific surface area and pore structure can provide more active sites and more mass-transfer channels for Na+ storage. The cycle performance and rate performance of the carbon materials were tested (Figure 7a,b) when used as the anode material of SIBs. BMHCS exhibits a higher initial capacity and first coulombic efficiency (507 mA h g−1, 68.22%) than MCNS (496 mA h g−1, 53.5%) at a current density of 1 A g−1. The main reason is that BMHCS has a higher specific surface area and contains more N-5 and N-6, which can provide more sodium storage sites. After 300 cycles, the specific capacities of MCNS and BMHCS are 213 and 306 mAh g−1, with capacity retention rates of 62.9% and 90.8%, respectively (Figure 7a). The cycle performance of BMHCS is clearly higher than that of MCNS, which is attributed to the unique hollow spherical structure assembled from sheets, which can accommodate the continuous insertion and extraction of Na+ during sodium storage and maintain the stability of the structure. As shown in Figure 7b, BMHCS exhibits a higher specific capacity than MCNS at different current densities, still maintains a capacity of 260 mA h g−1 at a large current density of 5 A g−1, and retains a sodium storage capacity of 160 mA h g−1 after 1000 cycles (Figure 7c). In addition, BMHCS also shows a capacity recovery rate of 90% when the current density returns from 5 A g−1 to 0.5 A g−1, exhibiting excellent rate performance. Its electrochemical performance exceeds that of previously reported MOFs-derived pure carbon anode materials for SIBs, as displayed in Table 1. Table 1 also lists the performance of bimetallic MOFs-derived carbon materials for lithium-ion batteries (LIBs) as a reference for scholars interested in that field.
Table 1. MOFs-derived carbon materials | cycle performance | rate performance | batteries:
Hollow carbon spheres (this work) | 306 mA h g−1 at 1 A g−1 after 300 cycles | 260 mA h g−1 at 5 A g−1 | SIBs
Cube-shaped porous carbon [35] | 240 mA h g−1 at 0.1 A g−1 after 100 cycles | 100 mA h g−1 at 3.2 A g−1 | SIBs
Hollow carbon nanobubbles [36] | 100 mA h g−1 at 10 A g−1 after 1000 cycles | 100 mA h g−1 at 3.2 A g−1 | SIBs
Hollow carbon nanobubbles [30] | 236 mA h g−1 at 0.1 A g−1 after 100 cycles | 142 mA h g−1 at 5 A g−1 | SIBs
3D hollow porous carbon microspheres [37] | 313 mA h g−1 at 0.1 A g−1 after 100 cycles | 112.5 mA h g−1 at 5 A g−1 | SIBs
Ni-doped Co/CoO/NC hybrid [38] | 218 mA h g−1 at 0.05 A g−1 after 100 cycles | 110 mA h g−1 at 5 A g−1 | SIBs
ZnFe2O4@C nanocomposites [39] | 1780 mA h g−1 at 1 A g−1 after 400 cycles | 918 mA h g−1 at 3 A g−1 | LIBs
Hollow Fe-Mn-O/C microspheres [40] | 1294 mA h g−1 at 0.1 A g−1 after 200 cycles | 521 mA h g−1 at 1 A g−1 | LIBs
Carbon-coated Cu-Co bimetal oxide composite [41] | 900 mA h g−1 at 0.1 A g−1 after 100 cycles | 507 mA h g−1 at 1 A g−1 | LIBs
The first three charge-discharge curves of MCNS and BMHCS at a current density of 1 A g−1 are shown in Figure 8a,b; they show a sloping trend and no obvious plateau. The sloping region represents the insertion and extraction of Na+ at the surface defects, indicating the capacitive adsorption behavior of Na+ [9,42].
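This capacitive character is quantified later in the section by fitting the CV peak current against scan rate as I = a·v^b, where b close to 0.5 indicates diffusion control and b close to 1 indicates surface (capacitive) control. A minimal sketch of that fit is given below; the scan rates match those used in the measurements, but the peak currents are hypothetical placeholders rather than the paper's data.

    import numpy as np

    # b is the slope of log(i) versus log(v) in the power law I = a * v**b.
    scan_rates = np.array([0.2, 0.5, 0.7, 1.0, 2.0, 5.0])            # mV/s
    peak_currents = np.array([0.11, 0.23, 0.30, 0.40, 0.70, 1.45])   # mA, made up

    b, log_a = np.polyfit(np.log10(scan_rates), np.log10(peak_currents), 1)
    print(f"b = {b:.2f}, a = {10 ** log_a:.3f}")
    # b near 1 -> surface (capacitive) control; b near 0.5 -> diffusion control.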
The first discharge curves are different from the others because of the formation of the SEI film during the first charge and discharge [43]. The first charge-discharge CV curves of MCNS and BMHCS at a scan rate of 0.2 mV s−1 (Figure 8c,d) show an irreversible reduction peak in the range of 1.5-0.5 V, which is caused by the formation of the SEI film and the irreversible consumption of Na+ during the first charge-discharge process [44]. The reduction peak around 0.01 V indicates that Na+ is mainly inserted into the carbon material in the voltage range around 0.01 V, and the oxidation peak near 0.1 V represents the desorption of Na+ from the carbon material [45]. The behavior of the CV curves of MCNS and BMHCS at different scan rates (0.5, 0.7, 1, 2, 5 mV s−1) is the same as that at 0.2 mV s−1, but the current increases and the oxidation peak gradually broadens with increasing scan rate, indicating that the voltage range of Na+ extraction from the carbon material becomes wider (Figure S3a,b). The relationship between the current I and the scan rate v obeys the power law I = a·v^b (a and b are adjustable parameters; b is obtained from the slope of log(i) versus log(v), and a value of b between 0.5 and 1 usually indicates pseudocapacitive behavior) [46,47]. The b values of MCNS and BMHCS are both about 0.8 (Figure S3a1,b1), showing that sodium storage is largely capacitively controlled. Moreover, analysis of the capacitive contributions at different scan rates (Figure S3a2,b2) shows that the capacitive contributions of the two carbon materials at 0.2 mV s−1 are 58% and 59%, respectively, and increase with increasing scan rate, further indicating that MCNS and BMHCS mainly store sodium through capacitive processes (Figure S3a3,b3). The impedance spectra of MCNS and BMHCS and the equivalent circuit diagram are shown in Figure S4a. According to the literature, Rf represents the surface diffusion resistance, and Rct represents the charge transfer resistance [24]. From the resistance values obtained by fitting (Figure S4b), it can be seen that the Rf and Rct values of BMHCS are smaller than those of MCNS, indicating that BMHCS has more active sites and that the diffusion and charge transfer of Na+ are easier during the charge and discharge process; hence, BMHCS shows better cycling performance and rate performance when used for sodium storage. Conclusions In this work, the bimetallic MOF (CoCu-pPD) with pPD as the organic ligand was synthesized by a simple solvothermal method, and N/O co-doped hollow carbon spheres assembled from sheets were prepared. The bimetal strategy not only modulates the MOF morphology, but also overcomes the poor thermal stability of MOFs with pPD as the ligand. The bimetallic MOFs-derived carbon material (BMHCS) exhibits excellent cycling and rate performance as the anode material for SIBs (306 mA h g−1 at 1 A g−1 after 300 cycles; 260 mA h g−1 at 5 A g−1) due to its unique hollow spherical structure, abundant defect sites, large specific surface area, and N/O co-doping.
The morphology and framework stability of the MOFs are regulated by the introduction of Cu2+, which remarkably changes the morphology, crystalline structure, specific surface area, and heteroatom doping of the derived carbon materials, improving their electrochemical performance.
5,686.8
2022-11-01T00:00:00.000
[ "Materials Science" ]
Self-consistent determination of long-range electrostatics in neural network potentials Machine learning has the potential to revolutionize the field of molecular simulation through the development of efficient and accurate models of interatomic interactions. Neural networks can model interactions with the accuracy of quantum mechanics-based calculations, but with a fraction of the cost, enabling simulations of large systems over long timescales. However, implicit in the construction of neural network potentials is an assumption of locality, wherein atomic arrangements on the nanometer-scale are used to learn interatomic interactions. Because of this assumption, the resulting neural network models cannot describe long-range interactions that play critical roles in dielectric screening and chemical reactivity. Here, we address this issue by introducing the self-consistent field neural network — a general approach for learning the long-range response of molecular systems in neural network potentials that relies on a physically meaningful separation of the interatomic interactions — and demonstrate its utility by modeling liquid water with and without applied fields. C omputer simulations have transformed our understanding of molecular systems by providing atomic-level insights into phenomena of widespread importance. The earliest models used efficient empirical descriptions of interatomic interactions, and similar force field-based simulations form the foundation of molecular simulations today 1 . However, it is difficult to describe processes like chemical reactions that involve bond breakage and formation, as well as electronic polarization effects within empirical force fields. The development of quantum mechanics-based ab initio simulations enabled the description of these complex processes, leading to profound insights across scientific disciplines [2][3][4][5][6][7][8][9] . The vast majority of these first principles approaches rely on density functional theory (DFT), and the development of increasingly accurate density functionals has greatly improved the reliability of ab initio predictions [10][11][12][13][14][15] . But, performing electronic structure calculations are expensive, and first-principles simulations are limited to small system sizes and short time scales. The prohibitive expense of ab initio simulations can be overcome through machine learning. Armed with a set of ab initio data, machine learning can be used to train neural network (NN) potentials that describe interatomic interactions at the same level of accuracy as the ab initio methods, but with a fraction of the cost. Consequently, NN potentials enable ab initio quality simulations to reach the large system sizes and long time scales needed to model complex phenomena, such as phase diagrams [16][17][18][19][20] and nucleation 21,22 . Despite the significant advances made in this area, there are still practical and conceptual difficulties with NN potential development, especially with regard to long-range electrostatics. To make NN potential construction computationally feasible, most approaches learn only local arrangements of atoms around a central particle, where the meaning of "local" is defined by a distance cutoff usually <1 nm. Because of this locality, the resulting NN potentials are inherently short-ranged. The lack of long-range interactions in NN potentials can lead to both quantitative and qualitative errors, especially when describing polar and charged species [23][24][25] . 
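The locality assumption described above is usually built in through descriptors that only see atoms within a finite cutoff. As an illustration (not the construction used by any specific potential mentioned here), a Behler-Parrinello-style radial symmetry function with a smooth cutoff can be sketched as follows; the parameter values are placeholders.

    import numpy as np

    def cutoff(r, r_cut):
        """Smooth cutoff: 0.5*(cos(pi*r/r_cut) + 1) inside r_cut, zero outside."""
        return np.where(r < r_cut, 0.5 * (np.cos(np.pi * r / r_cut) + 1.0), 0.0)

    def radial_symmetry_function(distances, eta=0.5, r_s=0.0, r_cut=6.0):
        """G2-type descriptor: sum over neighbours of exp(-eta*(r - r_s)**2) * fc(r)."""
        d = np.asarray(distances, dtype=float)
        return np.sum(np.exp(-eta * (d - r_s) ** 2) * cutoff(d, r_cut))

    # Neighbour distances in Angstrom (made up); atoms beyond r_cut contribute nothing,
    # which is exactly the locality assumption discussed above.
    print(radial_symmetry_function([0.96, 0.97, 1.8, 3.2, 5.9, 7.5]))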
The need for incorporating long-range electrostatics into NN potentials has led to the development of several new approaches 23,24,[26][27][28][29] . Many of these approaches exclude all or some of the electrostatic interactions from training and then assign effective partial charges to each atomic nucleus that are used to calculate long-range electrostatic interactions using traditional methods 23,[25][26][27][28] . The values of these effective charges can be determined using machine learning methods. For example, the fourth-generation high-dimensional neural network potential (4G-HDNNP) 28 employs deep NNs to predict the electronegativities of each nucleus, which are subsequently used within a charge equilibration process to determine the effective charges. These approaches can predict binding energies and charge transfer between molecules, but they also introduce quantities that are not direct physical observables, such as the effective charges and electronegativities. Another approach explicitly incorporated nonlocal geometric information into the construction of local feature functions 24,30 . This approach, referred to as the long-distance equivariant representation, is able to more accurately predict the binding energy between molecules and the polarizability of molecules, compared to purely local models. However, this model only takes in the coordinates of the nucleus as input information and cannot handle external fields. The difficulties that current approaches to NN potentials have when treating long-range interactions can be resolved by a purely ab initio strategy that uses no effective quantities. Such a strategy can be informed by our understanding of the roles of short-and long-range interactions in condensed phases [31][32][33][34] . In uniform liquids, appropriately chosen uniformly slowly varying components of the long-range forces-van der Waals attractions and long-range Coulomb interactions-cancel to a good approximation in every relevant configuration. As a result, the local structure is determined almost entirely by short-range interactions. In water, these short-range interactions correspond to hydrogen bonding and packing [35][36][37][38] . Therefore, short-range models, including current NN potentials, can describe the structure of uniform systems. This idea, that short-range forces determine the structure of uniform systems, forms the foundation for the modern theory of bulk liquids [31][32][33] , in which the averaged effects of long-range interactions can be treated as a small correction to the purely short-range system. In contrast, the effects of long-range interactions are more subtle and play a role in collective effects that are important for dielectric screening. Moreover, long-range forces do not cancel at extended interfaces and instead play a key role in interfacial physics. As a result, short-range models cannot describe interfacial structure and thermodynamics, as they do in the bulk, and standard NN models fail to describe even the simplest liquidvapor interfaces 25 . The local molecular field (LMF) theory of Weeks and coworkers provides a framework for capturing the average effects of long-range interactions at interfaces through an effective external field 34,[39][40][41][42] . LMF theory also provides physically intuitive insights into the roles of short-and long-range forces at interfaces that can be leveraged to model nonuniform systems. 
Here, we exploit the physical picture provided by liquid-state theory to develop a general approach for learning long-range interactions in NN potentials from ab initio calculations. We separate the atomic interactions into appropriate short-range and long-range components and construct a separate network to handle each part. Importantly, the short-range model is isolated from the long-range interactions. This separation also isolates the long-range response of the system, enabling it to be learned. Short-range interactions can be learned using established approaches. The short- and long-range components of the potential are then connected through a rapidly converging self-consistent loop. The resulting self-consistent field neural network (SCFNN) model is able to describe the effects of long-range interactions without the use of effective charges or similar artificial quantities. We illustrate this point through the development of an SCFNN model of liquid water. In addition to capturing the local structure of liquid water, as evidenced by the radial distribution function, the SCFNN model accurately describes long-range structural correlations connected to dielectric screening, the response of liquid water to electrostatic fields, and water's dielectric constant. Because the SCFNN model learns the response to electrostatic fields, it can predict properties that depend on screening in environments for which it was not trained. We demonstrate this by using the SCFNN trained on bulk configurations to model the orientational ordering of water at the interface with its vapor. Finally, the SCFNN also captures the electronic fluctuations of water and can accurately predict its high-frequency dielectric constant. Results Workflow of the SCFNN model. The SCFNN model consists of two modules that each target a specific response of the system (Fig. 1). Module 1 predicts the electronic response via the position of the maximally localized Wannier function centers (MLWFCs). Module 2 predicts the forces on the nuclear sites. In turn, each module consists of two networks: one to describe the short-range interactions and one to describe perturbations to the short-range system from long-range electric fields. Together, these two modules (four networks) enable the model to predict the total electrostatic properties of the system. In the short-range system, the v(r) = 1/r portion of the Coulomb potential is replaced by the short-range potential v_0(r) = erfc(r/σ)/r. Physically, v_0(r) corresponds to screening the charge distributions in the system through the addition of neutralizing Gaussian charge distributions of opposite sign; the interactions are truncated by Gaussians. Therefore, we refer to this system as the Gaussian-truncated (GT) system [34][35][36][37][38]. By making a physically meaningful choice for σ, the GT system can describe the structure of bulk liquids with high accuracy but with a fraction of the computational cost. Moreover, the GT system has served as a useful short-range component system when modeling the effects of long-range fields 37,39,41,43,44. Here, we choose σ to be 4.2 Å (8 Bohr), which is large enough for the GT system to accurately describe hydrogen bonding and the local structure of liquid water [34][35][36][37][38]. The remaining part of the Coulomb interaction, v_1(r) = v(r) − v_0(r) = erf(r/σ)/r, is long-ranged but varies slowly over the scale of σ.
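As a concrete illustration of this interaction split, the short Python sketch below (using σ = 4.2 Å as quoted above; the numerical check is our own) evaluates the full Coulomb potential v(r) = 1/r, the Gaussian-truncated part v0(r) = erfc(r/σ)/r, and the long-range remainder v1(r) = erf(r/σ)/r, and verifies that v0 decays rapidly beyond σ while v1 varies slowly and remains finite at the origin.

import numpy as np
from scipy.special import erf, erfc

sigma = 4.2  # Angstrom, as quoted in the text (8 Bohr)

def v(r):   return 1.0 / r                # full Coulomb interaction
def v0(r):  return erfc(r / sigma) / r    # short-range, Gaussian-truncated part
def v1(r):  return erf(r / sigma) / r     # long-range, slowly varying remainder

r = np.array([0.5, 2.0, 4.2, 8.0, 16.0])
assert np.allclose(v(r), v0(r) + v1(r))   # the split is exact for every r
print(v0(r))  # decays rapidly beyond ~sigma
print(v1(r))  # varies slowly; as r -> 0 it tends to 2/(sigma*sqrt(pi)), i.e., stays finite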
Because v_1(r) is uniformly slowly varying, the effective field produced by v_1(r) usually induces a linear response in the GT system. The linear nature of the response means that the effects of v_1(r) can be captured by linear models. In the context of NNs, we demonstrate below that a linear network is sufficient to learn the linear response induced by long-range interactions. Module 1. The separation of interactions into short- and long-range components is crucial to the SCFNN model. In particular, the two networks of each module are used to handle this separation. Network 1S of Module 1 predicts the positions of the MLWFCs in the short-range GT system, while Network 1L predicts the perturbations to the MLWFC positions induced by the effective long-range field. Networks 1S and 1L leverage Kohn's theory on the nearsightedness of electronic matter (NEM) 45,46. The NEM states that 46 "local electronic properties, such as the density n(r), depend significantly on the effective external potential only at nearby points." Here the effective external potential includes the external potential and the self-consistently determined long-range electric fields. Therefore, the NEM suggests that the electronic density, and consequently the positions of the MLWFCs, are "nearsighted" with respect to the effective potential, but not to the atomic coordinates, contrary to what has been assumed in previous work that also uses local geometric information of atoms as input to NNs 47,48. An atom located at r′ will affect the effective potential at r, even if r′ is far from r, through long-range electrostatic interactions. Consequently, current approaches to generating NN models can only predict the position of MLWFCs for a purely short-range system without long-range electrostatics, such as the GT system 47,48. We exploit this fact and use established NNs to predict the locations of the MLWFCs in the GT system 47. To do so, we create a local reference frame around each water molecule (Fig. 2) and use the coordinates of the surrounding atoms as inputs to the NN. The local reference system preserves the rotational and translational symmetry of the system. The network outputs the positions of the four MLWFCs around the central water, which are then transformed to the laboratory frame of reference. Network 1L predicts the response of the MLWFC positions to the effective field E(r), defined as the sum of the external field, E_ext(r), and the long-range field generated through v_1(r) by the instantaneous charge density ρ(r′) of the system, including nuclear and electronic charges: E(r) = E_ext(r) − ∇_r ∫ dr′ v_1(|r − r′|) ρ(r′). Network 1L also introduces a local reference frame for each water molecule. However, Network 1L takes as input both the local coordinates and local effective electric fields. The NEM suggests that this local information is sufficient to determine the perturbation in the MLWFC positions. Network 1L outputs this change in the positions of the water molecule's four MLWFCs, and this perturbation is added to the MLWFC position determined in the GT system to obtain the MLWFCs in the full system. We note that E(r) is a slowly varying long-range field, such that the MLWFCs respond linearly to this field. Therefore, Network 1L is constructed to be linear in E(r); Table 1 demonstrates that this linear construction is sufficient to reproduce the field-induced changes accurately. We now need to determine the effective field E(r).
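Before turning to the determination of E(r) itself, the linearity of Network 1L can be made concrete with a small stand-in model. The sketch below is schematic (the random weights and the feature vector are placeholders), but it uses the same dimensions as the network described in the Methods (36 input symmetry functions, 12 outputs for the displacements of a water molecule's four MLWFCs) and illustrates why a bias-free linear layer guarantees a strictly linear response and zero displacement at zero field.

import numpy as np

class LinearResponseNet:
    # Toy stand-in for Network 1L: output = W @ features, no bias, so E = 0 gives zero shift.
    def __init__(self, n_features=36, n_outputs=12, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_outputs, n_features))  # "learned" weights

    def mlwfc_shift(self, field_features):
        # 12 numbers = x, y, z displacements of the 4 MLWFCs of one water molecule
        return self.W @ field_features

net = LinearResponseNet()
phi = np.random.default_rng(1).standard_normal(36)  # field-dependent symmetry functions
assert np.allclose(net.mlwfc_shift(2.0 * phi), 2.0 * net.mlwfc_shift(phi))  # linear response
assert np.allclose(net.mlwfc_shift(np.zeros(36)), 0.0)                      # zero field, zero shift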
This effective field depends on the electron density distribution, but evaluating and including the full three-dimensional electron density for every configuration in a training set requires a prohibitively large amount of storage space. Instead, we approximate the electron density by the charge density of the MLWFCs, assuming each MLWFC is a point charge of magnitude −2e 0 . This approximation is often used when computing molecular multipoles, as needed to predict vibrational spectra, for example 14,48 . Here it is important to note that the MLWFs of water are highly localized so that the center gives a reasonable representation of the location of the MLWF. Moreover, the electron density is essentially smeared over the scale of σ through a convolution with v 1 (r), which makes the resulting fields relatively insensitive to small-wavelength variations in the charge density. As a result, the electron density can be accurately approximated by the MLWFC charge density within our approach. The effective field is a functional of the set of MLWFC positions, E½ r w È É , and the positions of the MLWFCs themselves depend on the field, r w [E]. Therefore, we determine E and r w È É through self-consistent iteration. Our initial guess for E is obtained from the positions of the MLWFCs in the GT system. We then iterate this self-consistent loop until the MLWFC positions no longer change, within a tolerance of 2.6 × 10 −4 Å. In practice, we find that self-consistency is achieved quickly. Module 2. After Module 1 predicts the positions of the MLWFCs, Module 2 predicts the forces on the atomic sites. As with the first module, Module 2 consists of two networks: one that predicts the forces of the GT system and another that predicts the forces produced by E(r). To predict the forces in the GT system, we adopt the network used by Behler and coworkers 49 . This network, Network 2S, takes local geometric information of the atoms as inputs and, consequently, cannot capture long-range interactions. To describe long-range interactions, we introduce a second network (Network 2L in Fig. 1). This additional network predicts the forces on atomic sites due to the effective field E(r), which properly accounts for long-range interactions in the system. In practice, we again introduce a local reference frame for each water molecule and use local atomic coordinates and local electric fields as inputs. In this case, we also find that a network that is linear in E(r) accurately predicts the resulting long-range forces, consistent with the linear response of the system to a slowly-varying field. In practice, separating the data obtained from standard DFT calculations into the GT system and the long-range effective field is not straightforward. To solve this problem, we apply homogeneous electric fields of varying strength while keeping the atomic coordinates fixed. The fields only perturb the positions of the MLWFCs and the forces on the atoms-these perturbations are not related to the GT system. The changes induced by these electric fields are directly obtained from DFT calculations and are used to train Networks 1L and 2L and learn the response to long-range effective fields. The remaining part of the DFT data, which has the long-range field E(r) removed, is used to train Networks 1S and 2S and learn the response of the short-ranged GT system. See the "Methods" section for a more detailed discussion of the networks and the training procedure. 
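The self-consistent coupling between the effective field and the Wannier centers described above can be summarized in a short, schematic Python loop. The two callables, field_from_centers and predict_centers, are placeholders for the physics (the v1-smeared field of the MLWFC point charges and the Module 1 networks, respectively); only the loop structure and the 2.6 × 10⁻⁴ Å tolerance are taken from the text.

import numpy as np

TOL = 2.6e-4  # Angstrom, convergence tolerance quoted above

def scf_wannier_centers(coords, field_from_centers, predict_centers, e_ext=None, max_iter=50):
    # Iterate E <-> MLWFC positions until the centers stop moving.
    r_w = predict_centers(coords, field=None)            # initial guess: GT-system centers
    for _ in range(max_iter):
        E = field_from_centers(coords, r_w, e_ext)       # effective long-range field
        r_new = predict_centers(coords, field=E)         # GT centers + field-induced shift
        if np.max(np.abs(r_new - r_w)) < TOL:
            return r_new, E
        r_w = r_new
    raise RuntimeError("SCF loop did not converge")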
We emphasize that our approach to partitioning the system into a short-range GT piece and a long-range perturbation piece is different from other machine learning approaches for handling long-range electrostatics. The standard approaches usually partition the total energy into two parts, a short-ranged energy and an Ewald energy that is used to evaluate the long-range interactions. However, this partitioning results in a coupling between the short-and long-range interactions. For example, the short-range part of the energy in the 4G-HDNNP model depends on the effective charges that are assigned to the atoms, but these effective charges depend on long-range electrostatic interactions through the global charge equilibration process used to determine their values 28 . In contrast, the SCFNN approach isolates the short-range interactions during the training process and connects the short-range model to long-range interactions through E(r) via self-consistency. The GT system embodied by Network 1S and 2S does not depend on long-range electrostatics even implicitly; it is completely uncoupled from the long-range interactions. The effects of long-range electrostatic interactions are isolated within the second network of each module, Network 1L and Network 2L in Fig. 1. This separation of short-and long-ranged effects is similar in spirit to the principles underlying LMF theory 34,39,41 and related theories of uniform liquids 32,33,35,36,38 . Water's local structure is insensitive to long-range interactions. We demonstrate the success of the SCFNN approach by modeling liquid water. Water is the most important liquid on Earth. Yet, the importance of both short-and long-range interactions makes it difficult to model. Short-range interactions are responsible for water's hydrogen bond network that is essential to its structure and unusual but important thermodynamic properties 36,50 . Longrange interactions play key roles in water's dielectric response, interfacial structure, and can even influence water-mediated interactions 41,51 . Because of this broad importance, liquid water has served as a prototypical test system for many machine learning-based models 17,24,48,49,52 Here, we test our SCFNN model on a system of bulk liquid water by performing molecular dynamics (MD) simulations of 1000 molecules in the canonical ensemble under periodic boundary conditions. One conventional test on the validity of a NN potential is to compare the radial distribution function, g(r), between atomic sites for the different models. The g(r) predicted by the SCFNN model is the same as that predicted by the Behler-Parrinello (BP) model 49 for all three site-site correlations in water (Fig. 3). This level of agreement may be expected, based on previous work examining the structure of bulk water [36][37][38]40,41 . The radial distribution functions of water are determined mainly by shortrange, nearest-neighbor interactions, which arise from packing and hydrogen bonding; long-range interactions have little effect on the main features of g(r). Consequently, purely short-range models, like the GT system, can quantitatively reproduce the g(r) of water [36][37][38]40,41 . Similarly, the short-range BP model accurately describes the radial distribution functions, as does the SCFNN model, which includes long-range interactions. Long-range electrostatics and dielectric response. 
Though the short-range structure exemplified by the radial distribution function is insensitive to long-range interactions, long-range correlations are not. For example, the longitudinal component of the dipole density or polarization correlation function evaluated in reciprocal space, χ⁰_zz(k), was recently shown to be sensitive to long-range interactions 44. This correlation function is defined in terms of p_j, the dipole moment of water molecule j, and r_j, the position of the oxygen atom of water molecule j. Here we compare the longitudinal polarization correlation function predicted by our SCFNN model and the BP model. The original BP model is not able to predict molecular charge distributions. Therefore, to predict the dipole moment of water, we couple the BP model with the short-range part of the SCFNN model that predicts MLWFCs (Network 1S). We note that a similar strategy was used in previous work 47. The longitudinal polarization correlation functions predicted by our SCFNN model and the BP model agree everywhere except at small k, indicating that long-range correlations are different in the two models (Fig. 4a). The long-wavelength behavior of the polarization correlation function is related to the dielectric constant 35,53,54: in the limit k → 0, χ⁰_zz(k) approaches 1 − 1/ε, where ε ≈ 100 is the value of the dielectric constant of water predicted by the SCFNN, as discussed below. The χ⁰_zz(k) predicted by our SCFNN model is consistent with the expected behavior at small k. In contrast, short-range models, like the GT system 35,44,54 and the BP model, significantly deviate from the expected asymptotic value. Consequently, these short-range models are expected to have difficulties describing the dielectric screening that is important in nonuniform systems 25,37,39,41,44, for example. To further examine the dielectric properties of the NN models, we can apply homogeneous fields of varying strength to the system and examine its response. To do so, we performed finite-field simulations at constant displacement field, D. These finite-D simulations 55 can be naturally combined with our SCFNN model, unlike many other NN models that cannot handle external fields. Following previous work 44, we use D = D ẑ, vary the magnitude of the displacement field from D = 0 V/Å to D = 0.4 V/Å, and examine the polarization, P, induced in water. As shown in Fig. 4b, the polarization response of water to the external field is accurately predicted by dielectric continuum theory, as expected, further suggesting that the SCFNN model properly describes the dielectric response of water. To the best of our knowledge, this is the first NN model that can accurately describe the response of a system to external fields. We emphasize that this response is a genuine prediction of the model: the SCFNN was trained only on homogeneous fields of up to 0.2 V/Å (see Methods), yet it reproduces the response over the full range of applied displacement fields. Because the SCFNN can predict the response to electrostatic fields, we can use a highly efficient method to estimate the dielectric constant 56. To do so, we compute the r-dependent Kirkwood g-factor, G_K(r), with E = 0 and D = 0; here μ_1 is the dipole of a water molecule at the origin and M_1(r) is the total dipole moment in a sphere of radius r including the molecule at the origin. The composite Kirkwood g-factor, G_Kc(r), converges rapidly with r to a constant g_K, which is related to the dielectric constant through Kirkwood's relation for polarizable molecules 56, in which N is the number of water molecules, V is the system volume, β = 1/(k_B T), and ε_∞ is the high-frequency dielectric constant that arises from electronic polarization; ε_∞ ≈ 1.65 for the SCFNN model, as discussed below.
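The distance-dependent Kirkwood factor used here can be illustrated with a schematic routine. The sketch below assumes the common normalization G_K(r) = ⟨μ_1 · M_1(r)⟩ / ⟨|μ|²⟩ and a cubic periodic box; the prefactors entering Kirkwood's relation for polarizable molecules are convention dependent and are therefore omitted.

import numpy as np

def kirkwood_g(dipoles, positions, box_length, r_values):
    # Distance-dependent Kirkwood factor, averaged over all molecules as the "origin".
    mu_sq = np.mean(np.sum(dipoles ** 2, axis=1))
    gk = []
    for r_cut in r_values:
        acc = 0.0
        for mu_i, r_i in zip(dipoles, positions):
            d = positions - r_i
            d -= box_length * np.round(d / box_length)    # minimum-image convention
            mask = np.linalg.norm(d, axis=1) <= r_cut     # sphere includes the central molecule
            M1 = dipoles[mask].sum(axis=0)                # total dipole inside the sphere
            acc += np.dot(mu_i, M1)
        gk.append(acc / (len(dipoles) * mu_sq))
    return np.array(gk)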
As shown in Fig. 5a, the composite correlation function plateaus to a constant value near a distance of 6 Å, as expected 56. By replacing g_K in Eq. (6) with G_Kc(r) and inverting, we can compute the effective distance-dependent dielectric constant, shown in Fig. 5b. The dielectric constant rapidly converges to the bulk value of ε ≈ 100, which is close to estimates provided by van der Waals corrected functionals of similar accuracy 14 and significantly less than that predicted by the PBE functional that overstructures water 56. To push the limits of the SCFNN model, we can ask if it can properly predict dielectric screening in nonuniform environments, for which it was not trained. To do so, we simulate a water-vapor interface by extending the simulation cell along the z-axis to create a slab of water surrounded by a large vacuum region on either side. Because we have only trained on bulk configurations and not on configurations in the nonuniform system, we cannot expect the BP or the SCFNN model to accurately reproduce all features of the interface. Yet, both models do produce a stable interface, as shown by the densities in Fig. 6a, although the width of the SCFNN interface is smaller than that of the BP model. Both models predict densities that are lower than those predicted by models explicitly trained for the interface, which may be expected because the bulk models did not learn the unbalanced dispersion forces that exist at interfaces 25,36,57. However, the bulk density predicted by the SCFNN model is larger than that of the BP model, in better agreement with experiments. Dielectric screening manifests in the orientational structure of interfacial water, and we examine the orientational preferences of water by computing ⟨cos θ(z)⟩, where θ(z) is the angle formed by the surface normal and the dipole moment vector of a water molecule located at z (Fig. 6: The structure of the water-vapor interface. a Water density and b average cosine of the angle formed by the water dipole moment and the surface normal for the Behler-Parrinello (BP) and SCFNN models without any additional training.). At the water-vapor interface, water molecules tend to point their dipoles slightly toward the vapor phase, a consequence of breaking an average of one H-bond per molecule at water's surface. This dipole layer is screened by subsequent layers of water, such that no net orientation and zero electric field exists in the bulk. In the absence of long-range electrostatics, this screening is not achieved, and short-range models result in extended ordering from the interface into the bulk 25,36,37,39,44. Indeed, the short-ranged BP model results in long-ranged orientational ordering of water at the liquid-vapor interface because it lacks dielectric screening. In contrast, the SCFNN model displays the expected behavior. A single peak in ⟨cos θ(z)⟩ appears near the interface and goes to zero in the bulk of the slab due to proper screening of the interfacial dipole layer. This successful prediction suggests that the SCFNN approach may lead to the creation of NN models that are at least partially transferable to different environments. Electronic fluctuations. In addition to the screening encompassed by the static dielectric constant, the SCFNN model can also properly predict electronic fluctuations of water and the high-frequency dielectric constant.
To quantify electronic fluctuations, we compute the probability distribution of the magnitude of the water dipole moment from our simulations of bulk water using the SCFNN model, Fig. 7a. This distribution is dominated by the electronic polarization of water molecules and has a width consistent with predictions from ab initio MD simulations [58][59][60][61] . Moreover, the mean of the distribution yields an average dipole moment (2.9 D) in agreement with that estimated from experiments (2.9 D) 62 , further supporting that the SCFNN produces an accurate description of the molecular charge distribution in liquid water. We also decomposed the dipole moment distribution into contributions from short-and long-range interactions. The shortrange contribution to the electronic polarization is determined by Network 1S and the long-range part is determined by Network 1L. As shown in Fig. 7a, the molecular dipole moment distribution in bulk water is determined by short-range interactions, where the nuclear configurations of the bulk were determined using the full SCFNN model. This is consistent with the idea that local structure in a uniform bulk liquid, and fluctuations about that local structure, are determined by short-range interactions. Long-range electronic effects on electrostatic screening are quantified by the high-frequency dielectric constant, ε ∞ . Physically, ε ∞ can be thought of as the amount by which an electric field is screened without altering the positions of the nuclei; it quantifies the electronic response to applied fields. To estimate ε ∞ , we perform precisely this exercise: we compute the polarization of water in response to an external electric field of magnitude E and keep all positions of the nuclei fixed. The resulting polarization, shown in Fig. 7b, is consistent with a linear response to the field, as expected for dielectric screening. Fitting the induced electronic polarization to dielectric continuum theory expectations yields ε ∞ ≈ 1.65, in good agreement with the experimental value of 1.77, demonstrating that the SCFNN model can accurately predict long-range electronic response to electrostatic fields. Finally, we compare the electronic fluctuations of the SCFNN model to predictions made by the 4G-HDNNP model. To do so, we perform a MD simulation of bulk water using the extended simple point charge (SPC/E) water model 63 and use the resulting configurations to determine the dipole moment distribution using each NN model, Fig. 7c. Using the same set of configurations allows us to compare only the ability of each model to predict charge distributions. The 4G-HDNNP relies on atomic partial charges obtained from electronic structure calculations during the training process. The original implementation of the 4G-HDNNP model used Hirshfeld charges 28,64 . We additionally train another version of the 4G-HDNNP model using Mulliken charges to examine the dependence of the results on the method of determining the atomic partial charges 65 . See the Methods section for a more detailed discussion of the training procedure. The SCFNN model results in a dipole moment distribution centered near the experimentally-determined average dipole moment. Moreover, the width of the SCFNN distribution is in good agreement with ab initio predictions 58-61 , although slightly narrower than that obtained using SCFNN-generated configurations (Fig. 7a). 
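The molecular dipoles entering these distributions follow directly from the nuclear and MLWFC positions. A minimal sketch, assuming the point-charge picture used above (each MLWFC carries −2e; the effective nuclear charges of +6e for oxygen and +1e for hydrogen correspond to a pseudopotential valence description and are an assumption of this illustration):

import numpy as np

E_ANGSTROM_TO_DEBYE = 4.8032  # 1 e * 1 Angstrom expressed in Debye

def water_dipole(r_O, r_H1, r_H2, wannier_centers):
    # Molecular dipole (Debye) from nuclei plus the four MLWFCs, each carrying -2e.
    # Effective nuclear charges in a pseudopotential description: O -> +6e, H -> +1e,
    # so the molecule is neutral (6 + 1 + 1 - 4*2 = 0) and the result is origin independent.
    mu = 6.0 * np.asarray(r_O) + np.asarray(r_H1) + np.asarray(r_H2)
    mu -= 2.0 * np.sum(np.asarray(wannier_centers), axis=0)
    return E_ANGSTROM_TO_DEBYE * mu

# Positions are Cartesian coordinates in Angstrom; the magnitude of the returned vector
# is the quantity histogrammed in the dipole moment distributions discussed above.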
The 4G-HDNNP models result in significantly narrower distributions than the SCFNN model, and the average molecular dipole moment is either too large (Hirshfeld) or too small (Mulliken). The prediction of distinctly different molecular dipole moments demonstrates a key disadvantage of relying on atomic partial charges during training-the definition of partial charges can be ambiguous and often artificial. Then, the resulting 4G-HDNNP models trained with different partial charges will give different results. In contrast, the SCFNN model removes this ambiguity by representing the molecular charge distributions using MLWFCs. Discussion In this work, we have presented a general strategy to construct NN potentials that can properly account for the long-range response of molecular systems that is responsible for dielectric screening and related phenomena. We demonstrated that this model produces the correct long-range polarization correlations in liquid water, as well as the correct response of liquid water to external electrostatic fields. Both of these quantities are related to the dielectric constant and require a proper description of long-range interactions. In contrast, current derivations of NN potentials result in short-range models that cannot capture these effects. We anticipate that this approach will be of broad use to the molecular machine learning and simulation community for modeling the electrostatic and dielectric properties of molecular systems. In contrast to short-range interactions that must be properly learned to describe the different local environments encountered at extended interfaces and at solute surfaces, the response of the system to long-range, slowly-varying fields is quite general. Learning the long-range response (through Networks 1L and 2L) is analogous to learning a linear response in most cases, and we expect the resulting model to be relatively transferable; we emphasize, however, that the SCFNN is not limited to the linear response regime. As such, our resulting SCFNN model can make predictions about conditions on which it was not trained. For example, we trained the model for electric fields of magnitude 0, 0.1, and 0.2 V/Å, and then used this model to successfully predict the response of the system to displacement fields with magnitudes between 0 and 0.4 V/Å. This suggests that our approach can be used to train NN models in more complex environments and then accurately predict the response of water to long-range fields in those environments. We also showed that the SCFNN model trained for bulk water can predict orientational structure at the water-vapor interface as a result of learning dipolar screening, further emphasizing the ability of the SCFNN to predict the response of the system to electrostatic fields. The ability to learn the response of condensed phases to applied fields should make the SCFNN appealing for modeling atomic systems in electrochemical environments 66 , where electrostatic potential differences drive chemical processes, as well as in the modeling of interfaces with polar surfaces where the application of displacement fields is used to properly model surface charge densities 67,68 . Our SCFNN approach is complementary to many established methods for creating NN potentials. Learning the short-range, GT system interactions can be accomplished with any method that uses local geometric information, and recent advances in optimizing this training can be leveraged 69,70 . 
In this case, the precise form of Networks 1S and 2S can be replaced with an alternative NN. Then, Networks 1L and 2L can be used as defined here, within the general SCFNN workflow, resulting in a variant of the desired NN potential that can describe the effects of long-range interactions. Because of this, we expect our SCFNN approach to be transferable and readily interfaced with current and future machine learning methods for modeling short-range molecular interactions. We close with a discussion of the limitations of the SCFNN model in its current form and possible strategies for improvement. We rely on defining a local molecular coordinate system on each water molecule, in order to make our model rotationally equivariant. Moreover, we assumed that a specific number of MLWFCs are associated with each molecule, four for each water molecule, and examined their coordinates within the local frame. These steps are complicated when bond breakage and formation occurs. Although the general procedure can be readily extended to many molecules, the set of possible molecules must be known in advance. Strategies for constructing rotationally equivariant NN potentials without a local reference frame have been developed and can be used in place of the strategy used here to develop further generations of the SCFNN model that improve upon these deficiencies 30,48,[71][72][73] . Methods Training the SCFNN. Our training and test set consists of 1571 configurations of 64 water molecules 52 . Homogeneous electric fields were applied to the system, as described further in the next section. We used two-thirds of the configurations for training and one-third to test the training of the network. To train the networks we need to separate the DFT data into the GT system and the long-range effective field. However, that separation is not straightforward in practice. To achieve this, we use the differences in the MLWFC locations and forces induced by different fields to fit Networks 1L and 2L. We now describe this procedure in detail for fitting Network 1L, and Network 2L was fit following a similar approach. To learn the effects of long-range interactions, we consider perturbations to the positions of the MLWFCs induced by external electric fields of different magnitudes. Consider applying two fields of strength E j j and E j j 0 . These fields will alter the MLWFC positions by Δr w [R, E] and Δr 0 w ½R; E 0 , respectively. However, both Δr w and Δr 0 w are not directly obtainable from a single DFT calculation. Instead, we can readily compute the difference in perturbations, Δr w À Δr 0 w , directly from the DFT data, because Here r w and r 0 w are the locations of the MLWFCs in the full system in the presence of the field E and E 0 , respectively, and these positions can be readily computed in the simulations. These differences in the MLWFC positions are used to fit Network 1L. In addition, we also exploit the fact that Δr w = 0 when E = 0. This allows us to fix the zero point of Network 1L. After fitting Networks 1L and 2L, we use them to predict the contribution of the effective field to the MLWFC locations and forces. We then subtract that part from the DFT data. What remains corresponds to the short-range GT system, and this is used to train Networks 1S and 2S. We now describe the detailed structure of the four networks used here. Network 1S. In the local frame of water molecule i, we construct two types of symmetry functions as inputs to Network 1S. 
The first type is the type 2 BP symmetry function 74 , Here η and r s are parameters that adjust the width and center of the Gaussian, and f c is a cutoff function whose value and slope go to zero at the radial cutoff r c . We adopted the same cutoff function as previous work 49 , and the cutoff r c is set equal to 12 Bohr. The second type of symmetry function is similar to the type 4 BP symmetry function 74 . This symmetry function depends on the angle between r ij and the axis of the local frame, Here, ζ and λ are parameters that adjust the dependence of the angular term. We use 36 symmetry functions as input to Network 1S. Network 1S itself consists of two hidden layers that contain 24 and 16 nodes. The output layer consists of 12 nodes, corresponding to the three-dimensional coordinates of the four MLWFCs of a central water molecule. Network 1S is a fully connected feedforward network, and we use tanhðxÞ as its activation function. Network 1L. In the local frame of water molecule i, we construct one type of symmetry function as input to Network 1L, Here, E j is the effective field exerted on atom j. We use 36 symmetry functions as inputs to Network 1L. Network 1L has no hidden layers. The output layer consists of 12 nodes, corresponding to the three-dimensional coordinates of the perturbations of a water molecule's four MLWFCs induced by the external field. Network 2S. Network 2S is exactly the same as the BP Network employed in the previous work 49 . In brief, the network contains 2 hidden layers, each containing 25 nodes. Type 2 and 4 BP symmetry functions are used as inputs to the network. The network for oxygen takes 30 symmetry functions as inputs, while the network for hydrogen takes 27 symmetry functions as inputs. A hyperbolic tangent is used as the activation function. Network 2L. Network 2L uses the same type of symmetry function as Network 1L. The network for the force on the oxygen and for the force on hydrogen are trained independently. To predict the force on the oxygen, we center the local frame on the oxygen atom. When the force on a hydrogen atom is the target, we center the local frame on a hydrogen atom. We use 36 symmetry functions as inputs to Network 2L. Network 2L has no hidden layers. The inputs map linearly onto the forces on the atoms. 4G-HDNNP. The same configurations used to train and test the SCFNN model are used to train and test the 4G-HDNNP. Hirshfeld and Mulliken charges for these configurations are obtained with DFT. Two-thirds of these configurations are used to train the 4G-HDNNP and the remaining one-third is used to test the training. We trained two versions of the 4G-HDNNP, one with Hirshfeld charges and the other with Mulliken charges. The 4G-HDNNP-Hirshfeld model yields an average charge error of 0.012e 0 on the test set, while the 4G-HDNNP-Mulliken yields an average charge error of 0.02e 0 on the test set. DFT calculations. The DFT calculations followed previous work 52, 75 and used published configurations of water as the training set 52 . In short, all calculations were performed with CP2K (version 7) 76,77 , using the revPBE0 hybrid functional with 25% exact exchange 15,78,79 , the D3 dispersion correction of Grimme 80 , Goedecker-Tetter-Hutter pseudopotentials 81 , and TZV2P basis sets 82 , with a plane wave cutoff of 400 Ry. Maximally localized Wannier function centers 83 were evaluated with CP2K, using the LOCALIZE option. The maximally localized Wannier function spreads were minimized according to previous work 84 . 
Hirshfeld and Mulliken charges were determined using the default implementations in CP2K. A homogeneous, external electric field was applied to the system using the Berry phase approach, with the PERIODIC_EFIELD option in CP2K 56,85,86 . Electric fields of magnitude 0, 0.1, and 0.2 V/Å were applied along the z-direction of the simulation cell. Sample input files are given at Zenodo 87 . MD simulations. MD simulations are performed in the canonical (NVT) ensemble, with a constant temperature of 300 K maintained using a Berendsen thermostat 88 . The system consisted of 1000 water molecules in a cubic box 31.2 Å in length. The equations of motion were integrated with a timestep of 0.5 fs. Radial distribution functions and longitudinal polarization correlation functions were computed from 100 independent trajectories that were each 50 ps in length. Finite-D simulations were performed under the same simulation conditions, and each trajectory was 50 ps long at each magnitude of D. The liquid-vapor simulation was performed at 300 K. The system consisted of 1000 water molecules. The dimensions of the simulation box were L x = L y = 30 Å and L z = 90 Å. The density profiles and the orientational profiles of water were obtained from 59 independent trajectories that were each 50 ps in length. Each trajectory is equilibrated for at least 50 ps before data are collected. The SPC/E water 63 simulation is performed in the canonical (NVT) ensemble, with a constant temperature of 300 K maintained using a Berendsen thermostat 88 . The system consisted of 1000 water molecules in a cubic box of length 31.2 Å. One thousand configurations were sampled from a 50 ns long trajectory of the SPC/E water simulation and the SCFNN and 4G-HDNNP were applied to these configurations to predict the dipole moments of water molecules. Data availability The data generated to train and test the SCFNN and the 4G-HDNNP have been deposited in Zenodo under accession code https://doi.org/10.5281/zenodo.5760191 87 . Source data are provided with this paper. Code availability All DFT calculations were performed with CP2K version 7. In-house code was used to construct the NN potentials and perform the MD simulations. These codes are available on Github: https://doi.org/10.5281/zenodo.5919317 89 . Received: 29 September 2021; Accepted: 7 March 2022;
9,613.8
2021-09-27T00:00:00.000
[ "Computer Science", "Chemistry", "Physics" ]
Rare double-hit with two translocations involving IGH both, with BCL2 and BCL3, in a monoclonal B-cell lymphoma/leukemia Background Chronic Lymphocytic Leukemia (CLL) is a lymphoproliferative disease characterized by multiple recurring clonal cytogenetic anomalies and is the most common leukemia in adults. Chromosomal abnormalities associated with CLL include trisomy 12 and IGH;BCL3 rearrangement [t(14;19)(q32;q13)] that juxtaposes a proto-oncogenic gene BCL3 and an immunoglobulin heavy chain, a translocation that may be associated with shorter survival. In addition to the IGH;BCL3 rearrangement, other translocations involving 14q32 locus are involved in various lymphoproliferative pathologies pointing toward the significance of IGH locus in oncogenic progression. Significantly, in the majority of B-cell neoplasms that carry an IGH;BCL3 rearrangement, it is a sole translocation involving an IGH locus. Case Presentation We report a patient who, in addition to trisomy 12, carried a rare double-hit translocation characterized by the IGH;BCL3 translocation and an additional clonal IGH;BCL2 translocation involving IGH and another proto-oncogene BCL2, t(14;18)(q32;q21), commonly found in follicular lymphoma. Further single nucleotide polymorphism (SNP) array-based analysis detected a duplication of the 58.8 kb region at 19q13.32 adjacent to the BCL3 translocation junction on chromosome 19q13. Interestingly, the duplicated region contained ERCC2 gene, which encodes a DNA excision repair protein involved in the cancer-prone syndrome, xeroderma pigmentosum. Conclusions Taken together our findings indicate the existence of double-translocation driven oncogenic events involving both IGH loci and proto-oncogenes BCL2 and BCL3. Importantly, the IGH;BCL3 translocation was characterized by the duplication of the genomic region adjacent to BCL3, containing a major DNA repair factor, ERCC2. Electronic supplementary material The online version of this article (doi:10.1186/s13039-015-0203-y) contains supplementary material, which is available to authorized users. Background Chronic lymphocytic leukemia (CLL) is a genetically heterogeneous neoplasm characterized by the progressive accumulation of B cells in bone marrow, lymph nodes and blood. The progression of the disease is highly variable, ranging from the indolent state to the highly aggressive leukemia marked by short survival times. Numerous chromosomal abnormalities have been shown to contribute to CLL, including but not limited to trisomy 12, loss of 11q22-q23 containing the ATM gene, loss of 13q14.3 and 6q, loss of 17p13 containing TP53 gene, and others [1]. A specific translocation [t(14;19)(q32;q13)] which juxtaposes the immunoglobulin heavy chain locus (IGH) sequence (HUGO, 14q32.33) and the gene encoding an anti-apoptotic protein BCL3 resulting in overexpression of BCL3 [2], is of a particular interest because it does not occur frequently, and it is usually associated with shorter survival [1]. In addition, this translocation as well as other translocations involving the IGH locus, although found in CLL, have been also described in poorly clinicopathologically described B cell lymphomas, categorized as atypical CLL [3]. Another translocation involving IGH locus and an antiapoptotic protein BCL2, t(14;18)(q32;q21), is considered a hallmark of aggressive lymphomas, such as follicular lymphomas (FL) [4,5] and diffuse large B-cell lymphomas (DLBCL) [6][7][8]. 
We report a patient who was evaluated for leukocytosis with lymphocytosis, and who was found to have a marked bone marrow involvement by neoplastic lymphocytes showing a B-cell line of differentiation as determined by flow cytometry and immunohistochemistry. Morphologically, the lymphocytes were small to medium in size, and a subset showed knobby cytoplasmic blebbing. Further immunophenotypic studies ruled out involvement by typical CLL or mantle cell lymphoma in this patient, highlighting an atypical characteristic of this malignancy. Surprisingly, in addition to trisomy 12, which is commonly found in CLL, chromosome analysis at the haploid band resolution 300-400 revealed the presence of clones carrying a single t(14;19)(q32;q13) IGH;BCL3 translocation or both, t(14;19)(q32;q13) IGH;BCL3 and t(14;18)(q32;q21) IGH;BCL2 translocations. Whereas single translocation events involving IGH and BCL3 or BCL2 loci can be detected in lymphoid cancers, double-hit translocations involving both IGH;BCL2 and IGH;BCL3 rearrangements in the same patient are exceptionally rare. Additionally, using SNP array analysis we detected a local microduplication event at the BCL3 translocation junction on chromosome 19 involving a DNA repair factor ERCC2. An additional file describing experimental procedures is available (see Additional file 1). However, due to the nature of the SNP array platform we cannot exclude the possibility that ERCC2 duplication occurred on the non-rearranged chromosome 19. Given the absence of classical CLL and mantle cell lymphoma immunophenotype, size and morphology of the neoplastic lymphocytes, and the presence of B-cell lymphoproliferative genetic markers characteristic of aggressive B-cell lymphomas, we favor the diagnosis of an aggressive monoclonal B-cell lymphoma/ leukemia, likely B-prolymphocytic leukemia in this patient driven by the IGH;BCL3 and IGH;BCL2 mediated mechanisms. Case presentation Our patient was an 82-year-old African-American female who presented to her oncologist for leukocytosis. Complete blood count (CBC) data showed a white blood cell (WBC) count of 24.5 K/ MicroL with relative and absolute lymphocytosis of 70 % and 17.2K/ MicroL respectively. A bone marrow biopsy and aspirate was performed and the specimen was sent to the hematopathologist for evaluation. Flow cytometric analysis showed 40 % monoclonal B-cells with lambda light chain restriction of moderate intensity with dim CD5 co-expression and the following immunophenotype: CD10-, CD19+, CD20+, CD200-, CD23 +/-(dim), FMC7 +/-(dim), CD38-, CD25-, CD103-and CD11c-. By histology the bone marrow showed marked involvement by lymphocytes with interstitial pattern of distribution estimated at 80-90 % of the total marrow cellularity. H&E staining, CD20 and Pax5 immunohistochemistry confirmed the nature of the lymphocytes consistent with B-cell line of differentiation ( Fig. 1a and b and data not shown). No lymphoid aggregate formation was seen. Morphologically the lymphocytes were small to medium in size and showed clumped nuclear chromatin with small but conspicuous nucleoli and no morphologic features suggestive of involvement by DLBCL, Burkitt lymphoma or other high-grade aggressive B-cell lymphomas. By immunohistochemistry the B-cells were negative for Cyclin-D1 and, additionally, they were negative for Sox-11, arguing against the diagnosis of mantle cell lymphoma or Cyclin-D1 negative mantle cell lymphoma, respectively. 
Evaluation of the aspirate smear showed the lymphocytes with occasional small knobby cytoplasmic blebbing but no discernible villous hairy projections. Overall, based on the immunophenotypic findings by flow cytometric analysis, it is unlikely that this lymphoma represents typical CLL, since it not only expresses monoclonal light chain with moderate intensity, it also shows weak CD23 expression and is negative for CD200. The possibility of mantle cell lymphoma and Cyclin D-1 negative mantle cell lymphoma was considered and further evaluated but ruled out by negative staining with Cyclin D1 and Sox-11 immunohistochemistry, respectively. Given the morphologic observation of knobby cytoplasmic blebbing, the diagnosis of B-PLL (B-prolymphocytic leukemia) was considered a strong possibility. Cytogenetic studies of bone marrow preparations from this patient revealed two related abnormal clones in eight of twenty-two metaphase spreads examined. The first clone (stemline [sl]), six cells, contained a translocation between the long (q) arms of chromosomes 14 and 19 (Fig. 2a, arrows), and gain of one copy of chromosome 12 (underlined). The second clone, a composite of two cells, was a doubling of the stemline clone (the first clone) with two copies of an additional translocation between 14q and 18q (Fig. 3a, red arrows; six copies of chromosome 12 are underlined). The remaining fourteen cells contained a normal karyotype. Additionally, FISH studies using BCL3 and IGH break-apart probes confirmed BCL3 and IGH translocations in the first clone (Fig. 2b and c, arrows).
Fig. 1 caption: a H&E staining of the bone marrow showing marked interstitial involvement by medium sized cells with small but conspicuous nucleoli. b CD20 immunohistochemistry highlights marked involvement of the bone marrow by abnormal B-cells.
Fig. 2 caption: a Karyotype of the first clone (stemline [sl]): translocation between the long (q) arms of chromosomes 14 and 19 (arrows) and gain of one copy of chromosome 12 (underlined). b BCL3 break-apart FISH probe (green/red) indicates rearrangement of a BCL3 locus in the first clone (arrows); normal FISH signal is shown on the right side of the panel for comparison. c IGH break-apart FISH probe (green/red) indicates rearrangement of an IGH locus in the first clone (arrows); normal FISH signal is shown on the right side of the panel for comparison.
With respect to the second clone, the IGH;BCL2 dual-fusion probe picked up eight signals for IGH (Fig. 3b, black and white IGH image) which, in combination with the karyotype data, was indicative of the presence of eight IGH derivative translocation products. This suggests that all IGH loci are rearranged in the second clone in addition to the near duplication of the chromosome content. The BCL2 probe picked up six BCL2 signals (Fig. 3b, black and white BCL2 image), indicating the presence of four translocated BCL2 loci and two intact BCL2 loci from the non-translocated chromosome 18. Importantly, IGH and BCL2 signals formed four fusions (Fig. 3b, "merge" panel, arrows), confirming two derivative IGH;BCL2 loci on chromosomes 14 and 18. In addition, the IGH break-apart probe confirmed the presence of four translocated IGH loci (Fig. 3b, right panel, four red and four green signals). In summary, the first clone was characterized by the IGH;BCL3 translocation event resulting in derivative chromosomes 14 and 19 (Fig. 4a), and the second clone was characterized by the presence of the IGH;BCL2 translocation in addition to the IGH;BCL3 translocation and a near-tetraploid chromosome content, resulting in four derivative chromosomes 14 and two derivative chromosomes 18 and 19 (Fig. 4b). Conclusions In this report we describe a case of a monoclonal B-cell lymphoma/leukemia with extensive bone marrow involvement and a CLL-like immunophenotype showing CD5 expression by flow cytometric analysis, and furthermore displaying a double translocation event between IGH loci and both BCL3 and BCL2 gene loci. Additionally, we detected a microduplication event in the genomic locus adjacent to the BCL3 gene involving the nucleotide excision repair protein ERCC2. Our karyotype analysis identified two abnormal clones in this patient. The first clone contained, in addition to trisomy 12, a single cytogenetically defined translocation t(14;19)(q32;q13) involving the IGH;BCL3 loci. The second clone represented a near duplication of the chromosome content of the first clone and the presence of two copies of an additional translocation involving the IGH locus and the BCL2 gene, t(14;18)(q32;q21). Therefore, all IGH loci located on chromosome 14 were rearranged in the second clone, which was confirmed by our FISH studies. These results are consistent with a neoplastic process in this patient's bone marrow specimen. Importantly, the translocation t(14;19)(q32;q13) involving IGH and BCL3 loci, which results in altered BCL3 expression, has been described in aggressive forms of lymphomas and atypical CLLs [3,10]. The translocation t(14;18)(q32;q21) is a recurring abnormality in B-cell lymphomas. This translocation places the oncogene BCL2 on 18q21 within the immunoglobulin heavy chain locus on 14q32 and therefore deregulates the BCL2 function. Juxtaposition of both BCL3 and BCL2 genes next to the active IGH locus leads to their altered expression, decreased apoptosis, and leukemia progression [10,11].
Fig. 4 caption: Schematic representation of the rearrangements described in clone 1 and clone 2; chromosomes involved in rearrangements are shown on the right. a Rearrangements in clone 1 involving IGH (14q32) and BCL3 (19q13.1), which result in two derivative chromosomes (der(19) and der(14)); rearranged chromosomes 14 and 19 are indicated by an arrow on the right. b Rearrangements in clone 2, which include a chromosome duplication event and an IGH (14q32) and BCL2 (18q21) translocation in addition to the IGH;BCL3 rearrangements; clone 2 therefore contains 8 derivative chromosomes (a pair of each der(19), der(18), der(14)t(14;19), and der(14)t(14;18)). Rearranged chromosomes 18 and 19 are indicated by the arrows on the right.
Concurrent rearrangements of BCL2(18q21) and BCL3(19q13) have been reported in atypical chronic lymphocytic leukemia at Richter's transformation [12]. Both BCL3 and BCL2 are proto-oncogenes; however, mechanistically, BCL2 functions as a key regulator of apoptosis through the mitochondrial pathways [11], whereas BCL3 is a predominantly nuclear protein recruited to NFκB-responsive promoters where it regulates the apoptotic program [13], suggesting that rearrangements of BCL2 and BCL3 may differentially modulate leukemic progression through distinct molecular pathways. The appearance of the IGH;BCL3 clone followed by the genome near-duplication event and the appearance of the second clone containing both IGH;BCL3 and IGH;BCL2 rearrangements in our patient might, therefore, have an additive effect on the development of this neoplasm.
ERCC2 duplication in this patient is an intriguing finding, because of the role of ERCC2 in DNA nucleotide excision repair process. Specific ERCC2 polymorphisms are implicated in susceptibility to melanoma [14] as well as triple negative breast cancer [15]. However, according to the Database of Genomic Variants (http://dgv.tcag.ca/dgv/ app/home), duplications of ERCC2 genomic region are also found in healthy individuals. Therefore, the significance of ERCC2 duplication in context of IGH;BCL3 and
2,888.2
2015-12-30T00:00:00.000
[ "Medicine", "Biology" ]
Comparing free-space and fiber-coupled detectors for Fabry–Pérot-based all-optical photoacoustic tomography Abstract. Significance: Highly sensitive detection is crucial for all-optical photoacoustic (PA) imaging. However, free-space optical detectors are prone to optical aberrations, which can degrade the pressure sensitivity and result in deteriorated image quality. While spatial mode-filtering has been proposed to alleviate these problems in Fabry–Pérot-based pressure sensors, their real functional advantage has never been properly investigated. Aim: We rigorously and quantitatively compare the performance of free-space and fiber-coupled detectors for Fabry–Pérot-based pressure sensors. Approach: We develop and characterize a quantitative correlative setup capable of simultaneous PA imaging using a free space and a fiber-coupled detector. Results: We found that fiber-coupled detectors are superior in terms of both signal level and image quality in realistic all-optical PA tomography settings. Conclusions: Our study has important practical implications in the field of PA imaging, as for most applications and implementations fiber-coupled detectors are relatively easy to employ since they do not require modifications to the core of the system but only to the peripherally located detector. Introduction All-optical photoacoustic (PA) tomography is an emerging alternative to classical piezoelectric approaches. 1 Multiple optical detector types and geometries are constantly being developed and improved with the overarching aim of matching the detection sensitivity of piezoelectric systems. Among them, Fabry-Pérot interferometer (FPI) sensors are particularly promising, as they combine the ability to measure acoustic waves with high spatial resolution and pressure sensitivity. For this application, a pressure-sensitive FP device is formed by sandwiching a thin layer (10 to 100 μm) of elastomer (e.g., Parylene C) between two dichroic mirrors. This optical resonator has then the ability to elastically deform under pressure, modulating the FP interferometer's transfer function (ITF), which is a function of the cavity thickness. The sensor is then interrogated by tuning the laser wavelength to the point of maximum slope on the ITF (so-called bias wavelength) which translates the acoustic (pressure) waves into a modulation of an optical (interference) signal. In practice, this approach has allowed acoustic sensing in the range of 100 to 10 6 Pa with a very broadband frequency response (bandwidth ∼0.1 to 40 MHz). 2,3 As optical devices, FPIs are sensitive to light beam aberrations, which can have detrimental effects on their performance under certain conditions. Among different types of optical sensors, FP cavities are especially sensitive to aberrations as their sensing principle is dependent on the high spatial uniformity of the light beam to facilitate efficient interference. 4 It was previously shown that both beam and cavity aberrations, 5 surface roughness 6 as well as mirror nonparallelism 4,7 can lead to severe deterioration of the optical sensitivity of the FPI and that this loss can be partially recovered by the use of aberration correction techniques including adaptive optics. 8 An alternative method of aberration correction is based on spatial-mode-filtering 4,5,9 where a single-mode fiber 5,9 or an optical pinhole 4 is used to reject part of the interrogation light to improve the measurement sensitivity. 
This effect can be explained using different frameworks, either as rejecting more divergent components of the beam which carry lower contrast interference fringes 4 or removing higher-order spatial modes of the beam which carry spectrally shifted interference fringes which lower the contrast (and hence sensitivity) of the ITF. 5 Experimentally, significant improvements of sensitivity by mode-filtering were shown with the use of a single-mode fiber-coupled detector (FCD). 5,9 Although FCDs are commonly used in the community, 10-12 their advantages and disadvantages to free space detectors (FSDs) were never directly compared in realistic, experimental conditions. Hence, the real functional advantage of FCDs remained unclear, since important factors such as the effective sensitivity gain as well as expected power losses in FCDs were not previously quantified. In this work, we, therefore, aimed to rigorously compare FCDs and FSDs while taking into account differences in photodetector working points, fiber-coupling efficiency as well as the frequency response of the photodiodes. We found that an FCD is capable of not only significantly improving the optical sensitivity, but also the ultrasound signal level, both of which ultimately translate to improvements in PA image quality. Importantly, FCDs can achieve these gains with only moderate losses in the transmitted power as compared to FSDs. Taken all together, these are strong arguments for the use of FCDs in FP-based PA sensing and imaging experiments. This design using back-coupling into the same fiber with redirection using a circulator simplifies the optical path and allows for high coupling efficiency as the beam can be freely resized to match the PA measurement requirements imposed by the FPI. Photoacoustic Imaging and Image Reconstruction The wire phantom was prepared by suspending a 30 μm nylon surgical suture (NYLON ZO030590) on a custom three-dimensional (3D) printed scaffold and submerging it in a water bath. This work was done in accordance with the European Communities Council Directive (2010/ 63/EU) and all procedures described were approved by EMBL's committee for animal welfare and institutional animal care and use. Experiments were performed using C57Bl6/j transgenic mice from EMBL Heidelberg core colonies. An aqueous gel was inserted between the skin and the FPI sensor head to facilitate acoustic coupling. For imaging, mice were anesthetized with isoflurane (2% in oxygen, Harvard Apparatus). Body temperature was kept constant throughout the experiments by the use of a small animal physiological monitoring system (ST2 75-1500, Harvard Apparatus), and eyes were covered with ointment to prevent drying. The diameter of the excitation beam incident on the skin surface was ≈ 1.5 cm, and the fluence was ≈ 1 mJ cm −2 and was thus below the safe maximum permissible exposure for the skin. 13 The FPI used in this study closely resembles previously published FPI designs 2 and uses dielectric mirrors with 98% reflectivity between 1500 and 1600 nm on a slightly wedged polymethyl methacrylate backing. A ∼20 μm Parylene C spacer is then deposited in-between the mirrors using vapor deposition. The maximum scan area was 8 × 8 mm 2 and a typical scan acquired ∼6000 waveforms each comprising over 1000 time points (sampling rate 125 MHz, ATS9440-128M, AlazarTech). The image acquisition time was ≈ 10 min and was limited by the response time of the interrogation laser. 
The diameter of the focused interrogation laser beam was 92 μm, which, to a first approximation, defines the acoustic element size. As fiber coupling might cause substantial losses in the transmitted power, we compared the transmitted power between the FCD and the FSD and quantified it to be on average 47 ± 35% (mean ± 2σ, n = 6561 points on the FPI). The large variation in the transmitted power probably stems from imperfections of the optical setup (limited telecentricity of the scan lens combined with a non-conjugated galvo system) which could in principle be improved for higher power transmission. All images shown are based on averaging three subsequent excitation pulses at each scan position. Following the acquisition of the PA signals, the following protocol was used to reconstruct and display the images: (1) To correct for the effects of the photodiode working point, the signals were normalized by the power incident on the PD at each scan position, which was acquired during the characterization. (2) For the mouse image, the recorded PA signals were interpolated onto a three times finer spatial grid. (3) The tissue sound speed was estimated using an autofocus method. 14 (4) A 3D image was then reconstructed from the interpolated PA signals using a time-reversal-based algorithm 15 for the mouse image and a back-projection algorithm 16 for the wire phantom, with the sound speed obtained in step (3) as an input parameter. The image reconstruction was implemented using k-Wave, an open-source Matlab toolbox. 16 Quantification of FPI Optical Sensitivity We use an approach based on fitting of the Pseudo-Voigt function V(x) = η L(x; f_L) + (1 − η) G(x; f_G), (1) where L(x; f_L) is a Lorentzian with f_L being the full width at half maximum (FWHM) parameter of the Lorentzian, G(x; f_G) is a Gaussian with f_G being the FWHM parameter of the Gaussian, and η is the mixing weight between the two profiles (see Refs. 5 and 12 for details). We calculate the normalized optical sensitivity from the fit according to our previous work. 5 Quantification of PA Image Quality We use three previously described and routinely used image metrics 14 to compare PA image quality between the FSD and FCD detectors (a short numerical sketch of these metrics is given after this passage): 1. The Brenner gradient is given by B = Σ_{x,y} [I(x + 2, y) − I(x, y)]², where I(x, y) are the pixel values of the image at point (x, y). 2. The Tenenbaum gradient is given by T = Σ_{x,y} [(I ⊛ S_x)(x, y)² + (I ⊛ S_y)(x, y)²], where S is the Sobel operator, stated as S_x = [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]] and S_y = S_xᵀ. 3. And the normalized variance is NV = (1 / (N_pix ⟨I⟩)) Σ_{x,y} (I(x, y) − ⟨I⟩)², where ⟨I⟩ is the mean intensity of the image and N_pix is the number of pixels. Correlative FPI Characterization Using Free-Space and Fiber-Coupled Detectors To meticulously study the previously suggested 5,9 differences between FSD and FCD, we implemented a custom correlative FP-based photoacoustic tomography (PAT) setup [Fig. 1(a)]. We performed simultaneous characterization of the FPI using both an FSD and FCD across ∼6500 scan points over an 8 × 8 mm sensor surface. We found that the FCD significantly improves the ITF visibility [Fig. 1(b)], which is a strong indicator for increased optical sensitivity. We then proceeded to measure the normalized optical sensitivity 5 across the surface of the FPI and observed that the FCD indeed shows an increase in sensitivity [Fig. 1(c)], which translates to an overall increase in the order of ∼30% across the whole FPI sensor.
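For reference, a small numpy/scipy sketch of these three metrics is given below; the exact normalisation constants and Sobel convention used in the paper may differ, so this is an illustrative implementation rather than the authors' code.

```python
# Illustrative implementations of the three image-quality metrics named above
# (Brenner gradient, Tenenbaum gradient, normalized variance).
import numpy as np
from scipy.ndimage import sobel

def brenner(img):
    # Sum of squared differences between pixels two columns apart.
    d = img[:, 2:] - img[:, :-2]
    return float(np.sum(d ** 2))

def tenenbaum(img):
    # Sum of squared Sobel gradient magnitudes.
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    return float(np.sum(gx ** 2 + gy ** 2))

def normalized_variance(img):
    # Variance of the pixel values normalised by the mean intensity.
    mean = img.mean()
    return float(np.sum((img - mean) ** 2) / (img.size * mean))

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # placeholder image
print(brenner(img), tenenbaum(img), normalized_variance(img))
```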
To investigate this sensitivity increase further, we analyzed the data in a point-wise manner and observed that the increase in sensitivity is uniform, with almost all characterized points exhibiting a higher sensitivity with the FCD [Fig. 1(d)]. To ascertain that the apparent increase in optically measured sensitivity actually translates to improved ultrasound sensing capabilities, we performed correlative ultrasound measurements using our system. We observed that the measured ultrasound amplitude is higher by 43 ± 0.11% (mean ± 2σ for n ≈ 4400 scan positions) for the FCD [Fig. 1(e)], which is in agreement with the increased optical sensitivity of the FCD. However, it is important to note that differences in the working point between the FCD and FSD need to be taken into account for a proper comparison between the conditions, as the ultrasound amplitude is directly proportional to the direct current (DC) level of the photodetector. This effect can be removed by normalizing the measured signals by the working point known from the FPI characterization (a minimal numerical illustration of this normalization follows this passage). We, therefore, also analyzed the data in a point-wise manner by plotting the normalized experimentally measured signal improvement (S_FC^exp / S_FS^exp) against the increase in normalized optical sensitivity (S_FC^opt / S_FS^opt) and we observe that the two are in good agreement [Fig. 1(f)], corroborating the observation that the FCD shows an increase in effective sensitivity. Despite our efforts to accurately quantify both the optical as well as acoustic gains, there are still off-diagonal outliers present in the data. These presumably stem either from small distortions in the transfer function due to the wavelength dependence of the BS splitting ratio that affect quantifying the optical sensitivity, or from small differences in the frequency response of the photodiodes employed that affect quantifying the ultrasound sensitivity. Correlative Imaging Comparing Free-Space and Fiber-Coupled Detectors Having characterized the FPI-based system both optically and using ultrasound sources, we went on to characterize the PA imaging properties of the FCD and FSD. We performed PA imaging of a wire phantom and observed that the reconstructed image intensity is higher for the FCD [Fig. 2(a)], which is consistent with the characterization results. Here also, the PA waveforms were normalized to the working point of the photodiode to remove the effect of detector differences from the PA signal amplitude. We quantified the increase in image quality by calculating commonly used image quality metrics 14 and show that for all metrics there is a significant improvement in image quality when using the FCD [Fig. 2(b)]. We further compared the performance of FC and FS detectors by performing in vivo mouse vasculature imaging experiments. We acquired PAT data from the lower back area using 600 nm excitation to visualize the vasculature in a label-free manner. We observed that also in this case the FCD provides a significantly better image quality [Fig. 2(c)], which can also be quantified using appropriate metrics [Fig. 2(d)]. This corroborates the superiority of the FCD for both phantoms as well as in vivo imaging in FPI-based PAT. Discussion We have experimentally demonstrated that mode-filtering with the use of FCDs is capable of significantly improving the sensitivity of FPI-based PAT.
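The following minimal sketch shows the kind of working-point normalisation and point-wise FC/FS comparison described above; all array names and numerical values are made up for illustration and are not the measured data.

```python
# Hedged sketch: divide each PA amplitude by the photodiode DC level (working
# point) at that scan position, then compare the two detectors point-wise.
import numpy as np

def normalized_gain(amp_fc, amp_fs, dc_fc, dc_fs):
    """Point-wise FC/FS signal ratio after removing the working-point effect."""
    s_fc = amp_fc / dc_fc   # amplitude scales with the detector DC level,
    s_fs = amp_fs / dc_fs   # so divide it out before comparing detectors
    return s_fc / s_fs

rng = np.random.default_rng(1)
n = 4400                                   # ~number of correlative scan positions
dc_fs = rng.uniform(0.5, 1.0, n)           # hypothetical DC levels (a.u.)
dc_fc = rng.uniform(0.3, 0.8, n)
amp_fs = dc_fs * rng.normal(1.0, 0.05, n)
amp_fc = dc_fc * rng.normal(1.43, 0.05, n) # ~43% higher normalised signal
print("median FC/FS gain:", np.median(normalized_gain(amp_fc, amp_fs, dc_fc, dc_fs)))
```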
This demonstrated improvement has important practical implications, as for most applications and implementations fiber-coupled detectors are relatively easy to employ, since they do not require modifications to the core of the system but only to the peripherally located detector. We would like to highlight that the obtainable improvements are still dependent on the FPI properties such as thickness, interrogation spot size, and surface quality of the FPI. Based on our observations, however, for FPIs of relevance to PA imaging we expect an overall improvement in the same order as reported here. Additionally, we note that careful optical design is required because experimentally induced losses in fiber coupling may be significant and potentially disadvantageous compared with using an FSD. We note that further improvements to the sensitivity can be obtained in principle by combining fiber-based mode filtering with active wavefront modulation approaches (as previously suggested in Ref. 5). To achieve this, however, several technical challenges need to be overcome in the future, especially on the optical engineering side, to increase the beam stability in the system allowing for efficient back-coupling into the fiber in conjunction with active wavefront shaping.
3,277.4
2021-12-06T00:00:00.000
[ "Physics" ]
Active learning via informed search in movement parameter space for efficient robot task learning and transfer Learning complex physical tasks via trial-and-error is still challenging for high-degree-of-freedom robots. The greatest challenges are devising a suitable objective function that defines the task, and the high sample complexity of learning the task. We propose a novel active learning framework, consisting of decoupled task model and exploration components, which does not require an objective function. The task model is specific to a task and maps the parameter space, defining a trial, to the trial outcome space. The exploration component enables efficient search in the trial-parameter space to generate the subsequent most informative trials, by simultaneously exploiting all the information gained from previous trials and reducing the task model’s overall uncertainty. We analyse the performance of our framework in a simulation environment and further validate it on a challenging bimanual-robot puck-passing task. Results show that the robot successfully acquires the necessary skills after only 100 trials without any prior information about the task or target positions. Decoupling the framework’s components also enables efficient skill transfer to new environments, which is validated experimentally. Introduction The motivation for this work comes from the approach humans take when learning complex tasks such as acquiring new skills, using new tools or learning sports. Most of their learning process is centred around trials and errors (Newell 1991). These trials do not necessarily lead directly to accomplishing the task, but eventually a confident task execution is learned (Pugh et al. 2016). For robot learning, each trial can be uniquely defined, i.e. parameterised, by a set of movement parameters (Ijspeert et al. 2013), which means that performing a trial is equivalent to selecting a point in the movement parameter space and evaluating it. In this paper, we focus on the problem of learning a task through trial and error, where a task can be executed by selecting an appropriate point in the movement parameter space. Our aim is to develop a sample-efficient approach that avoids trials which are not useful, by not doing random or exhaustive exploration during learning, intended for systems where trial execution is expensive. Moreover, during the learning phase, we do not provide any prior information about the task (e.g. goal position or cost function) or the environment to the agent, in order to reduce inputted domain knowledge and aim to make the approach "task-agnostic". To this end, we introduce a novel iterative and online active-learning approach, which performs informed search in the movement parameter space defining the trials, in order to sample datapoints. The proposed learning framework consists of a task model and an exploration component. The task model is implemented as a Gaussian Process (GP) Regression (GPR) (Rasmussen and Williams 2006) function that maps the movement parameters as inputs, to the trial outcomes as outputs. The exploration component performs search in the movement parameter space to find a parameter vector that encodes a subsequent most informative trial for the task model (a schematic sketch of this interplay is given below).
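The interplay between the two components can be summarised as a simple loop. The sketch below is a schematic illustration only, not the authors' Algorithm 1 (which appears later in the paper); the task_model and exploration objects, their method names and the stopping threshold are assumed placeholders.

```python
# Schematic sketch of the trial-and-error loop described above: the exploration
# component picks the next most informative trial, the task model is refit on
# successful trials only, and the exploration component is updated with every
# trial (successful or failed).
def learn_task(parameter_space, execute_trial, task_model, exploration,
               max_trials=100, uncertainty_threshold=0.1):
    for _ in range(max_trials):
        x = exploration.sample(parameter_space)     # next trial parameters
        success, outcome = execute_trial(x)         # run on robot / simulator
        if success:
            task_model.fit_with(x, outcome)         # GPR uses successes only
        exploration.update(x, success)              # penalisation uses failures too
        if task_model.mean_uncertainty() < uncertainty_threshold:
            break                                   # model confident enough
    return task_model
```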
The exploration component represents a composite query strategy in the Active Learning parlance, obtained via probabilistic modelling of previous trial data and uncertainty inherent to the GPR task model. It is implemented as a probability distribution over the movement parameter space, from which parameter vectors are sampled. During the learning phase, the exploration component iteratively finds datapoints in the parameter space used to fit the task model and thus lower the task model's posterior uncertainty. Actual performance of the trial outcomes, i.e. a cost function, is not used by either component as the desired target outcomes are not provided. This renders the components independent from a specific task requirement. For transfer we consider tasks which can be different but have the same interface, i.e. response, from the environment, and the same parameter space. That is, the exploration and sampling of the datapoints for the task model is independent of a particular task and related to the agent's kinematic model. Therefore, the same exploration sequence would be applied in different environments. Since the exploration component maintains the information about the successful trials, these trials can be directly reproduced (transferred) in different environments, in order to gather data and fit the task model for the new environment. As a consequence, new task models can be learned from scratch with significantly fewer trial evaluations. To present and analyse the performance of the proposed framework we use the MuJoCo (Todorov et al. 2012) simulation environment as well as a physical robot. Both the simulated and real robot task are similar, in that they employ an agent which tries to learn how to move another object (puck) to arbitrary locations using its body. During the testing phase, the agent is presented with a set of previously-unseen arbitrary target positions. It is expected to automatically generate an appropriate movement action, based on the learned task model, to hit the puck so that it lands on the target. This is the actual task that the agent needs to perform well. For evaluation on the real robot we have selected the ice hockey puck-passing task, as shown in Fig. 1. Fig. 1 Experimental setup: robot DE NIRO uses both arms to maneuver the ice hockey stick and learns the skills needed to pass the puck (blue) to user-specified target positions (green). Estimation of the polar coordinates θ and L is done using the head-mounted Kinect camera. The red line in the bottom is parallel to the robot heading direction and is the zero-angle reference axis. We selected this particular task as it is interesting for its complexity: (i) it requires dual-arm coordination, (ii) there is a non-trivial extension of the robot's kinematic model via the ice hockey stick, and (iii) the surface friction and stick-surface contact models are quite difficult to model. The proposed approach requires very little prior knowledge about the system: no previous task knowledge (strategies, desired movements, etc.), prior kinematic (stick and joint constraints) nor environment (surface friction, contact forces, etc.) models are provided. No demonstrations or expert human supervision are necessary. The number of input parameters is given (which represent the displacement of each degree of freedom) and their ranges, without contextual information regarding their influence or importance. To summarise, the main contributions of this work are: -The probabilistic framework for trial-and-error robot task learning, based on a task-agnostic and sample-efficient search of the trial parameter space. This is achieved
through the exploration component which is a novel composite query function consisting of the model uncertainty and the penalisation function. -As a consequence of decoupling the task model and exploration components, efficient task transfer to new environments is possible, as shown experimentally. The robot successfully learns the task models in the new environments in significantly fewer trials, by executing only successful trials generated in the previous environment. The rest of the paper is organised as follows: Sect. 2 gives an overview of the related work. Section 3 formulates the problem we are addressing and in Sect. 4 we present the proposed framework. The proof of concept on a simulated task is given in Sect. 5, and the robot experiment is presented and results are discussed in Sect. 6. Finally, we conclude and discuss the future directions in Sect. 7. Active learning and Bayesian optimisation The proposed approach can be characterised as an Active Learning (AL) approach (Settles 2012), a field similar to Bayesian Optimisation (BO) (Mockus 1994) and Experimental Design (ED) (Santner et al. 2013). The purpose of these approaches is to efficiently gather datapoints used to fit a model, which are most informative about the underlying data distribution. Such datapoints enable learning a good model in a sample-efficient way. The idea behind ED is that all the datapoints are defined offline, before the execution, which limits the flexibility of the approach. The difference between AL and BO is rather subtle but important, which is why we focus more on them in this section. Both approaches model a low-fidelity surrogate function, which is usually obtained as a posterior over the unknown function. The mean and variance of this surrogate function are used, through a set of rules, in order to query a new input from the domain to evaluate the unknown true function over. In BO terminology, this set of rules is called an Acquisition Function, while in AL it is called a Query Strategy. In BO the function that is evaluated needs to be optimised, while in AL the query strategy is actually a mechanism for obtaining labels for input data to further improve the function estimate. Therefore, the end-goal of BO is to optimise an underlying function (whence the name), while for AL it is not. Consequently, the nature of the acquisition and query functions slightly differs, as in BO they also need to improve the evaluated function's value. The query function in AL focuses solely on querying inputs that will be most informative for a supervised learning problem, and minimise the uncertainty in such a model. Such query functions do not have explicit exploitation components, as opposed to their counterparts in BO, thus no explicit function optimisation is being done. Some of the most popular BO Acquisition Functions are: probability of improvement (Kushner 1964), expected improvement (Močkus 1975), GP upper confidence bound (GP-UCB) (Srinivas et al. 2010), entropy search (Hennig and Schuler 2012) and predictive entropy search (Hernández-Lobato et al. 2014). The objective function usually quantifies the model performance on a specific task. If the acquisition function needs to "know" the actual value of an objective function in order to select the next parameter to evaluate, this selection inherently carries task information embedded in the objective function.
As opposed to BO, our proposed approach tries to avoid this. Several interesting examples in the literature use BO in robotic applications (Lizotte et al. 2007;Martinez-Cantin et al. 2009;Tesch et al. 2011;Calandra et al. 2016). When AL Query Functions are implemented with GPs, similarly to BO acquisition functions, they provide uncertainty-based exploration (Seo et al. 2000;Kapoor et al. 2007;Kroemer et al. 2010;Rodrigues et al. 2014). However, they do not necessarily need to rely on the surrogate's posterior, one example being empirically estimating the learning progress (Lopes et al. 2012), which requires performance evaluation. Other examples include uncertainty sampling, introduced by Lewis and Gale (1994), which is similar to our approach in that the authors use the classifier prediction uncertainty. However, our uncertainty measure is derived from the GP posterior distribution and combined with the penalisation function. Another interesting approach to querying is based on maintaining multiple models for prediction and selecting those points over whose prediction the models disagree the most (Bongard and Lipson 2005), which is related to the notion of query by committee (Seung et al. 1992). Otte et al. (2014) and Kulick et al. (2015) present examples of applying AL to robotics, mostly for learning the parameters of the controller by probing environment interactions. Other robotic applications include Thrun and Möller (1992), Daniel et al. (2014), Dima et al. (2004), Baranes and Oudeyer (2013) and Kroemer et al. (2010), where AL helps relieve the sample complexity, one of the main limitations imposed by hardware for robotic experiments. Most of the above-mentioned AL sample query strategies, which rely on prediction uncertainty, do not take into account the actual order of acquiring datapoints explicitly, which is important to understand the boundaries within the parameter space. This is particularly needed in robotics, where physical constraints play a crucial role. Therefore, we explicitly include such information within our exploration component. Including safety constraints within the BO framework has been done through the optimisation constraints in the objective function (Englert and Toussaint 2016;Berkenkamp et al. 2016). Gelbart et al. (2014) and Schreiter et al. (2015) model safety constraints as a separate GP model, but this approach requires additional computational resources. There have been several approaches in the literature employing GPs to learn mapping functions similar to our task model (Nguyen-Tuong et al. 2009;Nemec et al. 2011;Forte et al. 2012). The latter two generate full trajectories encoded via DMPs and introduce constraints that guide the new policy to be close to the previously demonstrated examples in the trajectory database. Parameter space exploration The concept of good exploration strategies is crucial in supervised learning, as well as RL, where it can improve sample selection and sample-efficiency. Several authors argue for the importance of exploration and the benefits of moving it directly to the parameter space, as opposed to e.g. the action space in RL. This can reduce the variance caused by noisy trajectories, and generally avoids premature convergence to suboptimal solutions (Rückstiess et al. 2010;Plappert et al. 2017). Evolutionary Strategy-based methods (Hansen et al. 2003;Heidrich-Meisner and Igel 2009;Wierstra et al. 2014;Salimans et al.
2017) introduce noise in the parameter space to guide exploration, acting as a black-box optimiser, but have poor sample-efficiency. The main inspiration for the proposed work is to shift away from the common utilitarian paradigm of task learning through optimising some utility (cost) function. Some of the approaches in this direction develop exploration which tends to be decoupled from the actual task definition embodied in the cost function. A recent parameter space search approach is that of Pathak et al. (2017), where an intrinsic curiosity module is implemented to promote exploration, by learning to distinguish changes in the environment caused by the agent from random ones. The Quality Diversity (Pugh et al. 2016) family of approaches, such as MAP-elites (Cully et al. 2015) and Novelty Search with Local Competition (Lehman and Stanley 2011b), performs exploration by encouraging diversity in candidate behaviours and improving fitness over clusters of behaviours in the behaviour space. However, in our presented problem formulation we do not aim to derive diverse behaviours, but rather to find those about which the system is uncertain and which avoid dangerous situations. More importantly, there is no notion of relative task fitness involved, as the proposed exploration component of our method generates points which are informative for the model, unrelated to their actual fitness, as the fitness is task specific. The notion behind the proposed framework is akin to the concept of objective-free learning (Lehman and Stanley 2011a), which promotes diversifying the behaviours as an alternative to having the objective as the only means of discovery, which can in fact lead to deceptive local optima (Pugh et al. 2016). As opposed to promoting novelty, our approach actually selects behaviours which are most useful for the task model. Methods relying on techniques like Motor Babbling (Demiris and Dearden 2005;Kormushev et al. 2015), Goal Babbling (Rolf et al. 2010) and Skill Babbling (Reinhart 2017) can learn the robot's forward/inverse model by iteratively performing random motor commands and recording their outcomes. However, these methods are usually data-inefficient due to random exploration. Kahn et al. (2017) use neural networks with bootstrapping and dropout to obtain uncertainty estimates of the observations for predicting possible collisions and adapting the robot control accordingly. These estimates are not further used to explore alternative control policies. Deisenroth et al. (2015) show that using GPs within model-based Reinforcement Learning (RL) helps in improving the sample-efficiency of learning the task. The posterior mean and variance are used to address the exploration/exploitation trade-off during policy learning. Still, the above-mentioned approaches require an explicit cost function optimised by the agent in order to learn the task. Learning robotic tasks with complex kinematics, by exploring the low-level control space, is presented in Kormushev et al. (2015). Additional elements such as links and leverage points are incorporated into the original kinematic chain to skew the mapping of motor torques to end-effector pose. The robot adjusts to these modifications, without an explicit model of the robot's kinematics or extra links provided, but such an approach would have difficulties when scaled.
Bimanual skill learning There are few examples in the literature of learning to play bimanual ice hockey, but none of them simultaneously address: bimanual manipulation, and using a tool which distorts/modifies the original robot kinematics. A relevant example of a single-arm robot learning to play hockey using RL is presented in Daniel et al. (2013), where the robot learns to send the puck into desired reward zones and gets feedback after each trial. Kinaesthetic teaching is required to extract the shape of the movement which is then improved. Recently, Chebotar et al. (2017) combined model-free and model-based RL updates to learn the optimal policy that shoots the puck to one of the three possible goals. The tracked puck-to-goal distance is used within the cost function to provide reward shaping. Our approach differs from the above two, because during the training phase no information about the goal nor the environment is provided. Problem formulation and movement parameterisation The main problem we are trying to solve is efficient high-dimensional parameter search. We employ the proposed exploration component to search for movement parameter datapoints used to fit our task model component. The goal is to eventually have a good model performance during testing, on the task of reaching the desired outcomes. Figure 2 compares the information flow diagrams for the standard supervised learning paradigm and our proposed approach. The model outputs a control (e.g. movement) parameter vector, given an input, and the trial outcome is the product of this output when applied in the environment. The performance metric is a cost function comparing the trial outcome and the target desired outcome, and is used in supervised learning (Fig. 2a) to update the model. In our case (Fig. 2b), the "input" can be seen as the whole movement parameter space from which the model samples and outputs a movement parameter vector. The proposed approach does not use the model performance metric to update the model, rather the trial outcome, since desired outcomes are not provided nor needed. To demonstrate the proposed approach, we consider a task in which the agent needs to perform a movement that displaces its body from a fixed initial configuration. This movement can potentially lead to a contact with another object (e.g. a puck), moving this object to a certain location. One such executed event is called a trial. The movement of the object is governed by the dynamical properties of the environment (object mass, surface friction coefficient, obstacles, etc.) which are unknown to the agent. The only feedback that the agent receives is whether the movement it performed successfully made contact with the object (successful trial) or not (failed trial), and in the former case, what is the final resting position of the object (i.e. trial/task outcome). The action that the agent performs is defined by a vector of D movement parameters x = [q_1, ..., q_{D−1}, s] that define the whole motion sequence as a "one-shot" action. This movement parameter vector contains the displacements q for each of the actuators w.r.t. a fixed starting configuration, and the speed of the overall action execution s. We assume that there already exists a position controller that translates the goal position to a trajectory. The set of all movement parameter vectors that encode actions (trials) is the movement parameter space. Even though this space is continuous, we discretise it to obtain a finite set of possible combinations (a small sketch of this discretisation is given after this paragraph).
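As a toy illustration of this discretisation, the finite parameter set is simply the Cartesian product of the per-parameter value grids; the joint ranges, value counts and speed range below are assumed for the example and are not the paper's exact settings.

```python
# Build a discrete movement-parameter space x = [q_1, q_2, s] as the Cartesian
# product of per-parameter grids (values are illustrative placeholders).
import itertools
import numpy as np

joint_values = np.linspace(-np.pi / 2, np.pi / 2, 7)   # e.g. 7 values per joint
speed_values = np.linspace(0.5, 1.0, 3)                # hypothetical speed range
parameter_space = np.array(list(itertools.product(joint_values, joint_values, speed_values)))
print(parameter_space.shape)   # (7 * 7 * 3, 3) candidate trials
```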
This discretisation allows us to perform fast and exact inference over the parameter space, without the need for approximate inference methods. In the simulation experiments we use revolute joints, so the parameters are given in radians. Their units are the same even though their ranges might be different. The same holds in the robotic experiments, where the parameters are displacements in the Cartesian space measured in centimeters, with the exception of the wrist angle which is in radians. Although the wrist angle has different units, the effect it causes can be comparable to the displacements in centimeters. After the trajectory has been executed, in case of a successful trial, the obtained trial outcome can be any value (either continuous or discrete) and is used to fit the task model component. Both the successful and failed trials contribute to the exploration component. Under such a setup, the agent does not optimise for a particular task performance, but rather tries to avoid failed trials. Proposed approach The base assumption of our approach is that similar movement parameter vectors result in similar trial outcomes. Therefore, the task regression mapping function is smooth, without hard discontinuities, i.e. Lipschitz continuous. In order to provide a sufficiently diverse sample distribution for the regression model to create a generic mapping during training, successful trials are necessary, i.e. the agent needs to move the object. The main challenge is selecting the trial to evaluate next which will lead to the highest information gain. The proposed approach consists of two decoupled components updated using previous experience, i.e. previous trials: the task model and exploration components. They are implemented as functions over the movement parameter space, mapping each movement parameter vector to a certain value. The mathematical formulation, together with the underlying intuition behind the task model and exploration components, is given in Sects. 4.1 and 4.2, respectively. Task model component The task model component uses the information from scarce successful trials, and creates a mapping between the movement parameter space (X) as input, and the trial outcomes, the puck's final position (θ_puck, L_puck), as output. This component creates two independent task models for each of the puck's polar coordinates, angle and distance (as depicted later on in Sect. 5 in Fig. 7a, b, respectively). To this end, we use GPR as it generalises well with limited function evaluations, which in our case are the successful trials executed on the robot. Using the notation from the previous section, let us define a point in the movement parameter space x ∈ R^D. The main assumption is that for any finite set of N points X = {x_1, ..., x_N}, the corresponding function evaluations (in our case trial outcomes) at these points can be considered as another set of random variables f = [f(x_1), ..., f(x_N)], whose joint distribution is a multivariate Gaussian: f ∼ N(μ(X), K), where μ(x_i) is the prior mean function and K(x_i, x_j) is the kernel function for some pair of parameter vectors x_i, x_j. When applied to all the pairs from X, the kernel produces the matrix of covariances K. Having a joint probability of the function variables, it is possible to get the conditional probability of some parameter vector's evaluation f(x_i) given the others, and this is how we derive the posterior based on observations from the trials. In our case, X is the set of movement parameter vectors which led to successful trials during the training phase.
The set X* contains all the possible parameter combinations, since we need to perform inference over the whole parameter space in order to obtain the task models. We define the extended joint probability as below, and use matrix algebra to deduce the posterior: [f, f*] ∼ N(0, [[K + σ_ε²I, K*], [K*ᵀ, K**]]), from which f* | f ∼ N(K*ᵀ(K + σ_ε²I)⁻¹ f, K** − K*ᵀ(K + σ_ε²I)⁻¹ K*). We assume a mean of 0 for our prior as we do not want to input any previous knowledge in our task model. Similarly to K, K** is the matrix of covariances for all the pairs from the set X*, and K* gives us the similarity of the successful parameter vectors X to each point in the parameter space X*. Within the kernel definition we also consider zero mean Gaussian noise, ε ∼ N(0, σ_ε²), to account for both modelling and measurement inaccuracies. We evaluated the performance using the squared exponential (SE), Matérn 5/2 and the rational quadratic (RQ) kernels. The best performing kernels are the SE kernel, k(x_i, x_j) = σ_f² exp(−d(x_i, x_j)² / (2σ_l²)), and the RQ kernel, k(x_i, x_j) = σ_f² (1 + d(x_i, x_j)² / (2ασ_l²))^(−α), and these results are presented in Fig. 6. The distance measure d is defined as the Euclidean distance between the points in the parameter space. Even though the concept of a distance metric in a high-dimensional space is not straightforward to decide and interpret, we opt for the Euclidean distance based on the discussion from Aggarwal et al. (2001), who argue that in problems with a fixed high dimensionality it is preferable to use a lower norm. Moreover, the presented kernel showed good empirical performance. From the similarity measure given by the kernel it follows that points which are far away from the training points will have a higher variance associated with their prediction. The coefficients α = D/2, σ_f² and σ_l² are the scaling parameter, variance and the lengthscale of the kernel, respectively. The advantage of GPR is that for every point for which we estimate the posterior distribution, we know its mean and variance. The means are interpreted as the current task models' predictions, and the variance as their confidence about these predictions. Therefore, regions of the parameter space which are farther away from the training points will have a higher variance and thus the uncertainty about their predictions is higher. After each new successful trial, we can re-estimate the posteriors over the whole movement parameter space, in order to update both task models and their uncertainty (a compact numerical sketch of this procedure is given after this passage). The inference is memory demanding but executes in seconds on a workstation with a GTX 1070 GPU. Even though it is possible to learn the GPR hyperparameters from data, we do not perform this because of: (i) Low number of samples; as the main goal of our approach is sample-efficiency, having a low number of samples and learning the hyperparameters with the marginal likelihood is very likely to give overfitting results (at least several dozens of samples are needed to learn something meaningful (Cully et al. 2015)). (ii) Search instability; the order of acquiring samples is important and each subsequent point in the search depends on the previous ones. Changing the GPR hyperparameters after each step would cause large variance in the sample acquisitions, which may lead to instability. Therefore, we perform an extensive search of the hyperparameters, but keep them fixed throughout the training phase. Exploration component The exploration component exploits all the past trial information, in order to obtain the selection function that guides the movement parameter search and selection process. The elements contributing to the selection function are the information about the unsuccessful trials, expressed through a penalisation function, and the GPR model uncertainty.
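Before turning to the exploration terms, here is a minimal numerical sketch of the GPR task model just described: a zero-mean prior, an SE kernel, and standard Gaussian conditioning to obtain the posterior mean (prediction) and variance (uncertainty) over the whole discretised parameter space. The hyperparameter values and the toy data are placeholders, not the paper's settings.

```python
# Minimal GPR sketch: SE kernel with fixed hyperparameters, zero-mean prior,
# and the standard conditioning formulas for the posterior over X_all.
import numpy as np

def se_kernel(A, B, sigma_f=1.0, sigma_l=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # squared Euclidean distances
    return sigma_f ** 2 * np.exp(-d2 / (2 * sigma_l ** 2))

def gp_posterior(X_train, y_train, X_all, sigma_eps=1e-2):
    K = se_kernel(X_train, X_train) + sigma_eps ** 2 * np.eye(len(X_train))
    K_s = se_kernel(X_train, X_all)            # successful trials vs. full space
    K_ss = se_kernel(X_all, X_all)
    K_inv = np.linalg.inv(K)                   # fine for a sketch; Cholesky in practice
    mean = K_s.T @ K_inv @ y_train             # posterior mean (zero prior mean)
    cov = K_ss - K_s.T @ K_inv @ K_s
    return mean, np.diag(cov)                  # per-point predictive variance

# Toy usage: 3 successful trials in a 2-D parameter space.
X_all = np.random.default_rng(0).uniform(-1, 1, (200, 2))
X_tr, y_tr = X_all[:3], np.array([10.0, 25.0, 18.0])      # e.g. puck distances
mu, var = gp_posterior(X_tr, y_tr, X_all)
```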
Since the movement parameters used as inputs for GPR are the same for both the distance and angle task models, their corresponding GPR uncertainty will also be the same. The penalisation function and the GPR model uncertainty are represented as improper density functions (IDF), since their values for each point in the parameter space are in the range [0, 1] but they do not sum to 1. Therefore, multiplying these two functions acts as a kind of particle filter. Since we are interested in the relative "informativeness" of each point in the parameter space when sampling the next trial, the actual absolute values of these functions do not play a crucial role. An example of these IDFs is visualised in Fig. 3. Penalisation IDF (PIDF) probabilistically penalises regions around the points in the movement parameter space which have led to failed trials. This inhibits repetition and reduces the probability of selecting parameters leading to failed trials. In our experiments, a trial is failed if the agent does not move the object. Additionally, in the simulation experiment, the trial is stopped if the agent contacts (hits) the wall. In the robotic experiment, fail cases are also when: -An inverse kinematic solution cannot be found for the displacements defined by the movement parameters. -The displacements would produce physical damage to the robot (self collision, ignoring the stick constraint or hitting itself with the stick). -The mechanical fuse breaks off due to excessive force. -The swing movement misses the puck (no puck movement). The PIDF is implemented as a mixture of inverted D-dimensional Gaussians (MoG), Eq. (2), as they consider all failed trials evenly. Gelbart et al. (2014) and Englert and Toussaint (2016) chose a GP for this; however, a MoG provides better expressiveness of multiple modes in a distribution. The PIDF is initialised as a uniform distribution p_p(X) ≡ U(X). The uniform prior ensures that initially all the movement actions have an equal probability of being selected. Each of the K_N evaluated trials is represented by a Gaussian with a mean μ_k^P coinciding with the parameter vector x_k associated with this trial. The coefficient cov is the covariance coefficient hyperparameter. The covariance matrix of the k-th Gaussian is a diagonal matrix, calculated based on how often each of the D parameters takes repeated values, considering all the previous failed trials. This is implemented by using a counter for each parameter. In this way, the Gaussians have a smaller variance along the dimensions corresponding to the parameters with frequently repeating values, thus applying higher penalisation and forcing them to change when 'stuck'. Parameters with evenly occurring values have wider Gaussians. This procedure inhibits the selection of parameter values which are likely to contribute to failed trials, and stimulates exploring new ones. Conversely, the parameter vector leading to a successful trial is stimulated with a non-inverted and high-variance Gaussian, which promotes exploring nearby regions of the space. The PIDF can be interpreted as p(successful_trial | uncertain_trial), i.e. the likelihood that the parameter vector will lead to a successful trial given that the model is uncertain about it (Eq. (2)). Model uncertainty IDF (UIDF) is intrinsic to GPR (Eq. (3)) and is used to encourage the exploration of the parameter space regions which are most unknown to the underlying task models. UIDF is updated for both successful and failed trials, as the exploration does not depend on the actual trial outcomes.
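The sketch below is a deliberately simplified, assumed illustration of this PIDF bookkeeping: it uses a single isotropic width per trial instead of the per-dimension, counter-based covariances described above. Failed trials subtract an inverted Gaussian from an initially uniform PIDF, successful trials add a wider positive bump, and values are kept in [0, 1].

```python
# Simplified PIDF update (isotropic widths; not the paper's exact mixture).
import numpy as np

def gaussian_bump(X, centre, width):
    d2 = ((X - centre) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def update_pidf(pidf, X_all, x_trial, success, cov=5.0):
    width = 1.0 / cov                        # larger cov -> narrower penalisation
    if success:
        pidf = pidf + gaussian_bump(X_all, x_trial, 3 * width)   # encourage nearby points
    else:
        pidf = pidf - gaussian_bump(X_all, x_trial, width)       # penalise the region
    return np.clip(pidf, 0.0, 1.0)

X_all = np.random.default_rng(2).uniform(-1, 1, (500, 2))        # toy parameter grid
pidf = np.ones(len(X_all))                                       # uniform initialisation
pidf = update_pidf(pidf, X_all, np.array([0.2, -0.4]), success=False)
```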
Selection IDF (SIDF) combines the information provided by the UIDF, which can be interpreted as the prior over unevaluated movements, and the PIDF as the likelihood of improving the task models. Through the product of the PIDF and UIDF, we derive the SIDF, Eq. (4), a non-parametric distribution used as a query function, as the posterior IDF from which the optimal parameter vector for the next trial is sampled. The trial generation and execution are repeated iteratively until the stopping conditions are met. Since we are not minimising a cost function, the learning procedure can be stopped when the average model uncertainty (i.e. entropy) drops below a certain threshold. This can be interpreted as stopping when the agent is certain that it has learned some task model. The pseudocode showing the learning process of the proposed framework is presented in Algorithm 1. Learned task model evaluation In order to evaluate the performance during the testing phase, it is necessary for the angle θ(x) and distance L(x) task models to be invertible. Given the target coordinates (desired trial outcome), a single appropriate movement parameter vector x defining the swing action that passes the puck to the target needs to be generated. It is difficult to generate exactly one unique parameter vector which precisely realises both the desired coordinate values θ_d and L_d. Therefore, the one which approximates them both best and is feasible for robot execution is selected. This is achieved by taking the parameter vector which minimises the pairwise squared distance of the coordinate pair within the combined model parameter space, as in Eq. (5). An additional constraint on the parameter vector is that its corresponding PIDF value has to be below a certain threshold to avoid penalised movements. In our numerical approach, this is done iteratively over the whole parameter space, and takes a couple of milliseconds to run. Alternatively, obtaining the optimal parameter vector could be achieved using any standard optimisation algorithm. More importance can be given to a specific coordinate by adjusting the weighting factor δ (a small sketch of this selection and inversion step follows below). Task transfer After the task model is learned, if we were to repeat the approach in a different environment, the algorithm would still generate the same trials, both successful and failed. The new environment can be considered as a different task as long as the parameter space stays the same, but the trial outcome values change. This is possible due to the fact that the proposed approach does not take into account the actual puck position values when generating subsequent trials, but rather the binary feedback of whether the trial was successful or failed. To reiterate, only the successful trials contribute to forming the task models as they actually move the puck, while the failed trials are inherent to the robot's physical structure. Therefore, we can separate the failed and successful trials, and execute only the latter in the new environment in order to retrain the task models. This significantly reduces the number of trials needed to learn the skill in a new environment, because usually while exploring the parameter space the majority of the trials executed are failed. Experimental validation of the task transfer capability of the proposed approach is presented in Sect. 6.3.
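The sketch below illustrates the two numerical steps just described, under the same assumed arrays as the earlier sketches: sampling the next trial from the normalised PIDF × UIDF product, and an Eq. (5)-style inversion that picks the feasible parameter vector whose predicted angle and distance best match a desired target. Which side of the PIDF threshold counts as penalised is an assumption of this sketch (here, low values), and the variable names are placeholders.

```python
# Selection (SIDF) and inversion sketch. Assumed inputs: X_all is the
# discretised parameter space, pidf/uidf are IDF values over it, and
# angle_mean/dist_mean are the GPR posterior means of the two task models.
import numpy as np

def sample_next_trial(X_all, pidf, uidf, rng):
    sidf = pidf * uidf                       # product of the two IDFs
    probs = sidf / sidf.sum()                # normalise so we can sample from it
    return X_all[rng.choice(len(X_all), p=probs)]

def invert_task_model(X_all, angle_mean, dist_mean, theta_d, L_d,
                      pidf, pidf_min=0.5, delta=1.0):
    # Squared distance between predicted and desired (angle, distance),
    # with the angle term weighted by delta; penalised movements are skipped.
    cost = delta * (angle_mean - theta_d) ** 2 + (dist_mean - L_d) ** 2
    cost = np.where(pidf < pidf_min, np.inf, cost)
    return X_all[np.argmin(cost)]
```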
Simulation experiments and analysis In order to present and analyse in detail the performance of the proposed approach on a simple task, we introduce the MuJoCo ReacherOneShot environment and evaluate the approach in simulation. We introduce two variants of the environment, with a 2-DOF agent consisting of 2 equal links, and a 5-DOF agent consisting of 5 equal links in a kinematic chain structure, as depicted in Fig. 4. The environment consists of an agent acting in a wall-constrained area, with the initial configuration of the agent and the puck location above it fixed. This environment is a physical simulation where the contacts between each of the components are taken into account, as well as the floor friction. Contact between the puck and the agent makes the puck move in a certain direction, while the collision between the agent and the wall stops the simulation experiment and the trial is classified as failed. Fig. 4 The MuJoCo ReacherOneShot-2link (top row) and ReacherOneShot-5link (bottom row) environments used for simulation. The links are shown in different colours for clarity, the walls are black and the puck the agent needs to hit is blue. Column a shows the initial position, b a training trial (no targets given) and c the testing phase with the testing area and sample targets in green. The agent learns through trial and error how to hit the puck so it moves to a certain location. The hitting action is parameterised, with each parameter representing the displacement of a certain joint w.r.t. the initial position, and is executed in a fixed time interval. During the training phase, the agent has no defined goal to which it needs to optimise, but just performs active exploration of the parameter space to gather most informative samples for its task model. We have chosen this task as it is difficult to model the physics of the contacts, in order to estimate the angle at which the puck will move. Moreover, the walls on the side act as an environmental constraint to which the agent must adapt. Experiment with 2-link agent In order to properly visualise all the components of the proposed framework to be able to analyse them, we introduce a 2-link agent with only two parameters defining the action. The range of base-link joint (joint_0) is ±π/2, while for the inter-link joint (joint_1), the range is ±π radians. The action execution timeframe is kept constant, thus the speed depends on the displacements. To demonstrate the efficacy of the proposed informed search approach, we compare and analyse its trial selection process to random trial sampling. We show this on a crude discretisation where each joint parameter can take one of 10 equally spaced values within its limits, producing the parameter space of 100 elements. Figure 5 shows side-by-side the trial selection progress up to the 70th trial, for both the proposed informed approach and the random counterpart. In the beginning both approaches start similarly. Very quickly, the proposed informed approach appears to search in a more organised way, finding the 'useful region' of the parameter space that contains successful trials and focuses its further sampling there, instead of the regions which produce the failed trials. After 50 trials we can see that the distribution of the points is significantly different for the two approaches even in this simple low-dimensional example.
The number of sampled successful trials with the proposed informed approach is 16, as opposed to 11 obtained by the random approach, while the remaining ones are failed trials which do not contribute to the task models. This behaviour is provided by the PIDF, which penalises regions likely to lead to failed trials, and the sampling in the useful region is promoted by the UIDF, which seeks samples that will improve the task model. We note that the highest concentration of the failed trials from the informed approach is actually in the border zones, around the useful region and at the joint limits. These regions are in fact most ambiguous and thus most interesting to explore. After the 70th trial, the informed approach has already sampled all the points that lead to successful trials, and the next ones to be sampled are on the border of the useful region as they are most likely to produce further successful trials. Conversely, the random approach would need at least 6 more samples to cover the whole useful region. We further analyse the proposed approach by discussing the role of the hyperparameters and their influence on the performance. For this purpose, the two joint angle ranges are discretised with a resolution of 150 equally spaced values, which creates a much more complex parameter space of 22,500 combinations, over which the task models need to be defined. This finer discretisation makes the search more difficult and emphasises the benefits of the informed search approach. To analyse the task model performance as the learning progresses, after each trial we evaluate the current task models on the test set. This is only possible in simulation, because testing on a real robot is intricate and time consuming. We perform the evaluation on 140 test target positions, with 20 values of the angle ranging from −65° to 30° in 5° increments, and 7 values of the distance ranging from 5 to 35 distance units from the origin, as shown in Fig. 4 (a small sketch of this evaluation grid is given after this passage). The test error is defined as the Euclidean distance between the desired test target position and the final resting position of the object the agent hits when executing the motion provided by the model. For the test positions which are complicated to reach and for which the learned task model cannot find a proper inverse solution, we label their outcomes as failed, exclude them from the average error calculation and present them separately. Figure 6 features plots showing the influence of different hyperparameter values, where PIDF covariance coefficients (cov), kernel functions (kernel) and the kernel's lengthscale (σ_l²) parameter are compared. The top plots show the mean Euclidean error, the middle plots the error's standard deviation over the test set positions, and the lower ones show the number of failed trials. The PIDF covariance values tested are 2, 5, 10 and 20 and they correspond to the width of the Gaussians representing failed trials in the PIDF. Making the covariance smaller (wider Gaussian) leads to faster migration of the trial selection to the regions of the parameter space leading to successful trials. This hyperparameter does not affect the random approach, as the random approach does not take into account the PIDF. Regarding the kernel type and its lengthscale hyperparameter, this affects the task model for both the proposed informed approach and the random trial generation.
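A small sketch of this evaluation grid and error measure follows; the angle and distance ranges are taken from the text, while the conversion to Cartesian targets is an assumption made for illustration.

```python
# Build the 140-position test grid (20 angles x 7 distances) and define the
# Euclidean test error between a target and the puck's final resting position.
import numpy as np

angles = np.deg2rad(np.arange(-65, 35, 5))     # 20 test angles: -65 deg ... 30 deg
dists = np.linspace(5, 35, 7)                  # 7 test distances: 5 ... 35 units
targets = np.array([(L * np.cos(a), L * np.sin(a)) for a in angles for L in dists])
print(targets.shape)                           # (140, 2) test positions

def test_error(final_xy, target_xy):
    return float(np.linalg.norm(final_xy - target_xy))
```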
Regarding the lengthscale, smaller values imply that the influence of the training point on other points drops significantly as they are farther away from it, and thus the uncertainty about the model predictions for these other points rises more quickly than with larger lengthscales. The actual effect is that the UIDF produces much narrower cones around trial points for small lengthscale values, which promotes sampling other points in the vicinity of the successful trials. In order to define the best performing model, we need to take into account the metrics presented in the plots in Fig. 6. Thus, models which do not produce failed tests and also have the minimal Euclidean error are required. Based on these criteria, both informed models from Fig. 6b showed superior performance. However, for visualisation purposes, in Fig. 7 we show the learned task models and the final selection function of the best performing informed model together with its random counterpart. In Fig. 7 we can see how the SIDF was shaped by the penalisation function and the model uncertainty. The PIDF influenced the trial sampling to move away from the regions leading to failed trials, and to focus on the region where the informative samples are, similarly to the previous experiment shown in Fig. 5. Furthermore, we can again see that most of the failed trials are in fact at the border between the failed and successful trial regions, as well as at the joint limits, which are the areas that need to be explored thoroughly. Regarding the learned task models, we can see a clear distinction in the angle model that defines whether the puck will travel to the left (positive angle) or to the right (negative angle), and joint_0 influences this the most. The other joint, joint_1, mostly influences the intensity of the angle, i.e. how far the puck will move. This is possible because the joint space has a continuous nature, which implies that samples which are close in the parameter space produce similar performance. In the case of the learned angle model, it is easy to see the difference between what the informed and random approaches learned. While for the informed approach it is clear that positive values of the joint_0 parameter lead to positive angle values, within the random approach this relationship does not hold. Experiment with 5-link agent We further evaluate the performance of the proposed approach on a more complex task by using a 5-link agent as depicted in Fig. 4. The parameter space is 5-dimensional, discretised with 7 values per parameter dimension. The action execution speed, base-link and inter-link joint ranges are as described in the previous section. Even though the discretisation is crude, as mentioned in Sect. 3, we show the task is learned efficiently and shows good empirical performance. We evaluate the performance of the proposed informed search w.r.t. a random sampling approach. We also add an ablative comparison with the case where the PIDF is not included in the exploration component, but just the UIDF. The UIDF uses the GPR model's variance, which can be considered proportional to the entropy measure, as the entropy of a Gaussian is calculated as (1/2) ln(2πeσ²), where the log preserves the monotonicity and the variance is always positive. In addition to this, we evaluate the performance of a modified version of the state-of-the-art BO method presented in Englert and Toussaint (2016).
Our problem formulation does not provide the objective function evaluations needed in BO, because the movement parameters are not model parameters which influence the final model performance. Instead of the model performance, we provide the decrease in model uncertainty as a measure of performance which is dependent on the movement parameter selection. This setting is then in line with our problem formulation and represents a good method for comparison. In Fig. 8a we show the mean (solid line) and standard deviation (shaded area) of the test performance error as well as the number of failed test trials, based on several top performing hyperparameters. Below, in Fig. 8b, we show the heatmaps with errors for each test target position, at trials 30, 50, 100 and 300. Fig. 8 Performance of various hyperparameters on the test set after each trial, for 300 trials. In (a) the plots show the mean and the standard deviation of the test Euclidean error, averaged over the 5 best performing models of the (orange) informed approach (cov = 5, RQ kernel with σ_l² = 0.01; cov = 10, RQ kernel with σ_l² = 0.001; cov = 10, SE kernel with σ_l² = 0.001; cov = 20, RQ kernel with σ_l² = 0.001; cov = 20, SE kernel with σ_l² = 0.01), (green) random approach runs over 5 different seeds, top 3 performing (red) UIDF-only exploration approaches (all using the RQ kernel with σ_l² = 0.01, σ_l² = 0.001 and σ_l² = 0.001) and (blue) the modified BO approach from Englert and Toussaint (2016) (showing all combinations of RQ and SE kernels with σ_l² values: 0.01, 0.001, 0.0001). The heatmaps in (b) show actual test errors for each of the 140 test positions at trials 30, 50, 100 and 300, using the best performing instance of each of the models. The legend colormaps show the average values for each approach. Fig. 9 An example of a successful trial executed during the training phase. The blue arrow points to the puck. The first significant observation, which was not obvious in the 2-link example, is that the random approach needs to sample almost 40 trials before obtaining a partially useful task model, while the informed approach needs less than 5 trials. It is important to emphasise that the parameter space contains 7^5 = 16,807 elements, which could cause inferior performance of the random approach. Secondly, we can see from the graph that even after 300 trials the informed approach demonstrates a declining trend in the test error mean and standard deviation, while the random approach stagnates. The uncertainty-only-based exploration approach finds a simple well-performing task model after only a few trials, even slightly outperforming the informed approach. However, this approach is unstable and very sensitive to the hyperparameter choice. This can be explained by the UIDF being hardware-agnostic and not taking into account failed trials, but purely exploring the parameter space. The modified BO approach (Englert and Toussaint 2016), as expected, shows good and consistent performance. Also, it can be seen that it is not sensitive to hyperparameter change, as the variance in performance for different settings is low. Unlike with our proposed approach, at the end of the learning phase when testing the models, there are still some test target positions which are not reachable by this model. By adding this new experiment, we compared against a method that enforces the feasibility of the parameters as a constraint in the cost function.
As opposed to having an explicit constraint selecting only successful trial parameters, our proposed approach implements a soft (probabilistic) constraint through the PIDF, which still allows occasional sampling of failed trials. This allows us to obtain datapoints at the borders of the feasible regions, which are useful for the task model.

Robot experiment and analysis
To evaluate our proposed approach on a physical system, we consider the problem of autonomously learning the ice hockey puck-passing task with a bimanual robot. We use the robot DE NIRO (Design Engineering Natural Interaction RObot), shown in Fig. 1. It is a research platform for bimanual manipulation and mobile wheeled navigation, developed in our lab. The upper part is a modified Baxter robot from Rethink Robotics, which is mounted on a mobile base via an extensible scissor-lift, allowing it to change its total height from 170 to 205 cm. The sensorised head includes a Microsoft Kinect 2.0 RGB-D camera with controllable pitch, used for object detection. DE NIRO learns to hit an ice hockey puck with a standard ice hockey stick on a hardwood floor and pass it to a desired target position. We use a right-handed stick which is 165 cm long and consists of two parts: the hollow metal stick shaft and the curved wooden blade fitted at the lower end. The standard (blue) puck weighs approximately 170 g. To enable the robot to use this stick, we have equipped its end-effectors with custom passive joints for attaching the stick. A universal joint is mounted on the left hand, while a spherical joint is installed on the right (refer to Fig. 1). This configuration inhibits the undesired idle roll rotation around the longitudinal stick axis, while allowing good blade-orientation control. The connection points on the stick are fixed, restricting the hands from sliding along it. This imposes kinematic constraints on the movement such that the relative displacement of the two hands along either axis cannot be greater than the distance between the fixture points along the stick. Due to the right-handed design of the ice hockey stick, the initial position of the puck is shifted to the right side of the robot and placed approximately 20 cm in front of the blade. We monitor the movement's effect on the puck using the head-mounted Kinect camera pointing downwards at a 45° angle. A simple object-tracking algorithm is applied to the rectified RGB camera image in order to extract the position of the puck and the target. For calculating the polar coordinates of the puck, the mapping from pixel coordinates to floor coordinates w.r.t. the robot is done by applying the perspective transformation obtained via homography. All elements are interconnected using ROS (Quigley et al. 2009).

Experiment description
The puck-passing motion that the robot performs consists of a swing movement, making contact with the puck and transferring the necessary impulse to move the puck to a certain location (as shown in Fig. 9). The robot learns this through trial and error without any target positions provided during the training phase, just by exploring different swing movements in an informed way and recording their outcomes. The trajectory is generated by passing the chosen parameters (displacements) that define the goal position to the built-in position controller implemented in the Baxter robot's API. During the training phase, the generated swing movement may or may not be feasible for the robot to execute.
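The pixel-to-floor mapping described above can be sketched with OpenCV. The calibration correspondences below are hypothetical placeholders (in practice they would come from measured floor points), and the helper name pixel_to_polar is ours; the sketch only illustrates the homography-based perspective transform and the conversion to the polar coordinates θ and L used for the puck position.

```python
import numpy as np
import cv2

# Hypothetical calibration: four floor points with known robot-frame
# coordinates (metres) and their pixel locations in the rectified image.
floor_pts = np.float32([[0.5, -0.5], [0.5, 0.5], [2.5, 0.5], [2.5, -0.5]])
pixel_pts = np.float32([[120, 400], [520, 400], [600, 80], [40, 80]])

H, _ = cv2.findHomography(pixel_pts, floor_pts)

def pixel_to_polar(u, v):
    """Map a tracked pixel (u, v) to polar floor coordinates w.r.t. the robot:
    distance L in metres and angle theta in degrees."""
    p = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)[0, 0]
    x, y = float(p[0]), float(p[1])
    return float(np.hypot(x, y)), float(np.degrees(np.arctan2(y, x)))

L, theta = pixel_to_polar(300, 240)
```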
If feasible, the generated swing movement can potentially hit the puck, which then slides from its fixed initial position to some final position encoded via the polar coordinates θ and L, as shown in Fig. 1. Such a trial is considered successful and contributes to the task models. If the swing misses the puck, the trial is considered failed. Other cases in which a trial is considered failed are defined in Sect. 4.2. During the testing phase, the robot is presented with target positions that the puck needs to reach, in order to evaluate the task model performance. The target is visually perceived as a green circle which is placed on the floor by the user (Fig. 1). Having received the target coordinates (θ_d and L_d), the robot needs to apply a proper swing action (x) that passes the puck to the target. Each trial consists of a potential swing movement which is encoded using a set of movement parameters. We propose a set of 6 movement parameters which are empirically chosen and sufficient to describe a swing. The movement parameters represent the amount of relative displacement with respect to the initial arm configurations. The displacements considered are along the x and y axes of the robot coordinate frame (task space) for the left (l_x, l_y) and right (r_x, r_y) hands, the joint rotation angle of the left wrist (w), and the overall speed coefficient (s) which defines how fast the entire swing movement is executed. The rest of the joints are not controlled directly. In this way the swing movement is parameterised and can be executed as a one-shot action. In the proposed setup, the parameters take discrete values from a predefined fixed set, equally spaced within the robot's workspace limits. The initial configuration of the robot arms and the ranges of the movement parameter values are assigned empirically. Even though the approach could be extended to autonomously detect the limits for the parameters, this is done manually in order to reduce the possibility of damaging the robot while exploring the edges of the parameter space. This implicitly reduces the number of learning trials, especially the failed ones. However, this parameter definition does not lead to any loss of generality of the framework and preserves the difficulty of the challenge. Although the robot's kinematic model is implicitly used for the movement execution, via the inverse kinematics in the position controller, this information is not used within our framework. The discretisation resolution of the parameter values inside the range is due to the numerical approach to obtaining the task models, whose domain is the whole parameter space.

The training phase consisted of 100 trials, of which 24 were successful and contributed to the task models. The remaining, failed trials did not contribute to the task models explicitly, but rather implicitly, through the exploration component. The stopping criterion is met when the model's average uncertainty drops below 10% and the last 5 updates do not lead to more than 0.5% improvement each. Further trials would yield little additional uncertainty reduction, as the remaining uncertainty is dominated by the inherent task uncertainty, which is hard to quantify. This task uncertainty is affected by the system's hardware repeatability and by noise in the trial outcome, amongst other factors. Figure 10 shows the uncertainty decrease over the sampling progress, and this can be interpreted as a learning curve showing how our task model decreases its uncertainty about its predictions. The overall training time, including resetting, is approximately 45 min.
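The stopping rule above can be expressed as a small check on the history of the model's average uncertainty. The sketch below is a plain reading of the text; whether the 0.5% improvement is meant in absolute or relative terms is an assumption (absolute here), and the function name and example numbers are ours.

```python
def should_stop(avg_uncertainty_history, threshold=0.10,
                window=5, min_improvement=0.005):
    """Stop once the average model uncertainty is below `threshold` and none
    of the last `window` updates improved it by more than `min_improvement`."""
    if len(avg_uncertainty_history) < window + 1:
        return False
    if avg_uncertainty_history[-1] >= threshold:
        return False
    recent = avg_uncertainty_history[-(window + 1):]
    improvements = [recent[i] - recent[i + 1] for i in range(window)]
    return all(imp <= min_improvement for imp in improvements)

# Example: uncertainty has flattened out below 10%, so training stops.
history = [0.60, 0.35, 0.20, 0.12, 0.095, 0.093, 0.092, 0.091, 0.090, 0.089]
print(should_stop(history))  # True
```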
Figures 11a and 11b show the angle and distance models learned from the data samples of the 24 successful trials. For visualisation purposes we slice the model and display it along two of the six dimensions. We visualise r_x and w, while the remaining parameters are fixed at the values l_x = −0.3, l_y = 0.1, r_y = 0.35 and s = 1.0, which is equivalent to a backward motion of the left hand and a full-speed swing. The angle model in Fig. 11a shows how the wrist rotation angle greatly affects the final angle of the puck for this particular swing configuration. This is in line with how professional hockey players manipulate the puck by rotating their wrist. The right hand displacement along the robot's x-axis does not contribute as much. The distance model in Fig. 11b shows more complex dependencies, where the right hand displacement has a high positive correlation with the final puck distance for positive wrist angles. As the wrist angle value decreases, so does the influence of r_x. The range of motions that the puck achieves after training is from 0° to 25° for the angle, and from 50 to 350 cm for the distance. As a side note, one option could be to prune all the failed trials in simulation and perform only the successful ones on the robot. However, this would require a precise kinematic model of the robot, including the hockey stick and the passive joints, which is not straightforward to model.

Fig. 11 GPR task models learned during training based on successful trial data; learned task model for the (a) angle and (b) distance

Experiment results and discussion
The essential interest here is to evaluate the main contributions: the informed search approach, and its application to efficient task transfer. The hypothesis is that the proposed approach needs significantly fewer trials to learn confident and generalisable task models, because the trials generated in this manner are the most explanatory for the model. To quantitatively assess the task model performance of our approach, we analyse the test execution accuracy, i.e. the ability to reach previously unseen targets. During testing, the robot is presented with a target position (green circle as in Fig. 12) and required to generate appropriate movement parameters for a swing action that will pass the puck to the target. We evaluate the accuracy using 28 different target positions, placed in the mechanically feasible range with 4 increments of the angle {0, 10, 15, 20} (in degrees) and 7 of the distance {100, 120, 150, 175, 200, 250, 300} (in cm). These coordinates have not necessarily been reached during training. For specific target coordinates, the model is inverted to give an appropriate and unique movement parameter vector, as described in Sect. 4.3. The final repeatability is the one achievable by the robot hardware (±5 cm) and is consistent. Firstly, we compare the results of our approach to those of a model learned from randomly generated trials. We generated 100 random points in the movement parameter space, which were evaluated on the robot and used to create the GPR task models. We produced 5 such random models with different initial seeds, verified their performance on the test target set and averaged the results (see Table 1).

Fig. 12 View from the Kinect camera during the testing phase. The error e is the measured Euclidean distance between the puck and the target position, during the (a) best and (b) worst hit cases
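As a schematic of how the learned forward models are used at test time, the sketch below fits GPR models for angle and distance on placeholder data for 24 "successful trials" and then "inverts" them by exhaustively scoring a discretised parameter grid against a target (θ_d, L_d). The data, the grid resolution and the error weighting are illustrative assumptions; the actual inversion is the one described in Sect. 4.3.

```python
import numpy as np
from itertools import product
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RationalQuadratic

rng = np.random.default_rng(0)
X_succ = rng.uniform(-1, 1, size=(24, 6))     # 24 successful trials, 6 parameters
angles = 25.0 * X_succ[:, 4]                  # stand-in outcomes (degrees)
dists = 200.0 + 100.0 * X_succ[:, 2]          # stand-in outcomes (cm)

gp_angle = GaussianProcessRegressor(kernel=RationalQuadratic()).fit(X_succ, angles)
gp_dist = GaussianProcessRegressor(kernel=RationalQuadratic()).fit(X_succ, dists)

# Discretised parameter grid (coarse here to keep the example small)
values = np.linspace(-1, 1, 5)
grid = np.array(list(product(values, repeat=6)))

def invert(theta_d, L_d):
    """Return the parameter vector whose predicted (angle, distance) is closest
    to the target; distance errors are rescaled so both terms are comparable."""
    err = (gp_angle.predict(grid) - theta_d) ** 2 \
        + ((gp_dist.predict(grid) - L_d) / 10.0) ** 2
    return grid[np.argmin(err)]

x_star = invert(theta_d=15.0, L_d=175.0)
```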
As shown, our model is on average twice as accurate and, more importantly, almost three times more confident (based on the standard deviation) than the models produced by random search. This demonstrates that the informed search selects training points which provide the model with better generalisation capabilities. We did not consider the grid search approach, as it is not feasible to evaluate all 18,750 movement parameter combinations. Regarding the performance reported in related work, in Daniel et al. (2013) the puck is sent to a target zone 40 cm in width, while in Chebotar et al. (2017) there are only three fixed 25 cm-wide goals within which an execution is deemed successful. From the results, our method on average achieves better accuracy over 28 previously unseen target positions. Secondly, we compare our approach to human-level performance. We asked 10 volunteers who had no previous ice hockey experience and 4 members of the college ice hockey club to participate, under the same settings as the robotic counterpart. The volunteers were placed at the same fixed position as the robot to maintain an equal distance from the test targets, and the puck had the same starting position. No additional guidance was given regarding the stance, but they were shown in which regions of the stick they should place their grip in order to be comparable with the robot. The volunteers had 24 practice shots to get accustomed to the stick, puck and surface. Afterwards, they were presented with the same test target positions, and their averaged results are presented in Table 1. We have to emphasise that such a comparison is not straightforward to analyse: this task is difficult for a human as it requires repeatability in arm control and hand-eye coordination; although the inexperienced subjects had not practiced hockey previously, through their lifetime they have developed a good general notion of physical rules and limb control. The inexperienced volunteers achieve slightly worse accuracy, yet the variance among the subjects is high, which could be attributed to their varying skill sets being more or less akin to ice hockey. Experienced volunteers performed better than the robot, and this can be explained by their domain knowledge. Even with a small sample size, the within-group variance is low. By observing the heatmaps of these tests (Fig. 13) we can see the performance on each of the 28 test target positions individually, averaged over all the candidates. It is noticeable that the human volunteers are more confident with targets that are closer, and to some extent the random approach is as well. For the informed approach no such obvious pattern has emerged. From qualitative observations we deduce that the inexperienced volunteers also need less time to acquire the basic skill level necessary to perform this task efficiently. This includes adjusting their grip and swing technique after a couple of trials, so that it resembles that of the experienced volunteers. We also note that several inexperienced volunteers who showed good performance discovered that sliding the puck along the ground with the blade improves accuracy. This technique was employed by all experienced volunteers and was also learned by the robot.

Task transfer
We demonstrate the task transfer aspect of the proposed framework by re-learning the task model for different environments. In this experiment we consider a task new if it has a significantly different environment model, such as the object shape or weight, or the floor friction.
The main idea is that the trials generated are intrinsic to the robot hardware and independent of the environment. Consequently, if the robot is placed on a different surface and given a ball instead of the puck, it would still generate the same trial movements. However, if the stick or other parts of the robot's kinematic chain change significantly, this might no longer hold. In that case, the training phase would have to be done from scratch, as different kinematics generate different failed trial cases which need to be accounted for. Thus, if the kinematics are the same, we just need to replicate the successful trials and gather datapoints for the new environment-specific task model. Therefore the robot can adapt and perform the task in a new environment by executing only the set of 24 movement parameter vectors that generated successful trials in the "original" training session (standard puck on hardwood floor), not all 100 trials. The successful trials are independent of the environment and provide samples for the GPR task models. After executing the 24 trials and obtaining the trial outcomes, the actual model update is done in batch with the 24 datapoints, so the learning happens instantaneously. The new environments we consider are a marble floor, which has a higher friction coefficient than the hardwood floor, and a wooden puck (red puck) which is lighter than the standard puck (approx. 80 g). The experimental setup for the task transfer is presented in Fig. 14. Successful trials are executed by the robot on the new surface, using both pucks. Two new task models are learned, evaluated on the test target set, and the results are shown in Table 1. As a benchmark, we show the results of directly transferring the model learned in the "original" environment. The decrease in accuracy can be explained by the higher friction and the resulting decreased sensitivity, where changes in the movement parameters have a lower impact on the puck position. Therefore, not all test positions could be achieved. However, we see that the blue puck from the "original" setup performs worse on the new floor than the lighter (red) puck, which can be explained by the fact that a lighter puck on a higher-friction (marble) floor behaves similarly to a heavier puck on a lower-friction (hardwood) floor. Even though completely new task models are learned after only 24 trials, the average accuracy is still in line with the literature examples and outperforms the random case by more than 20 cm on average.

Conclusion and future directions
We have presented a probabilistic framework for learning the robot's task and exploration models based solely on its sensory data, by means of informed search in the movement parameter space. The presented approach is validated in simulation and on a physical robot performing bimanual manipulation of an ice hockey stick in order to pass the puck to target positions. We compared our informed trial generation approach with random trial generation, as well as with two further approaches in simulation, and showed superior performance of our proposed approach. In the robotic experiment, the robot learns the task from scratch in approximately 45 min, with an accuracy comparable to human-level performance and superior to similar experiments in the literature. Additionally, through our framework we demonstrated that the agent is capable of re-learning the task models in new environments in significantly less time.
Future directions of this research include exploring the applicability of the approach to sequential tasks through informed search in the policy or DMP parameter space, with particular emphasis on adapting the approach to continuous movement parameter spaces.
14,915.6
2019-02-21T00:00:00.000
[ "Engineering", "Computer Science" ]
N-Acetylneuraminic Acid Supplementation Prevents High Fat Diet-Induced Insulin Resistance in Rats through Transcriptional and Nontranscriptional Mechanisms N-Acetylneuraminic acid (Neu5Ac) is a biomarker of cardiometabolic diseases. In the present study, we tested the hypothesis that dietary Neu5Ac may improve cardiometabolic indices. A high fat diet (HFD) + Neu5Ac (50 or 400 mg/kg BW/day) was fed to rats and compared with HFD + simvastatin (10 mg/kg BW/day) or HFD alone for 12 weeks. Weights and serum biochemicals (lipid profile, oral glucose tolerance test, leptin, adiponectin, and insulin) were measured, and mRNA levels of insulin signaling genes were determined. The results indicated that low and high doses of sialic acid (SA) improved metabolic indices, although only the oral glucose tolerance test, serum triglycerides, leptin, and adiponectin were significantly better than those in the HFD and HFD + simvastatin groups (P < 0.05). Furthermore, the results showed that only high-dose SA significantly affected the transcription of hepatic and adipose tissue insulin signaling genes. The data suggested that SA prevented HFD-induced insulin resistance in rats after 12 weeks of administration through nontranscriptionally mediated biochemical changes that may have differentially sialylated glycoprotein structures at a low dose. At higher doses, SA induced transcriptional regulation of insulin signaling genes. These effects suggest that low and high doses of SA may produce similar metabolic outcomes in relation to insulin sensitivity through multiple mechanisms. These findings are worth studying further. Introduction Sialic acids (SAs) are N-acetylated derivatives of neuraminic acid that occur naturally in glycoproteins and gangliosides. SA is a biomarker of cardiometabolic diseases, where it is thought to be a consequence of long-term inflammation [1]. This claim is supported by the hypothesis that elevated SA levels in cardiovascular diseases may facilitate the resialylation of vascular endothelium in an attempt to reverse atherosclerosis [2]. Additionally, results of dietary supplementation with SA have not been consistent; some reports show that it promotes inflammation, hepatocellular cancer, and hemolytic-uremic syndrome [3,4], while others have shown that it may be useful for brain development and for certain age-related disorders that cause reduced salivation [5,6]. What can be gleaned from the reports thus far is that the nonhuman SA, N-glycolylneuraminic acid (Neu5Gc), not N-acetylneuraminic acid (Neu5Ac), is responsible for the deleterious effects of SA [4,5]. There are numerous pharmacological and alternative therapies for cardiovascular diseases, but most of them have proven ineffective in curbing the rising burden of the diseases. This is driving the search for alternatives to currently available therapies [7]. Moreover, the rising burden of obesity and other risk factors for cardiometabolic diseases continually increase the prevalence of these diseases. If cardiometabolic diseases are to be effectively managed, there is a need for alternatives to the currently available therapies that control the risk factors and prevent the development of these diseases. To evaluate a potential alternative for the prevention of cardiometabolic diseases, we studied the effects of dietary supplementation with SA on the development of insulin resistance, which is a common denominator for cardiometabolic diseases [8]. 
Simvastatin is typically used to lower lipids, and because the animal model in the present study was given a diet rich in fats, it was used as a control. Additionally, we decided to use Neu5Ac since, unlike Neu5Gc, it is the form that is widely reported to be beneficial. Because SA is a marker of cardiometabolic diseases, we hypothesized that dietary Neu5Ac could have far-reaching effects on cardiometabolic indices in view of the widespread incorporation of SA into multiple tissues in the body. Rats (10 weeks old, 230-280 g, n = 30) were housed in individual cages at 25 ± 2°C, with a 12/12 h light/dark cycle, and allowed to acclimatize for 2 weeks with free access to normal pellet and water. The rats were then assigned to one of five groups (n = 6, Table 1): a normal group fed standard rat pellet (335 kcal/100 g), a high fat diet (HFD) group fed the HFD alone (448 kcal/100 g), a simvastatin group fed the HFD + oral gavage of 10 mg/kg BW simvastatin/day (HFD + SIM), and SA groups that received the HFD + daily oral gavage of 50 or 400 mg/kg BW SA (HFD + SAL and HFD + SAH, respectively). Simvastatin was chosen because it is the standard drug used to manage hyperlipidemia and associated metabolic perturbations [9], similar to what the HFD induces in rats. Diets were formulated in-house except for the normal pellet and were given to the rats for 12 weeks, after which they were euthanized and their blood and organs (liver and visceral adipose tissues) were collected for further studies. Tissue samples were immediately washed with normal saline and preserved in chilled RCL-2 solution, which was then transferred to −80°C until RNA was extracted. During the intervention, food intake was calculated daily by subtracting the leftover food from what was added the previous day, and body weight was recorded. Serum Adiponectin, Leptin, and Insulin. Serum from blood collected in plain tubes was used for measurements of adiponectin, leptin, and insulin using the respective ELISA kits according to the manufacturers' instructions. Absorbances were read on a BioTek Synergy H1 Hybrid Reader (BioTek Instruments, Inc., Winooski, VT, USA) at the appropriate wavelengths (450 nm for insulin and leptin, and 450 nm and 590 nm for adiponectin). The results were analyzed on http://www.myassays.com/ using four-parameter curve fits: adiponectin (R² = 0.9914), insulin (R² = 1), and leptin (R² = 0.9996). Biochemical Analyses. Lipid profile analyses were performed using serum from blood collected at the end of the study by cardiac puncture after an overnight fast. Samples were analyzed using Randox analytical kits according to the manufacturer's instructions with a Selectra XL instrument (Vita Scientific, Dieren, Netherlands). After an overnight fast, the oral glucose tolerance test (OGTT) was performed by oral gavage of 2 g/kg BW D-glucose to every rat, and blood glucose levels were measured at 0, 30, 60, and 120 minutes via tail vein puncture, using a glucometer (Roche Diagnostics, Indianapolis, IN, USA). The area under the curve for glucose in the OGTT was calculated as reported previously [10]. Then, the homeostatic model assessment of insulin resistance (HOMA-IR), a measure of insulin sensitivity, was computed from the fasting plasma glucose and insulin levels using the formula HOMA-IR = (fasting glucose (mg/dL) × fasting insulin (µU/mL))/2430 [11].
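For concreteness, the two derived quantities in this section can be computed as below. The trapezoidal AUC and the sampling times follow the OGTT protocol described above; the rat-adapted HOMA-IR constant of 2430 is taken from the text (ref. [11]), with the product form of the standard HOMA formulation assumed. The numbers in the usage example are hypothetical.

```python
import numpy as np

def ogtt_auc(glucose_mg_dl, times_min=(0, 30, 60, 120)):
    """Area under the glucose curve during the OGTT (trapezoidal rule)."""
    g = np.asarray(glucose_mg_dl, dtype=float)
    t = np.asarray(times_min, dtype=float)
    return float(np.sum((g[1:] + g[:-1]) / 2.0 * np.diff(t)))

def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uU_ml, constant=2430.0):
    """HOMA-IR with the rat-adapted constant used in the text."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / constant

# Hypothetical rat: glucose readings at 0, 30, 60 and 120 min of the OGTT
print(ogtt_auc([90, 160, 140, 100]))  # mg/dL * min
print(homa_ir(95.0, 12.0))            # dimensionless index
```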
Gene Expression Study. Primers for the gene expression study were designed using the Rattus norvegicus gene sequences from the National Center for Biotechnology Information website (http://www.ncbi.nlm.nih.gov/nucleotide/) and tagged with an 18-nucleotide universal forward and a 19-nucleotide universal reverse sequence, respectively. Primers (Table 2) were supplied by Integrated DNA Technologies (Singapore) and reconstituted in RNase-free water. Extracted RNA (20 ng) from liver and adipose tissues was used for reverse transcription and PCR according to the GenomeLab GeXP Start Kit protocol (Beckman Coulter, USA) using the conditions shown in Table 2. PCR products (1 µL) were analyzed on a GeXP GenomeLab genetic analysis system (Beckman Coulter, Inc., Miami, FL, USA) after mixing with sample loading solution and DNA size standard 400, as recommended by the manufacturer. Results were analyzed with the Fragment Analysis module of the GeXP system software and normalized with the eXpress Profiler software. Fold changes were calculated by dividing the expression value for each treatment group by the expression value for the normal group. 2.6. Data Analysis. The means ± standard deviation (n = 6) of the groups were used for the analyses. One-way analysis of variance (ANOVA) with Tukey's post hoc test was performed using SPSS 17.0 software (SPSS, Inc., Chicago, IL, USA) to assess the significance of differences between means, with a cutoff of P < 0.05. Body Weight Changes. Food intake was similar for all groups throughout the intervention period (Table 3), but differences in weight gain were observed between the HFD and the other groups at the end of 12 weeks (Figure 1); the highest weight gain between the beginning and end of the study (50%) was in the HFD group, followed by the normal group (47%), while the HFD + SIM group had the lowest weight gain (40%). The HFD + SAH and HFD + SAL groups had weight gains of 42% and 46%, respectively. (Figure 1 caption: body weight by week over the intervention; groups are the same as in Table 1. No significant differences were observed between the groups' actual weights, but total weight gain was highest in the HFD and normal groups (50% and 47%, respectively), while the HFD + simvastatin group had the lowest (40% increase), followed by the HFD + SAH and HFD + SAL groups (42% and 46%, respectively).) The lipid profile, OGTT, and HOMA-IR results are presented in Table 3 and Figure 2. The results indicated that SA improved the lipid profile values and insulin sensitivity (HOMA-IR) of the rats, although only the triglycerides were significantly different (P < 0.05) in comparison with the HFD group (Table 3). Moreover, the OGTT results showed that the HFD + SAH and HFD + SAL groups, unlike the HFD and HFD + SIM groups, had better insulin sensitivity (Figure 2); the HFD + SAH and HFD + SAL groups had a better glycemic response on the OGTT and a lower area under the curve for glucose over the 120 min of the OGTT (P < 0.05). Table 3 shows the serum adiponectin and leptin results. From the table, elevated leptin and reduced adiponectin levels were observed in the HFD group. The leptin level was significantly lower in the HFD + SIM, HFD + SAH, and HFD + SAL groups compared to the HFD group (P < 0.05). The adiponectin level in the HFD + SIM group was also lower than that of the HFD group, while the HFD + SAH and HFD + SAL groups had higher levels, although these were not significantly different from the HFD group. (Figure 2 caption: groups are the same as in Table 1. * indicates a statistically significant difference (P < 0.05) in comparison with HFD; different letters on bars in (b) indicate a statistically significant difference (P < 0.05).)
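The fold-change normalization and the ANOVA-with-Tukey procedure described in the Data Analysis subsection can be sketched as follows. The expression values are hypothetical, and SciPy/statsmodels are used here as stand-ins for the SPSS analysis reported in the text.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical normalised expression values for one gene (n = 6 per group)
normal = np.array([1.00, 1.10, 0.90, 1.05, 0.95, 1.00])
hfd = np.array([0.60, 0.70, 0.65, 0.72, 0.58, 0.66])
hfd_sah = np.array([1.20, 1.15, 1.30, 1.10, 1.25, 1.22])

# Fold change: treatment-group expression divided by the normal-group expression
fold_change = {"HFD": hfd.mean() / normal.mean(),
               "HFD+SAH": hfd_sah.mean() / normal.mean()}

# One-way ANOVA followed by Tukey's post hoc test at P < 0.05
f_stat, p_value = f_oneway(normal, hfd, hfd_sah)
tukey = pairwise_tukeyhsd(endog=np.concatenate([normal, hfd, hfd_sah]),
                          groups=np.repeat(["normal", "HFD", "HFD+SAH"], 6),
                          alpha=0.05)
print(fold_change, f_stat, p_value)
print(tukey.summary())
```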
Hepatic and Adipose Tissue mRNA Levels of Insulin Signaling Genes. Figures 3 and 4 show the effects of SA on hepatic and adipose tissue mRNA levels of insulin signaling genes, respectively. The HFD + SAH group had upregulation of the glucokinase (Gck), potassium inwardly rectifying channel, subfamily J, member 11 (KCNJ11), mammalian target of rapamycin (mTOR), phosphoinositide-3-kinase (Pi3k), and protein kinase C zeta (Prkcz) genes in both liver and adipose tissues. In addition, it had downregulation of the inhibitor of kappa light polypeptide gene enhancer in B-cells, kinase beta (Ikbkb) and mitogen-activated protein kinase 1 (Mapk1) in the liver, while it upregulated pyruvate kinase (Pk) in the adipose tissue. Other genes involved in the insulin signaling pathway were not changed by SA (Figures 3 and 4).

Discussion
As can be recalled, the HFD + SIM, HFD + SAH, and HFD + SAL groups had lower weight gains in comparison with the HFD group. Simvastatin has weight-reducing properties [12], hence the lower weight gain observed. SA, on the other hand, has not been reported to reduce weight, and the observations from this study suggest that it may regulate weight gain. Therefore, we hypothesized that this effect may have been due to differential sialylation or resialylation of glycoproteins or glycolipids, with implications for weight gain. Moreover, zinc-α2-glycoprotein (ZAG), a glycoprotein, is an established marker of fat catabolism [13], and dietary SA may have affected its sialylation or that of similar, as yet unknown glycoprotein adipokines, with resultant changes in their functions due to changes in sialylation status. This is an area worth evaluating further, especially since the metabolism of such glycoproteins may itself be influenced by the degree of sialylation [14,15]. Additionally, the changes in lipid profiles and insulin sensitivity markers in the HFD + SAH and HFD + SAL groups may also have been due to differential sialylation of glycoproteins like ZAG, with resultant modulation of various metabolic pathways. The lower weight gains in the HFD + SAH and HFD + SAL groups may also have contributed to the metabolic outcomes observed in these groups, since lower weights have been associated with lower lipid levels and improved insulin sensitivity [8]. Elevated leptin and reduced adiponectin levels are associated with cardiometabolic diseases [16], and in the present study, a similar pattern was observed for the HFD group (Table 3). Conversely, SA reduced the leptin level significantly below that of the HFD group and increased the adiponectin level, albeit not significantly. Simvastatin, however, reduced both the leptin and adiponectin levels. The effects of SA on adiponectin and leptin indicated that it could improve cardiometabolic indices, since these markers are predictors of metabolic diseases [16]. The gene expression data from this study showed that the HFD + SIM and HFD groups had transcriptional changes in insulin signaling genes that tended towards insulin resistance. Furthermore, the data showed that the HFD + SAH group attenuated the HFD-induced transcriptional changes, tending towards improved insulin signaling. Specifically, it upregulated the expression of a central mediator of the intracellular signal transduction of insulin sensing (Pi3k), whose transcriptional downregulation has been linked with obesity-induced insulin resistance [17,18].
Similarly, the upregulation of mTOR [19] and Prkcz [20] and the downregulation of Mapk1 [21] and Ikbkb [22], especially in the HFD + SAH group, suggested improved insulin sensitivity, which may have been mediated through increased dephosphorylation of the insulin receptor substrate (IRS), with a consequent increase in IRS-mediated insulin action via activation of Pi3k [17,23]. The data in this study support this hypothesis, since the expression of Pi3k was increased but not that of IRS. Also, in the present study, SA upregulated Gck (liver and adipose tissues) and Pk (adipose tissue), which are associated with enhanced glucose sensing and homeostasis and elevated cellular adenosine triphosphate (ATP) levels [24]. Elevated cellular ATP will consequently increase KCNJ11, which is reported to regulate ion channels involved in glucose sensing [25]. Taken together, the data showed that although simvastatin improved lipid profiles, it increased insulin resistance, as reported previously [26]. SA, on the other hand, improved lipid profiles and prevented HFD-induced insulin resistance. Serum insulin levels in the HFD + SIM, HFD + SAH, and HFD + SAL groups were similar, but insulin sensitivity was better in the HFD + SAH and HFD + SAL groups, suggesting that the effects of SA may have been mediated at the insulin signaling level. The effects of SA on mRNA levels of hepatic and adipose tissue insulin signaling genes confirmed our hypothesis. However, despite improved insulin sensitivity in both the HFD + SAH and HFD + SAL groups, only the HFD + SAH group showed upregulation of the insulin signaling genes, suggesting that the effects of SA may be both transcriptional (at higher doses) and nontranscriptional (at lower and higher doses). Earlier, we hypothesized that the improvements in weight and other metabolic indices attributable to SA could have been due to its effects on glycoprotein sialylation. Moreover, glycoprotein sialylation is reportedly influenced by the degree of metabolic flux [14,15], and dietary SA administration could have shifted the flux towards increased sialylation of glycoprotein structures that influence cardiometabolic indices. These effects induced by SA are likely to be transient, since reduced SA flux could rapidly reverse any changes produced. However, transcriptionally mediated changes at higher doses of SA may produce longer-lasting effects [26,27]. Therefore, from the findings in this study, we propose that SA may be able to prevent insulin resistance through transcriptional regulation of insulin signaling genes (Figure 5) and nontranscriptional mechanisms, depending on the concentration used.

Figure 4: Effects of SA on adipose tissue mRNA levels of (a) insulin receptor (Insr), insulin receptor substrate 2 (Irs2), and phosphoinositide-3-kinase (Pi3k); (b) glucokinase (Gck), potassium inwardly rectifying channel, subfamily J, member 11 (KCNJ11), and pyruvate kinase, liver isoform (L-Pk); and (c) mammalian target of rapamycin (mTOR), protein kinase C zeta (Prkcz), inhibitor of kappa light polypeptide gene enhancer in B-cells, kinase beta (Ikbkb), and mitogen-activated protein kinase 1 (Mapk1) in HFD-fed rats. Groups are the same as in Table 1. Different letters on bars representing each group indicate a statistically significant difference (P < 0.05).
Conclusions We demonstrated that SA prevents HFD-induced insulin resistance through transcriptional and nontranscriptional mechanisms. At lower doses, sialylation of glycoprotein targets may be responsible for the preventive effects of SA against insulin resistance. In addition, at higher doses, transcriptional regulation of insulin signaling genes may provide longer-lasting effects. These findings are worth evaluating further.
3,834.6
2015-11-25T00:00:00.000
[ "Biology", "Medicine" ]