Coupling physics and biogeochemistry thanks to high-resolution observations of the phytoplankton community structure in the northwestern Mediterranean Sea

Fine-scale physical structures and ocean dynamics strongly influence and regulate biogeochemical and ecological processes. These processes are particularly challenging to describe and understand because of their ephemeral nature. The OSCAHR (Observing Submesoscale Coupling At High Resolution) campaign was conducted in fall 2015, during which a fine-scale structure (1-10 km / 1-10 days) in the northwestern Mediterranean Ligurian subbasin was pre-identified using both satellite and numerical modeling data. Along the ship track, various variables were measured at the surface (temperature, salinity, chlorophyll a and nutrient concentrations), together with ADCP current velocity. We also deployed a new model of the CytoSense automated flow cytometer (AFCM), optimized for small and dim cells, for near real-time characterization of the phytoplankton community structure of surface waters with a spatial resolution of a few kilometers and an hourly temporal resolution. For the first time with this optimized version of the AFCM, we were able to fully resolve Prochlorococcus picocyanobacteria in addition to the easily distinguishable Synechococcus. The vertical physical dynamics and biogeochemical properties of the studied area were investigated by continuous high-resolution CTD profiles obtained with a moving vessel profiler (MVP) while the vessel was underway, associated with a high-resolution pumping system deployed at fixed stations, allowing sampling of the water column at a fine resolution (below 1 m). The observed fine-scale feature presented a cyclonic structure with a relatively cold core surrounded by warmer waters. Surface waters were totally depleted in nitrate and phosphate. In addition to the doming of the isopycnals by the cyclonic circulation, an intense wind event induced Ekman pumping. The upwelled subsurface cold nutrient-rich water fertilized surface waters and was marked by an increase in Chl a concentration. Prochlorococcus and pico- and nano-eukaryotes were more abundant in cold core waters, while Synechococcus dominated in warm boundary waters. Nanoeukaryotes were the main contributors (> 50 %) in terms of pigment content (red fluorescence) and biomass. Biological observations based on the mean cell red fluorescence recorded by AFCM, combined with the physical properties of surface waters, suggest a distinct origin for the two warm boundary waters. Finally, the application of a matrix growth population model based on high-frequency AFCM measurements in warm boundary surface waters provides estimates of in situ growth rate and apparent net primary production for Prochlorococcus (µ = 0.21 d^-1, NPP = 0.11 mg C m^-3 d^-1) and Synechococcus (µ = 0.72 d^-1, NPP = 2.68 mg C m^-3 d^-1), which corroborate their opposite surface distribution patterns. The innovative adaptive strategy applied during OSCAHR, combining several multidisciplinary and complementary approaches involving high-resolution in situ observations and sampling, remote sensing and model simulations, provided a deeper understanding of marine biogeochemical dynamics through the first trophic levels.

Introduction

Despite representing only 0.2 % of the global photosynthetically active carbon (C) biomass, phytoplankton accounts for about half of the global primary productivity on Earth (Falkowski et al., 1998; Field et al., 1998). It forms the basis of the marine food web and exerts a major control on global biogeochemical cycles. In the context of global change, mainly due to the rise in anthropogenic atmospheric CO2 (IPCC, 2013), marine phytoplankton plays a fundamental role in the global C cycle by photosynthetically fixing CO2 and exporting it into the ocean's interior through the biological pump (De La Rocha and Passow, 2007). Phytoplankton community structures are highly heterogeneous over the ocean in terms of assemblage, physiology and taxonomy (Barton et al., 2010; De Vargas et al., 2015). Phytoplankton cell volume spans more than 9 orders of magnitude (Marañón et al., 2015), from Prochlorococcus cyanobacteria (∼10^-1 µm^3) to the largest diatoms (> 10^8 µm^3). Phytoplankton diversity is primarily controlled by environmental factors such as temperature, nutrients, light availability, vertical stability and predation, which lead to a biogeography of the phytoplankton diversity landscape (Lévy et al., 2015). The heterogeneity and the fine-scale variability of phytoplankton abundance have been observed and described since the 1970s (Platt, 1972; Denman et al., 1976), but the community structure variability on this scale remained uncharted at that time. While on a basin scale the phytoplankton community structure is relatively well constrained, on smaller scales both modeling (Lévy et al., 2001; Clayton et al., 2013; Lévy et al., 2014; d'Ovidio et al., 2015) and observational (Claustre et al., 1994; d'Ovidio et al., 2010; Clayton et al., 2014; Martin et al., 2015; Cotti-Rausch et al., 2016) studies have revealed during the last decades that the phytoplankton community structure exhibits strong variability (Lévy et al., 2015).
The term "fine-scale" is generally used to refer to the ocean dynamics features occurring on scales smaller than about 100 km; consequently, the term includes (i) a fraction of the mesoscale processes (e.g., large coherent eddies), with scales close to the first internal Rossby radius, and (ii) the submesoscale processes, with scales smaller than the first internal Rossby radius (e.g., intense vortices, fronts and filaments).The physical dynamics on that scale strongly influence and regulate biogeochemical and ecological processes (McGillicuddy et al., 1998;Levy and Martin, 2013;McGillicuddy, 2016).This can have a significant impact on primary productivity (Oschlies and Garçon, 1998;Mahadevan, 2016) and thus on the biological C pump (Levy et al., 2013) and associated export (Siegel et al., 2016).Mesoscale eddies modify the vertical structure of the water column: cyclones and anti-cyclones, respectively, shoal and deepen isopycnals (McGillicuddy et al., 1998).Eddy pumping may have a significant biogeochemical impact in oligotrophic areas (Falkowski et al., 1991): shoaling isopycnals in the center of a mesoscale cyclonic eddy can stimulate phytoplank-ton productivity by lifting nutrients into the euphotic zone.Eddy stirring and trapping further influence biogeochemical and ecological processes (McGillicuddy, 2016, for a review).Submesoscale dynamics enhance the supply of nutrients in the euphotic zone in nutrient depleted areas and also influence the light exposure of phytoplankton by modifying the density gradient in the surface layer, which contribute significantly to phytoplankton production (Mahadevan, 2016) and community structure variability (Cotti-Rausch et al., 2016).The underlying biogeochemical submesoscale processes are particularly challenging to describe and understand because of their ephemeral nature.For the moment, submesoscale dynamics have been predominantly investigated through the analysis of numerical simulation.The lack of in situ observations at an appropriate spatio-temporal resolution makes the integration of these in situ data with the model simulations difficult, and it still remains unclear how these processes affect the global state of the ocean (Mahadevan, 2016). The efficient study of fine-scale structures and their associated physical-biological-biogeochemical mechanisms requires the use of a combination of several complementary approaches involving in situ observations and sampling, remote-sensing and model simulations (Pascual et al., 2017).High-resolution measurements are mandatory to assess the mechanisms controlling the fine-scale biophysical interactions.They are now available thanks to the recent progress in biogeochemical sensor developments, the combination of ship-based measurements and autonomous platforms, and innovative adaptive approaches.The OSCAHR project (Observing Submesoscale Coupling At High Resolution, PIs: A. M. Doglioli and G. Grégori) aims to study the influence of fine-scale physical dynamics on the biogeochemical processes, phytoplankton community structure and dynamics at high resolution.In the present study the terms "high resolution" and "fine-scale" aim to describe observations and mechanisms, respectively. 
During the OSCAHR cruise, novel platforms for coupled physical-biological-biogeochemical observation and sampling of the ocean surface layer at high spatial and temporal resolution were combined with real-time analyses of satellite ocean color imagery and altimetry. In this article, we first describe the hydrological structure and dynamics of the studied feature based on satellite data and continuous sea surface measurements. Then we address the corresponding phytoplankton community structure and distribution based on analyses performed autonomously at the single-cell level and at high spatio-temporal resolution. We also present the fine-scale vertical variability of the phytoplankton community structure at various stations within and outside the studied structure, resulting in a three-dimensional dataset for the investigation of the physical driving mechanisms acting on the phytoplankton community structure. Finally, thanks to the outstanding potential of single-cell analysis performed by automated high-resolution flow cytometry, we estimate in situ growth rates and address the apparent primary productivity of the two dominant phytoplankton groups (in terms of abundance), Prochlorococcus and Synechococcus.

OSCAHR outlines

The OSCAHR cruise was carried out between 29 October 2015 and 6 November 2015 in the western Ligurian subbasin onboard R/V Téthys II (Doglioli, 2015). A first leg sampled the coastal waters, and a second one was dedicated to offshore waters in a > 1000 m water column area. The present study focuses on the second leg, held from 3 November to 6 November (Fig. 1). The cruise strategy used an adaptive approach based on the near-real-time analysis of both satellite and numerical modeling data to identify dynamical features of interest and to follow their evolution. Several satellite datasets were exploited during the campaign to guide the cruise using the SPASSO software package (Software Package for an Adaptative Satellite-based Sampling for Ocean campaigns, http://www.mio.univ-amu.fr/SPASSO/), following the same approach as previous cruises such as LATEX (Doglioli et al., 2013; Petrenko et al., 2017) and KEOPS2 (d'Ovidio et al., 2015). SPASSO was also used after the cruise in order to extend the spatial and temporal scope of the in situ observations. We sampled a fine-scale dynamical structure characterized by a patch of cold surface water surrounded by warm waters. We recorded physical, biological and chemical data at high frequency (minute to hourly scale) with a combination of classical (thermosalinograph (TSG), discrete surface sampling) and innovative (automated high-frequency flow cytometry (AFCM), MVP) methods. Regular fixed-station measurements (classical conductivity, temperature, depth (CTD) profiles and sampling at high vertical resolution (at a meter scale)) were also performed at strategic sampling sites.

Satellite and model products

We used the altimetry-derived (i.e., geostrophic) velocities distributed by AVISO as a multi-satellite Mediterranean regional product (http://www.aviso.altimetry.fr) on a daily basis with a spatial resolution of 1/4°. Sea surface temperature (SST, levels 3 and 4, 1 km resolution) and Chl a concentrations (level 3, 1 km resolution, MODIS-Aqua and NPP-VIIRS sensors) were provided by CMEMS (Copernicus Marine Environment Monitoring Service, http://marine.copernicus.eu). Following d'Ovidio et al. (2015), Eulerian and Lagrangian diagnostics were performed on the altimetry-derived currents. The Chl a product is optimized to work in "case 1 waters" (Morel et al., 2006), i.e., open ocean conditions where the optical signal is dominated by phytoplankton. The WRF (Weather Research and Forecasting, Skamarock et al., 2008) atmospheric numerical model provided meteorological forecasts (wind speed and direction, irradiance). WRF has been implemented at the Observatory of Universe Sciences, Institut Pytheas (Marseille), as an operational model. Ekman pumping was calculated from the curl of the wind stress: w = curl(τ / (ρ f)), where w is an estimate of the vertical velocity (w > 0 referring to upward vertical velocity), τ is the wind stress, ρ is the density of the water, here taken as ρ = 1028 kg m^-3, and f is the Coriolis parameter, which varies with latitude and is ∼10^-4 rad s^-1 in the region of study.

Nutrients and Chl a analysis

Nutrient samples were collected in 20 cm^3 high-density polyethylene bottles, poisoned with HgCl2 to a final concentration of 20 mg dm^-3 and stored at 4 °C before being analyzed in the laboratory a few months later. Nutrient concentrations were determined using a Seal AA3 auto-analyzer following the method of Aminot and Kérouel (2007), with an analytical precision of 0.01 µmol dm^-3 and quantification limits of 0.02, 0.05 and 0.30 µmol dm^-3 for phosphate, nitrate (and nitrite) and silicate, respectively.

To determine Chl a concentrations, 500 ± 20 cm^3 of seawater were filtered through 25 mm pyrolyzed glass-fiber filters (Whatman® GF/F) and immediately frozen at -20 °C. Filters were placed in glass tubes containing 5 cm^3 of pure methanol and allowed to extract for 30 min, as described by Aminot and Kérouel (2007). The fluorescence of the extract was determined using a Turner AU10 fluorometer equipped with the Welschmeyer kit to avoid chlorophyll b interference (Welschmeyer, 1994). The fluorometer was zeroed with a methanol turbidity blank. The detection limit was 0.01 µg dm^-3. Calibration was performed using a pure Chl a standard (Sigma Aldrich®, ref: C5753, pure spinach chlorophyll).

Benchtop flow cytometry

Seawater samples collected from the Niskin bottles were prefiltered through a 100 µm mesh size net to prevent any clogging of the flow cytometer. Cryovials (5 cm^3) were filled with subsamples that were preserved with glutaraldehyde (0.2 % final concentration) for ultraphytoplankton analysis. Samples were then rapidly frozen and stored in liquid nitrogen until analysis at the PRECYM flow cytometry platform of the institute. In the laboratory, cryovials were rapidly thawed at room temperature and analyzed using the FACSCalibur flow cytometer (BD Biosciences®) of PRECYM. This flow cytometer is equipped with a blue (488 nm) air-cooled argon laser and a red (634 nm) diode laser. For each particle (cell) analyzed, five optical parameters were recorded: forward and right-angle light scatter, and green (515-545 nm), orange (564-606 nm) and red (653-669 nm) fluorescence wavelength ranges. Data were collected using the CellQuest software (BD Biosciences®). The analysis and identification of ultraphytoplankton groups were performed a posteriori. Various ultraphytoplankton groups were optically resolved without any staining on the basis of their light scatter and fluorescence properties (defined below in Sect. 3). Separation of picoeukaryotes and nanophytoplankton was performed by adding 2 µm yellow-green fluorescent cytometry microspheres (Fluoresbrite YG 2 µm, Polyscience Inc.) to the samples.
Trucount™ calibration beads (Becton Dickinson Biosciences) were also added to the samples as an internal standard, both to monitor the instrument stability and to determine the volume analyzed by the instrument, which is mandatory to compute cell abundances.

Underway surface measurements

The in situ current velocity was measured by a hull-mounted RDI Ocean Sentinel 75 kHz ADCP (acoustic Doppler current profiler). The configuration used during the whole cruise was 60 cells with 8 m bins and 1 min ensemble averaging. The depth range extended from 18.5 to 562.5 m. The onboard surface-water flow-through system pumped seawater at 2 m depth with a flow rate carefully maintained at 60 dm^3 min^-1. The TSG, a SeaBird SBE21, acquired sea surface temperature (SST) and salinity (SSS) data every minute during the whole cruise. A Turner Designs fluorometer (10-AU-005-CE) simultaneously recorded sea surface fluorescence. In order to validate the salinity measurements computed from conductimetry, discrete salinity samples were taken on a daily basis before, during and after the campaign. They were measured on a PortaSal salinometer at the SHOM (Service Hydrographique et Océanographique de la Marine) with a precision of 0.002. A 1:1 relationship between TSG and analyzed salinity was obtained (R^2 = 0.97, n = 31), with a mean difference of 0.000 and a SD of the residuals of 0.018. Surface water samples were collected every 20 min from the TSG water outflow for the determination of nitrate, nitrite, phosphate and silicate concentrations (Sect. 2.3); in total, 177 surface samples were obtained. Samples for Chl a (Sect. 2.3) were collected randomly during the day and the night, leading to a total of 41 samples from the flow-through system. The TSG fluorescence signal was converted to Chl a concentration values through a comparison with the Chl a analyses, which showed a significant correlation between fluorescence and Chl a with an R^2 of 0.50 (p value < 0.05). As the Chl a values obtained during OSCAHR were low (0.08 to 0.42 µg dm^-3, with a mean value of 0.15 µg dm^-3), and considering the effect of fluorescence quenching, such a correlation was quite reasonable.

The CytoSense, an automated flow cytometer (AFCM) designed by the CytoBuoy b.v. company (NL), analyzed samples isolated from the sea surface continuous flow-through system of the TSG every 20 min. The AFCM used in this study was specially designed to analyze the pulse shapes of a wide range of phytoplankton sizes (< 1-800 µm in width and several mm in length) and abundances (within the ∼0.5 to ∼4.5 cm^3 analyzed). The analyzed seawater was pumped with a calibrated (weighing method) peristaltic pump from a discrete intermediate container, which subsampled the continuous flow-through seawater into a 300 cm^3 volume to minimize the spatial extent covered during the AFCM analysis time. A sheath loop (NaCl solution (35 ‰) filtered at 0.2 µm) was used to separate, align and drive the particles to the light source, and was continuously recycled using a set of two 0.1 µm filters (Mintech® fiber Flo 0.1 µm), completed with an additional carbon filter (PALL® carbon filter) to reduce the background signal from the seawater and remove colloidal material. The sheath flow rate was 1.3 cm^3 s^-1. In the flow cell, each particle was intercepted by a laser beam (OBIS® laser, 488 nm, 150 mW) and the generated optical pulse shape signals were recorded. The light scattered at 90° (sideward scatter, SWS) and the fluorescence emissions were separated by a set of optical filters (SWS (488 nm), orange fluorescence (FLO, 552-652 nm) and red fluorescence (FLR, > 652 nm)) and collected on photomultiplier tubes. The forward scatter (FWS) signal was collected on two photodiodes to recover the left and right signals of the pulse shape. Each particle passes through the 5 µm wide laser beam at a speed of 2 m s^-1 with a data recording frequency of 4 MHz, generating optical pulse shapes used as a diagnostic tool to discriminate phytoplankton groups. Two distinct protocols were run sequentially every 20 min. The first one targeted autotrophic picophytoplankton, with the FLR trigger level fixed at 5 mV and a sample flow rate of 5.0 mm^3 s^-1 for 3 min, resulting in ∼0.5 cm^3 of analyzed sample. Two main groups, Prochlorococcus and Synechococcus, were optimally resolved and adequately counted using this first protocol. Synechococcus are easily detectable by flow cytometry due to the bright orange fluorescence emitted by phycoerythrin upon excitation by the blue 488 nm laser beam of the flow cytometer. Prochlorococcus, which are smaller than Synechococcus, are characterized by a very dim red fluorescence induced by Chl a. The second protocol, dedicated to the analysis of nano- and microphytoplankton, was triggered on FLR at 30 mV with a sample flow rate of 10 mm^3 s^-1 for 10 min, resulting in ∼4.5 cm^3 of analyzed sample. Using this configuration, more accurate abundances of these less abundant microorganisms were obtained, as the smallest and most abundant cells (Prochlorococcus for instance) were not considered. Phytoplankton groups were resolved using the CytoClus® software, which generates several two-dimensional cytograms of the information retrieved from the four pulse shape curves (FWS, SWS, FLO, FLR) obtained for every single cell, mainly the area under the curve and the maximum of the pulse shape signal. Group abundances (cells cm^-3), and the mean (a.u. cell^-1, a.u. standing for arbitrary units) and sum (product of the mean properties and the group abundance, a.u. cm^-3) of the optical pulse shapes, were processed with the software to assess their inherent dynamics. Up to 150 pictures of microphytoplankton were collected during the FLR 30 mV acquisition by an image-in-flow camera mounted on the flow cell. FWS signals of silica beads (0.4, 1.0, 1.49, 2.01, 2.56, 3.13, 4.54, 5.02 and 7.27 µm non-functionalized silica microspheres, Bangs Laboratories, Inc.) were used to convert light scatter to equivalent spherical diameter (ESD) and biovolume. A power law relationship (log(Size) = 0.309 · log(FWS) − 1.853) allowed the conversion of the FWS signal into cell size (n = 17, R^2 = 0.94). The stability of the optical unit and the flow rates were checked using Beckman Coulter Flowcheck™ fluorospheres (2 µm) before, during and after installation.
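To make the scatter-to-size-to-carbon chain concrete, the minimal sketch below applies the power law above and the spherical-cell assumption used in this study. The helper names are hypothetical, the base-10 logarithm is assumed, and the carbon coefficients a and b are illustrative placeholders standing in for the group-specific Menden-Deuer and Lessard (2000) factors of Table 2 (not reproduced here).

```python
import numpy as np

def fws_to_esd(fws):
    """Equivalent spherical diameter (µm) from forward scatter (a.u.),
    using the bead-calibrated power law of this study:
    log(Size) = 0.309 * log(FWS) - 1.853 (n = 17, R^2 = 0.94).
    Base-10 logarithms are assumed."""
    return 10.0 ** (0.309 * np.log10(fws) - 1.853)

def esd_to_biovolume(esd):
    """Biovolume (µm^3), treating cells as spheres as in the text."""
    return (np.pi / 6.0) * esd ** 3

def biovolume_to_carbon(v, a=0.26, b=0.86):
    """Carbon quota Q_C (pg C cell^-1) from Q_C = a * Biovolume^b.
    a and b are placeholders; the study uses group-specific factors
    from Menden-Deuer and Lessard (2000) (Table 2)."""
    return a * v ** b

# Example: an event with FWS ~ 7e5 a.u. maps to an ESD of ~0.9 µm,
# i.e., in the Synechococcus size range reported below.
esd = fws_to_esd(7e5)
qc = biovolume_to_carbon(esd_to_biovolume(esd))
```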
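Returning to the Ekman pumping estimate introduced with the satellite and model products, w = curl(τ / (ρ f)), the sketch below evaluates it by centered finite differences. The regular longitude-latitude grid, the function name and the sign convention (w positive upward) are assumptions; the stress fields would come from the WRF or scatterometer products mentioned above.

```python
import numpy as np

R_EARTH = 6.371e6    # Earth radius (m)
OMEGA = 7.2921e-5    # Earth rotation rate (rad s^-1)
RHO = 1028.0         # seawater density (kg m^-3), value used in the text

def ekman_pumping(taux, tauy, lon, lat):
    """Ekman vertical velocity w = curl(tau / (rho * f)) in m s^-1.

    taux, tauy: eastward/northward wind stress (N m^-2), shape (nlat, nlon).
    lon, lat: 1-D coordinates in degrees on a regular grid (away from
    the equator, so f does not vanish).
    """
    lat_r = np.deg2rad(lat)[:, None]
    f = 2.0 * OMEGA * np.sin(lat_r)          # Coriolis parameter (s^-1)
    mx = taux / (RHO * f)                    # components of tau / (rho f)
    my = tauy / (RHO * f)
    # Physical grid coordinates (m); zonal spacing shrinks with cos(lat)
    y = R_EARTH * np.deg2rad(lat)
    x = R_EARTH * np.cos(lat_r) * np.deg2rad(lon)[None, :]
    # curl_z = d(my)/dx - d(mx)/dy
    dmy_dx = np.gradient(my, axis=1) / np.gradient(x, axis=1)
    dmx_dy = np.gradient(mx, axis=0) / np.gradient(y)[:, None]
    return dmy_dx - dmx_dy
```

Multiplying the result by 86 400 converts it to m d^-1, the unit of the 3-4 m d^-1 peaks discussed with Fig. 10 below.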
Vertical sampling

A moving vessel profiler (MVP200, ODIM Brooke Ocean), equipped with a MSFFF I (Multi Sensor Free Fall Fish type I) containing an AML microCTD, was deployed. The MVP casts were run from the sea surface down to 300 m while the vessel was underway at a mean speed of 6 knots, with continuous acquisition of temperature and salinity. Along most of the campaign route, vertical profiles of temperature and salinity were obtained during the nearly vertical free fall with a temporal resolution of 8-10 min, corresponding to a spatial resolution of ∼1 km. Salinity and temperature data acquired near the surface (∼5 m) with the MVP were compared to the data acquired from the onboard TSG. MVP temperature and salinity values were significantly correlated with the continuous underway measurements, with a 1:1 relationship, R^2 of 0.99 and 0.84 and root mean square errors (RMSE) of the residuals of 0.07 °C and 0.02 for temperature and salinity, respectively.

A total of eight fixed stations were performed (Fig. 2) and used to collect biogeochemical information and to validate the deployment of the MVP. For each station, a CTD rosette cast down to 300 m recorded temperature, salinity and fluorescence profiles. At Station 11, the water column properties down to 1000 m were investigated with this CTD rosette instrument. The CTD rosette was equipped with a 12 Niskin bottle (12 dm^3) SBE32 Carousel water sampler and carried a CTD SBE911+ for temperature and salinity, a Chelsea Aquatracka III fluorimeter and a QCP-2350 (cosine collector) for PAR measurements. Samples for nutrients and for phytoplankton groups analyzed by benchtop flow cytometry (Sect. 2.4) were collected from the surface down to 1000 m.

For stations 5 to 11 (Fig. 2), an innovative system for high-resolution seawater sampling down to 35 m (PASTIS_HVR, Pumping Advanced System To Investigate Seawater with High Vertical Resolution) was deployed. Seawater samples were collected using a Teflon pump (AstiPure™ II High Purity Bellows Pump, flow rate = 30 dm^3 min^-1) connected to a polyethylene (PE) tube fixed to the frame at the level of the pressure sensor of a Seabird SBE19+ CTD and a WetLabs WETstar WS3S fluorimeter. The sampling depth was defined as the mean depth recorded by the pressure sensor, with a vertical resolution of 0.1 to 1 m (depending on the sea state). The SBE19+ CTD offered precisions for temperature and computed salinity of 0.005 °C and 0.002, respectively. The PASTIS_HVR was used to collect samples every 2-3 m for benchtop flow cytometry analyses (Sect. 2.4). Complementary nutrient analyses were made at a lower vertical resolution (10 m). Nitrite and phosphate concentration profiles never exceeded the limits of quantification of the analyzers (data not shown). Twenty-seven random seawater samples were collected and filtered.

Surface-specific growth rates and primary production estimates

Phytoplankton growth rates were estimated by combining the net abundances measured independently with the AFCM and a size-structured population model described in Sosik et al. (2003) and adapted by Dugenne et al. (2014) and Dugenne (2017). The observed diel variations of single-cell biovolumes within a specific cluster, retrieved from the power law relationship between cell size and FWS, were used as inputs for this size-structured population model. The absolute numbers of cells (N) and proportions of cells (w) were counted during 24 h to follow the transitions of cells in each size class (v).
We identified, by inverse modeling, the set of parameters that could optimally reproduce the diel variation of the population size distribution using only cell cycle transitions. In the model, temporal transitions of cell proportions between size classes are assumed to result from either cellular growth, supported by photosynthetic carbon assimilation, or asexual division. The increase in cell size occurring during the interphase depends on the proportion of cells that will grow between t and t + dt, denoted γ(t). This probability is expressed as an asymptotic function of incident irradiance (Eq. 2), with Irradiance the instantaneous PAR, Irradiance* a scaling parameter, and γ_max the maximal proportion of cells growing between t and t + dt. By contrast, the decrease in cell size occurring after mitosis marks the production of two daughter cells whose size has been divided by a factor of 2. Thus the decrease in cell size depends on the proportion of cells that will enter mitosis between t and t + dt, denoted δ(t), which ultimately controls the population net growth rate (Eq. 3).

Because natural populations show a clear temporal variation of the mitotic index δ(t), the proportion of cells entering mitosis is expressed as a function of both time (Vaulot and Chisholm, 1987; André et al., 1999; Jacquet et al., 2001) and cell size (Marañón, 2015) (Eq. 4), with f the normal probability density, v the cell size, δ_max the maximal proportion of cells entering mitosis, µ_v and σ_v the mean and SD of the size density distribution, and µ_t and σ_t the mean and SD of the temporal density distribution. By analogy with a Markovian process, the initial distribution of cell sizes, N(0), is projected with a time step of dt = 10/60 h to construct the normalized size distribution, w(t), over a 24 h period (Eq. 5), with ^ standing for model predictions.

The tridiagonal transition matrix, A(t), contains

1. the stasis probability, expressed as the proportion of cells that neither grew nor divided between t and t + dt,

2. the growth probability (γ), expressed as the proportion of cells that grew between t and t + dt, and

3. the division probability (δ), expressed as the proportion of cells that entered division between t and t + dt.

The set of optimal parameters, θ (Eq. 6), minimizes the Gaussian error between predictions (ŵ) and observations (w) (Eq. 7). The SDs of the parameters are estimated by a Markov chain Monte Carlo approach that samples θ from its prior density distribution, obtained after running 200 optimizations on bootstrapped residuals, to approximate the parameter posterior distribution using the normal likelihood.

Ultimately, the equivalent temporal projection of proportions is conducted on the absolute diel size distribution (N) with the optimal set of parameters to estimate the population intrinsic growth rate (µ) over a 24 h period, from which the hourly logarithmic difference of observed abundances is subtracted to obtain the daily average population loss rate (l) (Eq. 8).
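The displayed equations were lost from this version of the text, so the sketch below restates the projection step in code under explicit assumptions: γ(t) uses the saturating-exponential form of Sosik et al. (2003) for the "asymptotic function of irradiance" (Eq. 2), δ(v, t) is taken as δ_max times the two normal densities named in the text (Eq. 4), and the size grid is log2-spaced so that growth moves a cell up one class while division sends two daughters down one class. Parameter names mirror the text; the optimization and MCMC steps are omitted.

```python
import numpy as np
from scipy.stats import norm

DT = 10.0 / 60.0  # projection time step dt = 10/60 h, as in the text

def gamma_prob(irradiance, gamma_max, irr_star):
    # Eq. 2 (assumed form, after Sosik et al., 2003): asymptotic in PAR
    return gamma_max * (1.0 - np.exp(-irradiance / irr_star))

def delta_prob(v, t, delta_max, mu_v, sigma_v, mu_t, sigma_t):
    # Eq. 4: normal densities f in cell size v and time of day t
    return delta_max * norm.pdf(v, mu_v, sigma_v) * norm.pdf(t, mu_t, sigma_t)

def project_24h(N0, times, par, p):
    """Project N(t + dt) = A(t) N(t) (Eq. 5) over 24 h.

    N0: initial counts per size class; times: hours (sampled every DT);
    par: PAR at those times; p: dict of the parameters named above.
    Returns the absolute (N) and normalized (w) distribution histories."""
    m = N0.size
    v = np.arange(m, dtype=float)            # size-class index (log2 biovolume)
    N_hist = [N0.astype(float)]
    for t, E in zip(times, par):
        g = gamma_prob(E, p["gamma_max"], p["irr_star"])
        d = delta_prob(v, t % 24.0, p["delta_max"],
                       p["mu_v"], p["sigma_v"], p["mu_t"], p["sigma_t"])
        A = np.zeros((m, m))
        for i in range(m):
            gi = g if i < m - 1 else 0.0     # largest class cannot grow further
            di = d[i] if i > 0 else 0.0      # smallest class cannot halve
            A[i, i] = 1.0 - gi - di          # 1. stasis
            if i < m - 1:
                A[i + 1, i] = gi             # 2. growth: class i -> i + 1
            if i > 0:
                A[i - 1, i] = 2.0 * di       # 3. division: two daughters
        N_hist.append(A @ N_hist[-1])
    N_hist = np.array(N_hist)
    return N_hist, N_hist / N_hist.sum(axis=1, keepdims=True)

# Intrinsic growth rate over the 24 h projection (d^-1):
#   mu = ln( N_hat(24 h).sum() / N(0).sum() )
# and, following Eq. 8, the loss rate l is mu minus the daily logarithmic
# change of the observed abundances.
```

Run on a diel series of observed size distributions with the carbon quotas of Table 2, the resulting µ would feed the NPP_cell and NPP_size estimates of Eqs. (9) and (10) below.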
The ratio between the mean cell biovolume at dawn (minimum) and at dusk (maximum) has been used for Synechococcus and other phytoplankton groups (Binder et al., 1996; Vaulot and Marie, 1999) as a minimum estimate of the daily growth rate. This simple approach assumes that cell growth and division are separated in time (synchronous population), whereas these processes occur simultaneously in a population (Waterbury et al., 1986; Binder and Chisholm, 1995; Jacquet et al., 2001). Since the model allows any cell to grow, divide or be at equilibrium over the entire integration period (asynchronous populations), growth rates µ_size greater than the median size ratio µ_ratio = ln(v_max/v_min) (indicative of a synchronous population) are assumed to be well represented. The apparent increase in carbon biomass, defined as the net primary production NPP_cell (Eq. 9, mg C m^-3 d^-1), was calculated using a constant cell-to-carbon conversion factor Q_C,calc (Table 2). The biovolume-to-carbon relationship a·v_i^b (Table 2) was used to calculate the net primary production NPP_size (Eq. 10) as the differential of the carbon distributions, i.e., as the scalar product of the vectors a·v_i^b and N over time. These conversions allow approximation of the daily NPP from the carbon content of the cells newly formed by mitotic division over 24 h (NPP_cell), or directly assimilated by photosynthesis during the photoperiod (NPP_size). The estimations result from the apparent mitotic index optimally deduced from the diel dynamics of the normalized size distribution. They do not accommodate any cell removal process within the period of integration, such as grazing or physical transport.

Description of the fine-scale structure

Surface currents distributed by AVISO exhibited a cyclonic recirculation in the Ligurian subbasin (Fig. 1). Current velocities and directions measured by the ADCP were in general agreement with the altimetry-derived ones. The highest current velocities (> 0.3 m s^-1) were associated with the Northern Current. The main cyclonic circulation was divided into two parts: a small recirculation centered on (8.75° E, 43.80° N) and a second one in the southwest, separated by a local minimum in current intensity, both observed in the AVISO and ADCP data.

Between 30 October and 2 November, a strong northeasterly wind event (wind velocities of up to 70 km h^-1) was recorded over the whole area, associated with a SST drop of ∼1 °C in the Ligurian subbasin. Satellite SST images from 30 October to 6 November (Fig. 1) showed a patch of cold surface waters with values below 17.5 °C. The observation was confirmed by the ship surface TSG between 3 November and 6 November (mean SST of 16.3 ± 0.3 °C and mean SSS below 38.20; Fig. 2, Table 1). The cold patch was surrounded by warmer surface waters with SST up to 19 °C, validated by in situ records from the TSG. Both satellite and in situ sampling described warm boundary waters characterized by SST higher than 17.0 °C. These warm boundary waters were divided into type 1 and type 2 (see Sect. 4.1). Type 2 warm boundary waters presented the highest SST (above 18 °C) and SSS values below 38.24. Type 1 warm boundary waters were defined as the surface waters characterized by SST values higher than 17.0 °C and SSS above 38.23, apart from type 2 warm boundary waters. The lowest SST values were observed between 3 November and 5 November, and the patch then warmed up on 6 November. Remotely sensed SST was well correlated with that recorded by the TSG along the ship track (R^2 = 0.82, p value < 0.05), even if remote sensing tended to underestimate SST. The temperature gradients observed from the TSG were well captured by the satellite products.

Figure 3 depicts the temperature and salinity vertical sections of a south-to-north MVP transect from 00:00 to 06:00 (local time) on 5 November. The thermocline was located between 20 and 30 m depth in the cold core area and between 30 and 40 m outside it. Temperatures above the thermocline were uniform in the cold core and warm boundary waters, while within the transition areas temperatures increased progressively from the thermocline to the surface (Fig. 3 and Fig. S1 in the Supplement). The deep water temperature, below the thermocline, ranged from 13.5 to 14.5 °C and did not present any significant differences between the cold core and the warm boundaries. Sea surface salinity (SSS) was lower (< 38.20) in the cold core than in the warm boundaries (> 38.20), and salinity at 300 m depth was higher than 38.50. A subsurface layer of low-salinity waters (< 38.10), 40 to 80 m thick, spread below the thermocline. This subsurface layer reached the surface in the center of the cold core, whereas in the warm boundaries saltier (> 38.20) surface waters overlaid it.

Remotely sensed Chl a concentration estimates ranged between 0.10 and 0.30 µg dm^-3 during the campaign (Fig. 1). Unfortunately, cloud cover masked the remotely sensed Chl a from 3 November to 5 November. The study area (black square in Fig. 1) was considered case 1 waters (Morel et al., 2006). On 30 October, remotely sensed surface Chl a concentrations ranged from 0.10 to 0.20 µg dm^-3. On 2 November, concentrations higher than 0.30 µg dm^-3 were observed in the center of the cold patch; they decreased below 0.20 µg dm^-3 on 6 November. Mean satellite Chl a estimates averaged from 2 November to 6 November were significantly correlated with the Chl a derived from the ship fluorometer during the campaign (R^2 = 0.47, p value < 0.05). The highest Chl a concentrations measured from TSG fluorescence were recorded in the center of the cold patch, with Chl a concentrations up to 0.40 µg dm^-3 and a mean Chl a of 0.17 ± 0.04 µg dm^-3 (Table 1), while the warm boundaries presented lower Chl a concentrations (< 0.15 µg dm^-3).

Surface nutrient variability was investigated from the 177 discrete samples taken every 20 min (Table 1). Surface nitrate, nitrite and phosphate concentrations were below or close to the detection limits (< 0.05 µmol dm^-3), precluding any observation of spatial variability. Only silicate concentrations presented detectable variability in their distribution, with mean values of 1.31 ± 0.05 µmol dm^-3 in the cold core and 1.19 ± 0.06 µmol dm^-3 in the warm boundary surface waters (Table 1).

Deep Chl a maxima (DCM) were observed in the vicinity of 30 and 45 m depth for the cold core and warm boundary stations, respectively (Fig. S1).
The DCM occurred approximately 10 m below the thermocline. DCM Chl a concentration values were between 0.30 and 0.40 µg dm^-3 in the cold core and between 0.20 and 0.30 µg dm^-3 in the warm boundary waters. The euphotic zone (Z_eu) extended down to around 70 m over the whole study area (Figs. S1 and S2).

Phytoplankton group definition

Up to 10 phytoplankton groups were resolved by AFCM on the basis of their light scatter (namely forward scatter, FWS, and sideward scatter, SWS) and fluorescence (red FLR and orange FLO fluorescence ranges) properties over the 177 validated samples, using two-dimensional projections (cytograms, Fig. 4). Due to their small size and limited photosynthetic pigment content, Prochlorococcus were resolved close to the limit of the AFCM detection capacity by means of the maxima of the SWS and FLR pulse shape curves. Cells assigned to the Synechococcus group were unambiguously resolved thanks to their higher FLO intensity compared to their FLR intensity (Fig. 4a and b), induced by the presence of phycoerythrin pigments. According to a log-log linear regression relating FWS to the equivalent spherical diameter (ESD), Prochlorococcus and Synechococcus exhibited mean ESDs of 0.5 ± 0.1 µm (0.07 ± 0.03 µm^3) and 0.9 ± 0.2 µm (0.46 ± 0.38 µm^3), respectively (Table 2). Prochlorococcus and Synechococcus continuous surface counts were compared to conventional flow cytometry analyses performed with the FACSCalibur flow cytometer on discrete samples collected at fixed stations at the first two sampling depths near the surface (Fig. S3). The two counting methods did not show significant differences (t test, p value < 0.001), which validates the observations obtained with the automated CytoSense. A post-campaign validation against conventional flow cytometers showed a good fit of the data (Student test, p > 0.4, Table S1 in the Supplement).

With a higher trigger level (FLR30) it was possible to resolve and count larger cells, from picoeukaryotes to microeukaryotes, in 5 cm^3 (Fig. 4c and d). Three groups of picoeukaryotes were resolved on the basis of their optical properties. The main picoeukaryote group (PicoE) exhibited higher FLR and FWS and lower FLO intensities than Synechococcus, with an ESD of 2.6 ± 0.5 µm (10.5 ± 5.5 µm^3) (Table 2). One picoeukaryote group with high FLO (PicoHighFLO) and another with high FLR (PicoHighFLR) were also identified during the campaign (Fig. 4c and d). Three distinct nanoeukaryote groups were defined according to their red and orange fluorescence properties. The main nanoeukaryote group (NanoE) had a FLR/FLO ratio close to the PicoE ratio (Fig. 4c), with an ESD of 4.1 ± 0.5 µm (37.0 ± 14.7 µm^3) (Table 2). Nanoeukaryote cells that emitted orange fluorescence with higher intensities than red fluorescence were divided into two additional groups: NanoFLO and NanoHighFLO. The distinction between nano- and microeukaryotes was made by combining FWS and the pictures collected by the image-in-flow device of the CytoSense. During the campaign, taxonomic identification based on the pictures taken by the image-in-flow device was impossible due to the lack of a sufficient number of phytoplanktonic cells with sizes above 20-30 µm (the size from which a taxonomic identification can be performed). Two types of microeukaryotes were distinguished: microeukaryotes (MicroE) with a size ranging between 10 and 20 µm and microeukaryotes with high FLO (MicroHighFLO) with a size above 20 µm. The relatively small size of most of the MicroE limited their identification. MicroE was not strictly a microphytoplankton group according to the official size classification (20-200 µm), but it was distinct from the three nanoeukaryote groups (ESD < 5 µm).

Phytoplankton group distribution

Figure 5 shows the surface abundances of the Prochlorococcus, Synechococcus, picoeukaryote and nanoeukaryote groups over the study area. Picoeukaryote and nanoeukaryote abundances were computed as the sum of the three picoeukaryote (PicoE, PicoHighFLO and PicoHighFLR) and nanoeukaryote (NanoE, NanoFLO and NanoHighFLO) groups, respectively, in order to simplify the representation of the phytoplankton group distribution. Prochlorococcus abundances varied between 8800 and 51 500 cells cm^-3 (Fig. 5a), with higher abundances in the center of the structure (> 30 000 cells cm^-3) corresponding to the cold core (Fig. 2a). In the warm boundaries, Prochlorococcus abundances were below 30 000 cells cm^-3, with on average 20 000 ± 6000 cells cm^-3 (Table 1). The Synechococcus population ranged from 13 500 to 35 900 cells cm^-3 (Fig. 5b). In the patch of cold waters, the Synechococcus mean abundance was 18 000 ± 3000 cells cm^-3, and in the surrounding warm waters a mean abundance of 25 000 ± 3000 cells cm^-3 was observed. Picoeukaryote abundances varied between 875 and 2040 cells cm^-3, and nanoeukaryote abundances ranged from 567 to 1175 cells cm^-3. The picoeukaryote and nanoeukaryote populations presented a surface distribution pattern similar to that of Prochlorococcus, with higher abundances in the cold core than in the warm boundaries. In the cold patch, mean abundances of 1200 ± 200 cells cm^-3 and 890 ± 90 cells cm^-3 were observed for picoeukaryotes and nanoeukaryotes, respectively. Warm boundary surface waters hosted picoeukaryote and nanoeukaryote average populations of 900 ± 100 cells cm^-3 and 780 ± 130 cells cm^-3, respectively (Table 1). PicoHighFLO and NanoFLO did not exhibit a clear pattern between the cold core and warm boundaries (data not shown), with abundances varying between 50 and 150 cells cm^-3. PicoHighFLR abundance was below 100 cells cm^-3 during the entire campaign, except in the vicinity of Station 8 (Fig. 2), where it reached up to 400 cells cm^-3 and where the highest Chl a values were recorded (Fig. 6).
NanoHighFLO showed the same behavior as PicoHighFLR, with abundances below 50 cells cm^-3 during the campaign and a peak of up to 200 cells cm^-3 in the same area. Variations of microeukaryotes (between 20 and 30 cells cm^-3 and below 5 cells cm^-3 for MicroE and MicroHighFLO, respectively) are not shown, considering their low and relatively homogeneous abundances during the campaign and throughout the different types of surface waters (Table 1). However, MicroHighFLO abundances were exceptionally high in the vicinity of Station 8 (up to 20 cells cm^-3).

Figure 6 illustrates the temporal surface variability of Prochlorococcus, Synechococcus, picoeukaryote and nanoeukaryote abundances together with the temporal variations of SST, SSS and Chl a concentration. Prochlorococcus and Synechococcus abundances exhibited opposite distributions throughout the cold and warm surface waters, with the dominance of Prochlorococcus in cold core waters and of Synechococcus in warm boundary waters. These shifts fitted closely the short-term SST transitions observed all along the cruise. Picoeukaryote maximal abundances (around 2000 cells cm^-3) were observed simultaneously with the highest Chl a concentrations in cold waters, and lower abundances were found in warm and Chl a-poor surface waters. The nanoeukaryote population followed a similar trend.

Contribution to total fluorescence and carbon biomass

Table 2. The FWS signal was converted into cell size using the power law relationship (R^2 = 0.94) obtained with silica beads of known diameter. Biovolumes were calculated considering the cells to be spherical. Biovolumes were converted into a mean carbon cellular quota (Q_C,calc) according to the relationship Q_C,calc = a · Biovolume^b, using the conversion factors a and b reported by (1) Menden-Deuer and Lessard (2000). Carbon cellular quotas (Q_C,lit, "lit" for literature) from (2) Campbell et al. (1994) and (3) Shalapyonok et al. (2001) are reported for comparison.

Fine-scale vertical variability

The fine-scale vertical variability of temperature, salinity and Chl a concentration was investigated in the first 35 m of the water column during several discrete station stops, together with phytoplankton abundances sampled every 2-3 m (Fig. 8) with the dedicated PASTIS_HVR pump system. Fixed stations were grouped into cold core (stations 5, 8, 9 and 11) and warm boundary (stations 6, 7 and 10) stations depending on their surface water temperatures (Figs. 2 and 6). Profiles performed at warm boundary stations over the first 35 m were mostly homogeneous. Temperatures ranged between 18 and 19 °C, salinity values were higher than 38.20 and Chl a concentrations were lower than 0.10 µg dm^-3. Nitrate concentrations remained lower than 0.05 µmol dm^-3 and silicate concentrations varied between 1.15 and 1.20 µmol dm^-3. Picophytoplankton abundances exhibited the same uniform vertical patterns. Prochlorococcus abundances remained below 30 000 cells cm^-3, the Synechococcus population counted over 30 000 cells cm^-3 and picoeukaryotes varied between 800 and 1200 cells cm^-3. As previously described in Sect. 3.1, the thermocline was located between 30 and 40 m, below the PASTIS_HVR sampling depth. Profiles performed in cold surface water areas showed a decrease in temperature from 15 to 30 m depth occurring together with an increase in Chl a concentrations of up to 0.60 µg dm^-3. Higher values of nitrate and silicate were recorded concomitantly with the temperature drawdown and Chl a increase. Prochlorococcus and picoeukaryote populations became more abundant at depth and reached concentrations of up to 97 000 cells cm^-3 and 5200 cells cm^-3, while Synechococcus abundance tended to decrease, together with the temperatures, below 4000 cells cm^-3. Station 11 (Fig. 2) was considered a cold core station regarding its vertical profile (Fig. 7), even though its surface was relatively warm (Fig. 6), with Synechococcus more abundant than Prochlorococcus. Station 11 was positioned in a transition area between the warm boundaries and the cold core. At the cold core stations, vertical profiles exhibited heterogeneous patterns because of a shallower thermocline (Fig. S1), impacting physical and biogeochemical fine-scale variability. These results corroborated the observations obtained from the MVP profiles (Fig. 3), suggesting a shallowing of the thermocline and of the associated surface mixed layer limit in the cold core.

Figure 8. Vertical profiles of temperature (in °C), salinity and Chl a concentrations (in µg dm^-3) obtained from the CTD fluorimeter after conversion, at the depths where vertical high-resolution sampling was performed for benchtop flow cytometry analysis using the PASTIS_HVR system. Abundances of the Prochlorococcus, Synechococcus and picoeukaryote (PicoE + PicoHighFLR + PicoHighFLO) groups are expressed in cells cm^-3. Nutrients were sampled at a different resolution using both the PASTIS_HVR system (circles) and the CTD rosette (squares). Stations performed in cold core surface waters are represented by blue-green colors and those performed in warm boundary surface waters by red-orange colors.

Growth rates and primary production estimates

Prochlorococcus and Synechococcus size distributions were retrieved over 24 h according to the power law function relating FWS to biovolume. On 5 November, the different types of waters were crossed several times by the ship. In order to select the FWS measurements performed in the warm boundary waters on this day, the individual cell FWS measurements were subsampled on an hourly scale from the 20 min AFCM measurements. In this way we were able to follow the diurnal variability of the population size distribution in the warm boundary waters only (Fig. 6). Figures 9a and b show the hourly cell biovolume variations over 24 h for Prochlorococcus and Synechococcus, respectively. A diurnal cycle was described for both populations, with minimal and maximal biovolumes observed at 06:00 and at 18:00 (local time), respectively. Prochlorococcus biovolume varied from 0.04 µm^3 (ESD = 0.42 µm) to 0.12 µm^3 (ESD = 0.61 µm) between dawn and dusk. At the end of the dark period (06:00), Synechococcus biovolume decreased down to 0.20 µm^3 (ESD = 0.72 µm), and at the end of the photoperiod biovolume reached values of up to 0.60 µm^3 (ESD = 1.04 µm).
The size distribution variations observed for both populations, with a clear diurnal cycle pattern, highlighted the capacity of single-cell flow cytometry measurements to follow the cellular cycle of these picophytoplanktonic populations. Similar computations were performed on the pico- and nano-eukaryote populations, but their size distributions did not show a pattern consistent with the assumptions of the size distribution model.

Using the size-structured matrix population model, in situ daily growth rates were estimated from the predicted absolute distribution of cells in size classes, with the continuously observed size distribution as model input. The Prochlorococcus and Synechococcus model-predicted cell size distributions (Fig. 9c and d) reproduced the diurnal size distribution cycle well and allowed us to derive a specific growth rate (µ_size, Table 3) for both populations. For comparison, the median size ratio µ_ratio = ln(v_max/v_min) (Table 3) was computed. Prochlorococcus and Synechococcus specific growth rates µ_size were 0.21 ± 0.01 d^-1 and 0.72 ± 0.01 d^-1, and the size ratios µ_ratio were 0.28 and 0.49, respectively. The Prochlorococcus computed loss rate estimate was 0.30 d^-1, while Synechococcus was characterized by a computed loss rate of 0.68 d^-1.

The apparent production of these picocyanobacteria, NPP_cell and NPP_size, was computed from the population intrinsic growth rates (Eqs. 8 and 9), in the absence of particle grazing, sinking and advective processes, using the approximation of the carbon content Q_C,calc (Table 3) and considering mean carbon cellular quotas of 25 and 109 fg C cell^-1 for Prochlorococcus and Synechococcus (Table 2). Accounting for the increase in their size distributions during the photoperiod, Prochlorococcus NPP_size was estimated at 0.13 mg C m^-3 d^-1 and Synechococcus NPP_size at 2.80 mg C m^-3 d^-1 (Table 3), using the biovolume-to-carbon a·v_i^b relationship for Prochlorococcus and Synechococcus (Table 2).

Table 3. Prochlorococcus and Synechococcus daily growth rate estimates (µ_ratio), computed as the median size ratio µ_ratio = ln(v_max/v_min), intrinsic growth rates (µ_size) and loss rates (l) obtained from Eq. (7), and NPP_cell and NPP_size biomass production values obtained from Eqs. (8) and (9), respectively.

Discussion

The Mediterranean Sea represents only ∼0.8 % of the surface and ∼0.3 % of the volume of the World Ocean, but hosts between 4 and 18 % of the world's marine species, making it a biodiversity hotspot (Bianchi and Morri, 2000; Lejeusne et al., 2010). The Mediterranean Sea is a reduced-scale laboratory basin for the investigation of processes of global importance (Malanotte-Rizzoli et al., 2014; Pascual et al., 2017), because it is characterized by a complex circulation scheme including deep water formation and intense mesoscale and submesoscale variability (Millot and Taupier-Letage, 2005). Mesoscale and submesoscale variability overlays and interacts with the basin and subbasin scales, producing intricate processes representative of complex and still unresolved oceanic systems (Malanotte-Rizzoli et al., 2014; Pascual et al., 2017). The small size of the Mediterranean Sea and the proximity of numerous marine observatories are other outstanding advantages, giving it the status of a "miniature ocean" laboratory. The Mediterranean Sea is considered an oligotrophic basin (Moutin and Prieur, 2012) and its primary production by phytoplankton is generally low (D'Ortenzio and Ribera d'Alcala, 2009). The general surface circulation pattern in the western basin of the Mediterranean Sea is characterized by Modified Atlantic Water (MAW) transported from the Algerian basin to the Ligurian subbasin (Millot and Taupier-Letage, 2005), flowing at the surface northward along the western coast of Corsica as the Western Corsican Current, and joining the Eastern Corsican Current in the vicinity of Cap Corse to form the Northern Current (Astraldi and Gasparini, 1992; Millot, 1999). A cyclonic gyre is generated by a recirculation of the Northern Current towards the Western Corsican Current. Our study area was located in the center of a cyclonic recirculation within the Ligurian subbasin, forced by atmospheric-climatic conditions (Astraldi et al., 1994). The Ligurian subbasin hydrological regime varies from intense winter mixing to strong thermal stratification in summer and fall. The phytoplankton biomass increases significantly in late winter/early spring, sustained by nutrient fertilization from deep waters, and decreases along with biological activity in summer and fall due to nutrient (N and P) depletion in surface waters (Marty et al., 2002). In the late summer/early fall season (the period of the present study), the phytoplankton community structure in the Ligurian subbasin is dominated by small-sized phytoplankton species (such as Prochlorococcus, Synechococcus, and pico- and nano-eukaryotes; Marty et al., 2008).

Physical origins and dynamics of the fine-scale structure investigated during OSCAHR

Both ADCP and AVISO-derived surface current directions and intensities suggested that the sampled cold core mesoscale structure was associated with a cyclonic gyre generated by a recirculation of the Northern Current towards the Western Corsican Current (Fig. 1, AVISO). Besides a generally cyclonic circulation pattern between the French coast and Corsica that geostrophically domed the isopycnals, Ekman pumping is likely to have played an important role, since strong wind events were observed before the OSCAHR cruise, and previous studies (Gaube et al., 2013) have highlighted the impact of Ekman pumping on ocean biogeochemistry. Ekman pumping calculated using both WRF and scatterometer wind estimates (Fig. 10) suggested that, besides the strong wind event occurring during the first day of the cruise, the region had experienced several wind events in the 2 weeks before the cruise, characterized by vertical velocities peaking at 3-4 m d^-1 and inducing a strong decline in SST. Furthermore, the time series of vertical velocities highlighted that the cold water "patch" experienced almost constant upward vertical velocities for about 1 month (Fig. 10).
The shallowing of the thermocline in the central part of the cyclonic structure, associated with low SST in the cold patch, was shown by the MVP salinity and temperature profiles (Fig. 3). The low-salinity waters at the surface of the cold patch support the Ekman pumping hypothesis. Within the warm boundaries, a subsurface layer of low-salinity waters (< 38.10) spreading below the thermocline and reaching the surface in the cold core was observed in each MVP and CTD deployment. The origin of these low-salinity subsurface waters remains unclear. The cyclonic circulation in the Ligurian subbasin induced by the intense coastal currents along the Italian and French coasts (Astraldi et al., 1994) is supposed to isolate the central Ligurian subbasin from direct riverine inputs, such inputs being in addition particularly low in this area (Migon, 1993). Goutx et al. (2009) reported similar observations at the same time of year (13 October 2004) in the Ligurian subbasin (43.25° N, 8° E, 48 km offshore), close to our study area, as did Marty et al. (2008). Further investigations are needed to determine the origin of this low-salinity subsurface layer.

As mentioned by McGillicuddy (2016), the superposition of a wind-driven Ekman flow on a mesoscale velocity field can cause ageostrophic circulation involving significant vertical transport (Niiler, 1969; Stern, 1965). The cyclonic recirculation produced a zone of divergence in the central zone of the Ligurian Sea which domed the main pycnoclines, thereby shallowing the mixed layer (Sournia et al., 1990; Estrada, 1996; Nezlin et al., 2004). This process resulted in the fertilization of the upper mixed layer with nutrient-rich upwelled waters (Miquel et al., 2011). Remote sensing (SST, Chl a), models (AVISO, WRF), continuous surface measurements and MVP profiles all support the hypothesis of Ekman pumping induced by a strong wind event. The resulting upwelled subsurface cold water fertilized the surface waters, which increased the Chl a concentration (Figs. 1, 2 and 6) and the primary production (Sournia et al., 1990), in turn sustaining higher trophic levels (Warren et al., 2004).

Furthermore, surface warm boundary waters were subdivided into two distinct types (Table 1): type 1 (in red in Figs. 6, 10 and 11) and type 2 (in orange), according to their physical and biogeochemical properties. Cold patch waters (Fig. 7d) had SST signatures lower than 17 °C.

Nutrients and Chl a distribution

In the cold core, nitrate and silicate started to increase below 15 m (Fig. 8). The first detectable phosphate concentrations appeared below 50 m (> 0.2 µmol dm^-3, Fig. S2). However, surface cold core waters contained more autotrophic biomass than warm boundary waters, as shown by the surface Chl a concentrations (Figs. 2 and 6, Table 1). In the cold core waters, the nutrient availability starting around 15-20 m depth sustained an increase in Chl a of up to 0.6 µg dm^-3 at 30 m depth (Fig. 8), while in warm boundary waters a deeper MLD kept the DCM below 30 m (Fig. S1). The latter was characterized by lower Chl a values in the warm boundary, limited by both the nutrient availability and the amount of light available to the phytoplankton cells. Within the Ligurian subbasin, the DCM is shallower than in other oligotrophic areas: a maximum of 60 m depth (Marty et al., 2002) against 150 m or more in the tropical oligotrophic Pacific Ocean (Claustre et al., 1999) and ∼100 m in the oligotrophic Atlantic gyres (Marañón et al., 2003).
in the Ligurian subbasin was deeper than the MLD and the DCM throughout the year (Marty et al., 2002), except in winter. The variation of the nitracline depth induced by the cyclonic circulation and Ekman pumping appeared to be the most relevant factor controlling this vertical and horizontal variability of the biological distribution.

Phytoplankton functional group description

The picocyanobacteria Prochlorococcus and Synechococcus are the smallest and most abundant photoautotrophs in the oceans (Waterbury et al., 1986; Olson et al., 1988; Chisholm et al., 1992) and play a key role in a variety of ecosystems, particularly oligotrophic ones (Partensky et al., 1999a).

The observations reported in this study are, to the best of our knowledge, the first to correctly resolve Prochlorococcus abundance in surface waters using a CytoSense AFCM, thanks to several instrument improvements (an activated-carbon filter to reduce the optical background of the seawater, and a more powerful laser beam to improve the side scatter intensities of these very small cells). The Prochlorococcus mean ESD and associated biovolume of 0.5 ± 0.1 µm and 0.07 ± 0.03 µm³, respectively (Table 2), were in the lower range of the ESD and biovolume values of 0.5 to 0.9 µm and 0.03 to 0.38 µm³ reported in previous studies (Morel et al., 1993; Partensky et al., 1999b; Shalapyonok et al., 2001; Ribalet et al., 2015). Sieracki et al. (1995), DuRand et al. (2001) and Shalapyonok et al. (2001) noticed that Prochlorococcus cell diameter and biovolume were generally lower in the surface mixed layer (0.45–0.60 µm and 0.05–0.11 µm³) than in deeper waters (0.75–0.94 µm and 0.21–0.43 µm³). In this study, the Synechococcus mean ESD and associated biovolume of 0.9 ± 0.2 µm and 0.46 ± 0.38 µm³, respectively (Table 2), were in the same range of 0.8 to 1.2 µm and 0.25 to 1.00 µm³ as the ESD and biovolume values reported in previous studies (Morel et al., 1993; Shalapyonok et al., 2001; Sosik et al., 2003; Hunter-Cevera et al., 2014). DuRand et al. (2001) and Shalapyonok et al. (2001) reported that deeper Synechococcus can also be characterized by higher mean cell diameters.
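For readers wishing to reproduce the size conversions, the biovolume of a cell of given equivalent spherical diameter follows from the sphere volume formula, V = (π/6)·ESD³. A minimal check against the Table 2 values (the per-cell averaging noted in the comment is our assumption):

```python
import math

def esd_to_biovolume(esd_um: float) -> float:
    """Biovolume (µm^3) of a sphere with equivalent spherical diameter ESD (µm)."""
    return math.pi / 6.0 * esd_um ** 3

print(round(esd_to_biovolume(0.5), 3))  # ~0.065 µm^3, matching the 0.07 ± 0.03 µm^3 for Prochlorococcus
print(round(esd_to_biovolume(0.9), 3))  # ~0.382 µm^3; the reported 0.46 ± 0.38 µm^3 for Synechococcus
                                        # is the mean of per-cell biovolumes, not the biovolume of the mean ESD
```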
To explain our observations, the literature indicates that Prochlorococcus can belong to the photoadapted high-light (HL) ecotype, characterized by lower Chl a content, i.e., lower FLR, or to the low-light (LL) ecotype, characterized by higher Chl a content, i.e., higher FLR (Moore and Chisholm, 1999; Garczarek et al., 2007; Partensky and Garczarek, 2010). Usually, the HL ecotype occupies the upper part of the euphotic zone, while the LL ecotype dominates the bottom of the euphotic layer. A Prochlorococcus population with significantly higher FLR (and/or SWS) values, which would be representative of the LL ecotype, was never observed in surface waters (Fig. 12 for AFCM and Fig. S4 for conventional flow cytometry). The FLR distribution of Prochlorococcus obtained from samples analyzed by conventional flow cytometry in the cold core and warm boundary waters over the first 35 m (Fig. S5) revealed distinct normal distributions of FLR in cold core waters between the surface and mixed-layer-depth samples. The presence of both ecotypes (HL and LL) around the mixed layer depth in cold core waters (from 15 to 20 m in depth) was suggested by the Prochlorococcus FLR distributions, even though no clear bimodal distribution of the FLR (or SWS; data not shown) signals was observed (Figs. S4 and S5). The DCM (i.e., 40 m in depth), where the LL ecotype is expected to be the main ecotype, was sampled on only one occasion, during the STA11 CTD rosette (Figs. S2, S4 and S5). Campbell and Vaulot (1993) clearly showed that a bimodal distribution of FLR intensities can be observed when two ecotypes are present together in similar proportions around the DCM. By similar, we mean a sufficient abundance of both ecotypes, which makes it possible to clearly identify the bimodal distribution of FLR. Blanchot and Rodier (1996) also identified such a bimodal distribution in a few locations. They clearly explained that in other locations, ecotype (sub-population) co-occurrence cannot be detected from bimodality of the FLR distribution because both ecotypes are not abundant enough to be clearly seen. In these locations both ecotypes still existed, but their concentrations were very different, so the two peaks could not be distinguished, the larger peak masking the smaller one. Synechococcus ecotype distribution is not characterized by a clear depth partitioning: it appears to be principally controlled by water temperature and latitude (Pittera et al., 2014). Mella-Flores et al. (2011) and Farrant et al. (2016) reported that in the Mediterranean Sea, the HLI and III clades were the dominant ecotypes in surface waters for Prochlorococcus and Synechococcus, respectively, whereas the LLI and I/IV clades were the main Prochlorococcus and Synechococcus ecotypes present in deep waters. Further analyses of OSCAHR samples at the molecular level would be necessary to confirm or refute these explanations.
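As an illustration of the bimodality argument above, co-occurring HL and LL ecotypes could in principle be detected by comparing one- and two-component Gaussian mixture fits to the per-cell log-FLR distribution. The sketch below uses synthetic data and is not part of the study's analysis; it merely shows why a rare second mode is easily masked.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic per-cell log10(FLR): an abundant HL-like mode plus a rarer LL-like mode
log_flr = np.concatenate([
    rng.normal(loc=0.0, scale=0.15, size=9000),   # HL ecotype (dim cells)
    rng.normal(loc=0.7, scale=0.15, size=1000),   # LL ecotype (brighter cells)
]).reshape(-1, 1)

# Compare one- vs. two-component fits; a clearly lower BIC for k=2 with
# well-separated means suggests detectable ecotype co-occurrence
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(log_flr) for k in (1, 2)}
for k, gm in fits.items():
    print(k, round(gm.bic(log_flr), 1), np.round(gm.means_.ravel(), 2))
# When one mode is much rarer (e.g., 100x), the BIC gap shrinks and the small
# peak is masked, as Blanchot and Rodier (1996) describe.
```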
Pico- and nano-eukaryotes were separated into six cytometric groups based on their scattering (FWS) and fluorescence (FLR and FLO) properties, although pico- and nano-eukaryotes include cells of several taxa (Simon et al., 1994; Worden and Not, 2008; Percopo et al., 2011). As mentioned in Sect. 3.3, PicoE and NanoE (Fig. 4) were the main groups in terms of abundance, and their variability drove the whole dynamics of the pico- and nano-eukaryote size groups across the cold core and warm boundary waters. Although flow cytometry is ataxonomic, several previous studies have reported that the picoeukaryote size fraction in the Mediterranean Sea is represented by prasinophytes, alveolates, picobiliphytes, haptophytes and stramenopiles (Not et al., 2009), in a size spectrum from 0.9 µm (Ostreococcus tauri) to 3.5 µm (Phaeocystis cordata). A global compilation by Vaulot et al. (2008) described picoeukaryotes over an extended range of 0.8–3 µm, which is consistent with the mean ESD of 2.6 ± 0.5 µm observed in our study.

Microphytoplankton abundances reported in this study (20–30 cells cm⁻³) could appear high compared with previously reported cell concentrations ranging between 1 and 5 cells cm⁻³ (Gomez and Gorsky, 2003). MicroE cells, as defined manually on cytograms, presented ESDs of between 10 and 20 µm, which could be considered large-sized nanophytoplankton cells. As mentioned by Siokou-Frangou et al. (2010), single cells of colonial diatoms smaller than 20 µm are commonly observed in Mediterranean waters and are treated separately from the nanophytoplankton because of their larger functional size and distinct ecological role. The MicroHighFLO cluster had a mean ESD > 20 µm and was considered the only true microphytoplankton component. MicroHighFLO abundances (< 5 cells cm⁻³, with a peak of up to 20 cells cm⁻³) were in better agreement with those generally observed in similar oligotrophic surface waters (Gomez and Gorsky, 2003; Vaillancourt et al., 2003; Girault et al., 2013a). Similarly low microphytoplankton abundances (< 5 cells cm⁻³) were observed at a coastal station of the northwestern Mediterranean Sea, even during the spring bloom (Gomez and Gorsky, 2003), and low abundances of 4 ± 5 and 3.6 ± 7 cells cm⁻³ were reported by Dugenne (2017) in the northwestern Mediterranean Sea. The microphytoplankton in the northwestern Mediterranean Sea is dominated mostly by diatoms and dinoflagellates (Ferrier-Pagès and Rassoulzadegan, 1994; Gomez and Gorsky, 2003; Marty et al., 2008).

Horizontal and vertical distributions of the phytoplankton community structure

A clearly distinct three-dimensional distribution of phytoplankton abundances was observed between the cold core and warm boundary waters. Despite the apparent constant oligotrophy of the surface waters (Sect. 3.1), strong variations in the phytoplankton assemblage structure were evident in this study, consistent with previous studies conducted in similar oligotrophic areas (Marañón et al., 2003; Girault et al., 2013b). The cold core richness, in terms of Chl a concentration, was sustained by higher Prochlorococcus, picoeukaryote and nanoeukaryote abundances (Figs. 5 and 6, Table 1). By contrast, high abundances of Synechococcus characterized the warm boundaries. The contrasting surface distribution between the Prochlorococcus and Synechococcus populations is clearly visible in Fig. 6. As displayed by their vertical distribution (Fig. 8), the higher Prochlorococcus and picoeukaryote abundances in the cold core waters resulted from upwelled nutrient-rich waters. Maximal abundances above 80 000 and 4000 cells cm⁻³ were recorded for Prochlorococcus and picoeukaryotes, respectively, at the DCM depth, where nitrate was not limiting but irradiance was reduced (only 10–30 % of surface PAR). By contrast, Synechococcus presented low abundances at the DCM (< 5000 cells cm⁻³, Fig. S2) but maximal abundances (∼ 30 000 cells cm⁻³) within the warm boundary mixed layer (Fig. 8). Prochlorococcus and Synechococcus have been demonstrated to occupy different light niches over the water column (Agustí, 2004). Synechococcus is particularly adapted to depleted nitrate and phosphate conditions (Moutin et al., 2002; Michelou et al., 2011) and is high-light adapted due to less efficient accessory pigments (Moore et al., 1995). To acquire the energy necessary to grow, Synechococcus has developed efficient ways to cope with light and UV stress, unlike Prochlorococcus (Mella-Flores et al., 2012), which is able to grow deeper in the euphotic zone (Olson et al., 1990a). Marty et al. (2008) reported a similar phytoplankton community distribution in the central Ligurian subbasin under late summer/early fall conditions, and such a vertical distribution of the picophytoplankton has been described and explained in various other oligotrophic environments (Olson et al., 1990a; Campbell et al., 1997; Partensky et al., 1999a, b; DuRand et al., 2001; Girault et al., 2013b). As a matter of fact, we report similar Prochlorococcus and Synechococcus abundances, ranging between 15 000 and 50 000 cells cm⁻³, although differences of 1 or 2 orders of magnitude between Prochlorococcus and Synechococcus abundances have generally been observed in strongly to ultra-oligotrophic areas.
Contribution to total red fluorescence and C biomass

The FLR and C biomass contributions of Prochlorococcus, Synechococcus, picoeukaryotes and nanoeukaryotes present patterns opposite to those of the abundances described above between the cold core and warm boundary waters. Nanoeukaryotes were the main contributors (> 50 %) in terms of pigment content (defined by FLR) and biomass. Marty et al. (2008) reported a relatively constant contribution of about 10 % of microphytoplankton to C biomass in the same area during late summer/early fall, based on pigment data analysis. Abundances of Prochlorococcus and Synechococcus throughout the cold core and warm boundary surface waters were of the same order of magnitude as in their study (10⁵ cells cm⁻³), but the FLR and biomass contributions of Prochlorococcus were 5 to 10 times lower. When this contribution is integrated over the euphotic layer, studies performed in similar oligotrophic environments indicate a larger contribution of Prochlorococcus to Chl a and/or biomass compared to Synechococcus at this time of the year (Olson et al., 1990a; DuRand et al., 2001; Marty et al., 2008). In our study, only surface data were considered, excluding the DCM phytoplankton assemblage, which may explain the higher contribution of Synechococcus compared to Prochlorococcus.
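A minimal sketch of the group-contribution bookkeeping used here (FLRi = FLRm,i × Abundancei, summed to FLRTotal, as defined in the Fig. 7 caption); the FLRm and abundance values below are placeholders chosen only to echo the reported dominance of nanoeukaryotes, not measured data.

```python
# Mean per-cell red fluorescence (FLRm, a.u.) and abundance (cells cm^-3)
# for each cytometric group; these numbers are placeholders, not study data
groups = {
    "Prochlorococcus": (0.5, 15000.0),
    "Synechococcus":   (2.0, 20000.0),
    "PicoE":           (8.0,  3000.0),
    "NanoE":           (90.0,  800.0),
}

flr_i = {name: flr_m * abund for name, (flr_m, abund) in groups.items()}
flr_total = sum(flr_i.values())   # FLR_Total = sum_i FLR_m,i * Abundance_i
for name, value in flr_i.items():
    # With these placeholders, NanoE contributes ~50 % of FLR_Total,
    # echoing the pattern reported for nanoeukaryotes
    print(f"{name}: {100 * value / flr_total:.1f} % of FLR_Total")
```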
Biology as a fine-scale tracer of water masses

The Synechococcus relative contribution to total FLR, as defined by AFCM, tends to overestimate its importance compared to the contribution calculated from its cellular C quota. The abnormally high Synechococcus FLR (Fig. 7a, orange dots in Fig. 7b) caused a sudden increase in FLRTotal (Fig. 7b), while no shift in red fluorescence was recorded by the TSG in these type 2 warm boundary waters (Sect. 4.1). Synechococcus pigment composition is characterized by phycoerythrin (PE), with a fluorescence emission peak at 575 nm, and phycocyanin (PC), with a fluorescence emission peak at 650 nm. These pigments vary depending on the strains and the growing conditions (Olson et al., 1990b). Since the TSG fluorometer collects fluorescence emission > 685 nm and the AFCM collects > 652 nm, the relatively higher FLR contribution could be explained by PC red fluorescence entering the red fluorescence channel collected by the AFCM. As some samples were also analyzed on a FACSCalibur equipped with a 633 nm laser beam, it was possible to measure the red fluorescence induced by PC and thus calculate the PC : PE ratio. It turned out that the Synechococcus population observed in type 2 waters (stations 6 and 7) had a higher PC : PE ratio (about 0.33, data not shown) than at the other stations (< 0.27, data not shown). The PC : PE ratio varies as a response to photoacclimation, as well as to chromatic adaptation (Dubinsky and Stambler, 2009; Stambler, 2014). These Synechococcus populations were retrieved in the northern corners of our study area (Fig. 7c), characterized by warmer SST (> 18.5 °C) and lower SSS values (< 38.24) than type 1 warm boundary waters. Besides their apparently different physical properties, type 1 and 2 waters remained relatively close in terms of TSG fluorescence and phytoplankton abundances (Fig. 11). Surface silicate concentrations in type 2 waters were the lowest observed (Fig. 11d). As mentioned above, only a few phytoplankton species requiring silicate (i.e., diatoms) were observed in the Ligurian subbasin at this time of the year, meaning that the silicate concentrations observed were unlikely to result from phytoplankton silicate consumption.

The observed increase in the Prochlorococcus, Synechococcus and picoeukaryote mean cell FLRm in type 2 surface warm boundary waters (Fig. 11i–k) might result from photoacclimation to depth through an increase in cell size and Chl a content per cell (Olson et al., 1990b; Campbell et al., 1997; DuRand et al., 2001; Dubinsky and Stambler, 2009; Stambler, 2014), suggesting a recent upwelling of deeper waters. However, these waters were characterized by the highest SST recorded during the campaign, which runs counter to a deep origin of the water mass. Moreover, deep Prochlorococcus and Synechococcus cells located below the thermocline at the DCM were characterized by a ∼ 5-fold higher FLR compared to surface cells (Fig. S4). Vertical Synechococcus fluorescence values recorded by benchtop flow cytometry at stations 6 and 7 (type 2 warm boundary waters) were the highest down to 10 m in depth, but still remained below the highest fluorescence values recorded below the DCM. This rejects the hypothesis of upwelled low-light photoacclimated populations. The phytoplankton community in type 2 surface warm boundary waters might then be considered a distinct phytoplankton population, which grew in a different environment from that of the type 1 warm boundary waters. Type 1 and type 2 warm boundary waters were not significantly distinguishable in terms of SST/SSS (Fig. 7d); combining this with the surface circulation patterns and FLR anomalies (Fig. 7a), we can hypothesize that type 2 warm boundary waters correspond to a patch of surface Tyrrhenian Sea water brought by the Eastern Corsican Current and trapped in MAW from the Western Corsican Current. Although both warm boundary waters reflected similar biogeochemical growing conditions and phytoplankton group abundances, the distinct optical properties of the phytoplankton groups recorded by flow cytometry, combined with high-resolution observations, could be evidence of a different (bio)geographical water mass origin.

Flow cytometry and productivity estimates

The application of a matrix growth population model based on high-frequency AFCM measurements in warm boundary surface waters provides estimates of daily production (division rate) and loss rate for the Prochlorococcus and Synechococcus populations. The low in situ growth rate obtained for Prochlorococcus (µsize = 0.21 d⁻¹) and the higher growth rate obtained for Synechococcus (µsize = 0.72 d⁻¹) corroborate their surface distribution patterns. The combination of surface growth rates and the populations' vertical distributions suggests that Prochlorococcus growth was limited in warm boundary surface waters by the more intense light conditions, whereas Synechococcus cells were particularly well adapted to them. The Synechococcus growth rate was larger than one division per day (> 0.69 d⁻¹). As expected for an asynchronous population, the Synechococcus growth rate estimated from the difference between minimal and maximal biovolume values (µratio = 0.49 d⁻¹) was smaller than the one retrieved from the size distribution variations, µsize. For Prochlorococcus, both growth rates were characterized by low values. Small size variations, close to the detection limits of the flow cytometer, might bias the µratio calculation, which could explain why µratio (0.28 d⁻¹) was slightly higher than µsize.
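A minimal sketch of the µratio estimator on a synthetic diel cycle of biovolume distributions (the study's Eq. 7 matrix model is not reproduced here, and the sampled values are illustrative):

```python
import numpy as np

def mu_ratio(biovolumes_by_sample):
    """Daily growth estimate mu_ratio = ln(v_max / v_min), where v_max and v_min
    are the largest and smallest median cell biovolumes over one diel cycle."""
    medians = np.array([np.median(v) for v in biovolumes_by_sample])
    return np.log(medians.max() / medians.min())

# Synthetic diel cycle: the median biovolume grows by day and halves at night
rng = np.random.default_rng(1)
samples = [rng.lognormal(mean=np.log(v), sigma=0.2, size=500)
           for v in (0.30, 0.36, 0.44, 0.49, 0.33, 0.30)]  # µm^3, sample medians
print(round(mu_ratio(samples), 2))  # ~0.49, the order reported for Synechococcus
```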
The Synechococcus growth rate was consistent with the values of 0.48–0.96 d⁻¹ reported by Ferrier-Pagès and Rassoulzadegan (1994) and with the value of 0.6 d⁻¹ reported by Agawin et al. (1998), both measured in the same period in surface waters of coastal stations of the northwestern Mediterranean Sea. The Prochlorococcus growth rate was in the same range as the growth rate values (between 0.1 and 0.4 d⁻¹) reported by Goericke et al. (1993) during summer and winter in surface waters of the Sargasso Sea. Vaulot et al. (1995) and Liu et al. (1997) measured Prochlorococcus growth rates of 0.5–0.7 d⁻¹ and 0.45–0.60 d⁻¹, respectively, in oligotrophic surface waters of the equatorial and subtropical Pacific, with abundances ranging from 50 000 to 200 000 cells cm⁻³. Ribalet et al. (2015) found a linear relationship between SST and growth rate in October in the subtropical Pacific, with a growth rate value of ∼ 0.4 d⁻¹ at 18 °C. Vaulot et al. (1995) reported maximal growth rate values at 30 m in depth, where Prochlorococcus abundances were the highest. Moore et al. (1995) noticed that LL Prochlorococcus strain growth could be limited by high light intensity and was faster at lower light levels, whereas the HL strain was photoinhibited only at the highest growth irradiance tested. Based on the literature, on the mean surface FLR values obtained with the AFCM (Fig. 12) and on the single-cell FLR distribution over the water column (Fig. S4), it is very likely that the HL Prochlorococcus strain was the prevailing strain in warm boundary surface waters. The low growth rate of 0.21 d⁻¹ suggests that the surface layer was not the optimal environment at this time of the year for the growth of the Prochlorococcus population observed. This weak growth might be linked to the relatively low Prochlorococcus abundances compared to Synechococcus abundances reported in this study. Indeed, in oligotrophic areas, differences of 1 or 2 orders of magnitude between Prochlorococcus and Synechococcus abundances have generally been observed. Higher Prochlorococcus growth rates than those estimated in surface waters by AFCM might be observed at the DCM, where maximal abundances were indeed recorded.

The Prochlorococcus loss rate (0.30 d⁻¹) was higher than its growth rate during our study, suggesting that loss processes in these surface waters tended to control the Prochlorococcus population, resulting in a decrease in abundance. At the same time, the Synechococcus loss rate (0.68 d⁻¹) was slightly lower than its growth rate. Calculated loss rates include both biological factors (predation, viral lysis) and physical factors (removal or addition of cells through sedimentation or physical transport). Our loss and growth rate estimates were relatively similar for both the Prochlorococcus and Synechococcus populations. Similar observations were made by Hunter-Cevera et al. (2014) throughout a year on natural Synechococcus populations, using a similar approach. Ribalet et al. (2015) reported a synchronization of Prochlorococcus cell production and mortality with the day-night cycle in the subtropical Pacific gyre, which likely reinforces ecosystem stability in oligotrophic ecosystems. In these ecosystems with limited submesoscale instabilities, picocyanobacteria abundances, as well as biogeochemical characteristics, remain relatively constant over 1 to a few days (Partensky et al., 1999a). The apparent equilibrium of cell abundances in these systems suggests that growth and loss processes are tightly coupled, which helps to stabilize open ocean ecosystems (Partensky et al., 1999a; Ribalet et al., 2015).
Despite a similar range of abundances for both picocyanobacteria (10 000–20 000 cells cm⁻³), the apparent productions NPPsize and NPPcell of Prochlorococcus and Synechococcus (Table 3) indicate that the Synechococcus contribution to net C uptake was 20–25 times higher than that of Prochlorococcus in surface warm boundary waters. In line with the growth rate difference described above, this may reflect the fact that environmental conditions in these surface waters favored the production of Synechococcus cells. Our NPP estimate for Synechococcus (2.68 mg C m⁻³ d⁻¹, Table 3) was consistent with the gross production of between 1 and 4 mg C m⁻³ d⁻¹ reported by Agawin et al. (1998) in the northwestern Mediterranean Sea during the same period. The Marty et al. (2008) estimates of primary production in the Ligurian subbasin in summer/fall yielded values of between 8 and 16 mg C m⁻³ d⁻¹ in surface waters. According to these estimates, the apparent production of Prochlorococcus and Synechococcus accounted for 0.5–1 % and 17–33 % of primary production, respectively, which is consistent with their relative contributions (i) to total fluorescence, of 2.5 and 33.3 %, respectively, and (ii) to C biomass, of 4 and 22 %, respectively, in surface warm boundary waters, as mentioned in Sect. 3.4. The picocyanobacteria apparent net production rates obtained from the two calculations (NPPsize and NPPcell, Eqs. 8 and 9) provide similar specific C uptake rates, meaning that the quantity of C assimilated during the photoperiod is strictly equivalent to the biomass of newly formed cells after mitosis. This result strengthens the characterization of oligotrophic ecosystems in which populations follow a daily dynamic at equilibrium. However, our apparent production estimates for both Prochlorococcus and Synechococcus have several limitations. The successive conversions from FWS to biovolume and then to C content remain a substantial source of uncertainty, although our cellular C quotas are in agreement with the literature (Table 2). Recent advances in flow cytometry provide direct measurements of specific phytoplankton biomass on sorted populations (Graff et al., 2012). Growth rates do not account for size-specific removal processes (selective grazing, sinking rates). Size-selective grazing may alter in situ growth rate estimates by up to 20 % (Dugenne, 2017).
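For orientation, an apparent net production of the NPPcell type can be approximated as NPP ≈ µ × QC × N, with QC the cellular carbon quota. The sketch below is not the study's Eqs. (8)–(9); the quotas are assumed values chosen to land near the reported figures.

```python
def apparent_npp(mu_per_day, carbon_quota_fg, abundance_per_cm3):
    """Apparent NPP (mg C m^-3 d^-1) ~ mu * Q_C * N.
    carbon_quota_fg   : cellular C quota in fg C per cell (assumed value)
    abundance_per_cm3 : cells per cm^3 (1 cell cm^-3 = 1e6 cells m^-3)
    """
    # fg -> mg gives 1e-12; cm^-3 -> m^-3 gives 1e6; net factor 1e-6
    return mu_per_day * carbon_quota_fg * abundance_per_cm3 * 1e-6

# Illustrative check with assumed quotas (~30 fg C per Prochlorococcus cell,
# ~250 fg C per Synechococcus cell) and surface abundances of ~15,000 cells cm^-3:
print(round(apparent_npp(0.21, 30, 15000), 2))   # ~0.09, near the reported 0.11 mg C m^-3 d^-1
print(round(apparent_npp(0.72, 250, 15000), 2))  # ~2.70, near the reported 2.68 mg C m^-3 d^-1
```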
To overcome this grazing issue, Hunter-Cevera et al. (2014) performed a dilution experiment to estimate selective grazing rates. During the OSCAHR campaign, the study of the diel variation of the cell size distribution was limited to the warm boundary surface waters, based on the assumption that the picophytoplankton populations presented the same cellular properties across this hydrographical province. Tracking coherent time series in a particular zone based on an adaptive Lagrangian approach might be considered; this was the plan for OSCAHR, but bad weather conditions prevented it. The production estimates presented in this study rely on C conversions based on cell size, whereas many production estimates are still based on Chl a to C conversion factors. Direct integration of growth rates into biogeochemical models (Cullen et al., 1993) and comparison with C-based productivity models (Westberry et al., 2008) should be envisaged for a better assessment of the biogeochemical contribution of picocyanobacteria in oligotrophic ecosystems. Our estimates of specific growth rates and associated apparent production provide new insight into Prochlorococcus and Synechococcus population dynamics and will allow their respective biogeochemical and ecological contributions to be better understood and quantified in oligotrophic ecosystems, where they play a major role.

Conclusions

The scientific objectives of the OSCAHR (Observing Submesoscale Coupling At High Resolution) project were to characterize a fine-scale (submesoscale) dynamical structure and to study its influence on the distribution of biogenic elements and on the structure and dynamics of the first trophic levels associated with it. The methodology included the use of novel observation platforms for sampling the ocean surface layer at high spatial and temporal frequency. A new version of an automated flow cytometer optimized for small and dim cells was installed and tested for real-time, high-throughput sampling of phytoplankton functional groups, from microphytoplankton down to picocyanobacteria (including Prochlorococcus). The cruise strategy utilized an adaptive approach based on both satellite and numerical modeling data to identify a dynamical feature of interest and to track its evolution. We have demonstrated that subsurface cold waters reached the surface in the center of a cyclonic recirculation in the Ligurian subbasin. These nutrient-rich upwelled waters induced an increase in Chl a concentration, and in the associated primary production, in the center of the structure, whereas the surrounding warm and oligotrophic boundary waters remained less productive. The phytoplankton community structure was dominated in terms of abundance by Prochlorococcus, Synechococcus, and pico- and nano-eukaryotes. The phytoplankton community structure was determined from optical properties measured by flow cytometry, which is an ataxonomic technique (except for some specific genera such as Prochlorococcus and Synechococcus). Optical microscope examination of samples might add interesting information, but given the low abundance of microphytoplankton (MicroE ≈ 20 cells cm⁻³ and MicroHighFLO < 5 cells cm⁻³, with 10 µm < MicroE ESD < 20 µm and MicroHighFLO ESD > 20 µm) and the small size of the nanoeukaryote cells observed (ESD = 4.1 ± 0.5 µm), a microscopic examination would also have been limited in resolution and quantification. Prochlorococcus and Synechococcus abundances exhibited opposite distributions throughout cold and warm surface waters, with dominance of Prochlorococcus in cold core waters and
of Synechococcus in warm boundary waters. These shifts coincided precisely with the short-term transitions when passing from one water type to another. The study of the fine-scale vertical distribution of Prochlorococcus and Synechococcus showed that the dominance of Prochlorococcus over Synechococcus in cold core waters was closely linked to the upwelled subsurface waters. Coupling single-cell optical properties with the physical properties of the water appears to be a valuable approach for characterizing the origin of distinct surface water types.

The OSCAHR campaign exemplifies the new opportunities offered by coupling fine-scale vertical and horizontal physical measurements, remote sensing, model data, in situ AFCM and biogeochemistry using an innovative adaptive sampling strategy, in order to understand in depth the fine-scale dynamics of the phytoplankton community structure. The unprecedented spatial and temporal resolution obtained thanks to the latest advances in AFCM deployment allowed us to clearly demonstrate the preponderant role of physical fine-scale processes in the distribution of the phytoplankton community structure. For the first time, using this new model of the commercial CytoBuoy AFCM, we were able to fully resolve the Prochlorococcus and Synechococcus picocyanobacteria, the smallest photoautotrophs on Earth, which play a major role in widespread oligotrophic ocean areas. Finally, single-cell analysis of well-defined Prochlorococcus and Synechococcus functional groups, associated with a size-structured population matrix model, provided valuable indications of the daily dynamics of these populations. Primary productivity estimates of these two major picocyanobacteria obtained with this model are essential for better understanding the contribution of picocyanobacteria to biological productivity. This study encourages the continuation and improvement of such a strategy to biogeochemically quantify the contribution of such fine-scale structures in the global ocean. Finally, repeated surveys of the phytoplankton community structure using this kind of combined approach will allow a better assessment of the impact of climate change and anthropogenic forcings. This is particularly important in the Mediterranean Sea, a biodiversity hotspot under intense pressure from anthropogenic impacts and already one of the most impacted seas in the world (Lejeusne et al., 2010).

Data availability. Standardized, validated and interoperable OSCAHR metadata and data are available through the SeaDataNet pan-European infrastructure for ocean and marine data management. The detailed metadata (based upon the ISO 19139 standards) created by Doglioli et al. (2018) are available through the Common Data Index (CDI) service. These metadata are tied to the OSCAHR flow cytometry dataset after adopting and creating a flow cytometry common vocabulary. The data can be requested and downloaded through the SeaDataNet portal in Ocean Data View ASCII (ODV) format.

Competing interests. The authors declare that they have no conflict of interest.

Figure 1. Sea surface temperature (SST, in °C), Chl a concentration (in µg dm⁻³), AVISO altimetry (in cm) and derived current intensity (m s⁻¹) and direction in the Ligurian subbasin from 30 October to 6 November. The black box represents the study area. From 3 November to 6 November, SST and Chl a continuous surface measurements were superimposed on the satellite products, and ADCP currents are represented in the AVISO products.
Figure 2. Sea surface temperature (SST, in °C) and Chl a concentrations (µg dm⁻³) obtained from fluorescence continuous surface measurements from 3 November to 6 November during the OSCAHR campaign, and fixed station locations (STA5 to STA12). This study area corresponds to the black box represented in Fig. 1.

Figure 3. Continuous vertical profiles of salinity and temperature from the surface to 300 m in depth between points A and B from 00:00 to 06:00 (local time) on 5 November. Associated SST (in °C), SSS and Chl a concentration (in µg dm⁻³) from continuous surface measurements and abundances (in cells cm⁻³) of Prochlorococcus, Synechococcus, picoeukaryotes (PicoEuk) and nanoeukaryotes (NanoEuk).

Figure 6. Continuous measurements of SST (in °C), SSS and Chl a concentrations (in µg dm⁻³) of surface waters during the OSCAHR campaign from 3 November 12:00 to 6 November 00:00 (local time), with associated surface abundances (in cells cm⁻³) of Prochlorococcus, Synechococcus, picoeukaryotes (PicoE + PicoHighFLO + PicoHighFLR) and nanoeukaryotes (NanoE + NanoFLO + NanoHighFLO). The background color code corresponds to cold core surface waters in blue, warm boundary waters of type 1 in red and warm boundary waters of type 2 in orange (more details in Sect. 4.2). Vertical dashed lines represent the sampling times of the eight fixed stations (STA5 to STA12) performed during the campaign, and colors correspond to the type of surface waters in which the stations were performed. The purple color for STA11 indicates that STA11 was performed in transition surface waters between cold core and warm boundary 1 surface waters. The start and end of the MVP transect presented in Fig. 3 are represented by a horizontal black line.

[…] relative contributions to red fluorescence (FLRi) by the Prochlorococcus, Synechococcus, picoeukaryote (PicoE, PicoHighFLO, PicoHighFLR), nanoeukaryote (NanoE, NanoFLO, NanoHighFLO) and microphytoplankton (MicroE, MicroHighFLO) groups were obtained by multiplying their mean cell red fluorescence intensity (FLRm) recorded by AFCM by their respective abundances, according to FLRi = FLRm,i × Abundancei. The integrative FLRTotal signal was calculated as FLRTotal = Σi FLRi. The ratios FLRi/FLRTotal give an estimate of the contribution of each phytoplankton group to the bulk fluorescence signal. A significant correlation (R² = 0.80, n = 144) was established between the computed FLRTotal and Chl a concentrations de- […]

Figure 7. (a) Relative contribution FLRi = (FLRm,i × Abundancei) of Prochlorococcus, Synechococcus, picoeukaryotes (PicoE + PicoHighFLR + PicoHighFLO), nanoeukaryotes (NanoE + NanoFLO + NanoHighFLO) and microeukaryotes (MicroE + MicroHighFLO) to the integrated red fluorescence signal (FLRTotal = Σi (FLRm,i × Abundancei)) from 3 November 12:00 to 6 November 00:00. Vertical dashed lines represent the sampling times of the eight fixed stations (STA5 to STA12) performed during the campaign, and colors correspond to the type of surface waters in which the stations were performed. (b) FLRTotal (in a.u.) recorded by the automated flow cytometer vs. fluorescence (in a.u.) recorded by the TSG. Blue, red and orange dots correspond to sampling performed in cold core, warm boundary 1 and warm boundary 2 surface waters. (c) Sampling positions of automated flow cytometry surface measurements. Blue, red and orange dots correspond to sampling performed in cold core, warm boundary 1 and warm boundary 2 surface waters. (d) SSS vs.
SST (in °C) plot from continuous TSG measurements with corresponding density isolines. The distinction between cold core and warm boundary 1 and 2 surface waters made throughout the paper is based on this plot.

Figure 10. Ekman pumping vertical velocities (in m d⁻¹) computed from scatterometer (in blue) and atmospheric model (in black) wind speeds, and mean SST (in red, in °C) in our study area from 3 October to 6 November. Shaded areas represent the SD relative to each measurement. Negative Ekman pumping values represent upward vertical velocities.

Figure 11. Boxplots of SSS, fluorescence (in a.u.), SST (in °C) and silicate concentration (in µmol dm⁻³) in cold core (in blue), warm boundary 1 (in red) and warm boundary 2 (in orange) surface waters. Prochlorococcus, Synechococcus, picoeukaryote (PicoE) and nanoeukaryote (NanoE) abundances (in cells cm⁻³) and specific mean red fluorescence (FLRm) in the same hydrographical provinces are also represented as boxplots. The boundary of the box closest to zero indicates the 25th percentile, the black line within the box marks the median, the dashed line indicates the mean and the boundary of the box farthest from zero indicates the 75th percentile. Error bars above and below the box indicate the 90th and 10th percentiles, and outlying points are represented. The number of observations on which these boxplots are based is reported in Table 1.
Simulation and Techno-Economical Evaluation of a Microalgal Biofertilizer Production Process

Simple Summary

The world's population is expected to increase to almost 10,000 million by 2050, thus requiring an increase in agricultural production to meet the demand for food. Hence, an increase in fertilizer production will be needed, but with more environmentally sustainable fertilizers than those currently used. Traditional nitrogenous fertilizers (TNFs; inorganic compounds, for example nitrates and ammonium) are currently the most consumed. Biofertilizers concentrated in amino acids (BCAs) are a more sustainable alternative and could reduce the demand for TNFs. BCAs are widely used in intensive agriculture as growth and fruit-formation enhancers, as well as in situations of stress for the plant, helping it to recover its vigor. In addition, BCAs minimize or contribute to reducing the damage caused by pests and diseases, act immediately, are fully utilized by the plant and, most importantly, produce savings in the crop. The objective of this work is to propose a process for the production of a biofertilizer concentrated in free amino acids from microalgal biomass produced in a wastewater treatment plant, and to carry out a techno-economic evaluation to determine the viability of the proposal.

Abstract

Due to population growth in the coming years, an increase in agricultural production will soon be mandatory, thus requiring fertilizers that are more environmentally sustainable than those currently most consumed, since these are important contributors to climate change and water pollution. The objective of this work is the techno-economic evaluation of the production of a biofertilizer concentrated in free amino acids from microalgal biomass produced in a wastewater treatment plant, to determine its economic viability. A process proposal has been made in six stages, which have been modelled and simulated with the ASPEN Plus simulator. A profitability analysis has been carried out using a Box-Behnken-type response-surface statistical design with three factors: the cost of the biomass sludge, the cost of the enzymes, and the sale price of the biofertilizer. It was found that the most influential factor in profitability is the sale price of the biofertilizer. According to a proposed representative base case, in which the cost of the biomass sludge is set to 0.5 EUR/kg, the cost of the enzymes to 20.0 EUR/kg, and the sale price of the biofertilizer to 3.5 EUR/kg, which are reasonable values, it is concluded that the production of the biofertilizer would be economically viable.

Introduction

The increase in the world population, which by 2050 is expected to be close to 10,000 million [1,2], will require an increase in agricultural production to meet food needs, for which more fertilizers will be needed. These fertilizers should be more environmentally sustainable, since traditional fertilizers are important contributors to climate change and water pollution. […] Among the species of microalgae that have been used most frequently for the production of biofertilizers and biostimulants are Chlorella sp., Scenedesmus sp., Dunaliella sp., Nannochloropsis sp., Haematococcus sp., and Chlamydopodium sp. [15–18].
The aims of this work are to design a production process for a biofertilizer concentrated in free amino acids derived from microalgal biomass produced in a wastewater treatment plant, and to carry out the simulation (with Aspen Plus) and techno-economic evaluation of the designed process. The biofertilizer produced will have a minimum free-amino-acid content of 6% to comply with current legislation in Spain and Europe, in which the biostimulant capacity of amino acids has already been included (Royal Decree 506/28 June 2013, on fertilizer products (B.O.E.; 7 October 2013); Order APA/104/11 February 2022, which modifies annexes I, II, III and VI of Royal Decree 506/28 June 2013, on fertilizer products; and Regulation (EU) 2019/1009 of the European Parliament and of the Council of 5 June 2019, which establishes provisions relating to the availability on the market of EU fertilizer products).

Raw Material

To develop this work, the starting point was an existing urban wastewater treatment plant. Unlike conventional processes, in this case the treatment plant uses microalgae. Once the biological treatment has been carried out and the regenerated water separated, microalgal biomass is obtained. This constitutes the raw material used in the proposed work. The selected microalgal species was Scenedesmus almeriensis, which has an average total protein content of 49.5% dry weight, estimated from total protein content data presented in several publications [19–22]. This biomass is considered suitable for obtaining amino acids due to its high protein content. Likewise, it has been confirmed that Scenedesmus almeriensis hydrolysate performs well as a biofertilizer and biostimulant [23,24].

To design the process, we considered that the microalgal biomass was produced in a 5 ha wastewater treatment plant, similar to that of AQUALIA S.L in Mérida, Spain [25]. In this plant, the open reactor installation yielded an average biomass productivity of 25 g/(m²·day), estimated from biomass productivity data presented in several publications [26–30], with an operating time of 330 days per year. According to these values, 82.5 t/(ha·year) of biomass would be produced. However, it must be taken into account that in these processes the biomass is harvested together with a considerable amount of water, since after the biological treatment it goes to a downstream stage where the reclaimed water is separated (using ultrafiltration-centrifugation membranes), finally yielding a sludge that typically consists of 20% biomass and 80% water.

Proposed Process

The proposed process consists of a mechanical pressure pretreatment of the microalgal biomass (high-pressure homogenization) followed by enzymatic hydrolysis to obtain a hydrolysate concentrated in free L-amino acids with value as a biofertilizer and biostimulant. In the hydrolysis stage, the temperature is controlled by supplying heat from a solar collection system. The separation between the biomass residue and the free L-amino-acid concentrate product is carried out by centrifugation. The free L-amino-acid concentrate is subsequently stored until it is packaged in suitable containers to produce the final biofertilizer product ready for market release.

Biomass Storage

The biomass used for amino acid production comes from a wastewater treatment plant, in which the nutrients present in the wastewater are used to produce reclaimed water and the biomass of interest, using microalgae for this conversion.
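A quick arithmetic check of the raw-material figures given above (a sketch; only the stated productivity, operating days, plant area and sludge composition are used):

```python
# Worked check of the biomass figures stated in the Raw Material section
productivity = 25.0          # g biomass / (m^2 * day)
days = 330                   # operating days per year
area_ha = 5.0                # plant area (ha); 1 ha = 10,000 m^2

biomass_t_per_ha_year = productivity * days * 10_000 / 1e6   # g -> t
total_biomass_t = biomass_t_per_ha_year * area_ha
sludge_t = total_biomass_t / 0.20   # sludge is ~20 % biomass, 80 % water

print(biomass_t_per_ha_year)  # 82.5 t/(ha*year), as stated
print(total_biomass_t)        # 412.5 t of biomass per year
print(sludge_t)               # 2062.5 t of sludge per year, consistent with the
                              # "around 2000 t/year" quoted below
```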
The wastewater treatment plant produces around 2000 t/year of sludge from a centrifugation stage, with a biomass concentration of 200 kg/m³. This sludge has a pH of 8 and a density of 1.03 t/m³. The plant operates 330 days a year, 24 h a day, which will be the same period in which the biofertilizer production plant operates. The biomass sludge storage time is set to one day (24 h) to avoid decomposition problems.

Microalgal Biomass Pretreatment

A mechanical pretreatment is carried out using high-pressure homogenization. For this purpose, the microalgal sludge obtained from the wastewater treatment plant is subjected to a pressure of 200 bar followed by depressurization to ambient pressure. This allows a better degree of protein hydrolysis, in addition to conferring biostimulant properties on the hydrolysate produced [23,24]. According to Navarro-López et al. [24], the pressure treatment confers a biostimulant character with the capacity to improve the germination index by 10%.

Protein Hydrolysis

Enzymatic hydrolysis is chosen because it is a highly selective process with multiple advantages: it does not destroy amino acids; all amino acids are in their L form (the natural form) usable by plants; no organic or amine nitrogen is formed; and a high percentage of biological and nutritional value is achieved. The literature shows that the most convenient way to carry out the hydrolysis is to use two types of proteases, endoproteases and exoproteases, with one acting immediately after the other [20,22–24]. The hydrolysis process proposed by Romero-García et al. [21] is used, since they used Scenedesmus almeriensis biomass as a substrate at a concentration of 200 g/L, as in this work. The following commercial enzyme preparations from Novozymes A/S are used in the process: Alcalase 2.5 L (endoprotease activity) and Flavourzyme 1000 L (exoprotease activity), both in liquid form [31]. The hydrolysis time is 3 h and the operating temperature is 50 °C. To supply the necessary heat for the hydrolysis process, a low-temperature solar installation is used: by means of a heat exchanger, the solution to be hydrolyzed is heated using the hot fluid from the solar collection system. The enzyme dose is 4% (v/w) relative to the substrate (microalgal biomass). Alcalase 2.5 L is first added to the substrate at pH = 8.0. After 2 h, the pH is adjusted to a value of 7.0 and Flavourzyme 1000 L is added. The degree of protein hydrolysis reached after 3 h is 55% [21]. In the process, Ca(OH)₂ at 70% (w/v) is used; the Ca(OH)₂/biomass sludge ratio was obtained experimentally, resulting in a value of 0.79% (v/v). The Ca(OH)₂ is added over 2 h, during which the pH changes due to the action of Alcalase 2.5 L. To adjust to the optimal pH of Flavourzyme 1000 L, 98% wt. sulfuric acid (H₂SO₄) is added, the experimental sulfuric acid/biomass sludge ratio being 0.19% (v/v). All of the sulfuric acid added reacts to form gypsum (CaSO₄). Ca(OH)₂ (70% (w/v)) and H₂SO₄ (98% wt.) are stored in sufficient quantity for one month of operation. The heat needed to reach and maintain the optimum hydrolysis temperature of 50 °C is provided by the solar-heat-capture system.
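Translating the dosing ratios above into annual reagent demands gives orders of magnitude such as the following (a sketch; the interpretation of the v/w and v/v bases is our assumption, and the results are not taken from Table 4):

```python
# Rough annual reagent demands derived from the stated dosing ratios
sludge_t_per_year = 2062.5
sludge_density = 1.03                            # t/m^3 (stated)
biomass_t_per_year = 0.20 * sludge_t_per_year    # sludge is 20 % biomass

sludge_m3 = sludge_t_per_year / sludge_density

enzymes_m3 = 0.04 * biomass_t_per_year   # 4 % (v/w): litres of enzyme per kg biomass
caoh2_m3 = 0.0079 * sludge_m3            # 0.79 % (v/v) of the sludge volume
h2so4_m3 = 0.0019 * sludge_m3            # 0.19 % (v/v) of the sludge volume

print(round(enzymes_m3, 1))  # ~16.5 m^3/year of enzyme preparations
print(round(caoh2_m3, 1))    # ~15.8 m^3/year of Ca(OH)2 suspension (70 % w/v)
print(round(h2so4_m3, 1))    # ~3.8 m^3/year of 98 % wt. H2SO4
```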
Centrifugation

The sedimentation alternative was ruled out due to the small particle diameter of the biomass and the low difference in density between the fluid and the solid. The main factor affecting the economics of centrifuge operation is particle size. In filtration, the choice of filter media depends on particle size, but the overall economics are not affected. The cost-based cut-off point that determines the choice between separation by ultrafiltration and by centrifugation corresponds to the interval of 1–2 µm, the size of Escherichia coli [32]. Since the cells of microalgae are about ten times larger than those of bacteria, filtration as a separation operation was ruled out and centrifugation was selected. During this stage, the stream exiting the hydrolysis reactor is separated into two parts, one containing the free-amino-acid concentrate (product) and the other containing the biomass remains (by-product, 20% of the input volume to the centrifuge). In the by-product stream, it is assumed that 100% of the solid remains are retained, which is generally a good approximation, as centrifugation is a highly efficient operation.

Free-Amino-Acid Concentrate Storage and Packaging

Once the free-amino-acid concentrate is obtained, it is stored in a tank with a maximum capacity equivalent to 30 days of production. Next, the free-amino-acid concentrate is packaged in containers of different volumes to obtain the final biofertilizer market product. The volumes of the containers are 1, 5, 10, and 20 liters, which are the most commonly used in the agricultural-fertilizer market.

Solar Thermal Collector

To supply the necessary heat for the hydrolysis process, a low-temperature solar installation is used. To heat the process water (the heating fluid in the heat exchanger), the heat accumulation system consists of a collection system of thermal collectors and a tank storage system.

Heat-Storage System

The tank of this system was designed for 3 days of autonomous operation; its large volume provides greater thermal inertia.

Heat-Capture System

Solar energy is captured using thermal collectors, so the capture area depends on the energy required. Considering that there is a 20% energy loss, the collection system is designed to collect 20% more heat than that consumed during one day of operation, thus ensuring the necessary heat collection. A collection time of 8 h and a minimum temperature variation in the collectors of 10 °C were considered.
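A sizing sketch for the heat-capture system under the stated assumptions (20% losses, 8 h collection); the collector's useful output and aperture are assumed here, since the selected collector's datasheet is not reproduced, and the heat duty is the value reported later in Sect. 3:

```python
# Sizing sketch for the heat-capture system (assumed collector yield and aperture)
process_heat_kj_h = 10336.5        # heat demand of the hydrolysis step (kJ/h, from Sect. 3)
daily_heat_kj = process_heat_kj_h * 24
to_capture_kj = daily_heat_kj * 1.20          # design for 20 % losses

collection_hours = 8.0
useful_output_kw_m2 = 0.97         # assumed mean useful collector output (kW/m^2)
capture_kj_m2 = useful_output_kw_m2 * 3600 * collection_hours

area_m2 = to_capture_kj / capture_kj_m2
collectors = area_m2 / 1.9         # assumed aperture of ~1.9 m^2 per collector

print(round(area_m2, 2))    # ~10.7 m^2, close to the 10.63 m^2 reported in Sect. 3
print(round(collectors, 1)) # ~5.6; the study installs seven collectors for margin
```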
Process Modelling and Simulation: Material and Energy Balances

For the modelling and simulation of the stages of the proposed process, and therefore for obtaining the material and energy balances, the modular simulator ASPEN Plus V9 (Aspen Technology Inc., Bedford, MA, USA) was used. A processing capacity of 2062.5 t/year of biomass sludge with 7920 h/year of operation was considered. The thermodynamic models used were the non-random two-liquid (NRTL) model and the Hayden-O'Connell equation of state (HOC EoS). Figure 1 shows the process flow diagram in ASPEN Plus. Table 1 describes the most significant aspects of the modelling of each piece of equipment.

Economic Analysis

This stage consists of two parts: (1) selection and design of each piece of equipment, estimation of its cost and, subsequently, estimation of the total investment capital (CAPEX); and (2) determination of the total annual cost of operation (OPEX), as well as the expected annual income. With all of the above, an economic analysis of the proposed process was carried out to determine its profitability. This was performed using the Aspen Process Economic Analyzer V9 software (APEA, Aspen Technology Inc., Bedford, MA, USA) and the mass and energy balances from the process simulation carried out previously. As input data, the following were considered: a project life of 10 years (7920 h/year), which is a common value for biotechnological projects [33,34]; a conservative annual interest rate of 5% (the current value being around 2.5%); taxes on profits of 25% (the current value in Spain); and the straight-line method for calculating depreciation. Regarding labor, costs associated with operators and supervisors of 23.41 EUR/h and 26.45 EUR/h, respectively [35], were considered. As for the costs of Ca(OH)₂ and sulfuric acid, 65 EUR/t and 73 EUR/t, respectively, were considered. The price of electricity was set at 0.12 EUR/kWh (the average price for 2021 in Spain) and the price of water at 1 EUR/m³. The costs of the biomass sludge and the enzymes are variable, so different scenarios were considered in a sensitivity analysis; the same was done with the sale price of the biofertilizer. Thus, a sensitivity analysis was carried out using a Box-Behnken-type response-surface statistical design with 13 scenarios (Table 2), in which three factors were varied (the cost of the biomass sludge, the cost of the enzymes, and the sale price of the biofertilizer), determining the indicators of economic profitability: net present value (NPV), payback period (PP), internal rate of return (IRR), and profitability index (PI). The cost of microalgal biomass from open photobioreactors ranges between 1 EUR/kg and 5 EUR/kg, according to the literature [27,30,33,36]; therefore, if the sludge contains 20% biomass, it should cost between 0.2 EUR/kg and 1 EUR/kg. In the case of the enzymes (Alcalase 2.5 L and Flavourzyme 1000 L), their cost ranges between 10 EUR/kg and 25 EUR/kg [37–39]. Finally, the market price for biofertilizers similar to the one produced in this process ranges between 2.5 EUR/kg and 7.5 EUR/kg, according to the different references and websites consulted [40–47]. The results obtained were fitted to a quadratic model so as to assess the influence of each of the factors studied on the responses (Y), according to Equation (1), using the Design-Expert 8.0.7.1 program (Stat-Ease Inc., Minneapolis, MN, USA):

Y = a₀ + a₁A + a₂B + a₃C + a₄AB + a₅AC + a₆BC + a₇A² + a₈B² + a₉C²  (1)

where A is the cost of the biomass sludge (EUR/kg), B is the cost of the enzymes (EUR/kg), and C is the sale price of the biofertilizer (EUR/kg). The model enables the influence of each factor on the responses, as well as the interactions between factors, to be determined according to the aᵢ coefficients. A higher absolute value of a coefficient in coded terms implies a larger effect of the corresponding factor, and its sign indicates whether the effect is positive or negative. a₀ indicates the response value at the center point (coded value = 0). There are three factor coefficients (a₁–a₃). The fit is hierarchical: interaction and quadratic coefficients (a₄–a₉) are only retained if the corresponding factor coefficients exist.
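For illustration, the 13-run Box-Behnken design and a least-squares fit of the quadratic model of Equation (1) can be reproduced as follows (the response vector is a placeholder; the study fitted the actual profitability indicators with Design-Expert):

```python
import numpy as np
from itertools import combinations

def box_behnken_3():
    """13-run Box-Behnken design for 3 factors in coded units."""
    rows = []
    for i, j in combinations(range(3), 2):   # vary two factors, hold the third at 0
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0.0, 0.0, 0.0]
                row[i], row[j] = a, b
                rows.append(row)
    rows.append([0.0, 0.0, 0.0])             # centre point -> 12 + 1 = 13 runs
    return np.array(rows)

X = box_behnken_3()                          # columns: A, B, C in coded units
A, B, C = X.T
# Model matrix of Eq. (1): 1, A, B, C, AB, AC, BC, A^2, B^2, C^2
M = np.column_stack([np.ones(len(X)), A, B, C, A*B, A*C, B*C, A**2, B**2, C**2])

y = np.random.default_rng(2).normal(size=len(X))  # placeholder responses (e.g., NPV)
coeffs, *_ = np.linalg.lstsq(M, y, rcond=None)    # a0 ... a9 of Eq. (1)
print(np.round(coeffs, 3))
```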
Table 3 shows the main characteristics of the process streams and the results obtained from the resolution of the material and energy balances with Aspen Plus. Once all the streams appearing in the process had been characterized, the reagent and energy needs, and the production of biofertilizer for one year, were calculated. Table 4 shows the values obtained, resulting in a biofertilizer production of 1645.95 t. A solid by-product (483.52 t) was also obtained, which could be used as an organic amendment.

Table 4. Annual consumption of reagents and heating energy and annual production of solid by-products and biofertilizer.

Equipment Selection, Sizing, and Cost Estimation

Except for the packaging machine and the solar thermal collectors, the equipment was selected and dimensioned, and its cost estimated, using the Aspen Process Economic Analyzer V9 (APEA) and its databases, all following the material and energy balances and the process design considerations. Table 5 shows the selected equipment, the main dimensions, the number of pieces of equipment, and the cost. The total cost of equipment is EUR 1,088,448.80. The packaging machine [48] was selected since it meets the specifications. It allows containers of very different volumes, up to 20 L, to be filled, with a packaging capacity of up to 120 bottles per minute. It is also specially designed for viscous and semi-viscous liquids and is made of 316 L stainless steel. It is budgeted at EUR 80,000, including the Euroguard (CE) system and transport costs. Considering the heat that the collection system has to supply to the process (10,336.50 kJ/h, Table 3), the considerations made in Section 2.2.6 (20% losses, 8 h of collection, and 10 °C minimum temperature variation in the collectors) and the specifications of the selected collector, a heat-capture surface of 10.63 m² is required, which is equivalent to 5.6 collectors, so a total of seven collectors have to be installed.

Investment Capital

Once the total cost of equipment had been estimated using the APEA software, the initial investment capital required (CAPEX) was estimated, resulting in a value of EUR 9,648,523.33 for this project. The item with the greatest weight was "design, engineering and acquisitions" (Table 6). This investment value is of the order of that of other plants with very small treatment capacities of around 2000 t/year, with investment values of less than EUR 9 million [50,51]. Table 7 shows the results of the statistical design carried out for the sensitivity analysis. As can be seen, in scenarios 1, 9, and 11 the NPV is negative; that is, there will be no benefit over the life of the project and, therefore, no PP value can be obtained, since the investment will never be recovered, nor an IRR value, since it cannot be less than 0. In terms of PI, negative values are obtained, which indicate how much would be lost for each euro of investment under these conditions, ranging from −0.08 to −1.02.

Economic Sensitivity Analysis

The results obtained were modelled, and the values of the model coefficients and the response surfaces obtained for the four responses studied are shown in Figures 2–5. Of the three factors studied, the most influential, by far, is the sale price of the biofertilizer, followed by the cost of the biomass sludge and, finally, the cost of the enzymes. If, for example, the coded coefficients obtained for the NPV model are compared, that of the sale price of the biofertilizer is almost 5 times that of the cost of the biomass sludge and 14 times that of the cost of the enzymes.
Similar results were obtained in a sensitivity study of the production of microalgal protein concentrate by flash hydrolysis, which showed that the sale price of the concentrate had the greatest influence [52]. Thus, the sale price of the final product is crucial for the economic viability of microalgae valorization processes, as shown in a multi-objective study of the techno-economic optimization of the microalgae-based value chain; the results obtained here are in line with those found in the literature [53].

Table 7. Results of the statistical design of the sensitivity analysis: net present value (NPV), payback period (PP), internal rate of return (IRR), and profitability index (PI).

If the response surfaces for the NPV (Figure 2) and the PI (Figure 5) are observed, a flat zone below the value 0 of each of these variables indicates that there would be no benefit and the project would not make a profit. In the case of the higher costs of biomass sludge and enzymes, a sale price of the biofertilizer higher than 3.5 EUR/kg would be needed to achieve NPV > 0 and PI > 0. If NPV = 0 and PI = 0, the initial investment is just recovered over the life of the project, but the project would not be attractive to investors and, therefore, would not be carried out. For projects to be attractive, the investment must be recovered as soon as possible, with around 60–70% of the life of the project being acceptable; in the present case, the PP should lie between 6 and 7 years. If the higher costs of biomass sludge and enzymes are considered, to achieve a PP of 6–7 years the sale price of the biofertilizer should be between 3.9 and 4.4 EUR/kg. Finally, the IRR must be as high as possible, and it should at least triple the interest rate considered, which in this case is 5%; therefore, the IRR should be greater than 15%. To achieve IRR > 15% with the higher costs of biomass sludge and enzymes, the sale price of the biofertilizer would have to be greater than 4.1 EUR/kg.

Case Study

Once the sensitivity analysis had been carried out, a case study was proposed, in which specific values were given to the three factors studied. For the cost of the biomass sludge, a value of 0.5 EUR/kg was used, which corresponds to a biomass cost of 2.5 EUR/kg, a conservative figure for biomass from wastewater treatment in open reactors, whose cost is often below 2 EUR/kg [36]. The cost of the enzymes was set at 20 EUR/kg, which is acceptable according to the values found in the literature, and, finally, a value of 3.5 EUR/kg was set for the sale of the biofertilizer, which is below the reference price for this product, so as to facilitate its market entry. The price of the biofertilizer was also chosen to be conservative and to take into account the effect that a lower efficiency of the enzymes and/or a lower concentration of the microalgal sludge could have on the system. Clearly, if the concentration of the sludge is maintained and the efficiency of the enzymes is lower, a lower concentration of amino acids will be produced; in the same way, if the concentration of the biomass sludge is lower and the efficiency of the enzymes is maintained, the concentration of free amino acids will also be lower. With the values proposed above, an annual production cost of EUR 4.22 million was obtained for the case study, with the cost of raw materials being the factor with the greatest weight, at over 40% of the total (see Table 8).
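Before quoting the base-case results, the following sketch shows how the four profitability indicators follow from an annual cash-flow vector (in million EUR); the cash-flow profile below is a placeholder, not the Table 8 figures, while the CAPEX is the value estimated above:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the (negative) initial investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_years(cashflows):
    """First year in which the cumulative (undiscounted) cash flow turns positive."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None

# Placeholder profile (million EUR): CAPEX of ~-9.65, then after-tax cash
# flows that ramp up over the 10-year project life
flows = [-9.65] + [1.0, 1.5, 2.0, 2.5] + [3.0] * 6
print(round(npv(0.05, flows), 2), round(irr(flows), 3), payback_years(flows))
# Profitability index: PI = NPV / CAPEX, reported as 0.95 in the base case
print(round(npv(0.05, flows) / 9.65, 2))
```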
The NPV at the end of the 10-year life of the project is EUR 9.17 million, and its evolution throughout the project can be seen in Figure 6. The PI obtained is 0.95; that is, a profit of almost one euro will be achieved per euro invested, which means that after 10 years the project will have returned almost twice the amount invested, around 10% per year, much higher than current interest rates on bank deposits, which are below 0.1%. In addition, the PP is 6.5 years and the IRR is above 18%, a value that more than triples the 5% interest considered for the project. The values obtained here are similar to those of another very small plant that processes around 1000 t/year of grape pomace, for which PP = 5.8 years, IRR = 13.2% and NPV = EUR 3.58 million were obtained [51].

It is worth mentioning that the economic analysis does not take into account the income that would be obtained from the sale of the solid by-product. If it were sold as an organic amendment at a price of around 60 EUR/t [54], it would bring in around EUR 29,000 per year, which over the ten-year life of the project would amount to close to EUR 300,000.

The sustainability of the proposed process should also be highlighted. A previous study by Arashiro et al. [55] carried out a life-cycle analysis considering two scenarios for the environmental impact of the final use of the microalgal biomass produced in the treatment of wastewater: (1) its use in an anaerobic digestion process to produce biogas and (2) its use for the production of biofertilizer. The second scenario turned out to be more environmentally friendly in 7 of 11 impact categories (climate change, ozone depletion, freshwater eutrophication, marine eutrophication, photochemical oxidant formation, fossil depletion and human toxicity). In the same study, these two scenarios were compared with the traditional treatment of wastewater with activated sludge, resulting in a lower impact in 6 of 11 impact categories (climate change, ozone depletion, freshwater eutrophication, marine eutrophication, photochemical oxidant formation and fossil depletion) [55].

The company Biorizon Biotech SL, a world pioneer in the development of various biofertilizers and biostimulants based on microalgae, has taken an important step towards strengthening corporate sustainability by joining the United Nations Global Compact, with the aim of promoting sustainable economic development and contributing to minimizing the nitrogen footprint [56]. Other companies, such as Algaenergy SA, also produce biofertilizers based on microalgae, stating that they provide three types of general benefits to crops: higher yield, better quality and greater stress resistance. This company also affirms that its products contribute decisively to the conservation of nature and the environment since, for every 5 L produced, 2 kg of CO2 are removed during the biomass production process [57]. Furthermore, to advance and promote compliance with the Sustainable Development Goals of the United Nations' 2030 Agenda, this agency organizes a programme for progress and development in which the most outstanding projects are selected. Within the Ocean Innovation Challenge (OIC), a Spanish company, Ficosterra, dedicated to the transformation of algae into fertilizers, biostimulants and biofertilizers for agriculture, has been chosen from among over 600 candidates.
With their project "Nutrialgae", they propose to demonstrate that the use of biostimulants reduces water contamination while increasing crop productivity by up to 15% on average, thus advancing towards the agriculture of the 21st century [58,59].

Conclusions

A process has been proposed for the production of a biofertilizer concentrated in free L-amino acids from microalgal biomass, which allows a concentration of 6% of free amino acids to be obtained, thus complying with current legislation on fertilizers in Spain and the European Union. The process transforms the microalgal biomass produced in a wastewater treatment plant into a biofertilizer that can be used in agriculture to improve crops and reduce the use of traditional fertilizers, thereby advancing towards a sustainable food system for the 21st century, in line with the Sustainable Development Goals of the United Nations' 2030 Agenda.

The Aspen Plus simulator allowed the process to be modelled and the material and energy balances necessary for the subsequent stages of the analysis to be obtained. The Aspen Process Economic Analyzer software provided estimates of the CAPEX and OPEX, showing that the cost of raw materials carries the greatest weight. The sensitivity analysis, carried out using a Box-Behnken response surface statistical design with three factors (the cost of the biomass sludge, the cost of the enzymes and the sale price of the biofertilizer), determined that the most influential factor in profitability is the sale price of the biofertilizer. In the base case proposed, the cost of the biomass sludge (0.5 EUR/kg), the cost of the enzymes (20.0 EUR/kg) and the sale price of the biofertilizer (3.5 EUR/kg) were set, showing that the production of the biofertilizer is economically feasible, with NPV = EUR 9.17 million, PI = 0.95, PP = 6.5 years and IRR = 18.31% for a project life of 10 years.

Funding: This work is part of the R&D project AL4BIO RTI2018-099495-A-C22 funded by MCIN/AEI/10.13039/501100011033/ and by "ERDF A way of making Europe", and it is also part of the Valima project funded by Junta de Andalucía (P20_00800). J.M. Romero-García expresses his gratitude to the Junta de Andalucía for financial support (Postdoctoral researcher R-29/12/2020).

Institutional Review Board Statement: Not applicable.
6,749.4
2022-09-01T00:00:00.000
[ "Environmental Science", "Agricultural and Food Sciences", "Engineering" ]
Western Blot Analysis of the IgG-Antibody Response to Acid-Glycine-Extracted Antigens from Campylobacter fetus subsp. fetus and C. jejuni in Naturally Infected Sheep

Gürtürk K., I. H. Ekin, A. Arslan: Western Blot Analysis of the IgG-Antibody Response to Acid-Glycine-Extracted Antigens from Campylobacter fetus subsp. fetus and C. jejuni in Naturally Infected Sheep. Acta Vet. Brno 2007, 76: 245-251.

The IgG-antibody response of aborting sheep and of apparently healthy sheep in a flock against acid-glycine-extracted antigens from three strains each of C. fetus subsp. fetus and C. jejuni was analysed by Western blot. One strain of C. fetus subsp. fetus was isolated from aborting sheep. Western blot analysis of the sera revealed the presence of IgG antibody binding to the common antigens, including proteins with Mw of 63 kDa and 54 kDa, in extracts from both C. fetus subsp. fetus and C. jejuni strains. In addition, IgG antibodies in sera from aborting sheep reacted more strongly with the antigens from C. fetus subsp. fetus strains with Mw of approximately 100, 95 and 86.5 kDa than those of apparently healthy sheep. The binding profile of the antibodies with these antigens appeared to be unique for each C. fetus subsp. fetus strain. On the other hand, only IgG antibodies in sera from aborting sheep strongly recognized the antigens of each C. fetus subsp. fetus strain at Mw ranging from approximately 26 to 22 kDa. However, the antigenic components between 26 and 22 kDa were not detectable in Coomassie blue-stained gels and are thought to be of a non-protein nature. These low molecular weight antigens of C. fetus subsp. fetus may be related to a recent infection in aborting sheep. These observations indicate that such species-specific antigens or conjugated protein antigens could be used to improve the specificity of serological tests for detecting C. fetus antibodies in sheep sera, and they may be candidates for subunit vaccines against ovine abortion.

Ovine abortion, Western blot, C. fetus subsp. fetus, C. jejuni

Various Campylobacter species are found in the reproductive organs, intestinal tracts and oral cavities of both animals and humans. C. fetus subsp. fetus is well known as a pathogen causing sporadic or epizootic abortions in sheep and cattle as well as systemic infections in humans; considerable economic losses in animal production may ensue. C. jejuni, a human pathogen, is also recognized as a cause of abortions in sheep (Blobel and Schliesser 1982). Sheep aborting due to Campylobacter infection produce high titres of serum antibodies in response to these organisms. For detecting the humoral antibody response, several serological tests, such as agglutination, complement fixation and enzyme immunoassay, have been reported (Rautelin and Kosunen 1983; Gröhn and Genigeorgies 1985; Melby 1987; Gürtürk et al. 2002). However, the use of these tests is limited by their low sensitivity and specificity for the diagnosis of Campylobacter infections in sheep. Since the effective use of serological tests mainly depends on the specificity of the antigenic or immunogenic components of the bacteria used in the tests, the characterization and use of such antigenic components are necessary for enhancing the specificity of the serological tests. In C. fetus infection, a surface layer (S layer) protein plays an important role in invasion and survival within the host (McCoy et al. 1975; Blaser and Pei 1993). These proteins
of C. fetus represent a family of high molecular weight proteins, including proteins of 98 to 100, 127 and 149 kDa, that have been demonstrated by SDS-PAGE and Western blot analysis with rabbit immune sera (Pei et al. 1988; Grogono-Thomas et al. 2000, 2003). Immunoblot studies of C. fetus and C. jejuni with rabbit and human immune sera also showed that C. fetus subsp. fetus, C. jejuni and C. coli share antigenically cross-reactive epitopes, including flagellin with a molecular size of 50 kDa and 61-62 kDa and other major OM proteins (Wenmann et al. 1985). It has been reported that the 31 kDa acid-dissociable protein is the antigenic determinant common to the thermophilic campylobacters, but the 92.5 kDa protein of thermophilic campylobacters might be strain-specific (Logan and Trust 1983; Jin and Penner 1988; Dubreuil et al. 1990a). However, there is little information on whether these membrane antigens characterized with rabbit immune sera are also recognized by antibodies elicited after natural infection in sheep. Considering that the immune response of sheep against particular antigens after natural infection could differ from that of rabbits, the demonstration of antibodies to the Campylobacter antigens with immune sera after natural infection in the affected animal would give more appropriate information on the membrane antigens implicated in serological tests. This would make it possible not only to improve the specificity of serological tests to be used for the diagnosis of Campylobacter infection in sheep, but also to develop strategies for immune protection. Our previous study (Gürtürk et al. 2002) also showed that Campylobacter antibodies in sheep sera could be detected with a dot-immunobinding assay and a complement fixation test using crude extracts of acid-dissociable antigens from both C. fetus subsp. fetus and C. jejuni, but the tests failed to discriminate antibodies to the antigens from the two Campylobacter species. In the present study, therefore, the ovine IgG-antibody response to acid-glycine-extracted antigens from C. fetus subsp. fetus and C. jejuni strains was analysed to identify possible strain- or species-specific antigens reacting with IgG antibodies elicited during ovine abortion.

Materials and Methods

Serum samples

Three sera from different sheep aborting due to C. fetus subsp. fetus infection in a flock were used. Sera were obtained 3 or 4 weeks after abortion. C. fetus subsp. fetus was isolated from one of the aborted fetuses examined bacteriologically. Additionally, three sera were collected from apparently healthy sheep in the same flock, which were found to be negative in intestinal culture for Campylobacter. All sera from aborting sheep showed positive antibody titres of ≥ 1:20 in the complement fixation test (CFT). Sera from apparently healthy sheep had antibody titres of ≤ 1:5 in the CFT. In the CFT, the adapted micro technique (Kolmer method) using cold fixation was employed as described previously (Gürtürk et al. 2002). All sera were found to be negative for antibodies to Brucellae with the Rose Bengal plate test and Dot-ELISA, and the tests were performed as described previously (Gürtürk et al. 1997).
Bacterial strains

The following Campylobacter strains were used in this study: C. fetus subsp. fetus strain F5, isolated from an aborting sheep (whose homologous serum was also used); C. fetus subsp. fetus strain F3; C. fetus subsp. fetus strain F6; and C. jejuni strains J1 and J3, isolated from the contents of intestines or gall bladders of apparently healthy sheep. C. jejuni subsp. jejuni DSM 4688 was supplied by DSM (Braunschweig, Germany). The C. fetus subsp. fetus strains were resistant to nalidixic acid and grew at 25 °C. The C. jejuni strains were sensitive to nalidixic acid, hydrolysed Na-hippurate and grew at 43 °C but not at 25 °C. Both species could be cultured on Skirrow's selective medium in a microaerobic atmosphere. They were catalase- and oxidase-positive, Gram-negative bacteria with a typical S form. Both species were identified by further biochemical characteristics as described previously (Holt et al. 1984).

Extraction of antigen

The antigen was extracted separately from all Campylobacter strains and used in Western blot. The bacteria were cultured on Blood agar base (Oxoid No. 2) supplemented with 7% defibrinated sheep blood and Skirrow's selective supplement for 48-72 h at 37 °C and 42 °C, respectively. Acid-glycine extraction was performed after the method described by McCoy et al. (1975). The cultures were harvested into distilled water, washed twice and then suspended in 0.2 M glycine-hydrochloride, pH 2.2 (1 g of cells per 25 ml). The suspension was stirred at room temperature (RT) for 30 min, and whole cells were removed by centrifugation at 10,000 g for 15 min. The supernatant was neutralized with NaOH and lyophilised. Protein contents of the extracts were determined using a protein detection kit (Sigma, St. Louis, MO, USA).

After electrophoresis, proteins were immediately transferred from the slab gel to nitrocellulose paper (BA85; Schleicher & Schuell, Germany) by the method of Towbin et al.
(1979). Electrophoretic transfer was carried out overnight at 50-60 V with a Hoefer Transblot apparatus (Hoefer Scientific Instruments, USA) in 25 mM Tris-192 mM glycine buffer (pH 8.3) containing 20% methanol. For immunological detection, the nitrocellulose paper was incubated with 10 mM Tris-0.9% NaCl buffer (TBS, pH 7.4) containing 5% skimmed milk powder for 2 h at RT to block non-specific binding. The nitrocellulose paper was then incubated with sera diluted 1 in 100 or more in TBS containing 0.05% Tween 20 (TBS-T) for 2 h at RT. The nitrocellulose paper was washed three times with TBS-T and incubated with horseradish peroxidase-conjugated donkey anti-sheep immunoglobulin G (whole molecule; Sigma, St. Louis, MO, USA) diluted 1 in 1000 with PBS-T for 2 h at RT. After washing with PBS-T, binding was revealed by treating the nitrocellulose paper with 4-chloro-1-naphthol/hydrogen peroxide substrate in TBS.

SDS-PAGE

The protein band profile of acid-glycine extracts from C. fetus subsp. fetus and C. jejuni strains in Coomassie blue-stained polyacrylamide gel ranged in molecular weight from 22 kDa to greater than 100 kDa in this gel system (Fig. 1, Plate VIII). Protein bands with Mw of approximately 63 kDa and 54 kDa were present in each extract from both C. fetus subsp. fetus and C. jejuni strains, whereas several protein bands, including those with Mw of approximately 42.4 to 46.5, 36, 30, 26 and 22 kDa, could only be observed in the extracts from C. fetus subsp. fetus strains. The protein bands of C. fetus subsp. fetus strains at Mw of approximately 113.4, 100, 95 and 86.5 kDa appeared to be unique for each strain, but they were not observed in C. jejuni strains.

Western Blotting

Immunoblotting analyses of the sera from aborting sheep and from apparently healthy sheep with the glycine-extracted antigens from C. fetus subsp. fetus and C. jejuni strains are shown in Figs 2 and 3 (Plates VIII and IX). IgG antibodies in each serum reacted with the common protein antigens of approximately 63 kDa and 54 kDa from all strains of C. fetus subsp. fetus and C. jejuni. In addition, all sera showed a similar binding profile with the antigens from the C. fetus subsp. fetus strain isolated from aborting sheep, which was still distinctly different from that of C. jejuni DSM 4688 (Fig. 2). However, the IgG antibodies in sera from aborting sheep reacted strongly with the antigens from the infecting strain of C. fetus subsp. fetus in the Mw region from approximately 26 to 22 kDa, whereas no comparable binding profile of IgG antibodies in sera from apparently healthy sheep with these antigens was observed (Fig. 2, panel a). Even when the sera were used at a lower dilution (1:100), the same result was obtained (data not shown). Antibodies in sera from aborting sheep also recognized these low molecular weight antigens in each extract from the other strains of C. fetus subsp. fetus as well as from the homologous infecting strain of C. fetus subsp. fetus. No comparable binding of antibodies to the antigens from C. jejuni strains between these molecular weights was observed (Fig. 3, panel 1). However, the antigenic components of C. fetus subsp. fetus strains between the molecular weights of 26 and 22 kDa were not observed in the Coomassie blue-stained gel (Fig. 1, lanes a, b, c). As represented in Fig. 3, IgG antibodies in serum from aborting sheep also reacted strongly with the antigens from the other C. fetus subsp. fetus strains as well as the homologous infecting strain of C.
fetus subsp. fetus, including the proteins with molecular weights of approximately 100, 95 and 86.5 kDa, but the binding profile of the antibodies with these antigens appeared to be unique for each strain. No comparable binding of the antibodies with the antigens from the C. jejuni strains in this Mw range was observed (Fig. 3, panel 1). Antibodies in serum from apparently healthy sheep reacted weakly with the antigens of C. fetus subsp. fetus strains, except for the 63 kDa and 54 kDa proteins (Fig. 3, panel 2), but no reaction was apparent in Western blots when higher dilutions of serum were used (Plate IX, Fig. 4, panel a, 1-2). However, the binding profile of the antibodies in serum from apparently healthy sheep with the antigens from C. jejuni strains did not differ from that of aborting sheep (Fig. 3, panels 1-2) and remained indistinguishable even when a higher dilution of the serum was used (Fig. 4, panel b, 1-2).

Discussion

Studies on the immunogenicity of C. jejuni and C. fetus cellular components during human infection have been well reported. Sera from rabbits immunized with C. jejuni reacted with a number of components in outer-membrane protein preparations and differed from human sera (Nachamkin and Hart 1985). Recently, an investigation was reported on the role of the S layer protein (SLP) during C. fetus infection in sheep (Grogono-Thomas et al. 2000). Different isotypes of antibodies directed against SLPs during ovine infection were also demonstrated by enzyme immunoassay (Grogono-Thomas et al. 2003). A number of specific outer membrane antigens of C. fetus and C. jejuni have been well characterized by immunoblot with rabbit immune sera (Wenmann et al. 1985). However, data on immunoblot analysis of the antibody response of naturally infected sheep against these SLPs or other antigenic components of C. fetus are not available.

C. fetus is known to be the most common agent of ovine abortion, but the specificity of serological tests for the diagnosis of C. fetus infection is limited by the cross-reactivity of antibodies with the glycine-extracted antigens from both C. fetus and C. jejuni. Glycine-extracted antigens from C. jejuni were found to be a mixture of different proteins, including flagella antigens and acid-dissociable surface antigens. The 62-63 kDa proteins, confirmed as the flagellum, were antigenically cross-reactive among thermophilic campylobacters and C. fetus (Logan and Trust 1983; Mills et al. 1986). The 31 kDa acid-dissociable protein appeared to be an antigenic determinant common to the thermophilic campylobacters (Dubreuil et al. 1990). A 92.5 kDa protein was shown to be a strain-specific antigen in C. jejuni (Jin and Penner 1988).

In the present study, Western blot analysis of the sera from both aborting and apparently healthy sheep revealed similar binding patterns with the antigens from both C. fetus subsp. fetus (isolated from the aborting sheep) and C. jejuni, including the approximately 63 and 54 kDa proteins, which were among the major proteins in the Coomassie blue-stained gel. The 63 kDa antigens, thought to be components of flagellin, appeared to be major proteins involved in the cross-reactivity of the sera with antigens from both C. fetus and C. jejuni strains. The 54 kDa antigen might be another acid-dissociable surface protein or a breakdown product of the 63 kDa protein. Similar to our results, Wenmann et al. (1985) reported that C. fetus shares only two antigens strongly with C. jejuni and C.
coli, namely proteins with molecular weights of 62 and 50 kDa reacting with rabbit immune sera to C. jejuni in Western blots. Our Western blot studies also showed that the binding profile of antibodies in sheep sera with the glycine-extracted antigens from C. fetus subsp. fetus, except for the 63 and 54 kDa antigens, differed from that with the antigens from C. jejuni. C. fetus subsp. fetus and C. jejuni also differed in their protein band profiles in the Coomassie blue-stained gel. Therefore, antibodies in sheep sera did not appear to cross-react with other antigens of the two Campylobacter species, except for the 63 kDa and 54 kDa proteins.

Our Western blot studies also demonstrated antibodies in sheep sera against the approximately 100, 95 and 86.5 kDa proteins of C. fetus strains, but their binding profile with the antigen from each C. fetus subsp. fetus strain was found to be unique in both the Coomassie blue-stained gel and the Western blots. These antigens of C. fetus could not be detected in extracts from C. jejuni and are thought to be S layer proteins (SLPs). SLPs of C. fetus serve as important virulence factors in the pathogenesis of C. fetus infections (McCoy et al. 1975; Blaser and Pei 1993). These antigens of C. fetus, including the high molecular weight proteins of 98 to 100, 127 and 149 kDa, are present in glycine extracts from C. fetus strains, and the diversity of size and structure of the SLPs of C. fetus has been well reported (Pei et al. 1988; Fujimoto et al. 1991; Brooks et al. 2002). Grogono-Thomas et al. (2000) also reported that most C. fetus subsp. fetus isolates from natural ovine infections express the 97 kDa protein, but the 127 kDa and 149 kDa surface layer proteins are rarely seen.

In this study, antibodies in aborting sheep reacted strongly with the high molecular weight antigens of C. fetus, even when higher dilutions of serum were used. Although antibodies against the high molecular weight C. fetus antigens were also present in sera from apparently healthy sheep, they were not detectable at higher dilutions of the sera. These results indicate that the antibodies in sera from apparently healthy sheep could have been acquired during a past C. fetus infection, whereas sheep that recently aborted due to C. fetus infection have developed a substantial systemic antibody response directed against the high molecular weight antigens. Dubreuil et al. (1990b) reported that the low molecular weight acid-dissociable protein, e.g. the 31 kDa protein, appeared to be an antigenic determinant common to the thermophilic Campylobacter strains but absent in C. fetus subsp. fetus. In our study, Western blots of the sera from aborting sheep, but not from apparently healthy sheep, demonstrated a distinctly different reaction with the antigenic components of C. fetus subsp. fetus strains between the molecular weights of approximately 26 and 22 kDa. However, apart from the proteins with molecular weights of approximately 26 to 22 kDa, the antigenic components reacting in this region of the Western blots could not be observed in the Coomassie blue-stained gel and appeared to be of a non-protein nature. These antigens of C. fetus appeared not to be present in C. jejuni strains and may be species-specific. The role of these antigenic components of C. fetus during ovine abortion is not known, but they may be related to a recent infection in aborting sheep. Further studies are necessary for the purification and characterization of such antigenic components of C. fetus subsp. fetus.
In conclusion, the results of this study showed that the acid-glycine-extractable proteins of both C. fetus subsp. fetus and C. jejuni with Mw of approximately 63 kDa and 54 kDa were the dominant antigens involved in cross-reaction with IgG antibodies in sheep sera. The IgG antibody response in aborting sheep was mainly directed against the high molecular weight antigens of C. fetus subsp. fetus, whose binding profiles were unique for each strain, and was stronger than that in apparently healthy sheep. We also demonstrated antibodies, only in aborting sheep, to a group of acid-glycine-extractable antigens of C. fetus subsp. fetus with Mw ranging from approximately 26 to 22 kDa, which may be related to a recent infection in aborting sheep. Such species-specific antigens could be used to improve the specificity of serological tests to detect anti-C. fetus antibodies in sheep and may be candidates for subunit vaccines against ovine abortion due to C. fetus infection.

Fig. 1. SDS-PAGE (10% polyacrylamide) of the glycine-extracted components from different C. fetus subsp. fetus (lanes a, b, c) and C. jejuni (lanes d, e, f) strains. Lane b: infecting strain C. fetus subsp. fetus F5; lane d: C. jejuni subsp. jejuni DSM 4688. Polyacrylamide gels were stained with Coomassie brilliant blue. Lane M: molecular weight marker; molecular weights (Mw) are indicated in kDa.
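Band molecular weights such as the 63 and 54 kDa values above are read off a standard curve built from the marker lane (lane M), for which log10(Mw) is approximately linear in migration distance on SDS-PAGE. As an illustration only, with entirely hypothetical marker weights and distances (the paper does not report raw gel measurements), the routine interpolation can be sketched in Python as follows:

    import numpy as np

    # Hypothetical marker-lane data: molecular weights (kDa) and migration distances (mm).
    marker_mw = np.array([116.0, 97.0, 66.0, 45.0, 31.0, 21.5])
    marker_dist = np.array([8.0, 11.5, 18.0, 26.5, 35.0, 44.0])

    # Fit log10(Mw) as a linear function of migration distance.
    slope, intercept = np.polyfit(marker_dist, np.log10(marker_mw), 1)

    def estimate_mw(distance_mm):
        # Estimated molecular weight (kDa) of a band from its migration distance.
        return 10.0 ** (slope * distance_mm + intercept)

    for d in (20.0, 23.0):  # hypothetical distances of two sample bands
        print(f"band at {d} mm -> ~{estimate_mw(d):.0f} kDa")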
5,614
2007-01-01T00:00:00.000
[ "Medicine", "Biology" ]
A Divide and Conquer Approach to Eventual Model Checking: The paper proposes a new technique to mitigate the state explosion problem in model checking. The technique is called a divide and conquer approach to eventual model checking. As indicated by the name, the technique is dedicated to eventual properties. The technique divides an original eventual model checking problem into multiple smaller model checking problems and tackles each smaller one. We prove a theorem stating that the multiple smaller model checking problems are equivalent to the original eventual model checking problem. We conducted a case study that demonstrates the power of the proposed technique.

Introduction

Model checking is an attractive and promising formal verification technique because model checking experiments can be conducted automatically once good, concise formal models are made. It has also been used in industry, especially the hardware industry. There are still some challenges to tackle in model checking, among which state explosion is the most annoying one. Many techniques to mitigate state explosion have been devised, such as symbolic model checking [1] and SAT-based bounded model checking (BMC) [2], where SAT stands for the Boolean satisfiability problem. As the existing techniques are not enough to deal with state explosion, it is still worth tackling the issue.

Moe Nandi Aung et al. [3] tried to check that an autonomous vehicle intersection control protocol [4] with 13 vehicles enjoyed some desired properties and encountered the notorious state space explosion, making it impossible to conduct the model checking experiments. Note that it was possible to conduct the model checking experiments for a case with five vehicles. One property is the starvation freedom property, which can be expressed as an eventual property: an informal description of starvation freedom is that every vehicle will eventually pass the intersection concerned. This case motivated us to come up with the technique proposed in the present paper.

The present paper proposes a divide and conquer approach to eventual model checking. The technique splits the reachable state space from each initial state into L + 1 layers, where L ≥ 1, generating multiple smaller sub-state spaces, dividing the original eventual model checking problem into multiple smaller model checking problems and tackling each smaller one. As the name indicates, the technique proposed in the present paper is dedicated to eventual properties. Many important software requirements can be expressed as eventual properties. For example, halting is one important requirement many programs should enjoy, and halting can be expressed as an eventual property. We prove a theorem stating that the multiple smaller model checking problems are equivalent to the original eventual model checking problem. We conducted a case study that demonstrates the power of the proposed technique. Maude [5] was used as the formal specification language and the Maude LTL (linear temporal logic) model checker was used as the model checker.
The model checking algorithm adopted by the Maude LTL model checker is the same as the one used by SPIN [6], which is one of the most popular model checkers for software systems. It has been reported that the Maude LTL model checker is comparable with SPIN with respect to model checking running performance. This implies that whenever the Maude LTL model checker encounters the state space explosion problem, making it impossible to conduct model checking experiments, SPIN does so as well, and so do most existing model checkers. The proposed technique aims at mitigating the state space explosion problem, and we demonstrate through a case study that it can do so. We are allowed to use Maude as a formal specification language for systems under model checking. Maude is extremely expressive because it is a direct descendant of the OBJ language family, including OBJ3 [7] and CafeOBJ [8]. Inductively defined data structures, associative and/or commutative binary operators, etc., can be used in specifications of systems under model checking with the Maude LTL model checker, whereas they cannot be used with most existing model checkers, such as SPIN and NuSMV [9]. This is mainly why we used the Maude LTL model checker. Those who are more interested in the flavor of the Maude LTL model checker are recommended to see the paper [10], in which the Maude LTL model checker is intensively compared with the Symbolic Analysis Laboratory (SAL) [11], a collection of model checkers.

The remaining part of the paper is organized as follows. Section 2 explains some preliminaries, such as Kripke structures and LTL. Section 3 uses a simple example to outline the proposed technique. Section 4 describes the theoretical part of the proposed technique. Section 5 describes the proposed technique. Section 6 reports on a case study. Section 7 mentions some existing related work. Section 8 concludes the paper and suggests some future directions.

Preliminaries

This section describes some preliminaries needed to read the technical contents of the paper. We give the definitions of Kripke structures, the syntax of LTL formulas and the semantics of LTL formulas. We need infinite sequences of states (called paths of a Kripke structure) to define the semantics of LTL formulas. We introduce several notations or symbols for paths, sets of paths and satisfaction relations, where satisfaction relations are the essence of the semantics of LTL formulas. We have prepared tables for those notations or symbols. We use the symbol ⇔ for "if and only if" or "be defined as."

Definition 1 (Kripke structures). A Kripke structure K = ⟨S, I, T, A, L⟩ consists of a set S of states, a set I ⊆ S of initial states, a left-total binary relation T ⊆ S × S over states, a set A of atomic propositions and a labeling function L whose type is S → 2^A. An element (s, s′) ∈ T is called a (state) transition from s to s′ and may be written as s →_K s′. S does not need to be finite. The set R of reachable states is inductively defined as follows: I ⊆ R, and if s ∈ R and (s, s′) ∈ T, then s′ ∈ R. We suppose that R is finite. The subscript K in s →_K s′ may be omitted if it is clear from the context.

An infinite sequence of states is a sequence that consists of infinitely many states, where infinitely many copies of some states may occur. Let s_0, s_1, ..., s_i, s_{i+1}, ...
be an infinite sequence of states, where s_0 is the top element (called the 0th element), s_1 is the next element (called the 1st element) and s_i is the ith element. As we suppose that R is finite, if s_0 ∈ R, then s_0, s_1, ..., s_i, s_{i+1}, ... consists of only a bounded number of different states, although infinitely many copies of some states occur. As usual, ∞ is used to denote infinity. An infinite sequence s_0, s_1, ..., s_i, s_{i+1}, ... of states is called a path of K if and only if for any natural number i, (s_i, s_{i+1}) ∈ T. Let π be s_0, s_1, ..., s_i, s_{i+1}, ...; some notations, summarized in Table 1, are defined as follows, where i and j are any natural numbers: π^i is the postfix s_i, s_{i+1}, ... of π; π_i is the prefix s_0, s_1, ..., s_i extended by repeating s_i infinitely many times; and π(i, j) starts at the ith state and is cut off, with stuttering, at the jth. Note that π(0, j) = π_j and π(i, ∞) = π^i. We write π(i) for the ith state s_i; for a ∈ A, a ∈ L(π(i)) says that the atomic proposition a holds in the ith state of π.

Definition 2 (Syntax of LTL). The syntax of LTL formulas over A is φ ::= ⊤ | a | ¬φ | φ ∧ φ | ◯φ | φ U φ, where a ∈ A.

Definition 3 (Semantics of LTL). For any Kripke structure K, any path π of K and any LTL formula φ, K, π |= φ is inductively defined as follows:

K, π |= ⊤;
K, π |= a ⇔ a ∈ L(π(0)), where a ∈ A;
K, π |= ¬φ ⇔ K, π does not satisfy φ;
K, π |= φ1 ∧ φ2 ⇔ K, π |= φ1 and K, π |= φ2;
K, π |= ◯φ ⇔ K, π^1 |= φ;
K, π |= φ1 U φ2 ⇔ there exists a natural number i such that K, π^i |= φ2 and for all j < i, K, π^j |= φ1.

⊥ ⇔ ¬⊤ and some other connectives are defined as usual, e.g. φ1 ∨ φ2 ⇔ ¬(¬φ1 ∧ ¬φ2), together with ♦φ ⇔ ⊤ U φ, □φ ⇔ ¬♦¬φ and φ1 ↝ φ2 ⇔ □(φ1 → ♦φ2). ◯, U, ♦, □ and ↝ are called the next, until, eventually, always and leads-to temporal connectives, respectively. Although it is unnecessary to directly define the semantics of ♦, □ and ↝, it can be given as follows:

K, π |= ♦φ ⇔ there exists a natural number i such that K, π^i |= φ;
K, π |= □φ ⇔ for all natural numbers i, K, π^i |= φ;
K, π |= φ1 ↝ φ2 ⇔ for all natural numbers i, if K, π^i |= φ1, then there exists j ≥ i such that K, π^j |= φ2.

We summarize some notations or symbols used in the paper in three tables: Tables 1-3. Table 1 describes notations or symbols for paths, Table 2 for sets of paths and Table 3 for satisfaction relations.

Table 1. Descriptions of path notations (or symbols), where i and j are any natural numbers.
π^i : s_i, s_{i+1}, ..., the postfix obtained by deleting the first i states s_0, s_1, ..., s_{i-1} from π.
π_i : s_0, s_1, ..., s_i, s_i, ..., constructed by first extracting the prefix s_0, s_1, ..., s_i, the first i + 1 states of π, and then adding s_i, the final state of the prefix, at the end infinitely many times.
π^∞ : s_0, s_1, ..., s_i, s_{i+1}, ..., the same as π.
π(i, j) : if i ≤ j, then s_i, ..., s_j, s_j, ..., the same as (π^i)_{j-i}; otherwise, s_i, s_i, ..., the infinite sequence in which only s_i occurs infinitely many times.
π(i, ∞) : s_i, s_{i+1}, ..., the same as π^i.
π^i_j : the same as π(i, j).

Table 2. Descriptions of path-set notations (or symbols), where b is a natural number.
P_K : the set of all paths of K.
P_(K,s) : the set of all paths π of K such that π(0), the 0th state of the path π, is s.
P^b_(K,s) : the set {π_b | π ∈ P_(K,s)} of the paths of P_(K,s) bounded at depth b.
P^∞_(K,s) : the same as P_(K,s).

Table 3. Descriptions of satisfaction-relation notations (or symbols).
K, π |= φ : an LTL formula φ holds for a path π of K.
K |= φ : an LTL formula φ holds for all computations of K.
K, s |= φ : an LTL formula φ holds for all paths in P_(K,s).
K, s, b |= φ : an LTL formula φ holds for all paths in P^b_(K,s).

Outline of the Proposed Technique

Let us outline the proposed technique with a simple system (or Kripke structure) called SimpSys, as depicted in Figure 1, so that the technique can be intuitively comprehended. SimpSys has four states s_0, s_1, s_2 and s_3, where s_0 is the only initial state. There are seven transitions, depicted as arrows in Figure 1. Let us consider three atomic propositions init, middle and final. The labeling function is defined as depicted in Figure 1; for example, middle holds in s_1 and s_2 and does not in s_0 and s_3. Let us take ♦ final as the property concerned. We can straightforwardly check that SimpSys satisfies ♦ final, namely SimpSys |= ♦ final, and then do not need to use the proposed technique for this model checking experiment. We, however, use this simple model checking experiment to sketch the technique. The left part of Figure 2 shows the computation tree made from the reachable states such that its root is the initial state s_0. Let us split the computation tree into two layers such that the first layer depth is 1. Note that it is unnecessary to specify the second (or final) layer depth. The first layer has one sub-state space whose initial state is s_0, as shown in the right part of Figure 2. The second layer has three sub-state spaces whose initial states are s_1, s_2 and s_3, respectively. We first conduct the model checking experiment that ♦ final holds for the sub-state space in the first layer. There are two counterexamples: (1) s_0, s_1, s_1, ... and (2) s_0, s_2, s_2, ..., where s_1 and s_2 are called counterexample states. As ♦ final holds for s_0, s_3, s_3, ..., we do not need to conduct the model checking experiment that ♦ final holds for the sub-state space whose initial state is s_3 in the second layer. It suffices to conduct the model checking experiments that ♦ final holds for the two sub-state spaces whose initial states are s_1 and s_2, respectively. There are no counterexamples for the two model checking experiments, and then we can conclude that SimpSys satisfies ♦ final. This is how the proposed technique works. For this simple example, the number of different states in each sub-state space is the same as or almost the same as the number of different states in the original state space. If the number of states in each sub-state space is much smaller than the number of states in the original state space, then even though it is impossible to conduct a model checking experiment for the original reachable state space because of the state space explosion, it may be possible to conduct the model checking experiment for each sub-state space. This is how the proposed technique mitigates the state space explosion problem.
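As a companion to this outline, here is a minimal Python sketch of the layer-1 step on a SimpSys-like structure. Figure 1 is not reproduced in this text, so the exact arrow set below is an assumption chosen to reproduce the behaviour described above; only the mechanics of collecting counterexample states matter.

    # States 0..3 stand for s0..s3; the arrow set is assumed (Figure 1 is not shown here).
    TRANSITIONS = {0: {1, 2, 3}, 1: {3}, 2: {3}, 3: {3}}
    LABELS = {0: {"init"}, 1: {"middle"}, 2: {"middle"}, 3: {"final"}}

    def bounded_paths(s, depth):
        # All prefixes of length `depth` from s; stuttering the last state,
        # as in pi_b, adds no new states, so the prefix suffices for checking <>.
        if depth == 0:
            return [[s]]
        return [[s] + rest for t in TRANSITIONS[s] for rest in bounded_paths(t, depth - 1)]

    def eventually_final(path):
        return any("final" in LABELS[s] for s in path)

    # Layer 1 (depth 1): endpoints of failing bounded paths are the
    # counterexample states that seed the layer-2 sub-state spaces.
    frontier = {path[-1] for path in bounded_paths(0, 1) if not eventually_final(path)}
    print("counterexample states after layer 1:", frontier)   # here: {1, 2}

Running the unbounded check from states 1 and 2 then finds no counterexample, mirroring the conclusion that SimpSys |= ♦ final.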
Multiple Layer Division of Eventual Model Checking

This section describes the theoretical contribution of the paper. An overview of the proposed technique is as follows: an eventual model checking problem is divided into multiple smaller model checking problems, and each smaller model checking problem is tackled so as to tackle the original eventual model checking problem. We need to guarantee that tackling each smaller model checking problem is equivalent to tackling the original eventual model checking problem, and we prove a theorem for it.

We first tackle the case in which L is 1.

Lemma 1 (Two-layer division of ♦). Let φ be any state proposition of K. For any natural number k, (K, π |= ♦φ) ⇔ (K, π_k |= ♦φ) ∨ (K, π^k |= ♦φ): φ eventually holds on π if and only if it eventually holds on the depth-k bounded prefix π_k or on the postfix π^k, because any witness position of ♦φ on π lies either at depth at most k or beyond it.

The general case is proved by induction on the number of layers: taking the claim for l layers as the induction hypothesis and applying Lemma 1 once more, Eventually_{l+1}(K, π, φ, 0) of Definition 5 is equivalent to K, π |= ♦φ. Theorem 1 makes it possible to divide the original model checking problem into multiple smaller model checking problems, one per sub-state space.

A Divide and Conquer Approach to an Eventual Model Checking Algorithm

This section describes an algorithm that carries out the proposed technique. The algorithm takes as inputs a Kripke structure K, a state proposition φ, a non-zero natural number L and a function d such that d(x) is a natural number for x = 1, ..., L, where d(x) is the depth of layer x; it returns success if K |= ♦φ holds and failure otherwise.

An algorithm can be constructed based on Theorem 1, which is shown as Algorithm 1. For each initial state s_0 ∈ I, unfolding s_0 by using T such that each node except for s_0 has exactly one incoming edge, an infinite tree whose root is s_0 is made. The infinite tree may have multiple copies of some states. Such an infinite tree can be divided into L + 1 layers, as shown in Figure 3, where L is a non-zero natural number. Although there does not actually exist a layer 0, it is convenient to suppose that there is virtually a layer 0 and that s_0 is located at the bottom of layer 0. Let n_l be the number of states located at the bottom of layer l = 0, 1, ..., L; then there are n_l sub-state spaces in layer l + 1. In this way, the reachable state space from s_0 is divided into multiple smaller sub-state spaces. As R is finite, the number of different states in each layer and in each sub-state space is finite. Theorem 1 makes it possible to check K |= ♦φ in a stratified way, in that for each layer l ∈ {1, ..., L + 1} we can check K, s, d(l) |= ♦φ for each s ∈ {π(d(l - 1)) | π ∈ P^{d(l-1)}_(K,s_0)}, where d(0) is 0, d(x) is a non-zero natural number for x = 1, ..., L and d(L + 1) is ∞.

ES and ES′ are variables to which sets of states are set. The lth iteration of the outermost loop in Algorithm 1 conducts the model checking experiments in layer l = 1, ..., L + 1; at its beginning, ES is the set of states located at the bottom of layer l - 1 and ES′ is the empty set. If ♦φ does not hold for some π ∈ P^{d(l)}_(K,s) with s ∈ ES, then π(d(l)) is added to ES′. ES is set to ES′ at the end of each iteration. If ES is empty at the beginning of an iteration, Success is returned, meaning that K |= ♦φ holds. After the outermost loop, we check whether ES is empty. If so, Success is returned; otherwise, Failure is returned.
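To make the control flow of Algorithm 1 concrete, the following is a minimal Python sketch of the layered strategy on an explicit successor map. It is not the authors' Maude-based procedure: the final, unbounded layer is checked here by a direct search for a φ-violating lasso (possible because R is finite and T is left-total), whereas the paper delegates each sub-problem to the Maude LTL model checker.

    from typing import Callable, Dict, List, Set

    def check_eventually_layered(init: int, succ: Dict[int, Set[int]],
                                 phi: Callable[[int], bool], depths: List[int]) -> bool:
        # depths holds d(1), ..., d(L); the implicit final layer is unbounded.
        frontier = {init}
        for d in depths:                                   # bounded layers 1 .. L
            nxt = set()
            for s in frontier:
                nxt |= failing_endpoints(s, succ, phi, d)  # counterexample states
            if not nxt:
                return True                                # every path already satisfied <>phi
            frontier = nxt
        return all(holds_unbounded(s, succ, phi) for s in frontier)   # layer L + 1

    def failing_endpoints(s, succ, phi, depth):
        # Endpoints pi(d) of depth-bounded paths from s on which phi never held.
        if phi(s):
            return set()
        if depth == 0:
            return {s}
        out = set()
        for t in succ[s]:
            out |= failing_endpoints(t, succ, phi, depth - 1)
        return out

    def holds_unbounded(s, succ, phi):
        # K, s |= <>phi iff no cycle of phi-violating states is reachable from s
        # through phi-violating states. DFS with memoization; "grey" marks the stack.
        status = {}
        def dfs(u):
            if phi(u):
                return True
            st = status.get(u)
            if st == "grey":
                return False                               # lasso never reaching phi
            if st is not None:
                return st
            status[u] = "grey"
            ok = all(dfs(v) for v in succ[u])
            status[u] = ok
            return ok
        return dfs(s)

    # Demo on the SimpSys-like graph of the outline (arrows assumed):
    succ = {0: {1, 2, 3}, 1: {3}, 2: {3}, 3: {3}}
    print(check_eventually_layered(0, succ, lambda s: s == 3, depths=[1]))   # True

A counterexample trace could be recovered by additionally recording, for each state added to a frontier, the predecessor pair discussed next.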
Although Algorithm 1 does not construct a counterexample when failure is returned, one could be constructed. For each l ∈ {0, 1, ..., L}, a set ES_l is prepared. As elements of ES_l, pairs (s, s′) are used, where s is a state in S or a dummy state, denoted δ-stt, that is different from any state in S, s′ is a state in S, and s′ is reachable from s if s ∈ S. The assignments at lines 6, 10 and 14 of Algorithm 1 are then revised so that such predecessor-successor pairs, rather than bare states, are recorded; a counterexample can be reconstructed by chaining these pairs backwards from a failing state.

A Case Study

Many systems' requirements can be expressed as eventual properties. Termination or halting is one important requirement that many programs should satisfy, and it can be expressed as an eventual property. The starvation freedom property that should be satisfied by systems such as an autonomous vehicle intersection control protocol [4] can be expressed as an eventual property. Some communication protocols, such as the Alternating Bit Protocol (ABP) and the sliding window protocol used in the Transmission Control Protocol (TCP), guarantee that all data sent by a sender are delivered to a receiver without any data loss or duplication; this requirement can be expressed as an eventual property.

We use a mutual exclusion protocol as the example in the case study. The requirement we take into account is that the protocol guarantees that a process can enter the critical section, do some tasks there, leave the section and reach a final position (or terminate). The requirement can be expressed as an eventual property. The mutual exclusion protocol is called Qlock, an abstract version of the Dijkstra binary semaphore in that an atomic queue of process IDs is used.

In the rest of the section, we first describe Qlock, how to formally specify Qlock and the property concerned in Maude, and how to model check the eventual property with the proposed technique. Let us note that when 10 processes participate in Qlock, it is impossible to complete the model checking experiment with the Maude LTL model checker, while it is possible to do so with the proposed technique. We finally summarize the case study.

Qlock

We report on a case study that demonstrates the power of the proposed technique. The case study used a mutual exclusion protocol called Qlock, whose pseudo-code for each process p can be described as follows:

    ss : enq(queue, p);                  "Start Section"
    ws : repeat until top(queue) = p;    "Waiting Section"
    cs : deq(queue);                     "Critical Section"
    fs : ...                             "Finish Section"

where queue is an atomic queue of process IDs shared by all processes participating in Qlock. enq(queue, p) atomically puts a process ID p into queue at the bottom, top(queue) atomically returns the top element of queue, and deq(queue) atomically deletes the top element of queue; if queue is empty, deq(queue) does nothing. queue is initially empty. Each process p is supposed to be located at one of the four locations ss (start section), ws (waiting section), cs (critical section) and fs (finish section), and is initially located at ss. Let us suppose that each process p stays at fs once it gets there, implying that it enters the critical section at most once. The property to be checked in this case study is that a process will eventually get to fs; it can be formalized as an eventual property. When there were 10 processes, the model checking experiment did not complete with the Maude LTL model checker running on a computer that carried a 2.10 GHz microprocessor and 8 GB of main memory because of the state space explosion.
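Before looking at the Maude specification, here is a small Python rendering of Qlock's transition system under the same assumptions (each process enters the critical section at most once, and a self-loop keeps the relation total once every process is at fs). The state encoding is an assumption of this sketch, not the observable-component representation used in the Maude model.

    from collections import deque

    def qlock_successors(state):
        # A state is (queue, pcs, cnt): the shared queue of process ids, the tuple
        # of locations ('ss', 'ws', 'cs', 'fs') and the number of processes not yet at fs.
        queue, pcs, cnt = state
        succs = []
        for p, loc in enumerate(pcs):
            if loc == 'ss':                                  # start: enqueue, move to ws
                succs.append((queue + (p,), _at(pcs, p, 'ws'), cnt))
            elif loc == 'ws' and queue and queue[0] == p:    # wait: enter cs at queue top
                succs.append((queue, _at(pcs, p, 'cs'), cnt))
            elif loc == 'cs':                                # exit: dequeue, go to fs
                succs.append((queue[1:], _at(pcs, p, 'fs'), cnt - 1))
        if cnt == 0:
            succs.append(state)                              # fin: self-loop keeps T left-total
        return succs

    def _at(pcs, p, loc):
        return pcs[:p] + (loc,) + pcs[p + 1:]

    def reachable(n):
        init = ((), ('ss',) * n, n)
        seen, todo = {init}, deque([init])
        while todo:
            for t in qlock_successors(todo.popleft()):
                if t not in seen:
                    seen.add(t)
                    todo.append(t)
        return seen

    for n in range(1, 6):
        print(n, "processes:", len(reachable(n)), "reachable states")

The blow-up of the reachable set with n is what makes the 10-process instance intractable for a direct check and motivates the layered strategy described below.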
Formal Specification

We describe how to formally specify Qlock in Maude. A state is expressed as a braced soup of observable components, where observable components are name-value pairs and soups are associative-commutative collections. When there are n processes, the initial state of Qlock is as follows:

{(queue: empq) (pc[p1]: ss) ... (pc[pn]: ss) (cnt: n)}

where (queue: empq) is an observable component saying that the shared queue is empty, (pc[pi]: ss) is an observable component saying that process pi is in ss, and (cnt: n) is an observable component whose value is a natural number n; the role of (cnt: n) will be described later.

Transitions are described in terms of rewrite rules. The transitions of Qlock are specified by four rewrite rules, labeled start, wait, exit and fin, where Q is a variable of queues, I is a variable of process IDs, OCs is a variable of observable component soups and N is a variable of natural numbers. I | Q denotes a non-empty queue such that I is the top and Q is the remaining part of the queue. deq(Q) returns the empty queue if Q is empty and what is obtained by deleting the top from Q otherwise. dec(N) returns 0 if N is 0 and the predecessor of N otherwise. Rule start says that if process I is in ss, then it puts its ID into Q at the end and moves to ws. Rule wait says that if process I is in ws and the top of the shared queue is I, then I enters cs. Rule exit says that if process I is in cs, then it deletes the top from the shared queue, decrements the natural number N stored in (cnt: N) and moves to fs. Rule fin says that if the natural number N stored in (cnt: N) is 0, a self-transition s →_K s occurs. Rule fin is used to make the transition relation total. The natural number N stored in (cnt: N) is the number of processes that have not yet reached fs; its use, together with rule fin, makes it unnecessary to use any fairness assumptions to model check an eventual property.

Let us consider one atomic proposition inFs1. inFs1 holds in a state if and only if the state matches {(pc[p1]: fs) OCs}, namely, process p1 is in fs.

Model Checking with the Proposed Technique

Model checking ♦ inFs1 for Qlock completes quickly when there are five processes, finding no counterexample. It is, however, impossible to model check the same property for Qlock when there are 10 processes. We then use Algorithm 1 to tackle the latter case, where L = 1 and d(1) = 3. We use one more observable component whose value D records the current depth from the initial state, where D is a variable of natural numbers and Bound is 3. Rule stutter has been added to make each state at depth three have a transition to itself. The revised version of rule start says that if D is less than Bound and process I is in ss, then I puts its ID into Q at the end, moves to ws and D is incremented; the other revised rules can be interpreted likewise. When we model checked ♦ inFs1 for the revised specification of Qlock, we found a counterexample that is a finite state sequence starting from the initial state and leading to a state loop consisting of a single state at depth three. We needed to find all counterexamples, and we therefore revised the definition of inFs1 so that inFs1 holds in that state as well. When we model checked the same property for the revised specification, we found another counterexample. This process was repeated until no more counterexamples were found. In total, we found 819 counterexamples and 819 counterexample states at depth three.
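The find-one-counterexample-then-revise-inFs1 loop described above was carried out against the Maude LTL model checker; the following Python sketch automates the same accumulation on an explicit graph. The model checker is stood in for by a depth-first search that reports the first failing endpoint, and the four-state demo graph is assumed.

    def collect_counterexample_states(init, succ, phi, depth):
        # Repeatedly "model check" the depth-bounded system; each counterexample
        # state found is added to the accepted set (the revised inFs1) and the
        # check is re-run, until no counterexample remains.
        accepted = set()
        while True:
            cex = first_failing_endpoint(init, succ,
                                         lambda s: phi(s) or s in accepted, depth)
            if cex is None:
                return accepted          # all counterexample states at `depth`
            accepted.add(cex)

    def first_failing_endpoint(s, succ, phi, depth):
        # First endpoint of a depth-bounded path on which phi never held,
        # in the spirit of a model checker reporting one counterexample per run.
        if phi(s):
            return None
        if depth == 0:
            return s
        for t in succ(s):
            c = first_failing_endpoint(t, succ, phi, depth - 1)
            if c is not None:
                return c
        return None

    # Demo on an assumed four-state graph (0 -> {1, 2, 3}; state 3 satisfies phi):
    graph = {0: [1, 2, 3], 1: [3], 2: [3], 3: [3]}
    states = collect_counterexample_states(0, lambda s: graph[s], lambda s: s == 3, depth=1)
    print(states)   # {1, 2}: the states that seed the second-layer experiments

In the case study this loop ran 820 times at depth three, yielding the 819 counterexample states from which the second-layer experiments were launched.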
We gathered all states at depth three from the initial state, which totaled 820 states, including the 819 states found in the last step. There was one state at depth three in which process p1 was located at fs. For each of the 819 states as an initial state, we model checked ♦ inFs1 for the original specification of Qlock, finding no counterexample. Therefore, we can conclude that the model checking of ♦ inFs1 for Qlock with 10 processes completed, finding no counterexample. It took about 44 h to conduct the model checking experiments for the second layer and less than 200 ms to conduct each model checking experiment for the first layer. As there were 819 counterexamples for ♦ inFs1 in the first layer, we needed to conduct 820 model checking experiments for the first layer.

Summary of the Case Study

The proposed divide and conquer approach to eventual model checking makes it possible to successfully conduct the model checking experiment ♦ inFs1 for Qlock when there are 10 processes and each process enters the critical section at most once, which cannot otherwise be tackled by the computer used in the case study. The Maude specifications used in the case study are available at the webpage (http://www.jaist.ac.jp/~ogata/code/dca2emc/).

Related Work

The state space explosion problem is one of the biggest challenges in model checking, and many techniques to mitigate it have been proposed so far. Among them are partial order reduction [12], symmetry reduction [13], abstraction [14-16], abstract logical model checking [17] and SAT-based bounded model checking (BMC) [2]. The proposed divide and conquer approach to eventual model checking is a new technique to mitigate the problem when model checking eventual properties. The second, third and fourth authors of the present paper proposed an (L + 1)-layer divide and conquer approach to leads-to model checking [18]. The technique proposed in the present paper can be regarded as an extension of the one described in that paper [18] to eventual properties.

Clarke et al. summarized several techniques that address the state space explosion problem in model checking [19]. One of them is SAT-based BMC, which is used in industry, especially the hardware industry. BMC can find a flaw located within some reasonably shallow depth k from each initial state but cannot prove that systems whose (reachable) state space is enormous (including infinite-state systems) enjoy desired properties. Some extensions have been made to SAT-based BMC so that such proofs become possible. One extension is k-induction [20,21], a combination of mathematical induction and SAT/SMT-based BMC, where SMT stands for satisfiability modulo theories. The bounded state space from each initial state up to depth k is tackled with BMC, which is regarded as the base case. For each state sequence s_0, s_1, ..., s_k, where s_0 is an arbitrary state, such that the property concerned is not broken in any state s_i for i = 0, 1, ..., k, it is checked that the property is not broken in any successor state s_{k+1} of s_k, which is done with an SAT/SMT solver and regarded as the induction case. If an SMT solver is used, infinite-state systems, for example those in which integers are used, can be handled. Our proposed technique can be regarded as another extension of BMC, although we do not use any SAT/SMT solvers.
SAT/SMT-based BMC has been extended to model check concurrent programs [22]. Given a concurrent (or multithreaded) program P together with two parameters u and r, which are the loop unwinding bound and the number of round-robin schedules, respectively, an intermediate bounded program P_u is first generated by unwinding all loops and inlining all function calls in P with u as a bound, except for those used for creating threads; P_u is then transformed into a sequential program Q_u,r that simulates all behaviors of P_u within r round-robin schedules. Q_u,r is then transformed into a propositional formula, which is converted into an equisatisfiable CNF formula that can be analyzed by a SAT/SMT solver. This way to model check multithreaded programs can be parallelized by decomposing the set of execution traces of a concurrent program into symbolic subsets and analyzing these subsets in parallel [23]. Instead of generating a single formula from P via Q_u,r, multiple propositional sub-formulas are generated. Each sub-formula corresponds to a different symbolic partition of the execution traces of P and can be checked for satisfiability independently of the others. The approaches to BMC of multithreaded programs seem able to deal with safety properties only, while our tool is able to deal with leads-to properties, a class of liveness properties. Another difference is that the target of our approach is designs of concurrent/distributed systems, while theirs is concurrent programs.

Barnat et al. [24] surveyed some recent advancements of parallel model checking algorithms for LTL. Graph search algorithms need to be redesigned to make the best use of multi-core and/or multi-processor architectures. Parallel model checkers based on such parallel model checking algorithms have been developed, among which are DiVinE 3.0 [25], Garakabu2 [26,27] and a multi-core extension of SPIN [28]. In the technique proposed in the present paper, there are generally multiple sub-state spaces in each layer, and the model checking experiments for these sub-state spaces are totally independent of each other. Furthermore, model checking experiments for many sub-state spaces in different layers are independent. It is possible to conduct such model checking experiments in parallel. Therefore, it is possible to parallelize Algorithm 1, which never requires us to redesign any graph search algorithms and makes it possible to use any existing LTL model checker, such as the Maude LTL model checker.
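Since the per-state experiments in one layer are independent, they could be farmed out, for example, with Python's multiprocessing module. A minimal sketch, assuming a picklable run_check function (hypothetical; it would launch one Maude LTL model checking run for a given initial state and return its counterexample states):

from multiprocessing import Pool

def check_layer_parallel(run_check, states, workers=4):
    # One independent model checking run per state; the runs share nothing,
    # so a process pool parallelizes them without changing any search algorithm.
    with Pool(processes=workers) as pool:
        per_state = pool.map(run_check, states)
    return {s for ces in per_state for s in ces}   # flatten counterexample states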
To tackle a large system that cannot be handled in exhaustive verification mode, SPIN has a bit-state verification mode that may not exhaustively search the entire reachable state space of a large system but can achieve higher coverage of large state spaces by using a few bits of memory per state stored. The larger a system under verification becomes, the higher the chances that the SPIN bit-state verification mode overlooks flaws lurking in the system. To overcome such situations, swarm verification [29] has been proposed. The key ideas of swarm verification are parallelism and search diversity. For each of multiple different search strategies, one instance of bit-state verification is conducted. These instances are totally independent and can be conducted in parallel. Different search strategies traverse different portions of the entire reachable state space, making it more likely to achieve higher coverage of the entire reachable state space and to find flaws lurking in a large system, if any. An implementation of swarm verification on GPUs, called Grapple [30], has also been developed. Although the technique proposed in the present paper splits the reachable state space from each initial state into multiple layers, generating multiple sub-state spaces, it exhaustively searches each sub-state space with the Maude LTL model checker. It may be worth adopting the swarm verification idea into our technique, such that swarm verification is conducted for each sub-state space instead of an exhaustive search, which may make it possible to quickly find a flaw lurking in a large system.

One hot theme in research on methods to formally verify liveness properties, including program termination, is liveness-to-safety reductions. Biere et al. [31] have proposed a technique that formally verifies that finite-state systems satisfy liveness properties by showing the absence of fair cycles in every execution, and coined the term "liveness-to-safety reduction" to refer to the technique. The technique can be extended to what are called "parameterized systems", in which the state space is infinite but actually finite for every system instance [32]. Padon et al. [33] have further extended liveness-to-safety reduction to systems in which processes can be dynamically created and each process state space is infinite, so that they can formally verify that such systems enjoy liveness properties under fairness assumptions. Their technique basically reduces an infinite-state-system liveness formal verification problem under fairness to an infinite-state-system safety formal verification problem that can be expressed in first-order logic. The latter problem can be solved by existing tools, such as IC3 [34,35] and the first-order theorem prover VAMPIRE [36]. The technique proposed in the present paper does not take fairness assumptions into account. We need to use fairness assumptions to model check liveness properties, including eventual ones, from time to time. We might adopt the idea used in Padon et al.'s liveness-to-safety reduction technique. To our knowledge, the liveness-to-safety reduction technique has not been parallelized. Our approach to eventual model checking might make it possible to parallelize the liveness-to-safety reduction technique.
Conclusions

We have proposed a new technique to mitigate the state space explosion in model checking. The technique is dedicated to eventual properties. It divides an eventual model checking problem into multiple smaller model checking problems and tackles each smaller one. We have proved that the multiple smaller model checking problems are equivalent to the original eventual model checking problem. We have reported on a case study demonstrating the power of the proposed technique.

There are several things left to do as our future research. One piece of future work will be to develop a tool supporting the proposed technique. We will use Maude as the implementation language, with its reflective programming (meta-programming) facilities, to develop a tool that performs all necessary modifications to system specifications (or system models), so that human users do not need to change their specifications to use the divide and conquer approach to eventual properties. It was impossible to conduct with the Maude LTL model checker the model checking experiment verifying that the autonomous vehicle intersection control protocol [4] enjoys the starvation freedom property when there are 13 vehicles; the starvation freedom property can be expressed as an eventual property. Another piece of future work will be to complete that model checking experiment with the tool supporting the proposed technique. To complete the experiment, we may need to make the best use of up-to-date multi-core/processor architectures. To this end, we need to parallelize Algorithm 1 and the tool supporting the proposed technique. Therefore, yet another piece of future work will be to evolve the tool into a parallel version that can make the best use of up-to-date multi-core/processor architectures.

Figure 2. Two-layer division of the SimpSys reachable state space. (Lemma 1 makes it possible to divide the original model checking problem K, π |= ♦ϕ into two smaller model checking problems, one over the bounded prefix of π up to depth k and one over the suffix π^k; we only need to tackle the suffix problem unless the prefix problem already holds.)

Definitions (excerpted from the displayed material): A path π of K is called a computation of K if and only if π(0) ∈ I. P_K denotes the set of all paths of K; P_(K,s) = {π | π ∈ P_K, π(0) = s} for s ∈ S; P^b_(K,s) = {π^b | π ∈ P_(K,s)} for s ∈ S and a natural number b. Note that P^∞_(K,s) is P_(K,s). If R is finite and s ∈ R, then P_(K,s) is finite and so is P^b_(K,s). K, π |= ϕ1 U ϕ2 if and only if there exists a natural number i such that K, π^i |= ϕ2 and, for each natural number j < i, K, π^j |= ϕ1, where ϕ1 and ϕ2 are LTL formulas. K |= ϕ if and only if K, π |= ϕ for all computations π of K.

Table 1. Descriptions of path notations (or symbols), where i and j are natural numbers.

Table 3. Descriptions of satisfaction relation |= notations (or symbols), where b is a natural number.

We could then construct a counterexample, when Failure is returned, by searching through ES_L, ..., ES_1 and ES_0.
BLAST-QC: automated analysis of BLAST results

Background

The Basic Local Alignment Search Tool (BLAST) from NCBI is the preferred utility for sequence alignment and identification for bioinformatics and genomics research. Among researchers using NCBI's BLAST software, it is well known that analyzing the results of a large BLAST search can be tedious and time-consuming. Furthermore, with the recent discussions over the effects of parameters such as '-max_target_seqs' on the BLAST heuristic search process, the use of these search options is questionable. This leaves a stand-alone parser as one of the only options for condensing these large datasets, and with few available for download online, the task is left to the researcher to create a specialized piece of software any time they need to analyze BLAST results. The need for a streamlined and fast script that solves these issues and can be easily implemented into a variety of bioinformatics and genomics workflows was the initial motivation for developing this software.

Results

In this study, we demonstrate the effectiveness of BLAST-QC for the analysis of BLAST results and its desirability over the other available options. Applying genetic sequence data from our bioinformatic workflows, we establish BLAST-QC's superior runtime when compared to existing parsers developed with commonly used BioPerl and BioPython modules, as well as to C and Java implementations of the BLAST-QC program. We discuss the 'max_target_seqs' parameter, its usage and the controversy around its use, and offer a solution by demonstrating the ability of our software to provide the functionality this parameter was assumed to produce, as well as a variety of other parsing options. Executions of the script on example datasets are given, demonstrating the implemented functionality and providing test cases of the program. BLAST-QC is designed to be integrated into existing software, and we establish its effectiveness as a module of workflows or other processes.

Conclusions

BLAST-QC provides the community with a simple, lightweight and portable Python script that allows for easy quality control of BLAST results while avoiding the drawbacks of other options. These include the uncertain results of applying the -max_target_seqs parameter, and reliance on the cumbersome dependencies of other options like BioPerl, Java, etc., which add complexity and run time when running large datasets of sequences. BLAST-QC is ideal for use in the high-throughput workflows and pipelines common in bioinformatic and genomic research, and the script has been designed for portability and easy integration into whatever type of process the user may be running.

Background

The Basic Local Alignment Search Tool (BLAST) from NCBI has been a popular tool for analyzing the large datasets of genetic sequences that have become common when working with new generation sequencing technologies. BLAST has been the preferred utility for sequence alignment and identification in bioinformatics and genomics research and workflows for almost 30 years [1]. One of the main challenges for researchers utilizing the NCBI BLAST is interpreting the huge amount of output data produced when analyzing large numbers of input sequences.
While BLAST does allow for multiple output formats as well as limiting the number of top-hit results (using -outfmt and -max_target_seqs, respectively) [2], for some purposes, such as pushing results down a workflow or pipeline, these tools may not be enough to ensure results that can be meaningfully and reasonably interpreted. The controversy raised by Shah et al. [3] in their 2018 paper outlining a bug in the functionality of the -max_target_seqs parameter has started a discussion in the BLAST community over the usage and potential for misuse of the parameter. NCBI published a response stating that the utility of this parameter is simply misunderstood by the community and that the bug seen by Shah et al. was the result of "overly aggressive optimization" introduced in 2012, and patched the issue following the release of BLAST+ 2.8.1 in 2018 [4]. However, follow-up test cases and posts, including those by Peter Cock [5], have shown that this issue is much more complex than simply "BLAST returns the first N hits that exceed the specified e-value threshold". While update 2.8.1 fixed 9 of 10 of Shah et al.'s test cases, according to the post by Peter Cock, 1 of 10 remained invalid, due to an error with the internal candidate sequence limit introduced by -max_target_seqs 1. This is because, as was stated by Shah et al. [3], the -max_target_seqs parameter is applied much earlier in the search, before the final gapped alignment stage. This means that the use of this parameter can change the number of sequences processed as well as the statistical significance of a hit if composition-based statistics are used [2]. This is contrary to the popular assumption that the parameter is simply a filter applied post-search [6]. This intuition is false and may lead to errors in the resulting data of a BLAST search if the value of -max_target_seqs is too small. The use of -max_target_seqs in this way is not advised. As a result of the misinformation and confusion over '-max_target_seqs' and other intricacies of the BLAST heuristic search and filtering process, there has been a push towards more detailed documentation of these processes and of the effects of parameters on the BLAST algorithm [6], with NCBI adding a warning to the BLAST command-line application if the value of -max_target_seqs is less than 5 [7]. The community has also moved towards better methods of narrowing the results of a large search, as opposed to using BLAST parameters that may affect the actual search process. These methods include resources like BioPerl and BioPython that can be used to create scripts to parse and filter result files. A few community-written scripts can be found online, such as the Perl script created by Dr. Xiaodong Bai and published online by Ohio State [8], a version of this script produced by Erin Fichot [9], and an XML-to-tabular parser by Peter Cock [10]. While all of these scripts (and others like them) can potentially be very useful for parsing BLAST XML results into a concise tabular format, most have drawbacks that leave much to be desired. First and most importantly, for Bai and Fichot, the programs require Perl and BioPerl modules, which can be unwieldy and slow for use in high-throughput workflows and pipelines, especially those built on a modern Python framework. Furthermore, both scripts contain a bug, found on lines 77 and 93 respectively, that causes the query frame value to be lost through the parsing process, setting the value to 0.
Our team sought to correct this and other errors and to provide a verified solution that can be soundly applied for research purposes. Secondly, the team saw a need for increased functionality, particularly the ability to filter results by threshold values input by the user. The only program other than BLAST-QC that implements a threshold is the script by Fichot, but only a bit-score threshold is implemented. Our team sought to provide a single solution that would let researchers determine the best combination of values for any given experiment without the need to change parsers between runs. The team's central motivation was to create a dedicated piece of quality control software for use in research workflows: a solution that solely utilizes Python 3, streamlines the process, and reduces run times for parsing large datasets of BLAST results.

Implementation

BLAST-QC has been implemented in a single Python file, requiring only that Python 3 be installed for all functionality to be used. The team felt that an implementation in Python was important for the simplicity and ease of use that come with the Python environment. Python is also one of the most popular and well understood languages for research purposes, and thus is a perfect choice for a tool that is designed for portability and integration into other research processes. Python is also capable of very fast runtimes when compared to other interpreted languages, such as Perl, and while it may be slower than a compiled language like C, the benefits in ease of use and portability outweigh the minor increase in runtimes. For example, C requires the use of dependencies like libxml2 for parsing, requiring a higher level of knowledge to make modifications to the source code, and as such is less desirable as a simple addition to bioinformatic workflows already built within the Python framework. With Python 3, the parsing step of the workflow is simplified to a single file. Furthermore, the use of a standalone script rather than a command-line sorting option such as GNU sort not only provides a great increase in possible functionality, as implementing filtering parameters in bash on the command line can be cumbersome, but also allows for a better user experience for researchers who don't want to memorize long sort commands that need to be changed constantly as experiment goals change. The BLAST-QC script implements thresholds on e-value, bit-score, percentage identity, and the number of 'taxids' (level of taxonomic or functional gene detail) in the definition of a hit (<Hit_def> in BLAST XML results). It is also possible for the user to choose which of these values the output should be ordered by and how many top matches should be returned per query sequence in the input. Thus, the behavior of the -max_target_seqs parameter may be implemented with ease without altering the search process. Additionally, if the researcher decides that a higher bit-score is more important for a certain experiment, it is trivial to change the parsing process to return the highest bit-score hit, whereas max_target_seqs only supports returning top hits by e-value. Further, the Python script is also capable of setting a range on the threshold values and selecting those sequences that produced a more detailed hit definition within that range.
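The range selection just described can be pictured with a short sketch. The field names and the "detail" metric below are illustrative placeholders, not BLAST-QC's actual internals:

def pick_within_range(hits, e_range):
    # hits: dicts with 'evalue' (float) and 'definition' (str) keys.
    best = min(h["evalue"] for h in hits)
    in_range = [h for h in hits if h["evalue"] <= best + e_range]
    # Prefer the hit whose definition carries the most detail; counting
    # definition segments here is a stand-in for BLAST-QC's taxid counting.
    return max(in_range, key=lambda h: len(h["definition"].split(">")))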
This is useful for researchers because it avoids the problem of finding a high-scoring sequence that provides no relevant information, as there may be little use in knowing that a hit accurately matches an unknown sequence. For example, a BLAST search may return a hit sequence with a definition of "protein of unknown function DUF1680". This may not be a useful result for a study on the function of a specific protein, regardless of how low the e-value of the hit is. BLAST-QC allows researchers to define the reasonable e-value for their application using input parameters, and returns hits with more informative taxids that still fit within the chosen parameters. Increased definition information is useful for narrowing the taxonomy of a species (for nucleotide BLAST) or the type/functionality of a protein sequence in a protein BLAST query. The team also found an issue in many of the available community parsers regarding the number of HSPs (high-scoring pairs) per hit. In some cases BLAST may return multiple HSPs per hit sequence, and the BLAST-QC script handles this by considering each a separate hit that retains the same id and definition. This case was not covered in any of the scripts the team encountered online, causing hits with multiple HSPs to lose any data from the additional HSPs. The BLAST-QC Python script is compatible with all BLAST types (BLASTP, BLASTN, BLASTX, etc.) as well as both the tabular and XML output formats (-outfmt 6 and -outfmt 5, respectively) and reports all relevant data produced in a BLAST results file: query name, query length, accession number, subject length, subject description, e-value, bit-score, query frame, query start, query end, hit start, hit end, percent identity, and percent conserved (qseqid, sseqid, pident, length, mismatch, gapopen, qstart, qend, sstart, send, evalue, bitscore and salltitles (optional) for tabular output). Information on these values can be found in the BLAST glossary and manual [2,11], and the two percentage values (percent identity and percent conserved) have been calculated using the identity (Hsp_identity), positive (Hsp_positive) and align length (Hsp_align-len) values. Percent identity is defined as the percent of the sequences that have identical residues at the same alignment positions and is calculated as the number of identical residues divided by the length of the alignment, multiplied by 100 (100*(hsp_identity/hsp_align-len)). Percent conserved (positive) is defined as the percent of the sequences that have 'positive' residues (chemically similar) at the same alignment positions and is calculated as the number of positive residues divided by the length of the alignment, multiplied by 100 (100*(hsp_positive/hsp_align-len)). Additionally, BLAST-QC supports parallel processing of results, using Python's multiprocessing module. The number of concurrent BLAST-QC processes defaults to the number of CPU cores present on the machine, but this value may be adjusted using the -p command line option. If sequential processing is desired, the number of processes may be set to 1 using "-p 1". This, along with the ability to pipe input from stdin, allows for replication of some of GNU sort's main features.
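For reference, the two percentage definitions above reduce to one-liners; the hsp_* arguments mirror the BLAST XML fields named in the text:

def percent_identity(hsp_identity, hsp_align_len):
    return 100.0 * hsp_identity / hsp_align_len

def percent_conserved(hsp_positive, hsp_align_len):
    return 100.0 * hsp_positive / hsp_align_len

# e.g. an HSP with 180 identical and 200 positive residues over a
# 240-residue alignment scores 75.0 % identity and 83.3 % conserved.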
Results

The objective of the development of BLAST-QC was to provide BLAST users with a method of quality control that ensures accuracy of results, posts superior runtimes, and provides configurations for many types of analysis processes, while remaining streamlined and simple to use and modify. In order to establish BLAST-QC's effectiveness as compared to other quality control options, we have compared the BLAST-QC Python program to implementations of the program in compiled languages (both C and Java), to the community-available parsers by Bai, Fichot and Cock, and to a standard approach to parsing for some researchers, GNU sort commands. We demonstrate the ability of BLAST-QC to correct the issue with 'max_target_seqs', using the dataset provided in the case study from Shah et al. [3]. This dataset is available on Shah's GitHub page, https://github.com/shahnidhi/BLAST_maxtargetseq_analysis. As shown in Fig. 1, BLAST-QC was able to correctly identify the lowest e-value hit for the same query sequence while BLAST with '-max_target_seqs = 1' was not. This result illustrates the potential for errors to be introduced into BLAST data by the use of this parameter, and we encourage researchers to seek more information on its usage and application [6,11]. To use BLAST-QC to replicate the function of '-max_target_seqs', leave the parameter set to default while using BLAST to locate matching sequences, then run BLAST-QC on the resulting data, ordering output by e-value and setting a limit of 1 hit per query using the syntax shown at the bottom of Fig. 1. Although the issue with max_target_seqs has been corrected in BLAST 2.8.1, it is still not well understood by the BLAST community, and, this being a popular parameter, we felt it was important to show this use case in order to demonstrate a safe way of achieving the desired effect, as well as to promote compatibility with older versions of the BLAST software. We also demonstrate the runtime of BLAST-QC as compared to existing parsers: two developed with the commonly used BioPerl modules, one written by Xiaodong Bai from Ohio State and an improved version of the same script developed by Erin Fichot [9]; an XML-to-tabular conversion script published by Peter Cock [10]; implementations of the BLAST-QC parser in both Java and C; as well as a standard GNU sort command. The implementations in C and Java were necessary to compare BLAST-QC Python to both compiled and interpreted languages, as both Python and Perl are interpreted languages. The sort command used for the runtime benchmarking is "sort -k1,1 -k11,11 blast.tab", as this replicates ordering the hit sequences for each query sequence by e-value, BLAST-QC's default mode; thus, we sort by query name first, then by e-value. All runtime data were gathered using a system with a 28-core Intel Xeon E5-2680 @ 2.4 GHz and 128 GB of RAM. All sample datasets used for the figures were produced using nucleotide sequences extracted for use in another one of our team's bioinformatic workflows [9]. The BLAST command used to produce the result data was: 'ncbi-blast-2.10.0+/bin/blastn -query Aug2013_metaG_12142016-34108099_Experiment1_layer1.fasta -db SILVA_132_SSURef_Nr99_tax_silva_trunc -outfmt (5 and 6) -num_threads 28'. The result datasets were then split into 5 files containing 10^3, 10^4, 10^5, 10^6 and 10^7 query sequences, respectively. As the datasets used for the runtime tests are very large (the largest being ~60 GB for 10^7 query sequences), we have hosted the datasets on our team's HPC server. For access to the exact data used for all test cases, please submit a request at: https://sc.edu/about/offices_and_divisions/division_of_information_technology/rci/. While each script is designed to operate on a BLAST output file, they all differ in functionality and implementation.
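The safe '-max_target_seqs' replication described above boils down to a post-search top-N filter over the finished results. A minimal sketch of the idea (not BLAST-QC's actual code, which also handles XML input, thresholds and multiple HSPs):

import csv
from collections import defaultdict

def top_hit_per_query(tabular_path, n_hits=1):
    # Keep the n_hits lowest-e-value hits per query from a BLAST -outfmt 6
    # file (qseqid is column 1, e-value column 11), applied after the search.
    by_query = defaultdict(list)
    with open(tabular_path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            by_query[row[0]].append(row)
    for query, rows in by_query.items():
        rows.sort(key=lambda r: float(r[10]))   # e-value, the 11th column
        yield from rows[:n_hits]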
All versions of BLAST-QC (Python, Java, C) can operate on both the XML and tabular BLAST output formats, while the scripts by Bai, Fichot and Cock only operate on XML output, and GNU sort only functions on the tabular output format. These scripts were chosen for comparison as they replicate many of the possible use cases for BLAST-QC: both direct tabular conversion of results and the application of filtering thresholds to provide quality control. The scripts by Bai and Cock do not provide any quality control or threshold functionality; they simply function as XML-to-tabular format converters for BLAST results, while Fichot implements a bit-score threshold and support for both protein and nucleotide databases. All versions of BLAST-QC implement the ability to operate on both BLAST output formats, the ability to input various filters to narrow results to the highest quality sequences, and support for both protein and nucleotide databases. Figure 2 plots runtime vs number of query sequences for all four programs that operate on the tabular format, using the same dataset of BLAST result files. As the figure depicts, the BLAST-QC C version is the fastest program, followed by Python, GNU sort, then Java. While the C version is certainly faster, it requires the external library libxml2, requires more knowledge than Python to operate and maintain, and its speed advantage is only marginal. Most notably, BLAST-QC Python outperforms a GNU sort of the dataset at ordering hits by e-value. Many researchers choose GNU sort as a standard approach to parsing BLAST results as it is a widely available solution, but as the figure shows, it does not perform as well as the standalone parser. Furthermore, more complex QC tasks require a strong knowledge of GNU sort's syntax and create additional runtime, making standalone parsers much more functional for complex parsing tasks (e.g. replicating -max_target_seqs). Lastly, the Java parser performed the worst of the four parsers in the tabular benchmark, despite the fact that Java is a compiled, rather than interpreted, language. This is most likely due to the high memory overhead required by the Java Virtual Machine (JVM), which takes up memory bandwidth that is needed for parsing the large BLAST files. Figure 3 plots runtime vs number of query sequences for all 6 programs that operate on the XML format, using the same dataset of BLAST result files. Both BioPerl parsers performed worst out of all the XML parsers, with Fichot's script being somewhat of an outlier in the dataset (over 4 h to parse a file with 10^6 query sequences). This is most likely due to the combination of the cumbersome BioPerl modules and the more involved parsing of Fichot's script as compared to the script by Xiaodong Bai, as well as the fact that Perl is an interpreted language. The C parser had the fastest runtime in the XML benchmark, followed by the blastxml_to_tabular script, which simply converts the data from XML to tabular, so no real computations are required. In both Figs. 2 and 3 we plot runtimes for both the sequential (1 core) and parallel (28 cores) processing modes of the BLAST-QC Python application, to demonstrate the effect of this parameter on runtime. While the parallel processing takes more time for a lower number of query sequences, at approximately 5.055 × 10^5 query sequences for the XML output format, and 5.354 × 10^6 for tabular, the program running in parallel achieves a faster runtime than the sequential program.
This is due to the fact that opening each separate process and facilitating communication creates more overhead than the parallelization can offset for smaller inputs, but over a larger number of query sequences the efficiency of parallelization decreases the overall runtime, despite the overhead of the required process management. In Figs. 4-6 we demonstrate some of the various functionalities of BLAST-QC and provide the results of their application to a sample dataset. In Fig. 4 the range functionality is demonstrated using an e-value range of .0005. This means that BLAST-QC will consider hits found that fall within that range of the lowest e-value hit, if the target sequence provides an increase in the quality of the hit definition (taxids in <Hit_def> or salltitles). As depicted in Fig. 5, the sequence returned by simply taking the lowest e-value hit provides a definition that may not be useful for research analysis, while the top hit using a range value provides an insightful description of the sequence while maintaining a reasonable e-value within that range. In BLAST-QC, range values are implemented on e-value, bit-score, and percent identity. Figure 5 depicts the ordering functionality of BLAST-QC; ordering by e-value, bit-score, percent identity and hit definition is implemented. For example, when ordering by definition, as in Fig. 5 [1], the hit that has the highest quality of hit definition fitting the input thresholds will be returned. Figure 6 demonstrates the threshold capability of the BLAST-QC program. The first sequence is returned when ordering by e-value with the number of hits set to one (replicating max_target_seqs). The second employs a bit-score threshold to find a matching sequence with the highest e-value that also has a bit-score above the threshold. Threshold values are implemented on e-value, bit-score, percent identity and hit definition. All of the resources needed for the BLAST-QC software are available for download from the BLAST-QC GitHub repository: https://github.com/torkian/blast-QC. Additional test cases and usage information for the program are located on the page as well.

Fig. 2 caption: Plot of runtime vs number of query sequences in a BLASTN tabular (-outfmt 6) results file. The graph is a linear-log plot; the number of query sequences is shown on a log scale, due to the exponential nature of the runtime data and the large numbers of sequences involved. Each of the scripts was run against BLAST tabular files containing 10^3, 10^4, 10^5, 10^6 and 10^7 query sequences, respectively. All versions of BLAST-QC were run using default parameters (no command line options specified), which is to order hit sequences for each query by e-value. To replicate this behavior using GNU sort, the command 'sort -k1,1 -k11,11' was used, as this orders the rows in the tabular output by query id and then e-value (the 1st and 11th columns, respectively).

Fig. 3 caption (XML benchmark): The graph is a linear-log plot; the number of query sequences is shown on a log scale, due to the exponential nature of the runtime data and the large numbers of sequences involved. Each of the scripts was run against BLAST XML files containing 10^3, 10^4, 10^5 and 10^6 query sequences, respectively. We did not include a run of 10^7 XML query sequences, as the file size became impractical for our system, taking up all 128 GB of RAM (this resulted in an OutOfMemoryException in the Java parser). All versions of BLAST-QC were run using default parameters (no command line options specified), which is to order hit sequences for each query by e-value. Fichot's BioPerl script was also run using default parameters with no thresholds implemented. Both the BioPerl script by Xiaodong Bai and the Python script by Peter Cock only function as XML (outfmt 5) to tabular format (outfmt 6) converters, so no input parameters are required.
Fig. 4 caption: Demonstration of range parameters of BLAST-QC. Command 1 depicts the usage of BLAST-QC to return a single top hit per query sequence. Command 2 depicts the same command with an additional range parameter, '-er .0005'. This will return a hit within this e-value range (+.0005 from the top e-value hit) that has the most taxids present. Thus, as the figure shows, the result of command 2 is a hit with an e-value that is .0005 more than the first top hit returned in command 1, but with more informative taxids.

Discussion

BLAST-QC was developed using Python 3 and is designed for usage within a script or larger workflow, but it also offers a headless command-line interface for use with smaller datasets. Usage information has been documented in this paper, and additional documentation, as well as all test cases and datasets used in this paper, can be found in the BLAST-QC GitHub repository at https://github.com/torkian/blast-QC. Our team seeks to standardize researchers' approach to analyzing BLAST result datasets. Many researchers opt to apply 'max_target_seqs' as a quality control parameter in their research workflows (over 400 Google Scholar papers reference the value) [12-14], even though it has been shown that this parameter can cause issues with the search process and resulting data, and is simply not intended for this purpose. Those who use a standalone script accept increased bulk and runtime, which add up when running large datasets of sequences, in exchange for gains in the accuracy of results and greater functionality and control over parameters. With the added functionality and superior runtime that BLAST-QC provides over an option like GNU sort or max_target_seqs, the BLAST-QC script provides a practical option for parsing BLAST result files, especially as, with the use of Python, the task can be simplified to a single file. While there are other standalone quality control and filtering options available for BLAST results, BLAST-QC Python takes a novel approach to the task, eliminating the necessity for other dependencies and allowing researchers to have increased control over the level of definition of results, while also providing greatly decreased runtimes when compared to other language and parsing options. We encourage the community to consider the available options when seeking analysis of BLAST results, and to help contribute to and improve our source code by submitting a pull request on the BLAST-QC GitHub page.

Conclusions

BLAST-QC provides a fast and efficient method of quality control for NCBI BLAST result datasets. It offers greater functionality for controlling the desired QC parameters when compared to existing options and outperforms them in terms of runtime. We suggest that it is BLAST-QC's Python 3 framework that allows it to outperform dense BioPerl and BioPython modules, while it also provides much higher functionality than GNU sort or even -max_target_seqs. Furthermore, BLAST-QC provides seamless integration into larger workflows developed with Python 3.
With the increase in popularity of high-performance computing and new generation sequencing, novel approaches to BLAST quality control and other bioinformatic computational processes are needed, not only to handle the increasing size of datasets but also to take advantage of the increasing capacity of computing to provide solutions to these problems. Our team also sought to increase awareness of the controversy surrounding the application of the 'max_target_seqs' parameter in BLAST, and to provide a sound solution that replicates the function of the parameter and ensures the highest quality results. The BLAST-QC software and all other documentation and information can be located at BLAST-QC's GitHub page.
EVALUATED DISPLACEMENT AND GAS PRODUCTION CROSS-SECTIONS FOR MATERIALS IRRADIATED WITH INTERMEDIATE ENERGY NUCLEONS

The evaluation of atomic displacement and gas production cross-sections for irradiated materials is a challenging task combining the modelling of the various underlying nuclear reaction processes, the simulation of the material behavior, and taking into account, as far as possible, experimental data. The report describes methods of evaluation and evaluated data recently obtained in KIT for a number of materials.

Introduction

The evaluation of atomic displacement and gas production cross-sections for irradiated materials is a challenge, considering the modelling of nuclear reactions, the simulation of atomic interactions, and the analysis and use of available experimental data. The report describes displacement and gas production cross-sections recently evaluated in KIT and the methods used to obtain the evaluated data. The displacement cross-section for an incident particle with kinetic energy E_p is calculated as follows:

σ_d(E_p) = Σ_i ∫_{E_d}^{T_i^max} (dσ_i/dT_i)(E_p, T_i) N_D(T_i) dT_i    (1)

where dσ_i/dT_i is the recoil energy distribution of primary knock-on atoms (PKA) produced in the i-th nuclear reaction, N_D(T_i) is the number of stable defects produced by a PKA with kinetic energy T_i, T_i^max is the maximal kinetic energy of the PKA in the i-th reaction, and E_d is the average threshold displacement energy of the material. The calculation of the displacement cross-section assumes the use of nuclear models to obtain recoil energy distributions and the simulation of atomic collisions to obtain the number of stable displacements in the irradiated material.

Calculation approach

Calculations were performed using nuclear models implemented in the MCNP [1], CASCADE [2,3], DISCA-C [4], TALYS [5,6], and ALICE/ASH [3,7] codes, depending on the task and on the applicability of the models at different incident nucleon energies and target ranges. Results obtained using various models and codes were also used for the verification of calculations and the estimation of the uncertainty of theoretical predictions [8-13]. The calculation of recoil energy distributions is discussed in Ref. [14]. Advanced calculations of gas production components are described in Refs. [6,11,15]. Displacement cross-sections for most materials were obtained in two forms, using the NRT model [16] and an approach that combines the binary collision approximation (BCA) model and molecular dynamics (MD) simulations [8,11,17]. The BCA calculations were performed using the IOTA [18] and SRIM [19] codes at relatively high ion energies; the available results of MD simulations [17] were utilized at low ion energies to estimate the total number of stable displacements. The BCA-MD calculations are discussed in detail in Refs. [8,11]. An example of the calculations is shown in Fig. 1. The figure shows the ratio of the number of stable displacements N_D calculated using BCA-MD to the number of defects predicted by the NRT model (the defect production efficiency) for Fe-Fe irradiation. The calculations were performed with the IOTA code using different screening functions [18] and with the SRIM code [19,20]. The results of MD simulations from Ref. [21] were applied. The systematics data were obtained from original data [22].
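As a numerical illustration of Eq. (1), the sketch below combines the standard NRT estimate of N_D with a trapezoidal integration over tabulated recoil spectra. It is a simplified stand-in for the evaluation machinery described above: the branch thresholds follow the standard NRT formula with T taken here as the PKA damage energy, and the BCA-MD variant would replace n_d_nrt by an efficiency-corrected count.

import numpy as np

def n_d_nrt(t_dam, e_d):
    # NRT count of stable displacements for a PKA of damage energy t_dam (eV),
    # with average threshold displacement energy e_d (eV).
    if t_dam < e_d:
        return 0.0
    if t_dam < 2.0 * e_d / 0.8:
        return 1.0
    return 0.8 * t_dam / (2.0 * e_d)

def displacement_xs(channels, e_d):
    # Eq. (1) as a sum over reaction channels i; each channel is a pair
    # (t_grid, dsigma_dt) tabulating the recoil spectrum numerically.
    sigma_d = 0.0
    for t, dsig in channels:
        t, dsig = np.asarray(t, float), np.asarray(dsig, float)
        y = dsig * np.array([n_d_nrt(ti, e_d) for ti in t])
        sigma_d += np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))  # trapezoid rule
    return sigma_d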
Atomic displacement cross-sections

Displacement cross-sections were obtained for different materials and irradiations. Special attention was paid to the uncertainty of the calculated cross-sections.

Neutron displacement cross-sections for 9Be at energies up to 200 MeV

The evaluation of σ_d consisted of (i) test model calculations of energy and angular particle distributions in proton-induced reactions to estimate the "quality" of model predictions and to quantify the deviation between calculated values and measured data, (ii) calculations for n+9Be reactions, and (iii) an adjustment of calculated values to JEFF data below 20 MeV. The details are described in Ref. [23]. Figure 2 shows the obtained σ_d cross-sections. Numerical data can be found in Ref. [24].

Displacement cross-sections for EUROFER

The calculations of displacement cross-section values were performed using the recoil energy distributions obtained from neutron data libraries, as discussed in Ref. [25], and results of BCA-MD simulations with the IOTA code. The number of stable displacements was calculated for the main components of EUROFER considering PKAs moving in stainless steel, in contrast to the usual procedure, where σ_d is calculated as a weighted sum of the independent components [25]. The evaluation procedure consisted of removing possible peculiarities in the σ_d values resulting from the use of dσ/dT taken from neutron data libraries, especially at 20 MeV, fitting to the results of σ_d calculations using intranuclear cascade evaporation models above 150 MeV, and combining the different results below and above 20 MeV, if necessary. The obtained values are shown in Fig. 3 for both the BCA-MD approach and the NRT model. Data are available in Ref. [24].

Evaluated data at incident nucleon energies up to 3 GeV and higher

Nuclear data used for the calculation of recoil energy distributions at low incident neutron energies [8,11] were taken from ENDF/B-VII and processed using the NJOY code [26]. At higher incident energies, recoil spectra were calculated using appropriate models: the model describing the scattering of charged particles in matter, the optical model, the pre-equilibrium model, and the intranuclear cascade evaporation model. At intermediate energies of primary particles, the reliability of the obtained displacement cross-sections was improved by using weighted results of calculations obtained by different approaches; see details in Refs. [8,10,11]. The numbers of stable defects in irradiated materials were calculated using the BCA-MD approach. Figure 4 shows an example of the obtained displacement cross-sections for proton irradiation of vanadium. Experimental data are taken from Ref. [27]. The evaluated displacement cross-sections were obtained for Al, Ti, V, Cr, Fe, Ni, Cu, Zr, and W irradiated with neutrons and protons at energies from 10^-5 eV to 3 GeV [10,28]. Data in ENDF-6 format can be found in Ref. [28]. Data for Fe, Cu, and W were obtained at primary proton energies up to 100 GeV [8].

Study of uncertainty of cross-sections

The uncertainties of the calculated displacement cross-sections were analysed at incident neutron energies above 0.1 MeV using the Monte Carlo method [29].
Both the NRT model and the arc-dpa approach [30] were applied for the estimation of the number of stable displacements. Four parameters, including E_d, were varied when using the NRT model, and two parameters when using the arc-dpa approach [12,13]; the optical model and nuclear level density parameters were varied with RSD values equal to 5% and 10%, respectively (see details in Refs. [12,13]). Figure 5 shows an example of the estimated relative standard deviation (RSD) of the displacement cross-sections.

Evaluation of atomic mass dependence of components of gas production cross-sections

By analogy with the evaluation of the energy dependence of cross-sections, the atomic mass dependence (A) of gas production cross-sections was evaluated for a number of incident nucleon energies. The choice of the incident energy depends on the available experimental data. The evaluation procedure of the A-dependence is discussed in Refs. [31,32]. The proton-, deuteron-, triton-, 3He-, and α-particle production cross-sections were obtained for 278 stable targets from 7Li to 209Bi at proton incident energies of 62, 90, 150, 600, 800, and 1200 MeV [31,32] and at a neutron incident energy of 96 MeV [15]. Figure 6 shows an example of the evaluated α-particle production cross-section as a function of the target atomic mass number. The obtained cross-sections [15,30,31] can be used as "reference points" for data evaluation for targets where experimental data are rare or missing.

Gas production data for Be and target nuclei from Mg to Bi at neutron incident energies up to 200 MeV

Proton-, deuteron-, triton-, 3He-, and α-particle production cross-sections were obtained for beryllium and 262 other stable nuclides with atomic numbers from 12 to 83 at primary neutron energies up to 200 MeV. The data evaluation consisted of the analysis of available experimental data, the estimation of the atomic mass dependence of cross-sections to improve the final evaluated curves, the analysis of evaluated data from ENDF/B-VII.1, JENDL-4, JEFF-3.2, and TENDL-2014, nuclear model calculations, the improvement of existing evaluated data concerning incorrect energy dependence, and the statistical combination of experimental and theoretical data. A detailed description of the evaluation and the data is given in Refs. [15,23].

Evaluated data at incident nucleon energies up to 3 GeV

The evaluation was performed using results of nuclear model calculations, available experimental data, and systematics predictions. Evaluated proton-, deuteron-, triton-, 3He-, and α-particle production cross-sections were obtained for Be, Al, Ti, Cr, Fe, Ni, and W irradiated with nucleons with energies from 10^-5 eV to 3 GeV [10,28,33]. Data in ENDF-6 format for Ti, Cr, Fe, Ni, and W can be found in Ref. [28].

Conclusion

Atomic displacement cross-sections were obtained for Be, Al, Ti, V, Cr, Fe, Ni, Cu, Zr, and W to estimate the radiation damage and gas production rates in nuclear and fusion reactors and neutron spallation sources. The NRT model and an advanced atomistic modelling approach combining the use of the binary collision approximation model and results of molecular dynamics simulations were utilized for the calculations of the number of stable displacements in materials.
Proton-, deuteron-, triton-, 3He-, and α-particle production cross-sections were evaluated for 278 stable target nuclei from Li to Bi irradiated with intermediate and high energy nucleons, using available experimental information and results of model calculations.

The work leading to this publication has been funded partially by Fusion for Energy under the Specific Grant Agreements F4E-GRT-168.01 and F4E-GRT-168.02. This publication reflects the views only of the authors, and Fusion for Energy cannot be held responsible for any use which may be made of the information contained therein.

Figure 1. Defect production efficiency for Fe-Fe irradiation calculated with the IOTA code and the SRIM code. The E_d value is equal to 40 eV. See details in the text.

Figure 2. Neutron displacement cross-sections for 9Be obtained using the NRT model. The E_d energy is equal to 31 eV.

Figure 3. Displacement cross-sections for EUROFER obtained using the BCA-MD approach and the NRT model. The E_d value is equal to 40 eV.

Figure 5. The RSD values of displacement cross-sections for iron calculated using the arc-dpa approach [30] with different variations of the parameters, with the optical model and nuclear level density parameters varied with RSD values equal to 5% and 10%, respectively. See details in Refs. [12,13].

Figure 6. Evaluated α-particle production cross-sections for 278 stable target nuclei at the incident proton energy of 1.2 GeV. Experimental data are overviewed in Ref. [31].
Deactivation and Regeneration of Mo/ZSM-5 Catalysts for Methane Dehydroaromatization

Methane dehydroaromatization (DHA) was studied over a series of impregnated Mo/ZSM-5 catalysts with different molybdenum contents (1-10 wt.%). It was shown that the total methane conversion decreased by 30% during 12 h of the DHA reaction. The benzene formation rate increased from 0.5 to 13.9 µmol C6H6/(gMo·s) when the molybdenum content in the catalyst was lowered from 10 to 1 wt.%. The deactivated Mo/ZSM-5 catalysts were studied by a group of methods: N2 adsorption, XRD, TG-DTA, HRTEM and XPS. The content and condensation degree (C/H ratio) of the carbonaceous deposits were found to increase with an increase of any of the following parameters: molybdenum content (1-10 wt.%), reaction temperature (720-780°C), space velocity (405-1620 h^-1), reaction time (0.5-20 h). The stability of Mo/ZSM-5 catalysts in reaction-regeneration cycles was better when the time on stream was shorter. Regeneration conditions for the deactivated Mo/ZSM-5 catalysts providing their stable operation under multiple reaction-regeneration cycles have been selected.

Introduction

The development of highly effective catalysts for the one-stage conversion of light hydrocarbons to valuable products with high selectivity will address such problems as the efficient utilization of natural and oil-associated gases and environmental protection. Methane dehydroaromatization (DHA) over Mo/ZSM-5 catalysts is a promising process for the direct production of valuable aromatic compounds and hydrogen from methane. Bifunctional Mo/ZSM-5 catalysts provide up to 70% benzene formation selectivity at 14% total methane conversion at 720°C [1]. However, carbonaceous deposits (CD) are formed as a side product in the DHA of CH4, part of them being necessary to assist the reaction, while the others lead to gradual deactivation of the catalysts [2-16]. Thus, it is very important to study the nature of the CD formed on Mo/ZSM-5 during the reaction: first, in order to possibly minimize their unwanted formation, starting from the rational design of catalysts, and, second, to elaborate an appropriate method of catalyst regeneration. The content and properties of the CD formed on Mo/ZSM-5 samples depend on both the catalyst composition and the reaction conditions. The formation rate of the CD was found to grow linearly with the molybdenum content in the zeolite matrix increasing from 0 to 2 wt.% Mo and to remain almost constant at molybdenum contents of 2-10 wt.% [2]. The concentration and burn-out temperature of the CD were shown to depend on the type of molybdenum carbide phase formed on the zeolite surface during the reaction [3]. The CD with a lower burn-out temperature preferentially formed over the more active and stable α-MoC1−x/ZSM-5 catalyst compared to β-Mo2C/ZSM-5. However, the total concentration of the CD was twice as high on α-MoC1−x/ZSM-5 as on β-Mo2C/ZSM-5. According to XPS [5], TPO [6] and TGA [3] data, several types of CD can be found in a Mo/ZSM-5 catalyst. As a rule, differences in the CD properties (burn-out temperature, structure) result from their different localization. In particular, two types of CD characterized by low (~470°C [7,8], 503°C [9]) and high (543°C [6], 557°C [8], 592°C [9]) oxidation temperatures were distinguished by TPO.
It was supposed [7,10] that the deposits with a lower oxidation temperature were located on the surface of molybdenum carbide, whereas the ones with a higher oxidation temperature were associated with the zeolite Brønsted acid sites. The strength of the CD has also been correlated with their localization on the external surface or in the pores of the zeolite [11,12]. It is generally accepted that Mo/ZSM-5 catalysts deactivate due to their excessive carbonization [5,10,13]. The time required for the catalyst to lose almost all of its activity can differ and be equal, e.g., to 4 h [13] or 16 h [14]. Several methods were suggested for the regeneration of deactivated Mo/ZSM-5 catalysts, e.g. treatment in a NO/air mixture (1/50 vol/vol) at 450°C [15] or in a 20% H2/He mixture at 680°C [16]. TPH followed by TPO was shown to be the most acceptable method for the regeneration of Mo/ZSM-5 catalysts because it removed all types of carbonaceous deposits [10]. About 90% of the carbonaceous deposits related to Brønsted acid sites and 60% of those related to Mo sites were removed when the catalyst was regenerated by TPH only. Another suggested approach is in situ catalyst regeneration, by introducing additional components, e.g. CO2, into the reaction mixture and thus suppressing the coke formation [17,18]. This paper is devoted to the study of the physicochemical properties of CD formed on Mo/ZSM-5 catalysts during the DHA of CH4, depending on the Mo content (1-10 wt.%) and the reaction conditions (reaction temperature 720-780°C, space velocity 405-1620 h^-1, reaction time 0.5-20 h). The results of the study made it possible to optimize the conditions of oxidative regeneration of Mo/ZSM-5 catalysts.

Catalytic activity measurements

The catalytic activity of Mo/ZSM-5 catalysts in the DHA of CH4 was studied at atmospheric pressure in a flow setup with a quartz reactor of 9 mm internal diameter (i.d.). The reactor was loaded with 0.6 g of a catalyst (0.25-0.5 mm fraction, ca. 1 cm^3 volume). Before the reaction, the catalyst was heated in an argon flow to 720°C with a heating rate of 10°C/min and kept at this temperature for 60 min. Then the feed, consisting of 90 vol.% CH4 + 10 vol.% Ar, was introduced with a flow rate of 13.5 ml/min (810 h^-1). Argon was used as an internal standard to account for the changes of the methane flow rate due to the reaction [20]. A Kristall-2000M gas chromatograph (Chromatech Ltd., Russia) equipped with two simultaneously operating detectors (TCD and FID) was used for on-line analysis of the reaction products. The aromatic products (C6H6, C7H8 and C10H8) were separated using a first packed stainless steel column of 1.5 m length and 3 mm i.d. The column was filled with the polymer sorbent DC 550 and 15% Chromatron N (FID, He carrier gas, 30 ml/min flow rate, 165°C column temperature). Naphthalene was collected in an ice trap (T ca. 0°C) and dissolved in ethanol for the analysis. CO2, C2H4 and C2H6 were separated using a second packed stainless steel column of 1.5 m length and 3 mm i.d., which was filled with SKT activated carbon (TCD, He carrier gas, 30 ml/min flow rate, 165°C column temperature). CH4, H2, CO, Ar and air were separated using a third packed stainless steel column of 2 m length and 4 mm i.d. filled with NaX zeolite (TCD, He carrier gas, 30 ml/min flow rate, 165°C column temperature).
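With argon as the internal standard, the total methane conversion is typically computed from CH4/Ar signal ratios rather than from raw CH4 signals, which corrects for the change of the total flow rate caused by the reaction; the exact treatment used here is given in Ref. [20]. A minimal sketch of such a calculation:

def methane_conversion(ch4_in, ar_in, ch4_out, ar_out):
    # Normalizing the CH4 signal to Ar removes the effect of the
    # reaction-induced change in total flow rate.
    return 1.0 - (ch4_out / ar_out) / (ch4_in / ar_in)

# e.g. with a feed ratio CH4/Ar = 9.0, an outlet ratio of 7.7 gives
# 1 - 7.7/9.0 ≈ 0.14, i.e. the ~14 % conversion reported at 2 h on stream.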
Catalyst regeneration

Regeneration of the deactivated (after the DHA reaction) Mo/ZSM-5 catalysts was carried out at atmospheric pressure for 2 h in a flow setup with a quartz reactor (the same one used for the activity measurements, see above). The catalyst was heated in an argon flow to the appropriate temperature with a heating rate of 10°C/min. Then oxygen was introduced with a flow rate of 13.5 ml/min (810 h^-1). The regeneration temperature depended on the catalyst composition and the conditions of the DHA of CH4, and was varied from 480 to 600°C.

Catalyst characterization

The chemical composition of the parent zeolites and the Mo content in the prepared catalysts were determined by means of inductively coupled plasma atomic emission spectroscopy using a Thermo Baird PST instrument. Textural characteristics (surface area, porosity) of the parent zeolites and Mo/ZSM-5 catalysts were studied on a Micromeritics ASAP 2400 instrument using nitrogen adsorption at 77 K. The specific surface area values were calculated by the BET method. Differential thermal analysis (DTA) was carried out using a Q-1500 D (Hungary) instrument in the temperature range 20-800°C in air, with a heating rate of 10°C/min and an initial sample weight of 100 mg. In addition, thermogravimetry (TG) data were obtained. High resolution transmission electron microscopy (HRTEM) images were obtained using a JEOL JEM-2010 electron microscope with an accelerating voltage of 200 kV and a lattice resolution of 0.14 nm. The samples were deposited on perforated carbon supports attached to copper grids. The local elemental analysis of the samples was carried out by the Energy Dispersive X-ray Analysis (EDX) method using an EDAX spectrometer equipped with a Si(Li) detector (energy resolution 130 eV). X-ray photoelectron spectroscopy (XPS) measurements were performed in a MultiLab 2000 surface analysis system (Thermo Electron Co.). The samples were deposited on Fe-Cr-Ni stainless steel stubs by fixing the powder under a hydraulic press at 2 MPa, forming tablets of about 9 mm diameter and 2 mm height. Before XPS acquisition, the samples were pumped down to 5 × 10^-8 mbar in the preparation chamber, and then transferred to the analysis chamber with a base vacuum of 2 × 10^-10 mbar. Survey spectra and spectra of individual regions (Mo3d, O1s, C1s, Al2p, Si2p) were acquired with Mg Kα X-rays (hν = 1253.6 eV) at 20 eV pass energy of the hemispherical analyzer, and the photoelectrons were collected from an area of about 1 mm^2 in the center of the sample. The binding energy (BE) scale was referenced to the Si2p BE value of 103.4 eV characteristic of the H-ZSM-5 support, and verified with the expected values of other elements. Spectra were treated with the Avantage v.2.26 software.

Table 1 illustrates the effect of the molybdenum content on the activity of the catalysts studied in the DHA reaction. According to the presented data, the total methane conversion increases with a Mo content increase from 1 to 2 wt.%. At the same time, the methane conversion to benzene grows when the Mo content is increased from 1 to 2-5 wt.%. A further increase of the Mo content up to 10 wt.% results in a decrease of both the total CH4 conversion and the methane conversion to C6H6. The benzene formation rate is reduced sharply with an increase of the Mo content from 1 to 10 wt.%. For example, after 2 h on stream the total CH4 conversion over the sample with 2 wt.% Mo is ca. 14% and the C6H6 formation rate is 10.9 µmol C6H6/(gMo·s), whereas over the sample with 10 wt.% Mo these values are ca.
Catalyst activity and deactivation The obtained data indicate that for all Mo/ZSM-5 samples the total CH4 conversion decreases with time-on-stream. Meanwhile, the methane conversion to benzene initially grows, reaches a maximum value after 75-100 min on stream and, after staying at a plateau, slowly decreases (Fig. 1). The total methane conversion decreases by 30% during 12 h of the DHA reaction and becomes equal to 2% after 20 h of the reaction. Benzene and hydrogen are the main reaction products of the DHA of CH4. In addition, traces of CO, C2H4, C2H6, C7H8 and C10H8 are formed. Detailed data on the product composition were published earlier [21]. Physico-chemical properties of catalysts The Mo/ZSM-5 catalysts before and after reaction were studied by a group of methods: N2 adsorption, XRD, TG-DTA, HRTEM and XPS. N2 adsorption The specific surface area and total pore volume of all the catalysts studied decreased after being on stream. This effect is most pronounced for the sample with the highest Mo content (10 wt.%). The specific surface area of the sample with 2 wt.% Mo decreases by ca. 15% (from 321 to 271 m²/g) after 6 h of the reaction, whereas an increase of the molybdenum content to 10 wt.% leads to a more significant decrease of the specific surface area, by ca. 65% (from 254 to 89 m²/g). An increase of the reaction time leads to a further reduction of the specific surface area and total pore volume. For example, the specific surface area of the 2%Mo/ZSM-5 sample decreases by ca. 35% (from 321 to 209 m²/g) after 20 h on stream. This may be due to the accumulation of CD during the reaction, leading to blocking of the zeolite micropores. XRD According to the XRD data, the H-ZSM-5 zeolite is the predominant crystalline phase for all studied Mo/ZSM-5 catalysts, both before and after the reaction. The presence of additional X-ray amorphous phases has been shown only for the 10%Mo/ZSM-5 samples after the reaction. These phases are observed after 30 min on stream in the form of a halo with maxima in the 2θ ranges of 20-25° and 35-42°. Such a diffraction pattern is retained during the next 6 h of the DHA reaction. In accordance with the literature, the halo at 2θ = 20-25° can be assigned to the formation of CD [22] or amorphous silica [23] in the course of the reaction. The halo at 2θ = 37-42° can be evidence of the formation of molybdenum carbide; it is well known that α-MoC(1-x) is characterized by diffraction maxima at 2θ = 36.5°. TG-DTA When the deactivated Mo/ZSM-5 samples are exposed to air at elevated temperatures, oxygen reacts both with the carbon of molybdenum carbide and with the carbonaceous deposits CxHy formed during the reaction. According to the reaction stoichiometry, in the former case the sample weight should increase: Mo2C (204 g/mol) + 4O2 = 2MoO3 (288 g/mol) + CO2, whereas in the latter case it should decrease: CxHy + (x + y/4)O2 = xCO2 + (y/2)H2O. Fig. 2 presents the typical TG, DTG and DTA curves for a Mo/ZSM-5 catalyst after DHA of CH4. The thermal analysis data obtained for the 1-10%Mo/ZSM-5 catalysts after 0.5-20 h on stream show an endothermic process at 90-110°C accompanied by a weight loss of 1-5 wt.%, which can be attributed to water desorption [25]. At higher temperatures (T = 370-600°C), an exothermic process is observed. It is related to the oxidation of carbon from CxHy and/or molybdenum carbide [4].
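The two stoichiometric equations above fix the sign and magnitude of the expected TG weight changes; a minimal sketch of the arithmetic, using standard atomic weights:

```python
# Theoretical TG weight change for the two oxidation routes above.
M_Mo2C = 203.9   # g/mol (2 * 95.95 + 12.01)
M_MoO3 = 143.9   # g/mol

# Mo2C + 4 O2 -> 2 MoO3 + CO2: the solid gains mass
gain = (2 * M_MoO3 - M_Mo2C) / M_Mo2C
print(f"Mo2C oxidation: +{100 * gain:.1f} % of the Mo2C mass")  # -> ca. +41 %

# CxHy burn-off: the whole deposit leaves as CO2 and H2O, so each gram
# of carbonaceous deposit appears as a 1 g loss on the TG curve.
```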
The sample weight gains due to molybdenum carbide oxidation were observed in the temperature range 370-440°C. Weight loss due to the burning of the carbonaceous deposits is then observed on further temperature increase (450-600°C). The position of the exothermic effect maximum on the DTA curve (TDTA) and the sample weight change depend on the Mo content (1-10 wt.%) in the catalyst and the reaction conditions (reaction temperature 720-780°C, space velocity 405-1620 h⁻¹, reaction time 0.5-20 h). The TDTA value shifts to lower temperatures with an increase of the Mo content from 1 to 10 wt.% (Fig. 3). The observed dependence may be due to the catalytic combustion of carbonaceous deposits in the presence of a metal [26], which becomes more significant when the molybdenum concentration increases, or due to a lower condensation degree of the carbonaceous deposits over the samples with higher Mo concentrations [27]. Meanwhile, the content of carbonaceous deposits decreases with an increase of the Mo content in the catalyst (Fig. 3). This phenomenon may be explained by several reasons. First, according to the literature [28], acid sites on the external zeolite surface can be the sites of CD formation. It was also shown [29] that the concentration of acid sites decreases when the molybdenum content is increased. Thus, the decrease of the CD content with increased Mo content can be due to a decrease in the concentration of the CD formation sites. Second, it is necessary to consider the earlier proposed consecutive mechanism of carbonaceous deposit formation during the DHA of CH4 [30]. In accordance with this mechanism, monoaromatics are intermediate compounds in the succession of reactions resulting in CD formation. Thus, the consecutive mechanism of CD formation allows the assumption that catalysts more active in the DHA of CH4 should produce monoaromatics at a higher concentration and, consequently, provide a higher rate of formation of polycyclic structures. Indeed, Fig. 4 shows that the content of CD increases with the growth of the benzene formation rate. Fig. 5 illustrates the effect of the reaction temperature, space velocity and reaction time on the position of the exothermic effect maximum and the content of carbonaceous deposits over the 2%Mo/ZSM-5 catalysts after the DHA of CH4. The reaction temperature increase from 720 to 780°C leads to a significant TDTA growth, indicating an increased condensation degree of the carbonaceous deposits. A considerable (more than 3-fold) growth of the content of the carbonaceous deposits is observed only when the reaction temperature is increased to 780°C (Fig. 4). When the space velocity is increased from 405 to 1620 h⁻¹, the concentration of the carbonaceous deposits and the TDTA grow (Fig. 5). In this respect, an increase of the methane flow rate is similar in its effect to a longer reaction time. An increase of either of these factors leads to a higher load on the catalyst, which results in the accumulation of more carbonaceous deposits with a higher condensation degree. HRTEM Earlier we have shown [21] that, according to HRTEM, under reaction conditions 2-15 nm Mo2C nanoparticles are formed on the zeolite surface, and ~1 nm Mo-containing clusters in the zeolite channels.
In the case of Mo/ZSM-5 catalysts after 6 h on stream, the CD were formed as graphite layers with a thickness of ~2 nm on the surface of Mo2C nanoparticles larger than 2 nm, and as friable layers with a thickness of up to 2-3 nm and a disordered structure on the external surface of the zeolite. It was proposed [21] that the Mo-containing clusters can be the active centers for the DHA of CH4. Note that in catalysts after 20 h of the DHA reaction, the thickness of the graphite layer on the surface of the Mo2C nanoparticles is practically unchanged. In contrast to the CD on the surface of the Mo2C particles, the thickness of the CD layer on the zeolite increases with time-on-stream and reaches 3-5 nm after 20 h of reaction. These CD are friable and their structure is defective: it consists of curved graphite-like layers forming separate islands that do not provide complete coverage of the zeolite surface (Fig. 6). Most likely, the accumulation of friable CD on the zeolite with time-on-stream is the main reason for the catalyst deactivation. We suggest that the appearance of a single maximum of the exothermic effect during the burn-out of the CD formed on the Mo/ZSM-5 catalysts after 6 h of reaction (Fig. 2) may be associated either with a small amount or with a wide interval of burn-out temperatures of one of the CD types: (1) deposits with a graphite structure or (2) the friable, distorted carbonaceous layer. XPS The XPS results for the 1-10%Mo/ZSM-5 samples reveal additional details about the changes of the surface state of these catalysts during the reaction. It was shown that surface molybdenum (Mo6+) was reduced to lower oxidation states, down to molybdenum oxycarbide MoCxOy (BE Mo3d5/2 ca. 228 eV) and molybdenum carbide Mo2C (BE Mo3d5/2 ca. 226.8 eV). According to the XPS data, it is possible to distinguish three types of carbon: carbide carbon in Mo2C (C1s 281.9 eV), carbon in pre-graphite carbonaceous deposits (sp-type, C1s 283.4 eV) and carbon in carbonaceous deposits with a graphite structure (C1s 284.5 eV), in good agreement with literature data [5,6,8,13,14,17]. A detailed quantitative analysis of the XPS spectra will be presented in our next publication [31]. Catalyst regeneration Oxidative regeneration of the Mo/ZSM-5 catalysts after ~6 and ~20 h on stream was carried out at 520 and 600°C, respectively. It should be noted that the regeneration temperature was selected in accordance with the burn-out temperatures of the carbonaceous deposits. The deactivated catalysts were regenerated, and one can see (Fig. 7) that the catalytic activity of 2%Mo/ZSM-5 remains practically constant after 5 reaction-regeneration cycles. Meanwhile, the catalytic activity of 10%Mo/ZSM-5 was not recovered after regeneration. According to N2 adsorption, XRD and HRTEM, the state of Mo and the structure of H-ZSM-5 were identical in fresh and regenerated 2%Mo/ZSM-5. However, formation of aluminum molybdate and partial destruction of the H-ZSM-5 structure were observed for the regenerated 10%Mo/ZSM-5 catalyst. An increase of the reaction time up to ~20 h led to faster deactivation of the 2%Mo/ZSM-5 catalyst in the third cycle: the methane conversion to benzene decreased from 9 to 2% in ~15 h, whereas after the first cycle such a decrease occurred only after ~20 h on stream.
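A minimal sketch of the Si2p charge referencing used for the XPS binding-energy scale (see the characterization section above); all measured peak positions below are hypothetical placeholders:

```python
# Charge referencing of XPS binding energies to the Si2p line of the
# H-ZSM-5 support at 103.4 eV, as described in the Experimental part.
# The raw peak positions below are hypothetical, for illustration only.
SI2P_REF = 103.4  # eV, reference Si2p BE for H-ZSM-5

def charge_correct(be_measured, si2p_measured):
    """Shift all binding energies by the Si2p offset."""
    shift = SI2P_REF - si2p_measured
    return {line: be + shift for line, be in be_measured.items()}

raw = {"Mo3d5/2": 228.6, "C1s": 285.1}          # hypothetical raw positions
print(charge_correct(raw, si2p_measured=104.0))
# -> Mo3d5/2 ~ 228.0 eV (oxycarbide region), C1s ~ 284.5 eV (graphitic)
```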
Conclusions The effect of the molybdenum content on the activity and deactivation of Mo/ZSM-5 catalysts in the methane dehydroaromatization reaction has been studied. It was shown that the total methane conversion decreased by 30% during 12 h of the DHA reaction. The benzene formation rate increased from 0.5 to 13.9 μmol C6H6/(gMo·s) when the molybdenum content was decreased from 10 to 1 wt.%. The nature of the carbonaceous deposits formed on Mo/ZSM-5 catalysts during the DHA of CH4 was established. Correlations between the content and physicochemical properties of the CD and the catalyst composition and reaction conditions were determined. The content and condensation degree (C/H ratio) of the carbonaceous deposits decrease when the Mo content in the catalyst increases from 1 to 10 wt.%. It was shown that regeneration in oxygen at 520°C for 2 h led to the burn-out of all types of carbonaceous deposits formed on Mo/ZSM-5 after 6 h on stream. The stable operation of the 2%Mo/ZSM-5 catalysts over multiple reaction-regeneration cycles was demonstrated.
5,097.4
2009-11-10T00:00:00.000
[ "Chemistry" ]
Improvements in thermal efficiency of onion slice drying by exhaust air recycling Abstract Drying is an important process that extends the storage life and retains the quality of onion. Unfortunately, convective dryers with poor engineering designs and construction show low thermal efficiency for onion drying. This work aims to study the effects of recycled exhaust air on the thermal efficiency of onion slice drying at various temperatures and drying capacities. The exhaust air leaving the drying chamber was recirculated and mixed with fresh air to reduce the preheating load. The thermal efficiency of the proposed system was calculated and compared with the results of a standard drying process without exhaust air recycling. Findings showed that the average thermal efficiency of the modified dryer improved with increasing drying temperature, drying capacity, and recycled exhaust air-to-fresh air ratio. Optimization via a central composite design revealed an optimal thermal efficiency of 58.82% at a drying temperature of 67.94°C, drying capacity of 0.089 kg of onion slices, and recycled exhaust air-to-fresh air ratio of 4.0. The findings of this work suggest that significant cost reduction in onion slice drying may be achieved by optimizing the recycled exhaust air-to-fresh air ratio and drying capacity of the dryer. PUBLIC INTEREST STATEMENT This work aims to improve the thermal efficiency of onion slice drying by recycling the exhaust air exiting the dryer. The air leaving the dryer still contains an amount of heat that can be directly recovered. Here, the exhaust air was recirculated and mixed with fresh air in a certain ratio to reduce the preheating load. The results showed that the average thermal efficiency of the modified dryer improved with increasing drying temperature, drying capacity, and recycled exhaust air-to-fresh air ratio. The optimal efficiency of 58.82% was achieved at a drying temperature of 67.94°C, drying capacity of 0.089 kg of onion slices, and recycled exhaust air-to-fresh air ratio of 4.0. The findings of this work imply that incorporation of recycled exhaust air could remarkably reduce energy costs during onion slice drying. Introduction Onion (Allium cepa) bulbs are commonly used as a food seasoning and are rich in flavonoid compounds and vitamin C (Metrani et al., 2020). Postharvest processing, such as drying, strongly affects the preservation of these compounds. The drying of fresh onion aims to remove moisture from the outer surface layer of the crop to inhibit undesirable microbial activity and germination and enable the extended storage of the bulbs (Bourdoux et al., 2016). In general, onion bulbs are purchased from traditional markets for direct use as a food seasoning. However, many consumers are gradually adjusting to the use of sliced or powdered onion for the flexibility these forms offer (Bamba et al., 2020). In the food industry, onion is often processed as an instant seasoning in the form of dried slices or powder (Edith et al., 2018). Some bioactive substances and the vitamin C content of dried onion slices or powder are lost following exposure to high air drying temperatures. Sasongko et al. (2020) reported that higher drying temperatures can exacerbate phenolic compound degradation. However, when the air drying temperature and relative humidity are decreased to 50°C and <10%, respectively, phenolic compound retention may be as high as 96%. Djaeni et al. pointed out that thiamine retention is very low when conventional drying is conducted at 50-70°C.
For example, at a drying temperature of approximately 50°C for 5 h, only 14% thiamine retention is achieved (Djaeni & Arifin, 2017). Fresh onion bulbs dried at 40-60°C show degradation of their physicochemical properties, including color, quercetin content, and antioxidant activity, with increasing drying time (Djaeni et al., 2016). The use of dehumidified air as a drying medium could shorten the drying time of onion bulbs and preserve their bioactive compounds (Atuonwu et al., 2011; Sasongko et al., 2020). The main drawbacks of onion drying include massive energy usage and low thermal efficiency. The thermal efficiency of conventional convective drying is less than 50% (Kemp, 2014), which translates to only 10-20% total energy usage. Although solar drying could minimize energy costs, the technique usually requires longer drying times and is highly dependent on weather conditions (Tiwari, 2016). Dehumidified air drying has been reported to be a successful method to improve the energy efficiency of food drying (Djaeni et al., 2011). Relative air humidity close to 0%, or an air moisture content (MC) of up to 0.1 ppm, and temperatures in the range of 10-40°C could potentially enhance the driving force for moisture transfer from the onion surface to the drying air. Hence, the drying time could be shortened, and heat could be utilized more efficiently (Sasongko et al., 2020). Asiah et al. (2017) applied air dehumidification as a drying medium to reduce the free moisture on the surface or outer layer of fresh onion bulbs to approximately 12%. In this state, fresh onions can be protected from germination during storage. Results showed that dehumidified air can significantly reduce the drying time at drying temperatures ranging from 40 to 50°C. In the case of onion slice drying, both free and bound moisture must be removed to reduce the MC of the slices to approximately 20%. Free moisture can easily be evaporated by lowering the air humidity of the drying medium, but the bound moisture within the tissues of the onion layers requires more heat to eliminate. Heat is required to break the physical interactions between moisture and the tissues of the onion layers, resulting in free moisture, and more heat is then required to evaporate this free moisture. The heat required for these processes could be doubled, reducing the drying efficiency of the system (Strumilto et al., 2014). The time required to dry onion slices and reduce their MC from 86-88% to less than 10% by conventional drying is approximately 4 hours; however, the energy efficiency of this system is only approximately 35% (Nwakuba et al., 2017). A previous study showed that dehumidified air drying reduces the drying time to 0.8 hours, but no attempt to improve the energy efficiency of the system was reported (Sasongko et al., 2020). Energy efficiency may be enhanced by recycling the exhaust air leaving the drying chamber; this exhaust air contains potential sensible heat and, thus, can evaporate moisture from food products (Djaeni et al., 2007; Krokida & Bisharat, 2004). The exhaust air can also be recirculated during drying to minimize heat losses from the dryer. Existing dryer designs may be upgraded by installing systems to mix the exhaust air leaving the dryer with fresh air of a low relative humidity. The main benefit of such an upgraded system is that exhaust air can be directly reused; this improvement reduces the amount of fresh air entering the dryer for subsequent drying (Djaeni et al., 2007).
The current work aims to evaluate the effect of recycled exhaust air on the thermal efficiency of onion slice drying. Ambient air was heated to a certain temperature before being fed to the drying chamber, and the exhaust air exiting the dryer was then recirculated and mixed with incoming fresh air to minimize heat utilization. The thermal efficiency of this system was estimated under various drying conditions and compared with the results of standard drying without a recycling system. Materials Onion (A. cepa; cultivar, Bima) with an initial moisture content (MC) of 86.02% ± 0.66% (wet basis) or 6.15 ± 0.33 kg water/kg dry solid (dry basis), as determined according to the gravimetric method, was used in this work (Shreve et al., 2006). The onions were sliced into 1.233 ± 0.029 mm thick slices, as measured by a vernier caliper. Before slicing, the fresh onion was stored in a cool, dry, well-ventilated room at ambient conditions (27-29°C). Convective dryer The convective dryer used for onion slice drying is shown in Figure 1. Ambient air with a relative humidity of 77% ± 1.53% and temperature of 30.33°C ± 1.53°C (measured by KW0600561, Krisbow®, Indonesia) was supplied by a blower through a pipe with an inside diameter (ID) of 12 cm at a linear velocity of 1.78 m/s (measured by KW0600562, Krisbow®). The air was heated to a certain drying temperature (e.g., 50°C) in an electric preheater equipped with a thermoregulator. The hot air was channeled to the tray dryer (30 cm × 50 cm) loaded with 0.025 kg of fresh onion slices. The MC of the onions was recorded every 30 minutes for 180 minutes. Three experimental temperatures were tested (i.e., 50°C, 60°C, and 70°C), and experiments were conducted in two replicates. The mean values of the experiments were obtained and reported as results. The data were used to estimate the drying rate via Newton's model, as shown in equation 1 (Mota et al., 2010). The thickness and diameter of the onion before and after the drying process were observed. The thin onion layers showed only small changes of thickness and diameter after the drying process, as presented in Figure 2. Meanwhile, the hot air in the dryer can be considered homogeneously distributed in the cross-sectional direction (Figure 3), so the moisture evaporation and heat transfer were uniform. Moreover, the volume shrinkage could be considered negligible (Muhlbauer & Muller, 2020; Onwude et al., 2016). The model was evaluated in terms of the correlation coefficient R² and root mean square error (RMSE).

MR_t = exp(−kt) (1)

where k is the drying constant (1/min), t is the observation time (minute), and MR_t is the moisture ratio, which can be calculated from equation 2:

MR_t = (X_t − X_emc) / (X_0 − X_emc) (2)

where X_0 and X_t are the MCs (kg water/kg dry material) at the initial time (t_0) and observation time (t), respectively, and X_emc is the equilibrium MC (kg water/kg dry material), which is a function of air temperature and relative humidity (Sasongko et al., 2020) (temperature range, 30-50°C; relative humidity (RH) range, 0-90%), as expressed by the modified Henderson model (Viswanathan et al., 2003) in equation 3:

x_emc = [ln(1 − RH) / (−K(T + C))]^(1/N) (3)

where x_emc is the equilibrium MC in percent (dry basis), T is the air temperature (°C), and K, N and C are model constants. The drying time of onion slices from an MC of 86.02% (wet basis) or 6.15 kg water/kg dry solid (dry basis) to an MC of 30% (wet basis) or 0.42 kg water/kg dry solid (dry basis) can be estimated using equations 1-3 with the Newton model.
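As a sketch of how equations 1-3 can be fitted and then inverted to estimate the drying time, the following Python fragment uses SciPy; the moisture-ratio data and the equilibrium-MC value are synthetic placeholders, not the measured onion data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the Newton model MR(t) = exp(-k t) (equation 1) and report R^2 and
# RMSE, as in Table 1. The points below are synthetic placeholders.
t = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)   # min
mr = np.exp(-0.018 * t) + np.random.default_rng(0).normal(0, 0.005, t.size)

def newton(t, k):
    return np.exp(-k * t)

(k,), _ = curve_fit(newton, t, mr, p0=[0.01])
pred = newton(t, k)
ss_res = np.sum((mr - pred) ** 2)
ss_tot = np.sum((mr - mr.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(ss_res / t.size)
print(f"k = {k:.4f} 1/min, R^2 = {r2:.4f}, RMSE = {rmse:.4f}")

# Drying time to a target MC follows by inverting equation 1;
# X_emc = 0.05 kg/kg is an assumed value for illustration (equation 2).
mr_target = (0.42 - 0.05) / (6.15 - 0.05)
t_dry = -np.log(mr_target) / k
print(f"estimated drying time: {t_dry:.0f} min")
```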
Thermal efficiency Thermal efficiency was estimated on the basis of the laws of conservation of mass and energy. The following assumptions were taken into account for the calculations.
• The heater, dryer, and pipe system were operated adiabatically.
• No heat loss to the environment occurred during drying.
• The hot air was homogeneously distributed in the dryer (Figure 3).
• The process was a well-mixed system.
• The physical properties (e.g., density and specific heat) of dry and wet air were identical.
• The dimensions of the onion did not change during drying, i.e., onion shrinkage was neglected (Figure 2).
• Pressure drops were not considered in the calculations.
• The operating pressure was 1 bar.
• The exhaust air of the dryer had a relative humidity of 50%.
• The enthalpy was referenced to a temperature of 0°C.
Equation 4 shows the expression used to calculate the thermal efficiency of the dryer (Kudra, 2012):

E = 100% × (T_i − T_o) / (T_i − T_amb) (4)

where T_amb is the temperature of ambient air (°C), T_o is the temperature of the exhaust air leaving the dryer (°C), and T_i is the temperature of the air entering the dryer (°C). The temperature of the exhaust air can be estimated from the energy balance between the heat supplied by the drying air and the heat of water evaporation, as shown in equations 5 and 6. The energy balance in the adiabatic drying system, where the heat used to evaporate moisture from the onion equals the sensible heat released by the hot air, could be used to estimate the temperature change of the air (ΔT) as follows:

M_s λ (−dX/dt) = F_a C_pa ΔT (5)

ΔT = M_s λ (−dX/dt) / (F_a C_pa) (6)

where M_s is the mass of dry solid (kg), λ is the latent heat of water vaporization (kJ/kg) (Green & Southard, 2019), F_a is the air flow rate (kg/minute), and C_pa is the heat capacity of air (kJ/kg K) (Green & Southard, 2019). The thermal efficiency calculations were conducted at different drying capacities (i.e., 0.05, 0.075, 0.1, 0.125, and 0.15 kg of onion slices) and drying times. The average thermal efficiency was calculated from the two sets of experiments and reported as the result. Recycled exhaust air The exhaust air leaving the drying chamber contains a large amount of heat that could potentially be reused. In the present study, some of the exhaust air exiting the dryer was recycled and mixed with fresh ambient air before being fed to the dryer. In this way, the total heat load in the heater could be reduced, resulting in improved thermal efficiency. Partial, rather than total, recycling of the exhaust air is recommended because total recycling can lead to saturation of the drying air, in which case the capacity of the air to evaporate water from a wet product is limited. The thermal efficiency calculations were slightly modified to consider partial air recycling, as shown in equation 7:

E = 100% × (T_i − T_o) / (T_i − T_m) (7)

where T_m is the temperature of the mixed air (fresh air + recycled air), which can be estimated by equation 8:

T_m = (F_f T_amb + F_RC T_o) / (F_f + F_RC) (8)

where F_f and F_RC respectively refer to the flowrates of fresh ambient air (m³/minute) and recycled air (m³/minute). Because the total air entering the dryer is considered constant, the addition of recycled air reduces the amount of fresh air entering the dryer. Equation 8 was rearranged to obtain equations 9 and 10:

R = F_RC / F_f (9)

T_m = (T_amb + R T_o) / (1 + R) (10)

where R represents the recycled exhaust air-to-fresh air ratio. Here the R values were varied from 1 to 4 to determine the effect of recycled exhaust air on thermal efficiency enhancement.
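A numerical sketch of the efficiency relations above (equations 4 and 7-10, as reconstructed here); all temperatures are hypothetical illustrative values:

```python
# Numerical sketch of the thermal-efficiency relations (eqs. 4, 7-10).
# The temperatures below are hypothetical, for illustration only.

def efficiency(t_in, t_out, t_ref):
    """Share of the supplied sensible heat used for drying, % (eq. 4 / eq. 7)."""
    return 100.0 * (t_in - t_out) / (t_in - t_ref)

def mixed_temperature(t_amb, t_out, r):
    """Temperature of the (fresh + recycled) air for recycle ratio R (eqs. 9-10)."""
    return (t_amb + r * t_out) / (1.0 + r)

t_amb, t_in, t_out = 30.0, 70.0, 55.0   # degC, hypothetical

print(f"no recycle : {efficiency(t_in, t_out, t_amb):.1f} %")    # eq. 4
for r in (1, 2, 4):
    t_m = mixed_temperature(t_amb, t_out, r)
    print(f"R = {r}: T_m = {t_m:.1f} degC, "
          f"efficiency = {efficiency(t_in, t_out, t_m):.1f} %")  # eq. 7
```

With these numbers the calculated efficiency rises monotonically with R, which is the qualitative behaviour reported in the results below.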
Optimization using response surface methodology (RSM) Response surface methodology was used to explore correlations between independent and dependent variables in the drying process. Here, sequential values of the independent variables were applied to determine the optimal response of the dependent variable. A central composite design (CCD) was used in the optimization studies. The design consisted of 10 runs, including 4 star, 4 cube, and 2 center points. The drying temperature (A) and drying capacity (B) were selected as independent variables, and the percentage of thermal efficiency (E) was selected as the response, modeled by the second-order polynomial in equation 11:

E = γ_0 + γ_1 A + γ_2 B + γ_11 A² + γ_22 B² + γ_12 AB (11)

where E is the predicted response, γ_0 is the constant coefficient, γ_1 and γ_2 are linear coefficients, γ_11 and γ_22 are quadratic coefficients, and γ_12 is the two-factor interaction coefficient. Statistical analyses and optimization were performed using Statistica software (version 10.0, USA). Figure 4 presents the moisture ratios of dried onion slices without recycled exhaust air and with a recycled exhaust air-to-fresh air ratio of 1 at drying temperatures of 50°C, 60°C, and 70°C. In all drying experiments, the moisture ratio decreased exponentially, which is a typical observation in food products during drying. The decrease in moisture ratio was more pronounced in the first 60 minutes of drying than at later times because of the high initial concentration of free moisture on the surface of the samples. Thus, moisture could easily be diffused to and evaporated by the drying air. Approximately midway through the drying process, the availability of free moisture on the surface of the samples is limited, and the removal of bound moisture within the inner tissue matrix of the onion begins to take place (Asiah et al., 2017). This process requires more heat to break moisture-onion layer linkages and a longer time for moisture to diffuse to the onion surface before evaporating, as observed in some other food products during drying (Kudra, 2012; Viswanathan et al., 2003). Figure 4 also demonstrates that the rate of moisture ratio reduction is faster at higher drying temperatures than at lower ones. For example, the moisture ratio observed after drying (180 minutes) at a drying temperature of 70°C is 1.21 times lower than that obtained when the drying temperature is 60°C. Higher drying temperatures promote the driving force for mass transfer, which increases the moisture evaporation rate and reduces the drying time (Djaeni et al., 2011). Parameter estimation The change of onion slice volume during the drying process was observed by measuring the thickness and diameter of the onion slices, as previously presented in Figure 2. The difference in thickness was about 13% between total moisture contents of 86.02% and 30.76%, so the shrinkage factor can be neglected and the Newton model is suitable for this case (Muhlbauer & Muller, 2020; Onwude et al., 2016). Thus, the moisture ratio profiles could be fitted using the Newton model, which has been employed to describe the drying processes of foods such as strawberry (El-Beltagy et al., 2007), red chili (Hossain & Bala, 2007), and onion (Kadam et al., 2011). According to the R² and RMSE values calculated in the present study, the Newton model fits the data well (Table 1). The R² values were very close to 1.00, and the RMSE values were lower than 0.008. These findings imply that the Newton model can be used to describe the phenomenon of onion slice drying. The validated model was applied to estimate the drying time from an initial MC of 86.02% (wet basis) or 6.15 kg water/kg dry solid (dry basis) to a final water content of 30% (wet basis) or 0.429 kg water/kg dry solid (dry basis), as shown in Table 1.
As the drying temperature increased, the drying rate increased and the drying time decreased (Table 1). Higher drying temperatures enhance the driving force for mass transfer by increasing the temperature difference, which leads to lower moisture viscosity and higher moisture diffusivity. These phenomena result in faster mass transfer rates from the moist product to the drying air (Djaeni et al., 2011). According to the results of onion slice drying, increasing the drying temperature by 20°C would reduce the total drying time by nearly half. The drying processes with and without exhaust air recycling were comparable: exhaust air recycling did not affect the drying rate or drying time, since the temperature of the hot air entering the dryer is the same. However, it can reduce the heat load of the preheater. Thermal efficiency estimation Figure 5 exhibits the thermal efficiencies estimated for various drying times under different temperatures and drying capacities. Earlier experiments indicated that higher temperatures improve thermal efficiency by providing stronger driving forces for mass transfer; thus, more moisture could be evaporated from the onion. However, midway through drying, the residual moisture on the onion slice surface decreases and the driving force for mass transfer declines. Thus, more heat is required to break the bound moisture within the tissue matrix of the onion layer (Minea, 2016), and poorer thermal efficiency is observed. An increase in drying capacity enhances thermal efficiency, as shown in Figure 5. The amount of moisture increases as the drying capacity increases, and the sensible heat transferred from the drying medium could be fully utilized in the presence of more free moisture that could be easily evaporated. However, longer drying times eventually reduce the average thermal efficiency of the drying process. In all cases studied, drying without recycled exhaust air resulted in poor thermal efficiency. This low efficiency can be explained by analyzing the temperature of the exhaust air exiting the dryer. In all drying experiments, the temperature of the exhaust air leaving the drying chamber was higher than that of ambient air. After 60 min of drying, for example, the temperature of the exhaust air leaving the dryer remained close to the inlet air temperature (see Figure 6). Thus, heat usage for water evaporation in the system without recycled exhaust air may be considered inefficient. Because exhaust air contains a considerable amount of sensible heat, some of the exhaust air leaving the drying chamber could be recirculated and mixed with fresh ambient air prior to feeding into the heater. In this way, heat is directly recovered, the air temperature entering the heater increases significantly, and the heat load of the preheater could be reduced. The average thermal efficiency of onion slice drying from an initial MC of 86.02% (wet basis) or 6.15 kg water/kg dry solid (dry basis) to a final MC of 30% (wet basis) or 0.429 kg water/kg dry solid (dry basis) was estimated. The results in Figure 7 show that thermal efficiency improves with increasing air temperature and drying capacity (Borah et al., 2015; Djaeni et al., 2011). For example, increasing the air temperature by 10°C could increase the thermal efficiency of drying by 10%. The thermal efficiency of drying improved with increasing drying capacity.
However, at a capacity of 0.12 kg, the improvement in thermal efficiency was not significant because the ability of the air to evaporate moisture from the samples is limited. In general, onion drying without recycled exhaust air resulted in poor average thermal efficiencies. This finding supports the results of previous studies on apple slice drying (Beigi, 2016), greenhouse drying (Kadam et al., 2011), and onion drying with a convective dryer (EL-Mesery & Mwithiga, 2012). Thermal efficiency improvement with recycled exhaust air The moderately hot exhaust air exiting the dryer contains potential sensible heat that could be reused for other purposes. Unfortunately, exhaust air is also characterized by a higher absolute humidity, which could reduce the driving force for mass transfer. Therefore, total exhaust air recycling is not recommended. Instead, some of the exhaust air could be recycled to recover heat directly and reduce the preheater load required to heat the air to the desired drying temperature (Djaeni et al., 2007; Krokida & Bisharat, 2004). Figure 8 shows that the thermal efficiency of drying in all test cases could be significantly enhanced by employing recycled exhaust air. A recycled exhaust air-to-fresh air ratio of 1 (50% fresh air + 50% exhaust air) improved the thermal efficiency of drying by 1.5 times compared with the drying process without exhaust air recycling (R = 0), especially in the first 30 min. However, after a certain time (e.g., 80 minutes), the improvement in thermal efficiency was limited because the water content of the product was low, which weakens the driving force for mass transfer. The average thermal efficiency of the system represents the performance of onion slice drying with recycled exhaust air. The average thermal efficiency was calculated on the basis of the heat required to reduce the MC of onion from 86.02% to 30% (wet basis). Figure 9 shows the thermal efficiency of onion slice drying at a drying temperature of 70°C under various recycled exhaust air-to-fresh air ratios and drying capacities. In all cases, thermal efficiency improved with increasing drying capacity and recycled exhaust air-to-fresh air ratio. For example, without recycled exhaust air, the maximum calculated thermal efficiency was only 39%, as depicted in Figure 7. With exhaust air recycling, the average thermal efficiency was able to reach approximately 57% (Figure 9). Improvements in thermal efficiency by incorporating recycled exhaust air could help remarkably reduce the overall energy requirement for drying, similar to the findings of a previous study (Djaeni et al., 2007). Optimization of thermal efficiency in onion slice drying The objective of the present experiment is to reduce the MC of onion slices from 86% to 30% (wet basis) with optimum thermal efficiency. Ten experimental runs were conducted to obtain the optimal drying conditions that could maximize the thermal efficiency of drying. Low and high levels of the independent variables were selected on the basis of the data presented in Figures 8 and 9. The factors selected for optimization included drying temperature (60-70°C) and drying capacity (0.025-0.125 kg of onion slices), and a recycled exhaust air-to-fresh air ratio of 4.0 was applied. Table 2 shows the low and high levels of the independent variables, and Table 3 shows the experimental design using orthogonal CCD.
Optimization using orthogonal CCD revealed the relationship between the two independent variables (i.e., drying temperature and onion slice weight) and one response variable (i.e., thermal efficiency), as expressed in a second-order polynomial equation (equation 12). This equation was evaluated using R², as shown in Table 4. The model was successfully fitted to the data with R² close to 0.9.

E = −41.76 + 2.40A + 426.13B + 0.77AB − 0.02A² − 2672.06B² (12)

The coefficients of A and B in equation 12 were positive, which indicates that thermal efficiency increases at higher drying temperatures and drying capacities. Three- and two-dimensional plots of the thermal efficiency of onion slice drying determined from equation 12 and Table 2 are illustrated in Figure 10(a,b), respectively. According to Table 4, B significantly affects the thermal efficiency of drying with a p value of <0.05. The maximum thermal efficiency, 58.82%, was obtained at a drying temperature of 67.94°C and a drying capacity of 0.089 kg of onion slices. Therefore, improving thermal efficiency by heat recovery may be a sensible option to reduce the total energy consumption of a drying system.
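For readers who want to reproduce the optimum, the stationary point of equation 12 can be located by solving the linear system grad E = 0; because the printed coefficients are rounded, the result only approximates the reported optimum (67.94°C, 0.089 kg, 58.82%):

```python
import numpy as np

# Stationary point of the fitted response surface (equation 12).
def E(a, b):
    return (-41.76 + 2.40 * a + 426.13 * b + 0.77 * a * b
            - 0.02 * a ** 2 - 2672.06 * b ** 2)

# grad E = H @ x + g = 0, so x = solve(H, -g)
H = np.array([[-0.04, 0.77],
              [0.77, -5344.12]])   # second partial derivatives of E
g = np.array([2.40, 426.13])       # linear coefficients
a_opt, b_opt = np.linalg.solve(H, -g)
print(f"A* = {a_opt:.2f} degC, B* = {b_opt:.3f} kg, E* = {E(a_opt, b_opt):.2f} %")
# B* reproduces ~0.089 kg; A* and E* deviate because of coefficient rounding.
```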
Comparison with other drying methods The thermal efficiencies of several drying technologies for fruits and vegetables have recently been evaluated. Kadam et al. (2011) observed the thermal efficiency of a greenhouse solar dryer for onion slice drying at midday for 10 days and obtained a thermal efficiency of approximately 20.82%. A hybrid dryer combining two heat sources, namely, solar energy and microwave energy, revealed a thermal efficiency of approximately 24.3% (Çelen & Karataser, 2019). A solar collector equipped with a modified solar tracking system to move the collector showed enhanced thermal efficiency of up to 75.7% (Das & Akpinar, 2020). The thermal efficiency of convective dryers used for apple slice and potato drying is generally below 15% (El-Beltagy et al., 2007; Kadam et al., 2011). Such poor efficiency can be explained from the perspective of the temperature of the exhaust air leaving the dryer. In the present study, some of the exhaust air was recycled by mixing it with the fresh air entering the heater so that the sensible heat of the exhaust air could be directly recovered and the heater load reduced. A recycled exhaust air-to-fresh air ratio of 4.0 resulted in a thermal efficiency of 58.82% at a drying temperature of 67.94°C. This improvement is superior to the thermal efficiencies produced by a solar dryer (Kadam et al., 2011), hybrid dryer (Çelen & Karataser, 2019), convective dryer (Beigi, 2016), and vacuum infrared dryer (Hafezi et al., 2015). Conclusion The thermal efficiency of onion slice drying was studied at various temperatures and drying capacities, and the drying rate and effective drying time of 0.025 kg of onion slices at temperatures ranging from 50°C to 70°C were evaluated using the Newton model. The Newton model could satisfactorily predict the drying characteristics of onion slices. Drying rates increased with increasing drying temperature, and the drying time could be shortened by approximately 50% by increasing the drying temperature by 20°C. The combination of mass and heat balances could help predict the temperature of the air exiting the dryer and be used in thermal efficiency calculations. Results showed that the average thermal efficiency of the conventional dryer is relatively poor. However, the introduction of recycled exhaust air could reduce the heat load of the heater and significantly improve the average thermal efficiency of the dryer. Drying efficiencies improved with increasing drying temperature, recycled exhaust air-to-fresh air ratio, and drying capacity. Finally, process optimization via CCD was conducted to obtain the best conditions with which to dry onion slices from an initial MC of 86% to a final MC of 30% (wet basis) with optimum thermal efficiency. Three input variables, namely, drying temperature, drying capacity, and recycled exhaust air-to-fresh air ratio, were selected. Results showed that the maximum thermal efficiency (58.82%) could be achieved at a drying temperature of 67.94°C, drying capacity of 0.089 kg of onion slices, and recycled exhaust air-to-fresh air ratio of 4.0. These results demonstrate that incorporation of recycled exhaust air could remarkably reduce energy costs during onion slice drying.
6,508.6
2021-01-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
Potential utility of eGFP-expressing NOG mice (NOG-EGFP) as a high purity cancer sampling system Purpose It is still technically difficult to collect high purity cancer cells from tumor tissues, which contain noncancerous cells. We hypothesized that xenograft models of NOG mice expressing enhanced green fluorescent protein (eGFP), referred to as NOG-EGFP mice, may be useful for obtaining such high purity cancer cells for detailed molecular and cellular analyses. Methods Pancreato-biliary cancer cell lines were implanted subcutaneously to compare the tumorigenicity between NOG-EGFP mice and nonobese diabetic/severe combined immunodeficiency (NOD/SCID) mice. To obtain high purity cancer cells, the subcutaneous tumors were harvested from the mice and enzymatically dissociated into single-cell suspensions. Then, the cells were sorted by fluorescence-activated cell sorting (FACS) for separation of the host cells and the cancer cells. Thereafter, the contamination rate of host cells in the collected cancer cells was quantified by FACS analysis. The viability of cancer cells after FACS sorting was evaluated by cell culture and subsequent subcutaneous reimplantation in NOG-EGFP mice. Results The tumorigenicity of NOG-EGFP mice was significantly better than that of NOD/SCID mice for all of the analyzed cell lines (p < 0.01). Sorting procedures enabled an almost pure collection of cancer cells with only slight contamination by host cells. Reimplantation of the sorted cancer cells formed tumors again, which demonstrated that cell viability after sorting was well maintained. Conclusions This method provides a novel cancer sampling system for molecular and cellular analysis with high accuracy and should contribute to the development of personalized medicine. Introduction Cancer xenograft models of immunodeficient mice are widely applied in various cancer research areas. Recently, xenografted human tumors have commonly been used for preclinical drug testing, including biomarker discovery [1,2]. It has been reported that there is a close correlation between the effects in xenografts and clinical outcomes, in terms of both drug resistance and sensitivity [3]. An eventual goal of such preclinical studies using mouse xenograft models is the realization of personalized medicine. Molecular analyses using clinical specimens or xenografted tumors are essential in research for personalized medicine, and high purity samples of sufficient volume are necessary for precise analyses. In general, mouse xenografts are superior to clinical specimens because of the abundance and renewability of the tumor samples. Tumors consist of two components, i.e. cancer cells and stroma. Stromal cells within xenografted tumors are derived from murine cells: even when tumor tissue acquired from patients is transplanted, human stromal cells are ultimately replaced by murine stromal cells [4]. Accordingly, contamination by stromal cells hinders precise analyses of cancer cells using tumor tissue. Although stromal cells need to be removed from tumor tissue as much as possible to obtain accurate results, it is still technically difficult to collect high purity cancer cells without contamination by stromal cells. As technologies for comprehensive analyses (e.g., high-resolution microarray, next-generation sequencing and proteomics) are progressing rapidly, high purity samples uncontaminated by stromal cells are necessary for such advanced technology.
Therefore, it is very important to establish a method of clearly separating cancer cells and stromal cells and collecting cancer cells uncontaminated by stromal cells. On the other hand, athymic nude mice, nonobese diabetic/severe combined immunodeficiency (NOD/SCID) mice or NOD.Cg-Prkdcscid Il2rgtm1Sug/ShiJic (NOG) mice are routinely used for mouse xenograft models of cancer. Among these types of mice, NOG mice show the most severely immunodeficient state. Machida and colleagues have reported that NOG mice have higher susceptibility to xenografted tumors than other immunodeficient mice [5]. Thus, NOG mice are very useful for the transplantation of tumor tissue. In 2008, Niclou and colleagues reported that NOD/SCID mice with ubiquitous expression of enhanced green fluorescent protein (eGFP) were useful for the clear separation of tumor cells and mouse stromal cells in subcutaneous xenografted tumors by fluorescence-activated cell sorting (FACS), and demonstrated that the contamination by stromal cells after the removal of eGFP-expressing cells was slight [6]. Meanwhile, Suemizu et al. generated NOG mice expressing eGFP ubiquitously (NOG-EGFP) and clarified that NOG and NOG-EGFP mice have equivalent immunodeficient states [7]. However, there are no reports studying cancer xenografts in NOG-EGFP mice. In this study, we hypothesized that NOG-EGFP mice are potentially useful for the collection of cancer cells without contamination by stromal cells and would also have the advantage of easy engraftment. Here we compare the tumorigenicity between NOG-EGFP and NOD/SCID mice and show the degree of contamination by stromal cells after removal of eGFP-expressing cells from the xenografted tumors of NOG-EGFP mice by FACS. Furthermore, we demonstrate the viability of the collected cancer cells by cell culture and subsequent inoculation. Ethics All animal experiments conformed to the guidelines of the Institutional Animal Care and Use Committee of Tohoku University and were performed in accordance with the Guide for the Care and Use of Laboratory Animals of Tohoku University. The protocol was approved by the Ethics Review Committee of Tohoku University. Animals 6-week-old female NOG-EGFP (formally, NOD.Cg-Prkdcscid Il2rgtm1Sug Tg(Act-eGFP)C14-Y01-FM1310sb/ShiJic) mice and NOG mice were kindly provided by the Central Institute for Experimental Animals (Kawasaki, Japan). NOD/SCID mice were purchased from CLEA Japan, Inc. (Tokyo, Japan). Female heterozygous NOG-EGFP mice were mated with male NOG mice in order to breed the NOG-EGFP mice, with the permission of the Central Institute for Experimental Animals. Since their offspring were either NOG or NOG-EGFP mice, the fluorescence of the NOG-EGFP mice was confirmed with a hand-held UV lamp (COSMO BIO, Tokyo, Japan). Thereafter, the NOG-EGFP mice were used in the experiments. The animals were housed under pathogen-free conditions on a 12-hour light cycle and with free access to food and water. Image acquisition We confirmed that organs and cells obtained from NOG-EGFP mice could be fluorescently visualized. In detail, after euthanizing NOG-EGFP mice, internal organs were placed on a tray and imaged using an IVIS® Spectrum system (Caliper Life Sciences, MA, USA). Skin fibroblasts of NOG-EGFP mice were cultured in RPMI-1640 medium with 10% FBS and 1% P/S. Subsequently, the cultured fibroblasts on dishes were visualized using a Keyence BZ-9000 fluorescence microscope (Keyence Corporation, Osaka, Japan).
Cell transplantation in NOG-EGFP and NOD/SCID mice 5 × 10⁵ cells in a total volume of 100 μl of media were injected subcutaneously into each side of the lower back of 6-8-week-old NOG-EGFP mice and NOD/SCID mice. Tumor size was measured with digital calipers (A&D, Tokyo, Japan) twice a week. Tumor volume was determined using the formula of ref. [8]. Patient-derived cancer xenografts Resected specimens of pancreatic cancer tissue were cut into 2-3 mm³ pieces in antibiotic-containing RPMI-1640 media. Under anesthesia with pentobarbital (Abbott Laboratories, IL, USA) and sevoflurane (Maruishi Pharmaceutical, Osaka, Japan), the pieces of the tumors were implanted subcutaneously into each side of the lower back of 6-8-week-old female NOG-EGFP mice. Tumors were harvested upon reaching a volume of 1,500 mm³ and provided for immunohistochemistry. Immunohistochemistry Subcutaneous tumors of NOG-EGFP xenografts were fixed in 10% formalin before being embedded in paraffin. After blocking, immunohistochemistry for eGFP was performed using a rabbit anti-GFP antibody (ab290, Abcam, MA, USA) at a dilution of 1:1000, incubated for 1 hour at 25°C. A horseradish peroxidase (HRP)-conjugated goat anti-rabbit IgG (Nichirei Biosciences, Tokyo, Japan) was used as the secondary antibody. Peroxidase visualization was done using 3,3'-diaminobenzidine (DAB). All techniques, including H&E staining, were performed by the Animal Pathology Platform, Biomedical Research Core of Tohoku University Graduate School of Medicine. Cell sorting and phenotyping of murine stromal cells TFK-1 xenografts were used in this experiment. Freshly isolated subcutaneous tumors of NOG-EGFP mice were enzymatically dissociated into single-cell suspensions and sorted, as previously reported [6]. Analyses were performed on a FACSAria II cell sorter (BD Biosciences). Viability of sorted cancer cells Xenografted tumors of TFK-1 cells in NOG-EGFP mice were harvested and separated into cancer cells and stromal cells by FACS as described above. Collected TFK-1 cells were cultured on dishes and subsequently reimplanted in NOG-EGFP mice. In order to confirm the effect of the removal of eGFP-expressing cells, subcutaneous tumors of TFK-1 cells were used for primary cell culture without FACS sorting as a control. Statistical analysis Data are presented as the mean ± S.E. Statistical significance was determined by the Mann-Whitney U test performed using GraphPad Prism for Windows version 5.02. Differences between experimental groups were considered significant when the p-value was <0.05. Confirmation of eGFP expression in NOG-EGFP mice Green fluorescence was detected in the NOG-EGFP mice by a hand-held UV lamp (Figure 1A). Almost all internal organs showed green fluorescence in the imaging instrument (Figure 1B). The fluorescence of skin fibroblasts was visible using a fluorescence microscope (Figure 1C). Histological findings revealed eGFP-expressing cells (shown as DAB-positive cells in Figure 1Db and fluorescent cells in Figure 1Dc) in the stroma of the xenografted tumors, whereas cancer cells did not show eGFP expression (Figure 1Db-c). Based on the findings mentioned above, the expression of eGFP in NOG-EGFP mice was confirmed. (Figure 2 caption: Tumorigenicity was compared between NOG-EGFP mice and NOD/SCID mice using the pancreato-biliary cancer cell lines A) TFK-1, B) HuCCT1, C) MIAPaCa2 and D) AsPC-1. A total of 5.0 × 10⁵ cells was injected into each mouse (n = 6). ** denotes p < 0.01. NOG-EGFP mice showed a significantly higher tumorigenic potential than NOD/SCID mice in all cell lines (p < 0.01).)
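The exact tumor-volume formula of ref. [8] did not survive extraction here; the sketch below assumes the commonly used caliper formula V = (length × width²)/2, which is an assumption on our part rather than a confirmed quote of ref. [8]:

```python
# Tumor volume from caliper measurements. The formula below is the
# commonly used approximation V = length * width^2 / 2 and is assumed,
# since the paper's own formula [8] was lost in extraction.
def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Approximate subcutaneous tumor volume in mm^3 (assumed formula)."""
    return length_mm * width_mm ** 2 / 2.0

print(tumor_volume(18.0, 13.0))  # ~1521 mm^3, near the 1,500 mm^3 harvest size
```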
Comparison of tumorigenic potential between NOG-EGFP and NOD/SCID mice Human pancreatic cancer cell lines (MIA PaCa2 and AsPC-1) and human cholangiocarcinoma cell lines (TFK-1 and HuCCT1) were inoculated into NOG-EGFP mice and NOD/SCID mice for comparison of the tumorigenic potential. The tumorigenic potential of the NOG-EGFP mice was significantly superior (p < 0.01) to that of the NOD/SCID mice for all cell lines (Figure 2A-D). Separation of cancer cells and stromal cells A single-cell suspension was obtained by enzymatic dissociation of the xenografted tumors of TFK-1 cells. The cancer cells and the GFP-expressing cells were sorted using FACS. FACS analysis showed two clear subpopulations, enabling us to separate the cancer cells and the GFP-expressing cells (Figure 3A). Then, the subpopulation of cancer cells was collected for phenotyping of murine stromal cells. CD31, CD90, CD49b, CD14 and CD11c are specific markers suggesting the existence of endothelial cells, fibroblasts, natural killer cells, macrophages and dendritic cells, respectively. The percentages of mouse CD31, CD90, CD49b, CD14 and CD11c positive cells in the subpopulation of the cancer cells were almost below the detection level (0.9% CD31, 0.4% CD90, 1.6% CD49b, 1.7% CD14 and 0.4% CD11c; Figure 3B). These results demonstrated that the accuracy of the separation of the cancer cells and the host cells in this study was the same as in the previous report [6]. Cell viability after FACS sorting Cancer cells collected from TFK-1 xenografts of NOG-EGFP mice by FACS were able to grow on the dishes (Figure 4A). Few fluorescent cells were detectable among the collected cancer cells (experimental) on the dishes, whereas the unsorted cancer cells (control) showed a mixture of fluorescent and non-fluorescent cells (Figure 4A). These results demonstrated that FACS sorting could completely separate cancer cells and stromal cells. Subsequent reimplantation after cell culture showed that the sorted cancer cells had tumorigenic ability (Figure 4B). Since the period from inoculation to the beginning of growth was longer for the sorted TFK-1 cells than for the unsorted TFK-1 cells (Figure 4B), the viability of the sorted cells might have been lower than that of the unsorted cells. Discussion The aim of the present study was to develop methods for separating mice-xenografted human cancer cells from host cells by FACS with a minimal amount of contamination, while maintaining cell viability for subsequent analyses. For this purpose, we have developed techniques that employ NOG-EGFP mice. To date, fluorescent immunodeficient mice, i.e. GFP nude mice [9], NOD/SCID EGFP mice [6] and NOG-EGFP mice [7], have been established. Previous reports showed that fluorescent mice were very useful for studying the details of tumor-stroma interaction [10-12]. Recently, Niclou and colleagues reported the almost complete separation of cancer cells and host cells using xenografted tumors of a glioma cell line in NOD/SCID EGFP mice. Based on this report, we evaluated the contamination rate of murine stromal cells among the collected cancer cells. Our results showed contamination rates similar to those of the previous report and suggest that fluorescent mice would be very useful for the separation of cancer cells from host cells. However, the purity of the separation might differ with tumor type and implantation site, since the content of stromal cells varies among them.
Further studies, including orthotopic models of several organs and the use of other tumor types, are needed to evaluate the purity of separation. We also demonstrated that the sorted cancer cells were able to grow in vitro and in vivo. One of the advantages is that the tumor cells start to grow significantly earlier in NOG-EGFP mice than in NOD/SCID mice. Our present results provide a novel way of employing collected cancer cells for various subsequent analyses. In the report of the NOD/SCID EGFP xenografts, cancer cells labeled with another type of fluorescence were used for the separation study [6]. The present study suggests that fluorescent labeling of cancer cells is not necessary for the separation of cancer cells and host cells. On the other hand, this method is applicable to the collection of not only cancer cells but also stromal cells. The methodology using fluorescent mouse xenografts might usefully contribute to studies of cancer stromal cells. In conclusion, NOG-EGFP mice have high potential utility for the complete separation of cancer cells and stromal cells, with minimal contamination, if any, from xenografted tumors. Further studies are needed to establish a solid methodology for the separation and collection of stromal/cancer cells, and the use of NOG-EGFP mice for this purpose is very promising.
3,199
2012-06-06T00:00:00.000
[ "Biology", "Medicine" ]
Research on Economic Periodicity Based on Principal Component-Weighted Distance and Clustering Analysis Based on the knowledge of economics, this paper selects 22 macroeconomic indicators that best reflect the overall economic situation of the United States. After differential, logarithmic and exponential preprocessing of the original data, this paper, based on the power spectral analysis model, adaptively identifies the periodicity of the selected economic indicators and visualizes the results. As a result, it selects 11 indicators with obvious periodicity. In the process of computing the weighted distance based on principal component analysis, a correlation test is first conducted on the 11 selected periodic indicators to obtain a Pearson correlation heatmap. Then, the principal components are extracted, with the first five principal components selected as virtual indicators to represent the monthly economic situation, and the weighted distance values between months are calculated for visualization. We then select the results of 36-month smoothing for analysis, identify the time intervals with similar economic situations, and verify the conjecture of economic periodicity. Finally, based on K-MEANS clustering analysis, the economic conditions of the 352 months are classified into 3 clusters using the weighted distance after 36-month smoothing. From the visualized results, it is found that there are two complete cycles of the red-yellow-blue pattern, which is consistent with the conclusion of the principal component analysis model and proves the existence of the economic cycle again. In conclusion, based on the above PCA weighted distance and clustering analysis, it can be concluded that the economic period is around 176 months, in favor of the medium-long periodicity theory. Research Background As the famous journalist and historian Eduardo Galeano remarked, "History never really says goodbye. History says, see you later" [1]. For a long time, investors and market observers have been researching past markets as an indicator of the current situation and future prospects. As the old saying goes, "we should take history as a mirror"; history always provides some references for us. Therefore, we need to use historical data for integrated analysis and calculation to summarize the laws of economic development. In addition, we need to compare current economic development with history, and guide the formulation of economic policies in line with the current situation in combination with past experience and lessons. In this historical context, the theory of the economic cycle came into being. The economic cycle refers to the periodic fluctuation phenomenon in the process of national economic development due to the fluctuation of various economic indicators [2]. Such a phenomenon is a major feature of macroeconomic operation, which can be used to describe the fluctuation of economic activities under the overall growth trend. Generally speaking, in an economic boom, social productivity increases, the level of household consumption rises, the employment status improves, and employment and output are at their peaks. However, in an economic recession, the economy gradually declines and falls between prosperity and depression. In an economic depression, the national economy is characterized by a sharp decrease in social productivity, a significant reduction in the level of household consumption, mass unemployment in the labor market, and employment and output at the bottom.
After the depression, the economy gradually picks up and enters a recovery period before the next boom, completing a full cycle [3]. The discussion of economic periodicity has always been a hot topic. Different schools hold different viewpoints; the main disagreement is whether the periodicity is endogenous or exogenous [4]. In a deep analysis of this problem, John Keynes, the famous modern British economist, explained in 1936, in his General Theory of Employment, Interest and Money, the inevitable regularity of economic development, i.e., its spiral periodic movement. In Keynes' opinion, the capitalist system was not as perfect as the classical texts described, because crisis and cycle cannot be avoided. The main cause of crisis lies in the lack of effective demand, which is rooted in three fundamental psychological laws [5]; Keynes thus explained the cause of the economic cycle from a psychological perspective, and the formation of the economic cycle can be explained in the same way. In 1939, Schumpeter summarized three economic cycles, i.e., the long cycle, the medium cycle, and the short cycle; although these three cycles have different lengths and definitions, each is nested within the next. The theory of the economic cycle has always had an important impact on the world, and how the economic cycle forms has been the main focus of the disagreements among the various schools: some insist that it is exogenous, while others hold that it is endogenous. The real business cycle theory holds that real factors beyond the management system, represented by technological shocks, are the important causes of the economic cycle [6]. By contrast, the equilibrium cycle theory proposed by Robert E. Lucas Jr., the main representative of the Rational Expectations school, also called the Lucas cycle, holds that the main reason for economic fluctuation lies in producers' responses to price changes [7]. The theory combines time series analysis with rational expectations, achieves both a microanalysis of macroeconomics and a long-term, dynamic macroeconomic analysis, and maintains the basic theory of classical economics. In the research on economic periodicity, scholars have gradually shifted their focus from time-domain analysis to time-frequency-domain analysis, extracting the frequencies and amplitudes of economic cycle fluctuations from another perspective, which also provides a basis for our research on the periodicity of economic fluctuations. In addition, we introduce ideas from artificial intelligence into the research on periodic fluctuations: we regard the data indicators at each time point as sample points, classify months with the same or similar economic situation based on the distance between sample points, and derive the periodic changes from the classification results.
Overview In order to research economic periodicity, we select the United States, with its rich economic statistics, for periodicity identification, aiming to take history as a mirror, find the periodicity of economic development, and thereby predict future development and inform relevant policies. We obtain 22 macroeconomic indicators from official US statistics, with a time span from January 1990 to May 2019 totaling 352 months, to analyze the periodicity of single indicators and of the overall situation. In this paper, we mainly conduct the following research: (1) Adaptive power spectral analysis: after initialization operations such as first-order difference, second-order difference, logarithmic, and exponential preprocessing of the macroeconomic data, we conduct power spectral analysis (PSA) on each transformed series, select the best for visualization, and classify the indicators by whether they show periodicity; (2) with the selected periodic indicators, we conduct principal component analysis and display the correlation coefficients in a heatmap; after calculating the eigenvalues and eigenvectors, we select the first five principal components for the weighted calculation of the Euclidean distance, so as to measure the similarity between months in terms of economic situation; (3) we use the PCA-weighted distance for K-MEANS clustering analysis and visualize the results in 3D Euclidean space, so as to demonstrate the existence of periodicity. Adaptive Periodicity Identification Based on PSA Selection of Macroeconomic Indicators The data set used in this paper is from the United States, whose official economic statistics are open, accurate, and timely; the official economic indicators of the United States therefore serve as a good data sample for researching the economic cycle. The main macroeconomic indicators selected in this paper are shown in Table I (the 22 selected indicators). As the most important indicator of national economic development, GDP highlights the economic prosperity of a country. Similarly, the Purchasing Managers' Index (PMI) reflects the degree of economic prosperity by capturing the trend of economic change. As the two most important labor-market indicators of the Bureau of Labor Statistics, the unemployment rate and non-farm payroll employment influence the trend of all markets more than any other indicators. The CPI is widely used all over the world to analyze market prices; it is often used by governments as the basis for price and wage policies, and together with personal income and expenditure it measures inflation. The US dollar, as the most special currency in the world, gives US economic indicators a huge impact on the currency world. In order to reflect the position of the US dollar in the international foreign exchange market comprehensively, we introduce the US Dollar Index®, which can indirectly reflect changes in the export competitiveness and import costs of the United States. In addition, we also adopt the FFR, the NDY, and other indicators, while the import and export price indices and the consumer confidence index likewise provide a powerful basis for policy-making. In light of the different time lengths of the various indicators, we select the 352 months from January 1990 to May 2019 for research and fill in missing data with mean imputation.
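As a concrete illustration of this preprocessing, the sketch below implements mean imputation and the six candidate transformations named above (first/second difference, single/double logarithm, single/double exponential). It is a minimal sketch under stated assumptions, not the authors' code: the exact definitions of the double logarithm and double exponential, the rescaling before exponentiation, and all function names are our assumptions.

```python
import numpy as np
import pandas as pd

def impute_mean(series: pd.Series) -> pd.Series:
    """Fill missing monthly values with the column mean, as described above."""
    return series.fillna(series.mean())

def candidate_transforms(x: np.ndarray) -> dict:
    """Return the candidate preprocessings applied before PSA.

    The transform definitions are plausible readings of the paper's wording,
    not confirmed formulas.
    """
    x = np.asarray(x, dtype=float)
    out = {"diff1": np.diff(x, n=1), "diff2": np.diff(x, n=2)}
    if np.all(x > 0):
        log1 = np.log(x)
        out["log1"] = log1
        if np.all(log1 > 0):
            out["log2"] = np.log(log1)          # logarithm applied twice
    z = (x - x.mean()) / x.std()                # rescale to avoid overflow
    exp1 = np.exp(z)
    out["exp1"] = exp1
    out["exp2"] = np.exp(exp1 - exp1.mean())    # exponential applied twice
    return out
```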
Based on these data, we conduct an adaptive periodicity test based on spectral analysis and construct a clustering analysis model based on the principal-component-weighted distance to analyze economic periodicity, compare the historical data with the current economic development of the United States to draw conclusions, summarize the historical experience, and put forward reasonable suggestions for economic development. Mathematical Principle of Power Spectral Analysis Power spectral analysis (PSA) is a signal processing method based on frequency-domain analysis. Generally speaking, signals, and economic indicators in particular, are time series. For a long time, scholars analyzed time series in the time domain. Frequency-domain analysis, however, displays the original time signals on a frequency spectrogram: the original signals are decomposed into a number of sine-wave components with different periods and amplitudes, revealing regularities that cannot be discovered in the time domain. Through the efforts of scientists, the content of PSA continues to expand and its methods update rapidly, making it a technique with considerable development prospects. The power spectral density $S(\omega)$ is a non-negative real function of the frequency $\omega$; when $S(\omega)$ is a rational function of $\omega$, it can be expressed as a ratio of two polynomials with constant coefficients, where the leading coefficient is positive and the denominator has no real roots. Power spectrum of a stochastic sequence: for a stochastic sequence $X(t)$ with correlation function $R(\tau)$, the power spectral density $S(\omega)$ can be expressed by the Wiener-Khinchin relation $S(\omega) = \sum_{\tau=-\infty}^{\infty} R(\tau)\, e^{-i\omega\tau}$. Conclusions of the Adaptive PSA Model From the 22 indicators collected, we hope to select those with periodic characteristics through spectral analysis. Before the principal-component weighted-distance model, we introduce difference, logarithmic, and exponential methods for preprocessing the original data. For each indicator, we first apply the first-order difference, second-order difference, single logarithmic transformation, double logarithmic transformation, single exponential transformation, and double exponential transformation; we then carry out PSA on each result, and finally keep whichever of these methods shows the best periodicity for the periodic evaluation. The criteria for the periodic evaluation are as follows: if the period corresponding to the maximum peak of the power spectral density curve is no longer than half of the total investigation time and longer than one year, the data are considered periodic, and the period is that corresponding to the maximum peak; otherwise, the original indicator is considered non-periodic.
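A minimal sketch of this acceptance criterion using scipy.signal.periodogram follows; the sampling rate of one sample per month, the 12-to-176-month window (one year to half of the 352 months), and the peak-sharpness rule used to pick the best transform are assumptions consistent with the stated criteria, not the authors' actual implementation.

```python
import numpy as np
from scipy.signal import periodogram

N_MONTHS = 352  # January 1990 through May 2019

def dominant_period(x: np.ndarray) -> float:
    """Period (in months) of the largest peak of the power spectral density."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                        # remove the zero-frequency component
    freqs, psd = periodogram(x, fs=1.0)     # fs = 1 sample per month
    freqs, psd = freqs[1:], psd[1:]         # drop the DC bin
    return 1.0 / freqs[np.argmax(psd)]

def is_periodic(x: np.ndarray) -> bool:
    """Paper's rule: peak period longer than 12 months, at most 176 months."""
    return 12.0 < dominant_period(x) <= N_MONTHS / 2

def best_transform(transforms: dict) -> str:
    """Assumed 'adaptive' step: keep the transform with the sharpest PSD peak."""
    def sharpness(x):
        x = np.asarray(x, dtype=float)
        _, psd = periodogram(x - x.mean(), fs=1.0)
        return psd.max() / (psd.mean() + 1e-12)
    return max(transforms, key=lambda k: sharpness(transforms[k]))
```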
We first carry out PSA on the 22 collected indicators after pretreatment and obtain the following results. Figure 1 shows the 11 indicators selected as strongly periodic, with their corresponding power spectral curves, based on the above methods; these include the unemployment rate (with a period of about 117 months), among others. From the figure, we can see through the periodic analysis of single indicators that the periods of different indicators may differ. Within the research length of 352 months, we can conclude that some indicators contain at least two complete cycles, which demonstrates the possibility of periodicity from the perspective of actual values. Figure 2 shows the indicators that we deem non-periodic; the visual display of these 11 indicators shows many monotonic growth trends and disorderly fluctuations. After differencing, taking logarithms of, and exponentiating these 11 indicators, we still find no obvious periodicity, so under the criteria of this paper we may conclude that they are not periodic. After the PSA of single indicators, we obtain their respective periodicities. According to the spectral analysis method, every waveform can be regarded as a sum of waves with different frequencies and amplitudes; since we consider the economic periodicity of multiple indicators, the economy can likewise be regarded, in essence, as the superposition of multiple indicator waveforms. A monotonic indicator can therefore be regarded as the general trend of the economy, which would have a great impact on the analysis of multi-indicator economic periodicity, so it is not considered in the subsequent calculation. Weighted Distance Based on Principal Component Analysis Principal Component Analysis (PCA) is a multivariate analysis method for non-stochastic variables introduced by Pearson [8] in 1901; in 1933, Hotelling [9] extended the method to stochastic vectors. Using PCA for periodic analysis first reduces the dimension: only a few virtual principal components are selected to express as much information as possible. With the correlation between variables eliminated, we recombine the original variables into new virtual variables and use these for the weighted calculation of the distance between sample points. The advantage of this multivariate approach is that it avoids the repeated counting of highly correlated indicators, and using the PCA eigenvalues as the weights in the distance calculation also has practical significance. As the indicators under research, such as GDP, the CPI, and the unemployment rate, are closely correlated with one another, using expert-defined weights to calculate the distance could suffer from the superposition of overlapping indicators. Therefore, we adopt the principal-component-weighted distance to discuss the similarity of samples. PCA Algorithm and Realization Next, we conduct Principal Component Analysis on the periodic indicators obtained by spectral analysis. The specific realization process is as follows (a sketch of steps (1) and (2) follows below): (1) Standardized processing of the original data: as each indicator has a specific practical meaning, the differing dimensions would seriously affect the data analysis, so the data must be preprocessed first to eliminate this impact. Commonly used preprocessing methods in scientific and technical work include uniformization, standardization, and normalization; here we use the Z-score method, computed as $z_{ij} = (x_{ij} - \bar{x}_j) / s_j$, where $\bar{x}_j$ and $s_j$ are the mean and standard deviation of indicator $j$. (2) Calculation of the correlation coefficient matrix: after data standardization, we conduct a correlation analysis of the indicators to obtain the correlation coefficients between them; in this paper, the Pearson correlation coefficient is used.
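A minimal sketch of steps (1) and (2), assuming the 11 periodic indicators sit in a pandas DataFrame with one column per indicator; the variable names and the seaborn heatmap call are illustrative choices, not the authors' code.

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

def zscore(df: pd.DataFrame) -> pd.DataFrame:
    """Step (1): column-wise Z-score standardization, z = (x - mean) / std."""
    return (df - df.mean()) / df.std(ddof=0)

def correlation_heatmap(df: pd.DataFrame) -> pd.DataFrame:
    """Step (2): Pearson correlation matrix, shown as a heatmap (cf. Fig. 3)."""
    corr = df.corr(method="pearson")
    sns.heatmap(corr, cmap="coolwarm", vmin=-1, vmax=1)
    plt.title("Pearson correlation of the 11 periodic indicators")
    plt.tight_layout()
    plt.show()
    return corr
```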
From Figure 3, it is concluded that there are strong correlations between the nominal US Dollar Index®, non-farm payroll employment, the CPI, and total personal expenditure; between the import price index and the export price index; and between the national debt yield and the federal funds rate. On the one hand, an increase in non-farm payroll employment directly reflects an improvement in employment; on the other hand, it also reflects potential inflationary pressure. As the labor force plays a vital role in the US economy, the US government adopts policies to keep the non-farm payroll employment data within a reasonable range of fluctuation. Generally speaking, excess money in circulation leads to inflation; to curb its adverse effects, the Federal Reserve will usually raise interest rates. The ups and downs of the US dollar are generally positively correlated with interest rates, and non-farm payroll employment and total personal expenditure also reflect the economic development of the United States, which directly affects the nominal US Dollar Index®; hence the strong correlation between non-farm payroll employment and the nominal US Dollar Index®. Meanwhile, the CPI is also closely correlated with non-farm payroll employment and the nominal US Dollar Index®. In sharp contrast to the Chinese habit of saving, Americans are more inclined to spend the money they earn quickly, maintaining a higher standard of living for the whole family, and they pay more attention to spiritual consumption. In fact, a change in the CPI directly reflects a change in residents' purchasing power, that is, a change in total personal expenditure. Therefore, the CPI, non-farm payroll employment, the nominal US Dollar Index®, and total personal expenditure are strongly correlated with one another. The FFR and the long-term interest rate are controlled by the Federal Reserve and influenced by long-term national debt traders, respectively, while those traders themselves also make trading decisions according to Federal Reserve policy, which well explains the strong correlation mentioned before. As far as international commodity trade is concerned, the rises and falls of import and export prices are internally constrained by the technological content of production and by labor efficiency, and externally affected by changes in the supply and demand of commodities on the international market and by competition from other countries in the same industry. Import and export prices are affected by the same factors, hence the strong correlation between them. (3) Calculation of eigenvalues and eigenvectors: Definition 3.2 (Eigenvalue and Eigenvector). Assuming that $A$ is an $n$-order matrix, if there exist a number $\lambda$ and a nonzero vector $\vec{x}$ such that $A\vec{x} = \lambda\vec{x}$, then $\lambda$ is an eigenvalue of the matrix $A$, and $\vec{x}$ is an eigenvector corresponding to $\lambda$. From the definition, $(A - \lambda I)\vec{x} = 0$, and the necessary and sufficient condition for this equation to have a nontrivial solution is $|A - \lambda I| = 0$. This is a polynomial in $\lambda$, usually called the characteristic polynomial; solving it gives the values of $\lambda$, and substituting each back into the original equation gives the corresponding eigenvector. The specific calculation results are shown in Table II (eigenvalues and contributions) and Table III. Each principal component is the linear combination $F = \tilde{a}^{\mathrm{T}} X$ (3.3), where $X$ refers to the data after standardization and $\tilde{a}$ refers to an eigenvector obtained from PCA.
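A sketch of step (3), assuming corr is the 11x11 Pearson correlation matrix from the previous step; numpy.linalg.eigh applies because a correlation matrix is symmetric, and the function names are illustrative.

```python
import numpy as np

def pca_from_correlation(corr: np.ndarray):
    """Step (3): eigenvalues and eigenvectors of the correlation matrix,
    sorted from largest to smallest eigenvalue."""
    eigvals, eigvecs = np.linalg.eigh(corr)   # eigh handles symmetric matrices
    order = np.argsort(eigvals)[::-1]         # descending eigenvalue order
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    contrib = eigvals / eigvals.sum()         # contribution rate of each PC
    cumulative = np.cumsum(contrib)           # cumulative contribution rate
    return eigvals, eigvecs, contrib, cumulative

def project(z_scores: np.ndarray, eigvecs: np.ndarray, k: int = 5) -> np.ndarray:
    """Principal-component scores F for the first k components (Eq. 3.3)."""
    return z_scores @ eigvecs[:, :k]
```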
(4) Calculation of the contribution rate and cumulative contribution rate of each principal component: the weight of each virtual indicator can be obtained from the weight corresponding to its eigenvalue. The specific calculation method is as follows: the contribution rate of the $i$-th principal component is $\eta_i = \lambda_i / \sum_{j=1}^{p} \lambda_j$, and the cumulative contribution rate of the first $k$ principal components is $\sum_{i=1}^{k} \eta_i$. (5) Selection of principal components according to the cumulative contribution rate: the number of principal components to be selected usually depends on the cumulative contribution rate, which is usually required to reach above 80% with the number of principal components limited to at most 6. According to the contribution rate table in this paper, the cumulative contribution rate of the first four principal components is 90.899%, and that of the first five reaches 95.550%. From the eigenvector of each principal component, we can see that the first principal component mainly carries the information of the 2nd, 3rd, 4th, 9th, 10th, and 11th indicators; the 2nd principal component mainly carries the information of indicators 6, 7, and 8; the 3rd principal component carries the information of indicator 1; and the 4th and 5th principal components mainly carry the information of indicators 5, 8, and 7. Therefore, we select the first five principal components as the indicators for calculating the weighted distance between months; the calculation of the virtual indicator (principal component) values was given in Equation 3.3. (6) Calculation of the Euclidean distance: traditional point-to-point distance measures include the Euclidean distance, the absolute-value distance (the sum of the component-wise distances), the Chebyshev distance, and the Mahalanobis distance, among others. The traditional Euclidean distance is $d(x, y) = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + \cdots + (x_n - y_n)^2}$ (3.6), where $n$ equals the dimension of the data. The traditional Euclidean distance treats each indicator as equally important, but in practice the indicators have different impacts on the result, so a weighted Euclidean distance becomes particularly important: $d_w(x, y) = \sqrt{\sum_{i=1}^{5} \eta_i (x_i - y_i)^2}$ (3.7), where $\eta_i$ refers to the contribution rate corresponding to the $i$-th of the first five eigenvalues, and $x_i$ and $y_i$ refer to the coordinates of the two sample points on the first five principal components. 1) Conclusions of the PCA Weighted Distance The PCA-based weighted distance yields the distance between every pair of months from January 1990 to May 2019, and the corresponding heatmap is obtained by visualization. As the visual result for the raw monthly data is not clear enough (economic periodicity is a research problem on a longer time scale, which monthly data struggle to reflect), we smooth the monthly data and finally obtain the visual results after 36-month smoothing, as shown in the figure below and sketched in code after this paragraph.
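A sketch of step (6) and the smoothing, assuming F is the 352x5 matrix of principal-component scores and contrib5 holds the five contribution rates; the 36-month centered moving average is one plausible reading of the paper's "36 months' smoothing", not a confirmed choice.

```python
import numpy as np
import pandas as pd

def weighted_distance_matrix(F: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Pairwise weighted Euclidean distance (Eq. 3.7) between all months.

    Scaling component i by sqrt(weights[i]) turns the weighted distance into
    an ordinary Euclidean distance in the rescaled space.
    """
    G = F * np.sqrt(weights)
    diff = G[:, None, :] - G[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def smooth(F: np.ndarray, window: int = 36) -> np.ndarray:
    """Centered moving average of the PC scores over `window` months."""
    return pd.DataFrame(F).rolling(window, center=True, min_periods=1).mean().values

# Heatmap of month-to-month distances (cf. Figure 4), e.g.:
#   import matplotlib.pyplot as plt
#   D = weighted_distance_matrix(smooth(F, 36), contrib5)
#   plt.imshow(D, cmap="RdYlBu"); plt.colorbar(); plt.show()
```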
According to Figure 4, we can draw further conclusions (given the symmetry of the figure, we discuss only the lower triangular area): (1) First of all, the red diagonal in the figure represents the distance between each month's data and itself. In the direction perpendicular to this line, the blue area extends outward, indicating that the economic situation is similar or even identical for several months, and sometimes much longer, around a given month. We define this as a "steady period," meaning that there is no obvious change in the economic situation within that span of time; where the blue area stretching from the red diagonal becomes wider, the steady period of the economy is longer. (2) There is another red line approximately parallel to the red diagonal, along which we also find a dark-colored zone. This indicates periodic change in economic development, because along that line, when the horizontal coordinate advances by n months, the vertical coordinate advances correspondingly, indicating that months with similar economic development appear at regular intervals. (3) Area 1 indicates that the economic situation of the first two years of the 1990s is relatively close to that from 2007 to early 2011. Historically, in the early 1990s the United States had just recovered from Black Monday of 1987; the economic crisis that broke out in 2007 coincides with this, which indirectly supports both the periodicity of economic development and the soundness of the PCA-based weighted-distance algorithm. (4) The main purpose of this paper is to identify the period of history to which the current economic environment is closest; Area 2 in the figure indicates that the economic situation from the second half of 2016 to May 2019 is similar to that from the end of 2002 to the middle of 2006. In 2002, the United States had just recovered from the economic crisis of 1998; by 2006, the US economy was in a stable stage, but in 2007 another serious economic crisis broke out. The upper part of Area 2 and the yellow area at the lower right corner of the figure indicate that the economic environment of the earlier years was not yet severe enough to constitute an economic crisis, although damage followed afterwards. Combined with the actual situation, the current economic situation is, from a historical point of view, likely on the verge of another periodic economic crisis. (5) We can see from Area 3 that, on a larger time scale, there are obvious similarities between the economic situation of the United States from early 1992 to late 2000 and that from 2009 to 2018. After the recession of 1990-1991, the US economy began to pick up in 1992, entered a stage of prosperity in the middle and late 1990s, and again expanded from 2009 to 2018. The dark blue part of Area 3 is larger than in other areas, indicating that US economic development shows obvious similarity across these two long periods. The dark blue part is surrounded by light blue parts, indicating that economic development experienced some fluctuations in these two periods, such as the twists and turns of the US economy in the first half of the 1990s. Moreover, these three similar periods of economic development strongly support the existence of the economic cycle. 2) Smoothing Analysis on Different Time Scales In this paper, we conduct the periodicity research on macroeconomic data. According to the relevant literature, macroeconomic periodicity is characterized by a long time scale; for the monthly data used in this paper, the PCA results are therefore at too high a resolution, which is actually unhelpful. In order to explore the periodicity, we need to smooth the original data on different time scales.
The so-called smoothing is a moving average of the data over a chosen window size, which reduces the noise in the original data. As shown in Figure 3.3, a straight line in the figure means that the economic situation of the corresponding month differs greatly from that of all other months; apart from this, the areas are similar in color. We smooth the PCA data over 12, 24, and 36 months, respectively, and compare the results with the unsmoothed image to obtain Figure 3.3. When we smooth the original data on the time scales of 12, 24, and 36 months, we find that the diagonal from the lower-left corner to the upper-right corner becomes gradually clearer, indicating that the smoothing does reduce image noise. In addition, by smoothing the original data we also obtain the regional characteristics of economic similarity, which indirectly reflects that macroeconomic indicators are characterized by a long cycle. Mathematical Principle K-MEANS is a classical clustering method, with which data samples can be divided into several categories according to a given similarity measure; "K" in K-MEANS denotes the number of categories after clustering. The K-MEANS method is especially suited to the case where the variables take continuous values. Simply speaking, K-MEANS divides N samples containing P variables into K categories, minimizing the sum of squared distances from the points within each category to the category center. The specific realization steps are as follows (a sketch follows after this description): 1. Divide the samples into K categories, each sample being randomly assigned to one of the K categories. 2. Calculate the center of each category; then calculate the distance of each sample to each center, select the closest center, and assign the sample to that category. 3. Repeat Step 2 until no sample changes category, at which point the algorithm ends. In the K-MEANS method, the distance measure is very important; in this paper, we use the weighted distance defined in Equation 3.7. Given the large number of samples processed in the K-MEANS method and the randomness of the initial center generation, the method cannot guarantee the global optimal solution; what we obtain is usually a local optimum, in which no movement of a center can further reduce the sum of squared distances from the points within a category to its center, so the clustering reaches a stable state. In practice, we usually run K-MEANS multiple times with different starting points and take the stable state with the best final result as the final clustering.
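A minimal sketch of this clustering step. Since the weighted distance of Eq. 3.7 equals the ordinary Euclidean distance after scaling each principal component by the square root of its contribution rate, standard k-means can be reused on the rescaled scores; KMeans(n_init=10) restarts from several random centers, matching the practice of running K-MEANS multiple times. The variable names are illustrative, not the authors'.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_months(F_smoothed: np.ndarray, contrib5: np.ndarray, k: int = 3):
    """Cluster the 352 months into k categories under the Eq. 3.7 distance.

    Scaling component i by sqrt(contrib5[i]) makes Euclidean distance in the
    rescaled space identical to the weighted distance, so plain k-means
    applies directly.
    """
    G = F_smoothed * np.sqrt(contrib5)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(G)
    return km.labels_, km.cluster_centers_

# Usage: labels, _ = cluster_months(smooth(F, 36), contrib5, k=3)
# A 3D scatter of the first three PCs colored by `labels` reproduces the
# red-yellow-blue picture described in the text.
```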
3) Conclusions of the K-MEANS Clustering Model After smoothing the PCA results over 36 months, we conduct K-MEANS clustering analysis and visualize the result in three-dimensional coordinates. The three dimensions in the figure correspond to the first three principal components, and red-yellow-blue correspond to the three human-defined categories; in this paper, we use this model only to discuss periodicity and draw no distinction among the three categories. From the figure, we can see that the monthly data have gone through two complete cycles of red-yellow-blue and red-yellow-blue, which is consistent with the conclusion of the previous chapter. In addition, the two curve sections of the same color are relatively similar in length, which indicates that the stable periods of the economic situation are also basically the same; that is, within each stretch of the same or similar economic situation, the length of time for which such stability is maintained is the same, which further supports the existence of a period of about 176 months, in line with the medium-to-long cycle. 4) Smoothing Analysis on Different Time Scales Similarly, we discuss the visualized results after smoothing on different time scales once the K-MEANS clustering results are obtained. First of all, the results on the original monthly data (without any smoothing) show that when three categories are used, the monthly data can indeed be divided into three categories by distance, but the data points are relatively chaotic. We can learn that the current month is similar to a certain month in history and that the current economic environment resembles a certain historical stage, which essentially improves the resolution of the clustering analysis. When the data are smoothed over 12 months, we can see that the economic situation tends to be stable over long stretches and undergoes periodic changes among the three categories, but due to the long time scale of the economic cycle, the impact of noise is still visible. As the smoothing scale increases, when the data are smoothed over 36 months, we can see that the economic situation has undergone two complete red-yellow-blue cycles since January 1990, which is consistent with the conclusion of the weighted-distance algorithm based on PCA. In this model, data smoothing does greatly eliminate the impact of noise. Main Conclusions In this paper, we have conducted PSA, principal-component-weighted distance calculation, and K-MEANS clustering analysis on 22 economic indicators, and we find that the object of analysis, the US economy, shows obvious periodic characteristics over the 352 months from January 1990 to May 2019. (1) In the adaptive period recognition model based on spectral analysis, we first use difference, logarithmic, and exponential methods to preprocess the original data of the 22 collected indicators, then conduct PSA, and select the best result as the periodicity analysis of each single indicator, which is what makes the algorithm adaptive. After visualizing the spectral analysis results, we obtain 11 indicators with periodicity; the indicators without periodicity are removed to avoid interfering with the analysis of economic periodicity. (2) We use the PCA-based weighted distance to obtain the distances between the monthly data from January 1990 to May 2019 and the corresponding heatmap. As economic periodicity is a research topic on a longer time scale, we smooth the monthly data over 36 months. By analyzing the visualized results, we obtain the regional characteristics that reflect economic similarity and analyze the historical economic situation of representative areas. (3) After obtaining the results based on the principal-component-weighted distance, we further conduct K-MEANS clustering analysis on the distances obtained and visualize the result in three-dimensional coordinates, from which we can see that these 352 months have undergone two complete cycles, consistent with the conclusion of the PCA.
The identified period of about 176 months is consistent with the medium-to-long cycle theory. Prospect In this paper, we obtain results on macroeconomic periodicity based on multiple models, and we find that our models have some objective shortcomings in identifying where the current economic environment stands. (1) The data length selected in this paper is 352 months, which is rather short for researching an economic cycle reflected in the long-term economic situation. Next, we will use data covering as long a period as possible to research economic periodicity. (2) In the spectral analysis of macroeconomic indicators, high- and low-frequency noise affects the results, which also interferes with the PSA results. We should therefore conduct high-pass and low-pass filtering on the original data to improve the resolution of the results. (3) Analyzing the original indicators yields only 11 indicators usable for PCA, and this small data dimension may inevitably bias the conclusions on economic periodicity to some degree. When the data dimension and coverage are further increased, the models will be more representative. (4) In the last part of this paper, we adopt a simple method, K-MEANS, to conduct clustering analysis on the selected principal components, which is essentially a supplement to the PCA. In follow-up work, we may use a higher-order algorithm, such as a neural network, to identify the modes of the economic situation. Acknowledgment History never ceases, and its traces last long. For a long time, investors and market analysts have taken research on past markets as an indicator for measuring the current situation and judging prospects. In today's fast economic development, the economic situation is complex and changeable, but there is always an "invisible hand" behind the economic market pushing it forward. We therefore suppose that the current economic situation may coincide closely with the economic situation of a certain historical period, and that there may be more than one such period; thus, we assume that there is a certain periodicity in the development of the economic situation. When searching the Internet for information for our school research topic, we found an article on the "TWO SIGMA" website that weights some representative indicators to discuss and analyze the historical situation of the US financial market, which reminded us that we could use our self-taught machine learning and mathematical modeling knowledge to research economic periodicity. In writing this paper, we worked together on different parts of the project. Specifically, Yan Jiayi was responsible for the spectral analysis research and the writing and proofreading of this paper; Pu Qian was responsible for the PCA computations and the data visualization; Liu Juanfei was responsible for the K-MEANS modeling as well as data collection and processing. In the three months of research on economic periodicity, we benefited greatly: we learned a great deal of economics, felt a sense of achievement in mathematical modeling, and established a deep friendship. Whether in the classroom or in after-school research, we were willing to be each other's teachers and make progress together. We want to extend special gratitude to our supervisor, Mr. Ma Lianhan. As this paper was mainly written after class, Mr.
Ma sacrificed his valuable rest time to answer our questions and guided us in using Python for data visualization. When we were stuck at a bottleneck, kept delaying the modeling tasks, and even thought about giving up, Mr. Ma talked with us during evening self-study and encouraged us to move on. Thanks to Mr. Ma's diligent cultivation, we can thrive and keep going on the road of modeling. We solemnly declare that all the content of this paper was completed by the three of us independently. Thanks to Mr. Ma Lianhan for his
8,471.4
2020-01-01T00:00:00.000
[ "Economics", "Computer Science" ]
Effect of Transition Metal Oxide Additives on the Properties of Mixed-Alkali Glass for Electric Insulating Coatings on Aluminum In order to reduce the cost of the thick-film technology of microcircuits and heating elements, as well as to expand its areas of use, it became necessary to expand the range of materials that, along with ceramics and steel, can be used as substrates for these products. One of these promising materials is aluminum. Enamels for aluminum must be roasted at relatively low temperatures and therefore contain an increased amount of alkali metal oxides, which causes low values of chemical resistance and volume electrical resistivity. Electrically insulating coatings on metals are subjected to repeated heating and cooling during their production and use, which creates thermal stresses in the coating and leads to chipping. Therefore, in order to improve the water resistance and adhesion strength of electrical insulation coatings on aluminum, CuO, ZrO₂, and Bi₂O₃ additives were examined. An increase in water resistance with a simultaneous increase in the adhesion strength of the enamel coating to the aluminum substrate was found with the addition of not more than 3 pts. wt. of copper(II) oxide, up to 1 pts. wt. of zirconium(IV) oxide, and up to 4 pts. wt. of bismuth(III) oxide per 100 pts. wt. of glass. Introduction The basis of modern technical progress is the development of chemical compositions and technological parameters for obtaining new glass enamel coatings for various metal substrates [1]. Recently, electric film heaters have been used increasingly in instrument engineering and in the production of household electrical heating equipment. Literature review. The need for electrical insulation coatings on aluminum in the manufacture of low-power film heaters arises from the technical and economic feasibility of replacing ceramic substrates with metal ones. In this regard, additional performance requirements are imposed on enamel coatings, beyond their anti-corrosion and decorative properties; these include increased heat resistance, chemical resistance, electrical insulation, and other properties [2]. As is known, aluminum is characterized by a high temperature coefficient of linear expansion (TCLE) (230·10⁻⁷ K⁻¹) and a low melting temperature (~650 °C). Known glass frit compositions for coatings on aluminum, because they must be formed by roasting at 550-580 °C, contain a significant amount of alkali metal oxides and therefore cannot be used as electrical insulators. To improve the electrical properties of enamels for aluminum, lead oxide is added to their composition while the content of alkali oxides is reduced [1,3-5]. However, this oxide tends to interact with the components of conductive pastes to form metallic lead, which reduces the electrical strength of the coating. Strong adhesion of the enamel coating to the metal substrate is an important component of the durability of enameled products [1,3,6-11]. It is known that when enameling aluminum, the adhesion strength of the coating to the substrate depends on the ability of the oxide film to dissolve in the enamel melt and on the thickness of the transition layer [3]. I.
Object Therefore, the object of the work was to develop the oxide composition of an electrical-insulating glass for obtaining an enamel coating on aluminum with increased water resistance, and to investigate the effect of transition metal additives on the formation of the contact layer in the aluminum-enamel system. II. Methods For the synthesis of the glass under research, quartz sand and soda of technical purity; lithium, potassium, strontium, and barium carbonates, titanium dioxide, and trisodium phosphate of pure grade; boric acid and oxides of zinc, copper(II), and zirconium(IV) of analytical grade; and bismuth(III) oxide of extra-pure grade were used. Glass melting was carried out in fireclay crucibles in an electric furnace with carborundum heaters, with an exposure time of 30-40 minutes at a temperature of 1150-1180 °C. The readiness of the glass was determined by the visual thread test. The melts of the glass under research were granulated by pouring into cold water. To determine the physicochemical properties of the glass, samples were made from bubble-free glass melt by casting into steel molds. Experimental studies of the properties of the glass were performed using standard methods and measuring instruments generally recognized and widely used in glass chemistry and technology: the thermal coefficient of linear expansion (TCLE) and melting temperature (MT) were determined on a DKV-5A quartz dilatometer in accordance with GOST 10978-83; electrical resistivity was determined using an E6-13A teraohmmeter; the chemical resistance of the glass was determined by the grain method in accordance with GOST 10134.1-82; the structure of the contact layer between aluminum and enamel was examined on a REMMA-102-02 scanning electron microscope. Insulating coatings on the metal were obtained by slurry application and roasting. The coating was applied to aluminum by pouring. Preparation of the aluminum products before coating consisted of treatment in an alkaline solution of a chromic acid salt according to the recipe given in [3]. The slurry for coatings on aluminum was milled in porcelain drums with a uralite grinding charge in an isopropyl alcohol environment until it passed through sieve No. 0045. The roasting of the electrical insulation coatings was carried out in a muffle furnace at 580 °C for 15-20 minutes. III. Results Calculations and experimental studies established [12] that the basis for producing electrical insulating coatings on aluminum is a mixed-alkali borosilicate glass of the following composition, mol. %: SiO₂ + TiO₂ - 54; BaO + SrO + ZnO - 5; B₂O₃ + P₂O₅ - 5; Li₂O + Na₂O + K₂O - 36. This glass is characterized by the properties presented in Table 1 (physical and chemical properties of the glass for the production of coatings on aluminum, including water resistance expressed as the amount of 0.01 N HCl used for titration, in cm³). In order to increase the chemical resistance and reduce the surface tension, CuO and ZrO₂ additives in the amount of 0.5-3.0 pts. wt. per 100 pts. wt. of the mixture, and Bi₂O₃ additives in the amount of 2-10 pts. wt. per 100 pts. wt. of the mixture, were added to the glass composition. Figures 1-4 show the dependences of the water resistance (W), electrical resistivity (lg ρ₁₅₀), TCLE, and MT of the glass on the content of these oxide additives in its composition. It has been established (Fig.
1, a) that the addition of zirconium dioxide and copper oxide to this glass composition increased the water resistance, characterized by a decrease in the amount of 0.01 N HCl solution used for titrating the aqueous extract from 3.55 to 1.10 ml (hydrolytic class 4/98). It should be noted that the CuO additives increased the electrical resistivity from 10^9.8 to 10^10.5 Ω·cm and did not affect the TCLE of the enamel (165·10⁻⁷ K⁻¹), while the addition of ZrO₂, on the contrary, slightly decreased the TCLE to 155·10⁻⁷ K⁻¹ without changing the electrical resistivity (Fig. 2-3, a). The addition of bismuth oxide to this glass composition has an ambiguous effect on the chemical resistance, possibly due to a change in the bismuth valence (Fig. 1, b). Its presence has practically no effect on the TCLE, MT, or electrical resistivity, but it significantly improves the spreading of the enamel on aluminum (Fig. 2-4, b). IV. Discussion It has been established that the addition of zirconium dioxide and copper oxide to the glass under research increased its water resistance: at contents of up to 3 pts. wt. per 100 pts. wt. of the mixture, the water resistance of the glass frit increased by factors of almost 1.5 and 3.0, respectively. At the same time, increasing the ZrO₂ additive in the enamel leads to a slight decrease in the TCLE to 155·10⁻⁷ K⁻¹ and an increase in the melting temperature to 455 °C, so the content of this component should not exceed 1 pts. wt. per 100 pts. wt. of the mixture. The recommended content of Bi₂O₃ should not exceed 4 pts. wt. per 100 pts. wt. of the mixture because of the reduction processes that occur during coating formation. According to the electron microscopy results (Fig. 5), the additives contributed to a more uniform distribution of aluminum oxide throughout the enamel thickness, which expanded the contact zone and, accordingly, increased the adhesion strength of the enamel coating to the aluminum substrate. Bismuth oxide also acts as a surface-active component, which agrees well with the known ideas about the structure of the glass [14]. By adsorbing at the aluminum-enamel interface, it improves the flowability of the enamel over the metal surface, which is confirmed by the chemical analysis data (Fig. 6) and by visual assessment of the coating. In Fig. 5 (b), the area with the increased content of bismuth oxide is marked with a white circle.
2,170
2019-01-01T00:00:00.000
[ "Materials Science" ]
Higher Education Students and Covid-19: Challenges and Strategies in Facing Online Learning The emergence of Covid-19 has had a significant influence on the world of education. Even though the emergence of Covid-19 has accelerated the integration of technology into learning, negative impacts on classroom learning remain. This study aims to explore students' experience of online learning during the Covid-19 pandemic; to describe the negative impacts and obstacles that arise in online learning; and to describe students' strategies in online learning. This research is a qualitative case study. Data were collected through open questionnaires and interviews with 20 student participants. Data analysis followed the Bogdan and Biklen model through reduction, searching for sub-themes, and seeking relationships between sub-themes to reach conclusions. The results of this study show that learning during the Covid-19 pandemic has not been fully optimal. In addition, students experience physical and mental impacts during online learning. Furthermore, students face barriers related to signals, the learning environment, and online learning activities with lecturers. However, students have varied learning strategies to minimize the obstacles and negative impacts of online learning. Introduction Online learning is remote learning accessed with the help of electronic devices such as smartphones, laptops, tablets, and computers, requiring Internet connections (Gonzalez & St.Louis, 2018). The use of online learning systems during the Covid-19 pandemic is an appropriate response to the crisis (Murphy, 2020) and has become an important key in the educational world (Herliandry et al., 2020). But online learning is also a new challenge for Indonesian educators, learners, and society, as the change from the face-to-face system to online learning occurred without a transition. The rapid move from face-to-face to distance learning has caused confusion for both academics and learners (Czerniewicz et al., 2020). The changes occurring in every sphere of life, including education, certainly create anxiety due to the uncertainties that arise in the process, as Fullan noted 1.5 decades ago in his book (Fullan, 2005, p. 31). Nevertheless, various ways to minimize the concerns, barriers, and shortcomings that arise are still sought in every process of change, including the current Covid-19 pandemic. Educators certainly have to think about how to connect pedagogical skills with technology (Nursyifa et al., 2020; Son, 2018). Pedagogical skills are futile when problems of technology access in learning remain (Cakrawati, 2017). Internet advancement (Alqurashi, 2019) and skill in using technology (Jan, 2015) are essential to the optimization of online learning. Educators certainly require more mature preparation, investing energy, skills, and time to design and implement online learning, as online learning requires higher investment than face-to-face learning (Green, 2016). Thus, educators, learners, and the platforms used in online learning (Zhu et al., 2020) constitute three important elements whose proportions must balance with one another. When one of these essential elements is absent or disproportionate, online learning is implemented less than optimally.
The positive side of learning during the Covid-19 pandemic is that the educational community has become accustomed to technology-based learning. The presence of online learning is the stimulus that hastens the spread of IT skills in the Industry 4.0 era, which relies on technology in every respect. Previous research stated that technology in online learning helps educators interact with and support learners during the Covid-19 pandemic (Salsabila et al., 2020). Technology also offers learning that can be done anywhere and at any time (Korucu & Alkan, 2011; Nursyifa et al., 2020), and improves technology-related skills (Ulfa & Mikdar, 2020). On the other hand, however, online learning as practiced in the field still leaves many flaws and problems in education. Findings in previous research show that learning during the Covid-19 pandemic causes students' understanding to regress, problems in accessing technology (Di Pietro et al., 2020, p. 30), the emergence of physical and mental fatigue (Atmojo & Nugroho, 2020), eye strain (Octaberlina & Muslimin, 2020), disruption of Internet connections (Diningrat et al., 2020), difficulty in producing learning media and designing fair assessment (Arlinwibowo et al., 2020), less-than-maximal explanations (Jariyah & Tyastirin, 2020), limited learning resources (Czerniewicz et al., 2020), and students whose independent learning ability is still weak (Chang & Fang, 2020). Research before Covid-19 also showed obstacles in online learning, such as difficulties in time management (Mohamadkhani et al., 2017) and slow internet connections (Cakrawati, 2017). Analysis of various research findings shows that the presence of Covid-19 has had a large impact on education, both positive and negative. However, additional studies are still needed to explore the challenges students face and the strategies they use during the Covid-19 pandemic, so as to add to the body of research from this period. Therefore, based on the complexity that has arisen, this study intends to deepen, strengthen, and complement the discussion of online learning during the Covid-19 pandemic from a student perspective. The purposes of this study are to (1) explore students' experience of online learning during the Covid-19 pandemic; (2) describe the negative impacts and obstacles that arise in online learning; and (3) describe student strategies for facing the problems that occur during online learning. Method This research is a qualitative study of the case study type. The study was conducted at Yogyakarta PGRI University on October 28-November 11, 2020. Participants in the study were 20 students of Elementary School Teacher Education. The technique used to determine the research subjects was purposive sampling. The students selected as respondents were active 3rd-semester students who had undergone online learning for two semesters. Data were collected through open questionnaires via Google Forms and interviews via WhatsApp. Open questionnaires were administered at the beginning of the study to see the problem in general; information was then deepened through individual and group interviews. The open questionnaire and interview materials covered (1) exploration of learning during the Covid-19 pandemic; (2) the negative impacts and obstacles that arise in online learning; and (3) PGSD students' learning strategies in online learning.
The researchers were the main instrument in the study. Open questionnaires and interviews were conducted according to each respondent's free time. The researchers conveyed that all data obtained from respondents would be used for this study only. The participants' responses to the open questionnaire and interviews remained the sole property of the researchers and the respondents themselves, and the collected data would not affect participants' future conduct. The analysis was carried out using the Bogdan & Biklen (1982) model through reduction, searching for sub-themes, and finding relationships between sub-themes (Bogdan & Biklen, 1982). The open questionnaire and interview data were first reduced; the reduced data were then presented in table form and searched for sub-themes. Thereafter, the sub-themes were analyzed and connected with one another to reach conclusions. Results The results of this study are divided into three main topics, namely (1) online learning exploration during the Covid-19 pandemic; (2) obstacles and negative impacts in online learning; and (3) student learning strategies in facing online learning. Each topic is further elaborated in several themes through reduction and the search for sub-themes; each sub-theme is then connected with the others to obtain inferences for each theme. Each topic is discussed in more detail in the following discussion. Exploring Online Learning in Times of the Covid-19 Pandemic For the topic of online learning exploration during the Covid-19 pandemic, two themes were found: (1) technology in online learning; and (2) learning in the time of the Covid-19 pandemic. Each theme is presented as follows. Technologies of Online Learning The use of gadgets and the Internet is inseparable from online learning. Based on the information gained, all students used laptops and smartphones to study and to work on tasks during the Covid-19 pandemic. The Internet is used so that each student can be connected with the others, including with the teaching lecturer. Furthermore, the utilization of technology and the internet increased during the Covid-19 pandemic. Based on the data analysis for the theme of technology in online learning, there are four sub-themes, as presented in Table 1: (1) most students were used to using technology and the internet in learning both before and after the emergence of Covid-19; (2) the intensity of technology and internet use increased after the arrival of the Covid-19 pandemic; (3) online learning makes technology access even more advanced; and (4) the utilization of technology adds to students' IT skills. The relation between these sub-themes is that the increasing use of technology and the internet during the Covid-19 pandemic has accelerated the development of education that utilizes technology in the Industry 4.0 era. Most college students mentioned that they are fully adept at using technology and the internet in learning, both before and after the advent of Covid-19. The distinguishing factor is that after the advent of Covid-19, the intensity of technology and internet use increased. Before Covid-19, Internet use was only for seeking information related to education or material relevant to lectures; the presence of Covid-19, however, makes learning with technology more varied, with tools such as Google Classroom, Edmodo, gamification, and video conference platforms such as Zoom or Google Meet.
Students also reported that the use of technology in learning during the Covid-19 pandemic adds to their IT skills, which certainly makes technology access even more advanced. However, this positive circumstance still leaves many gaps and flaws in online learning during the Covid-19 pandemic. One of them is hampered access to the books in the campus library, because the campus is locked down and many students have returned to their hometowns. Learning in Times of the Covid-19 Pandemic Online learning during the Covid-19 pandemic has already lasted two semesters in Indonesia. Based on the analysis of the theme of learning during the Covid-19 pandemic, six sub-themes were found, as presented in Table 2: (1) the interactions between lecturers and college students are quite good, although not as optimal as face-to-face; (2) college students sometimes have a hard time understanding the lecturers' teaching; (3) not all lecturers provide reflection at the end of learning; (4) lecturers sometimes fall short in providing explanations; (5) online learning leaves student understanding lower than in face-to-face learning; and (6) students hope to return to learning with the face-to-face system. The relation between these sub-themes is that learning during the Covid-19 pandemic has not been fully optimal. That conclusion rests on the information students presented about new challenges that expose the flaws of learning during the Covid-19 pandemic. Students find it harder to understand the lecturers' teaching, whether directly from the lecturers or from other students' presentations in already-shared groups. Students realize that the ongoing online learning diminishes their understanding, in contrast to face-to-face learning systems. Lecturers also sometimes fall short in explaining the topics of the material discussed and in giving assignments; in addition, some lecturers still do not provide reflection at the end of learning. Furthermore, students find that the interactions between lecturers and students in online learning are quite good, although not as optimal as face-to-face. Some college students expressed hope to return to face-to-face lectures. However, the pandemic conditions, with steadily increasing positive cases, have led the college to keep postponing face-to-face learning for the following semester, so students from out of town continue to postpone going to campus. This situation certainly hampers the search for references in the form of printed books available in the campus library. Obstacles and Negative Impacts of Online Learning The topic of obstacles and negative impacts in online learning is discussed in two themes. The theme that addresses barriers concerns the signal conditions and students' learning environments, while the theme dealing with the negative impact of online learning covers the physical and mental impacts on university students. The Issues of Signal and the Students' Learning Environment The stability of internet signals certainly determines the continuity of online learning during the Covid-19 pandemic. Similarly, the home learning environment co-determines the smoothness of online learning, because learning is done remotely and the college students are each at home.
Based on the data analysis of signal conditions and student learning environments, seven sub-themes were obtained, as presented in Table 3.

Table 3. Reduction Results on Signal Conditions and the Learning Environment
Sub-themes:
1. Signal conditions worsen or drop out during power outages.
2. Signal conditions drop out when the weather is bad.
3. Poor signal conditions often occur for students living in highland or mountainous areas.
4. Poor signal conditions make students less focused on learning.
5. Students confirm with the lecturer when the learning process is hindered by impaired signal conditions.
6. A bustling or noisy environment can interfere with students' concentration in learning.
7. Parents who lack understanding of students' learning processes will hinder student learning.
Relations between sub-themes: Poor signal conditions are caused by power outages, weather conditions, and living in highland or mountainous areas; poor signal conditions can hinder student learning; an uncomfortable environment can hinder students in learning.

Poor signal conditions hinder students' learning because they break students' focus. Students explained that unstable signals are caused by power outages from the local PLN, bad weather such as heavy rain or strong winds, and living in highland or mountainous areas. When signal conditions keep students from following lectures, students confirm immediately with the lecturers, either directly or through other friends as intermediaries.

In addition, an uncomfortable environment may also hinder students in studying. The discomfort stems from crowded learning environments that disrupt students' concentration in following lectures, and from situations in which parents lack understanding of the learning processes students undertake. Students mentioned that crowded environments arise from the noise of younger siblings or other small children while they study, or from vehicles passing near the house, in the case of students living along a highway.

Physical and Mental Impacts on Students

In addition to obstacles, online learning certainly also brings negative impacts for students. Based on the information collected, the negative impacts concern students' physical and mental health. The data analysis of the physical and mental impacts on students yielded eight sub-themes, as presented in Table 4.

Table 4. Reduction Results on Physical and Mental Impacts on Students
Sub-themes:
1. Spending too much time in front of a gadget damages and hurts the eyes.
2. Using gadgets for too long in online learning causes dizziness.
3. A monotonous sitting position makes the waist ache.
4. Too many assignments and too little rest cause mouth ulcers and increase gastric acid.
5. The mental impacts of online learning are the emergence of stress and pressure.
6. Students feel saturated with online learning.
7. Students' motivation declines in online learning.
8. Laziness arises during online learning.
Relations between sub-themes: Uncontrolled online learning can negatively affect students' physical health; excessive online learning without regard for mental health can negatively affect students' mental health.
The negative physical impacts are caused by students' lack of movement and by lingering in front of laptops or smartphones. Students are fully aware that lingering in front of a gadget makes the eyes blurry, sore, and stinging. Excessive gadget use also causes dizziness; after online learning is finished, students take breaks while still using gadgets, for example playing games, opening social media, or watching movies. Furthermore, a monotonous sitting position makes the waist ache, and too many assignments with too little rest away from the gadget give students mouth ulcers, while some students mention that their gastric acid rises. Besides the physical impacts, another negative impact perceived by students is mental. The mental impact students feel takes the form of stress and pressure due to assignments that they consider excessive. Students also mentioned that they were saturated with online learning, which results in declining motivation and a sense of laziness arising in the learning process. Declining motivation can certainly make students lose focus and learn less optimally.

Students' Learning Strategies in Managing Online Learning

Topics regarding students' learning strategies for coping with online learning are discussed in three themes: (1) how to overcome difficulties in learning; (2) strategies for organizing learning; and (3) self-motivating techniques. Each theme is presented as follows.

The Way to Overcome Obstacles in Learning

The difficulties in learning during the Covid-19 pandemic resulted from the negative impacts and obstacles of online learning discussed in the earlier themes. In addition to mentioning the negative impacts and obstacles, students also described how to minimize these difficulties. Their solutions to poor signal conditions are changing providers, looking for a place with a good signal, and looking for a free Wi-Fi spot, such as a cafe or the house of a relative with Wi-Fi installed. In the case of a crowded environment due to the disturbance of younger siblings or other children, students search for a quiet place, such as their room, and lock the door. For parents who sometimes misunderstand students' situations while studying, students communicate openly with their parents about their activities and learning schedules during college. Parents who provide full support to their children and meet all their needs certainly enable students to learn optimally. Furthermore, when students have difficulty finding references or are confused about an assignment the lecturer has given, they ask other friends, both personally and through the WhatsApp (WA) group. Students also reported that they often ask the lecturers directly, while more reluctant students prefer to ask senior students who have completed the course. Regarding the negative physical and mental impacts, students prefer to take a break, or to look at something distant and green, like trees, to refresh the mind. Some students also talk to parents or friends to reduce the burden on their minds.

Strategies in Organizing Learning

Students have varied learning strategies to minimize the disturbances that arise in online learning during the Covid-19 pandemic.
Based on the data analysis of students' strategies for organizing their study, six sub-themes were obtained, as presented in Table 5.

Table 5. Reduction Results on Strategies in Organizing Learning
Sub-themes:
1. Setting aside special time to study or work on assignments at night, because it improves concentration.
2. Not procrastinating on assignments.
3. Getting used to studying for one hour and then resting for 20 minutes.
4. Setting reminder alarms for important lectures, assignments, and agendas.
5. Creating a list of unfinished assignments.
6. Learning by summarizing or by noting important points.
Relations between sub-themes: Setting aside special time to study and not procrastinating on assignments can optimize students' study; making assignment lists and summaries of material can make it easier for students to learn.

The strategy that students adopt is to set a specific time for study. The determination of learning time naturally varies from one student to another. One student explained the need to build the habit of studying for a full hour while avoiding interference from social media and other distractions, then taking a 20-minute break. Another thing students do is set reminder alarms for lecture times, assignments, and other important agendas such as attending webinars or public lectures. The most common practice is to set aside special time at night to study, because according to the students, nighttime improves their concentration for studying as well as for working on assignments. In addition, students create assignment lists as reminders of completed and unfinished tasks. The notes are kept on smartphones, on whiteboards mounted in their rooms, or on their laptops. Some students also make summaries or note important points for some courses to make the material easier to remember. This finding adds information, from the students' perspective, about strategies for optimizing self-study during the Covid-19 pandemic.

The Method of Self-Motivation

Although students' motivation during the Covid-19 pandemic tends to decline, students still have techniques to improve their motivation in online learning. Based on the data analysis, there are four techniques that most students use to improve their motivation during the Covid-19 pandemic, as presented in Table 6.

Table 6. Reduction Results of Self-Motivating Techniques
Sub-themes:
1. Motivating oneself by remembering one's parents' struggles.
2. Motivating oneself by watching motivational videos on YouTube and reading quotes from social media.
3. Motivating oneself by speaking to oneself in positive words.
4. Motivating oneself by resting or calming the mind.
Relation between sub-themes: Students have various ways to motivate themselves, such as self-talk, motivation from YouTube, remembering their parents' struggles, and resting or calming the mind.

Students have various ways to motivate themselves. The motivation comes from two kinds of stimuli, namely from outside and from within. Outside stimuli take the form of motivational videos watched on YouTube, quotes from social media, and remembering the struggles of parents who pay tuition fees that are certainly substantial. Meanwhile, the stimulus from within is evoked through self-talk, speaking to oneself with motivating positive words. In addition, motivation can also be gained by resting and calming the mind.
This finding provides additional information on how students motivate themselves in the midst of the Covid-19 pandemic.

Discussion

Technology integration during the Covid-19 pandemic is a must for online learning. The current crisis shows new technology to be an essential and needed tool in learning (Nuere & de Miguel, 2020). This study's findings indicate the importance of gadgets and the internet in supporting learning during the Covid-19 pandemic. These findings are in line with previous research stating that technology facilitates students and educators to interact and learn together online (Salsabila et al., 2020). The intermediary media used in online learning are gadgets such as smartphones, laptops, or tablets. Meanwhile, the interactions formed in online learning, connecting students and lecturers, can occur because of the internet. Therefore, online learning cannot be done without gadgets and the internet. The use of technology during the Covid-19 pandemic is indeed key in the world of education (Herliandry et al., 2020).

Students also mentioned that they were accustomed to using technology and the internet, both before and after the emergence of Covid-19. Technology has become part of the learning process provided by the campus for students; thus, lecturers need to understand how to use technology in learning (Arifin & Sukmawidjaya, 2020). This finding stands apart from the previous statement that problems arise in accessing technology (Di Pietro et al., 2020). Problems in accessing technology may still arise among elementary to senior high school students, but not among university students, because college students have adapted independently to learning that utilizes technology. This is reinforced by previous findings stating that students' media literacy and technological literacy are good (Sulistiyarini & Sabirin, 2020). Therefore, most higher education students do not need to learn from zero to access online learning that utilizes technology.

The emergence of Covid-19 increased the use of technology and the internet. It accelerates the development of education that utilizes technology in the Industry 4.0 era, in line with previous research stating that the achievement of technological innovation accelerated during the Covid-19 pandemic (Brem et al., 2021). However, there is still a negative side to the emergence of Covid-19 for the world of education. The findings show that it is challenging for students to find printed reference sources that can be accessed through the library because they have returned to their hometowns. This finding reinforces previous research stating that access to resources available on campus, such as libraries, computer laboratories, and free campus Wi-Fi, became inaccessible (Czerniewicz et al., 2020). Thus, books that support lectures are searched for online. The problem is that students lack references due to limitations in obtaining the e-books needed in lectures, although there are also illegal e-books that students often target when seeking information; this is certainly new information in this study.

Other findings in this study confirm that students have more difficulty understanding the learning process during the Covid-19 period than before the presence of Covid-19.
These findings strengthen the previous statements that students' understanding experienced a setback during the Covid-19 pandemic (Di Pietro et al., 2020), that confusion of concepts occurred (Onojah et al., 2020), that explanations from lecturers were not optimal (Jariyah & Tyastirin, 2020), and that students' learning abilities were still relatively weak (Chang & Fang, 2020). Other findings from the lecturers' point of view also state that there are difficulties in producing learning media and designing fair assessments (Arlinwibowo et al., 2020), and in ensuring student achievement using only technological media (Fatoni et al., 2020). The difficulties that arise for lecturers and students are certainly correlated with a decreased level of student understanding in learning. Lecturers therefore need to continually reflect on their teaching and seek effective learning approaches for the environment in which they teach. Similarly, students must continue to adapt to the online learning environment and improve their self-directed learning.

Furthermore, the findings show that the interaction between lecturers and students in online learning is considerably good, although not as optimal as face-to-face. The good communication relationships that lecturers and students develop help increase student involvement (Tichavsky et al., 2015), support student cognition (Chatzara et al., 2016), and reduce anxiety amid the Covid-19 pandemic (Talidong & Toquero, 2020). Students also expressed their hope of returning to face-to-face lectures because online learning was still not optimal. These findings strengthen previous research finding that online learning is still less than optimal (Parmin et al., 2020), cannot meet as many learning needs as face-to-face instruction (Best & Conceição, 2017), and is less interesting than face-to-face instruction (Mulyanti et al., 2020). Therefore, universities need to conduct periodic evaluations to monitor the effectiveness of online learning and to seek best practices through scientific studies.

The presence of Covid-19 also brings obstacles and negative impacts to online learning. The findings show that unreliable signal reception hinders students' learning because it breaks their focus. These findings corroborate previous research on less optimal learning due to signal problems (Abidin et al., 2020). Students also stated that unreliable signal reception was caused by blackouts from the local PLN, bad weather such as heavy rain or strong winds, and living in highland or mountainous areas. These findings complement previous research on the causes of disturbances in internet connectivity (Cakrawati, 2017; Diningrat et al., 2020; Fatoni et al., 2020). Students dealt with signal interference by changing providers, finding a place with better signal reception, and looking for free Wi-Fi, such as at a cafe or at a relative's house with Wi-Fi installed. These findings add alternative solutions to the problems, identified in previous research, with the facilities needed in online learning (Abidin et al., 2020; Cakrawati, 2017; Dabbagh, 2007; Diningrat et al., 2020; Zhu et al., 2020). Students also stated that there were still disturbances in the learning environment that made it uncomfortable and disturbed their concentration.
These findings corroborate previous findings that it is more difficult to find a conducive learning environment (Czerniewicz et al., 2020; Erarslan & Arslan, 2020; Fatoni et al., 2020). The students' strategy for dealing with this is to find a quiet place, such as their room, and lock the door. These findings strengthen previous research that comfort is created in the environment by minimizing disruption to learning (Erarslan & Arslan, 2020; Sadikin & Hamidah, 2020).

Furthermore, online learning during the Covid-19 pandemic also negatively impacts students' physical and mental health. This study's findings indicate that students experienced physical effects in the form of waist pain, increased gastric acid, and blurred, sore, and stinging eyes. These findings strengthen and complement previous research on the emergence of physical exhaustion (Atmojo & Nugroho, 2020) and eye strain (Octaberlina & Muslimin, 2020). The mental impacts experienced by students take the form of boredom, decreased motivation, and stress arising while studying during the Covid-19 pandemic. These findings complement previous research on the emergence of students' mental problems (Atmojo & Nugroho, 2020; Charles et al., 2021; Jhon et al., 2020; Rakhmanov & Dane, 2020; Sayeed et al., 2020; Wang & Zhao, 2020). Stress emerged because students were constantly under pressure and anxious about their studies (Chhetri et al., 2021). The strategy students used to deal with this was to take a short break or to look at something distant and green, such as trees, to refresh the mind. Some students shared their experiences with their parents or friends to reduce the burden on their minds. These findings are in line with previous research stating that it is necessary to maintain health and fitness through entertainment, motor-skill exercise, nutritious food, and a healthy lifestyle (Sekti & Juwariyah, 2020). The findings also reinforce that sleep habits, daily fitness routines, and social interactions significantly affect students' health conditions (Chaturvedi et al., 2021).

Online learning, which has now run for a year in higher education, certainly carries much meaning for each component of education, because online learning during the Covid-19 pandemic also contributed to accelerating the use of technology in education. However, online learning during the Covid-19 period could not be carried out optimally, and there were still many obstacles in the process. Even so, based on students' experiences and points of view, they do not simply give up on the uncertainty that Covid-19 brings to learning. Students still have the motivation and strategies to overcome every obstacle and negative impact of Covid-19 on learning, and they can learn strategies from each other to improve their self-directed learning.

This research certainly has limits that appeared in data collection. These limits can serve as suggestions for further research by other researchers. The limits and suggestions of the study are as follows. First, the study has not explored the viewpoints of lecturers and the university on the barriers, negative impacts, and strategies that lecturers can adopt to deal with the challenges that arise. Further research is therefore required to explore the faculty's viewpoint on online learning through in-depth interviews and observations.
Second, the study has not traced the positive impacts during the Covid-19 pandemic, nor the future impacts after Covid-19 is resolved. Deeper study is therefore needed of the positive impacts, or the potential, that arise during the Covid-19 pandemic as well as after Covid-19 is resolved in the foreseeable future. Third, this study has not yet traced effective learning processes or best practices in learning, so further research is needed on these.

Conclusions and Suggestions

Based on the analysis results, it is concluded that learning during the Covid-19 pandemic has not been fully optimal. On the positive side, however, the increasing use of technology and the internet during the Covid-19 pandemic has accelerated the development of education that utilizes technology in the Industry 4.0 era. Students also experienced negative physical and mental impacts during the Covid-19 pandemic because online learning was poorly controlled, without attention to students' physical and mental health. Besides this, students still encounter obstacles in online learning, such as poor signal reception, a learning environment that is sometimes crowded, and learning activities with lecturers that are not as optimal as face-to-face. Nevertheless, students still have learning strategies, self-motivating techniques, and ways to minimize the obstacles and impacts of online learning during the Covid-19 pandemic.
Memory-assisted measurement-device-independent quantum key distribution

A protocol with the potential of beating the existing distance records for conventional quantum key distribution (QKD) systems is proposed. It borrows ideas from quantum repeaters by using memories in the middle of the link, and from measurement-device-independent QKD, which only requires optical source equipment at the user's end. For certain memories with short access times, our scheme allows a higher repetition rate than that of quantum repeaters with single-mode memories, thereby requiring lower coherence times. By accounting for various sources of nonideality, such as memory decoherence, dark counts, misalignment errors, and background noise, as well as timing issues with memories, we develop a mathematical framework within which we can compare QKD systems with and without memories. In particular, we show that with the state-of-the-art technology for quantum memories, it is potentially possible to devise memory-assisted QKD systems that, at certain distances of practical interest, outperform current QKD implementations.

Introduction

Despite all commercial [1] and experimental achievements in QKD [2,3,4,5,6,7,8,9,10], reaching arbitrarily long distances is still a remote objective. The fundamental solution to this problem, i.e., quantum repeaters, has been known for over a decade. From the early proposals by Briegel et al. [11] to the latest no-memory versions [12,13,14], quantum repeaters typically rely on highly efficient quantum gates comparable to what we may need for future quantum computers. While progress on that front may take some time before such systems become functional, another approach based on probabilistic gate operations was proposed by Duan and co-workers [15], which could offer a simpler way of implementing quantum repeaters for moderate distances of up to around 1000 km. The latter systems require quantum memory modules with high coupling efficiencies to light and with coherence times exceeding the transmission delays, which are yet to be achieved together. In this paper, we propose a protocol that, although not as scalable as quantum repeaters, relaxes to some extent, for certain classes of memories, the harsh requirements on memories' coherence times, thereby paving the way for existing technologies to beat the highest distance records achieved for no-memory QKD links [2]. The idea behind our protocol was presented in [16], and independent work has also been reported in [17]. This work proposes additional practical schemes and rigorously analyses them under realistic conditions.

Our protocol relies on concepts from quantum repeaters, on the one hand, and the recently proposed measurement-device-independent QKD (MDI-QKD), on the other. The original MDI-QKD [18] relies on the users sending encoded photons to a middle site at which a Bell-state measurement (BSM) is performed. One major practical advantage of MDI-QKD is that this BSM can be done by an untrusted party, e.g., the service provider, which makes MDI-QKD resilient to detector attacks, e.g., time-shift, remapping, and blinding attacks [19,20,21,22,23,24,25,26].
The security is then guaranteed by the reverse EPR protocol [27]. Another practical advantage is that this BSM does not need to be a perfect measurement; even a partial, imperfect BSM implemented by linear optical elements can do the job. In our scheme, by using two quantum memories at the middle site, we first store the states of the transmitted photons in the memories, and perform the required BSM only when both memories are loaded. In that sense, our memory-assisted MDI-QKD is similar to a single-node quantum repeater, except that there are no memories at the users' ends. This way, similar to quantum repeaters, we achieve a rate-versus-distance improvement as compared to the MDI-QKD schemes proposed in [18,28,29,30], or other conventional QKD systems that do not use quantum memories.

There is an important distinction between our protocol and a conventional quantum repeater system that relies on single-mode memories. In such a quantum repeater link, which relies on initial entanglement distribution among neighbouring nodes, the repeat period for the protocol is mainly dictated by the transmission delay over the shortest segment of the repeater system [31,32]. In our scheme, however, the repeat period is constrained by the writing time into the memories, including the time needed for the heralding/verification process. This implies that using sufficiently fast memories, i.e., with short writing times, one can run our scheme at a faster rate than that of a quantum repeater, thereby achieving higher key generation rates, as compared to conventional QKD links, at lower coherence times, as compared to probabilistic repeater systems. This increase in clock rate is what our proposal shares with the recently proposed third generation of quantum repeaters, which use quantum error correction codes to compensate for loss and errors, and are thus also able to speed up the clock rate to local processing times [12]. The need for long coherence times remains one of the key challenges in implementing the first generations of quantum repeaters before the latest no-memory quantum repeater proposals can be implemented.

The above two benefits would offer a midterm solution to the problem of long-distance QKD. While our scheme is not scalable the same way that quantum repeaters are, it possibly allows us to use the existing technology for quantum memories to improve the performance of QKD systems. In the absence of fully operational quantum repeater systems, our setup can fill the gap between theory and practice and become one of the first applications of realistic quantum memories in quantum communications.

It is worth mentioning that the setups we propose here are compatible with different generations of hybrid quantum-classical (HQC) networks [33]. In such systems, home users are not only able to use broadband data services, but they can also use quantum services such as QKD. MDI-QKD offers a user-friendly approach to the access part of such networks, as the end users only require source equipment. Whereas, in the first generation of HQC networks, the service provider may only facilitate routing services for quantum applications, in future generations, probabilistic, deterministic, and eventually no-memory quantum repeaters will constitute the quantum core of the network. In each of these cases, our setups are extensible and compatible with forthcoming technologies for HQC networks.
The rest of the paper is structured as follows. In Section II, we describe our proposed schemes and the modelling used for each component therein. Section III presents our key rate analysis, followed by some numerical results in Section IV. Section V concludes the paper.

System Description

Our scheme relies on "loading" quantum memories (QMs) with certain, unknown, states of light. This loading process needs to be heralding, that is, by the end of it, we should learn about its success. Within our scheme, two types of memories can be employed, which we refer to as directly versus indirectly heralding QMs. Some QMs can operate in both ways, while others are more apt to one than the other. By directly heralding memories we refer to the class of memories to which we can directly transfer the state of a photon and verify, without revealing or demolishing the quantum state, whether this writing process has been successful. An example of such memories is a trapped atom in an optical cavity [34]. In the case of indirectly heralding memories, a direct writing-verification scheme may not exist. Instead, we assume that we can entangle a photonic state with the state of such QMs [15,35,36,37,38,39,40], and later, by doing a measurement on the photon, we can effectively achieve a heralded writing into the memory. These two approaches to writing cover the practical examples most relevant to our scheme.

The scheme for directly heralding memories works as follows [16,17]; see figure 1(a). The two communicating parties, Alice and Bob, send BB84 encoded pulses [41], by either single-photon or weak laser sources, towards QM units located in the middle of the link. Each QM stores a photon in a possibly probabilistic, but heralding, way. Once both memories are loaded, we retrieve their states and perform a BSM on the corresponding photons. A successful BSM indicates some form of correlation between the bits transmitted by Alice and Bob.

We can easily extend the above idea to the case of indirectly heralding memories. An additional BSM on each side, along with an entangling process between photons and QMs, can replace the verification process needed for directly heralding memories. In this case, see figure 1(b), a successful BSM between the photon transmitted by the user and the one entangled with the QM would effectively herald a successful loading process, that is, the state of the QM is correlated with the quantum state sent by the user.

In order to entangle a QM with a photon, one can think of two standard approaches. One approach is to generate a pair of entangled photons, e.g., by using spontaneous parametric down-converters [42,43], and then store one of the photons in the memory and use the other one for interference with the incoming photon sent by the user. While this approach is not fully heralding (because we cannot be sure of the absorption of the locally generated photon by the memory), it is still a viable option for highly efficient writing procedures. Another approach to entangling a photon with a memory, which this paper is mainly concerned with, is to start from the memory and generate a photon entangled with the memory by driving certain transitions in the memory [40,15]. With entangling times as short as 300 ps reported in the literature [44], high repetition rates are potentially achievable for indirectly heralding memories.
In either approach, it is possible to have multiple-excitation effects, which can cause errors in our setup. In this paper, for readability reasons, we make the simplifying assumption of having only single excitations in the memories, and address the multiple-excitation effect in a separate publication [45]. Furthermore, here we only consider polarization entanglement. The extension to other types of entanglement is straightforward and will be dealt with in forthcoming publications.

Under all the above assumptions, suppose once we entangle the memory A with a single photon P, the joint state of the two is given by

|ψ⟩_AP = (|s_H⟩_A |H⟩_P + |s_V⟩_A |V⟩_P)/√2,    (2.1)

where |H⟩_P and |V⟩_P denote horizontally and vertically polarized single photons, and |s_H⟩_A and |s_V⟩_A are the corresponding memory states; see figure 1(d). In equation (2.1), the conditional state of the photon, knowing the memory state, has the same form as in BB84. Each leg of figure 1(b), from the user end to the respective QM, is then similar to an asymmetric setup of the original MDI-QKD scheme as depicted in figure 1(c). The working of the system in figure 1(b) will then follow that of the original MDI-QKD. We will use this similarity in our analysis of the system in figure 1(b).

Figure 1: (a) MDI-QKD with directly heralding quantum memories. Alice and Bob use the efficient BB84 protocol to encode and send pulses to their respective QMs in the middle of the link. At each round, each QM attempts to store the incoming pulse. Once they are both loaded, we retrieve the QMs' states and perform a BSM on the resulting photons. (b) MDI-QKD with indirectly heralding quantum memories. At each round, an entangling process is applied to each QM, which generates a photon entangled, in polarization, with the QM. These photons interfere at the BSM modules next to the QMs with incoming pulses from the encoders. As soon as one of these BSMs succeeds, we stop the entangling process on the corresponding QM, and wait until both QMs are ready for the middle BSM operation. In this case, QMs are not required to be heralding; a trigger event is declared by the success of the BSM located between the QM and the respective encoder. (c) The original MDI-QKD protocol [18]. (d) One possible energy-level configuration for a QM suitable for polarization encoding.

The main advantage of our scheme as compared to the original MDI-QKD, in figure 1(c), is its higher resilience to channel loss and dark counts. In the no-memory MDI-QKD, both pulses, sent by Alice and Bob, should survive the path loss before a BSM can be performed. The key generation rate then scales with the loss in the entire channel. In our scheme, each pulse still needs to survive the path loss over half of the link, but this can happen in different rounds for the signal sent by Alice as compared to that of Bob. We therefore achieve the quantum repeater benefit in that the key generation rate, in the symmetric case, scales with the loss over half of the total distance. Moreover, in the case of directly heralding memories, our scheme is almost immune against dark counts [17]. This is because the measurement efficiency in the BSM module is typically a few orders of magnitude higher than the dark count rate. Dark counts will then only slightly add to the error rate. In our scheme, memory decoherence errors play a major role, as we explain in this and the following sections.

In the following, we describe the protocol and its components in more detail.
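Before proceeding, the following minimal sketch (our own illustration; the basis conventions and function names are ours, not from the paper) makes equation (2.1) concrete: it builds the joint memory-photon state and verifies that conditioning on the memory readout basis leaves the photon in one of the four BB84 states.

```python
import numpy as np

# Illustrative basis choice: memory |s_H> -> [1,0], |s_V> -> [0,1];
# photon |H> -> [1,0], |V> -> [0,1].
s_H, s_V = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Joint memory-photon state of equation (2.1)
psi = (np.kron(s_H, s_H) + np.kron(s_V, s_V)) / np.sqrt(2)

def photon_given_memory(psi, mem_state):
    """Return the normalized photon state conditioned on finding
    the memory in mem_state (index order: memory, photon)."""
    M = psi.reshape(2, 2)
    phi = mem_state.conj() @ M
    return phi / np.linalg.norm(phi)

plus = (s_H + s_V) / np.sqrt(2)

# Z-basis readout of the memory heralds |H> or |V>; X-basis readout
# heralds (|H> +/- |V>)/sqrt(2): the BB84 form mentioned in the text.
print(photon_given_memory(psi, s_H))   # -> [1, 0], i.e., |H>
print(photon_given_memory(psi, plus))  # -> [0.707, 0.707]
```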
Protocol

In our protocol, Alice and Bob, at a rate R_S, send BB84 encoded pulses to the middle station (dashed boxes in figure 1). At the QMs, for each incoming pulse, we either apply a loading process by which we can store the state of the photons into the memories and verify it, or use the indirectly heralding scheme of figure 1(b). Once successful for a particular QM, we stop the loading procedure on that QM, and wait until both memories are loaded, at which point a BSM is performed on the QMs. The BSM results are sent back to Alice and Bob, and the above procedure is repeated until a sufficient number of raw key bits is obtained. The rest of the protocol is the same as that of MDI-QKD. Sifting and postprocessing are performed on the raw key to obtain a secret key. In this paper, we neglect finite-size-key effects in our analysis [29].

Component modeling

In this section, we model each component of figure 1, including sources and encoders, the channel, QMs, and the BSM module.

Sources and encoders. We consider two types of sources: ideal single-photon sources and phase-randomized weak laser pulses. The latter will be used in the decoy-state [46] version of the protocol. Each source, at both Alice's and Bob's sides, generates pulses at a rate R_S. Each pulse is polarization encoded in either the rectilinear (Z) or diagonal (X) basis. In the case of ideal single photons, we, correspondingly, send states |H⟩ and |V⟩ in the Z basis, and (|H⟩ + |V⟩)/√2 and (|H⟩ − |V⟩)/√2 in the X basis. In each basis, the two employed states, respectively, represent bits 1 and 0. In the case of the decoy-state protocol, the single-photon states are replaced with weak phase-randomized coherent states of the same polarization. Here, we use the efficient version of BB84 encoding, where the Z basis is used much more frequently than the X basis [47]. The pulse duration is denoted by τ_p and is chosen in accordance with the requirements of the memory system in use.

There are several sources of nonideality one may be concerned with at the encoder box. For instance, in [48], one major source of error is in not generating fully orthogonal states in each basis. Note that secure exchange of keys may still be possible, although at a possibly reduced rate, even with uncharacterised sources [49]. Another possible issue is having multiple-photon components if one uses parametric down-converters to generate single photons [50,51]. Although all these issues, among others, are important for the overall performance of the system, here we rather focus on the memory side of the system, which is newly introduced, and deal with the details of source imperfections, and their effects on the secret key generation rate, in a separate publication [45].

Channels. The distance between Alice (Bob) and the respective QM is denoted by L_A (L_B). The total distance between Alice and Bob is denoted by L = L_A + L_B. The transmission coefficient for a channel with length l is given by

η_ch(l) = exp(−l/L_att),

where L_att is the attenuation length of the channel (roughly, 22 km for 0.2 dB/km of loss). The channel is considered to have a background rate of γ_BG per polarization mode, which results in an average of p_BG = 2γ_BG τ_p background photons per pulse. This can stem from stray light or crosstalk from other channels, especially if classical signals are multiplexed with quantum ones in a network setup [6,5,52,53,54].
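To make the channel model concrete, the short sketch below (a minimal illustration of our own; the function names are ours) converts a fibre loss figure in dB/km into the attenuation length L_att and evaluates η_ch(l). With 0.2 dB/km, L_att = 10/(0.2 ln 10) ≈ 21.7 km, consistent with the rough 22 km quoted above.

```python
import math

def attenuation_length_km(loss_db_per_km: float) -> float:
    """L_att such that exp(-l/L_att) matches a fibre loss of
    loss_db_per_km dB per km, i.e., L_att = 10 / (alpha * ln 10)."""
    return 10.0 / (loss_db_per_km * math.log(10.0))

def channel_transmissivity(l_km: float, loss_db_per_km: float = 0.2) -> float:
    """eta_ch(l) = exp(-l / L_att) for a fibre of length l_km."""
    return math.exp(-l_km / attenuation_length_km(loss_db_per_km))

print(attenuation_length_km(0.2))       # ~21.7 km
print(channel_transmissivity(100.0))    # ~0.01, i.e., 20 dB of loss
```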
We also consider setup misalignment in our analysis. We assume certain polarization maintenance schemes are in place for Alice's and Bob's channels, so that the reference frames at the sources and memories are, on average, the same. We nevertheless consider a setup misalignment error probability e_dK, for K = A, B, to represent misalignment errors in each channel.

Quantum memories. We use the following assumptions and terminologies for the employed QMs. This list covers the most relevant parameters in an experimental setup relying on polarization encoding, whether the QM is operated in the directly or indirectly heralding mode.

• In the case of a successful loading, each QM in figure 1 ideally stores a polarization qubit corresponding to the polarization of the incoming pulse. We assume that such a squashing operation occurs [55,56] even if at the input of the QM there is a non-qubit state, e.g., a phase-randomized coherent state. That is, if, for instance, two photons with horizontal polarizations are at the input of the memory, the QM would only store the polarization information and ignore the photon-number information. In practice, the loading efficiency would be a function of the input photon number, but, for simplicity, we neglect this dependence here. This is in line with the single-excitation assumption we have adopted in this paper. One suitable energy-level structure for such a memory is the double-Λ configuration in figure 1(d), with a common ground state and two other metastable states corresponding to two orthogonal polarizations. The excited states can then facilitate Raman transitions from the ground state to each of the metastable states, using known optical transition techniques [57,58], in response to the input polarization state. We assume that each QM only stores one spatio-temporal mode of light. Our protocol can be extended to incorporate multimode QMs [59,60,61,62] or multiple QMs [31], in which case a linear improvement in the rate is expected. In this work, we focus on the case of a single logical memory per user and leave extensions to future work.

• For directly heralding memories, we denote the QM's writing efficiency by η_w. The writing efficiency is the probability to store a qubit and herald success conditioned on having a single photon at the QM's input. Note that η_w also includes the chance of failure of our verification process. For indirectly heralding memories, we introduce an entangling efficiency, η_ent, which is the probability of success for entangling a photon with our QM.

• We denote the QM's reading efficiency by η_r. That is the probability to retrieve a single photon out of the QM conditioned on a successful loading in the past. The reading efficiency is expected to decay over a time period t as η_r(t) = η_r0 exp(−t/T_1), where T_1 is the memory amplitude decay time and η_r0 is the reading efficiency right after loading. In our example of a double-Λ-level memory in figure 1(d), such a decay corresponds to the transition from one of the metastable states |s_H⟩ or |s_V⟩ to the ground state |g⟩, in which case no photon will be retrieved from the memory.
• We denote the QM's writing time by τ_w. For directly heralding memories, it is the time difference between the arrival of a pulse (beginning of the pulse) at the QM and the declaration of a successful/unsuccessful loading. This is practically the fastest repeat period at which one can run our protocol. In the case of indirectly heralding memories, τ_w includes the time for the entangling process as well as that of the side BSM operation. Accounting for such timing parameters is essential to a fair comparison between memory-assisted and no-memory QKD systems. One must note that in a practical setup there will be time periods, e.g., for synchronization purposes or memory refreshing, over which no raw key is exchanged. The total number of key bits exchanged over a period of time must therefore exclude such periods when the total key generation rate is calculated. In our work, we neglect all these overhead times, with the understanding that one can easily modify our final result by considering the percentage of time spent on such processes within a specific practical setup.

• We denote the QM's reading time by τ_r. It is the time difference between the moment the retrieval process is applied and the moment a pulse (end of the pulse) is out.

• We denote the QM's coherence (dephasing) time by T_2. For an initial state ρ(0) of the QM at time zero, its state at a later time t is given by [31]

ρ(t) = p(t) ρ(0) + [1 − p(t)] σ_Z ρ(0) σ_Z,

where p(t) = [1 + exp(−t/T_2)]/2 and σ_Z is the Pauli Z operator on the memory qubit. Note that dephasing only affects superpositions of Z eigenstates, e.g., the eigenstates of X. The above model of decoherence is expected to have more relevance in some practical cases of interest [34,40,31] than the model used in [17], in which the memory state switches suddenly from an intact one to a fully randomized version after a certain time. We discuss the implications of each model in our numerical results section. It is, however, beyond the scope of this paper to fully model every possible decoherence mechanism in QMs. Specific adjustments are needed if one uses a memory that is not properly modelled by our T_1 and T_2 time constants.

BSM module. Figure 2 shows the schematic of the BSM module used in our analysis. This module enables an incomplete BSM over photonic states. In order to use this module in our scheme, we first need to read out the QMs and convert their qubit states into polarization-encoded photons. The BSM is then successful if exactly two detectors click, one H-labelled and one V-labelled. Depending on which detectors have clicked and what basis is in use, Alice and Bob can identify what bits they ideally share [28].

We assume the BSM module is symmetric. We lump detector quantum efficiencies with other possible sources of loss in the BSM module and denote the total by η_d for each detector. We also assume that each detector has a dark count rate of γ_dc, which results in a probability p_dc = γ_dc τ_p of having a dark count per pulse. The implicit assumption here is that the retrieved and the written photons have the same pulse width. Finally, we assume that there is no additional misalignment error in the BSM module.
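As a quick numerical sanity check on the memory models above, the sketch below (our own illustration, not from the paper) applies the dephasing map ρ(t) = p(t)ρ(0) + [1 − p(t)] σ_Z ρ(0) σ_Z to the X eigenstate |+⟩ and verifies that the coherence decays as exp(−t/T_2)/2 while Z eigenstates are untouched; it also includes the amplitude decay η_r(t) of the reading efficiency.

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def dephase(rho, t, T2):
    """Dephasing channel rho(t) = p rho + (1 - p) Z rho Z,
    with p(t) = (1 + exp(-t/T2)) / 2."""
    p = 0.5 * (1.0 + np.exp(-t / T2))
    return p * rho + (1.0 - p) * (Z @ rho @ Z)

def read_efficiency(t, eta_r0, T1):
    """Amplitude decay of the reading efficiency: eta_r0 * exp(-t/T1)."""
    return eta_r0 * np.exp(-t / T1)

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # X eigenstate (|H> + |V>)/sqrt(2)
rho_plus = np.outer(plus, plus)
rho_H = np.diag([1.0, 0.0])                # Z eigenstate

T2 = 1.0
for t in (0.0, 1.0, 5.0):
    out = dephase(rho_plus, t, T2)
    # the off-diagonal element decays as exp(-t/T2)/2; populations stay put
    print(t, out[0, 1], np.exp(-t / T2) / 2)

# Z eigenstates are unaffected by pure dephasing:
assert np.allclose(dephase(rho_H, 3.0, T2), rho_H)
```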
Key Rate Analysis

In this section, we find the secret key generation rate for our proposed schemes in figures 1(a) and 1(b). The common assumption in predicting the relevant observed parameters of a QKD experiment is that we work under the normal mode of operation, where no eavesdropper is present, and we are only affected by the imperfections of the system, behind which an eavesdropper can in principle hide. We later compare our results with two conventional QKD schemes that use no memories, namely, BB84, summarized in Appendix A, and the original MDI-QKD of figure 1(c), summarized in Appendix B. In all cases, we consider both single-photon and decoy-state sources. In all forthcoming sections, f denotes the inefficiency of the error correction scheme, i.e., the ratio between the actual cost of error correction and its minimum value obtained by Shannon's theorem, assumed to be constant, and we denote the binary entropy function by h(p) = −p log_2(p) − (1 − p) log_2(1 − p), for 0 ≤ p ≤ 1.

Key rate for single-photon sources. With ideal single-photon sources, the secret key generation rate in the setups of figures 1(a) and 1(b) is lower bounded by [63]

R_QM ≥ Y^QM_11 [1 − h(e^QM_11;X) − f h(e^QM_11;Z)],    (3.1)

where efficient BB84 encoding is employed [47]. In the above equation, e^QM_11;X and e^QM_11;Z, respectively, represent the quantum bit error rate (QBER) between Alice and Bob in the X and Z basis, when single photons are used, and Y^QM_11 represents the probability that both memories are loaded with single photons of the same basis and the middle BSM is successful.

To obtain the individual terms in equation (3.1), we can decompose the protocol into two parts: the memory loading step and the measurement step, once both memories are loaded. The first step is a probabilistic problem with two geometric random variables, N_A and N_B, corresponding, respectively, to the number of attempts until we load Alice's and Bob's memories with single photons. The number of rounds it takes to load both memories is then max{N_A, N_B}.

Once both memories are loaded, the rest of the protocol is similar to that of the original MDI-QKD in terms of rate analysis: the QMs replace the sources in figure 1(c), and the total transmission-detection efficiency is replaced by the reading-measurement efficiency in the BSM module. We can therefore use many of the relationships obtained for the original MDI-QKD, summarized in Appendix B, for the memory-assisted versions of figure 1.
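The lower bound of equation (3.1) is straightforward to evaluate once the yield and error terms are known. The sketch below is an illustrative helper of our own; the sample numbers at the end are placeholders to exercise the formula, not results from the paper.

```python
import math

def h(p: float) -> float:
    """Binary entropy h(p) = -p log2 p - (1 - p) log2 (1 - p)."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def key_rate_lower_bound(Y11: float, eX: float, eZ: float,
                         f: float = 1.16) -> float:
    """R_QM >= Y11 * [1 - h(eX) - f * h(eZ)], as in equation (3.1),
    with f the error-correction inefficiency."""
    return Y11 * (1.0 - h(eX) - f * h(eZ))

# Placeholder numbers, purely to exercise the bound:
print(key_rate_lower_bound(Y11=1e-3, eX=0.02, eZ=0.01))
```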
For finite values of T_1, the reading efficiency of Alice's QM could differ from that of Bob's. In fact, we can assume that, once both memories are loaded, one of the memories (the late one) will be read immediately, while the other (the early one) will be read |N_A − N_B| rounds after its successful loading. The effective measurement efficiency for leg K, K = A, B, corresponding to the path originating from memory K in the BSM module, is then given by η_mK = η_d η_r(t_K), where t_K is the time memory K has held its state by the time the middle BSM takes place (zero for the late memory and |N_A − N_B|T for the early one).

With the above setting, and considering the required time for reading from the QMs, we obtain

Y^QM_11 = Y_11/(N_L + N_r),    (3.3)

where Y_11 is the corresponding yield term for the MDI-QKD protocol, given by equation (B.4), evaluated at the reading-measurement efficiencies above, and N_L = E{max(N_A, N_B)} is given by equation (C.3). Here, E{·} represents the expectation value operator with respect to N_A and N_B, and η′_m = η_d η_r, where η_r = η_r0 E{exp(−|N_A − N_B|T/T_1)} can be obtained from equation (C.4). In equation (3.3), N_r represents the extra rounds lost due to the nonzero reading times of the QMs, once they are both loaded; it is roughly the reading time measured in repetition periods, τ_r/T, where T = 1/R_S is the repetition period. The condition τ_w ≤ T is a matter of practicality, as sending photons faster than they can be stored is of no benefit. The fastest possible rate is then obtained at T = τ_w.

In the case of directly heralding memories of figure 1(a), the probabilities η_A and η_B of successful loading of Alice's and Bob's QMs with single-photon sources (or background noise) are, to a good approximation, η_K ≈ η_w η_ch(L_K), for K = A, B, up to background contributions. In the case of indirectly heralding memories of figure 1(b), following our discussion in Section 2 about the equivalence of each leg of figure 1(b) to an asymmetric MDI-QKD system, the loading probabilities are given by the success probabilities of the side BSMs, where the relevant terms must be calculated at an effective dark count rate of γ_dc + γ_BG η_d/2. We remark that, although obtained by different methods, the analysis in [17] also finds similar expressions for the yield term. In [17], the analysis is only concerned with the symmetric setup, and some of the parameters considered in our work take their ideal values. It can be verified, however, that in the special case of τ_w = T, τ_r = 0, γ_BG = 0, L_A = L_B, η_w = 1, and T_1 → ∞, for directly heralding memories, equation (3.3) reduces to the same result obtained in [17]. By accounting for additional relevant parameters, our analysis offers a better match to realistic experimental scenarios.

Similarly, the error terms follow from e_11;Z and e_11;X, the corresponding error terms for the original MDI-QKD given by equation (B.4). In addition to the typical sources of error, such as loss and dark counts, these expressions are functions of misalignment parameters. This misalignment could be a statistical error in the polarization stability of our setup, modelled by e_dA and e_dB, or an effective misalignment because of memory dephasing [64] and/or background photons. Putting all these effects together, as we have done in Appendix D, we obtain error rates in which e^A_dS and e^B_dS, respectively, represent the misalignment probabilities for Alice's and Bob's memories, for basis S = X, Z, at loading probabilities η_A and η_B; they are given by equations (D.2) and (D.5). The resulting expression accounts for the fact that if the states of both memories are flipped, Alice and Bob will still share identical key bits. We assume that the BSM module is balanced and does not have any setup misalignment.
Note that in equation (3.8), because there are no dephasing errors for the Z eigenstates, e^QM_dZ is independent of N_A and N_B, whereas e^QM_dX is a function of them. The approximation in equation (3.7) assumes E{e^QM_dX η_mA η_mB} ≈ E{e^QM_dX} E{η_mA η_mB}, which is valid when T_1 ≫ T_2, to give a more readable final result. Equation (3.8) can also be used in the case of indirectly heralding QMs, as explained in Appendix D. The main idea is to use the analogy of each leg in figure 1(b) with the original MDI-QKD in figure 1(c).

Key rate for decoy states. Suppose Alice and Bob use a decoy-state scheme with average photon numbers µ and ν, respectively, for the two main signal intensities, and infinitely many auxiliary decoy states. The secret key generation rate, in the limit of an infinitely long key, is then given by

R_QM ≥ Q^QM_11;Z [1 − h(e^QM_11;X)] − f Q^QM_µν;Z h(E^QM_µν;Z),    (3.9)

where Q^QM_µν;Z is the rate at which both memories are loaded, by Alice (Bob) sending a coherent state in the Z basis with µ (ν) average number of photons, and a successful BSM is achieved. In the case of directly heralding memories, the probabilities for successful loading of Alice's and Bob's QMs with coherent-state sources take the place of η_A and η_B above. Similarly, E^QM_µν;Z is the QBER in the Z basis, and Q^QM_11;Z is the contribution of single-photon states in the gain term of equation (3.10). Similar to the treatment in the previous subsection, one can find or approximate the above terms in the case of indirectly heralding memories as well. For the sake of brevity, we leave this extension to the reader.

Apart from all the additional parameters considered in our model as compared to [17], our treatment of the decoy-state QKD differs from that of [17] in the way the QMs are modelled. In our work, we assume QMs store qubits, which, while not necessarily an exact model, often serves as a good first-order approximation to reality. In [17], however, QMs are assumed to be able to store number states. This assumption seems more restrictive, as many QMs, such as single trapped atoms or ions, can only store one photon.

Storage time. To get some insight into the working of our system, in this section we simulate the achievable rates assuming L_A = L_B = L/2. The average number of trials to load both memories, from equation (C.3), is then given by [65]

N_L = (3 − 2η)/[η(2 − η)],

where η is the probability of successfully loading a QM at distance L/2, approximately given by η ≈ η_QM exp(−(L/2)/L_att), where η_QM = η_w for directly heralding memories, and η_QM = η_ent η_d² for indirectly heralding QMs. Similarly, the average required storage time, from equation (C.5), is given by

T_st = 2(1 − η)T/[η(2 − η)] ≈ T/η for small η,

which is similar to the result reported in [17].

The secret key generation rate in equations (3.1) and (3.9) is proportional to the pulse generation rate R_S = 1/T at the encoder. To maximize R_S, we choose T = τ_w throughout this section and the next, resulting in T_st ≈ τ_w/η.

Figure 3 compares T_st with the required storage time in multi-memory probabilistic quantum repeaters [31], L/c, where c is the speed of light in the channel. It can be seen that our scheme requires lower coherence times up to a certain distance. With fast memories of shorter than 10 ns of access time, this crossover distance can exceed 500 km. With such memories, the required coherence time at 300 km is roughly 1 µs, or lower, as compared to over 1 ms for probabilistic quantum repeaters.
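The expectations over the geometric loading variables N_A and N_B, and the resulting storage-time requirement, are easy to check numerically. The sketch below (our own illustration; the parameter values are the nominal ones quoted above) estimates N_L = E{max(N_A, N_B)} and the decay factor E{exp(−|N_A − N_B|T/T_1)} by Monte Carlo, then evaluates T_st ≈ τ_w/η against the repeater benchmark L/c to locate the crossover distance of figure 3.

```python
import math
import numpy as np

L_ATT_KM = 21.7      # attenuation length for 0.2 dB/km fibre
C_KM_S = 2e5         # speed of light in fibre, 2e8 m/s = 2e5 km/s

rng = np.random.default_rng(seed=1)

def loading_stats(eta, T, T1, n=1_000_000):
    """Monte Carlo estimates of N_L = E{max(N_A, N_B)} and of the
    reading-efficiency factor E{exp(-|N_A - N_B| T / T1)} for two
    geometric loading variables with success probability eta."""
    N_A = rng.geometric(eta, size=n)
    N_B = rng.geometric(eta, size=n)
    return (np.mean(np.maximum(N_A, N_B)),
            np.mean(np.exp(-np.abs(N_A - N_B) * T / T1)))

eta = 0.01
N_L, decay = loading_stats(eta, T=1e-9, T1=4e-6)
print(N_L, (3 - 2 * eta) / (eta * (2 - eta)))  # Monte Carlo vs closed form
print(decay)                                    # factor entering eta_r

def storage_time(L_km, tau_w, eta_qm=1.0):
    """T_st ~ tau_w / eta with eta = eta_QM * exp(-(L/2)/L_att)."""
    return tau_w / (eta_qm * math.exp(-(L_km / 2.0) / L_ATT_KM))

def crossover_km(tau_w, step=1.0):
    """Smallest distance at which T_st exceeds L/c (ideal efficiencies,
    as assumed in figure 3)."""
    L = step
    while storage_time(L, tau_w) < L / C_KM_S:
        L += step
    return L

print(crossover_km(1e-6))   # a bit over 300 km for tau_w = 1 us
print(crossover_km(1e-9))   # roughly 650-700 km for tau_w = 1 ns
```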
It is worth mentioning that the possible advantage of requiring low coherence times is only achievable for systems with nesting level one, i.e., with one stage of entanglement swapping. Unlike quantum repeaters, our protocol, in terms of its timing, is not scalable to higher nesting levels. Nevertheless, even with only one entanglement-swapping stage, our protocol can outperform conventional QKD schemes in terms of rate-versus-distance behaviour, and, more importantly, this can possibly be achieved with existing technology for quantum memories. We explore this and other aspects of our scheme in the next section.

Figure 3: Average required storage time, T_st, versus distance, in our scheme, for different repetition rates 1/τ_w. As compared to that of a probabilistic quantum repeater, labelled by L/c, where c = 2 × 10^8 m/s is the speed of light in optical fibre, our scheme requires lower coherence times up to a certain distance. The crossover distance at τ_w = 1 µs is over 300 km, and at τ_w = 1 ns it is nearly 700 km. In all curves, η_w = η_d = η_ent = 1 and p_BG = 0.

Numerical Results

In this section, we study the impact of various parameters on the secret key generation rate of our scheme. All results have been obtained assuming the symmetric setup described in Section 3.3, τ_w = T, f = 1.16, c = 2 × 10^8 m/s, and 0.2 dB/km of loss in the channel. We also compare our scheme with the efficient BB84 and MDI-QKD protocols, whose secret key generation rates are summarized in Appendices A and B, respectively.

Coherence time. In this section, we discuss the effects of memory dephasing on the secret key generation rate. As mentioned before, while our scheme in figure 1(a) is particularly resilient to dark count errors, it still suffers from memory errors. Figure 4 shows the secret key generation rate per pulse at different coherence times for the scheme of figure 1(a).

A finite coherence time is the only source of nonideality considered in this figure. Since, in our model, the dephasing process only affects the diagonal basis, e^QM_11;Z = 0 at all distances; hence R_QM ∝ 1 − h(e^QM_11;X) remains always positive. The rate is initially proportional to exp(−(L/2)/L_att), and with low values of e^QM_11;X at short distances, our scheme beats the BB84 case depicted by the dashed line. Note that, because of the partial BSM in figure 2, the initial key rate at L = 0 for our scheme is lower than that of BB84. At large distances, however, the dephasing process becomes significant and results in e^QM_11;X approaching 1/2; see the inset. Subsequently, R_QM decays with a faster slope and at some point drops below what one can achieve with an ideal BB84 system. The window between the two crossing points on each curve is the range where our scheme can, in principle, beat a noise-free BB84 system. This window is larger for QMs with longer coherence times. In [17], the authors look at the minimum required coherence time to achieve nonzero key rates, assuming e^QM_11;X = e^QM_11;Z within their model of decoherence. Although the models used for decoherence in our work and in [17] are different, e^QM_11;X has a similar behaviour in both cases. In our case, however, the transition from 0 to 1/2 is smoother than in [17]. This is expected, as the model in [17] is an abrupt good-bad model for the memory. A consequence of this difference is that the minimum required coherence time is higher in our case, which highlights the importance of the more accurate model we have used for decoherence.
The comparison in figure 4 assumes that the source rate R_S is the same for both the BB84 protocol and our scheme. In our scheme, however, R_S depends on the writing time of the memories. Figure 5 shows the secret key generation rate for the scheme of figure 1(a) at a fixed value of T_2/T = 1000, but for several values of T = τ_w = 1/R_S. The BB84 system is run at a fixed rate of 1 GHz. Again, we assume that the only source of nonideality is memory dephasing. It can be seen that slow memories, with writing times of 100 ns or higher, can hardly compete with an ideal BB84 system. The two orders of magnitude lost because of the lower repetition rate cannot be compensated within the first 300 km. It is still possible to beat the BB84 case at long distances if the memories have higher coherence times.

Realistic examples

It is interesting to see if any of the existing technologies for quantum devices can be employed in our scheme to beat conventional QKD systems. Figure 6 makes such a comparison between BB84, MDI-QKD, and memory-assisted MDI-QKD for particular experimental parameters. We have chosen our QM parameters based on the two lessons learned from figures 4 and 5: the QM needs to have a high bandwidth-storage product (T_2/τ_w), on the order of 1000 or higher, and it also needs to be fast, with writing times on the order of nanoseconds. Both these criteria are met for the QM used in [44], which particularly offers fast reading and writing with 300-ps-long pulses at a storage time of around 4 µs. The employed memory in this experiment is an atomic ensemble, which fits our indirectly heralding scheme of figure 1(b). We should, however, be careful with multiple excitations in this case, which are not considered in our model. We therefore assume that, by driving this memory with short pulses, one can ideally generate the jointly entangled state in equation (2.1) between the memory and a photon [39], where |H⟩_P and |V⟩_P represent horizontally and vertically polarized single photons, and, in this case, |s_H⟩ and |s_V⟩ are, respectively, the symmetric collective excited states corresponding to horizontal and vertical polarizations [15,37]. By keeping the entangling efficiency low, at η_ent = 0.05, we try to keep the effect of multiple excitations in such memories low [35,66,64]; further analysis is, however, required to fully account for such effects [45]. We also assume that T_2 = T_1 and use state-of-the-art single-photon detectors with η_d = 0.93 at γ_dc = 1 count per second and 150 ps of time resolution [67] for all systems.
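To make the interplay between loading probability, repetition rate, and dephasing concrete, here is a minimal sketch of a rate-per-pulse curve of the kind shown in figures 4 and 5. The single-exponential dephasing model e_X = (1 − exp(−T_st/T_2))/2 and the use of 1/N_L as the per-pulse yield are our own assumptions for illustration; the paper's exact expressions (3.7)-(3.8) are more detailed.

```python
import numpy as np

def h2(p):
    """Binary entropy function h(p) in bits."""
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def rate_per_pulse(L_km, T2_over_T, L_att=22.0):
    """Qualitative key rate per pulse with dephasing as the only nonideality.
    Assumed model: yield ~ 1/N_L, storage time measured in units of T, and
    e_X = (1 - exp(-T_st / T2)) / 2 (single-exponential dephasing)."""
    eta = np.exp(-(L_km / 2.0) / L_att)
    N_L = (3.0 - 2.0 * eta) / (eta * (2.0 - eta))          # trials to load both QMs
    Tst_over_T = 2.0 * (1.0 - eta) / (eta * (2.0 - eta))   # waiting of the first-loaded QM
    e_x = 0.5 * (1.0 - np.exp(-Tst_over_T / T2_over_T))
    return (1.0 / N_L) * (1.0 - h2(e_x))

for L in (0, 100, 200, 300, 400):
    print(L, rate_per_pulse(L, T2_over_T=1000))
```

Under these assumptions the curve shows exactly the behaviour described above: an initial decay proportional to exp(−(L/2)/L_att), followed by a sharp collapse once e_X approaches 1/2.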
We consider two sets of parameter values for our employed QM in figure 6. In the first set, corresponding to the curve labelled A on the figure, we use the same numerical values as reported in [44], that is, η_r0 = 0.3, T_1 = 4 µs, and τ_w = τ_r = τ_p = 300 ps. We, however, assume that R_S = 1/τ_w, which is much faster than the repetition rate used in [44]. In the curve labelled B, we improve the performance by assuming η_r0 = 0.73, which is what another group has obtained for a similar type of memory [39], and T_1 = T_2 = 100 µs, which is attainable by improving magnetic shielding [68].

Figure 6: Secret key generation rate for single-photon BB84 (dotted), MDI-QKD (dashed), and our indirectly heralding scheme of figure 1(b) (solid) at practical parameter values. In all curves, η_d = 0.93, γ_dc = 1/s, γ_BG = 0, and e_dA = e_dB = 0.005. For BB84 and MDI-QKD, R_S = 3.3 Gpulse/s, similar to R_S = 1/τ_w in our scheme. For our scheme, we have used some of the experimental parameters reported in [44]. For the curve labelled A, η_ent = 0.05, η_r0 = 0.3, T_1 = T_2 = 4 µs, and τ_w = τ_r = τ_p = 300 ps. It is assumed that there are no multiple excitations in the QMs. For the curve labelled B, everything is the same except that η_r0 = 0.73 and T_1 = T_2 = 100 µs.

It can be seen that, whereas the current QM employed in [44] falls short of beating either of the no-memory systems, our slightly boosted system, in curve B, outperforms both systems beyond roughly 200 km. The cut-off distance in curve B is about 400 km, which is mainly because of memory decoherence, and it can be improved by using memories with longer coherence times. This implies that by slightly improving some experimental parameters, we would be able to employ realistic QMs to improve the performance of practical quantum communication setups. We remark that the example QM chosen in figure 6 is not necessarily the only option, and improved versions of other types of memories can potentially offer the same performance [40,61,69,70,71,72,73].

What we have proposed here is an initial step toward improving the performance of QKD systems by using quantum memories. In particular, we have shown how technologically close we are to beating a direct, no-memory QKD link in terms of the achievable rate at certain long distances. Our scheme is not, however, scalable to arbitrarily long distances. For that matter, full quantum repeaters would eventually be needed. A possible roadmap for the development of such systems would pass through probabilistic, then deterministic, and eventually no-memory versions of quantum repeaters [12,13,14]. It is hard to make a fair comparison between all these and our scheme, as the required resources in each case are different. Some studies have nevertheless compared different repeater schemes under certain assumptions [74,64]. It is only the future, in the end, that proves which system, and at what price, can be implemented over the course of time.
Conclusions

By combining ideas from quantum repeaters and MDI-QKD, we proposed a QKD scheme that relies on quantum memories. While offering the same rate-versus-distance improvement that quantum repeaters promise, the coherence-time requirements for the quantum memories employed in our scheme could be less stringent than those of a general single-mode probabilistic quantum repeater system. That provides a window of opportunity for building realistic QKD systems that beat conventional no-memory QKD schemes by relying only on existing technologies for quantum memories. In our work, we showed how close some experimental setups would be to achieving this objective. Our protocol acts as a middle step on the roadmap to long-distance quantum communication systems.

Appendix A. BB84 key rate analysis

The secret key generation rate for the efficient BB84 protocol is given by equation (A.1) in the single-photon case and by equation (A.2) in the (infinitely many) decoy-state case, where µ is the average number of photons for the dominantly used signal states. In equation (A.1), Y_1 is the yield of single photons, that is, the probability that Bob gets a click on his measurement devices given that Alice has sent exactly one photon; it depends on η = η_ch(L)η_d, and its constituent terms correspond, respectively, to the events that, in the absence of misalignment, result in identical (correct) versus non-identical (erroneous) bits shared by Alice and Bob. The QBER, e_1, which is the same for both bases, is determined by e_0 = 1/2 and by e_d = e_dA + e_dB, the total misalignment probability for the channel. Similarly, equation (A.2) involves the corresponding gain terms [46] and the associated QBER.

Appendix B. MDI-QKD key rate analysis

The secret key generation rate for the MDI-QKD scheme of figure 1(c) is lower bounded by the expressions of [28], in both the single-photon and decoy-state cases, where µ (ν) is the average number of photons for signal states sent by Alice (Bob). Here, Q_11 is the probability of a successful BSM when Alice and Bob, respectively, send pulses with µ and ν average numbers of photons, with e_d being the total misalignment probability. In the scheme of figure 1(c), e_d = e_dA(1 − e_dB) + e_dB(1 − e_dA). The remaining gain and error terms follow from the results obtained in [28].

The loading process in the setups of figures 1(a) and 1(b) is a probabilistic one, with two geometric random variables, N_A and N_B, playing the major role. Suppose the success probability for each loading attempt corresponding to these random variables is, respectively, given by η_A and η_B. Then we obtain the probability distribution for |N_A − N_B| and, using it, the moments needed in equations (3.7) and (3.8). In the case of indirectly heralding QMs, we assume that each erroneous click on the side BSMs effectively results in a flip of the corresponding QM state and can thus also be modelled as misalignment. This assumption is valid at short distances, where the majority of errors are caused by setup misalignment. We then obtain
e^(K)_{dZ} = e_{11;Z}(η_d η_ch(L_K), η_d η_ent, e_dK),  K = A, B,    (D.10)

for indirectly heralding QMs, where e_{11;Z} can be calculated from equation (B.4) at an equivalent dark count rate of γ_dc + η_d γ_BG/2. At long distances, most errors originate from dark counts or background photons, whose effective misalignment effect will approach half of e_{11;Z} in the above equation. As a conservative assumption, we use the expression in equation (D.10) for all distances. All other terms in equations (3.7) and (3.8) can be obtained following the same expressions in equations (D.5)-(D.9) at β_K = 1 − 2e^(K)_{dZ}, for K = A, B, and using equation (D.10) for e^(K)_{dZ}.

Figure 2: Bell-state measurement module for polarization states.

Figure 4: Secret key generation rate per pulse for the heralded scheme of figure 1(a) for different values of T_2/T using single-photon sources. The dashed line represents the ideal efficient BB84 case. Unless explicitly mentioned, all other parameters assume their ideal values: T_1 → ∞, η_w = η_r0 = η_d = 1, γ_BG = γ_dc = 0, e_dA = e_dB = 0, and τ_r = 0.
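The explicit forms of (A.1)-(A.2) did not survive extraction, so the following sketch uses the textbook single-photon yield and QBER expressions for decoy-state BB84. These standard forms are an assumption and may differ in minor conventions from the paper's equations.

```python
import numpy as np

def bb84_single_photon_terms(L_km, eta_d=0.93, Y0=1e-6, e_d=0.01, L_att=22.0):
    """Textbook decoy-state BB84 quantities (assumed standard forms, not
    the paper's exact equations):
    Y0: background/dark-count yield; e_d = e_dA + e_dB: misalignment; e_0 = 1/2.
    eta = eta_ch(L) * eta_d, as defined in Appendix A."""
    eta = eta_d * np.exp(-L_km / L_att)
    Y1 = 1.0 - (1.0 - Y0) * (1.0 - eta)      # yield of single photons
    e1 = (0.5 * Y0 + e_d * eta) / Y1         # QBER, same in both bases
    return Y1, e1

print(bb84_single_photon_terms(100.0))
```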
10,666.6
2013-09-13T00:00:00.000
[ "Physics" ]
Characterization of aluminium AC4B/Nano TiC composite with the variation of volume fraction of nano TiC reinforced through stir casting process

The characterization of aluminium AC4B/nano TiC composite with variation of the volume fraction of nano TiC reinforcement produced through the stir casting process has been studied. The aluminium matrix used was aluminium alloy AC4B, which contains silicon and copper as its main alloying elements. The addition of nano TiC to the AC4B composite can increase the tensile strength, ductility, and toughness of the composite by refining the dendrite structure of the α-Al phase and forming a supersaturated solid phase, θ (Al2Cu). In this study, AC4B/nano TiC composites were made through stir casting with nano TiC reinforcement compositions of 0.25%, 0.3%, 0.35%, 0.4%, and 0.5% volume fraction to determine the optimum value of the mechanical properties of the AC4B/nano TiC composite. The stir casting process was chosen because it has several advantages: it is easy to use, flexible, and suitable for producing large numbers of products. The AC4B/nano TiC composite was found to reach its optimum mechanical properties at a nano TiC composition of 0.3% volume fraction, with an ultimate tensile strength of 132.31 MPa and a hardness of 55.18 HRB.

Introduction

Current technological developments are very rapid because of the need for new technologies that can meet daily human needs. This rapid technological development opens opportunities for materials whose mechanical properties can keep pace with the sophistication of the technology being made. One class of materials currently being developed is composite materials. Composites are a promising solution because a composite is made from two different materials with different mechanical properties that, when combined, improve the strength of the composite itself. Aluminium composites are among the composite types with the greatest potential for use as materials for future technologies, because aluminium is characterized by low density, good strength, good wear and corrosion resistance, a low melting point, and resilience. In this study, the composite material used was an aluminium AC4B matrix composite with nano TiC as reinforcement. The aluminium AC4B matrix composites were made through the stir casting process with nano TiC reinforcement compositions of 0.25%, 0.3%, 0.35%, 0.4%, and 0.5% volume fraction as the process variable, with the aim of finding the optimal nano TiC addition for the strength of the aluminium matrix composite. The addition of Al-5Ti-1B aims to minimize the formation of dendritic grains by triggering equiaxed fine grain growth [1]. The addition of Al-15Sr aims to change the morphology of the eutectic silicon, originally in the form of coarse needles, into evenly dispersed fibres [2]. Besides, the addition of Al-15Sr can also increase the flowability of AC4B alloys [3]. Meanwhile, the addition of magnesium (Mg) aims to increase the wettability of the aluminium matrix so that the reinforcing particles are evenly distributed and their agglomeration is prevented [4].

Methods

The study began with the fabrication of aluminium AC4B/nano TiC composite samples. The AC4B alloy was melted at 850 °C in a tilting furnace.
The nano TiC reinforcement, which had been pre-heated at 900 °C in a muffle furnace for an hour and purified of impurities using an ultrasonic vibrator, was mixed into the molten AC4B alloy. The melt was then stirred for 40 seconds and degassed with argon gas for 2 minutes. Next, 0.15 wt.% TiB, 0.04 wt.% Sr, and 5 wt.% Mg were added to the molten AC4B alloy, which was again stirred for 40 seconds and degassed with argon gas for 2 minutes. The molten alloy was then poured into a mould that had been pre-heated at 850 °C in a muffle furnace for 7 minutes and was subsequently air-cooled. Aluminium AC4B/nano TiC composites were made with variations of 0.25%, 0.3%, 0.35%, 0.4%, and 0.5% Vf. The composition of the pure AC4B aluminium can be seen in Table 1. Destructive testing and identification of elements and compounds were then carried out to determine the characteristics of the composite samples. The aluminium AC4B/nano TiC samples were tested mechanically: a tensile test using a Go Tech 27-7000 LA 10 machine according to ASTM E8, a hardness test using the Rockwell B method with a 1/16-inch steel-ball indenter according to ASTM E18, and a Charpy impact test according to ASTM E23. Density testing was carried out to determine the effect of the nano TiC addition on the composite density using Archimedes' principle, after which the porosity of the samples was calculated. In addition, the samples were characterized by metallographic testing: optical microscopy (OM) to observe the microstructure according to ASTM E3-11 using Keller's etchant, SEM and EDS to identify the phases and surface structure of the AC4B/nano TiC composites, OES to determine the chemical composition, and XRD to determine the compounds formed.

Results and discussion

The magnesium addition aims to increase the wettability of aluminium so that it can wet the nano TiC reinforcement. Increasing the magnesium content can affect the formation of several phases that influence the mechanical properties of the aluminium AC4B/nano TiC composite, such as the formation of the Mg2Si phase owing to the change of the main solidification reaction from Al-Si-Cu to Al-Si-Mg.

Chemical composition of AC4B/nano TiC composite

This change of the main solidification reaction can occur because the dominant alloying element changes from copper (Cu), which initially has a content of about 2-4%, to magnesium, which has a much larger content of about 4-7% after the manufacturing process. Meanwhile, the increase in the iron (Fe) content is due to iron dissolving into the molten AC4B alloy from the steel stirrer during the fabrication of the aluminium AC4B/nano TiC composite. An iron content that is too high can reduce the ultimate tensile strength (UTS) and toughness of the AC4B/nano TiC composite by forming iron intermetallic phases [5]. Meanwhile, the Al3Ti phase can form owing to the addition of the nano TiC reinforcement and the TiB grain-refining agent; the Al3Ti phase formed can act as a nucleating agent for α-Al, so that the α-Al phase, which initially has a dendritic form, changes to an equiaxed form.
Meanwhile, the Al3Fe, β-Al5FeSi, and π-Al9FeMg3Si5 phases form owing to the high magnesium and iron contents of the AC4B/nano TiC aluminium composite.

Microstructure of AC4B/nano TiC composite

The Mg2Si phase can form because of the high magnesium content, which is 5.23 wt.%. This increase results from the magnesium addition of 5 wt.%, which aims to increase the wettability of aluminium so that it can wet the reinforcing particles. The magnesium content of 5.23 wt.% exceeds the copper content of only 1.997 wt.%, causing the main solidification reaction to change from Al-Si-Cu to Al-Si-Mg. Therefore, the Mg2Si phase becomes more dominant than the Al2Cu phase. However, microstructural observations show that the AC4B/nano TiC composite microstructure still contains an Al2Cu phase, even though the main solidification reaction changed from Al-Si-Cu to Al-Si-Mg. This Al2Cu phase can form when a rapid cooling process is applied, which causes the solidification path to touch the eutectic line, so that the temperature decrease during solidification allows the formation of the Al2Cu phase, albeit only in small amounts. The iron intermetallic phases, in the form of the β-Al5FeSi, Al3Fe, and π-Al9FeMg3Si5 phases, can form under the influence of two factors, namely the high iron and magnesium contents, 4.48 wt.% and 5.23 wt.%, combined with a low solidification rate. The β-Al5FeSi phase can form through the reaction L + Al3Fe → Al + Al8Fe2Si + L → Al + Al5FeSi. From this reaction it can be seen that the reaction between the liquid and the Al3Fe intermetallic phase at a high solidification rate forms the Al8Fe2Si phase; while solidification continues, the Al8Fe2Si phase reacts with the liquid to form the β-Al5FeSi phase [6]. The formation of the β-Al5FeSi phase is influenced by the iron and magnesium contents: an iron content above 2.4 wt.% and a magnesium content below 6 wt.% at a low solidification rate can form the needle-like β-Al5FeSi phase, which gives this phase a relatively high strength. In addition, the Al3Fe intermetallic phase generally has a needle-shaped microstructure, as shown in figure 5(a). In this study, however, the microstructure of the Al3Fe intermetallic phase is irregular, as shown in figure 5(b). This microstructural transformation of the Al3Fe intermetallic phase is due to the high magnesium content of the aluminium AC4B/nano TiC composite; a magnesium content above 1.5 wt.% is able to change the Al3Fe microstructure from needle-shaped to irregular. Meanwhile, the Al3Ti phase can form owing to the addition of the TiB grain-refiner agent. The Al3Ti phase formed acts as a nucleation agent for α-Al, so the α-Al phase, which initially has a dendritic form, turns into an equiaxed form that can increase the mechanical strength of the aluminium AC4B/nano TiC composite.
Based on the results of the mechanical tests, it can be seen that the addition of the nano TiC reinforcement can increase the mechanical strength of the aluminium AC4B/nano TiC composite by inhibiting dislocation movement and distributing stress evenly in the composite matrix through three mechanisms: the Orowan strengthening mechanism, the grain boundary strengthening mechanism through grain refinement of the α-Al phase, and the precipitation hardening mechanism through the formation of the supersaturated solid θ (Al2Cu) [7]. This is shown in the tensile test results in figure 6(a) and the hardness test results in figure 6(c), in which a nano TiC reinforcement content of 0.3% Vf produced the highest ultimate tensile strength (UTS) and hardness values, namely 132.31 MPa and 55.18 HRB. The grain boundary strengthening mechanism occurs when the nano TiC reinforcement acts as a nucleating agent that refines the dendrite structure of the α-Al phase into a finer (equiaxed) structure. The refined dendrite structure of the α-Al phase can increase the resistance to crack propagation of the aluminium AC4B/nano TiC composite. Meanwhile, the precipitation hardening mechanism occurs when the nano TiC reinforcement promotes the formation of the supersaturated solid θ (Al2Cu) phase; the supersaturated solid (Al2Cu phase) formed results in lattice distortion and internal stress, which can inhibit dislocation movement [8].

Results of mechanical tests of the AC4B/nano TiC composite

In figure 6(c), the hardness of the aluminium AC4B/nano TiC composite for every variation of the nano TiC volume fraction is higher than the hardness of AC4B without reinforcement. The higher hardness is obtained because of the addition of reinforcing particles: when they are evenly distributed in the aluminium matrix, the stress is distributed evenly, so that dislocation movement is not concentrated in just one area. In this study, the increase in hardness was attributed to the formation of the θ (Al2Cu) phase and the Mg2Si phase. The addition of the nano TiC reinforcement is able to initiate the formation of the θ (Al2Cu) phase, a supersaturated solid solution formed through the precipitation hardening mechanism. The supersaturated solid (Al2Cu phase) produces lattice distortion, so it can increase the hardness of the aluminium AC4B/nano TiC composite. In addition, the formation of the Mg2Si phase can also increase the hardness of the aluminium AC4B/nano TiC composite; the Mg2Si phase forms owing to the addition of magnesium and of a modifying agent in the form of strontium. Strontium reacts with aluminium to form Al4Sr, and Al4Sr then acts as a nucleating agent for primary Mg2Si, so that Mg2Si formation increases the hardness of the aluminium AC4B/nano TiC composite in line with the strontium addition. In other words, as more of the Mg2Si phase forms, the hardness of the AC4B/nano TiC aluminium composite tends to increase. However, for the 0.35%, 0.4%, and 0.5% Vf variables, the mechanical strength decreases as the nano TiC percentage increases. This is caused by the uneven dispersion of the nano TiC reinforcing particles and the larger amount of porosity accompanying the increase in the nano TiC volume fraction, which can be seen in figure 6(f).
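The density/porosity bookkeeping behind figures 6(e) and 6(f) can be sketched as follows. The rule-of-mixtures theoretical density and the Archimedes-based porosity definition are standard; the matrix and TiC densities and the measured value below are illustrative assumptions, not data from this study.

```python
RHO_AC4B = 2.70   # g/cm^3, typical Al-Si-Cu casting alloy (assumed)
RHO_TIC = 4.93    # g/cm^3, titanium carbide (assumed)

def theoretical_density(vf_tic):
    """Rule of mixtures for a composite with TiC volume fraction vf_tic."""
    return (1.0 - vf_tic) * RHO_AC4B + vf_tic * RHO_TIC

def porosity_percent(rho_measured, vf_tic):
    """Porosity from the gap between Archimedes and theoretical density."""
    rho_th = theoretical_density(vf_tic)
    return 100.0 * (rho_th - rho_measured) / rho_th

# e.g., a 0.3 %Vf sample measured at 2.65 g/cm^3 (hypothetical value)
print(porosity_percent(2.65, 0.003))   # ≈ 2.1 % porosity
```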
In addition, figure 6(b) shows that a larger volume fraction of the nano TiC reinforcement reduces the elongation of the aluminium AC4B/nano TiC composites, which is due to their increased brittleness. The increased brittleness of the aluminium AC4B composite was caused by the formation of the Mg2Si phase and the intermetallic β-Al5FeSi and π-Al9FeMg3Si5 phases; these phases are brittle and therefore reduce the elongation of the aluminium AC4B/nano TiC composites [9]. In addition, the increased brittleness of the aluminium AC4B/nano TiC composite makes it more difficult for the composite to absorb energy from impact loads, as can be seen in figure 6(d), which shows that the impact values of the aluminium AC4B/nano TiC composites tend to decrease as the amount of nano TiC reinforcement increases. In figure 6(e), it can be seen that the actual density values tend to be lower than the theoretical density for each nano TiC variation. This is influenced by the presence of Mg2Si and porosity in the aluminium AC4B/nano TiC composites. Mg2Si can form in the composites owing to the 5 wt.% magnesium addition during fabrication, which is used to increase the wettability of aluminium so that a good interface forms with the nano TiC reinforcement; consequently, more of the Mg2Si phase forms, which can affect the density of the composites. As for the porosity, the larger the porosity of the aluminium AC4B/nano TiC composites, the lower their density tends to be. This is because porosity is gas/air trapped inside a solid, and material containing voids has a lower density than void-free material, so it tends to be lighter [10]. Based on the porosity test results in figure 6(f), it can be seen that the decrease in the mechanical strength of the aluminium AC4B/nano TiC composite is due to the increasingly large porosity [11]. In this study, porosity can form because of a non-optimal stirring process, shrinkage during solidification, and hydrogen evolution [12]. The evolution of hydrogen gas can occur because of the high pouring temperature; the safe temperature limit for preventing the formation of hydrogen gas is 675-700 °C, whereas in this study the pouring temperature was around 800-850 °C. This was needed to prevent premature cooling during the stirring of the reinforcement and the pouring into the mould. The presence of hydrogen gas in the molten AC4B alloy reduces the tensile strength of the aluminium AC4B/nano TiC composite because the material tends to be brittle [12].

Conclusion

1) The aluminium AC4B/nano TiC composite with variations of the volume fraction of nano TiC reinforcement has better mechanical properties than the AC4B alloy without nano TiC reinforcing particles.
2) In addition, the addition of a nano TiC reinforcement can increase the mechanical strength of the aluminium composites through three mechanisms, namely the Orowan strengthening mechanism, the grain boundary strengthening mechanism, and the precipitation hardening mechanism.
3,875.2
2020-12-22T00:00:00.000
[ "Materials Science" ]
Target highlights in CASP14: Analysis of models by structure providers

Abstract

The biological and functional significance of selected Critical Assessment of Techniques for Protein Structure Prediction 14 (CASP14) targets is described by the authors of the structures. The authors highlight the most relevant features of the target proteins and discuss how well these features were reproduced in the respective submitted predictions. The overall ability to predict three-dimensional structures of proteins has improved remarkably in CASP14, and many difficult targets were modeled with impressive accuracy. For the first time in the history of CASP, the experimentalists not only highlighted that computational models can accurately reproduce the most critical structural features observed in their targets, but also envisaged that models could serve as guidance for further studies of biologically relevant properties of proteins.

The structure authors comment on the accuracy of the best models submitted for 12 CASP14 targets (Table 1). All target providers were invited to contribute to the paper, with the exception of five targets whose structures were solved by using CASP models, described separately in this issue. 7 The resulting targets presented here include: the neutralizing monoclonal antibody 93k bound to the varicella-zoster virus fusogen glycoprotein B (H1036 and T1036), the bacteriophage T5 tail tip complex (H1060 and T1061), and the polymorphic CDI toxin-immunity protein complex from Serratia marcescens (H1065, …).

Members of the Herpesviridae are pathogens of humans and animals that cause a wide range of medically and economically important diseases. 8 The outer lipid membrane of herpesvirus virions is studded with glycoproteins that enable binding to cell membranes and fusion of the virus envelope to initiate entry and establish infection. Herpesvirus orthologs of glycoprotein B (gB) are trimeric proteins that have been classified as type III fusogens due to their structural similarities with the vesicular stomatitis virus G protein and baculovirus gp64. [9-15] The ectodomain architecture of gB orthologs consists of five structurally distinct domains (DI to DV) that fold into a homotrimer with C3 symmetry. Varicella-zoster virus (VZV) is an alphaherpesvirus that causes chickenpox (varicella) upon primary infection. 16 VZV establishes latency in sensory ganglion neurons, and subsequent reactivation manifests as shingles (zoster). In addition to virion entry fusion, the characteristic polykaryocyte formation caused by cell-cell fusion within tissues in vivo is essential for VZV pathogenesis. This process can be modeled in vitro via syncytia formation of VZV-infected cells in culture. 17,18 Critically, there are adverse health effects directly linked to cell fusion between differentiated host cells; fusion between ganglion neurons and satellite cells has been associated with postherpetic neuralgia, and strokes have been linked to vascular endothelial cell fusion. [19-21] The functional domains of herpesvirus gB orthologs have been characterized using monoclonal antibodies (mAbs) that neutralize viral infection via binding to gB before membrane fusion. 11,[22-31] Although the molecular interactions of some of these antibodies with gB residues have been defined previously, it was unknown whether these gB residues were involved in fusion function or virus infection. 11,28 A newly derived human mAb, 93k, neutralized VZV by binding to gB and inhibiting membrane fusion.
32 To elucidate gB domain function and its role in VZV infection, a 2.8 Å resolution cryo-EM structure of native, full-length VZV gB in complex with mAb 93k Fab fragments was determined. 32 This near-atomic resolution structure revealed residues within gB DIV that were then shown to be essential for membrane fusion by evaluating DIV mutants in a virus-free assay. The 93k heavy-chain CDR3 (VHCDR3) interacted with gB residues R592 and I594 of β23, and V617 and L619 of β25 (Figure 1C; see supplemental movie 3 in Oliver et al. 32). The aromatic ring of VHCDR3 Y113 formed a cation-π interaction with gB R592, which was inserted into a negatively charged pocket within the 93k antigen-binding site. In addition, the OH group of VHCDR3 Y113 and the side chain of N111 formed H-bonds with the carbonyl oxygen and backbone nitrogen of gB I593 and L595, respectively (Figure 1C). At the boundary of the gB β23-93k interface, the carbonyl oxygens of VHCDR3 P103 and G104 H-bonded with the side chain of gB Q596 and the backbone nitrogen of N597, respectively, while the backbone nitrogen of VHCDR3 A106 H-bonded with the gB L595 carbonyl oxygen (Figure 1C). The gB-93k interface made a sharp turn where hydrophobic and Van der Waals contacts dominated the 93k interaction with gB β28-30. The H-bond between VHCDR3 T108 OG1 and gB E670 OE1 was surrounded by hydrophobic interactions between residues P107, P109, and L110 of VHCDR3 and W32 of the variable light chain CDR1 (VLCDR1), and gB β28-30 residues F655, H658, V660, and Y667 (Figure 1C). This complex network of hydrophobic and hydrophilic interactions at the gB-93k interface of postfusion gB identified the strongest interactions as those between gB β23 and β30 and the 93k VHCDR3. Importantly, because mAb 93k has neutralizing activity through fusion inhibition, 32 residues within gB DIV β23 and β30 were implicated in a functional role in membrane fusion. Indeed, alanine substitutions of two or more residues within β23 and β30 reduced or abolished fusion and limited the capacity of VZV to infect cells, indicating that these residues act together to ensure that the gB structure supports its fusion function.

Using cryo-electron microscopy, we determined the structure of the T5 tail tip before and after interaction with its receptor FhuA. 36 We could solve the structure of two rings of the Tail Tube Protein pb6, prolonged by a ring of p140 surrounded by a dodecamer of p132 that forms the collar, a hexameric ring of pb9, a trimeric ring of pb3, which closes the tube, and a trimer of the C-terminus of the Tape Measure Protein, pb2 (Figure 2). Although the structures of pb9 and pb6 were already available, 37,38 the structures of p140, p132, pb3, and pb2 were unknown. 35 The structure of the whole tail tip before interaction with the receptor was submitted to CASP14. The pb6-p140-p132-pb9 complex was proposed to the competition, as well as the individual rings and individual proteins. Although it has no sequence homology with pb6, p140 shares the same fold, 36 and both form a trimeric ring. This was well predicted, with a best GDT-TS of 83 for the monomer and a QS-score of 0.442 for the trimeric ring. The inner-ring diameter was correctly reproduced in the best-quality model only, while it was predicted to be smaller in all other models (Figure 2B). The structure of the p132 monomer, which belongs to the immunoglobulin superfamily, was very well predicted (best GDT-TS = 95). The dodecameric ring was also well predicted by five groups (QS-scores from 0.442 to 0.228).
The predicted models contained more or less altered subunit interfaces, resulting in slightly smaller rings and/or modified subunit orientations within the ring (Figure 2C). For both p140 and p132, AlphaFold2 is far ahead of the others (by 18 and 24 points on the GDT-TS measure). The pb6 and pb9 rings were also well predicted (Figure 2D). At least the top six groups predicted the correct inner diameter of the tube, even though the orientation of the protein within the ring is not always optimal, owing to modified subunit interactions. An important protein of this assembly is pb3, which closes the tube. This protein is predicted to share structural similarity with the baseplate hub proteins of Myoviridae and related contractile injection bacterial systems. 35 It is, however, a larger protein, with, in addition, two C-terminal fibronectin domains predicted from the sequence. 35 Indeed, the protein is composed of the four canonical "hub domains" (HDs) of phage T4-gp27, 39 with a large insertion in the second one to allow the closure of the tube, and two C-terminal fibronectin domains (Figure 2E1). Only three groups predicted the structure (Figure 2E1). Very interestingly, these predicted structures do not represent pb3 in its closed conformation, in which part of the insertion in HDII is folded back along the inner wall of the tube to provide a plug that closes the tube (orange in Figure 2E2). This plug sequence (45 residues) is instead stretched out downwards as a long beta hairpin in the predicted structures (cyan in Figure 2E2). This is very close to the structure of pb3 after interaction of the tail with its receptor, which induces the opening of the tube (Figure 2E3), and which thus seems to represent a more stable conformation of the protein (unpublished results). When the pb3 trimer is considered, only one group predicted it satisfactorily, and here again in the open conformation (Figure 2E4; the QS-score with the closed pb3 trimer was 0.252). Others, even with similar QS-scores, did not predict the correct monomer structure. The trimeric pb2 C-terminal helical bundle was very well predicted by six groups, with QS-scores ranging from 0.678 to 0.607 (Figure 2F). With regard to the pb6-p140-p132-pb9 complex, four groups predicted the general tube assembly reasonably well (QS-scores of 0.266-0.196), with the correct inner-tube diameter and inter-ring distances. Inter-ring interactions were, however, not optimal, as none predicted the correct register of the different rings (Figure 3). In conclusion, each target (whether monomers, rings, or the full complex) was reasonably well predicted by at least one CASP14 competitor, and very often by several. The best structure predictions for the p132, p140, and pb3 monomers were highly accurate, as were those for the pb2 trimer. In the case of ring assemblies, although some predictions were reasonably close to the targets, it was surprising to observe noticeable variations in ring diameter/orientation, and the structure predictions of the monomers within the rings were not as accurate.

Figure 3: Four best CASP predictions of the phage T5 tail tip complex (H1060v0, pb6-pb6-p140-p132-pb9) aligned on the experimental structure, in which the different proteins are colored as in Figure 2.

We recently determined the high-resolution crystal structure of a novel CDI toxin-immunity protein complex from the nosocomial pathogen S. marcescens BWH57 (Figure 4).
The CdiA-CT BWH57 nuclease domain includes three α-helices and one 3_10 helix, four antiparallel β-strands arranged in a small concave β-sheet, and two β-strands that form a hairpin. The β-sheet and β-hairpin wrap around α4, which serves as the core of this fold. Helix α3 has a significant kink, and helix α1 interacts with the β-hairpin. CdiI BWH57 has a simple α/β fold with two α-helices, three 3_10 helices, and four mixed β-strands arranged in a small β-sheet. The toxin's interaction surface is largely electropositive and is complemented by a negatively charged patch on the immunity protein (Figure 4B). CdiI BWH57 binds to the nuclease domain using the large loop linking β1 to β2 and the three 3_10 helices (Figure 4). These secondary structure elements interact with the exposed β-sheet residues, two loop regions, helix α3, and the C-terminus of the toxin domain. Several CdiI BWH57 residues that interact with the toxin, including K5, D9, Y10, W16, D25, and the C-terminal Y98, are highly conserved across the protein family. Similarly, toxin residues H47, E51, H52, R89, N117, and R119, which interact with the immunity protein, are also highly conserved. A subset of these latter residues (H47, E51, H52, R89) are good candidates to form the nuclease active site, suggesting that CdiI BWH57 binding to the toxin blocks access to its RNA substrates. For the CASP14 competition, CdiA-CT BWH57 and CdiI BWH57 were first modeled as individual monomers, and the top 10 predictive models, as ranked by GDT-TS score, were evaluated.

Previously, we demonstrated that BIL2 operates as a "single-ubiquitin-dispensing platform," allowing the conjugation of ubl4 to different substrates such as ubl5 and the Ras GTPase. 66 Since the splicing reaction is ATP-independent, the presence of the intein allows the host to avoid employing the energy-consuming cascades of enzymes usually dedicated to ubiquitin conjugation. In order to elucidate the molecular mechanism of BUBL protein splicing, we solved the high-resolution crystal structures of BIL2 in both apo and zinc-bound forms. The analysis of the structures revealed that zinc induces a conformational change of H69, which has been suggested to function as a key catalytic residue. 67

BonA is an outer-membrane lipoprotein from the opportunistic pathogen A. baumannii that is important for maintaining the structure and function of the outer membrane. 73 In A. baumannii, the loss of BonA causes the loss of cell motility and a change in the structure of the outer membrane. 73 BonA homologs in other bacterial species (designated YraP or DolP) form part of the cell envelope stress regulon (e.g., the SigmaE regulon in E. coli). 74 These BonA homologs are important for the integrity of the outer membrane and the virulence of bacterial pathogens (e.g., Neisseria gonorrhoeae, Salmonella enterica). [75-77] BonA and its homologs localize to the divisome, the large protein complex that mediates cell division in bacteria. 75,78 As part of the divisome, DolP, the BonA homolog from E. coli, regulates the activity of cell wall remodeling enzymes during cell division. 79 The mechanism by which BonA and its homologs mediate their function remains unknown. BonA is 235 amino acids in length and is composed of two BON (Bacterial OsmY and Nodulation) domains. 73 However, in the crystal structure, BonA-27N formed a dimer (Figure 7A) that has an extensive buried surface area of 3236 Å² according to PISA.
81 In the BonA-27N structure, the C-terminal BON domain (BON2) adopts the canonical α/β-sandwich fold, consisting of two α-helices and three β-strands. However, in the N-terminal BON domain (BON1), α-helix 1 is displaced from the α/β-sandwich by α-helix 1 of BON2 from the opposing dimeric molecule, which forms a hydrophobic interaction that facilitates dimer formation (Figure 7B). I hypothesized that this dimer was a constituent of the BonA decamer and performed additional structural analysis of full-length BonA using small-angle X-ray scattering and negative-stain electron microscopy, revealing that the decamer was pentameric, consisting of five BonA dimers. 73 The sequence corresponding to BonA-27N was submitted as a CASP14 target. A major difference between the model and the experimental data was the orientation of α-helix 1 of BON1, which, rather than being displaced from BON1 as in the experimental structure, adopted a canonical BON domain conformation (Figure 7C). This position of α-helix 1 of BON1 in the model precludes the formation of the dimer observed in the crystal structure and is analogous to BON1 of DolP, which exists as a monomer when purified. 75 Experimental evidence indicates that BonA is stable as a monomer both when purified and in the bacterial cell. 73 To exist in this state, the hydrophobic surface protected by α-helix 1 of BON2 in the dimer would need to be shielded from the solvent (Figure 7D). α-helix 1 of BON1 in the CASP14 models adopts a conformation analogous to that of α-helix 1 of BON2 (Figure 7E).

This is in sharp contrast to JBP1 J-DBD, which binds J-DNA with low nM affinity in vitro and has a remarkable discrimination against normal DNA, which it binds with μM affinity. The low sequence identity between the JBP1 and JBP3 J-DBD domains (16.5%) was enough to establish the homology between them, but not sufficient to understand their difference in J-DNA specificity from sequence conservation alone. Importantly, Asp525, the JBP1 residue that we have previously shown to be crucial for discriminating J-DNA against normal DNA, is conserved, as are Lys522A and Arg532A (but not Lys518 or K524), which are all important for general DNA binding. We therefore decided to determine the structure of the J-DBD domain of JBP3, to understand the structural determinants that confer its limited affinity and specificity toward J-DNA. We were surprised to find that we were unable to determine the structure of the JBP3 J-DBD by molecular replacement. We determined the structure using a massive combination of small fragments and density modification, as implemented in ARCIMBOLDO-LITE. 91 The main difference between the JBP1 and JBP3 J-DBD domain structures is the placement of the N-terminal region and the C-terminal helix (α5) of the helical bouquet fold that we have previously described.

The determined structure, covering 133 of the 134 residues of the mature protein, forms a β-roll-like distorted architecture containing two α-helices and nine β-strands (Figure 10A). The overall β-roll fold part of the structure is formed by two β-sheets, one comprising β-strands 1-4, which is connected to a second sheet, comprising β-strands 5-9, via a disulphide bond formed between residues C31 and C132, which appears to adopt two alternative conformations. Additionally, a disulphide bond C90-C118 links the 19-residue loop between β-strands 6 and 7 with β-strand 8, suggesting that correct positioning of this loop is relevant for Bd0675 function. All cysteine residues are conserved in predatory homologs.
An electrostatic surface potential map shows that Bd0675 possesses a hand-like shape with a potential binding cleft, which is mainly negatively charged (Figure 11A). Even the extended N- and C-terminal regions with irregular secondary structure were predicted accurately, with more than 96% of residues correctly aligned with the experimental structure. The accuracy of the side chain rotamer predictions was also very good, with an RMS_all of 1.7 calculated on all atoms. Although the disulfide-bonded cysteines are placed juxtaposed to each other in the predicted structure, the disulfide linkages themselves were not predicted. Other top-ranked models, from the FEIG-R1 (GR# 314), FEIG-R2 (GR# 480), FEIG-S (GR# 013s), and Seder2020hard (GR# 428) groups, also predicted the protein fold correctly, with GDT scores above 80 (Figure 11A).

Tsp1 forms a dimer, and the dimeric interface is also formed by hydrophobic residues, especially leucine. 106 Based on initial speculations that the hydrophobic residues of the individual helices would interdigitate like the teeth of a zipper, short coiled coils are also often termed leucine zippers, 107 although the eponymous hypothesis was shattered when the first crystal structures showed that the hydrophobic residues do not interdigitate at the interface, being arranged instead like the rungs of a ladder. In recent years, however, we have come across a family of coiled-coil proteins that essentially resembles the initially hypothesized zipper architecture, although with a decisive difference. This family is especially rich in histidines, which are found in a repetitive arrangement, and it is these histidines that interdigitate like the teeth of a zipper between two antiparallel helices of a monomeric α-helical hairpin. 108 As seasoned coiled-coil researchers, we set out to further characterize and delineate this unexpected new coiled-coil flavor. In sequence searches we identified a wide range of such histidine zippers. All of them appeared to form hairpins of different types, which we confirmed with the determination of several crystal structures. Interestingly, many of them turned out to be homo-oligomers, in which a histidine-zipper interface can be found either within the monomers (intra-chain), between the monomers (inter-chain), or both. We expected these to be possibly challenging targets for structure prediction. We can only speculate about the functional role of these proteins, and hypothesize that they might function as scavengers of metal ions. To our surprise, most groups and servers did a very good job of predicting this new variant of the coiled-coil fold. It is likely that several predictors benefitted from the structure of the first representative of this fold, which we had published previously, from the fungus Serendipita indica (PDB: 5LOS). 105 This instance has 23% sequence identity to Tuna, 15% to Nitro, and 19% to Meio. However, it was not identified as a template by the CASP prediction center for any of the three targets, nor did sequence searches retrieve it. The most important feature of all three targets, the correct orientation of the histidines to form the zipping interactions, was generally predicted very well in the top predictions, even in those from the best servers.
According to the CASP14 evaluation formula, which we describe in a separate article in this special issue, 111 ...

In both lineages, HBc consists of a predominantly α-helical, N-terminal assembly domain (Figure 13) that forms the capsid, and an unstructured arginine-rich C-terminal domain (CTD) that projects into the capsid interior and fine-tunes the charge balance with the genome. Only the ordered assembly domain of HBc has been amenable to structure determination, and huHBc has been studied for decades. [115-117] The assembly domain of huHBc forms hammer-shaped dimers that assemble into capsids with protruding spikes, 118 and these spikes contact the envelope in viruses and virus-like particles. 119,120 Each monomer contributes two long helices (α3 and α4), connected by a short loop, to the intra-dimer interface of the spikes (Figure 13A). The inter-dimer contacts are mediated by a hand-like region that follows the helical hairpin in the spike and precedes the CTD. The sequences of the inter-dimer contacts are conserved among Hepadnaviridae, which is not the case for the intra-dimer contacts or for the protruding part of the spikes. In contrast to huHBc, DHBc is much larger, with an extension domain of approximately 40 residues that maps to the loop region of the spikes. To understand the structural importance of this extension domain, we determined the structure of DHBc in capsids by electron cryo-microscopy. 114 As in huHBc, the core of the spike is formed by a four-helix bundle with two helices from each monomer (Figure 13). These helices are longer than in huHBc (Figure 13). In conclusion, many predictions recapitulated key features of the DHBc fold but failed to predict the changes in the oligomerization interfaces that deviate from huHBc.

The X-ray crystal structure of A. pompejana ASCC1 was determined to 1.4 Å resolution, with one molecule in the asymmetric unit (Figure 14A). While previous SAXS studies that directly measure flexibility 148 suggested that, in general, X-ray structures were too rigid, 149 computational predictions were uncovering the greater flexibility of the solution structures. 150,151 Indeed, several repair proteins have been shown to be functionally flexible, 129,152 and our X-ray structure revealed a simple loop connecting the two domains, consistent with substantial flexibility between the two domains. Yet the clear consensus of the highest-ranked prediction models on the relative orientation of the two domains suggests to us that the ASCC1 domains are not flexible relative to each other but are rigidly encoded in the sequence. Perhaps ASCC1 activity is strictly controlled, and this rigidity plays a role in the regulatory mechanism. The prediction models and their interesting implications will therefore be tested by SAXS and mutational analyses, which will ultimately need to be integrated with testing and structural imaging in cells, which can provide the most relevant environment. 153,154 Furthermore, emerging cancer biology data are showing that it is important to understand the structure of the nucleic acid as well as of the damage response proteins. 155 The potential structural rigidity of ASCC1 thus suggests that its activity may favor specific RNA structures or serve to sculpt RNA.

CONCLUSIONS

This article describes the structural and functional aspects of the selected CASP14 targets. The authors of the structures highlighted the most interesting target features that were reproduced in the models, and also discussed the drawbacks of the predictions.
The overall ability to predict three-dimensional structures of proteins has improved remarkably, and many difficult targets were modeled with impressive accuracy. When modeling monomeric targets, AlphaFold2 systematically outperformed the other methods, followed by the runner-up groups on some targets, and the authors suggested that the top models could be used to confidently infer functional sites of the proteins. For example, for target T1057, the top two predictions would allow for the correct assignment of the active-site catalytic residues and their environment. There is, however, room for improvement when it comes to modeling loops. It also remains challenging to accurately model multimeric protein complexes. In some cases, the limiting factor could be the lack of an adequate structure of the individual components (e.g., targets H1036 and H1065). In other cases, predictions of the individual components were highly accurate, yet the methods failed to reproduce the relative orientations observed in their oligomeric states. Examples include the incorrect oligomerization interface of the DHBc spike (T1099) and large deviations in the ring assembly of the phage T5 tail tip complex, where no model was able to reproduce the inter-ring distances and diameter (H1060 and T1061). We also observed that the conformations of the models for several targets, for example, T1054, T1068, and T1101, differed from the experimentally determined structures. As the authors pointed out, these conformations may represent alternative biologically relevant states and could be helpful for a better understanding of the structural dynamics of the targets. The outcomes of this critical assessment have paved the way for increasing the synergies between computational and experimental approaches to protein structure determination. As described in another article of this issue, several of the CASP14 targets were solved with the aid of the models, or the models allowed the structure accuracy to be improved. 7 The synergies could be particularly helpful for capturing conformations that may have eluded experimental structure determination, particularly in membrane proteins, 156 or as a strategy for attempting molecular replacement phasing, which has already been shown to be beneficial. 157 In conclusion, we have shown that for the targets described here, the most critical structural features were accurately reproduced by the models. The experimentalists now foresee the models guiding further studies of biologically relevant properties of proteins, including the spatial orientations of structural elements and their dynamics. As the performance of computational methods has increased, so has the confidence in the scientific value of the results they produce.
5,894
2021-09-25T00:00:00.000
[ "Biology", "Computer Science" ]
Robust time-of-arrival localization via ADMM

This article considers the problem of source localization (SL) using possibly unreliable time-of-arrival (TOA) based range measurements. Adopting the strategy of statistical robustification, we formulate TOA SL as the minimization of a versatile loss that possesses resistance against the occurrence of outliers. We then present an alternating direction method of multipliers (ADMM) to tackle the nonconvex optimization problem in a computationally attractive iterative manner. Moreover, we prove that the solution obtained by the proposed ADMM will correspond to a Karush-Kuhn-Tucker point of the formulation when the algorithm converges, and discuss reasonable assumptions about the robust loss function under which the approach can be theoretically guaranteed to be convergent. Numerical investigations demonstrate the superiority of our method over many existing TOA SL schemes in terms of positioning accuracy and computational simplicity. In particular, the proposed ADMM achieves estimation results with mean square error performance closer to the Cramér-Rao lower bound than its competitors in our simulations of impulsive noise environments.

I. INTRODUCTION

Passive source localization (SL) refers to determining the position of a signal-emitting target from measurements collected using multiple spatially separated sensors [1]. Owing to its great significance for many location-based applications (e.g., emergency assistance [2], asset tracking [3], the Internet of Things [4], and radar [5]), the SL problem has received much attention in the literature over the past decades [6]. Depending on the measurements being used, methods for SL can be roughly divided into two groups: range-based and direction-based. A representative, and perhaps the most extensively studied, instance of the former is to exploit time-of-arrival (TOA) observations characterizing the source-sensor range information for positioning. Under the Gaussian noise assumption, least squares (LS) has undoubtedly been the tool of choice for delivering statistically meaningful estimation results. Along this line, numerous LS techniques have been proposed in the literature, such as algebraic explicit and exact solutions [7]-[9], convex programming approaches [10], [11], and local optimization algorithms [12]-[15], to name but a few.
Statistical model mismatch arises in real-world scenarios where the Gaussian noise assumption may be violated due to the presence of unreliable sensor data [22], thereby largely deteriorating the performance of the LS methodology [16]. To reduce the negative impact of non-Gaussian measurement errors on the positioning accuracy, researchers have resorted to the strategies of joint estimation of the location coordinates and a balancing parameter [17], [18], the worst-case criterion [19]-[21], and robust statistics [23]-[28]. Here, we are particularly interested in the statistical robustification of TOA-based LS position estimators. Countermeasures of this type have been attracting increasing attention in recent years, mainly because of their relatively low prior knowledge requirements and low implementation complexity [28].

Consider a single-source localization system deployed in H-dimensional space. It consists of a radiating source that is to be located, with unknown position x ∈ R^H, and L sensors with signal receiving capabilities and known positions {x_i ∈ R^H | i = 1, ..., L}. Synchronization among the source and sensors is assumed to be guaranteed, so that the TOA of the emitted signal at each of the L sensors can be estimated. Specifically, the TOA-based range measurements are modeled as [1]

r_i = ||x − x_i||_2 + e_i, i = 1, ..., L,    (1)

where ||·||_2 denotes the ℓ2-norm and e_i accounts for the observation uncertainty associated with the ith source-sensor path. Under the assumption that {e_i} are uncorrelated zero-mean Gaussian processes with variances {σ_i²}, and in the maximum likelihood (ML) sense [29], the TOA SL problem is mathematically stated as [12]

min_x Σ_{i=1}^{L} (r_i − ||x − x_i||_2)²/σ_i².    (2)

The ℓ2-norm based estimator (2) is, however, vulnerable to outlying (unreliable) data, which manifest themselves as range measurements immersed in non-Gaussian, potentially bias-like errors. Common causes of them in the localization context include non-line-of-sight and multipath propagation of signals, interference, sensor malfunction, and malicious attacks [5], [16], [22], [30]. To achieve resistance against the occurrence of outliers in such adverse situations, (2) was statistically robustified in [25]-[28] by substituting the ℓ2 loss (·)² with a certain fitting error measure less sensitive to biased r_i, i.e.,

min_x Σ_{i=1}^{L} f(r_i − ||x − x_i||_2),    (3)

where f(·) turns into the ℓ1 loss |·| in [26]; the Huber loss with radius R_i > 0 in [25], defined (in one common convention) as f(u) = u²/2 for |u| ≤ R_i and f(u) = R_i|u| − R_i²/2 otherwise; the Welsch loss with parameter σ_W in [27]; and the ℓp loss |·|^p with 1 ≤ p < 2 in [28]. Forming the associated epigraph problem [46], one may directly apply the classical convex approximation technique of second-order cone programming (SOCP) [47] or difference-of-convex programming (DCP) [48] to address the ℓ1-minimization version of (3):

min_x Σ_{i=1}^{L} |r_i − ||x − x_i||_2|.

Smoothed approximations through the composition of the natural logarithm and hyperbolic cosine functions [26], on the other hand, make the Lagrange-type neurodynamic optimization framework of the Lagrange programming neural network (LPNN) applicable to

min_x Σ_{i=1}^{L} (1/ν) ln cosh(ν(r_i − ||x − x_i||_2)),

where the loss (1/ν) ln cosh(νu), with parameter ν > 0, is also known as
the smoothed ℓ1 loss [49]. The idea of resorting instead to the Huber convex underestimator for (1) based on the composite loss f was conceptualized in [25], where [·]₊ = max(·, 0) is the ramp function used in its construction. Nonetheless, the focus of [25] is more on the extension of (9) to sensor network node localization rather than single-source positioning. In [27], the intractable range-based Welsch loss minimization problem (10) was approximated by using the squared ranges, after which a half-quadratic (HQ) optimization algorithm was devised. The authors of [28] converted the ℓp-minimization problem (12) into an iteratively reweighted LS (IRLS) framework and applied sum-product message passing (MP) to solve the x-update IRLS subproblem in closed form.

While implementing the convex programs via interior-point methods is known to scale badly with L, numerical realization of the neurodynamic approach in [26] usually takes several thousands of iterations to reach equilibrium [50]. Despite the low-complexity advantage of HQ, the squared-range formulation (11) in [27] inherently suffers from an impaired statistical efficiency. The MP-assisted IRLS [28] is computationally attractive, yet the lack of a theoretical foundation supporting its convergence may affect the soundness of the technique. With these concerns in mind, we are motivated to explore new avenues for coping with (3), especially in a way less computationally demanding but more statistically efficient and theoretically complete than the existing solutions.

Our main contribution in this work is to develop an alternating direction method of multipliers (ADMM) [31] for tackling the general robust TOA positioning problem (3). As the versatility of the cost function will only be embodied in one of the subproblems, which amounts to computing the proximal operator of f(·), we are able to adapt f(·) to specific noise environments with ease. In an analytical manner, we prove that if the proposed ADMM is convergent, the limit point to which it converges will satisfy the Karush-Kuhn-Tucker (KKT) conditions for the equivalent constrained reformulation of (3). We also show the possibility of ensuring the theoretical convergence of the ADMM under several extra assumptions about f(·). In addition, numerical examples are presented to corroborate the superiority of our solution over its competitors in terms of positioning accuracy and computational simplicity. It should be pointed out that, compared with the ADMM in [12], our scheme is of independent interest in the following two aspects. First, the algorithm in [12] deals with the non-outlier-resistant formulation (2), whereas we aim at solving (3) with a robustness-conferring fitting error measure. Second, we provide analytical convergence results for the presented ADMM. The authors of [12], in contrast, incorrectly assumed the nonconvex constrained optimization reformulation of (2) to be convex and, as a result, omitted to probe into the convergence of their method under nonconvexity.

The rest of this contribution is organized as follows. The equivalent constrained reformulation of (3) and the ADMM solution to it are described in Section II. The complexity and convergence properties of the proposed ADMM are discussed in detail in Section III. Performance evaluations are conducted in Section IV. Ultimately, Section V concludes the article.

II. ALGORITHM DEVELOPMENT

Let us start by transforming (3) into

min_{x,d} Σ_{i=1}^{L} f(r_i − d_i), s.t. d_i = ‖x − x_i‖_2, i = 1, ..., L, (13)

where d = [d_1, ..., d_L]^T ∈ R^L is a dummy vector of decision variables for the source-sensor distances, independent of x.
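To make the role of the fitting error measure f(·) concrete before developing the algorithm, the sketch below evaluates the robust TOA objective (3) for several of the losses reviewed in Section I. It is a minimal Python illustration rather than the authors' code, and the loss parameters (p, the Huber radius R, the Welsch width σ_W, and the smoothing parameter ν) are free choices made here for demonstration.

    import numpy as np

    def lp_loss(u, p=1.3):            # |u|^p with 1 <= p < 2
        return np.abs(u) ** p

    def huber_loss(u, R=1.0):         # quadratic near zero, linear in the tails
        return np.where(np.abs(u) <= R, 0.5 * u ** 2, R * np.abs(u) - 0.5 * R ** 2)

    def welsch_loss(u, sigma_w=1.0):  # bounded loss that strongly downweights outliers
        return 0.5 * sigma_w ** 2 * (1.0 - np.exp(-(u / sigma_w) ** 2))

    def smoothed_l1_loss(u, nu=0.1):  # nu*ln(cosh(u/nu)) -> |u| as nu -> 0
        return nu * np.log(np.cosh(u / nu))

    def robust_objective(x, anchors, ranges, loss=lp_loss):
        """Objective of (3): sum_i f(r_i - ||x - x_i||_2)."""
        d = np.linalg.norm(x - anchors, axis=1)  # ||x - x_i||_2 for all i
        return np.sum(loss(ranges - d))

    # toy usage with L = 4 sensors in the plane
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    ranges = np.array([5.2, 6.1, 5.9, 7.4])
    print(robust_objective(np.array([3.0, 4.0]), anchors, ranges))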
We further rewrite (13) as

min_{x,d,β} Σ_{i=1}^{L} f(r_i − d_i), s.t. d_i β_i = x − x_i, ‖β_i‖_2 = 1, i = 1, ..., L, (14a)
d_i ≥ 0, i = 1, ..., L, (14b)

by introducing an auxiliary vector β = [β_1^T, ..., β_L^T]^T ∈ R^{HL} into the characterization of range information [12]. Here, β_i ∈ R^H is a unit vector that indicates the source's direction with respect to (w.r.t.) the ith sensor, a.k.a. the direction vector of arrival [32].

For (14), we construct the following augmented Lagrangian with constraints:

L_ρ(x, d, β, λ) = Σ_{i=1}^{L} [ f(r_i − d_i) + λ_i^T (x − x_i − d_i β_i) + (ρ/2) ‖x − x_i − d_i β_i‖_2² ], (15)

where λ = [λ_1^T, ..., λ_L^T]^T ∈ R^{HL} is a vector containing the Lagrange multipliers for (14a) and ρ > 0 is the augmented Lagrangian parameter. Splitting the primal variables into two parts, the ADMM for solving (14) consists of the following iterative steps:

x^(k+1) = arg min_x L_ρ(x, d^(k), β^(k), λ^(k)), (16a)
(d^(k+1), β^(k+1)) = arg min_{d, ‖β_i‖_2 = 1} L_ρ(x^(k+1), d, β, λ^(k)), (16b)
λ_i^(k+1) = λ_i^(k) + ρ (x^(k+1) − x_i − d_i^(k+1) β_i^(k+1)), i = 1, ..., L, (16c)

where the iteration index is indicated by (·)^(k). To be specific, (16a) and (16b) sequentially minimize the augmented Lagrangian (15) w.r.t. the primal variables, after which (16c) updates the dual variables via a gradient ascent with step size ρ. We note that there are only two (namely, x and (d, β)) rather than three primal blocks in the ADMM governed by (16). By comparison, natural extension of the basic two-block ADMM to multi-block cases may result in divergence [33]. More detailed explanations of (16a) and (16b) are given as follows.

By ignoring the constant terms independent of x, the subproblem (16a) can be simplified into the LS form

min_x Σ_{i=1}^{L} ‖x − x_i − d_i^(k) β_i^(k) + λ_i^(k)/ρ‖_2², (17)

which has the following closed-form solution:

x^(k+1) = (1/L) Σ_{i=1}^{L} (x_i + d_i^(k) β_i^(k) − λ_i^(k)/ρ). (19)

Similarly, the other subproblem (16b) is re-expressed as

min_{d, ‖β_i‖_2 = 1} Σ_{i=1}^{L} [ f(r_i − d_i) + (ρ/2) ‖x^(k+1) − x_i + λ_i^(k)/ρ − d_i β_i‖_2² ]. (20)

Exploiting the particular structure of (20), the optimal β can be obtained first as

β_i^(k+1) = (x^(k+1) − x_i + λ_i^(k)/ρ) / ‖x^(k+1) − x_i + λ_i^(k)/ρ‖_2, i = 1, ..., L. (22)

Thus, (20) is reduced to a form that is separable w.r.t. the partition of d into its L elements and leads to the following L subproblems:

min_{d_i ≥ 0} f(r_i − d_i) + (ρ/2) (d_i − c_i^(k))², with c_i^(k) = (β_i^(k+1))^T (x^(k+1) − x_i + λ_i^(k)/ρ). (24)

With the following proposition, we may simplify (24) to some extent in order to facilitate the calculations.

PROPOSITION 1 Under mild assumptions about the robust loss and range observations, namely that f(·) is an even function strictly increasing on the nonnegative semi-axis and that {r_i} are always nonnegative, coping with (24) is equivalent to handling the unconstrained problem

min_{d_i ∈ R} f(r_i − d_i) + (ρ/2) (d_i − c_i^(k))². (25)

PROOF See Appendix A.

It is worth noting that the assumptions made in Proposition 1 are actually rather common (see our justification in the following). In such cases (where Proposition 1 applies), solving (24) boils down to computing the proximal operator of f(·), namely,

d_i^(k+1) = r_i − prox_{f, 1/ρ}(r_i − c_i^(k)), (26)

complying with the definition of proximal mapping below.

DEFINITION 1 The proximal mapping of a function f: R → R with parameter τ > 0 is, for any b ∈ R,

prox_{f, τ}(b) = arg min_{a ∈ R} { f(a) + (1/(2τ)) (a − b)² }. (27)

Obviously, the computational simplicity of the proximal mapping procedure is crucial to the efficient update of d in each iteration of the proposed ADMM. In fact, the calculation of (27) can be done in a relatively unencumbered manner or in closed form for many choices of f(·) exhibiting outlier-resistance [34]-[37]. As summarized in Table I¹, this article restricts the scope of discussion to two typical options: (i) the ℓp loss |·|^p for 1 ≤ p < 2 and (ii) the Huber function given in (4). The two corresponding instantiations of (3) are then (12) and the Huber-loss minimization

min_x Σ_{i=1}^{L} f(r_i − ‖x − x_i‖_2) with f as in (4), (29)

respectively. Our justification for them is given as follows. The ℓp-minimization criterion with 1 ≤ p < 2 is known to show a considerable degree of robustness against outliers in a wide range of adverse environments [28], [35], [56], [59]. Closely resembling its ℓ1 (resp. ℓ2) counterpart for large (resp. small) fitting errors, the Huber loss minimization scheme offers another means of allowing some controllability between the two [38]. Most importantly, all these cost functions are compatible with our assumptions in Proposition 1.
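Since the d-update reduces to one scalar proximal evaluation per sensor, it is worth seeing how cheap these evaluations are. The sketch below gives the soft-thresholding prox of the ℓ1 loss, the closed-form Huber prox (obtained by checking which branch of (4) is active), and a bisection search on the stationarity condition for the ℓp case; it is a hedged illustration derived from Definition 1 and should be checked against Table I, whose content is not reproduced in this copy.

    import numpy as np

    def prox_l1(b, tau):
        """argmin_a |a| + (a - b)^2 / (2 tau): soft thresholding."""
        return np.sign(b) * max(abs(b) - tau, 0.0)

    def prox_huber(b, tau, R=1.0):
        """Prox of the Huber loss (4) with radius R."""
        if abs(b) <= R * (1.0 + tau):
            return b / (1.0 + tau)          # quadratic branch is active
        return b - tau * R * np.sign(b)     # linear branch is active

    def prox_lp(b, tau, p=1.3, iters=50):
        """Prox of |a|^p (1 < p < 2) by bisection on p*a^(p-1) + (a - |b|)/tau = 0."""
        if b == 0.0:
            return 0.0
        lo, hi = 0.0, abs(b)                # the minimizer lies between 0 and |b|
        for _ in range(iters):
            a = 0.5 * (lo + hi)
            g = p * a ** (p - 1) + (a - abs(b)) / tau  # strictly increasing in a
            if g > 0.0:
                hi = a
            else:
                lo = a
        return np.sign(b) * 0.5 * (lo + hi)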
We note that a₊ and a₋ in Table I are the unique positive root of (a − b)/τ + p a^(p−1) = 0 for a ∈ [0, b] (when b ≥ 0) and the unique negative root of (a − b)/τ − p(−a)^(p−1) = 0 for a ∈ [b, 0] (when b < 0), respectively, both of which can be conveniently found using simple root-finding algorithms (e.g., bisection) [35].

The steps of ADMM for dealing with the robust TOA SL problem are summarized in Algorithm 1, where δ is the tolerance used to terminate the iterations. (Algorithm 1: input {x_i}, {r_i}, ρ, and δ; iterate (16a)-(16c) until convergence; output: estimate of source location x.)

¹ For the purpose of completeness, we also include the non-robust case of p = 2, in which the ℓp-minimization estimator coincides with (2), providing optimum estimation performance as long as the disturbances {e_i} are independent and identically distributed (i.i.d.) Gaussian processes.

III. COMPLEXITY AND CONVERGENCE PROPERTIES

The complexity and convergence properties of the proposed ADMM are discussed in detail in this section. From Algorithm 1, it is not difficult to find that the complexity of the ADMM is dominated by that of the d-update steps, in which proximal mapping calculations are involved. Therefore, the total complexity of Algorithm 1 will be O(N_ADMM · L) if the loss function taken from Table I admits closed-form d-updates and O(N_ADMM · K · L) otherwise, where N_ADMM represents the iteration number of the ADMM and K the number of searches involved in the root-finding process at each ADMM iteration. Empirically, a few steps of the bisection method applied in each d-update and a few tens to hundreds of ADMM iterations will already be enough to yield an accurate source location estimate. By comparison, realizing the second-order cone program derived from (5), the difference-of-convex program (requiring N_CCCP concave-convex procedure (CCCP) iterations), and the ramp-function-based Huber convex underestimator (9) in [25] using interior-point methods results in O(L^3.5), O(N_CCCP · L^3.5), and O(L^3.5) complexity, respectively [47]. The LPNN in [26], the HQ technique in [27], and the MP-assisted IRLS in [28] are three other representative TOA positioning schemes from the literature that adopt the strategy of statistical robustification as well. They lead to O(N_LPNN · L), O(N_HQ · K · L), and O(N_IRLS · L) complexity, respectively, where N_LPNN is the number of iterations in the numerical implementation of LPNN, N_HQ the number of steps needed for the HQ algorithm to converge, and N_IRLS that of the IRLS iterations. As reported in [27], [28], [50], N_HQ typically takes a value of several tens, and the same goes for K and N_IRLS, whereas N_LPNN is usually of several thousands. It turns out that the proposed ADMM with loss functions permitting closed-form proximal computations, the HQ algorithm in [27], and the IRLS in [28] are the most computationally efficient.

With an appropriately selected value of ρ, Algorithm 1 is observed to be always convergent in the simulation studies. Nonetheless, existing theoretical analyses of the convergence of ADMM under nonconvexity [39] are not easily transferable to our design, owing to the disparity between the formulation (14) and the paradigms specified in [39]. We note that similar dilemmas have been faced by many practitioners, despite empirically sound ADMM use cases in their respective contributions [12], [24], [35], [40]. In what follows, we present our own analytic proofs of the convergence of Algorithm 1 in lieu of pinning all hopes on the generally applicable results from the literature.
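Before turning to the analysis, the following compact sketch renders Algorithm 1 in code form under the update formulas of Section II: the closed-form x-update (19), the normalized β-update (22), a proximal d-update per (26), and the dual ascent (16c). Here prox_fn stands for any scalar proximal mapping, such as those sketched above, and the initialization and stopping rule are simple illustrative choices rather than the paper's prescriptions.

    import numpy as np

    def admm_toa(anchors, ranges, prox_fn, rho=5.0, delta=1e-5, max_iter=500):
        """Robust TOA localization by ADMM (sketch of Algorithm 1)."""
        L, H = anchors.shape
        x = anchors.mean(axis=0)                       # crude initialization
        d = np.abs(ranges).astype(float).copy()        # distance surrogates
        v0 = x - anchors
        beta = v0 / (np.linalg.norm(v0, axis=1, keepdims=True) + 1e-12)
        lam = np.zeros((L, H))                         # Lagrange multipliers
        for _ in range(max_iter):
            x_old = x
            # (16a)/(19): closed-form least-squares update
            x = np.mean(anchors + d[:, None] * beta - lam / rho, axis=0)
            # (22): beta-update, i.e., projection onto the unit sphere
            v = x - anchors + lam / rho
            beta = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-12)
            # (24)-(26): d-update via one scalar prox per sensor
            c = np.sum(beta * v, axis=1)               # beta_i^T (x - x_i + lam_i/rho)
            d = ranges - np.array([prox_fn(ranges[i] - c[i], 1.0 / rho) for i in range(L)])
            # (16c): dual ascent with step size rho
            lam = lam + rho * (x - anchors - d[:, None] * beta)
            if np.linalg.norm(x - x_old) < delta:      # simple termination test
                break
        return x

    # usage: x_hat = admm_toa(anchors, ranges, lambda b, t: prox_huber(b, t, R=1.0))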
The following proposition establishes the equivalence between (14) and the simplification

min_{x,d,β} Σ_{i=1}^{L} f(r_i − d_i), s.t. d_i β_i = x − x_i, ‖β_i‖_2 = 1, i = 1, ..., L, (31)

pointing out that (14b) can be disregarded so that we may shift our focus to (31) in the convergence analysis:

PROPOSITION 2 Under the same assumptions as in Proposition 1, the formulations (14) and (31) are equivalent to one another.

PROOF The proof of Proposition 2 is similar to that of Proposition 1. Please see Appendix B for the details.

Next, we derive the following theorem for the optimality of the tuples produced by Algorithm 1:

THEOREM 1 Let (x^(k), d^(k), β^(k), λ^(k)), k = 1, 2, ..., be the tuples of primal and dual variables generated by Algorithm 1. If

lim_{k→∞} (x^(k), d^(k), β^(k), λ^(k)) = (x*, d*, β*, λ*), (32)

then the limit (x*, d*, β*, λ*) will satisfy the KKT conditions (a.k.a. the first-order necessary conditions [41]) for (31)²:

∇_x ℒ(x*, d*, β*, λ*) = 0_H, (33a)
∇_d ℒ(x*, d*, β*, λ*) = 0_L, (33b)
∇_β ℒ(x*, d*, β*, λ*) = 0_{HL}, (33c)
d_i* β_i* = x* − x_i, ∀i ∈ {1, ..., L}, (33d)
‖β_i*‖_2 = 1, ∀i ∈ {1, ..., L}, (33e)

where ℒ(·) is the associated Lagrangian of (31).

² In cases where f(·) is non-differentiable at the origin, the condition (33b) should be substituted with the inclusion 0_L ∈ ∂_d ℒ(x*, d*, β*, λ*) [42], where ∂_d(·) denotes the generalized gradient of a function w.r.t. d that takes into consideration the subdifferential calculus [43]. For simplicity's sake, we confine our analyses here to the special case with a differentiable f(·). Nevertheless, it is worth pointing out that, with slight modifications, the results can readily be extended to scenarios in the presence of non-differentiability.

PROOF See Appendix C.

Also mentionable is the linear independence constraint qualification (LICQ), which has been one of the most commonly adopted constraint qualifications and is necessary to guarantee that an optimal solution satisfies the KKT conditions [44]. Moreover, the uniqueness of the Lagrange multipliers can be ensured by it [41]. In our case, the LICQ concerns the linear independence of the constraint gradients at the local solution point y* = [(x*)^T, (d*)^T, (β*)^T]^T, where 0_{a×b} (resp. I_a) denotes the a × b zero (resp. a × a identity) matrix in the corresponding Jacobian. In a nontrivial source localization setting, the position of the source is different from those of the sensors [45]. That is, none of the elements of {β_i*} should be equal to zero, and the same goes for {d_i*}. It is then easily deducible that the LICQ holds.

REMARK 1 Theorem 1 does not guarantee the convergence of the proposed ADMM. Instead, it only reveals the optimality of the solution when the algorithm converges. To further improve the analytical completeness, one may borrow the idea from [40] to relax the formulation (31) somewhat by introducing additional auxiliary variables and an extra quadratic penalty, whereby rigorously proving the convergence of the generated sequence is made possible. This will, however, give rise to degradation of the estimator's performance [40] (i.e., the incompleteness of analytical convergence results can be seen as an acceptable trade-off for higher estimation accuracy). To avoid such performance-impairing approximations to the source localization formulation, one would instead have to limit the scope of choice of f(·), as elaborated in the following statements.

THEOREM 2 Assume that f(r_i − d_i) is convex w.r.t. d_i for all i and that its gradient is Lipschitz continuous. Then, for a sufficiently large ρ, the sequence {L_ρ(x^(k), d^(k), β^(k), λ^(k))}_{k=1,...} is monotonically nonincreasing.

PROOF See Appendix D.

THEOREM 3 Under the same assumptions, {L_ρ(x^(k), d^(k), β^(k), λ^(k))}_{k=1,...} is bounded from below for a sufficiently large ρ.

PROOF See Appendix E.

COROLLARY 1 The sequence {L_ρ(x^(k), d^(k), β^(k), λ^(k))}_{k=1,...} is convergent under the above assumptions about f(·) and ρ.

PROOF This corollary is established via Theorems 2 and 3.

Equipped with these promising tools, we are finally enabled to present the following theorem, which asserts that Algorithm 1 can be theoretically guaranteed to converge under certain reasonable assumptions.

THEOREM 4 The sequence (x^(k), d^(k), β^(k), λ^(k)), k = 1, 2, ..., is convergent, namely, (32) holds, under the same circumstances as in Corollary 1.

PROOF See Appendix F.

COROLLARY 2 Let us adopt the setting of Corollary 1. The solution obtained by Algorithm 1 corresponds to a KKT point of the nonconvex constrained optimization problem (31).

PROOF This corollary is established via Theorems 1 and 4.

IV. SIMULATION RESULTS

In this section, we carry out computer simulations to evaluate the performance of Algorithm 1.
Comparisons are made between the proposed ADMM and several existing TOA-based location estimators that are also built upon robust statistics. Table II gives a summary of these methods, and we note that the convex approximation techniques are all realized using the MATLAB CVX package [52]. To implement the LPNN [26], we invoke the MATLAB routine ode15s, a variable-step, variable-order solver relying on formulas for numerical differentiation of orders 1 to 5 [53].

An SL system with H = 2 and L = 8 is considered. Unless otherwise mentioned, the locations of the source and sensors are randomly generated inside an origin-centered 20 m × 20 m square area in each of the Monte Carlo (MC) runs. The positioning accuracy metric is the root-mean-square error (RMSE), defined as

RMSE = sqrt( (1/N_MC) Σ_{j=1}^{N_MC} ‖x̂{j} − x{j}‖_2² ),

where N_MC denotes the total number of MC runs and is fixed at 3000 here, and x̂{j} represents the estimate of the source position, x{j}, in the jth MC run. User-specified parameters of the ADMM are set as δ = 10⁻⁵ and ρ = 5 [24]. The tunable parameter σ_W of the Welsch loss is adaptively chosen according to Silverman's heuristic [54], in the same fashion as indicated in [27]. The smoothing parameter ν is set to 0.1 to ensure feasibility of the ode15s solver.

Taking into account the presence of outliers, we follow [28], [56] and use the class of α-stable distributions for modeling {e_i} in (1). Stable processes are well known for their suitability to characterize skewness and heavy-tailedness. Except for a few special instances, members of the stable distribution family do not have an explicit expression for their probability density function (PDF). Instead, their PDF p(z) is implicitly described through the inverse Fourier transform of the characteristic function Φ(t; α, ζ, γ, µ):

p(z) = (1/(2π)) ∫ Φ(t; α, ζ, γ, µ) e^(−jtz) dt,

where the detailed analytical parameterization of Φ(t) can be found in [57]. As one may see, there are four parameters defining the family. The stability parameter, 0 < α ≤ 2, controls the tails of the distribution. Generally speaking, the smaller the value of α, the heavier the tails and the more impulsive the random variable being modeled. µ determines the location. The skewness parameter, −1 ≤ ζ ≤ 1, is a measure of asymmetry. In the simplest case where ζ = 0, the distribution becomes symmetric about its mean (which is µ when α > 1) and degenerates into the so-called symmetric α-stable (SαS) distribution. By contrast, the distribution is said to be right-skewed (resp. left-skewed) for ζ > 0 (resp. ζ < 0). γ > 0 is the scale parameter, measuring to what extent the distribution spreads out (similar to the variance of the normal distribution).

For illustration purposes, Fig. 1 plots the PDFs for several representative choices of α and ζ. As in our case, the stable-distributed range measurement noise {e_i} is assumed to be i.i.d. Because the variance of the stable distribution is undefined for α < 2, we introduce the concept of generalized signal-to-noise ratio (GSNR) from [28], in which the dispersion γ^α takes the place of the undefined noise variance, to quantify the relative noise level. Furthermore, the square root of the trace of the MC-approximated Cramér-Rao lower bound matrix (termed RCRLB) [58] is included in the comparison results to offer a benchmark for the accuracy of different robust position estimators in i.i.d. non-Gaussian noise. As it is in principle difficult to work out a general schedule for adjusting the Huber radius under stably modeled non-Gaussianity, we simply follow [55] and assign a fixed value of 1 to R_i.
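For readers wishing to reproduce this noise model, SciPy exposes the α-stable family as scipy.stats.levy_stable. The snippet below draws SαS range errors and sets the dispersion γ from a target GSNR; the precise GSNR convention of [28] is not reproduced in this copy, so the definition used here (average squared source-sensor distance over γ^α) is an assumption for illustration only.

    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(0)
    H, L = 2, 8
    anchors = rng.uniform(-10.0, 10.0, size=(L, H))  # sensors in a 20 m x 20 m square
    source = np.array([2.0, 3.0])
    d_true = np.linalg.norm(source - anchors, axis=1)

    alpha, zeta = 1.5, 0.0                 # impulsive and symmetric (SaS)
    gsnr_db = 20.0
    power = np.mean(d_true ** 2)           # assumed "signal power" of the noise-free ranges
    gamma = (power / 10.0 ** (gsnr_db / 10.0)) ** (1.0 / alpha)  # GSNR = 10*log10(power / gamma^alpha)

    e = levy_stable.rvs(alpha, zeta, loc=0.0, scale=gamma, size=L, random_state=rng)
    r = d_true + e                         # TOA range measurements following model (1)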
To begin with, Figs. 2 and 3 show the convergence behavior of ℓp-ADMM (with p taking different values from 1 to 2) and Huber-ADMM in a single MC run for α = 1.5 and GSNR = 20 dB. We should also note that, for reproducibility, in this test the L = 8 sensors are evenly placed on the perimeter of the afore-defined square region and the source is deterministically deployed at x = [2, 3]^T m. Both ℓp-ADMM and Huber-ADMM are observed to rapidly decrease the objective function and converge to a point close to the true source position within the first few tens of iterations.

In our following simulation studies, the value of p required by ℓp-ADMM and ℓp-IRLS is set to the optimal one hinging on α [60], in a way analogous to [28], [56]. Fig. 4 compares the RMSEs of all estimators versus the GSNR. The two proposed ADMM approaches deliver the lowest level of positioning errors for all GSNRs. In particular, the RMSE performance of ℓp-ADMM is the closest to the RCRLB benchmark (with a small gap of about half a meter). Among the six competitors, ℓp-IRLS produces the best results. This reconfirms the findings reported in [28], namely that ℓp-IRLS can be fairly statistically efficient in impulsive noise if p is aptly chosen. The direct nonlinear optimization based ℓ1-minimization techniques, ℓ1-DCP and ℓ1-LPNN, are slightly inferior to ℓp-IRLS due to model mismatch but still capable of outperforming the remaining schemes, whose estimation performance looks relatively poorer. ℓ1-LPNN is not comparable to ℓ1-DCP as the GSNR decreases from 25 dB, which is attributed to the smoothed approximations introduced in (7). Nonetheless, the former does not have the tightness issues from which the traditional convex relaxation methods ℓ1-SOCP and Huber-CUE suffer, nor is it quite as statistically inefficient as Welsch-HQ.

Setting the GSNR to 20 dB and the impulsiveness-controlling parameter as α = 1.5, we plot the RMSE versus the number of sensors used for localization, L (up to 9), in Fig. 5. Again, the two ADMM solutions have the best performance. Although none of the estimators can attain the RCRLB, our algorithms are still the closest ones to it, particularly in the scenarios with a large number of sensors. Fig. 6 further plots the RMSE as a function of α ∈ [1.1, 1.9] for L = 8 and GSNR = 20 dB, which is once more a case where ℓp-ADMM is demonstrated to be superior to the rest of the approaches.

In Table III, we list the per-sample average CPU time of the eight location estimators in the above simulations. It is seen that the running time can vary by several orders of magnitude. Huber-ADMM, which admits a closed-form proximal mapping of f(·), and Welsch-HQ are the second and third fastest, respectively, taking only slightly more time than the state-of-the-art method ℓp-IRLS. Because of the extra bisection procedure incorporated into the ADMM algorithm, ℓp-ADMM may not be as computationally efficient as Huber-ADMM in actual CPU time comparisons. Nonetheless, its problem-solving process still takes less time than those of the classical convex approximation techniques. In general, the simulation results agree with our complexity analysis in Section III.

V. CONCLUSION

This article focused on the problem of outlier-resistant TOA SL. Based on the ADMM, we presented an iterative algorithm to handle the statistically robustified positioning formulation.
Each iteration of our ADMM consists of two primal variable minimization steps and a dual variable update, all of which can be effortlessly implemented provided that the loss function relied upon allows convenient proximal computations. What is more, we proved that the ADMM will converge to a KKT point of the nonconvex constrained optimization problem under certain conditions, and verified that the LICQ holds at that point for nontrivial SL configurations. The superiority of the devised scheme over a number of robust statistics type TOA SL methods, in terms of positioning accuracy in impulsive noise and computational simplicity, was demonstrated through simulations.

APPENDIX A PROOF OF PROPOSITION 1

Denote the globally optimal solution to (25) by d*. Straightforwardly, the proposition will hold if we are able to show that d_i* ≥ 0 holds ∀i ∈ {1, ..., L}. Based on the assumptions made and the reverse triangle inequality ||a| − |b|| ≤ |a − b|, the objective function value associated with the global optimum d = d* is greater than or equal to its counterpart associated with d = abs(d*), where abs(·) stands for the element-wise absolute value function. Namely, d* would no longer be the globally optimal solution if the inequality held strictly. It then follows that the ≥ should degrade into =, which holds if and only if d_i* = |d_i*|, ∀i ∈ {1, ..., L}, since the sum of two strictly monotonic functions of the same kind of monotonicity is still a monotonic function. Therefore, d_i* ≥ 0 is tacitly satisfied ∀i ∈ {1, ..., L}. The proof is complete.

APPENDIX B PROOF OF PROPOSITION 2

Let (x*, d*, β*) be the point corresponding to the globally optimal solution to (31). To prove the proposition, it suffices to show that d_i* ≥ 0 holds ∀i ∈ {1, ..., L}, which follows from the assumptions made and the reverse triangle inequality, analogously to Appendix A.

APPENDIX C PROOF OF THEOREM 1

Since (22) renders the solution to the β-subproblem in each of the ADMM iterations certainly feasible, the condition (33e) is satisfied. On the other hand, based on the dual variable update schedule (16c) and our assumption about lim_{k→∞} λ^(k) in (32), it is straightforward to deduce that the condition (33d) is satisfied as well. We now proceed to check the remaining conditions (33a)-(33c). As x^(k+1) and (d^(k+1), β^(k+1)) are the minimizers of the subproblems (16a) and (16b), respectively, the optimality relations (42a)-(42c) hold. Putting (32), (42a), and the verified condition (33d) together establishes (33a). Analogously, (33b) follows from (32), (33d), and (42b), and (33c) from (32), (33d), and (42c). The conditions (33a)-(33c) are thereby verified. The proof is complete.

APPENDIX D PROOF OF THEOREM 2

In order to study the monotonicity properties of the sequence {L_ρ(x^(k), d^(k), β^(k), λ^(k))}, we decompose the difference in L_ρ between two successive ADMM iterations into the four parts (47a)-(47d) and analyze their ranges one by one. For (47a) and (47b), it follows from the x- and β-update schedules of the proposed ADMM that the corresponding differences are nonpositive by definition. On the other hand, additional assumptions about f(·) are required to facilitate the analysis of (47c) and (47d). As long as we assume the convexity of f(r_i − d_i) w.r.t. d_i, ∀i ∈ {1, ..., L}, it is easily verified that the d-subproblem objective is strongly convex w.r.t. d (with parameter M_2 > 0), by examining the positive semidefiniteness of the difference between its Hessian and M_2 · I_L. Applying the equivalent inequality that characterizes strong convexity [46] to the points d^(k+1) and d^(k) yields a bound for (47c). The remaining difference of two evaluated augmented Lagrangians, (47d), can be re-expressed based on (16c). Since our d_i-minimization schedule again takes advantage of the convexity of f(r_i − d_i) w.r.t. d_i, ∀i ∈ {1, ..., L}, and utilizing the fact that β_i^(k) and β_i^(k+1) are both unit vectors together with our convexity and Lipschitz continuity assumptions about f(r_i − d_i), we arrive at the desired bound after putting (55) and (56) into (53).
APPENDIX E PROOF OF THEOREM 3

Combining the convexity of f(r_i − d_i) w.r.t. d_i with the M_1-Lipschitz continuity of its gradient gives [51, Lemma 4] the inequality (59). Having in mind that β_i^(k) is a unit column vector, we construct (60) from (59) by letting η_1 and η_2 be (β_i^(k))^T (x^(k) − x_i) and d_i^(k), respectively. Plugging (56) into (60) further deduces (61). Based on (61) and the Cauchy-Schwarz inequality, we obtain (62a)-(62c). Under the symmetry and monotonicity assumptions made earlier about f(·) in Proposition 1, it is not hard to find that the first part of the expression enclosed within square brackets in (62c) is bounded from below. The same can also be said of the second part for ρ ≥ M_1 (in fact, ρ should satisfy ρ ≥ max(M_1, 2M_1²/M_2) so that Propositions 1 and 2 remain valid), trivially, as it is lower bounded by 0. This completes the proof.

APPENDIX F PROOF OF THEOREM 4

First of all, it follows from (58) and Corollary 1 that the corresponding successive differences vanish as well. Akin to (50) and (51), the strong convexity characterizing inequality (with parameter M_3 > 0) associated with the points x = x^(k+1) and x = x^(k) gives a bound which, by invoking Corollary 1 once more, implies that the successive x-iterates coincide in the limit. Likewise, we can show the analogous result for the remaining variables, for some parameter M_4 > 0, following a procedure similar to (65)-(67). The proof is complete.

TABLE I PROXIMAL COMPUTATIONS FOR ℓp (1 ≤ p ≤ 2) AND HUBER LOSS FUNCTIONS (table content not recovered)

TABLE III AVERAGE CPU TIME FOR SIMULATIONS, CONDUCTED ON A LAPTOP WITH 16 GB MEMORY AND A 4.7 GHZ CPU (table content not recovered)
7,408.8
2023-06-15T00:00:00.000
[ "Engineering", "Computer Science" ]
Doppler profile diagnostics on VUV spectra for the impurity ion temperature in edge plasmas of the Large Helical Device Space-resolved VUV spectroscopy using a 3 m normal incidence spectrometer is utilized to measure the impurity emission profile in the edge and divertor plasmas of the Large Helical Device in the wavelength range of 300 - 3200 Å. The ion temperatures derived from Doppler profile fitting of the spectra of carbon CII 1335.71 × 2 Å, CIII 977.02 × 2 Å, and CIV 1548.20 × 2 Å are comparable to the ionization potential of each charge state. The vertical profile of the ion temperature measured from the CIV line has higher values in the edge observation chords compared to those in the central chords. Introduction In the study of impurity behavior in the edge region of magnetically confined torus plasmas for fusion research, the vacuum ultraviolet (VUV) lines of impurity ions are attractive for impurity diagnostics because the emission from the relevant charge states is located in the edge plasma, where the electron temperature is considerably low. The impurity species and their concentrations can be examined through the identification of impurity lines and the line intensities, respectively. In addition, the spectral shape of impurity lines can also provide information on the ion temperature and the plasma flow through Doppler-broadening and Doppler-shift measurements, respectively [1]. In this paper, the ion temperature and its vertical profile, derived by measuring the Doppler profile of line emission from intrinsic carbon impurity ions sputtered from the carbon divertor plates of the Large Helical Device (LHD), which are the most abundant impurity in LHD, are investigated using VUV spectroscopy. Experimental setup Space-resolved VUV spectroscopy using a 3 m normal incidence spectrometer has been developed to measure the radial distribution of VUV lines in the wavelength range of 300 - 3200 Å in the edge plasmas of LHD [2]. The major and minor radii of LHD are about 3.9 m and 0.6 m, respectively. At present, inversion data processing such as Abel inversion is not applied. The edge plasma of LHD consists of stochastic magnetic fields with a three-dimensional structure intrinsically formed by the helical coils, called the "ergodic layer," while well-defined magnetic surfaces exist inside the last closed flux surface [3]. VUV spectroscopy is appropriate for the edge impurity study because the emissions are located only inside the ergodic layer, with electron temperatures distributed in the range of 10 to 500 eV. Figure 2 shows the edge electron density dependence of the impurity ion temperature for (a) CII 1335.71 × 2 Å (1s²2s²2p-1s²2s2p²), (b) CIII 977.02 × 2 Å (1s²2s²-1s²2s2p), and (c) CIV 1548.20 × 2 Å (1s²2s-1s²2p) obtained in hydrogen discharges. It has been experimentally confirmed that the CII, CIII, and CIV emissions are located in the outermost region of the ergodic layer or in the region close to the edge X-points, because the ionization potentials, Ei, of 24 eV, 48 eV, and 65 eV for C⁺, C²⁺, and C³⁺ ions, respectively, are extremely low compared to the edge temperature of LHD plasmas [4]. Thus, measuring the CII, CIII, and CIV emission gives us information on the plasma parameters in the ergodic layer. The VUV spectroscopy is performed with a 50-μm-wide entrance slit and a CCD data acquisition mode called "full binning," in which all CCD pixels aligned in the vertical direction are summed into a single channel so that the vertical spatial resolution is entirely eliminated.
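To make the Doppler analysis of the next section concrete, the following sketch fits a Gaussian to a measured line profile and converts its width into an ion temperature through the standard Doppler-broadening relation Δλ_FWHM = λ √(8 ln 2 · Ti / (m c²)), with Ti and mc² in eV. The synthetic data, the neglect of instrumental broadening, and the direct use of the second-order wavelength (valid because the relative width Δλ/λ is unchanged by the diffraction order) are illustrative simplifications rather than the actual LHD analysis chain.

    import numpy as np
    from scipy.optimize import curve_fit

    MC2_C12 = 12 * 931.494e6        # rest energy of carbon-12 in eV

    def gaussian(lam, amp, lam0, sigma, base):
        return amp * np.exp(-0.5 * ((lam - lam0) / sigma) ** 2) + base

    def ion_temperature_eV(lam_axis, counts, lam_guess):
        """Fit a Gaussian line shape and convert its FWHM to Ti (in eV)."""
        p0 = [counts.max() - counts.min(), lam_guess, 0.1, counts.min()]
        popt, _ = curve_fit(gaussian, lam_axis, counts, p0=p0)
        _, lam0, sigma, _ = popt
        fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
        return MC2_C12 / (8.0 * np.log(2.0)) * (fwhm / lam0) ** 2

    # usage on a synthetic CIV-like line near 1548.20 x 2 A (second order)
    lam = np.linspace(3093.0, 3100.0, 200)
    counts = gaussian(lam, 1000.0, 3096.4, 0.25, 50.0)
    counts += np.random.default_rng(1).normal(0.0, 5.0, lam.size)
    print(ion_temperature_eV(lam, counts, 3096.4))   # roughly 70 eV for this width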
The line shape of the wavelength spectrum has a Gaussian profile if the ions are assumed to have a Maxwellian velocity distribution, and the ion temperature can be derived from the Doppler width of the fitted profile. As shown in Fig. 2, Ti ranges around or below the ionization potential for each ionization stage and has a negative correlation with the electron density, as has been widely observed in fusion plasma experiments. Thus, Doppler profiles of VUV spectra have been successfully observed for the carbon impurity. Results In addition, when the intensity is sufficient, the spatial profile of the ion temperature can be measured. Figure 3 shows a full vertical profile of the C³⁺ ion temperature derived from the CIV line emission. A synthetic profile of the C³⁺ ion temperature calculated by an impurity transport simulation based on a three-dimensional simulation code, EMC3-EIRENE, is also plotted [5,6]. Ti (CIV) in the edge observation chords has higher values compared to those in the central chords, both in the experiment and in the simulation. In order to explain the vertical profile, we attempted a comparison between Ti (CIV) and the connection length of the magnetic field lines, and we found some positive correlation as a preliminary result. It is reasonable that Ti (CIV) has higher values where the connection length is longer in the region of CIV emission, because both heat and particle transport parallel to the magnetic field lines are dominant in the transport processes in the ergodic layer. Further detailed investigation of the relationship between Ti (CIV) and the connection length should be addressed in future studies. Summary Space-resolved VUV spectroscopy using a 3 m normal incidence spectrometer is utilized to measure the impurity emission profile in the edge and divertor plasmas of LHD in the wavelength range of 300 - 3200 Å. The ion temperatures derived from Doppler profile fitting of the spectra of carbon CII 1335.71 × 2 Å, CIII 977.02 × 2 Å, and CIV 1548.20 × 2 Å are comparable to the ionization potential of each charge state. The vertical profile of the ion temperature measured from the CIV line has higher values in the edge observation chords compared to those in the central chords. The spatial profile may be explained by considering the relationship between the distribution of the connection length in the region of CIV emission and the geometry of the observation.
1,362.2
2019-07-01T00:00:00.000
[ "Physics" ]
Counterfactual Distributions in Bivariate Models—A Conditional Quantile Approach This paper proposes a methodology to incorporate bivariate models in numerical computations of counterfactual distributions. The proposal is to extend the works of Machado and Mata (2005) and Melly (2005) by using the grid method to generate pairs of random variables. This contribution allows incorporating the effect of intra-household decision making in counterfactual decompositions of changes in income distribution. An application using data from five Latin American countries shows that this approach substantially improves the goodness of fit to the empirical distribution. However, the decomposition exercise is less conclusive about the performance of the method, which essentially depends on the sample size and the accuracy of the regression model. Introduction Most empirical studies analyze the effects of income distribution determinants through decomposition methodologies based on Oaxaca (1973) and Blinder (1973) [2,3]. Those methodologies usually focus on the wage distribution of a single individual, assuming that all employment decisions are made in an isolated or independent way with respect to other household members. Notwithstanding, the literature on intra-household labor supply offers several models of the interdependence of employment decisions within the household. The assortative mating literature provides vast evidence of interrelations of individual variables among members, such as their education levels, labor income, the choice of hours of work, etc. Ignoring this feature when estimating household labor earnings in decomposition exercises yields a scenario that may be biased or unrealistic. The main component of personal earnings is labor income. Therefore, it is important to know its intra-household determinants to understand the behavior of household incomes and their consequences for inequality. In the most traditional model, there is a sole individual responsible for making labor decisions independently of other household members. However, in the case of complete households (with head and spouse) it is usual that this decision is made by the couple. There are several models in the literature where a couple faces the problem of deciding their labor supply together according to their interests within the home (e.g., Chiappori, 1992 [4]; Blundell et al., 2005 [5]; van Klaveren et al., 2008 [6], among others). The main mechanisms behind this decision are the reservation wages of each member and the bargaining power that determines the sharing rule of the household income. Given the complexity involved in analyzing the joint employment decisions of all household members, the usual alternative is to focus only on the decisions made by the household head and spouse. The implicit assumption is that the rest of the household members will not change their behavior, or at least that their impact on family income is small. This assumption may be too simple, but it is a starting point used in the literature to understand the complex mechanisms interacting in the labor decisions made within the household. In particular, both the reservation wages and the bargaining power depend on observable and unobservable characteristics of household members, such as age, education status, persuasion, etc.
Modeling both earnings equations to analyze the household income distribution while taking into account their interactions requires a methodology that generates counterfactual distributions of hypothetical changes in their determinants. Some examples of models including employment decisions within the household are Browning et al. (1994) [7], Gasparini and Marchionni (2007) [8], and Galiani and Weischenbaum (2012) [9], among others. Usually these studies make several parametric and/or distributional assumptions, such as normality of the unobservable income determinants. This approach could be too strict, or it may not be quite representative of the actual income distribution. Another usual methodological aspect is that those papers use models focused on conditional means, relying on parametric assumptions for other aspects of the distribution. Although progress in the quantile regression literature allows exploring issues beyond average effects, the bulk of the decomposition literature is based on counterfactual distributions of earnings equations for a single individual (i.e., Machado and Mata (2005) [10], Melly (2005) [11] and Firpo et al. (2009) [12]). This paper attempts to expand this literature by proposing a methodology to generate counterfactual scenarios for bivariate distributions using conditional quantiles. The main contribution of this paper is to show that the problem of generating counterfactual income distributions for both household members is essentially an exercise in numerical integration involving a joint mechanism to generate a pair of random variables through their marginal distributions. Once this mechanism is established, it is possible to use the ranking association of both household members in order to obtain a set of replicates or realizations of the joint distribution. Nevertheless, the fact that incomes are related to observable characteristics makes it necessary to introduce some structure into the conditional income distribution. Conditional quantiles are useful for modeling this for two reasons: first, they are the counterpart of the conditional cumulative distribution, and second, they are easily estimable by standard methods. Quantile regressions provide an indirect way to capture the unobservable heterogeneous effects on each marginal distribution. Finally, the last step of the proposed method is to incorporate the relationship between the conditional incomes of both household members using a probabilistic association of conditional rankings. The paper is organized as follows. In Section 2, a methodology to simulate bivariate random variable realizations based on marginal distributions is presented. Section 3 extends this idea to conditional joint distributions and its application to counterfactual distributions. Section 4 shows an empirical application with household survey data for different countries in the Southern Cone of Latin America. Finally, Section 5 discusses the results and the scope of the methodology. Generating Random Variables Generating random variables in the univariate case is relatively simple, and there are several methods available. The most widely used is the inverse cumulative function method: let U be a random variable with uniform distribution U(0, 1); then the transformation F_Y^{-1}(U) generates a random variable with distribution F_Y(y). Thus, this procedure simply consists of taking a realization u of a uniform random variable and then computing the u-th quantile Q_Y(u) ≡ F_Y^{-1}(u).
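As a quick illustration of the inverse cumulative function method, the sketch below draws exponential variates from uniforms; the exponential distribution is chosen only because its quantile function has the simple closed form Q(u) = −ln(1 − u)/λ.

    import numpy as np

    rng = np.random.default_rng(0)

    def exponential_quantile(u, lam=1.0):
        """Quantile (inverse CDF) of Exp(lam): F^{-1}(u) = -ln(1 - u)/lam."""
        return -np.log(1.0 - u) / lam

    u = rng.uniform(size=100_000)            # realizations of U(0, 1)
    y = exponential_quantile(u, lam=2.0)     # realizations with distribution F_Y
    print(y.mean())                          # close to 1/lam = 0.5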
In the case of integer variables, the logic is quite similar to the continuous case (Devroye, 1986) [13]. The bivariate setup is more complex because the statistical relationship between the two variables must be considered. A closely related problem can be found in the study of copula functions. A copula is a function that links the joint distribution to the one-dimensional marginal distributions (Nelsen, 1999) [14]. As in the univariate case, there are several methods to create a bivariate random draw. For example, the conditional distribution method allows one to generate a random vector (y1, y2) using a vector (U1, U2) of independent uniform random variables. Specifically, the method of the conditional distribution requires the following two steps: (1) compute y2 = F2^{-1}(U2), where F2(·) is the marginal cumulative distribution of y2; and (2) compute y1 = F^{-1}(U1|y2), that is, use the inverse of the cumulative distribution of y1 conditional on y2. The key to this process is knowing the exact functional form of the conditional distribution, which can be too strict a requirement in practice. Another strategy that allows us to adapt the univariate methods (such as the inverse cumulative function) to the bivariate problem is the grid method. Before explaining this procedure, it is appropriate to give some definitions. The encoder function T maps each ordered pair of grid coordinates (m1, m2), with m1 ∈ {1, ..., M1} and m2 ∈ {1, ..., M2}, to a single integer m; its image Im(T) is a set of consecutive integers. The most interesting property of the encoder function is that it has a single value m for each ordered pair (m1, m2). Therefore, each coordinate is identified by the following property. Property 1 (decoding functions). Let m ∈ Im(T). Then, the coordinates m1 and m2 can be recovered from m through the corresponding decoding functions. The last element needed is the set of grids of an enclosure A ⊂ R+^2. Finally, consider two random variables (y1, y2) ∈ A with joint density function f(y1, y2). To generate a realization of a vector (ỹ1, ỹ2) from the population distribution f(y1, y2) we can use the grid method by following the next steps: 1. Partition the enclosure A into the grids C_m. 2. Calculate the probability mass of each grid, p_m = Pr[(y1, y2) ∈ C_m], for every m. 3. Generate a realization of an integer univariate random variable m̃ with probability distribution p_m, calculated in the previous step. 4. Decode m̃ to obtain the vector (m̃1, m̃2). 5. Compute the realization of (ỹ1, ỹ2) by assigning values within the grid C_m̃. A concrete implementation is sketched below. In the accompanying figure, the dotted lines delimit the grids subdividing the enclosure (i.e., the support of both random variables). Clearly, in the first case the probability mass (measured by the proportion of points falling into each grid) is concentrated on the diagonal given by the bisectrix, while in the second case there is no clear pattern for the joint probability. Logically, the greater the number of grids, the better the approximation of the method (Hörmann et al., 2004 [15]). Therefore, the method incorporates the statistical relationship between y1 and y2 through the probability of each grid. Lastly, note that the grids can be determined by their marginal quantiles, defining their boundaries through values of the marginal cumulative distributions F_j(·). The validity of this equivalence rests on the fact that the cumulative distribution is an increasing monotonic transformation of the random variable's support. In other words, the value F_j(·) represents the ranking position resulting from sorting the y_j's increasingly. This establishes a one-to-one relationship between any value of y_j and its ranking.
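The sketch below implements one concrete version of the grid method on the ranking (probability) plane. Since the exact encoder convention is garbled in this copy of the paper, the row-major encoding m = (m2 − 1)·M1 + m1 used here is a hypothetical but standard choice, and the grid probabilities p_m are estimated empirically from the marginal ranks of a reference sample.

    import numpy as np

    def encode(m1, m2, M1):   # hypothetical row-major encoder: one valid convention
        return (m2 - 1) * M1 + m1

    def decode(m, M1):        # corresponding decoding functions
        return (m - 1) % M1 + 1, (m - 1) // M1 + 1

    def grid_sample(y1, y2, M1, M2, n_draws, rng):
        """Draw pairs from the empirical joint distribution via the grid method."""
        n = len(y1)
        r1 = np.argsort(np.argsort(y1)) / n            # empirical marginal ranks F_1(y1)
        r2 = np.argsort(np.argsort(y2)) / n
        m1 = np.minimum((r1 * M1).astype(int) + 1, M1) # grid coordinates of each point
        m2 = np.minimum((r2 * M2).astype(int) + 1, M2)
        codes = encode(m1, m2, M1)
        p = np.bincount(codes, minlength=M1 * M2 + 1)[1:] / n   # grid masses p_m
        draws = rng.choice(np.arange(1, M1 * M2 + 1), size=n_draws, p=p)
        out = np.empty((n_draws, 2))
        for k, m in enumerate(draws):
            g1, g2 = decode(m, M1)
            idx = rng.choice(np.where((m1 == g1) & (m2 == g2))[0])
            out[k] = y1[idx], y2[idx]                  # assign a value within grid C_m
        return out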
Then, the grid C_m can be written, in terms of the marginal rankings, as the set of pairs (y1, y2) whose marginal CDF values fall in the rectangle (a_{m1−1}, a_{m1}] × (b_{m2−1}, b_{m2}] of the probability plane. The grid definition on the marginal probability plane is equivalent to defining the grid in terms of the levels of both variables. Then, looking at the grid plane makes it possible to adapt this method to the context of conditional quantiles. This is a key idea because it is precisely the estimation target of the quantile regression technique. Briefly, building the link between the probability grids and the conditional quantiles allows us to associate the marginal rankings with the univariate method of random sampling for the purpose of generating counterfactual distributions. Furthermore, this strategy requires less information than the method of the conditional distribution, given that it only requires knowing the probability of each grid rather than an entire functional form for the distribution of y1 conditional on y2. Population Consider now the distribution of (y1, y2) depending on a group of covariates (x1, x2). In particular, consider the following linear model for the pair of random variables (y1, y2):

y1 = x1 β1 + ε1, y2 = x2 β2 + ε2, (1)

where x1 and x2 are observable covariate vectors and the errors have a joint density f(ε1, ε2|x), with x ≡ (x1, x2). Using Skorohod's representation, the same model can be formulated in conditional quantile form:

y1 = x1 β1(θ1), y2 = x2 β2(θ2), (2)

where (θ1, θ2) are two random variables whose domain is A = [0, 1] × [0, 1]. Given the joint density f(ε1, ε2|x), the density of this transformation can be derived; note that this function is the second derivative of the copula of y1 and y2 conditional on x. The estimation of this object is not easy if we do not previously postulate some parametric assumptions (e.g., bivariate Gaussian). While there are several available parametric forms for copulas, such as the Fréchet and Mardia families, our goal is to keep the nonparametric aspect that characterizes the quantile regression approach. However, it is unclear how the density estimation would be useful for generating a sequence of random numbers to build counterfactual scenarios.¹ In this context, generating random values for a vector (y1, y2) conditional on x appears as a simple extension of the grid method explained in the previous section: 1. Build the grids C_m on the (θ1, θ2) plane. 2. Calculate the probability mass p_m of each grid. 3. Generate a realization of an integer random variable m̃ with probability distribution p_m. 4. Decode m̃ to obtain the pair (m̃1, m̃2). 5. Get realizations of the pair (θ̃1, θ̃2) by assigning values within the grid C_m̃. 6. Generate (ỹ1, ỹ2) using the pair (θ̃1, θ̃2) and the univariate method of the inverse cumulative function: ỹ1 = Q_{θ̃1}(y1|x1) and ỹ2 = Q_{θ̃2}(y2|x2). So far, all the elements used in each step of the process come from the population and are thus unobservable for the econometrician. Therefore, an estimation strategy is required. The next section discusses this topic when a random sample is available instead of population data. Sample Estimation To generate a replicate (ỹ1, ỹ2) using a random sample we can apply the same procedure explained in the preceding paragraphs, but replacing each element with its sample analogue. Specifically, we can estimate the conditional quantiles Q_{θj}(yj|xj) = xj βj(θj) for j = 1, 2 using a certain grid of values, e.g., θ = 0.05, 0.10, ..., 0.90, 0.95; a sketch of this estimation step follows below. The classic reference for obtaining consistent estimators of β1(θ1) and β2(θ2) is Koenker and Bassett (1978) [16]. The grid method steps when working with sample estimators are: 1. Build C_m using x_{1i}β̂1(a_m) and x_{2i}β̂2(b_m) as delimiters; the remaining steps mirror the population procedure, with each object replaced by its sample counterpart.
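For the estimation step just described, standard quantile regression routines suffice. The sketch below uses the QuantReg class from statsmodels to estimate β̂(θ) over a grid of quantiles for one earnings equation; the simulated data and variable names are placeholders rather than the household surveys used in the paper.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5_000
    X = sm.add_constant(rng.normal(size=(n, 3)))   # constant + 3 placeholder covariates
    true_beta = np.array([1.0, 0.5, -0.3, 0.2])
    y = X @ true_beta + rng.normal(size=n) * (1.0 + 0.5 * np.abs(X[:, 1]))

    thetas = np.arange(0.05, 1.0, 0.05)            # grid of quantiles, as in the text
    betas = np.vstack([sm.QuantReg(y, X).fit(q=th).params for th in thetas])

    # conditional quantiles for one observation: Q_theta(y | x0) = x0 beta_hat(theta)
    x0 = np.array([1.0, 0.2, -0.1, 0.3])
    cond_quantiles = betas @ x0
    print(cond_quantiles[[0, 9, 18]])              # approx. 5th, 50th, 95th percentiles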
All the estimates used in each of the previous steps have good asymptotic properties (consistency) under the usual exogeneity assumption (Koenker, 2005) [17]. Moreover, if the number of cells M is large enough, the grid method fits better. However, the number of different quantile regressions that can be estimated with a finite sample size is limited. Portnoy (1991) [18] shows that this number is O(n · log(n)). Nevertheless, this rate corresponds to the univariate case and, to the best of our knowledge, there is no comparable study for the bivariate analysis. On the other hand, taking too many quantiles affects the consistency of the second step of the procedure, because the probabilities of each grid are then estimated with few observations. Therefore, there is a trade-off between the number of grids and the precision of the method. By the continuous mapping theorem, the method is expected to work well with relatively large sample sizes, provided that the sample allows subdividing the enclosure into a greater number of grids. Counterfactual Distributions The proposed methodology can be used to generate counterfactual distributions resulting from changes in the determinants of the income distribution, as in Oaxaca-Blinder decompositions. In particular, this proposal is in line with the literature initiated by Machado and Mata (2005) [10] and Melly (2005) [11]. Our contribution to this literature is to extend their method to the case where there are two variables of interest (y1, y2), or some function of them. For example, the distribution of household per capita income is the variable of interest in the vast majority of studies about inequality and/or poverty. If y1 and y2 respectively represent the head's and spouse's individual incomes, then household income is the sum of them plus the income of the other family members. After calculating the total income received by each household, this number is divided by the number of household members to obtain household per capita income. Assuming that incomes from other family members and those coming from non-labor sources are constant, the distribution of household per capita income depends only on the determinants of the couple's income.² Formally, let y_t be the vector of household per capita income for all households observed in year t; then the income distribution can be represented as

y_t = D(β_{1t}(θ1), β_{2t}(θ2), r_t(θ1, θ2)).

That is, y_t is a function of the parameters of each income equation at year t, as well as of the probabilistic relationship between the two conditional rankings, represented by r_t(θ1, θ2). Let I(·) be any distributive indicator based on the vector of household incomes (e.g., the Gini or Theil index, among others); then I(y_t) − I(y_s) is the distributional change between years t and s. A decomposition of this difference is an exercise in comparative statics where some income distribution determinants are changed and the others remain constant. The key is to build a set of counterfactual scenarios in which some determinants are changed. The mechanism to do so is to generate replicates of the income distribution using the method explained in Section 2. For example, let us consider three counterfactual scenarios:

y_t^1 = D(β_{1t}(θ1), β_{2s}(θ2), r_s(θ1, θ2)),
y_t^2 = D(β_{1s}(θ1), β_{2t}(θ2), r_s(θ1, θ2)),
y_t^{12} = D(β_{1t}(θ1), β_{2t}(θ2), r_s(θ1, θ2)).

In the first equation, only the parameters of the household head have changed, and this represents the first scenario. In the second, only those of the spouse have been modified, while in the third both parameter sets have changed.
Then, if I(y_t) − I(y_s) is the observed change in the distributive indicator, the effect of each scenario is:

y_t^1 − y_s = D(β_{1t}(θ1), β_{2s}(θ2), r_s(θ1, θ2)) − D(β_{1s}(θ1), β_{2s}(θ2), r_s(θ1, θ2)), (3)
y_t^2 − y_s = D(β_{1s}(θ1), β_{2t}(θ2), r_s(θ1, θ2)) − D(β_{1s}(θ1), β_{2s}(θ2), r_s(θ1, θ2)), (4)
y_t^{12} − y_s = D(β_{1t}(θ1), β_{2t}(θ2), r_s(θ1, θ2)) − D(β_{1s}(θ1), β_{2s}(θ2), r_s(θ1, θ2)). (5)

² To incorporate non-labor income in a microsimulation exercise is not a simple task and depends mainly on the social policies applied in each country under analysis. See Badaracco (2014) [19] as an example for the countries in the Southern Cone of Latin America.

To ease notation, we have omitted the observable characteristics of the household head (x1) and spouse (x2). This is due to the fact that these determinants remain constant in our simulation exercise. Notwithstanding, our methodology admits counterfactual scenarios including isolated changes in those characteristics. In the terminology of Firpo et al. (2011) [20], the result of such an exercise is called a "characteristics effect," while the scenarios proposed in Equations (3)-(5) are "parameter effects." The aim of this paper is to present the simulation methodology and the performance of a simple exercise implemented with different sample sizes. Therefore, to keep the analysis simple, only the counterfactual scenarios involving parameter changes were considered, separating the effects of both household members to explore the potential of the method. Empirical Illustration In this section we use real data as an application of the proposed methodology to generate counterfactual distributions of per capita household income. The model is defined by Equations (1) and (2), where y1 and y2 represent the labor earnings (in logs) of the household head and spouse, respectively. The vectors x1 and x2 are observed characteristics (age, education, gender, number of children), while ε1 and ε2 are terms representing the unobserved determinants of earnings. We focus our analysis on five countries in Latin America, particularly those belonging to the Southern Cone: Argentina, Brazil, Chile, Paraguay and Uruguay. The data come from household surveys collected by the statistical institutes of each country.³ We use three alternative methods to estimate the earnings models. The first is a seemingly unrelated regression model (SUR), in which the parameters in Equations (1) and (2) are estimated by OLS while allowing correlation between the error terms of both equations. The second is the estimation of an independent quantile regression model (IQR), in which the assumption is that the error terms are independent. Finally, the outcomes from these methods are compared with those obtained by applying the methodology of estimating through quantile regressions while relating the model equations using the grid method (DQR). The first exercise is to analyze the performance of the proposed methodology (DQR) relative to the other two strategies (SUR and IQR). We use an ad hoc rule to choose the number of quantiles for each earnings equation. This rule ensures that the number of observations in each grid will be around 40 in the case in which both equations are independent.⁴ The reason behind this rule is to obtain reliable estimates for each grid without losing the asymptotic properties. Using these three methods, the model's coefficients are estimated in order to generate the joint distribution of labor earnings of heads and spouses in a particular year. These earnings are used to build a new household per capita income and compute the Gini coefficient (a minimal implementation is sketched below). Table 1 shows the results.
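The distributive indicator I(·) reported in the tables is the Gini coefficient. A minimal implementation over a vector of household per capita incomes is sketched below; it uses the common sorted-rank formula and is independent of any particular survey.

    import numpy as np

    def gini(incomes):
        """Gini coefficient via the sorted-rank formula (nonnegative incomes)."""
        y = np.sort(np.asarray(incomes, dtype=float))
        n = y.size
        ranks = np.arange(1, n + 1)
        return 2.0 * np.sum(ranks * y) / (n * np.sum(y)) - (n + 1.0) / n

    # decomposition use: given simulated counterfactual income vectors built as in
    # (3)-(5), e.g., effect_head = gini(y_t1) - gini(y_s), effect_spouse = gini(y_t2) - gini(y_s)
    print(gini([1.0, 1.0, 1.0, 1.0]))   # 0.0 under perfect equality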
The first panel of the table shows the Gini coefficient observed in each country, followed by the Gini coefficient of the simulated income from each method. The standard errors of each coefficient are computed using 50 simulation replicates. Errors in the SUR model are generated 50 times from a bivariate normal distribution using all the estimated parameters. For the IQR, uniform random variables are generated independently, so that there is no conditional ranking association. Finally, under the DQR simulation, the values of the labor earnings of the head and spouse are obtained from the probabilities estimated with the grid method, namely considering the relationship between the two equations. The second panel in the table presents the mean square error (MSE) of each estimate with respect to the empirical distribution. (Note: Standard errors in parentheses. The observed Gini coefficient corresponds to the initial year (see Table A1). The number of observations corresponds to the sample of households that have both head and spouse in the initial year.) The SUR method has the lowest MSE for Argentina and Paraguay, followed closely by the DQR method. Therefore, in these two cases, using the conditional mean with an assumption of normality in the errors fits the real data relatively well. In the cases of Brazil, Chile and Uruguay, the method that achieves the lowest MSE is the DQR, followed by the IQR method. As discussed above, these results suggest that the DQR method requires a certain number of observations to achieve relatively good performance. However, in the case of Uruguay, which has a smaller sample than Argentina, the DQR method has the lowest MSE. Hence, this methodology may also depend on how well the model fits the empirical distribution. In any case, large sample sizes should improve the approximation of the DQR method. The next step in this section is to perform the micro-decomposition discussed in Section 3.3 in order to compare the results obtained with the three methods. As an illustration, we estimate the parameter effects in the labor earnings equations; Table 2 reports the results. The greatest discrepancies among methodologies belong to the SUR method, while the IQR and DQR do not differ significantly from each other. The differences between the DQR and IQR are between 0 and 0.1 points (in absolute value) in the countries where the DQR achieves the lowest MSE. This result suggests a sizable difference in terms of effects (between 0% and 30% in some cases). However, the economic significance of these differences is small (one tenth of a Gini point). The case of Paraguay shows a potential weakness of the DQR method when there are too few observations available. Since DQR does not achieve the lowest MSE in this country, the differences with the other methods suggest that with small samples the DQR method could present a potential bias in the estimated effects. Conclusions This paper proposes a method to incorporate the intra-household relationship between the labor incomes of the head and the spouse in decomposition studies. The paper closely follows the articles of Machado and Mata (2005) [10] and Melly (2005) [11]. We extend these papers by incorporating the correlation of intra-household income, modeled by a simultaneous equation system. The key idea in our proposal is to associate conditional quantiles by adapting a standard method for generating random variables: the grid method.
The complexity associated with joint employment decisions in a household leads us to focus our analysis on the behavior of the head and spouse, independently of the decisions of the rest of the family members. Furthermore, our model only analyzes the determination of labor earnings, assuming all other sources of income remain unchanged. Incorporating these other sources is an exercise that does not allow easy generalization, because non-labor income depends mainly on the social policies applied in each country (Badaracco, 2014 [19]). An empirical application performing a simple decomposition exercise was implemented using data from household surveys for the Southern Cone countries of Latin America. The counterfactual scenarios considered consisted of a change in the parameters of the labor earnings equations at two different moments in time. The results show that, in general, incorporating the interaction of household incomes substantially improves the goodness of fit to the empirical income distribution. Also, using quantile regression can dramatically change the results of the simulation exercise. However, although the introduction of correlation in incomes yields different results, their economic significance seems to be minor. The comparative exercise among different surveys shows that the performance of the method clearly depends on the sample size, which limits the number of grids. Moreover, given the sample size, the goodness of fit of the semiparametric model seems to be another key point. The paper omits some important issues related to the estimation of earnings equations, such as sample selection and endogeneity of covariates (e.g., education). The main reason for doing so is that our target is to propose a methodology for the generation of counterfactual distributions, showing its application using standard regression methods developed in the literature. Solving all these problems requires the use of more specific methodologies that are still under development, such as those in Buchinsky (2001) [21] and Chernozhukov and Hansen (2006) [22]. Exploring the performance of the proposed method under these estimation techniques is left for future research.
6,055
2015-11-09T00:00:00.000
[ "Economics" ]
Evaluation and New Innovations in Digital Radiography for NDT Purposes The identification of dental and maxillofacial lesions has improved substantially with the use of digital radiography. Digital radiography is a form of X-ray imaging that uses digital X-ray sensors instead of conventional photographic film. Its benefits include the time saved by avoiding chemical film processing, the ability to transfer and enhance images digitally, and lower radiation exposure for workers. High-energy non-destructive testing (NDT) applications are advancing thanks to progress in digital radiography technology. Digital radiography has numerous advantages, including the ability to produce images instantly and to present a high-quality image on a computer screen. This study investigated how a U-net deep learning semantic segmentation model performed relative to two image quality parameters: signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). The exposure factors used to produce the input images, such as kilovoltage, milliamperage, and exposure time, affected the quality of the resulting radiographs. The findings of this study highlight the importance of building a training dataset that is balanced with respect to the quality factors examined, in order to improve the usefulness of deep learning segmentation models for NDT digital X-ray radiography applications. INTRODUCTION A broad family of examination techniques known as non-destructive testing and evaluation (NDT&E) is used in a wide range of circumstances. Industrial radiographic testing (RT) is a non-destructive testing (NDT) method that is frequently used to assure the quality of industrial products. Radiographic inspection uses electromagnetic radiation with shorter wavelengths and higher photon energies, namely X-rays and gamma rays, which differ mainly in their point of origin. Gamma rays are produced by nuclear transitions, while ordinary X-rays are produced by electronic transitions. Gamma rays are emitted by radioactive substances such as Iridium-192 and Cobalt-60 (Co-60), which are used as industrial radioisotopes, whereas X-rays are created by X-ray generators. Radiography is the primary diagnostic technique for finding dental and maxillofacial lesions. Since radiographs are two-dimensional representations of three-dimensional objects, distinct anatomical features are superimposed on one another, making it challenging to see lesions. Digital Subtraction Radiography (DSR) has the advantage of removing the complicated anatomic background against which small changes occur, considerably increasing the conspicuity of the alterations.
Digital Radiography As solid-state X-ray detector systems with higher resolution and lower cost have become available, film radiography has been losing some of its importance. Digital radiography (DR) and computed radiography (CR) are terms commonly used to describe radiography performed with solid-state detectors. One advantage of an electronic recording system is speed: the radiographic image is available almost immediately, without waiting for film to develop. Some DR systems also have the advantage of being able to record continuously, so the inspector obtains a sequence of radiographic images of a moving sample. Defects that cannot ordinarily be seen in one orientation may be clearly apparent in another, and because the defects move, motion can improve the inspector's ability to recognize them. Finally, thanks to digital recording and archiving, the inspector can compare recent radiographs with older ones or with the radiographs of two different samples. Techniques are now available with enough resolution to find the majority of composite flaws. Various configurations have been developed for DR systems. A digital radiography system comprises the following functional parts: • A digital image sensor (the remaining components are listed with Figure 1 below) Need for digital X-radiography Image processing is the bridge between human vision and image digitization. By using machine vision to process the image data, digital image techniques provide better visual information. Digital image processing can be used to solve many problems, including face recognition, weather forecasting, satellite imaging, computer vision analysis, and medical imaging. In manufacturing, computer vision systems that manage, monitor, detect, and verify production equipment rely heavily on digital image processing. NDT is a vital process in safety-critical manufacturing organizations such as boiler plants, nuclear research facilities, defense equipment manufacturers, oil and gas producers, heavy vehicle manufacturers, space research organizations, and construction firms. Visual inspection during X-radiography tests for weld defect examination cannot always deliver accurate findings, since it depends on the inspectors' skill and the type of the X-radiography film; consequently, a mistaken inspection may result in a major industrial disaster. By combining digital X-radiography with machine vision algorithms, such misjudgments can be prevented. LITERATURE REVIEW The usefulness of digital radiography in recognizing flaws, cracks, and other irregularities in materials, as well as its potential for use in 3D imaging, was highlighted in a survey of recent improvements in digital radiography and its industrial applications by Sun et al. Raju and Rao (2017) examined the performance of various digital radiography systems for industrial applications and suggested that the choice of the best system depends on the type, thickness, and shape of the material as well as the desired image quality. Kim et al. (2018) conducted a review of the latest developments in digital radiography for NDT, covering 3D imaging techniques such as computed tomography (CT) and digital tomosynthesis (DT). Image quality, resolution, and dose reduction in digital radiography were highlighted in their review.
The development of portable and handheld digital radiography devices for field applications, as well as the use of state-of-the-art algorithms for image processing and analysis, was covered by Janney and Lavender (2015) in their discussion of recent trends in digital radiography for NDT. Digital radiography was among the non-destructive evaluation techniques for composite materials that Shah and Jalal (2016) examined. Their study made clear how important it is to choose the right technique depending on the type of material, its shape and thickness, and the required sensitivity and resolution. EVOLUTION OF DIGITAL RADIOGRAPHY TECHNOLOGY The development of X-ray image receptors has been strongly influenced by advances in digital television and computer technology. Compared with other scientific discoveries, the dominance of the original film-based radiography technique for over a century is remarkable. Because of its practical utility and perceived excellent image quality, X-ray film has been the industry standard for industrial radiography for over 60 years. The functions of image data acquisition, display, storage, and communication are all carried out through X-ray films. With the advent of Digital Radiography (DR) technology, the film-based technology that had ruled the roost for the previous century has begun to fade away. Because of the low contrast gain obtained in the toe and shoulder regions of the film characteristic curve, which substantially reduces the detection of details located in these regions of the curve, the shape of the film characteristic curve lowers the Detective Quantum Efficiency (DQE) when film is used as the optical detector. A film curve strikes a balance between a wide dynamic range and high local contrast. A large dynamic range is useful for simultaneously displaying the extensive range of energy values applied to the detector. Film radiography demands a lot of time and processed film has a short shelf life, so it must be stored in a space with controlled humidity and temperature. In addition, the chemicals used to process film must be properly disposed of. Digital radiography, by comparison, needs none of the above. Digital networks can be used to create, enhance, analyze, store, and share radiographic images. Digital detectors have completely replaced film in the photography business over recent decades because of the influx of digital technology. Radiography has only recently embraced digital technology, mostly because of the absence of suitable X-ray optics and the requirement for large-area imaging equipment, which is more difficult and expensive to produce. Over the past decade, various devices have been under active development and are now being made available for digital applications. Three of film's four functions (image display, storage, and communication) can now be replaced by electronic technologies thanks to recent advances in both technology and economic feasibility. The inability of screen-film systems to separate image acquisition from image display, which is possible with digital detectors, is perhaps their weakest point.
There have been several generations of advances in radiography as it progressed from the conventional film-based technique to the latest digital technology. The idea of filmless radiography was inspired by fluoroscopy technology from the 1970s. In this technique, the image is produced by the X-rays that pass through the object interacting directly with a fluorescent screen coated with X-ray scintillators such as CsI, NaI, and so on. The radiographic image is formed by the scintillator screen, which converts X-rays into visible photons. Originally, this image was viewed directly, but poor image quality made it hard to make out any details. Later, an image intensifier system was introduced to improve the brightness and quality of the image. DEVELOPMENTS IN DIGITAL RADIOGRAPHY Flat-panel X-ray detectors are considerably more compact and produce higher image quality. During X-ray exposure, the detector temporarily stores the electric charge pattern that is created when incident X-rays hit the flat panel. Following exposure, switching elements transfer the electric charges from each detector pixel to amplifiers and then to analog-to-digital converters, which produce the raw digital image. Flat-panel systems can offer compact designs with rapid access to the images because the charge collection and readout circuits are located right next to the X-ray detector. Two technologies are available for flat-panel detectors: indirect conversion and direct conversion. Indirect conversion involves a two-step process for X-ray detection: X-ray energy is captured and converted to light using a fluorescent substance such as gadolinium oxysulfide or cesium iodide. Because some light energy is lost, the signal scatters; thus charge is collected in pixels other than those with which the X-ray interacted, reducing image sharpness, before being converted into electronic charge by numerous thin-film photodiodes. An array of thin-film transistors (TFTs) then reads out the charge pattern. In direct conversion, amorphous selenium, an X-ray photoconductor material, is used in flat-panel detectors to convert X-ray quanta directly into electric charges. The charges are collected in the same pixel where they were created by the X-ray interaction because of the strong electric field. In contrast to indirect detectors, the image does not spread to adjacent pixels; accordingly, information content and image sharpness are preserved. These detectors require no further steps, intensifying screens, or intermediate stages. The electric charge pattern is held by the detector during X-ray exposure in both direct conversion and indirect conversion detectors. Following the exposure, this charge is routed by the TFT switching circuits to amplifiers and analog-to-digital converters, which produce the raw digital image. The raw image is then processed digitally to make it suitable for display. To deliver high-quality images even when they are overexposed or underexposed, fully automatic image processing exploits the larger dynamic range spanned by digital radiography systems.
Flat-panel detectors enable real-time image retrieval, avoiding delays from film processing. While examining the image, the exposure parameters can be adjusted and optimized to give a better picture without spending any more money or effort. For cracks, 100 % inspection coverage is attainable. With the right software, a flaw can be precisely located and measured using the inherent optical density profile to identify it. Defect signal identification is possible using the optical density profile. By storing the images of a given location on an object from several angular directions, 3D information about any irregularity can be obtained. The information above can also be used to determine the depth of a flaw and its location. Phantom Aluminium Plates Seven square aluminium plates, each 300 mm × 300 mm × 6.5 mm in size, were used in this study to collect data. There are 25 flat-bottom holes on each plate (a total of 175 holes across all 7 plates), which are either circular or square in shape and range in depth from 0.5 mm (the shallowest) to 5.5 mm (the deepest). Data Acquisition This study used a digital X-ray radiography imaging system with a maximum tube voltage of 150 kV and a maximum current of 0.5 mA. A fixed source-to-detector distance (SDD) of 600 mm was maintained during the entire image acquisition process for the 7 plates, with the plates lying directly on the detector. This was done to ensure that the distribution of gray values across all plates at regions of the same thickness remained consistent for a given exposure factor. The gray values vary over the flat-bottom holes with their varied depths, creating features that are visible in the radiographs. Table 1 shows the twenty different exposure parameter combinations that were used on each plate while keeping identical positioning throughout the 20 exposures. This made it simple to annotate features for use as ground truth during the development of the deep learning models. Cropping and Dataset Preparation We cropped each radiographic image into 512 × 512 pixel regions of interest (ROIs), each containing one flat-bottom hole, to overcome the challenge posed by the inhomogeneous distribution of grayscale values that inadvertently influences the SNR values across regions of the plates (as seen in Table 1). Thus, 25 cropped images were produced from a single image. Data Sorting The dataset comprising 2928 cropped images was duplicated to achieve the study's objective. The first dataset was created by sorting the images according to ascending SNR measurement results. The CNR values between each image's feature and background were not considered. The second dataset was sorted by ascending CNR values between each image's feature and the background that accompanies it. Like the SNR sorting strategy, this CNR sorting approach did not consider the SNR values. The SNR values are obtained using Equation (1). The duplicated dataset was then sorted by the contrast-to-noise ratio (CNR) readings.
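Equations (1) and (2) are not reproduced in the extracted text, so the snippet below assumes the usual definitions: SNR as the mean signal of an ROI over its standard deviation, and CNR as the difference between the mean signal levels of two ROIs over their average standard deviation. Treat it as a sketch of the sorting metrics, not the paper's exact formulas.

import numpy as np

def snr(roi):
    # Assumed form of Equation (1): mean signal level over its standard deviation.
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi_feature, roi_background):
    # Assumed form of Equation (2): difference of the two ROI means over the
    # average standard deviation (feature ROI on a flat-bottom hole,
    # background ROI beside it).
    a = np.asarray(roi_feature, dtype=float)
    b = np.asarray(roi_background, dtype=float)
    return abs(a.mean() - b.mean()) / (0.5 * (a.std() + b.std()))

# Example: score a synthetic 512 x 512 crop containing one flat-bottom hole.
rng = np.random.default_rng(1)
crop = rng.normal(loc=120.0, scale=5.0, size=(512, 512))
crop[200:260, 200:260] += 15.0                        # brighter hole region
print(snr(crop[50:110, 50:110]))                      # background SNR
print(cnr(crop[200:260, 200:260], crop[50:110, 50:110]))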
The CNR is defined in EN ISO 17636-2:2022 as the ratio of the difference between the mean signal levels of two regions of interest (ROIs) to the average standard deviation of the signal levels. Hence, considering the various sizes of the features, a procedure of creating two ROIs (one on a flat-bottom hole and the other on the background) was devised to compute this for each image in the collection. CNR values were obtained by repeating this procedure on all the images according to Equation (2), and the dataset was partitioned into different groups based on the resulting CNR values. The stochastic distribution of CNR values across the dataset can be seen when it is sorted by ascending SNR measurements, as is visible from the plot of CNR and SNR readings for a cross-section of randomly chosen cropped images shown in Figure 2. This dispersion results from the different flat-bottom hole depths and the exposure parameters applied during image capture. Data Splitting and Creation of Ground Truth The newly sorted images were separated into four different datasets based on the CNR and SNR values to fully investigate the impact of CNR and SNR on the development of flaw detection algorithms for NDT radiographic images. A range of values characterized as high or low was established for each group (CNR and SNR) for both datasets based on the CNR and SNR measurements. The training, validation, and test1 data for the four resulting datasets therefore belonged to either a high or a low measurement value range of CNR or SNR. A second test dataset (test2) was created for each of the 4 groups using images from the opposite end of the measured range (described below). After training and validation, the trained model's performance is finally assessed using the test1 subset (20 % of the dataset). To prevent bias in the model evaluation, this portion of the image data is used for neither training nor validation. Accordingly, it is recommended to use this dataset to assess the model's generalization, i.e., its capacity to perform well on unseen data. test2 subset: for each of the sorted datasets in our study, a second test dataset (test2) was created, with each test2 image set belonging to the far end of the measured CNR or SNR values considered for sorting the dataset. To create the ground truth for model training, all of the features on the plates were manually annotated using CVAT. Deep-Learning Model Training A U-net deep learning architecture was used in this study; it was originally developed for biomedical image segmentation tasks. Since its inception, the U-net deep learning architecture has attracted a great deal of attention from researchers and is now being used for semantic segmentation tasks in various fields. Figure 3 shows a graphic illustration of the U-net architecture. The design has an encoder-decoder structure with skip connections that allow the recovery of high-resolution features, improving the accuracy of segmentation results. Figure 3: Deep-learning U-net architecture To obtain the best results, several key parameters were used while training our model. The input images are 512 by 512 pixels in size, and to increase the diversity of our dataset and improve generalization, we used data augmentation techniques such as random rotation and flipping.
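The paper does not spell out its exact U-net configuration, so the PyTorch sketch below is a deliberately small stand-in that shows the ingredients named above: an encoder-decoder with skip connections, 512 × 512 single-channel inputs, and a one-channel mask separating flat-bottom holes from background. The depth, channel counts, and loss are illustrative assumptions.

import torch
import torch.nn as nn

def block(cin, cout):
    # Two 3x3 convolutions with ReLU: the basic U-net building block.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.bottom = block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)          # 64 in: upsampled 32 + skip 32
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)          # 32 in: upsampled 16 + skip 16
        self.head = nn.Conv2d(16, 1, 1)    # per-pixel logit: hole vs background

    def forward(self, x):
        e1 = self.enc1(x)                                     # 512 x 512
        e2 = self.enc2(self.pool(e1))                         # 256 x 256
        b = self.bottom(self.pool(e2))                        # 128 x 128
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

model = TinyUNet()
logits = model(torch.randn(1, 1, 512, 512))        # one grayscale radiograph crop
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(1, 1, 512, 512))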
• A device for processing digital images • A tool for managing images • Hardware for storing images and data • An interface to a patient information system • A communications network • A display with controls that viewers can operate. Figure 1: Digital Radiography System Hussain et al. (2019) evaluated the performance of various digital radiography methods for non-destructive testing (NDT) of aircraft parts. Computed radiography (CR) and digital radiography (DR), according to their review, offer similar imaging capabilities, although CR requires longer exposure times. Figure 2: A cross-section of randomly chosen data with corresponding CNR values between the flat-bottom holes and background, arranged in order of rising SNR values. Figures 4 and 5: U-net deep-learning model training curves on high CNR2 accuracy. Table 1: Summary of the exposure settings for each image and the corresponding signal-to-noise ratio (SNR-sorted dataset with associated CNR readings). The test2 images do not fall inside the range of measurement values used for training. This was done to evaluate the effect of the training dataset lacking the particular narrow range of measured values (CNR or SNR) present in test2. The specific function of each subset is described below. Training subset, used for training the U-net deep neural network model: this subset of the dataset contains the most images (60 % of the dataset). By adjusting its weights and biases, the model performs an optimization process as it learns to recognize labeled features (flat-bottom holes) on the radiographs and relationships in the data. The goal is to develop a model that can identify related features in images that are not part of the training dataset. Validation subset: the validation set (20 % of the dataset) is used to evaluate the model's performance during training and to make the necessary adjustments. This validation subset, in contrast to the training dataset, is used to assess the model's performance on new data rather than to adjust the model's weights and biases. To prevent overfitting, which results in the model performing poorly on new data, hyperparameters (such as the learning rate) are tuned using the validation data. By contrasting the model's performance on the training and validation sets, overfitting can be detected. Table 2 below lists the results of applying the model to the four datasets. The mean IoU values for each of the four datasets (High SNR, Low SNR, High CNR, and Low CNR) are displayed for the corresponding two test sets (test1 and test2).
Table 2: Average intersection-over-union (IoU) values for the four datasets sorted by SNR and CNR values. Interestingly, we found no statistically significant difference between the mean IoUs on the test images (test1 and test2) for the High SNR dataset. The mean IoU value of test1, which falls within the same sorting class as the training data, was slightly lower. Comparable model performance is seen when the model is trained on the Low SNR dataset, and the mean IoUs likewise show some minor differences. In this case, however, the test1 dataset, which falls within the same SNR range as the training dataset, shows slightly superior model performance. As shown in Table 2, the variations in mean IoU for the High CNR and Low CNR datasets are considerably greater on the corresponding test datasets (test1 and test2). Table 3's results show a significant disparity between the test1 and test2 readings. Figures 4 and 5 show the corresponding training curves. Table 3: Mean intersection-over-union (IoU) values from the high CNR2 datasets (with a narrower range of measurement values), sorted by CNR values.
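For readers reproducing the comparison, the IoU metric behind Tables 2 and 3 can be computed from binary masks as below. This is the standard definition and a sketch only, since the paper's evaluation code is not shown; thresholding the U-net logits into masks is an assumed step.

import numpy as np

def iou(pred_mask, true_mask, eps=1e-7):
    # Intersection-over-union for one predicted/ground-truth mask pair.
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return (inter + eps) / (union + eps)

def mean_iou(pred_masks, true_masks):
    # Average IoU over a whole test subset (e.g. test1 or test2).
    return float(np.mean([iou(p, t) for p, t in zip(pred_masks, true_masks)]))

# Predictions would come from thresholding the network output, for example:
# pred_mask = (torch.sigmoid(logits) > 0.5).squeeze().numpy()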
4,802
2023-07-01T00:00:00.000
[ "Materials Science" ]
Using Reinforcement Learning to Provide Stable Brain-Machine Interface Control Despite Neural Input Reorganization Brain-machine interface (BMI) systems give users direct neural control of robotic, communication, or functional electrical stimulation systems. As BMI systems begin transitioning from laboratory settings into activities of daily living, an important goal is to develop neural decoding algorithms that can be calibrated with a minimal burden on the user, provide stable control for long periods of time, and can be responsive to fluctuations in the decoder's neural input space (e.g. neurons appearing or being lost amongst electrode recordings). These are significant challenges for static neural decoding algorithms that assume stationary input/output relationships. Here we use an actor-critic reinforcement learning architecture to provide an adaptive BMI controller that can successfully adapt to dramatic neural reorganizations, can maintain its performance over long time periods, and which does not require the user to produce specific kinetic or kinematic activities to calibrate the BMI. Two marmoset monkeys used the Reinforcement Learning BMI (RLBMI) to successfully control a robotic arm during a two-target reaching task. The RLBMI was initialized using random initial conditions, and it quickly learned to control the robot from brain states using only a binary evaluative feedback regarding whether previously chosen robot actions were good or bad. The RLBMI was able to maintain control over the system throughout sessions spanning multiple weeks. Furthermore, the RLBMI was able to quickly adapt and maintain control of the robot despite dramatic perturbations to the neural inputs, including a series of tests in which the neuron input space was deliberately halved or doubled. Introduction Brain-machine interface (BMI) research has made significant advances in enabling human subjects to control computer and robotic systems directly from their neural activity [1][2][3][4][5][6]. These achievements have been supported by neural decoding studies that have shown how functional mappings can be made between single neuron activity, local field potentials (LFPs), and electrocorticograms (ECoG) and kinematics, kinetics, and muscle activation [7][8][9][10][11][12][13][14][15][16][17][18][19]. Such research has revealed multiple factors that influence neural decoding accuracy on even short timescales (hours to days). For example, performance can be enhanced or degraded by the quantity, type and stability of the neural signals acquired [7,10,11,13,14,16,18,19], the effects of learning and plasticity [8,10,17,18], the availability of physical signals for training the neural decoders [2,3,8], and the duration of decoder use [7,12,17]. These conditions create a dynamic substrate from which BMI designers and users need to produce stable and robust BMI performance if the systems are to be used for activities of daily living and increase independence for the BMI users. Two particularly significant challenges for BMI neural decoders are how to create a decoder when a user is unable to produce a measurable physical output to map to the neural activity for training the decoder, and how to maintain performance over both long and short timescales when neural perturbations (inevitably) occur.
For BMIs that use chronically implanted microelectrode arrays in the brain, these perturbations include the loss or addition of neurons to the electrode recordings, failure of the electrodes themselves, and changes in neuron behavior that affect the statistics of the BMI input firing patterns over time. The first challenge includes situations such as paralysis or limb amputation, in which there is no explicit user-generated kinematic output available to directly create a neural decoder. To address this, some studies have utilized carefully structured training paradigms that use desired target information and/or imagined movements to calibrate the BMI controller [1][2][3][4][5][6][20][21][22][23]. Other methods involve initializing the decoder with values based on baseline neural activity, ipsilateral arm movements, or randomized past decoder parameters, and then refining the decoder [23]. These methods all involve using supervised learning methods to adapt the decoder to the user's neural activity until effective BMI control has been achieved. The second challenge involves adaptation. Adaptation of a neural decoder after its initial calibration can lengthen a BMI's effective lifetime by compensating for gradual changes in the behavior of the BMI inputs. Several studies that used linear discriminant analysis of electroencephalogram (EEG) data have shown that unsupervised adaptive methods can be used to update model aspects that do not depend on labeled training data [24][25][26]. However, in most cases adaptive BMI systems have relied purely on supervised adaptation. During supervised adaptation, the training data that are used to calculate the decoder are periodically updated using either additional kinematic data [27], recent outputs of the decoder itself (the current decoder being assumed effective enough to adequately infer the user's desired BMI output) [28][29][30][31][32], or inferred kinematics based on known target information as new trials occur [20,21,23]. Rather than using supervised adaptation, we are developing a new class of neural decoders based on Reinforcement Learning (RL) [33,34]. RL is an interactive learning method designed to allow systems to obtain reward by learning to interact with the environment, and it has adaptation built into the algorithm itself through an evaluative scalar feedback signal [35]. As with supervised adaptation methods, these decoders can adapt their parameters to respond to user performance. Unlike supervised adaptation methods, they use a decoding framework that does not rely on known (or inferred) targets or outputs (such as kinematics) as a desired response for training or updating the decoder. Therefore, they can be used even when such information is unavailable (as would be the case in highly unstructured BMI environments), or when the output of the current BMI decoder is random (e.g. an uncalibrated BMI system, or when a large change has occurred within the decoder input space), because they use a scalar qualitative feedback as a reinforcement signal to adapt the decoder. Several studies have shown that RL can be used to control basic BMI systems using EEG signals [36,37] and neuron activity in rats [38,39]. We have recently introduced a new type of RL neural decoder, based on the theory of associative reinforcement learning, that combines elements of supervised learning with reinforcement-based optimization [34].
In that work, we used motor neuron recordings made during arm movements, as well as synthetic neural data generated by a biomimetic computational model, to show how the decoder could be used to solve simulated neuroprosthetic tasks. These tasks involved multiple targets and required the controller to perform sequences of actions to reach goal targets. The current study extends that work by applying this new RL decoder to a real-time BMI task, and by testing its performance in that task when large numbers of the BMI inputs are lost or gained. The RL neural decoder was evaluated under three basic conditions: an absence of explicit kinetic or kinematic training signals, large changes (i.e. perturbations) in the neural input space, and control across long time periods. Two marmoset monkeys used the Reinforcement Learning BMI (RLBMI) to control a robot arm during a two-target reaching task. Only two robot actions were used, to emphasize the relationship between each specific robot action, the feedback signal, perturbations, and the resulting RLBMI adaptation. The RLBMI parameters were initially seeded using random numbers, with the system only requiring a simple 'good/bad' training signal to quickly provide accurate control that could be extended throughout sessions spanning multiple days. Furthermore, the RLBMI automatically adapted and maintained performance despite very large perturbations to the BMI input space. These perturbations included either sudden large-scale losses or additions of neurons amongst the neural recordings. Overview We developed a closed-loop BMI that used an actor-critic RL decoding architecture to allow two marmoset monkeys (PR and DU; Callithrix jacchus) to control a robot arm during a two-choice reaching task. The BMI was highly accurate (~90%) both when initialized from random decoder initial conditions at the beginning of each experimental session and when tested across a span of days to weeks. We tested the robustness of the decoder by inducing large perturbations (50% loss or gain of neural inputs), and the BMI was able to quickly adapt within 3-5 trials. Ethics Statement All animal care, surgical, and research procedures were performed in accordance with the National Research Council Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. They were approved by the University of Miami Institutional Animal Care and Use Committee (protocol: ). The marmosets are housed in a climate-controlled environment, with their overall care closely supervised by the University of Miami Division of Veterinary Services (DVR). The animals are regularly inspected by DVR staff to verify healthy behavior. In addition, the senior veterinary staff performs regular checks that include physical examinations and blood tests. The marmosets receive a daily diet that combines fresh fruits with dry and wet species-specific feeds and have access to water ad libitum. The marmosets are given environmental enrichments, which include toys, novelty food treats, and various privacy houses for play and sleep to ensure animal welfare. Surgical procedures are carried out under sterile conditions in a dedicated operating suite under the supervision of the veterinary staff. Following surgical procedures, lab personnel and DVR staff closely monitor subject health during convalescence until the animals can be returned to the standard (daily) observation schedule.
After the completion of all experiments, the brains are processed for histological evaluation of the recording sites following transcardial perfusion under deep anesthesia. Microwire Electrode Array Implantation Each monkey was implanted with a 16-channel tungsten microelectrode array (Tucker Davis Technologies, Alachua, FL) in the motor cortex, targeting arm and hand areas. A craniotomy was opened over the motor area, and the dura resected. The array placement was made using stereotaxic coordinates [40][41][42][43][44] and cortical mapping (DU motor implant) using a micropositioner (Kopf Instruments, Tujunga, CA). The implant was secured using anchoring screws, one of which served as reference and ground. The craniotomies were sealed using Genta C-Ment (EMCM BV, Nijmegen, The Netherlands). Surgical anesthesia was maintained using isoflurane (PR) or constant-rate ketamine infusion (DU), steroids (dexamethasone) were used to minimize brain edema and swelling, and analgesics (buprenorphine) and antibiotics (cefazolin, cephalexin) were administered postoperatively for 2 and 5 days, respectively. Neural Data Acquisition Neural data were acquired using a Tucker Davis Technologies RZ2 system (Tucker Davis Technologies, Alachua, FL). Each array was re-referenced in real time using a common average reference (CAR) composed of that particular array's 16 electrodes (if an electrode failed it was removed from the CAR) to improve SNR [45]. Neural data were sampled at 24.414 kHz and bandpass filtered (300 Hz-5 kHz). Action potential waveforms were discriminated in real time based on manually defined waveform amplitudes and shapes. The recorded neural data included both multineuron signals and well-isolated single neuron signals (collectively referred to here as neural signals), which were used equivalently in all real-time and offline tests. On average there were 18.3 ± 3.1 (mean ± std) motor neural signals for DU and 21.1 ± 0.4 for PR (10 signals for PR following a mechanical connector failure in which half the electrodes were lost). Neural signal firing rates were normalized (between −1 and 1) in real time by updating an estimate of each neural signal's maximum firing rate during every experimental trial. Actor-critic Reinforcement Learning Brain-machine Interface Control Architecture The actor-critic RLBMI architecture used for these experiments is described in detail in [34]. Briefly, actor-critic systems are characterized by the actor and critic modules, Figure 1A. The actor interacts with the environment by selecting system actions given a specific input state (here neural states). The critic provides evaluative feedback regarding how successful the actions were in terms of some measure of performance, which is used to refine the actor's state-to-action mapping. The actor was a fully connected 3-layer feedforward neural network, Figure 1B, that used a Hebbian update structure [34]. The actor input (X) was a vector (length n) of the spike counts for each of the n motor cortex neural signals during a two-second window following the go cue of each trial. A parsimonious network was chosen for decoding, using only 5 hidden nodes and two output nodes (one for each of the two robot reaching movements).
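As a concrete reading of the signal preprocessing described above (CAR re-referencing across the live electrodes and per-trial normalization of firing rates into (−1, 1)), the sketch below shows one plausible implementation. The exact running-maximum update is not specified in the text, so the simple max-tracking here is an assumption.

import numpy as np

def common_average_reference(samples, live_electrodes):
    # samples: (channels, time) array of raw voltages; subtract the mean of the
    # electrodes still functioning (failed electrodes are excluded from the CAR).
    car = samples[live_electrodes].mean(axis=0, keepdims=True)
    return samples - car

def normalize_rates(spike_counts, running_max):
    # Update each signal's running maximum firing rate, then map the counts
    # into (-1, 1); running_max should be initialized to small positive values.
    running_max = np.maximum(running_max, np.maximum(spike_counts, 1e-9))
    return 2.0 * spike_counts / running_max - 1.0, running_max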
The output of each hidden node (OutH_i) was a probability of firing (−1 to 1) computed using a hyperbolic tangent function, in which WH_i is the synaptic weight vector between node i and the n inputs and b is a bias term: OutH_i = tanh(WH_i · X + b). The output nodes determined the action values (AV_j) for each of the j possible robot actions: AV_j = WO_j · S(OutH), where S(OutH) is a sign function applied to the hidden layer outputs (positive values become +1, negative values become −1), and WO_j is the weight vector between output j and the hidden layer. The robot action with the highest action value was implemented each trial. The actor weights were initialized using random numbers, and were updated (ΔW) using the critic feedback (f) according to the output-layer and hidden-layer update rules (equations 3 and 4, given in full in [34]). Feedback is +1 if the previous action selection is rewarded and −1 otherwise. Each update equation is composed of two terms that provide a balance between the effects of reward and punishment on the network parameters. Under rewarding conditions, the first term contributes to the changes in the synaptic weights, whereas in the case of punishment both terms affect the weight update. After convergence to an effective control policy the output of each node tends to the sign function, and the adaptation thus stops automatically [34]. In the current work an 'ideal' critic was used that always provided accurate feedback. However, such perfect feedback is not intrinsically assumed by this RL architecture, and there are many potential sources of the feedback in future BMI applications (see Discussion). S( ) is again the sign function, and μ_H and μ_O are the learning rates of the hidden (0.01) and output (0.05) layers, respectively. The update equations are structured so that the local input-output correlations in each node are reinforced using a global evaluative feedback, hence Hebbian reinforcement learning. Figure 1: (A) The actor interacts with the environment by selecting actions given input states (here the BMI Controller). The critic is responsible for producing reward feedback that reflects the actions' impact on the environment, and which is used by the actor to improve its input-to-action mapping capability (here the Adaptive Agent). (B) The actor used here is a fully connected three-layer feedforward neural network with five hidden (H_i) and two output (AV_i) nodes. The actor input (X) was the normalized firing rates of each motor cortex neural signal. Each node was a processing element which calculated spiking probabilities using a tanh function, with the node emitting spikes for positive values. doi:10.1371/journal.pone.0087253.g001 In the current work the architecture is applied to a two-state problem, but this architecture and update equations 3 and 4 can be directly applied to multistep and multitarget problems, even while still using a binary feedback signal [34]. Brain-machine Interface Robot Reaching Task The BMI task required the monkeys to make reaching movements with a robot arm to two different spatial locations in order to receive food rewards, Figure 2A and Movie S1. The monkeys initiated trials by placing their hand on a touch sensor for a randomized hold period (700-1200 msec). The hold period was followed by an audio go cue, which coincided with the robot arm moving to the start position. Simultaneously with the robot movement, an LED spatial target on either the monkeys' left ('A' trials) or right ('B' trials) was illuminated. Prior to the real-time BMI experiments, the monkeys had been trained to manually control the robot movements by either making or withholding arm movements.
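A compact sketch of this actor is below. The forward pass follows the tanh hidden layer, sign nonlinearity, and action values described above; the weight update, however, is a simplified reward-modulated Hebbian placeholder, since the full two-term equations (3) and (4) are given only in [34]. The ±0.075 initialization range matches the value reported later in the Results.

import numpy as np

class Actor:
    # Fully connected 3-layer actor: n inputs -> 5 hidden nodes -> 2 action values.
    def __init__(self, n_inputs, n_hidden=5, n_actions=2, rng=None):
        rng = rng or np.random.default_rng(0)
        self.WH = rng.uniform(-0.075, 0.075, (n_hidden, n_inputs))
        self.b = rng.uniform(-0.075, 0.075, n_hidden)
        self.WO = rng.uniform(-0.075, 0.075, (n_actions, n_hidden))

    def forward(self, x):
        out_h = np.tanh(self.WH @ x + self.b)   # hidden 'firing probabilities'
        s = np.where(out_h > 0, 1.0, -1.0)      # sign nonlinearity: spikes
        av = self.WO @ s                        # action values
        return int(np.argmax(av)), s            # greedy action selection

    def update(self, x, s, action, f, mu_h=0.01, mu_o=0.05):
        # Placeholder Hebbian update (NOT the exact two-term rule of [34]):
        # reinforce local input-output correlations, scaled by feedback f.
        self.WO[action] += mu_o * f * s
        self.WH += mu_h * f * np.outer(s, x)
        self.b += mu_h * f * s

# One trial with an ideal critic: f = +1 only if the chosen action was correct.
actor = Actor(n_inputs=20)
x = np.random.default_rng(1).uniform(-1, 1, 20)   # normalized firing rates
action, s = actor.forward(x)
f = 1.0 if action == 0 else -1.0                  # suppose an 'A' trial
actor.update(x, s, action, f)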
During those training sessions, the monkeys moved the robot to the A target by reaching and touching a second sensor, and moved the robot to the B target by keeping their hand motionless on the touchpad, and were rewarded for moving the robot to the illuminated target. The differences in the neuron firing rates shown by the rasters in Figure 2B illustrate how this had trained the monkeys to associate changes in motor activity with moving the robot to the A target, and static motor activity with B target robot movements. In the real-time BMI experiments, the robot movements were determined directly from the monkeys' motor cortex activity using the actor-critic RL algorithm previously described. A and B trials were presented in a pseudorandom order in roughly equal proportions. The monkeys were immediately given food rewards (waxworms/marshmallows) at the end of trials only if they had moved the robot to the illuminated LED target. In these initial RLBMI tests, we controlled the experiment to examine the basic adaptive capabilities of the RL architecture as a state-based BMI controller, and thus only two robot action states ('move to target A' or 'move to target B') were used. This allowed us to highlight the relationship between each individual robot action, the feedback training signal, and the resulting adaptive modifications of the RLBMI parameters in a direct and quantifiable manner. This was particularly useful when considering parameter adaptation from wide-ranging, random initial conditions, and when we introduced perturbations to the neural input space. To speed the initial adaptation of the RL algorithm, real-time 'epoching' of the data was used. After each robot action, the algorithm weights were updated (equations 3 and 4) using not only the most recent trial's data but a stored buffer of all the previous trials from that session, with the buffered trials being used to update the weights ten times following each action. The RL was initialized using random numbers and therefore produced random exploratory movements until more effective parameters were learned; the epoching thus helped prevent the monkeys from becoming frustrated at the beginning of sessions by moving the system more rapidly away from purely random actions. RLBMI Stability when Initialized from Random Initial Conditions During real-time closed-loop robot control experiments the parameter weights of the RLBMI were initialized with random values, with the RLBMI learning effective action mappings through experience (equations 3 and 4). Performance was quantified as the percentage of trials in which the target was achieved. In addition to these closed-loop real-time experiments, we also ran a large number of offline 'open-loop' Monte Carlo simulations to exhaustively confirm that the RLBMI was robust in terms of its initial conditions, i.e., that convergence of the actor weights to an effective control state during the real-time experiments had not been dependent on any specific subset of initialization values. For the offline simulations, the neural data and corresponding trial targets for the first 30 trials of several closed-loop BMI sessions from both monkeys (10 sessions for DU and 7 for PR) were used to build a database for open-loop simulations. During the simulations, data from each session were re-run 100 times, and different random initial conditions were used for each test. Figure 2: Two-target robot reaching task using the RLBMI.
The monkeys initiated each trial by placing their hand on a touch sensor for a random hold period. A robot arm then moved out from behind an opaque screen (position a) and presented its gripper to the monkey (position b). A target LED on either the monkey's left (A trials) or right (B trials) was illuminated to indicate the goal reach location. The RLBMI system (Figure 1) used the monkeys' motor cortex activity to move the robot to either the A or B target (panel A). The monkeys received food rewards only when the RLBMI moved the robot to the illuminated target (position c), Movie S1. Panel B shows examples of the spike rasters for all the neural signals used as inputs to the RLBMI during experiments which tested the effects of neural signals being lost or gained. Data are shown for trials 6-10 (which preceded the input perturbation) and trials 11-15 (which followed the input perturbation). For each trial, all the recorded neural signals are plotted as rows (thus there are multiple rows for a given trial), with data from type A trials highlighted in red. Differences in firing patterns during the A and B trials are evident both before and after the perturbation, although the RLBMI still had to adapt to compensate for the considerable changes in the overall population activity that resulted from the input perturbations. doi:10.1371/journal.pone.0087253.g002 For BMI systems to show truly stable performance, nonstationarities or other changes in the input space should not adversely affect performance. While some changes of the input space can be beneficial, such as neurons changing their firing patterns to better suit the BMI controller [8,10,17,18,[46][47][48], large changes in the firing patterns of the inputs that dramatically shift the input space away from the one the BMI had been constructed around are a significant problem for BMIs. Such perturbations can result from neurons appearing or disappearing from the electrode recordings, a common occurrence in electrophysiology recordings. In several closed-loop BMI sessions, we deliberately altered the BMI inputs to test the RLBMI's ability to cope with large-scale input perturbations. These perturbations were made following the initial learning period, so that the RLBMI had already adapted and gained accurate control of the robot prior to the input perturbation. During input loss tests, the real-time spike sorting settings were adjusted (following the 10th trial) so that a random 50% of the neural signals were no longer being detected by the RLBMI. During input gain tests, when the RLBMI was initialized at the beginning of the experiment the real-time spike sorting settings were configured so that the action potentials of a random half of the available neural signals were not being acquired. Then, following the initial adaptation of the RLBMI, the parameters were updated so that the previously excluded signals suddenly appeared as 'new' neural signals amongst the BMI inputs. We verified the real-time input perturbation experimental results with additional offline simulations and during several real-time tests that spanned multiple days. The offline simulation tests used the same Monte Carlo simulation database previously described. For the offline input loss simulation tests, the firing rates for a randomly chosen (during each simulation) half of the neural signals were set to zero after 10 trials and the ability of the RLBMI to compensate was evaluated.
Similarly, for the 'found neuron' simulations, half the inputs were randomly selected in each simulation to have their firing rates set to zero for the first 10 trials. Finally, during several real-time RLBMI experiments that spanned multiple days, we found that abrupt 50% input losses caused only temporary performance drops even though the system had been adapting for several days prior to the perturbation (see: RLBMI stability over long time periods below). We used the mutual information between the neuron data and the robot task to quantify the impact of input perturbations on the RLBMI input space. The mutual information (MI) between each neural signal (x) and the target location (y) was determined [49] as MI(X; Y) = H(Y) − H(Y|X), where H is the entropy: H(Y) = −Σ_y p(y) log2 p(y) (H(Y) is 1 bit when A and B trials are equally likely). We used Monte Carlo simulations in which different fractions of the neural signals were randomly 'lost' (i.e., their firing rate became zero) and used the resulting relative change in the average mutual information to gauge the effect of losing neural signal recordings on the RLBMI input space. RLBMI Stability Over Long Time Periods We tested how well the RLBMI would perform when it was applied in closed-loop mode across long time periods. These contiguous multisession tests consisted of a series of robot reaching experiments for each monkey. During the first session, the RLBMI was initialized using a random set of initial conditions. During the follow-up sessions, the RLBMI was initialized from the weights that it had learned in the prior session, and then continued to adapt over time (equations 3 and 4). We also tested the impact of input perturbations during the contiguous multisession experiments. During the contiguous PR tests, a failure in the implant connector resulted in half of the neural signal inputs to the RLBMI being lost. We ran another contiguous session in which the RLBMI successfully adapted to this change in its inputs. This input loss was simulated in two of the contiguous sessions with monkey DU: a random half of the motor neural signals were selected (the same signals in each test), and in those perturbation experiments the firing rates of the selected inputs were set to zero. For comparison purposes, two final contiguous session experiments were also run with monkey DU in which the whole input space remained available to the RLBMI system. Actor-Critic Reinforcement Learning Brain-Machine Interface (RLBMI) Control of Robot Arm The actor-critic RLBMI effectively controlled the robot reaching movements. Figure 3 shows a typical closed-loop RLBMI experimental session (PR). Figure 3A shows that the algorithm converged to an effective control state in less than 5 trials, after which the robot consistently made successful movements. The algorithm was initialized using small random numbers (between ±0.075) for the parameter weights (equations 1 and 2). Figure 3B shows the gradual adaptation of the weight values of the two output nodes (equation 4) as the algorithm learned to map neural states to robot actions (Figure 3C shows a similar adaptation progression for the hidden layer weights). The weights initially changed rapidly as the system moved away from random explorations, followed by smooth adaptation and stabilization when the critic feedback consistently indicated good performance. Larger adaptations occurred when the feedback indicated an error had been made.
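The sketch below computes this mutual information for one signal from trial spike counts and binary target labels, using MI(X; Y) = H(Y) − H(Y|X). Binning the firing rates into discrete levels is an assumption of the sketch, since the paper's discretization is not described.

import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(rates, targets, n_bins=8):
    # rates: one neural signal's firing rates per trial; targets: 0/1 labels
    # (A or B trials). Returns MI(X; Y) = H(Y) - H(Y | X) in bits.
    edges = np.histogram_bin_edges(rates, bins=n_bins)
    x = np.digitize(rates, edges[1:-1])
    y = np.asarray(targets)
    h_y = entropy(np.bincount(y) / y.size)
    h_y_given_x = 0.0
    for v in np.unique(x):
        sel = y[x == v]
        h_y_given_x += (sel.size / y.size) * entropy(np.bincount(sel) / sel.size)
    return h_y - h_y_given_x

# Simulating an input loss: zeroing a signal's rates drives its MI to zero.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 200)
rates = rng.poisson(5 + 10 * y)              # signal informative about the target
print(mutual_information(rates, y))          # well above 0 bits
print(mutual_information(np.zeros(200), y))  # 0.0 after the 'loss'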
The RLBMI system was very stable over different closed-loop sessions, robustly finding an effective control policy regardless of the parameter weights' initial conditions. Figure 4 shows that during the closed-loop robot control experiments, the RLBMI controller selected the correct target in approximately 90% of the trials (blue bar: mean ± standard deviation; DU: 93%, 5 sessions; PR: 89%, 4 sessions; significantly above chance (0.5) for both monkeys, p < .001, one-sided t-test). Similarly, Figure 4 (red bars) shows that the open-loop initial condition Monte Carlo simulations (see Materials and Methods) yielded similar accuracy to the closed-loop experiments, confirming that the system converged to an effective control state from a wide range of initial conditions (DU: 1000 simulations, PR: 700; significantly above chance (0.5) for both monkeys, p < .001, one-sided t-test). The accuracy results in Figure 4 correspond to trials 6-30, since the first 5 trials were classified as an initial adaptation period and the monkeys typically became satiated with food rewards and ceased interacting with the task (e.g., went to sleep, began fidgeting in the chair, or otherwise ignored the robot) after 30 to 50 trials. A surrogate data test was used to confirm that the RLBMI decoder was using the monkeys' brain activity to control the robot arm, and not some other aspect of the experimental design. These tests involved additional open-loop simulations in which the order of the different trial types recorded during the real-time experiments was preserved while the order of the recorded motor cortex neural data was randomly reshuffled, thus destroying any consistent neural representations associated with the desired robot movements. Despite the decoder's adaptation capabilities, Figure 4 (black bars) shows that the RLBMI system was not able to perform above chance levels under these conditions (DU: 1000 simulations, PR: 700, p < 1, one-sided t-test), demonstrating that the RLBMI was unable to accomplish the task without the direct connection between the motor cortex command signals and the desired robot actions that had been recorded during the real-time experiments. RLBMI Stability during Input Space Perturbations: Loss or Gain of Neuron Recordings The RLBMI quickly adapted to compensate for large perturbations to the neural input space (see Materials and Methods). Figure 5A shows the RLBMI performance when a random 50% of the inputs were lost following trial 10 (vertical black bar). By trial 10 the RLBMI had already achieved stable control of the robot, and it readapted to the perturbation within 5 trials, restoring effective control of the robot to the monkey. The inset panel in Figure 5A contrasts the mean results of the RLBMI simulations (solid lines) with simulations in which a static neural decoder (dashed lines, specifically a Wiener classifier) was used to generate the robot action commands. The Wiener classifier initially performed quite well, but the input perturbation caused a permanent loss of performance. Figure 5B shows that the RLBMI system effectively incorporated newly 'found' neural signals into its input space. This input perturbation again occurred following the 10th trial (vertical black bar); prior to that point a random 50% of the RLBMI inputs had had their firing rate information set to zero. In both the closed-loop BMI experiments and the open-loop simulations the system again adapted to the input perturbation within 5 trials.
By comparison, a static decoder (Wiener classifier) was not only unable to take advantage of the newly available neural information, but in fact showed a performance drop following the input perturbation (Figure 5B inset panel, RLBMI: solid lines, static Wiener: dashed lines). Both the loss of 50% of the recorded inputs and the abrupt appearance of new information amongst half the recordings represent significant shifts to the RLBMI input space. In Figure 6A, we contrast the change in the information available from the neural signals (equation 5) with losses of varying quantities of neural signals (red boxes; DU: solid, PR: hollow). By the time 50% of the inputs had been lost, over half of the information had been lost as well. Abrupt input shifts of this magnitude would be extremely difficult for any static neural decoder to overcome. It is thus not unexpected that the static Wiener classifier (black circles; DU: solid, PR: hollow) nears chance performance by this point; any decoder that did not adapt to the change would show similar performance drops. Figure 6B contrasts the corresponding average performances. Figure 3: The traces show how throughout every trial the RLBMI system gradually adapted each of the individual weights that connected the hidden layer to the outputs (B) as well as all the weights of the connections between the inputs and the hidden layer (C), as the RLBMI learned to control the robot. The shape of these weight trajectories indicates that the system had arrived at a consistent mapping by the fifth trial: at that point the weight adaptation progresses at a smooth rate and the robot is being moved effectively to the correct targets. At trial 23 an improper robot movement resulted in the weights being quickly adjusted to a modified, but still effective, mapping. doi:10.1371/journal.pone.0087253.g003 RLBMI Stability Over Long Time Periods and Despite Input Perturbations The RLBMI maintained high performance when applied in a contiguous fashion across experimental sessions spanning up to 17 days, Figure 7. The decoder weights started from random initial conditions during the first session; during subsequent sessions the system was initialized from the weights learned in the previous session (from the 25th trial) and was then allowed to adapt as usual (equations 3 and 4) without any new initializations or interventions by the experimenters. This was done to approximate use of the BMI over long time periods. The solid lines in Figure 7 give the accuracy of the system during the first 25 trials (mean: DU: 86%; PR: 93%) of each session when the inputs were consistent. For monkey PR, half of the neural input signals were lost between day 9 and 16 (dashed line). However, the system was able to quickly adapt, and this loss resulted in only a slight dip in performance (4%), despite the fact that the RLBMI had been adapting its parameters for several days to utilize the original set of inputs. Likewise, the RLBMI controller maintained performance during two DU sessions (day 8 and 13, dashed line) in which a similar input loss was simulated (see Materials and Methods). In fact, performance during those sessions was similar to or better than the DU tests that continued to use all the available neural signals (days 14 and 17). Potential Benefits of using Reinforcement Learning Algorithms for BMI Controllers Adaptive and interactive algorithms based on reinforcement learning offer several significant advantages over supervised learning decoders as BMI controllers.
First, they do not require an explicit set of training data to be initialized, instead being computationally optimized through experience. Second, RL algorithms do not assume stationarity between neural inputs and behavioral outputs, making them less sensitive to failures of recording electrodes, neurons changing their firing patterns due to learning or plasticity, neurons appearing or disappearing from recordings, or other input space perturbations. These attributes are important considerations if BMIs are to be used by humans over long periods for activities of daily living.

The Reinforcement Learning BMI System does not Require Explicit Training Data

The RLBMI architecture did not require an explicit set of training data to create the robot controller. BMIs that use supervised learning methods require neural data that can be related to specific BMI behavioral outputs (i.e. the training data) to calibrate the BMI.

[Figure 5 caption: (A) shows that the RLBMI had already adapted and achieved high performance by the 10th trial. Following the 10th trial (vertical black bar), 50% of the neural inputs were abruptly lost, with the RLBMI readapting to the loss within 5 trials. (B) shows that when the recording electrodes detected new neurons, the RLBMI adaptation allowed the new information to be incorporated into the BMI without the emergence of new firing patterns degrading performance. In these perturbation tests, a random 50% of the available neural signals were artificially silenced prior to the 10th trial (vertical black bar). The sudden appearance of new input information caused only a small performance drop, with the RLBMI again readapting to the perturbation within 5 trials. The inset panels in both (A) and (B) contrast the averaged results of the RLBMI open-loop simulations (solid lines; DU: gray, PR: red) with the simulation performance of a nonadaptive neural decoder (dashed lines; a Wiener classifier created using the first five trials of each simulation). In contrast to the RLBMI, the nonadaptive decoder showed a permanent performance drop following perturbations in which neural signals were lost, as well as in the tests in which new signals appeared. doi:10.1371/journal.pone.0087253.g005]

In many BMI experiments that have used healthy nonhuman primate subjects, the training data outputs were specific arm movements that accomplished the same task for which the BMI would later be used [7-11,28,50-52]. While those methods were effective, gaining access to this type of training data is problematic when considering paralyzed BMI users. Other studies have found that carefully structured paradigms, in which a BMI user first observes (or mentally imagines) desired BMI outputs, followed by a process of refinements that gradually turns full control of the system over to the user, can provide training data without physical movements [1,6,22]. While these methods were again effective, they require carefully structured BMI paradigms so that assumed BMI outputs can be used for the calibration. The RLBMI system shown here avoids such issues. Calculating the parameters of the RLBMI architecture never requires relating neural states to known (or inferred) system outputs; rather, the system starts controlling the robot with random parameters, which are then gradually adapted given feedback of its current performance.
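As a concrete illustration of this initialization-free adaptation, here is a minimal actor-critic-flavored sketch in the spirit of the description above. It is not the paper's actual network or its equations 3 and 4 (which are not reproduced in this excerpt); the layer sizes, learning rate, and synthetic 'neural states' are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_hidden, n_actions = 20, 8, 2

# Random initial weights: no pre-collected training set is required.
W1 = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
W2 = rng.normal(0.0, 0.1, (n_actions, n_hidden))
lr = 0.05

def act(x):
    """Map a neural state to action values through one hidden layer."""
    h = np.tanh(W1 @ x)
    q = W2 @ h
    return int(np.argmax(q)), h, q

def update(x, h, q, action, reward):
    """Reward-modulated update: nudge the chosen action's value toward the
    binary critic feedback, backpropagated through the hidden layer."""
    global W1, W2
    err = reward - q[action]
    dh = lr * err * W2[action] * (1.0 - h ** 2)   # gradient through tanh
    W2[action] += lr * err * h
    W1 += np.outer(dh, x)

# Synthetic session: two distinguishable neural "states", one per target.
prototypes = rng.normal(0.0, 1.0, (2, n_inputs))
hits = 0
for trial in range(200):
    target = int(rng.integers(0, 2))
    x = prototypes[target] + rng.normal(0.0, 0.3, n_inputs)
    action, h, q = act(x)
    reward = 1.0 if action == target else -1.0    # binary critic feedback
    update(x, h, q, action, reward)
    hits += int(action == target)
print(f"session accuracy: {hits / 200:.2f}")
```

On this toy problem the greedy policy typically starts near chance with the random weights and improves within the first few tens of trials, mirroring the qualitative behavior described for the closed-loop experiments.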
Consistent with this, the robot made random movements when the system was initialized (as can be seen in Figure 3A), but the RLBMI was able to quickly (typically within 2-4 trials) adapt the parameters to give the monkeys accurate control (~90%) of the robot arm. This adaptation only required a simple binary feedback. Importantly, the same RLBMI architecture utilized here can be directly applied to tasks that involve more than two action decisions, while still using the exact same weight update equations. This means that the system can be readily extended to more sophisticated BMI tasks while still only requiring the same type of binary training feedback [34]; this opens numerous opportunities for RLBMI deployment with paralyzed users. Finally, not relying on explicit training data helped make the RLBMI system stable over long time periods (~2 weeks), since the architecture continually refined its parameters based on the user's performance to maintain control of the robot arm, as shown in Figure 7.

The Reinforcement Learning BMI System Remained Stable Despite Perturbations to the Neural Input Space

It is important that changes in a BMI's neural input space do not diminish the user's control, especially over longer time periods, where such shifts are inevitable [53-57]. For example, losses and gains of neurons are very common in electrophysiology recordings using chronically implanted microelectrode arrays: electrodes fail entirely, small relative motions between the brain and the electrodes cause neurons to appear and disappear, and even the longest-lasting recording arrays show gradual losses of neurons over time, from either tissue encapsulation of the electrodes or gradual degradation of the electrode material [58,59]. While some changes in neural input behavior can be beneficial, such as neurons gradually adopting new firing patterns to provide a BMI user greater control of the system [8,10,17,18,46-48], large and/or sudden changes in neuron firing patterns will almost always reduce a BMI user's control if the system cannot compensate, as can be seen in the performance drop of the static Wiener decoder in Figures 5 and 6. While input losses may be an obvious adverse perturbation to BMI systems (as shown in Figure 6), the appearance of new neurons is also a significant input perturbation: when the representations of new neurons overlap with neurons that were already being used as BMI inputs, the previous inputs appear to have acquired new firing patterns, thus perturbing the BMI's input space. Such appearances could be a particular issue in BMI systems that rely on action potential threshold crossings on a per-electrode basis to detect input activity [21,52,60]. Finally, BMIs that cannot take advantage of new sources of information lose the opportunity to compensate for losses of other neurons. Currently, most BMI experiments avoid the issue of large changes in input neurons on BMI performance because the experimenters reinitialize the systems on, at least, a daily basis [1-3,6,21-23]. However, it is important for practical BMI systems to have a straightforward method of dealing with neural input space perturbations that is not a burden on the BMI user and does not require such daily recalibrations. The RLBMI controller shown here does not require the intervention of an external technician (such as an engineer or caregiver) to recalibrate the BMI following changes in the input space.
Rather, it automatically compensates for input losses, as demonstrated in Figure 5, in which the RLBMI adapted and suffered only a transient drop in performance despite neural signals disappearing from the input space. Similarly, Figures 5 and 6 show how the RLBMI automatically incorporated newly available neural information into the input space. Figure 5 shows that the RLBMI did display greater variation in performance following the addition of new inputs compared to its performance following input losses. This may reflect variation in the degree to which the RLBMI algorithm had learned to ignore the initially silent channels, combined with variation in the magnitude of the firing activity of the neural signals once they were 'found'. In situations in which the algorithm had set the silent channels' parameter weights very close to zero, or in which the activity of the new channels was relatively low, the addition of the new neural signals would have had little impact on performance until the RLBMI controller reweighted the perturbed inputs appropriately to use them effectively. Conversely, during the input loss tests there would be a higher probability that dropped inputs had had significant parameter weights previously attached to their activity, resulting in a more obvious impact on overall performance when those neural signals were lost. Finally, since the RLBMI constantly revises which neural signals, and by extension which electrodes, to use and which to ignore as BMI inputs, engineers or caregivers initializing RLBMI systems would not need to spend time evaluating which electrodes or neurons should be used as BMI inputs. The RLBMI architecture is intended to balance the adaptive nature of the decoder with the brain's learning processes. Understanding the intricacies of these dynamics will be an important focus of future work for the study of brain function and BMI development. Numerous studies have shown that neurons can adapt to better control BMIs [8,10,17,18,46-48]. RL adaptation is designed so that it does not confound these natural learning processes: RL adaptation occurs primarily when natural neuron adaptation is insufficient, such as during initialization of the BMI system or in response to large input space perturbations. Figures 3 and 5 show that the current RLBMI architecture offers smooth adaptation and stable control under both such conditions. In the current experiments, the speed and accuracy of the RLBMI balanced any adaptation by the recorded motor cortex neurons themselves. Further research that combines studies of natural neuron learning capabilities with more complicated BMI tasks will be necessary, though, to develop RLBMI architectures that can provide mutually optimal adaptation of both the brain and the neural decoder, and thus offer highly effective and robust BMI controllers.

Obtaining and using Feedback for Reinforcement Learning BMI Adaptation

The ability of the RLBMI system to appropriately adapt itself depends on the system receiving useful feedback regarding its current performance. Thus both how accurate the critic feedback is and how often it is available directly impact the RLBMI's performance. The current experimental setup assumed an ideal case in which completely accurate feedback was available immediately following each robot action. While such a situation is unlikely in everyday life, it is not essential for RL that feedback always be available and/or correct, and there are many potential methods by which feedback information can be obtained.
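Anticipating the discussion of critic accuracy below, the effect of unreliable or intermittent feedback can be probed with a toy simulation. This is a minimal sketch under stated assumptions (a linear learner and synthetic neural states; nothing here reproduces the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(2)
n_inputs = 20
prototypes = rng.normal(0.0, 1.0, (2, n_inputs))

def run_session(critic_accuracy, feedback_prob=1.0, n_trials=400, lr=0.1):
    """Linear action-value learner trained only from binary critic feedback
    that may be absent (feedback_prob) or wrong (1 - critic_accuracy)."""
    W = rng.normal(0.0, 0.1, (2, n_inputs))
    outcomes = []
    for _ in range(n_trials):
        target = int(rng.integers(0, 2))
        x = prototypes[target] + rng.normal(0.0, 0.3, n_inputs)
        action = int(np.argmax(W @ x))
        outcomes.append(action == target)
        if rng.random() > feedback_prob:
            continue                      # no feedback: weights unchanged
        reward = 1.0 if action == target else -1.0
        if rng.random() > critic_accuracy:
            reward = -reward              # erroneous critic feedback
        W[action] += lr * (reward - W[action] @ x) * x
    return float(np.mean(outcomes[-100:]))

for acc in (1.0, 0.9, 0.8, 0.7, 0.6, 0.5):
    print(f"critic accuracy {acc:.1f} -> task accuracy {run_session(acc):.2f}")
```

With a perfectly accurate critic the toy learner approaches ceiling, while at 0.5 the feedback is pure noise and performance stays near chance, the same qualitative ceiling effect reported for the real system in Figure 8.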
The RLBMI architecture presented here does not intrinsically assume perpetually available feedback; rather, it only needs feedback when necessary and/or convenient. If no feedback information is available, then the update equations are simply not applied and the current system parameters remain unchanged. Since the feedback information does not depend on any particular preprogrammed training paradigm, but simply involves the user contributing good/bad information during whatever task they are currently using the BMI for, the system is straightforward for the user to update whenever it is convenient and they feel the RLBMI performance has degraded. Finally, other RL algorithms are designed specifically to take advantage of only infrequently available feedback by relating it to multiple earlier actions that were taken by the system and which ultimately led to the feedback [61].

[Figure 7 caption: During the first session, the system was initialized with random parameters, and during each subsequent session the system was initialized using parameter weights it had learned previously. This approximates deploying the RLBMI across long time periods, since it never has the opportunity to reset the weights and start over, but rather must maintain performance by working with a single continuous progression of parameter weight adaptations. Additionally, despite working with the same sequence of weights for multiple days, the RLBMI was still able to quickly adapt when necessary. A mechanical connector failure caused a loss of 50% of the inputs for PR between day 9 and day 16 (X: black dashed line), but the RLBMI adapted quickly and only a small performance drop resulted. This input loss was simulated in two sessions with DU (X: red dashed line), and the system again adapted and maintained performance. Notably, the RLBMI performance during those perturbation sessions was similar to or better than in two final DU tests in which no input loss was simulated (in the day 14 session the parameter weights were reset to those learned on day 6). doi:10.1371/journal.pone.0087253.g007]

When considering possible sources of feedback information, it is important to consider how the critic's accuracy impacts the RLBMI's overall performance. We thus ran several closed-loop experiments and offline simulations in which we tested how well the RLBMI algorithm was able to classify trials from the closed-loop BMI experiments when the accuracy of the critic feedback was varied. Figure 8 shows how the RLBMI performance can be limited by the accuracy of the feedback. Thus, for the current RLBMI architecture it may be better to use feedback information only when the confidence in its accuracy is high, even if that means feedback is obtained less frequently. There is a wide variety of potential options for the RLBMI user to provide critic feedback to the system, including using reward or error information encoded in the brain itself. While assuming that ideal feedback is available following each action may not be practical for real BMI systems, the fact that the necessary training feedback is just a binary 'good/bad' signal (even when the system is expanded to include more than two output actions) that only needs to be provided when the user feels the BMI performance needs to be updated leaves many options for how even a user suffering from extreme paralysis could learn to provide critic feedback.
For example, the user could use a breath puff system, vocal cues, or any sort of small residual movement or EMG signal that can be reliably evoked. Furthermore, error-related signals characteristic of EEG, ECoG, or other recording methods could be employed as well [32,62-67]. An exciting option that would place the smallest burden on the BMI user would be to automatically decode feedback information regarding the BMI's performance directly from the brain itself, perhaps from learning or reward centers such as the nucleus accumbens, anterior cingulate cortex, or prefrontal cortex [68]. More research will be necessary to investigate potential sources of feedback information that the BMI user could easily provide, as well as how best, and how frequently, to use that feedback for effective adaptation by the RLBMI architecture. Giving the BMI user access to a straightforward method of providing feedback will enable them to use the BMI system effectively over long periods of time, without outside interventions by engineers or caregivers, despite inevitable changes to the inputs. This will greatly increase the practicality of BMI systems by increasing user independence.

Conclusions

These experiments highlight several of the advantages offered by reinforcement learning algorithms when used as BMI controllers: the system can learn to control a device without needing an explicit set of training data, the system can robustly adapt and maintain control despite large perturbations in the input signal space, and control can be maintained across long time periods. Two marmoset monkeys used an actor-critic reinforcement learning BMI to control a robot arm during a reaching task. The RLBMI system was initialized using random initial conditions, and then used a binary training feedback signal to learn how to accurately map the monkeys' neural states to robot actions. The system achieved 90% successful control of the arm after only 2-4 trials, and could maintain control of the arm across sessions spanning days to weeks. Furthermore, because the RLBMI continuously adapted its parameters, it was able to quickly (within 4 to 5 trials) regain control of the robot when half the BMI input neural signals were abruptly lost, or when half the neural signals suddenly acquired new activity patterns. The advantages of the adaptive algorithm illustrated here offer a means for future BMI systems to control more complicated systems with a reduced need for recalibration or other outside interventions by external agents such as engineers or caregivers, which would greatly increase the independence of the BMI user.

Supporting Information

Movie S1: RLBMI control of the robot arm during a 50% loss of inputs. This movie was recorded during a session in which the monkey used the RLBMI to move the robot between two targets. Shown are trials 1-4 and 11-16. The RLBMI learned an effective mapping of the neuronal inputs to the desired robot actions within the first few trials. After the tenth trial, the system was perturbed by dropping 50% of the neuronal inputs, forcing the RLBMI to automatically adapt in order to restore effective control of the robot to the monkey. (WMV)

Figure 8. Accuracy of critic feedback impacts RLBMI performance.
Shown is the accuracy of the RLBMI system (trials 1-30) during closed-loop sessions (DU: blue squares, 5 sessions) and during open-loop simulations (mean ± standard deviation; DU: black X, 1000 simulations; PR: red O, 700 simulations) when the accuracy of the critic feedback was varied (0.5 to 1.0). The gray line gives a 1:1 relationship. The RLBMI performance was directly impacted by the critic's accuracy. This suggests that choosing the source of critic feedback must involve a balance of factors such as accessibility, accuracy, and frequency of the feedback information, with adaptation preferably only being implemented when confidence in the feedback is high. doi:10.1371/journal.pone.0087253.g008
Quantum size effects on surfaces without a projected bandgap: Pb/Ni(111)

We have studied the initial growth of Pb on Ni(111) using low-energy electron microscopy (LEEM) and selective area low-energy electron diffraction (µLEED). First, a one-layer-high wetting layer develops that consists of small (7 × 7) and (4 × 4) domains. For larger coverages, Pb mesas are formed that are embedded in the wetting layer. In spite of the absence of a projected bandgap on clean Ni(111), we observe distinct quantum size effect (QSE)-driven preferred heights. These are apparent from a variety of frequently occurring island height transitions during growth, both on wide terraces and across substrate steps. Also, the average island heights that evolve during deposition at 422 and 474 K show a clear signature of QSE-driven preferred heights. These distinctly include five, seven and nine layers and thus correspond nicely to the values obtained in the key examples of QSE: Pb films on Si(111) and Ge(111). We suggest that the Pb-induced surface modification of Ni(111) shifts the Fermi level into the gap of the interface-projected Ni bulk bands, thereby effectively decoupling the Pb states from the bulk Ni states.

Introduction

With their low surface free energy, Pb films tend to wet most surfaces. However, quantum size effects (QSE) often result in distinct preferred heights for isolated Pb islands [1]. Recently, the self-organization into these island heights due to QSE, referred to as quantum growth or electronic growth, was observed on different substrates and was characterized with a variety of techniques [2-5]. A prototypical example of QSE is provided by thin Pb(111) films. For Pb, d ≈ 0.75λF, with d being the interlayer distance and λF the Fermi wavelength, so every increment by two layers almost perfectly accommodates three additional antinodes of the Fermi wavefunction. Recently, many intriguing features were discussed for the growth of Pb on Si(111) [2,6-9], where flat-top Pb islands with steep edges grow to specific integer heights that are stabilized by QSE. It was shown that Pb mesas grown on both Si(111) and Ge(111) are atomically smooth at substrate temperatures in the range of 200-300 K. The growth shows a strong preference for a bilayer increment in height on top of a wetting layer, which is thought to passivate the substrate. The stable thicknesses found for Pb/Si(111) are initially the odd layer heights of 5, 7, 9, 11 and 13 layers, crossing over to the even layer heights of 14, 16, 18, 20 and 22 layers on top of the wetting layer. The crossing from preferred odd to even island heights and vice versa shows a long 9.5-layer beating pattern as a result of the slight incommensurability of the Fermi wavelength with the interlayer distance [10,11]. Typically, for QSE to manifest themselves, quantum well states have to form in the metallic overlayer. This is, for instance, the case for Cu(111) and Ag(111), in contrast to Ni(111) [12]. Indeed, Pb layers on Cu(111) show strong evidence for the presence of quantum well states [4,13]. Closer inspection of the calculations of Goldmann et al [12] reveals that the projected band structure obtained for clean Ni(111) resembles those of Cu(111) and Ag(111). At the Γ point, the projected bulk band maximum is only 0.2 eV above the Fermi level, and the crystal-derived surface bands are in close vicinity to it.
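Incidentally, the arithmetic behind the bilayer preference and the 9.5-layer beating is compact enough to check directly. A back-of-envelope sketch, assuming the textbook free-electron values λF ≈ 3.95 Å and d ≈ 2.86 Å for Pb(111) (neither value is quoted in this paper):

```python
import math

lam_F = 3.95   # Fermi wavelength of Pb (Angstrom) -- assumed textbook value
d = 2.86       # Pb(111) interlayer spacing (Angstrom) -- assumed textbook value
k_F = 2.0 * math.pi / lam_F

# Round-trip phase a quantum well state gains when the film grows by two
# layers; the ideal bilayer condition 2*(2d) = 3*lam_F corresponds to 6*pi.
phase_per_bilayer = 2.0 * k_F * (2.0 * d)
mismatch = 6.0 * math.pi - phase_per_bilayer

print(f"d / lam_F = {d / lam_F:.3f} (close to 0.75)")
# The odd/even preference flips once the accumulated mismatch reaches pi,
# i.e. after 2*pi/mismatch layers: the beating period.
print(f"odd-even crossover roughly every {2.0 * math.pi / mismatch:.1f} layers")
```

With these assumed values the crossover comes out at roughly 9.5 layers, matching the beating period quoted above.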
Given the proximity of these bands to the Fermi level, Ni(111) is an interesting candidate to investigate whether an interface-projected band gap develops upon deposition of Pb, thereby facilitating quantum well states in thin Pb films. Here, we present a study of the growth of thin Pb films and islands on the Ni(111) substrate and their subsequent temporal evolution as a result of QSE-driven growth. Using in situ low-energy electron microscopy (LEEM) and selected area low-energy electron diffraction (µLEED), we are able to probe the properties of different selected microscopic sample areas in real time. Using LEEM, we are also able to measure the island height evolution during growth, thereby revealing electronic-growth-driven transitions of Pb islands. Using µLEED we are able to probe the ordering of the Pb films and determine their structural properties.

Methods

The experiments were performed in an Elmitec LEEM III instrument. A Ni(111) surface was cleaned by successive cycles of 1 keV Ar+ bombardment at room temperature (RT), followed by flash annealing to a temperature of 1150 K. The cleanness of the sample was monitored by Auger electron spectroscopy and LEEM. LEEM images revealed terraces with a width of ∼1 µm. All sample temperatures are subject to an uncertainty of 5% and were calibrated using the uphill motion of steps over time at a temperature where sublimation is observed, as described in [14]. Lead was deposited from a Knudsen cell. According to the bulk phase diagram, lead and nickel are immiscible in the bulk [15]. The typical deposition rate used in the experiments was about 1 × 10⁻³ ML s⁻¹, where a coverage of θPb/Ni = 1 ML corresponds to one Pb atom per Ni surface atom.

Wetting layer properties

To determine the properties of a single Pb layer on top of the Ni(111) surface, we performed µLEED, illuminating a circular area of 19 µm diameter. When depositing Pb on the clean Ni(111) surface at a temperature of 474 K, we found a faint (√3 × √3)R30° surface alloy peak emerging, showing its maximum peak intensity at a coverage of 0.33 ML, corresponding to 250 s of deposition (see figure 1). The measured deposition rate is therefore 1.3 × 10⁻³ Pb atoms per unit cell (uc) per second (Pb/uc/s). The (√3 × √3)R30° surface alloy was also found in the literature, where deposition at RT was followed by annealing to T > 850 K before alloy formation occurred [16-19]. A further increase in coverage results in dealloying and the formation of a Pb wetting layer that covers the entire surface above θPb/Ni ≈ 0.40 ML. From this coverage onward, new diffraction peaks emerge at a position corresponding to an in-plane lattice constant of 3.93 Å, giving rise to a weak moiré pattern. These Pb peaks move outward with increasing coverage, indicating a continuous compression of the aligned and incommensurate hexagonal Pb layer. The coverage in this wetting layer increases both as a result of the addition of Pb from the gas phase and due to continuous dealloying. The rate of dealloying increases with coverage, as can be concluded from the convex shape of the in-plane lattice constant versus coverage curve in figure 2(a). The wetting layer is completed when the compression of the in-plane lattice constant ends abruptly at a value of 3.50 Å. At this point the dealloying and the compression stop. This provides an excellent opportunity for an exact in situ calibration of the deposition rate.
The resulting deposition rate is 1.33 × 10⁻³ ML s⁻¹, in agreement with the value obtained from the maximum peak intensity of the (√3 × √3)R30° surface alloy. We find no indication of locking-in of the previously reported (3 × 3) structure found at RT [16,18,19], at an in-plane lattice constant of 3.73 Å. Beyond a coverage of θPb/Ni = 0.51 ML, the line profiles shown as insets in figure 2(a) exhibit an unexpected peak splitting. The splitting reveals two different domains in the wetting layer, as illustrated by the µLEED pattern. These two domains are the (7 × 7) and (4 × 4) structures, with lattice constants of 3.50 and 3.32 Å. Both have been reported in the literature [16,19] and are illustrated in figures 2(b) and (c). For increasing coverage the main spot intensity moves gradually from the less dense (7 × 7) domains at θPb/Ni ≈ 0.51 ML to the denser (4 × 4) domains at θPb/Ni ≈ 0.55 ML, as is reflected in the line profiles shown as insets in figure 2(a). This remarkable two-domain structure of the wetting layer was not anticipated and is found to disappear at 520-525 K, resulting in the anomalous transition of Pb mesas into hemispheres [20]. With the nearest-neighbor distance in bulk lead being 3.50 Å and the known generic tendency for tensile surface stress [21], we suggest that the (7 × 7) domains in the wetting layer involve tensile stress. A slight compression then locks in the (4 × 4) domains. To discuss the stability of both domains, we perform a coordination analysis. As a measure of the coordination of an individual Pb atom we take the sum of the lateral components of its position vector, measured along the directions of the three nearest Ni atoms underneath, relative to those values for an atom in a hollow site. For an on-top site, bridge site and threefold hollow site, we find values of 1.734, 0.444 and 0, respectively, expressed in Ni lattice constants. The average values for the (7 × 7) and (4 × 4) structures are 0.550 and 0.618, respectively. This clearly suggests that the coordination is higher, and thus the binding stronger, for the (7 × 7) structure than for the (4 × 4) structure, in line with our observations. The strong binding to the Ni(111) substrate results in the accumulation of lateral stress, forcing the system to relieve this by keeping the compressed (4 × 4) domains small and alternating them with low-tensile-stress (7 × 7) domains. Pb domains with different surface stresses such as these can give rise to self-assembly [22]. A typical example is the striped phase formation of Pb on Cu(111), driven by competition between the tensile and compressive stresses of a PbCu surface alloy and a Pb overlayer phase [23]. Here, the energy involved in the creation of the domain walls is low due to the alignment to the Ni substrate and the similar atomic densities of the (7 × 7) and (4 × 4) domains. This results in small domains with probably fast fluctuations.

[Figure 2 caption fragment: ... the (7 × 7) and (4 × 4) structures of the wetting layer, respectively. Green (dark gray) corresponds to threefold hollow sites, darker green (darker gray) to sites close to threefold hollow, red (darkest) to (near) on-top sites and yellow (light gray) to intermediate positions.]

These domains are not resolved, probably because of the spatial resolution of our instrument of about 7 nm, possibly in combination with fast fluctuations at these temperatures. For coverages θPb/Ni > 0.55 ML, small Pb islands form.
For these and all the other structures grown, µLEED shows only the bulk nearest-neighbor distance for completely relaxed Pb(111), coexisting with the strained double-domain wetting layer. This observation is a consequence of the low misfit energy, which also allows for the gradual in-plane lattice constant compression of the incommensurate wetting layer, as shown in figure 2(a). The relatively low misfit energy is attributed to the large difference between the atomic radii of Ni and Pb. From here onwards, we define the coverage θPb = 1 ML as the equivalent of one bulk Pb(111) layer.

Electronic growth-driven shape transitions of Pb islands

An eye-catching feature of the evolving morphology of the growing Pb film is the apparent difference in the growth rates of individual islands. A significant fraction of the islands even appear to shrink in fractional area. These remarkable events occur both for islands far away from substrate steps and for those located at or in the close vicinity of substrate steps. They also occur during initial and more advanced stages of growth, and are all explained in the context of QSE-driven stable island heights, as described in [24]. We consider them to be strong indications of the importance of QSE for Pb on Ni(111). After completion of the wetting layer, Pb island nucleation is observed for coverages beyond θPb ≈ 1.08 ML, imaged as dark features in figures 3 and 4. The experiment shown in figures 3 and 4 was performed at a slightly lower temperature of 422 K. During the growth of these small islands, we observe the sudden collapse of their fractional area within a few seconds. In figures 3 and 4, two examples are shown. Island areas are measured by making use of common image analysis techniques, such as background subtraction, Otsu's method to automatically perform histogram-shape-based adaptive image thresholding [25], and standard erosion and dilation filters. Quite clearly, we can distinguish islands that undergo a reduction in fractional area between frames. The condition of mass conservation leads to the conclusion that the islands must undergo a height increase at the same time. We first focus on the LEEM images in figure 3 (insets) and the corresponding cartoons below them. Going from image (a) to (b), taken only 7 s apart, we observe a distinct reduction in the fractional area of the highlighted island. This is exemplified by the plot of the fractional area against time in figure 3. The ratio of the final and the initial projected areas, A5 and A3, respectively, is RA = 0.50. Since there is no mass transport between this island and any of its neighbors, nor does any intensity fluctuation take place in its surroundings, mass conservation must apply. The reduction in area must therefore be accompanied by a corresponding increase in height; see figures 3(c) and (d). The wetting layer (gray) has an approximately 8% higher density than the completely relaxed Pb(111) layers in the islands. The height of the islands is counted with respect to the substrate. Assuming mass conservation, the following condition must apply: 3 × A3 = 5 × A5 + 1.08 × (A3 − A5), which gives RA = A5/A3 = 0.49, identical to the measured value within the experimental uncertainties. The conclusion is that the island transforms from a three-layer-high island into a five-layer-high island. Keeping in mind that the measurement takes place during the deposition of Pb, the growth rates of the islands must differ by a factor of 2.
Indeed, this is nicely confirmed by the two-times-higher expansion rate of the three-layer-high island compared to the five-layer-high one, as inferred from the measured slopes in figure 3. These findings provide strong support for the assessment of a transition from a three-layer-high island to an island with the preferred height of five layers. We attribute this bilayer height transition to QSE. A similar feature is illustrated by the LEEM images in figure 4 (insets). Here we start with a larger island situated near a descending step in the Ni(111) substrate. The reduction of the fractional area of this island is plotted in figure 4. The transition occurs in a more complex way, but the overall result is a ratio of the final fractional area to the initial one of 0.75. For a transition from a four-layer-high island to a preferred five-layer-high island, shown in figures 4(c) and (d), mass conservation implies 4 × A4 = 5 × A5 + 1.08 × (A4 − A5), which gives RA = A5/A4 = 0.74, consistent with the experimental value within the experimental uncertainties. The scatter in the growth rate is in this case much larger, but again the obtained result of the final growth rate being 0.64 times the initial one is consistent with the expected factor of 2/3. So again, in this island transition we find strong evidence for a preferred island height of five layers. We conclude that the data in figures 3 and 4 demonstrate the importance of QSE in this system. Note that the growth rate of the island in figure 3 is two times larger than that found for the island in figure 4. This difference is attributed to the influence of substrate steps in the vicinity of the islands. From the conservation of the total amount of deposited material and the observed height transitions, we can also derive that the Pb(111) islands grow directly on the metallic Ni(111) substrate and are surrounded by the wetting layer consisting of both (7 × 7) and (4 × 4) domains. This is in contrast to the wetting layer of Pb and Bi on Si(111), where the semiconducting substrate is first passivated by the wetting layer before electronic growth starts on top of it [10,26,27]. After prolonged deposition at a slightly higher temperature of 474 K, islands can achieve large fractional areas. Adjacent islands can make concerted movements in an attempt to minimize their energy. Figures 5(a) and (b) show an example at an average film thickness of about 17.5 layers. A concerted movement across a step enables a height transition of one of the islands. The two islands taking part in this concerted movement are labeled by their contours in red (dark) and green (bright) and are assumed initially to have the same uniform height, h, composed of n layers. The Ni substrate step between these islands is marked by the white dashed line. As a result of the net flow of Pb atoms from the green (bright) labeled island towards the red (dark) labeled island across the substrate step, both islands reduce their projected area, as shown in figures 5(a) and (b). The filled circles in the graph of figure 5(c) show the total measured fractional area times a constant height h. Assuming that both islands are initially smooth and thus have a uniform height, as well as assuming mass conservation, a height doubling of the red (dark) island results in the open circles in figure 5(c).
This curve represents the fractional area multiplied by a time-dependent height h(t) for the red (dark) labeled island, where h(t) is a step function that is equal to h for θPb < 8.439 ML and 2h for θPb > 8.439 ML. At this coverage, the average island height, calculated from a total fractional island area of 47.7%, is about 17.5 atomic layers. It is therefore likely that the red (dark) island makes a transition from a layer height of, e.g., nine layers to 18 layers. From the literature, it is known that Pb(111) films are rather special in that the Friedel oscillations at their surface allow strong QSE to persist even beyond a thickness of 30 atomic layers [11]. The slight incommensurability of the Fermi wavelength with the interlayer distance results in periodic crossovers in the stability of odd versus even layer heights [24]. This provides strong support for a height transition of the red (dark) island from an odd layer height below 11 (e.g. seven or nine) atomic layers to an even layer height (e.g. 14 or 18). From average island height measurements, we also found stable island heights of seven and nine layers at 474 K, discussed below and visualized in figure 7. These are in agreement with the observation in figure 5 and the corresponding proposed height of less than 11 atomic layers. We note that a comparison of the growth rate of the resulting island with those found previously (see e.g. figure 4) is consistent with this assignment. We also note that the local free energy minima near the odd-to-even crossover of preferred layer heights facilitate a pronounced, instead of an incremental, increase of the island height [10]. Figure 6 shows another example of the strong preference for specific QSE-stabilized heights. In the center of the 4 µm wide image in figure 6(a), a QSE-stabilized island has expanded its area up to the full terrace width, where the bounding steps are marked by white dashed lines. As soon as it extends slightly over the descending step, the height on the lower terrace becomes one atomic layer higher, which is energetically less favorable due to the bilayer periodicity of stable island thicknesses. Therefore, it becomes energetically more favorable for the island to split into two separate islands, one on each terrace with similar heights (see figures 6(c) and (d)), despite the creation of additional island boundaries. The average island height in figure 6, calculated from the islands' total fractional area, is about 15 layers. Since the heights of both islands are very similar, we are not able to assign exact island heights within the experimental error of the fractional area; 14- and 16-layer-high islands would be likely candidates. In order to gather more dynamic and global information on the evolution of the island height, we measured the average island height for a large number of frames for coverages from θPb = 1.0-2.5 ML. In figure 7, such measurements are shown for temperatures of 422 and 474 K, where the island density for the latter is lower. From this figure, the tendency of both curves to flatten at five layers for 422 K, and at seven layers, and less pronounced at nine layers, for 474 K, is attributed to the strong preference for QSE-stabilized island heights. Islands reach an energetically stable height and merely expand laterally. Keeping in mind that we consider average values for the island heights and that we are dealing with a distribution of island heights, we have to consider the tendency for a preferred height to be strong support for its existence.
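The mass-conservation bookkeeping used throughout this section reduces to a one-line relation, which the following minimal sketch encodes. The 1.08 wetting-layer density and the quoted figures are from the text; the simple coverage-over-area division for the mean height is our reading of the procedure:

```python
def area_ratio(n_initial, n_final, rho_wetting=1.08):
    """Predicted A_final / A_initial for an island changing from n_initial
    to n_final layers, with the vacated area reverting to the wetting layer:
    n_i * A_i = n_f * A_f + rho_w * (A_i - A_f)."""
    return (n_initial - rho_wetting) / (n_final - rho_wetting)

def mean_island_height(coverage_ml, fractional_area):
    """Average island height in layers from coverage and total island area."""
    return coverage_ml / fractional_area

print(f"3 -> 5 layers: R_A = {area_ratio(3, 5):.2f}")   # ~0.49 (measured 0.50)
print(f"4 -> 5 layers: R_A = {area_ratio(4, 5):.2f}")   # ~0.74 (measured 0.75)
print(f"<h> at 8.439 ML, 47.7% area: "
      f"{mean_island_height(8.439, 0.477):.1f} layers")
# The last line gives ~17.7, in line with the ~17.5 layers quoted above.
```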
The measured QSE-stabilized heights of five, seven and nine atomic layers show bilayer periodicity and are in agreement with the literature [10,24,28]. For the higher temperature of 474 K, the first stable height is seven atomic layers. At the slightly lower temperature the first stable height appears to be five atomic layers. However, we have clear evidence for the existence of three-layer-high islands (see above). From the fact that the curve for the 422 K data starts at four layers for θPb = 1.0 ML, where exact data are missing up to θPb = 1.15 ML due to re-adjustment of the instrument, we are probably dealing with a mixture of both three- and five-layer-high islands, the former being metastable at this temperature. Consequently, these data may provide the first experimental evidence for preferred three-layer-high islands in the electronic growth of Pb films, as predicted by theory [10,28]. The 'noise' at the start of the 422 K data can actually be traced back to transitions from three to five layers and is by no means just a statistical feature. For increasing coverage, the spread in island heights increases and therefore the flattening in figure 7 becomes less pronounced. Table 1 summarizes the stable heights observed in our experiments in comparison to those found for Pb on Ge(111) and Si(111) [3,29]. Note that the growth of QSE-driven Pb [...] [10]. See also figures 3-5.

I/V-LEEM interpretation

We have discussed the electronic growth observations of Pb on Ni(111) using the morphological changes in our LEEM observations. We now focus on the interpretation of the I/V-LEEM curves obtained from the Pb islands. By varying the incident electron energy while measuring the reflected intensity, I/V-LEEM curves of the (0, 0) beam at normal incidence can be obtained. Typically, these I/V-LEEM curves exhibit pronounced quantum interference peaks resulting from the QSE. The positions of these peaks can be easily predicted by a simple Kronig-Penney (KP) model; see [30] for a full description. In short: the KP model uses two potential boxes for each layer, with depth V and width w, centered at the atoms (Va, wa) and in between the atoms (Vg, wg). The substrate is given as a featureless box with depth V0s. By requiring the wave functions and their first derivatives to be continuous at the various transitions, including the vacuum-film interface, we can derive the reflection coefficient at the latter interface, which represents the measured quantity. The result is N − 1 interference peaks for an N-layer-thick film. Analogous to the Pb on Ni(111) system, these predicted interference peaks were also found for the electronic growth of Bi on Ni(111), for island heights of three and five atomic layers [31]. Figure 8(a) shows the measured I/V-LEEM curves for a five-atomic-layer-high island, where the height is determined with an evaluation scheme similar to the one used in figures 3 and 4. Small but distinct quantum interference peaks can be found at 3.5 and 11.9 eV, marked by arrows. The intensity of the quantum interference peaks (IQSE) is rather small in comparison to the Bragg peak intensity (IBragg) around 22.0 eV, marked 'B' in figures 8(a) and (b), with a typical ratio of IBragg/IQSE ≈ 20. The S/N ratio is limited by the available intensity. In order to qualitatively understand the distinct interference peaks in figure 8(a) we propose a simple KP model.
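As a rough illustration of what such a model involves, the Airy-type recursion below computes the electron reflectivity of an N-layer film described by the double-box potential, using the table 2 values. The assignment of the three listed depths to Va, Vg and V0s, and the use of 3.81 eV Å² for ħ²/2m, are our assumptions, so this is a qualitative toy rather than the authors' fit:

```python
import numpy as np

HB2_2M = 3.81  # hbar^2 / (2 m) in eV * Angstrom^2

def reflectivity(E, n_layers, Va=20.7, Vg=10.2, V0s=20.0, wa=1.35, wg=1.70):
    """|r|^2 at normal incidence for an electron of energy E (eV above
    vacuum) hitting an n_layers film; each layer is two attractive boxes
    (Va at the atoms, Vg in between) on a featureless substrate of depth
    V0s (all depths in eV, widths in Angstrom)."""
    def k(V):  # wavevector inside a box of depth V
        return np.sqrt((E + V) / HB2_2M)
    # region list from vacuum to semi-infinite substrate
    boxes = [(0.0, None)] + [(Va, wa), (Vg, wg)] * n_layers + [(V0s, None)]
    ks = [k(V) for V, _ in boxes]
    r = (ks[-2] - ks[-1]) / (ks[-2] + ks[-1])        # innermost interface
    for j in range(len(boxes) - 3, -1, -1):          # fold outward
        k1, k2, w2 = ks[j], ks[j + 1], boxes[j + 1][1]
        r12 = (k1 - k2) / (k1 + k2)
        phase = np.exp(2j * k2 * w2)
        r = (r12 + r * phase) / (1.0 + r12 * r * phase)
    return float(abs(r) ** 2)

energies = np.linspace(0.5, 25.0, 500)
curve = [reflectivity(E, 5) for E in energies]
# A five-layer film should show on the order of N - 1 = 4 interference
# features, to be compared qualitatively with the measured I/V-LEEM curves.
```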
Since, to the best of our knowledge, no parameters for the Pb on Ni(111) system are known, the parameters are based on the best-fit parameters for a five-layer-high Bi island on Ni(111) [31] (see table 2). Using these values, we are able to qualitatively reproduce the experimental I/V-LEEM data (see figures 8(a) and (b)), with the exception of the intense peak at 7 eV. To fit the Bragg peak to the experimental data we use an interlayer distance, wa + wg, of 3.05 Å. This corresponds to a surface relaxation of almost 7% compared to bulk Pb. Large relaxations of the (four) outermost Pb(111) layers, larger than typically found for other fcc(111) surfaces, are known to dominate at temperatures over 0.5Tm [32], where bilayer modulations of the interlayer distance are also reported [33]. The large surface relaxations found for Pb(111) cause the quantum interference peaks to broaden and thereby reduce their peak height. We show that the positions of the quantum interference peaks at energies of 3.7, 10.2 and 18.1 eV are in qualitative agreement with the experimental I/V-LEEM curve. The intensity ratio between the Bragg peak and the quantum interference peak at 3.7 eV for the KP model is about IBragg/IQSE = 10. The higher ratio for the measured curve may well be explained by the peak broadening, along with the Debye-Waller factor and/or dynamic effects. The intense peak at 7 eV in figure 8(a) is attributed to band structure effects. From the literature it is known that sharp peaks in reflectance correlate with band structure crossings and gaps along the ΓL line of the Brillouin zone [34].

[Table 2. Parameters of the KP model used for the fit in figure 8(a), based on the values for Bi/Ni(111) [31]: 20.7, 10.2 and 20.0 (potential depths, eV) and 1.35 and 1.70 (box widths, Å).]

Typically for fcc metals, these sharp peaks lie near the Γ2′ point (Γ7− in double group notation) [35]. The position of Γ2′ at 10 eV above EF [36] correlates with the intense peak at 7 eV after the addition of the inner potential [37] and the subtraction of the work function difference between the LaB6 cathode [38] and thin Pb(111) films [9,39]. Taking this band structure effect into account, we superimposed a peak at 7.0 eV on the KP model, shown by the solid line in figure 8(a) in the energy range of 4.6-9.5 eV. This results in qualitatively good agreement with the measured I/V-LEEM curve in figure 8(a). Therefore, we conclude that, in addition to the weak quantum interference peaks, the I/V-LEEM curve is dominated by strong band structure effects around 7 eV. We have demonstrated a wide variety of mostly circumstantial evidence for the occurrence of QSE in thin Pb layers on Ni(111). All features show a close resemblance to previous observations obtained for Pb on Cu(111) [13] and even on Si(111) and Ge(111) [10,28]. Therefore, we can safely conclude that QSE are present in thin Pb films on Ni(111) and govern the film morphology. For an explanation of the decoupling of the Pb states from the Ni bulk bands, see [16], in which the authors report a work function reduction of no less than 0.7 eV for Pb-modified Ni(111). This directly results in a clear band gap in the interface-projected Ni bulk bands, with the Fermi level about 0.5 eV above the band maximum at the Γ point. In this situation, all the requirements for observing QSE in Pb/Ni(111) are fulfilled, as experimentally observed.

Summary

We have studied the growth and properties of thin Pb films on Ni(111). First, a one-layer-high wetting layer develops that consists of small (7 × 7) and (4 × 4) domains, where the former has a stronger binding to the Ni(111) substrate.
This results in the accumulation of tensile lateral stress, which the system relieves by allowing the growth of compressively stressed (4 × 4) domains. Since the density in both domains is very similar and their azimuthal orientation is identical, the domain wall energy will be low. These low-energy-cost domain walls result in small domain sizes. For coverages θPb/Ni > 0.55 ML, Pb mesas form, which are embedded in the wetting layer. We have shown distinct QSE-driven preferred heights for the Pb mesas in a number of specific examples for relatively thin and thick films. This is apparent from island height transitions both on wide terraces and at substrate steps. The average island heights that evolve during deposition at 422 and 474 K show a clear signature of QSE-driven preferred heights, distinctly including stable heights of five, seven and nine atomic layers. All features closely resemble previous observations for Pb films on Cu(111) as well as on Si(111) and Ge(111) [10,13,28]. The identified QSE in Pb/Ni(111) are attributed to Pb-induced modifications of the Ni(111) interface, leading to a band gap in the interface-projected Ni bulk bands. Weak quantum interference has also been identified in I/V-LEEM measurements. The experimental I/V-LEEM curve is also dominated by strong band structure effects around 7 eV.
The First Testament in the Gospel of Matthew

Andries van Aarde, Department of New Testament Studies (Sec A), University of Pretoria

Matthew is to be read as a narration with an ongoing plot and an open end. There is a correlation between the (post-paschal) Jesus' commission and the risen Jesus' presence in his disciples' (post-paschal) commission until the parousia. This insight amounts to the fact that the plot of Matthew continues after its apparent conclusion, only to be resolved in its implied continuation. The intention of the paper is to describe, against the background of the debate among Matthean scholars, the function of the use of the First Testament in the light of the abovementioned two sequences. The term 'First Testament' in this instance is not restricted to the Hebrew canon but also includes some pseudepigrapha which were not considered as 'outside a canon' either by the synagogue or the church, for example 1 Enoch, 2 Baruch, The lives of the prophets and Pseudo-Philo.

INTRODUCTION

The Gospel of Matthew is to be read as a narration with an ongoing plot and an open end. The plot of Matthew's story about Jesus consists of a correlation between the earthly Jesus' commission and the risen Jesus' presence in the (post-paschal) commission of the disciples until the coming of the parousia. The author wrote his gospel from a retrospective viewpoint. This after-the-event point of view enabled the narrator to provide the plot in the Matthean story, from the perspective of reader involvement, with an effective open end. Willi Marxsen (1959:63f), who points out in his well-known work on the Gospel of Mark some of the most important characteristics of the other two synoptic gospels as well, makes the following reference to the open-endedness of the Gospel of Matthew: Where Mark wrote against the background of an anticipation of Jesus' early return, Matthew began to allow for a possible delay in his return. He offered an interim solution. He enlarged upon the commission theme, which was also present in Mark (cf Mk 13:10), to make it an independent epoch with a typical Matthean function, which was to make disciples of all people. This period of the disciples' commission follows the 'time' of Jesus. It extends from Jesus' resurrection from the dead to the 'time' of Matthew himself. It goes even further. It actually extends into our time. The end of Matthew's gospel is thus open since, after the conclusion of the epoch of Jesus, another began which continues up to the end of time.

The above insight amounts to the plot of Matthew's story continuing after its apparent conclusion, and only being resolved in its non-explicit continuation. The intention of this paper is to describe the function of the use of the First Testament in the light of the two 'temporal' sequences in the plot of the Gospel of Matthew, against the background of the debate among Matthean scholars. However, to restrict the term [...] (the time from Abraham to David, from David to the Babylonian exile, and from the Babylonian exile up to and including the Messiah). Unlike Luke, Matthew does not begin with the history of Israel just to drop it again as something of the past, not to be regarded as a continuing sequence in his plot. What then, as far as time sequence is concerned, is the function of the use of the First Testament in the Gospel of Matthew? It does not serve as the 'thesis' for Jesus' (the Messiah's) 'antitheses', nor as the 'promise' which is brought to its fullest significance through its 'fulfillment' by the Messiah.
How should the portrayal of Jesus' 'fulfillment' (πληρῶσαι) of the 'law and the prophets' be understood in Matthew? My thesis is that the narrator uses the First Testament functionally in order to present his disciple/church-image (the second time sequence) as analogous to and in continuity with his Jesus-image (the first time sequence). The first time level is oriented towards and paralleled by the second. Jesus is 'God-with-us' in the first sequence and he is 'God-with-us' in the second. This expression has been taken from the world of the First Testament. Matthew's clarifying clause, ὅ ἐστιν μεθερμηνευόμενον Μεθ' ἡμῶν ὁ θεός (Mt 1:23b), resembles Isaiah (LXX) 8:8, 'which, being interpreted, means, With us is God' (see Allison, Jr 1993:154). What is at stake in the Matthean birth story with regard to this Leitmotiv is probably a Moses typology, as can be observed in Exodus 4:16 and 7:1. Moses is obviously not identified with God in these verses, but he clearly 'play(s) the role of God' (Allison, Jr 1993:154; cf Meeks 1970:354-371). Matthew parallels the popular expansions on Exodus about the birth of Moses which were known in the first century, as can be seen in Josephus' Antiquities of the Jews (4. [...]) and in Pseudo-Philo's Liber Antiquitatum Biblicarum (9.2-10) (cf Crossan 1986:18-27; 1994:62-66).

There is a continuity as well as an analogy between the Jesus commission (the first sequence) and the disciples' commission (the second sequence). The first temporal level is oriented towards the second. This relationship can therefore be typified as that of a transparency. In the transmission, conversion and re-interpretation of earlier traditions (oral and written), the Jesus era is transposed to the early church era in such a way that two historical worlds are simultaneously taken up as a narrative entity in the gospels. The story in a gospel thus concerns people and things from an earlier time, while the later period in which the gospel arose and was communicated is transparent in the text. A gospel thus simultaneously refers to two 'real' worlds. In the gospels the pre-paschal world of Jesus, the disciples and the others is generally the most transparent. Nevertheless, the world of the post-paschal church is more transparent in some passages. The one world is never manifested totally isolated from the other. The world of the early church and that of Jesus and the disciples are, in a dialectical sense, simultaneously taken up in the gospel as a narrative record. These two worlds are presented in accordance with the narrator's 'ideological' perspective. Exactly what the continuity and analogy between the 'history of Jesus, the Messiah' and the 'history of the church' involve should be defined from the ideological perspective of the narrator. Ulrich Luz (1994:55), in respect of a quite different issue, suggests the same idea as follows: '... Matthew links the church exclusively with the earthly Jesus ... because Matthew has a narrative theology. He tells the story of Jesus. In this story, the church does not simply exist but becomes the church, because Jesus, who heals his people, shares his power with his disciples and gives them a task'.
The ideological level is basic to all other levels in the narrative (cf Uspensky 1973:8-9). These levels include characterization, the way characters act, speak, feel and think, as well as the temporal sequences and spatial order in terms of which the characters move as the plot of the narrative develops. In 'religious' literature, as in Matthew's gospel, the 'ideological' perspective is to be seen as 'theological' in nature. In the Gospel of Matthew the ideological/theological perspective of the narrator coincides with the narrated perspective of the protagonist (cf Van Aarde 1994:35ff). This phenomenon boils down to all events, places, characters, and the like being presented from one consistent perspective, that is, from that of one character, Jesus, who is called the Messiah. This single dominant perspective resounds through every episode in the narrative. By means of this technique the narrator 'lures the reader into ... times, and places by perspectively locating himself [or herself] and the reader in the midst of the scenes and events he [or she] describes, enabling the reader to see, hear, and know things he [or she] would not have access to without the narrator's guiding voice' (Petersen 1980:36-38).

The plot of the Gospel of Matthew is, as indicated, characterized by two 'lines of action', or 'narrative lines': that of the pre-paschal Jesus commission (the primary sequence) and that of the post-paschal disciple commission (the secondary sequence). The dominant perspective in the theology of the Gospel of Matthew is that from which the narrator accomplishes the analogy and association between the events of these two 'lines of action'. Seen thus, the continuity and analogy between the first and the second sequence is based on the narrator's image of Jesus as Immanuel. Jesus is God-with-us in the first sequence and he is God-with-us in the second. Kingsbury (1973:471) describes this analogy as follows: '[T]he coalescence of the time of Jesus and the "time of the Church" in the theology of Mt. is, ultimately, christologically motivated and has its roots in the pre-Easter - post-Easter continuity of the person of Jesus ...'
Thus, the ideological/theological perspective of the narrator is closely associated with the expression God-with-us, which occurs explicitly at the beginning (Mt 1:23), middle (Mt 18:19f) and end (Mt 28:18ff) of the Gospel of Matthew. Matthew's gospel relates that God came to the world from his domain, the kingdom of heaven. Instead of manifesting himself in the temple, which had been his dwelling place among his people, but which had degenerated (cf Lohmeyer 1942:109f) as a result of the actions of the Israelite elites (cf Mt 21:12ff) and the occupants of Moses' cathedra (cf Mt 23:2), he became God-with-us in Jesus, the Messiah/Son of Man, the Son of God, who is 'greater than the temple' (Mt 12:6). This Jesus-mission had the purpose of forgiving the sins of all people outside the structures of the temple, especially those of the outcasts within the Israelite crowds, the 'lost sheep of Israel', but also the Gentiles (Mt 1:21; 3:6; 9:13; cf Saldarini 1994:75), as 'sinners', the new eschatological community (Lohmeyer 1942:60ff). Jesus did this by executing the will of the Father with total obedience, so as to 'fulfill all righteousness' (Mt 3:15). Theoretically, the 'will of the Father' is the 'law and the prophets' (Mt 5:17), and this turns into practice (cf Stanton 1992:383) when there is compliance with the radical demand for love (Mt 19:19b, 21; 22:34-40). It is in this sense that Gibbs (1968) refers to the Matthean Jesus as the 'Torah incarnate'. 'For Matthew, love is the criterion for truth and falseness of faith and also for real understanding' (Luz 1994:95). Although Jesus had already called disciples at the commencement of his work among the Jewish crowds and the Gentiles, and had made them 'fishers of human beings' (Mt 4:19) to assist him, their mission into the world only began with Jesus' resurrection from the dead. The disciples were commissioned to teach, and the content of their commission was the 'law and the prophets', which was the will of the Father as interpreted and embodied by Jesus himself. Matthew makes no distinction between the law and the prophets (Saldarini 1994:161). The continuing presence of the risen Jesus as God-with-us until the end of the world became visible in the obedience of the disciples who, in executing their commission to make disciples of others, were doing God's will just as Jesus did.
THE ESCHATOLOGICAL TURNING OF THE TIDE
Turning again to Marxsen's contribution: he does note a correlation between the 'time of Jesus' and the 'time of the church' in the Gospel of Matthew (Marxsen 1959:64). Furthermore, he considers that Matthew wrote his gospel on the basis of three temporal phases. He does not consider that these three are, as indicated above, the 'time' of the pre-paschal Jesus commission, the 'time' of the post-paschal disciples' commission and the 'time' after the parousia. In his view they consist of the 'time of the First Testament', the 'time of the earthly Jesus' (to me, the first sequence) and the 'time of the evangelist and his community' (to me, the second sequence). What is fundamental to Marxsen's theory is that a distinction has to be drawn in the Gospel of Matthew between the 'time of the First Testament' and the first temporal sequence. His view basically corresponds with that of Conzelmann (1977) in regard to Luke-Acts. According to this, the 'time of Jesus' forms a central point between the 'time of the First Testament' and the 'time of the church'. Or, as Marxsen typifies it with regard to the Gospel of Matthew: the time of Jesus is an epoch between two others. Conzelmann and Marxsen, with their viewpoints, initiated investigation into the so-called Heilsgeschichte in the theology of the Lukan and the Matthean gospels. Research has shown that the so-called Heilsgeschichte, that is, the parallel between Christology and ecclesiology, forms one of the central themes, if not the most central theme, in the theology of Luke-Acts (cf Rengstorf 1969:6). This statement can to some extent also be made applicable to the Gospel of Matthew. Questions that can be disputed in this connection as far as the Gospel of Matthew is concerned are those regarding the number of sequences that are discernible in the gospel, the point at which one sequence ends and another begins, and the place and nature of the time of the First Testament within the heilsgeschichtliche framework of the gospel. Although I shall be focusing my attention on the latter, the three aspects noted above are closely integrated. While in the Gospel of Mark there are only two occurrences (Mk 1:15; 14:49) of the prediction that the 'time of Jesus' is a fulfillment of a promise, which would be the First Testament, the idea of fulfillment plays a much more prominent role in Luke and Matthew. Marxsen has shown that, just as Matthew finds a correlation between the 'time of the earthly Jesus' and the 'time of the evangelist and his community', there is a correlation between the 'time of the earthly Jesus' and the 'time of the First Testament'. He, moreover, points out that the latter correlation is expressed in the fulfillment citations in particular. The conclusion of the time of Jesus also refers to a previous beginning, namely that of the First Testament (Marxsen 1959:64). The question is that of the nature of this correlation, or continuity, which is expressed by the fulfillment citations, as well as by other First Testament citations and First Testament allusions.
I have mentioned above that there is an analogical continuity between the pre-paschal Jesus commission and the post-paschal disciples' commission, and that the continuity centers around the presence of Jesus as God-with-us. I shall now show that the First Testament is used particularly effectively by the narrator as something on which to base this continuity. Senior (1976:670) remarks correctly in this connection: 'Perhaps no evangelist performed this "ministry of continuity" with more skill than Matthew. To study his Gospel under the rubric of "continuity" is to discover the core of his message.' On certain points my view agrees with that which we find in, for example, Strecker (1966), Walker (1967) and Kingsbury (1973), while authors such as Barth (1961), McConnell (1969), Barr (1976), Senior (1976) and Aguirre (1981) do not hold the same view. Scholars such as Trilling (1969) and Meier (1975) adopt another view in this connection. I shall at the same time show that none of the scholars mentioned recognized the relationship between Matthew's Immanuel theology and the different temporal levels in the Gospel. This relationship simultaneously serves to explain the role and the nature of the 'time of the First Testament' in the Gospel of Matthew, as well as the paradox between the so-called particular and the universal purport in this gospel. Although both Strecker (1966:86-93) and Walker (1967) are greatly influenced by Conzelmann, they differ from each other in respect of certain finer details. Both, however, agree that three temporal phases can be distinguished in the Gospel of Matthew. Walker (1967:115) refers to these three temporal phases as the 'prehistory of the Messiah', which began with Abraham, the 'history of the calling of Israel' (the particular purport), which consists of the ministry of John the Baptist as the precursor of the Messiah and of Jesus himself as the Mitte der Mitte, and finally the 'history of the mission to the Gentiles' (the universal purport), which began with the crucifixion and resurrection of Jesus and extends to the day of judgment, and thus partly coincides with the time of the evangelist. Strecker (1966:184-188) refers to these three temporal phases as the 'time of the fathers and the prophets', the 'time of Jesus' and the 'time of the Heidenkirche'. Like Walker (1967:115), Strecker (1966:187) regards John the Baptist as part of the 'time of Jesus'. After Jesus' death and resurrection this 'time' went over into the 'eschatological time'. Unlike Strecker and Walker, Kingsbury (1973:471) does not distinguish three temporal phases in the Gospel of Matthew, but two. He formulates his view as follows: It has long been recognized that especially the formula quotations in the first Gospel reveal that Mt. has theological affinity for the categories of 'prophecy' and 'fulfillment'. These terms aptly characterize Mt's view of the history of salvation. There is the 'time of Israel', which is preparatory to and prophetic of the coming of the Messiah; and there is the 'time of Jesus',
in which the time of Israel finds its fulfillment and which, from the vantage point of Matthew's day, extends from the beginning of the ministry of John and of Jesus (past) through post-Easter times (present) to the coming consummation of the age (future). In Mt's scheme of history, one does not, strictly speaking, find any such epoch as the 'time of the Church', for this 'time' is subsumed under the 'last days' inaugurated by John and Jesus (Kingsbury 1973:471). Kingsbury differs from Strecker and Walker not with regard to the beginning of the 'time of Jesus', but with regard to the end of this 'time'. He holds the opinion that there was no change in 'time' at Jesus' death and resurrection, but that the 'eschatological time' coincides fully with the 'time of Jesus'. This 'eschatological time' begins with the commencement of John the Baptist's work. In this connection the three scholars mentioned above consider that Matthew 3:1, as the beginning of John the Baptist's service, indicates the division between the 'time of the First Testament' and the 'time of Jesus'. According to this view, the elements promise (the 'time of the First Testament') and fulfillment (the 'time of Jesus') separate the two temporal levels. Kingsbury (1973:470; cf Strecker 1966:87) builds his argument chiefly on the time formula ἐν ἐκείναις ταῖς ἡμέραις, which appears in Matthew 3:1 and 24:19, 22, 29. He considers that this time formula has an exclusive 'eschatological' connotation that refers to 'that period of time which precedes the consummation of the age and the return of Jesus, Son of Man'. Matthew thus, according to Kingsbury, employs this time formula inclusively and uses it to refer to the 'time of John the Baptist', the 'time of Jesus', and the 'time of the church'. And, because of this inclusiveness, the Gospel of Matthew does not, according to Kingsbury, show a separation between the 'eschatological community' and the 'time of Jesus', but a separation between the 'time of the First Testament' and the 'time of Jesus'. The latter begins with the 'time of John the Baptist'. '...
Matthew, as 11:13 indicates, sees the law and the prophets, the entire OT, as "prophesying", as pointing forward to the events that mark the eschatological age of salvation' (Kingsbury 1977:83f). Kingsbury (1973), like Strecker (1966), considers that the three stages in the 'eschatological time', that is to say the 'time of John the Baptist', the 'time of Jesus' and the 'time of the church', should not be seen as a progressive increase in eschatological intensity. Although various 'historical' stages are distinguishable in the 'eschatological time', these stages, according to Kingsbury and Strecker, do not represent qualitative differentiation, but rather make up a qualitative whole. Strecker, unlike Kingsbury, draws a type of distinction between the 'time of Jesus' and the 'time of the church'. These two 'times' function, according to him, alongside one another in the Gospel of Matthew. He formulates this mutual impact of the two 'times' on each other by saying that the eschatological element is historicized. In other words, eschatology is consequently organized in time, just as, vice versa, the story of the Jesus of history can no longer be understood in secular-historical categories, but attains an eschatological quality: 'Das eschatologische Element wird historisiert, nämlich konsequent der Zeit eingeordnet, wie umgekehrt die Historie nicht mehr in profangeschichtlichen Kategorien zu erfassen ist, sondern eine eschatologische Qualität erlangt' (Strecker 1966:185). As far as both the beginning and the end of the 'time of Jesus' are concerned, I do not regard Kingsbury as convincing. With regard to the end of the Jesus commission I have already pointed out that there is an analogy in the Gospel of Matthew between the pre-paschal Jesus commission and, in pursuance of this, the post-paschal disciples' commission ('the time of the eschatological community' in Lohmeyer's terminology). Nevertheless these two sequences do not function as exclusive compartments. They are mutually integrated by means of thematic parallels (cf Mt 4:23; 9:35 with 10:6ff), cross-references (cf Mt 16:19 with 18:18; 23:13), prospection (cf Mt 5:12 with 23:34ff) and retrospection (cf Mt 14:13-21; 15:32-39 with 16:9ff). This mutual integration of the pre-paschal Jesus commission and the post-paschal disciples' commission relates to the comment above by Strecker, that the 'historical element' in the Gospel of Matthew has gained an eschatological quality and the 'eschatological element' has again been historicized. It is this insight which I want to express by means of the transparency concept. Aguirre (1981:152) formulates it as follows: Matthew contains a level of narration, grounded in tradition and embodying an historical perspective on the past, though seen through faith and hence idealized. But there is also a second level that makes this past narrative relevant to the present needs of Matthew's community. Though neither level of discourse is ever totally absent, in some contexts one level may take precedence over the other, and the Gospel will slip imperceptibly from one to the other.
Kingsbury's use of the time formula ἐν ἐκείναις ταῖς ἡμέραις in Matthew 3:1, 24:19, 22, 29 to support his point of view does not hold water here either. This is also the main reason why I differ from Kingsbury regarding the beginning of the Jesus commission. Since I do not draw a distinction between the singular form of the time formula, ἐν τῇ ἡμέρᾳ ἐκείνῃ, and the plural form, ἐν ἐκείναις ταῖς ἡμέραις, I have pointed out that this time formula marks both the first sequence (Mt 3:1; 7:22; 13:1; 22:23) and the end of the second sequence (Mt 24:19, 22, 29). The time formula concerned has, in other words, an eschatological connotation in the so-called eschatological discourse (chapters 23-25), but not in Matthew 3:1. Kingsbury therefore integrates the pre-paschal Jesus commission with the post-paschal disciples' commission, with the result that the continuity and analogy between them are thereby lost. It is therefore important to realize that the shift between these two sequences takes place at Jesus' crucifixion and resurrection. Trilling (1969a, 1969b), in two separate articles, has convincingly shown that the 'Wende der Zeit' takes place at this point in the Gospel of Matthew (cf Meier 1975:207). He writes in the first article that Matthew 27:51ff is highly remarkable, since the death of Jesus not only causes the veil to tear, which according to The Lives of the Prophets [Habakkuk] 12:11-12 signifies God's judgment of the temple cult (Garland 1995:260) and the end of the old cultic order, but also causes an earthquake (see Zechariah 14:4) and the resurrection of the dead (see Ezekiel 37:13-14 and 1 Enoch 51:1-2). These are eschatological signs: the earthquake belongs to the apocalyptic elements; it marks the beginning of the end and the rearrangement of the world (Trilling 1969a:195; Allison, Jr 1985:40-46). The same point of view is expressed in the second article of Trilling when he states that, in regard to Matthew 27:51f, these verses can only be seen as an announcement, through the death of Jesus, of the beginning of the new aeon, a change that encloses the whole cosmos.
It is a dramatic anticipation of Jesus' resurrection in the story of Jesus' death. It announces the destruction of the old and the dawning of the new time (Trilling 1969b:221f; cf Waetjen 1976:248). Because of my difference with Kingsbury in this important matter regarding the eschatological turning of the tide in the Gospel of Matthew, I consider that he mistakenly wishes to separate the 'time of the First Testament' from the time of the earthly Jesus as Immanuel (the first sequence) and, as far as I am concerned, also from the time of the risen Jesus as Immanuel (the second sequence). Meier (1975:207; 1976:30-35) also considers that the crucifixion and resurrection of Jesus introduce the 'Wende der Zeit'. He, however, holds the view that there is a radical distinction between the 'old time' and the 'new time'. He equates the 'old time' with the 'time of the First Testament' and thus with the demand for obedience to the Mosaic law and the time of Jewish particularism. He equates the 'new time' with the period of the universal purport, which began with the death and resurrection of Jesus and was foreshadowed during the 'old time', as can be seen in texts such as Matthew 8:5-13 and 15:21-28. Meier builds his argument chiefly on the baptismal command to the disciples with regard to the πάντα τὰ ἔθνη (Mt 28:19). According to him baptism replaces circumcision, which symbolized the 'old time'. Just as the particular purport went over into the universal, the demand for obedience to the Mosaic law, according to Meier, falls away with Jesus' death and resurrection. Variations on this view are encountered in Trilling (1964:211), Hamerton-Kelly (1972) and Waetjen (1976:244). The latter, despite so many meritorious insights in his book, The origin and destiny of humanness, with regard to First Testament allusions in the Gospel of Matthew, uses misleading expressions like: 'The death of Jesus is also the death of Israel' (Waetjen 1976:248) and '[T]he promises of the Old Testament have been fulfilled and cancelled at the same time' (Waetjen 1976:244). What these scholars do not take into account, however, is that the use of the First Testament in the Gospel of Matthew can be seen as a narrative technique which principally has the same function as narrator's commentary. Narrator's commentary serves the reader as an important directive to read the narrative as the narrator intends it to be read. The introductory formula of the fulfillment citations can, seen thus, be regarded as the introduction to the narrator's commentary. Graham Stanton (1992:348) calls this introductory formula '"asides" of the evangelist' which 'are not placed on the lips of Jesus or of other participants in the evangelist's story'. By means of scriptural proof and fulfillment citation the First Testament functions in the Gospel of Matthew as the narrator's commentary, on which he bases the continuity and analogy between the pre-paschal Jesus commission and the post-paschal disciples' commission. This continuity and analogy lies in the presence of Jesus as God-with-us on both temporal levels. And Jesus' Immanuel nature is manifested in his absolute obedience to the will of the Father (the 'law and the prophets'). David Barr (1976:357f) therefore rightly remarks that the relationship between prophecy (the 'time of the First Testament') and fulfillment (the 'time of Jesus' and the 'time of the church') is not one of antithesis, but one of completion.
Just like Barr, Senior (1976:672f) also considers that Matthew uses the First Testament to build a continuity and analogy between his Jesus-image (first sequence) and his disciple/church-image (second sequence). One finds the same conviction in Aguirre's (1981) article on the interrelationship between 'cross' and 'kingdom' in Matthew's theology. The result of my investigation largely agrees with their views on the levels of the pre-paschal Jesus commission and the post-paschal disciples' commission. I shall now give a short explanation of this result.
THE FUNCTION OF THE USE OF THE FIRST TESTAMENT
The Gospel of Matthew is circumscribed by Jesus' birth record (Mt 1:2-17) and the commission to the disciples (Mt 28:16-20). The genealogical register relates Jesus' divine legitimacy and royal ministry to Mosaic kingship and covenantal kinship in the First Testament: being Son of God, Son of Abraham and of David, born in Bethlehem; Immanuel, the 'new Moses'. The commission of the disciples relates the ministry of 'sons of God' and 'brothers of each other' in the ἐκκλησία with that of Jesus. In effect, the pre-paschal Jesus commission and the post-paschal disciples' commission are both linked to the First Testament (the 'law and the prophets'). In terms of traditional theologoumena this means that the theology of the Gospel of Matthew is neither ecclesiological (cf e g Strecker 1966) nor christological (cf e g Kingsbury 1975), but that ecclesiology and christology, as a result of the God-with-us perspective of the Gospel of Matthew, are a two-part unit (cf e g Frankemolle 1974:230, 239, 243). On the levels of both the pre-paschal Jesus commission and the post-paschal disciples' commission the First Testament (the 'law and the prophets') functions as the directive medium. This statement can be argued as follows. Matthew 5:17-20 functions in the gospel as the key to the lasting validity of the kerygma in the First Testament. Jesus did not come to make the First Testament invalid and replace it, but to illustrate its 'true meaning' in his actions and disposition, and thus to 'fulfill' it. This disposition contrasts, according to Matthew, sharply with that of the Israelite elites. It is thus in obedience to the will of the Father that Jesus turns to the 'lost sheep of Israel' (Mt 9:13: οὐ γὰρ ἦλθον καλέσαι δικαίους ἀλλὰ ἁμαρτωλούς), an obedience stripped of formalism (cf e g the question of keeping the Sabbath, Matthew 12:1-8; keeping the tradition of the elders, Matthew 15:1-6; service to the temple authorities, Matthew 17:24-27; 21:12-17). His service is the embodiment of the core of the demand of the 'law and the prophets' (cf Mt 22:34-40). He is the perfect example of the absolutely obedient 'Son of God' (Mt 5:45). As far as discipleship is concerned, the following remark by Senior is important: 'To be a disciple of this Master is not to abandon one's heritage, but to bring that heritage to its fullest potential.' The success of the disciples in executing their call to be Jesus' helpers, and the criterion that will count during the parousia, are determined by obedience to God's will, the 'law and the prophets'. It is, however, not obedience to the 'law and the prophets' as such that will separate the sheep from the goats (Mt 25:38). The authority of the First Testament is relevant 'only to the degree that they [the "law and the prophets"] are embodied in the commands of Jesus' (McConnell 1969:97; cf Mt 7:28f; 22:16).
Nevertheless, scholars such as McConnell (1969:90) and Kingsbury (1977:82ff) point out the paradox between Matthew 5:17-20 and Matthew 5:21-48 (the so-called 'antitheses'). I have already mentioned that Matthew 5:17-20 explicitly states that, in Matthew's view, it was not Jesus' intention to reduce the validity of the First Testament (cf Mt 24:35). It seems, however, that this very same positive approach regarding the First Testament cannot be applied to the third 'antithesis', the prohibition on divorce (Mt 5:31f; cf Mt 19:3-12), the fourth 'antithesis', the prohibition on oaths (Mt 5:33-37), and the fifth 'antithesis', the nullifying of the doctrine of retribution. Strecker (1978:69f), for example, on the basis of a traditional redaktionsgeschichtliche investigation, formulates his findings by stating that it is important to note that, in the distinction between 'real' (pre-Matthean) and 'false' (redactional) antitheses, the alternatives 'tightening the Torah' or 'annulment of the Torah' do not constitute a sufficient criterion. In antitheses 1 and 2 (verses 21ff and 28ff) the wording of the First Testament is radicalized. In antithesis 4 (verse 33f), however, the First Testament oath is not only outdone but totally abolished, antithesis 3 (verse 31f) annuls the First Testament nomism, and antithesis 5 (verse 38ff) specifically criticizes the First Testament ius talionis. In other words, the Matthean Jesus does not mention the will of God only with regard to the Israelite tradition, but also in critical analysis of the Mosaic Torah, in order to 'fulfill' its true sense (cf Allison 1993:289-290). Other examples of the use of the First Testament in the Gospel of Matthew are discussed below. In as much as Jesus as God-with-us is the embodiment of the will of the Father, his mission (pre-paschal and post-paschal) is cloaked with authority (see Mt 28:18). This ἐξουσία manifests itself in the Moses-like teaching and the healing miracles of the Son of David. The teaching and the healings have as their content the proclamation of the gospel of the βασιλεία τῶν οὐρανῶν. What is therefore remarkable is the fact that it is the fulfillment citations, in particular, which emphasize these moments of teaching and healing as the realization of the 'law and the prophets' (cf Senior 1976:674; Sand 1974:192; Saldarini 1994:161). By the 'reduction' of the First Testament to the love command I do not mean the legitimation of only a part of the First Testament, the 'core' which, according to Matthew, would be the commandment to love (see Luz 1978:400f). For Matthew, the call to love serves rather as the hermeneutic key according to which obedience to the whole 'law and the prophets' is demanded. To Matthew the authoritative explanation of the law by Jesus, in which the call to love should have precedence in all circumstances, and on which all the other laws are dependent, is crucial (Luz 1978:420).
Obedience to the call to love is concretized in the Gospel of Matthew in the ministry of the pre-paschal Jesus as Immanuel (the first sequence) with regard to the Israelite multitude in particular, but to the Gentiles as well: the indicative. During the period of the mission to all the people (the second sequence) the disciples were expected to continue this radical call to love by analogy with the example set by Jesus himself, the embodiment of absolute obedience to the will of the Father: the imperative. McConnell (1969:90) refers to this imperative: as the parable of the sheep and the goats reveals, judgement is based on whether one has shown mercy to the needy (25:31ff). Matthew emphasized that judgement takes place according to one's works, that is, one's doing of the will of God (7:16-17). The analogical continuity between the ministry of the disciples in the period of their mission to the πάντα τὰ ἔθνη in the second sequence and the ministry of Jesus in the first sequence thus manifests itself in loving care towards the Israelite multitude, while the mission to the Gentiles is assumed. This continuity and analogy between the first sequence and the second is thus dialectically based, on the one hand, in the presence of Jesus as Immanuel in both sequences and, on the other, in the obedience to the will of the Father (the 'law and the prophets') during both sequences. The Gospel of Matthew 'contains a defining dialectic: the past informs the present, and the present informs the past' (Allison 1993:289). As far as the first sequence is concerned: 'His [Jesus Immanuel's] bond with the disciples [and thus with the church] is repeatedly stressed by means of ... catch phrases such as "with them", "with you", "with me". And the abiding presence of Jesus ... is a promise without end (18:20; 28:20) ... the risen Lord is present wherever a community of people hear the gospel and respond with ... compassion and service' (Senior 1976:676). As far as the latter sequence is concerned, Jesus' way is the disciples' way, and the congregation that follows suit is reminded by Matthew, as by his predecessors, of the consequences of following Jesus. The following demands an instruction about its reason and meaning, which is strongly emphasized in Matthew's gospel through the five discourses which are referred to in Matthew 28:20 (πάντα ὅσα ἐνετειλάμην ὑμῖν). The content of this instruction is God's longstanding will. As Jesus fulfilled it totally, so the disciples are called upon to fulfill God's will, which includes 'being with him' (Frankemolle 1974:82). The closing words (Mt 13:52) of the parable discourse (Mt 13:1-51) express this analogy between the Jesus-image and the disciple-image, based in the radicalized Jesus-interpretation of the 'law and the prophets' (the 'old' and the 'new' in one): 'Therefore every teacher of the law who has been instructed about the kingdom of heaven is like the owner of a house who brings out of his storeroom new treasures as well as old' (Mt 13:52). The disciples are reminded of how Jesus in his teaching and work made the old things new and how he interpreted old traditions in a radically new way, and are thus informed of how they should go about with what they already know but also with their newly acquired knowledge of the kingdom (Vorster 1977:136). The sequence in Matthew 13:52 should be noted: new and old! What then is 'old'? The Jewish connotation of the term 'scribe' suggests that, with 'old', Matthew is referring to the 'law and the prophets' which, in the opinion of the evangelist, remain valid.
With regard to the function of the use of the First Testament in the Gospel of Mark, one can note a difference between Mark and Matthew (cf Vorster 1981:70). Although the use of the First Testament in both gospels functions according to the promise-fulfillment technique, in the Gospel of Mark this technique is implemented by the citations within the narrative itself, unlike in Matthew, where the First Testament is considered fulfilled in Jesus. Willem Vorster states this as follows: '... these quotations form part of the Markan narrative of Jesus and are fulfilled in that narrative. In other words it is not the same as in Matthew's account, where the First Testament is regarded as fulfilled in Christ. In Mark's gospel these quotations are part of the narrative statement and are fulfilled within the boundaries of that text.' For the Gospel of Matthew this implies that the 'time of the earthly Jesus' (the first sequence) and the 'time of the First Testament' do not coincide, but that, according to Matthew, the latter would be the advance 'promise' of the former, which would then be its fulfillment. Other examples of the use of the First Testament in the Gospel of Matthew, like the picking and eating of the ears of corn on the Sabbath (Mt 12:1-8), the healing of the man with the shrivelled hand (Mt 12:9-14) and the interpretation of the regulations regarding what is clean and what unclean (Mt 15:1-20), can in a certain sense in this context be added to the third, fourth and fifth 'antithesis'. Matthew uses the 'law and the prophets', as the will of the Father in heaven, to give authority to his ideological/theological perspective. McConnell (1969:90) refers to this imperative, which was to be realized in the ministry of the disciples: 'It is necessary that the disciples have a "better righteousness" (5:20) ... and this means performing the commands of Jesus which primarily concern showing love to God and to one's neighbour.'
Barr, D L 1976. The drama of Matthew's gospel: A reconsideration of its structure and purpose. ThD 24, 349-359.
Barth, G 1961. Das Gesetzesverständnis des Evangelisten Matthäus, in Bornkamm, G et al, Überlieferung und Auslegung im Matthäusevangelium, 54-154. 2. Auflage. Neukirchen-Vluyn: Neukirchener Verlag.
Charlesworth, J H 1985. Introduction for the general reader, in Charlesworth, J H (ed), The Old Testament Pseudepigrapha, Volume 2: Expansions of the 'Old Testament' and legends, wisdom and philosophical literature, prayers, psalms, and odes, fragments of lost Judeo-Hellenistic works, xxi-xxxiv. Garden City, NY: Doubleday.
Conzelmann, H 1977. Die Mitte der Zeit: Studien zur Theologie des Lukas. 6. Auflage. Tübingen: Mohr.
Crossan, J D 1986. From Moses to Jesus: Parallel themes. Bible Review 2/2, 18-27.
--- 1994. The infancy and youth of the Messiah, in Shanks, H (ed), The search for Jesus: Modern scholarship looks at the Gospels, 59-81. Washington, DC: Biblical Archaeological Society.
Frankemolle, H 1974.
Jahwebund und Kirche Christi: Studien zur Form- und Traditionsgeschichte des 'Evangeliums' nach Matthäus. Münster: Aschendorff.
Garland, D E 1995. Reading Matthew: A literary and theological commentary on the First Gospel. New York: Crossroad.
Gibbs, J M 1968. The Son of God as the Torah incarnate in Matthew, in Cross, F L (ed), Studia Evangelica, 38-46. Berlin: Akademie. (Studia Evangelica 4.)
Hamerton-Kelly, R G 1972. Attitudes to the law in Matthew's gospel: A discussion of 5:18. BR 17, 19-32.
Harrington, D J 1985. Pseudo-Philo, in Charlesworth, J H (ed), The Old Testament Pseudepigrapha, Volume 2: Expansions of the 'Old Testament' and legends, wisdom and philosophical literature, prayers, psalms, and odes, fragments of lost Judeo-Hellenistic works, 297-377. Garden City, NY: Doubleday.
Hare, D R A 1985. The lives of the prophets, in Charlesworth, J H (ed), The Old Testament Pseudepigrapha, Volume 2: Expansions of the 'Old Testament' and legends, wisdom and philosophical literature, prayers, psalms, and odes, fragments of lost Judeo-Hellenistic works, 379-399. Garden City, NY: Doubleday.
Isaac, E 1983. 1 (Ethiopic Apocalypse of) Enoch, in Charlesworth, J H (ed), The Old Testament Pseudepigrapha, Volume 1: Apocalyptic literature and Testaments, 5-89. London: Darton, Longman & Todd.
Kingsbury, J D 1973. The structure of Matthew's gospel and his concept of salvation-history. CBQ 35, 451-474.
Scholars such as Trilling and Meier adopt another interesting view in this connection. I have already made the point that the poetics of the Gospel of Matthew only display two explicit temporal (and topographical) levels, namely that of the pre-paschal and that of the post-paschal. As a consequence the 'time of the First Testament' does not function as a separate sequence in the Gospel of Matthew, but is a part of the pre-paschal Jesus commission. I thus differ from scholars such as Vorster, Kingsbury, Walker and Strecker with regard to the place and nature of the 'time of the First Testament' in the Gospel of Matthew. If I were to concur with these scholars in this connection, it would imply that Matthew and Luke, coincidentally, broadly recognized the same heilsgeschichtliche theology. The Immanuel perspective of the narrator in the Gospel of Matthew, however, makes a heilsgeschichtliche viewpoint, such as that maintained by the above-mentioned colleagues, impossible. My own view is rather more that of, for example, Barth, Barr, Senior and Aguirre. With regard to the very important point that features in this context, namely the point at which the first sequence switches over to the second, my view agrees with that of people such as Strecker and Walker, as well as with that of Trilling and Meier. I shall now explain my viewpoint against the background of the other opinions mentioned.
Those fulfillment citations in 8:17, 12:17ff and 13:35 (and other First Testament citations and allusions) that indicate Jesus' ministry, as well as First Testament motifs that lie behind some of his christological names as indications of his task (cf Senior 1976:673; Rothfuchs 1969:121-128), cannot be seen as separate from Jesus' mission to the Israelite multitude (i e, the 'lost sheep' of the house of Israel) and the Gentiles, and the opposition of the Israelite elites. The interest of some fulfillment citations indeed lies in the conviction that the life and work of Jesus, as the revelation of God's grace, is meant for the lost ones from the house of Israel as well as for the Gentiles (Rothfuchs 1969:103; cf Senior 1976:675). The fulfillment of the 'law and the prophets' by Jesus in the Gospel of Matthew should be understood as a reduction of the First Testament to the single instruction to love one's neighbor (cf Sand 1974:192; Saldarini 1994:161).
9,952.6
1997-12-13T00:00:00.000
[ "Philosophy" ]
Increasing the efficiency of land use in real property complex development projects. The relationship between the development of a city's transport infrastructure and urban land use is a pressing problem in the territorial development of any metropolis. Addressing this challenge requires the creation of new tools and mechanisms. Transport infrastructure is the main component of the entire infrastructure of any city, affecting the social, economic and environmental efficiency of urban land use. Therefore the authors, choosing transport infrastructure as the object of the research, analyze the patterns that arise during its development and their effect on urban land use as a whole. Methods such as the calculation-graphical method, analysis and synthesis, and the abstraction method are used. The authors analyze methods of managing urban transport systems as part of real property complexes. The ecological and economic justification of development projects for the real property complex is proposed as a tool for determining promising directions and mechanisms for increasing the efficiency of land use.
Introduction
In recent decades, the world has witnessed an accelerated process of urbanization. Urban areas are actively expanding and urban development is becoming denser, so urban lands are playing an increasing role in the economic development of countries. The increase in urban population density is accompanied by growing anthropogenic environmental pressure and increasing intensity of natural resource consumption. The main problem of cities is limited land resources. Currently, these issues are actively studied in order to solve problems of urban development: first of all, improving the layout and development of the territory and urban zoning. At the same time, insufficient attention is paid to the efficiency of urban land use, the peculiarities of urban land use, and the ecological justification for preserving the urban natural complex. There is a growing need for an ecological and economic justification for the development of the transport real property complex, which forms the basis for the functioning of urbanized territories. The growth of cities, the development of their territories, the preservation of the ecological balance of urban natural complexes, as well as the impact of infrastructure facilities on the state of the environment, were addressed by such scientists as Stefan M. Knupfer, Vadim Pokotilo, Jonathan Woetzel, Vasenev V.I., Stoorvogel J.J., Leemans R., Valentini R., Hajiaghayeva R.A. [1,1], Mikheeva A. S., Ayusheeva S. N. [2], Medvedeva O. E., Trofimenko Yu. V. [4]. Problems of land use development, infrastructure and management of the real property complex under urbanization, in relation to transport systems, are considered in the works of O. A. Antipov, B. E. Bondarev, S. I. Nosov, L. M. Papikian [4], E. O. Koncheva [5], Vukan R. Vuchic [7]. Modern problems of project management in the investment and construction sphere, and the peculiarities of their ecological and economic assessment, are considered in the works of V.I. Resin, I.L. Vladimirova, A.N. Dmitriev, E.P. Pankratov [8,9], O. E. Medvedeva [9,10] and others. Tools for taking into account both economic and environmental factors in the development of urban transport infrastructure are provided, for example, in the relevant methodological documents of the UK, USA [12] and EU countries [13], and also of the World Bank [14].
At the same time, it should be noted that the available tools are general in nature and have not been worked out in detail, despite the fact that Russia has approved guidelines for evaluating the effectiveness of investment projects at the federal level. There are examples of evaluating infrastructure investment projects at the regional level. However, many problems of forming economic mechanisms for regulating land use and improving the accounting of land resources, such as conducting an ecological and economic justification of projects for the development of real property complexes, have not been sufficiently studied.
The role of land use in urban infrastructure development
Russia's lands, forests, subsoil and water constitute the world's most powerful natural resource potential and the main competitive advantage of the Russian state's economic development over other countries. In this regard, the organization of rational use and protection of land should become a source of development and prosperity of the country in the 21st century. Land is the main environmental component of nature, the material component of people's lives and activities, and the base for the placement and development of all sectors of the economy. Rational land use is use of land that meets the overall interests of society, owners and users of land, providing the most appropriate and cost-effective consumption of its useful properties in the production process, optimal interaction with the environment, and the protection and reproduction of land resources. Large cities are the main centers of human activity. They play a leading role in the socio-economic development of the country. The industrial, social, educational and cultural activities of society are concentrated in the cities and surrounding territories. At the same time, a high concentration of population, vehicles, industrial and residential development creates an environment that is radically different from the natural one: so-called urbanization. More intensive development of the city's housing, social, transport and other infrastructure is required as cities grow. As the world's population increases, man-made pressure on the environment will also increase, including a rise in the concentration of pollutants in the air, water and soil, and the expansion of built-up areas, mainly through the development of agricultural and forest land. Compliance with the principles of rational environmental management and, first of all, rational land use becomes especially important in large cities, as the process of urbanization contributes to the aggravation of all types of environmental problems. Effective management of the resources of large cities contributes to the sustainable development of the whole country's economy. Although each city has its own unique history, natural environment, cultural traditions and other distinctive features, certain groups of cities have similar problems. For modern large cities, the problem of insufficient natural and spatial resources is particularly relevant. The growing population of cities increases the demand for natural resources, the availability of which is decreasing accordingly. In this regard, for the further development of cities it is necessary to find compromises and balanced solutions in terms of ensuring environmentally friendly and comfortable living for residents and sustainable consumption of limited natural resources, especially land.
Therefore, the issues of managing existing land resources are the most acute. Urbanized lands act as the basis of all activities. These lands are a reserve of urban spaces beyond which the functioning and sustainable development of the city is impossible. The spatial characteristics of urban land are the most important, in the same way that soil fertility is the most important in forestry and agriculture. However, the development of urban spaces is inevitably accompanied by the destruction and degradation of land cover. The main cause of land degradation in the city is the covering of land with impermeable asphalt-concrete materials in the construction of housing, roads, industrial and other urban facilities. Sealing of urban areas is the main cause of urban soil degradation and the disruption of urban ecology. Sealed land plots have altered air, water and heat regimes and disturbed ecosystems. The share of sealed areas in major cities of the world exceeds 60%, while in central districts it rises to 95%. Unsealed plots of land are usually occupied by greening objects. It is necessary to carry out improvement and landscaping of urban spaces. These processes have a positive impact on the sanitary, architectural-planning, social and aesthetic characteristics of the city. Analyzing the data of analysts [16] comparing 12 comparable megacities of the world, it can be seen that the area of green natural and recreational objects per capita in Moscow in 2019 is at the level of Hong Kong and Berlin, while Beijing, Shanghai and Seoul occupy the lowest positions in the rating. However, all these cities have long-term greening programs for their territories. Green plantations forming parks, gardens, etc. make an area more attractive and more convenient for the population, which positively affects the land use of the area and increases the value of real estate. Due to the insufficient amount of ground space in modern cities, their successful development will be connected with the design and implementation of solutions aimed at the most rational use of all types of available spaces, including ground, above-ground (trestle), surface and underground. Thus, current trends in the development of large cities determine their limited spatial resources. The main value of land as a natural resource in the conditions of the modern city lies in its essence as a spatial basis for carrying out various activities and placing urban objects. In the case of dense development and relatively small land plots, underground and above-ground space has special significance. It is necessary to maintain a strict balance in the use of territories in order to create conditions for the effective functioning of the natural and technical systems that ensure a high quality of life for the urban population. The development of the city's territory leads to an increase in the need for additional space to accommodate the construction of residential, commercial, social and other objects. As land use in an area develops, with taller buildings and new construction, the volume of passenger transport also increases. Transport infrastructure for personal and public transport should also be developed, since additional passenger transport is required and additional space is necessary to meet transport needs. Thus, there is a demand to differentiate limited urban spaces between transport and non-transport needs.
It is unacceptable for transport to dominate other activities, and its intensive development should not negatively affect the ecology of the city and the quality of life. In addition, given the substantial space requirements of transport facilities and the limited areas of the city in densely built districts, there is competition for available space between the two functions of the city's basic activities: work, trade, services, housing and leisure on the one hand, and transport on the other. From the point of view of land use optimization, a rational approach to the use of land in the construction of transport facilities is as follows:
• use of the least valuable agricultural land for transport facilities, avoiding particularly valuable land and specially protected natural areas;
• minimizing the land required for transport by improving vehicle productivity and engineering solutions at transport facilities.
The land at the disposal of the city and the nature of urban design play a decisive role in the choice of direction for the development of transport systems. For example, in areas with multi-storey development and few parking spaces, the development of high-speed public transport plays a decisive role, while areas designed to accommodate private houses retain an increased demand for private vehicles. The need for close interaction and balance between land use and the environmentally sound development of the city's transport infrastructure is due to the fact that changes in the transport system have an immediate impact on land use and have long-term positive or negative consequences. The street-road network and the linear objects of public transport form the structure of the city, its framework. Their placement fixes the physical location of all other facilities in the urbanized area for years and decades to come, so planning their placement requires a particularly responsible approach. In general, it can be noted that transport development drives land use development, and vice versa. Improvement of transport infrastructure is a key factor in the development of both individual districts and the city as a whole. The creation of a transport infrastructure facility increases the intensity of development of other infrastructures in the region and affects the location of urban facilities. The development of an area requires the development of transport routes; thus, the functioning of all facilities is related to their transport accessibility. Publications on urban land administration define the concept of 'real property management', since the city's land is inextricably linked to other types of movable and immovable property. 'Real property complex' is an integral concept combining the definition of the term 'real' as the socio-economic essence of real estate and the term 'property' as a category expressing a set of property rights and obligations belonging to a natural or legal person, or a set of things, for example as the subject of use, purchase and sale, lease, pledge, etc. The real property complex is considered as the totality of all land resources in a dynamic variety of forms of ownership and types of land use, administrative and economic structures, and legal entities and individuals connected by legal relations over the distribution and use of land.
From an economic point of view, the real property complex is understood as a combination of land and other related resources in a variety of forms of ownership and management, taking into account the uniqueness of land as a natural resource, the basis of the existence of mankind and of the productive forces of society. A real property complex is a collection of land and other property with a certain functional purpose. The land is not just a functional part of this collection but its base, a system-forming element. A mandatory element of any real property complex is a land plot. The complex also includes other real estate objects, movable property belonging to fixed assets, and movable property relating to revolving funds. In order to solve the problem of the ecological and economic justification of the development of land occupied by transport facilities, we propose to allocate another type of such complex to a separate category: the transport real property complex. By transport real property complexes we mean real property complexes intended for carrying out the tasks of transport provision for the population. The following structure of the transport real property complex is proposed (see the data-model sketch after this subsection):
1) a set of land plots, differing in form of ownership, purpose, value and volume of payments for them, used to ensure the operation of the transport system;
2) real estate objects inextricably connected with land plots, at different levels of placement:
- tunnels, underground structures, etc.;
- ground-level linear objects (roads, dedicated public transport lanes, etc.);
- structures for the above-ground placement of linear objects (bridges and trestles);
- other buildings and structures ensuring the operation of transport (parking, depots, garages, ventilation shafts, stations, ground buildings of crossings, etc.);
3) vehicles (public rolling stock, automobiles, etc.).
Carrying out the ecological and economic justification
It is advisable to manage infrastructure development from the position of a project approach to managing real property complex development. However, the mechanism for analysing the long-term environmental impact of a project is not sufficiently developed to provide indicators such as the benefits of increased economic activity and land use development resulting from the project, or the costs of eliminating environmental damage caused by the construction and operation of the transport facility. The preparation of an ecological and economic justification will make it possible to determine the costs and benefits of transport projects for society, the state and business, as well as help to find a balance of interests, avoid negative consequences of project implementation and optimize the required volume of investment. These provisions are reflected in the applied methods of foreign countries and are partly present in certain domestic legal and regulatory documents and recommendations, which will be considered in detail. It should be noted that insufficient attention is currently paid to this issue. Therefore, we consider it relevant to analyze the existing methods of assessing investment and construction projects that take the above-mentioned factors into account. An integrated approach should also be developed to the preparation of an ecological and economic justification for the implementation of transport projects, including an assessment of the territorial factors of the urban environment.
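To make the proposed three-part structure more tangible, the following minimal data-model sketch shows one way it could be represented in code. This is an illustration only: all class names, fields and units are our assumptions, not part of the paper's proposal.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LandPlot:
    # Component 1: a land plot used to ensure operation of the transport system.
    ownership: str             # e.g. "municipal", "federal", "private" (hypothetical categories)
    purpose: str               # e.g. "depot", "road right-of-way"
    area_m2: float             # plot area in square meters
    annual_payment_rub: float  # land payments attributed to the plot

@dataclass
class RealEstateObject:
    # Component 2: an immovable object inextricably connected with a land plot.
    kind: str    # e.g. "tunnel", "road", "bridge", "station", "depot"
    level: str   # "underground", "ground" or "above-ground"

@dataclass
class TransportRealPropertyComplex:
    # Aggregates the three proposed components of the complex.
    land_plots: List[LandPlot] = field(default_factory=list)
    real_estate: List[RealEstateObject] = field(default_factory=list)
    vehicles: List[str] = field(default_factory=list)  # Component 3: rolling stock, automobiles, etc.

    def total_land_area_m2(self) -> float:
        # Total area of all plots supporting the transport system.
        return sum(p.area_m2 for p in self.land_plots)
```

Such a representation would, for instance, let an analyst aggregate land areas or payments by placement level (underground, ground, above-ground) when comparing placement options, which is the comparison the paper's conclusions report.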
The most common applied methods for evaluating the effectiveness of investment and construction projects in the field of transport used abroad include the methods of the UK, the USA and the European Union, as well as the World Bank. Let us look at the main ones:
1) The transport project assessment system developed by the UK Department for Transport, WebTAG (Transport Appraisal Guidance). The assessment is carried out on four main parameters: impact on the economy, the environment, society and the state budget.
2) A widely used document is the Methodological Recommendations of the U.S. Department of Transportation for justifying the public effectiveness of TIGER road development projects [12], which are applied on a competitive basis in the process of state investment in transport projects. The projects proposed for consideration should be aimed at achieving long-term social and environmental objectives and are assessed by their impact on the economy and the promotion of innovation.
3) Separately, it is worth noting the project HEATCO (Developing Harmonised European Approaches for Transport Costing and Project Assessment) [13]. This project aims to unify approaches to assessing the effectiveness of transport projects in European countries. According to this document, in carrying out cost-benefit analysis the European Union most often takes into account the cost of implementing the project, the cost of maintaining the object in a standard state, restrictions on the movement of vehicles during project implementation, reductions in the cost of operating vehicles, reductions in the cost of transporting goods, reductions in travel time for passengers, the amount of the fare, and the impacts on safety, noise levels, air pollution and climate change.
4) The approach of the World Bank: 'Notes on the Economic Evaluation of Transport Projects' [14]. This document describes the basic principles and approaches to assessing the cost-effectiveness of transport infrastructure development projects, which the World Bank proposes to use when evaluating projects in both developed and developing countries. According to the document, the calculation of the economic efficiency of a transport project includes gains for users of transport infrastructure, gains for operators, carriers and public sector organizations, gains due to improvement of the environment, reductions in the number of road accidents, and others.
Despite the common practice abroad of considering a wide range of factors in the evaluation of transport projects, it can be said that in Russia such a comprehensive assessment is hardly carried out. The legislation does not clearly state the need for a detailed analysis of environmental factors in the preparation of projects, and the available methodological recommendations do not fully take into account the entire impact of the transport complex on the natural environment and society. To some extent, the need for an ecological and economic justification of projects is enshrined in the Methodological Recommendations for Assessing the Effectiveness of Investment Projects. Projects for new construction and reconstruction of transport and other infrastructure, which have a direct impact on improving the quality of life in a given area, should be classified as infrastructure investment projects. The efficiency of a project includes commercial and social components.
Indicators of social efficiency take into account the social and economic impact of projects on society: the immediate results and costs of the project, as well as costs and results in related sectors of the economy, and environmental, social and other non-economic effects. It is recommended to consider the impact of project implementation on the activities of third-party enterprises and the population, including changes in the market value of real estate due to the project and the impact of the project on the health of the population. Despite the recommendations in the law on the preparation of such assessments of ecological and economic effects, there are no formal requirements and methods for their implementation at the federal level, but there are regional recommendations as well as research results in the scientific literature. An example of an approved method is the Methodological Guidelines for Assessing the Social and Economic Efficiency of Investment Projects in the Field of Transport Infrastructure Development, approved in Moscow, as well as a draft Methodology for assessing the socio-economic effects of implementing infrastructure projects with state support. The methodological guidelines establish the procedure for evaluating investment projects in the field of capital construction, reconstruction and technical equipment of transport and other infrastructure, partially or fully financed from the regional budget. Methodological approaches to ecological and economic assessment are considered in the Temporary Methodological Recommendations for assessing the ecological and economic efficiency of projects of planned economic activity (not officially approved), and the main ideas were developed in subsequent scientific research [15]. The Moscow Metro, as an object for testing the developed approach, was considered as a transport real property complex. For this purpose, a comprehensive analysis was carried out of the urban spaces it occupies and of the number and area of land plots allocated for the placement of objects of various purposes that ensure the operation of the metro system. The main ecological and economic factors of metro operation were considered.
Results
According to the analysis, we can conclude that the mechanism for the ecological and economic evaluation of investment projects in the field of transport construction has not yet been fully studied or implemented in practice, although it is a key factor in the development of the country's economy. Having analyzed the foreign practice of evaluating the effectiveness of transport projects and the domestic documents for evaluating investment projects, it is advisable to develop and implement a separate regulatory document. This document would contain recommendations for preparing an ecological and economic justification for transport infrastructure development projects. The authors propose to conduct the ecological and economic justification by comparing projects by their net present value, return on investment and internal rate of return. The calculation of these indicators must necessarily include cost and benefit criteria calculated using special formulas. The approach meets the principles of rational and effective environmental management. The methodology allows one to take into account, first, the cost of consumed natural resources and the change in their value as a result of project implementation.
In this case, the main natural resources are land resources, as well as the spaces under and above the surface of the earth. Second, the methodology will allow a preliminary assessment of the potential damage to the environment. An ecological and economic justification using this methodology makes it possible to include a wider range of factors in the analysis and selection of investment projects for the construction of transport infrastructure facilities and, accordingly, to determine the effectiveness of the implemented projects in a comprehensive, more objective manner, taking into account the possibility of reducing their cost and optimizing land use. The inclusion of a wide range of social and environmental aspects, beyond purely economic effects, in the project's estimated costs and benefits is a common feature of the project evaluation documents reviewed. We classify the main indicators considered by these documents into cost factors directly related to the construction and operation of the transport facility, and natural factors related to the impact on the environment and the consumption of natural resources.
Table 1. Classification of factors for the ecological and economic justification of transport infrastructure projects.
Economic factors:
- design and survey, construction and installation works (capital expenditures for project implementation);
- the cost of maintaining the object in the standard state (operating costs);
- compensation for lost profits and damage to the owners of property seized for the construction of the object, as well as compensation for other possible restrictions related to construction;
- costs for the operation of facilities;
- revenue from passenger transportation;
- income from other activities (placement of revenue-generating objects);
- the cost of consumption of urban space;
- the change in the value of land resources.
Environmental factors:
- noise and vibration pollution;
- air pollution;
- pollution of soil and water resources;
- change of the landscape and urban landscape (aesthetic impact);
- fragmentation of the landscape (barrier effect);
- soil seized for the placement of underground facilities, and soil disruption;
- soil degradation from the sealing of land plots by linear transport objects.
We propose to compute the integrated economic effect of the transport project, as part of the ecological and economic justification, on the basis of the following model:
EE_tr = I_tr + DΔ_r.est + I_pl.r.est − C_con − C_comp − C_oper − C_space − C_noise − C_vibr − C_subsoil − C_land.degr, (1)
where: I_tr is the income from the transport facility for the period (rubles/period); DΔ_r.est is the difference from the change in the value of real estate in the area adjacent to the transport facility (rubles); I_pl.r.est is the income from the placement of real estate objects above or under the transport object (rubles/period); C_con is the total capital cost of construction of the facility (rubles); C_comp is the cost of compensating the losses of right holders of real estate objects (rubles); C_oper is the sum of the costs of operating the facility (rubles/period); C_space is the cost of the space required for transportation (rubles); C_noise is the cost of noise removal (rubles); C_vibr is the cost of eliminating vibration impact (rubles); C_subsoil is the cost of damage to the subsoil (rubles); C_land.degr is the cost of land degradation from land sealing by linear transport facilities (rubles).
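To make the bookkeeping behind model (1) concrete, here is a minimal Python sketch that evaluates the integrated ecological and economic effect for a candidate project, together with a discounted net present value of the same cash flows. The split into one-off and annual terms, and the names `CashFlows`, `ee_transport` and `npv`, are illustrative assumptions, not part of the methodology document.

```python
from dataclasses import dataclass

@dataclass
class CashFlows:
    # Annual incomes (rubles/period) -- all values are hypothetical
    i_tr: float          # income from the transport facility
    d_rest: float        # change in value of adjacent real estate (one-off)
    i_pl_rest: float     # income from real estate above/under the object
    # One-off and annual costs (rubles)
    c_con: float         # capital construction costs (one-off)
    c_comp: float        # compensation to right holders (one-off)
    c_oper: float        # operating costs (annual)
    c_space: float       # cost of consumed urban space (one-off)
    c_noise: float       # noise elimination costs (annual)
    c_vibr: float        # vibration elimination costs (annual)
    c_subsoil: float     # subsoil damage costs (one-off)
    c_land_degr: float   # land degradation from sealing (one-off)

def _annual(cf: CashFlows) -> float:
    return cf.i_tr + cf.i_pl_rest - cf.c_oper - cf.c_noise - cf.c_vibr

def _one_off(cf: CashFlows) -> float:
    return (cf.d_rest - cf.c_con - cf.c_comp - cf.c_space
            - cf.c_subsoil - cf.c_land_degr)

def ee_transport(cf: CashFlows, years: int) -> float:
    """Integrated ecological and economic effect over the horizon, model (1)."""
    return _annual(cf) * years + _one_off(cf)

def npv(cf: CashFlows, years: int, rate: float) -> float:
    """Discounted counterpart of the same cash flows, used for comparison."""
    return _one_off(cf) + sum(_annual(cf) / (1 + rate) ** t
                              for t in range(1, years + 1))
```

The benefit-cost ratio can then be formed as the ratio of the discounted benefit terms to the discounted cost terms, and the internal rate of return as the discount rate at which the NPV vanishes.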
Indicators such as net present value (NPV), the benefit-cost ratio (BCR), the internal rate of return (IRR) and the payback period (PB) can then be used to further determine the effectiveness of the project.
Discussion
As a result of the study, it was revealed that the environmental problem is particularly acute for modern megacities, including Moscow. In conditions of insufficient natural and spatial resources, it is necessary to improve the system of land use management in the city. Land resources are the main element of urban space, possessing environmental characteristics and soil cover. It has been shown that the development of urban spaces, as a rule, causes soil disturbance and degradation on urbanized lands, due to the sealing of urban areas when the land is covered with impermeable materials during the construction of transport and other urban infrastructure. It is necessary to keep records of sealed and unsealed urban spaces, along with an indicator of building density, in order to balance and organize the improvement and greening of urban areas; such territories have a favourable influence on the sanitary and hygienic, architectural and planning, and social and aesthetic indicators of the territory. The use of underground and above-ground space for the accommodation of structures and communications is of special significance in conditions of high-density urban development and limited ground space. The development of transport contributes to the more dynamic development of the territory and to the intensification of urban land use. Intensified land use in turn generates transport demand, which requires a balance between the limited space available for transport and non-transport needs.
Conclusions
It has been established that the organization of ecologically and economically justified land use of the transport real property complex is the main aspect of the interaction between transport development management and compliance with environmental management. From the land use point of view, a rational approach to the placement of transport facilities is to use the least valuable land and to minimize the space required for transport. This is achieved by increasing the productivity of vehicles and the efficiency of engineering solutions at transport facilities. Based on the calculations, it is shown that a comprehensive accounting of all long-term costs and benefits of the underground and above-ground placement of objects of the transport real property complex characterizes them as the most ecologically and economically effective. The calculations have shown that the placement of linear objects by an overpass or by an underground method is more effective than ground-level placement by 274.0% and 629.4%, respectively (in terms of the difference in annual costs and benefits).
6,423.2
2020-01-01T00:00:00.000
[ "Economics" ]
What can be learned from the Belle spectrum for the decay tau- ->nu_tau K_S pi- A theoretical description of the differential decay spectrum for the decay tau- ->nu_tau K_S pi-, based on the contributing K pi vector and scalar form factors F_+^{K pi}(s) and F_0^{K pi}(s) calculated in the framework of resonance chiral theory (R$\chi$T), additionally imposing constraints from dispersion relations as well as short-distance QCD, provides a good representation of a recent measurement of the spectrum by the Belle collaboration. Our fit allows us to deduce the total branching fraction B[tau- ->nu_tau K_S pi-] = 0.427 +- 0.024 % by integrating the spectrum, as well as the K^* resonance parameters M_{K^*} = 895.3 +- 0.2 MeV and Gamma_{K^*} = 47.5 +- 0.4 MeV, where the last two errors are statistical only. From our fits, we confirm that the scalar form factor F_0^{K pi}(s) is required to provide a good description, but we were unable to further constrain this contribution. Finally, from our results for the vector form factor F_+^{K pi}(s), we update the corresponding slope and curvature parameters lambda'_+ = (25.2 +- 0.3)*10^{-3} and lambda''_+ = (12.9 +- 0.3)*10^{-4}, respectively.
Introduction
An ideal system to study low-energy QCD under rather clean conditions is provided by hadronic decays of the τ lepton [1][2][3][4][5]. Detailed investigations of the τ hadronic width as well as of invariant mass distributions allow one to determine a plethora of QCD parameters, a most prominent example being the QCD coupling α_s. Furthermore, the experimental separation of the Cabibbo-allowed decays and the Cabibbo-suppressed modes into strange particles [6][7][8] opened a means to also determine the quark-mixing matrix element |V_us| [9][10][11] as well as the mass of the strange quark [12][13][14][15][16][17][18][19], additional fundamental parameters within the Standard Model, from the τ strange spectral function. The dominant contribution to the Cabibbo-suppressed τ decay rate arises from the decay τ → ν_τ Kπ. The corresponding distribution function has been measured experimentally in the past by ALEPH [8] and OPAL [7]. More recently, high-statistics data for the τ → ν_τ Kπ spectrum became available from the Belle experiment [20], and results for the total branching fraction are also available from BaBar [21,22], with good prospects for results on the spectrum from BaBar and BESIII in the near future. These new results call for a refined theoretical understanding of the τ → ν_τ Kπ decay spectrum, and in ref. [23] we have provided a description based on the chiral theory with resonances (RχT) [24,25], under the additional inclusion of constraints from dispersion relations. To start with, the general expression for the differential decay distribution takes the form [26]
dΓ_Kπ/d√s = (G_F² |V_us|² m_τ³)/(32 π³ s) S_EW (1 − s/m_τ²)² [ (1 + 2s/m_τ²) q_Kπ³ |F_+^{Kπ}(s)|² + (3 Δ_Kπ²)/(4s) q_Kπ |F_0^{Kπ}(s)|² ],  (1)
where we have assumed isospin invariance and have summed over the two possible decays τ⁻ → ν_τ K̄⁰π⁻ and τ⁻ → ν_τ K⁻π⁰, with the individual decay channels contributing in the ratio 2 : 1, respectively. In this expression, S_EW is an electroweak correction factor, and F_+^{Kπ}(s) and F_0^{Kπ}(s) are the vector and scalar Kπ form factors, respectively, which will be explicated in more detail in section 2. Furthermore, Δ_Kπ ≡ M_K² − M_π², and q_Kπ is the kaon momentum in the rest frame of the hadronic system,
q_Kπ(s) = (1/(2√s)) √[(s − (M_K + M_π)²)(s − (M_K − M_π)²)] θ(s − (M_K + M_π)²).
By far the dominant contribution to the decay distribution originates from the K*(892) meson.
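As an illustration of the kinematic content of eq. (1), the following Python sketch (a toy evaluation, not the fit code of this work) computes q_Kπ(s) and the weights multiplying |F_+|² and |F_0|² in the differential distribution; the meson masses are rounded input values and the overall normalisation constants are omitted.

```python
import numpy as np

M_K, M_PI, M_TAU = 0.4977, 0.13957, 1.77686  # GeV (rounded input values)

def q_kpi(s):
    """Kaon momentum in the hadronic rest frame; zero below the Kpi threshold."""
    s = np.asarray(s, dtype=float)
    thr = (M_K + M_PI) ** 2
    lam = (s - (M_K + M_PI) ** 2) * (s - (M_K - M_PI) ** 2)
    return np.where(s > thr,
                    np.sqrt(np.clip(lam, 0.0, None)) / (2.0 * np.sqrt(s)), 0.0)

def kinematic_weights(s):
    """Weights multiplying |F_+|^2 and |F_0|^2 in dGamma/dsqrt(s), eq. (1),
    with the constant prefactor G_F^2 |V_us|^2 m_tau^3 S_EW/(32 pi^3) dropped."""
    delta_kpi = M_K ** 2 - M_PI ** 2
    q = q_kpi(s)
    common = (1.0 - s / M_TAU ** 2) ** 2 / s
    w_vector = common * (1.0 + 2.0 * s / M_TAU ** 2) * q ** 3
    w_scalar = common * 3.0 * delta_kpi ** 2 / (4.0 * s) * q
    return w_vector, w_scalar

# Example: evaluate the weights across the accessible spectrum
roots = np.linspace(0.65, 1.75, 5)          # sqrt(s) in GeV
wv, ws = kinematic_weights(roots ** 2)
```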
In the next section, we shall recall the effective description of this contribution to the vector form factor F_+^{Kπ}(s) in the framework of RχT that was presented in ref. [23], quite analogous to a similar description of the pion form factor given in refs. [27][28][29]. A second vector resonance, namely the K*(1410) meson, can straightforwardly be included in the effective chiral description. Finally, the scalar Kπ form factor F_0^{Kπ}(s) was calculated in the same RχT plus dispersive constraint framework in a series of articles [30][31][32], and the recent update of F_0^{Kπ}(s) [33] will be incorporated in our work as well. Based on the theoretical expression (1) for the spectrum and the form factors discussed in section 2, in section 3 we shall perform fits of our description to the Belle data [20] for the decay τ⁻ → ν_τ K_S π⁻. From these fits it follows that both the scalar contribution and the second vector resonance are required in order to obtain a good description of the experimental spectrum. In addition, the fits allow us to determine the resonance parameters of the charged K*(892) and K*(1410) mesons. Finally, integrating the distribution function dΓ_Kπ/d√s, we are also in a position to present results for the total B[τ⁻ → ν_τ K_S π⁻] branching fraction.
The form factors
A theoretical representation of the vector form factor F_+^{Kπ}(s), which is based on fundamental principles, has been developed in ref. [23], in complete analogy to the description of the pion form factor presented in refs. [27][28][29]. This approach employed our present knowledge of effective hadronic theories, short-distance QCD, the large-N_C expansion, as well as analyticity and unitarity. For the pion form factor, the resulting expressions provide a very good description of the experimental data [27][28][29]. Precisely following the approach of ref. [27], in [23] we found the representation of the form factor F_+^{Kπ}(s) given in eq. (3). The one-loop function H̃(s) appearing there is related to the corresponding function H(s) of [34] by H̃(s) ≡ H(s) − 2L_9^r s/(3F_0²) ≈ [s M^r(s) − L(s)]/(F_K F_π).¹ Explicit expressions for M^r(s) and L(s) can be found in ref. [35]. The one-loop function H̃(s) depends on the chiral scale µ, and in eq. (3) this scale should be taken as µ = M_K*. In ref. [36], the off-shell width of a vector resonance was defined through the two-point vector current correlator, performing a Dyson-Schwinger resummation within RχT [24,25]. Following this scheme, the energy-dependent width Γ_K*(s) is obtained (eq. (4)), where Γ_K* ≡ Γ_K*(M_K*²), and G_V is the chiral vector coupling which appears in the framework of RχT [24]. The phase space function σ_Kπ(s) is given by σ_Kπ(s) = 2q_Kπ(s)/√s, and σ_Kη(s) follows analogously with the replacement M_π → M_η. Re-expanding eq. (3) in s and comparing to the corresponding χPT expression [34], in the SU(3) symmetry limit one reproduces the short-distance constraint for the vector coupling, G_V = F_0/√2 [25], which guarantees a vanishing form factor in the limit s → ∞, as well as the lowest-resonance estimate. Since the τ lepton can also decay hadronically into the second vector resonance K*′ ≡ K*(1410), this particle has been included in our parametrisation of the vector form factor F_+^{Kπ}(s).
¹ In our expressions, we have decided to replace all factors of 1/F_0² by 1/(F_K F_π), since for the Kπ system it is to be expected that higher-order chiral corrections lead to the corresponding renormalisation of the meson decay constants.
A parametrisation motivated by the RχT framework [24,25] can be written as in eq. (5). This parametrisation incorporates all known constraints from χPT and RχT. At low energies, it reproduces eq. (3) up to corrections proportional to γ s (M_K* − M_K*′). The relation of the parameter γ to the RχT couplings takes the form γ = F_V G_V/(F_K F_π) − 1, when one assumes a vanishing form factor at large s in the N_C → ∞ limit. It is difficult to assess a precise value for γ a priori, but below we shall be able to fit it from the comparison of our description with the Belle spectrum. The width of the second resonance cannot be set unambiguously. Therefore, we have decided to endow the K*(1410) contribution with a generic width as expected for a vector resonance. Hence, Γ_K*′(s) will be taken to have the form
Γ_K*′(s) = Γ_K*′ (s/M_K*′²) σ_Kπ³(s)/σ_Kπ³(M_K*′²).  (6)
As a final ingredient for a prediction of the differential decay distribution of the decay τ → ν_τ Kπ according to eq. (1), we require the scalar form factor F_0^{Kπ}(s). This form factor was calculated in a series of articles [30][31][32] in the framework of RχT, again also employing constraints from dispersion theory as well as the short-distance behaviour.² Quite recently, the determination of F_0^{Kπ}(s) was updated in [33] by employing novel experimental constraints on the form factor at the Callan-Treiman point Δ_Kπ, and in our fits below we shall also make use of this update. A remaining question is which value to use for the form factors F_+^{Kπ}(s) and F_0^{Kπ}(s) at the origin. However, inspecting eq. (1), one realises that what is needed is not F_+^{Kπ}(0) = F_0^{Kπ}(0) itself, but only the product |V_us|F_+^{Kπ}(0). Once this normalisation is fixed, in the fits we only need to determine the shape of the reduced form factors F̃_+^{Kπ}(s) and F̃_0^{Kπ}(s), which are normalised to one at the origin:
F̃_{+,0}^{Kπ}(s) ≡ F_{+,0}^{Kπ}(s)/F_{+,0}^{Kπ}(0).  (7)
This also entails that, after fixing the normalisation of the decay spectrum by giving a value to |V_us|F_+^{Kπ}(0), we are in a position to predict the total branching fraction B[τ⁻ → ν_τ K_S π⁻] just from a fit of the shape of the form factors, independent of normalisation issues. The product |V_us|F_+^{Kπ}(0) is determined most precisely from the analysis of semi-leptonic kaon decays; the most recent average for |V_us|F_+^{K⁰π⁻}(0) was presented by the FLAVIAnet kaon working group [37]. In what follows, we have renormalised our description of the form factors to one and have assumed the result (8) for the global normalisation. Incidentally, the value in (8) already corresponds to the K⁰π⁻ channel, which was analysed by the Belle collaboration [20]. Therefore, possible isospin-breaking corrections to the normalisation are already properly taken into account.
² The original motivation for a precise description of F_0^{Kπ}(s) was the determination of the strange quark mass m_s from scalar sum rules, also performed in [32].
Fits to the Belle τ → ν_τ Kπ spectrum
For our fits to the decay spectrum of the τ⁻ → ν_τ K_S π⁻ transition as obtained by the Belle collaboration [20], we make the Ansatz of eq. (9). The factors 1/2 and 2/3 arise from the fact that the K_S π⁻ channel has been analysed. Then, 11.5 MeV was the bin width chosen by the Belle collaboration, and N_T = 53110 the total number of observed signal events. Finally, Γ_τ is the total decay width of the τ lepton and B_Kπ a remaining normalisation factor that will be deduced from the fits.
The normalisation of our Ansatz (9) is chosen such that, for perfect agreement between data and fit function, B̄_Kπ coincides with the branching fraction B_Kπ obtained by integrating the decay spectrum. Differences between B̄_Kπ and B_Kπ point to imperfections of the fit, and will constitute one source of systematic uncertainties. As we shall see further below, for better fits the agreement between B̄_Kπ and B_Kπ also improves, as expected. Before entering the details of our fits, let us discuss the numerical values of all input parameters. For the meson masses, we employ the physical masses corresponding to the decay channel in question, namely M_KS = 497.65 MeV, M_π⁻ = 139.57 MeV and M_η = 547.51 MeV [38]. For the meson decay constants, we use the findings of the recent review [39]; in our normalisation, that is F_π = 92.3 MeV and F_K/F_π = 1.196. For the electroweak correction factor, we have utilised the result for inclusive hadronic τ decays, S_EW = 1.0201 [10] (and references therein). Even though the electroweak correction factor for the exclusive decay in question need not be the same as S_EW, to the precision we are working at this choice is supposedly sufficient. Besides, we are not aware of a published result for the correction factor in the case of the exclusive decay studied here. All remaining input parameters which have not been mentioned explicitly are taken according to their PDG values [38]. As an initial step, only the central K* resonance region is fitted, in order to get an idea about the K* resonance parameters. For this fit, two forms of the dominant vector form factor F_+^{Kπ}(s) are used. On the one hand, we employ our description (3) as discussed in the last section. On the other hand, we also investigate a pure Breit-Wigner resonance shape as was used in the experimental work of the Belle collaboration; this will later allow a better comparison with the findings of ref. [20]. The Breit-Wigner resonance factor is defined by
BW_K*(s) = M_K*² / [M_K*² − s − i M_K* Γ_K*(s)],  (10)
where the energy-dependent width Γ_K*(s) takes the form
Γ_K*(s) = Γ_K* (s/M_K*²) σ_Kπ³(s)/σ_Kπ³(M_K*²).  (11)
Thus, the K* width of (11) coincides with eq. (4) if the Kη contribution is neglected. Although our equations (10) and (11) are written in a form different from the one employed in [20], the expressions are in agreement. The Breit-Wigner version of the Kπ vector form factor follows accordingly; in practice, as discussed above, for our fits we only require the reduced form factor F̃_+^{Kπ}(s), which in this case is equal to the Breit-Wigner factor BW_K*(s). For our first fit, we employ the Belle data [20] in the range 0.808-1.015 GeV (data points 16-34), where the vector form factor dominates and should provide a good description. The resulting fit parameters are presented in the left-hand column of table 1 for the Breit-Wigner fit, and in the right-hand column for the chiral fit. Graphically, the corresponding fits are shown as the dotted and short-dashed lines in figure 1, respectively, together with the experimental data points. The fitted K* mass M_K* for the Breit-Wigner fit is close to the result by the Belle collaboration [20], while the width Γ_K* is found to be somewhat larger. Besides the normalisation factor B̄_Kπ, in table 1 we have also listed in brackets the result for the branching fraction B_Kπ that would be obtained when integrating the spectrum. The χ²/n.d.f. for this fit is found to be of order 2. Nevertheless, later we shall see that our final fit including all contributions will have a χ²/n.d.f. of order 1, so this is nothing to worry about at this point.
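For illustration, the Breit-Wigner factor of eqs. (10) and (11) can be evaluated numerically; the sketch below uses the K* parameters quoted later in the text (M_K* = 895.3 MeV, Γ_K* = 47.5 MeV) and neglects the Kη channel, exactly as eq. (11) does. It is a toy evaluation of the parametrisation, not the actual fitting code.

```python
import numpy as np

M_K, M_PI = 0.4977, 0.13957          # GeV (rounded)
M_KSTAR, G_KSTAR = 0.8953, 0.0475    # GeV, central fit values quoted in the text

def sigma_kpi(s):
    """Phase space factor sigma_Kpi(s) = 2 q_Kpi(s)/sqrt(s) = sqrt(lambda)/s."""
    s = np.asarray(s, dtype=float)
    thr = (M_K + M_PI) ** 2
    lam = (s - (M_K + M_PI) ** 2) * (s - (M_K - M_PI) ** 2)
    safe_s = np.where(s > thr, s, thr)   # avoids invalid values below threshold
    return np.where(s > thr, np.sqrt(np.maximum(lam, 0.0)) / safe_s, 0.0)

def gamma_kstar(s):
    """Energy-dependent width of eq. (11), Keta contribution neglected."""
    return G_KSTAR * (s / M_KSTAR ** 2) * (sigma_kpi(s) / sigma_kpi(M_KSTAR ** 2)) ** 3

def bw_kstar(s):
    """Breit-Wigner factor of eq. (10); equals one at s = 0."""
    return M_KSTAR ** 2 / (M_KSTAR ** 2 - s - 1j * M_KSTAR * gamma_kstar(s))

s = np.linspace(0.45, 2.8, 500)                 # GeV^2
resonance_shape = np.abs(bw_kstar(s)) ** 2      # peaks near s = M_KSTAR^2
```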
From figure 1, one observes that the fit provides a reasonable description of the data in the fit region, but both much below and much above the resonance peak, marked deviations are clearly visible, implying missing contributions that will be discussed below. Performing the fit to the Belle data in an analogous fashion with the RχT form of F_+^{Kπ}(s), the obtained fit parameters are listed in the right-hand column of table 1, and the fit curve is displayed as the short-dashed line in figure 1. The parameters obtained from both fits differ to some extent, especially the normalisation B̄_Kπ, due to the different functional forms of the vector form factor. Still, we postpone a detailed discussion of our numerical results until presenting the complete fit including all contributions below.³
³ As the fit is practically insensitive to the parameter r in the Blatt-Weisskopf barrier factor appearing in our previous parametrisation of the K* width [23], we have decided to set r to zero, so that our fits are more directly comparable to the fits performed by the Belle collaboration [20], who have not applied such a factor. Employing the central result of our previous fit, r = 3.5 GeV⁻¹ [23], would give practically the same χ², but would result in a K* mass that is about 1.4 MeV lower and a K* width about 0.8 MeV lower.
Figure 1: Fit result for the differential decay distribution of the decay τ → ν_τ Kπ, when fitted with a pure K* vector resonance (dotted and short-dashed curves) or with K* plus the central scalar form factor F_0^{Kπ}(s) as given in [33] (long-dashed and solid curves).
From figure 1, we see that while both the chiral and the Breit-Wigner fits give a similar spectrum below the K* resonance peak, above the peak there are substantial differences. This will play an important role below, when we aim at improving the fit by adding a second vector resonance K*′, because it will certainly influence its fit parameters. The next step is to add the contribution of the scalar form factor F_0^{Kπ}(s) to the differential τ → ν_τ Kπ decay spectrum. When adding the corresponding contribution with the central parameters as presented in [33], it is found that the combined theoretical spectrum gives a good description also in the region below the K* resonance, with the exception of three data points in the range 0.682-0.705 GeV (points 5, 6, 7). Therefore, as our next fit, we fit the entire low-energy region 0.636-1.015 GeV, keeping the scalar form factor F_0^{Kπ}(s) fixed but leaving out the problematic data points 5, 6 and 7. The resulting fit parameters for the Breit-Wigner and the chirally inspired vector form factor F_+^{Kπ}(s) are tabulated in table 2, and the corresponding fit curves are plotted as the long-dashed and solid lines in figure 1, respectively. From table 2 one observes that M_K* is almost unchanged, the width Γ_K* is slightly decreased, and the χ²/n.d.f. is somewhat reduced, although it is still larger than roughly 1.5. Nevertheless, it is clear that the scalar contribution is required in order to give a more satisfactory description of the region below the K* resonance.
Table 3: Full fit to the Belle τ → ν_τ Kπ spectrum with the two K* and K*′ vector resonances in F_+^{Kπ}(s) and the central scalar form factor F_0^{Kπ}(s).
As the last step, we now also improve upon the description of the region above the K* resonance by including the K*′ as a second vector resonance.
In the case of the Breit-Wigner form factor, the inclusion of the K*′ resonance can be achieved by writing
F̃_+^{Kπ}(s) = [BW_K*(s) + β BW_K*′(s)]/(1 + β),  (12)
whereas in the case of the chiral resonance description, the corresponding expression for F_+^{Kπ}(s) including the K*′ is given above in eq. (5) and depends on the mixing parameter γ.
Figure 2: Belle data [20] for the differential decay distribution of the decay τ⁻ → ν_τ K_S π⁻. Our theoretical description includes the Breit-Wigner (dashed line) or RχT (solid line) vector form factors with two resonances, as well as the scalar form factor according to ref. [33]. For RχT, also the scalar (dotted line) and K*′ (dashed-dotted) contributions are displayed.
The resulting parameters have been collected in table 3. We observe that the chirally inspired description of ref. [23] provides the better fit, and that, as expected, the K*′ mass M_K*′ turns out to be very different, though the Γ_K*′ widths (probably by chance) agree rather well. The mixing parameters β and γ also differ, but due to the different functional forms of our two descriptions of the vector form factor F_+^{Kπ}(s), they cannot be compared anyway. Up to now, in our fits we have only employed the central prediction for the scalar form factor F_0^{Kπ}(s). Thus the question arises what happens if we modify F_0^{Kπ}(s). As the normalisation of the form factors can be fixed by experiment, we only require the shape of F_0^{Kπ}(s), and for this, in our dispersive approach [31][32][33], the dominant input parameter is the value of the ratio F_0^{Kπ}(Δ_Kπ)/F_0^{Kπ}(0) at the Callan-Treiman point Δ_Kπ ≡ M_K² − M_π², which has been discussed in detail in [33]. We can then introduce a fit parameter α which describes the change of shape of F_0^{Kπ}(s) when F_0^{Kπ}(Δ_Kπ)/F_0^{Kπ}(0) is modified. Let α = 0 correspond to our central result of [33], α = 1 to the scalar form factor which arises when F_0^{Kπ}(Δ_Kπ)/F_0^{Kπ}(0) is larger by 1σ, and α = −1 to the case when it is smaller by 1σ. Adding α to our fit parameters, for the chirally inspired F_+^{Kπ}(s) we obtain α = 4.4 ± 1.9, and for the pure Breit-Wigner form α = 6.3 ± 2.7, with only a slight change of the other parameters and a small improvement in the χ²/n.d.f. From this we conclude that the fit prefers a slightly larger F_0^{Kπ}(s), but the sensitivity to α is not very strong. Furthermore, the largest changes when leaving α free occur in the parameters of the K*′, which entails that the found values for α are driven by the energy region above the K* resonance, where the theoretical description is less well founded. If the same exercise is repeated with the fits which only include the low-energy and K* resonance regions (fits of table 2), then we obtain α = 4.7 ± 7.9 in the case of the RχT description. Hence, with the present precision of the data, and in particular the open question about the three data points in the low-energy region, we are not able to further constrain the contribution of the scalar form factor F_0^{Kπ}(s). Let us now come to a detailed discussion of our central fit results of table 3. The χ²/n.d.f. of both the chiral and the Breit-Wigner fits is of the order of one, but nevertheless the chiral fit provides the better description of the experimental data. For the complete fit including two vector resonances and the scalar contribution, within the fit uncertainties the normalisation B̄_Kπ and the branching fraction B_Kπ are in very good agreement.
In addition, as can be observed from table 3, the branching fractions extracted from the two versions of parametrising F_+^{Kπ}(s) also display perfect consistency, once all contributions have been included in the fit. The remaining small difference can be traced back to the exponential factor in the numerator of the RχT expression (3). Since our chiral model for F_+^{Kπ}(s) is theoretically better motivated and furthermore provides the better fit quality, as our central result for the branching fraction we quote
B[τ⁻ → ν_τ K_S π⁻] = 0.427 ± 0.024 %,  (14)
where the uncertainty combines the statistical fit error with our estimate of the systematic uncertainties. To be conservative, for the latter we performed analogous fits where the chiral factors 1/F_0² are taken to be 1/F_π², which should give an idea of the importance of higher-order chiral corrections (see footnote 1). The branching fraction for the full RχT fit then turns out to be B_Kπ = 0.448 %, and we take the difference of this result to our main value as an additional systematic uncertainty. When comparing to previous determinations, within the uncertainties our result (14) is in agreement with the findings of the Belle collaboration, B[τ⁻ → ν_τ K_S π⁻] = 0.404 ± 0.013 % [20], which are just based on a pure counting of events, as well as with the Particle Data Group average for the related branching fraction B[τ⁻ → ν_τ K̄⁰π⁻] = 0.90 ± 0.04 % [38]. When assuming isospin invariance, the above results can also be compared with the BaBar measurement B[τ⁻ → ν_τ K⁻π⁰] = 0.416 ± 0.018 % [21], showing very good overall consistency. As far as the parameters of the charged K* resonance are concerned, within the uncertainties our value M_K* = 895.3 ± 0.2 MeV is in very good agreement with the Belle result [20]. However, it is about 3.5 MeV larger than the current PDG average [38]. On the other hand, our finding for the width, Γ_K* = 47.5 ± 0.4 MeV, is significantly lower than the PDG average, but still roughly 1 MeV larger than the Belle result. The corresponding value of the chiral coupling G_V which appears in eq. (4) is found to be G_V = 72.0 ± 0.6 MeV. For the second vector resonance, the K*(1410), the mass obtained from our central chiral fit is about 100 MeV lower than the PDG average [38], while for the width we find reasonable agreement with the PDG value. However, for the Breit-Wigner fit, M_K*′ turned out much larger, which implies that the mass of the K*(1410) strongly depends on our parametrisation of the form factors, and its determination is therefore not very reliable. As a general remark, we would like to emphasise that one should not compare or average determinations done with different functional parametrisations.
Conclusions
Let us briefly summarise our findings before drawing further conclusions. From a description of the Kπ vector and scalar form factors F_+^{Kπ}(s) and F_0^{Kπ}(s) in the framework of RχT, additionally imposing constraints from dispersion relations as well as short-distance QCD, we were able to obtain a good fit to the recent Belle data [20] for the spectrum of the decay τ⁻ → ν_τ K_S π⁻. From our fit we could extract the corresponding branching fraction as well as the K* resonance parameters M_K* = 895.3 ± 0.2 MeV and Γ_K* = 47.5 ± 0.4 MeV, where the quoted errors only include the statistical fit uncertainties. Besides, we observe a substantial model dependence of these parameters. (See footnote 3.)
This model dependence is even more pronounced for the second included resonance, the K*(1410), and therefore we are unable to make a reliable prediction for M_K*′ and Γ_K*′. As far as the scalar form factor F_0^{Kπ}(s) is concerned, below the K* resonance it is obvious that this contribution is required in order to provide a satisfactory description of the data (with the exception of three data points which appear to form a small bump). Trying to also fit the scalar part, it is seen that the data prefer a slightly larger contribution, but on the basis of the present data this is statistically not significant. Above the K*, we have the well-established K*₀(1430) resonance, but here it interferes with higher vector resonances. Due to these correlations and the strong model dependence of the higher vector resonances, it will be difficult to disentangle scalar and vector contributions without a dedicated analysis of angular correlations [26,40]. An independent investigation of the Belle τ⁻ → ν_τ K_S π⁻ decay spectrum on the basis of Muskhelishvili-Omnès integral equations, also incorporating chiral constraints at low energies as well as QCD short-distance constraints at high energies, was recently published in ref. [41]. A visual inspection of the corresponding fit results presented in figure 5 of [41] suggests that the quality of the fit is not as good as in our case, though no further details, e.g. a χ², were provided in [41]. Still, it would be interesting to see if the approaches used in ref. [41] and in our work could somehow be merged, so as to impose as many theoretical constraints as possible on the employed form factors. Already in ref. [23], from our description of the vector form factor F_+^{Kπ}(s), we deduced the slope and curvature of the form factor close to s = 0, which are important parameters in the determination of |V_us| from K_l3 decays. Let us define a general expansion of the reduced form factor F̃_+^{Kπ}(s) as
F̃_+^{Kπ}(s) = 1 + λ'_+ (s/M_π⁻²) + (λ''_+/2)(s/M_π⁻²)² + (λ'''_+/6)(s/M_π⁻²)³ + …,  (16)
where λ'_+, λ''_+ and λ'''_+ are the slope, curvature and cubic expansion parameters, respectively. On the basis of our fit results of table 3, we are now in a position to update these quantities, also estimating the corresponding uncertainties, which yields λ'_+ = (25.2 ± 0.3)·10⁻³ and λ''_+ = (12.9 ± 0.3)·10⁻⁴. In an attempt to estimate systematic uncertainties from higher orders in the chiral expansion, as in the last section we have again also investigated the case F_K = F_π, which contributes the largest part of the quoted errors. The next most important source of uncertainty stems from the mixing parameter γ of the K*′ resonance, for which we have used the fit result of table 3. Besides, the vector masses M_K* and M_K*′ have been varied by 1 MeV and 100 MeV, respectively, but these modifications only have a small impact on the uncertainties of the expansion parameters of F̃_+^{Kπ}(s). Comparing to the most recent determination of λ'_+ and λ''_+ from an average of current experimental results for K_l3 decays [37] (where detailed references to the individual experiments can also be found), we observe that both determinations are in very good agreement, though for the time being our theoretical extraction is more precise. To conclude, our RχT description of the Kπ vector and scalar form factors provides a good representation of the experimental data of the Belle collaboration for the spectrum of the decay τ⁻ → ν_τ K_S π⁻ [20], thereby allowing many parameters of this approach to be deduced.
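The definitions of the slope and curvature in eq. (16) can be made concrete with a short numerical check: applying central finite differences at s = 0 to any smooth reduced form factor returns λ'_+ and λ''_+. The sketch below does this for the single-resonance Breit-Wigner factor from the earlier snippet (assumed to be defined); it is a toy cross-check of the definitions and does not reproduce the quoted RχT values, which come from the full fit.

```python
import numpy as np

M_PI = 0.13957  # GeV, the charged pion mass setting the expansion scale

def lambda_params(f, h=1e-3):
    """Slope and curvature of a reduced form factor f(s) near s = 0,
    following f(s) = 1 + lam1*(s/M_PI^2) + (lam2/2)*(s/M_PI^2)^2 + ..."""
    d1 = (f(h) - f(-h)) / (2 * h)                # central difference, f'(0)
    d2 = (f(h) - 2 * f(0.0) + f(-h)) / h ** 2    # central difference, f''(0)
    return (M_PI ** 2 * d1).real, (M_PI ** 4 * d2).real

# Toy input: the Breit-Wigner factor bw_kstar from the previous sketch,
# which is real below the Kpi threshold, so s = +-h is unproblematic.
lam1, lam2 = lambda_params(lambda s: bw_kstar(np.asarray(s, dtype=float)))
print(f"lambda'_+ ~ {lam1:.4f}, lambda''_+ ~ {lam2:.5f}")
```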
The method used here can also be applied to τ decay channels which involve three final-state hadrons, and this has already been performed successfully for the decays τ → ν_τ πππ [42] as well as τ → ν_τ KKπ [43]. In the near future, we plan to return to the still missing decay mode τ → ν_τ Kππ, which is the most interesting one in view of getting a better handle on the hadronic τ decay rate into strange final states.
7,243.6
2008-03-12T00:00:00.000
[ "Physics" ]
Towards a Model-Based Field-Frequency Lock for Fast-Field Cycling NMR
Fast-field cycling nuclear magnetic resonance (FFC NMR) relaxometry makes it possible to investigate the molecular dynamics of complex materials. FFC relaxometry experiments require the magnetic field to reach different values in a few milliseconds and field oscillations to stay within a few ppm during signal acquisition. Such specifications require the introduction of a novel field-frequency lock (FFL) system. In fact, control schemes based only on current feedback may not guarantee field stability, while standard FFLs are designed to handle very slow field fluctuations, such as thermal drifts, and may be ineffective in rejecting faster ones. The aim of this work is then to propose a methodology for the synthesis of a regulator that guarantees rejection of field fluctuations and a short settling time. Experimental trials are performed both for model validation and for evaluation of the closed-loop performance. Relaxometry experiments are performed to verify the improvement obtained with the new FFL. The results highlight the reliability of the model and the effectiveness of the overall approach.
Introduction
Fast-field cycling nuclear magnetic resonance (FFC-NMR) is a low-field technique that allows the study of the dependence of the spin-lattice relaxation rate R₁ = 1/T₁ on the strength of the B₀ magnetic field the sample is exposed to. The relation R₁(B₀) is called the NMR dispersion (NMRD) profile, and provides particular information about the molecular dynamics. An FFC experiment is based on a quick switch of the B₀ field and yields the NMRD profile in a point-wise way. The experiment is carried out by cycling over three phases [1], as depicted in Fig. 1:
• Polarization: a high polarization field, B₀ = B_pol, is applied to pre-polarize the sample.
• Relaxation: the sample relaxes at a magnetic field B₀ = B_rel, whose intensity is changed at every cycle to obtain relaxation at different field strengths.
• Acquisition: the field is set to the acquisition field, B₀ = B_acq, for signal acquisition at a better signal-to-noise ratio (SNR).
As in standard NMR experiments, the stability of the magnetic field B₀ during acquisition is a key point for obtaining precise and repeatable results. Still, while in standard NMR the magnetic field can be generated by means of superconducting magnets that provide the required level of stability, in FFC NMR B₀ must be generated with very fast resistive magnets that allow fast switching of the field intensity. Control systems based on current feedback, typically implemented in the power supplies feeding FFC electromagnets, may not be enough to obtain the desired level of stability of the B₀ magnetic field, since no direct feedback is present on the field itself. In addition, FFC NMR experiments require the desired stability to be obtained as soon as possible at the beginning of the acquisition phase. In fact, any delay in the measurement would allow part of the relaxation process to occur at the acquisition field, thus yielding a poorly reliable measure [1]. The field-frequency lock (FFL) is a well-known approach to avoiding magnetic field oscillations. The idea is to exploit the dependence of the NMR signal on the field, and to obtain an indirect but very fine-grained measure of the magnetic field fluctuations from a parallel NMR experiment, called the "lock experiment", which is carried out on a known nuclear species
(i.e. with a known gyromagnetic ratio γ) [2][3][4][5][6][7][8]. Note that the two nuclear species must be different to avoid interference between the two experiments. The classical implementation of the FFL relies on a phase-locked loop (PLL), where the NMR lock signal is compared to a reference one and an error signal proportional to the frequency deviation is generated. This error signal can be used to feed a P or PI regulation block [2][3][4]. Still, this implementation suffers from low SNR and is ineffective in rejecting the high-frequency current/field disturbances [7] that typically affect FFC electromagnets. A different approach is investigated in [7,9], where the lock experiment is designed to obtain a continuous signal, which can be used more effectively as feedback in a control loop. However, this approach calls for a detailed model of the NMR lock experiment for a proper synthesis of the regulator. Only a few works in the literature exploit an NMR model for the design of the controller (e.g. [2,7,10]). In [9], the authors developed and tested in simulation a methodology based on dynamic models of the NMR sensor and on PID regulators, which allows field stability to be obtained within a prescribed time interval, as required by FFC NMR. The aim of this work is to give the proposed methodology an experimental validation by performing closed-loop experiments. The performance of the control loop as an FFL is also analyzed by performing the main experiment with and without loop closure and comparing the results. The paper is organized as follows: the approach is first presented in Sect. 2 in a general framework; then a detailed description of the case studies is introduced, along with the methodology applied to synthesize the regulators (Sects. 3.1-3.5). The results of the closed-loop trials are reported and discussed in Sect. 3.6. Finally, Sect. 3.7 discusses the effect of the developed FFL on the main NMR experiment.
Methodology
This work faces the FFL problem with the methodology developed by the authors in [9]. This section describes the approach, which requires:
• building the NMR lock sensor;
• modelling the NMR lock sensor;
• designing the closed loop;
• synthesizing a PI/PID regulator according to the overall process transfer function.
Each of these steps is discussed in detail in the following sections.
Building of the NMR Lock Sensor
Building the NMR lock sensor means setting up an NMR experiment so that the resulting signal can be effectively used as feedback in the control loop. In this phase, the typical features of a sensor, such as linearity, static gain and bandwidth, must be taken into account. All the aforementioned features depend on the choice of the sample used for the NMR lock experiment and on the way it is stimulated by means of the radio-frequency (RF) pulse sequence. Both [7] and [9] suggest that a low-power, high-repetition-rate pulse sequence allows a continuous NMR signal to be obtained, suitable for coping with fast magnetic field disturbances. Let α be the angle by which the magnetization vector M moves away from the z-axis, in the rotating reference frame, because of each pulse. Let T be the inter-pulse period. The "lock sequence" is then composed of identical RF pulses with small α (i.e. a few degrees) and T << T₂*. The application of such a sequence to the sample makes the magnetization vector enter the steady-state free precession (SSFP) regime, described in [7,[11][12][13][14][15]].
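The SSFP behaviour described above can be reproduced with a few lines of discrete-time Bloch simulation: small rotations about y, separated by free precession and relaxation over the inter-pulse period T. The sketch below uses a single isochromat and placeholder sample parameters (the values of γ, α, T, T₁ and T₂ are illustrative, not those of the lock samples used later), so it is a bare-bones stand-in for the full BBIM of [9].

```python
import numpy as np

def ssfp_my(db_gauss, gamma=26750.0, alpha_deg=3.0, T=1e-3,
            T1=0.2, T2=0.05, n_pulses=2000):
    """Steady-state M_y after a train of small alpha pulses about y,
    for a field offset db_gauss (single isochromat, M0 = 1).
    gamma is a placeholder in rad/(s*Gauss), close to 1H."""
    a = np.radians(alpha_deg)
    dtheta = gamma * db_gauss * T          # precession per inter-pulse period
    e1, e2 = np.exp(-T / T1), np.exp(-T / T2)
    M = np.array([0.0, 0.0, 1.0])
    for _ in range(n_pulses):
        # Rotation by alpha about the y-axis (the RF pulse)
        Mx, My, Mz = M
        M = np.array([Mx * np.cos(a) + Mz * np.sin(a), My,
                      -Mx * np.sin(a) + Mz * np.cos(a)])
        # Free precession about z plus relaxation during T
        Mx, My, Mz = M
        c, s = np.cos(dtheta), np.sin(dtheta)
        M = np.array([e2 * (Mx * c - My * s), e2 * (Mx * s + My * c),
                      1.0 + (Mz - 1.0) * e1])
    return M[1]

# Sensor curve: steady-state M_y versus field offset around resonance;
# on resonance (offset 0) M stays in the x-z plane, so M_y = 0
offsets = np.linspace(-0.05, 0.05, 21)     # Gauss
curve = [ssfp_my(db) for db in offsets]
```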
If the RF pulses are applied along the y-axis of the rotating frame, M reaches a steady state, oscillating in the x-z plane when the experiment is carried out on perfect resonance. This means that, if no field disturbance ΔB is present, the y component of the magnetization, M_y, is zero. In general, M_y provides a measure of ΔB according to the curve depicted in Fig. 2. It is important to underline that the ΔB-to-M_y curve is bijective only if restricted to the interval [ΔB_min; ΔB_max]; in this region, it is also approximately linear [7]. Let the full width of the linear region (FWLR) be the width of the interval [ΔB_min; ΔB_max]. A tentative expression relating the gain of the sensor (i.e. the slope of the linear part of the curve) to the parameters of the lock sample and sequence was reported in [7]. Still, whenever the main magnetic field B₀ is nonhomogeneous over the lock sample, i.e. T₂* < T₂, that expression loses significance, since the gain is mainly related to the homogeneity of the field. The more homogeneous the magnetic field, the higher the slope of the curve (and the narrower the FWLR); conversely, the less homogeneous the magnetic field, the lower the slope of the curve (and the wider the FWLR). The dynamics of the experiment is also strictly related to the parameters of the lock sample and sequence [11], and to the field homogeneity, in a complex way.
Modelling of the NMR Lock Sensor
To account for all the nonlinearities affecting the gain, the FWLR and the dynamic response of the lock sensor, the authors implemented and validated in [9] a simulator for the experiment, based on the discrete-time Bloch equations (DTBE) [7,11,14] and the isochromat decomposition of the NMR signal [11-13, 16, 17]. The overall model is referred to as the Bloch-based isochromat model (BBIM). The simulator allows data to be collected and a linear model to be fitted for control purposes. It is also useful as a design tool to test the response of the lock sensor for the desired combination of sample and sequence parameters.
Transfer Function Model
The BBIM carefully describes the dynamics of the lock experiment, but is not suitable for control purposes because of its complexity. A linear transfer function model describing the dynamic behavior of the lock sensor can be identified by means of input-output identification techniques, exploiting the BBIM to collect data from simulations. The identification phase is set up as follows:
• Simulation of a step response around ΔB = 0. The system is first brought into SSFP on perfect resonance. Then, once the steady-state condition is reached, a step variation of ΔB is applied. Let ΔB_step be the amplitude of the step, which must be chosen so that |ΔB_step| << ΔB_max. The M_y(t) signal is collected at this point.
• Filtering of the output identification data. The SSFP signal shows oscillations with fundamental harmonic at 1/T and harmonic content at higher frequencies [11]. From the sensor point of view, these oscillations represent measurement noise and must not be described by the linear model. A low-pass filtering procedure reduces the impact of the SSFP oscillations on the M_y(t) signal.
• Definition of the structure of the model. The structure of the local model must be chosen according to the behavior of the step response M_y(t). A first-order transfer function with no zero is used in the case of a monoexponential response.
A second-order transfer function with a zero is used in the case of more complex behaviors.
• Identification of the model parameters via constrained least-squares optimisation. Constrained least-squares (CLS) optimisation exploits the input-output data to optimize the values of the model parameters, providing the best fit between the model prediction and the identification data. In the case of a second-order structure, it is possible to introduce constraints on the position of the zero according to the presence of an inverse response or an overshoot in the step response.
The CLS optimisation problem is expressed as follows [18]:
min_θ ‖y − ŷ(θ)‖₂²  subject to  A θ ≤ b,
with y the vector containing the measured M_y(t) for each time instant t, ŷ(θ) the vector containing the predicted M_y(t) for each corresponding time instant, θ the vector of unknown parameters, and A and b, respectively, a matrix and a vector of proper size introducing polytopic constraints on θ. The goodness of fit is evaluated with the metric
FIT = 100 · (1 − ‖y − ŷ‖₂/‖y − ȳ‖₂) %,
where ȳ is the mean of the measured data. In the case of a single-exponential response of M_y, the following first-order transfer function is adopted:
G_nmr(s) = μ_nmr / (1 + s τ_nmr),
with the corresponding step response predictor
M_y(t) = μ_nmr ΔB_step (1 − e^(−t/τ_nmr)).
In the other cases, a second-order transfer function is adopted:
G_nmr(s) = μ_nmr (1 + s T_nmr) / [(1 + s τ_nmr,1)(1 + s τ_nmr,2)],  (7)
with the convention τ_nmr,2 < τ_nmr,1 and the corresponding predictor
M_y(t) = μ_nmr ΔB_step [1 + ((T_nmr − τ_nmr,1)/(τ_nmr,1 − τ_nmr,2)) e^(−t/τ_nmr,1) + ((T_nmr − τ_nmr,2)/(τ_nmr,2 − τ_nmr,1)) e^(−t/τ_nmr,2)].  (8)
In particular, in the presence of an inverse response, the constraint T_nmr < 0 is introduced; in the case of an overshoot, the constraint T_nmr > τ_nmr,1 is introduced instead. The stability constraints τ_nmr,1 > 0, τ_nmr,2 > 0 are always present. A minimal identification sketch is given after the requirements list below.
Design of the Closed Loop
The design of the closed loop for the lock system should consider both the NMR lock sensor dynamics and the NMR hardware set-up required to run the lock experiment. The scheme in Fig. 3 shows the overall closed-loop set-up for the FFL in terms of transfer functions. The regulator action u(t) is the voltage output of a DAC converter and is turned into a current by means of a known conductance C. The current output is limited; therefore, a saturation must be included. Note that it is modelled as the equivalent voltage saturation to ease an anti-windup implementation of the regulator. Let u_sat(t) be the control action after the saturation block. The power supply features an internal control loop: the current control action is introduced by summing it to the overall current reference of the power supply. Let us therefore denote the current control action as I*. The power supply unit is then modelled as a first-order transfer function relating I_ps, i.e. the power supply current deviation from the resonance condition, to I*:
G_ps(s) = μ_ps / (1 + s τ_ps).  (9)
The magnet is instead modelled as a static gain,
ΔB_mag = μ_mag I_ps,  (10)
with ΔB_mag the magnet field deviation from the resonance condition. The NMR receiver chain (the quadrature detector in particular) can be described by a low-pass filter, which should also be considered in the design of the control loop. Note that, in this framework, the filter can conceptually be placed after the NMR lock sensor, as depicted in Fig. 3. It can therefore be modelled as
G_qd(s) = μ_qd / (1 + s τ_qd),  (11)
relating M_y(t) to its low-pass filtered version. The overall process transfer function is then given by
G(s) = C G_ps(s) μ_mag G_nmr(s) G_qd(s).  (12)
Synthesis of the Regulator
Once the process transfer function G(s) is known, it is possible to tune a parameterized regulator, which must cope with the following requirements:
• stability of the closed-loop system;
• a settling time as short as possible;
• rejection of current/field oscillations;
• perfect rejection of step current/field disturbances (at steady state).
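As announced above, here is a minimal sketch of the identification step: the second-order step response predictor (eq. (8)) is fitted to noisy data with bound-constrained least squares. The data are synthetic placeholders standing in for filtered BBIM output, and simple box bounds stand in for the full polytopic constraints of the CLS problem.

```python
import numpy as np
from scipy.optimize import least_squares

def step_predictor(theta, t, b_step):
    """Step response of G(s) = mu*(1 + s*T)/((1 + s*tau1)(1 + s*tau2)), eq. (8)."""
    mu, T, tau1, tau2 = theta
    c1 = (T - tau1) / (tau1 - tau2)
    c2 = (T - tau2) / (tau2 - tau1)
    return mu * b_step * (1.0 + c1 * np.exp(-t / tau1) + c2 * np.exp(-t / tau2))

# --- Placeholder data: a "true" system with an inverse response (T < 0) ---
t = np.linspace(0.0, 1.0, 2000)
b_step = -0.05                                    # Gauss, like Delta B_step
theta_true = np.array([3.0, -0.02, 0.15, 0.01])   # mu, T, tau1, tau2
y_meas = step_predictor(theta_true, t, b_step)
y_meas = y_meas + 0.002 * np.random.default_rng(0).standard_normal(t.size)

# --- Box-constrained LS: T <= 0 (inverse response), tau1, tau2 > 0 ---
res = least_squares(
    lambda th: step_predictor(th, t, b_step) - y_meas,
    x0=np.array([1.0, -0.01, 0.2, 0.02]),
    bounds=([-np.inf, -np.inf, 1e-4, 1e-5], [np.inf, 0.0, 10.0, 1.0]),
)
fit = 100.0 * (1.0 - np.linalg.norm(y_meas - step_predictor(res.x, t, b_step))
               / np.linalg.norm(y_meas - y_meas.mean()))
print(res.x, f"FIT = {fit:.1f}%")
```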
For the sake of generality, the synthesis of a PID regulator is discussed in this section; however, in some cases a simpler PI regulator may be enough to meet the requirements. Note that the presence of an integral action in the regulator guarantees perfect compensation of step process disturbances at steady state (no derivative action is present in the process); this would not be obtained by means of simpler proportional regulators. In addition, the closed-loop bandwidth ω_bw is required to be as large as possible, to provide the shortest settling time and as much rejection of process disturbances as possible. Still, the presence of the measurement noise N limits the closed-loop bandwidth. Recall that the oscillations of the SSFP signal represent a source of measurement noise (see Sect. 2.2). When ω_bw is expressed in rad/s, it therefore holds that
ω_bw << 2π/T.  (13)
Equation (13) stresses the importance of T as a design parameter: by reversing the inequality, it is possible to place a constraint on the choice of T according to the desired closed-loop bandwidth (i.e. according to the desired closed-loop settling time). Let us consider a PID controller in the realizable form
R(s) = μ_r (1 + s T_z1)(1 + s T_z2) / [s (1 + s τ_f)].  (14)
It is possible to use the two zeros of the PID to cancel the two slowest poles of G(s), typically an NMR pole or the power supply one, while the quadrature detector pole typically resides at high frequency. The filter pole 1/τ_f is placed outside the desired bandwidth and may be used to improve the filtering of the measurement noise N. A careful choice of μ_r and τ_f allows the loop function L(s) = G(s)R(s) to be shaped so as to provide the required disturbance rejection and settling time. The stability of the closed-loop system can instead be assessed by means of the Bode criterion [19]. The regulators are implemented in discrete time, exploiting the full computation capability of the hardware, which results in T_s = 25 μs. Discretisation of the regulators is performed with the Tustin method to guarantee that stability is preserved (all asymptotically stable/stable continuous-time poles are respectively mapped into asymptotically stable/stable discrete-time poles); a minimal discretised implementation is sketched below.
Experimental Trials
To validate the preliminary results obtained in [9], a series of experimental trials is performed in this work. This section is first devoted to the description of both the main NMR experiment and the lock set-up used to carry out the experimental trials. Then, the methodology described in Sect. 2.2 is applied to synthesize a PI regulator. The results of the closed-loop experiments are finally presented and discussed.
NMR Lock Set-up
The main magnetic field B₀ is generated by a Stelar s.r.l. FFC electromagnet. The regulators are implemented digitally, and a sampling time of T_s = 25 μs can be achieved with this implementation. The control action (voltage) is generated by a 16-bit DAC and is turned into a current by means of a 5 Ω resistance. For safety reasons, the current output is limited to ±500 mA. Due to the presence of this saturation, an anti-windup scheme is adopted for the PI implementation. As already stated in Sect. 2.3, the corresponding voltage saturation (±2.5 V) is considered. The current control action is then summed to the current reference of the IECO power supply. Step and sinusoidal current disturbances I_d(t) are artificially generated by an analog waveform generator and injected into the closed loop by summing them to the reference of the IECO power supply as well.
NMR Sensor Models
Transfer function models for the NMR sensor are now derived according to the methodology described in Sect. 2.2. Both the silicone and the copper sulfate cases are investigated.
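Returning for a moment to the discretisation step of Sect. 2.4: the PI regulator R(s) = μ_r(1 + sT_z1)/s maps under the Tustin substitution s → (2/T_s)(z − 1)/(z + 1) to a one-line difference equation. The sketch below implements it with the voltage clamp acting as a simple anti-windup; the gains shown are the copper sulfate values quoted in Sect. 3.5, used here only as an example.

```python
class TustinPI:
    """PI regulator R(s) = mu_r*(1 + s*T_z1)/s discretised with Tustin's method.
    Difference equation:
    u[k] = u[k-1] + mu_r*((Ts/2 + T_z1)*e[k] + (Ts/2 - T_z1)*e[k-1])."""

    def __init__(self, mu_r, t_z1, ts=25e-6, u_max=2.5):
        self.b0 = mu_r * (ts / 2.0 + t_z1)   # coefficient of e[k]
        self.b1 = mu_r * (ts / 2.0 - t_z1)   # coefficient of e[k-1]
        self.u_max = u_max                   # +/-2.5 V voltage saturation
        self.u_prev = 0.0
        self.e_prev = 0.0

    def step(self, e):
        u = self.u_prev + self.b0 * e + self.b1 * self.e_prev
        # Anti-windup: clamp and store the saturated value as the new state
        u = max(-self.u_max, min(self.u_max, u))
        self.u_prev, self.e_prev = u, e
        return u

# Example gains: the copper sulfate synthesis values from Sect. 3.5
pi = TustinPI(mu_r=-168.575, t_z1=0.0015)
u0 = pi.step(0.01)   # one control update for an error sample of 0.01
```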
The NMR parameters of the two samples are reported in Table 1. An inversion-recovery (IR) experiment is used to estimate T₁ [23], while a Carr-Purcell-Meiboom-Gill (CPMG) experiment allows T₂ to be estimated in the presence of a non-homogeneous B₀ [23]. T₂* and M₀ are obtained by fitting a simple free induction decay (FID) signal [23]. Table 2 shows the lock sequence parameters instead. All these parameters are required by the BBIM to perform simulations of the NMR lock experiment.
Silicone
The dynamic behavior of the silicone sample stimulated with the lock sequence is simulated with the BBIM model (details are reported in Tables 1 and 2). In particular, a step response experiment is simulated. The amplitude of the applied field step is ΔB_step = −0.048 Gauss. The step is applied at time t = 0 s; Fig. 4a depicts the field step and the M_y(t) response, which are used as input-output data for CLS identification. Figure 4a also shows M_y(t) after low-pass filtering. A detail is depicted in Fig. 5: note how the oscillations of the SSFP signal are removed after filtering. The behavior of M_y(t) clearly shows an inverse response. A second-order system structure is then required, as defined in Eq. (7), with the predictor reported in Eq. (8). CLS identification yields the values of the model parameters. A comparison of the step response obtained from the BBIM model and that predicted by G_nmr(s) is reported in Fig. 4b. The goodness of fit is FIT = 89 %.
Copper Sulfate
A step response simulation is performed based on the BBIM model. A field step of amplitude ΔB_step = −0.113 Gauss is applied at time t = 0 s to the copper sulfate sample, stimulated with the lock sequence (details are reported in Tables 1 and 2). The input-output data for CLS identification are reported in Fig. 6a. Again, M_y(t) shows an inverse response. The second-order system structure described in Eq. (7) is then required, with the predictor reported in Eq. (8). The parameters of the transfer function model are obtained from CLS identification. Figure 6b shows a comparison of the step response obtained from the BBIM model and that predicted by G_nmr(s). The goodness of fit is FIT = 88.5 %.
Process Model
The whole process transfer function G(s) is now derived for both NMR lock samples. As stated in Sect. 3.1, the IECO power supply guarantees perfect tracking of step current references, with a bandwidth of 25 kHz (157,082 rad/s). Hence, the parameters of Eq. (9) can be set as μ_ps = 1 and τ_ps = 1/157,082 s. The quadrature detector low-pass filter is set to 40 kHz (251,327 rad/s) for the closed-loop experiments. Its transfer function time constant can then be set as τ_qd = 1/251,327 s. It is important to note that both NMR lock samples (silicone and copper sulfate) show an inverse response in the step response experiment. This places an upper bound on the closed-loop bandwidth ω_bw, which is not likely to be higher than the frequency of the non-minimum phase zero of G_nmr(s). In particular, in the case of silicone the zero is placed at 22.72 rad/s, while in the case of copper sulfate it is placed at 674.3 rad/s. In view of this consideration, it is possible to neglect the dynamics of both G_ps(s) and G_qd(s), since their poles are placed at higher frequencies with respect to the expected closed-loop bandwidth ω_bw. In addition, since μ_ps = μ_qd = 1, it is possible to approximate G(s) ≈ C μ_mag G_nmr(s). The NMR sensor dominates the overall dynamics of the system, while the gain is influenced by the magnet and the conductance.
According to the manufacturer, the magnet is characterized by a gain μ_mag = 47 Gauss/A, while the conductance is C = 0.2 A/V, corresponding to the 5 Ω resistance.
Experimental Validation Trials
The process models identified in the previous section are obtained by means of simulations only. In this section, the process models are instead identified relying on real experimental data. Note that this would not be possible for the end-user of a lock system; therefore, the regulators will be tuned according to the models derived in Sect. 3.3, and the procedure discussed here is only intended for validation. The steps required for the experimental trials are the same adopted for the simulated ones: the sample is stimulated with the lock sequence and brought into the SSFP regime. The voltage control action u(t) then undergoes a step variation. Both u(t) and M_y(t) are recorded as input-output data for identification, which is carried out by means of the Matlab Identification Toolbox [24]. Data preprocessing is limited to the same low-pass filtering procedure also applied to the simulated data. The results are now presented for both NMR samples.
Silicone
The step response experiment performed with the silicone sample is depicted in Fig. 7a. As in the simulation, an inverse response is present in M_y(t). The same second-order structure is therefore adopted for the new identification based on experimental data. Both the experimental and the model-predicted step responses are reported in Fig. 7b. Let G_e(s) be the resulting process transfer function. A comparison to Eq. (20) highlights that the static gain, the non-minimum phase zero and the slow pole are closely identified, while the main difference is the fast pole.
Copper Sulfate
Figure 8a shows the step response experiment input-output signals, u(t) and M_y(t), in the case of the copper sulfate sample. As in the previous case, the experimental data are in agreement with the simulated ones, and both responses are characterized by the inverse response. The process model G_e(s) is identified from the experimental data, while the one identified from simulation is reported in Eq. (21). Figure 8b shows a comparison of the experimental step response and the one predicted by G_e(s); the two models are again in close agreement.
Synthesis of the Regulator
The synthesis of the regulator, based on the process model G(s) derived in Sect. 3.3, is now discussed for both lock samples. Note that, in both cases, the pole of G(s) associated with τ_nmr,2, i.e. the one at higher frequency, is outside of the expected closed-loop bandwidth. It is therefore not necessary to compensate for its effect in the design of the loop function. This motivates the choice of implementing a PI regulator instead of a PID. The regulator transfer function can then be written as
R(s) = μ_r (1 + s T_z1)/s,
with T_z1 set to cancel the slow NMR pole and μ_r chosen to keep ω_bw lower than the frequency of the non-minimum phase zero of G(s).
Silicone
The process transfer function in the case of the silicone sample is reported in Eq. (20). By synthesizing the regulator according to the previous considerations, a closed-loop bandwidth ω_bw = 13.1 rad/s is obtained, with the dominant closed-loop poles placed at s₁ = −22.83 and s₂ = −4572. The expected closed-loop settling time is about 0.35 s. This stresses the fact that the choice of the NMR sample used for the lock experiment is crucial for the overall performance of the system. A careful design of the sample, without any inverse response, or with a faster one, may in fact allow better settling times and more rejection of process noise to be obtained.
With the lock sequence parameters adopted for the silicone sample (see Table 2), the measurement noise due to the SSFP signal is located at 104,720 rad/s. The design of L(s) (see Fig. 9) allows a rejection of −27 dB of this measurement noise. The phase margin associated with the design is φ_m = 60°, ensuring robust stability of the closed loop (the phase margin reduction due to discretisation is negligible).

Copper Sulfate

In the case of the copper sulfate lock sample, the process transfer function is reported in Eq. (21). The synthesis of the regulator results in μ_r = −168.575 and T_z1 = 0.0015 (Eq. (26)). The expected closed-loop bandwidth is ω_bw = 388 rad/s, with an expected closed-loop settling time of about 0.0119 s. With the lock sequence parameters adopted for the copper sulfate sample (see Table 2), the measurement noise due to the SSFP signal is located at 125,660 rad/s and the loop function design (see Fig. 9) allows −34 dB of attenuation. The phase margin associated with the design is φ_m = 55.5°, ensuring robust stability of the closed loop (the phase margin reduction due to discretisation is negligible). The dominant closed-loop poles are placed at s_1 = −1213 + 405i and s_2 = −1213 − 405i.

Closed-Loop Trials

A set of closed-loop experiments is designed to assess the performance of the two loop function designs, with particular focus on the rejection of process disturbances when dealing with the real process. Note that, with the current NMR hardware set-up, the measurement noise at low frequency is significant when compared to the measured current disturbance of the IECO power supply (see Sect. 2.3). Pending a new NMR receiver chain with reduced measurement noise, an artificial current disturbance of significant amplitude is injected into the system to test the controller design. The current disturbance is introduced by means of an analog current waveform generator, whose output is summed to the power supply current reference. In particular, two different kinds of experiments are performed: a step disturbance is injected to test the closed-loop settling time, while sinusoidal disturbances at different frequencies are used to test the design of the loop function. Note that the amplitude of the current disturbances is chosen so that the corresponding field disturbance does not exceed the linear region of the NMR sensor.

Silicone

In the case of the NMR lock based on the silicone sample, according to the design described in Sect. 3.5, the expected closed-loop bandwidth is ω_bw = 13.1 rad/s, corresponding to a settling time of about 0.35 s. Figure 10 shows the experimental response to a step current disturbance. The experimental evaluation yields a settling time of about 0.1 s, which is faster than expected from the linear approximation. Figure 11a, b shows the open- and closed-loop responses to a 1 Hz sinusoidal disturbance, respectively. The effect on the feedback signal M_y(t) is reduced by a factor of about five by the closed loop. This is slightly better than predicted by the loop function design, which has a magnitude of about 6 dB at 1 Hz (see Fig. 9). The open- and closed-loop responses to a 10 Hz sinusoidal disturbance show that, in this case, the effect on the feedback signal M_y(t) is not reduced by the closed loop. This is consistent with the loop function design. Note that a DC current offset is present in the current disturbance. This offset is correctly compensated by the integral action featured in the closed loop.
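The attenuation figures quoted above come from evaluating the loop at the disturbance frequency. A minimal sketch of that check, reusing the same hypothetical loop function as in the previous sketch (not the actual L(s) of this design):

```python
import numpy as np

# Hypothetical loop function L(s) = 15*(1 - s/200) / (s*(1 + s/4000))
L_num = 15.0 * np.array([-1.0 / 200.0, 1.0])
L_den = np.polymul([1.0, 0.0], [1.0 / 4000.0, 1.0])

def sens_db(f_hz):
    """Sensitivity magnitude |1/(1 + L(j*2*pi*f))| in dB: negative values
    mean the closed loop attenuates a disturbance at that frequency."""
    s = 1j * 2 * np.pi * f_hz
    L = np.polyval(L_num, s) / np.polyval(L_den, s)
    return 20 * np.log10(abs(1.0 / (1.0 + L)))

for f in (1.0, 10.0):
    print("|S| at %g Hz: %+.1f dB" % (f, sens_db(f)))
```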
Copper Sulfate

The expected closed-loop bandwidth and settling time in the case of the copper sulfate sample are ω_bw = 388 rad/s and 0.0119 s, respectively (Fig. 12). The experimental results obtained with the application of a current step disturbance are reported in Fig. 13. The closed-loop settling time can be experimentally evaluated as 0.015 s, which is slightly longer than expected. Figure 14a, b shows the open- and closed-loop responses to a 10-Hz sinusoidal disturbance. In this case, the effect on the feedback signal M_y(t) is reduced by a factor of 3 by the closed loop. This is consistent with the loop function design, which should provide 14 dB of rejection at 10 Hz.

Effect of Lock System on the Main NMR Experiment

The final aim of the NMR lock system is to improve the results of the main NMR experiment which runs in parallel. For this purpose, a series of NMR experiments is performed first without the NMR lock and then repeated with the lock system based on copper sulfate, in the presence of external current disturbances. The choice of copper sulfate is motivated by the better performance obtained in Sect. 3.6. The main NMR experiment is carried out on a pure Galden sample, targeting the fluorine ¹⁹F nucleus (γ = 25,166.2 rad/(s·Gauss)) at a magnetic field B_0 = 1880 Gauss, corresponding to a resonance frequency of about 8 MHz. Each experiment consists of a series of standard S1P sequences, generating a standard T₂* decay of the recorded NMR signal [23]. An S1P experiment runs on resonance if the imaginary component of the quadrature-detected NMR signal is zero, while the real component shows an exponential decay. Figure 17a shows the main NMR signal in the presence of the 10-Hz sinusoidal current disturbance. According to the analysis carried out in Sect. 3.6, the lock system should appreciably improve the results by rejecting the current oscillations. The results of the NMR experiment with the same disturbance and the lock system are depicted in Fig. 17b. Note how the imaginary component of the NMR signal is closer to zero than in the open-loop case, and how the real component recovers its exponential shape. The experiment is repeated with a 50-Hz sinusoidal current disturbance. Results are depicted in Fig. 18a, b (copper sulfate sample: main NMR experiment without lock system (a) and with lock system (b), in the presence of a 50-Hz sinusoidal disturbance). The presence of the lock system still improves the results, but some oscillations remain in the NMR signal, since the current disturbance is only slightly rejected by the closed loop. As stated before, the experiment runs on resonance if the imaginary signal is zero. Therefore, a possible way to quantitatively evaluate the effect of the lock system is to consider the power ℙ of that signal, defined for a generic discrete-time signal z(n) of length N as

ℙ = (1/N) Σ_{n=1}^{N} z(n)².

Table 3 reports a comparison of the power of the imaginary part of the NMR signal in the two trials discussed in this section. In both cases, the presence of the lock system reduces the power with respect to the no-lock case. As expected from the loop function design and as already shown by the closed-loop trials, the lock is more effective against the 10-Hz sinusoidal disturbance.

Conclusion

This paper aims to verify with experimental trials the effectiveness of the FFL approach developed in [9]. Two lock sensors are designed and their linear models are obtained from simulated data.
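The power metric is straightforward to compute; a minimal sketch, assuming z(n) holds the sampled imaginary component (the mean-square definition above is a reconstruction of the lost equation, and the residual-oscillation amplitudes below are hypothetical):

```python
import numpy as np

def power(z):
    """Mean-square power of a discrete-time signal z(n), used here to score
    the imaginary component of the on-resonance NMR signal."""
    z = np.asarray(z, dtype=float)
    return np.mean(z ** 2)

# Hypothetical comparison: residual 10 Hz oscillation without and with lock
t = np.linspace(0.0, 1.0, 4000)
no_lock = 0.5 * np.sin(2 * np.pi * 10 * t)
with_lock = 0.1 * np.sin(2 * np.pi * 10 * t)
print(power(no_lock), power(with_lock))   # the lock should reduce this power
```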
Experimental data are collected to evaluate the correctness of the procedure. The lock control loop is then designed on the basis of the sensor model and of models of the hardware needed to perform the NMR lock experiment. Two PI regulators are tuned accordingly. Closed-loop trials confirm the correctness of the approach, providing results in agreement with the loop function design. Standard NMR experiments are also performed with and without the designed lock system, and highlight the benefit of its introduction. The next step of the research will consist of applying the proposed methodology to develop an external lock system for proton FFC NMR, with the lock experiment performed on fluorine. This will require the engineering of a suitable fluorine sample that allows the lock experiment to be performed efficiently, and a dedicated receiver channel with high SNR.
7,638.4
2019-06-15T00:00:00.000
[ "Physics" ]
Malaria Infections Do Not Compromise Vaccine-Induced Immunity against Tuberculosis in Mice

Background

Given the considerable geographic overlap in the endemic regions for malaria and tuberculosis, it is probable that co-infections with Mycobacterium tuberculosis and Plasmodium species are prevalent. Thus, it is quite likely that both malaria and TB vaccines may be used in the same populations in endemic areas. While novel vaccines are currently being developed and tested individually against each of these pathogens, the efficacy of these vaccines has not been evaluated in co-infection models. To further assess the effectiveness of these new immunization strategies, we investigated whether co-infection with malaria would impact the anti-tuberculosis protection induced by four different types of TB vaccines in a mouse model of pulmonary tuberculosis.

Principal Findings

Here we show that the anti-tuberculosis protective immunity induced by four different tuberculosis vaccines was not impacted by a concurrent infection with Plasmodium yoelii NL, a nonlethal form of murine malaria. After an aerogenic challenge with virulent M. tuberculosis, the lung bacterial burdens of vaccinated animals were not statistically different in malaria infected and malaria naïve mice. Multi-parameter flow cytometric analysis showed that the frequency and the median fluorescence intensities (MFI) for specific multifunctional T (MFT) cells expressing IFN-γ, TNF-α, and/or IL-2 were suppressed by the presence of malaria parasites at 2 weeks following the malaria infection but were not affected after parasite clearance at 7 and 10 weeks post-challenge with P. yoelii NL.

Conclusions

Our data indicate that the effectiveness of novel TB vaccines in protecting against tuberculosis was unaffected by a primary malaria co-infection in a mouse model of pulmonary tuberculosis. While the activities of specific MFT cell subsets were reduced at elevated levels of malaria parasitemia, the T cell suppression was short-lived. Our findings have important relevance in developing strategies for the deployment of new TB vaccines in malaria endemic areas.

Introduction

Plasmodium falciparum and Mycobacterium tuberculosis are among the causative agents of the world's most important tropical diseases. Malaria and tuberculosis are major global causes of morbidity and mortality, with each causing 1-2 million deaths annually. The World Health Organization has reported that there are 300-500 million new cases of malaria and 9 million new cases of tuberculosis each year [1,2]. Moreover, it has been estimated that one-third of the world's population is infected with latent TB. Given the substantial geographic overlap of endemic regions for these diseases, and especially the large number of individuals with latent TB living in malaria-endemic regions, it is highly probable that co-infections with M. tuberculosis and Plasmodium species are common [3,4]. This presumed high rate of malaria-TB co-infections could be problematic for the development of TB vaccines targeted for malaria-endemic areas of the world. Malaria parasites are known to be immunosuppressive, and acute malaria infections have already been associated with decreased immune responses to meningococcal, Hib conjugate, and Salmonella typhi vaccines [5][6][7][8][9].
Since many potential vaccinees, including children in the WHO Expanded Program for Immunization, reside in areas with high rates of malaria, it is important to understand the effect of malaria infections on the immunogenicity and effectiveness of vaccines designed to prevent tuberculosis. To combat the lethal tuberculosis epidemic, numerous novel vaccine preparations and immunization strategies are being created to replace or augment the current TB vaccine, M. bovis BCG. While BCG does induce protection against disseminated tuberculous disease in children, it has been relatively ineffective in preventing the most prevalent form of the disease, adult pulmonary TB [10,11]. Furthermore, vaccination with live BCG poses a considerable risk of serious infection when it is given to infants perinatally infected with HIV [12][13][14]. Among the new TB vaccine types being tested to replace or augment the use of BCG are live, attenuated M. tuberculosis vaccines, TB fusion proteins formulated in immunostimulating adjuvants, and viral vectored vaccines. At least 10 of these new vaccine preparations are currently being evaluated in clinical trials [15][16][17]. While the efficacy of each of these new vaccine formulations has been assessed in pre-clinical M. tuberculosis vaccination/challenge models, the new TB vaccines have been only minimally evaluated in co-infection models. Despite the considerable public health importance of concomitant infections, the complex issues associated with developing immunity after immunization in the presence of co-infecting organisms generally have not been adequately addressed. To develop more efficacious therapeutic and vaccination strategies, it is imperative to dissect whether effective protective immune responses can be generated against deadly pathogens in individuals co-infected with multiple organisms. In particular, it is uncertain whether a malaria infection will alter the effectiveness of new candidate vaccines to protect against a tuberculous challenge. Given the documented immunosuppressive capacity of the malaria parasite, the potential inhibitory impact of malaria infections on the protective immunity induced by new TB vaccines is a significant concern. Although concurrent helminth or HIV infections have been shown to suppress BCG-induced anti-tuberculosis protective responses, the effect of malaria co-infections on the protective efficacy of vaccines designed to protect against tuberculosis has not been thoroughly investigated [18][19][20]. In this study, we examined the impact of malaria co-infections on the capacity of BCG and new TB vaccines to protect against an aerogenic virulent M. tuberculosis challenge of mice. The P. yoelii 17XNL parasite was used as a source of malaria infection in mice. The effect of the malaria infection on the immunity induced by TB vaccines was assessed in vitro using flow cytometry and in vivo with a standard mouse model of pulmonary tuberculosis. Although the flow cytometric data suggest that specific vaccine-induced immune responses can be suppressed by acute malaria infections, no overall reduction in pulmonary protection against TB was detected in vaccinated co-infected mice.

Materials and Methods

Animals

C57BL/6 female mice that were 6-8 weeks of age were obtained from the Jackson Laboratories (Bar Harbor, Maine). All mice used in this study were maintained under appropriate conditions at the Center for Biologics Evaluation and Research, Bethesda, MD.
This study was done in accordance with the guidelines for the care and use of laboratory animals specified by the National Institutes of Health. This protocol was approved by the Institutional Animal Care and Use Committee of the Center for Biologics Evaluation and Research under Animal Study Protocol 1993-09.

Vaccines

The BCG Pasteur vaccine preparation was derived from the mycobacterial culture collection of the Trudeau Institute. The E6-85B protein is an ESAT6-antigen 85B M. tuberculosis fusion protein which was purified by nickel affinity chromatography after cloning and expressing the ESAT6-antigen 85B fusion gene in the pET23b vector system (Novagen, San Diego, CA). The protein-adjuvant formulation was prepared by mixing the fusion protein (50 µg/ml) with dimethyldioctadecylammonium bromide (DDA; 150 µg/ml; Kodak) and monophosphoryl lipid (MPL; 250 µg/ml; Avanti Polar Lipids, Alabaster, AL). The MVA-5TB vaccine was generated by cloning five M. tuberculosis genes (antigen 85A, antigen 85B, ESAT6, Mtb39 and HSP65) as well as the interleukin-15 (IL-15) gene into a modified vaccinia virus Ankara (MVA) vector [21]. The double deletion mutant strain (ΔsecA2-ΔlysA) of the H37Rv strain of M. tuberculosis was constructed using specialized transduction to disrupt the chromosomal copy of the lysA gene of an unmarked ΔsecA2 clone, as described previously [22].

Immunizations

Five female C57BL/6 mice per group were used in the immunization studies. For the live BCG vaccine, 10⁶ CFU was given once subcutaneously. Five micrograms of the E6-85 protein in the DDA (15 µg)-MPL (25 µg) adjuvant was administered three times, 2 weeks apart. For the attenuated strain/protein mixture vaccine, 10⁶ CFU of the ΔsecA2ΔlysA live attenuated M. tuberculosis strain was mixed with the E6-85/DDA adjuvant formulation and administered three times, 2 weeks apart. For the prime-boost experiments, one month after the three priming vaccinations with the E6-85 vaccine preparation, two doses of 5×10⁷ PFU of the MVA/IL15/5TB construct were given subcutaneously 1 month apart.

Plasmodium yoelii NL infections

Frozen stocks of P. yoelii 17XNL-infected erythrocytes were thawed and used to intraperitoneally (ip) infect three donor C57BL/6 mice. Percent parasitemias were then monitored every other day using blood smears. When ~10 to 20% parasitemias were detected, blood was collected by cardiac puncture, diluted in PBS and used to infect experimental animals with 1×10⁶ P. yoelii 17XNL parasites in 200 µl of PBS by the ip route. In these studies, five to fifteen C57BL/6 mice per group were used.

Evaluation of vaccine-induced protection using a mouse model of pulmonary tuberculosis

For the vaccination/challenge experiments, five mice were evaluated for each group. At 2, 6, or 10 weeks following the P. yoelii infections, vaccinated and control mice were aerogenically challenged with M. tuberculosis Erdman suspended in PBS at a concentration known to deliver 100-200 CFU to the lungs over a 30-min exposure time in a Middlebrook chamber (Glas-Col, Terre Haute, IN). To assess the level of pulmonary exposure during the aerosol challenge, the number of CFU in the lungs was measured at 4 h after the M. tuberculosis infection. To determine the extent of pulmonary bacterial growth, the mice were sacrificed at 4 weeks post-challenge. The lungs were then removed aseptically and homogenized separately in PBS using a Seward Stomacher 80 blender (Tekmar, Cincinnati, OH).
The lung homogenates were diluted serially in 0.4% PBS-Tween 80, and 50-µl aliquots were placed on Middlebrook 7H11 agar (Difco) plates supplemented with 10% OADC enrichment medium (Becton Dickinson, Sparks, MD), 2 µg/ml 2-thiophenecarboxylic acid hydrazide (TCH) (Sigma), 10 µg/ml ampicillin, and 50 µg/ml cycloheximide (Sigma). The addition of TCH to the agar plates inhibits the growth of BCG but not M. tuberculosis. All plates were incubated at 37°C for 14 to 17 days in sealed plastic bags, and the colonies were counted to determine the organ bacterial burdens.

Assessment of lung inflammation

To evaluate the level of inflammation in the lungs of mice infected with M. tuberculosis, lung sections stained with hematoxylin and eosin (H&E) were photographed using a Nikon Optishot 2 microscope fitted with a camera connected to a computer. Spot Advanced software was used to save the computer images. The Image Pro Plus program (Media Cybernetics, Silver Spring, MD) was utilized to objectively assess the level of inflammation present in each image. In these images, the inflamed areas stained a more intense purple than the non-inflamed areas. For the analyses, colors were assigned as follows: red to represent the inflamed areas, green to represent non-inflamed areas, and yellow to represent the background. After the color assignments were established, the computer software identified inflamed and non-inflamed sections on each slide. The percentage of the lung sections staining red, green, or yellow was then determined by the computer software. To quantitate the percent area inflamed, we determined the mean percent red area from five lung sections of each of the different groups.

Flow cytometry

Five BCG vaccinated and control mice (3 mice per group) were used to determine the frequency of CD4 MFT cells at each time point post-vaccination. Lung cells were isolated by homogenizing the lung tissue in a stomacher bag using the end of a 20 cc syringe in PBS containing 2% FBS (PBS-FBS). The tissue was then incubated in PBS-FBS containing 4 mg/ml collagenase (final concentration) at 37°C for one hour. Afterward, the lung tissue was removed from the cells by placing the suspension in a Filtra-Bag (Labplas, Quebec, Canada). The resulting single cell suspension was centrifuged to pellet the cells and treated with ACK lysing buffer as described above. After washing, the cells were passed through a 70 µm cell strainer, pelleted and counted. After washing the lung cells with an equal volume of media, the cells were resuspended in cDMEM-FBS, counted and added to wells of a 24-well plate at a density of 2.5×10⁶ cells per well in 1.0 ml cDMEM-FBS. For measurement of antigen-specific responses, BCG Pasteur (or BCG+PPD) was added to the wells at a multiplicity of infection (MOI) of 0.5 bacilli per cell. Wells which contained only lung cells served as unstimulated controls. Infections were allowed to proceed overnight, followed by the addition of GolgiPlug (BD Biosciences, San Jose, CA) (1 µl per well). After 4-5 hours of incubation, the unbound cells were removed from the wells and transferred to 12×75 mm tubes, washed with PBS and resuspended in ~50 µl PBS. Live-Dead stain (Invitrogen, Carlsbad, CA) (10 µl of a 1:100 dilution) was added to each tube and incubated for 30 min at room temperature to allow for gating on viable cells.
After washing the cells with PBS-FBS, antibody against CD16/CD32 (FcγRIII/II receptor, clone 2.4G2) (Fc block) was added in a volume of ~50 µl and incubated at 4°C for 15 min. The cells were then stained for 30 min at 4°C by adding antibodies against the CD4 (rat anti-mouse CD4 Alexa Fluor 700 [AF-700] Ab, clone RM4-5) and CD8 (rat anti-mouse CD8 peridinin chlorophyll protein complex [PerCP] Ab, clone 53-6.7) proteins at 0.1 and 0.2 µg per tube, respectively. Following the incubation, the cells were washed twice with PBS and then fixed for 30 min at 4°C with 2% paraformaldehyde in PBS. After fixing, the cells were pelleted, washed twice with PBS-FBS and stored at 4°C. Fixed cells were washed twice with perm-wash buffer (1% FBS, 0.01 M HEPES, 0.1% saponin in PBS), followed by intracellular staining using the following antibodies at 0.2 µg per tube: rat anti-mouse IFN-γ allophycocyanin [APC] Ab, clone XMG1.2; rat anti-mouse TNF-α fluorescein isothiocyanate [FITC] Ab, clone MP6-XT22; rat anti-mouse IL-2 phycoerythrin [PE] Ab, clone JES6-5H4. The cells were incubated at 4°C for 30 min, washed twice with perm-wash buffer and then twice with PBS-FBS. All antibodies were obtained from BD Biosciences. The cells were analyzed using a LSRII flow cytometer (Becton Dickinson) and FlowJo software (Tree Star Inc., Ashland, Oregon). We acquired 250,000 events per sample and then, using FlowJo, gated on live, single cell lymphocytes. To determine the frequency of different populations of MFT cells, we gated on CD4 or CD8 T cells staining positive for TNF-α and IFN-γ, TNF-α and IL-2, IFN-γ and IL-2, or all three cytokines.

Median fluorescence intensity (MFI) assessments

The MFI for IFN-γ or TNF-α from monofunctional and multifunctional CD4 and CD8 T cells was evaluated using the FlowJo software. For this study, the MFI is the average fluorescence intensity value for individual T cells secreting only IFN-γ or TNF-α, secreting both IFN-γ and TNF-α, or secreting IFN-γ, TNF-α and IL-2. The data are presented as the mean ± the standard error of the individual MFI assessments for 5 groups of mice.

Statistical analyses

The protection, lung inflammation, and flow cytometry data were evaluated using t test analysis of the GraphPad Prism, version 5, program.

The impact of malaria co-infections on the effectiveness of BCG vaccine in a mouse model of pulmonary tuberculosis

To assess the impact of a malaria co-infection on the effectiveness of vaccines designed to protect against M. tuberculosis, a murine co-infection immunization model was developed. Initially, mice were vaccinated subcutaneously with 10⁶ CFU of BCG Pasteur. Two months after the BCG immunization, the mice were given 10⁶ P. yoelii blood stage parasites by the intraperitoneal route. At an appropriate time period following the malaria infection (2-10 weeks), the mice were aerogenically challenged with 100-200 CFU of virulent M. tuberculosis Erdman. Four weeks later, the TB infected mice were sacrificed and pulmonary mycobacterial burdens and lung pathologies were determined. For these studies, since extensive splenomegaly was seen after malaria infections and the lung is the primary site of M. tuberculosis infections, we concentrated on evaluating the pulmonary impact of the co-infection on the effectiveness of TB vaccines. Since our aerosol TB challenge model had been previously established, our initial efforts for these studies focused on characterizing the kinetics of the P. yoelii infection.
In the representative data shown in Figure 1, malaria parasitemia in naïve mice peaked at 20.5% on day 14 and the infection was cleared by day 20. Consistent with previous studies, moderate protection against malaria parasitemia was seen in BCG vaccinated mice [23,24]. In this experiment, the level of parasitemia in BCG-vaccinated mice was reduced by 41% relative to naïve mice at day 12 and by 70% at day 14, but parasite clearance was again seen by day 20. The initial vaccination/challenge studies evaluated the temporal effect of P. yoelii infections on the effectiveness of BCG vaccine to protect against an aerogenic M. tuberculosis challenge. Mice were infected with P. yoelii two months after BCG immunization and then were challenged with M. tuberculosis at either two, six or ten weeks following the malaria infection. When mice were challenged with M. tuberculosis at two weeks after the malaria infection (at peak parasitemia levels), no significant impact was detected on the capacity of naïve or BCG vaccinated mice to control the acute tuberculosis lung infections at 4 weeks post-challenge (Figure 2). In both malaria infected and non-infected BCG vaccinated mice, significant anti-tuberculosis protection (>0.95 log₁₀ compared to naïve controls) was seen at 4 weeks after the M. tuberculosis challenge. Similarly, at 6 and 10 weeks following the malaria infection, the anti-tuberculosis protective responses induced in the BCG vaccinated mice were not statistically different from the protection evoked in P. yoelii infected BCG vaccinated animals. Additionally, the pulmonary mycobacterial burdens were also not different in the naïve and P. yoelii infected naïve groups at the 2, 6 and 10 week post malaria infection time points (data not shown). For example, the lung CFU levels that were detected when nonvaccinated mice were challenged with M. tuberculosis at the peak of malaria parasitemia (6.36±0.12) were statistically equivalent to lung burdens seen in naïve mice infected with M. tuberculosis (6.26±0.18). Overall, the presence of malaria parasites did not exacerbate the tuberculous pulmonary infection when the M. tuberculosis challenge occurred either near the peak of parasitemia or after parasite clearance. Interestingly, nearly identical results were obtained in co-infection studies of mice that had been vaccinated with BCG eight months before the P. yoelii blood stage infection and then were challenged with M. tuberculosis at the peak of parasitemia. At 8 months after BCG immunization, statistically equivalent lung CFU levels were detected in the BCG (5.59±0.11 log₁₀ CFU) and BCG/P. yoelii (5.58±0.22) groups as well as the naïve (6.40±0.30) and naïve/P. yoelii (6.11±0.15) animals. To support these findings, lung pathology was analyzed using H&E sections from mice that were challenged with M. tuberculosis two weeks after a P. yoelii infection. Overall, the P. yoelii infections did not impact lung pathology observed after an aerogenic M. tuberculosis infection. At 4 weeks post challenge, substantially less inflammation was observed in lung sections of BCG vaccinated and BCG/P. yoelii infected animals relative to naïve controls (Figure S1). The granulomatous-type structures were more condensed, mature, and lymphocyte-rich in the lungs of both BCG vaccinated groups compared to the larger, more immature granulomas seen in the naïve and naïve/P. yoelii mice at this time point.
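The log₁₀ CFU values compared above follow from the standard back-calculation from plate counts; a minimal sketch of that arithmetic (the 50 µl plated volume matches the protocol described earlier, the remaining numbers are hypothetical):

```python
import math

def log10_cfu(colonies, dilution_factor, plated_volume_ml, homogenate_volume_ml):
    """Back-calculate log10 CFU per organ from a plate count.

    colonies: colonies counted on the plate
    dilution_factor: total serial dilution of the homogenate (e.g. 1e3)
    plated_volume_ml: volume plated (50 ul = 0.05 ml in this protocol)
    homogenate_volume_ml: total volume of the lung homogenate (hypothetical)
    """
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return math.log10(cfu_per_ml * homogenate_volume_ml)

# Hypothetical example: 115 colonies from a 10^3 dilution, 50 ul plated,
# lungs homogenized in 5 ml -> about 7.06 log10 CFU per lung
print(round(log10_cfu(115, 1e3, 0.05, 5.0), 2))
```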
To quantitate the pathology results, the lung sections were assessed by computerized scanning using the Image Pro analysis system as described earlier [25]. With this imaging system, the proportion of lung sections that is inflamed can be quantitatively defined. This pathology analysis showed no statistical differences in the inflammatory response values seen in lung sections taken from the BCG (20.2±6.4) and BCG/P. yoelii infected mice (20.9±9.0). Similarly, the lung pathology values were not different in naïve (38.6±10.6) and naïve/P. yoelii infected (34.9±3.9) animals.

The impact of malaria-M. tuberculosis co-infections on the protective immunity induced by novel TB vaccines

To assess the impact of malaria infection on the effectiveness of novel TB vaccine candidates, C57BL/6 mice were vaccinated with three unique immunizing preparations using different vaccination strategies. In an initial experiment with a novel vaccine, mice were immunized with the E6-85 TB fusion protein (ESAT6-antigen 85B) suspended in DDA/MPL adjuvant [26,27]. As controls, other groups of mice were immunized with BCG. At two weeks after a P. yoelii infection, mice were aerogenically challenged with a low dose of M. tuberculosis. Four weeks later, pulmonary bacterial burdens were evaluated. Again, no significant differences in lung CFU were seen between the malaria-infected and the non-infected control groups. As seen in Figure 3, the P. yoelii infection clearly did not increase pulmonary M. tuberculosis CFU levels in nonvaccinated mice. Moreover, the levels of protection detected in the vaccinated animals were consistent with previous results and were unaffected by the P. yoelii infection (1.4 log₁₀ for the BCG and BCG/P. yoelii groups; 1.2 log₁₀ for the E6-85 and E6-85/P. yoelii mice) [26]. In a second study of novel TB vaccination strategies, mice were immunized with either an attenuated M. tuberculosis ΔsecA2ΔlysA vaccine strain mixed with the E6-85/DDA formulation or a prime-boost procedure that involved priming with the E6-85 preparation [28,29]. As shown in Figure 4, when the M. tuberculosis challenge occurred during elevated levels of parasitemia (2 weeks), the extent of anti-tuberculosis protection induced by the attenuated M. tuberculosis vaccine/protein mixture and the prime-boost immunization procedure was not impacted by the malaria infection. About 1.3 log₁₀ protection was seen in both malaria infected and non-infected prime-boost groups, and 1.6-1.7 log₁₀ protective responses were measured for both the ΔsecA2ΔlysA M. tuberculosis attenuated vaccine mixture and the ΔsecA2ΔlysA vaccine/P. yoelii infected animals. Furthermore, the malaria infection again did not exacerbate the tuberculous disease, because no significant differences in pulmonary mycobacterial burdens were detected between the malaria infected non-vaccinated and naïve control groups at 4 weeks post challenge.
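The percent-inflamed values above come from the color classification described in Materials and Methods; a minimal sketch of such a pixel classification, where the thresholds are hypothetical stand-ins for the interactive color assignments made in Image Pro Plus:

```python
import numpy as np

def percent_inflamed(rgb):
    """Classify pixels of an H&E lung section (H x W x 3 uint8 array) and
    return the percentage of tissue area scored as inflamed.

    Thresholds below are illustrative placeholders, not the values used in
    the original Image Pro Plus analysis."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    background = (r + g + b) > 700                 # near-white: no tissue
    intensity = (r + g + b) / 3.0
    inflamed = (~background) & (b > r) & (intensity < 120)  # dense purple
    tissue = ~background
    return 100.0 * inflamed.sum() / max(tissue.sum(), 1)

# Hypothetical usage: percent_inflamed(imageio.v3.imread("lung_section.png"))
```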
Flow cytometric analysis of vaccine-induced immune responses in BCG vaccinated and malaria infected BCG vaccinated mice

Recent studies have shown that malaria infections can have immunomodulatory effects on host immune responses; in particular, malaria can inhibit antigen-specific T cell responses [5][6][7][8][9]. To assess whether pulmonary mycobacterial-specific T cell responses were influenced by the malaria infection in our model, multi-parameter flow cytometric analysis was done on lung cells recovered from experimental animals. For these studies, mice were infected with P. yoelii two months after BCG vaccination, and then 2, 7, or 10 weeks later the mice were sacrificed and the lung cells were isolated and stimulated with BCG (a surrogate for the M. tuberculosis challenge). Following intracellular cytokine staining, the cells were analyzed by flow cytometry. Since the induction of multifunctional T cells (MFT) by immunization has been shown to correlate with protection against Leishmania and M. tuberculosis in animal models, the cells were evaluated for the concurrent expression of IFN-γ, TNF-α, and/or IL-2 [30][31][32]. As seen in Figure 5A and Figure S2, the levels of CD4 cells producing IFN-γ, IFN-γ/TNF-α, or IFN-γ/TNF-α/IL-2 all exceeded 1% in BCG vaccinated mice at week 2 of the P. yoelii infection (10 weeks post-BCG immunization) but declined at weeks 7 and 10 post-infection (15 and 18 weeks post-BCG vaccination). In contrast, the frequency of IL-2 and TNF-α/IL-2 producing cells in BCG vaccinated animals significantly increased during the 10 week observation period. In these studies, the frequencies of cells from naïve controls expressing multiple cytokines were generally less than 0.01% (data not shown). Although lower overall cell frequencies were seen, a similar pattern was observed for CD8 T cells taken from BCG vaccinated animals that were not infected with malaria (Figure 5B). The relative proportions of cells producing IFN-γ, IFN-γ/TNF-α and IFN-γ/TNF-α/IL-2 were elevated at 10 weeks after the BCG immunization, while the frequencies of these cells in the lung declined 5-7 weeks later. Consistent with the CD4 data, the frequency of IL-2 and TNF-α/IL-2 producing CD8 T cells increased at the later time points of the experiment, but the magnitude was higher than that seen for CD4 cells. Surprisingly, the malaria infection did not generally impact the frequencies of vaccine-induced CD4 and CD8 cytokine producing cells at the later stages of this study. At 7 and 10 weeks after the malaria challenge (15 and 17 weeks post-BCG vaccination), malaria-related alterations in the cellular frequencies were not observed in BCG vaccinated animals. However, a negative impact was seen on the frequencies of cells synthesizing IFN-γ/TNF-α/IL-2 in BCG vaccinated mice when malaria parasitemias were substantially elevated (2 weeks post P. yoelii infection). For the CD4 T cells, the frequencies of triple positive cells were significantly decreased (BCG = 1.22%, BCG/P. yoelii = 0.29%). Interestingly, dramatic declines in the triple positive CD8 T cells were also seen at the peak of P. yoelii infection (BCG = 0.355%, BCG/P. yoelii = 0.002%). An important characteristic of MFT cells is their capacity to express substantially higher levels of cytokines than monofunctional cells. To further evaluate the effect of malaria infections on the immune responses induced in BCG vaccinated mice, the level of cytokine production in pulmonary CD4 and CD8 T cells was assessed. For this study, the extent of cytokine expression was determined by evaluating the median fluorescence intensities (MFI) of the experimental lung cells. In contrast to the cellular frequencies of pulmonary cells from BCG vaccinated mice, IFN-γ MFI values for CD4 MFT cells remained elevated throughout the study. As expected, the levels of IFN-γ expressed in IFN-γ/TNF-α and triple positive CD4 MFT cells from BCG vaccinated mice were increased 4-11 fold (relative to monofunctional IFN-γ producing CD4 T cells) during the entire study.
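The frequencies and MFI values reported in this section reduce to simple array operations once events are gated; a minimal sketch on synthetic data (the event count of 250,000 matches the acquisition described in Materials and Methods, but the distributions and the positivity threshold are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 250_000  # events acquired per sample, as in the protocol above

# Synthetic per-event fluorescence intensities for gated CD4 T cells
ifng = rng.lognormal(mean=2.0, sigma=1.0, size=n)
tnfa = rng.lognormal(mean=2.0, sigma=1.0, size=n)
il2 = rng.lognormal(mean=2.0, sigma=1.0, size=n)

# Hypothetical positivity threshold (in practice set from unstimulated controls)
thr = 50.0
triple_pos = (ifng > thr) & (tnfa > thr) & (il2 > thr)

freq = 100.0 * triple_pos.sum() / n                       # % triple-positive
mfi = np.median(ifng[triple_pos]) if triple_pos.any() else float("nan")
print("triple-positive CD4: %.4f %%, IFN-gamma MFI: %.1f" % (freq, mfi))
```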
Although the IFN-γ MFI values for CD4 MFT cells from BCG immunized mice were not affected at 7 and 10 weeks after the malaria challenge, the extent of IFN-γ expression in triple-positive CD4 T cells was reduced by 60% in the malaria infected, BCG vaccinated animals compared to the BCG vaccinated controls at 2 weeks post P. yoelii challenge (Figure 6). Interestingly, significantly suppressed MFI values were seen in CD8 MFT cells recovered from malaria infected, BCG vaccinated mice at the peak of the P. yoelii infection. In these mice, the IFN-γ MFI values were decreased by 79% (IFN-γ/TNF-α CD8 cells) and 98% (triple positive CD8 cells) by the malaria infection. Similarly, elevated TNF-α MFI values for CD4 T cells of BCG vaccinated mice were detected in IFN-γ/TNF-α double positive (increased 5 fold compared to monofunctional cells) and triple positive MFT cells (22 fold increase) at the two week time point (Figure 7). In contrast, for the malaria infected, BCG vaccinated mice, the TNF-α MFI values of CD4 T cells were strikingly reduced in IFN-γ/TNF-α (70% reduction) and triple positive cells (89%) relative to controls not infected with P. yoelii. While significant 2-3-fold increases in TNF-α MFI values compared to controls were seen in CD4 MFT cells at the later time points, the malaria infections did not impact the lower overall MFI values. For CD8 cells, substantially elevated TNF-α MFI values relative to monofunctional cells were observed for the IFN-γ/TNF-α (88×) and the triple positive cells (44×) at 2 weeks after the malaria infection. However, for the malaria infected, BCG vaccinated mice, dramatic declines in CD8 TNF-α MFI values of >99% were detected in IFN-γ/TNF-α and triple positive CD8 MFT cells at two weeks after the P. yoelii infection. At 7 and 10 weeks post-infection, consistently low TNF-α MFI values were seen in all CD8 T cell subsets.

Discussion

An understated concern about the deployment of new TB vaccines is the unknown impact that infections with other pathogens prevalent in the area may have on TB vaccine efficacy. In many areas endemic for tuberculosis, co-infections with unrelated pathogens are common, and these co-infecting agents may modulate vaccine-induced immune responses. Earlier studies have shown that concurrent infections can decrease the anti-tuberculosis protective responses induced by immunization with BCG. In animal models, helminth infections have been shown to reduce the efficacy of BCG vaccine to protect against virulent M. tuberculosis [18]. Exposure to non-tuberculous mycobacteria can also inhibit the induction of protective immunity to tuberculosis by BCG immunization [33,34]. In humans, HIV infection can severely impair the protective immune responses elicited by vaccination with BCG [20]. With the considerable geographic overlap in areas endemic for malaria and tuberculosis and the recent reports of co-infection with these organisms, it is important to assess the impact of malaria infections on the effectiveness of vaccines designed to prevent tuberculosis in pre-clinical animal models. Many different animal species are susceptible to tuberculous infections, and artificially infected mice, guinea pigs, rabbits, and non-human primates (NHP) have been used as models of TB [35]. While mice (like humans) are relatively resistant to TB, can be infected by the aerosol route, and have been successfully used to elucidate host-pathogen interactions, TB infections of guinea pigs and rabbits yield more relevant lung pathology.
Although the NHP model has been valuable for studying TB latency as well as host immune responses, the cost, BSL-3 space requirements, and the potential for horizontal transmission of disease have limited its usefulness. Given that we had previously established standardized and affordable murine models of TB and malaria, we decided to develop a mouse TB-malaria co-infection model.

[Figure 4. For the ΔsecA2ΔlysA mixture vaccine, C57BL/6 mice were vaccinated three times, two weeks apart, and then were infected with P. yoelii 4 months after the final vaccination. For the prime/boost protocol, mice were vaccinated with the E6-85/adjuvant formulation three times, two weeks apart, and one month later boosted with the TBMVA/IL-15 vaccine twice, one month apart. The mice that had been primed and boosted were infected with P. yoelii two months after the final booster vaccination. All vaccinated mice were aerogenically challenged with virulent M. tuberculosis at 2 weeks after a P. yoelii infection, and pulmonary mycobacterial CFU were determined at 4 weeks post-challenge. The asterisks show significant CFU differences (p<0.05) relative to naïve controls. doi:10.1371/journal.pone.0028164.g004]

In this study, we showed using this mouse model that P. yoelii malaria co-infections did not have a significant impact on the capacity of four different M. tuberculosis vaccine formulations to control pulmonary growth of an acute virulent M. tuberculosis infection. In repeated experiments, we demonstrated that the pulmonary protective responses induced by vaccination with either BCG, the E6-Ag85 TB fusion protein formulated in adjuvant, a ΔsecA2ΔlysA M. tuberculosis attenuated strain/protein mixture or a prime-boost strategy involving the E6-85 antigen preparation and the MVA/IL15/5Mtb vaccine were not statistically different in immunized mice that had been infected with P. yoelii relative to uninfected vaccinated controls. These findings are consistent with results from a malaria chemoprophylaxis trial of Nigerian children, where the immunogenicity of BCG vaccine was not affected by the presence of malaria parasitemia [6]. Collectively, these data suggest that a primary infection with malaria parasites will likely not significantly impact the capacity of new TB vaccines to control acute M. tuberculosis infections in humans. Although malaria infections in a murine model did not reduce the overall protection in the lung induced by vaccination against TB, BCG vaccine-induced pulmonary immune responses were impacted by elevated malaria parasitemia levels. Published reports in humans and mice have shown that CD4 and CD8 T cell responses against malaria or non-malaria antigens can be inhibited by malaria infections [9,[36][37][38][39]. In our study, malaria infections were shown to significantly decrease the frequency of CD4 and CD8 triple positive MFT cells expressing IFN-γ, TNF-α, and IL-2 in lung cells of BCG vaccinated mice when high levels of parasitemia were present. Moreover, substantial reductions in cytokine expression (as measured by the median fluorescence intensity) were seen in lung MFT cells from P. yoelii infected BCG vaccinated mice relative to uninfected BCG immunized animals. For example, the IFN-γ MFI values decreased by 60% in CD4 triple positive T cells (compared to controls), while a dramatic 98% reduction in MFI values was detected for the CD8 triple positive MFT cells recovered from the lungs of P. yoelii infected and BCG vaccinated mice.
Additionally, the TNF-α MFI values for CD8 MFT cells were dramatically decreased by 99%. While a substantial suppression of specific T cell responses at 2 weeks after the malaria infection was clearly seen, the mechanisms by which the P. yoelii infections reduce the activity of these specific T cell subsets are uncertain. During acute blood stage malaria infections, regulatory T cells producing IL-10 and dendritic cells secreting TGF-β and prostaglandin E2 have been identified [9,40]. IL-10, TGF-β, and PGE2 have been shown to down-regulate general pro-inflammatory T cell responses, especially CD8 T cell responses. Whether these immune mediators specifically target vaccine-induced MFT cells is currently unclear. Importantly, the activity of MFT cells was not substantially impacted by the P. yoelii infection at 7 and 10 weeks after the malaria challenge. At these time points, when the malaria parasitemia in the blood was undetectable, both the cellular frequencies and the MFI values of CD4 and CD8 T cell subsets of the BCG vaccinated and the BCG vaccinated, malaria infected mice were not significantly different. Our data are consistent with the results of an earlier study which showed a substantial recovery of CD8 T cell function at one month after a P. yoelii infection [9]. Taken together, these data suggest that the suppression of BCG-induced T cell function by a P. yoelii infection is short-lived and that the malaria-induced suppressive activity wanes after parasite clearance. It should be noted that a temporal decline in the frequency of IFN-γ, IFN-γ/TNF-α, and triple positive pulmonary T cells was generally observed in BCG vaccinated mice with or without concurrent P. yoelii infections. These reduced T cell frequencies seen at 7 and 10 weeks likely resulted from the declining numbers of BCG organisms in the lung at 4-5 months after the BCG immunizations [41]. The reduction in CD4 and CD8 triple positive MFT responses seen in malaria infected animals at 2 weeks after the P. yoelii infection was surprising because of the previously reported correlation between vaccine-induced triple-positive cells and protective immunity. In animal models of Leishmania and M. tuberculosis, vaccine-induced immune responses from triple positive MFT cells have correlated with in vivo protection against an infectious challenge [30][31][32]. While CD8 T cells are probably not critical for controlling an acute tuberculous infection in mice, CD4 T cells are clearly essential for limiting the proliferation of the pathogen in the lung after an aerogenic M. tuberculosis challenge [42,43]. If there were a linear correlation between the early induction of triple positive CD4 MFT cells by vaccines and anti-tuberculosis protection, then the decreased frequencies and intensities of CD4 MFT triple positive cells observed in the flow cytometric studies of BCG vaccinated, malaria infected mice at 2 weeks post-infection should have resulted in decreased protection. The apparent lack of correlation between the early levels of vaccine-induced triple positive responses and anti-tuberculosis protection could have been caused by the absence of malaria-induced suppression at the later time points in the study when pulmonary bacterial burdens were evaluated. Alternatively, the association between vaccine-induced MFT cell responses and anti-microbe protection may be more complex than has been anticipated.
Using an SIV macaque model, Sui et al. recently reported that the levels of antigen-specific CD8 MFT cells correlated with protection, but the correlation was non-linear and involved a threshold-like effect [44]. In studies of M. tuberculosis vaccines, our group recently showed that the anti-tuberculosis protective responses evoked by immunization were also related to the induction of double-positive IFN-γ/TNF-α expressing CD4 T cells [32]. In the current study, the activity of double-positive CD4 MFT cells (which were not suppressed by the malaria infection) could have partially compensated for the reduction of triple-positive CD4 MFT cells seen at two weeks after the malaria challenge. Clearly, more studies, including well designed longitudinal experiments, are needed to delineate the role of vaccine-induced MFT cells in protecting against tuberculous disease. Improved strategies to efficiently purify MFT cells would facilitate studies focusing on the function of these vaccine-induced MFT cells. An important concern relevant to malaria and tuberculosis co-infections is whether the cellular immunosuppression often associated with malaria parasitemia could result in increased cases of clinically detectable tuberculosis. In this study, primary P. yoelii infections did not exacerbate acute M. tuberculosis lung disease in non-vaccinated mice. In repeated experiments, no statistical differences were seen in pulmonary mycobacterial burdens or lung pathology at one month post-challenge in infected mice relative to naïve controls. In earlier studies, Scott et al. and Hawkes et al. had reported that malaria exacerbates mycobacterial disease in acute and latent infection models [45,46]. However, in both studies only modest increases in organ CFU levels and/or survival rates were detected. To further examine the impact of P. yoelii parasitemia on tuberculous disease, we are currently evaluating whether malaria infections increase the reactivation rate of mice with low level latent-like TB infections. The results of this study could be helpful for delineating whether malaria infections can contribute to the reactivation of latent tuberculosis. Overall, our studies in the mouse model of pulmonary tuberculosis suggest that primary malaria co-infections should not significantly impact the efficacy of novel immunization strategies against tuberculosis. However, to confirm these findings, well-designed studies are needed in humans to better understand the complex interactions between these co-infecting organisms. The results of these studies should facilitate the design of more effective immunization and therapeutic procedures against tuberculosis for use in regions with high rates of concomitant infections.

Figure S1. H&E stained lung sections from BCG vaccinated and malaria infected mice after an M. tuberculosis challenge by the aerosol route. Sections were obtained from naïve, BCG vaccinated, non-immunized malaria infected and BCG vaccinated malaria infected mice at 4 weeks after an aerogenic challenge with M. tuberculosis and analyzed by computer scanning using an Image Pro analysis system. This analysis showed no statistical differences in the inflammatory responses for BCG (20.2±6.4) and BCG/P. yoelii infected mice (20.9±9.32). Similarly, significant differences were not seen between the lung pathology values for naïve (38.6±10.6) and naïve/P. yoelii infected (34.9±3.9) animals.
(TIF)

Figure S2. The frequency of CD4 (A) and CD8 (B) monofunctional cells recovered from the lungs of BCG vaccinated (black bars) and BCG vaccinated, malaria infected (grey bars) mice at 2, 7, and 10 weeks following the P. yoelii challenge. Lung cells were removed and pooled from 3 mice per group, stimulated overnight with BCG, and analyzed by multi-parameter flow cytometry to determine the frequency of cells producing either IFN-γ, TNF-α, or IL-2. The data are presented as the mean frequency ± SEM for 4 groups of mice. #, significant differences between the cellular frequencies of the BCG vaccinated and the BCG vaccinated, malaria infected groups.

(TIF)
8,867.6
2011-12-19T00:00:00.000
[ "Biology", "Medicine" ]
Polarization contrast optical diffraction tomography

We demonstrate large scale polarization contrast optical diffraction tomography (ODT). In the cross-polarized sample arm detection configuration we determine, from the amplitude of the optical wavefield, a relative measure of the birefringence projection. In the parallel-polarized sample arm detection configuration we image the conventional phase projection. For off-axis sample placement we observe for polarization contrast ODT, similar to phase contrast ODT, a strongly reduced noise contribution. In the limit of small birefringence phase shift δ we demonstrate tomographic reconstruction of polarization contrast images into a full 3D image of an optically cleared zebrafish. The polarization contrast ODT reconstruction shows muscular zebrafish tissue, which cannot be visualized in conventional phase contrast ODT. Polarization contrast ODT images of the zebrafish show a much higher signal to noise ratio (SNR) than the corresponding phase contrast images, SNR = 73 and SNR = 15, respectively.

Introduction

3D imaging in the life sciences is of great importance for studying fundamental biology and performing (pre-)clinical studies. For these studies, label-free optical imaging methods play an important role. There are various label-free contrast mechanisms, such as scattering, absorption, or refractive index (RI). However, in some cases these contrast mechanisms are not sufficiently sensitive to observe the relevant information; hence, there is a need for imaging with alternative types of intrinsic contrast. Optical diffraction tomography (ODT) has been shown to be an effective tool for 3D imaging of RI contrast on the scale of cells [1] or small organisms [2]. More recently, phase contrast ODT was applied on a millimeter scale, where different structural features of a zebrafish larva and a cryo-injured heart could be distinguished in 3D using RI contrast [3]. However, some types of tissue are not visible in conventional phase contrast ODT. An alternative form of contrast is given by the polarization change of the optical wavefield caused by tissue birefringence. Birefringent samples are not described by a single scalar RI value per voxel that contributes to the optical path length; instead, the RI value experienced by the wavefield depends on its polarization state. Polarization contrast has been widely applied in microscopy [4,5], digital holography [6], optical coherence tomography [7], and optical projection tomography [8]. Birefringence provides a high-contrast label-free mechanism for imaging fibrous structures such as muscle (collagen) or brain (myelin) tissue. Muscle tissue has been imaged in 3D using polarization sensitive optical projection tomography (OPT), as an extension of brightfield OPT using a white light source [8]. However, with OPT phase information is lost and refractive index contrast cannot be determined. In this work we show that, in addition to phase contrast, polarization contrast is also compatible with large scale ODT and offers a significantly higher signal to noise ratio (SNR) than conventional phase contrast ODT. We determine under what conditions a birefringent sample can be properly reconstructed using conventional filtered backprojection (FBP). Furthermore, we show that off-axis sample placement, which has been used for noise reduction in conventional ODT [9], also offers significant noise reduction for polarization ODT, and that the same numerical refocusing steps to correct for defocus can be applied.
Finally, we demonstrate 3D multi-contrast imaging of a zebrafish larva using two orthogonal components of the transmitted wavefield, from which a conventional phase contrast and a polarization contrast ODT image are reconstructed.

Polarization contrast imaging

In conventional ODT, refractive index differences in the sample cause a change in the optical path length of the transmitted light wave. Assuming an isotropic medium, each voxel in the sample gives a fixed contribution to the optical path length of a ray traveling through it, regardless of its polarization. However, when a sample is birefringent, this contribution generally depends on the orientation of the polarization of the wave with respect to the medium. Here we use Jones calculus to calculate the light interactions. We assume that the birefringent tissue can locally be described as uniaxial, where the optical axis corresponds to the predominant fiber direction. The birefringent tissue is modeled as a wave retarder that introduces a relative phase shift δ along the fast axis with respect to the slow axis, and introduces a common phase shift (i.e. the average phase of the two components) for both polarization components. The relative phase shift δ between the two components is then defined as [10]

δ = kΔ cos²(α),    (1)

where α is the fiber inclination angle relative to the x-y plane of the polarizers, as indicated in Fig. 1(a-b), the wavenumber k is given by k = 2π/λ, and Δ is the optical path difference integrated over the sample. As indicated in Fig. 1(a-b), the angle ϕ indicates the angle of rotation of the optic axis of the uniaxial sample with respect to the x-axis, projected onto the x-y plane. The rotation angle of the polarizers is given by ρ, which is the angle of the cross/parallel polarizers to the x-axis. The birefringent object is assumed to rotate around the x-axis for tomographic measurement with angle β, as shown in Fig. 1(c). We define the tilt angle of the object with respect to the x-axis as γ, as shown in Fig. 1(a-b). During tomographic measurements, the tilt angle γ stays constant. The tomographic rotation causes α and ϕ to change for each projection according to

α = γ sin β,  ϕ = γ cos β.    (2)

We assume an incoming beam polarized along the x-axis that travels through the sample in the z-direction. Both the x and y components are extracted by placing an analyzer in the sample arm that can be rotated to align with the parallel-polarized x-axis or the cross-polarized y-axis. The complex wavefield of an incoming wave polarized along the x-axis after transmission through the birefringent medium is

U = e^{iφ̄} [(cos(δ/2) − i sin(δ/2) cos(2ϕ)) x̂ − i sin(δ/2) sin(2ϕ) ŷ],    (3)

with φ̄ defined as the average phase of the two polarization components.

Parallel-polarization output

The first component in Eq. (3) is the x-component of the transmitted field, with a polarization parallel to that of the input field. It can be extracted by placing a polarizer aligned along the x-axis after the sample. The x-component in Eq. (3) contains phase contributions of both the conventional phase contrast φ̄ and the birefringence contrast δ. The phase of this component is defined as the inverse tangent of the imaginary part divided by the real part,

φ_Ux = tan⁻¹(Im(U_x)/Re(U_x)).    (5)

The derivative of φ_Ux with respect to φ̄ is equal to unity, and thus the measured phase of the x-component is a linear function of the phase contrast projection φ̄. There is, however, also a contribution to the phase from the birefringence δ, which is in general non-linear. This can be seen by taking the derivative of Eq. (5) with respect to δ, which involves csc, the cosecant or the reciprocal of the sine function.
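The field components above follow from multiplying the input Jones vector by the retarder's Jones matrix; a small numerical sketch (the sign conventions are ours, since the original equation layout was lost in extraction) that verifies the cross-polarized amplitude against Eq. (8) for ρ = 0:

```python
import numpy as np

def retarder(delta, phi):
    """Jones matrix of a linear retarder with retardance delta and fast
    axis at angle phi to the x-axis (common phase factor omitted)."""
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    D = np.array([[np.exp(-1j * delta / 2), 0],
                  [0, np.exp(1j * delta / 2)]])
    return R @ D @ R.T

delta, phi = 0.3, np.deg2rad(25)
U = retarder(delta, phi) @ np.array([1.0, 0.0])   # x-polarized input

# Cross-polarized amplitude should equal sin(delta/2) * |sin(2*phi)|
print(abs(U[1]), np.sin(delta / 2) * abs(np.sin(2 * phi)))
```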
(5) can be expanded (in zeroth and first order) as $\varphi_{U_x} \approx \tan^{-1}(\tan\bar{\varphi}) - \frac{\delta}{2}\cos(2\rho - 2\varphi)$ (7). For small values of δ, the measured phase of the x-component will thus be dominated by the average phase $\bar{\varphi}$, where $\tan^{-1}(\tan\bar{\varphi})$ is the wrapped average phase. Cross-polarization output The vertical y-component is the second component of the field in Eq. (3) and is perpendicular to the input polarization. The amplitude of this component is given by $|U_y| = |\sin(2\rho - 2\varphi)|\,|\sin(\delta/2)|$ (8). Similar to what is done in polarimetry, it can be measured using crossed polarizers. The presence of birefringence causes a modulation of the amplitude of the wavefield, as δ appears in the y-component as $\sin(\delta/2)$ in the amplitude. This amplitude modulation is utilized to generate qualitative birefringence contrast projections in 2D. However, this is problematic for 3D tomographic reconstruction, as tomographic reconstruction algorithms usually assume a linear relation between contrast and projection. The projection function δ is thus not measured directly and must be retrieved. Taking the inverse sine of the modulation term we obtain $\frac{\delta}{2} = (-1)^m \sin^{-1}\!\left(\left|\sin\frac{\delta}{2}\right|\right) + m\pi$ (9), with m an integer. In Eq. (9) the absolute value in the inverse sine is taken, since the amplitude is the square root of the intensity and is thus always positive. The inverse sine changes the sign of the original δ/2 function for values $\pi/2 \le \delta/2 < \pi$ mod π, making the inverse sine of the signal not directly suitable as a linear input projection for FBP reconstruction. Moreover, to reconstruct for arbitrarily large δ, the signal needs to be unwrapped using phase unwrapping. However, from Eq. (9) it follows that in case the maximum value of δ in the projection does not exceed π, the signal can be directly retrieved by taking the inverse sine, and no further processing is necessary. Furthermore, if δ is small, the amplitude of the y-component of Eq. (3) can be approximated as a linear function of δ, since for small values of δ it holds that $|U_y| \approx \frac{\delta}{2}\left|\sin(2\rho - 2\varphi)\right|$ (10). To demonstrate the general approach of birefringence tomography, a polarization contrast calculation for the case of a uniaxial birefringent cylinder of 10 mm radius with a maximum projected phase shift of δ = 18 radians is shown in Fig. 2. The blue line indicates the original phase shift as a function of position after a plane wavefront travels through the cylinder; this is the signal that has to be retrieved. The red line shows two times the inverse sine of the measured |sin(δ/2)| term. The green line is obtained by flipping the inverse sine function in the appropriate domains and adding π according to Eq. (9). The function δ can then be retrieved with standard phase unwrapping; it is plotted in magenta and corresponds to the original birefringence distribution. Thus, in theory the projection function δ can be retrieved. However, in practice this may not be possible, for example when the data is noisy or when the jumps in the sinusoidal signal of the transmitted field U_y are not properly sampled due to a rapid increase of δ. Polarization tomography In 3D polarization-sensitive tomographic imaging, the sample is rotated and the x (parallel) and y (cross) components of the wave are recorded for each angle, for phase and polarization contrast respectively. Due to the small contribution of the birefringence contrast to the phase of the x-component, this component can be used for conventional ODT. However, it should be noted that, in order to preserve the linear relationship between the projection and the y-component, it can be seen from Eq. (10) that not the intensity (amplitude squared) of the wavefield should be taken as the projection, but the square root of the intensity (the amplitude).
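To make the retrieval of Eqs. (8)-(10) concrete, the following minimal Python sketch (our illustration, not code from the paper; all parameter values are hypothetical, the untilted case γ = 0 is assumed, and the maximum-amplitude cross-polarizer setting is taken as already selected) simulates the cross-polarized amplitude of a birefringent cylinder, retrieves δ for the small-δ case, and feeds the result to filtered backprojection. It assumes NumPy and scikit-image ≥ 0.19 are available.

```python
import numpy as np
from skimage.transform import iradon

# Illustrative parameters (hypothetical, chosen so that max(delta) < pi)
wavelength = 633e-9                    # HeNe wavelength used in the text, m
k = 2 * np.pi / wavelength             # wavenumber, Eq. (1)
R, dn = 1e-3, 1e-5                     # cylinder radius (m), birefringence n_e - n_o
n_pix, n_angles = 256, 720

# Forward model, Eq. (8), for gamma = 0: the projected retardance delta(p)
# follows the chord length through the cylinder; |sin(2*rho - 2*phi)| is
# taken as 1 (maximum-amplitude polarizer setting already selected).
p = np.linspace(-1.5 * R, 1.5 * R, n_pix)
chord = 2 * np.sqrt(np.clip(R**2 - p**2, 0.0, None))
delta = k * dn * chord
measured_amplitude = np.abs(np.sin(delta / 2))

# Retrieval via Eq. (9): for max(delta) < pi the principal-value arcsine is
# already the correct branch (m = 0); larger delta would need the branch
# flipping and phase unwrapping illustrated in Fig. 2.
assert delta.max() < np.pi
delta_retrieved = 2 * np.arcsin(np.clip(measured_amplitude, 0.0, 1.0))

# For gamma = 0 the projection is independent of beta, so the sinogram is
# the same profile repeated; FBP then yields a map proportional to the
# local birefringence delta-n.
theta = np.linspace(0.0, 360.0, n_angles, endpoint=False)
sinogram = np.tile(delta_retrieved[:, None], (1, n_angles))
reconstruction = iradon(sinogram, theta=theta, filter_name='ramp', circle=True)
```

For a tilted object, the |sin(2ρ − 2γ cos β)| and cos²(γ sin β) factors of Eq. (11) below would additionally modulate the sinogram along the angle axis, as simulated in Appendix A.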
However, in general δ itself depends on the tomographic rotation angle β through α, via Eq. (1) and Eq. (2). Furthermore, the angle ϕ in Eq. (10) depends on β as well, through Eq. (2). Using these dependencies we find that for small δ the amplitude of the y-component of the field is $|U_y(\beta)| \approx \frac{1}{2} k\Delta \cos^2(\gamma\sin\beta)\,\left|\sin(2\rho - 2\gamma\cos\beta)\right|$ (11). Thus, even though the amplitude of U_y is linear with respect to Δ, the signal is non-linear with respect to the rotation angle β. The first non-linearity occurs due to the $\cos^2(\gamma\sin\beta)$ term in Eq. (11). In Appendix A we show that this term causes an angular modulation across the projections in the Radon transform, which translates to a slowly varying angular background modulation in the tomographic reconstruction that leaves the object contrast intact. The second term, $|\sin(2\rho - 2\gamma\cos\beta)|$ in Eq. (11), modulates the amplitude as a function of the tomographic angle β. This can be compensated for by choosing the cross-polarization angle ρ such that $|\sin(2\rho - 2\gamma\cos\beta)|$ is maximum. Experimentally, this implies that tomographic image acquisition should be done for a sufficient number of cross-polarizer angles ρ, and that for each projection angle β the maximum amplitude projection is subsequently selected [8]. Thus, despite the angular dependency of the phase shift δ, a linear reconstruction algorithm can be used for polarization contrast tomography. The question arises whether the phase of the crossed-polarizer component can be used for the conventional phase reconstruction, so that capturing U_x would not be necessary. In cross polarization, the phase of the transmitted y-component is defined only for paths through the sample where the field amplitude is not zero; in non-birefringent regions the cross-polarized amplitude vanishes and the phase is undefined. Hence, this component cannot be used to reconstruct the conventional RI contrast across the whole sample. However, the phase of the y-component can be used to propagate the wavefield. This can be used to numerically refocus the wavefield if necessary, for example in the case of off-axis placement of the sample for noise suppression [3,9], or to extend the depth of field of the imaging system [2]. Acquisition of projections In ODT, the scattered field is recorded from multiple angles using digital holography. The digital holography setup is shown in Fig. 3 and consists of a Mach-Zehnder interferometer operated in transmission. The light source is a HeNe laser with a wavelength of 633 nm and an output power of 3 mW. Two lenses (Thorlabs, LD2568 and LA1979) are used to expand and collimate the illuminating laser beam to a full width at half maximum (FWHM) of approximately 15 mm. In the object arm a 10X objective lens (NA = 0.3) is used in combination with a 200 mm focal length tube lens (Thorlabs) to image the sample in close proximity to the detector of a CMOS camera (Basler beA4000-62kc) with 4096 × 3072 pixels and a pixel pitch of 5.5 µm. A rotation mount (Thorlabs CR1) rotates the sample stepwise over 360°. One polarizer is placed in front of the sample (P1), and a second one is placed behind the sample (P2). For acquisition of the regular phase contrast projections, the optical axes of the polarizers are made parallel and an acquisition of 720 projections over 360° is performed. For the polarization contrast projections, the relative angle between both polarizers is kept constant at 90°. The complete tomographic measurement is then carried out as before.
The polarization contrast measurement is then repeated after simultaneous rotation of both polarizers by 30° and 60°, respectively. In the reference arm, a polarizer (P3) is placed in order to maximize the fringe contrast at the detector; this polarizer is rotated simultaneously with the polarizers in the object arm. A half-wave plate is placed behind the beam expander in order to maximize the signal at the detector. In the reference arm, a 10X Olympus microscope objective partly compensates for the object wave curvature, to avoid the presence of too high spatial frequencies on the camera. The mirror in the reference arm is mounted onto a piezoelectric transducer (Thorlabs, KPZ 101) controlled by a computer for phase-shifting the digital hologram. We capture four holograms with reference-arm phase shift increments of π/2 between subsequent holograms. From a linear combination of these holograms a complex hologram is formed in which the zeroth and out-of-focus conjugate orders are removed [11]. In this way we maximize the lateral resolution in the reconstructed image. This is specifically important for large-scale ODT, where the magnification is low but as high an NA as possible is desired. Phase and polarization projections Autofocus correction is applied to the digital hologram in order to obtain the wavefield in the object region. The object position is determined by calculating a focus metric (grayscale variance) as a function of the reconstruction distance. For transparent objects the grayscale variance has a minimum value when the reconstruction distance is located at the object. For polarization contrast projections, the grayscale variance has a maximum value when reconstructed in focus. For both cases separately, the minimum/maximum is determined for ten samples of a full rotation acquisition (i.e. 0°, 36°, 72°, etc.). A sinusoidal function is then fitted to the minimum/maximum as a function of the projection angle to determine the object distance as a function of projection angle. For every angle the hologram is reconstructed, for both the phase and polarization contrast data, with the object in focus, by propagating the field to the object plane using the angular spectrum method for diffraction calculation, which is exact and valid for small propagation distances. In the case of the phase projections, the phase is then calculated by taking the argument of the reconstructed wavefield. The phase projections are unwrapped using a least-squares phase unwrapping algorithm [12]. For the polarization contrast projections, the amplitude of the cross-polarized component is calculated. This amplitude then gives a direct, but scaled, measure of the birefringence n_e − n_o. For the different (cross) polarizations, the projections are misaligned horizontally by a few pixels. This is corrected by determining the center of rotation from the maximum variance of the tomographic reconstruction as a function of the shift, for each polarization contrast sinogram individually. The projections are then shifted to the correct location using the circular shift function of MATLAB. The wavefield amplitudes of the projections for the three polarizer angles are stacked, and the maximum value for each camera coordinate is extracted to form a single maximum birefringence projection sinogram. Tomographic imaging is performed with 720 projections over 360° (steps of 0.5°) with four phase steps per projection.
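The hologram processing just described can be summarized in a short sketch. The Python fragment below is our illustration, not the authors' code: the four-step combination is the standard phase-shifting formula (the sign of the imaginary part depends on the direction of the reference phase shift), and all function names are ours.

```python
import numpy as np

def complex_hologram(I0, I1, I2, I3):
    """Standard four-step phase-shifting combination for reference phase
    increments of pi/2; suppresses the zeroth and conjugate orders. The
    sign convention of the imaginary part depends on the shift direction."""
    return (I0 - I2) - 1j * (I1 - I3)

def angular_spectrum(u, dz, wavelength, pixel_pitch):
    """Propagate the sampled field u over a distance dz with the angular
    spectrum method (exact, suited to the small refocusing distances used
    here); evanescent components are suppressed."""
    ny, nx = u.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    kz2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.clip(kz2, 0.0, None))
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(1j * kz * dz))

def grayscale_variance(u):
    """Focus metric: variance of the amplitude image; minimal in focus for
    transparent (phase) objects, maximal for cross-polarized amplitude."""
    return np.var(np.abs(u))
```

Scanning grayscale_variance(angular_spectrum(u, dz, ...)) over a range of dz reproduces the autofocus search; fitting a sinusoid to the optimal dz versus projection angle then gives the per-angle refocusing distance, as described above.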
At every projection angle and phase step, four measurements are taken in total (one for phase, three for polarization contrast). The net acquisition time for a full 3D measurement is approximately 7 minutes, with the total acquired data around 160 GB. Tomographic image reconstruction and visualization For reconstruction of the phase contrast, assuming that the RI variation in the sample is sufficiently small that refraction does not occur, a phase projection is a scaled integral over the RI variation with respect to the background medium along the illumination direction. The average refractive index difference ∆n_avg is calculated from the phase by using the system magnification and the pixel pitch [9]. Subsequently, the ∆n_avg object is reconstructed using the FBP algorithm on a slice-by-slice basis. For polarization contrast, the maximum birefringence projection sinogram δ is reconstructed using the FBP algorithm as a scaled measure of n_e − n_o. We used the Drishti software package [13] to visualize and merge the phase and polarization contrast reconstructions with a non-linear transfer function. Noise suppression in polarization sensitive ODT The sample is displaced from the center of rotation by approximately 0.5 mm. Figure 4 shows the noise distribution, as standard deviation σ, in a tomographic ODT reconstruction for both the polarization contrast (a, b) and the phase contrast (c, d). The polarization contrast ODT reconstruction suffers from increased noise in the region of the center of rotation, similar to what has been shown for phase contrast ODT. The noise at the center of rotation is approximately a factor of 7 higher than outside of the center. This also shows that the noise reduction obtained by off-axis placement is even more significant for polarization contrast ODT than for phase contrast ODT, where the noise reduction by off-axis placement was found to be on the order of a factor of 2 for 720 projections [3]. Zebrafish sample preparation The sample is a 3-day-old zebrafish embryo (wild type). The eggs are grown in a Petri dish and subsequently placed in PTU (1-phenyl 2-thiourea) to prevent pigment formation. At 72 hours, the eggs are dechorionated and fixated in 4% paraformaldehyde. Then, the eggs are washed with phosphate-buffered saline three times, after which the saline is replaced with 100% MeOH in two cycles for dehydration. The embryos are placed in small cylinders (4 mm diameter) and mixed with agarose (2% mass percentage). After the agarose has set, the agarose containing the embryos is removed from the cylinders and placed as a whole in BABB, a mixture of benzyl alcohol (Sigma B-1042) and benzyl benzoate (Sigma B-6630) in a 1:2 ratio, which makes the sample completely transparent [14]. During this process, the RI of the sample becomes almost that of the BABB clearing solution. We used a clearing time of 3 hours (similar to [3]), which ensures that the sample is transparent enough for optical phase tomography while at the same time maximizing the remaining RI contrast, in order to keep a good signal (RI contrast in the reconstruction) to noise (background) ratio in the final reconstruction. Results The polarization and phase contrast projections of a 3-day-old zebrafish tail are shown in Fig. 5(a)-(b) and (d)-(e), respectively. The phase contrast projections are similar to our earlier work on ODT applied to zebrafish larvae [3].
In the polarization contrast projections most of the larva appears dark, due to the absence of birefringent tissue, except in the tail, where the developing, highly birefringent muscle tissue (myotome) is located. The polarization contrast results are similar to the 2D polarization contrast measurements of Jacoby et al. [15]. The histograms of the 3D polarization and phase contrast reconstructions are shown in Fig. 5(c) and (f), respectively. The polarization contrast histogram of the scaled birefringence shows two components, namely the background and the myotome tissue. In the phase contrast histogram of the polarization-averaged refractive index, multiple peaks corresponding to different organs are visible [3]. A 3D visualization of the phase contrast, the polarization contrast, the merged datasets, and transverse cross-sections after tomographic reconstruction using FBP is shown in Fig. 6. It can be clearly seen from the visibility of the developing muscle tissue (myotome) that the phase and polarization contrasts offer complementary information, even though they spatially overlap. The anatomical structures are annotated based on reference data from microscopy [15] and OPT [16]. A striking result is the high contrast obtained in the polarization contrast projections compared to the phase projections. We quantify this by calculating the standard deviation of a background region outside of the center (since the level of noise is lower there), and estimating the mean of the signal in the tail at the same location for both the polarization and phase contrast reconstructions. [Fig. 5 caption (partial): panels (d) and (e) show projections from two different angles of a 3-day-old optically cleared zebrafish larva, illustrating the different contrasts obtained through polarization and phase contrast, respectively. In (c) and (f) the histograms of the full 3D data set are plotted for the polarization and phase contrasts, respectively. The background contribution is indicated in both histograms, as are the myotome and the interstitial tissue for the polarization and phase contrast, respectively.] [Fig. 6 caption: 3D visualization of the phase (a) and polarization (b) contrast, and combined (c) ODT reconstructions of a 3-day-old zebrafish larva tail. In phase contrast, the tail (in red) and the spinal cord (in purple) appear, but not the developing muscle tissue (myotome), which is birefringent. In the polarization contrast reconstruction the structure of the myotome can be clearly discerned. Insets show transverse cross sections on a linear intensity scale, taken at the dashed line. The scalebar for the 3D reconstruction corresponds to 200 µm.] For polarization contrast ODT this yields an SNR of approximately 73, and for phase contrast ODT we obtain an SNR of approximately 15. Polarization contrast ODT thus yields a significantly higher SNR than phase contrast ODT for imaging the zebrafish tail. Discussion and conclusion We demonstrate 3D polarization contrast ODT, which has previously been achieved only with OPT. Applying it within the framework of ODT makes it possible to image both phase and polarization contrast and to make use of the benefits of ODT, such as numerical refocusing and extended depth of field, since both the phase and the amplitude of the polarization contrast field are measured.
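The SNR figures quoted above follow from a simple ratio; a minimal sketch of that computation (ours, with hypothetical region masks, not the authors' code) is:

```python
import numpy as np

def reconstruction_snr(recon, signal_mask, background_mask):
    """SNR as defined in the text: mean reconstructed value in the tail
    (signal) region divided by the standard deviation of a background
    region chosen outside the center of rotation, where noise is lower."""
    return recon[signal_mask].mean() / recon[background_mask].std()

# usage (masks are hypothetical boolean arrays of the reconstruction shape):
# snr_pol = reconstruction_snr(recon_polarization, tail_mask, bg_mask)  # ~73
# snr_phase = reconstruction_snr(recon_phase, tail_mask, bg_mask)       # ~15
```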
Polarization ODT contrast Coherent speckle causes increased noise levels close to the center of rotation in polarization contrast ODT, just as in conventional phase contrast ODT, and the same strategy of off-axis placement and numerical refocusing can be applied to reduce the noise level, here by up to a factor of 7. The polarization contrast ODT reconstruction yields a significantly higher signal-to-noise ratio than the phase contrast reconstruction. We attribute this to the fact that in phase contrast ODT the refractive index differences decrease during clearing, leading to a reduction of the signal-to-noise ratio in the reconstructed images. For polarization contrast ODT, the background is zero (no transmission in the absence of birefringence), which consequently leads to a relatively high contrast when birefringent tissue is present. Besides this qualitative argument, also quantitatively, the value of the average refractive index, which is proportional to n_e + n_o, and the birefringence n_e − n_o may vary during the clearing process [17] and thus influence the image contrast in both ODT modes. Limit on maximum projected δ Straightforward tomographic reconstruction only yields valid results for polarization contrast ODT in case δ is small. In projections of highly birefringent materials, such as an FEP (fluorinated ethylene propylene) tube, the wrapping of δ is clearly visible as a dense amplitude modulation. For cleared biological samples we have not observed such dense amplitude modulation, and for all practical purposes the wrapping problem is absent. Even for uncleared samples with 0.5 mm of birefringent tissue, phase wrapping is absent for a birefringence lower than n_e − n_o = 6 × 10⁻⁴ (indeed, δ = (2π/λ) Δn L = 2π × 6 × 10⁻⁴ × 0.5 mm / 633 nm ≈ 2.98 rad, just below π), which is still smaller than the typical birefringence of uncleared tissue [7]. For applications outside of biomedicine, the wrapping of δ places a practical limitation on the amount of birefringence and/or the maximum sample thickness that can be imaged using conventional reconstruction. In principle, the correct projection and reconstruction can be retrieved in case the linearity requirement is violated, using a modified unwrapping procedure based on the forward model. However, further research is needed for the application of this procedure to experimental data. Absolute quantification of birefringence A limitation of the current method is that the polarization contrast is qualitative. Absolute quantification of the birefringence is challenging, as the magnitude of the signal depends on the incident field distribution, the sample optical absorption, the light-to-electron conversion, and the fiber orientation. In principle the first three factors can be divided out using a reference measurement, e.g., the amplitude of the parallel-polarization projection. A further complication comes from the tomographic angle dependence of δ, which causes a modulation outside of a continuous region of birefringence. In case the macroscopic assumption of uniform birefringence across a region does not apply, but the fiber orientation instead changes significantly on small length scales, this may cause reconstruction artifacts. The case of quantitative birefringence tomography (quantification of the optic axis, n_e, and n_o) is more complicated, as it requires more information per projection angle and a non-linear inversion scheme. This is outside the scope of the current work. Applicability of the uniaxial model The analysis and simulations in this paper are based on the assumption of uniaxial birefringence.
The uniaxial model is a simple and widely used model in polarization microscopy, applicable to fibrous structures such as myelin, elastin, and collagen. It would be valuable to extract from the data the well-defined fiber orientation that is assumed in the uniaxial model. Further research is needed to determine whether the fiber orientation can be retrieved in 3D, for example by performing more measurements under different input polarizations and using a full vectorial reconstruction [18]. Although the uniaxial model works for a large class of tissues, some types of tissue exhibit biaxial birefringence [19]. In addition, in some voxels there may be overlapping tissue fibers. Incorporating this in the tomographic reconstruction requires a more elaborate birefringence model. Conclusion We demonstrated 3D polarization contrast ODT. The developing muscle tissue in the tail of the zebrafish larva is known to be birefringent and cannot be discerned in a conventional phase contrast ODT reconstruction. By illuminating the sample with a single polarization input state and measuring both the parallel component (for the phase) and the orthogonal component (for the polarization contrast) with digital holography, a conventional and a polarization contrast ODT reconstruction of the same object can be obtained. A. Appendix Here we demonstrate the effect of the angular dependency of the amplitude projection on the tomographic reconstruction. The object we consider is a cylinder oriented as outlined in the theory section of this paper. The cylinder has radius R and birefringence n_e − n_o = δn. For a plane wave traveling along the z-axis, linearly polarized along the x-axis, the polarization contrast is retrieved from the cross-polarized component transmitted through the sample. This component U_y is given by the y-component of Eq. (A2). Using the relations $\delta = k\Delta\cos^2(\alpha(\beta))$, $\alpha = \gamma\sin\beta$, and $\varphi = \gamma\cos\beta$, the full dependence of U_y on β becomes $U_y(\beta) = -i e^{i\bar{\varphi}}\,\sin(2\rho - 2\gamma\cos\beta)\,\sin\!\left(\frac{1}{2} k\Delta \cos^2(\gamma\sin\beta)\right)$, and the amplitude of U_y(β) is $|U_y(\beta)| = |\sin(2\rho - 2\gamma\cos\beta)|\,\sin\!\left(\frac{1}{2} k\Delta \cos^2(\gamma\sin\beta)\right)$. For a cylinder located at the origin with a tilt γ with respect to the x-axis of tomographic rotation, the cross-section seen by a wave traveling along the z-axis is an ellipse f(y, z), with semi-major and semi-minor axes a = R sec(γ) and b = R, respectively. The Radon transform $\mathcal{R}(f)$ of a 2D slice of the ellipse gives the path length experienced by the probing wave per projection angle β and is given by [20] $\mathcal{R}(f)(p, \beta) = \frac{2R^2\sec(\gamma)}{A}\sqrt{A - p^2}$, where $A = R^2\cos^2(\beta)\sec^2(\gamma) + R^2\sin^2(\beta)$, and p is the transverse coordinate along the projection. Replacing Δ in Eq. (A2) with $\mathcal{R}(f)\,\delta n$, the effective amplitude projection function measured at the detector becomes $|U_y(p, \beta)| = |\sin(2\rho - 2\gamma\cos\beta)|\,\sin\!\left(\frac{R^2\,\delta n\, k \sec(\gamma)}{A}\sqrt{A - p^2}\,\cos^2(\gamma\sin\beta)\right)$. The projection functions |U_y(p, β)|, along with the resulting tomographic reconstructions, are plotted in Fig. 7 for tilt angles γ = 0° (a-b) and γ = 54° (c-d). The simulation parameters are cross-polarizer angle ρ = 27°, δn = 1 × 10⁻⁵, R = 1 mm, and λ = 633 nm. For comparison, the case of a non-birefringent cylinder at γ = 54° is shown in (e-f). It can be seen that the angular dependency of the amplitude projections results in a modulation along the horizontal projection-angle axis in Fig. 7(c).
Since this does not cause modulation along the transverse coordinate axis and the amplitude is zero outside of the projection of the birefringent object, the contrast inside the birefringent sample is not modulated (Fig. 7(d)). Instead, it gives a slowly varying angular modulation in the background.
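The appendix calculation can be reproduced numerically. The sketch below is our illustration of the amplitude projection function under the stated simulation parameters (not the authors' code; variable names are ours); its columns are the projections whose |sin(2ρ − 2γ cos β)| factor produces the slow modulation along the angle axis seen in Fig. 7(c).

```python
import numpy as np

# Simulation parameters from the appendix (Fig. 7)
wavelength = 633e-9
k = 2 * np.pi / wavelength
R, dn = 1e-3, 1e-5                       # cylinder radius (m), birefringence
gamma = np.deg2rad(54.0)                 # tilt of the cylinder axis
rho = np.deg2rad(27.0)                   # cross-polarizer angle
beta = np.deg2rad(np.arange(0.0, 360.0, 0.5))
p = np.linspace(-2.5 * R, 2.5 * R, 512)  # transverse detector coordinate

P, B = np.meshgrid(p, beta, indexing='ij')

# Radon transform of the elliptical cross-section (semi-axes R*sec(gamma), R)
A = R**2 * np.cos(B)**2 / np.cos(gamma)**2 + R**2 * np.sin(B)**2
radon_ellipse = (2 * R**2 / (np.cos(gamma) * A)) * np.sqrt(np.clip(A - P**2, 0.0, None))

# Effective amplitude projection |U_y(p, beta)|: the second sine factor is
# the retardance term, the first factor the polarizer-angle modulation.
Uy = np.abs(np.sin(2 * rho - 2 * gamma * np.cos(B))) * \
     np.sin(0.5 * k * dn * radon_ellipse * np.cos(gamma * np.sin(B))**2)
# Feeding the columns of Uy to filtered backprojection reproduces the slowly
# varying angular background modulation of Fig. 7(d).
```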
7,117.6
2020-03-20T00:00:00.000
[ "Physics" ]
Breakdown dynamics of a horizontal evaporating liquid layer when heated locally The breakdown of a liquid layer heated from a localized hot spot was investigated experimentally. Water and ethanol were used as working liquids, with a layer thickness of 300 μm. The basic stages of the breakdown process were identified and the mean velocities of dry spot formation were determined. The formation of a residual layer over the hot spot before the breakdown was found for both liquids. The creation of a droplet cluster near the heating region is observed when water is used as the working fluid. It was shown that evaporation, along with the thermocapillary effect, is one of the main factors influencing the process of layer breakdown and dry spot formation. Introduction The study of heat transfer from a local heat source has become one of the most important and complicated problems in modern thermophysics. It is closely connected to the problem of cooling microelectronic equipment [1]. Continuous power consumption by a device inevitably raises the microprocessor temperature and degrades its performance and reliability. The average heat flux density on the surface of chips in commercially available computers and other electronic devices is currently known to reach 100 W/cm². The continuous development and increasing complexity of microchip structures produce a nonuniform heat flux distribution on the chip surface. This effect takes place due to the design features of computer chips, where the processor cores cause the formation of "hot" spots. The heat flux density in some regions is much higher than the chip average [2], of the order of 1 kW/cm². These specific regions, called "hot spots", can range in size from several hundred microns to 1-2 millimeters. However, using special localized cooling it is possible to achieve large performance gains in microprocessors. Nowadays, there are several effective techniques for the cooling of local hot spots, such as spray cooling [3], boiling in microchannels [4], and thermoelectric coolers [5]. One of the promising methods for removing such high heat fluxes from a spotted heat source is a technology based on the evaporation of a thin liquid layer. The dynamics of evaporation essentially depend on the conditions in the layer [6]. In particular, the breakdown of the liquid layer leads to a dramatic decrease in heat transfer from a spotted heat source [7]. Processes of liquid layer rupture are currently being actively studied both experimentally [7][8][9] and theoretically [10][11][12]. The goal of the present work is to study, using the schlieren technique, the breakdown dynamics of a horizontal evaporating liquid layer heated from a localized hot spot. Experimental rig Experiments were conducted on the rig shown in Fig. 1. The test cell consists of a caprolon base, a metal substrate, and a heating element. Fluid from the syringe pump enters the working area, forming a horizontal liquid layer open to the atmosphere. Spot heating of the horizontal liquid layer takes place at the center of the substrate. The caprolon base has a special cut on its upper side for installation of the substrate and a central through hole with a diameter of 1.6 mm. The substrate is made of stainless steel and has a diameter of 50 mm and a thickness of 1 mm. At the center of the substrate there is a blind (closed) hole with a diameter of 1.6 mm and a height of 0.8 mm.
The heating element is made of brass and has a round tip with a diameter of 1.6 mm and a height of 3 mm. It is tightly inserted into the blind hole of the substrate through the caprolon base. Thermal paste is used for better thermal contact between the heater tip and the substrate. The distance between the tip and the upper side of the substrate is 0.2 mm. The heat source of the heater is a nichrome wire wound on the core rod. Heating is controlled by the power source. The heater core was placed in the cutout on the lower side of the base, with an air gap of 2 mm between the heater and the base. Below the heater, insulating material was placed to minimize heat losses. The temperature in the test cell is measured by thermocouples (type K) with an accuracy of 0.1°C. The locations of the thermocouples are shown in Fig. 1. Relative humidity and atmosphere temperature are measured using a Testo 645 thermohygrometer with an accuracy of 2% and 0.1°C, respectively. The heat flux density is determined by measuring the temperature difference between two cross sections along the heater tip: $q = \lambda\,\Delta T / l$, where λ is the thermal conductivity of the heater material, W/(cm·K); l is the distance between the two cross sections of the heater tip, cm; and ΔT is the temperature difference between the two cross sections along the length of the heater tip, K. The height of the horizontal liquid layer is maintained constant during the entire experiment. A probe with a diameter of 100 μm is installed at the required distance from the substrate surface. A precise linear actuator and the shadowgraphy technique are used to determine the position of the probe. The probe moves over a range of 5 mm in steps of 1 μm. With the help of a high-precision syringe pump, the flow rate is chosen to provide a constant layer thickness, taking evaporation into account. The horizontal layer surface and the probe are observed using an Imaging Source DFK 23GP031 video camera with a resolution of 2592 × 1944 pixels. To visualize surface deformations and register the breakdown, an optical schlieren system is used with a high-speed Photron FASTCAM 675K-M3 video camera (5000 fps at a resolution of 640 × 640 pixels and a scale of 25 µm/pix). The test section is mounted in a horizontal position using a goniometer. The surface roughness of the substrate was determined with a "Micro Measure 3D station" profilometer; the average roughness value is Ra = 0.327 µm. The contact angle on the working surface in the heating area was determined by the sessile drop method (Young-Laplace) [13] at a room temperature of 25±2°C and equals θ₁ = 6±1° for ethanol and θ₂ = 76±1° for water. Results and discussion Experiments were conducted at atmospheric pressure, at a temperature and relative humidity of 28±2°C and 25±3%, respectively. The working liquids were ethanol (95% (mass.), GOST R 51723-2001) and ultrapure water. For water purification, a Merck Millipore Direct-Q 3 UV system is used, which provides water of type I (ultrapure water). The height of the liquid layer was 300 µm. The injected liquid flow rate was up to 200 μl/min. During the experiment the heat flux is increased up to a critical value at which the liquid layer ruptures. At this moment the heating is stopped to prevent failure of the heating element. For both working fluids, the critical heat flux density at which the breakdown of the liquid layer occurs was measured.
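As a quick illustration of this one-dimensional Fourier-law estimate (our sketch, not from the paper; the numerical values below are hypothetical, not measurements):

```python
# q = lambda * dT / l, the one-dimensional conduction estimate used above
lam = 1.1    # thermal conductivity of brass, W/(cm*K), assumed literature value
l = 0.2      # distance between the two cross sections of the tip, cm (hypothetical)
dT = 2.0     # measured temperature difference between the sections, K (hypothetical)

q = lam * dT / l
print(f"heat flux density q = {q:.1f} W/cm^2")  # 11.0 W/cm^2 for these inputs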
The value of the critical heat flux density for ethanol equals 12.6 W/cm² at a substrate temperature in the heating area of 37.1°C; for water it is 117 W/cm² at a substrate temperature of 133°C. It was found that for both working liquids the layer breakdown occurs according to the same scenario [7,14,15]. First, a thermocapillary deformation of the layer above the point heating area appears (Fig. 2a, b). Further thinning leads to the formation of a residual liquid layer in the area of the point heating, Fig. 2c [8]. Then, the residual liquid layer evaporates down to a critical thickness at which the layer breakdown takes place, Fig. 2d. After the breakdown the entire heating area dries out intensely, and a round dry spot is formed, Fig. 2e. It should be noted that local heating of the water layer is accompanied by the formation of a droplet cluster [6,14,16] above the heating area, and the breakdown takes place at a distance of about 1 mm from the substrate center. When ethanol is used as the working liquid, residual layer formation and rupture occur directly above the heating element at the center of the substrate. The average rate of dry spot formation was measured for the two working fluids; it was determined as the ratio of the characteristic radius of the resulting dry spot to the time of its formation in the heating area. The time of dry spot formation is counted from the moment of the residual layer breakdown to its complete evaporation. The average velocity of dry spot formation is 0.06 mm/s for ethanol and 5.15 mm/s for water. The time of dry spot formation is 7.85 seconds for ethanol and 0.13 seconds for water. The difference in the velocity of dry spot formation is primarily connected with the different evaporation rates of the residual layer. The evaporation rate directly depends on the heat flux density and the substrate temperature. In addition, it is influenced by the difference in the properties and hydrodynamic parameters of the working fluids: contact angles and surface tension. In the study of the thermocapillary breakdown of the water layer under local heating, the existence of a droplet cluster was found (Fig. 3), followed by its falling and the formation of capillary waves [16]. Droplets in the cluster are held above the layer surface by the intense evaporation in the heating area, and the falling of individual droplets begins after the breakdown of the residual layer (Fig. 3c, d). The phenomenon of the droplet cluster was investigated in detail in [16][17][18]. Fig. 3. Visualization of the droplet cluster and the breakdown dynamics. The liquid is water and the layer depth is 300 µm. The influence of the working liquid on the breakdown dynamics was studied. Visualization of the breakdown dynamics has been performed for thin layers of ethanol and water using the schlieren technique. It is found that for rupture of the water layer the required heat flux density is an order of magnitude higher than the critical heat flux density for an ethanol layer of the same thickness. At the same time, the typical breakdown velocities of water and ethanol differ by two orders of magnitude. This fact is directly related to the difference in the critical heat fluxes and in the intensity of evaporation. It was shown that before the breakdown a residual layer appears in the area of local heating. When water is used as the working liquid, the formation of a droplet cluster may be observed near the heating area.
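The mean dry-spot formation velocity defined above is simply radius over time. A small consistency sketch (ours; the radii are back-calculated from the reported velocities and times, and are therefore inferred rather than reported values):

```python
def mean_dry_spot_velocity(radius_mm, time_s):
    """Characteristic dry-spot radius divided by its formation time."""
    return radius_mm / time_s

# Back-calculated radii: 0.06 mm/s * 7.85 s ~ 0.47 mm (ethanol),
# 5.15 mm/s * 0.13 s ~ 0.67 mm (water); inferred, not reported values.
print(mean_dry_spot_velocity(0.47, 7.85))  # ~0.06 mm/s, ethanol
print(mean_dry_spot_velocity(0.67, 0.13))  # ~5.15 mm/s, water
```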
Along with the thermocapillary effect, evaporation is one of the main factors that influence the breakdown of the residual liquid layer and the dry spot formation in the heating area. The study was financially supported by the Russian Science Foundation (Project 14-19-01755).
2,480.8
2017-01-01T00:00:00.000
[ "Physics", "Engineering" ]
Mapping The Quality of EFL Mixed Methods Research: What Does The Research Synthesis Indicate? Beyond the mono-method quantitative and qualitative research syntheses (e.g. meta-analysis and meta-ethnography, respectively), and with a pragmatic perspective on conducting mixed methods research (MMR), recently only a few research synthesists have adopted a Mixed Methods Research Synthesis (MMRS) approach to answer complex review questions. Therefore, to better understand the issue of quality, this study takes the initiative in aligning mixed methods research quality with the Plonskyian views on study quality proposed in the methodological synthesis literature. The main purpose of the methodological synthesis here was to provide empirically based evidence for describing and evaluating mixed methods studies in an Iranian EFL (English as a Foreign Language) context. We synthesized mixed methods theses in an Iranian EFL context by describing and evaluating three interrelated components of study quality, focusing on transparency and reporting practices related to: (a) the MMR formulation stage (or MMR problem specification stage), (b) MMR design-related features, and (c) MMR interpreting and integration (or the MMR implementation stage). The findings indicated an unsatisfactory application of MMR tenets in the EFL setting. The study has implications for designing and implementing sound MMR studies. Introduction For over half a century, researchers in the social and behavioral sciences have been establishing Mixed Methods Research (MMR) as a pragmatist methodological approach to conducting research (Dörnyei, 2007). This integrative approach is primarily used to corroborate findings from the quantitative and qualitative camps in order to initiate and spread human knowledge. As Cooper (2016) stated, "trustworthy accounts that describe past research are necessary steps in the orderly development of scientific knowledge" (p. 2). Furthermore, the significance of a cumulatively solid account of previous studies is emphasized given the recent call for the Evidence-Based Practice (EBP) movement, which has put a renewed emphasis on the importance of how a research study was conducted, what it determined, and "what the cumulative evidence suggests is the best practice movement" (Cooper, 2016, p. 3; see also Heyvaert, Hannes, & Onghena, 2016). Likewise, as Plonsky and Gass (2011) put it, "progress in any of the social sciences including applied linguistics depends on sound research methods, principled data analysis, and transparent reporting practices" (p. 325). Drawing on the EBP movement and research methodological awareness, we have recently witnessed an increasing awareness of and tendency towards meta-research, "the study of research itself: its methods, reporting, reproducibility, evaluation, and incentives" (Ioannidis, 2018, p. 1). More specifically, in the field of applied linguistics, unlike the prior methodological studies that minimally evaluate research practices (e.g.
Duff & Lazaraton, 2000; Henning, 1986; Lazaraton, 2005), and whose accounts were mainly "anecdotal rather than based on systematic inquiry of primary empirical studies" (Liu & Brown, 2015, p. 66), Plonsky champions the use of research-synthetic techniques in a series of studies (defining the domain, locating the primary-level studies, developing a coding sheet, searching the literature, collecting information from studies; see Plonsky, 2013) for accounting for "methodological phenomena" (Plonsky & Gonulal, 2015, p. 10). This fresh look at methodological practices has been of great importance in applied linguistics (Liu & Brown, 2015) and has motivated a series of studies (Plonsky, 2013, 2014; Plonsky & Gass, 2011; Plonsky & Gonulal, 2015). For example, Plonsky (2013) investigated 606 quantitative research studies in terms of research designs, statistical techniques, and reporting practices in the top-tier journals Language Learning (LL) and Studies in Second Language Acquisition (SSLA). The results revealed that mean-based analyses were frequently used, with analyses of variance (ANOVAs) and t tests being the most prevalent statistical techniques. Advanced statistical techniques were applied sporadically. However, there exists dissatisfaction with reporting practices: reliability measures (occurring in 45% of the data), effect sizes (occurring in 26% of the data), checking of statistical assumptions (occurring in 17% of the data), confidence intervals (occurring in just 5% of the data), and power analysis (occurring in only 1% of the data). Other methodological syntheses describe and evaluate a particular feature of L2 research (e.g., reliability issues, factor analysis, instrument practices, eta-squared effect sizes, and multiple regressions in L2 research). For instance, in a study on the applications of effect-size indices, Norouzian and Plonsky (2018), using research-synthetic techniques, described and evaluated the uses of eta-squared and partial eta-squared in L2 research. Having outlined the conceptual and functional values of these two frequently used indices, they maintained that L2 researchers mistakenly reported partial eta-squared as eta-squared. In sum, authors of methodological research synthesis studies sought to evaluate second language research and particular L2 research features in general, and domain-specific research issues in particular. Retrospectively, they noted the strengths and deficiencies of L2 quantitative research studies across a wide range of sources, such as published journals, more prevalently than other sources such as book chapters and unpublished dissertations (emphasis added). They prospectively also offered systematic, objective, and transparent insights into the quality of L2 research (emphasis added) and put forward empirically grounded suggestions for improving research issues (e.g. research designs, statistical analyses, and reporting practices). However, these researchers recruited various systematic sampling strategies (i.e. purposive or exhaustive) for locating the target sample and only evaluated quantitatively oriented research studies in applied linguistics. Johnson, Onwuegbuzie, and Turner (2007) maintain that "we currently are in a three methodological or research paradigm world, with quantitative, qualitative, and mixed methods research all thriving and coexisting and a triple methodological world might be healthy because each approach has its strengths and weaknesses and times and places of need" (p.
117). Quality standards or quality assurance in the literature is a very challenging topic, so much so that the "application of quality criteria is still a subject of discussion" (Poortman & Schildkamp, 2012, p. 1738). Therefore, in order to better understand the issue of quality, this study takes the initiative in aligning mixed methods research quality with the Plonskyian views on study quality proposed in the methodological synthesis literature (see Plonsky, 2013; Plonsky & Gass, 2011). Unlike the previous studies, which were mainly based on an individualistic, theoretical, and idiosyncratic ideology (see Fàbregues & Molina-Azorín, 2017, emphasis added), research quality in mixed methods research in the present study is viewed through the lens of a synthetic research ethic (Norris & Ortega, 2006; Ortega, 2015) as a guidepost which, in turn, would strengthen collaboration, transparency, objectivity, and systematicity, and boost synthetic thinking and acting with regard to the issue of quality. Research synthesis, as Plonsky and Oswald (2015) maintain, is "the microscope through which past L2 research is interpreted as well as the telescope through which future L2 research efforts will be directed" (p. 121). Accordingly, we also took a retrospective-and-prospective approach to attend to both past MMR and future MMR endeavors in an EFL context. More specifically, we hoped to contribute to the future of mixed methods research by examining its past in an EFL context (see Heyvaert et al., 2016; Plonsky, 2013). Research Questions The following research questions were addressed. Here, we adhered to a detailed set of steps for conducting research synthesis in applied linguistics (see Plonsky & Oswald, 2015). To begin with, this study, unlike the previous methodological syntheses in applied linguistics, synthesized unpublished dissertations between 1987 and 2015 that were recorded in the Iranian Research Institute for Information, Science, and Technology (IRANDOC). This long-established institute, affiliated with the Iranian Ministry of Science, Research, and Technology (MSRT), is a local and rich research-based center with the aim of collecting, recording, and disseminating research articles, research reports, government reports, and theses (see http://irandoc.ac.ir/about/overview). Furthermore, in order to examine a comprehensive range of research and trace advancements across time in the EFL research context, a three-decade period of research was selected based on the statistical reports of EFL higher education in Iran (https://irphe.ac.ir/index.php?sid=25). In this all-inclusive methodological synthesis, the quality of dissertations was established as an empirical "a posteriori question, not an a priori matter of opinion" (Glass, McGaw, & Smith, 1981, p. 222). Therefore, based on the aforementioned criteria concerning location, time, and content, our initial search revealed a large number of research outputs that appeared to fit the criteria. As Figure 1 illustrates, the researchers' final search led to 119 mixed methods dissertations. Coding: Designing Coding Sheet and Coding Procedures In order to bring MMR quality into alignment with methodological synthesis, the following procedures were taken. In line with Plonsky's (2013) definition of study quality, which is "adherence to standards of contextually appropriate, methodological rigor in research practices and transparent and complete reporting of such practices" (p.
658), and somewhat in line with those mixed methods scholars who are fervent supporters of a parsimonious, agreed-upon set of core benchmarks for study quality rather than MMR-particular long-list criteria (Bryman, …), a categorization of quality features was adopted. This categorization was, to some extent, consistent with that of Onwuegbuzie and Corrigan (2014), wherein rigor is defined as conducting and reporting a mixed methods research study in a way that is comprehensive, systematic, evaluative, defensible, and transparent. The first three features reflect rigor in conducting MMR studies; the last two components represent rigor in reporting MMR studies. It seems that, by aligning an MMR agreed-upon set of core benchmarks with the Plonskyian research quality criteria, one can argue for shaping an a posteriori category based on empirical MMR data rather than an a priori category based on pre-determined, theoretically decontextualized criteria. Coder Issues and Reliability Estimates The following procedures were taken in order to boost the reliability of the codes: (a) a reliability team was created, including three Ph.D. students with research backgrounds and two experienced mentors who had been involved in teaching EFL research methodology at the M.A. and Ph.D. levels; (b) three training sessions (each lasting 2 hours) were held in order to delineate the purposes of the study, the coding sheet components, and the coding procedures; (c) coding guides or manuals accompanying the coding sheets were distributed among the coders; (d) the coders were asked to independently rate five M.A. theses retrieved from the IRANDOC research database; (e) the coders were asked not to look at the study identifiers of a given study, because this might have an influence on coding; and (f) in case of any questions and inconsistencies, the researchers relied on the related literature and on research synthesists. The overall inter-rater reliability of the rated theses for mixed methods research studies was 0.83. Furthermore, in order to depict a thorough and comprehensive picture of coder consistency, "it is essential that reliability be considered and reported not simply overall, but rather for each category under examination" (Norris & Ortega, 2006, p. 26). Table 1 presents the results of inter-rater reliability for the several categories (a minimal computational sketch of such per-category agreement estimation is given at the end of the Results section below). Results In this section, we synthesized mixed methods theses in an Iranian EFL context by describing and evaluating three interrelated components of study quality: (a) transparency and reporting practices related to the MMR formulation stage (or MMR problem specification stage), (b) transparency and reporting practices associated with MMR design-related features, and (c) transparency and reporting practices related to MMR interpreting and integration (or the MMR implementation stage). Transparency and Reporting Practices Related to the MMR Formulation Stage Comprehensive and thorough reporting of the MMR formulation stage, as "the first step in any research endeavor" (Cooper, 2016, p. 20), empowers primary-level research consumers to better understand authors' "mixed methodological way of thinking" (Onwuegbuzie, 2012, p. 204). Table 2 presents the extent to which the MMR theses adhered to standards of transparency and rigor in the reporting of the interconnected components (i.e., working titles, research questions, rationale, and philosophical clarity). Table 2 revealed that only two studies (2%) included the term mixed methods or related terms. Approximately 98% of the studies did not embrace the words mixed methods or related terms in their titles.
Furthermore, 14% of the studies (n=17) conveyed a quantitative orientation in their titles, and approximately 6% of the studies (n=7) conveyed a qualitative orientation in their titles. This revealed that approximately one quarter of the studies leaned toward a mono-method rather than a mixed methods orientation in reporting the titles. Furthermore, a great portion of the studies (79%, n=94) was guided by separate research questions. That is, there existed at least one quantitative-led research question accompanied by at least one qualitative-led research question, without a clear mixed methods research question. The second most frequent type was what Plano Clark and Badiee (2010) referred to as combination research questions (9.24%, n=11), wherein the EFL authors initially posed separate mono-method (i.e., quantitative-led and qualitative-led) research questions followed by a transparent mixed methods research question. The least prevalent research question type was what Plano Clark and Badiee (2010) referred to as hybrid research questions (8.4%, n=10), through which the EFL authors initially posed an overall research question consisting of two distinct strands. They then employed a quantitative-oriented approach to address one strand and a qualitative-oriented approach to deal with the other strand. The results also revealed that approximately 35% of the studies (n=42) explicitly outlined the rationale for using a mixed methods research approach. To put it differently, a great portion of the researchers (65%, n=77) did not explicate the reasons for using a mixed methods research approach. Therefore, it is not clear whether mixed methods research in 65% of the studies is more appropriate than a mono-method approach for answering the research questions. The last feature in the formulation stage is the extent to which philosophical clarity was explicitly reported in the data set. Philosophical clarity, as Collins, Onwuegbuzie, and Johnson (2012) assert, is "the degree that the researcher is aware of and articulates her/his philosophical proclivities in terms of philosophical assumptions and stances in relation to all components, claims, actions, and uses in a mixed research study" (p. 855). It is incumbent on MMR authors to clarify their philosophical positioning in a given study. However, unfortunately, philosophical stances, as a major indicator of the MMR style of thinking, were completely absent in the M.A. theses. Design-related Features in MMR With regard to the purpose of mixing quantitative and qualitative phases (see Table 3), the results revealed that approximately 61% of the MMR theses (n=73) identified (implicitly or explicitly) the purpose of mixing quantitative and qualitative phases in a given study. As for purpose types, the complementarity purpose (37%, n=44), in which the authors seek to elaborate, enhance, illustrate, and clarify "the results from one method with the results from the other method" (Johnson & Christensen, 2014, p. 502), was by far the most prevalently represented purpose in the data set. To a lesser degree, the triangulation purpose (13%, n=15), in which the authors seek to converge and corroborate the results from different angles or research approaches (Greene, Caracelli, & Graham, 1989), was identified as the second purpose.
This was closely followed by the development purpose (12%, n=14), through which the authors attempt to utilize "the results from one method to develop or inform the other method" (Johnson & Christensen, 2014, p. 502). Prominently absent in the data set were the initiation and expansion purposes. According to Figure 2, as for timing in the MMR designs, the results revealed that around 60% of the MMR studies (n=69) were implemented in two distinct phases. That is, most of the EFL authors conducted the studies sequentially, with the two strands of quantitative and qualitative approaches occurring one after another. Around 29% of the MMR studies were implemented simultaneously in a single phase. That is, around 29% of the EFL authors (n=35) conducted the studies concurrently, with the quantitative and qualitative phases occurring at almost the same time. A very small percentage of the studies (6%, n=7) was conducted in multilevel phases. The results further revealed that the EFL authors used a variety of mixed methods research designs. It was found that the most frequently used design in the theses was the explanatory sequential design (41.2%, n=49), with the aim of first surveying the intended problem(s) quantitatively and then exploring the problem qualitatively to help explain the quantitative-led results. The second most frequently reported design in the theses was the embedded concurrent design (19.3%, n=23). Conversely, the embedded sequential design (6%, n=5) received the least attention in the data set. Finally, the exploratory sequential design (12.6%, n=15) and the triangulation concurrent design (11%, n=13) were used sporadically (see Table 4). With regard to the nomenclature of MMR designs, two evaluative questions were included (i.e., Is the specific type of design clearly stated? Or is the specific type of design identified based on main components from the corpus?). Surprisingly, the results revealed that only the author of one study explicitly described the name of the specific type of MMR design (namely, a multiple-stage mixed methods research design; see Creswell & Plano Clark, 2018). Almost all of the specific MMR designs (99%, n=118) were identified based on the main elements of mixed methods research design from the documentation. Finally, regarding the issue of rigor (i.e. the strengths of MMR designs in comparison to a mono-method research approach) in the MMR designs employed, the results showed that the reporting of rigor in relation to the MMR designs employed was completely missing in the data set. That is, the EFL authors rarely highlighted or reported issues of rigor in relation to the MMR designs. Reporting Practices in Sampling-related Features As depicted in Table 5, the results revealed that the EFL authors used a variety of mixed methods sampling designs. As can be seen, in line with the total frequencies of MMR designs in Table 5, sequential and concurrent sampling designs were reported in 57% and 30% of the data set, respectively. These percentages were close to the percentages of the M.A. theses employing sequential (60%) and concurrent (29%) research designs. This signified a regular and direct relationship between sampling designs and research designs. The overall results further revealed an inconsistent picture of sampling designs in the data set.
For example, the most frequently represented sampling design was what Collins, Onwuegbuzie, and Jiao (2006) referred to as sequential designs utilizing nested samples (38%, n=45) for the qualitative and quantitative strands of the M.A. theses. This was followed by the sequential design with multistage samples (12%, n=14). However, within the sequential designs, parallel (2%) and identical (6%) sampling designs received less emphasis than multilevel and nested sampling designs. As for the concurrent designs, on the other hand, the most prevalent design was the identical sampling design (13%, n=15), followed closely by nested sampling (11%, n=13). The least frequent were the parallel (2%) and multilevel (5%) sampling designs, respectively. Surprisingly, the results showed that none of the authors of the M.A. theses attended to an explicit description of a specific type of MMR sampling design. This means that all of the specific MMR sampling designs were implicitly identified based on the main elements of the sampling designs from the documentation. Reporting Practices with Regard to Integration-related Issues In an attempt to better understand the transparency and reporting practices for integration, and to figure out the degree to which the authors of the M.A. theses implemented this cornerstone factor for the MMR community (see Creswell, 2015), this section of the analysis first reports the instances of the stage of integration, along with frequencies and percentages. The results, as shown in Table 6, revealed that integration at the level of interpretation and reporting (29.4%, n=35), typically represented in the discussion and conclusion sections, was reported more frequently than integration at the level of methods (27%) and design (25%), respectively. More specifically, the analysis of the discussion and conclusion sections of the M.A. theses revealed that no distinct or separate part of the theses was given to meta-discussion. Also, in a great portion of the studies (71%), meta-inferences were not drawn from both the quantitative and qualitative inferences. However, as can be seen in Table 6, approximately 30% of the studies made general inferences based on the data from the quantitative and qualitative strands (Riazi, 2017). As for mixing strategies, integration via narrative means (28%, n=33) was identified as the most prevalent mixing strategy at the level of interpretation. Data transformation and joint display approaches received less or no emphasis. With regard to mixing strategies at the level of methods, it was found that the connecting approach (17%, n=20) and the building approach (10%, n=12) were reported as the most prevalently used approaches. Remarkably, the merging approach, wherein researchers bring the two strands of quantitative and qualitative data together for comparison and analysis, and the embedding approach, wherein researchers link data collection and data analysis at interrelated stages, were completely missing in the data set. Notably, it was found that none of the authors of the M.A. theses attended to an explicit description of the specific stages of integration at the different levels. This means that all of the specific MMR integration types were implicitly identified based on the main indicators from the corpus.
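As forward-referenced in the coding section, the per-category coder consistency reported in Table 1 can be estimated with a few lines of code. The study does not state which reliability index was used, so the sketch below (ours, not from the thesis corpus) uses simple percent agreement between two raters as a stand-in:

```python
import numpy as np

def percent_agreement(rater1, rater2):
    """Proportion of items coded identically by two raters; one simple
    inter-rater reliability index (the index behind the reported overall
    value of 0.83 is not specified in the study)."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    return float(np.mean(r1 == r2))

def per_category_agreement(codes):
    """codes maps a category name (e.g. 'research questions', 'design',
    'integration') to a pair of code sequences, one per rater; returns the
    agreement per category, as recommended by Norris and Ortega (2006)."""
    return {cat: percent_agreement(a, b) for cat, (a, b) in codes.items()}
```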
Transparency and Reporting Practices at MMR Formulation Stage Approximately 98% of the authors did not use the words mixed methods or related terms in their titles, despite the fact that some mixed methods research authors recommend that the title of any MMR report should transparently convey and embrace the words mixed methods or related notions (e.g., Creswell, 2015; Creswell & Plano Clark, 2011; Plano Clark & Badiee, 2010). Considering that some MMR authors believe that MMR researchers must "stay away from words that convey a qualitative leaning, such as explore, meaning, or discover and stay away from words that convey a quantitative orientation, such as relationship, correlation, or explanation" (p. 10), the findings revealed that approximately one quarter of the studies (24%) leaned toward a mono-method orientation (i.e., a quantitative OR qualitative connotation) rather than including mixed methods in the title. With regard to the mixed methods research question, the rather minimal use of mixed methods research questions might be attributed, in part, to the lack of adequate attention given to the issue in the MMR literature (Riazi, 2017), the relative unfamiliarity of researchers with the pivotal role of research questions in MMR (Tashakkori & Creswell, 2007), the lack of due attention to the challenge of integrating quantitative and qualitative data (Riazi, 2017), and the predominant focus of the MMR literature on design-related features, challenges, and integrations (see Creswell, 2015). Another reason for the minimal posing of research questions might be related to the impact of schooling and training on raising postgraduate authors' awareness of writing research questions in MMR studies, which lends support to Onwuegbuzie and Leech's (2006) contention that "it is surprising that an extensive review of the literature revealed no guidance as to how to write research questions in mixed methods studies" (p. 477), questions which, in turn, should embrace "quantitative questions, the qualitative questions, and a mixed methods question" (Creswell, 2014, p. 148). The findings further revealed that a large portion of the authors (65%) did not explicate their reasons for using mixed methods research. Therefore, it is not clear whether mixed methods research was a more appropriate fit than a mono-method approach for answering the research questions in these 65% of the studies (see Creswell, 2015; Riazi, 2017). This reveals that a significant majority of the M.A. authors did not check whether their research problems would warrant an approach "that combines quantitative and qualitative research or a mixed methods inquiry" (Creswell & Plano Clark, 2011, p. 8). Finally, the findings revealed that philosophical clarity, a major indicator of the MMR style of thinking, was conspicuously absent from the data set. Despite the fact that MMR authors need to clarify their philosophical positioning (Creswell, 2015; Riazi, 2017), the M.A. authors did not justify or explicate their "philosophical proclivities in terms of philosophical assumptions and stances in relation to all components, claims, actions, and uses in a mixed research study" (Collins et al., 2012, p. 855). All in all, despite the fact that the MMR formulation stage needs to provide adequate information for primary-level research consumers in order to better understand authors' "mixed methodological way of thinking" (Onwuegbuzie, 2012, p.
204), the current findings demonstrated that the reporting of the aforementioned features, representing the MMR formulation stage, is far from satisfactory in an EFL setting. This methodological style of thinking or reasoning is what Greene (2007) describes as a mixed methods way of thinking. Accordingly, this finding might reveal a predisposition among EFL postgraduate students not to be mindful of their set of beliefs as to the nature of knowledge, training, ethics, knowledge accumulation, and quality benchmarks, coupled with the core notions of epistemology, ontology, and axiology (see Onwuegbuzie, Johnson, & Collins, 2009; Riazi, 2017). Transparency and Reporting Practices in Design-related Features The conspicuous absence of MMR studies with initiation and expansion purposes was certainly not unexpected, because their implementation requires MMR researchers to spend a great deal of time and money and demands a level of skill and expertise that is "beyond the capabilities of novice researchers," including M.A. students (Riazi, 2017, p. 72). With regard to MMR designs, the findings revealed that the EFL authors gave a disproportionate degree of emphasis to the various mixed methods research designs. More specifically, the current findings demonstrated that sequential designs (60%) were implemented more prevalently than concurrent designs (29%), which is inconsistent with prior studies in the social sciences (Christ, 2007; Collins, Onwuegbuzie, & Jiao, 2006) and, in particular, represents a pattern opposite to Hashemi and Babaii's (2013) findings (71.71% for concurrent designs vs. 24.88% for sequential designs) in applied linguistics. The rather higher employment of the sequential explanatory design might be related to the developmental nature of quantitative and qualitative data collection and analysis, which, in turn, makes sequential designs more straightforward for researchers to utilize (Creswell & Plano Clark, 2011; Ivankova & Greer, 2015; Morse & Niehaus, 2009). Reporting Practices with Regard to Integration-related Issues Because "the most dynamic and innovative of the mixed methods designs are mixed across stages" (Teddlie & Tashakkori, 2009, p. 46), the present study, following Fetters and Freshwater's (2015a) and Creswell's (2015) work, operationalized integration in terms of designs, methods, and interpretations, which can be located within the data collection, data analysis, and discussion and conclusion components of a given study. Although not satisfactory, the findings revealed that integration at the level of interpretation (29.4%), typically appearing in the discussion and conclusion sections, was reported more frequently than integration at the level of methods (27%) and design (25%), respectively. As such, despite the fact that meta-inferences, stemming from both quantitative and qualitative inferences, are a leverage that can help improve the quality of MMR findings and boost their value, the current findings, echoing Bryman's (2006) contentions and Hashemi and Babaii's (2013) findings, demonstrate that integration at the levels of interpretation, methods, and design has received scant attention in the EFL setting. This finding regarding integration strategies is also in vivid contrast with Creswell's (2015) recommendation that researchers need to explicate the specific strategies of integration (e.g., merging, building, connecting, embedding, and joint display) in a given study.
However, considering the prevalent use of narration at the point of interpretation, the current finding partially supports Fetters and Freshwater's (2015a) assertion that in research studies "where there was little or no integration provided during the methods or results, by default, integration through narrative in the discussion is critical" (p. 212). Taken together, it can be inferred that the quantitative and qualitative strands were simply presented sequentially, concurrently, or in a multilevel manner, with little interaction or intersection between them at any particular phase (i.e., method, design, or interpretation). Accordingly, such studies, as Brown (2014) asserts, might more "aptly be labeled multi-method research studies" (p. 9). To put it differently, the findings signify that merely using different MMR designs cannot guarantee a well-thought-out and sound mixed methods study. Therefore, the more a given study integrates across stages, the more mixed methods research, "as opposed to multiple studies, is taking place" (Yin, 2009, p. 42). The current practices in MMR with regard to integration issues cannot be considered satisfactory, because there was a minimal presence of integration across the three points of reference. This omission might be attributed to the complicated nature of integration (Fetters & Freshwater, 2015a), its elusive nature (Bryman, 2014), researchers' unfamiliarity with and difficulty in writing up MMR discussions and conclusions (Creswell, 2015), lack of expertise and awareness of its efficiency in MMR quality (Maxwell, 2016; Tashakkori, Teddlie, & Sines, 2012), and lack of training (Bryman, 2014; Creswell, 2015). Therefore, raising graduate students' consciousness about the value of integration at various points of inference, and adopting unanimous, steady, and innovative strategies to work out the mathematically challenging "integration equation of 1+1=3," might be a significant pedagogical practice to pursue in an EFL context seeking to "reap the rewards of the integration equation" (Fetters & Freshwater, 2015b, p. 204; see also Greene, 2015). Language of MMR or MMR Nomenclature The findings demonstrated that almost all of the authors of the selected studies failed to specify explicitly the name of the MMR design, sampling scheme, and integration procedure. Despite the fact that designating appropriate MMR nomenclature and terminology has been considered one of the indicators of scientific advancement (Creswell, 2015), the current findings revealed a considerably unfortunate state of affairs in designating methodologically specific notions for MMR studies in an EFL setting. This implies that EFL postgraduate students failed to apply and adopt the appropriate MMR terminology in their studies, and it seems that this context is drastically different "from the later, self-conscious development of mixed methods as a distinct methodology, which has been largely characterized by typological conceptions of design" (Maxwell, 2016, p. 20). In the absence of MMR terminology, an understanding of the breadth and depth of combining quantitative and qualitative strands in terms of design, method, and integration might be superficial, which further "problematizes the assumption that these are essential for the development and informed practice of mixed methods research" (Maxwell, 2016, p. 22). Moving forward, researchers need to be explicit about "these decisions, rather than leaving them hidden, and to consider the implications of the choice for the way that .......
the study can be interpreted" (Curtis, Gesler, Smith, & Washburn, 2000, p. 1012). Conclusion To gain a better understanding of mixed methods research and reporting practices in EFL contexts, we, drawing on methodological research synthesis, sought to describe and evaluate MMR in unpublished M.A. mixed methods theses spanning three decades. Retrospectively, under the microscope-led perspective, our findings singled out several patterns of strengths and weaknesses in the mixed methods research approach in an EFL context. The important message, based on the findings, is that most EFL theses take what Riazi (2017) has referred to as an eclectic mixed methods approach, in which authors try to expand the scope of their studies "by adding some breadth or depth to a predominantly qualitative or quantitative study without necessarily mixing the two methods in principle" (p. 35). Moving forward, under the telescope-led perspective, these patterns can inform the present status quo of conducting mixed methods research in EFL contexts and put future studies on the right path by presenting a set of recommendations to boost the strengths and amend the weaknesses. Accordingly, in order to depict both the strengths and weaknesses of the studies, we present a research agenda for conducting mixed methods research in Table 7. In this study, we examined unpublished M.A. theses over three decades. Future studies can expand the scope of this study by investigating the issue of quality in mixed methods research, synthesizing methodological issues in published articles. Furthermore, similar studies can be conducted cross-comparing published mixed methods research across different disciplines. Percentage of the MMR studies with regard to timing of the quantitative and qualitative strands
7,502.6
2020-07-24T00:00:00.000
[ "Education", "Linguistics" ]
Addressing Program Sustainability: A Time Series Analysis of the Supplemental Nutrition Assistance Program in the USA : Food stamps, or, more formally, the Supplemental Nutrition Assistance Program (SNAP), form a crucial support system for many individuals in the United States. The welfare benefit has been associated with improved nutritional outcomes, making it an effective strategy to address hunger. Additionally, it has been found to enhance labour market outcomes for recipients and contribute to higher birth weights among children born to SNAP recipient mothers. Furthermore, it has been linked to improved height and overall health outcomes. Overall, this benefit is a crucial resource during times of need and helps mitigate the adverse effects of economic fluctuations. Even with the manifold advantages of the program, there exist notable apprehensions about the program's long-term viability. SNAP has experienced significant expansion over more than 50 years: the number of recipients grew more than 14-fold, from approximately 2.9 million individuals in 1969 to over 41 million in 2022. This study utilizes data from the United States Department of Agriculture to investigate the temporal characteristics of this massive expansion. Augmented Dickey-Fuller tests are performed with and without trend and with optimally selected lag lengths. In all specifications, the presence of a unit root cannot be rejected. An overwhelming body of evidence thus suggests that the growth in SNAP beneficiaries is non-stationary. The results of the unit root tests lend credence to the argument that the historical growth in the number of SNAP beneficiaries is highly unpredictable and may pose significant challenges to policymakers. This study does not attempt to calculate fraud and abuse in the program, nor does it attempt to ascertain the number of beneficiaries that may not be worthy of receiving the benefits. Introduction A federal assistance program in the US named the Supplemental Nutrition Assistance Program (SNAP) offers help to low-income individuals and families buying food. The United States Department of Agriculture (USDA) started a trial experiment in 1939 that would later become the forerunner to the current Food Stamp Program. Through the scheme, producers were allowed to sell surplus food to low-income people at a reduced price. As part of the Agricultural Act of 1961, President John F. Kennedy's administration established a permanent Food Stamp Program. Participants in the voluntary program bought food stamps from the government, which they then used like cash to buy groceries. The program was only offered in a few places. In 1973, the Food Stamp Program was made available in all 50 states on a national scale. Participants received paper coupons that could be used to purchase food and were eligible based on their income and resources.
In 1996, President Bill Clinton's signing of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) resulted in significant modifications to the Food Stamp Program. In 2008, the program's name was officially changed to the Supplemental Nutrition Assistance Program (SNAP) to better represent its broader objective of improving nutrition. Paper coupons were replaced by Electronic Benefit Transfer (EBT) cards, improving efficiency and lowering fraud. The Great Recession of 2008-2009 led to a spike in unemployment and a rise in the need for food assistance. As a result, SNAP enrolment hit historic highs and helped millions of struggling households by offering crucial support. The year 2010 saw the passage of the Healthy, Hunger-Free Kids Act, which raised the nutritional requirements for school meals and made it easier for SNAP participants to get healthier food options. In 2014, SNAP underwent some modifications due to the Farm Bill, sometimes referred to as the Agricultural Act of 2014. It tightened some qualifying restrictions while funding pilot projects to examine cutting-edge approaches to promoting self-sufficiency. The COVID-19 pandemic caused a significant rise in unemployment and financial hardship, which sparked a spike in SNAP usage. The government temporarily increased SNAP benefits and eligibility to help those impacted by the pandemic. Millions of low-income people and families receive essential support from SNAP, which remains a key tool in combating hunger and food insecurity in the United States. SNAP issues electronic benefit transfer (EBT) cards to qualified people and families that can be used to buy qualified food items at accredited retail establishments, such as groceries, supermarkets, and farmers' markets. This support ensures that participants have the resources to buy various nourishing foods. By supplementing low-income households' food budgets, SNAP boosts their purchasing power. The quantity of benefits offered depends on several variables, including household size, income, and expenses. These extra funds allow recipients to stretch their limited budgets further and purchase more food. SNAP strongly emphasizes promoting nutrition by incentivizing consumers to buy nutritious foods. The program's rules specify which foods are acceptable, focusing on fruits, vegetables, whole grains, dairy products, and proteins. This helps ensure that participants have access to a balanced diet and essential nutrients.
SNAP members can select the foods that best suit their tastes and nutritional requirements. The program acknowledges that people and families come from various ethnic origins and have various food needs and preferences, so recipients are free to choose their food according to their needs. SNAP helps prevent hunger and lowers the likelihood of food insecurity among disadvantaged individuals by ensuring reliable access to food. It ensures that individuals and families do not go without food in times of financial difficulty, unemployment, or other disasters. Nutritionally sound eating habits are crucial for general health and well-being. SNAP promotes users' health outcomes (particularly those of children, expectant mothers, and those with chronic illnesses) by facilitating access to wholesome food. It supports healthy growth and development, brain health, and illness prevention. SNAP also has beneficial economic consequences by pouring federal funds into local economies: when participants utilize their benefits to buy food, it helps farmers and retailers, promoting regional economic growth and job creation. Annual adjustments are made to SNAP benefits to reflect increases in the cost of living. The changes are based on the Thrifty Food Plan (TFP), a formula that determines the cost of a nutritionally balanced diet at low expense. The TFP considers food costs, nutritional recommendations, and consumption trends. SNAP payment adjustments have historically been made to keep up with inflation, to guarantee that members can afford an acceptable amount of food. However, the adequacy of SNAP payments relative to the cost of living can change over time and between geographical regions. The consequences of increased food costs or other economic conditions, which can affect SNAP beneficiaries' purchasing power, may occasionally be only partially countered by benefit adjustments. It is crucial to highlight that there is continuous discussion and disagreement regarding the effectiveness of SNAP payments in reducing food insecurity and meeting participants' nutritional needs. The sufficiency of benefit levels, and prospective upgrades to better support low-income individuals and families, are frequently at the heart of policy discussions. The reach and benefits of SNAP can adjust in reaction to the economic cycle to offer more assistance when there is a downturn in the economy or greater need. More people and families may become eligible for SNAP during economic hardship, such as recessions or periods of high unemployment: as people face financial problems, they may come to meet the program's income and asset requirements. Consequently, SNAP membership tends to rise as more people seek assistance during economic downturns. Every year, SNAP benefits are modified to reflect the increased cost of living. Benefit levels may be raised to help recipients maintain their purchasing power and guarantee access to enough food during inflation or rising food prices. Usually, these changes are made in response to inflation rates and economic indices.
The government may put temporary measures in place to offer more significant support through SNAP during national crises or catastrophes, such as natural disasters or economic shocks. For instance, during the COVID-19 pandemic, the government enacted transient measures such as emergency allotments and broadened eligibility to offer greater support to people and families affected by the pandemic's economic effects. SNAP payments have a stimulating influence on the economy. Families and people who receive SNAP assistance frequently use the benefits to buy groceries, which boosts regional economies. Retailers, farmers, and other businesses involved in the food industry can benefit from increased SNAP funds during economic downturns, promoting employment and economic activity. It is important to note that specific modifications to the Food Stamp Program, such as eligibility standards, benefit amounts, and emergency assistance programs, are subject to legislative policy decisions and may change over time based on the political and economic environment. In this paper, the author looks at the time series properties of the number of SNAP recipients, analysing 54 straight years of data from 1969 to 2022. The period captures several vital milestones of American history: the oil crisis and economic turmoil of the 1970s; the economic recovery, inflationary period and savings and loan crisis of the 1980s; the post-cold-war economic expansion of the 1990s; the dot-com bubble crash and 9/11 terrorist attacks at the beginning of the 21st century; the market crash of 2008-09; and, more recently, the COVID-19 pandemic. Literature Review The reference section lists some of the most influential studies on nutritional assistance, food stamps, and other income-supporting assistance. The results are mostly unequivocally positive: income-supporting and nutritional-status-enhancing welfare programs like SNAP improve health status, reduce malnutrition, deliver superior labour market outcomes, improve newborn children's weight, reduce lifetime healthcare costs, reduce volatility in household access to essential resources, and so on. Hoynes, H., Schanzenbach, D.W., & Almond, D. (2016) concluded that having access to food stamps as a child lowers the risk of developing metabolic syndrome and boosts economic independence in women. Bailey, M.J., Hoynes, H.W., Rossin-Slater, M., & Walker, R. (2020) find that children with access to more financial resources before the age of five see increases in their adult human capital of 6% of a standard deviation, adult economic self-sufficiency of 3% of a standard deviation, adult neighbourhood quality of 8% of a standard deviation, and adult longevity of 0.4 percentage points, together with a 0.5-percentage-point reduction in the adult likelihood of being incarcerated. Research by Almond, D., Hoynes, H.W., & Schanzenbach, D.W. (2011) shows that pregnancies exposed to SNAP three months before delivery resulted in higher birth weights, with the largest improvements occurring at the lightest birth weights. Additionally, Almond, D., Hoynes, H.W., & Schanzenbach, D.W. (2011) observe slight but statistically insignificant reductions in newborn mortality. They conclude that both white and black mothers benefited from the substantial rise in income from SNAP, with an even more significant effect on the latter group. In an influential study, Rank, M.R., & Hirschl, T.A.
(2009) found that nearly half (49.2%) of all American children will live in a household that receives SNAP benefits at some point between the ages of 1 and 20. Families who required the program used it for brief periods but were also likely to return to it multiple times throughout the child's childhood. The proportion of children living in a food stamp family was strongly influenced by race, parental education, and the head of household's marital status. In another influential study, Peltz and Garg (2019) showed the close relationship between the lack of nutritional sufficiency, emergency medical care usage, and school absenteeism. In this paradigm, investments in expanding healthy supplementation programs like SNAP have solid implications for reduced healthcare costs and improved educational outcomes. Pinard et al. (2017) examined the close relationship between income supplementation via transfers like SNAP and the uptake of appropriate nutrition and poverty alleviation. Tach and Edin (2017) document the critical impact of transfer programs for the working poor in terms of getting ahead through positive long-term health improvements and better child health. Food assistance systems have played a crucial role in development and disaster mitigation efforts in other parts of the world. For example, Arora, Nabi, and Sarin (2023) looked at the effect of food assistance in helping India during the COVID-19 pandemic. George and McKay (2019) provide an extensive study of India's public food distribution system and its impact on ensuring food security for well over a billion people. Mooij (1998) provides a powerful analysis connecting the public food distribution system with the political economy in India. Mwaniki (2006) discusses the critical role of food distribution systems in ensuring nutritional security in Africa. Del Ninno, Dorosh, and Subbarao (2007) provide an excellent international contrast study connecting food distribution and nutritional security in South Asia and Africa. Espinosa-Cristia, Feregrino, and Isla (2019) discuss pre-existing and emerging concerns regarding food distribution in Latin America. These studies, covering vast continents like Asia, Africa, and Latin America, provide valuable insight into the critical nature of food distribution and its relationship with nutritional security, political economy, and public policy affecting billions of lives in countless developing nations. The reference section contains several more influential studies that strengthen and complement the abovementioned general results; the list is by no means exhaustive. Methodology and Research Methods The data is collected from the US Department of Agriculture's (USDA) Food and Nutrition Service. The USDA provides easily accessible data related to SNAP at https://www.fns.usda.gov/pd/supplemental-nutrition-assistance-program-snap. This study analyses the data from the national-level annual summary of participation and costs for 1969-2022. The author uses the annual average participation data for this analysis. Please note that participation numbers can vary monthly as new participants join the recipient rolls while some existing ones leave. The numbers are reported in thousands. The time series analysis is done using the following methodology. Let $t$ denote time, where $t = 1, 2, \ldots, T$.
Let $y_t$ denote the number of participants at time $t$. The stationarity of the data is checked using a unit root test; the Augmented Dickey-Fuller (ADF) test is used for this purpose. The standard Dickey-Fuller test involves fitting the model $$y_t = \alpha + \beta t + \rho\, y_{t-1} + \varepsilon_t,$$ where $\rho = 1$ represents the null hypothesis. Residual serial correlation may need to be considered when estimating the standard model parameters using OLS. This is addressed by the augmented Dickey-Fuller (ADF) test, which augments the standard model with $k$ lagged differences of the dependent variable. In more detail, it changes the standard model into the form $$\Delta y_t = \alpha + \beta t + \gamma\, y_{t-1} + \sum_{j=1}^{k} \delta_j\, \Delta y_{t-j} + \varepsilon_t.$$ Stationarity can then be checked in the ADF framework by testing the null hypothesis $\gamma = 0$. It should be noted that the ADF regression has a universal form: depending on the regression requirements, which result in different distributions of the test statistic, we can restrict either $\alpha$ or $\beta$ or both to zero. The test statistic distributions for the four potential cases are listed in Hamilton (1994, ch. 17). The estimation of the ADF model is conducted using a Generalized Least Squares (GLS) method, where the maximum lag length is chosen by the Schwert (1989) rule $$k_{\max} = \left\lfloor 12\, (T/100)^{1/4} \right\rfloor,$$ and the lags $k \in \{1, 2, \ldots, k_{\max}\}$ are examined. Because the series is first-differenced and $k$ lags are used, $k + 1$ observations are lost, and we are left with $T - k - 1$ observations to work with. Therefore, a longer time series is beneficial, as a large sample remains even after losing some observations to the selection of the optimal lag length. Results As Figure 1 exhibits, the number of SNAP recipients has dramatically increased, from about 2.9 million in 1969 to over 41 million in 2022. This represents a more-than-14-fold increase over 53 years, raising serious questions regarding the stationarity of the number of participants. A simple visual examination of the data also uncovers a robust upward trend in the number of beneficiaries between 1969 and 2022. The correlogram and partial autocorrelogram are provided in Figures 2 and 3. A correlogram, alternatively referred to as an autocorrelation plot, is a graphical representation of the correlation coefficients between a time series and its lagged values. Simply put, it demonstrates the degree of correlation between a specific data point and its preceding values across several time intervals. Every data point on the correlogram is associated with a particular lag, and the vertical position of the point indicates the value of the correlation coefficient at that lag. Positive correlation coefficients signify a positive association between present and past values, while negative coefficients suggest a negative association.
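To make the procedure concrete, the following minimal Python sketch reproduces the workflow described above using statsmodels. The file name snap_participation.csv and the column name participation are hypothetical placeholders for the USDA series; this illustrates the method and is not the author's original code.

```python
# Minimal sketch of the unit-root testing described above (not the author's
# original code). Assumes a CSV with a "participation" column holding the
# annual average SNAP participation in thousands -- a hypothetical layout.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

df = pd.read_csv("snap_participation.csv")  # hypothetical file name
y = df["participation"].to_numpy(dtype=float)
T = len(y)

# Schwert (1989) rule for the maximum lag length: k_max = floor(12 * (T/100)^0.25)
k_max = int(np.floor(12 * (T / 100.0) ** 0.25))

# ADF regressions with constant ("c") and with constant plus trend ("ct").
for regression in ("c", "ct"):
    stat, pvalue, usedlag, nobs, crit, _ = adfuller(
        y, maxlag=k_max, regression=regression, autolag="AIC"
    )
    print(f"regression={regression}: ADF stat={stat:.3f}, p={pvalue:.3f}, "
          f"lags={usedlag}, 5% crit={crit['5%']:.3f}")
# A test statistic less negative than the critical value means the unit-root
# null cannot be rejected, consistent with the paper's finding.
```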
Correlograms are a valuable tool for identifying recurring patterns at particular time lags and uncovering any seasonal patterns in the data. Peaks occurring at consistent intervals suggest the presence of probable seasonal patterns. A stationary time series generally displays correlation coefficients that decline quickly as the time lags between observations increase; if the coefficients continue to be significant at higher lags, this indicates the presence of non-stationarity. Correlograms are also valuable tools for identifying suitable parameters for autoregressive integrated moving average (ARIMA) models, since they provide insights into the order of the autoregressive (AR) and moving average (MA) components. If the correlogram does not decline quickly as the lag length is increased, the time series may be highly persistent, pointing to possible non-stationarity of the underlying series.
Figure 2. Correlogram of SNAP Beneficiaries with Different Numbers of Lags. Source: Generated using data from Supplemental Nutrition Assistance Program Participation and Costs, available from https://www.fns.usda.gov/pd/supplemental-nutrition-assistance-program-snap
A partial correlogram, commonly known as a partial autocorrelation plot, is an enhanced version of the correlogram. It illustrates the relationship between a specific data point and its previous values while controlling for the influence of the intervening lags within the time series. In essence, this metric quantifies the direct correlation between two specific points while excluding the impact of any intervening points, which is precisely what determines the order of autoregressive (AR) models. The partial correlogram is a useful tool for establishing the optimal order of the autoregressive (AR) component within an ARIMA model; it accomplishes this by identifying the lags that directly impact the model's overall performance. Differentiating between pure seasonal patterns and mixed seasonal and non-seasonal patterns can be achieved by examining the decay of the partial autocorrelation at various lags. Both the correlogram and the partial correlogram are essential tools for comprehending the dynamics and stability of a long time series dataset. These graphical representations aid in the identification of latent cyclic, trend, and seasonal patterns that may not be readily discernible in the raw data. This information holds significant importance in the context of forecasting and decision-making.
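Correlograms and partial correlograms such as those in Figures 2 and 3 can be generated in a few lines; a sketch follows, reusing the y series from the previous snippet.

```python
# Sketch of how correlograms like Figures 2 and 3 can be produced with
# statsmodels; "y" is the participation series loaded in the previous snippet.
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(y, lags=20, ax=ax1)    # correlogram: slow decay suggests persistence
plot_pacf(y, lags=20, ax=ax2)   # partial correlogram: informs the AR order
plt.tight_layout()
plt.show()
```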
Correlograms and partial correlograms provide valuable insights for identifying suitable time series models, facilitating precise predictions and analysis. The rapid decay of correlations in correlograms and partial correlograms serves as an indication of stationarity, a crucial requirement for numerous time series models; non-stationarity can result in predictions and analyses that lack reliability. Unusual spikes or unanticipated patterns observed in the plots may indicate the presence of outliers or anomalies within the dataset, potentially impacting the analysis and forecasting outcomes meaningfully.
Figure 3. Partial Correlogram of SNAP Beneficiaries. Source: Generated using data from Supplemental Nutrition Assistance Program Participation and Costs, available from https://www.fns.usda.gov/pd/supplemental-nutrition-assistance-program-snap
Table 1 contains the results from the ADF test using GLS, where the optimal lag is selected by the Schwert (1989) criterion. The Augmented Dickey-Fuller (ADF) test employing Generalized Least Squares (GLS) estimation is a statistical technique for assessing the stationarity of time series data. The assumption of stationarity is of utmost importance for several time series models, as it guarantees that the statistical characteristics of the data remain constant over the period under consideration. The ADF-GLS test is particularly valuable in identifying stationarity because it can accommodate potential serial correlation and trend patterns in the data. The ADF-GLS test commences with the null hypothesis that the time series is non-stationary, indicating the presence of a unit root (a root of 1) and a stochastic trend. The alternative hypothesis posits that the observed data exhibit stationarity, indicating the absence of a unit root and the presence of consistent statistical characteristics. The test incorporates lags of the differenced series, which represent the degree of dependence between the present value of the series and its previous values. Incorporating these lagged differences in the ADF-GLS test allows for the examination of potential serial correlation and autocorrelation within the dataset. The ADF-GLS test provides flexibility in modelling trends by accommodating several possibilities, such as the absence of a trend, a linear trend, or a quadratic trend. Considering trends is crucial in analysing time series data, since such data frequently demonstrate temporal patterns; it is therefore imperative to account for these trends while conducting tests for stationarity. GLS estimation also accommodates heteroscedasticity, which denotes the presence of unequal levels of variation in the data across different periods. This matters because conventional OLS (Ordinary Least Squares) techniques presume a constant variance, a condition that may not hold for time series data.
The Augmented Dickey-Fuller Generalized Least Squares (ADF-GLS) test employs critical values obtained from statistical tables to ascertain the significance of the derived test statistic. If the value of the test statistic is more negative (or less positive) than the critical values, the null hypothesis of non-stationarity is rejected in favour of the alternative hypothesis of stationarity. If the null hypothesis is rejected, it can be inferred that the time series exhibits stationarity: there is no stochastic trend, and its statistical characteristics are preserved over time. If the null hypothesis cannot be rejected, the data exhibit non-stationarity, and further analysis or modifications may be necessary before applying time series models. It may be noted that the presence of a unit root cannot be rejected for lags 1-10 at both the 1% and 5% levels. This powerful result indicates that the number of participants in the SNAP program may be following an explosive and unpredictable path, supporting the general observations from Figure 1. Table 1, however, does not explicitly adjust for various lags along with trends in a standard ADF model, which has been shown to have higher power. Therefore, the time series for the number of participants is subjected to more rigorous ADF tests with various lags (3, 5, 10) and trends; the results are presented in separate panels. Performing ADF-GLS tests with varying lag lengths is crucial to effectively evaluate the stationarity of a given time series dataset. The ADF test is a commonly employed statistical technique for assessing whether a time series is stationary or non-stationary; this determination holds significant importance, as stationarity is a fundamental assumption of numerous time series models. When implemented with the Generalized Least Squares (GLS) approach, the ADF test accommodates heteroscedasticity, allowing for fluctuating amounts of variance within the data across distinct periods. There are several advantages to conducting tests with various lag lengths. Time series data can manifest diverse patterns, including varying seasonality, autocorrelation, and trends; by testing with various lag lengths, we can capture a diverse array of potential patterns in the dataset. Certain patterns may only become evident at specific lags, and testing with different lags helps avoid overlooking these significant features. Too short a lag length risks bias from residual serial correlation, while too long a lag length reduces efficiency and power; by conducting tests with various lag lengths, one can balance these tradeoffs and gain a complete understanding of the stationarity of the data. Time series data can also exhibit different frequencies, encompassing daily, weekly, monthly, or yearly observations, and accurately capturing the underlying patterns may require different lag lengths at different frequencies. Conducting tests with different lag lengths is therefore crucial to ascertain the suitability of the test for the particular frequency characteristics of the data.
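A sketch of how the panel-style ADF-GLS tests with fixed lags (3, 5, 10) and a trend might be run is shown below. It assumes the arch package's DFGLS implementation of the Elliott-Rothenberg-Stock test; any equivalent implementation would serve equally well.

```python
# Sketch of ADF-GLS (DFGLS) tests with fixed lags and a trend, mirroring the
# paper's panels; uses the "arch" package (an assumption -- any implementation
# of the Elliott-Rothenberg-Stock test would do). "y" is defined earlier.
from arch.unitroot import DFGLS

for lags in (3, 5, 10):
    test = DFGLS(y, lags=lags, trend="ct")  # "ct": constant plus linear trend
    print(f"lags={lags}: stat={test.stat:.3f}, p={test.pvalue:.3f}")
    # Failing to reject at the 1% and 5% levels for every lag choice is what
    # the paper reports as evidence of a unit root.
```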
Sometimes, the association between a variable and its previous values may exhibit a complex pattern. The presence of intricate interrelationships can result in different lags becoming significant at different junctures within the time series, and conducting tests with numerous lags facilitates the identification of such intricate correlations. Various lag lengths also offer valuable insights for model selection: identifying stationarity at one lag length but not another can guide the selection of suitable models for the data. Considering these lag dependencies while constructing models is imperative, as the accuracy of analysis and predictions greatly hinges on picking an appropriate model. Statistical tests are susceptible to being influenced by minor alterations in the data or the underlying assumptions; including several lag lengths in the testing process enhances the robustness of the analysis, as it ensures that the conclusions on stationarity remain consistent across varied lag specifications. Using numerous lag lengths is also a preventive measure against data mining, a phenomenon in which a specific lag length is selectively chosen to align with the desired conclusion; including diverse lags in the analysis enhances objectivity and reduces susceptibility to bias. In all panels (A, B, and C), the unit-root null cannot be rejected for any of the chosen lag values, even after including the trend in the estimation. Therefore, the results presented in Table 1 and Figure 4 are entirely consistent: both establish the strong result that a unit root cannot be rejected for any reasonable lag length, with or without the inclusion of a trend. Conclusions The number of participants in the SNAP program increased dramatically (over 14-fold) between 1969 and 2022, and the sustainability of the welfare program has attracted significant attention. Using Augmented Dickey-Fuller tests, both in standard form with trend and various lags and with Generalized Least Squares estimation, the existence of a unit root cannot be rejected. In simple terms, the results mean that the growth in the number of beneficiaries over the last half-century has been explosive and unpredictable. While an essential program like SNAP is a lifeline for millions of deserving beneficiaries, this explosive growth raises serious questions regarding its sustainability. The significant increase in individuals receiving SNAP benefits underscores the criticality of a solid and comprehensive social safety net. The program is a crucial resource for individuals and families during economic hardship, guaranteeing their ability to obtain nourishing sustenance despite financial constraints. The provision of SNAP benefits not only mitigates food insecurity but also functions as an economic stimulant: using SNAP benefits for food purchases stimulates economic activity within grocery shops and food markets, supporting local companies and the overall economy. The increase in SNAP enrollment highlights the magnitude of poverty and financial instability inside the nation, and policymakers must focus on the underlying factors contributing to poverty to mitigate the long-term necessity for aid. The expansion of the Supplemental Nutrition Assistance Program (SNAP) necessitates continuous assessment by policymakers of the program's efficacy, criteria for eligibility, and strategies for reaching potential beneficiaries. Such constant evaluation guarantees its adaptability to evolving needs and demographic shifts.
Although SNAP plays a crucial role, the program's expansion presents significant budgetary problems. Policymakers are required to strike a delicate balance between the expansion of the program and broader fiscal obligations while ensuring the efficient use of resources. Sufficient nutrition is essential for the holistic maintenance of health and general welfare, and policies aimed at enhancing the availability of nourishing food through initiatives such as the Supplemental Nutrition Assistance Program (SNAP) can have enduring beneficial effects on public health indicators. Bütikofer, A., Løken, K.V., & Salvanes, K.G. (2019) specifically observe favourable impacts of nutritional input (of the kind SNAP would offer) on adult height, reduced health risks at age 40, and reduced infant diarrheal mortality. Gundersen, C., & Ziliak, J.P. (2003) estimated income volatility with and without food stamps and a variance decomposition of consumption using data from the Panel Study of Income Dynamics covering 1980-99. They found that food stamps decreased income volatility by around 12% and food consumption volatility by about 14% among households with a significant ex-ante risk of receiving assistance. Evans, W.N., & Garthwaite, C.L. (2014) find improvements in the self-reported health of impacted women using data from the Behavioral Risk Factors Surveillance Survey and, using data from the National Health and Nutrition Examination Survey, discover decreases in the likelihood that these same women will have dangerous levels of biomarkers. Bergmans and Wegryn-Jones (2020) examined the relationship between food insecurity and depression. Harper et al. (2022) conducted an extensive study of the nutritional assistance program during the COVID-19 pandemic and of SNAP's critical role in ensuring essential nutrition for a vast fraction of the population.
Figure 1. The Growth in the Number of SNAP Beneficiaries, 1969-2022. Source: Generated using data from Supplemental Nutrition Assistance Program Participation and Costs, available from https://www.fns.usda.gov/pd/supplemental-nutrition-assistance-program-snap
Figure 4. Augmented Dickey-Fuller Tests with Trend and Different Lags. Source: Author's calculations derived using data from Supplemental Nutrition Assistance Program Participation and Costs, available from https://www.fns.usda.gov/pd/supplemental-nutrition-assistance-program-snap
6,447
2023-12-31T00:00:00.000
[ "Economics" ]
Modelling of multi-component droplet evaporation under cryogenic conditions . The vaporization of drops of highly vaporizable liquids falling inside a cryogenic environment is far from being a trivial matter, as it requires harnessing specialized thermodynamic and physical equations. In this paper, a multi-component falling droplet evaporation model was developed for simulating the spray cooling process. The falling speed of the sprayed droplets was calculated with momentum equations considering the three forces (gravity, buoyancy and drag) applied to a droplet. To evaluate the mass and heat transfer between a sprayed droplet and the surrounding gas phase, a gaseous boundary film of sufficient thinness was assumed to envelope the droplet, while the Peng-Robinson equation of state was used for estimating the phase equilibrium properties at the droplet's surface. Based on the relevant conservation equations of mass and energy, the key properties (such as temperature, pressure and composition) of the liquid and gas phases in the tank during the spray process could be simulated. To conclude, the simulation algorithm is proposed. Introduction A wide body of studies exists on the vaporization of drops of multi-component liquids at moderate and high temperatures. By contrast, the same process has been the subject of relatively little research when it occurs at very low temperatures, especially under cryogenic conditions. This scarcity of literature is mainly due to the need to resort to complex thermodynamics and to access specialized thermal and transport equations. The matter is all the more complex when the drops follow a falling motion that adds a dynamic dimension to the subject. The vaporization of drops of Liquefied Natural Gas (LNG) well exemplifies this statement. Natural Gas (NG) is a flexible fuel that is used extensively in power generation, industrial and household consumption, as well as the production of advanced petrochemical derivatives. Compared with other fossil fuels, natural gas creates lower emissions of greenhouse gases and local pollutants, and is therefore expected to play a greater role in the future global energy mix. Natural gas can be delivered either by high-pressure pipelines or, depending on the location of the gas field and the security of supply, it can be liquefied and then transported by Liquefied Natural Gas Carriers (LNGC) [1]. This part of the voyage is called the laden voyage. On arrival, the liquefied natural gas is unloaded and the LNGC travels to another loading terminal. This voyage is called the ballast voyage, due to the fact that the ship's ballast compartments are full of water while the LNG tanks are almost empty. Some LNG remains in the tanks, as the LNGC uses LNG as fuel. The Liquefied Natural Gas (LNG) is stored in highly insulated storage tanks at pressures slightly above atmospheric and temperatures close to boiling (≈ 111 K). During the laden and ballast voyages, some of the LNG will vaporize due to unavoidable heat ingress into the storage tank from its surroundings. The generated vapour is used as fuel gas in the main engines to drive the ship. During the ballast voyage, the temperature of the tank significantly increases. In order to avoid excessive vapour generation during the next loading operation, the tanks are cooled down for a few days before loading: spraying some LNG at the top of each tank cools down the vapour as well as part of the insulation system.
Obviously, vapour and liquid temperatures, pressure and compositions have a significant impact on the spraying operation. The aim of this paper is to propose a numerical method for simulating this spraying process. A multi-component falling droplet evaporation model is developed to estimate the mass and heat transfer between the sprayed droplets and the surrounding gas phase. The thermodynamic aspects are dealt with using the Peng-Robinson Equation of State. The model rests on the following assumptions: 1. The set of connected storage tanks (as shown in Fig. 1) is considered to form an isolated system with no mass or heat exchange with its surroundings. The spraying time is normally quite short compared to the duration of LNG transportation, and thus the heat ingress from the surroundings is relatively small. 2. The NG phase has uniform properties (temperature, pressure and composition). 3. The LNG phase also has uniform properties (temperature, pressure and composition). The vaporization of the liquid phase is ignored, as is the mass and heat exchange between the LNG phase and the NG phase. 4. Each droplet has uniform properties (temperature and composition), and its shape is assumed to be approximately spherical. As a simplification, the droplet distribution is assumed to be monodisperse, so that the droplets generated in a short interval of time ($\Delta t_{layer}$) can be grouped into a "droplet layer": all drops of a given layer will have the same vaporization history as well as the same falling kinetics, and will come back to the LNG phase at the same time (in case they are not totally vaporized before reaching the liquid phase). This notion of "droplet layer", in which all droplets have the same properties, is useful to reduce computation time. LNG spraying process - the modelling approach The modelling approach uses mass and heat balances to predict the physical properties (temperature, pressure and density) and the amounts of the bulk gas phase (NG), the liquid phase (LNG) and the sprayed droplets in the storage tanks. The approach consists in estimating the mass and heat exchanges between the sprayed droplets and the bulk gas phase with suitable models. In this section, the models and methods used to describe the sprayed droplets, the gas phase and the liquid phase are detailed. Droplet modelling In the spraying process, each droplet falls, and as it falls it exchanges mass and heat with the bulk gas. Therefore, a multi-component falling droplet evaporation model, including motion equations and evaporation rate equations (for estimating heat and mass transfer between droplets and gas phase), is developed to predict the velocity, temperature, composition and mass of the sprayed droplets. While the general mass and heat transfer equations remain conventional ones, the main difficulty of this task resides in the requirement to dispose of an equation of state, as well as property correlations, that fit the low-temperature conditions. (a) Motion of a droplet As shown in Figure 2, three forces, gravity ($F_G$), buoyancy or Archimedes' force ($F_A$), and drag force ($F_D$), act on a liquid droplet once it leaves the sprayer. In Figure 2, $u_d$ is the velocity vector of the droplet, and $\theta_d$ is the angle between the droplet's motion and the direction of gravitational pull. According to Newton's second law, projected along and across the direction of motion, we obtain the governing equations (1) and (2), which in vector form read $$m_d\, \frac{d\mathbf{u}_d}{dt} = \mathbf{F}_G + \mathbf{F}_A + \mathbf{F}_D,$$ where $m_d$ is the mass of the droplet.
By assuming the droplets to be spherical, $m_d$ can be calculated from $$m_d = \frac{4}{3}\,\pi r_d^3\, \rho_d,$$ and the magnitudes of $F_G$, $F_A$ and $F_D$ can be calculated using $$F_G = m_d\, g, \qquad F_A = \frac{4}{3}\,\pi r_d^3\, \rho_g\, g \ \text{(directed upward)}, \qquad F_D = \frac{1}{2}\, C_d\, \rho_g\, \pi r_d^2\, \left|\mathbf{u}_d\right|^2 \ \text{(opposing the motion)},$$ where $g$ is the standard gravitational acceleration, $\rho_d$ is the droplet (liquid) density, $\rho_g$ is the density of the gas mixture, and $r_d$ is the droplet's radius. The drag coefficient $C_d$ is classically correlated with the Weber number (We) and the Reynolds number (Re). In this study, the empirical equations developed by Loth [2] are used (Eq. (8)). Re is obtained from $$Re = \frac{2\, \rho_g\, \left|\mathbf{u}_d\right|\, r_d}{\mu_g}.$$ The Weber number (We), which indicates the degree of shape deformation of the falling droplet, is the ratio of continuous-fluid stresses (which cause deformation) to surface tension stresses (which resist deformation): $$We = \frac{2\, \rho_g\, \left|\mathbf{u}_d\right|^2\, r_d}{\sigma_d}.$$ In the above equations, $\sigma_d$ is the surface tension of the liquid (droplet), and the reduced dynamic viscosity $\mu^*$ is defined as $$\mu^* = \frac{\mu_d}{\mu_g},$$ where $\mu_d$ and $\mu_g$ are the dynamic viscosities of the liquid (droplet) and the gas, respectively. In Loth's approach [2], the low-Weber-number limit of the correlation corresponds to the drag coefficient of a spherical solid particle, which is not deformable when falling, while the opposite limit corresponds to the drag coefficient of a bubble with maximum deformation. As discussed by Loth [2], this method can accurately predict the drag coefficients of various airborne droplets under the conditions We ≤ 12 and 400 ≤ Re ≤ 7000. (b) Evaporation of one droplet The evaporation model is employed to estimate the heat and mass exchange between the droplet and the gas phase. As shown in Figure 3, to evaluate mass and heat fluxes, a sufficiently thin gaseous boundary film is assumed to surround the droplet. Assuming a quasi-steady species balance around the droplet, the instantaneous evaporation mass flow rate ($\dot{m}_d$) of the droplet can be estimated from [3] $$\dot{m}_d = \sum_{i=1}^{N} \dot{m}_{i,d}, \qquad \dot{m}_{i,d} = 2\pi\, \rho_f\, D_{i,f}\, r_{V,i}\, Sh_i^*\, \ln\!\left(1 + B_{M,i}\right).$$ In the above equation, $N$ is the number of species in the droplet and $\dot{m}_{i,d}$ is the evaporation mass flow rate of species i in the droplet; note that $\dot{m}_{i,d}$ is negative in cases where species i condenses into the droplet. Furthermore, $\rho_f$ is the density of the gas film and $D_{i,f}$ is the mean diffusion coefficient of species i in the gas film. $r_{V,i}$ is the volume-equivalent partial radius of component i corresponding to its instantaneous volume fraction $w_i$ in the droplet, defined as [3] $$r_{V,i} = w_i^{1/3}\, r_d,$$ where $w_i$ can be estimated from $$w_i = \frac{x_{i,d}\, r_{i,UNIQUAC}}{\sum_{j=1}^{N} x_{j,d}\, r_{j,UNIQUAC}},$$ with $x_{i,d}$ the mole fraction of species i in the droplet and $r_{i,UNIQUAC}$ the molecular volume parameter of species i from the UNIQUAC model [4]. $Sh_i^*$ is the modified Sherwood number of species i; it accounts for the effect of a high mass transfer rate and is estimated by [3,5] $$Sh_i^* = 2 + \frac{Sh_{i,0} - 2}{F\!\left(B_{M,i}\right)}, \qquad F(B) = (1 + B)^{0.7}\, \frac{\ln(1 + B)}{B}.$$ $Sh_{i,0}$ is the Sherwood number of species i for a droplet with a small mass transfer rate, given by the Frössling correlation [6]: $$Sh_{i,0} = 2 + 0.552\, Re^{1/2}\, Sc_i^{1/3}.$$ The Reynolds number (Re) is computed with equation (9). $Sc_i$ is the Schmidt number of species i in the gas film, defined as $$Sc_i = \frac{\mu_{i,f}}{\rho_{i,f}\, D_{i,f}},$$ where $\mu_{i,f}$ and $\rho_{i,f}$ are the dynamic viscosity and the partial density of species i in the gas film, respectively. In equations (12) and (15), $B_{M,i}$ is the Spalding mass transfer number of species i, defined as $$B_{M,i} = \frac{Y_{i,S} - Y_{i,\infty}}{1 - Y_{i,S}},$$ where $Y_{i,S}$ and $Y_{i,\infty}$ are the mass fractions of species i at the inner side (close to the droplet surface) and the outer side (at infinity) of the gaseous film (see Fig. 3), respectively. $Y_{i,\infty}$ is the mass fraction of species i in the bulk gas; $Y_{i,S}$ can be estimated with the Peng-Robinson Equation of State (PR-EoS) [9], which is introduced in Section 3.4.
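As an illustration of the motion model above, the following minimal Python sketch integrates the droplet momentum balance explicitly. The Schiller-Naumann rigid-sphere drag law is used as a stand-in for Loth's deformable-drop correlation (Eq. (8)), and all property values are illustrative placeholders rather than LNG data.

```python
# Minimal sketch of the droplet momentum balance (gravity + buoyancy + drag),
# integrated explicitly. The Schiller-Naumann solid-sphere drag law stands in
# for Loth's correlation [2]; all property values are illustrative placeholders.
import numpy as np

g = 9.81          # m/s^2
rho_d = 440.0     # droplet (liquid) density, kg/m^3 -- placeholder
rho_g = 1.8       # bulk gas density, kg/m^3 -- placeholder
mu_g = 8e-6       # gas dynamic viscosity, Pa.s -- placeholder
r_d = 100e-6      # droplet radius, m

def drag_coefficient(Re):
    """Schiller-Naumann correlation for a rigid sphere (stand-in for Loth)."""
    return 24.0 / Re * (1.0 + 0.15 * Re**0.687) if Re > 0 else 0.0

m_d = 4.0 / 3.0 * np.pi * r_d**3 * rho_d
u = np.array([0.0, -2.0])          # initial velocity (x, z), m/s
dt, t_end = 1e-4, 0.05
for _ in range(int(t_end / dt)):
    speed = np.linalg.norm(u)
    Re = 2.0 * rho_g * speed * r_d / mu_g
    Cd = drag_coefficient(Re)
    F_G = np.array([0.0, -m_d * g])                            # gravity
    F_A = np.array([0.0, 4/3 * np.pi * r_d**3 * rho_g * g])    # buoyancy
    F_D = -0.5 * rho_g * Cd * np.pi * r_d**2 * speed * u       # drag
    u += dt * (F_G + F_A + F_D) / m_d                          # Newton's 2nd law
print("near-terminal velocity (m/s):", u)
```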
As illustrated in Figure 3, the thermal energy accumulated by the droplet is denoted $\dot{Q}_d$. Its expression is basically $$\dot{Q}_d = m_d\, C_{p,d}\, \frac{dT_d}{dt},$$ where $m_d$ is the mass of the droplet and $C_{p,d}$ is the specific heat capacity of the droplet at constant pressure. In practice, equation (19) is used to express the derivative $dT_d/dt$. To do so, it is necessary to dispose of another expression for $\dot{Q}_d$. This heat flux can be deduced from the Spalding heat transfer number of species i, $B_{T,i}$, defined as [3] $$B_{T,i} = \frac{C_{p,i,f}\, (T_\infty - T_d)}{\Delta_{vap,i}H + \dot{Q}_{d,i} / \dot{m}_{i,d}},$$ where $\dot{Q}_{d,i}$ is the contribution of species i to $\dot{Q}_d$ (the latter is obtained by summing the $\dot{Q}_{d,i}$ quantities over all the species); $C_{p,i,f}$ is the mean specific heat capacity of species i in the gas film; $T_d$ and $T_\infty$ are the droplet temperature and the bulk-gas temperature, respectively; and $\Delta_{vap,i}H$ is the specific enthalpy of vaporization of species i. Therefore, knowledge of $B_{T,i}$ makes it possible to estimate $\dot{Q}_d$: $$\dot{Q}_d = \sum_{i=1}^{N} \dot{m}_{i,d} \left( \frac{C_{p,i,f}\, (T_\infty - T_d)}{B_{T,i}} - \Delta_{vap,i}H \right).$$ For estimating the Spalding heat transfer number $B_{T,i}$, we refer to Brenn et al. [3] and Abramzon and Sirignano [5], who postulate that $B_{T,i}$ is coupled with the mass transfer number $B_{M,i}$ through $$B_{T,i} = \left(1 + B_{M,i}\right)^{\phi_i} - 1, \qquad \phi_i = \frac{C_{p,i,g}}{C_{p,i,f}}\, \frac{Sh_i^*}{Nu_i^*}\, \frac{1}{Le_i},$$ where $C_{p,i,g}$ is the mean specific heat capacity of species i in the bulk gas. $Le_i$ is the Lewis number of species i, defined [3,5] as $$Le_i = \frac{k_{i,f}}{\rho_{i,f}\, C_{p,i,f}\, D_{i,f}},$$ where $k_{i,f}$ is the thermal conduction coefficient of species i in the gas film. In equation (22), $Nu_i^*$ is the modified Nusselt number of species i, estimated by equation (24); following Abramzon and Sirignano [5], it was indeed assumed that such an equation could be used for an evaporating droplet: $$Nu_i^* = 2 + \frac{Nu_{i,0} - 2}{F\!\left(B_{T,i}\right)}.$$ $Nu_{i,0}$ is the Nusselt number of species i for a non-evaporating droplet, computed [3,6] using the following correlation, developed for spheres from experimental data on mass-transfer rates: $$Nu_{i,0} = 2 + 0.552\, Re^{1/2}\, Pr_i^{1/3},$$ where $Pr_i$ is the Prandtl number of species i, defined as $$Pr_i = \frac{C_{p,i,f}\, \mu_{i,f}}{k_{i,f}}.$$ Note that $B_{T,i}$ must be solved iteratively from equations (22)-(26), since $Nu_i^*$ itself depends on $B_{T,i}$; the droplet temperature derivative $dT_d/dt$ then follows by combining equations (19) and (21). As illustrated in Figure 3, the heat flux exiting the film is simply given by $$\dot{Q}_f = \dot{Q}_d + \sum_{i=1}^{N} \dot{m}_{i,d}\, \Delta_{vap,i}H.$$ The film around the droplet can be assumed to be at steady state (no energy or mass accumulation). Consequently, as shown in Figure 3, the energy balance equation for the film can be written as $$\dot{Q}_g = \dot{Q}_f.$$ Such an equation makes it possible to calculate $\dot{Q}_g$, the heat flux from the bulk gas into the film. The average properties of the film (such as $\mu_{i,f}$, $D_{i,f}$, $C_{p,i,f}$, $k_{i,f}$ and $\rho_{i,f}$) are estimated at the average film temperature $T_f$, given [3,5] by $$T_f = T_d + \frac{T_\infty - T_d}{3}.$$ It is worth noting that in the evaporation model $\dot{m}_{i,d}$ can be positive (indicating evaporation from the droplet) or negative (indicating condensation into the droplet); the heat fluxes ($\dot{Q}_d$, $\dot{Q}_f$ and $\dot{Q}_g$) can likewise be positive or negative. All computations described in this section are based upon one droplet. Gas phase modelling As noted in the above section, the mass and heat exchanges between a single droplet and the gas phase can be calculated with equations (12) and (29), respectively. Assuming $N_L$ layers of sprayed droplets in the tank, with each layer containing $N_d$ droplets, the total mass transferred into the bulk gas can be computed as $$\dot{m}_g = \sum_{l=1}^{N_L} N_d \sum_{i=1}^{N} \dot{m}_{i,d}^{(l)},$$ and the total heat flux from the bulk gas into the droplets as $$\dot{Q}_{g,tot} = \sum_{l=1}^{N_L} N_d\, \dot{Q}_g^{(l)}.$$ The temperature change rate of the bulk gas in the tank, $dT_g/dt$, can then be calculated from $$\frac{dT_g}{dt} = -\frac{\dot{Q}_{g,tot}}{m_g\, C_{p,g}},$$ where $m_g$ is the total mass of the gas phase and $C_{p,g}$ is the specific heat capacity of the gas.
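Because $B_{T,i}$ appears on both sides of the coupling relations (22)-(26), it must be found iteratively. The following Python sketch shows one possible fixed-point iteration for a single species, under the Abramzon-Sirignano forms given above; all numerical inputs are illustrative placeholders.

```python
# Sketch of the coupled Spalding-number iteration (after Abramzon & Sirignano
# [5]) for a single species; all inputs are illustrative placeholders. F(B) is
# the film-thickening correction used in the modified Sherwood/Nusselt numbers.
import numpy as np

def F(B):
    return (1.0 + B) ** 0.7 * np.log(1.0 + B) / B

def spalding_heat_number(B_M, Re, Sc, Pr, Le, cp_g_over_cp_f, tol=1e-8):
    """Solve B_T = (1 + B_M)**phi - 1, where phi depends on B_T through Nu*."""
    Sh0 = 2.0 + 0.552 * np.sqrt(Re) * Sc ** (1.0 / 3.0)   # Froessling
    Nu0 = 2.0 + 0.552 * np.sqrt(Re) * Pr ** (1.0 / 3.0)
    Sh_star = 2.0 + (Sh0 - 2.0) / F(B_M)
    B_T = B_M  # initial guess
    for _ in range(100):
        Nu_star = 2.0 + (Nu0 - 2.0) / F(B_T)
        phi = cp_g_over_cp_f * (Sh_star / Nu_star) / Le
        B_T_new = (1.0 + B_M) ** phi - 1.0
        if abs(B_T_new - B_T) < tol:
            break
        B_T = B_T_new
    return B_T_new, Nu_star

B_T, Nu_star = spalding_heat_number(B_M=0.08, Re=50.0, Sc=0.9, Pr=0.8,
                                    Le=1.1, cp_g_over_cp_f=1.0)
print(f"B_T = {B_T:.4f}, Nu* = {Nu_star:.3f}")
```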
Gas phase modelling As noted in the above section, the mass and heat exchanges between a single droplet and the gas phase can be calculated with equations (12) and (29), respectively. Assuming $N_L$ layers of sprayed droplets in the tank, with each layer containing $N_d$ droplets, the total mass transferred into the bulk gas can be computed with the following equation: The total heat flux from the bulk gas into the droplets can be computed with the following equation: The rate of temperature change of the bulk gas, $dT_g/dt$, in the tank can be calculated using the following equation: where $m_g$ is the total mass of the gas phase and $C_{p,g}$ is the specific heat capacity of the gas. Liquid phase modelling If a sprayed droplet reaches the liquid phase, or if part of the gas phase condenses (owing to a decrease in temperature), the liquid-phase properties need to be updated. Assuming that the influx fluid (from either the falling droplets or the condensed gas) is instantaneously mixed with the liquid phase, the new total molar amount of the liquid phase is calculated as follows: $N_{liq}^{old}$ and $N_{liq}^{new}$ are the total molar amounts of the liquid phase before and after mixing, respectively, and $N_{liq}^{in}$ is the molar amount of the influx fluid. The new mole fraction of each species in the liquid can be computed with this equation: $x_{i,liq}^{old}$ and $x_{i,liq}^{new}$ are the mole fractions of species $i$ in the liquid phase before and after mixing, respectively, while $x_{i,liq}^{in}$ is the mole fraction of species $i$ in the influx fluid. If the mixing enthalpy is assumed to be negligible, the energy balance equation is expressed as follows: Assuming that the heat capacity of the liquid phase $C_{p,liq}$ and the heat capacity of the influx liquid $C_{p,liq}^{in}$ are constant (i.e. temperature-independent), equation (36) can be converted to: This equation makes it possible to estimate the new liquid temperature $T_{liq}^{new}$.
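The instantaneous-mixing update of the liquid phase reduces to a mole balance, a species balance and a heat-capacity-weighted temperature average. A minimal sketch, assuming constant molar heat capacities and the weighting implied by equations (34), (35) and (37) (variable names are ours, not the paper's):

```python
import numpy as np

def mix_into_liquid(N_old, x_old, T_old, cp_old, N_in, x_in, T_in, cp_in):
    """Update the liquid phase after instantaneous mixing with an influx.

    N_*  : total molar amounts; x_* : mole-fraction arrays;
    T_*  : temperatures; cp_* : molar heat capacities (assumed constant).
    """
    N_new = N_old + N_in                             # mole balance, Eq. (34)-style
    x_new = (x_old * N_old + x_in * N_in) / N_new    # species balance, Eq. (35)-style
    # Negligible mixing enthalpy: the new temperature is the
    # heat-capacity-weighted mean of the two streams, Eq. (37)-style.
    T_new = (N_old * cp_old * T_old + N_in * cp_in * T_in) / (
        N_old * cp_old + N_in * cp_in
    )
    return N_new, x_new, T_new

# Example: 100 mol of cold LNG receiving 1 mol of warmer condensate.
N, x, T = mix_into_liquid(100.0, np.array([0.9, 0.1]), 112.0, 55.0,
                          1.0, np.array([0.8, 0.2]), 130.0, 60.0)
```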
Thermodynamic modelling In the simulation, an adequate thermodynamic model is necessary to express the composition of the gas film around the droplet (i.e. $Y_{i,S}$) and the densities of the liquid phase (LNG), the bulk gas phase (NG), the droplets, and the gas film. This requires resorting to an equation of state such as the Peng-Robinson Equation of State (PR-EoS) [7,8], which accounts for the specific behaviour of the various LNG molecules at low temperature and properly reflects the corresponding molecular interactions. At the droplet-gas film interface (see Fig. 3), the condition of vapour-liquid phase equilibrium is given by the equations: In this paper, the PR-EoS was not volume-translated [9-11], and the classical van der Waals mixing rules [12] were used: It is today acknowledged that the Soave [13] α-function is non-consistent [14-16], since it diverges at very high temperatures. This study, carried out under cryogenic conditions, is however absolutely not concerned by such an inconsistency, so that the Soave α-function was chosen (Eq. (41)). The fugacity coefficient can be calculated [12] from the equation: where, in the above equations, $P$ is the pressure, $R$ is the gas constant, $T$ is the temperature, $a_i$ and $b_i$ are the cohesive parameter and molar co-volume of pure component $i$, $v$ is the molar volume, $z_i$ is the mole fraction of component $i$, $T_{c,i}$ is the experimental critical temperature, $P_{c,i}$ is the experimental critical pressure, and $\omega_i$ is the experimental acentric factor of pure component $i$. In this paper, the binary interaction parameters $k_{ij}$ were predicted using the E-PPR78 model [17-19]. The droplet's properties are updated after each time step ($\Delta t$), which is preset for the simulation. The calculation steps for one droplet are listed below. 1.-2. First, we update the droplet's velocity, direction of motion and height with equations (1) and (2). 3. Next, we compute the evaporation mass flow rates of each component ($\dot{m}_{i,d}$) of the droplet with equation (12) (using Brenn's method [3]). 4. If the droplet has vaporized completely, we compute the heat flux $\dot{Q}_g$ from the bulk gas and then skip the next steps. 5. Now we compute the heat fluxes ($\dot{Q}_d$, $\dot{Q}_f$ and $\dot{Q}_g$) between the droplet, the gas film and the bulk gas with equations (21), (28) and (29). The quantity $dT_d/dt$ is obtained from equation (27), while $dT_g/dt$ is deduced from equation (33). 6. Then we compute the new mole amount ($n_d$) and composition ($x_{i,d}$) of the droplet with the equations: 7. Now we compute the new temperature ($T_d$) of the droplet using equation (27) and the following equation: 8. Finally, we compute the new molar volume ($v_{m,d}$) of the droplet using the PR-EoS with the new temperature ($T_d$) and composition ($x_{i,d}$), and then compute the new radius ($r_d$) of the droplet with the equation: In each simulation time step ($\Delta t$), a single droplet's properties (i.e. velocity $u_d$, angle between the direction of motion and gravity $\theta_d$, height above the liquid-phase surface $h_d$, radius $r_d$, temperature $T_d$, composition $x_{i,d}$ and mole amount $n_d$) are updated and are used as the previous properties in the next time step. Meanwhile, the droplet's evaporation mass flow rates for each component ($\dot{m}_{i,d}$) and the heat flux $\dot{Q}_g$, which are used for computing the gas-phase properties (according to Eqs. (31) and (33)), are output. The algorithm for computing the properties of one droplet is summarized in Figure 4. One shall remember that all droplets in the same layer are assumed to be identical.
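Gathering steps 1-8, one explicit time step for a single droplet can be sketched as below. The `eqns` bundle stands for the closures defined earlier (evaporation rates, heat fluxes, PR-EoS molar volume); it, the dictionary fields and all names are hypothetical scaffolding rather than the authors' code.

```python
import math

def advance_droplet(d, dt, gas, eqns):
    """One explicit time step for a single droplet (steps 1-8 sketch).

    `d` is a dict of droplet state (n, x, T, r, u, theta, h); `gas` is a
    dict of bulk-gas properties; `eqns` bundles the model closures.
    Returns the updated state, or None if the droplet fully vaporized.
    """
    # Steps 1-2: kinematics from the gravity/buoyancy/drag force balance
    d["u"], d["theta"], d["h"] = eqns.motion(d, gas, dt)
    # Step 3: per-species evaporation rates (Eq. (12)-style, kg/s)
    mdot = [eqns.evaporation_rate(d, gas, i) for i in range(len(d["x"]))]
    # Step 6: update per-species mole amounts, then the composition
    n_i = [d["n"] * xi - (mi / eqns.molar_mass(i)) * dt
           for i, (xi, mi) in enumerate(zip(d["x"], mdot))]
    d["n"] = sum(n_i)
    if d["n"] <= 0.0:          # Step 4: droplet fully vaporized
        return None
    d["x"] = [ni / d["n"] for ni in n_i]
    # Steps 5 and 7: heat flux into the droplet and temperature update
    Q_d = eqns.heat_flux_to_droplet(d, gas)
    d["T"] += Q_d / (eqns.mass(d) * eqns.cp_liquid(d)) * dt
    # Step 8: molar volume from the PR-EoS, then the new radius
    v_m = eqns.pr_eos_molar_volume(d["T"], gas["P"], d["x"])
    d["r"] = (3.0 * d["n"] * v_m / (4.0 * math.pi)) ** (1.0 / 3.0)
    return d
```

The explicit (first-order) update mirrors the preset time step $\Delta t$ described above; a stiffer coupling between the heat and mass fluxes would call for a smaller step or an implicit scheme.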
Modelling of one tank in one simulation time step After calculating the new properties of all spray droplet layers, the properties of the gas phase and the liquid phase in the tank can be updated as well. The computation steps are listed below. 1.-3. First, sum the mass and heat exchanged with all droplet layers (Eqs. (31) and (32)) and update the gas-phase mass and composition accordingly. 4. Compute the new temperature of the gas phase using equation (33) with $m_g^{new}$ and the following equation: 5. Compute the new volume of the gas phase as follows: where $V_{tank}$ is the total tank volume. 6. Calculate the gas pressure using the PR-EoS with the new temperature, volume and composition values. 7. If the gas phase condenses, compute the properties of the gas and the condensed liquid with the PR-EoS, then update the liquid-phase properties using equations (34), (35) and (37). The algorithm for updating the properties of liquid and gas in the tank is summarized in Figure 5. Modelling overview The algorithm of the master simulation program is shown in Figure 6. The sequence is as follows: 1. Set up the parameters and the initial values of the simulation: time step; time interval of one droplet layer; tank volume; tank height; number of sprayers; sprayer orientation; diameter of the sprayer's circular orifice; droplet size; volume flow rate of droplets; volume of the liquid phase in the tank; volume of the gas phase in the tank; liquid level in the tank; pressure in the tank; gas temperature; liquid temperature; number of components; gas and liquid compositions; number of droplets in each layer (computed from the number of sprayers, the volume flow rate, the droplet diameter and the time interval of one droplet layer); and the initial velocity (determined by the volume flow rate of droplets and the diameter of the sprayer's circular orifice), motion angle (depending on the sprayer orientation), radius, height, temperature, mole amount and composition of the droplets in the first layer. 2. Compute the properties of the gas, the liquid and the spray droplets tank by tank, following the instructions given in Section 4.2. 3. At each fixed time interval ($\Delta t_{layer}$), generate a new droplet layer (note that the new droplets' initial velocity, motion angle, size and height are determined by the parameters of the sprayer and the tank, and are therefore fixed during the simulation, whereas their temperature, mole amount and composition change with the updated properties of the sprayed liquid). 4. Compute the equilibration between tanks, if more than one tank is included in the spraying process. 5. Check the spraying time; if the spraying is not finished, go to the next simulation time step. Size changes of the falling droplets Depending on the initial conditions (such as the bulk-gas pressure, temperature and composition, and the droplet temperature and composition), the size of a sprayed droplet may decrease, remain constant or increase as it falls. A typical example showing the partial vaporization of a liquid droplet during spraying (falling through a hotter gaseous environment) is shown in Figure 7. Conversely, the gas phase may condense onto a sprayed droplet, making it increase in size. An example is shown in Figure 8: the droplet increases in size during the first 10 s of its fall before becoming almost constant in size as the droplet and the gas phase reach equilibrium. Finally, if the droplet and the bulk gas phase are close to equilibrium when the spraying starts, the size of the sprayed droplet remains almost constant during its fall, as shown in Figure 9. Example case of spraying process An example of simulated gas-phase pressure and temperature variations over time is presented in Figure 10. For this simulation, only the effect on the vapour phase was modelled: no heat ingress from the surrounding environment was considered. As expected, the pressure and temperature of the gas phase decrease during the spraying process. This example highlights the usefulness of the spraying process for cooling down a liquid-gas system made up of a cold liquid phase and a hot gas phase. The tool we propose is capable of predicting the kinetics of a cooling process and could be used to automatically control the temperature of an LNG tank during a laden voyage. Conclusion This paper addressed a relatively complex scenario occurring in a cryogenic process, in which the cooling phase consists of drops of a highly vaporizable, multicomponent liquid that are sprayed into the gaseous environment they cool and in which they evaporate during their fall. The proposed approach offers an overall simulation of all the thermal and kinetic aspects of this process, using a set of numerical methods combined with physics and thermodynamics matching the low-temperature conditions. Such a model of the evaporating multi-component falling droplet enables the computation of the mass and heat exchanges between the droplet and the gas phase. The simulation allows the prediction of the properties of the gas and liquid phases in the tank during the spraying process. The complete simulation algorithm that has been developed can be of interest to the oil and gas community facing similar heat and mass transfer problems under low-temperature conditions.
Generations of apex locators: which generation are we in? Endometrics is one of the key factors responsible for the success of endodontic therapy. Electronic determination of working length has gained enormous popularity, owing to its accuracy and predictability. The literature is flooded with self-proclaimed generations of apex locators. This article aims at a concise description of the actual scientific rationale behind the generations in order to diminish the related perplexity. INTRODUCTION Endometrics, the science of determining working length (WL) in endodontics, holds high significance in the success of endodontic therapy. In modern endodontics, electronic WL determination by means of electronic apex locators has become an integral component of the treatment protocol. The literature is full of details regarding these fascinating electronic machines. However, their categorisation in chronological order has somehow always been confusing. One convenient method of segregating apex locators is to divide them into different generations. This paper deals with simplifying this categorisation. FIRST GENERATION APEX LOCATORS These apex locators use the resistance method for determining the WL [1]. Basically, these instruments measured the opposition to the flow of direct current (resistance), hence the name resistance-based apex locators. Initially, an alternating current with a 150 Hz sine wave was used (Root Canal Meter, 1969), but pain was felt by the patient due to the high currents. Therefore, modifications were made and new machines using currents of less than 5 microamperes were introduced (Endodontic Meter S II, Kobayashi, 1995). Since these machines were not found to be accurate, research in this field continued to develop. SECOND GENERATION APEX LOCATORS These apex locators use the impedance method for determining the WL. Basically, these instruments measure the opposition to the flow of alternating current (impedance), hence the name impedance-based apex locators. These units utilize a current of a single frequency to perform the task. Formatron IV [2], Sono Explorer [3] and Endocater are a few examples of this generation, almost all having the similar drawback of inaccurate readings, especially in the presence of irritants in the canal [4,5]. THIRD GENERATION APEX LOCATORS These apex locators use two frequencies instead of a single one to measure the impedance in order to determine the WL. By this scientific rationale, they should be called "comparative impedance" apex locators. However, since the impedance of any given circuit is influenced by the frequency of the current flow, they are known as frequency-based apex locators. The credit of being the first apex locator in this category goes to Endex [6]. However, it had the drawback of requiring calibration for each canal before use. Later came Root ZX, which did not require any calibration [7]. It uses two different frequencies of 400 Hz and 8 kHz simultaneously to measure the impedance in the canal. It then determines a quotient value by dividing the 8 kHz impedance value by the 400 Hz impedance value. The reading of the minor diameter is indicated when the quotient value is 0.67 [8]. These apex locators had the upper hand over their predecessors in terms of accuracy and reliability. Other units falling into this category are AFA, Neosono Ultima EZ, Justy II, etc.
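The ratio principle of this generation is simple enough to express in a few lines. The Python sketch below illustrates the Root ZX-style quotient of impedance magnitudes at 8 kHz and 400 Hz, with 0.67 as the reading for the minor diameter [8]; the function and the threshold handling are illustrative, not device firmware.

```python
def apex_quotient(z_8khz, z_400hz, threshold=0.67):
    """Root ZX-style ratio method (sketch).

    The quotient of the impedance magnitudes measured at 8 kHz and
    400 Hz drops toward about 0.67 as the file tip approaches the
    minor (apical) diameter, regardless of canal contents.
    """
    q = abs(z_8khz) / abs(z_400hz)
    return q, q <= threshold  # quotient and an "apex reached" flag
```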
FOURTH GENERATION APEX LOCATORS These apex locators use multiple frequencies (2-5) to measure the impedance in order to determine the WL [9]. A multi-frequency measurement system is used to calculate the distance from the tip of the file to the foramen by measuring changes in impedance between two electrodes. Unlike the third generation, these units do not use the impedance value in a mathematical algorithm alone to assess the WL; instead, they utilize resistance and capacitance measurements and compare them with a database to measure the distance of the file from the apex of the canal. This technology presumably leads to less sampling error and more consistent readings. The CanalPro apex locator (Coltene) belongs to this category. The measurements in the CanalPro apex locator are performed using AC signals at two frequencies. The frequencies are alternated rather than mixed, as is done in other apex locators, thus eliminating the need for signal filtering and the noise caused by non-ideal filters. The RMS (root mean square) level of the signal is measured, rather than its amplitude or phase. The RMS value is much more immune to various kinds of noise than other parameters of the measured signal. The two-field display, with file tracking over the whole canal length and an enlarged apical zoom, makes this apex locator uniquely different from the existing third-generation ones. The apex locators of this generation are, so far, the best in their category owing to their high accuracy and reliability. For a clinician looking for high accuracy and reliability in WL determination, the fourth-generation apex locators would be the most ideal, as they can be trusted the most. CONCLUSION AND FUTURE GENERATIONS A couple of companies are coming up with new apex locators proclaimed to be of the fifth (dual frequency ratio type) and sixth generations [10]. However, there is no clear distinction as to how these are technically different from the existing fourth-generation apex locators over which their superiority in performance is claimed. Therefore, before a seventh- or eighth-generation apex locator arrives, possibly a cordless one proclaimed to be the most superior, a critical analysis needs to be done of the technical specifications of all apex locators beyond the fourth generation.
Mechanisms of regulation of glycolipid metabolism by natural compounds in plants: effects on short-chain fatty acids Background Natural compounds can positively impact health, and various studies suggest that they regulate glucose-lipid metabolism by influencing short-chain fatty acids (SCFAs). This metabolism is key to maintaining energy balance and normal physiological functions in the body. This review explores how SCFAs regulate glucose and lipid metabolism and the natural compounds that can modulate these processes through SCFAs. This provides a healthier approach to treating glucose and lipid metabolism disorders in the future. Methods This article reviews relevant literature on SCFAs and glycolipid metabolism from PubMed and the Web of Science Core Collection (WoSCC). It also highlights a range of natural compounds, including polysaccharides, anthocyanins, quercetin, resveratrol, carotenoids, and betaine, that can regulate glycolipid metabolism through modulation of the SCFA pathway. Results Natural compounds enrich SCFA-producing bacteria, inhibit harmful bacteria, and regulate operational taxonomic unit (OTU) abundance and the intestinal transport rate in the gut microbiota to affect SCFA content in the intestine. However, most studies have been conducted in animals, lack clinical trials, and involve few natural compounds that target SCFAs. More research is needed to support the conclusions and to develop healthier interventions. Conclusions SCFAs are crucial for human health and are produced mainly by the gut microbiota via dietary fiber fermentation. Eating foods rich in natural compounds, including fruits, vegetables, tea, and coarse-fiber foods, can hinder harmful intestinal bacterial growth and promote beneficial bacterial proliferation, thus increasing SCFA levels and regulating glucose and lipid metabolism. By investigating how these compounds impact glycolipid metabolism via the SCFA pathway, novel insights and directions for treating glucolipid metabolism disorders can be provided. Introduction Glucose metabolism and lipid metabolism are important processes in the maintenance of energy homeostasis and normal physiological functions and play crucial roles in maintaining intracellular homeostasis and energy balance [1]. The molecular mechanism of glucose metabolism involves multiple steps, such as glucose uptake, glycogen synthesis, glycogenolysis, glycolysis, and the tricarboxylic acid cycle [2]. During fasting, glycogenolysis is the main source of glucose released into the bloodstream [3]. Disorders of glucose metabolism are characterized mainly by hyperglycemia, dyslipidemia, fatty liver, and atherosclerosis [4]. Lipids, including triglycerides, phospholipids and steroids, are important components of the body [5]. The regulation of lipid metabolism, such as lipid uptake, synthesis and hydrolysis, is essential for the maintenance of cellular homeostasis [6]. Dysregulation of lipid metabolism in the human body can cause a variety of diseases, such as hyperlipidemia [7], osteoporosis [8], atherosclerosis [8], obesity and diabetes [9].
Natural compounds are widely distributed in plants and their derived materials [10]. Natural compounds are secondary metabolites of plants [11], and they include nutrients essential for health, such as proteins, carbohydrates, vitamins and minerals, as well as other chemicals, such as phenolic acids, flavonoids and other phenolic substances [12]. The ability of natural compounds to prevent chronic diseases by preventing oxidative stress and inflammation, inducing autophagy, and interacting with the gut microbiota, among other signaling pathways, together with their nutritional effects, has been widely studied [13]. Natural compounds have been shown to regulate the body's metabolism and reduce the risk of chronic diseases such as type 2 diabetes, atherosclerosis and cancer [14]. Recent studies have indicated that many natural compounds affect the production of short-chain fatty acids (SCFAs), thereby ameliorating disorders of glucolipid metabolism [15,16]. SCFAs are produced by anaerobic fermentation of dietary fiber and resistant starch by the microbiota in the gut [17]. SCFAs are a type of saturated fatty acid, among which acetic, propionic and butyric acids are the most abundant in the human body; they can maintain the integrity of the intestinal barrier and influence the production of gastrointestinal mucus [18]. Studies in recent years have shown that SCFAs play important roles in human energy metabolism and energy supply [19]. As microbial metabolites, they can maintain host health through mechanisms related to the regulation of gut barrier function, microbial activity, and glucose homeostasis [20]. Short-chain fatty acids can help prevent and manage diabetes by increasing insulin sensitivity, improving glucose homeostasis and inhibiting hepatic gluconeogenesis [21]. SCFAs also inhibit the production of fat [22]. Acetate reduces lipid accumulation, inhibits the lipolysis of white adipose tissue and induces browning of white adipose tissue, which reduces body fat by increasing thermogenesis [23]. Increasing SCFA levels may therefore be an effective therapeutic modality against dysglycolipidemia [24]. SCFAs may regulate glucose homeostasis by decreasing glucose production [25], increasing glucose uptake and glycogen synthesis in the liver [26], and increasing pancreatic β-cell mass and regulating insulin secretion [27]. In regulating lipid metabolism, short-chain fatty acids may improve lipid metabolism by decreasing inflammation in adipose tissue [28], increasing the lipid-buffering capacity of adipose tissue, and enhancing fatty acid oxidation and mitochondrial function in the liver and skeletal muscle [24]. In this paper, we comprehensively review the relevant roles of SCFAs in glycolipid metabolism; introduce potential natural compounds that may ameliorate the dysregulation of glycolipid metabolism by modulating the SCFA pathway, such as polysaccharides, anthocyanins, quercetin, resveratrol, carotenoids, and betaine; and briefly discuss the ways in which each of these compounds influences glycolipid metabolism through the SCFA pathway, with the goal of providing new ideas for the treatment of glycolipid metabolism dysregulation.
Generation of SCFAs SCFAs are produced by the fermentation of indigestible dietary components, including complex carbohydrates [29] and dietary fiber, by the gut microbiota. The colon is the primary site of SCFA production within the human body, as it harbors the highest density of the gut microbiota [30]. While SCFAs are derived primarily from the fermentation of microbially accessible carbohydrates (MACs), they can also be byproducts of bacterial amino acid metabolism. The relative contribution of amino acid metabolism to overall SCFA production is not well understood, but the total intakes of protein and fiber are considered influential factors [31]. Owing to the physiological pH range of the colonic lumen, which typically falls between 5.6 and 6.6, the majority of SCFAs exist in their anionic form, rendering simple diffusion challenging [32]. After production, SCFAs are primarily transported from the colonic lumen into the colonic cells through passive diffusion and/or carrier-mediated transport [33]. SCFA concentrations vary along the length of the colon, with the highest levels observed in the cecum and proximal colon and a decline in concentration toward the distal colon. This gradient is likely due to the increased absorption of SCFAs mediated by the sodium-coupled monocarboxylate transporter SLC5A8 and the low-affinity H+-coupled monocarboxylate transporter SLC16A1 [34]. It may also be due to the greater availability of carbohydrates and water in the proximal portion of the colon than in the distal portion. The total SCFA concentration in the proximal colon is estimated to be 70-140 mM, whereas that in the distal colon decreases to 20-70 mM [35,36]. SCFAs are carboxylic acids containing 1-6 carbon atoms, including acetic, propionic, butyric, valeric, and caproic acids, of which acetic (C2), propionic (C3), and butyric (C4) acids are the most abundant; they are produced by anaerobic fermentation of dietary fiber (DF) in the intestines [35,37]. The molar ratio of C2, C3 and C4 is approximately 60:20:20 [38]. Acetate is the major anion in human intestinal contents, accounting for more than 50-60% of short-chain fatty acids, followed by propionate and butyrate in roughly equal amounts, with small amounts of the branched-chain fatty acids isobutyric and isovaleric acid, as well as small amounts of lactic acid and succinic acid [35,39]. The acetic acid production pathway is widely distributed in the bacterial community, whereas the propionic, butyric, and lactic acid production pathways are more conserved and substrate-specific [40]. The main dietary sources of acetate are acetate-containing foods such as pickles, cheese and other dairy products, processed meats, bread, wine and beer, together with bacterial breakdown of dietary fibers such as resistant starch, indigestible oligosaccharides and other plant polysaccharides. Although fructose is mainly absorbed in the small intestine, unabsorbed fructose reaches the colon, where it can be converted by the microbiota to acetic acid. Bacteria such as A.
muciniphila and Bacteroides spp. produce acetic acid by digesting food fibers in the colon. The gut bacteria that produce acetic acid also include Bifidobacterium spp., Akkermansia muciniphila, Prevotella spp. and Ruminococcus spp. After dietary fiber reaches the intestinal tract, the intestinal microbiota breaks down the dietary fiber through two metabolic pathways, glycolytic fermentation or acetogenesis, to produce acetate. Most of these species of gut bacteria produce acetic acid by fermenting pyruvic acid through acetyl-CoA. In addition, acetic acid-producing bacteria can also produce acetic acid from CO2 and H2 via the Wood-Ljungdahl pathway. Acetate can also be derived from the oxidative breakdown of alcohol in the liver. Acetic acid may also be produced from microbial fermentation of residual peptides and fats [35,41-45]. Propionic acid is produced mainly by Bifidobacterium spp. and mucin-degrading bacteria such as Akkermansia muciniphila [37]. There are three pathways for propionate formation in human gut bacteria: the succinate pathway, the acrylate pathway, and the propylene glycol pathway [46]. Among them, the phylum Bacteroidetes utilizes the succinate pathway to produce propionic acid via methylmalonyl coenzyme A. Bacteria of the class Negativicutes in the Firmicutes phylum also utilize the succinate pathway to produce propionic acid. In addition, propionic acid can be formed from organic acids such as succinic acid. Megasphaera elsdenii produces butyric acid when grown on glucose but produces propionic acid when grown on lactate. Some bacteria can also produce 1,2-propanediol from oligosaccharides, dihydroxyacetone phosphate or lactic acid, which is then further metabolized to propionic acid. In Roseburia inulinivorans from the human gut, propionate is generated from fucose via propylene glycol [47]. Butyric acid is produced by intestinal bacteria such as Faecalibacterium prausnitzii, Eubacterium rectale and Roseburia spp. [48], and by Ruminococcaceae, Lachnospiraceae, Anaerobutyricum hallii and Anaerostipes spp. [49]. Butyric acid is the preferred energy source for colonic epithelial cells and plays an important role in their metabolism and normal development [50]. Butyric acid is derived from carbohydrates through glycolysis: two molecules of acetyl coenzyme A combine to form acetoacetyl coenzyme A, which is then progressively reduced to butyryl coenzyme A. There are two different pathways for the formation of butyric acid from butyryl coenzyme A, either via butyryl coenzyme A:acetate coenzyme A transferase or via phosphotransbutyrylase and butyrate kinase [51]. Resistant starch (RS) is the main source of butyrate [52]. A study by Venkataraman et al. [53] reported that dietary supplementation with resistant starch increases fecal butyrate concentrations. A study by Louis et al. [54] revealed that although Faecalibacterium prausnitzii accounts for approximately 10% of human fecal bacteria, it accounted for only 4% of the butyryl coenzyme A transferase sequences identified in their study, suggesting that only a few strains of Faecalibacterium prausnitzii have the butyryl coenzyme A:acetate coenzyme A transferase gene and that the majority of strains are not butyrate producers.
SCFAs and human glucose and lipid metabolism Short-chain fatty acids provide approximately 10% of human caloric needs [35] and play important roles in the regulation of glucose metabolism and lipid metabolism [19]. Glycometabolism refers mainly to the way blood glucose is metabolized; blood glucose is derived from intestinal absorption, hepatic glycogenolysis, and gluconeogenesis [55]. Under fed conditions, carbohydrates in food are digested and processed by various glucosidases in the digestive tract, and the resulting monosaccharides are transported to various tissues as the main fuel for ATP production [56]. The liver plays a major role in controlling glucose homeostasis by controlling various pathways of glucose metabolism, such as gluconeogenesis, glycogenolysis, glycolysis, and glycogenesis, with glycolysis being essential for most cells, in which glucose catabolism for energy production is crucial [57]. Glucose is phosphorylated by hexokinase to form glucose 6-phosphate, which enters the glycolytic pathway and undergoes a series of enzyme-catalyzed reactions to produce pyruvate, along with ATP for energy, or is stored as glycogen [19]. Lipid metabolism, in turn, refers mainly to the way in which lipids are metabolized. Lipids are stored mainly in adipose tissue or in nonadipose tissues; excessive accumulation of triacylglycerol (TAG) and cholesteryl esters (CE) leads to abnormalities in lipid metabolism [55], and inactivation of the synthesis of TAG, a major energy substrate stored in adipose tissues, leads to temporal and spatial variations in fat absorption, a reduction in postprandial triglyceridemia, postprandial changes in gut hormone levels, and resistance to diet-induced obesity in rodents [58]. In contrast, elevated levels of circulating cholesterol are among the major risk factors for atherosclerosis [59]. SCFAs are able to regulate glucose and lipid metabolism by acting on the G protein-coupled receptors GPR43 and GPR41 in the terminal ileum and colon [60], with propionate being the most potent agonist of GPR41 and GPR43. Acetic acid is more selective for GPR43, whereas butyric and isobutyric acids are more active at GPR41 [61]. Andrew J. Brown et al. [62] reported that valerate (C5) activated GPR41 more effectively than acetic acid did, but acetic acid activated GPR43 more effectively than valeric acid did. GPR43 is also known as free fatty acid receptor 2 (FFAR2). Ikuo Kimura et al. [63] reported that short-chain fatty acid-mediated activation of GPR43 could inhibit lipid accumulation and ameliorate obesity. GPR41 is also known as free fatty acid receptor 3 (FFAR3). Ikuo Kimura et al. [64] reported that propionate was able to maintain metabolic homeostasis and body energy expenditure by directly modulating sympathetic nervous system (SNS) activity through GPR41 at the level of the sympathetic ganglion. Glucoregulatory functions of SCFAs in different metabolic tissues In the liver, Shoji Sakakibara et al. [65] reported that acetic acid (AcOH), administered as neutralized sodium acetate, directly activates AMPK, which in turn reduces the expression of genes such as glucose-6-phosphatase (G6Pase) and sterol regulatory element-binding protein-1 (SREBP-1) in rat liver cells. Research by Huating Li et al.
[66] indicated that butyrate can increase fibroblast growth factor 21 (FGF21) levels in the liver, stimulating the oxidation of long-chain fatty acids and decreasing glucose levels. Short-chain fatty acids (SCFAs) are capable of increasing the mRNA expression of glucose transporter 2 (GLUT-2) and glycogen synthase 2 (GYS2) in the liver, thereby reducing hepatic glycolysis and gluconeogenesis while increasing glycogen synthesis [17,67]. In adipose tissue, SCFAs can promote the release of adiponectin by increasing the expression of GPR41 and GPR43 [68,69], and adiponectin can increase glucose metabolism during glycogen breakdown in the liver, skeletal muscle, and brown adipose tissue (BAT) [70]. A study by Ikuo Kimura et al. [71] revealed that GPR43 activation mediated by short-chain fatty acids could inhibit insulin signaling in adipocytes, whereas acetate could suppress glucose uptake in adipocytes and promote glucose metabolism in other tissues. Skeletal muscle is considered the largest organ in the body and is responsible for approximately 80% of insulin-stimulated glucose uptake [72]. An increase in SCFAs can activate AMP-activated protein kinase (AMPK), leading to improved insulin sensitivity and increased glucose metabolism in skeletal muscle [73]. This phenomenon is likely attributable to the ability of AMPK to regulate the expression of GLUT4 [74], which is the predominant glucose transporter in skeletal muscle cells [19]. Glucose is transported into muscle cells through the GLUT4 transporter, thereby influencing glucose uptake [74]. Furthermore, a study by T. Fushimi et al. [75] indicated that in skeletal muscle, acetate may inhibit glycolysis by suppressing the activity of phosphofructokinase-1 (PFK-1). In the gut, SCFAs act as promoters of two crucial gut hormones, glucagon-like peptide-1 (GLP-1) and peptide YY (PYY) [17,76]. SCFAs can increase the secretion of plasma GLP-1 by upregulating the expression of GPR41, GPR43, PC1/3, and GCG. Subsequently, GLP-1 induces insulin secretion and inhibits glucagon secretion, effectively regulating blood glucose metabolism [77,78]. SCFAs increase the expression of PYY in enteroendocrine cells within the gut through two distinct pathways. Acetate, propionate, and butyrate stimulate GPR43, resulting in a slight increase in PYY mRNA levels. Additionally, propionate and butyrate induce a significant increase in PYY mRNA levels by inhibiting histone deacetylases (HDACs). PYY plays a critical role in regulating food intake and insulin secretion [79].
SCFAs can also increase the sense of satiety through the gut-brain axis. Acetate can inactivate AMP-activated protein kinase (AMPK) in the hypothalamus, leading to increased activity of acetyl-CoA carboxylase (ACC). This process, in turn, stimulates the expression of proopiomelanocortin (POMC) and predominantly GABAergic neurotransmission in the hypothalamus while reducing the expression of neuropeptide Y (NPY) and agouti-related peptide (AgRP). This cascade ultimately decreases appetite and food intake, preventing weight gain and reducing the risk of type 2 diabetes mellitus (T2DM) [17,80]. SCFAs can also control glucose and energy homeostasis through the gut-brain neural axis. Propionate activates the fatty acid receptor FFAR3 (GPR41), leading to increased c-Fos expression (a well-recognized marker of neuronal activation) and the activation of intestinal gluconeogenesis (IGN) gene expression through neural pathways such as the dorsal vagal complex (DVC), the C1 segment of the spinal cord, the parabrachial nucleus (PBN), and the hypothalamus. Butyrate, conversely, increases the level of cyclic adenosine monophosphate (cAMP), which induces upregulation of the glucose-6-phosphatase catalytic subunit (G6PC) and phosphoenolpyruvate carboxykinase 1 (PCK1) genes, directly activating IGN gene expression in intestinal cells [81]. Acetate can also provide fuel for the tricarboxylic acid (TCA) cycle in central nervous system (CNS) microglia. Studies have shown that germ-free (GF) mice have significantly lower acetate levels; microglia appear to utilize acetate to produce oxaloacetic acid (OAA) and to activate ATP-citrate lyase (ACL)-mediated conversion of citrate to acetyl-coenzyme A (Ac-CoA), supplying a carbon donor for the TCA cycle, the production of citrate and the generation of ATP [82]. Glycolysis is a primary means by which normal cells obtain energy under aerobic conditions. In the presence of oxygen, normal cells generate energy through mitochondrial oxidative phosphorylation (OXPHOS). However, when oxygen is limited, cells rely on glycolysis to produce ATP. In contrast, cancer cells undergo metabolic reprogramming to prioritize glycolysis for energy production, even in the presence of sufficient oxygen [83,84]. Histone deacetylases (HDACs) are evolutionarily conserved enzymes that remove acetyl modifications from histones, playing crucial roles in epigenetic gene silencing [85]. HDAC inhibitors can suppress c-Myc protein levels and increase the expression of peroxisome proliferator-activated receptor γ coactivator 1α (PGC1α) and peroxisome proliferator-activated receptor δ (PPARδ), driving oxidative energy metabolism. This process leads to an increase in fatty acid oxidation (FAO) and oxidative phosphorylation (OXPHOS) while weakening glycolysis and reducing ATP levels [86]. As HDAC inhibitors, short-chain fatty acids (SCFAs) can epigenetically regulate cellular metabolism [34]. Valerate and butyrate can increase the activity of mTOR (mechanistic target of rapamycin) and inhibit class I HDAC enzymes, leading to increased expression of CD25, interferon gamma (IFNγ), interleukin-2 (IL-2), and tumor necrosis factor alpha (TNF-α) in cytotoxic T lymphocytes (CTLs) through the glycolytic pathway, thereby enhancing cellular antitumor immunity [87]. Additionally, mTORC1-driven glutamine uptake can suppress the expression of glycolytic genes such as Slc2a1 (glucose transporter 1) and the hexokinase isoforms Hk2 and Hk3, thereby suppressing glucose metabolism in cancer cells [88].
In summary, SCFAs in different metabolic tissues are able to regulate glucose metabolism by decreasing glycolysis and gluconeogenesis, increasing insulin secretion, and increasing glycogen synthesis (Fig. 1).

Fig. 1 Glucose metabolism is affected by SCFAs

SCFAs have systemic effects on glucose metabolism to varying degrees in different tissues. The major metabolic pathways involve the liver, adipocytes, skeletal muscle, intestine, pancreas and the gut-brain axis: SCFAs impact the human liver through glucose transport, FGF21 and AMPK; adipocytes through GPR41 and GPR43; skeletal muscle through AMPK and PFK-1; and the human intestine via GPR41, GPR43, PC1/3 and GCG, subsequently affecting the brain via the brain-gut axis.

Lipid regulatory functions of SCFAs in different metabolic tissues In the context of the liver, a study by Hua Zhou et al. [67] demonstrated that the exogenous provision of short-chain fatty acids (SCFAs) can increase the protein level of GPR43 and reduce the protein level of ACC within hepatic tissue. Concurrently, SCFA supplementation was observed to upregulate the expression of AMPK, collectively contributing to the improvement of lipid metabolism. Furthermore, there is evidence suggesting that acetate can induce the phosphorylation of AMPKα, which in turn leads to the induction of PPARα expression in hepatocytes. This cascade of events ultimately suppresses the expression of SREBP-1c and ChREBP, thereby inhibiting the mRNA expression of lipogenic genes and reducing fatty acid synthesis in the liver [89,90]. In the context of adipose tissue, a study by Johan W. E. Jocken et al. [91] revealed that acetate could attenuate the phosphorylation of hormone-sensitive lipase (HSL) in adipocytes, thereby modulating lipid metabolism in human adipose tissue. Furthermore, research conducted by Zhanguo Gao et al. [92] revealed that butyrate could increase the expression of two thermogenesis-related genes, PGC-1α and UCP-1, in brown adipose tissue (BAT). Additionally, a study by Gijs den Besten et al. [93] demonstrated that supplementation with short-chain fatty acids (SCFAs) could reduce the expression of peroxisome proliferator-activated receptor γ (PPARγ) and increase the expression of UCP-2 in adipocytes, ultimately stimulating the oxidative metabolism of adipose tissue through activation of AMPK. Importantly, existing evidence [94] suggests that mitochondrial uncoupling in adipocytes plays a crucial role in the regulation of lipid metabolism and obesity, with mitochondrial uncoupling protein 1 (UCP-1) being the most significant marker of BAT. Upregulation of UCP-1 expression can promote lipid consumption and heat generation [95]. Furthermore, mitochondrial uncoupling protein 2 (UCP-2) is expressed in various tissues, including the spleen, kidneys, immune system, pancreas, and central nervous system, and it can also contribute to the promotion of lipid metabolism [96]. UCP3 is the predominant UCP isoform in skeletal muscle [97]. Mitochondrial dysfunction in skeletal muscle may be a contributing factor to impaired lipid oxidation, which is associated with decreased expression of the UCP3 protein in muscle [98]. A study by Jian Hong et al.
[99] revealed that butyrate could improve muscle energy metabolism by increasing the expression of UCP-2, UCP-3, and fatty acid oxidation enzymes such as carnitine palmitoyltransferase 1b (CPT1-b) and peroxisome proliferator-activated receptor gamma coactivator-1α (PGC1-α) in skeletal muscle. Furthermore, SCFAs can promote muscle production by increasing the expression of interleukin-15 (IL-15), a myokine associated with muscle growth [100]. In the intestinal tract, SCFAs act on the GPR41 receptor to increase the release of the satiety hormone PYY in the colon, which has been shown to increase fasting lipid oxidation [101,102]. Interestingly, a study by Zhuang Li et al. [103] demonstrated that the ingestion of butyrate can stimulate the secretion of GLP-1 in the intestine, thereby activating GLP-1 receptor signaling in the vagus nerve. This process, in turn, reduces the activity of orexigenic neuropeptide Y (NPY) neurons in the hypothalamus and of neurons in the nucleus of the solitary tract (NTS) and the dorsal vagal complex (DVC) through the gut-brain axis, leading to decreased food intake and increased satiety, ultimately preventing obesity and fat accumulation. Additionally, Yibing Zhou et al. [104] reported that valerate can increase the concentration of GPR43 in the colon, which in turn reduces the expression of the NLRP3 inflammasome, TNF-α, and IL-6, thereby decreasing lipid deposition. In conclusion, SCFAs in different metabolic tissues are able to regulate lipid metabolism by promoting lipid oxidation, reducing lipid synthesis, decreasing lipid deposition and increasing thermogenesis (Fig. 2).

Fig. 2 Lipid metabolism is influenced by SCFAs

SCFAs have different systemic effects on lipid metabolism at different levels. The main metabolic pathways involve the liver, adipocytes, skeletal muscle, intestine, pancreas and the gut-brain axis: SCFAs impact the human liver via GPR43 and AMPK; adipocytes via HSL, PPARγ, UCP-1, UCP-2, PGC-1α and AMPK; skeletal muscle via UCP-2, UCP-3, CPT1-b, PGC1-α and IL-15; and the gut via GPR41, GPR43, and GLP-1, subsequently impacting the brain via the brain-gut axis.

Natural compounds that regulate glucolipid metabolism Compounds from various plants are widely used as regulators of human glucose and lipid metabolism and play important roles in the treatment of diabetes, hyperglycemia, obesity, and other diseases [105]. Moreover, natural compounds are widely used in the development of new drugs and play key therapeutic roles in the fields of cancer, infectious diseases, cardiovascular diseases, and multiple sclerosis [106]. In this section, we assess the role of several phytochemicals in regulating human glycolipid metabolism through SCFAs. These phytochemicals include polysaccharides, anthocyanins, quercetin, resveratrol, carotenoids, and betaine. Their basic sources, roles in different metabolic tissues and effects on SCFAs are shown in Table 1.
Polysaccharides Polysaccharides are naturally occurring macromolecular polymers that usually consist of more than 10 monosaccharides linked through linear glycosidic bonds or branched chains [107]. Polysaccharides are found in almost all living things in nature, including seeds, stem and leaf tissues, herbal plants, animal body fluids and cell walls [108]. Polysaccharides can lower blood glucose and lipids by repairing pancreatic islet cells, improving insulin resistance, regulating the intestinal flora, enhancing antioxidant capacity, and regulating the activity of key enzymes in glucose and lipid metabolism [109]. A study by Ying Hong et al. [110] revealed that astragalus polysaccharide was able to increase the content of D. vulgaris strains in the midgut of mice fed a high-fat diet, and D. vulgaris strains were able to significantly increase the content of acetic acid, regulate hepatic lipid metabolism, and effectively attenuate hepatic steatosis in mice. A study by Doudou Li et al. [111] revealed that acetic acid levels were elevated in high-fat chow-fed mice after 12 weeks of LBP supplementation, and the administration of moderate and high doses of LBP significantly lowered blood glucose and increased fasting serum insulin levels. These findings suggested that acetic acid may bind to receptors on intestinal neurons and regulate duodenal hypercontractility, thereby improving glucose homeostasis. A study by Xinyi Tian et al. [112] revealed that polysaccharides increase the relative abundance of Allobaculum and Lactococcus, decrease the relative abundance of Proteobacteria, and increase the level of SCFAs and the expression of related G protein-coupled receptors in the intestinal bacterial community of mice fed a high-sugar and high-fat diet. A study by Jinli Xie et al. [113] revealed that Ganoderma lucidum polysaccharides significantly increased the levels of propionic acid and butyric acid in the small intestine and cecum of rats, thereby enhancing intestinal immunity and reducing inflammatory reactions. A study by Ying Lan et al. [114] revealed that sea buckthorn polysaccharides increase the proportions of Muribaculaceae_unclassified, Bifidobacterium, Alistipes, and Bacteroides in the intestines of high-fat diet-induced obese mice and decrease the proportions of intestinal Lactobacillus, Firmicutes_unclassified, Dubosiella, Bilophila, and Streptococcus; they regulate hepatic lipid metabolism by modulating changes in the gut microbiome and the SCFA content in feces. A study by Liman Luo et al. [115] revealed that inulin-type fructans could increase fecal and serum acetate concentrations, reducing mitochondrial dysfunction and toxic glucose metabolite levels. A study by Ye Yao et al. [116] revealed that strychnine polysaccharides were able to increase the production of SCFAs by SCFA-producing bacteria, such as Ruminococcus bromii, Anaerotruncus colihominis, and Clostridium methylpentosum, and upregulate GLP-1 and PYY to improve glucose metabolism in rats. A study by Ciliang Guo et al. [117] revealed that hawthorn polysaccharides increased the proportions of Alistipes and Odoribacter in the intestinal microbiota, increased the contents of acetic acid and propionic acid, and inhibited the expression of inflammatory cytokines such as interleukin-1β (IL-1β), interleukin-6 (IL-6) and tumor necrosis factor alpha (TNF-α).
Table 1 (partial) Basic sources of six phytochemicals, their roles in different metabolic tissues and their effects on SCFAs:
- Quercetin (tea, buckwheat). Gut: alters the composition of the gut microbiota [133,134]; downregulates the expression of TRPV1, AQP3, and iNOS and upregulates the expression of GDNF and c-Kit [133]; increases Coprococcus, Ruminiclostridium and Roseburia; decreases Enterococcus and Enterobacter.
- Lycopene. Gut: increases Allobaculum; decreases Lachnospiraceae_NK4A136_group, Desulfovibrio, and Alistipes.
- Betaine (Beta vulgaris, bran, wheat germ, spinach). Gut: alters the composition of the gut microbiota [159,160]. Liver: upregulates the hepatic lipid oxidation genes PPARα and CPT1α and the hepatic lipid transporter gene FATP2 [160]. Adipose: downregulates the expression of the adipogenic genes Fas and ACC [160].
- (One further entry notes a reduction of the Firmicutes/Bacteroidetes ratio.)

Polyphenols Polyphenols are among the most abundant and widely distributed natural products in the plant kingdom [118]. They are phytochemicals synthesized by plants, bearing at least one aromatic ring and one hydroxyl group, and are found in large quantities in various natural plants, including fruits and vegetables. Polyphenols promote health and prevent various types of chronic diseases. They can regulate signaling pathways and exert antioxidant activities, modulating processes such as oxidative stress, inflammation and apoptosis [119]. According to their chemical structures, polyphenols can be categorized into phenolic acids, flavonoids, and polyphenol amides; anthocyanins and quercetin are flavonoids, whereas resveratrol is a nonflavonoid polyphenol found in grapes and red wine [120]. Anthocyanins Anthocyanins are glycosylated anthocyanidins that are widely distributed in plant vacuoles, and their color depends on the pH of the environment [121]. It has been reported [122] that anthocyanins can improve obesity, control diabetes, and prevent cardiovascular disease and cancer. Baoming Tian et al. [123] reported that anthocyanins can increase the production of SCFAs by SCFA-producing bacteria, such as Ruminococcaceae, Akkermansia, Bacteroides and Odoribacter, and attenuate HFD-induced intestinal barrier damage by activating SCFA receptors, such as FFAR2 and FFAR3, and by upregulating TJ proteins. A study by Xu Si et al. [124] revealed that blueberry anthocyanins attenuated the high-fat diet-induced oxidative stress state in mice. SCFA levels in the intestine were promoted by increases in the SCFA-producing bacteria Roseburia, Faecalibaculum, and Parabacteroides, and the increase in SCFAs was shown to reduce hepatic steatosis and improve the status of hippocampal neurons in mice. A study by Jiebiao Chen et al. [125] showed that anthocyanins from common berries such as blackberries, black goji berries, strawberries, mulberries, prunes, raspberries, and red goji berries were able to increase the level of SCFAs in the intestine, improve the internal antioxidant status of mice, alleviate body weight gain and inhibit food intake by enriching the SCFA-producing bacteria Ruminococcus, Intestinimonas, and Clostridium_XVIII, among others. A study by Telma Angelina Faraldo Corrêa et al.
[126] showed that the intake of blood orange juice anthocyanins affected the abundance of operational taxonomic units (OTUs) in the intestinal flora, significantly increasing the levels of propionic acid and isobutyric acid in the intestinal tract of overweight females, reducing fasting blood glucose and insulin levels, and improving insulin resistance. A study by Ting Chen et al. [127] found that purple-red rice bran anthocyanins could promote the production of SCFAs in the intestinal tract and upregulate the expression of tight junction proteins (TJs) and nuclear factor kappa B (NF-κB) pathway proteins to improve intestinal barrier function and dysbiosis of the intestinal flora in mice. A study by Yun Zhang et al. [128] revealed that cactus anthocyanins significantly increased the content of SCFAs in the cecum of mice by altering the microbial diversity and flora composition of the intestinal tract, with the greatest increases observed in the contents of acetic acid, propionic acid and butyric acid. Quercetin Quercetin is a plant pigment that is widely found in tea, lettuce, radish leaves, cranberries, apples, buckwheat, cucumber, and onion; it exists in plants as quercetin-3-O-glucoside and provides color to a wide variety of vegetables and fruits [129,130]. It has antiallergic, anti-inflammatory, cardiovascular-protective, antitumor, antiviral, antidiabetic, immunomodulatory and antihypertensive effects [131]. Quercetin is also able to modulate metabolic disorders through different mechanisms, such as increasing adiponectin, decreasing leptin, decreasing insulin resistance, increasing insulin levels, and blocking calcium channels [132]. A study by Wenhui Liu et al. [133] reported that quercetin reduced the expression of transient receptor potential vanilloid 1 (TRPV1), aquaporin 3 (AQP3), and inducible nitric oxide synthase (iNOS) in the intestine and increased the expression of glial cell line-derived neurotrophic factor (GDNF), c-Kit, and stem cell factor (SCF). These changes inhibited the growth and reproduction of Enterococcus and Enterobacter in the intestine, increased the intestinal contents of acetic acid, propionic acid and butyric acid, improved gastrointestinal peristalsis, and increased the intestinal transit rate. Quercetin increases the abundance of Coprococcus spp., Ruminiclostridium spp. and Roseburia spp. in the mouse intestine [134], increasing the amount of these SCFA-producing gut microbes [135], which improves glucose and lipid metabolism in the body. Resveratrol Resveratrol was first isolated by Takaoka in 1939 from Veratrum grandiflorum [136]. This phenolic substance is found not only in grapes and wine but also in peanuts, soybeans and berries [137]. It has powerful regenerative, antioxidant, protein-regulating and anticancer properties [138]. Early studies showed that resveratrol inhibits the oxidation of low-density lipoprotein (LDL) in humans [139], reduces insulin resistance in animal models [140], and reduces adipocyte size and the inflammatory response in adipose tissue [141]. A study by Jin-Xian Liao et al.
[142] showed that resveratrol increased the abundance of S24-7 and Adlercreutzia; decreased the abundance of Allobaculum, Blautia, Lactobacillaceae, and Prevotella; increased the concentrations of acetic and propionic acids in the intestine; and regulated hepatic antioxidant capacity by increasing the expression of NF-E2-related factor 2 (Nrf2) in the liver to promote HO-1 transcription as well as the expression of superoxide dismutase (SOD) and catalase (CAT). A study by Yu Zhuang et al. [143] reported that resveratrol supplementation decreased the expression of the inflammatory cytokines IL-6 and IL-1β; increased the levels of propionic acid, isobutyric acid, butyric acid and isovaleric acid in the intestinal tract; and affected the metabolism of the intestinal flora by regulating amino acid metabolism and lipid metabolism to improve intestinal health in mice. A study by Hongjia Yan et al. [144] revealed that resveratrol ameliorated the progression of diabetic kidney disease by increasing the abundance of Faecalibaculum and Lactobacillus bacteria in the intestinal tract, increasing the concentration of acetic acid in the feces, and modulating the gut microbiota-SCFA axis. A study by Le-Feng Wang et al. [145] reported that resveratrol lowered the intestinal pH, promoted the growth and proliferation of probiotics in the intestinal tract, and significantly increased the content of isobutyric acid in the colons of aging mice. Carotenoids Carotenoids are a general term for yellow, red and orange pigments containing long-chain hydrocarbons with conjugated double bonds. More than 1,100 carotenoids exist in nature, and humans can obtain them from vegetables and fruits [146]. Carotenoids can reduce free-radical damage to cells [147] and are important natural antioxidants. One molecule of beta-carotene can quench up to 1,000 singlet oxygen molecules [148]. Additionally, carotenoids can also be metabolized in the intestine and other tissues into vitamin A-like bioactive compounds, which provide immunomodulation and enhance the immune response [149]. Carotenoid supplementation activates the AMPK signaling pathway, which in turn activates upstream kinases, upregulates transcription factors, induces browning of white adipose tissue, and blocks adipogenesis [150,151]. Lycopene is a carotenoid [152] with antioxidant, anti-inflammatory, apoptosis-modulating and cellular communication-modulating effects [153]. A study by Xiang Gao et al. [154] revealed that lycopene can decrease the expression of proteins such as NLRP3, pro-caspase-1, caspase-1, and NF-κB in the liver; decrease the abundance of Lachnospiraceae_NK4A136_group, Desulfovibrio, and Alistipes in the gut microbiota; and increase the abundance of Allobaculum, thereby increasing the content of SCFAs, inhibiting the NF-κB/NLRP3 inflammatory pathway, and ameliorating nonalcoholic fatty liver disease (NAFLD). However, the appropriate intake of beta-carotene remains a question worth exploring: studies have concluded that 20 mg/day or more of beta-carotene is contraindicated for heavy smokers; the Panel on Nutrition, Dietetic Products, Novel Food and Allergy of the Norwegian Scientific Committee for Food Safety has set a tentative upper limit (TUL) of 4 mg/day for beta-carotene supplementation; and the Panel on Vitamins and Minerals has set a safe upper limit of 7 mg/day for lifetime beta-carotene supplementation for the general population (excluding smokers) and has discouraged the concomitant use of beta-carotene supplements by smokers [155].
Betaine

Betaine is a stable and nontoxic natural substance that was first discovered in the plant Beta vulgaris and has subsequently been found in relatively high concentrations in several other dietary sources, such as wheat bran, wheat germ, spinach, and sugar beet [156]. Betaine [157] is a trimethyl derivative of the amino acid glycine that promotes glucose uptake through GLUT-4 expression, directly increases ATP production while helping to stimulate glucose utilization in myocytes, and enhances energy production by increasing mitochondrial biogenesis. Betaine can also play an important role as an antioxidant and protect the liver from oxidative stress [158]. A study by Jingjing Du et al. [159] revealed that betaine supplementation increases acetate- and butyrate-producing intestinal flora such as A. muciniphila, Ruminococcus, Oscillospira, and Lactobacillus in the gut, which in turn increases acetate and butyrate levels and improves the prevention of obesity and obesity-related metabolic comorbidities. A study by Liuqiao Sun et al. [160] revealed that betaine supplementation upregulated the expression of hepatic lipid oxidation genes, such as PPARα and CPT1α, and the hepatic lipid transporter gene FATP2; downregulated the expression of the adipogenic genes fatty acid synthase (Fas) and ACC in adipose tissue; decreased the relative abundance of Proteus mirabilis, Vibrio desulfuricans, and Ruminococcus ruminanti in the intestine; and increased the relative abundance of Lactobacillus and Lactobacillus paracasei in the intestine. The resulting increase in the concentration of SCFAs in the feces can modulate the hepatic triglyceride content and improve NAFLD.

Conclusions

Glucose and fatty acids are the main sources of energy in the human body. Under normal metabolic conditions, the glucose and lipid metabolism pathways can meet the body's normal activity needs, and the two pathways affect each other; for example, glucose can be converted to fatty acids and cholesterol through the lipid biosynthesis pathway. Disorders of glucose and lipid metabolism can lead to cardiovascular disease, diabetes, fatty liver and other serious diseases [161]. SCFAs modulate the structure of the gut microbiota [162], enhance the intestinal epithelial barrier [163], and can slow the onset and progression of disease [164]. Regulating the levels of SCFAs in the gut can therefore exert beneficial effects on human glucose and lipid metabolism. Natural compounds in plants continue to be a hot topic in current research because of their safe composition, wide range of sources, and ability to treat a variety of diseases, such as cancer, diabetes, heart disease and Alzheimer's disease [165]. Here, we have reviewed the role of SCFAs in glucose and lipid metabolism and described the mechanisms by which natural compounds in plants, such as polysaccharides, anthocyanins, quercetin, resveratrol, carotenoids, and betaines, modulate human glucose and lipid metabolism by increasing the content of SCFAs. Research has shown that these natural compounds can increase the numbers of beneficial bacteria, such as Alistipes and Odoribacter, thereby helping to maintain intestinal health. Moreover, they can also reduce potentially harmful bacteria, such as Lactobacillus, and lower their content in the intestine. Natural compounds can also regulate the content of short-chain fatty acids by affecting the abundance of OTUs in the gut microbiota and the intestinal transit rate. This regulatory effect helps to maintain the stability of the gut microbiota and prevent the excessive proliferation
of harmful bacteria. By using natural compounds, we can adjust the structure of the gut microbiota, maintain the stability of the intestinal environment, increase the content of SCFAs in the intestine, and promote human health. Further research on the effects of natural compounds on human SCFAs can help in developing more beneficial products and methods for improving human health. Although the mechanisms by which natural compounds in plants regulate human glucose and lipid metabolism have been widely studied, a shortcoming of the results reported to date is that most studies were conducted in animals, with very few clinical trials. Moreover, the variety of natural compounds with SCFAs as their unique target of action is still limited. As a next step, preclinical and clinical studies are needed to understand and identify natural compounds that can regulate human glucose and lipid metabolism by modulating SCFAs. Additionally, targeted studies are needed to develop natural compounds from plants that provide new therapeutic options for the treatment of glucose and lipid metabolism disorders.
A Latent Implementation Error Detection Method for Software Validation

Model checking and conformance testing play an important role in software system design and implementation. With a view to integrating model checking and conformance testing into a tightly coupled validation approach, this paper presents a novel approach to detect latent errors in software implementations. The latent errors can be classified into two kinds: one is called the Unnecessary Implementation Trace, and the other the Neglected Implementation Trace. The method complements the incompleteness of security properties for software model checking, and more accurate models are constructed to improve the effectiveness of the combined model-based software verification and testing method.

Introduction

In software engineering practice, model-based software development and analysis methods receive extensive attention [1]. Software model checking [2] and model-based conformance testing [3] are two well-established approaches for validating the correctness of software executions. Model checking aims at verifying whether the software specification model satisfies a set of key properties that represent the software functional requirements, while conformance testing aims at checking whether the actual black-box implementation behaves as the specification model describes according to some conformance relation. More specifically, software model checking validates the specification model against the key properties during the design phase, while conformance testing checks the nonconformance relation between the system programs and the specification model during the implementation phase. Thus, model checking and conformance testing can be performed sequentially and work as an integrated validation process to assure the functional correctness of a software system. However, only applying model checking followed by conformance testing is not fully satisfactory. The essential reason is that model checking focuses on an accurate formal system model without considering the system implementation, while conformance testing focuses on checking whether the system implementation behaves as the model specifies without considering whether the key properties are fully tested [4]. Specifically, in the ideal scenario, all the software behaviors in the system implementation satisfy the key properties; that is, all key properties are tested and all system behaviors are verified, as shown in the middle part of Figure 1. Unfortunately, in software engineering practice, there always exist some key properties that are verified by model checking in the design phase but might not be tested at all in the implementation phase. This kind of scenario is called "Under Tested," as shown in the left part of Figure 1. On the other side, some erroneous system behaviors may still exist in the software implementations after conformance testing, and they also might not have been detected by model checking in the design phase. This kind of scenario is called "Under Verified," as shown in the right part of Figure 1. Therefore, the "Under Tested" and "Under Verified" scenarios are two major drawbacks of the traditional software validation methods, in which model checking and conformance testing are exercised sequentially and individually. In order to integrate model checking and conformance testing into a tightly coupled validation approach, two kinds of studies have been done. First, most studies in the literature focus on fixing the "Under Tested" problem [4,5].
Specifically, the key properties are considered in the conformance test case generation method, so executing such a set of test cases can guarantee that all key properties are tested. For example, key properties can be formalized as test purpose models, and the test generation process then uses model checking as its major generation technology. Consequently, all generated test cases will definitely cover the key properties. These studies can improve the "Under Tested" scenario, and their major contribution is to make the conformance testing more complete and more accurate; that is, the software verification complements the software testing well. However, we also need studies that improve the "Under Verified" scenario, that is, studies in which the software testing complements the software verification. In the literature, such studies are scarce. Black-box checking [6] and adaptive model checking [7] are two special cases. They aim to construct more specific system models from a partial or an empty model through testing and learning. The difficulty in this kind of study results from the fact that implementation errors are only ever detected incidentally through the conformance testing process; that is, we cannot guarantee that such errors will be detected. Therefore, we herein propose a more proactive method to detect such latent errors, which have been programmed into the system implementations, and consequently improve the "Under Verified" scenario. To demonstrate the motivation of our work more clearly, we first make a further analysis. Without loss of generality, we adopt an instance model checking method based on Input-Output Labeled Transition System (IOLTS) models [4], and an instance conformance testing approach based on IOLTS models and the Input-Output Conformance (IOCO) relation [8]. An automatic vending machine (AVM) system is taken as an example. Suppose a specific software implementation of the vending machine has been developed: the machine releases tea or milk when a coin is inserted. But in this implementation there exists a fatal error; that is, the machine can also release tea or milk when an incorrect coin is inserted. Furthermore, suppose the formal model of this AVM system has no corresponding parts to deal with such exceptional behavior. Therefore, as discussed previously, if we unintentionally omit a specific property targeting this exceptional behavior in the model checking phase, the verification will pass without counterexamples. Then, some test cases are generated from this verified model, and no test case aims to detect such exceptional behavior, because no corresponding specification exists in the verified model. So, according to the IOCO relation, the conformance testing will also establish the conformance relation between the implementation and the model. Herein, a serious problem emerges. Although this exceptional behavior does exist in the system implementation, neither model checking nor conformance testing detects this fatal error, yet the error should be repaired. Based on this analysis, a novel Latent Implementation Error Detection (LIED) method for software validation is proposed in this paper to proactively detect such latent errors in implementations. The LIED method applies model checking to the actual software implementation to check for such latent errors and utilizes the counterexamples to improve the system models.
The LIED method not only complements the incompleteness of the key security properties for software model checking, but also constructs more accurate models to promote the effectiveness of the sequentially combined model-based software verification and testing method. On the one hand, the original model checking can be performed more completely, because the key properties and the system models are both improved. On the other hand, the conformance testing can be performed more precisely, because the improvement of the system models results in better test cases, which are generated with higher accuracy and a stronger capability of detecting implementation errors. The paper is organized as follows. Firstly, certain preliminaries and related work are discussed in Section 2. Then, two kinds of specific latent implementation errors are defined formally in Section 3. Finally, the LIED method is given in detail in Section 4. The method includes three major parts: enumerating the possible key properties, model checking the system implementation, and revising the system model using the counterexamples. To elaborate the feasibility and effectiveness of the LIED method, we put it into practice with a simplified AVM system as a representative.

Preliminaries and Related Work

In this section, we first introduce the formal definition of the IOLTS model and the basic ideas of the IOCO relation-based conformance testing method. Then, we discuss several related and important studies on how to integrate model checking and conformance testing technologies in the recent literature. An IOLTS model is actually an LTS model with explicitly specified input and output actions [8]. It is widely used not only as a formal modeling approach to model reactive software systems directly, but also as the operational semantic model for several process languages, such as LOTOS (Language of Temporal Ordering Specification).

Definition 1. An IOLTS is a quadruple M = (Q, L, →, q0), where: (1) Q is a countable, nonempty set of states; (2) L = LI ∪ LU ∪ LE: LI is a countable set of input action labels, LU is a countable set of output action labels, LE is a countable set of inner action labels, and the three are pairwise disjoint; (3) → ⊆ Q × L × Q is the action transition relation; (4) q0 ∈ Q is the initial state.

Definition 2. For an IOLTS model M, a trace is a finite sequence of action labels that M can execute starting from the initial state q0, and traces(M) denotes the set of all traces of M.

An IOLTS model for the specification of the AVM software is presented in Figure 2(a). In this specification, when a coin is inserted, which is modeled as an input action "?coin", the machine releases a bottle of tea or milk, which is modeled as two output actions "!tea" and "!milk". We also present two IOLTS models for AVM system implementations. The model MI1 in Figure 2(b) specifies an implementation with partial functions of the AVM system; that is, when a coin is inserted, only a bottle of tea is released. The model MI2 in Figure 2(c) specifies an implementation with additional functions of the AVM system; that is, this machine may also release a bottle of milk after an incorrect input action such as "?key", in which case no coin is actually inserted. The IOCO relation-based testing approach has a well-defined theoretical foundation [9] and high feasibility with automatic testing tools, such as TorX [10] and TGV [11]. The major framework of this testing approach has four related components. First, an IOLTS is used to model the specification of a software system; an IOLTS that is input-enabled is further defined as an IOTS (Input-Output Transition System) and used to specify the behavior model of the system implementation.
IOTS models characterize significant external observations in conformance testing; for example, quiescent states are crucial for distinguishing a valid absence of output from deadlock. A quiescent state simply represents a state in which an implementation is waiting for input data. Second, the IOCO relation is defined as follows.

Definition 3. MI ioco MS ⇔def for all σ ∈ traces(MS): out(MI after σ) ⊆ out(MS after σ), where MI is an implementation of the system specification MS.

Its intuitive idea is to compare the outputs produced after executing a trace σ in MI and MS, respectively, where σ is generated just from MS. Two major aspects are emphasized in the definition: which actions should be observed, and what it means for an implementation to conform to its specification. Taking the AVM system in Figure 2 as an example, according to the IOCO relation definition it is determined that MI1 ioco MS and MI2 ioco MS; that is, the implementations with partial functions or additional functions are both determined to conform to the specification model. The IOCO relation also guides test case generation. Third, test cases are generated automatically and recursively from specification models. Generally, the first transition of a test case is derived from the initial state of the specification model, after which the remaining part of the test case is recursively derived from all reachable states. Traces are recorded from the initial state to reachable states, including quiescent states. If an output action, whether a real output or quiescence, is not allowed in the specification model, the test case terminates with fail; otherwise, the trace exploration continues or the test case simply terminates with pass. The detailed test generation algorithm is presented in [8]. Finally, the test execution is an asynchronous communication, directed by a test case, between the system implementation and its external environment, that is, a tester. The tester provides the implementation with input data and then observes its responses, either a real output or just quiescence. If the fail state is reached because the real observations have not been prescribed in the test case model, the nonconformance between this implementation and its specification is definitely determined. Test cases are sound if a fail verdict always indicates genuine nonconformance. As for related studies on integrating model checking and conformance testing technologies, most of them focus on fixing the "Under Tested" problem mentioned in the previous section; that is, key properties are involved in the test case generation methods, so executing such a set of test cases can guarantee that all key properties are tested. The VERTECS research team at INRIA [4,12-14] specifies certain key properties, such as possibility properties ("something good may happen") and safety properties ("something bad never happens"), using IOLTS-based formal models, and integrates these property models directly into the IOCO test case generation algorithm. Other studies [5,15-18] first formalize the key properties as test purpose models and then perform model checking to generate the test cases. The produced counterexamples, which actually represent system execution traces satisfying the key properties, can then act as the conformance test cases. Consequently, all generated test cases will definitely cover the key properties.
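To make the preceding analysis concrete, the following Python sketch implements the ioco check of Definition 3 for finite, explicitly enumerated models. The dictionary encoding of an IOLTS (state to list of (action, next state) pairs), the helper names and the toy AVM traces are our own illustrative assumptions, not the interface of any existing tool such as TorX or TGV.

# Minimal sketch of the ioco check over finite, enumerated models.
def after(model, trace, start=0):
    """States reachable from `start` by executing the observable trace."""
    states = {start}
    for action in trace:
        states = {nxt for s in states
                  for (a, nxt) in model.get(s, []) if a == action}
    return states

def out(model, states, outputs):
    """Outputs enabled in `states`; 'delta' marks quiescence (no output)."""
    result = set()
    for s in states:
        enabled = {a for (a, _) in model.get(s, []) if a in outputs}
        result |= enabled if enabled else {"delta"}
    return result

def ioco(impl, spec, spec_traces, outputs):
    """MI ioco MS iff out(MI after t) is a subset of out(MS after t)
    for every trace t drawn from the specification."""
    return all(out(impl, after(impl, t), outputs)
               <= out(spec, after(spec, t), outputs) for t in spec_traces)

# AVM example: the specification releases tea or milk only after a coin,
# while the implementation has an extra, erroneous "?key" branch.
spec = {0: [("?coin", 1)], 1: [("!tea", 2), ("!milk", 2)]}
impl = {0: [("?coin", 1), ("?key", 1)], 1: [("!tea", 2), ("!milk", 2)]}
outputs = {"!tea", "!milk"}
spec_traces = [(), ("?coin",), ("?coin", "!tea"), ("?coin", "!milk")]
print(ioco(impl, spec, spec_traces, outputs))  # True: the latent error
# passes ioco because "?key" never occurs in any specification trace.

Because ioco quantifies only over specification traces, the erroneous branch is simply never exercised; this is exactly the "Under Verified" gap discussed next.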
The related studies mentioned previously can improve the "Under Tested" scenario and make the conformance testing more complete and more accurate. However, black-box checking [6] and adaptive model checking [7] are two special cases which aim to construct more specific system models through testing and learning, where initially the system implementation is available but only a partial model, or none at all, is provided. The model refinement process, which consists of model checking and test execution, is performed iteratively to produce more accurate semi-models which conform to the system implementations, until the model satisfies the required key properties and conforms to the system implementation. In this paper, we propose the LIED method to proactively detect certain latent errors in implementations and consequently improve the "Under Verified" scenario. The LIED method not only complements the incompleteness of key security properties for software model checking, but also constructs more accurate models to promote the effectiveness of the sequentially combined model-based software verification and testing method. Based on the discussion in Section 1 and the previous related work analysis, we should note that integrating software verification and conformance testing definitely requires iterative refinement, as shown in Figure 3. Traditional software verification and conformance testing execute sequentially and separately, and they often have rigid boundaries with respect to the abstract models or the implementation; in particular, the model-based test case generation process uses the abstract models. However, in the integrated software verification and testing methodology, these two methods are performed iteratively and complementarily. First, the model checking and testing processes have no strict boundaries, and several semi-models and semi-implementations may exist before the final system model and implementation are developed. For example, conformance testing can be performed early on some semi-implementations to guide the refinement of semi-models. After iterative refinements, the final system model and system implementation are bound to be more accurate and less error-prone. Second, specific model checking technologies can be used in the testing phase complementarily, and vice versa; for example, a CTL model checking algorithm is used to generate test cases [5]. In this paper, the LIED method is regarded as an integrated software validation method. It is essentially well compatible with the traditional model checking and conformance testing procedures. That is, the LIED method is developed as a complementary method for detecting latent implementation errors, not as a replacement for traditional model checking and conformance testing. The LIED method thus tends to be an accelerant, because its central merit is to construct more accurate system formal models, which are quite helpful in promoting the effectiveness of model checking and model-based conformance testing. We can perform model checking, conformance testing, and the LIED process iteratively and complementarily. Consequently, these validation methods work collaboratively to make software design and implementation more effective and more efficient.
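The iterative workflow just described can be summarized as a small control-flow sketch. Every callable passed in below (check_model, run_tests, lied_check, refine_model, fix_impl) is a hypothetical placeholder for the corresponding phase, not a real tool API; the sketch only fixes how the three validation activities alternate until no counterexample remains.

# Schematic driver for the integrated validation loop of Figure 3.
def validate(model, impl, check_model, run_tests, lied_check,
             refine_model, fix_impl, max_rounds=10):
    for _ in range(max_rounds):
        cex = check_model(model)              # phase 1: verify the model
        if cex is not None:
            model = refine_model(model, cex)
            continue
        if run_tests(model, impl) == "fail":  # phase 2: IOCO conformance testing
            impl = fix_impl(impl)
            continue
        cex = lied_check(impl)                # phase 3: LIED on the implementation
        if cex is not None:
            model = refine_model(model, cex)  # revise the model and
            impl = fix_impl(impl)             # fix the implementation
            continue
        return model, impl                    # all three phases pass
    raise RuntimeError("validation did not converge")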
Two Kinds of Latent Implementation Errors

As discussed in the motivation in Section 1, even when both model checking and conformance testing have been performed successfully, some kinds of programmed errors can still exist in the software implementations; that is, these two software validation methods are ineffective at detecting such latent errors. Therefore, we propose the LIED method to proactively detect them. Before presenting the detailed LIED method, we start by formally defining the specific kinds of latent implementation errors that we want to detect and repair in this paper. The concept of Trace in the IOLTS modeling (refer to Definition 2) is used to specify such implementation errors; that is, any specific error corresponds to an execution trace in the IOLTS model of the system implementation. So we may reasonably suppose that as long as the system implementation is programmed with some latent error, there definitely exists an execution trace, in the IOLTS model of the system implementation, describing how such error behavior executes step by step. Herein, we focus on two specific kinds of latent implementation errors: the unnecessary implementation error and the neglected implementation error.

The obvTraces operator restricts the original traces of an IOLTS model to the input and output action labels (LI ∪ LU)*; that is, the inner action labels are omitted and just the externally observed actions are considered. The intuitive idea of the Unnecessary Implementation Trace (UIT) consists of two parts. On the one hand, such traces can be observed from the system implementation but are not described in the system specification model, that is, σ ∈ obvTraces(MI) − obvTraces(MS). On the other hand, such traces do not satisfy the key properties that the system functional requirements desire, that is, ∃p ∈ P: ¬(σ → p), where P is the set of key properties and σ → p denotes that trace σ satisfies property p. Formally, UIT(MI) =def {σ ∈ obvTraces(MI) − obvTraces(MS) | ∃p ∈ P: ¬(σ → p)}. In a word, every UIT trace models a detailed unnecessary implementation error that is programmed in the system implementation, while the system specification model has no corresponding parts to deal with such exceptional behavior. Clearly, in this case, if we do not consider a specific key property against this unnecessary implementation error in the model checking phase, the verification passes. After that, the IOCO conformance testing also passes, because none of the test cases, which are generated from the verified system model, are capable of detecting such an implementation error. Consequently, though an unnecessary implementation error does exist in the system implementation, both model checking and conformance testing fail to detect this fatal error, and this kind of error indeed should be repaired.

Correspondingly, NIT(MI) =def {σ ∈ obvTraces(MS) ∩ obvTraces(MI) | ∃p ∈ P: ¬(σ → p)}. Each Neglected Implementation Trace (NIT) represents a specific neglected implementation error, but this kind of trace is a special case. First, such traces appear in the specification model as well as in the implementation model, that is, σ ∈ obvTraces(MS) ∩ obvTraces(MI). Besides, they do not satisfy the key properties that the software functional requirements desire, that is, ∃p ∈ P: ¬(σ → p). Thus, under normal circumstances, the model checking phase can detect such exceptional behaviors. However, under special circumstances, a Neglected Implementation Trace may be omitted by the abstraction in the model checking procedure, and in this case the verification passes unexpectedly. Both trace sets are directly computable once the observable trace sets and the key properties are enumerated, as the sketch below illustrates.
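In the following sketch, the encoding of obvTraces(MI) and obvTraces(MS) as Python sets of tuples and the representation of a key property as a predicate over a trace are our own illustrative assumptions; the toy traces mirror the running AVM example.

# UIT and NIT as set comprehensions over finite observable trace sets.
def uit(obv_traces_mi, obv_traces_ms, properties):
    """Unnecessary Implementation Traces: observed in MI but absent from MS,
    and violating at least one key property."""
    return {t for t in obv_traces_mi - obv_traces_ms
            if any(not p(t) for p in properties)}

def nit(obv_traces_mi, obv_traces_ms, properties):
    """Neglected Implementation Traces: present in both MS and MI,
    yet violating at least one key property."""
    return {t for t in obv_traces_mi & obv_traces_ms
            if any(not p(t) for p in properties)}

# Key property for the AVM: any release must be preceded by a coin.
def coin_precedes_release(trace):
    released = [i for i, a in enumerate(trace) if a in ("!tea", "!milk")]
    return all("?coin" in trace[:i] for i in released)

ms = {("?coin", "!tea"), ("?coin", "!milk")}
mi = ms | {("?key", "!milk")}                 # latent error branch
print(uit(mi, ms, [coin_precedes_release]))   # {('?key', '!milk')} -> a UIE
print(nit(mi, ms, [coin_precedes_release]))   # set() -> no NIE in this toy case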
Taking the conformance testing into account: because such neglected implementation traces behave the same in both MS and MI, the IOCO conformance relation between the specification model and the implementation is still established unexpectedly. That is, we treat these exceptional behaviors during the IOCO conformance testing as legal behaviors, because they have been specified in MS in the same way. Consequently, though the neglected implementation error does exist in both the system implementation and the specification, neither model checking nor conformance testing detects this fatal error, and this kind of error indeed should be repaired too. To sum up, we aim to detect and repair two kinds of latent implementation errors, the unnecessary implementation errors and the neglected implementation errors, which may not be detected by the traditional combined method of model checking and IOCO conformance testing. They have the same fatal effects, but they result from different causes. Therefore, in the LIED method, we use a unified method to detect these two kinds of latent implementation errors, but fix them using respective methods.

The LIED Method

As discussed in Section 1, the unnecessary implementation error and the neglected implementation error always occur nondeterministically through the conformance testing process; that is, we cannot guarantee detecting such errors definitely. Therefore, the LIED method is designed to proactively detect such latent errors which have been programmed into the system implementations. In this section, we first present the central idea and the framework of our LIED method. Then the two core parts of this method are discussed in detail: constructing the analogy set for the key properties, and improving the system formal models and system implementations with the counterexamples. Finally, the AVM system is analyzed using the LIED method as a representative to elaborate its feasibility and effectiveness.

The Overarching Methodology. The central goal of our LIED method is to find unnecessary implementation errors or neglected implementation errors where no evident clues are provided by the system models. Designing the LIED method thus has two necessary preconditions. First, we need to check the system implementation. Second, we need to check as many key properties as possible against the system implementation. Consequently, we adopt model checking as the basic technology and apply it directly to the system implementations. That is, the central idea of our LIED method is to apply model checking to the actual software implementation to check whether some kinds of latent implementation errors exist, and then utilize the counterexamples, which illustrate the exceptional behavior executions, as guidance to improve the system models and implementations. The framework of the LIED method is shown in Figure 4. It is composed of three related parts. (a) Constructing the Analogy Set for Key Properties. The traditional model checking is performed against the original set of key properties. Such key properties are usually extracted from the functional requirement specifications of a specific software system, and they describe the necessary functional system behaviors. However, in order to detect more latent implementation errors, we need to do some analogy enumeration based on the original set of key properties. For example, an original key property could be specified as "if condition is true then do actions."
The analogy enumeration for this property could construct two more properties, that is, "if condition is false then do actions" and "if condition is true then do not do actions." In this way, after the analogy enumeration procedure, the analogy set of key properties is generated for checking more possible exceptional behaviors of the system implementations. The details of the analogy enumeration procedure are discussed in Section 4.2. (b) Detecting the Latent Implementation Errors. Based on the analogy set of key properties, we apply model checking to the system implementations directly; the Copper model checker [19] is adopted in this paper. In this way, we can proactively detect certain latent implementation errors against the analogy set of key properties, errors which are not detected in the traditional software verification process. If a counterexample is produced, we may have detected a latent implementation error, and this counterexample can be used as intuitive guidance for improving the system models and the system implementations. (c) Revising the Models or Implementations with Counterexamples. In this paper, we propose a counterexample-guided refinement method for the software validation process. Specifically, we perform a synchronous simulation between a specific counterexample and the system model. If the simulation produces an empty set of synchronous traces, it means that the exceptional behaviors represented by this counterexample are not considered in the system model; an unnecessary implementation error (UIE) is actually detected. In this case, the system model should be improved with additional parts dealing with the exceptional behaviors, and the system implementation should be fixed too. Otherwise, if the simulation produces a nonempty set of synchronous traces, it means that though the system model has corresponding parts to deal with such exceptional behaviors, a neglected implementation error (NIE) is still detected in the system implementation. In this case, the system model should be modified according to the counterexample scenario, and the system implementation should be fixed too. The details of the synchronous simulation procedure and the corresponding refinements are discussed in Section 4.3. The LIED method is developed for detecting latent implementation errors, and it is well compatible with traditional model checking and conformance testing procedures. The major advantages of our LIED method lie in two aspects. First, it complements the incompleteness of the key properties for the software validation. More importantly, it benefits the construction of a more accurate system formal model, promoting the effectiveness of the sequentially combined model-based software verification and testing method. Specifically, the original model checking can be performed more completely, because the key properties and the system models are both improved, and the conformance testing can be performed more precisely, because the improvement of the system models results in better test cases, generated with higher accuracy and a stronger capability of detecting implementation errors. Consequently, the LIED method improves the "Under Verified" scenario as expected. Constructing the Analogy Set for Key Properties.
In order to detect more latent implementation errors proactively and focus on more of the necessary functional behaviors of a software system, we perform analogy enumeration based on the original set of key properties, which are constructed in the traditional model checking phase. According to the survey of patterns in property specifications [20-22], most properties (more than 90%) can be formalized within five kinds of property patterns (absence, existence, universality, precedence and response), where each of them can be specified as either a Linear Temporal Logic (LTL) formula or a Computation Tree Logic (CTL) formula. As shown in Table 1, we present the original specification (OS) and its analogy set (AS) for each kind of property pattern. For the response pattern, for example, the table reads: OS: the occurrence of action a must be followed by the occurrence of action b; AS: action a occurs without the post-occurrence of action b, or action b occurs without the pre-occurrence of action a. As we want to detect more latent implementation errors proactively, the above analogy set for properties is used in two ways in the LIED method. On the one hand, if one specific property is verified in the original model checking process, the properties of its analogy set should receive more attention and be correspondingly checked against the system implementations. On the other hand, taking the cause-effect relation into account, the absence, existence, and universality properties can be classified as one group, and the precedence and response properties as another group. So, if one specific property is verified in the original model checking process, the other group of properties and its analogy set properties should also be checked against the system implementations. In this way, we complement the incompleteness of the key security properties for the software model checking, and, more importantly, we have more opportunities to find the unnecessary implementation errors or the neglected implementation errors.

Refining the Models with Counterexamples. According to the framework of our LIED method in Figure 4, the analogy set of the original key properties for a specific software system is generated as a new set of properties, and then we apply model checking, against this new set of properties, to the system implementations directly. If all of the properties are verified successfully, we can determine that the system implementation works correctly with respect to the system specification, and we can then perform traditional conformance testing as usual. However, the LIED method is actually more interested in obtaining a counterexample, which may reveal an existing latent implementation error. As automatically produced counterexamples intuitively present the scenarios in which the latent errors occur, they are quite helpful for revising the system models and fixing the system implementations. First, we present the formal definition of the counterexample from the LTS point of view; it can be concretized with the corresponding syntax of different model checkers. Definition 6 (unified counterexample (UCE)). A Unified Counterexample is a Trace (cf. Definition 2) of an IOLTS model M that witnesses the violation of a checked property. Intuitively speaking, a counterexample is a specific execution of a software system, that is, a trace of detailed behaviors. The model M in the above definition refers to the system specification at the program level; for example, in the Copper model checker, it is the program specification file (*.pp).
According to the preceding Definition 2, a specific counterexample may be a sequence of input actions, output actions and internal actions, where internal actions reflect value changes of the corresponding program variables without external behaviors. Based on the counterexample and the software model, a Counterexample-Guided Synchronous Simulation (CGSS) algorithm is proposed as Algorithm 1 to check whether the system model has the same behavior trace as the counterexample from the input/output point of view. If the simulation produces an empty set of synchronous traces, it means that the exceptional behaviors represented by this counterexample are not considered in the system model at all, so a UIE is actually detected. Otherwise, a nonempty set of synchronous traces reveals that the system model does have corresponding parts to deal with such exceptional behaviors, and an NIE is detected consequently. If unnecessary implementation errors are detected, the system model should be improved by adding parts that deal with the UIE errors demonstrated by the counterexamples, and the system implementation should be fixed by removing the extra program code. Similarly, if neglected implementation errors are detected, the system model should be modified in the corresponding parts to handle the NIE errors demonstrated by the counterexamples, and the system implementation should be fixed by revising the inaccurate program code.

Case Study: An AVM System. To elaborate the feasibility and effectiveness of our LIED method, an AVM software system is analyzed with this method as a representative in this section. The Copper model checker is adopted. The IOLTS model for this AVM system is presented in Figure 2(a). Besides, we implement a program for this AVM system which may release milk without a coin being inserted, just like the scenario in Figure 2(c). The core segment of this program is: if (strcmp(input, "coin") == 0) output_coffee(); else output_milk();. Obviously, when something else (not a coin) is inserted, a bottle of milk is still released, and no error message is posted as expected.

Step 1 (enumerating the key properties). In the traditional model checking phase, we consider the requirement that if a coin is inserted, the milk or coffee is released. We formulate this requirement as an LTL property in the precedence format. Besides, its analogy set is generated, which checks the scenario in which the milk or coffee is released without a coin being inserted. The original property is formulated as F(r) → (¬r U i), and the analogy property as F(r) → (¬r U ¬i), where r stands for the releasing action (output: !coffee or !milk) and i stands for the inserting action (input: ?coin). Copper supports temporal logic claims expressed in State/Event Linear Temporal Logic (SE-LTL). The syntax of SE-LTL is similar to that of LTL, except that the atomic formulas are either actions or expressions involving program variables. Therefore, the analogy property can be formulated as follows: ltl ExamProp {#F (output ⇒ ((!output) #U(! [input == coin]))) ;}, where output represents the output action output_coffee() or output_milk() in the AVM programs.

Step 2 (model checking the program). The program for the AVM system is processed into the AVM.pp file and the above property is specified in the AVM.spec file. Then, model checking of the program is executed using the following command.
copper --default --specification ExamProp AVM.pp AVM.spec --ltl

The result of this LTL model checking is "conformance relation does not exist!! specification ExamProp is invalid...". Besides, a counterexample is produced correspondingly. A detailed UCE trace is generated from the program variable assignment parts and the action parts of that counterexample. In this trace, 0:epsilon stands for internal actions that represent value assignments of variables or decisions of branch statements. This trace reveals that when the input is assigned the value key, not the expected value coin, the output action output_milk still occurred. The ExamProp property cannot hold against the AVM program.

Step 3 (revamping the model and program with the counterexample). We put the above UCE trace and the system model shown in Figure 2(a) as inputs into the CGSS algorithm. After the counterexample-guided synchronous simulation procedure, it produces an empty set, so a UIE is actually detected. That is, a bottle of milk will be released when an incorrect input is inserted, and the model lacks a specification to deal with such an error. Therefore, we improve the system model by adding additional parts to deal with this UIE error, as shown in Figure 5. If an incorrect input (not a coin) is inserted, the AVM system will output an error message and terminate its execution in the stop state. Furthermore, we also fix the core segment of the system program to if (strcmp(input, "coin") == 0) output_coffee(); else output_error();, so that the UIE error mentioned above will not occur. The exemplified execution of our LIED method on the AVM system demonstrates its feasibility and effectiveness: certain latent implementation errors are detected, and, more importantly, the system models and implementations are well improved.

Conclusion

To validate the functional correctness of a software system, only applying model checking followed by conformance testing may not detect some latent implementation errors, that is, the unnecessary implementation errors and the neglected implementation errors. In this paper, the LIED method is proposed to detect such latent implementation errors proactively. Based on the analogy set of key properties, the LIED method applies model checking directly to the actual software implementation to check whether some latent implementation errors exist and utilizes the counterexamples, which illustrate the exceptional behavior executions, as intuitive guidance to improve the system models and system implementations, respectively. The LIED method is essentially well compatible with the traditional model checking and model-based conformance testing procedures. It can be applied as an effective complementary method for detecting latent implementation errors, not as a replacement for traditional model checking and conformance testing. The major advantages of our LIED method can be summarized in two aspects. First, it efficaciously complements the incompleteness of the key security properties for the software validation process.
Second, it helps to construct more accurate system formal models, promoting the effectiveness of model checking and model-based conformance testing; that is, the original model checking can be performed more completely because the key properties and the system models are both improved, and conformance testing can be performed more precisely because the improvement of the system models results in generating test cases with higher accuracy and a stronger capability of detecting implementation errors. In a word, the LIED method tends to be a good accelerant for better iterative executions of model checking and conformance testing, in which the "Under Verified" scenario is improved as expected; consequently, these software validation methods work collaboratively to make software design and implementation more effective and more efficient. In the future, we will apply the LIED method to more complex and practical systems [23-25].
A Novel Coplanar Based Adder Logic Design Using QCA

Nowadays, VLSI is one of the foremost technologies used in the field of electronics and communication. It is used to create integrated circuits by merging millions of MOS transistors into a single chip. In VLSI, most transistors are designed at the microscale level, yet the demand for compact devices keeps growing, so it is necessary to design circuits at the nanoscale level. In VLSI, CMOS technologies are used for designing integrated circuit (IC) chips, but the feature sizes used in CMOS circuit design remain at the microscale level. Researchers have therefore introduced a new nanotechnology called QCA. Logic gates are among the fundamental components for designing any circuit in electronics and communication. In this paper, a novel coplanar approach to designing an efficient QCA-based 4-bit full adder using XOR/XNOR logic gates is proposed. The QCADesigner version 2.0.3 simulation tool is used in the proposed method, and the performance is analysed and verified to determine the capabilities of the proposed full adder.

Introduction

Adders play a major role in digital circuit design; they are used not only in arithmetic operations but also in logical operations. The adder is a building block for designing microprocessors and digital signal processing chips. A full adder adds three input bits: two operand bits and a carry-in. Recently, VLSI has become one of the most widely used technologies in the electronics and communication field: it creates IC chips with millions of transistors. CMOS technologies are used to construct integrated circuits in VLSI, but the feature sizes used in CMOS technologies for designing circuits are at the microscale level, and due to physical limits CMOS technologies cannot scale down to the nanoscale. To overcome this issue, researchers have proposed a new nanotechnology named Quantum-dot Cellular Automata (QCA). QCA nanotechnology is a possible replacement for CMOS technology with many additional features, and it has recently become one of the top six emerging technologies. QCA is a novel digital technology that offers low energy consumption and very high density. In QCA, binary information is not encoded by voltage levels representing '0' and '1'; instead, binary states are encoded as charge configurations. Binary state '0' is represented by polarization '+1' and binary state '1' by polarization '−1'. The fundamental element of QCA is the QCA cell, or quantum cell. A quantum dot is located at each corner of a QCA cell, four dots in total. Every quantum cell has two additional free electrons; due to the Coulombic repulsion between them, the two electrons occupy diagonally opposite dots in the cell. A QCA cell thus has two possible polarization configurations, each denoting the binary state '0' or '1'. Due to the Coulombic interaction, binary information can easily move from the input to the output of a QCA cell. A series of adjacent quantum cells is called a QCA wire. QCA circuits use four clock phases to reduce signal metastability. Two essential wire crossover techniques are the coplanar crossover and the multilayer crossover.
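Before the implementation details, the Boolean behavior that any full adder layout must reproduce can be stated as a short behavioral sketch in Python. This is a plain logic-level model under the standard full-adder equations (sum = A ⊕ B ⊕ Cin, carry-out = AB + Cin(A ⊕ B)), not a cell-level QCA simulation; the polarization helper merely restates the encoding given above.

# Behavioral sketch of the XOR-based full adder and its 4-bit
# ripple-carry composition; logic level only, not a QCA cell simulation.
def full_adder(a, b, cin):
    """One-bit full adder: the sum is the XOR of all three inputs."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_adder_4bit(a_bits, b_bits, cin=0):
    """4-bit ripple-carry adder; bit lists are least-significant first."""
    sums, carry = [], cin
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        sums.append(s)
    return sums, carry

def to_polarization(bit):
    """Map a logic bit to the cell polarization convention stated above."""
    return +1 if bit == 0 else -1

# Exhaustive check of the behavioral model against integer addition.
for a in range(16):
    for b in range(16):
        for cin in (0, 1):
            a_bits = [(a >> i) & 1 for i in range(4)]
            b_bits = [(b >> i) & 1 for i in range(4)]
            sums, cout = ripple_adder_4bit(a_bits, b_bits, cin)
            assert sum(s << i for i, s in enumerate(sums)) + (cout << 4) == a + b + cin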
The objective of this paper is to overcome the physical limits of CMOS and to design coplanar QCA XOR/XNOR full adder circuits with minimum cost. To exhibit the functionality and capabilities of the proposed XOR logic gates and the 4-bit full adder architecture, the performance is evaluated and analysed.

3. System implementation

In this paper, the proposed method introduces a QCA-based 4-bit full adder using an XOR gate. XOR/XNOR logic gates are preferred over other circuits because they reduce the complexity of the circuit; in particular, the XOR gate makes the adder's sum and carry logic easy to compute. There are two types of wire crossing methods used in QCA nanotechnology. In the coplanar crossover wiring method, QCA cells are aligned properly so that they do not interact with each other, and the coplanar crossover is preferred for designing complex circuits in QCA nanotechnology. In the developed circuit, both the cell count and the complexity have been reduced. The designed full adder comprises a minimum number of logic gates, which minimizes the latency and cell count. The proposed schematic diagram for the XOR gate appears in Figure 3. In this diagram, the inputs of the gate are A and B, and P1 and P2 are the polarization inputs; based on these enable inputs, the gate acts as either an XOR gate or an XNOR gate. The mathematical expression of the full adder sum is given by

Sum = A′(B ⊕ C) + A(B ⊕ C)′ = A ⊕ B ⊕ C. (1)

The adder sum is the exclusive-OR of all three inputs, and the adder carry-out is expressed in terms of the inputs of the previous stage.

Simulation results & discussion

QCADesigner version 2.0.3 simulation tools are used to implement and simulate the proposed method. QCADesigner is a powerful CAD tool; it allows the designer to quickly design a layout and simulate circuits constructed with any number of QCA cells. Figure 5(a) illustrates the XOR logic: inputs are applied to the gate and the output of the gate is checked. For example, if A = 0 and B = 1, then the output is C = 1. According to the inputs A and B of the XOR gate, the corresponding output is evaluated. Figure 5(b) illustrates the XNOR logic style and its simulated output for various input combinations. When the polarization inputs (P1 P2) are '10' in binary, the gate performs the XNOR operation. If the inputs of the XNOR gate are A = 1 and B = 0, then the output is C = 0; similarly, if A = 1 and B = 1, then the output is C = 1. According to the inputs A and B, the corresponding output is evaluated. Figure 6 shows the input sequences of the adder logic and Figure 7 shows the output sequences. There are a total of 32 input/output combinations in the simulation of the 4-bit adder logic: the first sixteen correspond to a low carry-in and the remaining sixteen to a high carry-in. The adder sum is 4 bits wide. If the carry-in is low, the first 8 carry-outs are low and the remaining 8 carry-outs are high; if the carry-in is high, the first 8 carry-outs are high and the remaining 8 carry-outs are low. The adder logic outputs are simulated and verified for all possible input combinations. Figure 8 shows the performance analysis.

Conclusion

This paper first designed a full adder for 1-bit addition and then a 4-bit adder using the coplanar approach in QCA technology. The proposed adder logic requires 0.009 µm² of area and has a delay of 0.5 ns.
The adder logic designed with QCADesigner requires less area than the existing method, and the lower latency of the proposed method comes with a smaller cell count. This work has been shown to be an efficient programmable circuit. Using the proposed XOR/XNOR logic gate, more complex circuits can be designed in the future.
AN APPLICATION OF AN AVERY TYPE FIXED POINT THEOREM TO A SECOND ORDER ANTIPERIODIC BOUNDARY VALUE PROBLEM

In this article, we show the existence of an antisymmetric solution to the second order boundary value problem x'' + f(x(t)) = 0, t ∈ (0, n), satisfying the antiperiodic boundary conditions x(0) + x(n) = 0, x'(0) + x'(n) = 0, using an Avery et al. fixed point theorem which is itself an extension of the traditional Leggett-Williams fixed point theorem. The antisymmetric solution satisfies x(t) = −x(n − t) for t ∈ [0, n] and is nonnegative, nonincreasing, and concave for t ∈ [0, n/2]. To conclude, we present an example.

1. Introduction. The study of the existence of solutions to boundary value problems has long been an interesting and well-researched area within differential equations. In particular, antiperiodic boundary conditions have been an important part of the literature [8,12,13,14]. Recently, Avery et al. have published several articles which extend the original Leggett-Williams fixed point theorem [3,4,5,6,9]. The extension does not require the functional boundaries of the arguments to be invariant. Also, quite interestingly, Avery et al. provide a topological proof for some of their results instead of using index theory arguments. There has been a significant amount of work published utilizing Avery fixed point theorems to prove the existence of solutions, typically positive solutions, to differential, difference and dynamic equations with varying types of boundary conditions; for a small sample see [1,2,7,10,11,16,17]. In this paper, we will apply the Avery fixed point theorem [4] to a second order boundary value problem with antiperiodic boundary conditions to prove the existence of an antisymmetric solution in the sense that x(t) = −x(n − t) for t ∈ [0, n]. Of note, in many related papers the authors utilize a concavity-like property of the Green's function. Here the approach is similar, but the property is somewhat different due to the antisymmetric nature of our solution. In Section 2, we provide much of the background information required for the problem and define a few important sets. In Section 3, we present the fixed point theorem, quickly followed by Section 4, where the BVP, Green's function, and operator are defined. There we state and prove a lemma involving the crucial concavity-like property, and we note the importance of the midpoint, n/2, of the interval [0, n]. Finally, in Section 5, we apply the fixed point theorem to the BVP, and we conclude in Section 6 with an example.

Definitions. Definition 2.1. Let B be a real Banach space. A nonempty closed convex set P ⊂ B is called a cone provided: (i) x ∈ P, λ ≥ 0 implies λx ∈ P; (ii) x ∈ P, −x ∈ P implies x = 0. We say the map α is a nonnegative continuous concave functional on a cone P of a real Banach space B if α : P → [0, ∞) is continuous and α(tx + (1 − t)y) ≥ tα(x) + (1 − t)α(y) for all x, y ∈ P and t ∈ [0, 1]. Similarly, we say the map β is a nonnegative continuous convex functional on a cone P of a real Banach space B if β : P → [0, ∞) is continuous and β(tx + (1 − t)y) ≤ tβ(x) + (1 − t)β(y) for all x, y ∈ P and t ∈ [0, 1]. Next, we define sets that are integral to the fixed point theorem. Let ψ and δ be nonnegative continuous functionals on P; then we define the set P(ψ, δ, a, b) = {x ∈ P : a ≤ ψ(x) and δ(x) ≤ b}.

The Fixed Point Theorem. The following fixed point theorem is attributed to Anderson, Avery, and Henderson [4] and is an extension of the original Leggett-Williams fixed point theorem [15]. Theorem 3.1.
Suppose P is a cone in a real Banach space E, α is a nonnegative continuous concave functional on P, β is a nonnegative continuous convex functional on P, and T : P → P is a completely continuous operator. If there exist nonnegative numbers a, b, c, and d such that (A1) {x ∈ P : a < α(x) and β(x) < b} ≠ ∅ and P(α, β, a, d) is bounded, then T has a fixed point x* in P(α, β, a, d).

4. The Antiperiodic Boundary Value Problem. Let f : R → R be a continuous map and n > 0 be fixed in R. We will apply the fixed point theorem to the second order boundary value problem

x'' + f(x(t)) = 0, t ∈ (0, n) (1)

with antiperiodic boundary conditions

x(0) + x(n) = 0, x'(0) + x'(n) = 0. (2)

We will show that if f satisfies certain conditions, then (1), (2) has an antisymmetric solution in the sense that x(t) = −x(n − t) for t ∈ [0, n]. Throughout this paper, we will utilize the Banach space E = C[0, n] endowed with the supremum norm. If x is a fixed point of the operator T defined by (Tx)(t) := ∫₀ⁿ G(t, s) f(x(s)) ds, where G(t, s) is the Green's function for the operator L defined by Lx(t) := −x''(t) satisfying the antiperiodic boundary conditions (2), then (see [8]) x is a solution of the boundary value problem (1), (2).

Proof. If y = 0, then 0 ≤ wG(n/2, s) = w min{s/2, (n − s)/2}, so we assume y ≠ 0. We consider 3 cases. Notice that in Case 2 and Case 3, y and G(n/2 − y, s) are nonnegative, so the inequalities can be cross-multiplied to obtain the desired result.

5. Solutions using (H1). In the following theorem, we demonstrate how to apply the expansive condition (H1) of Theorem 3.1 to prove the existence of at least one solution to (1), (2). An application of (H2) is similar. A standard application of the Arzela-Ascoli Theorem may be used to show that T is completely continuous. Claim: (A1) {x ∈ P : a < α(x) and β(x) < b} ≠ ∅.
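The closed form of the Green's function does not survive in the text above, so the following LaTeX sketch reconstructs it by direct integration; this is our own derivation, included because it agrees with the value G(n/2, s) = min{s/2, (n − s)/2} used in the proof.

% Green's function for Lx = -x'' with x(0) + x(n) = 0, x'(0) + x'(n) = 0.
% Integrating x'' = -h twice gives
%   x(t) = x(0) + x'(0)\,t - \int_0^t (t - s)\, h(s)\, ds,
% and the antiperiodic conditions force
%   x'(0) = \tfrac{1}{2}\int_0^n h(s)\, ds, \qquad
%   x(0)  = \tfrac{1}{2}\int_0^n \bigl(\tfrac{n}{2} - s\bigr) h(s)\, ds.
% Collecting the kernel yields
\[
  G(t,s) = \frac{n}{4} - \frac{|t - s|}{2}, \qquad (t,s) \in [0,n]^2,
\]
% which satisfies the identity quoted in the lemma:
\[
  G\Bigl(\frac{n}{2}, s\Bigr) = \frac{n}{4} - \frac{|n/2 - s|}{2}
  = \min\Bigl\{\frac{s}{2}, \frac{n - s}{2}\Bigr\}.
\]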
Skeleton Line Extraction Method in Areas with Dense Junctions Considering Stroke Features

Extraction of the skeleton lines of complex polygons is difficult and a hot topic in map generalization research. Due to the irregularity and complexity of junctions, it is difficult for traditional methods to maintain the main structure and extension characteristics when dealing with dense junction areas, so a skeleton line extraction method considering stroke features is proposed in this paper. Firstly, we put forward a long-edge adaptive node densification algorithm, which is used to construct a boundary-constrained Delaunay triangulation to uniformly divide the polygon and extract the initial skeleton line. Secondly, we define triangles with three adjacent triangles (Type III) as the basic unit of junctions, and then obtain the segmented areas with dense junctions on the basis of the local width characteristics and correlation relationships of each Type III triangle. Finally, we concatenate the segments into strokes and correct the initial skeleton lines based on the extension direction features of each stroke. Actual water network data of Jiangsu Province in China were used to verify the method. Experimental results show that the proposed method can better identify areas with dense junctions and that the extracted skeleton line is naturally smooth and well-connected, accurately reflecting the main structure and extension characteristics of these areas.

Introduction

Ai's studies [1] have pointed out that the extraction of skeleton lines is a key step in realizing map generalization operations such as polygon collapse and dissolving. The extraction of skeleton lines needs to take into account the features of polygons and summarize their main body structures and extension characteristics. The extraction should meet human visual cognition requirements while conforming to drawing specifications. Therefore, how to obtain an accurate and reasonable skeleton line has always been a difficult research point [2]. There are three common methods for extracting skeleton lines: the round skeleton line method [3], the straight skeleton line method [4,5] and the skeleton line method based on Delaunay triangulation (DT) [6,7]. In recent years, the DT-based method has been widely used by researchers extracting skeleton lines from vector and raster data because of the empty circumcircle rule and the max-min angle rule [8,9]. The research in this paper falls within the scope of vector data. DeLucia et al. [10] first proposed a skeleton line extraction method based on boundary-constrained Delaunay triangulation (CDT). Li et al. [11] and Wang et al. [12] used the CDT algorithm to extract the main skeleton lines of polygons, and the experimental results reflected the main extension direction and morphological characteristics of the polygons well. However, in the process of research, some scholars found that skeleton lines extracted by CDT exhibit jitter at branch junctions and along the polygon boundary. Hence, Jones et al. [13], Uitermark et al. [14] and Penninga et al. [15] proposed using the branch skeleton line direction, boundary simplification, boundary node densification and other methods to modify the skeleton lines. However, these optimization methods are only applicable to simple polygons with regular shapes and flattened boundaries. Haunert et al.
[16] studied a large amount of road data and found that existing methods have difficulty maintaining the main structure and extension characteristics of polygons with irregular and complex junctions. Li et al. [17] made a preliminary exploration of skeleton line extraction in certain areas with many junctions; however, their method relies on the skeleton line direction and an intersection criterion in the junction area and is thus still unable to handle complex areas. This paper focuses on an approach for skeleton line extraction in areas with dense junctions and aims to propose a new method that automatically identifies complex junction areas and obtains skeleton lines that fit human visual cognition in the process of map generalization. The structure of the article is as follows: Section 2 reviews related work on skeleton line extraction methods based on Delaunay triangulation; Section 3 presents the method for extracting skeleton lines in areas with dense junctions considering the stroke feature; Section 4 provides a series of experiments conducted to validate the reliability and superiority of the proposed method; Section 5 discusses conclusions and future work.

Existing Skeleton Line Extraction Methods Based on Delaunay Triangulation
Li et al. [17] make a full analysis of the existing skeleton line extraction methods based on Delaunay triangulation and propose an optimized algorithm for the dissolving operation of long and narrow patches in land-cover data. The basic idea is to introduce a constrained Delaunay triangulation, identify the junction areas based on the degree of node correlation, and eliminate the jitter on the skeleton line of the junction areas under direction and intersection criterion constraints. The specific steps are as follows: Step 1: Construct a boundary-constrained Delaunay triangulation to divide the long and narrow patches. The triangles in the Delaunay triangulation can be divided into three types according to the number of adjacent triangles [6]: Type I triangle: there is only one adjacent triangle, and two sides of the triangle are boundaries of the polygon; as shown by ∆ABC in Figure 1a, the vertex A is an end point of the skeleton line. Type II triangle: there are two adjacent triangles; it forms the backbone structure of the skeleton line and describes its extension direction.
As shown by ∆ABC in Figure 1b, the advancement direction of the skeleton line in a Type II triangle is unique. Type III triangle: there are three adjacent triangles; it is the intersection of skeleton line branches and the starting point for stretching in three directions; as shown by ∆ABC in Figure 1c, the three extension directions meet at point O. Step 2: Extract the central axis from the three types of triangles as follows and connect the axes to form the skeleton line, wherein the common edge of two adjacent triangles is called an adjacent edge: Type I triangle: connect the midpoint of the unique adjacent edge with its corresponding vertex, as shown by arc AD in Figure 1a; Type II triangle: connect the midpoints of the two adjacent edges, as shown by arc DF in Figure 1b; Type III triangle: connect the centroid with the midpoints of the three sides, as shown by segments OD, OF and OH in Figure 1c. Step 3: Identify the junction areas according to the locations of the Type III triangles. Step 4: Calculate the directions of the branch skeleton lines in the junction areas. If two branch skeleton lines have the same direction, they are preferentially connected as a line, and the remaining branch skeleton lines extend to this line in their respective directions. If no two skeleton lines share the same direction, the Euclidean distance between the nodes is used as the intersection criterion of similarity: the branch nodes are aggregated to their geometric center, and the branch skeleton lines are connected to the aggregation node in their respective directions.
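To make Steps 1 and 2 concrete, the sketch below classifies triangles by their number of neighbors and emits the corresponding central-axis segments. It is a minimal illustration: the `Triangle` structure, with explicit neighbor links and shared (adjacent) edges, is an assumed encoding rather than part of the original algorithm.

```python
from dataclasses import dataclass, field

Point = tuple[float, float]

def midpoint(a: Point, b: Point) -> Point:
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

@dataclass
class Triangle:
    vertices: list[Point]                                        # corner points A, B, C
    neighbors: list["Triangle"] = field(default_factory=list)    # triangles sharing an edge
    adjacent_edges: list[tuple[Point, Point]] = field(default_factory=list)

def centroid(tri: Triangle) -> Point:
    xs, ys = zip(*tri.vertices)
    return (sum(xs) / 3, sum(ys) / 3)

def axis_segments(tri: Triangle) -> list[tuple[Point, Point]]:
    """Central-axis segments for one triangle, by type (Step 2)."""
    n = len(tri.neighbors)
    if n == 1:   # Type I: midpoint of the unique adjacent edge -> opposite vertex
        m = midpoint(*tri.adjacent_edges[0])
        apex = next(v for v in tri.vertices if v not in tri.adjacent_edges[0])
        return [(m, apex)]
    if n == 2:   # Type II: connect the midpoints of the two adjacent edges
        m1, m2 = (midpoint(*e) for e in tri.adjacent_edges)
        return [(m1, m2)]
    if n == 3:   # Type III: centroid -> midpoint of each side (a junction)
        c = centroid(tri)
        return [(c, midpoint(*e)) for e in tri.adjacent_edges]
    return []    # triangle with no neighbors contributes nothing
```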
Shortcomings in the Existing Method
In the existing method, each junction area is used as a processing unit, and the skeleton lines of simple junction areas can be well obtained by setting a direction threshold and a branch node aggregation distance threshold, as shown in rectangles A and B in Figure 2. However, when the junctions are clustered together, it is difficult for the existing method to precisely extract the skeleton line. In addition, the shape of the skeleton line in the junction areas is complicated, and because each junction area is processed as an independent unit, it is impossible to consider the overall characteristics of the area formed by the mutual association between the junctions, which results in the destruction of the overall structure of the area. As shown in rectangles C in Figure 2, the shape of the skeleton lines in the areas with five junctions is not consistent with the spatial structure of the original polygon.

Methodology
In this paper, a method for extracting skeleton lines in areas with dense junctions considering stroke features is proposed that includes three key steps: (1) Long-edge adaptive node densification: construct a fine Delaunay triangulation to avoid skeleton jitter in junction areas; (2) Identification of areas with dense junctions: identify the branch structure of the polygon as a junction unit and aggregate the junctions on the basis of local width characteristics and association relations; (3) Skeleton line optimization in the areas with dense junctions: optimize the skeleton line based on the good continuity characteristics of the stroke to obtain a line which fits in with human visual cognition. The detailed process diagram for our method is depicted in Figure 3.
Long-Edge Adaptive Node Densification
Boundary node densification is one of the key steps in establishing the boundary-constrained Delaunay triangulation. Many junction areas have dense branches on one side but no branch on the other side, as shown in Figure 4a. If the number of nodes on the boundary is too small, the triangles will be stretched toward these points when constructing the triangulation, which causes zigzag jitter on the skeleton lines. The traditional node densification algorithm usually operates on all boundary arcs of the polygon, so invalid branches at the normal ends will be produced. Therefore, a long-edge adaptive densification algorithm is proposed in this paper to perform node densification in such complex areas. The specific steps are as follows: Step 1: Identify the obtuse triangles among the Type III triangles and set the minimum angle threshold A_min. If the minimum angle of an obtuse triangle is smaller than A_min, it is marked and put into the triangle set S, as shown by the blue triangle in Figure 4a; Step 2: Select a triangle in set S and identify its longest edge; find the Type II triangle which shares this longest edge and has one edge on the polygon boundary, as shown by the yellow triangle in Figure 4b; Step 3: Identify the longest edge of the abovementioned Type II triangle and determine whether it is a polygon boundary arc; if so, it is marked as the local long edge, and the Type II triangle is marked as the triangle to be densified, as shown by the purple triangle in Figure 4c; Step 4: Take the length of the shortest edge of the Type III triangle as the densification step size and add new nodes on the local long edge at this step size, as shown in Figure 4d. Step 5: Repeat Steps 2-4, iteratively processing the triangles in set S until S = ∅.
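Step 4 reduces to inserting evenly spaced nodes along the local long edge at the given step size; a minimal sketch (the function name and inputs are illustrative):

```python
import math

Point = tuple[float, float]

def densify_edge(p: Point, q: Point, step: float) -> list[Point]:
    """Insert evenly spaced nodes on the local long edge pq (Step 4).

    `step` is the densification step size, i.e., the length of the
    shortest edge of the triggering Type III triangle.
    """
    length = math.dist(p, q)
    k = int(length // step)          # number of new interior nodes
    return [
        (p[0] + (q[0] - p[0]) * i / (k + 1),
         p[1] + (q[1] - p[1]) * i / (k + 1))
        for i in range(1, k + 1)
    ]

# Example: a 10 m edge densified with a 3 m step gains three interior nodes.
print(densify_edge((0.0, 0.0), (10.0, 0.0), 3.0))  # [(2.5, 0.0), (5.0, 0.0), (7.5, 0.0)]
```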
Junction Area Identification
In the first step, junctions in the polygon are identified. The identification algorithm is as follows: (1) After long-edge adaptive node densification, construct the boundary-constrained Delaunay triangulation and extract the initial skeleton line. (2) Construct the node-arc-polygon topology for the initial skeleton line; for any node of the skeleton line, the number of arcs associated with the node is defined as ArcNum(Node). (3) When the value of ArcNum(Node) is 3 for a node, the node is marked as a junction node, and the area where the node is located is marked as a junction area, as shown by the red point in Figure 5a. It can be found that these junction nodes are the center points of Type III triangles.

Junction Area Association
To determine whether junction areas with a discrete distribution can be aggregated, the length of the connecting arcs between them is an important measure. If both the first and end points of an arc are junction nodes, the arc is a connecting arc. The specific steps for determining the association relationship between the junction areas are as follows: Step 1: Calculate the local approximate width (W_NODE) of the junction area: take twice the maximum length of the three branch skeleton lines in the Type III triangle as the W_NODE of the junction area, using Figure 5b as an example (Equation (1)). Step 2: Calculate the local approximate width (W_ARC) of the area where the connecting arc is located: calculate W_Ns and W_Ne for the areas containing the start node N_s and the end node N_e of the connecting arc using the formula for W_NODE, and take the larger of W_Ns and W_Ne as W_ARC (Equation (2)). Step 3: Calculate the effective length (L_v) of the connecting arc: the length of the skeleton line between the start and end nodes of the connecting arc, excluding the length inside the Type III triangles, is recorded as the effective length, as shown in Figure 5b for the connecting arc between the start node A and the end node B (Equation (3)). Step 4: If the effective length L_v of the connecting arc is smaller than the local approximate width W_ARC of the area where the connecting arc is located, the two junction areas are associated with each other, and the connecting arc between them is marked Arc_link (Equation (4)).
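Equations (1)-(4) were lost in extraction, but the step descriptions pin down their content. The sketch below encodes one plausible reading of the identification and association rules; the graph encoding and all names (`node_arcs`, `branch_lengths`, and so on) are assumptions for illustration.

```python
# A plausible encoding of the junction identification and association rules.
# Assumed input: a skeleton graph as {node_id: [arc_id, ...]} plus per-arc data.

def junction_nodes(node_arcs: dict[int, list[int]]) -> set[int]:
    """Identification step (3): a node with ArcNum(Node) == 3 is a junction."""
    return {n for n, arcs in node_arcs.items() if len(arcs) == 3}

def w_node(branch_lengths: tuple[float, float, float]) -> float:
    """Eq. (1), as described: twice the longest of the three branch
    skeleton lines of the Type III triangle."""
    return 2.0 * max(branch_lengths)

def is_associated(skeleton_len: float, inner_len_start: float,
                  inner_len_end: float, w_start: float, w_end: float) -> bool:
    """Eqs. (2)-(4): the connecting arc is an Arc_link when its effective
    length L_v (skeleton length minus the parts inside the two Type III
    triangles) is smaller than W_ARC = max(W_Ns, W_Ne)."""
    l_v = skeleton_len - inner_len_start - inner_len_end   # Eq. (3)
    w_arc = max(w_start, w_end)                            # Eq. (2)
    return l_v < w_arc                                     # Eq. (4)
```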
Junction Area Aggregation
Calculate the associated arcs Arc_link for all junction nodes of the polygon and put them into the set S(Arc_link). Select an Arc_link(i) and use its start node N_s and end node N_e as tracking nodes to detect whether other Arc_link exist at N_s and N_e (except for Arc_link(i) itself). If such an Arc_link exists, put it into the neighboring association set NeighborArc_link(Arc_link). After each Arc_link has been examined, the associations are clustered and expanded to obtain the junction aggregation result.

Skeleton Line Optimization in Areas with Dense Junctions
For any area with dense junctions, this paper considers the stroke feature to extract its internal skeleton line. Accordingly, a stroke is first constructed with the connecting arc as the basic unit, the skeleton line of each trident region is then extracted and optimized, and the stroke is extended naturally, yielding a skeleton line more in line with human cognition.

Arc Importance Evaluation
In this paper, the algorithm proposed by Liu et al. [18] is applied to determine the importance of connecting arcs. The basic idea is to use the length, approximate width, connectivity, proximity and betweenness of connecting arcs, weighted by the CRITIC method [19], to obtain the importance of each connecting arc. The meanings of these parameters are shown in Table 1. Connectivity: where r(v_i, v_k) indicates the connectivity between nodes. Proximity: the minimum number of connections from the connecting arc to all other connecting arcs, reflecting the possibility that other connecting arcs will be aggregated to this connecting arc, where d(v_i, v_k) indicates the shortest distance between two nodes.
Betweenness: a measure of the extent to which this connecting arc lies between other connecting arcs, i.e., whether it acts as a bridge, where m_jk indicates the number of shortest paths between two nodes and m_jk(v_i) indicates the number of those shortest paths that pass through node v_i. CRITIC (Criteria Importance Through Intercriteria Correlation) is an objective weighting method based on a mutual-relationship criterion proposed by Diakoulaki, in which the objective weight of an index is determined by the contrast intensity and the degree of conflict between indicators. The specific steps are described by Diakoulaki et al. [19] and will not be repeated here.

Construct the Stroke Connection
Based on the importance of each connecting arc, the stroke connections of the areas with dense junctions are iteratively calculated. The main steps are as follows: Step 1: Identify the junction nodes with only one connecting arc and select one as the start tracking node of the stroke connection; the connecting arc is taken as the tracking arc, and the node on its other side is used as the second tracking node; Step 2: If there is more than one connecting arc at the second tracking node, put these arcs into the stroke connection candidate set R and calculate the importance of each connecting arc; Step 3: Preferentially connect the arc of larger importance with the first connecting arc to form a stroke; Step 4: Repeat Steps 2 and 3 and continue to track the stroke connection until no more connecting arcs can be connected, at which time a single stroke connection is constructed; Step 5: Explore the branch connecting arcs of the existing stroke connections until all the connecting arcs of the areas with dense junctions have been processed, at which time the stroke connection calculation ends, as shown by the thick blue line in Figure 6.

Skeleton Line Adjustment
The connecting arc Arc_link, as the basic unit of stroke connection in areas with dense junctions, connects two junction nodes. For any of the junction nodes, the two arcs connecting the node and forming a stroke connection are used as the reference arcs. First, delete these two arcs and connect the midpoints of the two edges where their endpoints are located in the Type III triangle to form a new arc Arc_adjust; then, the third arc associated with the junction node is adjusted according to its direction characteristics. If the skeleton line of the branch has an unstable direction, connect the endpoint of the third arc with the midpoint of Arc_adjust; otherwise, extend the third arc to Arc_adjust along its direction. The direction of the skeleton is defined by the algorithm proposed by Li et al. (2018) and is determined by the direction of the internal arcs of five adjacent triangles. The skeleton lines are deemed directionally stable if the direction differences of these five arcs are less than 5°. As shown in Figure 7a, assuming that the arcs OA and OB belong to the same stroke at the junction node O, the midpoints A and B of the two edges are connected as the reference arc. For a third arc with an unstable direction, the midpoint P of AB is taken and arc CP is connected to form the new skeleton line, as shown in Figure 7b; for a third arc with a stable direction, the new skeleton line is obtained by extending the arc to AB along its direction, as shown in Figure 7c.
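A minimal sketch of the greedy stroke tracking in Steps 1-5, assuming the arc importance scores have already been computed with the CRITIC weighting; the graph encoding and the handling of leftover branch arcs in Step 5 are simplifications.

```python
def build_strokes(node_arcs: dict[int, list[int]],
                  arc_ends: dict[int, tuple[int, int]],
                  importance: dict[int, float]) -> list[list[int]]:
    """Greedily chain connecting arcs into strokes (Steps 1-5).

    node_arcs:  node -> incident connecting arcs
    arc_ends:   arc  -> (start node, end node)
    importance: arc  -> precomputed CRITIC-weighted importance
    """
    used: set[int] = set()
    strokes: list[list[int]] = []
    # Step 1: start from junction nodes with a single connecting arc.
    seeds = [n for n, arcs in node_arcs.items() if len(arcs) == 1]
    for seed in seeds:
        arc = node_arcs[seed][0]
        if arc in used:
            continue
        stroke, node = [], seed
        while arc is not None and arc not in used:
            stroke.append(arc)
            used.add(arc)
            a, b = arc_ends[arc]
            node = b if node == a else a          # hop to the far end of the arc
            # Steps 2-3: among candidate arcs at this node, continue
            # with the most important unused one.
            candidates = [x for x in node_arcs[node] if x not in used]
            arc = max(candidates, key=importance.get, default=None)
        strokes.append(stroke)                    # Step 4: one stroke completed
    # Step 5 (simplified): remaining branch arcs each start their own stroke.
    for arc in arc_ends:
        if arc not in used:
            used.add(arc)
            strokes.append([arc])
    return strokes
```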
Experimental Data and Environment
Relying on the WJ-III map workstation developed by the Chinese Academy of Surveying and Mapping [20], in which the proposed skeleton line extraction method for areas with dense junctions considering stroke features is embedded, a complex water area group in the topographic map of an area in Jiangsu at a scale of 1:10,000 was taken as the experimental data for reliability and effectiveness verification. The experimental data covered an area of 2.7 × 2.7 km². The software ran on the Windows 7 64-bit operating system with an Intel Core i7-3770 CPU (3.2 GHz main frequency), 16 GB of memory, and a 1 TB solid-state drive.

Reliability and Effectiveness Analysis
To verify the reliability and effectiveness of the proposed method, it is compared with the skeleton line extraction method of Li et al. [17]. The overall information of the processing area using the method of this paper is shown in Table 2. It can be seen from Table 2 that the number of junction areas is 2286, the number of areas with sparse junctions is 347, and the number of areas with dense junctions is 124, which include 1939 junctions; this means the water elements in these areas are densely distributed and about 85% of the junction areas meet the aggregation conditions. In the dense junction areas, the minimum number of connecting arcs is 1, which indicates that two junctions can form a dense area; the largest dense area consists of 307 connecting arcs and 66 strokes, which indicates that this element has many branches with a compact arrangement. Through visual interpretation by human cartographers, the skeleton lines of the 347 sparse junction areas obtained by the existing method and by our new method both reflect the main extension direction and shape characteristics of the polygons well, without jitter. However, the skeleton lines of the 124 identified dense junction areas are all refined by our new method; in contrast, only 72 of them obtain similar results via the existing method, and 52 of them show obvious jitters.

Visual Cognition Analysis
Two typical areas with dense junctions in the experimental area are selected and shown partially enlarged in Figures 8 and 9. Among them, Figure 8 shows simple junction areas with smooth boundaries, where the branching water is arranged regularly and has consistent extension directions.
Figure 9 shows complex junction areas with uneven boundaries, where the branching water is arranged irregularly and has inconsistent extension directions. It can be seen from Figure 8 that for the simple junction areas, the method of Li et al. [17] is basically consistent with the results of the method proposed in this paper; both remove the jitters at junction areas well and accurately reflect the main structure and extension features of the water elements.
However, for the areas with irregular and complex junctions, as shown by rectangles A, B, and C in Figure 8a, the method of Li et al. [17] depends on the fitting distance threshold and the direction threshold, which leads to the skeleton line having a certain degree of deviation, as shown by rectangles A, B, and C in Figure 8b. The method proposed in this paper, in contrast, treats these areas as a whole without depending on any threshold; the extracted skeleton line is therefore smoother and more natural, as shown by rectangles A, B, and C in Figure 8c. It can be seen from Figure 9 that for complex areas with dense junctions, the method of Li et al. [17] is subject to severe interference from the complex boundaries and the arrangement structure of the branches; it is unable to handle the skeleton line jitter in these areas, and the extracted skeleton line shows a large degree of distortion and thus loses the overall structure of the region, as shown by rectangles A in Figure 9b. In contrast, the method proposed in this paper can better extract the main structure of this region and more accurately describe its skeleton line. For the backbone area with larger connectivity, the skeleton line extracted by this method can also summarize its extension characteristics well, as shown by rectangles A in Figure 9c. Figure 10 shows the results we obtained for the input in Figure 2a. As shown in rectangles A and B in Figure 10, the skeleton lines in the simple junction areas obtained by the two methods are both consistent with the main body shape of the original elements. However, as shown in the areas with five junctions in rectangles C, there were noticeable jitters on the skeleton lines of Li et al. [17], and these jitters did not exist on the skeleton lines of this paper. The red solid line is the adjustment result of the skeleton line stroke in this area, constructed via our method: the skeleton lines in each triangle were connected through the midpoints of the edges on the stroke, and the remaining branch skeletons extend up to the adjusted skeleton line along their individual directions. As a consequence, we obtain a smoother centerline that better reflects the aim of the cartographer.
Network Function Analysis
The global efficiency commonly used in complex network theory is used to evaluate the network function of the results in this paper. The global efficiency of a network was proposed by Latora; it describes how the nodes in the network interact and reflects the smoothness of information dissemination in the network as a global indicator of network function. The concept of dual graphs is introduced, in which the nodes represent the connecting arcs between trident nodes and the edges represent the relationships between connecting arc segments. It is formalized as G = G(V, E), where V is the set of nodes and E is the set of edges. Then, the global efficiency of the network G is calculated by Equation (5), where N is the total number of nodes, ε_ij is the efficiency between node i and node j, and d_ij is the minimum number of steps required to connect node i and node j, i.e., the path length. The value of the global efficiency lies in [0, 1]. Additionally, the number of stroke connections formed by the arcs of the experimental area is counted, as shown in Table 3. It can be found from Table 3 that the overall efficiency of the method proposed in this paper is improved by 23% compared with the traditional method, which indicates that the method improves the smoothness of information dissemination in the network. Meanwhile, with the same number of arcs, the number of strokes constructed by the method in this paper was reduced by 72 compared with the traditional method, which indicates better stroke connectivity in the network.
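Equation (5) did not survive extraction; matching the in-text definitions, the standard Latora-Marchiori form is E_glob = (1/(N(N-1))) * Σ_{i≠j} ε_ij with ε_ij = 1/d_ij. A sketch computing it on the dual graph (the adjacency encoding is assumed):

```python
from collections import deque

def global_efficiency(adj: dict[int, set[int]]) -> float:
    """Global efficiency E_glob = (1 / (N(N-1))) * sum_{i != j} 1 / d_ij,
    with d_ij the shortest path length (in steps) between nodes i and j;
    unreachable pairs contribute 0. Matches the in-text definition of Eq. (5)."""
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for src in nodes:
        # BFS shortest path lengths from src (edges are unweighted).
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

# Example: a path graph of three dual-graph nodes.
print(global_efficiency({0: {1}, 1: {0, 2}, 2: {1}}))  # (4*1 + 2*0.5) / 6 ≈ 0.833
```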
Conclusions
A method of extracting the skeleton line considering the stroke feature is proposed in this paper to address the problem that the existing method cannot accurately maintain the main structure and extension characteristics of areas with dense junctions. The skeleton line is optimized according to the good continuity characteristics of the stroke connection, making it more consistent with the laws of human cognition. After verification on the topographic map of actual water areas in a certain region of Jiangsu, the main conclusions were as follows: (1) The method in this paper can better distinguish areas with dense junctions from areas with sparse junctions. For the 124 identified areas with dense junctions, the existing method can only process 58% of them, whereas this method can process all of them; (2) Visual cognition analysis shows that for complex junction areas with uneven boundaries, irregular branch arrangement, and inconsistent extension directions, the skeleton line extracted by the proposed method better displays the main structure and extension characteristics of the region; (3) The analysis of network function indicates that the overall efficiency of this method improved by 23% compared to the existing method and that the number of strokes constructed was reduced by 57%, which proves that the skeleton line extracted by this method has better connectivity. The stroke generation strategy has an important influence on the accuracy of the skeleton line extraction results using the method proposed in this paper. Hence, a future research focus is to further refine the arc importance evaluation system and establish a more reasonable stroke generation strategy to make the skeleton line extraction results more refined. In addition, the method proposed in this article currently deals only with polylines; its applicability to area objects will be studied in future research. Author Contributions: Chengming Li conceived the original idea for the study; all co-authors conceived and designed the methodology; Yong Yin and Wei Wu conducted processing and analysis of the data; Chengming Li and Pengda Wu drafted the manuscript; all authors read and approved the final manuscript. Conflicts of Interest: The authors declare no conflict of interest.
9,801.4
2019-07-16T00:00:00.000
[ "Computer Science" ]
miR-136 Regulates the Proliferation and Adipogenic Differentiation of Adipose-Derived Stromal Vascular Fractions by Targeting HSD17B12 Fat deposition involves the continuous differentiation of adipocytes and lipid accumulation. Studies have shown that microRNA miR-136 and 17β-hydroxysteroid dehydrogenase type 12 (HSD17B12) play important roles in lipid accumulation. However, the regulatory mechanism through which miR-136 targets HSD17B12 during ovine adipogenesis remains unclear. This study aimed to elucidate the role of miR-136 and HSD17B12 in adipogenesis and their relationship in ovine adipose-derived stromal vascular fractions (SVFs). The target relationship between miR-136 and HSD17B12 was predicted and confirmed using bioinformatics and a dual-luciferase reporter assay. The results showed that miR-136 promoted proliferation and inhibited adipogenic differentiation of ovine SVFs. We also found that HSD17B12 inhibited proliferation and promoted adipogenic differentiation of ovine SVFs. Collectively, our results indicate that miR-136 facilitates proliferation and attenuates adipogenic differentiation of ovine SVFs by targeting HSD17B12. These findings provide a theoretical foundation for further elucidation of the regulatory mechanisms of lipid deposition in sheep.

Introduction
Excess lipids are stored in the animal body as adipose tissue. In sheep, excess fat is deposited in the tail, providing energy during adverse conditions. However, in modern farming, excessive fat deposits in the tail do not provide economic benefits [1]. Therefore, exploring the regulatory mechanisms underlying fat deposition in sheep is necessary. Fat deposition is regulated by several factors, including circular RNAs, microRNAs (miRNAs), and long non-coding RNAs. miRNAs are endogenous small non-coding RNAs (20-24 nucleotides) that bind to target gene mRNAs, promote their degradation, inhibit their translation at the post-transcriptional level, and affect various cellular properties and physiological processes in vivo [2-4]. Numerous recent studies have shown that several miRNAs play important roles in adipogenesis. miR-127 targets mitogen-activated protein kinase 4 to promote porcine adipocyte proliferation [5]. miR-130b inhibits the lipid accumulation of porcine preadipocytes by directly targeting peroxisome proliferator-activated receptor gamma (PPARγ) [6]. In sheep, miR-128-1-5p promotes the expression of lipogenic marker genes and the formation of lipid droplets by targeting the Kruppel-like transcription factor 11 5′-UTR [7]. miR-301a inhibits adipogenic differentiation of ovine preadipocytes by targeting homeobox C8 [8]. miR-136 expression is lower in large white pigs with higher back fat deposition [9]. Furthermore, the expression of miR-136 in subcutaneous adipose tissue was significantly higher than in sheep perirenal adipose tissue [10]. However, the regulatory mechanism by which miR-136 targets HSD17B12 during ovine adipogenesis remains unclear. Based on previous studies, we speculated that miR-136 might play a role in the proliferation and adipogenic differentiation of ovine stromal vascular fractions (SVFs).
Based on the mechanism of action of miRNAs, we made bioinformatics-based predictions and found that HSD17B12 was a target gene of miR-136. HSD17B12 is a member of the 17β-hydroxysteroid dehydrogenases, a class of enzymes that catalyze the interconversion of active and inactive steroid hormones [11]. HSD17B12 is widely expressed in animal kidneys, livers, and ovaries [12]. In addition, HSD17B12 has many important biological functions, including fatty acid metabolism, sex hormone production, and cell cycle regulation [12,13]. Notably, the expression of HSD17B12 did not increase in the livers of transgenic mice overexpressing sterol regulatory element-binding proteins (SREBP) [14]. However, another study found that the expression of HSD17B12 and other SREBP-regulated genes, such as fatty acid synthase, significantly increased in HepG2 cells in which SREBP-1 was activated [15]. The results of these two studies are inconsistent; the inconsistencies may be due to species differences, and further research is required to confirm this hypothesis. Moreover, interference with HSD17B12 expression inhibits the proliferation of breast cancer cells [16]. In human adipocytes, HSD17B12 downregulates lipoprotein lipase expression and affects adipocyte maturation and lipid accumulation [17]. However, there is a dearth of studies on the precise function of HSD17B12 in ovine adipogenesis.

In this study, we investigated the target relationship between miR-136 and HSD17B12. We also explored the effects of miR-136 and HSD17B12 on the proliferation and adipogenic differentiation of ovine SVFs and their possible mechanism of action. This study aimed to elucidate the mechanism of action of miR-136 in the proliferation and adipogenic differentiation of ovine SVFs, thereby providing a foundation for studying adipogenesis regulation by miRNAs.

Identification of Ovine SVFs
Isolated and cultured ovine SVFs are spindle-shaped (Figure 1A). Oil Red O (ORO) staining showed that more lipid droplets were produced after 10 days of differentiation induction (Figure 1B). In conclusion, these cells were successfully isolated and could be used in subsequent experiments.
miR-136 Targets the HSD17B12 3′-UTR
Based on bioinformatics analysis (Figure S1), we predicted that a binding site for miR-136 is present in the HSD17B12 3′-UTR (Figure 2A). Dual-luciferase reporter assays confirmed this hypothesis (Figure 2B). miR-136 significantly reduced the luciferase activity of reporters containing the HSD17B12 3′-UTR (p < 0.05), whereas no significant difference was observed between the mutant and blank vectors. Furthermore, qPCR results (Figure 2C) showed that after the overexpression of miR-136, HSD17B12 mRNA expression was significantly downregulated (p < 0.05). After interference with miR-136, HSD17B12 mRNA expression significantly increased (p < 0.01). Western blotting results (Figure 2D,E) showed that HSD17B12 protein expression significantly decreased (p < 0.01) after miR-136 overexpression. After interfering with miR-136, HSD17B12 protein expression was significantly upregulated (p < 0.05). These results indicate that the miR-136 seed region can bind to the 3′-UTR of HSD17B12 and that miR-136 negatively regulates HSD17B12 expression.
miR-136 Promotes the Proliferation of Ovine SVFs
To understand the function of miR-136 in the proliferation of ovine SVFs, SVFs transfected with miR-136 mimics or miR-136 inhibitors were collected two days later. The mRNA expression of the proliferation markers cyclin B, cyclin D, cyclin E, and PCNA was determined using qPCR (Figure 3A). The results showed that the mRNA expression of cyclin D and cyclin E was significantly upregulated compared with that in cells transfected with the miR-136 mimic NC (p < 0.05), and the mRNA expression of cyclin B and PCNA was significantly upregulated (p < 0.01). In contrast, after transfection with miR-136 inhibitors, the mRNA expression of cyclin B, cyclin E, and PCNA was significantly downregulated (p < 0.05), and cyclin D mRNA expression was significantly downregulated (p < 0.01). A cell counting kit 8 (CCK-8) was used to determine the activity of cells at different proliferation stages (Figure 3B,C). The results showed that after ovine SVFs transfected with miR-136 mimics had proliferated for 24 h, the cell proliferation rate was significantly higher than that of cells transfected with the miR-136 mimic NC (p < 0.05); the degree of increase reached a highly significant level at 60 h (p < 0.01). After transfection with miR-136 inhibitors, the proliferation rate of the cells decreased and the reduction reached a highly significant level after 36 h (p < 0.01). These results suggested that miR-136 enhances the proliferation of ovine SVFs.
miR-136 Inhibits the Adipogenic Differentiation of Ovine SVFs
We investigated the effect of miR-136 on the adipogenic differentiation of ovine SVFs. FABP4 mRNA expression significantly decreased (p < 0.05), and the mRNA expression of PPARγ, C/EBPα, and adiponectin significantly decreased (p < 0.01), after transfection of miR-136 mimics (Figure 4A). In contrast, the mRNA expression of C/EBPα and PPARγ was significantly upregulated (p < 0.05), and adiponectin mRNA expression was significantly upregulated (p < 0.01), after transfection of miR-136 inhibitors; however, the difference in FABP4 mRNA expression between groups was not significant. Cells transfected with miR-136 mimics accumulated fewer lipid droplets, whereas those transfected with miR-136 inhibitors accumulated more (Figure 4B). The triglyceride determination (Figure 4C) showed that the triglyceride content significantly decreased after transfection with the miR-136 mimic (p < 0.01) and significantly increased after transfection with the miR-136 inhibitor (p < 0.05). These results implied that miR-136 inhibits the adipogenic differentiation of ovine SVFs.
HSD17B12 Suppresses the Proliferation of Ovine SVFs
To investigate the effect of HSD17B12 on the proliferation of ovine SVFs, we either overexpressed or knocked down HSD17B12 in ovine SVFs using packaged lentiviruses. Overexpressing and interfering with HSD17B12 significantly increased and decreased, respectively, its expression in ovine SVF cells at both the mRNA (Figure 5A) and protein (Figure 5B,C) levels. Moreover, sh-HSD17B12-2 was more effective than sh-HSD17B12-1; therefore, sh-HSD17B12-2 was used for subsequent experiments. Cells cultured for two days were collected, and the mRNA expression of the proliferation markers cyclin B, cyclin D, cyclin E, and PCNA was determined using qPCR (Figure 5D). The results showed that overexpression of HSD17B12 highly downregulated cyclin B mRNA expression (p < 0.01), whereas the mRNA expression levels of PCNA, cyclin D, and cyclin E were significantly downregulated (p < 0.05). In contrast, the expression of cyclin B (p < 0.01), cyclin D (p < 0.01), and PCNA (p < 0.001) mRNA was significantly upregulated, and cyclin E mRNA expression was significantly upregulated (p < 0.05), after the knockdown of HSD17B12. The CCK-8 results showed that the proliferation rate of ovine SVFs overexpressing HSD17B12 was significantly lower than that of the pHB-NC group (p < 0.05) after 24 h of proliferation (Figure 5E), and this reduction reached a highly significant level (p < 0.01) at the subsequent four time points. After the inhibition of HSD17B12 (Figure 5F), the proliferation rate of the cells was higher than that of the shRNA-NC group and reached a highly significant level (p < 0.01) at 24 h. These results indicated that HSD17B12 suppresses the proliferation of ovine SVFs.
HSD17B12 Facilitates the Adipogenic Differentiation of Ovine SVFs
We also explored the function of HSD17B12 during the adipogenic differentiation of ovine SVFs. After HSD17B12 overexpression, PPARγ mRNA expression was significantly upregulated (p < 0.01), and the mRNA expression of C/EBPα, adiponectin, and FABP4 was significantly upregulated (p < 0.001) (Figure 6A). After the knockdown of HSD17B12, the mRNA expression of PPARγ, FABP4, and adiponectin was significantly downregulated (p < 0.01), and C/EBPα mRNA expression was remarkably downregulated (p < 0.05). The ORO staining results (Figure 6B) showed that the accumulation of lipid droplets in the pHB-HSD17B12 group was significantly higher than that in the pHB-NC group and that the shRNA-HSD17B12-2 group accumulated fewer lipid droplets than the shRNA-NC group. In addition, the results of the triglyceride determination (Figure 6C) showed that the triglyceride content significantly increased after HSD17B12 overexpression (p < 0.05) and significantly decreased after interference with HSD17B12 (p < 0.05). Collectively, these results demonstrate that HSD17B12 promotes adipogenic differentiation and lipid accumulation in ovine SVFs.

Discussion
Adipose tissue includes adipocytes and other types of cells called SVF cells [18]. In a previous study, we isolated SVF cells from ovine back adipose tissue through collagenase digestion, and the cultured SVFs were spindle-shaped or triangular [19], which is consistent with our isolated SVFs. A large number of lipid droplets were observed in adipogenic-induced differentiated SVFs stained with ORO [19]. We used this method to identify the adipogenic differentiation ability of ovine SVFs, and the results showed that the isolated SVFs produce large amounts of lipid droplets. Therefore, the SVFs isolated in this study can be used to study molecular functions associated with adipogenesis in vitro.
miRNAs play important roles in adipogenesis, and miR-136 has been identified as a potential regulator of adipogenesis [10]. miRNAs can bind to the mRNAs of their target genes in a partially or fully complementary manner and promote their degradation or inhibit their translation at the post-transcriptional level [2]. Indeed, our study revealed that miR-136 targets HSD17B12 and negatively regulates HSD17B12 mRNA and protein expression. We therefore speculated that miR-136 likely affects adipogenesis by regulating HSD17B12 expression in sheep.
Adipogenesis involves two important biological processes: the proliferation and differentiation of adipocytes [20]. Many studies have demonstrated that a given miRNA can have opposing effects on adipocyte proliferation and differentiation; in other words, a miRNA that inhibits cell proliferation may promote cell differentiation. For example, miR-146b inhibits the proliferation and promotes the differentiation of porcine intramuscular preadipocytes [21], whereas miR-125a-5p promotes the proliferation of 3T3-L1 adipocytes and inhibits their differentiation [22]. Other miRNAs act in the same direction on both processes; for example, miR-146a-5p targets SMAD family member 4 and tumor necrosis factor receptor-associated factor 6, inhibiting both the proliferation and the differentiation of porcine intramuscular preadipocytes [23]. In the present study, miR-136 promoted SVF proliferation and inhibited adipogenic differentiation and lipid accumulation. However, whether miR-136 modulates the proliferation and adipogenic differentiation of ovine SVFs by regulating the expression of HSD17B12 remained unclear.

HSD17B12 is a multifunctional enzyme highly expressed in the brown and white adipose tissues of mice, and upregulation of HSD17B12 induces fatty acid elongation [24]. Additionally, interfering with HSD17B12 inhibited the growth of breast cancer cells, whereas supplementation with arachidonic acid completely restored growth [25]. These studies suggest that HSD17B12 is directly or indirectly involved in fat metabolism. Our study showed that HSD17B12 inhibited the proliferation and promoted the adipogenic differentiation of ovine SVFs. Consistent with this, overexpression of HSD17B12 in bovine mammary epithelial cells inhibits cell proliferation and induces apoptosis [26]. Considering the targeting relationship between HSD17B12 and miR-136 described above, we believe that the influence of HSD17B12 on ovine SVF development is regulated by miR-136. Genes are regulated by multiple miRNAs; in bovine mammary epithelial cells, HSD17B12 mRNA and protein expression significantly decreased upon overexpression of miR-152 [26]. miRNAs can also target multiple genes [27]; for example, miR-136-3p inhibits the occurrence of gliomas by targeting KLF7 in vivo [28]. Additional target genes of miR-136, and other miRNAs regulating HSD17B12, remain to be identified in further studies, and the functions of miR-136 and HSD17B12 also need to be examined in vivo.

In summary, miR-136 inhibits HSD17B12 expression by binding to its 3′-UTR, thereby promoting proliferation and negatively regulating adipogenic differentiation of ovine SVFs. We elucidated the negative regulatory effect of HSD17B12 on proliferation and its positive regulatory effect on adipogenic differentiation in ovine SVFs. This study provides a scientific basis for further understanding the regulatory mechanism of miRNAs in the fat metabolism of sheep.

Ethics Statement All animal procedures were approved by the Animal Care and Ethics Committee of Shanxi Agricultural University, China (No. SXAU-EAW-2022S.UV.010009).
Isolation and Culture of Ovine SVFs Healthy 3-month-old Guangling large-tailed sheep were sacrificed, and their tails were sterilized with 75% ethanol. A small sterile piece of tail fat tissue was rinsed several times in 75% ethanol and placed in phosphate-buffered saline (PBS; Solarbio, Beijing, China) containing 1% penicillin-streptomycin. After mincing and digesting the tail fat tissue with 2 mg/mL collagenase type II (Solarbio, Beijing, China), the suspension was filtered through 75 and 37.5 µm nylon meshes and inoculated onto a culture dish. The dish was shaken gently to distribute the cells evenly and then placed in an incubator. After culturing for 6 h, we checked whether the cells had adhered to the dish, replaced the medium with fresh 89% low-glucose Dulbecco's modified Eagle's medium (Biological Industries, Kibbutz Beit Haemek, Israel) containing 10% fetal bovine serum (Biological Industries) and 1% penicillin-streptomycin, and cultured the cells for an additional 48 h. The resulting adherent cells were the cultured SVFs.

Adipogenic Induction and ORO Staining When the SVFs were evenly distributed in the culture dish and reached approximately 85% confluence, the growth medium was replaced with an induction medium containing 10 mM rosiglitazone (Cayman Chemical, Ann Arbor, MI, USA), 1.4 mg/mL 3-isobutyl-1-methylxanthine (Solarbio, Beijing, China), 1 mg/mL dexamethasone (Solarbio, Beijing, China), and 3 mg/mL bovine insulin (Solarbio, Beijing, China). Induction was maintained for 10 days of differentiation, and the growth state of the SVFs and the generation of lipid droplets were continuously monitored. When a large number of lipid droplets appeared in the cells, the induction of differentiation was stopped, and ORO staining was used to visualize the lipid droplet distribution. To stain the lipids, cells were washed three times with cold PBS and fixed in 4% paraformaldehyde overnight at 4 °C. The cells were then incubated with the ORO working solution for 30 min. After repeated washing with double-distilled water, images were obtained with a microscope (Leica, Wetzlar, Germany).

Target Gene Prediction and Luciferase Reporter Assays The binding between miR-136 and HSD17B12 was predicted using the online tools TargetScan, miRDB, and miRBase. AnnHyb 4.946 software was used to design two specific amplification primers for the sheep HSD17B12 3′-UTR with appropriate restriction sites at their ends; the primers were synthesized by Thermo Fisher Scientific (Waltham, MA, USA). Tissue RNA was extracted and reverse-transcribed to obtain cDNA, which was used as a template for PCR amplification of the sheep HSD17B12 3′-UTR. The pmirGLO vector was digested with the restriction enzymes Xho I and Sal I. The cloned and purified target fragments were ligated into the pmirGLO linear vector to construct the HSD17B12 3′-UTR wild-type (HSD17B12 3′-UTR-wt) and mutant (HSD17B12 3′-UTR-mut) vectors using the ClonExpress Ultra One Step Cloning kit (C115-02, Vazyme, Jiangsu, China). When 293T cells reached 70% confluence and were evenly distributed, the recombinant plasmids were co-transfected with the miR-136 mimics or miR-136 mimic NC. Luciferase activity was detected at 48 h using a Dual-Luciferase Reporter Assay System kit (Promega, Shanghai, China).
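The section above does not spell out how the dual-luciferase readings are processed; standard practice is to normalize the firefly signal to the Renilla signal in each well and compare the wild-type and mutant reporters across mimic treatments. The short Python sketch below illustrates only that standard normalization; all readings and group labels are hypothetical, not data from this study.

```python
import numpy as np

def relative_luciferase(firefly, renilla):
    """Normalize firefly to Renilla signal per well (standard dual-luciferase practice)."""
    return np.asarray(firefly) / np.asarray(renilla)

# Hypothetical triplicate readings for the four co-transfection groups
groups = {
    "wt + mimic NC":  relative_luciferase([8.1, 7.9, 8.3], [1.00, 0.98, 1.02]),
    "wt + miR-136":   relative_luciferase([3.9, 4.2, 4.0], [1.01, 0.97, 1.00]),
    "mut + mimic NC": relative_luciferase([8.0, 8.2, 7.8], [0.99, 1.00, 1.03]),
    "mut + miR-136":  relative_luciferase([7.7, 8.1, 8.0], [1.00, 1.02, 0.98]),
}
for name, vals in groups.items():
    print(f"{name}: {vals.mean():.2f} +/- {vals.std(ddof=1):.2f}")
# Suppression of the wt (but not mut) reporter by the mimic indicates direct binding.
```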
Transfection of miR-136 Mimics and Inhibitors into Ovine Preadipocytes Ovine SVFs were evenly seeded into 6-well plates. When the growth density reached approximately 75%, Lipofectamine 3000 (Thermo Fisher Scientific, Waltham, MA, USA) was used to transfect the miR-136 mimic, miR-136 mimic NC, miR-136 inhibitor, and miR-136 inhibitor NC. After 6 h of transfection, the medium was replaced with fresh medium to continue the culture, and cells were collected after 48 h for subsequent experiments.

Lentiviral Infection Based on the predicted sequence of sheep HSD17B12 mRNA (XM_004016421.4) published in GenBank, specific primers for the CDS region of HSD17B12 were designed using AnnHyb 4.946 software (Informer Technologies) (Table 1). Primer sequences were synthesized by Thermo Fisher Scientific. The CDS of the ovine HSD17B12 gene was cloned using sheep cDNA as a template. The pHBLV-CMVIE-ZsGreen-T2A-puro vector was digested to form a linear fragment, which was used to construct the recombinant plasmid: according to the seamless cloning kit instructions, the recovered and purified HSD17B12 target fragment was ligated into the digested pHBLV-CMVIE-ZsGreen-T2A-puro linear vector. Using the BLOCK-iT RNAi Designer online software, two pairs of shRNA interference sequences against HSD17B12 were designed and synthesized by Thermo Fisher; the primer sequences are shown in Table 2. The annealed HSD17B12-shRNA was ligated into a linearized pHBLV-U6-ZsGreen-Puro vector.

293T cells were seeded in a culture dish. The recombinant plasmid and the two packaging plasmids, pMD2.G and psPAX2 (purchased from Hanbio, Shanghai, China), were transfected into 293T cells at 70% confluence. The cell culture medium was collected 48 and 72 h after transfection and filtered through a disposable 0.45 µm filter. The prepared virus solution was collected and stored at 4 °C until use. Ovine SVFs were cultured in six-well plates, and growth was observed to ensure that the cells in each well adhered evenly. The virus solution was added when the cell density reached 60%. After 48 h of infection, the complete culture medium was replaced, and the cells were examined for green fluorescence, whose presence indicated successful lentiviral infection. Total RNA was extracted from cells using RNAiso Plus (Takara, Kusatsu, Japan). The M5 miRNA qPCR Assay kit (MF307-01; Mei5bio, Beijing, China) was used to determine the expression of miR-136 after transfection, and TB Green Premix Ex Taq II Master Mix (Takara, Kusatsu, Japan) was used to determine the expression of proliferation and differentiation marker genes. β-actin was used as an internal reference for coding genes, and U6 was used as an internal reference for miR-136 levels. All primer sequences used for qRT-PCR analysis are listed in Table 3.

Cell Count Determination When the ovine SVFs reached a density of 50%, cells were transfected with miR-136. Proliferation was assayed at seven time points from 0 to 72 h (0, 12, 24, 36, 48, 60, and 72 h). The CCK-8 solution was diluted 1:10 in complete medium, added to the cells, and incubated for 2.5 h, after which the absorbance was measured at 450 nm in the dark.
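The qRT-PCR readouts described above (β-actin as reference for coding genes, U6 for miR-136) are typically converted to fold changes with the 2^-ΔΔCt method; the paper does not state its calculation explicitly, so the sketch below, with hypothetical Ct values, is only an illustration of that standard computation.

```python
import numpy as np

def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method: target normalized to a
    reference gene (e.g. beta-actin for mRNAs, U6 for miR-136), then to the control group."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_ref)                 # normalize to reference
    dd_ct = d_ct - (np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl))   # normalize to control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: HSD17B12 vs beta-actin in sh-HSD17B12-2 vs shRNA-NC cells
fold = rel_expression(ct_target=[26.8, 27.0, 26.9], ct_ref=[17.1, 17.0, 17.2],
                      ct_target_ctrl=[24.9, 25.1, 25.0], ct_ref_ctrl=[17.0, 17.1, 16.9])
print(fold)  # values well below 1 would indicate successful knockdown
```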
Figure 1. Ovine SVFs cultured in vitro. (A) Ovine primary SVFs cultured for 6 days. (B) ORO-stained SVFs after the induction of differentiation for 10 days. Blue indicates the background, while brown indicates lipid droplets. Scale bars: 100 µm.

Table 1. Cloning primers for the HSD17B12 CDS in sheep.

Table 3. qPCR primer names and sequences of marker genes.
6,823.6
2023-10-01T00:00:00.000
[ "Biology", "Medicine" ]
Receptor-independent Metabolic Effects of Thiazolidinediones in Astrocytes Thiazolidinedione (TZD) agonists of the peroxisome proliferator-activated receptor gamma (PPARγ) exert metabolic effects in glial cells. In primary astrocytes, TZDs are cytoprotective and have anti-inflammatory actions; in contrast, in glioma cells TZDs are cytotoxic. Although PPARγ is considered their primary target, TZDs including pioglitazone and troglitazone also bind to a mitochondrial protein, mitoNEET; whether their metabolic effects are mediated by activation of PPARγ or of mitoNEET is not known. We generated PPARγ null astrocytes by crossing a PPARγ-floxed mouse with a transgenic line expressing CRE recombinase under control of the GFAP promoter. PPARγ-deficient astrocytes showed reduced lactate production under basal conditions and in response to pioglitazone; however, at later times similar levels of lactate were produced. In the presence of troglitazone, lactate production was similar in PPARγ null cells and wildtype astrocytes. In astrocytes in which mitoNEET expression was reduced using siRNA, basal lactate production was lower than in control cells; however, the cells increased lactate production in response to TZDs. When mitoNEET was decreased in the PPARγ null astrocytes, responses to TZDs were reduced compared to non-infected cells. These results indicate that the metabolic effects of TZDs are not exclusively mediated via PPARγ, but involve binding to mitoNEET. Real-time PCR revealed significantly greater mitoNEET mRNA in glioma cells than in astrocytes. Differences in mitoNEET expression or activity could therefore contribute to the differential effects of TZDs on astrocyte versus glioma cells.

INTRODUCTION Thiazolidinediones (TZDs) are synthetic compounds used as oral anti-diabetic drugs. TZDs, which include troglitazone, pioglitazone, and rosiglitazone, are agonists of peroxisome proliferator-activated receptor gamma (PPARγ) of the PPAR family. PPARγ is the best characterized of the three major isoforms (α, β/δ, and γ), in part due to its therapeutic potential for treatment of diabetes [1] and related consequences such as metabolic syndrome [2]. The glucose-lowering effect of TZDs has generally been assumed to be due to activation of PPARγ, which is known to increase transcription of insulin-sensitive genes [3,4]. In addition to effects on energy and fuel metabolism, several studies report TZD involvement in suppression of cell proliferation, induction of cytotoxicity, and perturbation of mitochondrial function. In these studies, the rapid occurrence of agonist-induced effects, the lack of correlation between the effects and the affinity for PPARγ, and the inability of antagonists to block the effects suggest PPARγ-independent action by TZDs.
In primary cultures of astrocytes, we reported that TZDs caused a rapid increase in glucose consumption and lactate production, associated with a rapid decrease in mitochondrial membrane potential followed by a subsequent hyperpolarization [5]. Similarly, in isolated rat liver, infusion of a TZD increased lactate production in less than 10 min [6]. Rapid effects on mitochondria were reported with ciglitazone, which increased ROS production [7] in astrocytes, and similar effects were observed using 10 to 20 µM pioglitazone or troglitazone in mouse astrocytes and astrocytoma cells [8]. These findings suggest that TZDs may exert direct and rapid effects on mitochondrial respiration, leading to changes in glucose metabolism and fuel substrate specificity. Differential effects of TZDs on mitochondrial respiration may account for the selective ability of TZDs to induce toxicity in transformed glial cells.

A possible site of TZD action on mitochondria may be the recently described mitochondrial protein "mitoNEET" [9]. mitoNEET was identified by saturable binding of labeled pioglitazone to crude mitochondrial membranes from bovine brain and several other tissues. The binding of pioglitazone to this protein was specific, with half-maximal binding occurring between 0.1 and 1 µM. After cross-linking with photoaffinity-labeled pioglitazone, a 17 kDa protein was determined to be the binding site. Subsequent purification and proteomic analysis revealed a novel protein containing the sequence Asn-Glu-Glu-Thr ("NEET"). mitoNEET was also found to be associated with several other mitochondrial proteins, including components of the pyruvate dehydrogenase complex, suggesting a means by which TZD binding to mitoNEET could block pyruvate-driven respiration.

In this study we investigated the involvement of mitoNEET in mediating the PPARγ-independent metabolic effects of TZDs. Our data show that knockdown of mitoNEET in astrocytes by siRNA leads to a decrease in basal lactate production; in contrast, the effects of pioglitazone on lactate production are comparable in PPARγ-expressing versus null cells.

Cells Primary astrocytes were prepared from cerebral cortices of postnatal day 1 C57BL/6 or PPARγ null mice as described previously [10]. Mouse GL261 glioma cells were grown as previously described [8]. The cells were grown in DMEM containing 25 mM glucose, 10% FCS and antibiotics (penicillin and streptomycin), with the medium changed every three days for two weeks before use in experiments.

Infection of Cells with siRNA and Drug Treatment Adenovirus particles containing siRNA for mitoNEET were designed and produced by Galapagos Genomics (Mechelen, Belgium). The RNA duplex targeted the 3′ end of the mitoNEET mRNA (bases 415-435, 5'-AAACCTAATGGACAGTTGCGA-3'), which spans the stop codon at base 422. The particles were added to DMEM containing 1% FBS and penicillin/streptomycin.
500 µl/well of this medium was added to astrocytes in 6-well plates to give a final concentration of 10 MOI/cell. Control plates were treated with the same medium without adenovirus. Plates were left in the incubator for 2 hrs, then the medium was changed to DMEM containing 10% FBS and penicillin/streptomycin. Total cellular protein levels were not significantly altered by infection. After 72 hours of incubation the cells were washed twice with low glucose (5.6 mM) DMEM containing 1% FBS. Cells were then treated with pioglitazone or troglitazone (20 µM in DMSO) or DMSO (0.1%) in the same medium and incubated for 6 hrs. Samples were taken from the culture medium at the 0 time point and then every 2 hrs for assessment of lactate levels. After 6 hrs the medium was removed and 1 ml of Trizol reagent (Invitrogen, Carlsbad, CA) per well was added to the cells. Trizol samples were frozen and kept at -80 °C.

Lactate Assay Lactate levels in samples taken from the culture medium were determined enzymatically using assay kits following the manufacturer's protocol with some modifications. Briefly, 5 µl of sample was incubated with 95 µl of reagent (Trinity Biotech, Bray, Ireland) for 20 min at room temperature, and the absorbance was then measured at 540 nm. Lactate amounts were calculated by interpolation from a standard curve of L-lactate in H2O.

Determination of mitoNEET Knockdown Total RNA was isolated from Trizol samples by chloroform extraction and ethanol precipitation. 1 µg of the RNA was converted into cDNA using random hexamer primers, and mRNA levels were determined by quantitative real-time PCR. The reactions were carried out in the presence of SYBR Green (diluted 1:10,000 from stock solution; Molecular Probes, Eugene, OR) in a Corbett Rotor-Gene real-time PCR unit (Corbett Research, Sydney, Australia). The primers used for mouse mitoNEET (Cisd1, accession number NM_134007) were 5'-AAC CTA ATG GAC AGT TGC GAG GCT-3' forward and 5'-AAG GCC GAT GCC ATG GAT ATG AGA-3' reverse, which gave a 158 bp product. Relative mRNA concentrations were calculated from the takeoff point (Ct) of the reactions using the manufacturer's software.

Quantitative PCR Real-time PCR was used to measure mitoNEET mRNA levels using forward primer 5'-CAA AGC TAT GGT GAA TCT TCAG and reverse primer 5'-GTG CCA TTC TAC GTA AAT CAG, which generates a 158 bp product. Values were normalized to levels measured for beta-actin mRNA in the same samples using forward primer 5'-CCT GAA GTA CCC CAT TGA ACA and reverse primer 5'-CAC ACG CAG CTC ATT GTA GAA. PCR conditions were 35 cycles of denaturation at 94 °C for 10 s, annealing at 64 °C for 15 s, and extension at 72 °C for 20 s on a Corbett Rotor-Gene real-time PCR unit (Corbett, Australia). PCR was done using Taq DNA polymerase (Invitrogen) and contained SYBR Green (10,000x concentrate, diluted 1:10,000; Molecular Probes, Eugene, OR). Relative mRNA concentrations were calculated from the takeoff point of the reactions using the manufacturer's software.

Data Analysis Time-dependent changes in lactate production between wildtype and PPARγ null cells were compared by 2-way repeated-measures ANOVA and considered significantly different if the P value for the time × cell type interaction was < 0.05.
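The lactate assay above states only that concentrations were interpolated from an L-lactate standard curve. A minimal sketch of that interpolation, assuming a linear A540 response, is shown below; the standard concentrations and absorbance readings are hypothetical.

```python
import numpy as np

# Hypothetical L-lactate standards (mM) and their A540 readings
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # mM
std_a540 = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

# Linear fit of concentration as a function of absorbance
slope, intercept = np.polyfit(std_a540, std_conc, 1)

def lactate_mM(a540):
    """Interpolate lactate concentration (mM) from a 540 nm absorbance reading."""
    return slope * np.asarray(a540) + intercept

print(lactate_mM([0.15, 0.33]))  # e.g. media samples taken at 2 and 4 hr
```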
RESULTS In order to determine whether the metabolic effects of pioglitazone on astrocyte metabolism are mediated through PPARγ, we made use of PPARγ conditional astrocyte knockout mice [11]. In wildtype (WT) cells, incubation with pioglitazone or troglitazone induced significant time-dependent increases in lactate production, as previously shown [5]. In PPARγ null cells, the baseline production of lactate was slightly, but significantly, decreased compared to that in the WT cells (Fig. 1). Between 0 and 4 hr the rate of production was reduced 28% compared to WT cells; however, at 6 hr levels were similar. In the presence of pioglitazone, lactate production was also increased in PPARγ null astrocytes, although the absolute values were significantly lower than those in the WT cells between 0 and 4 hr; again, at 6 hr levels were comparable. Because of the lower production in vehicle-treated cells, the magnitude of the increase due to pioglitazone versus control cells was similar in PPARγ null and WT cells (approximately 80% increase after 2 hr, and 40% increase at 4 and 6 hr). In contrast, lactate production was comparably increased in the two cell types in the presence of troglitazone, which, although having a lower affinity than pioglitazone for PPARγ, caused a greater increase in lactate production. These results suggest that the metabolic effects of pioglitazone show a partial dependence on the presence of PPARγ, but that this dependence is lost after longer times, or in the presence of a more potent metabolic inducer such as troglitazone.

Since pioglitazone can also bind to the mitoNEET protein, we investigated a possible involvement of this protein in mediating the metabolic effects of TZDs (Fig. 2). Infection of primary mouse astrocytes with 10 MOI adenovirus containing siRNA directed against mitoNEET reduced mitoNEET mRNA levels approximately 70% as compared to mock-infected cells (Fig. 2A). The reduction in mitoNEET expression was associated with a slight but significant decrease in lactate production versus the non-infected cells (Fig. 2B); at 4 and 6 hr, levels were reduced about 30% compared to non-infected cells. However, treatment with pioglitazone significantly increased lactate production in the control cells as well as the siRNA-treated cells, as did incubation with troglitazone. These results suggest that TZDs can continue to influence astrocyte metabolism either via effects on residual mitoNEET, or through interactions with PPARγ.

To determine the consequences of combined mitoNEET and PPARγ depletion, we compared the effects of infecting PPARγ null versus WT astrocytes with adenovirus containing siRNA (Fig. 3). In this study, basal production of lactate was similar in the mitoNEET-depleted WT and PPARγ null cells, although a slight increase was seen after 6 hr in null cells. In the presence of pioglitazone, lactate production increased over time in both cell types; however, there was a significantly lower increase observed in the null cells compared to the WT cells (lactate levels increased at 6 hr over 300% in the WT cells, but only about 40% in the PPARγ null cells). Likewise, the effect of troglitazone on lactate production was attenuated by mitoNEET depletion in the PPARγ null cells, but to a smaller and non-significant extent compared to that seen with pioglitazone.

Using quantitative real-time PCR we measured relative mitoNEET mRNA levels in primary mouse astrocytes, and compared them to levels in mouse GL261 glioma cells (Fig. 4). Interestingly, mitoNEET levels were significantly higher (about 8-fold) in the glioma cells, possibly due to the higher metabolic state of these cells.

Fig. (1). Effect of TZDs on lactate production in PPARγ null astrocytes. Astrocytes from PPARγ null (open symbols) and wild type (filled symbols) mice were incubated with 20 µM pioglitazone (circles) or 20 µM troglitazone (triangles) in low glucose (5.6 mM) DMEM containing 1% FBS. Control (squares) cultures were incubated with medium plus the equivalent amount of vehicle (DMSO). Lactate levels in the culture media were determined at the indicated time points. The data is the mean ± s.e.m. of n=3 measurements for each point. Two-way ANOVA showed that lactate production was significantly different between wild type and PPARγ null cells in the presence of vehicle (P = 0.015) or pioglitazone (P = 0.032), but was not different in the presence of troglitazone.

Fig. (2). Effect of mitoNEET depletion on lactate production in astrocytes. Mouse astrocytes were incubated with adenovirus particles (10 MOI/cell) containing siRNA for mitoNEET (open symbols), in DMEM with 1% FBS, or with medium alone (filled symbols) for two hours. The medium was then changed to DMEM with 10% FBS. After 72 hrs of incubation the cultures were treated with 20 µM pioglitazone (circles) or troglitazone (triangles) in low glucose (5.6 mM) DMEM containing 1% FBS. Control (squares) cultures were incubated with medium plus the equivalent amount of vehicle. (A) After 6 hr incubation, mitoNEET mRNA levels were determined by QPCR, normalized against β-tubulin mRNA levels. (B) Lactate levels were determined in the media at the indicated times. The data is the mean ± s.e.m. of n=3 replicates. The production of lactate in the presence of vehicle was significantly different between control cells and mitoNEET-depleted cells (2-way ANOVA, P = 0.0035).

Fig. (4). MitoNEET mRNA levels in astrocytes versus glioma cells. Total mRNA was isolated from primary mouse astrocytes and from mouse GL261 glioma cells, converted to cDNA, and relative levels of mitoNEET mRNA determined by quantitative real-time PCR. (A) The data is the average of 3 measurements for each sample normalized to values for beta-actin mRNAs measured in the same samples. The gel shown in (B) confirms that the PCR products generated from astrocytes and glioma cells are the correct size.

DISCUSSION In this report we have shown that the ability of two TZDs, pioglitazone and troglitazone, to increase lactate production in astrocytes is not fully dependent on PPARγ. Both basal lactate production and the increased production due to the presence of pioglitazone were significantly different between control astrocytes and astrocytes in which PPARγ was depleted; however, by 6 hr the lactate production was comparable in the two cell types. Furthermore, in the presence of troglitazone, virtually identical rates of lactate production were observed. In contrast, depletion of an alternate target of TZDs, the mitoNEET protein present in mitochondria [9], led to a small decrease in the basal production of lactate, although the TZD-dependent increases were similar. Together these findings suggest that both PPARγ and mitoNEET contribute to the metabolic effects of TZDs in astrocytes; this is supported by results showing that depletion of mitoNEET in the PPARγ null astrocytes led to a statistically significant reduction in pioglitazone-dependent lactate production (Fig. 3), and a lesser decrease in the presence of troglitazone.
Metabolic effects of TZDs have been reported for several cell types [12-17], and in many cases these effects occurred relatively rapidly, or at TZD doses much higher than their binding affinity for PPARγ; in some studies, non-TZD PPARγ ligands showed little or no metabolic effects. These results raised the possibility that the metabolic effects occur in a PPARγ-independent manner, consistent with the current findings in PPARγ null astrocytes.

Alternate sites of action for TZDs have been suggested [18,19], and recently selective binding to the mitochondrial protein mitoNEET was demonstrated [9]. In biochemical studies to identify the binding site for pioglitazone, photoaffinity-labeled pioglitazone specifically cross-linked only a single 17 kDa protein, which is expressed in mitochondria, while binding studies using high-specific-activity radiolabeled pioglitazone showed that mitoNEET was the only protein that could be labeled in whole cell lysates. mitoNEET is a member of a small family of proteins containing a novel zinc-finger-like motif that holds iron rather than zinc, including a 2Fe-2S cluster that is released in a redox- and pH-dependent manner [20]. mitoNEET has been crystallized and a 1.5 Å structure determined, which shows that pioglitazone stabilizes the homodimeric form of the protein against release of the 2Fe-2S cluster [21,22]. The exact function(s) of mitoNEET remain to be clarified, but TZDs including pioglitazone induce a conformational change influencing overall mitochondrial redox potential and respiration, and its association with subunits of the PDH complex suggests it may be involved in regulation of pyruvate transport or metabolism. Mitochondria isolated from mitoNEET null mice show reduced oxidative capacity [23]. Our finding that the effect of pioglitazone decreases when mitoNEET levels are decreased in PPARγ null cells suggests an involvement of mitoNEET in mediating this effect.

Fig. (3). Effect of mitoNEET depletion on lactate production in PPARγ null astrocytes. Astrocyte cultures from PPARγ knockout (open symbols) and wild type (filled symbols) mice were incubated with mitoNEET siRNA-containing adenovirus particles (10 MOI/cell); after 72 hr the media was changed and the cells were incubated in low glucose (5.6 mM) DMEM with 1% FCS and 20 µM pioglitazone (circles) or 20 µM troglitazone (triangles). Control (squares) cultures were incubated with medium plus the equivalent amount of vehicle. Lactate levels in the media were determined at the indicated time points. The data is the mean ± s.e.m. of n=3 measurements. Lactate production was significantly different between the two cell types in the presence of pioglitazone (2-way ANOVA, P < 0.0001), but not in the presence of troglitazone (P = 0.11).
3,991.4
2010-12-31T00:00:00.000
[ "Biology" ]
Biomorphic Transformations: A Leap Forward in Getting Nanostructured 3-D Bioceramics Obtaining 3-D inorganic devices with designed chemical composition, complex geometry, hierarchic structure and effective mechanical performance is a major scientific goal, still prevented by insurmountable technological limitations. With particular respect to the biomedical field, there is a lack of solutions ensuring the regeneration of long, load-bearing bone segments such as those of the limbs, due to the still unmet goal of converging, in a unique 3D implant, a bioactive chemical composition, multi-scale cell-conducive porosity and a hierarchically organized architecture capable of bearing and managing complex mechanical loads. An emerging, but still very poorly explored, approach in this respect is given by biomorphic transformation processes, aimed at converting natural structures into functional 3D inorganic constructs with smart mechanical performance. Recent studies highlighted the use of heterogeneous gas-solid reactions as a valuable approach to obtain effective transformation of natural woods into hierarchically structured apatitic bone scaffolds. In this light, the present review illustrates critical aspects related to the application of such heterogeneous reactions in the 3D state, showing the relevance of thorough kinetic control to achieve controlled phase transformations while maintaining the multi-scale architecture and the outstanding mechanical performance of the starting natural structure. These first results encourage further investigation of the biologic structures optimized by nature over the ages, and the development of biomorphic transformations as a radically new approach enabling technological breakthroughs in various research fields and opening still unexplored industrial applications.

INTRODUCTION Despite the great progress in science and technology of the past decades, insurmountable barriers still prevent the solution of crucial clinical needs; a substantial quantum leap in technological development is therefore highly desired. In this respect, we are probably witnessing the dawn of a new era. Whereas the last decades were characterized by the extensive development and use of plastics and other synthetic disposable products, today the scientific community is intensively called to seek new, more virtuous pathways. In particular, scientists are turning their attention to nature, attracted by the unique structures of living beings, characterized by unusual and often contrasting properties exhibited at the same time, such as lightness and resistance, or toughness and resilience. Plants, shells, mammal bones and exoskeletons show outstanding performance in terms of strength, compliance to multi-axial forces and self-repair ability, permitted by their hierarchical architecture organized along multiple scales from the nano- to the macroscopic size (Reznikov et al., 2018; Mishnaevsky and Tsapatsis, 2016). Such unique features were developed along complex, millions-of-years-long evolutionary pathways, but they are often unachievable with current manufacturing technologies (Abdulhameed et al., 2019; Behera et al., 2021).
Therefore, scientists are now intensively looking for radically new approaches permitting the unique and outstanding abilities of natural structures to be copied and translated into new functional devices (Fish, 2020; Baines et al., 2020; Fish et al., 2021; Collier, 2013; Singh et al., 2019; Huang et al., 2019). In this respect, it is important to consider that the relevant functionalities of materials are strongly related to their chemical composition and their organized structure, both factors co-existing in a balanced equilibrium. Many applications, such as energy production, photonics and biology, require functional phases showing specific nuances in atomic composition and crystalline structure (Wongmaneerung et al., 2009; Qian et al., 2013; Tampieri et al., 2011; Limonov and De La Rue, 2016). The maintenance of chemical composition and nanostructure in 3D materials requires a conceptual change in the fabrication approach, aimed at achieving smart performance and surpassing the basic paradigm of materials science, particularly in the case of ceramic technology, so far defined by powder processing, 3D forming and sintering. Recent studies have highlighted alternative approaches based on chemically guided assembly processes, as in the case of geopolymers (Provis and Bernal, 2014), natural materials with pozzolanic activity of great prospect for building applications, or of metastable calcium phosphatic compounds, able to consolidate at body temperature thanks to the activation of dissolution/reprecipitation processes, thus functioning as bone cements (Schumacher and Gelinsky, 2015; Zhang et al., 2014). However, even if these approaches can generate 3D consolidated ceramics maintaining the nanostructure and a reactive chemical composition, they are based on the assembly of building blocks such as powders or nanocrystals (Wegst et al., 2015). Such assembly phenomena are, however, difficult to control and to direct towards constructs exhibiting structural organization and hierarchy, so that, in spite of good strength, the mechanical performance is usually insufficient to bear relevant mechanical loads. Pursuing the development of new 3D inorganic materials with a greater degree of control over chemical composition, structural organization and mechanical performance, recent approaches have identified the possibility of using existing natural structures as models to guide the transformation processes generating biomorphic products (Wegst et al., 2015; Tan and Saltzman, 2004; Xie et al., 2019). A key aspect of such biomorphic transformation processes consists in the application of heterogeneous reactions that begin at the surface of the solid model and then propagate into its 3-D bulk structure by diffusive phenomena until chemical conversion is obtained overall. This approach entails the interplay of multiple physico-chemical parameters relevant during the biomorphic transformation, revolving around a fundamental aspect, the kinetic control of the involved reactions, which is in turn crucial to ensure i) the attainment of the designed chemical composition and ii) the maintenance of the nanostructure as well as the hierarchical architecture, leading to functional and mechanical competence.

NEED OF A LEAP FORWARD IN BIOMATERIALS SCIENCE TO RESPOND TO UNMET CLINICAL NEEDS Biomorphic transformations can be extremely relevant to achieving new effective solutions for regenerative medicine.
In fact, biomaterials play a key role as implants or scaffolds, intended to guide endogenous cells to regrow and regenerate missing or diseased tissues when spontaneous regeneration is prevented. Such a problem is particularly crucial in orthopaedics, where the regeneration of critical size bone defects (i.e., defects that cannot heal spontaneously) requires scaffolds that bridge the defect and are able to promote and sustain the appropriate cascade of biologic phenomena yielding bone regeneration. However, the goal of obtaining such regenerative scaffolds is still largely unmet because of the so-far insuperable obstacles encountered in achieving the relevant ensemble of biomimetic physicochemical, structural and mechanical properties able to appropriately drive and modulate cell fate and metabolism. Indeed, bone tissue metabolism is mainly regulated by its mineral component: a nearly amorphous apatitic phase enriched with various biologically relevant ions in dynamic chemical equilibrium with the physiological environment, thus behaving like a "living inorganic crystal." Seeking to mimic the chemical composition of bone, nano-apatites with such properties can be quite easily obtained by wet synthesis methods, but only in the form of powders (Iafisco et al., 2014; Sprio et al., 2008). Conversely, the attainment of 3D apatitic scaffolds with effective regenerative ability has so far been prevented by the impossibility of synthesizing such a bioactive phase in the form of large consolidated bodies with multi-scale hierarchic structure, due to the need for high-temperature sintering processes that invariably degrade the chemistry, nanostructure and pore architecture of the scaffold, which are, as a whole, responsible for its regenerative ability. The lack of these features limits the appropriate physicochemical and topotactic signalling to cells that promotes and sustains the complex metabolic activity related to deposition and remodelling of newly formed bone. Besides chemical signalling, cell mechanotransduction is also a biologically relevant phenomenon, based on the conversion of mechanical forces into biochemical processes active at the cell level, by which osteoblasts continuously remodel the bone tissue structure to adapt to ever-changing mechanical forces, thus permitting its self-repair in the case of damage of limited extent. Such phenomena are activated by the unique multi-scale hierarchical structure of the bone tissue, which permits effective distribution of mechanical forces from the macro- to the microscale and down to bone cells (Ingber, 1993; Pavalko et al., 2003). This mechanism is particularly crucial when load-bearing bones such as those of the limbs are involved. Today, the need to restore mechanical functionality in such critical bony districts forces the adoption of very invasive and poorly resolutive approaches based on the use of Ilizarov implants or, alternatively, metallic plaques and bank bone pieces that do not help bone regeneration and can, instead, provoke infections and bad clinical outcomes (Kanakaris and Giannoudis, 2007; Longo et al., 2012; Patil and Montgomery, 2006).
Even when using sinter-free methods to consolidate apatitic phases into porous 3D devices, as with recently developed apatitic bone cements, the lack of multi-scale open and interconnected porosity organized in a hierarchical architecture hampers effective vascularization of the whole scaffold and the mimicry of the outstanding mechanical ability of natural bone (Roffi et al., 2017; Rho et al., 1998). Furthermore, in such materials the low-temperature consolidation mechanism, yielding physical entanglement of acicular apatitic particles, does not produce cohesion between the ceramic grains sufficient to ensure adequate biomechanical performance. Seeking a paradigmatic change in bioceramics development, we noticed that natural vegetal structures show extraordinarily complex architectures organized along multi-scale hierarchies, conferring outstanding mechanical performance, lightness and self-repair capacity, similarly to the bone structure. Such vegetal architectures are therefore ideal models that can inspire the design of a new generation of biomorphic bone scaffolds with superior and unprecedented functional properties. The first pioneering attempts to generate 3D inorganic materials through transformation of natural woods into oxides or carbides date from the early 2000s (Li et al., 2006; Rambo and Sieber, 2005; Sieber et al., 2000; Greil, 2001; Esposito et al., 2004). Subsequently, biomorphic inorganic structures for application as bone substitutes were obtained by transformation of natural woods into biocompatible silicon carbide scaffolds, suitable as inert devices to simply fill and repair a bone defect. In such experiments, pyrolysis processes were used to convert the natural wood structure into a carbon template, then subjected to infiltration/reaction with molten silicon at high temperature, thus activating the Si-C reaction forming the silicon carbide (SiC) phase (Parfen'eva et al., 2005; González et al., 2003). Such biomorphic SiC scaffolds showed high mechanical strength, permitting implantation in long bone defects in sheep (Filardo et al., 2020; Filardo et al., 2013). However, despite its biocompatibility, SiC is a bio-inert material, with very limited ability to induce new bone formation and vascularization in the scaffold's pores, and, besides, it cannot be resorbed by metabolic cell activity. Such a bioinert scaffold is destined to remain unmodified within the bone defect, and the physical and mechanical discontinuity existing between the scaffold and the surrounding bone easily yields stress-shielding effects, leading to bone resorption with time and an increasing risk of new fractures (Navarro et al., 2008; Best et al., 2008). In the attempt to obtain bone scaffolds with a bioactive composition, previous studies developed biomorphic transformation processes to achieve calcium carbonate or calcium phosphate constructs. In one case, natural cork wastes were pyrolyzed, then infiltrated with a suspension of calcium salts, and finally sintered to achieve porous calcium carbonate bodies (Scalera et al., 2020). In a different study, a pyrolysed wood was infiltrated with calcium phosphatic sol-gel suspensions, then sintered to achieve biomorphic calcium phosphate scaffolds (Eichenseer et al., 2010). In both cases, the obtained scaffolds showed poor reproduction of the original natural structure and very low mechanical properties. These adverse effects can be ascribed to the use of liquid reactants.
On one hand, ceramic suspensions usually exhibit a viscosity preventing penetration into small, micron-size pores, thus hampering the achievement of a precise replica of the original template structure. On the other hand, in ceramic suspensions, particularly those featuring low viscosity, the concentration of the ceramic powder/granules is often not sufficient to activate the grain coalescence process during sintering, as required to give mechanical properties to the final ceramic body. In the attempt to overcome such limitations and obtain mechanically effective biomorphic scaffolds, suitable to fit critical size bone defects in load-bearing regions, we recently described the biomorphic transformation of rattan wood by using a multi-step process based on gas-solid reactions. The use of gaseous reactants was preferred in order to facilitate the chemical interaction with the solid template, in turn permitting a more accurate control of the reaction kinetics and of the diffusive phenomena yielding phase transformation throughout the whole solid. The choice of rattan wood as a model of bone tissue was related to its structure, where wide channels (~300-500 µm) are hierarchically interconnected with smaller pores, forming a vascular network closely resembling the anisotropic osteon architecture typical of compact bone. Such a structure, inducing anisotropic mechanical properties and high vascularization capability, is particularly relevant in long, load-bearing bone segments. The present review describes chemical aspects inherent to such a biomorphic transformation process. Insight is given into the main strategies adopted in the various steps to modulate the kinetics of the concomitant chemical reactions occurring at the gas-solid interface and to limit grain growth; the latter was relevant to achieving reactive intermediate products and facilitating the attainment of the final biomorphic scaffold. Furthermore, we show how fine tuning of the reaction kinetics permitted modulation of the chemical composition and nanostructure of the final product. Indeed, a decisive impulse towards superior biological and mechanical properties was given by the maintenance of bone-mimicking chemistry, nanostructure and the unique 3-D hierarchical architecture inherited from the original natural wood.

CRITICAL ASPECTS IN 3D GAS-SOLID REACTIONS Heterogeneous gas-solid reactions have been largely investigated, particularly for application in the chemical and petroleum industries, and are fundamental phenomena involved in heterogeneous catalysis processes (Kreider and Lipiński, 2018; Groppi and Tronconi, 2000). However, differently from catalysis, where the solid phase plays the role of enhancer or modulator of the reaction kinetics but remains virtually unchanged during the whole process, here the solid is subjected to chemical changes with continuous alteration of its composition and structure, so that the system is inherently unsteady with time. Generally, the critical phenomena occurring in the course of gas-solid reactions can be summarized as: i) the adsorption of gaseous reactants on the solid surface; ii) the actual chemical reaction between the adsorbed gas and the solid surface; iii) the diffusion of gaseous reactants through the solid product formed at the surface, activating the heterogeneous reaction also in the solid bulk (Vinu, 2017; Xu et al., 2012). A classical kinetic idealization of these competing limits is sketched below.
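The review does not commit to a specific kinetic model; a common textbook idealization of steps ii) and iii) for a spherical solid reacting with a gas is the shrinking-core model, with Levenspiel-style conversion-time relations. The Python sketch below is an illustration of that model, not of the authors' own kinetic analysis.

```python
import numpy as np

# Shrinking-core relations for a spherical solid particle reacting with a gas;
# tau is the time required for complete conversion (X = 1) in each regime.
def t_reaction_control(X, tau):
    """Regime limited by the surface chemical reaction."""
    return tau * (1.0 - (1.0 - X) ** (1.0 / 3.0))

def t_diffusion_control(X, tau):
    """Regime limited by gas diffusion through the growing product layer."""
    return tau * (1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X))

X = np.linspace(0.05, 0.95, 7)  # fractional conversion
print(np.round(t_reaction_control(X, tau=10.0), 2))
print(np.round(t_diffusion_control(X, tau=10.0), 2))
# Late-stage conversion is disproportionately slow under diffusion control,
# which is why retaining intergranular porosity matters in the bulk of a 3D template.
```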
However, such a scenario is further complicated when it comes to transforming solids with a 3D functional architecture rather than particles: in some cases, besides achieving a successful phase transformation, a major purpose is also to retain all the functionally valuable structural and mechanical features of the reacting solid. This aspect is exacerbated by the relatively low reactivity of inorganic solids at room temperature, which forces the adoption of thermal treatments under conditions suitable to make the heterogeneous reaction process effective. In this respect, structural parameters such as porosity, specific surface area and pore size distribution, which can markedly affect the kinetics of the heterogeneous reactions and even their progress, can be subjected to important changes during the transformation process. The existence of all these concomitant effects renders the understanding of the overall 3D gas-solid system and, consequently, the setup of gas-solid reactions yielding effective biomorphic transformation, a decidedly non-routine task requiring originality and deep investigation of all the rate-controlling steps and their inter-relations. The biomorphic transformation of rattan wood into a hierarchically organized scaffold addressing the regeneration of long, load-bearing bone segments was obtained by a multi-step process (Figure 1) based on heterogeneous gas-solid reactions transforming the wood into a sequence of intermediate products, i.e., carbon, calcium carbide, calcium oxide and calcium carbonate, prior to the conclusive hydrothermal treatment transforming the calcium carbonate into the final product: a scaffold made of hydroxyapatite and tricalcium phosphate phases, both characterized by partial substitutions with Mg2+ and Sr2+ ions. In the design of such a process, the relevant aspects to be kept under strict consideration were identified as: i) the operating conditions (i.e., temperature and partial pressure of the reacting gas) that activate the specific gas-solid reaction(s) yielding the nucleation of the new solid phase; ii) the presence of pores in the newly formed solid product favouring the penetration and diffusion of the gaseous reactant into the solid bulk; iii) the volume variation accompanying the formation of the new solid phase. All these aspects are closely inter-related and greatly depend on the reaction kinetics. On one hand, the reaction conditions, basically related to the thermodynamics of the reacting system, should be chosen in order to achieve a reaction rate effective for practical purposes. Generally, from a chemical perspective, the use of higher temperatures, in appropriate ranges, enhances the reactivity and the reaction rate but, conversely, can also induce adverse effects. For instance, temperature-induced grain growth can provoke a reduction of intergranular porosity, thus limiting or even preventing the diffusion of gaseous reactants from the surface to the inner part of the reacting solid. Furthermore, grain growth, yielding a decrease of the specific surface area and of the active sites at the surface, can reduce the driving force for chemical transformation, so that the use of high temperature can induce various rate-limiting effects penalizing the whole transformation process.
Volume variations invariably occur during phase transformations, and these structural changes may represent a major issue affecting not only the integrity and mechanical stability of the solid reactant, but also its 3D architecture, including pore size distribution, hierarchical organization and interconnection. Changes in pore size distribution can help to accommodate deformation of the solid volume at the multi-scale during phase transformation; however, these changes have to be controlled when the pore size distribution itself plays a functional role.

TOWARDS BIOACTIVE, MECHANICALLY COMPETENT 3D NANOSTRUCTURED CERAMIC SCAFFOLDS The understanding and consideration of all the above-described concomitant phenomena helped to design effective processes to transform a vegetal structure into a bioactive scaffold. In particular, starting from the rattan wood, a major requirement for the final success was the maintenance, in all the intermediate products, of the functionally relevant multi-scale structure, adequate physical integrity and a chemical reactivity sufficient to enable the subsequent transformation steps up to the final product. To face critical stages of the multi-step biomorphic transformation process involving gas-solid reactions, we found that the use of minimal energy conditions was key to controlling the kinetics of the different reactions involved in the transformation process and the diffusive phenomena that allowed the reactions to proceed from the surface to the bulk. In turn, both these aspects are strongly inter-related with the compositional and microstructural evolution of the reacting solid, particularly relevant when a high specific surface and the maintenance of interconnected multi-scale porosity facilitate the mass transfer and the progress of the heterogeneous reactions.

Pyrolysis, Carburization and Oxidation of Natural Rattan Wood The multi-step biomorphic transformation process started with the pyrolysis of the wood template, i.e., a thermal treatment carried out in an oxygen-free atmosphere to transform the wood into inorganic carbon bodies suitable as templates guiding the subsequent heterogeneous reactions. The pyrolysis process implied a huge mass loss and a reduction of the original volume, due to the decomposition of the organic components, such as cellulose, hemicellulose and lignin, and the release of water and other gaseous byproducts such as carbon dioxide and carbon monoxide. In order to accommodate such a relevant volume variation and prevent structural and morphological deformations in the final carbon template, the pyrolysis process was carried out at a very low heating/cooling rate (i.e., ~1 °C/min) (Tampieri et al., 2009). After pyrolysis, the first relevant step was the introduction of calcium into the reacting system through the formation of the calcium carbide (CaC2) phase. Preliminary approaches consisted in the exposure of the carbon template to gaseous calcium in an oxygen-free atmosphere, obtained by heating calcium granules above their boiling point (i.e., 1,484 °C) (Tampieri et al., 2009). Although the process was successful, when applied to carbon templates of large dimensions the reaction was less effective, often leaving unreacted carbon in inner regions of the solid. Considering that the subsequent transformation step was the oxidation of CaC2 into calcium oxide (CaO) in air, the thermal oxidation of the residual unreacted C resulted in the formation of void regions, penalizing the mechanical integrity of the 3D product.
This finding shows how the diffusive phenomena permitting the gas-solid interaction in the whole volume of the reacting template become progressively more critical as the template increases in size. To face this impasse, a more recent experiment induced the Ca-C chemical reaction at lower temperatures by decreasing the atmospheric pressure in the reaction chamber, thus obtaining a strong reduction of the boiling point of calcium (a rough estimate of this pressure effect is sketched at the end of this subsection). It was thus possible to conduct the whole carburization reaction under minimal energy conditions, sufficient to activate the nucleation of CaC2 crystals while limiting their growth. This permitted a greatly reduced grain size, from ~100 µm down to ~10 µm, and retained substantial intergranular nano-porosity favouring the diffusion of the reacting calcium gas into the inner part of the C template, thus resulting in the complete conversion of C into CaC2. In this respect, Figures 2A,B show the microstructure of CaC2 obtained by the Ca-C reaction carried out at ~1,500 °C, whereas Figures 2C,D report the microstructure of CaC2 formed at temperatures below 900 °C. Note the nanosized porosity in samples treated at lower temperature (Figure 2C in comparison with Figure 2A) and the much smaller particles obtained at lower temperatures (Figure 2D in comparison with Figure 2B). To better accommodate structural inhomogeneity at the multi-scale, related to changes occurring in the crystal structure when converting C into CaC2, the reaction temperature was reached upon slow heating (i.e., below 10 °C/min). The applied processing conditions permitted biomorphic CaC2 bodies to be obtained retaining an interconnected network of intergranular nanopores, which facilitated the diffusion of oxygen gas during the subsequent transformation step, aimed at the thermal oxidation of the CaC2 body into a biomorphic CaO one (Figure 3). Further retained up to the final scaffold, such a porous nanostructure also has invaluable utility in favouring the exchange of nutrients and autogenous growth factors when implanted in vivo, and in improving the bio-resorption ability thanks to the higher specific surface area.
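The review reports only that reduced chamber pressure lowers the calcium boiling point, without numbers; a rough Clausius-Clapeyron estimate, sketched below in Python, is consistent with vapour generation in the sub-900-950 °C range at millibar-level pressures. The enthalpy of vaporization used (~155 kJ/mol) is a literature approximation, not a value from the review.

```python
import numpy as np

# Clausius-Clapeyron estimate of how reduced chamber pressure lowers the boiling
# point of calcium. Tb = 1484 C at 1 atm is given in the text; H_VAP is assumed.
R = 8.314           # J/(mol K)
H_VAP = 155e3       # J/mol, approximate literature value for Ca
T1 = 1484 + 273.15  # K, boiling point at P1 = 1 atm

def boiling_point(p_atm):
    """Estimated boiling temperature (deg C) of Ca at pressure p_atm (atm)."""
    inv_T2 = 1.0 / T1 - (R / H_VAP) * np.log(p_atm)
    return 1.0 / inv_T2 - 273.15

for p in (1.0, 0.1, 0.01, 0.001):
    print(f"P = {p:>6} atm -> Tb ~ {boiling_point(p):5.0f} C")
# Pressures in the mbar range bring Ca vapour generation down towards the
# low-temperature carburization regime (< ~900-950 C) described above.
```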
In the first attempt, the low temperature and low pressure adopted were ineffective in inducing gas diffusion into the CaO body; on the other hand, when using the high-temperature treatment, the enhanced thermally-induced grain growth provoked the formation of a surface layer poorly permeable to further CO2 gas penetration, thus hampering the progress of the reaction towards the inner core of the bulk structure. To solve the problem, a more recent study made use of a furnace capable of thermal treatments under gas pressure. CaO templates were heated under CO2 pressure (100 atm), thus reaching supercritical conditions that greatly increased the reactivity of CO2 and enabled nearly complete conversion of CaO into CaCO3 at T ∼800°C. In spite of this success, the obtained CaCO3 body could not be used for the subsequent transformation into hydroxyapatite, as planned: in fact, the sample underwent spontaneous disintegration within a short time after it was obtained. Such an occurrence shows how chemical, physical and mechanical features are closely inter-related when the goal is to achieve a biomorphic transformation of large macroscopic samples, where indeed diffusivity phenomena become increasingly relevant. In fact, the disintegration was ascribed to residual stresses accumulated in the CaCO3 body, caused by the thermally-induced grain growth. Therefore, even under supercritical conditions enhancing the reaction kinetics, it was not possible to prevent the CaCO3 grain growth. In a different approach, the CaO-CO2 reaction was further facilitated by introducing water vapour into the chamber to induce the formation of a thin aqueous film on the CaO surface during the process. Such a hydration layer could accelerate the CaO-CO2 reaction by inducing the formation of calcium hydroxide (Ca(OH)2) at the surface. Such an occurrence was previously reported as detrimental to the integrity of the solid CaO reactant, because the large volume change occurring during the transformation of CaO to Ca(OH)2 provoked its prompt disintegration. However, in the presence of the aqueous layer, the reacting CO2 gas could dissolve, thus creating an acidic aqueous environment raising the solubility of the newly formed Ca(OH)2 phase and making available free Ca2+ ions which, reacting with the CO2, formed the CaCO3 phase at a much lower temperature. The above-mentioned adverse effects related to the formation of Ca(OH)2 could be avoided thanks to the use of high gas pressure under supercritical conditions, which permitted the formation of Ca(OH)2 only as a transient phase. The use of high pressure was key to achieving the penetration of the gaseous CO2 reactant into the inner regions of the CaO solid until complete transformation occurred. The CaCO3 precursors obtained with this method exhibited enhanced mechanical strength and smaller grain size (i.e., ∼1-2 μm vs 10-20 μm), accompanied by enhanced intergranular porosity and higher specific surface area, thus increasing the chemical reactivity and the ability to undergo dissolution-reprecipitation into hydroxyapatite during the final transformation step, as described in the following paragraph. Hydrothermal Reactions to Generate 3D Hydroxyapatite Scaffolds With Designed Crystal Structure and Nanotexture Hydrothermal reactions can be defined as wet chemistry techniques making use of an aqueous solvent above its boiling temperature and pressure.
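As an aside on the volume changes invoked above to explain the disintegration events, a back-of-the-envelope molar-volume comparison is instructive. The densities used below (CaO 3.34, Ca(OH)2 2.21, calcite 2.71 g/cm3) are handbook values assumed for illustration, not measurements from this study.

```python
# Back-of-the-envelope molar-volume ratios for the phase changes discussed
# above. Densities are handbook values (not taken from the paper).
phases = {
    "CaO":     {"M": 56.08,  "rho": 3.34},   # molar mass g/mol, density g/cm^3
    "Ca(OH)2": {"M": 74.09,  "rho": 2.21},
    "CaCO3":   {"M": 100.09, "rho": 2.71},   # calcite
}

def molar_volume(name):
    """Molar volume in cm^3/mol from molar mass and density."""
    p = phases[name]
    return p["M"] / p["rho"]

v_cao = molar_volume("CaO")
for product in ("Ca(OH)2", "CaCO3"):
    ratio = molar_volume(product) / v_cao
    print(f"CaO -> {product}: molar volume expands ~{ratio:.1f}x")
```

The roughly twofold molar-volume expansion for both CaO to Ca(OH)2 and CaO to CaCO3 makes clear why an uncontrolled transformation front can disintegrate the body unless intergranular porosity can absorb the strain.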
Hydrothermal reactions are characterized by phenomena of chemical dissolution, nucleation and growth of inorganic crystals, which can be modulated by controlling fundamental parameters such as temperature/pressure, pH, and reaction time. In turn, the control of such parameters can yield enhanced control of the shape and morphology of the particles forming the product. The thermodynamic driving force for hydrothermal reactions, i.e., the molar Gibbs energy, is expressed by the simple formula ΔG = -(RT/n) ln S, where ΔG is the molar Gibbs energy; R is the universal gas constant; T is the absolute temperature; n is the number of ions in the product molecule and S is the supersaturation degree, defined as the ratio between the activity product of ion units (A) and the corresponding solubility product (Ksp), at a given temperature. It is thus evident that the kinetics of hydrothermal processes are strongly related to the process temperature and, to a minor extent, to the supersaturation. As a general rule, high temperature leads to the formation of large and long fibres/particles, while at lower temperature small dimensions are preferred (Loo et al., 2008; Zhang and Vecchio, 2007; Jiang and Zhang, 2009). When experimental conditions are tuned to decrease the free-energy variation, the reaction proceeds more slowly, so that nucleation of the product occurs at a reduced rate, but this may have drawbacks in terms of enhanced crystal growth. In this respect, the control of the saturation index, defined as SI = log(S), allows fine tuning of the crystal size: when SI < 0, the solution is undersaturated and dissolution processes are favoured; if SI = 0, the system is in equilibrium; while at SI > 0 crystallization is favoured, with higher nucleation rates as SI increases. Another major parameter affecting hydrothermal processes is the pH. Particularly, when considering the specific case of CaCO3 conversion into hydroxyapatite (HA), the tuning of pH can modulate the distribution of the various phosphate species (i.e., H3PO4, H2PO4−, HPO4 2−, and PO4 3−), each prevailing in a different pH range. An increase in pH shifts the phosphate speciation from H3PO4 ➔ H2PO4− ➔ HPO4 2− ➔ PO4 3−, respectively, and increases the saturation index of HA, SI = log(a(Ca2+)^5 a(PO4 3−)^3 a(OH−)/Ksp) (Viswanath and Ravishankar, 2008). This means that SI > 1, a condition leading to HA nucleation and growth, can be reached at pH values >5. It was observed that, moving from acidic to alkaline conditions, the morphology of the newly formed HA particles changes from needle-like or rod-like shapes to smaller and more rounded particles (Earl et al., 2006; Sadat-Shojai et al., 2011; Salariana et al., 2008). The above general concepts also apply when wet synthesis processes carried out at low temperature or under hydrothermal conditions involve a solid reactant. In biomorphic transformations, the purpose is to achieve phase transformation in a solid while retaining its original microstructural details at the multi-scale. In this respect, the overall dissolution/reprecipitation kinetics should be tuned not only to achieve products with the desired composition and nanostructure but also, and importantly, to prevent the disruption of the final material, either due to excessive dissolution rates or to structural deformation related to the growth of the new phase.
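The pH-driven shift of phosphate speciation discussed above can be sketched numerically. The pKa values of phosphoric acid used below (~2.15, 7.20, 12.35) are standard literature numbers, not from this study, and activity corrections are ignored, so this is only an illustrative speciation estimate.

```python
import math

# Sketch of how pH shifts phosphate speciation. pKa values of phosphoric
# acid are standard literature numbers (assumed, not from the paper);
# activities are approximated by concentrations.
PKA = (2.15, 7.20, 12.35)

def phosphate_fractions(ph):
    """Fractions of (H3PO4, H2PO4-, HPO4 2-, PO4 3-) at a given pH."""
    h = 10.0 ** (-ph)
    k1, k2, k3 = (10.0 ** (-pk) for pk in PKA)
    # Unnormalized populations from successive deprotonation equilibria.
    pops = (h**3, h**2 * k1, h * k1 * k2, k1 * k2 * k3)
    total = sum(pops)
    return tuple(p / total for p in pops)

print("pH   H3PO4   H2PO4-  HPO4 2- PO4 3-")
for ph in (4, 6, 8, 10, 12):
    f = phosphate_fractions(ph)
    print(f"{ph:>2}  " + "  ".join(f"{x:.3f}" for x in f))
```

The output shows PO4 3− becoming appreciable only at strongly alkaline pH, consistent with alkaline conditions favouring a high HA saturation index.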
In this respect, previous experiments carried out under wet conditions on a CaCO3 piece obtained by biomorphic transformation of rattan wood confirmed that temperature is a crucial parameter determining the reaction rate. In fact, even when working at quite low temperatures (i.e., 20 or 40°C), slightly acidic conditions provoked an excessive dissolution rate, leading to the disruption of the material; at higher temperatures (i.e., up to 60°C) similar results were obtained also under more alkaline conditions. Such a phase transformation proved more effective when applying hydrothermal treatments in alkaline conditions to CaCO3 biomorphic precursors characterized by high reactivity. Such a process could accelerate the transformation of CaCO3 into hydroxyapatite characterized by a lamellar nanostructure similar in morphology to the inorganic part of bone. This achievement was facilitated also by topotactic information given by the CaCO3 crystals, yielding the epitaxial growth of the new apatitic phase along specific crystal directions (Álvarez-Lloret et al., 2010). At this point, the development of specific nanostructures can be driven by tuning processing parameters such as pH, ionic strength and reaction time. During the process, self-assembly phenomena can lead to coalescence of the newly formed HA particles into nanosized superstructures with lamellar, petal-like or flower-like morphologies (Figure 4). The wet process yielding the final scaffold can be carried out under conditions permitting the composition of the apatitic phases to be modulated at the atomic level, by activating diffusive and ion-exchange phenomena driven by temperature and pH. In this respect, it is possible to induce the partial replacement of Ca2+ ions with foreign biologically-relevant ionic species that alter the chemistry and crystallinity of the apatitic phase, thus enhancing the bioactivity, particularly the capability of ion exchange in a physiologic environment, and the bio-resorbability. Relevant bioactive ions include Mg2+, Sr2+, CO3 2−, SiO4 4− and Zn2+, which are well known to enhance the osteogenic process and to promote the cross-talk between different cell lines active in bone regeneration such as osteoblasts, osteoclasts and endothelial cells (Montesi et al., 2017; Landi et al., 2008; Bigi et al., 1992). The ion-exchange process can be performed by varying the parameters influencing the diffusion rate of doping ions, i.e., temperature and ion concentration, according to the Gibbs equation for ion-exchange phenomena in a closed system (i.e., ΔG = RT ln(Q/Kq), where Q is the reaction quotient and Kq is the standard equilibrium constant). In a previous study it was possible to obtain the partial replacement of Ca2+ ions with small amounts of Mg2+ and Sr2+ ions, which are among the most relevant cell instructors to promote and sustain new bone formation and maturation (Galli et al., 2017; Saidak and Marie, 2012). Biologic Performance of Biomorphic Apatitic Scaffold The relevance of the above approach resides in the outstanding results coming from various biological and mechanical tests. The unique composition and structure of the biomorphic scaffold were the source of great osteogenic ability in a bioreactor study (Tampieri et al., 2019): the biomorphic scaffold induced overexpression of various genes involved in osteogenesis, greatly enhanced in comparison with a sintered apatitic scaffold featuring a similar porosity extent.
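For the ion-exchange relation quoted above, the sign of ΔG = RT ln(Q/Kq) directly indicates in which direction the exchange proceeds; a minimal numeric sketch follows, with purely illustrative placeholder values of Q and Kq (they are not measured quantities from this work).

```python
import math

# Direction of an ion-exchange reaction from Delta_G = R*T*ln(Q/Kq).
# Q and Kq values here are illustrative placeholders only.
R = 8.314  # J/(mol K)

def delta_g(Q, Kq, T=310.15):
    """Molar Gibbs energy (J/mol) at temperature T (default: body temp)."""
    return R * T * math.log(Q / Kq)

for Q, Kq in ((0.01, 1.0), (1.0, 1.0), (10.0, 1.0)):
    dg = delta_g(Q, Kq)
    if dg < 0:
        direction = "forward (exchange proceeds)"
    elif dg == 0:
        direction = "at equilibrium"
    else:
        direction = "reverse"
    print(f"Q/Kq = {Q / Kq:6.2f}: dG = {dg / 1e3:+7.2f} kJ/mol -> {direction}")
```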
This finding on osteogenic gene overexpression confirmed the great relevance of the biomimetic scaffold composition and multi-scale structure, characterized by ion-exchange ability and biosolubility, in acting as instructors for cells, in comparison with sintered bodies that, despite very good biocompatibility, are too chemically inert to enable bioactive interaction with cells. In the same study, the biomorphic scaffold also demonstrated the ability to induce and guide the formation of new bone with organized osteon-like morphology when subcutaneously implanted in rabbits. This result is likewise of great relevance because it demonstrates that the topological information given by the biomimetic structural hierarchy is able to effectively guide the direct formation of mature and structurally organized tissue, even when the scaffold was implanted in an ectopic site, i.e., free of any osteogenic endogenous signal, added cells or growth factors. It is worth considering that, lacking such instructive information, osteoinductive ability is usually observed through the preliminary formation of disordered woven bone and subsequent remodelling, which however can occur only in response to specific mechanical needs that usually do not arise in an ectopic site. From a mechanical perspective, the achievement of a biomorphic structure with multi-scale hierarchy was relevant to yield damage-tolerant mechanical behaviour, as occurs with natural bone. This is a very unusual feature for pure ceramic materials, which are generally recognized as brittle, a feature that greatly limits their use in load-bearing applications. Indeed, it was recently reported that, together with the multi-scale architecture of the natural construct from which they originate, biomorphic materials also inherit its vascularizing and mechanical performance so that, from a mechanical perspective, they can be considered a new class of inorganic compounds (Bigoni et al., 2020). Scaffolds with bone-like mechanical performance can mimic the behaviour of natural bone under loading, thus being very promising as instructors for the activation of cell mechanotransduction phenomena, particularly relevant in load-bearing bones. This hypothesis was confirmed by a recent in vivo study, carried out by implanting the biomorphic scaffold in a critical segmental defect (2 cm) in sheep metatarsus, in comparison with allograft (Kon et al., 2021). Analysis of the explants after 6 months revealed effective osteogenesis, osteointegration and vascularization involving the whole scaffold volume, thus leading to regeneration of the whole segmental bone defect. Such phenomena were related to extensive bio-resorption of the scaffold, permitted by its biomimetic composition and nanostructure, in turn enabling physiologic metabolic activity by endogenous bone and endothelial cells so that extensive vascularization could also occur. The latter was also favoured by the unique 3D architecture of the scaffold, which also yielded appropriate mechanical performance. In fact, microhardness analysis of the explants reported the recovery of bio-competent mechanical properties in the newly formed bone, related to an appropriate mineralization extent, whereas osteopenia at the bone-scaffold interface was observed with allografts.
Considering that this study was carried out in a large animal model using a cell-free device without any added growth factors, these results confirm that the use of 3D biomorphic scaffolds recapitulating the chemical composition, 3D hierarchical architecture and mechanical behavior of natural bone tissue is a promising way to achieve extensive regeneration of long load-bearing bone segments in real clinical scenarios. CONCLUSION: PERSPECTIVES FOR FUTURE APPLICATIONS OF BIOMORPHIC MATERIALS Mankind has always observed with curiosity and amazement the functional abilities of the many living beings that populate the earth, air and water, dreaming of one day imitating them. In recent years, materials scientists have stepped up their efforts to investigate the structural mechanisms that determine such unique capabilities; however, establishing technological processes able to develop devices that replicate such mechanisms is still an unsolved challenge. In this regard, biomorphic transformation processes represent a very promising approach to achieving this goal, also in consideration of recent results showing the feasibility of obtaining macroscopic constructs with controlled composition, nanostructure and 3D multi-scale architecture that synergistically generate outstanding and unusual properties. In particular, it has been shown that the application of heterogeneous chemical reactions in the 3D state makes it possible to transform a natural wood into a ceramic bone scaffold with a hierarchically organized architecture, endowed with unprecedented biological and mechanical properties, which could one day solve clinical cases of great relevance in regenerative medicine that remain unmet. In a broader perspective, obtaining inorganic constructs with biomorphic structure could open the way, in the near future, to functional applications that are still unexplored because they are not feasible with current devices. In this regard, the study and application of biomorphic transformation processes are still in their infancy; however, given the innumerable natural structures from which materials scientists can draw inspiration, it is foreseeable that the study of new processes based on heterogeneous chemistry in the 3D state might constitute a new area of research in materials science in the coming years.
8,789.2
2021-09-07T00:00:00.000
[ "Materials Science" ]
Shear-driven and diffusive helicity fluxes in alpha-Omega dynamos We present nonlinear mean-field alpha-Omega dynamo simulations in spherical geometry with simplified profiles of the kinematic alpha effect and shear. We take magnetic helicity evolution into account by solving a dynamical equation for the magnetic alpha effect. This gives a consistent description of the quenching mechanism in mean-field dynamo models. The main goal of this work is to explore the effects of this quenching mechanism in solar-like geometry, and in particular to investigate the role of magnetic helicity fluxes, specifically diffusive and Vishniac-Cho (VC) fluxes, at large magnetic Reynolds numbers (Rm). For models with negative radial shear or positive latitudinal shear, the magnetic alpha effect has a predominantly negative (positive) sign in the northern (southern) hemisphere. In the absence of fluxes, we find that the magnetic energy follows an Rm^-1 dependence, as found in previous works. This catastrophic quenching is alleviated in models with diffusive magnetic helicity fluxes, resulting in magnetic fields comparable to the equipartition value even for Rm=10^7. On the other hand, models with a shear-driven Vishniac-Cho flux show an increase of the amplitude of the magnetic field with respect to models without fluxes, but only for Rm<10^4. This is mainly a consequence of assuming a vacuum outside the Sun, which cannot support a significant VC flux across the boundary. However, in contrast with the diffusive flux, the VC flux modifies the distribution of the magnetic field. In addition, if an ill-determined scaling factor in the expression for the VC flux is large enough, subcritical dynamo action is possible that is driven by the action of shear and the divergence of the current helicity flux. INTRODUCTION A crucial point in the study of astrophysical dynamos is to understand the mechanism by which they saturate. Nevertheless, a consistent description of this process has rarely been considered in mean-field dynamo (MFD) modeling, and often only a heuristic description is used. An important phenomenon happens when the dynamo operates in closed or periodic domains: the turbulent contribution to the dynamo equation, i.e., the α effect, decreases for large values of the magnetic Reynolds number. This process is known as catastrophic quenching and can pose a problem in explaining the generation of magnetic fields in astrophysical bodies such as late-type stars like the Sun, or the Galaxy, where Rm can be of the order of 10^9 or 10^15, respectively. In the last few years the nature of the catastrophic quenching has been identified as a consequence of magnetic helicity conservation (for a review see Brandenburg & Subramanian 2005a). It has been found that in the nonlinear phase of the dynamo process, conservation of magnetic helicity gives rise to a magnetic α effect (αM) with a sign opposite to the inductive contribution due to the helical motions, i.e., the kinematic α effect. As the production of αM depends on Rm, the final value of the magnetic field should also follow the same dependence. However, real astrophysical bodies are not closed systems; they have open boundaries that may allow a flux of magnetic helicity. The shedding of magnetic helicity may mitigate the catastrophic α quenching. These ideas have been tested in direct numerical simulations (DNS) in both local Cartesian and global spherical domains.
In the former (Brandenburg 2005; Käpylä, Korpi and Brandenburg 2008) it has been clearly shown that open boundaries (e.g., vertical-field boundary conditions) lead to a faster saturation of the large-scale magnetic field compared with cases in closed domains (perfect-conductor or triply-periodic boundary conditions). In the latter, it has been found that it is possible to build up large-scale magnetic fields either with forced turbulence (Brandenburg 2005; Mitra et al. 2010b) or with convectively driven turbulence (e.g., Brown et al. 2010; Käpylä et al. 2010). These models generally used vertical-field boundary conditions. In flux-transport dynamos (Dikpati & Charbonneau 1999; Guerrero & de Gouveia Dal Pino 2008) as well as in interface dynamos of the solar cycle (e.g. MacGregor), the quenching mechanism has been considered either through an ad hoc algebraic equation or by phenomenological considerations (Chatterjee, Nandy & Choudhuri 2004), but most of the time the models do not consider the effects of magnetic helicity conservation. An exception is a recent paper where these effects have been considered in the context of an interface dynamo. In general the magnetic helicity depends on time, so it is necessary to solve an additional dynamical equation for the contribution of the small-scale field to the magnetic helicity together with the induction equation for the magnetic field. In the past few years, some effort has already been made to consider this dynamical saturation mechanism in MFD models, as in the 1D α² dynamo models presented in Brandenburg, Candelaresi & Chatterjee (2009), in axisymmetric models in cylindrical geometry for the galactic αΩ dynamo (Shukurov et al. 2006), and also in models with spherical geometry for an α² dynamo (Brandenburg et al. 2007). The role of various kinds of magnetic helicity fluxes has been explored in several papers (Brandenburg, Candelaresi & Chatterjee 2009; Zhang et al. 2006; Shukurov et al. 2006). Our ultimate goal is to develop a self-consistent MFD model of the solar dynamo, with observed velocity profiles and turbulent dynamo coefficients computed from DNS. This is a task that requires intensive efforts. Hence we shall proceed step by step, starting with simple models and including more realistic physics along the way. In this work we study the effects of magnetic helicity conservation in simplified αΩ dynamo models for a considerable number of cases. More importantly, we perform our calculations in spherical geometry, which is appropriate for describing stellar dynamos, with suitable boundary conditions, and considering shear profiles which are a simplified version of the observed solar differential rotation. We shall also explore how magnetic helicity fluxes affect the properties of the solution. Two classes of fluxes are considered in this paper: a diffusive flux and a shear-driven or Vishniac-Cho (hereafter VC) flux (Vishniac & Cho 2001). We consider models with either radial or latitudinal shear. The effects of meridional circulation will be investigated in detail in a companion paper. This paper is organized as follows: in Section 2 we describe the basic mathematical formalism of the αΩ dynamo, give the formulation of the equation for αM and also justify the fluxes included. In Section 3 we describe the numerical method; then, we present our results in Section 4, starting from a dynamo model with algebraic quenching and moving to models with dynamical α quenching and different fluxes.
Finally, we provide a summary of this work in Section 5. THE αΩ DYNAMO MODEL In mean-field dynamo theory, the evolution of the magnetic field is described by the mean-field induction equation, ∂B/∂t = ∇ × (U × B + E − ηm µ0 J), (1) where B and U represent the mean magnetic and velocity fields, respectively, ηm is the molecular diffusivity, E = αB − ηt µ0 J is the mean electromotive force obtained using a closure theory like the first-order smoothing approximation, where E gives the contribution of the small-scale components to the large-scale field, α is the non-diffusive contribution of the turbulence, ηt is the turbulent magnetic diffusivity, J = ∇ × B/µ0 is the mean current density, and µ0 is the vacuum permeability. In spherical coordinates and under the assumption of axisymmetry, it is possible to split the magnetic and velocity fields into their azimuthal and poloidal components, B = B êφ + ∇ × (A êφ) and U = r sin θ Ω êφ + up, respectively. For the sake of simplicity we shall not consider the meridional component of the flow, i.e. up = 0. Then, the toroidal and poloidal components of equation (1) may be written as ∂B/∂t = s Bp · ∇Ω + η D²B + (1/r)(dη/dr) ∂(rB)/∂r, (2) and ∂A/∂t = αB + η D²A, (3) where D² = ∇² − s⁻² is the diffusion operator, η = ηm + ηt, s = r sin θ is the distance from the axis, and Bp = ∇ × (A êφ) is the poloidal field. The two source terms in equations (2) and (3), sBp · ∇Ω and αB, express the inductive effects of shear and turbulence, respectively. The relative importance of these two effects may be quantified through the non-dimensional dynamo numbers CΩ = ΔΩ L²/ηt and Cα = α0 L/ηt, where ΔΩ is the angular velocity difference between the top and bottom of the domain. Note that equations (2) and (3) are valid only in the limit CΩ ≫ Cα, known as the αΩ dynamo. The inductive effect of the shear may be understood as the stretching of the magnetic field lines due to the change in the angular velocity between two adjacent points. On the other hand, the kinematic α effect is the consequence of helical motions of the plasma which produce screw-like motions in the rising blobs of the magnetic field. Using the first-order smoothing approximation it may be expressed as αK = −(τ/3) ⟨ω · u⟩, (4) where τ is the correlation time of the turbulent motions and ω = ∇ × u is the small-scale vorticity. The saturation value of the magnetic field may be obtained by multiplying αK by the quenching function fq = (1 + B²/B²eq)⁻¹, which saturates the exponential growth of the magnetic field at values close to the equipartition field strength given by Beq = (µ0 ρ u²)^(1/2). This form of algebraic quenching was introduced heuristically (see, e.g. Stix 1972) and has often been used as the standard quenching mechanism in many dynamo simulations. However, it does not give information about the back-reaction process and is independent of any parameter of the system, such as the magnetic Reynolds number. A consistent description of the quenching mechanism will be presented in the following section. Dynamical α effect Recently, it has been demonstrated that when the amplitude of the magnetic field reaches values near equipartition, the α effect is modified by a magnetic contribution, the so-called magnetic α effect, denoted by αM. It is usually the case that αM has a sign opposite to αK, resulting in the saturation of the magnetic field.
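As an aside, a rough solar order-of-magnitude estimate of the dynamo numbers CΩ and Cα defined above helps motivate the αΩ limit. The inputs below (rotation contrast ~100 nHz, layer depth ~2×10^8 m, ηt ~10^8 m²/s, α0 ~1 m/s) are common order-of-magnitude assumptions, not values used in this paper.

```python
import math

# Order-of-magnitude estimate of C_Omega = dOmega L^2 / eta_t and
# C_alpha = alpha_0 L / eta_t. All inputs are rough solar estimates
# assumed for illustration, not the paper's model parameters.
d_omega = 2.0 * math.pi * 100e-9   # rad/s, ~100 nHz angular-velocity contrast
L = 2.0e8                          # m, ~depth of the solar convection zone
eta_t = 1.0e8                      # m^2/s, turbulent magnetic diffusivity
alpha0 = 1.0                       # m/s, kinetic alpha-effect amplitude

c_omega = d_omega * L**2 / eta_t
c_alpha = alpha0 * L / eta_t
print(f"C_Omega ~ {c_omega:.0f}, C_alpha ~ {c_alpha:.1f}, "
      f"C_Omega/C_alpha ~ {c_omega / c_alpha:.0f}")
```

Even with these crude inputs CΩ exceeds Cα by about two orders of magnitude, which is why the αΩ limit CΩ ≫ Cα is the relevant one for the Sun.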
Pouquet, Frisch & Léorat (1976) have shown that αM is proportional to the small-scale current helicity of the system, hence it is possible to write α as a sum of two contributions, one from the fluid turbulence and the other from the magnetic field, as follows: α = αK + αM, with αM = (τ/3) ⟨j · b⟩/ρ, (5) where ρ is the mean density of the medium, assumed here to be constant, and j = ∇ × b/µ0 is the current density of the fluctuating field. The mathematical expression that describes the evolution of αM may be obtained by taking into account the magnetic helicity evolution (Blackman & Brandenburg 2002), which leads to ∂αM/∂t = −2 ηt kf² (E · B/B²eq + αM/RM) − ∇ · Fα, (6) where kf = 2π/(L − rc) with rc = 0.7L0 is a suitable choice for the wavenumber of the forcing scale, RM = ηt/ηm is the magnetic Reynolds number, and Fα is the flux of the magnetic α effect, related to the flux of the small-scale magnetic helicity, Ff, through Fα = ηt kf² Ff/B²eq. (7) According to previous authors, αM has a finite value in the interior of the domain in the absence of fluxes (Fα = 0), and its sign is usually opposite to the sign of αK, in such a way that the final amplitude of the total α effect decreases, and so does the final value of the magnetic energy. Magnetic helicity fluxes Recently it has been pointed out that the catastrophic quenching could be alleviated by allowing a flux of small-scale magnetic (or current) helicity out of the domain, so that the total magnetic helicity inside need no longer be conserved. Alternatively, we may introduce those fluxes in the equation for αM; see equation (7). Several candidates have been proposed for the helicity fluxes in the past (Kleeorin & Rogachevskii 1999; Vishniac & Cho 2001; Subramanian & Brandenburg 2004). Amongst them are the flux of magnetic helicity across the iso-rotation contours, advective and diffusive fluxes, and also the explicit removal of magnetic helicity in processes like coronal mass ejections or, for the case of the galactic dynamo, galactic fountain flows. From the mathematical point of view, the nature of the flux terms in the equation for αM has not been demonstrated with sufficient rigor. However, several DNS have pointed to their existence. Firstly, the shearing-box convection simulations of Käpylä, Korpi and Brandenburg (2008) showed that in the presence of open boundaries, the large-scale magnetic field grows on temporal scales much shorter than the dissipative time scale. They concluded from this that open boundaries may allow the magnetic helicity to escape from the system. These experiments seem to be compatible with the flux proposed by Vishniac & Cho (2001), whose functional form, given in equation (8), is quadratic in the mean magnetic field and proportional to the mean rate-of-strain tensor S_lk = (1/2)(U_l,k + U_k,l), with CVC a non-dimensional scaling factor (see Subramanian & Brandenburg 2004; Brandenburg & Subramanian 2005b, for further details). As we assume up = 0, the only nonvanishing strain components are S_φr = S_rφ = r sin θ (∂Ω/∂r)/2 and S_θφ = S_φθ = sin θ (∂Ω/∂θ)/2, which determine the three components of this flux in equations (9)-(11). Secondly, Mitra et al. (2010a) performed α² dynamo simulations driven by forced turbulence in a box with an equator. They found that the diffusive flux of αM across the equator can be fitted to a Fickian diffusion law given by F_D = −κα ∇αM. (12) They also computed the numerical value of this diffusion coefficient and found it to be of the order of the turbulent diffusion coefficient. They also found that the time-averaged flux is gauge independent. Both results were later corroborated by simulations without an equator, but with a decline of kinetic helicity toward the boundaries (Hubbard & Brandenburg 2010).
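The catastrophic Rm^-1 scaling implied by equation (6) in the absence of fluxes can be demonstrated with a zero-dimensional caricature in the spirit of the two-scale models cited above. The simplified growth law dB/dt = (α − αc)B, the approximation E·B ≈ (αK + αM)B², the loss term standing in for a diffusive helicity flux, and all coefficient values are illustrative assumptions, not the 2D model solved in this paper.

```python
import math

# Zero-dimensional caricature of dynamical alpha quenching. Units: eta_t = 1,
# L = 1, B in units of B_eq. The optional loss term -kappa_d*alpha_M is a
# crude stand-in for a diffusive helicity flux; all coefficients are
# illustrative assumptions.
def saturated_field(rm, kappa_d=0.0, alpha_k=2.0, alpha_c=1.0, kf=10.0, dt=1e-2):
    b, alpha_m = 1e-4, 0.0
    t_end = 20.0 + 10.0 / (2.0 * kf**2 / rm + kappa_d)  # slowest timescale
    for _ in range(int(t_end / dt)):
        alpha = alpha_k + alpha_m
        db = (alpha - alpha_c) * b                       # growth above criticality
        dam = -2.0 * kf**2 * (alpha * b**2 + alpha_m / rm) - kappa_d * alpha_m
        b += dt * db
        alpha_m += dt * dam
    return b

for rm in (1e2, 1e3, 1e4):
    b_noflux = saturated_field(rm)
    b_flux = saturated_field(rm, kappa_d=0.2)
    # Toy-model steady state: B^2 = (alpha_k/alpha_c - 1)*(1/Rm + kappa_d/(2 kf^2)).
    print(f"Rm = {rm:8.0f}: no flux B = {b_noflux:.4f} (~Rm^-1/2), "
          f"with loss term B = {b_flux:.4f}")
```

Without the loss term the saturated energy scales as Rm^-1, reproducing catastrophic quenching; with even a modest loss term the field converges to an Rm-independent value, echoing the role the diffusive flux plays in the full model.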
Additionally, magnetic helicity may be advected by the mean velocity with a flux given by F_ad = αM U, or it may be expelled from the solar interior by coronal mass ejections (CMEs) or by the solar wind. This flux, F_CME, may account for ∼10% of the total helicity generated by the solar differential rotation, as estimated by Berger & Ruzmaikin (2000). It can be modeled by artificially removing a small amount of αM at every time interval τ (Brandenburg, Candelaresi & Chatterjee 2009), or by a radial velocity field that mimics the solar wind. The total flux of magnetic helicity may be written as the sum of these contributions, equation (13). Since in this dynamo model we do not include any component of the velocity field other than the differential rotation, in this study we consider only the first two terms on the rhs of equation (13). THE MODEL We solve equations (2), (3) and (6) for A, B and αM in the meridional plane in the range 0.6L ≤ r ≤ L and 0 ≤ θ ≤ π. We consider two different layers inside the spherical shell. In the inner one the dynamo production terms are zero; they rise smoothly to a finite value in the external layer. The magnetic diffusivity changes from a molecular to a turbulent value from the bottom to the top of the domain. This is achieved by considering error-function profiles for the magnetic diffusivity, the differential rotation, and the kinetic α effect, respectively (equations (14)-(16); see Fig. 1), built from the step function Θ(r, r_{1,2}, w) = (1/2)[1 + erf{(r − r_{1,2})/w1}], with r1 = 0.7L0, r2 = 0.72L0 and w1 = 0.025L0. We fix CΩ = −10^4 and vary Cα. The boundary conditions are chosen as follows: at the poles, θ = 0, π, we impose A = B = 0; at the base of the domain, we impose a perfect-conductor boundary condition, i.e. A = ∂(rB)/∂r = 0. Unless noted otherwise, we use at the top a vacuum condition obtained by coupling the magnetic field inside with an external potential field, i.e., (∇² − s⁻²)A = 0. A good description of the numerical implementation of this boundary condition may be found in Dikpati & Choudhuri (1994). The equations for A and B are solved using a second-order Lax-Wendroff scheme for the first derivatives, and centered finite differences for the second-order derivatives. The temporal evolution is computed using a modified version of the ADI method of Peaceman & Rachford (1955), as explained in Dikpati & Charbonneau (1999). This numerical scheme has been used previously in several works on the flux-transport dynamo and the results were found to be in good agreement with those using other numerical techniques (Guerrero & de Gouveia Dal Pino 2007; Guerrero, Dikpati & de Gouveia Dal Pino 2009). In the absence of magnetic helicity fluxes, equation (6) for αM corresponds to an initial value problem that can be computed explicitly. However, as we are going to include a diffusive flux, we use for αM the same numerical technique used for A and B. All the source terms on the right-hand side of equation (6) are computed explicitly. We have tested the convergence of the solution for 64², 128², and 256² grid points. For cases with small Rm, there are no significant differences between resolutions, but for high Rm, 64² grid points are insufficient to properly resolve the sharp diffusivity gradient. A resolution of 128² grid points is a good compromise between accuracy and speed. αΩ dynamos with algebraic quenching In order to characterize our αΩ dynamo model we start by exploring the properties of the system when the saturation is controlled by algebraic quenching with fq = (1 + B²/B²eq)⁻¹.
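The error-function step used to build these profiles is easy to visualize. The sketch below assumes L0 = 1 and illustrative normalizations (ηm = 10⁻³, ηt = 1, α0 = 1), so it shows the construction rather than the paper's exact equations (14)-(16).

```python
import math

# Sketch of the error-function step profiles used to switch the dynamo
# coefficients on across a thin transition layer. L0 = 1 and the
# normalizations are illustrative assumptions.
L0 = 1.0
r1, r2, w1 = 0.70 * L0, 0.72 * L0, 0.025 * L0

def theta(r, rc, w):
    """Smoothed step Theta(r, rc, w) = 0.5*[1 + erf((r - rc)/w)]."""
    return 0.5 * (1.0 + math.erf((r - rc) / w))

eta_m, eta_t, alpha0 = 1e-3, 1.0, 1.0
for i in range(9):
    r = 0.6 * L0 + i * (0.4 * L0 / 8)
    eta = eta_m + eta_t * theta(r, r1, w1)   # molecular -> turbulent diffusivity
    alpha_k = alpha0 * theta(r, r2, w1)      # alpha effect confined above r2
    print(f"r = {r:.3f}  eta = {eta:.4f}  alpha_K = {alpha_k:.4f}")
```

The printout shows the diffusivity and α profiles switching on over a layer of width ~w1 near r1 and r2, which is the sharp gradient that demands at least 128² grid points at high Rm.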
We found that, with the profiles given by equations (14)-(16) and shown in Fig. 1, the critical dynamo number is around 2 × 10^4 (i.e., a critical value Cα^c = 1.975). The solution of the model is a dynamo wave traveling towards the equator, since it obeys the Parker-Yoshimura sign rule (see Fig. 2). In this case, the maximum amplitude of the magnetic field depends only on the dynamo number of the system, CαCΩ, as can be seen in the bifurcation diagram in Fig. 3. The quenching formula here is independent of Rm, so the saturation amplitude is also independent of Rm. αΩ dynamos with dynamical quenching In this section we consider dynamo saturation through the dynamical equation for αM described in Section 2.1. In these models we distinguish three different stages in the time evolution of the magnetic field: a growing phase, a saturation phase and a final relaxation stage (see panels a, b and c of Fig. 5). The magnetic field is amplified from its initial value, 5 × 10⁻⁴ Beq, following exponential growth. From the earliest stages of the evolution we notice the growth of αM, with values that are predominantly negative in the northern hemisphere and positive in the southern hemisphere. The latitudinal distribution of αM is fairly uniform in the active dynamo region, spanning from the equator to ∼60° latitude. The radial distribution exhibits two narrow layers where the sign of αM is opposite to the dominant one developing in each hemisphere. These are located at the base of the dynamo region (r ∼ 0.7L0) and in a thin layer near the surface (r > 0.95L0). In the equation for the magnetic α effect, equation (6), the production term is proportional to E · B = αB² − ηt µ0 J · B. The first component of this term has the same sign as αK, which in general is positive in the northern and negative in the southern part of the domain. The minus sign in front of the right-hand side of equation (6) then defines the sign of αM. However, at the base and at the top of the dynamo region, αK → 0 and B → 0, respectively. There the term ηt µ0 J · B is the only source of αM and leads to the formation of these two thin layers. The space-time evolution of αM depends on the value of the magnetic Reynolds number. For small Rm, the decay term in equation (6) (i.e. the second term in the parenthesis) becomes important, so that there is a competition between the production and decay terms resulting in an oscillatory behavior in the amplitude of the magnetic α effect, as indicated by the vertical bars in the middle panel of Fig. 7. The period of these oscillations is half the period of the magnetic cycle. With increasing Rm, the amplitude of the oscillations decreases, such that for Rm ≳ 10^3, αM is almost steady. The morphology of the magnetic field corresponds to a multi-lobed pattern of alternating polarity (left panels of Fig. 5). These lobes are radially distributed in the whole dynamo region with maximum amplitude at the base of this layer. The poloidal magnetic field follows a similar pattern with lines that are open at the top of the domain due to the potential-field boundary condition. There is a phase shift between the toroidal and poloidal components which we have estimated to be ∼0.4π. The model preserves the initial dipolar parity during the entire evolution. The evolution of αM traces the growth of the magnetic field, but its final value depends on the magnetic Reynolds number. For small Rm, after saturation, αM reaches a steady state, but for large Rm, its relaxation is modulated by over-damped oscillations.
The relaxation time is proportional to Rm, which means that for Rm ≫ 1 the simulation must run for many diffusion times. The differences in the relaxation time observed for αM reflect the evolution of the magnetic field, as shown in Fig. 4. We observe that the rms value of the magnetic field remains steady during the saturation phase for Rm < 10^2. For 10^2 < Rm < 10^3, a bump appears in the curve of the magnetic field evolution, followed by relaxation to a steady value, whereas for Rm > 10^3, the magnetic energy shows over-damped relaxation with a final energy proportional to Rm^-1, as has been previously reported (Brandenburg et al. 2007). Such oscillations in the time evolution of the averaged magnetic field have been reported in mean-field dynamo simulations including the dynamical α effect (Brandenburg & Subramanian 2005b). Not many DNS of αΩ dynamos with Rm ≳ 100 exist so far in the literature to compare with our results. However, in the local αΩ dynamo simulations of Käpylä, Korpi and Brandenburg (2008), a rapid decay of the magnetic field seems to occur after the initial saturation for moderate values of Rm. This decay forms a bump in the curve of the averaged magnetic field (see their Fig. 14), similar to the bump that we obtain for 10^2 < Rm < 10^3. For reasons of clarity, in Fig. 4 we do not show the entire time evolution of each simulation with Rm > 10^3. The total evolution time as well as the final value of the magnetic field of each simulation are shown in Table 1. For magnetic Reynolds numbers above 2 × 10^4, the initial kinematic phase is followed by a decay phase during which the total α effect goes through subcritical values, and the dynamo then fails to start again. In Fig. 5 we present the meridional distribution of the magnetic field (left panel), αM (middle panel) and the total α (right panel), in normalized units, for the three different stages of evolution corresponding to the early kinematic phase, the late kinematic phase and the saturated phase. (Note that for Rm > 10^3, we have allowed the simulations to evolve for more than 4 diffusion times, as indicated in Table 1.) These snapshots correspond to the simulation with Rm = 10^3 (Run Rm1e3 in Table 1). The multi-lobed pattern of the toroidal field, represented with filled contours, remains unchanged during the evolution even though its amplitude increases. The same occurs for the poloidal component, shown by continuous and dashed streamlines for positive and negative values, respectively. The magnetic α effect (middle panels) is formed first at latitudes between ±30° and then amplifies and expands to latitudes up to ∼±60°. This makes the total α effect, initially similar to αK (Fig. 1 and top panel of Fig. 5a), smaller at lower latitudes in the central area of the dynamo region. At the bottom and at the top of the domain αM and αK have the same sign, making the total α larger. However, the global effect is a decrease of the dynamo efficiency. Diffusive flux for αM In this section we consider a Fickian diffusion term in equation (12) for αM. We consider a diffusion coefficient varying from 5 × 10⁻³ ηt to 10 ηt in the dynamo region, with κα = ηm in the bottom layer. In these cases, the initial evolution of αM is similar to the cases presented in the previous section: negative (positive) values of αM in the northern (southern) hemisphere, with narrow regions of opposite values near the regions where αK = 0 or B = 0.
However, at the later stages, αM is much more diffuse in the entire domain and has only one sign in each hemisphere. This is the result of the cancellation of αM contributions with opposite signs occurring in each hemisphere due to radial diffusion. Contrary to the cases without fluxes, we now obtain finite values of Bsat for large values of Rm, as can be seen in Fig. 6. All the cases depicted in this figure correspond to κα = 0.005ηt. We notice that the final value of the magnetic field still remains small compared to equipartition (≲ 0.1Beq), but it is clear that even this very modest diffusion prevents the α effect from being catastrophically quenched. This is also evident from the top panel of Fig. 7, where we plot the final strength of B as a function of Rm for the cases with and without the diffusive flux. In the middle and bottom panels of Fig. 7 we compare the behavior of the normalized αM, at a given point inside the dynamo region, and the time period, T, of the dynamo for models with and without fluxes. In both panels it is clear that for Rm above ∼10^3, αM and T reach saturated values. Besides its dependence on Rm, the evolution of αM depends also on κα. For models with κα ≪ ηt, the evolution of αM depends on Rm, but for κα ≳ 0.1ηt, the dissipation time of αM becomes comparable to, or even shorter than, the period of the dynamo cycle. This results in αM becoming oscillatory, as shown in the bottom panel of Fig. 8. The amplitude and the period of these oscillations depend on the value of κα. In the top panel of Fig. 8 we show the final value of the averaged mean magnetic field as a function of κα. We observe that for κα in the range (0.1-1) ηt, the value of Brms remains between 20% and 60% of equipartition, similar to the values obtained in the simulations using algebraic α quenching (Section 4.1, Fig. 3). For κα > ηt, super-equipartition values of the magnetic field may be reached. This is because larger values of κα result in oscillations of αM with larger amplitude, such that αM may locally change its sign, increasing the value of the total α in each hemisphere and thereby enhancing the dynamo action. Such high values of the diffusion of magnetic helicity are unlikely in nature. The Vishniac-Cho flux Our next step is to explore the magnetic helicity flux proposed by Vishniac & Cho (2001) in the form given by equation (8). For the moment we set κα = 0. In a previous study of the effects of the VC flux in a MFD model in Cartesian coordinates, Brandenburg & Subramanian (2005b) found that there exists a critical value of the parameter CVC above which there is a runaway growth of the magnetic field that can only be stopped by using an additional algebraic quenching similar to the one used in Section 4.1. They found that this critical value, CVC*, diminishes with increasing shear. Since we have used a strong shear (CΩ = −10^4), we use nominal values of CVC = 10⁻³, but without any algebraic quenching. The term ∇ · F_VC develops a multi-lobed pattern which travels in the same direction as the dynamo wave, confirming that the VC flux follows the lines of iso-rotation. From equation (8), we see that the VC flux is proportional to the magnetic energy density. In the present case, with CΩ ≫ Cα, the spatial distribution of ∇ · F_VC/B²eq is dominated by the terms involving B²φ in equations (9)-(11) (this may be inferred from the left-hand panels of Fig. 10a).
This results in a new distribution of αM, with concentrated regions of positive (negative) sign at low latitudes in the northern (southern) hemisphere, and a broad region of negative (positive) sign at latitudes between 20° and 60° (see middle panels of Fig. 10). Surprisingly, we find that the general effect of this flux is to decrease the final amplitude of the magnetic field with respect to the case without any fluxes, as can be seen in Fig. 11. Note that we have until now used only the potential-field boundary condition for the poloidal field. When we consider both diffusive as well as VC fluxes, with κα = 0.1ηt and CVC = 10⁻³, we obtain a magnetic field of slightly larger amplitude compared to the case with only the diffusive flux (compare the values of Brms in Runs DRm1e3 and VCD in Table 1). However, from the butterfly diagram of Fig. 12, the toroidal magnetic field appears to be more concentrated at lower latitudes, where the sign of αM is the same as that of αK. With negative values of CVC, it was found that the resulting profile of αM is only weakly modified with respect to cases without fluxes, though its value is reduced marginally such that the final amplitude of Brms is slightly larger. But even this contribution does not help in alleviating catastrophic quenching in models with large Rm (see Fig. 11). Since VC fluxes transport helicity along lines of constant shear, they may be expected to be more important in models with latitudinal shear, because in this case the magnetic helicity flux can travel either towards the bottom or the top boundaries, from where magnetic helicity can be expelled. To test this possibility, we turn off the radial shear profile and consider a purely latitudinal solar-like differential rotation, equation (17), built from the profile Ωs(θ) = Ωeq + a2 cos²θ + a4 cos⁴θ, where Ωeq/2π = 460.7 nHz is the angular velocity at the equator, with a2/2π = −62.9 nHz and a4/2π = −67.13 nHz. In order for the dynamo to be slightly supercritical, as in the previous cases, we consider CΩ = 5 × 10^4. This dynamo solution corresponds to a dynamo wave produced at mid latitudes (∼45°) that travels upwards (since CΩ is now positive). As in the previous cases with radial shear, the distribution of ∇ · F_VC/B²eq is similar to that of the divergence of the magnetic energy density (left-hand panels of Fig. 10b,c and d). If no fluxes are considered, the final amplitude of the mean magnetic field is ∼0.03% of the equipartition value. In the presence of VC fluxes, starting with CVC = 10⁻³ for a model with Rm = 10^3, we notice that the final magnetic field is twice as large as in the case with CVC = 0. Our model becomes numerically unstable beyond CVC = 10⁻² due to the appearance of concentrated regions of strong αM. When VC and diffusive fluxes are considered simultaneously, with CVC = 10⁻³ and κα = 0.1ηt, the relaxed value of Brms is only slightly below the value reached at the end of the kinematic phase (Fig. 11b). In this case αM spreads out in the convection zone, as shown in Fig. 10c, indicating that the effects of the VC flux are not important when compared with the diffusive flux. We repeated the calculation by considering the vertical-field (VF) boundary condition, ∂(rBθ)/∂θ = 0, for the top boundary, instead of the potential-field (PF) condition used throughout the rest of this work. Furthermore, in the models with VF conditions the presence of the VC flux leads to an increase of Bsat by a factor of ∼2 compared to the case without the VC flux (see Fig.
11c). It may be noted that αM shows regions of both positive and negative sign in each hemisphere (see Fig. 10d). Thus, the total α effect is increased locally to values well above the kinematic one. This implies that in the region around ±45° the dynamo action is driven by the magnetic α effect. A similar secondary dynamo is found to operate for a different distribution of shear and αK. As with the PF boundary condition, large values of CVC result in a numerical instability of the magnetic field in the simulation with VF. Figure 11. Time evolution of the averaged mean magnetic field for different values of CVC: a) radial shear, b) latitudinal shear with potential-field boundary conditions and c) latitudinal shear with vertical-field boundary conditions. The width of the different bands reflects the range over which the magnetic field varies during one cycle. Note that the cycle period is short compared with the resistive time scale on which the magnetic field reaches its final saturation. Unless indicated otherwise, in all models Rm = 10^3. The two dashed lines in panel a) correspond to CVC = −0.002 for Rm = 10^3 and Rm = 10^4. The main result of this section is that the VC flux does not alleviate catastrophic quenching of the dynamo for large values of Rm (see the dashed lines in Fig. 11a and c). The reason for this may be related to the fact that the radial flux has components that are proportional either to Bθ or to Bφ (equation 9). As Bφ vanishes on the top boundary, and Bθ is small, the VC flux is not able to dispose of αM across the outer boundary. CONCLUSIONS We have developed αΩ dynamo models in spherical geometry with relatively simple profiles of αK and shear (∂Ω/∂r and ∂Ω/∂θ). We choose potential-field (also vertical-field in some cases) and perfect-conductor boundary conditions for the top and bottom boundaries, respectively. We estimate the critical dynamo number by fixing CΩ = −10^4 and varying Cα while using algebraic quenching. Using a dynamo number, CΩCα, that is slightly supercritical, we solve the induction equations for B and A together with an equation for the dynamical evolution of the magnetic α effect, αM. We find that for positive (negative) values of Cα in the northern (southern) hemisphere, αM is mainly negative (positive), with narrow fractions of opposite sign in regions where αK or B are equal to zero. We find that the kinematic phase is independent of Rm. However, for Rm > 10^2 there exists a post-saturation relaxation phase in which the averaged magnetic field oscillates about a certain mean. The larger the Rm, the more pronounced are the damped oscillations and the longer is the relaxation time (Fig. 4). The final value of the magnetic energy obeys an Rm^-1 dependence (Rm^-0.5 for the magnetic field, Fig. 7), which is in agreement with earlier work (Brandenburg & Subramanian 2005b; Brandenburg, Candelaresi & Chatterjee 2009). We argue that including equation (6) in MFD models is appropriate for describing the quenching of the magnetic field in the dynamo process. Since we observe large-scale magnetic fields at high magnetic Reynolds numbers in astrophysical objects, there must exist a mechanism to prevent the magnetic field from catastrophic quenching. We have studied the role that diffusive and VC fluxes may play in this sense.
Their contribution may be summarized as follows: (i) In the presence of diffusive fluxes, αM has only one sign in each hemisphere (negative in the northern hemisphere and positive in the southern) and is evenly distributed across the dynamo region (Fig. 9). (ii) For Rm < 10^2 the mean values of αM are similar to models without diffusive fluxes, whereas for Rm ≳ 10^2, αM has smaller values that seem to be independent of Rm (see Fig. 7, middle). (iii) Even a very low diffusion coefficient, e.g. κα = 0.001ηt, causes Brms to depart from the Rm^-0.5 tendency and converge to a constant value, which is then around 5% of the equipartition value for large values of Rm, but below the value of 10^7 used in this study (dashed line in Fig. 7, top). (iv) Larger values of κα result in larger final field strengths. (v) In models with only radial shear the Vishniac-Cho flux contributes to αM with a component that travels in the same direction as the dynamo wave. This produces a different radial and latitudinal distribution of the magnetic α effect that also affects the distribution of the magnetic fields. However, it does not help in alleviating the quenching at high Rm. On the contrary, the larger the coefficient CVC, the smaller is the resultant magnetic field. (vi) In models with only latitudinal shear the VC flux travels radially outward but remains concentrated at the center of the dynamo region. In a given hemisphere the resultant distribution of αM has both positive and negative signs. The part of αM that has the same sign as αK enhances the dynamo action. This effect is more evident in models with vertical-field boundary conditions. (vii) In models with vacuum and vertical-field boundary conditions and Rm = 10^3, the VC flux increases the final value of the magnetic field by a factor of two compared to the case without any fluxes. (viii) The magnetic field in models with Rm ≳ 10^4 and with non-zero VC flux decays after the kinematic phase, since the total α effect becomes subcritical (see dashed lines in Fig. 11a and c). (ix) Larger values of CVC produce narrow bands of αM which drive intense dynamo action in these regions. This positive feedback between the magnetic field and αM causes the simulation to become numerically unstable in the absence of any other quenching effect. From the above results it is clear that diffusive fluxes are much more important in alleviating catastrophic quenching than the Vishniac & Cho fluxes (in the form of equation 8) over a large range of Rm. This is somewhat intriguing, since it is known from DNS that shear in domains with open boundaries does indeed help in alleviating the catastrophic quenching. It may be understood as a result of the large value of CΩ compared with Cα and also of the top boundary condition for the azimuthal magnetic field (Brandenburg 2005; Käpylä, Korpi and Brandenburg 2008). The results presented above indicate that considerable work is still necessary in order to understand the role of larger-scale shear in transporting and shedding small-scale magnetic helicity from the domain. In snapshots of the meridional plane as well as in butterfly diagrams we notice that the diffusive fluxes do not significantly modify the morphology and the distribution of the magnetic field when compared with cases without fluxes or even with simulations with algebraic α quenching. On the other hand, for models with the VC flux the distribution of αM becomes different and so does the magnetic field. This is clear from the butterfly diagram shown in Fig.
12b, which exhibits a magnetic field confined to equatorial latitudes, reminiscent of the observed butterfly diagram of the solar cycle. Even though this result corresponds to a simplified model, it illustrates the importance of considering the dynamical α quenching mechanism when modeling the solar dynamo. Similar changes in the distribution of αM and B are expected to happen when advection terms are included in the governing equations. In the simulations presented here, the Ω and α effects are present in the same layers. An interesting question is whether the quenching of the dynamo is catastrophic when the two layers are segregated, as in Parker's interface dynamo or in flux-transport dynamo models. We address this question in detail in two companion papers. We note that the back reaction of the magnetic field affects not only the α effect, but also the other dynamo coefficients, including the turbulent diffusivity. Contrary to the quenching of α, the quenching of ηt may be considered through an algebraic quenching function (see e.g. Yousef, Brandenburg & Rüdiger 2003; Käpylä & Brandenburg 2009). Guerrero, Dikpati & de Gouveia Dal Pino (2009) have shown that in a flux-transport model these effects could alter properties of the models such as the final magnetic field strength and its distribution in radius and latitude. We leave the study of models with simultaneous dynamical α and η quenching for a future paper. Solar-like profiles of differential rotation and meridional circulation, along with dynamical α quenching, will also be considered in a forthcoming paper. ACKNOWLEDGMENTS This work started during the NORDITA program on solar and stellar dynamos and cycles and is supported by the European Research Council under the AstroDyn research project 227952.
9,692.2
2010-05-26T00:00:00.000
[ "Physics" ]
Saturated Feedback Control to Improve Ride Comfort for Uncertain Nonlinear Macpherson Active Suspension System With Input Delay A robust saturation control approach is developed for input-time-delay Macpherson active suspensions subject to dynamical uncertainties, exogenous disturbances, and road excitations. The proposed control method comprises a linear combination of two smooth saturation functions of a filtered signal and a regulation error; hence the control law is smooth and bounded by a known and adjustable constant bound. An auxiliary signal involving a finite integral over the delayed time interval of past control values is exploited to convert the delayed system into a delay-free system, and Lyapunov-Krasovskii (LK) functionals are constructed to eliminate the residual delayed terms in a Lyapunov-based analysis. The vertical displacement and velocity of the sprung mass are proven to be uniformly ultimately bounded in regulation, improving the ride comfort despite model uncertainties, additive disturbances and the input delay. Several simulations are performed to verify the improvement in ride comfort under different road profiles, while the tire deflection and suspension deflection remain within admissible limits in comparison with two other suspensions. I. INTRODUCTION Due to its important role in vehicle performance, vehicle suspension control has been an active research subject in the literature. Ride comfort, road holding and suspension deflection are three critical performance requirements for controlling vehicle suspensions. However, these criteria are usually conflicting, so a compromise among them must be attained. Control design for active suspension is a promising technical approach to enhance ride comfort while holding the suspension deflection and tire deflection at an admissible level [1]. Various control techniques for active suspensions aimed at promoting ride comfort have been presented in the literature. For example, a non-fragile H∞ output-feedback control in [2] was constructed to increase the ride comfort within the frequency span of interest and also to assure hard requirements in the time domain. However, the optimal control approach is probably not an appropriate solution for vehicle suspensions involving dynamics uncertainties and exogenous disturbances, because in the common H∞ control methodology for vehicle suspensions all constraints are weighted and formed into a unique cost function to be minimized to acquire an optimal control gain [3], [4]; thus this control approach is developed based on a linearized approximation of the suspension dynamics and requires extensive knowledge of the system dynamics, which is sometimes impossible to satisfy in practice. The nonlinear nature of both the kinematic and dynamic behaviors of the Macpherson suspension was indicated in [5]-[11]; hence the application of control methods constructed on the linearized dynamics approximation to Macpherson active suspensions will degrade control performance. Several robust nonlinear control strategies for active suspension systems have been introduced recently. For example, the method in [12] presented a robust nonlinear suspension control system developed using a combination of fuzzy logic, neural network control, and sliding mode control (SMC) methodologies; Chen et al.
introduced an improved SMC for nonlinear active suspensions to achieve nominal optimal performance and better robustness in [13]; Taghavifar et al. presented an adaptive-SMC-based indirect fuzzy neural network system for a nonlinear suspension subject to uncertain parameters and road excitations in [14]; and Wang et al. developed an active disturbance rejection control combined with a fuzzy SMC to improve the ride comfort of full-car suspension systems, with unmodeled dynamics and external disturbances estimated by an extended state observer, in [15]. However, these approaches are discontinuous feedback methods with infinite control bandwidth and chattering limitations. Besides, the time-delay and amplitude-limitation issues of actuators have not been investigated thoroughly in the control of uncertain nonlinear active suspensions. Time delay is inevitable and, unfortunately, a source of instability and degraded system performance. Time delay in practical systems can arise for many reasons; for instance, the control torque created by an internal combustion engine can be delayed by fuel-air mixing, ignition delays, or cylinder pressure force propagation, and communication delays exist in remote control applications (such as master-slave teleoperation of robots and haptic systems), where time is unavoidably required to feed back the control information. Hence, time delay, particularly in actuators, is an important issue that needs careful consideration in the active control of vehicle suspension systems. There are some results on active suspension control with actuator input delay, such as [1], [16], [17]. However, these control strategies were developed from the conventional suspension model with only parametric uncertainties; without allowing for dynamic uncertainties and/or exogenous disturbances, the suspension dynamics must be linearized. Moreover, since the control inputs are functions of the system states, large initial conditions and/or unmodeled disturbances may drive the controller beyond physical limitations. In particular, for systems with input delays, control errors can accumulate over the delay interval, leading to large actuator demands and aggravating potential actuator saturation problems [18]. Because of the resulting control performance degradation and potential risk of control failure, control schemes for active suspension systems that ensure performance within the actuator limitations are well motivated. A saturated adaptive robust control strategy was introduced to address the control problem of uncertain active suspension systems with saturated inputs in [19]. However, to the best of the authors' knowledge, a control method for uncertain Macpherson active suspensions that considers the saturation limit, input delay, dynamic uncertainties, and external disturbances simultaneously is still an open problem. The problem of H∞ state-feedback control for semi-active seat suspension systems with both time-varying input delay and actuator saturation was addressed in [31]; however, that method was developed for conventional suspension systems with parametric uncertainties only, and a set of linear matrix inequalities must be solved approximately by numerical methods. Recently, Dinh et al.
presented a robust saturated RISE feedback control for uncertain nonlinear Macpherson active suspension systems in [29]; that saturated controller is developed from the nonlinear dynamics model of the Macpherson suspension but does not consider the input-delay issue. The contribution of this paper is that both the time-delay and actuator-saturation issues of the control input are taken into account for the nonlinear Macpherson suspension without transforming the system via a linearization step, while the saturated control design can predict and compensate for known input delays in active suspensions with nonlinear uncertainties and exogenous disturbances. The technical challenges are to develop the stability analysis for the underactuated system so as to obtain a delay-free control input, and to handle the remaining delayed cross terms. A predictor term containing a finite integral of past control values over the delay interval is utilized to inject a delay-free control input into the stability analysis, and LK functionals are exploited in the design and stability analysis. The continuous saturated controller is developed with a control bound that is known and adjustable by changing the feedback gains. The control objective is to achieve uniformly ultimately bounded regulation of the vertical displacement and velocity of the sprung mass to improve ride comfort, which is proven by Lyapunov stability analysis. The performance of the proposed control method is examined by numerical simulations against two other suspensions, showing improved ride comfort while the suspension deflection and tire deflection stay within an acceptable level. II. SYSTEM MODEL AND OBJECTIVES Different suspension dynamic models have been considered for analyzing suspension oscillating behavior. Dynamic models of the Macpherson suspension system were discussed in [5]-[11]. In the following development, the nonlinear dynamics of the active Macpherson suspension subject to control input delay are expressed via a state-space representation, with the vertical displacement z_s of the sprung mass and the rotation angle θ of the control arm chosen as the generalized coordinates [29]; the reader is referred to Fig. 1 in [29] for the model figure of a quarter-car Macpherson suspension. In (1), x = [x_1, x_2, x_3, x_4]^T ≜ [z_s, ż_s, θ, θ̇]^T denotes the state vector with a finite initial condition x(0) = x_0, z_r ∈ R is the road excitation, and u(t − τ) ∈ R represents the delayed active control force, where τ ∈ R_+ is a known constant time delay. Throughout this paper, R denotes the set of real numbers, R_+ the set of strictly positive real numbers, and R^n the n-dimensional Euclidean space; a time-dependent delayed function is denoted by ζ(t − τ) or ζ_τ. The unknown functions in (2) are rewritten according to the Macpherson suspension dynamics in [5] (see [29] for details). The suspension dynamics in (1) are expanded to include an unknown nonlinear disturbance d ∈ R^4, which captures unmodeled effects relative to the system model in [5]. The subsequent control development exploits the following assumptions about the suspension dynamics in (1). Assumption 1: The exogenous disturbance d and the road profile z_r are bounded by known constants (i.e., d, z_r ∈ L_∞).
Assumption 2: The control input u(t), its past values (i.e., u(t − ς) for all ς ∈ [0, τ]), and the vertical displacement and velocity z_s, ż_s of the sprung mass are available for feedback in the subsequent development. The use of vertical displacement and velocity feedback is typical in suspension control design and can be fulfilled by an accurate integration algorithm or filter design such as that introduced in [11]. Moreover, the assumption of past input measurement is also very common in control design for input-delayed systems, as in [18], [20], [21]. Remark 1: The technical control development in this paper utilizes several properties of the hyperbolic tangent function, holding for all ξ ∈ R [22], [23]. The essential objective of the active suspension design is to expeditiously regulate the vertical car-body displacement for ride comfort. The contribution of the control method in this paper is the construction of an amplitude-limited, continuous controller that ensures uniformly ultimately bounded regulation of the vertical displacement and velocity z_s, ż_s of the chassis, despite unmatched uncertainties, nonlinear disturbances, and the delay of the control input. To quantify the state-regulation objective, a regulation error e ∈ R and a filtered regulation error r ∈ R are defined in (4) and (5), where e_f ∈ R is an auxiliary signal with dynamics ė_f, and k, α, γ ∈ R denote constant positive control gains. Based on Assumption 2, the regulation errors e, r are measurable, and the measurable term e_z ∈ R is subsequently defined. The motivation for forming the regulation errors and auxiliary signals is the need to add and subtract intermediate terms in the stability analysis. Assumption 3: It is assumed for the system in (1) that if the control force is limited below an a priori bound (i.e., |u| ≤ ū, where ū is a known positive constant), the unmeasurable states x_u(t) = [θ, θ̇]^T in (1) are bounded in terms of the measurable regulation errors, where z(t) ∈ R^4 is the measurable composite error vector and c_1, c_2 ∈ R are known nonnegative bounding constants. This assumption is equivalent to assuming that the rotation of the control arm stays within a stable limit if the controller is saturated at an a priori bound. The aim of this active suspension control design is to guarantee system stability under the control input delay and to improve ride comfort, while the suspension deflection and tire deflection are kept within an acceptable level by the saturated control input to ensure car safety. III. CONTROL DEVELOPMENT Utilizing the regulation error in (4) and substituting the filtered error in (5) yield the open-loop dynamics, where C ≜ [α, 1, 0, 0] is a known constant vector, and e_z(t), introduced in (4), is defined as the finite integral of the past control inputs over the delay interval [t − τ, t] based on the Leibniz-Newton formula, as in (9). In (9), the term υ(t) ∈ R is a fixed proportion of the control input u(t), which is designed through the loop of control development and stability analysis in (10), where a constant feedforward estimate Ω̂ ∈ R is defined as Ω̂ ≜ CĜ, and the constant estimate Ĝ ∈ R^4 is an invertible best-guess vector of the uncertain input matrix G. A feature of the controller in (10) worth emphasizing is that the control law is upper bounded in terms of the adjustable control gain k, as |u| ≤ Ω̂^{-1}√(k + 2) ≤ ū.
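The delay-compensating term e_z(t), defined above as the finite integral of past values of υ over [t − τ, t], can be computed numerically from a buffer of stored control values. The following is a minimal sketch, assuming a fixed sampling step dt; the class and variable names are illustrative rather than taken from the paper.

```python
from collections import deque

class DelayCompensator:
    """Buffer of past control values for computing
    e_z(t) = integral over [t - tau, t] of v(s) ds  (left Riemann sum)."""

    def __init__(self, tau, dt):
        self.dt = dt
        self.n = max(1, int(round(tau / dt)))   # samples spanning the delay window
        self.buf = deque([0.0] * self.n, maxlen=self.n)

    def update(self, v_now):
        """Store the newest value of v and return (e_z, v(t - tau))."""
        v_delayed = self.buf[0]                 # oldest stored sample, i.e. v(t - tau)
        self.buf.append(v_now)                  # maxlen drops the oldest automatically
        e_z = sum(self.buf) * self.dt           # finite integral over the delay window
        return e_z, v_delayed

comp = DelayCompensator(tau=0.2, dt=0.001)      # e.g. the 200 ms delay case below
e_z, v_tau = comp.update(0.0)                   # call once per control step
```

Storing the control history this way is what makes the predictor implementable in practice: only the last τ/dt samples ever need to be kept.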
Hence, the control amplitude can be limited below a known bound. To facilitate the subsequent development, the auxiliary function Ω̃(x_3) is separated into components as in (11), where the unknown function Ω(x_3) ∈ R is defined from the suspension dynamics. Assumption 4: The subsequent development is based on the assumption that the constant estimate Ω̂ introduced in (10) can be selected such that the estimation mismatch is bounded, where ε ∈ R is a known positive constant. To facilitate the subsequent analysis of (1), the time derivative of r can be extracted using (4), (5), (8), and (9). Utilizing (10)-(12), the closed-loop error system is abbreviated in terms of the two auxiliary functions N_d(t), Ñ(x, ω) ∈ R. According to the Macpherson suspension dynamics detailed in [29], the unknown functions g_1(x_3), g_2(x_3), h_1(x_3), h_2(x_3) are upper bounded by known constants, and the functions f_1(x), f_2(x) can be upper bounded by state-dependent terms. Hence, using the suspension dynamics in [29], Assumption 3, the properties of the hyperbolic tangent function in (3), and the error definitions in (4) and (5), the expression in (16) can be bounded from above, where the function ρ(‖z‖) ∈ R is a globally invertible, nondecreasing positive function. Using Assumption 1 and the suspension dynamics in [29], a further bound can be developed, where ζ_i ∈ R, i = 1, 2, are determinable positive constants. In the subsequent stability analysis, D ⊆ R is an open and connected set defined in the analysis, and λ, k_2 are subsequently defined. Let y ∈ R^5 collect the error signals together with the auxiliary Lyapunov-Krasovskii functionals Q, P ∈ R_+, defined following [24], where the constant ω ∈ R_+ is known and positive. IV. STABILITY ANALYSIS Theorem 1: Given the input-delayed nonlinear Macpherson active suspension system in (1), the saturated controller in (10) guarantees semi-globally uniformly ultimately bounded regulation of the vertical displacement of the chassis z_s, provided the adjustable control gains α, γ, k are selected according to the stated sufficient conditions, where the positive constant ψ ∈ R_+ is known and adjustable. Proof: Consider the Lyapunov candidate function V_L : D → R defined as V_L ≜ ln(cosh(e)) + (1/2)r² + (1/2)tanh²(e_f) + Q + P. (24) The Lyapunov function candidate in (24) is a Lipschitz-continuous positive-definite function, which can be bounded using (3). V. SIMULATION RESULTS Several Matlab numerical simulations are executed to examine the controller in (10) for a quarter-car model with the dynamics provided in [29] and the Macpherson suspension dynamic and kinematic parameters indicated in [30, Table 1]. In addition, a friction disturbance d is assumed to act on (1), as d = [0, d_1(t), 0, d_2(t)]^T, where d_1 = 5.3x_2 + 8.45 tanh(x_2) and d_2 = 1.1x_4 + 2.35 tanh(x_4) represent the sum of static and dynamic friction. The performance of the proposed controller, which takes into account both the delay and the amplitude-bound issues of the actuators, is compared via Matlab numerical simulations with a passive suspension system and an active suspension with a PID controller. The evaluation considers both time- and frequency-domain responses with delay values varying from 1 ms to 200 ms, when the wheel is disturbed by two types of road profiles: bump and sinusoidal excitations.
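Both road excitations just mentioned (the bump detailed in the next subsection and the sinusoid used in the frequency sweep) can be generated as follows. This is a sketch assuming SI units, with the vehicle speed converted from mph and the default values taken from the cases described below.

```python
import numpy as np

def bump(t, half_height=0.05, V_mph=50.0, D_r=10.0, t0=0.5):
    """Bump profile z_r = |z_r|*(1 - cos(w_r*(t - t0))) on [t0, t0 + T]."""
    V = V_mph * 0.44704                  # mph -> m/s
    w_r = 2.0 * np.pi * V / D_r          # road excitation frequency (rad/s)
    T = 2.0 * np.pi / w_r                # one excitation period
    z = half_height * (1.0 - np.cos(w_r * (t - t0)))
    return np.where((t >= t0) & (t <= t0 + T), z, 0.0)

def sinusoid(t, amp=0.10, f=4.0):
    """Sinusoidal road excitation z_r = |z_r|*sin(2*pi*f*t)."""
    return amp * np.sin(2.0 * np.pi * f * t)

t = np.linspace(0.0, 3.0, 3001)
z_bump = bump(t)                         # peaks at 2*|z_r| = 10 cm
```

Note that the bump peaks at twice the half height |z_r|, which is why |z_r| = 5 cm corresponds to the 10-cm-high bump in the cases below.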
A. TIME RESPONSE The time-domain simulations consider the vehicle driving at a steady horizontal speed of V = 50 mph, excited by a road bump, z_r = |z_r|[1 − cos(ω_r(t − 0.5))] if 0.5 ≤ t ≤ T + 0.5 and z_r = 0 otherwise, where |z_r| = 5 cm or |z_r| = 7.5 cm is the half bump height, ω_r = 2πV/D_r = 2π/T is the road excitation frequency, and D_r = 10 m is the width of the bump. The active control force is limited as |u_a| ≤ ū = 2500 N, which is also the maximum damping force in [17]. The gains of the proposed controller are selected accordingly. The performance of the proposed active suspension is compared against the corresponding passive suspension and an active suspension controlled by a PID controller, whose gains were tuned to acquire the best possible performance within the given saturation bounds; the proportional gain of the PID controller is K_p = 2400. The evaluation criteria for the suspensions consist of the body displacement and acceleration, expressing ride comfort, and the suspension stroke and tire deflection, expressing road holding, where the suspension stroke y_sd(t) and tire deflection y_td(t) are determined from the suspension geometry. Several simulation results were obtained using various time delays and bump heights. The control performance is examined for short and long input-delay durations. The time responses of the chassis displacement and acceleration of the three simulated suspensions under two driving conditions are depicted in Figures 1 and 2: case 1, a 10-cm-high bump excitation (|z_r| = 5 cm) with a 5 ms input delay (τ = 5 ms); and case 2, a 15-cm-high bump excitation (|z_r| = 7.5 cm) with a 200 ms input delay (τ = 200 ms). Both active suspensions (the proposed method and the PID controller) are superior to the passive one in ride comfort, suppressing the body displacement and acceleration. Between the two controlled suspensions, the active suspension with the saturated controller gives the better ride comfort under both driving conditions. The rotation angle of the control arm, the suspension stroke, and the tire deflection are presented in Figs. 3, 4, and 5, respectively, to evaluate the road-holding criteria, and the control inputs of the two controlled suspensions are depicted in Fig. 6. The results show that the tire deflection with the saturated controller is quite similar to that of the passive suspension and the PID-controlled one. The proposed saturated controller significantly improves ride quality while keeping the rotation angle of the control arm and the deterioration of the suspension deflection and tire deflection within an acceptable level in comparison with the passive suspension and the PID-controlled active one. However, for a more accurate comparison, further investigation in the frequency domain is needed. B. FREQUENCY RESPONSE In the frequency-domain simulations, the vertical acceleration of the vehicle body and the suspension and tire deflection responses to road disturbances with frequencies varying from 2 Hz to 30 Hz are determined; then the variance gains of the corresponding measures of interest are computed using the definition given in [27], [28], where z represents the performance measure of interest: the vertical chassis acceleration z̈_s, the suspension deflection y_sd, or the tire deflection y_td, respectively.
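The variance-gain computation just described can be sketched as follows. Since the exact definition from [27], [28] is not reproduced above, the sketch assumes the common ratio-of-variances form, and the simulate callback standing in for the closed-loop model is hypothetical.

```python
import numpy as np

def variance_gain(z_out, z_in):
    """Variance gain of a performance measure z relative to the road input,
    assuming the ratio-of-variances definition (the exact formula from
    [27], [28] is not reproduced in the text)."""
    return np.var(z_out) / np.var(z_in)

def frequency_sweep(simulate, freqs, amp=0.10, N=15):
    """For each frequency, run z_r = amp*sin(2*pi*f*t) over t in [0, N/f]
    and collect the variance gain of the simulated output."""
    gains = []
    for f in freqs:
        t = np.arange(0.0, N / f, 1e-3)
        z_r = amp * np.sin(2.0 * np.pi * f * t)
        z_out = simulate(t, z_r)       # user-supplied closed-loop simulation
        gains.append(variance_gain(z_out, z_r))
    return np.array(gains)
```

Sweeping f from 2 to 30 Hz with such a helper produces the kind of curves plotted in Figs. 7 to 10.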
Meanwhile, the suspension is subjected to various constant input delays, with τ = 5 ms for the short delay duration and τ = 200 ms for the long delay duration, and the road excitation is selected as the sinusoid z_r = |z_r| sin(ωt) with t ∈ [0, 2πN/ω], where N is an integer large enough for the system to reach steady state (practically, N = 15) and ω = 2πf with f ∈ [2, 30] Hz; |z_r| is selected as 10 cm for the medium amplitude and 15 cm for the high amplitude. For each frequency f and each input-delay case, the corresponding output signals are recorded to calculate the variance gains. The responses of the vertical displacement z_s, the chassis acceleration z̈_s, the suspension deflection y_sd, and the tire deflection y_td to the road disturbance z_r over the frequency range 2 Hz to 30 Hz and for the two input delays, representing the ride comfort, rattle space, and road-holding performances, are shown in Figs. 7, 8, 9, and 10, respectively. It is worth noting that in all simulations the system is stable under both short and long input delays. Moreover, with regard to ride comfort, the transfer functions from the road profile z_r to the body displacement and acceleration under the proposed method have smaller variance gains than the corresponding gains of the remaining suspensions. Comparing the variance-gain graphs in Figs. 7 and 8, the active suspension with the saturated control approach has better ride comfort (i.e., smaller gains) than the passive suspension at all frequencies, and than the PID-controlled active suspension at almost all frequencies, including the human-sensitive range (4 Hz to 8 Hz). The road-holding requirement was not considered directly in the control design but is verified in simulation by comparing the peak values of the transfer functions from the road excitation to the suspension stroke and tire deflection, as depicted in Figs. 9 and 10, respectively. The peak tire deflection of the active suspension with the proposed saturated control is slightly larger than that of the passive one and of the PID-controlled active one for both input-delay cases; a similar comparison holds for the suspension stroke. Hence, comparing the variance gains of the suspension stroke and tire deflection with regard to the rattle-space and road-holding performances, the proposed approach shows equivalent achievement throughout the examined frequency span. FIGURE 8. Variance gain of the transfer function from the sinusoid excitation z_r to the chassis acceleration z̈_s over the frequency range 2 Hz to 30 Hz, subject to a 10-cm-high bump excitation and 5 ms input delay (left) or a 15-cm-high bump excitation and 200 ms input delay (right). FIGURE 9. Variance gain of the transfer function from the sinusoid excitation z_r to the suspension deflection y_sd over the same frequency range and delay cases. FIGURE 10. Variance gain of the transfer function from the sinusoidal excitation z_r to the tire deflection y_td over the same frequency range and delay cases.
In summary, the proposed saturated control approach effectively enhances ride comfort while the suspension deflection and tire deflection are kept at an admissible level to satisfy the rattle-space limit and ensure car safety, even under different delay times. VI. CONCLUSION A continuous saturated controller is constructed for Macpherson active suspension systems with nonlinear uncertainties, additive bounded disturbances, and a constant input delay. The bound on the control input is guaranteed by the hyperbolic tangent functions and can be adjusted by tuning the control gains, which are chosen according to the sufficient gain conditions. A Lyapunov stability analysis exploiting LK functionals proves that the saturated controller guarantees uniformly ultimately bounded regulation of the vertical displacement and velocity of the vehicle body despite dynamic uncertainties, exogenous disturbances, and the input delay. Simulation results exhibit the ride-comfort advantage of the proposed active suspension system. HUYEN T. DINH received the M.Eng. and Ph.D. degrees in mechanical engineering from the Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL, USA, in 2010 and 2012, respectively. She has been an Associate Professor since 2020 and currently serves as the Vice-Dean of the Department of Mechanical Engineering, University of Transport and Communications, Hanoi, Vietnam. Her primary research interest is the development of Lyapunov-based control and its applications to uncertain nonlinear systems. Her current research interests include learning-based control, adaptive control for uncertain nonlinear systems, and control methods for autonomous vehicles.
5,843.6
2020-01-01T00:00:00.000
[ "Engineering", "Mathematics" ]
Evaluation of Agile Suppliers Using Fuzzy MCDM Approach In today's competitive environment, supply chains need to be fast and flexible, i.e., agile. Agility has been proposed as a response to the high levels of complexity and uncertainty in modern markets. It is a business-wide capability that embraces organizational structures, information systems, and logistics processes. This study employs the hierarchical fuzzy MCDM algorithm proposed by Karsak and Ahiska for the evaluation of agile suppliers. The algorithm is based on the proximity-to-the-ideal-solution concept and can address problems containing both crisp and fuzzy data. The application of the decision-making method is illustrated through a case study conducted in a private hospital, and the results are analyzed. Introduction A supply chain is composed of a complex sequence of processing stages, ranging from raw material supply, parts manufacturing, and component and end-product assembly, to the delivery of end products. In the context of supply chain management, the supplier selection decision is considered one of the key issues faced by operations and purchasing managers seeking to remain competitive. Supplier selection and management can be applied to a variety of suppliers throughout a product's life cycle, from initial raw material acquisition to end-of-life service providers; thus, the breadth and diversity of suppliers make the process even more cumbersome. The supplier selection process has several phases, such as problem definition, decision criteria formulation, pre-qualification of potential suppliers, and making a final choice. The quality of the final choice largely depends on the quality of all the steps involved in the selection process. Recently, industries have faced significant changes, especially in market competition, technological innovation, and customer demands. Situations and possibilities vary, and organizations must adapt themselves to these various cases. Moreover, scarce resources are an important restriction faced by companies. Agility is essential to deal with these problems. The term agility can be defined as the ability to respond quickly to changes and unpredictable situations, and it is considered a bearer of competitive advantage in today's business environment. Companies have to be aligned with suppliers, customers, and competitors within the supply chain and should work together to achieve a level of agility. Agile supply chains deliver innovative goods, ordered in small volumes with high margins and short product life cycles, through complex global supply chain networks, applying an equilibrium solution that copes with this complexity through limited coordination and cooperation across the links in the network. Supply chain agility might be measured by how well the supply chain elements, suppliers, customers, and competitors are coordinated and collaborate in the network to enhance the four pivotal objectives of agile manufacturing: customer enrichment ahead of competitors, achieving mass customization at the cost of mass production, mastering change and uncertainty through routinely adaptable structures, and leveraging the impact of people across enterprises through information technology [1].
This study aims to develop a multi-criteria decision-making approach for evaluating agile suppliers. The supplier evaluation process requires the consideration of multiple conflicting criteria, yielding a multi-level hierarchical structure and incorporating vagueness and imprecision with the involvement of a group of experts. Fuzzy set theory is one of the effective tools for dealing with uncertainty and vagueness in criteria values. The objective of this study is to propose a hierarchical fuzzy multi-criteria group decision-making methodology for agile supplier evaluation. In the literature, there are a few papers that evaluate agile suppliers. Wu et al. [2] proposed a two-stage approach, based on the analytic network process (ANP) and a mixed-integer multi-objective programming model, to solve the partner selection problem in agile supply chains. Wu and Barnes [3] presented a four-phase dynamic feedback model for supply partner selection in agile supply chains. Alminardi et al. [4] combined SWARA and VIKOR for supplier selection in an agile environment. Abdollahi et al. [5] integrated the analytic network process (ANP), data envelopment analysis (DEA), and the DEMATEL method for evaluating suppliers against lean and agile criteria. Lee et al. [6] employed fuzzy AHP and fuzzy TOPSIS to select suppliers. Beikkhakhian et al. [7] identified criteria to evaluate agile suppliers and then used fuzzy TOPSIS to rank alternative suppliers. The rest of the paper is organized as follows. Section 2 analyzes the fuzzy decision-making methodology employed in this study for the evaluation of agile suppliers. Section 3 presents the application of the proposed model. Finally, conclusions are provided in Section 4. Hierarchical fuzzy MCDM approach Real-world decision problems such as supplier selection often involve the consideration of numerous performance attributes. When a large number of performance attributes are to be considered in the evaluation process, it may be preferable to structure them in a multi-level hierarchy in order to conduct a more effective analysis. In this study, the hierarchical distance-based fuzzy MCDM algorithm introduced by Karsak and Ahiska [8] is employed for evaluating agile suppliers. This MCDM algorithm is based on the proximity-to-the-ideal-solution concept and can address problems containing both crisp and fuzzy data. The origins of the proposed decision-making procedure are found in the multi-criteria decision tool named TOPSIS [9]. The proposed fuzzy MCDM approach can be described as follows: Step 1. Construct the decision matrix that contains the fuzzy assessments corresponding to qualitative sub-criteria and the crisp values corresponding to quantitative sub-criteria for the considered alternatives. Step 2. Normalize the crisp data to obtain unit-free and comparable sub-criteria values. The normalized values for crisp data regarding both benefit-related and cost-related quantitative sub-criteria are calculated via a linear scale transformation, where y^c_ijk denotes the normalized value of y_ijk, the crisp value assigned to alternative i with respect to sub-criterion k of criterion j; m is the number of alternatives, n is the number of criteria, CB_j is the set of benefit-related crisp sub-criteria of criterion j, and CC_j is the set of cost-related crisp sub-criteria of criterion j.
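The linear scale transformation of Step 2 is not displayed above, and neither is the proximity index used in Steps 7 and 8 below. The following sketch assumes the standard forms, y/y_max for benefit-related and y_min/y for cost-related sub-criteria, and the TOPSIS-style proximity P_i = d_i^- / (d_i^+ + d_i^-); the numbers are illustrative.

```python
import numpy as np

def normalize_crisp(y, benefit=True):
    """Linear scale transformation of one crisp sub-criterion column
    (assumed form: y / y_max for benefit, y_min / y for cost)."""
    y = np.asarray(y, dtype=float)
    return y / y.max() if benefit else y.min() / y

def proximity(d_ideal, d_anti):
    """Proximity to the ideal solution (Step 7): larger when an alternative
    is simultaneously close to the ideal and far from the anti-ideal."""
    return d_anti / (d_ideal + d_anti)

unit_price = normalize_crisp([120.0, 95.0, 150.0, 110.0], benefit=False)
P = proximity(np.array([0.30, 0.55, 0.20, 0.40]),    # weighted distances to ideal
              np.array([0.70, 0.40, 0.85, 0.60]))    # weighted distances to anti-ideal
ranking = np.argsort(-P)        # Step 8: rank alternatives by descending P_i
```

Using both distances in the denominator is what distinguishes the approach from simply picking the alternative nearest the ideal, as the conclusions below emphasize.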
Step 3. Aggregate the performance ratings of the alternatives from the sub-criteria level to the criteria level, where x̃_ij represents the aggregate performance rating of alternative i with respect to criterion j, w̃_jk indicates the average importance weight assigned to sub-criterion k of criterion j, and ⊗ is the fuzzy multiplication operator. Step 4. Normalize the aggregate performance ratings at the criteria level using a linear normalization procedure, which makes the best value equal to 1 and the worst equal to 0. Step 7. Calculate the proximity of the alternatives to the ideal solution, P_i, by considering the distances from the ideal and anti-ideal solutions. Step 8. Rank the alternatives according to the P_i values in descending order, and identify the alternative with the highest P_i* as the best alternative. Case study In order to illustrate the application of the proposed decision-making method, a case study conducted in a private hospital on the Asian side of Istanbul is presented. Benefiting from the literature, management capability, manufacturing capability, collaboration capability, and agility, together with their related sub-criteria, are identified as the selection attributes (Table 1). The expert used the linguistic variables given in Figure 1 to evaluate the importance of the criteria and sub-criteria, as well as the ratings of the alternatives with respect to the various subjective criteria and sub-criteria. Four suppliers are in contact with the hospital. The evaluations are represented in Tables 2, 3, and 4. Sub-criteria values are aggregated to the criteria level using equation (2) and are represented in Table 5. Conclusions In this study, the hierarchical fuzzy MCDM algorithm proposed by Karsak and Ahiska [8] has been employed for the evaluation of agile suppliers. In classical MCDM methods, the ratings and weights of the criteria are assumed to be known precisely; in general, crisp data are inadequate to model real-life situations. Besides its capability of considering numerous attributes structured in a multi-level hierarchy, the proposed decision framework enables the decision-makers to use linguistic terms. Considering that an alternative with the shortest distance from the ideal alternative may not have the farthest distance from the anti-ideal, Karsak and Ahiska's decision algorithm takes into account the weighted distances from both the ideal and the anti-ideal simultaneously. Furthermore, Karsak and Ahiska's approach does not require the use of fuzzy number ranking methods, which may yield different results depending on the ranking method selected. Further research might focus on extensions of the proposed methodology that employ both subjective and objective weight assessments of the criteria and related sub-criteria. Table 1. Criteria and related sub-criteria. Table 2. Importance weights of criteria. Table 3. Importance weights of sub-criteria. Tables 4-6. Data related to the agile supplier evaluation problem.
2,042.6
2016-01-01T00:00:00.000
[ "Business", "Engineering", "Computer Science" ]
A genetic algorithm-based job scheduling model for big data analytics Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework; it implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. Existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes much energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy. Introduction Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop [1] is the most mature open-source big data analytics framework, implementing the MapReduce programming model [2] proposed by Google in 2004 to process big data. Scalability is the most important feature of Hadoop, mainly because compute nodes can easily be added to the original cluster to analyze big data. The performance of a big data analytics application is related to the characteristics of the jobs and the configuration of the clusters, both of which have a direct impact on performance. When multiple jobs need to be executed with diverse cluster configurations, the solution space of job scheduling is huge, and manual job scheduling is inefficient and can hardly achieve the best performance. Genetic algorithms (GAs) [3] are used to obtain optimized solutions from a number of candidates. GAs are inspired by evolutionary theory: weak and unfit species face extinction by natural selection, while the strong ones have a greater opportunity to pass their genes to future generations via reproduction [4]. Compared with other classic optimization methods, GAs have specific advantages in terms of broad applicability, ease of use, and global perspective [5]. GAs are particularly useful for single-objective and multi-objective optimization problems [6], driving one or more objectives toward the optimum. The contribution of this work is mainly twofold. First, we propose an estimation module to predict the performance of Hadoop clusters when executing different big data analytics jobs, which can be used by GAs. Then, with the effective information the estimation module provides, we present a genetic algorithm-based job scheduling model for geo-distributed data. We evaluate the proposed solution using data centers and cluster nodes from the Amazon EC2 [7] platform. The experimental results show that the proposed job scheduling model is effective and accurate. The remainder of the paper is organized as follows: Section 2 describes the five basic stages of MapReduce data processing that are utilized in the calculations of the estimation module. Section 3 presents the genetic algorithm-based job scheduling model.
Section 4 details the performance estimation module, which the algorithm uses to calculate the time objective. Section 5 implements and evaluates the genetic algorithm. Section 6 covers related work. Section 7 concludes the paper. Related work There have been numerous works devoted to Hadoop performance prediction. Berlińska and Drozdowski [8] propose a mathematical model of MapReduce and analyze MapReduce distributed computations as a divisible load scheduling problem; however, they do not consider the system constraints. There is also some work on optimizing MapReduce [9, 10]. Zaharia et al. [10] proposed a prediction model for the sub-tasks of a Hadoop job, rather than the entire job. Xu et al. [11] extracted characteristic values related to Hadoop performance and utilized machine learning methods to find the optimal value, without building performance models. Han et al. [12] proposed a Hadoop performance prediction model; however, it does not consider the data preparation phase addressed in this work. There are also some GA-based approaches to job scheduling. Krishan Veer and Zahid [13] present the design and analysis of a GA-based scheduling strategy that schedules jobs with the objective of minimizing turnaround time, although the evaluation is simplified due to its limitations. In [14], the application of meta-heuristics to cloud task scheduling on Hadoop is investigated: a scheduling algorithm using execution time, order of task arrival, and data location (i.e., assigning a task to the node that contains the required data) determines the best execution schedule. However, the performance prediction model is opaque, and cost is not taken into consideration. Background Hadoop consists of the MapReduce algorithm and the Hadoop Distributed File System (HDFS) [15]. The Hadoop workflow includes five phases, as illustrated in Fig. 1. • Prepare. In this phase, the source data on the local disk is uploaded to HDFS. According to the predefined partition size, the source data are first segmented into blocks, and copies are then stored on data nodes in a pipelined way according to network topology distance. This paper utilizes the performance module to predict Hadoop data processing performance across the abovementioned five phases. After the time-consumption characteristics have been predicted, we can infer the cost of big data processing jobs, which depends on those characteristics. Overall design of the GA-based approach The workflow of the GA-based decision-making for job scheduling is shown in Fig. 2, and the overall decision-making process is as follows. First, the estimation module is used to model the clusters and jobs. Then, simulations are conducted to collect job execution information, such as the time and cost arrays that record the time and cost taken by each job on each cluster. After that, the time and cost information is used in a framework where GAs produce optimized solutions for the job scheduling schema. For big data, we choose a particular cluster deployment to process the data. However, different processing jobs do not always achieve the same performance with the same cluster configuration, owing to job characteristics, and one job cannot obtain consistent results across different clusters, owing to cluster or Hadoop characteristics.
When multiple jobs are simultaneously assigned to process data in one data center, there are many optional cluster configurations for each job, which gives rise to a large number of job scheduling schemas. Choosing the schema with the best performance among them is another major issue addressed in this paper. In order to obtain optimized solutions in the job scheduling decision, we use the genetic algorithm to choose solutions with minimal finishing time and cost; these are the objectives of the job scheduling problem. Fig. 1 Traditional MapReduce workflow. We give a general mathematical description of the problem and then create the corresponding objective optimization model based on genetic algorithms. Take data processing in one data center as an example, and assume that the data center has N optional clusters and M jobs to be executed. We use an integer vector to represent the ultimate job scheduling scheme: S_i indicates that job i is assigned to cluster S_i, where S_i ∈ [0, N). Our aim is to obtain, by genetic algorithm, the job scheduling scheme with the shortest overall execution time; the overall execution time refers to the time consumed when all jobs complete execution. Here, we utilize Eqs. (18) and (19) to represent, respectively, the execution time and cost of all jobs assigned to one cluster, which are the objectives considered in the genetic algorithm. To facilitate the calculation of the overall execution time of a job scheduling scheme, we add E(i, j) to indicate whether job i is assigned to cluster j. The symbols in the two equations are described as follows: 1) i represents the sequence number of a job, with scope [0, M). 2) j ∈ [0, N) represents the sequence number of a cluster. 3) ST_ij in Eq. (18) is the time job i consumes when running on cluster j; its calculation is performed by the performance estimation module in the next section. 4) Eq. (19) calculates the cost of running a job on cluster j, in which C_ij represents the cost of running one node per second and N_clus is the number of nodes in cluster j. 5) E(i, j) = 1 when job i is assigned to cluster j, and 0 otherwise. A chromosome corresponds to a unique solution in the solution space. GAs can typically use Booleans, real numbers, or integers to encode a chromosome [9]. The chromosome representation in our case uses integers (starting from 0), sharing the idea of [10]. That is, we use an integer vector V = (v_1, ..., v_p), where p is the number of decision variables (in our case 12, the number of jobs), to represent a solution; each component is a natural number acting as a pointer to the sequence number of the cluster to which the corresponding job is assigned. For example, the chromosome [0, 6, 2, 8, 2, 4, 5, 9, 6, 1, 3, 7] represents a solution that chooses the cluster with sequence number 0 to execute job 0, cluster 6 for job 1, cluster 8 for job 3, and so on. Based on the chosen allocation strategy, the GA then decides its fitness using the objective Eqs. (1) and (2) introduced above. Assumption To apply the genetic algorithm to the job scheduling problem, we need to input the source data of the two objectives: the time and cost arrays. In this paper, we propose a performance estimation module that can predict the execution time and cost of data processing jobs according to their different characteristics.
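The encoding and the two objectives can be made concrete in a small sketch that assumes jobs assigned to the same cluster execute in sequence while clusters run in parallel, with the time and cost arrays supplied by the estimation module; all names are illustrative.

```python
import random

M_JOBS, N_CLUSTERS = 12, 10

def fitness(chrom, time_arr, cost_arr):
    """chrom[i] is the cluster assigned to job i. Returns the two objectives
    (overall completion time, total cost), assuming jobs on the same cluster
    run in sequence and clusters run in parallel."""
    cluster_time = [0.0] * N_CLUSTERS
    total_cost = 0.0
    for job, clus in enumerate(chrom):
        cluster_time[clus] += time_arr[job][clus]   # ST_ij summed per cluster
        total_cost += cost_arr[job][clus]
    return max(cluster_time), total_cost            # makespan and cost

chrom = [random.randrange(N_CLUSTERS) for _ in range(M_JOBS)]   # random individual
```

A multi-objective GA evolves a population of such vectors and keeps the non-dominated (time, cost) pairs.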
The paper makes the following simplifying assumptions: • Regarding the reduce wave, we follow the recommendations in [16] and assume that the maximum number of reduces that can be executed simultaneously is 1. • We do not support speculative execution; that is, we do not duplicate map or reduce executions and select the faster one as the final result while killing the slower, since this has proved to contribute little to improving the overall execution time. Total execution time overview In this paper, performance-related parameters are divided into four categories: cluster, Hadoop & HDFS, application, and module-derived. The symbols and explanations of all parameters are listed in Table 1. The overall data processing time contains two parts: the preparation time of the source data and the time to execute the data processing job. Eq. (3) shows the overall time to process data by the clusters; the overall time for data processing in each data center is calculated, and the maximum of them is taken as the final result: T_total = max(T_prepare + T_job). Prepare time In this phase, we upload the data, including the replicas, from the local disks distributed over multiple data centers to their own HDFS (Eq. (4)), where the bandwidth between nodes in the local cluster is B_ii. 2. Job time (a) Map time The map-phase execution time can be calculated by Eq. (5): the average processing time of an input data block in each wave, multiplied by the corresponding number of waves, gives the map time of the i-th data center. The average map throughput is obtained from the average processing time of running the job with input data of a given block size. The number of waves in each data center is calculated by Eq. (6), and the number of data blocks by Eq. (7). (b) Copy time This stage copies the output data of the map to the reduce. Since we only consider the case of one local cluster, or different clusters each in their own data center, no data is transferred to remote sites, and we need not consider cross-cluster bandwidth between data centers. When output data is copied to the reduce, multiple threads often copy data to the reduce at the same time. The theoretical maximum copy speed is the sum over all threads, but in reality the copy speed is also limited by the local network bandwidth. Therefore, the actual copy speed of one thread is calculated as in Eq. (8), and the entire local copy speed is Eq. (8) multiplied by the number of threads (Eq. (9)). This paper assumes that all nodes of a cluster are in the same subnet; thus, only the local copy speed and the map output data size affect the copy time (Eq. (10)). (c) Sort time The time estimate for the sort stage is independent of the network, as shown in Eq. (11), with the intermediate term calculated as in Eq. (12). The output of the sort is the input of the reduce phase. During the reduce phase, data processing and writing to HDFS proceed simultaneously; the corresponding time is given by Eq. (13). Calculating the processing time of one reduce and multiplying it by the number of reduces gives the overall reduce processing time (Eq. (14)). In this paper, it is assumed that there is only one reduce wave.
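The prepare- and map-time estimates (Eqs. (4) to (7)) reduce to a few lines under the stated assumptions. The sketch below assumes the number of waves is the block count divided by the number of concurrent map slots, rounded up; all parameter names are illustrative.

```python
import math

def prepare_time(data_size, replicas, bandwidth):
    """Upload of the source data and its replicas to HDFS, limited by the
    in-cluster bandwidth B_ii (a simplification of Eq. (4))."""
    return data_size * replicas / bandwidth

def map_time(data_size, block_size, map_slots, avg_block_time):
    """Map-phase time: block count (Eq. (7)) grouped into waves of
    map_slots concurrent tasks (Eq. (6)), each wave taking the average
    per-block processing time (Eq. (5))."""
    n_blocks = math.ceil(data_size / block_size)
    n_waves = math.ceil(n_blocks / map_slots)
    return n_waves * avg_block_time

# e.g. 64 GB of input, 128 MB blocks, 20 concurrent map slots, 45 s per block:
t = prepare_time(64e9, 3, 282.4e6) + map_time(64e9, 128e6, 20, 45.0)
```

The same pattern (divide data into units, group units into concurrent waves) underlies the copy, sort, and reduce estimates as well.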
The circumstances we consider are transferring all source data to one local data center to build a cluster, or setting up clusters simultaneously in the respective data centers without transferring source data; therefore, the source data differ between these two circumstances. The replication factor is set to 3 in HDFS, including the original data set. In this paper, no remote cluster exists and all nodes are in the same subnet, so transferring a copy to a remote cluster consumes no time. One copy is written to the local hard disk, while the remaining two copies are stored on other nodes in the local cluster; this process is limited by the local bandwidth. We choose the maximum of the local disk writing time and the local cluster writing time as the reduce output writing time (Eq. (15)). The local disk writing time of a copy equals the average amount of data written to the local disk on each node divided by the disk write speed (Eq. (16)); the disk write speed is obtained with Bonnie++ [17], a disk I/O performance testing tool. Because one reduce is assumed to produce a single output file whose two copies are written to two other nodes in the local cluster, the copy speed is limited by the bandwidth of the subnet to which the cluster belongs. We utilize Eq. (17) to estimate the minimum store speed of each reduce data copy in the local cluster; the reduce output data size divided by Eq. (17) gives the maximum time for the data copies to be stored in the local cluster (Eq. (18)). Since the copy phase starts upon the completion of the first map wave, these two phases overlap; thus, the job execution time of one cluster in its data center is given by Eq. (19). Setting of the experiment Suppose we have the following hypothetical scenario: there are 12 data processing jobs and 10 clusters with specific configurations; this is one of the examples discussed in detail in this paper. We utilize Amazon EC2 (Amazon Elastic Compute Cloud) as the test platform. The nodes in the experiment are all of type m1.large, one of the node types that AWS supports, with identical configurations: a 64-bit RHEL (Red Hat Enterprise Linux) operating system, two cores, 7.5 GB of memory, and two 420 GB disks. The cost of each node is $0.34 per hour. The data center is located in US East (Northern Virginia), and the bandwidth measured with Netperf [18] is 282.4 MB/s. The parameters of the jobs and the cluster configurations are listed in Tables 2 and 3, respectively; Table 4 shows other related parameters. As illustrated previously, we have 12 jobs and 10 clusters, and each job can be allocated to any of the 10 clusters, so there are 10^12 possible allocation schemes. Our goal for job scheduling is to choose the best allocation scheme among all possible schemes using GAs. Before making this decision, we utilize the performance estimation module to predict the execution time and cost, which are the decision indicators for the GAs. This yields two-dimensional time and cost arrays, shown in Tables 5 and 6, respectively. From Tables 5 and 6, we can see that different jobs running on the same cluster may not have the same time and cost consumption, and that the same job can take different times and costs to finish on different clusters. In our implementation, we choose to use Java-based GA frameworks; popular implementations include JGap [19], ECJ [20], and JMetal [21].
Compared with its counterparts, the design of JMetal has a good separation of concerns in terms of the ease of applying different GAs once a problem is abstracted. Therefore, JMetal is chosen as the GA framework in our paper. In our evaluations, the Non-dominated Sorting Genetic Algorithm (NSGA-II) is chosen as the concrete algorithm (Fig. 3: comparison between real measurement and module prediction). The algorithm parameter settings in our implementation include the maximum number of evaluations, the crossover probability, and the mutation operator, as shown in Table 7. An integer chromosome is chosen, and the single-point crossover, bit-flip mutation, and binary tournament selection operators are selected drawing on the experience of [22]; reference [6] discusses how these operators modify the chromosome in GAs. Results In order to evaluate the effectiveness of the proposed performance estimation module, we set up real clusters in the US East (Northern Virginia) data center of the Amazon EC2 platform, run some experimental jobs, and compare them with the results obtained from the performance estimation module (Fig. 3). Comparing the estimated and real results in Fig. 3, we find that our performance module estimates the data processing time accurately in general. Bandwidth fluctuations and resource load may cause real-world delays relative to the predicted time performance, but this error is acceptable. Since the estimation of cost performance depends on the times, its effectiveness follows from the above conclusion. Meanwhile, in order to compare the performance of NSGA-II with other algorithms, we choose a simple allocation method for comparison, namely a random allocation policy: all jobs are assigned to a group of clusters randomly. We verify and evaluate the job scheduling policies derived by NSGA-II and by the simple allocation method; the results are shown in Tables 8 and 9, respectively. From these tables, we find that the scheme obtained by the GA-based approach takes 989.5 time units and $2.60 to finish the execution of all 12 jobs, while the scheme from the simple method takes 2638.7 time units and $3.10. Thus, the GA-based approach makes the optimized decision, giving the data processing the fastest execution efficiency and minimum cost. With the GA-based scheme, data processing jobs can be finished as fast as possible at an optimized cost, so users have a better experience and are more satisfied. Conclusions Job scheduling is one of the most important issues in big data analytics. In this paper, we propose a genetic algorithm-based approach, which uses a performance estimation module we put forward, for obtaining an optimized job scheduling scheme with optimized execution time and cost.
4,677.4
2016-06-27T00:00:00.000
[ "Computer Science" ]
B 3 : Fuzzy-Based Data Center Load Optimization in Cloud Computing Cloud computing has started a new era of accessing a variety of information pools over internet connections from any connected device. It provides a pay-per-use model through which clients obtain services. A data center is a sophisticated, high-performance server facility that runs applications virtually in cloud computing; applications, services, and data are moved to a large data center. The data center provides a higher service level, covering the maximum number of users. Finding the overall load efficiency and service utilization of a data center is therefore a definite task. Hence, we propose a novel method to find the efficiency of the data center in cloud computing. The goal is to optimize data center utilization in terms of three big factors: Bandwidth, Memory, and Central Processing Unit (CPU) cycle. We construct a fuzzy expert system model to obtain maximum Data Center Load Efficiency (DCLE) in cloud computing environments. The advantage of the proposed system lies in DCLE computation, which allows regular evaluation of services for any number of clients. This approach indicates that the current cloud needs an order-of-magnitude improvement in data center management to be used in next-generation computing. Introduction Cloud computing is an evolving paradigm for accessing an assortment of data pools via the internet using connected devices such as Personal Digital Assistants (PDAs), workstations, and mobile phones [1][2][3][4]. It is utility-based computing with the capability to deliver services over the internet, providing on-demand access without human intervention. The standard deployment object used in cloud computing is the Virtual Machine (VM), which enhances flexibility and enables the data center to be dynamic in nature. The technique of dividing a physical computer into several partly or completely isolated machines is known as virtualization [5,6]. A collection of data is stored in a centralized pool called a Data Center (DC) [7][8][9]. Cloud computing is the art of managing tasks and applications by altering the software, platform, and infrastructure and by organizing third-party data centers operated by Cloud Service Providers (CSPs) such as Yahoo!, Amazon, Google, and VMware [2,10]. A traditional data center is deployed as an individual server room hosted within the organization, running several applications on a single server. In cloud computing, the data center provides more services, covering the maximum number of users [11], so cloud service providers must be better prepared to manage and update data centers. Cloud computing provides a myriad of services [12]; consequently, data centers are costly to build and manage. The challenges of data centers are the following.
(i) Irrefutable Cost: Constructing even a low-cost data center is unaffordable for a single organization. Cloud computing builds centralized data centers, which require increasing expenditure on servers and storage. (ii) Workload Utilization: Cloud computing needs new servers to be installed in data centers. Virtualization has enabled many applications to run on a single server or a couple of servers. Key factors of utilization are storage, power, cooling, response time, capacity, and efficiency. (iii) Optimization of Services: Numerous data center applications provide a variety of services, so finding the overall load efficiency and service utilization is a complex task associated with the data center. With enormous numbers of applications running on it, optimizing data center services is a major challenge. A major difficulty is that operators are expected to have good knowledge of monitoring data centers so that they can find service-utilization issues by managing the data center load configurations [13][14][15][16]. In [17], a data center utilization scenario was presented to monitor and analyze a cloud system with respect to client specification bounds such as bandwidth, memory, and CPU utilization. Fuzzy logic was introduced by Zadeh [18][19][20]. It is a problem-solving methodology that lends itself to systems ranging from simple to sophisticated, and it is used in embedded, networked, and distributed systems. A fuzzy set is a generalized set whose elements carry a measure of uncertainty: elements have varying degrees of membership in the set. The characteristic function of a crisp set assigns a value of either 1 or 0 to each individual in the universal set. This function can be generalized so that the values assigned to the elements of the universal set fall within a range, with larger values representing higher degrees of set membership; such a function is called a membership function, and the set it defines is a fuzzy set. The most commonly employed range of values of a membership function is the unit interval [0, 1]. Each membership function maps the elements of a given universal set X, which is always a crisp set, into real numbers in [0, 1]. The membership function of a fuzzy set A is denoted by μ_A; that is, μ_A : X → [0, 1]. Each fuzzy set is completely and uniquely defined by one particular membership function, whose symbol may also be used as a label for the associated fuzzy set [21]. Each element of a fuzzy set is mapped to a membership value using this function-theoretic form [22]: for an element x of the universal set X that is a member of fuzzy set A, the mapping is given by μ_A(x) ∈ [0, 1], where μ_A(x) is called the grade of membership.
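The grade of membership μ_A(x) ∈ [0, 1] can be illustrated with a triangular membership function, the simplest common choice; this is a generic sketch, not the particular membership functions used in this paper.

```python
def tri_membership(x, a, b, c):
    """Triangular membership function mu_A: X -> [0, 1], with feet at a and c
    and peak at b; returns the grade of membership of x in fuzzy set A."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# e.g. a "high bandwidth utilization" set peaking at 80 % utilization:
print(tri_membership(0.72, a=0.6, b=0.8, c=1.0))   # partial membership (0.6)
```

Values between 0 and 1 are exactly what distinguish a fuzzy set from a crisp one, whose characteristic function returns only 0 or 1.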
In this work, fuzzy logic is used extensively to find the data center load efficiency. Here, we use crisp real numbers as input values; in a subsequent analysis, we intend to move to fuzzy fractal dimensions [23,24]. Data center load efficiency is the key objective. The fuzzy fractal dimension is denoted by a pair consisting of the fractal dimension of Bandwidth (BW) and a membership component over the Memory and CPU fields [25,26]: BW is the numerical value of the fractal dimension of bandwidth, and the associated membership function covers the fields that depend on it, namely the memory and the CPU, since memory and CPU demand are driven by the available bandwidth. The unevenness of dynamically changing resource requirements and emerging demand patterns can be compared to different geometric objects [27,28]; hence, we apply fuzzy rules to differentiate the different patterns and cluster them. Fractal geometry can be used to classify objects based on their roughness [24,26,29]. In our case, the focus is only on the smoother objects, where the fractal value is bounded by one; a value close to one means maximum utilization of the memory and CPU resources. If the memory is M1 and the CPU cycle is C1, then the data center load efficiency is DC1. Thus, based on the input parameters, the output efficiency is predicted using simple fuzzy rules. The only disadvantage is that the total number of rules grows with the number of parameters, causing a problem of dimensionality. Based on the model that has been created, future values of the demand for CPU and memory can be predicted, leading to an accurate assessment of the efficiency of the data center in varying situations. 1.1. Background. In recent times, more attention has been paid to cloud computing frameworks and their performance evaluation. Iosup et al. [30] performed a "performance analysis of cloud computing services" to support the efficiency of cloud computing; in their model, they analyze the performance of Many Task Computing (MTC) workloads and propose a comparison of performance characteristics and cost models. Moreno-Vozmediano et al. [31] deployed a computing cluster on top of many-task computing applications. In subsequent work, cluster loads drawing resources from different clouds were used to construct high-availability strategies, proving the viability of scaling resources and performance for large-scale cluster infrastructure. Dutreilh et al. [32] considered recent research to construct a data center management framework for automatic resource allocation in virtual applications; they evaluated two approaches, threshold-based and reinforcement-learning methods, to dynamically scale resources.
Yazir [33] presented a virtualization tool that fills this gap by applying ideas from computational geometry; it proved valuable in providing quick and easy preliminary performance analyses. In data processing management, it is difficult to obtain as many machines as an application needs. Large-scale jobs are distributed across different machines as parallel running processes, and the control and coordination of these processes is complex and time-dependent. Cloud architectures [34] have solved such difficulties. Cloud administrators usually worry about hardware procurement (when they run out of capacity) and better infrastructure utilization (when they have excess, idle capacity). Lower network bandwidth and inherently lower hardware dependability force enterprises to reorganize their cloud application architectures [35]. From these data center challenges and methodologies, two key questions arise. How are the efficiency of data centers and the performance of cloud computing calculated? What are the key factors that decide the efficiency of a data center in cloud computing? This paper answers these questions; its contribution is a model that is validated through tangible implementation and assessment. This paper is organized as follows. Section 2 gives the problem identification. Section 3 deals with the problem formulation, preliminaries, and definitions. Section 4 presents the computation of the data center load efficiency using fuzzy modeling. Section 5 provides the performance analysis and experimental results. Section 6 concludes the paper. Problem Identification The objective of this work is to assess the data center load efficiency when a large number of clients and several requests are running on the same server. A typical web application used in cloud computing has potential capacity constraints such as the bandwidth into the load balancer and the CPU cycle and memory of the load balancer [36,37]. The ability of the load balancer depends upon (i) the bandwidth between the load balancer and the application server [38,39]; (ii) the CPU cycle and memory of the application server; (iii) the bandwidth between the application server and the network storage devices; (iv) the data storage and disk I/O of the database server [40]. The following three major factors play a vital role in cloud computing: (1) Bandwidth, (2) Memory, (3) Central Processing Unit Cycle (CPU Cycle). Bandwidth.
In corporate practice, cloud computing is operationally intensive and inherently parallel; any software that runs on an entire virtual client must be communicative, yet the cloud does not by itself give guarantees on operational transactions and bandwidth. The cloud service provider [28] can offer bandwidth determined by the network connections of its data center, both internal and to the public internet. Data centers can then provide consistency and efficient service delivery, including the guaranteed amount of bandwidth that every client should get [41,42]. As the number of services grows, the cloud service provider increases the cloud information rate, which also increases the required bandwidth [43][44][45]. Challenging results based on High Performance Computing (HPC) exist in [44,46]. Figure 1 depicts the bandwidth utilization of High Performance Cluster Computing (HPCC) on GoGrid cloud computing platforms; here, bandwidth is measured for HPCC performance prediction. The volume of services on cloud platforms keeps growing and tends to require more bandwidth [24,26,47,48]. Bandwidth utilization and data center load are directly proportional to each other: when the bandwidth utilization in the cloud increases, the data center load also increases, and vice versa. Hence, bandwidth utilization is considered one of the big three factors for providing good cloud service to customers. Memory. Memory is a major difficulty for the storage and delivery of services in cloud computing; requirements depend entirely on the application or task used by the client. In cloud computing, applications and files are permanently stored in the data center and accessed by third-party clients and users (e.g., Amazon's Simple Storage Service, S3). From a cloud survey [49], Figure 2 shows the memory usage of Amazon EC2 platforms from m1.small to c1.xlarge. Given the dynamic nature of data centers [46], database management systems require large amounts of memory for processing services. Memory should be elastic, scaling as applications are performed; memory consumption is comparatively low while running SaaS applications, so memory elasticity and memory virtualization are manageable; see [50,51]. In cloud computing, many CPU transactions are completed within a single data center, so the memory must be able to sustain the CPU transactions and service performance calculations. Because of these facts, memory is another important factor in constructing the DCLE.
Central Processing Unit Cycle (CPU Cycle). Third, cloud computing needs many processor cores on a single fragment, providing high concurrent throughput for services with parallel operation. In cloud computing, CPU utilization is an important factor. One input to a processor's computing power is its clock speed, which is an approximation of the distribution of clock speeds that actually occur for a given processor design; in addition, the advent of new processors affects the purchase of existing ones. Some data center applications need large amounts of memory while hardly loading the CPUs responsible for processing. For such situations, CPUs with efficient performance, called workstations, are installed; in cloud computing, the same workstation is termed a data center. In the real world, memory is limited, not infinite, so we take the CPU cycle as one of the prime factors deciding the DCLE. Database applications are deployed on mainframe computers or servers with huge capacity. In [46], grid workload archive traces are reported along with CPU utilization. A cloud computing system will need on the order of hundreds of CPUs for multiprocessing architectures, starting from CPU counts in the range of 64 to 128. We identified that these three big factors play a major role in computing the DCLE, and we use them to obtain an optimized value of maximal data center efficiency through a valid problem-solving control system based on fuzzy modeling. Problem Formulation The proposed model is formulated as a knowledge-based fuzzy expert system [52,53]. We propose a novel approach, tightly bound to the data center, to find the new quantity called Data Center Load Efficiency (DCLE). This factor is predicted within the network load configuration region. DCLE is expressed through three fundamental factors: Bandwidth (BW), Memory (MEM), and CPU cycle or speed (CPU) of the data center. The knowledge for finding DCLE is expressed in terms of fuzzy inference rules which connect antecedents with consequences. A few definitions are provided to demonstrate this model. Preliminaries Definition 1 (approximate reasoning). Let A_j and B_j be fuzzy sets corresponding to the linguistic values of the j-th rule. Multiconditional approximate reasoning has the form: Rule j: If X is A_j, then Y is B_j (j = 1, 2, ..., n); Fact: X is A'. From rules 1 through n and the fact "X is A'", we conclude that "Y is B'", where A_j, A' ∈ F(X) and B_j, B' ∈ F(Y) are fuzzy sets on the universal sets X and Y of the variables X and Y. Definition 2 (fuzzy implication). In general, a fuzzy implication is a function of the form I: [0, 1] × [0, 1] → [0, 1] that coincides with the classical implication for all a, b ∈ {0, 1}. Interpreting disjunction and negation as a fuzzy union and a fuzzy complement, the classical formula a → b = ¬a ∨ b gives I(a, b) = max(1 − a, b). Moreover, due to the law of absorption of negation in classical logic, it may also be rewritten as either a → b = ¬a ∨ (a ∧ b) or a → b = ¬(a ∧ ¬b). Definition 3 (relation R). The fuzzy relation R employed in reasoning is obtained from the given if-then rules of Definition 1. For each rule j, we determine a relation R_j by the formula R_j(x, y) = min[A_j(x), B_j(y)] for all x ∈ X, y ∈ Y; R is then defined as the union of the relations R_j over all rules of Definition 1: R(x, y) = max_j R_j(x, y). In this paper, the problem is considered disjunctive in nature, so the rules are interpreted disjunctively as above. In general, R_j may instead be determined by a suitable fuzzy implication of Definition 2, R_j(x, y) = I[A_j(x), B_j(y)], as a general counterpart of the if-then rule.
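The construction of R in Definition 3 can be sketched numerically. The following Python fragment (illustrative, not from the paper; the two rules and the discretized universes are assumptions) builds R_j(x, y) = min[A_j(x), B_j(y)] for each rule and takes their pointwise maximum, the disjunctive interpretation.

```python
import numpy as np

# Discretized universal sets X and Y (assumed 5-point grids on [0, 1]).
X = np.linspace(0.0, 1.0, 5)
Y = np.linspace(0.0, 1.0, 5)

# Two illustrative rules given as membership vectors A_j over X, B_j over Y.
A = [np.array([1.0, 0.7, 0.3, 0.0, 0.0]),   # antecedent of rule 1 ("Low")
     np.array([0.0, 0.0, 0.3, 0.7, 1.0])]   # antecedent of rule 2 ("High")
B = [np.array([1.0, 0.6, 0.2, 0.0, 0.0]),   # consequent of rule 1
     np.array([0.0, 0.0, 0.2, 0.6, 1.0])]   # consequent of rule 2

# R_j(x, y) = min[A_j(x), B_j(y)]; R = pointwise max over rules (union).
R = np.zeros((len(X), len(Y)))
for Aj, Bj in zip(A, B):
    R = np.maximum(R, np.minimum.outer(Aj, Bj))
print(R.shape)  # (5, 5) fuzzy relation on X x Y
```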
Definition 4 (fuzzy proposition). A fuzzy proposition is measured by its range of truth values: its truth is a matter of degree, so each fuzzy proposition is assigned a number in the unit interval [0, 1]. We consider our model as consisting of conditional, unqualified propositions. Propositions p of this type are expressed by the canonical form p: If X is A, then Y is B, where X and Y are variables whose values are in the sets X and Y, respectively, and A and B are fuzzy sets on X and Y, respectively. The proposition may also be viewed as p: ⟨X, Y⟩ is R, where R is a fuzzy set on X × Y that is determined for each x ∈ X, y ∈ Y by the formula R(x, y) = J[A(x), B(y)], where J denotes a binary operation on [0, 1] representing a suitable fuzzy implication. Definition 5 (compositional rule of inference). Consider variables X and Y that take values from sets X and Y, respectively, and assume that for all x ∈ X and all y ∈ Y the variables are related by a function y = f(x). If X takes its value in a given set A, then Y takes its value in the set B = {y ∈ Y | y = f(x), x ∈ A}. Similarly, if the relation between X and Y is given by a crisp relation R on X × Y and X is in a given set A, we can infer that Y is in the set B = {y ∈ Y | ⟨x, y⟩ ∈ R, x ∈ A}. This inference may be expressed equally well in terms of the characteristic functions χ_A, χ_B, χ_R of the sets A, B, R, respectively, by the equation χ_B(y) = sup_{x ∈ X} min[χ_A(x), χ_R(x, y)] for all y ∈ Y. Let us now proceed one step further and assume that R is a fuzzy relation on X × Y and that A', B' are fuzzy sets on X and Y, respectively. Then, if R and A' are given, B'(y) = sup_{x ∈ X} min[A'(x), R(x, y)] for all y ∈ Y, which is the generalization of the crisp equation above, obtained by replacing the characteristic functions with the corresponding membership functions. We refer to this equation as the compositional rule of inference; it facilitates approximate reasoning. Cloud Data Center Efficiency Prediction Using Fuzzy Expert System A fuzzy controller works as a feedback system, repeating its cycle until the desired output is attained. To establish the fuzzy controller model, we first have to define the input and output variables. Data center management is driven by the DCLE, which is calculated from the three factors; in our setting, these three factors are the input variables and the data center load efficiency is the output variable. The task is treated by data center management as a control problem in nature: the load efficiency of the data center is defined as the single output variable of the cloud environment. The system consists of three modules: (i) fuzzification and defuzzification, (ii) a fuzzy inference engine, (iii) a fuzzy rule base. First, observations are made of all input and output variables, which reflect the conditions of the data center management control process. These observations are then converted into appropriate fuzzy sets to capture observation uncertainty; this step is called fuzzification. To define the data center load efficiency from a pair of variables, in spite of there being three inputs (bandwidth, memory, and CPU cycles), we consider combinations of any two input variables x1, x2 taken from bandwidth, CPU cycle, and memory. Using these values, the fuzzy controller produces the control variable, DCLE. Linguistic variables and their notations are depicted in Table 1.
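Once X and Y are discretized, Definition 5 reduces to a sup-min matrix composition. The following Python fragment (illustrative, not from the paper; the relation and input vectors are assumptions) computes the inferred output fuzzy set B' for an observed, fuzzified input A'.

```python
import numpy as np

# A fuzzy relation R on a 5x5 grid (e.g., built from rules as in Definition 3).
R = np.minimum.outer(np.array([1.0, 0.7, 0.3, 0.0, 0.0]),
                     np.array([1.0, 0.6, 0.2, 0.0, 0.0]))

# Observed, fuzzified input A' on X.
A_prime = np.array([0.2, 1.0, 0.5, 0.0, 0.0])

# Compositional rule of inference: B'(y) = sup_x min[A'(x), R(x, y)].
B_prime = np.max(np.minimum(A_prime[:, None], R), axis=0)
print(B_prime)  # membership vector of the conclusion "Y is B'" over Y
```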
4.1. Step 1. This step identifies the input/output variables and assigns meaningful linguistic states and their ranges. Exact linguistic states are chosen for each variable and represented by corresponding fuzzy sets (or fuzzy numbers). Let the ranges of the input variables x1 and x2 be [−a, a] and [−b, b], respectively, and let the range of the output variable v be [−c, c]. The linguistic input variables are Bandwidth, Memory, and CPU cycle, and the output variable is Data Center Load Efficiency (DCLE). Each input variable ranges over three linguistic states, as shown in Figures 3 and 4; the output variable also has three linguistic states. 4.2. Step 2. In this step, we introduce a fuzzification function for each input variable to capture the associated observation uncertainty. It is used to compute the grades of membership of the linguistic values corresponding to a measured input, each expressed as a real number. Consider a fuzzification function of the form f: R → F(R), where F(R) denotes the set of all fuzzy numbers and f(x0) is a fuzzy number chosen as an approximation of the measurement x = x0. We use trapezoidal membership functions to define f(x0). We illustrate fuzzification by showing the membership functions for Bandwidth and Memory, together with a trapezoidal view of the variables, in Figure 5. 4.3. Step 3. The fuzzy inference system generates the relevant fuzzy inference rules from a fuzzy associative memory, called a FAM square; these are conveniently represented in Figures 6, 7, and 8 as FAM squares. In our approach, x1 and x2 are inputs, v is the output variable, and the rules have the form: If x1 is A_j and x2 is B_j, then v is C_j, where A_j, B_j, C_j are fuzzy numbers chosen from the sets representing the linguistic states. Each variable has three linguistic states, so each pair of input variables generates 3² = 9 possible rules; in total we have 36 rules. To find the fuzzy rules practically, we need a set of input-output data {(x1_k, x2_k, v_k) | k ∈ K}, where v_k is the attained value of the output variable for the given values x1_k and x2_k of the input variables and K is an appropriate index set. Let A(x1_k), B(x2_k), C(v_k) denote the largest membership grades attained for the respective variables. Then the degree of relevance of a data triple can be expressed by r_k = t1[t2[A(x1_k), B(x2_k)], C(v_k)], where t1, t2 are t-norms. 4.4. Step 4. The observations of the input variables must be periodically matched against the fuzzy inference rules to make inferences in terms of the output variable. We choose the compositional inference logic of Definition 5 to combine our variables. We convert the given fuzzy inference rules into equivalent simple fuzzy conditional propositions of the form: If ⟨x1, x2⟩ is A_j × B_j, then v is C_j, where (A_j × B_j)(x1, x2) = min[A_j(x1), B_j(x2)] for all x1 ∈ [−a, a] and x2 ∈ [−b, b]. Determining the output variable DCLE then becomes a problem of approximate reasoning with the composite inference of fuzzy propositions described in Definitions 4 and 5, respectively. The fuzzy rule base consists of n fuzzy inference rules: Rule j: If x1 is A_j and x2 is B_j, then v is C_j (j = 1, 2, ...), where the symbols A_j, B_j, C_j denote fuzzy sets that represent the linguistic states of the variables x1, x2, v, respectively. Each rule is expressed in terms of a relation R_j, as in Definition 2, and the rules are considered disjunctive in nature. We conclude that the output variable is defined by the fuzzy set C' = (A' × B') ∘ R, where ∘ is the sup-t composition for a t-norm t. The choice of the t-norm is a matter similar to the choice of fuzzy sets for the given linguistic labels.
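A minimal Mamdani-style reading of Steps 3 and 4 for one FAM square (Bandwidth × Memory → DCLE) might look as follows in Python. The linguistic states (Low/Med/High), the trapezoid breakpoints, and the 3 × 3 rule table are illustrative assumptions, not the paper's Table 1 or Figures 6-8.

```python
import numpy as np

V = np.linspace(0.0, 1.0, 101)  # discretized output universe for DCLE

def trapmf_vec(z, a, b, c, d):
    """Vectorized trapezoidal membership function on the grid z."""
    up = np.clip((z - a) / (b - a), 0.0, 1.0)
    down = np.clip((d - z) / (d - c), 0.0, 1.0)
    return np.minimum(up, down)

# Three linguistic states per variable (assumed breakpoints).
STATES = {"Low": (-0.4, 0.0, 0.2, 0.5), "Med": (0.2, 0.45, 0.55, 0.8),
          "High": (0.5, 0.8, 1.0, 1.4)}

def grade(x, state):
    """Scalar membership grade of x in a linguistic state."""
    a, b, c, d = STATES[state]
    return float(trapmf_vec(np.array([x]), a, b, c, d)[0])

# 3x3 FAM square: (bandwidth state, memory state) -> DCLE state (assumed).
FAM = {("Low", "Low"): "Low", ("Low", "Med"): "Low", ("Low", "High"): "Med",
       ("Med", "Low"): "Low", ("Med", "Med"): "Med", ("Med", "High"): "High",
       ("High", "Low"): "Med", ("High", "Med"): "High", ("High", "High"): "High"}

def infer_dcle(bw, mem):
    """Mamdani inference: min for AND, max aggregation over rules."""
    agg = np.zeros_like(V)
    for (s_bw, s_mem), s_out in FAM.items():
        w = min(grade(bw, s_bw), grade(mem, s_mem))  # rule firing strength
        agg = np.maximum(agg, np.minimum(w, trapmf_vec(V, *STATES[s_out])))
    return agg  # aggregated output fuzzy set over V

agg = infer_dcle(bw=0.812, mem=0.872)  # two of the paper's input values
```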
4.5. Step 5. The process of computing a single number from the output fuzzy set is called defuzzification. The fuzzy output variable is also a linguistic variable whose values have been assigned grades of membership. In this last step, we find a single number compatible with the output membership function of the Data Center Load Efficiency (DCLE), depicted in Figure 9; this number is the output of the defuzzification process. There are several methods for calculating a single defuzzified number; we used the centroid method to convert the output values of the inference engine, expressed as a fuzzy set, into a crisp number. With the centroid method, the output value is expressed as v* = Σ_k μ(v_k) · v_k / Σ_k μ(v_k), where μ(v) is the grade of membership in the aggregated output membership function, (1) the minimum value attains the minimum of the data center load efficiency, (2) the moderate value attains the moderate level of the data center load efficiency, and (3) the maximum value attains the maximum of the data center load efficiency. v* is the defuzzified output, a real number. Performance Analysis We now assess the performance of the proposed cloud data center efficiency model using the fuzzy expert system to show that it is load-efficient. We focus on the load efficiency of the data center with respect to all the factors: bandwidth, memory, and CPU cycles. Choosing effective linguistic variables helps in providing better results. The If-Then rules of the experiment are formulated using the rule editor. We performed the required operations in the FIS editor, which handles the high-level issues; the membership function editor, which defines the shapes of all membership functions, is associated with each variable, and the rule editor is used for editing the list of rules. The surface viewer plots an output surface map for the system. The input vectors of the fuzzy inference engine, as calculated by the simple attribute function, are 0.812, 0.872, and 0.884, and the unique output generated by the Mamdani method is 0.959. All the rules are depicted as 3D graphs, called surface views, in Figures 10, 11, and 12.
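Continuing the sketch (illustrative Python, not the paper's implementation), centroid defuzzification of an aggregated output set is a one-liner; with a tuned rule base over all three inputs, this is the step that would yield a crisp DCLE such as the 0.959 reported above.

```python
import numpy as np

def centroid(v_grid: np.ndarray, mu: np.ndarray) -> float:
    """Centroid defuzzification: v* = sum(mu * v) / sum(mu)."""
    total = mu.sum()
    return float((mu * v_grid).sum() / total) if total > 0 else 0.0

# With V and `agg` from the previous sketch: dcle = centroid(V, agg).
# Standalone toy example on a 5-point output grid:
print(centroid(np.linspace(0, 1, 5), np.array([0.0, 0.2, 0.6, 1.0, 0.4])))
```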
Through Figure 10, we infer that when the bandwidth and memory increase linearly, the load efficiency of the data center increases at the same time; when they decrease, they bring down the efficiency of the data center linearly. In Figure 11, the bandwidth and the CPU cycles are compared with the efficiency of the data center load: when the bandwidth and the CPU are higher, the efficiency of the data center is also higher, and vice versa. In Figure 12, memory and CPU cycles are compared with the DCLE; the results indicate that when the memory and the CPU cycles are higher, the DCLE is also higher, and lower in the opposite case. The experiments suggest that our system is more accurate in predicting the efficiency of a data center than a human expert. Here, DCLE is used as the prime factor in determining the overall system utilization and in assessing the system efficiency. The results showed that an increase in the number of services in the data center leads to an increase in the complexity of the DCLE calculation. We list the features of our system in Figure 13 and also compare our scheme with HPL performance (the LINPACK scheme) [44]. In that work, the experiment was performed using virtual clusters on GoGrid cloud service provider instances with varying numbers of nodes, and the efficiency varied from 60 to 70 percent; bandwidth, memory, and processing cycles were considered. It was observed that when the bandwidth, memory, and CPU cycle ranges were higher for the instances, the efficiency of the GoGrid instances increased, whereas reducing any of the three big factors impacted the efficiency of the HPL system. The three big factors have been used to study the data center load efficiency, and it was observed that increasing the attribute values of the three factors resulted in higher efficiency for any cluster or virtual system. It is clearly evident that the simulation results are 20 percent higher in comparison with the results offered by HPL systems. Conclusion The most important task in the successful delivery of internet services is access through maximum data center load efficiency. In this paper, we examined the load efficiency of the data center, which is essential for cloud computing systems. The system is designed according to the service layers of cloud computing and the cloud service provider's estimation strategy. The data center maintains a chart to monitor the big three factors suggested in this work. The advantage of the proposed system lies in DCLE computing: while computing, it allows regular evaluation of services for any number of clients. This work can be extended toward providing resource adaptation and trustworthiness in cloud computing environments. Given any possible truth values a and b of the fuzzy propositions p and q, respectively, the truth value I(a, b) of the conditional proposition "If p, then q" defines the if-then rule; this extends the classical implication a → b from the restricted domain {0, 1} to the full domain [0, 1] of truth values in fuzzy logic, deriving from the classical formula I(a, b) = ¬a ∨ b. Figure 10: Fuzzy 3D view of bandwidth and memory versus DCLE. Figure 11: Fuzzy 3D view of bandwidth and CPU cycles versus DCLE. Table 1: Fuzzy linguistic values and notations.
6,224.4
2013-03-31T00:00:00.000
[ "Computer Science" ]
Genes, pathways and networks responding to drought stress in oil palm roots Oil palm is the most productive oilseed crop and its oil yield is seriously affected by frequent drought stress. However, little is known about the molecular responses of oil palm to drought stress. We studied the root transcriptomic responses of oil palm seedlings under 14-day drought stress. We identified 1293 differentially expressed genes (DEGs), involved in several molecular processes, including cell wall biogenesis and functions, phenylpropanoid biosynthesis and metabolisms, ion transport and homeostasis and cellular ketone metabolic process, as well as small molecule biosynthetic process. DEGs were significantly enriched into two categories: hormone regulation and metabolism, as well as ABC transporters. In addition, three protein-protein interaction networks: ion transport, reactive nitrogen species metabolic process and nitrate assimilation, were identified to be involved in drought stress responses. Finally, 96 differentially expressed transcription factors were detected to be associated with drought stress responses, which were classified into 28 families. These results provide not only novel insights into drought stress responses, but also valuable genomic resources to improve drought tolerance of oil palm by both genetic modification and selective breeding. In addition, it was reported that transcription factors (TFs), including NAC, GmNAC, HD-START-type and NF-YB family members, played important roles in drought tolerance by regulation of hormone metabolisms 27 . Beyond individual genes, cascaded signalling pathways and regulatory networks were suggested to have played indispensable roles in drought tolerance 28 . ABA, coupled with corresponding TFs and ABA-responsive elements, can regulate the expression of a wide range of genes under osmotic stress via cis regulation 28 . A number of protein families in the calcium signalling pathways, mitogen-activated protein kinase (MAPK) signalling pathways and phosphorylation cascades were also involved in drought stress responses 29,30 . Previous studies on the responses of plants to drought stress have shed new light on the mechanisms of drought tolerance in different species. However, in oil palm, transcriptomic responses to drought stress are still poorly understood. The purpose of the current study was to identify genes, pathways, networks and transcription factors involved in drought response, using RNA-seq and bioinformatics analysis, to understand more about the molecular responses of oil palm roots under drought stress. RNA-seq is a next-generation sequencing technology used to analyse the presence and quantity of RNA molecules in biological samples 31 . Herein, oil palm seedlings were first subjected to drought stress, and we studied the transcriptomic responses of roots, the primary tissue for stress signal perception and for initiating cascaded gene regulation pathways in response to drought. A total of 1293 DEGs, including 96 transcription factors, were identified in drought stress responses. Besides individual genes, signalling pathways and protein-protein interaction networks, involving transcription factors, also likely played crucial roles in drought tolerance. This study provides both novel insights into the molecular response of oil palm to drought stress and genomic resources to improve and develop drought-tolerant oil palms for sustainable oil production.
Results and discussion Morphological and physiological responses to drought. Obvious morphological changes in leaves and roots of oil palm seedlings under drought stress over a period of 14 days were observed. The effects of drought stress were first observed in leaf morphology, showing initial edge and tip necrosis and then wilting and yellowing in the drought-treated samples (Fig. 1a). In comparison to the control, the drought-stressed palms showed a significant decrease in the number of roots, root volume and overall biomass (Fig. 1b). Trypan blue staining showed that the drought-treated roots experienced not only obvious cell deformation but also more cell membrane injury than the well-watered controls (Fig. 1c). These observations are consistent with those of previous studies in oil palm [7][8][9] and other plant species [32][33][34] under drought stress and water deprivation. These results indicate substantial physiological responses of the oil palm seedlings under drought stress 35 , and provided useful starting materials to study the genes, pathways and networks involved in drought responses using RNA-seq 36 (see Supplementary Table S1). Nevertheless, the sequence coverage of the drought-stressed palms (> 200 × of transcriptomes) was sufficient to construct transcripts and identify DEGs. Approximately 70% of cleaned reads were uniquely mapped to the reference genome of 31,640 annotated protein-coding genes 37 . The drought-stressed seedlings showed slightly higher unique-mapping rates than the controls (72.1% vs 66.3%), indicating that duplicated genes also play important roles in the response to drought stress 38 in oil palm, a species of palaeotetraploid origin 37 , and future studies should also focus on paralogous genes and their potential functions in stress responses 39 . A total of 2084 and 1358 DEGs were identified using two approaches, DESeq2 and EdgeR, respectively, of which 1293 were shared by the two data sets (Fig. 2). DESeq2 identified 944 down-regulated and 1140 up-regulated DEGs, while EdgeR screened 624 down-regulated and 734 up-regulated DEGs. The numbers of common down- and up-regulated DEGs were 614 and 679, respectively, between the two approaches (Fig. 2; Supplementary Table S2). To obtain confident results, only the common DEGs were kept for further analysis. Based on the relative expression of DEGs across samples, the drought-stressed and control samples were clearly differentiated by both PCA and hierarchical clustering analyses and showed substantial differences in expression profiles (Fig. 3). We further assessed the accuracy of the RNA-seq data by comparing it to qPCR results for nine randomly selected genes (Supplementary Table S3). We observed an overall high consistency of the expression patterns of these genes between RNA-seq and qPCR (Supplementary Fig. S1a), with a correlation coefficient of 0.978 (P < 0.0001), as examined using Pearson's correlation test (Supplementary Fig. S1b). Taken together, these data indicate that the RNA-seq data are reliable. Interestingly, we observed that most of the DEGs in the subcategories phenylalanine metabolism and tryptophan metabolism were down-regulated (Table 1). In plant species, phenylalanine and tryptophan metabolisms are more involved in pathogen-related immune responses 40 . The down-regulation of most DEGs within these categories implies metabolic compensation in drought stress responses by sacrificing less important biological functions.
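The intersection of the DESeq2 and edgeR calls described above is a simple set operation; a minimal sketch in Python follows (the gene-identifier sets are hypothetical stand-ins for the two pipelines' filtered result tables, reusing a few locus IDs mentioned in this study as examples).

```python
# Hypothetical gene-ID sets; in practice these come from the DESeq2 and
# edgeR result tables after fold-change and FDR filtering.
deseq2_up = {"LOC105046997", "LOC105060251", "geneA"}
deseq2_down = {"LOC105048226", "LOC105038824", "geneB"}
edger_up = {"LOC105046997", "LOC105060251"}
edger_down = {"LOC105048226", "LOC105038824"}

common_up = deseq2_up & edger_up        # 679 genes in the study
common_down = deseq2_down & edger_down  # 614 genes in the study
common = common_up | common_down        # 1293 high-confidence DEGs in total
print(len(common_up), len(common_down), len(common))
```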
In addition, we found two genes, including two-component response regulator ORR9 33 ; further studies on how these two genes are involved in the responses to drought stress in oil palm are required. Ontology enrichment analysis of genes responding to drought stress. To understand the transcriptomic responses to the drought stress, we first carried out gene ontology enrichment analysis using the 1293 DEGs identified by both EdgeR and DESeq2. A total of 89 GO terms were significantly enriched, involving many categories of diverse functions (Supplementary Table S4). The most significant enrichment entities included GO terms related to cell wall biogenesis and functions (e.g., GO:0009834, GO:0044036, GO:0009664, GO:0016998 and GO:2000652), which was consistent with our observation that the cell wall of the drought treatments has likely been damaged by severe drought stress and has thus triggered the mechanisms of damage and repair (Fig. 4). Moreover, we also observed significant enrichments related to phenylpropanoid biosynthesis and metabolisms (e.g., ath00940, GO:0009698 and GO:0046271). It is known that the phenylpropanoid pathway is activated by stress conditions, such as drought, salinity and extreme temperature, and leads to accumulation of phenolic compounds, which play critical physiological roles in regulation under abiotic stress to cope with environmental challenges 41 . Moreover, we found some GO terms classified into the groups related to ion transport and homeostasis (e.g., GO:0006811, GO:0030004, GO:0030007, GO:0015698 and GO:0034220) and response to osmotic stress and water homeostasis (e.g., GO:0006970 and GO:0030104). Differential expression of genes with these functions likely results from the responses of plants to water deprivation by direct regulation of osmotic pressure 42,43 . In addition, a number of genes were enriched into the biological categories related to regulation of cellular ketone metabolic process (GO:0010565), suggesting that genes involved in ketone metabolic processes play important roles under drought stress in oil palm 44,45 . Hormone regulation is also indispensable for stress responses of plant species. Here, we identified two enriched GO terms related to hormone regulation and metabolism (e.g., GO:0010817 and GO:0042447). Previous studies have shown that production of numerous secondary metabolites is essential for physiological processes responding to abiotic stress 46,47 . Consistent with these results, we found several significant enrichments related to these terms: small molecule biosynthetic process (GO:0044283), amino sugar and nucleotide sugar metabolism (ath00520), galactose metabolism (ath00052), benzene-containing compound metabolic process (GO:0042537), linoleic acid metabolism (ath00591) and xyloglucan metabolic process (GO:0010411). Interestingly, we also identified significantly enriched GO terms, like response to jasmonic acid (GO:0009753) and ABC transporters (ath02010), which play crucial roles in abiotic stress responses (Fig. 4). The interactions of these enriched GO terms were further investigated using network analysis. Eight enriched GO networks were identified, each consisting of no fewer than 3 genes (Fig. 5). The major GO networks involved those related to cell-wall-related biogenesis and metabolism (GO:0009834 and GO:0044036), small-molecule-related biosynthetic and metabolic processes (ath00940, GO:0009698, GO:0044283 and GO:0010565) and ion transport and homeostasis-related processes (GO:0006811, GO:0034220 and GO:0030004).
These data imply that genes in these networks are more extensively induced to be differentially expressed in response to drought stress 42,47,48 . We further investigated the enriched KEGG pathways and found that the functions of the enriched pathways were generally consistent with those of the enriched GO terms shown above (Supplementary Table S5). Above all, these enrichment analyses suggest that many genes, pathways and networks respond to the drought stress in the roots of oil palm seedlings. The DEGs, pathways and networks identified in this study provide valuable resources for future studies on their functions to improve drought tolerance of oil palm. Plant hormone signal transduction in drought stress responses. Plant hormones not only play crucial roles in controlling growth and development, but are also indispensable in the regulation of stress responses 49 . Herein, we first focused on the DEGs involved in the plant hormone signal transduction pathway and found significant enrichments of DEGs within subcategories of KEGG pathways including α-linolenic acid metabolism, carotenoid biosynthesis, phenylalanine metabolism, tryptophan metabolism and zeatin biosynthesis (Table 1). Previous studies revealed that genes related to α-linolenic acid metabolism played important roles in drought stress responses 50,51 . We found that four DEGs were involved in α-linolenic acid metabolism and three of them were up-regulated. Interestingly, the down-regulated DEG, jasmonic acid-amido synthetase JAR1 (LOC105048226), was a duplicated copy of the up-regulated one, jasmonic acid-amido synthetase JAR1 (LOC105046997), suggesting functional divergence of paralogous genes since genome duplication events. Nevertheless, consistent with the expression patterns of most DEGs in this subcategory, the up-regulated jasmonic acid-amido synthetase JAR1 might be more important in the regulation of the drought stress response in oil palm. Moreover, we found that all of the DEGs in the subcategory carotenoid biosynthesis, including probable protein phosphatase 2C 24 and two duplicated copies of probable protein phosphatase 2C 75, were up-regulated. These three DEGs were also enriched into the subcategory abscisic acid (ABA) pathway within the MAPK signalling pathway (Table 1). The carotenoid biosynthesis signalling pathway is specifically induced in roots and contributes to inducing ABA production to regulate ion homeostasis, as studied in Arabidopsis 52 . ABA-independent signalling pathways are involved in the regulation of the drought stress response in many plant species 53 . Therefore, our results suggest that ABA-related genes also play important roles in drought stress responses of oil palm. ABC transporters in drought responses. Membrane transporters play vital roles in the regulation of water and ion homeostasis of organisms, among which ATP-binding cassette (ABC) transporters constitute one of the largest protein families and act as both exporters and importers, driven by ATP hydrolysis 54 . ABC transporters play irreplaceable roles in transmembrane allocation of various molecules to adapt to rapidly changing environments, such as water scarcity, heavy metal stress and pathogen stress 55 . In order to survive in these changing abiotic conditions, it is necessary for cells to absorb nutritious chemical substances and discharge endogenous toxins, as well as exchange signalling molecules 55 .
Thus, the ABC transporters occupy a diverse range of functions, and hence their regulation upon stress responses is also complicated. Here, we found that six DEGs were enriched into the pathway of ABC transporters (Table 1). Five of them were ABCB subfamily members, among which three [ABC transporter B family member 11, ABC transporter B family member 19 and putative multidrug resistance protein (LOC105038824)] were down-regulated and two [ABC transporter B family member 9 and another putative multidrug resistance protein (LOC105060251)] were up-regulated. Such differential expression patterns of these ABCB subfamily transporters indicate complicated functions in controlling the influx and efflux of chemical molecules 56,57 . In addition, we also identified an ABCC subfamily member, ABC transporter C family member 5, which was up-regulated. Interestingly, two putative multidrug resistance protein genes (LOC105038824 and LOC105060251) were differentially expressed under drought stress. As shown in previous studies, multidrug resistance-associated proteins are widely involved in the regulation of stress responses, such as salt stress, water deprivation, oxidative stress and fungal stress 58 . Taken together, these different types of ABC transporters likely play important roles in responses to drought stress in oil palm. Protein-protein interaction networks in response to drought responses. Other than significantly enriched GO terms and KEGG pathways, we also identified three protein-protein interaction networks, focused on ion transport, reactive nitrogen species metabolic process and nitrate assimilation (Fig. 6; Table 2). Eight DEGs were involved in the ion transport network, among which five [ammonium transporter 2 member 1 (AMT2-1), amino acid transporter ANT1 (ANT1), cation/H(+) antiporter 20 (CHX20), plasma membrane ATPase 4 (PMA4) and potassium channel AKT1 (AKT1)] were up-regulated and three [receptor-like protein kinase HSL1 (HSL1), plasma membrane ATPase (PMA) and ABC transporter G family member 42 (ABCG42)] were down-regulated (Table 2). Interestingly, most of the cation channel and transporter genes were up-regulated, including ammonium transporter 2 member 1 (AMT2-1), amino acid transporter ANT1 (ANT1), cation/H(+) antiporter 20 (CHX20) and potassium channel AKT1 (AKT1), indicating their positive effects in regulating ion homeostasis in oil palm 16 . Nevertheless, we also observed that three DEGs were down-regulated in the same network, implying that both positive and negative feedback regulations act on this network 59 . The reactive nitrogen species metabolic process is also suggested to have critical roles in stress responses, such as drought and salinity 60 . Consistently, we identified three up-regulated genes in this protein-protein interaction network: magnesium transporter MRS2-1 (MGT2), putative chloride channel-like protein CLC-g (AT5G33280) and serine/threonine protein kinase OSK1 (KIN10). Nitrate assimilation is another biological process affecting salt and water stress tolerance in plants 61 . Here, we found four DEGs involved in this network: two were up-regulated [cationic amino acid transporter 6, chloroplastic (CAT6) and sodium/hydrogen exchanger 4 (NHX4)], while the other two were down-regulated [amino acid permease 8 (AAP8) and vacuolar cation/proton exchanger 1a (CAX1)].
As these protein-protein interaction networks play crucial roles in the drought stress response, the DEGs involved in these networks provide important candidate genes to improve drought tolerance of oil palm by genetic engineering and/or selective breeding. Transcription factors in drought responses. To date, more and more studies have focused on the biological functions of transcription factors as regulatory-element-binding proteins 21 . Transcription factors are vital for development, response to intercellular and environmental signals, and pathogenesis 21 . Their expression changes are often associated with important cellular processes 15 . In this study, we identified 96 differentially expressed transcription factors that were classified into 28 families (Supplementary Table S6, Table 3). Previous studies have shown that transcription factors are broadly involved in drought/abiotic stress responses, such as members of the MYB, WRKY, DREB, NAC and AP2/EREBP families 27,62-64 . Here we also observed that genes in these transcription factor families were differentially expressed under drought stress in oil palm, further supporting their important roles in drought tolerance in plant species. Interestingly, we found several families of transcription factors that were rarely studied yet involved in abiotic stress responses, such as the C2H2, LFY and TALE transcription factor families. Therefore, it is important to understand the mechanisms of the regulatory functions of these genes, which might help improve drought tolerance of related plant species. Conclusions We investigated the transcriptomic response of roots to drought stress in oil palm seedlings. We identified over 1000 DEGs responding to the drought stress, including genes mainly involved in cell wall biogenesis and functions, phenylpropanoid biosynthesis and metabolisms, and ion transport and homeostasis. The DEGs were functionally enriched in the plant hormone signal transduction and ABC transporter pathways, which likely play crucial roles in the response to water deprivation. Three protein-protein interaction networks were identified that were related to ion transport, reactive nitrogen species metabolic process and nitrate assimilation. We also detected 96 transcription factors that were differentially expressed upon drought stress. The identified DEGs, pathways, protein-protein interaction networks, and transcription factors likely play important roles in drought tolerance of oil palm. This study helps understand more about the mechanism of the drought stress response and provides valuable resources for future genetic improvement of drought tolerance in oil palm. Future studies should analyse the functions of the DEGs identified, in combination with metabolomics and morpho-physiological approaches, to obtain a comprehensive overview of drought stress impact on oil palm. Table 3. Categorization of differentially expressed transcription factors and the patterns of their expression after drought stress in the roots of oil palm seedlings. Materials and methods Plant materials and drought treatment. Seeds of Tenera palms (Elaeis guineensis, Jacq.) were germinated with a standard protocol 3,53 . The seedlings were grown in a nursery for 120 days before the drought stress treatment.
Eight oil palm Tenera seedlings were planted in pots with a diameter of 20 cm containing natural soil with a water content of 23% (2.3 g water per 10 g soil), and placed in a greenhouse with a natural tropical temperature ranging from 28 to 34 °C, 30-50% relative humidity and a natural photoperiod. Four seedlings were watered twice a week to maintain a water content of > 23%, while the other four seedlings were used as the drought treatment and were not watered for two weeks. After the 14-day drought stress challenge, the mortality rate of the experimental group was estimated at ~ 50%. The root tissues of both the control and experimental groups were harvested and measured. The samples were then preserved at − 80 °C for RNA isolation. RNA extraction and sequencing. Total RNA was isolated from roots using the RNeasy Plant Mini Kit (Qiagen, Germany), according to the manufacturer's instructions. RNA quality was assessed on agarose gels and concentration was measured by NanoDrop (Thermo Fisher Scientific, USA). One µg of total RNA from each sample was first treated with RNase-free DNase I (Sigma-Aldrich, Singapore) and then used for mRNA library construction with the Illumina TruSeq RNA Library Prep Kit v2 (Illumina, USA), according to the manufacturer's instructions. The libraries were paired-end sequenced (2 × 75 bp) on an Illumina NextSeq500 (Illumina, USA). Three biological replicates were sequenced for both control and drought-treated samples. For validation of the RNA sequencing data using real-time quantitative PCR (qPCR), two µg of total RNA was treated with RNase-free DNase I (Sigma-Aldrich, Singapore) and then used for synthesizing cDNA with MMLV reverse transcriptase (Promega, USA). Identification of differentially expressed genes (DEGs). Raw sequencing reads were processed using the program process_shortreads in the Stacks package 65 to demultiplex samples, filter adaptors and remove low-quality reads. The program STAR 66 was employed to align and map the cleaned reads to the reference genome of oil palm 37 , with default parameters. Only uniquely mapped reads were used to analyse the expression patterns of annotated genes. The program HTSeq-count 67 was then used to count the expression level of each annotated gene, based on the gene features in the genome annotation file. We used both DESeq2 68 and EdgeR 69 to normalize the relative expression of transcripts across samples. Only transcripts with more than 1 count per million (CPM) mapped reads were retained for further analysis. Transcripts with a fold change (FC) of > 2 or < −2 and a significance threshold of 0.01 after application of the Benjamini-Hochberg false discovery rate (FDR) procedure 70 were considered differentially expressed genes between the drought treatment and control groups. Only DEGs that were consistently identified by both DESeq2 and EdgeR were used for further analysis. Functional annotation of DEGs. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) accessions 71 were retrieved for each DEG from the PalmXplore database of oil palm 72 . We first clustered all the samples using both principal component analysis (PCA) and heatmap approaches with the program ClustVis 73 , based on the relative expression of DEGs, to investigate the overall expression patterns between the drought treatment and control groups. Gene ontology enrichment analysis was carried out using the program Metascape 74 .
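The DEG thresholds described above (CPM > 1, |FC| > 2, FDR-adjusted significance of 0.01) translate into a short filtering step. A minimal sketch in Python using pandas and statsmodels follows; the column names and the toy table are assumptions about a generic differential-expression result, not the study's actual pipeline, which used the DESeq2 and edgeR packages in R. |FC| > 2 is read here as |log2FC| > 1.

```python
import pandas as pd
from statsmodels.stats.multitest import multipletests

# Hypothetical differential-expression table with one row per gene:
# "cpm" (counts per million), "log2fc" (log2 fold change), raw "pvalue".
res = pd.DataFrame({
    "gene": ["g1", "g2", "g3"],
    "cpm": [5.2, 0.4, 12.1],
    "log2fc": [2.3, -3.1, -1.8],
    "pvalue": [1e-6, 2e-4, 0.2],
}).set_index("gene")

# Keep only expressed genes (CPM > 1), then apply Benjamini-Hochberg FDR.
expressed = res[res["cpm"] > 1].copy()
expressed["fdr"] = multipletests(expressed["pvalue"], method="fdr_bh")[1]

# DEGs: fold change > 2 or < -2 (|log2FC| > 1) and FDR-adjusted P < 0.01.
degs = expressed[(expressed["log2fc"].abs() > 1) & (expressed["fdr"] < 0.01)]
print(degs.index.tolist())
```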
The Metascape program 74 was further employed to study the protein-protein interactions using network analysis, with Arabidopsis as the reference. Candidate signalling pathways associated with DEGs were classified and enriched by annotating against the Kyoto Encyclopedia of Genes and Genomes (KEGG) database 75 of oil palm. Validation of RNA-seq data using qPCR. DEGs were randomly selected and the relative expression patterns revealed by RNA-seq were examined by qPCR, to assess the effectiveness and accuracy of the whole DEG dataset. Primers for the randomly selected DEGs were designed according to the coding sequences, obtained from the annotated reference genome, using the program Primer3 76 . Both the β-actin gene and the glyceraldehyde 3-phosphate dehydrogenase (GAPDH) gene were used as housekeeping genes to normalize the relative expression of genes, according to our previous study 77 . The 2^−ΔΔCT method was used to quantify the expression level according to our previous method 78 . The experiment was carried out with three biological replicates, each with three technical replicates. Ethics declarations. All authors have reviewed the final version of the manuscript and agree to publish the data. Data availability Raw sequencing reads used in this study have been deposited to the NCBI SRA database under accession no. PRJDB9517.
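The 2^−ΔΔCT quantification mentioned in the qPCR validation above is easy to make explicit. The following is a minimal sketch in Python; the Ct values are illustrative, and averaging the two housekeeping genes into a single reference Ct is an assumption about how β-actin and GAPDH were combined.

```python
import numpy as np

def relative_expression(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    """2^-ddCt fold change of a target gene, drought-treated vs control.

    dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control).
    """
    d_ct_trt = ct_target_trt - ct_ref_trt
    d_ct_ctl = ct_target_ctl - ct_ref_ctl
    return 2.0 ** -(d_ct_trt - d_ct_ctl)

# Illustrative Ct values; the reference Ct here is the mean of the two
# housekeeping genes (beta-actin and GAPDH) in each condition.
fc = relative_expression(24.1, np.mean([18.2, 19.0]),
                         26.5, np.mean([18.4, 18.9]))
print(f"fold change = {fc:.2f}")  # > 1 means up-regulated under drought
```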
5,160.4
2020-05-23T00:00:00.000
[ "Environmental Science", "Biology" ]
Circulating Fibroblast Growth Factor-23 Levels are Associated with an Increased Risk of Anemia Development in Patients with Nondialysis Chronic Kidney Disease Fibroblast growth factor-23 (FGF23) is an established biomarker of adverse outcomes in patients with chronic kidney disease (CKD). Several cross-sectional studies have suggested a possible association between FGF23 and anemia in these patients. In this large-scale prospective cohort study, we investigated this relationship and examined whether high FGF23 levels increase the risk of incident anemia. This prospective longitudinal study included 2,089 patients from the KoreaN cohort study for Outcome in patients With CKD. Anemia was defined as hemoglobin level <13.0 g/dl (men) and <12.0 g/dl (women). Log-transformed FGF23 significantly correlated with hepcidin but inversely correlated with iron profiles and hemoglobin. Multivariate logistic regression showed that log-transformed FGF23 was independently associated with anemia (odds ratio [OR], 1.14; 95% confidence interval [CI], 1.04–1.24, P = 0.01). Among 1,164 patients without anemia at baseline, 295 (25.3%) developed anemia during a median follow-up of 21 months. In fully adjusted multivariable Cox models, the risk of anemia development was significantly higher in the third (hazard ratio [HR], 1.66; 95% CI, 1.11–2.47; P = 0.01) and fourth (HR, 1.84; 95% CI, 1.23–2.76; P = 0.001) than in the first FGF23 quartile. In conclusion, high serum FGF23 levels were associated with an increased risk for anemia in patients with nondialysis CKD. Materials and Methods Study design and population. The KoreaN cohort study for Outcome in patients With Chronic Kidney Disease (KNOW-CKD) is a prospective nationwide cohort study investigating various clinical courses and risk factors for the progression of CKD in Korean patients. Patients aged between 20 and 75 years with CKD stage 1-5 before dialysis who voluntarily provided informed consent were enrolled from nine university-affiliated tertiary-care hospitals throughout Korea between June 2011 and February 2015. The detailed design and methods of the study have previously been described elsewhere (NCT01630486 at http://www.clinicaltrials.gov) 24 . Among 2,238 patients in the KNOW-CKD cohort, 149 patients with missing data for hemoglobin, hepcidin, iron profiles, and C-terminal FGF23 (FGF23) levels were excluded. Finally, 2,089 patients were included in the present analysis (Fig. 1). Data collection. Baseline sociodemographic information and laboratory data were obtained from the KNOW-CKD database. The resting blood pressure (BP) in the clinic was measured with an electronic sphygmomanometer, and body mass index (BMI) was determined using the formula weight (kg)/height (m²). Serum samples collected for the initial study measurements were sent to the central laboratory of the KNOW-CKD study (Lab Genomics, Seongnam, Republic of Korea) and stored at −80 °C in a deep freezer. Along with the blood samples, urine samples were also immediately sent to the central laboratory and subjected to proteinuria measurement. Laboratory tests were obtained every 6 months in the first year and then annually thereafter. Serum creatinine was measured using an isotope dilution mass spectrometry (IDMS)-traceable method, and the estimated glomerular filtration rate (eGFR) was calculated using the four-variable Modification of Diet in Renal Disease (MDRD) equation 25 . Serum levels of hepcidin were measured using commercially available ELISA kits (DRG Instruments GmbH, Marburg, Germany).
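The two derived variables defined above, BMI and the four-variable MDRD eGFR, can be sketched as small Python helpers. The MDRD coefficients below are the commonly published IDMS-traceable ones, stated here as an assumption rather than taken from the paper.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) / height (m)^2."""
    return weight_kg / height_m ** 2

def egfr_mdrd4(scr_mg_dl: float, age: float, female: bool,
               black: bool = False) -> float:
    """Four-variable MDRD eGFR (mL/min/1.73 m^2), IDMS-traceable form:
    175 x Scr^-1.154 x age^-0.203 x 0.742 (if female) x 1.212 (if black).
    """
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(round(bmi(70, 1.72), 1))            # 23.7 kg/m^2
print(round(egfr_mdrd4(1.4, 54, False)))  # about 53 mL/min/1.73 m^2
```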
Anemia was defined as hemoglobin levels of <13.0 g/dL for men and <12.0 g/dL for women, according to World Health Organization (WHO) criteria. Transferrin saturation (TSAT) was calculated as the ratio of serum iron to total iron-binding capacity. Iron deficiency was defined as ferritin <100 ng/mL or TSAT <20%. Serum C-terminal FGF23 concentration was measured using an enzyme-linked immunosorbent assay (ELISA; Immutopics, San Clemente, CA, USA). Regarding the sensitivity of the FGF23 assay, the 95% confidence limit on 20 duplicate determinations of the 0 RU/mL standard is 1.5 RU/mL. The precision of the FGF23 assay was assessed for quality control. Intra-assay precision was calculated from 20 duplicate determinations of two samples, each performed in a single assay; the coefficients of variation for FGF23 levels of 33.7 and 302 RU/mL were 2.4% and 1.4%, respectively. In addition, inter-assay precision was calculated from duplicate determinations of two samples performed in 10 assays; the coefficients of variation for FGF23 levels of 33.6 and 293 RU/mL were 4.7% and 2.4%, respectively. Study endpoint. We first performed a cross-sectional analysis to clarify the relationship between serum FGF23 levels and anemia in 2,089 patients using the baseline data. We then further examined whether elevated FGF23 levels increase the future development of anemia in 1,164 patients who had no anemia at baseline (Fig. 1). For this analysis, the primary outcome was newly developed anemia during the follow-up period. Statistical analyses. All analyses were performed with IBM SPSS Statistics version 21 (IBM Corp., Armonk, NY, USA) and SAS version 9.4 (SAS Institute, Cary, NC, USA). All variables with a normal distribution were expressed as mean ± standard deviation; if data did not have a normal distribution, they were expressed as median and interquartile range (IQR). Categorical variables were expressed as number and proportion. Comparisons were made using one-way analysis of variance for continuous variables and the chi-square test for categorical variables, as required. Pearson's correlation test was used to evaluate the relationships between covariables, and a multivariable linear regression analysis for hemoglobin level was performed after adjustment for age, sex, presence of diabetes mellitus (DM), BMI, systolic BP (SBP), Charlson comorbidity index (CCI), smoking status, eGFR, albumin, phosphorus, 1,25(OH)2 vitamin D, presence of iron deficiency, hepcidin, C-reactive protein (CRP), proteinuria, and anemia treatment including iron replacement and erythropoiesis-stimulating agents (ESAs). In addition, a multivariable-adjusted logistic regression analysis was conducted to determine whether FGF23 was associated with anemia as defined based on WHO criteria. The results were presented as odds ratios (ORs) and 95% confidence intervals (CIs). Among 1,164 patients without baseline anemia, the cumulative anemia-free survival rates were estimated using the Kaplan-Meier method and differences between survival curves were compared with the log-rank test. Furthermore, multivariate Cox regression models for the development of anemia were constructed after rigorous and stepwise adjustments for confounding factors. Model 1 was the unadjusted model with no covariables. Model 2 included adjustment for age, sex, DM, BMI, SBP, CCI, and smoking. We constructed model 3 by adding eGFR, albumin, phosphorus, 1,25(OH)2 vitamin D, presence of iron deficiency, hepcidin, CRP, and proteinuria to model 2.
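The categorical definitions used above (WHO anemia criteria, TSAT, and iron deficiency) map directly onto small helper functions; a minimal sketch in Python follows (the function names are ours; the thresholds are the ones stated in the text).

```python
def is_anemic(hemoglobin_g_dl: float, male: bool) -> bool:
    """WHO criteria: Hb < 13.0 g/dL (men) or < 12.0 g/dL (women)."""
    return hemoglobin_g_dl < (13.0 if male else 12.0)

def tsat_percent(serum_iron: float, tibc: float) -> float:
    """Transferrin saturation: serum iron / total iron-binding capacity."""
    return 100.0 * serum_iron / tibc

def is_iron_deficient(ferritin_ng_ml: float, tsat: float) -> bool:
    """Iron deficiency: ferritin < 100 ng/mL or TSAT < 20%."""
    return ferritin_ng_ml < 100.0 or tsat < 20.0

print(is_anemic(12.5, male=True))                      # True
print(is_iron_deficient(80.0, tsat_percent(60, 350)))  # True (TSAT ~17%)
```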
Moreover, model 4 included iron replacement and ESA therapy in addition to the model 3 variables. The results were presented as hazard ratios (HRs) and 95% CIs. We also examined the association between FGF23 levels and the development of anemia through subgroup analysis using a fully adjusted multivariate Cox regression model. The patients were stratified according to sex, history of DM, presence of iron deficiency, treatment with renin-angiotensin system (RAS) blockers, and median values of age, SBP, BMI, CCI, eGFR, albumin, CRP, and 1,25(OH)2 vitamin D. P < 0.05 was considered statistically significant for all analyses. Results. Baseline characteristics. According to FGF23 quartiles, we categorized the study subjects into four groups and compared their baseline characteristics (Table 1). The mean age was 53.6 ± 12.2 years and 1,275 patients (61.0%) were male. The mean eGFR was 50.3 ± 30.2 mL/min/1.73 m² and was significantly lower in high FGF23 quartiles than in low quartiles (P < 0.001). The prevalence of DM and serum levels of hepcidin, phosphorus, and intact parathyroid hormone were higher, whereas calcium and 1,25(OH)2 vitamin D levels were lower, in high FGF23 quartiles (P < 0.001 for all). The mean hemoglobin level was 12.8 ± 2.0 g/dL and was significantly lower in high FGF23 quartiles (P < 0.001). Regarding iron profiles, serum iron levels and TSAT were also lower in high FGF23 quartiles; however, serum ferritin levels did not differ among quartiles. More patients received iron replacement and ESA therapy in the high FGF23 quartiles. Relationship between fibroblast growth factor-23 levels and baseline anemia. In Pearson correlation analyses, log-transformed FGF23 was inversely correlated with eGFR, albumin, calcium, 1,25(OH)2 vitamin D, iron, TSAT, and hemoglobin (Fig. 2), whereas it was positively correlated with CCI, phosphorus, intact parathyroid hormone, hepcidin, and proteinuria. However, age, BMI, CRP, and ferritin were not correlated with FGF23. We then performed in-depth analyses to clarify the association between FGF23 and anemia. In a multivariable linear regression analysis adjusted for confounding factors, there was a significant inverse relationship between log-transformed FGF23 and hemoglobin levels (β = −0.067, P = 0.004; Table 2). This finding was further strengthened in a multivariable logistic model (Table 3). After rigorous adjustment for confounders, log-transformed FGF23 was independently associated with anemia (OR, 1.14; 95% CI, 1.04-1.24, P = 0.01). When FGF23 quartiles were entered as a categorical variable in the model, the highest quartile of FGF23 was significantly associated with anemia compared with the lowest quartile (OR, 1.72; 95% CI, 1.19-2.50, P = 0.004). High fibroblast growth factor-23 levels increase the development of anemia. We further investigated whether FGF23 levels increase the future development of anemia. To this end, we selected 1,164 patients without anemia at the baseline measurement. During the median follow-up of 21 (IQR, 7-38) months, 295 (25.3%) patients developed anemia. Anemia occurred in 48 (16.5%), 63 (21.6%), 91 (31.3%), and 93 (32.0%) patients in the first, second, third, and fourth quartiles of FGF23, respectively (P < 0.001, Table 4). The Kaplan-Meier curves for anemia-free survival according to FGF23 quartiles are presented in Fig. 3. The time to the development of anemia was significantly shorter in high FGF23 quartiles (P = 0.009 for first vs. second; P < 0.001 for first vs. third and fourth).
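As a sketch of the stepwise Cox modelling described above, the following Python fragment uses the lifelines package; this is an assumption for illustration only, since the study itself used SPSS and SAS, and the data frame columns and file name are hypothetical stand-ins for the KNOW-CKD variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis table: one row per patient without baseline anemia;
# "time" = months to anemia or censoring, "event" = 1 if anemia developed.
df = pd.read_csv("know_ckd_anemia.csv")  # assumed file name

cph = CoxPHFitter()
# Model 2-style adjustment; models 3 and 4 would append eGFR, albumin,
# hepcidin, iron replacement, ESA therapy, etc. to this covariate list.
covariates = ["log_fgf23", "age", "male", "dm", "bmi", "sbp", "cci", "smoking"]
cph.fit(df[["time", "event"] + covariates],
        duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs per covariate
```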
An in-depth analysis of the association between FGF23 levels and the development of anemia was performed using multivariable Cox regression models, and high FGF23 levels remained significantly associated with incident anemia in most stratified groups (Fig. 4). Notably, this association was observed particularly in nondiabetics, patients aged <50 years, patients treated with RAS blockers, patients with iron deficiency, or patients with SBP <130 mmHg, BMI <25 kg/m2, CCI <3, eGFR >30 mL·min−1·1.73 m−2, or albumin ≥4.3 g/dL. Discussion In this prospective cohort study, we demonstrated an inverse relationship between serum FGF23 and hemoglobin in patients with nondialysis CKD. In addition, we showed that high serum FGF23 levels were significantly associated with an increased risk for the development of anemia even after rigorous adjustment for multiple confounding factors. This association was particularly evident in patients treated with RAS blockers and in patients with young age, relatively preserved eGFR, low comorbid burden, and iron deficiency. Our findings are of great clinical importance, as anemia is a frequent complication in patients with CKD and is associated with adverse clinical outcomes 26. In addition, a recent observational study by Honda et al. showed no significant relationship between FGF23 and hemoglobin levels in 282 patients undergoing hemodialysis 27. These findings were contradicted by two other studies: one on patients undergoing peritoneal dialysis and one on patients with CKD before dialysis treatment 21,22. These studies showed a significant inverse association between FGF23 and anemia; however, they are limited by their small sample sizes and cross-sectional analyses. Furthermore, as dialysis patients are more likely to receive iron preparations and ESA, interpretation should be made carefully in these patients. Of note, our study has a larger sample size than most previous studies, which assures the power to detect statistical significance. In addition, we demonstrated that high FGF23 levels increased the future development of anemia in a longitudinal observation of a CKD cohort. Our findings particularly corroborate a recent publication by the Chronic Renal Insufficiency Cohort Study investigators 23, indicating that elevated FGF23 is associated with prevalent anemia, change in hemoglobin over time, and the development of anemia. The mechanism responsible for FGF23-associated anemia is unknown. Although anemia in CKD is a multifactorial disorder, it is well explained by insufficient EPO, a hormone that stimulates red blood cell production in the bone marrow in response to low oxygen levels in the blood 6,8. EPO production is impaired at any given hematocrit concentration in patients with decreased renal function 6. Interestingly, one experimental study found that loss of FGF23 resulted in markedly augmented erythropoiesis in the peripheral blood and bone marrow of young adult mice, which can be accounted for by elevated serum EPO levels and EPO mRNA synthesis in the bone marrow, liver, and kidney 20. Conversely, in the same study, administration of FGF23 to wild-type mice resulted in a decrease in erythropoiesis 20. This experimental evidence, together with clinical research studies including ours, suggests that FGF23 is a negative regulator of erythropoiesis. Unfortunately, a correlation between FGF23 and EPO could not be determined in our study because EPO levels were not measured. Future studies are required to clarify the relationship between FGF23 and EPO.
Vitamin D deficiency has been suggested as a risk factor for anemia in patients with CKD. Previous studies from the Third National Health and Nutrition Examination Survey and the Study to Evaluate Early Kidney Disease demonstrated that vitamin D deficiency was independently associated with anemia in patients with CKD 9,28. Several studies using a burst-forming unit-erythroid assay have suggested a direct effect of vitamin D on the proliferation of erythroid precursor cells obtained from patients with CKD, with a synergistic effect with EPO [29-31]. In addition, vitamin D deficiency is associated with secondary hyperparathyroidism, which can induce bone marrow fibrosis and suppress erythropoiesis in CKD 32. Considering that FGF23 decreases 1,25-dihydroxyvitamin D3 levels by inhibiting CYP27B1 (1-α-hydroxylase) and stimulating CYP24A1 (24-hydroxylase) 10,11, vitamin D deficiency may be a potential mechanistic link explaining the relationship between FGF23 and anemia. In this study, however, the effect of FGF23 on the development of anemia was not altered after adjustment for 1,25-dihydroxyvitamin D3 levels. Moreover, there was no significant interaction between FGF23-related anemia and 1,25-dihydroxyvitamin D3 levels in the subgroup analysis. These findings indirectly support the result of the aforementioned experimental study, in which abolishing vitamin D signaling in FGF23-null mice did not resolve the erythropoietic abnormalities 20. It is well known that iron deficiency is an important factor that can promote anemia in CKD. Interestingly, animal and human studies have demonstrated that absolute and functional iron deficiency stimulates FGF23 production [33-36]. In line with these findings, our data showed that FGF23 was inversely correlated with iron profiles, including iron and TSAT, and positively correlated with hepcidin, which induces functional iron deficiency through iron sequestration and inhibition of iron absorption in the gastrointestinal tract 37. Furthermore, subgroup analyses showed that a significant association between high FGF23 levels and the development of anemia was evident in patients with iron deficiency and high inflammatory status. It can be presumed that iron deficiency induces anemia either directly or indirectly through a negative impact of FGF23 on erythropoiesis. Subgroup analyses also showed that the use of RAS blockers can affect the relationship between FGF23 and the development of anemia. This association was evident in patients treated with RAS blockers (HR, 1.18; 95% CI, 1.07-1.29; P = 0.001), but not in patients without RAS blockers (HR, 1.04; 95% CI, 0.75-1.45; P = 0.81). Several experimental and clinical studies have suggested a possible association between the renin-angiotensin-aldosterone system and erythropoiesis [38-41]. Angiotensin II has been demonstrated to be a physiologically important regulator of erythropoiesis, acting both as a growth factor for erythroid progenitors and as an erythropoietin secretagogue 41. In addition, serum aldosterone levels have been shown to play a role in the relationship between FGF23 and hemoglobin levels 21. Moreover, RAS activation is reported to induce FGF23 resistance 42. Together, these findings suggest that the negative effect of FGF23 on erythropoiesis may be more evident in a low renin-angiotensin-aldosterone state. Future studies are required to clarify the impact of RAS on FGF23-associated anemia. Several shortcomings of this study should be discussed.
First, because this is an observational study, it is possible that potential confounding factors were not entirely controlled. However, this study included a large number of patients and yielded consistent results in various multivariable Cox models after rigorous adjustment. Second, patients in our study had relatively higher eGFR than those in a previous study 21; thus, the association between FGF23 and anemia needs to be verified in patients with advanced stages of CKD. Although there was no significant interaction between FGF23-related anemia and kidney function, subgroup analyses showed that the association between FGF23 levels and incident anemia was particularly evident in patients with eGFR >30 mL·min−1·1.73 m−2. Furthermore, high FGF23 levels were also significantly associated with the future development of anemia in patients with low disease severity (e.g., well-controlled BP, no diabetes, no obesity, and a low comorbid burden). Presumably, there are many other factors that can affect erythropoiesis in uremic conditions, and these unmeasured factors may have overwhelmed the effect of FGF23 on anemia in patients with a high disease burden. Third, although iron deficiency modulates FGF23 [33-36], our study did not show a correlation between ferritin and FGF23 levels. However, it should be noted that ferritin is an acute-phase reactant and can be elevated in response to the uremic inflammatory condition despite the presence of functional iron deficiency, which may explain the poor correlation between ferritin and FGF23 in CKD. Finally, we performed a single measurement of FGF23 concentration at baseline and had no data from follow-up measurements. It would be interesting to see whether changes in FGF23 levels are concordant with changes in hemoglobin levels. Further longitudinal studies are required to examine this relationship. In conclusion, this study showed that serum FGF23 levels were inversely correlated with hemoglobin levels in patients with CKD and that patients with high FGF23 levels were more likely to have anemia. Furthermore, in patients without anemia at baseline, elevated FGF23 levels were associated with an increased risk of new development of anemia. Our findings suggest that FGF23 can be a useful predictor of anemia in patients with CKD. Further studies are required to clarify the mechanism of FGF23-associated anemia in these patients.
Investigation of Metal(II)-Curcumin-Glycine Complexes: Preparation, Structural Characterization and Biological Activities A novel Schiff base obtained from curcumin and glycine was prepared and reacted with Co, Ni, Cu and Zn ions in order to form stable metal complexes, which were characterized by elemental analysis, magnetic and molar conductance measurements, IR, UV-Vis, 1H NMR and PXRD. The data show that the complexes have the structure [M(II)-(cur-gly)H2O]. Electronic and magnetic data suggest a tetrahedral geometry for the Co, Ni and Zn complexes, whereas the Cu complex has a square planar geometry. The antimicrobial activity of cur-gly and its metal chelates was confirmed against bacterial species such as E. coli, P. aeruginosa, Enterococcus, B. cereus and S. aureus. Antifungal activity was screened against C. albicans, C. parapsilosis and A. flavus. The metal chelates show better antimicrobial activity than the parent cur-gly, and DNA photocleavage experiments show that the metal chelates effectively cleave pUC18 DNA. INTRODUCTION The role of metal complexes in biological systems has been widely studied, particularly for Schiff bases. These ligands are able to coordinate with metals through the imine nitrogen and the oxygen of the aldehyde or ketone. 1 Curcumin, 1,7-bis(4-hydroxy-3-methoxyphenyl)-1,6-heptadiene-3,5-dione, has a specific conjugated β-diketone moiety and acts as a powerful natural chelating agent and a stronger antioxidant 2 than vitamin E. Over the past years, its complexation with metals has attracted much attention for the treatment of Alzheimer's disease. 3,4 Curcumin compounds in combination with other anticancer therapies have been described to prevent the clonogenicity of cancer cells, to induce anti-proliferative and apoptotic effects on drug-resistant and sphere-forming cancer cells expressing stem cell-like markers, and to reverse chemoresistance. Amino acid derivatives such as cur-gly form stable compounds and also inhibit the growth of bacterial and fungal strains after complexation. 5,6 The present investigation 7 deals with the preparation of the ligand derived from curcumin and glycine (cur-gly) and of its Co(II), Ni(II), Cu(II) and Zn(II) complexes; structural characterization was carried out using various instrumental techniques. The antimicrobial activities and DNA cleavage of cur-gly and its metal(II) complexes have been investigated systematically. Materials AR grade reagents and chemicals, including curcumin, glycine, EtOH, methanol, and Co/Ni/Cu/Zn(II) chloride salts, were used. All chemicals and solvents were acquired from Merck. Infrared spectra were recorded on a SHIMADZU FT-IR Affinity-1 spectrophotometer by the potassium bromide pellet disc method. UV-Vis studies were carried out on a SHIMADZU 1800 spectrophotometer between 200-1100 nm using a suitable solvent. 1H NMR spectra of cur-gly and the Zn(II)-cur-gly complex were recorded on a Bruker DRX-300 MHz NMR spectrometer using DMSO-d6 as solvent and tetramethylsilane as internal standard. A Magway MSB Mk 1 magnetic susceptibility balance was used for the magnetic moment measurements at room temperature. Preparation of cur-gly To an ethanolic solution of curcumin (0.368 g, 0.001 mol), glycine (0.075 g, 0.001 mol) was added dropwise, followed by 3 drops of glacial acetic acid, and the mixture was heated under reflux for about 3-5 h on a hot plate at 55-60 °C. The resulting solution was reduced to one-third of its volume. The dark yellow precipitate formed was filtered off, washed with ethanol and finally dried over fused CaCl2. 8,9
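As a quick check on the 1:1 stoichiometry quoted above, the weighed masses can be converted to moles; a minimal sketch (the molar masses are standard literature values, not numbers taken from this paper):

```python
# Molar masses from standard tables (g/mol)
M_CURCUMIN = 368.38
M_GLYCINE = 75.07

n_cur = 0.368 / M_CURCUMIN  # ~0.001 mol, as stated in the procedure
n_gly = 0.075 / M_GLYCINE   # ~0.001 mol, as stated in the procedure
print(f"curcumin: {n_cur * 1e3:.3f} mmol, glycine: {n_gly * 1e3:.3f} mmol")
```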
The preparation route of cur-gly is outlined in Scheme 1. Preparation of metal complexes [M(II)-(cur-gly)H2O] An ethanolic solution of the metal(II) chloride (1 mmol) was mixed with an aqueous EtOH solution (15 mL) of cur-gly (1 mmol) and refluxed for about 5 h. The reaction mixture was then concentrated to 10 mL on a boiling water bath and cooled to room temperature 10. The solid product formed was filtered, washed with EtOH and recrystallized from methanol. The proposed reaction pathway is shown in Scheme 2. Antimicrobial studies The in vitro antimicrobial activity of cur-gly and [M(II)-(cur-gly)H2O] in DMSO was studied against bacterial species such as P. aeruginosa, E. coli, Enterococcus, B. cereus and S. aureus and fungal species such as C. albicans, C. parapsilosis and A. flavus by the Kirby-Bauer disk diffusion technique 11,12. Plates were incubated for 16 to 18 h at 35-37 °C, aerobically for fastidious organisms. The diameters of the inhibition zones were measured to the nearest mm with vernier calipers or a transparent mm scale. The point of abrupt diminution of growth, which in most cases corresponds to complete inhibition of growth, was taken as the zone edge. DNA cleavage studies Cleavage of plasmid pUC18 DNA by cur-gly and [M(II)-(cur-gly)H2O] was evaluated by the agarose gel electrophoresis technique, based on a procedure described in the literature. Test samples (100 mg/mL) were prepared in DMSO; about 5 µL of the plasmid was added to the test solution and incubated for 1.5 h at 37 °C. About 10 µL of sample/plasmid (bromophenol blue dye, 5:1 molar ratio) was loaded carefully into the electrophoresis wells alongside a standard DNA marker in Tris-acetate buffer (4.84 g Tris base, pH ~8; 0.5 M [CH2N(CH2CO2H)2]2 per 1 L) and finally loaded onto the agarose gel (1% gel, 10 µg/mL ethidium bromide). Gels containing the compound samples were run at a power supply of 100 V for about forty-five minutes, and the pUC18 DNA bands were visualized under a UV transilluminator to determine the extent of DNA cleavage. 13 Infrared spectra The IR spectrum provides valuable evidence concerning the nature of the functional groups coordinated to the metal atom. In cur-gly, the infrared spectrum showed a medium-intensity band at 1610 cm-1 that may be assigned to the ν(C=N) stretching vibration. 15 In the mid-IR spectrum of cur-gly, the band found at 3120 cm-1 is ascribed to the -NH2 stretching vibration. The bands appearing at 1589 and 1483 cm-1 in cur-gly correspond to the carboxylate asymmetric νas(COO−) and symmetric νsym(COO−) stretching frequencies. 16,17 In the complexes, the ν(C=N) band shifted to lower wavenumbers (1610-1570 cm-1), indicating coordination of the azomethine nitrogen atom to the metal ion. In the metal complexes, the asymmetric νas(COO−) and symmetric νsym(COO−) stretching bands shifted to lower frequencies, 1510 and 1402 cm-1 respectively, which reveals the formation of a bond between the metal and the carboxylate O atom. The IR spectra of all [M(II)-(cur-gly)H2O] complexes containing hydration and/or coordination water molecules display bands at 3487-3354 cm-1 due to the ν(O-H) vibration mode of the H2O molecules. Therefore, the fourth coordination position would be occupied by a water molecule in the metal complexes.
The IR spectra of the complexes also show new peaks in the 474-450 cm-1 and 560-568 cm-1 regions due to the formation of M-N and M-O bonds. Some important IR spectral assignments of cur-gly and its [M(II)-(cur-gly)H2O] complexes are provided in Table 3. Table 3: Important selected IR bands of cur-gly and [M(II)-(cur-gly)H2O] (IR assignments, wavenumber, cm-1). 1H NMR spectra The 1H NMR spectra of cur-gly and its [Zn(II)-(cur-gly)H2O] complex were recorded in DMSO-d6 and are given in Fig. 1. The spectrum of cur-gly displays a peak at δ 12.34 ppm attributable to the enolic -OH group of the curcumin moiety. This signal vanished in all complexes owing to deprotonation of the OH group. The azomethine proton of the zinc complex appeared at δ 9.67 ppm, indicating complexation of the azomethine nitrogen atom with the Zn(II) ion. The peaks at δ 6.05 ppm in the Schiff base and the complex are assignable to the two phenolic -OH groups of the curcumin moiety 18,19, which suggests that they are not involved in coordination. The multiplets in the range δ 6.73-7.56 ppm are assigned to the aromatic ring protons; in the metal(II) complexes, signals at δ 3.34-3.83 ppm are assigned to the asymmetric proton, while the CH2 protons appear at δ 2.50 ppm. Fig. 1: 1H NMR spectra of (a) cur-gly and (b) [Zn(II)-(cur-gly)H2O]. Electronic absorption spectral analysis The UV-Vis spectrum of cur-gly exhibits a band centered at 330 nm, the characteristic n→π* transition of the azomethine moiety. Upon coordination, this band shifts to a longer wavelength, demonstrating complex formation. The electronic spectra of [Co(II)-(cur-gly)H2O] and [Ni(II)-(cur-gly)H2O] display broad absorption bands at 620 and 615 nm, respectively, which may be attributed to d-d transitions; the electronic spectral assignments are summarized in Table 4. Powder XRD The PXRD patterns of cur-gly and its [Cu(II)-(cur-gly)H2O] complex were measured in the range 2θ = 0-80° (Fig. 2(a, b)). The average grain size D was calculated using Scherrer's equation, D = 0.89λ/(β cos θ), where D represents the average grain size of the phases under examination, λ the wavelength of the X-ray beam used, β the full width at half maximum of the diffraction peak, and θ the Bragg angle. From the XRD patterns, the average grain sizes for cur-gly and [Cu(II)-(cur-gly)H2O] are 67 nm and 50 nm, respectively. Upon complexation the particle size decreases, indicating metal-ligand coordination; a short computational sketch of this estimate is given after this section. Magnetic moment studies The observed magnetic susceptibility values support the geometries proposed above. Biological activities The biological activities of cur-gly and its complexes are summarized in Table 5. The results indicate that the Co and Cu complexes show greater inhibition towards S. aureus and that [Cu(II)-(cur-gly)H2O] shows additional inhibition towards E. coli; the Ni, Cu and Zn complexes show greater inhibition towards C. parapsilosis. Comparison of cur-gly and its [M(II)-(cur-gly)H2O] complexes shows that the metal chelates inhibit the microorganisms more effectively than the parent cur-gly under identical experimental conditions. Complexation increases the polarity of the metal ion by the partial sharing of its positive charge with the donor groups in the complexes. This increases the lipophilicity of the central metal atom, which ultimately favours its permeation through the lipid layer of the cell membrane [23-26].
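The Scherrer estimate discussed in the PXRD section above is easy to reproduce. In the sketch below, the wavelength is the Cu Kα value quoted earlier; the FWHM is an illustrative assumption chosen so that a reflection near 2θ = 12.5° gives a size of the order of the reported 50 nm:

```python
import numpy as np

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.154056, k=0.89):
    """Average crystallite size D = K*lambda / (beta*cos(theta)).

    two_theta_deg : peak position (2-theta, degrees)
    fwhm_deg      : full width at half maximum of the peak (degrees)
    """
    theta = np.radians(two_theta_deg / 2.0)  # Bragg angle in radians
    beta = np.radians(fwhm_deg)              # FWHM in radians
    return k * wavelength_nm / (beta * np.cos(theta))

# Illustrative: an FWHM of ~0.16 deg at 2-theta = 12.5 deg gives ~49 nm
print(scherrer_size_nm(12.5, 0.16))
```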
DNA Cleavage Cleavage of pUC18 (Plasmid University of California) DNA was investigated in the present work by agarose gel electrophoresis at 37 °C using the synthesized cur-gly and its [M(II)-(cur-gly)H2O] complexes in the presence of hydrogen peroxide as an oxidant. As shown in Fig. 3, some complexes exhibit cleavage activity in the presence of hydrogen peroxide even at low concentrations. Lane 1 shows the control DNA. Lane 2 does not display any substantial cleavage of pUC18 DNA even on longer exposure. 27 ACKNOWLEDGEMENT The authors are thankful for the IR, UV-Vis and PXRD instrumental facilities provided at JA College, Periyakulam, for NMR at GRI Gandhigram, and for the magnetic studies at the DRDO Laboratory, Thiagarajar College, Madurai, Tamil Nadu. We are thankful to Medauxin, Bangalore and Scubber Diagnstic Centre, Nagercoil, Tamil Nadu for the biological studies.
Exergy Analysis of Thermal Power Plant for Three Different Loads: This paper presents the energy and exergy analysis of the Tuzla thermal power plant in Tuzla, Bosnia and Herzegovina. The main aim of this paper is to analyze the components of a 200 MW steam power plant unit in order to identify and quantify the sites with the highest exergy losses and to calculate the exergy efficiency of all components when operating at nominal load. The influence of changes in ambient temperature and unit load on the exergy losses and exergy efficiency was included in the analysis. The analysis further includes the impact of operating the steam unit without high-pressure and low-pressure heaters on the exergy efficiency of the unit. The goal of the analysis is to determine the functional state of individual unit components after a long period of operation and maintenance, in order to take appropriate measures to improve their technical performance. Exergy losses during nominal operation of the steam power plant unit are largest in the boiler, at 313.42 MW, followed by the turbine with 205.60 MW, condenser 1 with 6.03 MW and condenser 2 with 5.75 MW, while the other components of the steam power plant have exergy losses in the range of 0.03 to 2.15 MW. Operation of the unit at nominal load without HPH results in a decrease in exergy efficiency of 5.60 to 9.80 %, while operation without HPH and LPH results in a decrease of 9.86 to 16.40 %, depending on the formula used in the calculation. The conclusion of the analysis is that the biggest exergy losses occur in the boiler and turbine, and consequently these components have the lowest exergy efficiency values. An increase in ambient temperature has different effects on the individual components of the thermal power plant, increasing the exergy losses of the boiler while reducing those of the turbine and condensers. INTRODUCTION Energy and exergy analyses of power generation systems are essential for the efficient utilization of energy resources, and these analyses have therefore attracted the interest of researchers and scientists in recent years. The most commonly used method for analyzing energy conversion processes is the first law of thermodynamics. However, a method combining the first and second laws of thermodynamics has been used increasingly in recent years. This method is used to calculate exergy and exergy losses in order to determine the efficiency of use of the available energy. Exergy analysis enables defining the difference between energy losses to the environment and the internal irreversibility of the process [1]. Exergy analysis evaluates the performance of system and process components, as well as the exergy at individual points of the energy transformation process. Based on the obtained data, it is possible to assess efficiency and determine the places in the process with the greatest losses [2]. It is for these reasons that today's approach to process analysis includes exergy analysis, which provides a more realistic view of the process and is a useful tool for engineering evaluation [3]. It enables a better assessment of the efficiency of the complete system, and better optimization, design and improvement of the performance of energy systems.
A large number of researchers have sought to understand and improve the operation of thermal power plants, steam turbines and advanced cycles using the methods of energy and exergy analysis. Exergy analysis of energy systems in general, and of thermal power plants in particular, was dealt with by Aljundi et al. [4]. Yang et al. [5] investigated a 660 MW ultra-supercritical steam power plant in China and showed that the heaviest exergy destruction is caused by the exhaust flue gases, which account for 73.51% of the total for the boiler subsystem. The exergy analysis of various thermal power plants has led to the conclusion that the boiler is the main source of exergy losses [6-12]. Many researchers have linked exergy to the cost analysis of thermal power plants [13]. Gogoi and Talukdar [14] analyzed how the boiler pressure and the fuel flow rate affect the parameters of the boiler, and found a significant influence of these two parameters on the performance of the energy cycle. Kanoglu et al. [15] analyzed and evaluated different efficiencies of energy conversion and heat transfer, taking into account steady-flow energy systems (turbines, compressors, pumps, heat exchangers, etc.), various power plants, cogeneration plants and refrigeration systems. Rashad and Maihy [16] analyzed the exergy and energy of the Shobra El-Khima power plant in Cairo and found that the highest exergy destruction occurred in the turbine (about 28% at different loads), while the highest energy loss was recorded in the condenser (55% at different loads). Sengupata et al. [17] analyzed the exergy of a supercritical coal-fired steam power plant with a capacity of 210 MW at the design values of the parameters and at different loads. Živić, Galović, Avsec and Holik [18] analyzed four variables at the inlet to the turbine, namely the gas inlet temperature to the turbine, the ratio of compressor outlet to inlet pressure, the inlet air temperature to the compressor, and the isentropic efficiencies of the compressor and turbine. The air temperature at the entrance to the turbine was kept constant, while the temperature of the flue gases at the entrance to the turbine varied from 900 to 1200 °C. The aim of this paper is to analyze the 200 MW unit of the thermal power plant in Tuzla from the perspective of energy and exergy. The primary task is the exergy analysis of the thermal power plant components at the nominal operating mode, as well as the impact of exergy losses and of operation without high-pressure and low-pressure heaters on the exergy efficiency. For the operating modes at 90 % and 80 % load, the exergy efficiencies will be calculated and a comparative analysis will be performed. Also, the influence of the ambient temperature on the exergy losses of the boiler, turbine and both steam condensers will be analyzed. PLANT DESCRIPTION After the completion of construction, the Tuzla thermal power plant 200 MW unit was synchronized with the grid for the first time in 1974, when trial operation began. Prior to modernization, the unit had accumulated 153,668 operating hours and had delivered 24,267,303 MWh of electricity to the grid. In the period from 2006 to 2008, the unit was revitalized by installing a new DCS control system, replacing the electrostatic precipitators, coal mills, and slag and ash transport systems, reconstructing the boiler, and installing electro-hydraulic turbine control and a new generator sealing system.
The Tuzla thermal power plant 200 MW unit has a single-shaft, three-cylinder condensing turbine with two steam outlets and one intermediate reheat. Each steam outlet from the turbine is connected to a separate condenser. Reheating is performed between the high-pressure and medium-pressure parts of the turbine. The high-pressure section consists of 12 stages and the medium-pressure section of 11 stages, while the low-pressure part, which is divided into two flows, has 4 stages of rotor blades. The turbine is equipped with 7 uncontrolled extraction points used to preheat the feedwater before it enters the boiler. The 200 MW unit has 4 low-pressure and 3 high-pressure regenerative heaters [19]. The extraction points are located at different turbine stages; for the low-pressure heaters they are as follows: extraction point IV for LPH 4, beyond the 8th stage; extraction point V for LPH 3, beyond the 21st stage; extraction point VI for LPH 2, beyond the 23rd stage; and extraction point VII for LPH 1, beyond the 25th stage. The data used for the thermodynamic analysis of the 200 MW unit are based on normative tests from 2014, at a unit state of 202,000 operating hours, with the unit having operated 6,000 hours after overhaul. The tests were performed for operation at 100 % unit load (200 MW power) with a steam production of 600 t/h, at 90 % unit load (180 MW power) with a steam production of 540 t/h, and at 80 % unit load (160 MW power) with a steam production of 480 t/h. The boiler heating surfaces were cleaned. The numerical analyses (energy and exergy analyses) performed in this paper do not require knowledge of the internal structure of the steam turbine or of any other component of the steam system [20-22]. The diagram of the 200 MW steam unit is shown in Fig. 1, and the operating conditions of the power plant are summarized in Tab. 1. THERMODYNAMIC ANALYSIS Exergy is the ability of a system to perform useful work when moving to a final state in equilibrium with the environment. In general, exergy is not conserved like energy but is destroyed in the system. Exergy destruction is a measure of irreversibility and is a source of performance loss. Through exergy analysis, it is possible to estimate the value of the exergy losses, as well as the size and source of the thermodynamic inefficiency of the system. Mass, energy and exergy balances for any control volume at steady state, with negligible potential and kinetic energy changes, can be expressed, respectively, as

$\sum \dot{m}_{in} = \sum \dot{m}_{out}$ (1)

$\dot{Q} + \sum \dot{m}_{in} h_{in} = \dot{W} + \sum \dot{m}_{out} h_{out}$ (2)

$\dot{E}_{heat} - \dot{W} + \sum \dot{m}_{in} e_{in} - \sum \dot{m}_{out} e_{out} = \dot{E}_{dest}$ (3)

where the net exergy transfer by heat ($\dot{E}_{heat}$) at temperature $T$ is given by

$\dot{E}_{heat} = \sum \left(1 - \frac{T_0}{T}\right) \dot{Q}$ (4)

and the specific exergy is given by

$e = (h - h_0) - T_0 (s - s_0)$ (5)

The total exergy rate was calculated according to the formula

$\dot{E}_x = \dot{m}\,e = \dot{m}\left[(h - h_0) - T_0 (s - s_0)\right]$ (6)

where $\dot{E}_x$, $T$, $\dot{m}$, $h$ and $s$ denote the total exergy rate, temperature, mass flow rate, specific enthalpy and specific entropy, respectively; the subscript 0 denotes the dead-state condition. The plant exergy efficiency can be defined as [23]

$\eta_{ex,plant1} = \frac{\dot{E}_{x\,net,e}}{\dot{E}_{x\,i}}$ (7)

where $\dot{E}_{x\,net,e}$ and $\dot{E}_{x\,i}$ are the net exergy at the output and the exergy at the input, respectively, calculated as

$\dot{E}_{x\,net,e} = \dot{W}_T - \dot{W}_{own\,consum}$ (8)

$\dot{E}_{x\,i} = \sum \dot{E}_{x,boiler,out} - \sum \dot{E}_{x,boiler,in}$ (9)

where $\dot{W}_T$ denotes the turbine power, $\dot{W}_{own\,consum}$ refers to the auxiliary devices consuming 10 % of the net power generation, and $\dot{E}_x$ represents the exergy rate, with the subscripts indicating the state points in Fig. 1. Exergy losses during coal combustion in the boiler and exergy losses related to the exhaust gases are neglected in this definition. On the other hand, the plant exergy efficiency can be defined as

$\eta_{ex,plant2} = \frac{\dot{W}_{net}}{\dot{m}_{fuel}\, e_{fuel}}$ (10)

This definition takes into account the irreversibility of the heat transfer from the gases to the water in the boiler pipe systems.
In Eq. (10), $\dot{m}_{fuel}$ stands for the fuel mass flow rate and $e_{fuel}$ is the specific fuel exergy, which can be expressed as

$e_{fuel} = \varphi \cdot LHV$ (11)

where φ = 1.05 is the exergy factor and LHV is the fuel lower heating value [23]. The above relations are used for the analysis of the steam unit, with an ambient temperature of 293.15 K and a pressure of 101.3 kPa. The thermodynamic properties of the working fluid at the state points in Fig. 1 were calculated with the REFPROP 8 software [24] and are summarized in Tab. 2 for operation of the thermal power plant at 100 %, 90 % and 80 % load. The values of the parameters at the state points next to which the load is not specified are valid for the nominal load. The values of the LHV and the coal mass flows used for the thermodynamic analysis are presented in Tab. 3. For operation in stationary mode, and by choosing each component in Fig. 1 as a control volume, the exergy losses and exergy efficiencies can be calculated in the manner shown in Tab. 4.
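The exergy bookkeeping of Eqs. (5)-(11) is straightforward to script. The sketch below uses the open-source CoolProp library in place of REFPROP 8 for water/steam properties; the live-steam conditions are illustrative assumptions, while the fuel data are the paper's 100 % load values. Exact agreement with Tab. 5 should not be expected, since the result depends on rounding and on how the own consumption is treated:

```python
from CoolProp.CoolProp import PropsSI

T0, P0 = 293.15, 101_300.0  # dead state: 20 degC, 101.3 kPa
h0 = PropsSI("H", "T", T0, "P", P0, "Water")
s0 = PropsSI("S", "T", T0, "P", P0, "Water")

def specific_exergy(T_K, P_Pa):
    """Eq. (5): e = (h - h0) - T0*(s - s0), in J/kg, for water/steam."""
    h = PropsSI("H", "T", T_K, "P", P_Pa, "Water")
    s = PropsSI("S", "T", T_K, "P", P_Pa, "Water")
    return (h - h0) - T0 * (s - s0)

def exergy_rate_MW(m_kg_s, T_K, P_Pa):
    """Eq. (6): exergy rate of a stream, in MW."""
    return m_kg_s * specific_exergy(T_K, P_Pa) / 1e6

# Illustrative live-steam state (assumed, not taken from Tab. 2):
# 600 t/h at 535 degC and 13 MPa
Ex_live_steam = exergy_rate_MW(600 / 3.6, 535 + 273.15, 13e6)

# Eqs. (10)-(11) with the paper's 100 % load fuel data
e_fuel = 1.05 * 8347.10                           # kJ/kg
Ex_fuel = (237.10 * 1000 / 3600) * e_fuel / 1000  # MW
W_gross = 195.99                                  # MW at generator terminals
eta_plant2 = (0.9 * W_gross) / Ex_fuel            # ~0.31 with 10 % own use
```

With the gross power instead of the net value, the same numbers give roughly 0.34, close to the reported 34.37 %; the residual difference illustrates how sensitive Eq. (10) is to the treatment of own consumption.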
RESULTS AND DISCUSSION The exergy losses of all components of the thermal power plant are shown in Fig. 2. It was found that the exergy destruction rate of the boiler is dominant over all other irreversibilities in the cycle. Boiler exergy losses alone amount to 59 % of the losses in the plant, while the exergy destruction rate of the condensers is only 0.84 to 1.07 %. The other components (HPH, LPH, feedwater pumps, condensate pumps and deaerator) have exergy loss percentages of 0.001 to 0.4 %. Moreover, 38.50 % of the exergy losses occur in the turbine. The values of exergy destruction, the percentage values of exergy destruction and the exergy efficiency of all components for nominal operation of the unit are given in Tab. 5. The exergy efficiencies of the thermal power plant components are shown in Fig. 3. The condensate pumps, with an exergy efficiency of 28.60 %, are the least efficient devices in the plant, and LPH1, with an exergy efficiency of 99.60 %, is the most efficient one. Components with lower exergy efficiency values are condenser 1 (18.50 %), condenser 2 (17.95 %), the boiler (44.50 %), the turbine (49.42 %) and the deaerator (54.22 %). The influence of the change in ambient temperature on the exergy losses of the boiler, turbine and both steam condensers during operation of the unit at nominal load is shown in Fig. 4: with increasing ambient temperature, the exergy losses of the boiler increase, while those of the turbine and both steam condensers decrease. A more detailed analysis of the influence of ambient temperature on the exergy efficiency of the steam condensers for three different loads of the 200 MW unit was presented in earlier research by the authors of this paper [25]. As previously explained, the plant exergy efficiency can be calculated by two different methods, using Eq. (7) and Eq. (10). Eq. (7) takes into account the energy carried by the working fluid, neglecting the irreversibility of the combustion process in the furnace. Eq. (10) is based on the exergy carried by the fuel combusted in the furnace, where the irreversibility of the combustion process and the exergy losses in the exhaust gases are not neglected. The exergy efficiency values of the unit thus obtained for operation at 100 %, 90 % and 80 % load are shown in Fig. 5. For the unit operating at 100 % load, the values of exergy efficiency are 77.26 % (Eq. (7)) and 34.37 % (Eq. (10)). The values obtained according to Eq. (10) refer to a coal consumption of 237.10 t/h, a coal lower heating value of 8347.10 kJ/kg, a boiler efficiency of 87.88 % and a power at the generator terminals of 195.99 MW. The exergy efficiencies at 90 % load are 76.89 % and 35 %; these values were obtained for a coal consumption of 219.20 t/h, a coal lower heating value of 8207.20 kJ/kg, a boiler efficiency of 88.24 % and an electric generator power output of 182.30 MW. Operating at 80 % load, the unit exergy efficiency values are 75.58 % and 32.96 %, with a coal consumption of 210.60 t/h, a coal lower heating value of 8006.70 kJ/kg, a boiler efficiency of 86.50 % and a power at the generator terminals of 161.50 MW. The influence of operating the 200 MW unit at the nominal regime without HPH in one case, and without HPH and LPH in the other, on its exergy efficiency was calculated and is shown in Fig. 6. In the first case, when operating without HPH, the efficiencies according to Eqs. (7) and (10) are 67.74 % and 28.80 %, respectively; these values are lower by 5.6 % and 9.8 % compared to operating the plant with HPH. In the other case, when operating the unit without HPH and LPH, the exergy efficiencies amount to 60.84 % and 24.51 %, respectively, which are 9.90 % and 16.40 % lower than the values when operating with HPH and LPH. Fig. 7 shows the exergy efficiencies of the boiler and turbine when the unit operates at the nominal mode without HPH, and without HPH and LPH. Operation at nominal load without HPH, and without HPH and LPH, results in boiler exergy efficiencies of 42.40 % and 40.30 %, respectively; compared to operation with HPH and LPH, these values are lower by 2 % and 4.2 %. Furthermore, the exergy efficiencies of the turbine are 44.20 % and 40 % when the unit operates without HPH, and without HPH and LPH, respectively. When both HPH and LPH are in operation, the turbine has 5.2 % and 10 % higher exergy efficiency than in the previously mentioned cases. CONCLUSIONS In this article, an analysis of the energy and exergy of a 200 MW steam unit was performed, together with an analysis of the influence of the change in ambient temperature on the values of the exergy losses and exergy efficiency. The analysis of this energy system revealed that the largest exergy loss is in the boiler, at 58.7 %, followed by the turbine with an exergy loss of 38.5 %. The percentage exergy destruction in condenser 1 and condenser 2 was 1.07 % and 0.84 %, respectively, while all heaters, the deaerator and the pumps destroyed less than 1 %. With the change in ambient temperature, the percentages of exergy losses and the exergy efficiencies of all components also changed, but the conclusion remained the same: the boiler and the turbine primarily determine the irreversibility of the analyzed cycle. The exergy efficiency of the unit when operating at nominal load is the highest, regardless of the calculation method. Different ways of calculating the exergy efficiency lead to different values, which are significantly lower when the irreversibility of the heat transfer in the boiler from the flue gases to the steam is taken into account. The exergy efficiency of the unit when operating at nominal load without high-pressure heaters is 6 to 10 % lower, depending on the calculation method, while when operating without high-pressure and low-pressure heaters it is lower by 10 to 16.5 %. The exergy efficiencies of the boiler and turbine during operation at nominal load without high-pressure heaters are 2 to 4.2 % lower compared to operation with high-pressure heaters. The exergy efficiencies of the turbine at this load without high-pressure and low-pressure heaters are further reduced and are 5 to 10 % lower. Operation of the unit at nominal load without high-pressure and low-pressure heaters leads to a greater reduction in the exergy efficiency of the turbine than of the boiler. The analysis confirmed previous research indicating that the boiler has the highest exergy losses in the steam unit. The analysis of the exergy losses and exergy efficiency indicates that certain components, such as LPH3, LPH4 and HPH6, have significantly lower exergy efficiency values and that their performance can be improved through revitalization or maintenance. There is also room for increasing the exergy efficiency of both steam condensers. The fact that about 3 MW of cooling-water exergy is released into the atmosphere at the cooling tower indicates the possibility of installing commercially available technologies for its use and the generation of additional electrical and thermal energy.
Figure 1: Schematic diagram of the thermal power plant.
Figure 2: Exergy destruction of the thermal power plant components for nominal operation mode.
Figure 3: Exergy efficiency of the thermal power plant components for nominal operation mode.
Figure 4: Exergy destruction as a function of ambient temperature.
Figure 5: Exergy efficiency of the unit for operation at 100 %, 90 % and 80 % load.
Figure 6: Exergy efficiency of the plant for operation at 100 % load without HPH and without HPH and LPH.
Figure 7: Exergy efficiency of the boiler and turbine when the unit operates without HPH and without HPH and LPH.
Table 1: Operating conditions of the thermal power plant.
Table 2: Thermodynamic properties, energy and exergy flow rates at the state points in Fig. 1.
Table 3: Values used for the thermodynamic analysis.
Table 4: Expressions for the exergy efficiency and exergy destruction rate of each component.
Table 5: Exergy destruction and exergy efficiency of the thermal power plant components for nominal operation mode.
Properties Enhancement of High Molecular Weight Polylactide Using Stereocomplex Polylactide as a Nucleating Agent As one of the most attractive biopolymers today in terms of sustainability, degradability, and material tune-ability, polylactide (PLA) homopolymer needs effective and efficient property improvement, and the utilization of stereocomplex polylactide (s-PLA) for this purpose deserves study. In this sense, we have studied the utilization of s-PLA, compared to poly D-lactide (PDLA) homopolymer, as a nucleating agent for PLA homopolymers. The mechanical and thermal properties and crystallization behavior of PLA homopolymers in the presence of nucleating agents were evaluated using a universal testing machine, a differential scanning calorimeter, and an X-ray diffractometer, respectively. PDLA and s-PLA materials can be used to increase the thermal and mechanical properties of poly L-lactide (PLLA) homopolymers. The s-PLA materials increased the mechanical properties by increasing the crystallinity of the PLLA homopolymers. PLLA/s-PLA blends showed enhanced mechanical properties up to a certain level (5% s-PLA content), beyond which the properties decreased because higher s-PLA content increases the brittleness of the blends. PDLA homopolymers increased the mechanical properties by forming stereocomplex PLA with the PLLA homopolymers. Non-isothermal and isothermal evaluation showed that s-PLA materials were more effective at enhancing PLLA homopolymer properties through a nucleating agent mechanism. Introduction The development of future materials is focused on sustainability, degradability, and material tune-ability. Sustainability is an important parameter to ensure that material resources will have a long life. Material tune-ability is a specific characteristic that refers to the ability to alter and adjust the properties of a material to suit existing applications. Degradability is related to the environmental issues caused by material waste. Environmental problems are receiving more attention, prompting efforts to recycle or replace non-degradable materials. The accumulation of non-degradable waste from the high consumption of fossil-fuel-based materials in various applications causes tremendous environmental problems. Recently, many researchers have focused on the development and modification of the physical-mechanical properties of biodegradable polymers to substitute them for traditional polymers. Polylactide (PLA) is one of the most attractive biopolymers, with many advantages that comply with future material development, such as sustainability, biodegradability, biocompatibility, and property modification. PLA is a bio-based polymer that is commercially available and has been in large-scale production since 2001 [1,2]. It has the potential to replace fossil-based polymers due to its biodegradability and biocompatibility [3,4]. PLA is a stiff and brittle material at room temperature, with a glass transition temperature of 55 °C and a melting temperature of 180 °C. It has weaknesses such as a low crystallization rate and heat distortion temperature, as well as insufficient crystallization ability during common industrial processes. As a thermoplastic biopolymer, it can be amorphous or semicrystalline in nature depending on its enantiomeric structures: poly L-lactide (PLLA), poly D-lactide (PDLA), or poly DL-lactide (PDLLA). Commercial PLA made from L-lactide is a brittle material.
The limitation in its thermal and mechanical stability restricts its wide application in replacing oil-based polymers that require high impact strength [5]. Industrial processing of PLA results in a low heat distortion temperature due to its low crystallization rate and degree of crystallinity within short processing times [6,7]. Considering its wide potential market, enhancement of the properties of PLA is mandatory in order to comply with specific applications. Blending PLA with other materials has been explored for specific enhancements of PLA properties. The addition of plasticizers to the PLA matrix can reduce its brittleness and extend its life span. Mixing PLLA and PDLA enantiomers forms a stereocomplex PLA (s-PLA) with a different crystal structure and higher thermal and mechanical properties [8-10]. Furthermore, blending PLA with nucleating agents can improve its thermal and mechanical properties as well as increase its crystallinity. As the crystal structure of s-PLA has unique characteristics with higher thermal and mechanical properties, crystalline s-PLA can be utilized as a nucleating agent to improve the thermal and mechanical properties of PLA [8,11]. Many studies have reported strategies to improve the properties of PLA through s-PLA formation [12-17]. Rahman et al. reported that the addition of PDLA to PLLA accelerated the crystallization of the PLLA homopolymer by enhancing the nucleation process, but slightly interfered with crystal growth [12]. The equimolar addition of high molecular weight PDLA (Mw > 2 × 10^5 g/mol) to PLLA homopolymer above the melting temperature preferentially crystallizes to form s-PLA [13]. Ji et al. reported an increased crystallization rate and nucleation density of PLLA and PDLA homopolymers in the presence of low molecular weight s-PLA [14]. The presence of poly DL-lactide copolymer in the PLLA and PDLA mixture inhibited PDLA chain diffusion during the crystallization process of s-PLA formation [15]. The presence of s-PLA crystallites with various chain structures in PLA brings higher mechanical and thermal properties [16]. Property enhancement by s-PLA affects PLA processing: thermal processing, additive manufacturing, and solution casting [17]. Thermal and mechanical enhancement of PLA through stereocomplexation has been utilized to obtain stable PLA materials suitable for many applications, especially as high-performance materials [8,14-21]. The mechanical and thermal properties of s-PLA are higher than those of pure PLA films, which is caused by the strong molecular interactions (hydrogen bonds and dipole-dipole interactions) between PLLA and PDLA chains [21-24]. The stereocomplexation of PLA by blending PLLA and PDLA homopolymers is a very well-known strategy for enhancing the properties of PLA-based materials, but it requires an effective s-PLA production method. Previously, we developed the effective stereocomplexation of high-molecular-weight PLA through supercritical fluid technology [24-26]. s-PLA also has some constraints for use in real applications due to its melt stability during industrial thermal processing and the high cost of PDLA at an equimolar ratio with PLLA. Nevertheless, with its crystalline structure and characteristics, s-PLA can be used as a nucleating agent. Additionally, the material form of s-PLA is important to comply with industrial processing.
By using supercritical fluid technology, a perfect s-PLA is obtained in a dry, powder form that is suitable for use as a nucleating agent [24]. Utilization of s-PLA materials as a nucleating agent (additive) instead of PDLA homopolymer will reduce the use of PDLA by up to a half. As commercial applications require competitive costs, the lower consumption of PDLA homopolymer will be beneficial for reducing production costs. For these reasons, it is necessary to find an effective and efficient utilization of s-PLA to improve PLA homopolymer properties. In this work, we have studied the utilization of s-PLA, compared to PDLA homopolymers, as a nucleating agent for PLA homopolymers. We have evaluated the mechanical properties, thermal properties, and crystallization behavior of PLA homopolymers in the presence of nucleating agents. Stereocomplex Formation The s-PLA was synthesized by combining PDLA (Mn ≈ 87,000 g/mol, Mw = 125,000 g/mol, PDI = 1.437) and PLLA (Mn ≈ 87,000 g/mol, Mw = 153,000 g/mol, PDI = 1.759) in a 1:1 weight ratio and processing the mixture in supercritical carbon dioxide-dichloromethane [24]. The processing conditions were optimized at 65 °C to achieve a pressure of 350 bar, and the reaction was allowed to proceed for the predetermined time (5 h). The reactor was opened immediately after the reaction had finished, yielding a dry, powder-shaped s-PLA. Polylactide Blending The s-PLA and PDLA were prepared as nucleating agents for high molecular weight PLLA materials. PLA blends were prepared by adding s-PLA particles or PDLA homopolymer at various contents into PLLA materials by a solution casting method. We denote these as PLLA/PDLAx and PLLA/s-PLAx for the blends containing PDLA and s-PLA, respectively, where x represents the PDLA or s-PLA content in the blend. PLLA/s-PLA3, PLLA/s-PLA5, and PLLA/s-PLA10 represent the PLA blends with 3%, 5%, and 10% s-PLA particle content, respectively. With similar notation, PLLA/PDLA3, PLLA/PDLA5, and PLLA/PDLA10 represent the PLA blends with 3%, 5%, and 10% PDLA content, respectively. Neat PLLA homopolymers were used as control materials. The mixture was dissolved in dichloromethane at a total polymer to total solvent ratio (weight to volume) of approximately 5:100. The mixture was vigorously stirred for 4 h and poured into a petri dish. The solvent was then evaporated at room temperature for 24 h, and the film was subsequently placed under vacuum at 80 °C for 48 h. Characterization The PLA blend films were characterized to evaluate the enhancement of their mechanical and thermal properties. The mechanical testing method was adopted from ASTM D-638 to evaluate tensile properties. The mechanical properties of the PLA blends were measured with a Universal Testing Machine (6800 Series, Instron, Norwood, MA, USA) using specimens of 20 mm × 5 mm and a sample thickness of approximately 80 µm. The distance between the supports was 10 mm and the extension rate was 1 mm/min. The thermal properties of the PLA blends were evaluated using a modulated differential scanning calorimeter (Modulated DSC 2910, TA Instruments, New Castle, DE, USA) at a fixed heating rate of 10 °C/min. Non-isothermal and isothermal crystallization were evaluated by varying the cooling rate and the crystallization temperature, respectively. X-ray diffraction spectra were recorded with a D/Max-2500 X-ray diffractometer (Rigaku, Japan) equipped with a Cu Kα source (λ = 1.54056 Å, 30 kV, 100 mA), a quartz monochromator, and a goniometric plate.
A polarized optical microscope was also used to evaluate crystal growth during the isothermal and non-isothermal crystallization processes. Results and Discussion The s-PLA and PDLA investigated here serve as nucleating agents for PLLA homopolymers. Various reports have studied the nucleating effect of s-PLA formed by combining PDLA with PLLA homopolymer. Here, we report the use of pre-formed s-PLA materials to evaluate the nucleating effect on PLLA homopolymers in comparison with PDLA as a nucleating agent. The s-PLA used as a nucleating agent was successfully synthesized in supercritical carbon dioxide-dichloromethane [24]. The s-PLA and PDLA were evaluated by DSC and XRD to confirm the characteristics of the materials, as shown in Figure 1. The synthesized s-PLA crystallites show a single peak at ~12° 2θ, compared with the PLLA and PDLA peaks at ~17° and 19° 2θ, which indicates the change in crystal structure and helical conformation of PLLA and PDLA driven by hydrogen-bonding interactions. The different diffraction peaks show that the PLLA and PDLA blends successfully formed s-PLA [20]. The s-PLA formation was also confirmed by its single melting temperature (Tm) at 230 °C, which is 50 °C higher than the Tm of the homopolymers (~180 °C). s-PLA can be produced by solvent casting [11-15,21,27], thermal processing [28,29], microwave irradiation [30], and supercritical fluid technology [24-26]. The solvent casting and thermal processing methods have molecular weight constraints in generating perfect s-PLA materials [21,28]. The microwave irradiation method is a fast and efficient process to produce bulk s-PLA materials in a short time [30]; however, it has limitations for producing s-PLA on a large scale. In this work, we obtained s-PLA using supercritical carbon dioxide-dichloromethane, which generated perfect, dry, powder-shaped s-PLA materials [24]. It is also possible to scale up this process to commercial production. Furthermore, a dry, powder-shaped s-PLA material is suitable for use as an additive in industrial applications. The addition of a nucleating agent into a polymer matrix is adopted to improve or enhance specific matrix properties, and improvement of the properties of PLA homopolymers is important for replacing conventional polymeric materials. The addition of s-PLA or PDLA as the nucleating agent was expected to improve the mechanical properties of the PLLA homopolymer. The nucleating agent content in the PLLA homopolymer was varied at 3%, 5%, and 10% weight ratios. The enhancements of the PLLA mechanical properties with the addition of s-PLA and PDLA are tabulated in Table 1. The addition of s-PLA or PDLA improves the mechanical properties of the PLLA homopolymer. The Young's modulus of the PLLA homopolymer with s-PLA contents of 3%, 5%, and 10% increased by 29.90%, 41.74%, and 44.47%, respectively. The s-PLA additions also increased the tensile strength up to the addition of 5% s-PLA, beyond which it decreased at 10% s-PLA. The s-PLA slightly reduced the elongation of the PLLA homopolymer at 3% and 5%, but reduced it drastically at 10% s-PLA content. On the other hand, the addition of PDLA increased the Young's modulus of PLLA by only 17.66% at the same content. It also slightly increased the tensile strength, by 11.21%, but increased the elongation at break.
Previous studies have generally focused on the enhancement of PLA properties through the formation of s-PLA materials from PLLA and PDLA at various blending ratios [9-15,27]. The mechanical properties of s-PLA increase at an equivalent ratio of PLLA and PDLA, for example the Young's modulus by up to 25% [24]. The enhancement of the mechanical properties also depends on the ratio of PLLA to PDLA [14,28]. In general, the mechanical properties of PLA blends improve by up to 25% (Young's modulus) with 50% PDLA, then decrease as the PDLA portion is increased further [31]. In this work, the addition of s-PLA and PDLA as nucleating agents increased the tensile strength and Young's modulus of the PLA material. Overall, both s-PLA and PDLA nucleating agents improve the mechanical properties; moreover, s-PLA gives greater improvements in tensile strength and Young's modulus than PDLA, while slightly reducing the elongation at break compared to PDLA. Compared with previous research, the addition of a small amount of s-PLA material into a PLA homopolymer contributes significant improvements in tensile strength and Young's modulus with only a slight reduction in elongation at break. From these data, the addition of s-PLA and PDLA enhanced the mechanical properties through different mechanisms: s-PLA is expected to enhance the mechanical properties through a nucleating effect, whereas PDLA enhanced the mechanical properties through the formation of stereocomplex crystallites that act as intermolecular cross-links connecting homopolymer crystallites [21]. We also evaluated the thermal properties as affected by the addition of s-PLA particles and PDLA homopolymers. Based on DSC scanning, the addition of s-PLA enhanced the crystallinity of the homopolymers. As shown in Table 2, the PLLA homopolymer showed a single melting point (Tm) at ~180 °C, whereas the PLLA blends showed two Tm values (Tm1 = ~180 °C and Tm2 = ~230 °C). Tm1 is the melting point of the PLLA homopolymer crystallites and Tm2 is the melting point of the s-PLA crystallites. Evaluation of the heat of melting at high temperature (∆H2) showed that PDLA gives a larger enthalpy than s-PLA. Theoretically, the same addition of PDLA should result in approximately double the ∆H2 compared with s-PLA, because the PDLA homopolymer constitutes approximately 50% of the s-PLA crystallites. The degree of crystallinity of the PLLA/PDLA blends showed a decrease in PLLA crystallites, because some portion of the PLLA homopolymer was converted into s-PLA crystallites through the hydrogen bonding (CH3···O=C interaction) between the PLLA and PDLA homopolymers [8]. The s-PLA particles slightly increased the degree of crystallinity of the PLLA homopolymer crystallites. The presence of s-PLA particles in the blend was confirmed by the Tm2 value, and the degree of crystallinity (χ) of s-PLA in the PLLA/PDLA3 blend showed double the value compared with the PLLA/s-PLA3 blend. Thus, the thermal evaluation data comply with the theoretical expectation for Tm and ∆H2. The thermal evaluation data from the DSC scanning were confirmed by XRD evaluation, as shown in Figure 2. The XRD patterns confirmed the presence of PLLA homopolymer and s-PLA crystallites in the polymer blends: diffraction peaks at 2θ = 14.6°, 17°, 19°, and 23.7° were observed for the homopolymer, whereas diffraction peaks for stereocomplex PLA were observed at 12.5°, 21°, and 24°.
X-ray diffraction confirmed the presence of s-PLA crystallites in the PLLA/PDLA3 and PLLA/s-PLA3 blends. The diffraction peak area of PLLA/s-PLA3 was larger than that of the PLLA/PDLA3 blend, in line with the DSC evaluation of the degree of crystallinity. Higher crystallinity affects the flexibility of the material: more crystalline materials are more rigid and brittle, which decreases elongation at break during mechanical testing. We also evaluated the melt stability, i.e., the crystallization behavior of the blends after melting, to assess their suitability for real industrial processes and applications, since maintaining the physical properties and processability of the materials is essential. The melt stability of the blends was evaluated by DSC, comparing the degree of crystallinity before and after melting at 250 °C at a scanning rate of 10 °C/min (shown in Figure 3). The degree of crystallinity of the PLLA homopolymer decreases drastically. The degree of crystallinity of the PLLA/PDLA blends shows significant improvement in crystallization after melting, but at a high PDLA content (10%) the degree of crystallinity decreases drastically in both the first and second scans. The decrease at high PDLA content is caused by the limited ability of high-molecular-weight PDLA to form stereocomplex crystallites during solution casting (first scan) and to re-assemble the enantiomeric homopolymer chains after melting (second scan) [28]. For the PLLA/s-PLA blends, the degree of crystallinity in the first and second scans does not differ significantly, in contrast to the PLLA/PDLA blends. From these data, s-PLA is the more effective nucleating agent, because PDLA must first form s-PLA before it can act as a nucleating agent. Some studies have reported the crystallization behavior of PLA homopolymers in the presence of PDLA as a source of s-PLA crystallites [14,27,29]. Here, we also evaluated the non-isothermal and isothermal crystallization behavior of PLLA homopolymers in the presence of PDLA and s-PLA. For non-isothermal crystallization, the materials were heated to 200 °C at a rate of 10 °C/min and held for 3 min at that temperature. In the non-isothermal crystallization process at different cooling and scanning rates, increasing the scanning rate decreased the crystallization temperature and the heat of melting, as shown in Figure 4. PLLA homopolymers required a certain time to initiate the crystallization process. In line with previous reports [27,29], the crystallization temperature (Tc) was increased by a slower cooling rate and by the presence of a nucleating agent. The presence of s-PLA crystallites in PLLA homopolymers increased the Tc values, corresponding to an acceleration of PLLA crystallization [27]. Figure 4 shows that the Tc and ΔH values of PLLA/s-PLA3 at a slow cooling rate are higher than those of PLLA/PDLA3, meaning the nucleating effect of s-PLA is more pronounced than that of PDLA homopolymers; PDLA homopolymers probably need to form s-PLA before acting as a nucleating agent. The crystallization study is important for evaluating the material's behavior during thermal processing. The use of s-PLA as a nucleating agent offers higher effectiveness than PDLA homopolymer because PDLA must form s-PLA before acting as a nucleating agent.
For s-PLA formation during PLLA and PDLA blending, molecular weight and structure are important for chain re-arrangement in the melt. Pre-formed s-PLA material can act directly as a nucleating agent during thermal or melt processing. To obtain information about crystallization behavior, a polarized optical microscope was used to observe the crystal growth of the polymeric materials, as shown in Figure 5. Non-isothermal crystallization showed that PLLA materials are able to form crystal structures in the presence or absence of a nucleating agent, but the crystals formed differ in size and in the number of spherulites per unit area. The PLLA homopolymer shows a larger crystal size than PLLA/PDLA3 and PLLA/s-PLA3, indicating that the PLLA homopolymer initiates crystal formation more slowly. PLLA/s-PLA3 showed a smaller average crystal size than PLLA/PDLA3; therefore, the s-PLA materials have a faster nucleating effect than PDLA materials, which must form s-PLA crystallites before acting as a nucleating agent. At an isothermal temperature of 120 °C, PLLA/s-PLA3 and PLLA/PDLA3 showed a higher density of crystallites than the PLLA homopolymer, and PLLA/s-PLA3 showed a smaller crystallite size and higher density than PLLA/PDLA3. Based on these data, s-PLA is a better nucleating agent than PDLA homopolymers. Conclusions PDLA and s-PLA materials can be used to increase the thermal and mechanical properties of PLLA homopolymers. s-PLA materials enhanced the mechanical properties by increasing the crystallinity of the PLLA homopolymer. PLLA/s-PLA blends showed enhanced mechanical properties up to a certain level (5% s-PLA content), beyond which the properties decreased because higher amounts of s-PLA increase the brittleness of the blends. The addition of s-PLA improved the tensile strength and Young's modulus by more than 25%. PDLA homopolymers increased the mechanical properties by forming stereocomplex PLA with PLLA homopolymers; the addition of 10% PDLA homopolymer improved the tensile strength of the PLLA homopolymer by up to 11% and the Young's modulus by 17%. Higher PDLA contents have difficulty forming perfect PLA stereocomplexes because of molecular weight limitations. Non-isothermal and isothermal evaluation showed that s-PLA materials are more effective at enhancing PLLA homopolymer properties through the nucleating-agent mechanism. Author Contributions: Conceptualization, P.P. and M.S.; methodology, P.P., M.S. and I.I.; writing and original draft preparation, P.P. and M.S.; writing, review and editing, P.P., M.S. and I.I. All authors have read and agreed to the published version of the manuscript.
4,922.4
2021-05-25T00:00:00.000
[ "Materials Science" ]
Synthesis of Ca-Doped Three-Dimensionally Ordered Macroporous Catalysts for Transesterification The novel three-dimensionally ordered macroporous (3DOM) CaO/SiO2, 3DOM CaO/Al2O3, and 3DOM Ca12Al14O32Cl2 catalysts for biodiesel transesterification were prepared by the sol-gel method. The 3DOM catalysts were characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), and Fourier transform infrared spectroscopy (FTIR). The hierarchical porous structure was achieved; however, only the 3DOM CaO/Al2O3 and 3DOM Ca12Al14O32Cl2 catalysts were used for transesterification, due to their high amount of active CaO. Various parameters, such as methanol-to-oil molar ratio, catalyst concentration, and reaction time, and their influence on biodiesel production were studied. The results showed that 99.0% RPO conversion was achieved using 3DOM Ca12Al14O32Cl2 as the catalyst under the optimal conditions of a 12:1 methanol-to-oil molar ratio and 6 wt.% catalyst with a reaction time of 3 hours at 65 °C. Introduction In recent years, the demand for fossil energy has been increasing with the rapid growth of global transportation and industrial evolution driving the world economy. In addition, fossil energy is becoming expensive due to limited resources, and it is predicted that fossil energy resources will eventually be exhausted [1]. Hence, more researchers are focusing on new alternative energy resources. Biodiesel is one such alternative energy, consisting of monoalkyl esters derived from recycled cooking oil, vegetable oil, and animal fats; it is a renewable, clean-combustion diesel replacement. Biodiesel is produced by transesterification of either vegetable oil or animal fat with methanol in the presence of a catalyst, yielding glycerol and biodiesel. This clean diesel provides low carbon monoxide emissions, low greenhouse gas emissions, low unburned hydrocarbons, and no sulfur dioxide content compared to fossil fuel. The physical properties and energy content of biodiesel are similar to those of fossil fuel; therefore, it can run conventional diesel engines efficiently without modification [2]. The catalysts for transesterification are categorized into two groups: homogeneous and heterogeneous. Homogeneous catalysts, such as sodium hydroxide or potassium hydroxide, are most often used commercially because of their high catalytic activity and high productivity; however, the product must be neutralized to prevent engine corrosion. Solid (heterogeneous) catalysts, on the other hand, are easily separated from the produced biodiesel. For example, KNO3/Al2O3, La2O3/ZrO2, and K2CO3 on alumina/silica supports are highly active for the transesterification of vegetable oils [3-5]. Recently, a new hierarchical material, the three-dimensionally ordered macroporous (3DOM) structure, has been extensively studied because of its unique ordered structure with interconnected walls. Owing to this structure, refined palm oil (RPO) may not only enter the 3DOM catalyst pores easily but also transfer into the inner area of the catalyst, which may enhance biodiesel production. In this work, three-dimensionally ordered macroporous (3DOM) CaO/SiO2, 3DOM CaO/Al2O3, and 3DOM Ca12Al14O32Cl2 were synthesized by the sol-gel method (SG). The obtained 3DOM catalysts were characterized by various techniques, such as X-ray diffraction (XRD), scanning electron microscopy (SEM), and Fourier transform infrared spectroscopy (FTIR).
The catalytic efficiency of the 3DOM catalysts for transesterification was investigated, the optimal conditions for transesterification using the obtained 3DOM catalysts were studied, and the properties of the biodiesel produced using the 3DOM catalysts are reported herein. Experimental Procedures Monodispersed poly(methyl methacrylate) (PMMA) spheres were synthesized by emulsifier-free emulsion polymerization, as previously described by Phumthiean [6]. A mixture of water and methyl methacrylate was stirred at 75 °C under a nitrogen (N2) atmosphere. Then, 1.2 g of 2,2′-azobis(2-amidinopropane) dihydrochloride, an initiator, was added to the mixture and stirred for 2 h until the reaction was complete. After the reaction, the mixture was cooled to room temperature and filtered through glass wool to remove large particles. The PMMA spheres were self-assembled by gravitation until a clear solution and colloidal crystal arrays were observed. The obtained PMMA arrays were dried at 60 °C for 24 h. Preparation of 3DOM Catalysts by the Sol-Gel Method. The Ca-doped 3DOM catalysts were synthesized by the sol-gel method using calcium nitrate tetrahydrate (Ca(NO3)2·4H2O). The precursor of each catalyst was mixed in ethanol and stirred for 30 min at room temperature. Subsequently, each precursor solution was added onto the PMMA arrays until the PMMA template was saturated with the precursor solution. The materials were dried at 80 °C for 24 h and calcined at 700 and 800 °C with a heating rate of 2 °C/min. The atomic mol percentages are shown in Table 1. Characterization of the 3DOM Catalysts. All 3DOM catalysts were characterized by XRD, FTIR, and SEM in order to investigate the crystal structure, functional groups, and morphology of the synthesized materials, respectively. X-ray diffraction measurements were performed on a Bruker D8 Advance diffractometer using Cu Kα radiation (λ = 0.154 nm) with a current of 40 mA and a voltage of 40 kV; data were collected in the range of 10-70° 2θ with a step size of 0.02°. The infrared spectra were recorded at room temperature in the range of 400-4000 cm−1 with 32 scans at 4 cm−1 resolution using a Bruker Equinox 55 FTIR spectrometer. The surface morphology of the catalysts was observed with a FEI Quanta 450 scanning electron microscope (SEM) at an acceleration voltage of 20 kV. The transesterification reaction was carried out in a 100 mL three-neck round-bottom flask equipped with a magnetic stirrer. 10 g of palm oil was added to the flask, and the oil was raised to the designated temperature. The 3DOM catalyst and methanol were then added to the flask, using 8 wt.% catalyst and a 12:1 methanol-to-oil molar ratio. The reaction was stirred at 750 rpm at 65 °C for 3 h. After the reaction was complete, the 3DOM catalyst was separated by centrifugation. The optimal conditions for transesterification using the 3DOM catalysts were determined by varying the amount of catalyst (6-12 wt.%), the methanol-to-oil molar ratio (9:1-24:1), and the reaction time (3-5 h), respectively, with the reaction temperature fixed at 65 °C.
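As a quick sanity check on these recipes, the sketch below converts a methanol-to-oil molar ratio into the corresponding methanol charge for the 10 g oil batch used here. The average molar mass assumed for palm oil triglycerides (~847 g/mol) and the methanol density are literature values, not numbers reported in this paper.

```python
# Minimal sketch: converting a methanol-to-oil molar ratio into masses/volumes.
# MW_OIL is an assumed literature average for palm oil triglycerides; it is
# not reported in the paper.
MW_OIL = 847.0     # g/mol, assumed average molar mass of palm oil triglycerides
MW_MEOH = 32.04    # g/mol, methanol
RHO_MEOH = 0.792   # g/mL, methanol density at room temperature

def methanol_charge(oil_mass_g: float, molar_ratio: float) -> tuple[float, float]:
    """Return (grams, mL) of methanol for a given MeOH:oil molar ratio."""
    n_oil = oil_mass_g / MW_OIL
    m_meoh = n_oil * molar_ratio * MW_MEOH
    return m_meoh, m_meoh / RHO_MEOH

grams, ml = methanol_charge(oil_mass_g=10.0, molar_ratio=12.0)
print(f"12:1 ratio with 10 g oil -> {grams:.2f} g ({ml:.1f} mL) methanol")
# The 8 wt.% catalyst loading is taken relative to the oil mass: 0.8 g here.
```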
Refined Palm Oil (RPO) Conversion Analysis and Biodiesel Analysis. Nuclear magnetic resonance (NMR) spectra were recorded on a VARIAN NMR spectrometer. The spectra were obtained at 400 MHz for 1H, using CDCl3 as the solvent. The conversion to fatty acid methyl esters was measured using the peak areas of the 1H NMR signals of the methyl ester at 3.6 ppm and of the methylene at 2.3 ppm [8]. The percentage conversion to fatty acid methyl esters was calculated as C (%) = (2A1 / 3A2) × 100, where C is the percentage of fatty acid methyl ester conversion, A1 is the peak area of the methyl esters, and A2 is the peak area of the methylene protons. The biodiesel product from transesterification was purified and its properties were studied using the following standard methods: acid value (ASTM D664), kinematic viscosity at 40 °C (ASTM D445), density (ASTM D1298), and flash point (ASTM D93). The average diameter of the PMMA spheres was approximately 338 ± 38 nm; these PMMA colloidal crystals were used as templates for the syntheses of the 3DOM CaO/SiO2, 3DOM CaO/Al2O3, and 3DOM Ca12Al14O32Cl2 catalysts for the transesterification reaction. For 3DOM CaO/Al2O3, the effect of the catalyst amount was investigated from 6 to 12 wt.% with a methanol-to-oil molar ratio of 12:1 at 65 °C for 3 h. It was found that the refined palm oil (RPO) conversion increased with increasing catalyst amount (Figure 6(a)); 12 wt.% of catalyst gave the highest RPO conversion, 58%. Although the stoichiometric ratio of methanol to palm oil for transesterification is 3:1, excess methanol shifts the equilibrium forward to produce more biodiesel. In this study, the reaction using the 3DOM CaO/Al2O3 catalyst at a 12:1 MeOH:oil ratio resulted in 47% RPO conversion, whereas the higher MeOH:oil ratios of 18:1 and 24:1 produced only 40% and 17% RPO conversion, respectively (Figure 6(b)). The decrease in RPO conversion may be due to a shift of the reaction equilibrium: excess methanol may increase the solubility of glycerol, so the equilibrium shifts backward, reducing the biodiesel yield [16,19]. The effect of reaction time is shown in Figure 6(c). A reaction time of 3 h gave the lowest RPO conversion, 47%, because the reaction was incomplete; as the reaction time was increased, the RPO conversion rose to a maximum of 94% at 5 h. The optimal conditions using the 3DOM CaO/Al2O3 catalyst were therefore 12 wt.% catalyst, a methanol-to-oil molar ratio of 12:1, and a reaction time of 5 h at 65 °C. For the 3DOM Ca12Al14O32Cl2 catalyst, the catalyst amount was varied from 3 to 12 wt.%. Increasing the catalyst amount gives higher RPO conversion (Figure 6(a)); however, a high catalyst amount (12 wt.%) was not suitable for transesterification because it led to a highly viscous mixture. The optimum catalyst amount is 6 wt.%. At a MeOH:oil molar ratio of 9:1, an RPO conversion of 96% was observed (Figure 6(b)); increasing the methanol-to-oil molar ratio to 12:1 and 18:1 gave the maximum RPO conversion of 99%. The effect of reaction time is shown in Figure 6(c); the highest RPO conversion was obtained at 3 h. The optimal conditions for transesterification were thus a catalyst amount of 6 wt.%, a methanol-to-oil molar ratio of 12:1, and a reaction time of 3 h, giving 99% RPO conversion.
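The conversion expression reconstructed above reduces to simple arithmetic on the two peak integrals; a minimal sketch with hypothetical integral values follows. The factors 2 and 3 normalize the proton counts of the two signals (2H for the methylene, 3H for the methyl ester).

```python
# Minimal sketch of the 1H NMR conversion calculation reconstructed above:
# C (%) = 100 * (2 * A1) / (3 * A2), where A1 is the integral of the methyl
# ester signal (~3.6 ppm, 3H) and A2 the integral of the methylene signal
# (~2.3 ppm, 2H).
def fame_conversion(a1_methyl_ester: float, a2_methylene: float) -> float:
    return 100.0 * (2.0 * a1_methyl_ester) / (3.0 * a2_methylene)

# Hypothetical integrals, normalized to the methylene signal:
print(f"conversion = {fame_conversion(a1_methyl_ester=1.49, a2_methylene=1.0):.1f} %")
```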
Biodiesel Properties. The properties of the biodiesel obtained from transesterification using 3DOM CaO/Al2O3 and 3DOM Ca12Al14O32Cl2 were evaluated against the US (ASTM) and European (EN) biodiesel standards, as shown in Table 2. The density of the produced biodiesel was acceptable, but the acid value and viscosity did not meet the standard values. This may be due to the high acid value (3.02) of the raw material; the raw material should therefore be esterified to reduce free fatty acids [7]. The high viscosity may be due to Ca contamination, which causes a side reaction (saponification) producing soap that increases the viscosity of the biodiesel. The FTIR spectra of the 3DOM materials showed carbonate and hydroxyl (OH−) groups, depending on the composition of the 3DOM materials: different calcination temperatures did not significantly alter the FTIR pattern of the 3DOM CaO/SiO2, while the spectra of both 3DOM CaO/Al2O3 and 3DOM Ca12Al14O32Cl2 exhibited characteristic bands at 500-700 cm−1 and 700-900 cm−1 corresponding to Al-O vibrations of octahedral and tetrahedral Al2O3, respectively [3]. The strong broad band centered at 1460 cm−1 was attributed to the C-O vibration of the carbonate group, and the band at 3645 cm−1 indicated the OH stretching vibration of water molecules absorbed onto CaO, giving rise to Ca(OH)2 [16]. The 3DOM CaO/SiO2 was not suitable for the reaction because it contained no active CaO, and 3DOM materials calcined at 800 °C exhibited a collapsed 3DOM structure. Only 3DOM CaO/Al2O3 and 3DOM Ca12Al14O32Cl2 were used as solid catalysts for the transesterification of palm oil with methanol at 65 °C and a stirring rate of 750 rpm, because the ordered structure of both catalysts was maintained with an active CaO phase after calcination. Conclusion For 3DOM CaO/Al2O3, the optimal conditions were 12 wt.% catalyst, a methanol-to-oil molar ratio of 12:1, and a reaction time of 5 h under a stirring rate of 750 rpm at 65 °C, giving 93% RPO conversion. The optimal conditions for the novel 3DOM Ca12Al14O32Cl2 catalyst were a catalyst amount of 6 wt.%, a methanol-to-oil molar ratio of 12:1, and a reaction time of 3 h under a stirring rate of 750 rpm at 65 °C, giving 99% RPO conversion. The 3DOM Ca12Al14O32Cl2 catalyst was more efficient than conventional solid catalysts and the 3DOM CaO/Al2O3 catalyst. The viscosity and acid value of the biodiesel product were slightly outside the standard ranges (ASTM D445 and ASTM D664, respectively), possibly because of Ca contamination causing saponification; the density of the biodiesel (0.86 g/cm3) was within specification (EN 14214). Figure 6: RPO conversion: (a) effect of catalyst amount (wt.%), (b) effect of MeOH:oil molar ratio, and (c) effect of reaction time. Table 1: Atomic mol percentages used for Ca-doped 3DOM material synthesis. Figure 3: XRD patterns of the 3DOM CaO/SiO2 (H: Ca(OH)2, O: CaO, S: Ca2SiO4, C: CaCO3). Figure 4: XRD patterns of the 3DOM CaO/Al2O3 at 700-800 °C and the 3DOM Ca12Al14O32Cl2 (H: Ca(OH)2, O: CaO, M: Ca12Al14O32Cl2, C: CaCO3). Table 2: Properties of the RPO biodiesel.
3,172.4
2018-03-13T00:00:00.000
[ "Chemistry", "Materials Science", "Environmental Science" ]
Optimization Methods of Tungsten Oxide-Based Nanostructures as Electrocatalysts for Water Splitting Electrocatalytic water splitting, as a sustainable, pollution-free and convenient method of hydrogen production, has attracted the attention of researchers. However, due to the high reaction barrier and slow four-electron transfer process, it is necessary to develop and design efficient electrocatalysts to promote electron transfer and improve reaction kinetics. Tungsten oxide-based nanomaterials have received extensive attention due to their great potential in energy-related and environmental catalysis. To maximize the catalytic efficiency of catalysts in practical applications, it is essential to further understand the structure-property relationship of tungsten oxide-based nanomaterials by controlling the surface/interface structure. In this review, recent methods to enhance the catalytic activities of tungsten oxide-based nanomaterials are reviewed, classified into four strategies: morphology regulation, phase control, defect engineering, and heterostructure construction. The structure-property relationship of tungsten oxide-based nanomaterials affected by the various strategies is discussed with examples. Finally, the development prospects and challenges of tungsten oxide-based nanomaterials are discussed in the conclusion. We believe that this review provides guidance for researchers to develop more promising electrocatalysts for water splitting. Introduction With the rapid development of global modernization, the excessive consumption of non-renewable energy sources, such as oil and coal, has resulted in the crises of the greenhouse effect, energy shortage, and severe environmental pollution [1-5]. It is urgent to develop clean and sustainable energy to alleviate energy pressure and ameliorate environmental problems. Therefore, sustainable clean energy resources, such as wind, solar, tidal, and hydropower, have been extensively studied [6-11]. However, these energy sources have the disadvantages of uneven geographical distribution and intermittency, which seriously restrict their popularization and application [12]. Hydrogen fuel is expected to play a significant role in developing sustainable clean energy due to its high energy density, high energy yield (122 kJ/g), and environmentally friendly characteristics [13-15]. However, it is estimated that nearly 96% of worldwide hydrogen comes from the conversion of fossil fuels, where the pollution byproducts cause environmental problems, such as climate warming [14,16-19]. Electrochemical water splitting, as a sustainable method of producing hydrogen with simple operation, mild reaction conditions, environmental protection, and low cost, has attracted much attention from researchers [13,20-22]. The water-splitting process involves two half-reactions, i.e., the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER). However, due to the slow-kinetic four-electron transfer process with a high reaction barrier, the potential required for water splitting is always higher than the theoretical decomposition potential of water (1.23 V), resulting in additional electrical power consumption [23,24]. Effective electrocatalysts are required to reduce this overpotential. To this end, various tungsten oxide-based heterostructures, such as semiconductor-WOx, WOx-C, and metal-WOx heterostructures, have been developed [54-57]. These efforts have substantially promoted the development of tungsten oxide-based electrocatalysts.
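The 1.23 V theoretical decomposition potential quoted above follows directly from the standard Gibbs free energy of water formation via E = ΔG/(nF); a one-line check using standard textbook constants (not values taken from this review):

```python
# Minimal check of the 1.23 V theoretical water-splitting voltage:
# E = dG / (n * F), with standard textbook values (not from this review).
DG_WATER = 237.1e3   # J/mol, standard Gibbs free energy to split liquid H2O
N_ELECTRONS = 2      # electrons transferred per H2 molecule
FARADAY = 96485.0    # C/mol

e_cell = DG_WATER / (N_ELECTRONS * FARADAY)
print(f"theoretical decomposition voltage = {e_cell:.3f} V")  # ~1.229 V
```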
Tungsten oxides are widely utilized in energy storage [58-63], sensors [64-68], catalysis [69-74], and other fields because of their adjustable valence states (W4+, W5+, and W6+) and band gaps [31,75,76], various morphologies from zero to three dimensions [44,77], and different crystal phases. Here, the research progress of tungsten oxide-based electrocatalysts for water splitting over the last few years is reviewed. Emphatically discussed are the impacts of crystal phase, morphology, defect engineering, and heterojunction effects on the electronic structure and catalytic activity of tungsten oxide-based nanomaterials. We also present a brief summary and outlook of tungsten oxide catalyst research, with the intention that this review may provide some insights into the construction of high-efficiency oxide catalysts. Phase Control Controlling the crystal phase of tungsten oxide and optimizing its physical and chemical properties have been proven effective in improving catalytic performance [41,42]. WOx is not simply composed of W6+ and O2− ions; its electronic structure mainly consists of hybrid conduction and valence band states of W 5d and O 2p [32]. The electronic structure of WOx in different crystal phases, such as the monoclinic, orthorhombic, and hexagonal phases, is affected by the W-O bond length [32,78,79]. Consequently, it is possible to optimize their catalytic performances by carefully adjusting the crystal phases [30,32,80]. The relatively stable monoclinic and metastable hexagonal phases have attracted extensive research for electrocatalytic performances due to their tunnel structures and rich intercalation [81].
The synthesis methods and the properties (electrolyte, over-potentials at 10 mA cm−2, and Tafel slopes) of tungsten oxide-based electrodes with different phases are summarized in Table 1. Heat treatment is an effective way to control the crystal phase of WOx-based nanomaterials, in which the temperature is a vital factor. For instance, Guninel et al. annealed orthorhombic WO3·H2O in air, and they found that the crystal structure transformed to monoclinic WO3 with the disappearance of water molecules from the structure; the detailed dehydration process is shown in Figure 2a [83]. Pradhan et al. also prepared monoclinic WO3 by annealing orthorhombic tungsten oxide hydrate at 400 °C in air (Figure 2b) [82]. The double-layer capacitance (Cdl) of the monoclinic WO3 is 2.83 times that of the original WO3·H2O, providing more active surface during the catalytic reaction. As a result, the monoclinic WO3 exhibits an over-potential of 73 mV at 10 mA cm−2 in 0.5 M H2SO4, which is much lower than that of the orthorhombic tungsten oxide hydrate (147 mV). The density functional theory (DFT) results (Figure 2c) showed that the hydrogen proton adsorption energy on P21/n monoclinic WO3 (200) is more favorable than that of Pt (111). Halder's group investigated the effect of heat treatment temperature on the phase transition: with increasing calcination temperature, the crystal phase of WO3 changed from hexagonal to monoclinic and then to cubic [84]. The phase transformation from hexagonal to monoclinic at 550 °C was further observed by in situ transmission electron microscopy (TEM). The monoclinic WO3−x obtained at this temperature exhibits the best HER activity because of its highest oxygen vacancy concentration. Certain additives also have an impact on the crystal phase of tungsten oxide during the preparation process. Song's team precisely prepared orthorhombic WO3·0.33H2O and monoclinic WO3·2H2O by utilizing ethylene diamine tetra acetic acid and DL-malic acid at room temperature, respectively (Figure 2d) [40]. It was demonstrated that a lower over-potential (117 mV) and Tafel slope (66.5 mV dec−1) were required for the monoclinic WO3·2H2O to reach a current density of 10 mA cm−2 in 0.5 M H2SO4 (Figure 2e). Hajiahmadi et al. explored the reaction mechanism and adsorption models of hex-WO3 (001) in the acidic oxygen evolution reaction (OER) (Figure 3) [85]; six adsorption models were considered. Figure 2: (a) The orthorhombic unit cell of tungstite, which converts to the monoclinic tungsten oxide unit cell by dehydration at elevated temperatures (≥300 °C); reprinted with permission from [83], Springer, 2014. (b) XRD patterns of orthorhombic WO3·H2O and monoclinic WO3. (c) Calculated energy landscapes of the HER on WO3 (200) and Pt (111); reprinted with permission from [82], American Chemical Society, 2017. (d) Synthesis process and (e) linear sweep voltammograms of hydrated tungsten oxide (WO3·nH2O, n = 0.33, 1.00, or 2.00) prepared at room temperature; reprinted with permission from [40], American Chemical Society, 2020.
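Because over-potentials at 10 mA cm−2 and Tafel slopes recur throughout the comparisons summarized in Table 1 and the tables that follow, a minimal sketch of how both figures of merit are typically extracted from an LSV curve is given below. The synthetic data (constructed with a 66 mV dec−1 slope) are illustrative only and do not reproduce any measurement cited here.

```python
# Minimal sketch: extracting the overpotential at 10 mA cm^-2 and the Tafel
# slope from LSV data. The synthetic curve below is illustrative only.
import numpy as np

eta = np.linspace(0.02, 0.25, 100)      # overpotential, V
j = 0.05 * 10 ** (eta / 0.066)          # synthetic current density, mA cm^-2
                                        # (66 mV/dec slope by construction)

# Overpotential at 10 mA cm^-2, by interpolation (j is monotonically rising):
eta_10 = np.interp(10.0, j, eta)
print(f"eta @ 10 mA cm^-2 = {1000 * eta_10:.0f} mV")

# Tafel slope: linear fit of eta versus log10(j) over the kinetic region.
mask = (j > 1.0) & (j < 20.0)
slope, intercept = np.polyfit(np.log10(j[mask]), eta[mask], 1)
print(f"Tafel slope = {1000 * slope:.1f} mV dec^-1")
```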
Morphology Control Due to their flexible crystal structures, WOx nanomaterials with rich morphologies exhibit different physical and chemical properties. Reasonable design of the catalyst morphology may increase the contact area between catalyst and electrolyte, thus improving the electrocatalytic performance. The impact of morphology on the catalytic performance of WOx has been studied recently, and some advances have been made [42,86,87]. One-dimensional nanostructures, such as nanorods [38] and nanowires [88-90], have been extensively studied among tungsten oxide-based nanomaterials. For example, Liang's group prepared WO3 nanowires with rich oxygen vacancies (WO3-VO NWs) by a hydrothermal method combined with plasma sputtering [89]. The WO3-VO NWs grew along the [001] direction (c axis) and exhibited stable electrocatalytic oxygen evolution activity under acidic conditions. Two-dimensional nanostructures have also attracted extensive attention in recent years due to their increased surface area, abundant active sites, and appropriate adsorption of intermediates. Pradhan et al. prepared tungsten oxide hydrate nanoplates with apparent gaps between the stacked nanoplates using the hydrothermal method [82]. Guo's group prepared hierarchical WO3 nanowire arrays on nanosheet arrays (WO3 NWA-NSAs) by the hydrothermal method for alkaline OER [29]. The WO3 NWA-NSAs electrocatalyst requires an over-potential of only 230 mV to reach a current density of 10 mA cm−2 because of its unique hierarchical structure. By changing the composition of surfactants and the synthesis parameters of hydrothermal processes, Rajalakshmi et al. prepared various WO3 nanomaterials, including one-dimensional nanowires and nanorods, two-dimensional nanoflakes and nanobelts, and three-dimensional star-like and globule-like nanoparticles (Figure 4a-g), and investigated the influence of morphology on the band gap width and hydrogen evolution performance [44]. It was found that the band gaps of tungsten oxide with different morphologies are inconsistent. Among them, WO3 nanorods have a higher aspect ratio and a better band gap and adsorption energy in conjunction with the precise cutting of crystal facets along the (001) direction. As a result, the HER performance of one-dimensional WO3 nanorods exceeds that of the other morphologies in an acidic electrolyte. For three-dimensional nanostructures, our group investigated the effect of the etching agent concentration on the morphologies and alkaline OER performances of Ni-WO3 nanostructures [91]. The Ni-WO3 octahedral structure (Figure 4h) was etched in situ with (NH4)2SO4, and serrated Ni-WO3 (Figure 4i) was obtained.
It was found that the crystal phase of Ni-WO3 was unaffected by the concentration of the etching agent, but the serrated Ni-WO3 dramatically improves the OER performance (an over-potential of only 265 mV at 10 mA cm−2) compared with the octahedral Ni-WO3 (365 mV). Defect Engineering Defect engineering, including oxygen vacancy construction and hetero atom doping, reduces the atomic coordination numbers in the material and, thus, regulates the electronic structure [89,92]. The band gap and the adsorption free energy of the catalyst can be optimized by regulating its electronic structure, thereby reducing the catalytic reaction barrier and improving the reaction kinetics. These doping and defect sites can also potentially increase the number of active sites and the concentration of free carriers, which are essential for reducing the reaction barrier and increasing the electron transfer efficiency [93]. The synthesis methods and the properties (electrolyte, over-potentials at 10 mA cm−2, and Tafel slopes) of tungsten oxide-based electrodes modified by defect engineering are summarized in Table 2. Oxygen Vacancy Tungsten oxide-based nanomaterials with abundant oxygen defects show great potential for water splitting. Abundant oxygen vacancies can improve the conductivity and promote the adsorption of OH−, thus effectively increasing the oxygen evolution activity [89]. For example, Guo's research group prepared WO3 with rich oxygen vacancies by a hydrothermal method and explored its oxygen evolution properties in an alkaline electrolyte [29]. The abundant oxygen vacancies not only improve the conductivity of WO3, but also modify its electronic structure, so that the WO3 needs an overpotential of only 230 mV to reach a current density of 10 mA cm−2. Effect of oxygen vacancy on morphology and properties of WOx. Two extra electrons are produced when an O atom is removed from the WO3 structure. One or two electrons can be transferred to a neighboring W atom to form W5+ or W5+-W5+ centers [32]. Additionally, introducing the O vacancy modifies the splitting of the W-O bond, increases the W-W distance at the O-vacancy position, and narrows the WO3 band gap accordingly (Figure 5a-d) [79]. The electrode's conductivity is improved as a result of abundant oxygen vacancies, which can turn an n-type WOx semiconductor into a degenerate semiconductor with metallic characteristics (Figure 5e) [35,89]. Besides, a high oxygen vacancy concentration will increase the material's surface roughness and the area in contact with the electrolyte.
For example, the surface of tungsten oxide with a smooth hexahedral structure (Figure 5f) turns rough after being annealed in a H2 atmosphere, forming a porous structure (Figure 5g) [121]. As shown in Figure 5h,i, the porous WO2 HN/NF has a BET surface area, pore size, and specific volume of 22.8 m2 g−1, 10-100 nm, and 0.138 cm3 g−1, respectively. Owing to the highly concentrated oxygen vacancies that provide more active sites and narrower band gaps, the porous WO2 HN/NF electrode showed lower potential and excellent catalytic stability in HER, OER, and overall water splitting. Oxygen vacancy recognition. First, the O vacancy can be directly observed by atomic high-resolution TEM (HRTEM), where the variation in atomic column intensity indicates the variation in oxygen atomic occupation. As shown in Figure 6a, the tiny pits indicated by the arrows reveal the presence of oxygen vacancies [89]. The variations in intensity and contrast were further highlighted in the colored image and the line profile (Figure 6b). Second, UV-vis diffuse reflectance spectroscopy and UV-vis absorption spectroscopy are also used to identify oxygen vacancy defects. As shown in Figure 6c,d, a stronger photoresponse in the visible to infrared region indicates a higher O vacancy concentration in the material [35,40,122]. Besides, electron paramagnetic resonance (EPR), a technique for detecting the chemical environment of unpaired electrons in atoms or molecules, can be used to confirm the existence of oxygen vacancies (Figure 6e). When oxygen vacancies capture electrons, symmetrical EPR signals appear at g ≈ 2.002; the stronger the intensity of the EPR signal, the higher the concentration of oxygen vacancies present in the material [35,89].
Notice that the oxygen vacancy also changes the metal valence in the metal oxide. Therefore, in addition to the direct characterization methods for oxygen vacancies, alternative techniques, such as X-ray photoemission spectroscopy (XPS) and synchrotron X-ray absorption fine structure (XAFS), can also be employed to infer the existence of oxygen vacancies. As shown in Figure 6f, after the oxygen vacancy was introduced, the XPS peaks of W 4f moved to the lower binding energy region [35,38,122]. For O 1s XPS, the peak at ~531.3 eV corresponds to the oxygen vacancy [35,123-125]. Effect of oxygen vacancy concentration on performance. To further investigate the impact of oxygen vacancy concentration on catalytic performance, Thomas et al. explored the relationship between the electrocatalytic performance of Meso-WO2.83 and the surface oxidation degree brought on by exposure to air
The second method of producing oxygen vacancies in tungsten oxide is reductive treatments, which create oxygen vacancies with the aid of reducing agents, such as H 2 [123], NaBH 4 [127,128], and sodium dodecyl sulfate [29]). Taking the reduction in hydrogen as an example, with the extension of reduction time, WO 3 gradually evolved into WO 3−x , WO 2 , and finally, metallic W 0 [129]. Thomas et al. prepared Meso-WO 2.83 with oxygen-rich vacancies by replacing bulk materials with mesoporous materials to increase the reduction rate in the hydrogen atmosphere [35]. Due to the slow diffusion of H 2 molecules in bulk materials, the use of mesoporous WO 3 with a higher surface area and thin nanoscale pore wall allows H 2 molecules to diffuse and migrate more easily on its surface and inside. The results show that the mesoporous structure not only dramatically reduces the H 2 reduction temperature, but also selectively generates WO 2.83 . Other methods of producing oxygen vacancies include plasma treatment, flame, mechanical exfoliation, and hydrothermal methods [29,89,130,131]. For example, as a surface treatment technology, plasma has a certain reduction ability, and the oxygen vacancy generated by it only exists on the surface of the sample [131]. Hetero Atom Doping Hetero atom doping is also an effective strategy to prepare tungsten oxide-based nanomaterials with abundant defects, which achieve a significant leap in catalytic performance by regulating the electronic coordination environment, the number of active sites, and the adsorption strength of intermediates [48,113]. In practical applications, atom-doped materials usually contain both hetero atoms and oxygen vacancies, which can cooperatively promote the catalytic activity [93,118,132]. The effect of doping on catalytic performance can be adjusted by changing the type [48,105] and concentration [115,118,133] of hetero atoms. Atom doping could be divided into noble metal atomic doping and non-noble metal atomic doping. For the former, the noble metal atoms typically exist as single atoms or atomic clusters to reduce the costs and improve the utilization rate of noble metal atoms [112,113,[134][135][136]. For example, Hou et al. reported Pt sing-atoms supported on monolayer WO 3 (Pt-SA/ML-WO 3 ) for HER [104]. The high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) images (Figure 7a,b) and ICP result showed that 0.20 wt% Pt atoms were immobilized on ML-WO 3 . HRTEM images demonstrate the existence of defects, including lattice distortion, as well as W and O vacancies (Figure 7c,d). The HER performance of Pt-SA/ML-WO 3 (over-potential of 22 mV at 10 mA cm −2 , Tafel slope of 27 mV dec −1 ) was even better than that of 20% Pt/C (over-potential of 34 mV at 10 mA cm −2 , Tafel slope of 28 mV dec −1 ). The improved performance is mainly attributed to the strong interaction between Pt single atoms and ML-WO 3 , which drastically tunes the electronic structure of the catalyst, endowing Pt-SA/ML-WO 3 with a strong conductivity and an adequate ∆G H* (Figure 7e). Sun et al. investigated the overall water-splitting performance of the iridium-doped tungsten trioxide array (Ir-doped WO 3 ) in acidic conditions [100]. Ir can preempt some of the electrons in the WO 3 matrix to optimize its electronic structure because Ir 4+ has a smaller radius and a higher atomic number than W 6+ . The Ir-doped WO 3 exhibited low cell voltages of 1.56 and 1.68 V to drive the current densities of 10 and 100 mA cm −2 , respectively. 
Ma's group analyzed the effects of oxygen vacancies and Ru doping on the electronic states and d-band center of WO3 by density functional theory (DFT) calculations [136]. As shown in Figure 7f, when only oxygen vacancies exist, the electrons produced by the oxygen vacancies are transferred to neighboring W atoms. After doping with Ru atoms, electrons are mainly transferred from the Vo site to the adjacent Ru site, with a few electrons transferred to an adjacent W site. Therefore, Ru sites with sufficient electrons in Vo-WO3/Ru SAs may be the active centers for adsorbing intermediates in acidic media. In alkaline and neutral solutions, the oxygen atoms in water molecules are more easily captured by the electron-deficient W site, and the generated H* migrates to a nearby Ru site to form H2. In addition, the d-band center of Vo-WO3/Ru SAs (−5.180 eV) is much lower than that of WO3 (−3.252 eV) and Vo-WO3 (−4.133 eV), indicating that the strength of the H* bond is weakened (Figure 7g), which is conducive to the desorption of H* from the surface and the promotion of HER. In addition to doping with noble metal atoms, doping with non-noble metal atoms (Co, Fe, Ni, F, and Mo) can also regulate the physicochemical properties of the catalyst and effectively improve the catalytic performance of tungsten oxide-based nanomaterials [48,107,111,115,116,133,137,138]. For example, Xiao et al. simulated the d-orbital hybridization of WO2 with a series of transition metal heteroatoms using first principles and explored the HER properties of M-WO2 (M = Fe, Co, Ni, and Cu) [105]. The Fe, Co, Ni, or Cu heteroatom replaces one of the W atoms (W-M bond) to form a more highly d-band-filled atomic coordination, resulting in increased filling of the W-M antibonding orbitals and a weakening of the bond strength (Figure 8a). The DFT results showed that the Ni-W site has modest hydrogen adsorption (ΔGH* = −0.43 eV) due to the dynamic transfer of some bonded electrons from the W-W/Ni bonds to the Ni-O bonds after the substitution of W atoms by Ni, which reduces the free energy, weakens the metal-metal bond, and increases the W-W/Ni bond length (Figure 8b). Therefore, Ni-WO2 needs only 83 mV to reach a current density of 10 mA cm−2 in an alkaline environment.
To investigate the effect of Ni atom doping on the localized structure of tungsten oxide, the authors compared the W L3-edge X-ray absorption near-edge structures (XANES) and further proved that the addition of Ni atoms can effectively reduce the d-band occupation state on W atoms. Comparing the whole contour plots of the wavelet transform (WT) (Figure 8c,d) and the charge density difference slices (Figure 8e,f) of WO2 and Ni-WO2, it was found that the R-value at about k = 12.2 Å−1 in the WT spectrum of Ni-WO2 is increased, and the charge density between Ni-W is significantly decreased. All these results indicate that Ni doping can effectively regulate the coordination strength between metal atoms and reduce the ability of W atoms to capture electrons, which is closely related to the absorption/desorption behavior on the catalyst surface. Heterostructure Construction Constructing heterostructured nanomaterials composed of tungsten oxide and another material is an effective strategy to improve the catalytic performance of tungsten oxide [139-141]. Heterostructured nanomaterials reach a "1 + 1 > 2" catalytic performance by exposing more active sites and promoting interfacial electron transfer, which is called the "spillover" mechanism of the system [52,141]. The synthesis methods and the catalytic properties of tungsten oxide-based heterostructure electrodes with different morphologies (such as nanowires, nanoflakes, nanospheres, urchin-like, and so on) are summarized in Table 3. Here, tungsten oxide heterostructures are divided into three types according to their components: (1) semiconductor-WOx; (2) WOx-C; and (3) metal-WOx. Semiconductor-WOx The interaction between tungsten oxide and another semiconductor is conducive to adjusting its electronic structure. When tungsten oxide comes into contact with a semiconductor with a different Fermi level, electrons spontaneously diffuse from the component with the higher Fermi level to the other until the chemical potentials of the two parts are equal, thus forming a semiconductor-WOx heterojunction [172]. Consequently, net charges accumulate at the contact interface, which lowers the initially higher Fermi level and raises the initially lower one. Meanwhile, the electronic bands of the contacting semiconductors bend in response to the movement of the Fermi levels, generating different types of band alignment. Due to the synergistic and electronic effects between the components, the catalytic performance of the composites is improved.
Wang and his colleagues prepared a Ni2P-WO3 nanoneedle structure on carbon cloth using a combination of in situ electrodeposition and phosphating treatment [151]. The XPS results demonstrate electron transfer from Ni to P in Ni2P-WO3. Benefiting from the heterojunction structure, Ni2P-WO3 exhibits excellent HER catalytic performance in both acidic (over-potential of 107 mV at a current density of 10 mA cm−2) and alkaline (over-potential of 105 mV at 10 mA cm−2) environments. Peng et al. designed a Fe2P-WO2.92 heterostructure on nickel foam by a facile consecutive three-step synthesis method [72]. The oxygen vacancies and the synergistic effect between WO2.92 and Fe2P facilitated a drastic reduction in over-potential for the catalytic OER performance of Fe2P-WO2.92/NF (over-potential of 215 mV at 10 mA cm−2 in 1.0 M KOH solution). Moreover, the interfacial richness of the two phases in a semiconductor-WOx heterostructure directly affects the number of active sites. Yang's group prepared Ni17W3/WO2 heterojunctions on nickel foam (WO2/Ni17W3/NF) by a hydrothermal and annealing method (Figure 9a) [150]. The WO2/Ni17W3 heterojunctions increase the exposure of active edge sites and facilitate water dissociation and H-intermediate association during HER kinetics. Therefore, WO2/Ni17W3/NF demonstrated high catalytic efficiency for HER, with a low over-potential of 35 mV at 10 mA cm−2. Following this work, Liu's group prepared R-Ni17W3/WO2 catalysts on Ni foam and explored the effect of the size of the Ni17W3 particles decorating the NiWO4/WO2 substrate on the hydrogen evolution reaction (Figure 9b) [152]. The R-Ni17W3/WO2 with larger Ni17W3 particles exhibited superior HER catalytic activity (over-potential of 48 mV at 10 mA cm−2), resulting from more interfaces and more active sites (Figure 9c,d). Our group also prepared a TA-Fe@Ni-WOx hierarchical structure by an interfacial coordination assembly process. After the introduction of the TA-Fe layer, electrons transfer from W and Ni to TA-Fe; as a result, TA-Fe@Ni-WOx has an upward-shifted Fermi energy, a smaller ionization potential, and a more electron-rich environment, which is more conducive to OER [167]. WOx-C Creating a heterostructure with another conductive material is a typical way to increase the electrocatalyst's overall electronic conductivity. Carbon materials, including graphene oxide (GO), carbon nanotubes (CNTs), carbon paper, and carbon cloth, are often used due to their superior electronic conductivity, high specific surface area, and excellent chemical durability [56,173-176]. WOx/carbon composites have been widely used in electrocatalytic water splitting [71,142,143,145,146,177]. In this context, carbon-encapsulated WOx with rich oxygen vacancies has been synthesized by pyrolyzing carbon/tungsten mixtures (Figure 10a,b) [95,144,178,179], which has a favorable impact on enhancing charge transfer and compensating for the weak hydrogen adsorption of the tungsten oxide. The effects of W content [95], annealing time [179], and annealing temperature [178] on the HER performance of WOx/C have been explored in depth. For example, Yin et al. studied the effects of different annealing times and temperatures on the hydrogen evolution properties of WOx/C in 0.5 M H2SO4 (Figure 10c) [179].
Pan's group introduced 15-nm-thick carbon-based shells on the surface of tungsten oxide nanospheres (CTO) and investigated their catalytic performance in alkaline OER [145]. The overpotential at 50 mA cm−2 decreased from 360 to 317 mV after the introduction of the carbon-based shell (Figure 10d). The improved catalytic performance is attributed to the carbon-based shell, which speeds up electron transfer between the catalyst and the reactant, provides catalytically active sites, and promotes both the adsorption of the reactant on the catalyst and the dissociation of the O-H bond.

[Figure 10 caption, partial: (d) LSV polarization curves measured at a scan rate of 5 mV s−1 in 1 M NaOH. Panels reprinted with permission from [179] (Elsevier, 2021) and [145] (American Chemical Society, 2020).]
Metal-WOx Conductive metals are also widely used to increase the electrocatalyst's overall electronic conductivity and adjust the electronic structure of WOx. To reduce the cost of the catalyst, the metal component in metal-WOx heterostructures often exists in the form of single atoms, clusters, or nanoparticles [71,120,156]. Li et al. prepared Pt@WO3/C with a three-dimensional nanoarchitecture for HER via a water-oil two-phase microemulsion method followed by annealing [156]. Pt nanoparticles with a diameter of about 4 nm were monodispersed on the surface of the WO3/C structures (Figure 11a,b). The overpotential at 10 mA cm−2 of Pt@WO3/C as an HER electrocatalyst was 149 mV (Figure 11c), smaller than that of WO3/C (244 mV). The mechanism of HER consists of three main steps: H2O adsorption, H2O dissociation, and H* desorption (Figure 11d). The corresponding free-energy calculations show that the introduction of Pt has no obvious effect on water adsorption, but it promotes the water dissociation and H* desorption steps on WO3, thus accelerating the HER process (Figure 11e). The XPS results (Figure 11f) also confirm that the W 4f peaks shift positively after the introduction of Pt because of the difference in electronegativity between W and Pt atoms, resulting in the transfer of electrons from W to Pt (Figure 11 reprinted with permission from [156], Elsevier, 2022). In another work, Pt-WO3−x nanodots were anchored on rGO for water splitting [37]. The optimized composite Pt-WO3−x@rGO exhibited the highest HER, OER, and overall water-splitting activities in alkaline environments (HER and OER overpotentials of about 34 mV and 174 mV at 10 mA cm−2, respectively, and an overall water-splitting cell voltage of 1.55 V at 10 mA cm−2). At the same time, its overall water-splitting performance showed excellent durability at 1.55 V, where 93.3% of the initial potential was maintained after 14 h of cycling.
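The three-step HER picture above (Figure 11d,e) amounts to comparing step-wise free-energy changes and finding the most uphill step. A minimal Python sketch of that comparison follows; the Delta-G values are hypothetical placeholders, not the DFT results reported in [156].

# Sketch of a three-step alkaline-HER free-energy diagram:
# H2O adsorption -> H2O dissociation -> H* desorption.
steps = ["H2O adsorption", "H2O dissociation", "H* desorption"]
dg_wo3   = [-0.30, 0.95, 0.60]   # eV per step, illustrative only
dg_ptwo3 = [-0.28, 0.45, 0.20]   # eV per step, illustrative only

def limiting_step(names, dgs):
    """The largest positive (most uphill) step limits the reaction rate."""
    i = max(range(len(dgs)), key=lambda k: dgs[k])
    return names[i], dgs[i]

for label, dgs in [("WO3/C", dg_wo3), ("Pt@WO3/C", dg_ptwo3)]:
    name, dg = limiting_step(steps, dgs)
    print(f"{label}: limiting step = {name} (dG = {dg:+.2f} eV)")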
Summary and Outlook In this review, we focus on the recent research progress of tungsten oxide-based nanomaterials for water splitting, especially the mechanisms by which structural design and component regulation improve electrocatalytic performance, which is expected to guide the preparation of high-performance tungsten oxide-based electrocatalysts. Practical strategies for improving the performance of tungsten oxide catalysts are discussed in this review. The band gap and contact area of a tungsten oxide-based electrode can be adjusted by controlling the morphology and crystal phase, thus improving its catalytic performance. The electronic structure of WOx can also be optimized through defect engineering to provide more active sites and a favorable electronic environment for catalytic reactions. Additionally, synergistic effects can boost electron transfer in tungsten oxide and provide more active boundaries, another commonly used and effective strategy for enhancing catalytic performance. Owing to such adjustment of structure and composition, noble metal single atom/tungsten oxide/carbon composites, currently the best tungsten-based catalysts, greatly reduce cost while retaining excellent catalytic performance. Despite the advancement of these methods, the practical application of tungsten-based materials in water splitting is still at an early stage, and many problems and challenges remain.

First, defect engineering has been widely used to regulate the electronic structure of tungsten oxide and promote its catalytic performance. However, precise control of defect location and concentration is difficult, and characterization techniques for defect types and concentrations are insufficient; consequently, the improvement of catalytic performance by defect engineering still lacks an intrinsic understanding. The development and use of more advanced in situ preparation and characterization technologies may partly solve this problem. For example, in situ atmospheric aberration-corrected transmission electron microscopy can be used to directly observe the defect formation process in nanomaterials and the resulting crystal-phase changes. Besides, current research on the structure-property relationship mainly adopts a post-analysis method, which compares the electrocatalyst's structure before and after the catalytic reaction and infers the structural evolution during the reaction from static characterization results. However, the structure of tungsten oxide evolves dynamically during the catalytic reaction: its morphology, chemical composition, and electronic structure change continuously under operating conditions. As a result, the structural differences before and after long-lasting catalytic reactions may not reflect the real active sites.
To further understand the structure-property mechanisms of tungsten oxide-based electrocatalysts, it is necessary to develop and apply in situ characterization methods, such as in situ electron microscopy and in situ XRD, to record the dynamic structural changes of the catalysts in real time during the reaction. We believe that the growth mechanisms of material preparation and the structure-property relationship can be better understood through in situ observation of the structure, defect locations, and defect concentrations, together with in situ characterization of the catalytic processes. Combined with advanced theoretical simulation techniques, this should make it possible to obtain tungsten oxide-based electrocatalysts with high intrinsic activity. All these developments will contribute significantly to the rapid development of renewable energy storage and conversion.
10,430
2023-05-25T00:00:00.000
[ "Chemistry", "Materials Science" ]
Improving Interoperability by Incorporating UnitsML Into Markup Languages Maintaining the integrity of analytical data over time is a challenge. Years ago, data were recorded on paper that was pasted directly into a laboratory notebook. The digital age has made maintaining the integrity of data harder. Nowadays, digitized analytical data are often separated from information about how the sample was collected and prepared for analysis and how the data were acquired. The data are stored on digital media, while the related information about the data may be written in a paper notebook or stored separately in other digital files. Sometimes the connection between this "scientific metadata" and the analytical data is lost, rendering the spectrum or chromatogram useless. We have been working with ASTM Subcommittee E13.15 on Analytical Data to create the Analytical Information Markup Language, or AnIML, a new way to interchange and store spectroscopy and chromatography data based on XML (Extensible Markup Language). XML is a language for describing what data are by enclosing them in computer-usable tags. Recording the units associated with analytical data and metadata is an essential issue for any data representation scheme and must be addressed by all domain-specific markup languages. As scientific markup languages proliferate, it is very desirable to have a single scheme for handling units to facilitate moving information between different data domains. At NIST, we have been developing a general markup language just for units that we call UnitsML. This presentation describes how UnitsML is used and how it is being incorporated into AnIML.

A scheme for representing units must be appropriately developed to allow for the unambiguous storage, exchange, and processing of numeric data. Units of measure are needed not only by laboratory automation systems, but by nearly all other application domains; examples include physics, chemistry, materials, and mathematics.
The field of aeronautical and space engineering had the infamous Mars Climate Orbiter problem. The loss of NASA's Climate Orbiter on September 23, 1999 was traced to a measurement unit problem. The 125 million dollar orbiter was lost as it entered the orbit of Mars. Mission managers concluded that the cause of the mishap was confusion over the type of units used to measure the strength of thruster firings. The problem was due to an error in communication between the Mars Climate Orbiter spacecraft team in Colorado and the mission navigation team in California. The preliminary peer-review findings indicated that one team used English units (e.g., inches, feet, pounds) while the other used metric units for a key spacecraft operation [1,2]. Developers have requested a single language for encoding units properties in XML. At the National Institute of Standards and Technology (NIST), we are developing a schema for encoding scientific units, quantities, and dimensions in XML, named UnitsML (Units Markup Language). The development and deployment of a markup language for units will allow for the unambiguous storage, exchange, and processing of numeric data, thus facilitating collaboration and the sharing of information. The use of UnitsML in other markup languages will prevent duplication of effort and improve interoperability. Today there are many markup languages based on XML that could incorporate UnitsML, including MathML (Mathematics Markup Language), AnIML (Analytical Information Markup Language), and AMDML (Atomic and Molecular Data Markup Language).

Extensible Markup Language XML (Extensible Markup Language) is a standard for the production of human- and machine-readable documents. XML is a W3C (World Wide Web Consortium)-recommended general-purpose markup language for creating special-purpose markup languages. A markup language is a mechanism to describe both markup and content in the same document. XML defines the rules for the syntax and structure of such documents. For a concrete XML application, the details of the respective documents must be specified. This requires the definition of structural components and their arrangement within a document tree. XML is therefore a standard for the definition of arbitrary markup languages; a markup language like XML, which is used for the definition of other languages, is called a meta language. One of the main purposes of XML is to facilitate the sharing of data across different systems or software modules and to allow different types of data to be exported for interoperability or archival purposes [3-5].

Analytical Information Markup Language The Analytical Information Markup Language (AnIML) is a markup language for analytical chemistry data that is currently under development by ASTM subcommittee E13.15. It is a combination of a highly flexible core schema, a technique schema, and a set of analytical technique instance documents (ATID files). The core schema defines containers for result data in a generic manner. The ATID files are XML files that apply tight constraints to the flexible core; each ATID file refers to a specific analytical technique. The organization of ATID files is specified by the technique schema. Extensions of ATID files are possible for vendor-specific, institution-specific, and user-specific parameters. The goal of AnIML is to interchange and store analytical results and their metadata [6]. More information about AnIML can be found on the AnIML web site, http://www.animl.org/.
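To make the role of markup concrete, the following short Python sketch builds a tiny analytical-data fragment with the standard library. The element names and unit attribute are illustrative inventions, not actual AnIML vocabulary; the point is only that tags turn a bare number into self-describing data, while the unit labels still need a shared vocabulary such as UnitsML.

import xml.etree.ElementTree as ET

# A bare number is ambiguous; wrapping it in descriptive tags makes it
# machine-readable, but the unit strings still need a common scheme.
peak = ET.Element("Peak")
ET.SubElement(peak, "Position", attrib={"unit": "nm"}).text = "534.2"
ET.SubElement(peak, "Intensity", attrib={"unit": "AU"}).text = "0.87"
print(ET.tostring(peak, encoding="unicode"))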
Units Markup Language The Units Markup Language (UnitsML) is a general XML-based markup language for encoding scientific units. It provides a single schema for handling units, which is desirable to facilitate moving information between different data domains. The UnitsML schema is designed for incorporating scientific units into other XML documents or into any XML-based software. Various tools are under development to assist in the use of UnitsML. "The value of a quantity is its magnitude expressed as the product of a number and a unit" [7]. The value of a quantity Q can be written as Q = N U, where N is the numerical value of Q when the value of Q is expressed in the unit U (example: length = 5 m) [7]. UnitsML does not describe the numerical value; it only describes the unit. The main requirement for the use of UnitsML is the availability of its schema. It can be problematic for each user to collect information on units and the associated quantities and to define conversions to other units. Alternatively, users can refer to unit definitions from a third-party database. Such a database containing information on units, prefixes, quantities, and dimensions encoded in the UnitsML schema is under development at NIST. This database, called UnitsDB, contains detailed unit and dimensionality information for SI units and an extensive collection of common, non-SI units. The database includes information on units, quantities, symbols, language-specific unit names, and representations in terms of other units, including conversion factors to reference units. In the representations table, the units database describes all units in terms of the seven SI (International System of Units) base units [7]; in addition, some units are described in terms of related, appropriate units. Table 1 shows the expression of the farad in the database. Recall that a farad is a unit of capacitance equal to one coulomb per volt. Reducing the definition of the farad to SI base units gives F = C · V⁻¹ = m⁻² · kg⁻¹ · s⁴ · A². More information about UnitsML can be found on the UnitsML website, http://unitsml.nist.gov/. More information about SI units can be found at http://www.bipm.org/ and http://physics.nist.gov/SP811/.
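The base-unit bookkeeping behind a representations table like Table 1 can be sketched in a few lines of Python: each unit is a map from the seven SI base units to exponents, and products of units are sums of exponent vectors. The helper name below is an invention for illustration; the reduction F = C · V⁻¹ = m⁻² · kg⁻¹ · s⁴ · A² falls out of the algebra.

from collections import Counter

def combine(*factors):
    """Multiply units given as (exponent_dict, power) pairs."""
    total = Counter()
    for exps, power in factors:
        for base, e in exps.items():
            total[base] += e * power
    return {b: e for b, e in total.items() if e != 0}

coulomb = {"s": 1, "A": 1}                      # C = s * A
volt    = {"m": 2, "kg": 1, "s": -3, "A": -1}   # V = m^2 kg s^-3 A^-1
farad   = combine((coulomb, 1), (volt, -1))     # F = C * V^-1
print(farad)  # {'s': 4, 'A': 2, 'm': -2, 'kg': -1}

Ways to Incorporate UnitsML Into Other Markup Languages UnitsML has been designed to be a component for inclusion in other markup languages. There are several ways to incorporate UnitsML into other markup languages: referencing the schema, including the schema, importing the schema, and redefining schema elements.

Refer to the UnitsML Schema UnitsML may be included in schema-based markup languages by referencing the UnitsML schema in an instance document. The W3C's finalization of the XML Schema specification allows greater flexibility and specificity in defining constraints than are available with DTDs (Document Type Definitions). One important part of using schemas is being able to reference them within other XML documents. Making a reference from within an XML document requires a declaration of the XML Schema instance namespace, a prefix mapping (xsi), and an associated URI (Uniform Resource Identifier) to give access to the attributes needed for referencing XML schemas. If needed, a default namespace can be defined to provide a home for all non-prefixed elements in the document. Once the XML Schema instance namespace is available, one can provide the schemaLocation attribute within it. The schemaLocation attribute consists of two values.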
The first value, or argument, is the namespace, which must be unique (a URI), and the second is the actual resolvable schema location (a URL, Uniform Resource Locator). In this case, the first referenced schema location is the host schema and the second the UnitsML schema. In the same way we could reference a third, fourth, or additional schemas. There are many more options for referencing schemas, using them with and without namespaces; these options are documented in the W3C XML Schema specification. One way of incorporating UnitsML into AnIML documents by referencing is to create compound documents that reference the AnIML core schema and the UnitsML schema. An example is shown in Listing 1. Features of UnitsML can also be incorporated into XML instance documents by using the actual UnitsML schema within the host schema. The problem with this is the availability of the UnitsML schema: the following methods all depend on having the UnitsML schema file (.xsd). The user could download the UnitsML schema to make it available offline. In this case, the user is responsible for updating the UnitsML schema when schema updates are available on the UnitsML server. The UnitsML tool, which is described below in "Tools Under Development," should be able to warn the user of such an update and to update the offline schema. To do this, some changes must be made in the host schemas. There are three ways that this can be carried out:

<include> the UnitsML Schema This directive results in the UnitsML schema being brought into the host schema within the host schema namespace. The <include> element brings definitions and declarations from the UnitsML schema into the host schema. It requires the UnitsML schema to be in the same target namespace as the host schema [8]. <xs:include schemaLocation="unitsml.xsd"/> Listing 2 shows an example of the include method on an AnIML instance document. Compared with the import example shown in Listing 3, we see the difference in namespaces.

<import> the UnitsML Schema The import function behaves similarly to the include directive, with the difference that it is possible to import elements from other namespaces. In the example below, only the units element is imported from the UnitsML schema [8]. <xs:import namespace="http://unitsml.nist.gov/2009" schemaLocation="unitsml.xsd"/> <xs:element ref="unitsml:units"/> Using the import option, an AnIML data file would look like the example shown in Listing 3. It shows that the AnIML core namespace (xmlns:animlcore) is different from the UnitsML namespace (xmlns:unitsml) and that the units part of the document is described completely in UnitsML. The child element <Unit> of the <UnitSet> element is defined globally in the UnitsML schema. Therefore, since this example does not need information on prefixes, quantities, or dimensions, it is possible to use the <Unit> element directly without using the root element <UnitsML>.

<redefine> the Elements of UnitsML The redefine directive can be used in place of the include function. This directive, however, allows elements from the UnitsML schema to be redefined to meet current needs in the combined schema [8]. In outline (the original listing is not fully preserved here): <xs:redefine schemaLocation="unitsml.xsd"> ...redefined UnitsML components... </xs:redefine> The instance documents using redefined schema elements look the same as those using the include method. An example is given in Listing 2.
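Listing 1 itself is not reproduced in this text. As a rough reconstruction of its spirit, the following Python snippet embeds a compound instance document that references a host schema and the UnitsML schema via xsi:schemaLocation, and parses it to confirm it is well formed. The namespace URIs and element names are illustrative assumptions, not the normative AnIML or UnitsML vocabularies.

import xml.etree.ElementTree as ET

doc = """<?xml version="1.0" encoding="UTF-8"?>
<animl:AnIML xmlns:animl="urn:example:animl-core"
             xmlns:unitsml="urn:example:unitsml"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="urn:example:animl-core animl-core.xsd
                                 urn:example:unitsml unitsml.xsd">
  <animl:Result name="absorbance">
    <animl:Value>0.42</animl:Value>
    <unitsml:Unit symbol="AU"/>
  </animl:Result>
</animl:AnIML>"""

root = ET.fromstring(doc)   # parses only if the document is well formed
print(root.tag)             # {urn:example:animl-core}AnIML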
AnIML is a little different from other markup languages because it works with two schemas: a core schema and a technique schema. In this case there are therefore actually three schemas, including the UnitsML schema. Figure 2 shows one possible method of incorporating UnitsML into AnIML. This example requires that the AnIML client have real-time internet access to retrieve information from the UnitsDB database. Table 2 summarizes the four options for incorporating UnitsML into a host markup language.

Tools Under Development We are currently working on web services to process queries that will return UnitsML code containing information from UnitsDB. A web service provides integration over existing internet protocols, which makes the service compatible with most operating systems and programming languages. To use the web service, clients are required to support the XML-based Web Services Description Language (WSDL) and the XML-based exchange protocol SOAP (formerly Simple Object Access Protocol); most recently developed web services packages support these standards. Figure 3 shows how the UnitsML web services will work. The service information could be published using the XML-based UDDI (Universal Description, Discovery, and Integration) protocol, and applications can look up web services information to determine which options to use. The public interface to the web service is described by the WSDL, an XML-based service description of how to communicate using the web service. After the client receives the information describing the services, communication between client and server uses the SOAP protocol. The services in the UnitsML server will be written in Java and will use a JDBC (Java Database Connectivity) driver to communicate with the database. The internal processing of XML files in the UnitsML server will be done using XML tools such as a data binding framework, SAX (Simple API for XML), and DOM (Document Object Model) [3-5]. We are also working on a solution to manage offline-stored units information in UnitsML for clients lacking a real-time internet connection. With this tool, users will be able to manage their own copies of UnitsML data and will not be constantly dependent on access to UnitsDB. The ability to edit and view available unit information without specific XML knowledge will make the use of UnitsML easier. The tool is also intended to connect to the UnitsML web services and update the offline unit information. Development of the UnitsML schema has initially taken place at NIST, but completion of the development process should also include input from the international scientific and engineering community. To this end, an OASIS Technical Committee has been created to address any needed changes in the schema and to publish a final recommendation for UnitsML. The release date for UnitsDB and the web services tool will be sometime after the recommendation for the UnitsML schema has been published.

Disclaimer Certain commercial software products are identified in this document. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the products identified are necessarily the best available for the purpose.
3,732.4
2010-02-01T00:00:00.000
[ "Computer Science" ]
Adaptive reduction of male gamete number in the selfing plant Arabidopsis thaliana The number of male gametes is critical for reproductive success and varies between and within species. The evolutionary reduction of the number of pollen grains encompassing the male gametes is widespread in selfing plants. Here, we employ genome-wide association study (GWAS) to identify underlying loci and to assess the molecular signatures of selection on pollen number-associated loci in the predominantly selfing plant Arabidopsis thaliana. Regions of strong association with pollen number are enriched for signatures of selection, indicating polygenic selection. We isolate the gene REDUCED POLLEN NUMBER1 (RDP1) at the locus with the strongest association. We validate its effect using a quantitative complementation test with CRISPR/Cas9-generated null mutants in nonstandard wild accessions. In contrast to pleiotropic null mutants, only pollen numbers are significantly affected by natural allelic variants. These data support theoretical predictions that reduced investment in male gametes is advantageous in predominantly selfing species.

Male gamete numbers, reflected by pollen grain (each containing two sperm cells) numbers in seed plants and sperm numbers in animals, have been studied extensively from agricultural, medical, and evolutionary viewpoints [1-9]. Evolutionary theory predicts that the breeding system could act as a major selective force on male gamete numbers. In highly promiscuous outcrossing species, a large number of male gametes should be produced because of male-male gamete competition, so reduced male gamete numbers are considered to be deleterious 1. In contrast, they may be advantageous at lower outcrossing rates because of the high cost of their production, which decreases fitness. In an agricultural context, low pollen number may have been selected during domestication 10, but may serve as a barrier for hybrid breeding of wheat and other species 11,12. In flowering plants, the transition from an outcrossing to a selfing breeding system through loss of self-incompatibility is one of the most prevalent evolutionary trends 6,13. Selfing populations or species generally show lower pollen grain numbers per flower (hereafter, pollen number) as well as reduced flower size. There has been a sustained debate on whether the reduced pollen number is a result of adaptive evolution or of the accumulation of deleterious mutations owing to relaxed selection 6,14,15, but little was known about the genetic basis of pollen number variation with which to assess molecular signatures of selection on it. To unravel the genetic basis of quantitative natural variation in pollen number, we here focus on the predominantly selfing plant Arabidopsis thaliana 6,16. Studies have shown that the evolution of predominant selfing in A. thaliana occurred much more recently than its evolutionary divergence from outcrossing relatives 6,16,17. Thus, in addition to fixed, genetically based differences from these outcrossing relatives, we expect that variation with regard to pollen number may still be segregating among current accessions. By harnessing the genetic and genomic resources available in A. thaliana, we conduct a genome-wide association study on pollen number variation. We find that natural variants of the RDP1 gene confer variation in pollen number without detectable pleiotropy.
Signatures of selection at the top genome-wide association study (GWAS) peaks, including the RDP1 locus, support the theoretical prediction that reduced investment in male gametes should provide an advantage in selfing species.

Results Genome-wide association study and signatures of selection. To examine variability in pollen number on a species-wide scale, we determined pollen number per flower in 144 natural A. thaliana accessions (Fig. 1a-d; Supplementary Data 1) and found approximately fourfold variation (average ~4000) (Fig. 1e). Histological sections of stamens from representative accessions confirmed pollen number variation among accessions (Fig. 1c, d). We also measured the number of ovules per flower (Supplementary Table 2). We did not find significant correlations between numbers of pollen grains and ovules (P = 0.5164), although negative correlations have often been reported in between-species comparisons, as expected on theoretical grounds owing to trade-offs in resource allocation to male versus female function 14,18. Furthermore, we found that pollen number per flower was not significantly correlated with any of the 107 published phenotypes of flowering, defense-related, ionomic, and developmental traits (Supplementary Table 3; Supplementary Note 1) 19, nor with climate variables, geographic location, or S-haplogroups across the 144 accessions (Supplementary Tables 4 and 5, Supplementary Fig. 1) 20,21. These data suggest that variation in pollen number is largely independent of other traits. To evaluate genome-wide signatures of natural selection on loci associated with gamete numbers, we first performed GWAS for pollen and ovule numbers using a genome-wide single-nucleotide polymorphism (SNP) data set for these lines, which was obtained by imputation based on genome-wide resequencing data and 250 k SNP data (Fig. 1f, h; Supplementary Fig. 2) 22. In total, 68 peaks of association were identified (10-kb windows having SNPs with P < 10−4), although only one pollen number-associated peak remained significant after Bonferroni correction. Focusing on the identified GWAS peaks, we performed an enrichment analysis to ask whether pollen and ovule number-associated peaks are enriched in long-haplotype regions, which could be owing to partial or ongoing sweeps of segregating polymorphisms [23-25]. To identify long-haplotype regions, we first calculated the extended haplotype homozygosity (EHH), which measures the decay of haplotypes that carry a specified core allele as a function of distance 23. We then obtained the integrated haplotype score (iHS) statistic for each SNP, which compares EHH between the two alleles of the SNP while controlling for the allele frequency of each SNP 23. We found that 10-kb windows including pollen number-associated loci were significantly enriched in extreme iHS tails (P < 0.05, permutation test; Fig. 1i, j; Supplementary Fig. 3). These loci showed generally high iHS scores, and two of the top five GWAS peaks were outliers of the genome-wide iHS distribution (Supplementary Table 6). The enrichment was robust to changes in sample composition, allele frequency cutoffs, and the use of windows (Supplementary Figs. 4-6; see Supplementary Note 2 for details). Ovule number also showed enrichment, albeit less than pollen number (Fig. 1j). In principle, the iHS enrichment could be confounded by recombination rate and the accuracy of imputation.
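For illustration of the EHH statistic underlying iHS, the following minimal Python sketch computes, among haplotypes carrying a given core allele, the probability that two randomly chosen haplotypes are identical from the core SNP out to a target marker. The haplotype matrix is a random toy example, not data from this study.

import numpy as np

def ehh(haps, core_idx, core_allele, target_idx):
    """EHH at target_idx for haplotypes carrying core_allele at core_idx.
    haps: rows = haplotypes, columns = SNPs ordered along the chromosome."""
    carriers = haps[haps[:, core_idx] == core_allele]
    n = len(carriers)
    if n < 2:
        return np.nan
    lo, hi = sorted((core_idx, target_idx))
    segment = carriers[:, lo:hi + 1]
    # count pairs of carriers identical over the whole segment
    _, counts = np.unique(segment, axis=0, return_counts=True)
    pairs_same = sum(c * (c - 1) // 2 for c in counts)
    return pairs_same / (n * (n - 1) / 2)

rng = np.random.default_rng(0)
haps = rng.integers(0, 2, size=(40, 11))   # 40 toy haplotypes, 11 SNPs
print([round(ehh(haps, 5, 1, j), 2) for j in range(5, 11)])  # decay from core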
To deal with such potential confounding factors, we compared these results with the results of an iHS enrichment analysis for GWAS peaks (P < 0.0001) for 107 other phenotypes, as these confounding factors should also influence the enrichment for other traits. We found that the iHS enrichment for pollen number GWAS peaks (P = 0.002 for the top 1% iHS tail; Supplementary Fig. 3) was among the highest, compared with that for many known adaptive traits included in the 107 phenotypes, such as leaf number at flowering time and resistance to Pseudomonas pathogens 19,25 (Supplementary Table 7). In addition, iHS enrichment of the ovule number GWAS peaks was also significant (P = 0.030 for the top 1% iHS tail; Supplementary Fig. 3). These enrichments support polygenic selection on a considerable number of loci associated with male and female gamete numbers throughout the genome.

Isolation of the REDUCED POLLEN NUMBER1 gene. To further understand the molecular basis of pollen number variation and to examine the nature of the putative targets of selection, we tried to identify the genes underlying pollen number variation; however, the top five peaks of association did not contain any genes with known functions in early stamen or pollen development. To obtain experimental evidence concerning genes underlying pollen number variation, we conducted functional analyses of the genes under the highest pollen number GWAS peak, which explains ~20% of the total phenotypic variance between accessions and satisfies the criterion for genome-wide significance (−log10 P = 7.60). This region is of particular interest because it also satisfies the criteria for genome-wide significance of the iHS statistic (P = 0.0149; Fig. 1i, Supplementary Fig. 7), suggesting a selective sweep. To test whether the signature of selection in this region might be owing to traits other than pollen number, we examined whether there is an association signal for any of the 107 published phenotypes, ovule number, or variants showing climatic correlations 19,20. In the 10-kb window including the SNP of the highest GWAS score for pollen number, we found no genotype-phenotype associations below P < 10−5 or climate-SNP correlations below an empirical P < 0.01, i.e., there is no significant evidence for selection on traits other than pollen number in this region.

[Figure 1 caption, partial: (a, b) panel text truncated (ref. 52); scale bars = 100 μm. (c, d) Histological sections of Bor-4 (c) and Mz-0 (d) stamens; at least three independent observations showed similar results (a-d); scale bars = 50 μm. (e) Distribution of pollen number variation across 144 natural accessions. (f) Manhattan plot of the genome-wide association study (GWAS). (g) Closer view of the region around the significant GWAS peak on chromosome 1 with gene models and coordinates. (f, g) SNPs with minor allele frequency > 0.15 are shown; horizontal dashed lines indicate the nominal P < 0.05 threshold after Bonferroni correction. (h) Quantile-quantile plot of the GWAS. (i) Extended haplotype homozygosity (EHH) detected in the RDP1 genomic region; red and blue lines correspond to the long-haplotype and alternative variants, respectively. (j) Signatures of selection at pollen number-associated loci; each line indicates a phenotype (red: pollen number, black: ovule number, gray: 107 phenotypes 19); the x axis quantifies the extreme tails of the integrated haplotype score (iHS) statistic; the pollen and ovule GWAS show significant enrichment (permutation test, P < 0.05 cutoff for all iHS statistical tails; Supplementary Fig. 3). (k) Accessions with the long-haplotype variants (defined by SNP 1-8852112) generally showed lower pollen number (P = 2.152 × 10−6, two-sided t test; population structure-corrected GWAS P = 2.95 × 10−6). Boxplots show center line: median; box limits: upper and lower quartiles; whiskers: not >1.5 times the interquartile range; dots: outliers. Source data underlying e, j and k are provided as a Source Data file.]

We also found that accessions with the long-haplotype variants produced lower pollen numbers than those with alternative haplotype variants (P = 2.152 × 10−6, t test; population structure-corrected GWAS P = 2.95 × 10−6; Fig. 1k), as expected if this haplotype was under selection for reduced pollen number. Of the three genes in this chromosomal region with the highest GWAS scores (AT1G25250, AT1G25260, and AT1G25270; Fig. 1g), the expression level of AT1G25260, a gene of unknown function, was much higher in flower buds than that of the other two genes (Supplementary Fig. 8). Therefore, we obtained two T-DNA insertion mutants of AT1G25260 in the standard Col-0 accession from the Nottingham Arabidopsis Stock Centre 26. These mutants showed a 32% reduction in pollen number (Fig. 2a; Supplementary Table 8). We hereafter refer to AT1G25260 as REDUCED POLLEN NUMBER1 (RDP1). Because both rdp1-1 (insertion in the 5′ UTR) and rdp1-2 (insertion at the end of the coding sequence) (Fig. 2b) homozygotes showed low levels of expression, they are likely to be hypomorphic mutants (Supplementary Fig. 8). We generated two amorphic (null) frameshift mutants of RDP1 (rdp1-3 and rdp1-4) using the CRISPR/Cas9 system 27,28 in the Col-0 background. These mutants indeed showed an even greater reduction in pollen number, but still produced about half the number of pollen grains of the corresponding wild type (53% for rdp1-3; Fig. 2a), suggesting a quantitative nature of the effect of RDP1. Pollen size was slightly increased, in agreement with the well-known negative relationship between pollen number and size, even within the same genotype (Supplementary Fig. 9; Supplementary Table 9) 14. The mutant phenotype was complemented by transforming a 4.3-kb genomic fragment of the Col-0 accession encompassing RDP1 (Fig. 2a, Supplementary Fig. 9). In contrast to RDP1 mutants, CRISPR/Cas9-induced null mutants in AT1G25250 or AT1G25270 did not result in any significant change in pollen number (Supplementary Fig. 10). The phenotype of four independent mutants of RDP1 together with successful complementation using the wild-type allele thus demonstrated that RDP1 is involved in the control of pollen number. Based on phylogenetic analysis, RDP1 is a putative homolog of the yeast mRNA turnover 4 protein (Mrt4p) (Supplementary Figs. 11 and 12). The MRT4 gene is nonessential in yeast, but null mutants show a phenotype of slightly slower growth. Mrt4p shares similarity with the ribosome P0 protein and is necessary for the assembly of the P0 protein into the ribosome. The human ribosome P0 gene is reported to have an extra-ribosomal function in cancer by modulating cell proliferation 29,30.
During anther development, sporogenous cells first divide and differentiate into microsporocytes 31 . Following meiosis of the microsporocyte, four microspores are formed, each of which undergoes two mitotic divisions to form a mature pollen grain containing the male gametes. The null rdp1-3 mutant produced fewer microsporocytes than the wild type ( Fig. 2c-e), indicating a reduction in cell numbers before meiosis. Consistent with this, in situ mRNA hybridization experiments detected strong expression of RDP1 in sporogenous cells and the microsporocytes derived from them, but not in microspores (Fig. 2f, g; Supplementary Fig. 13). RDP1 was also expressed in other proliferating cells, including those in inflorescences, floral meristematic regions, and ovules (Supplementary Figs. 8 and 13), supporting a more widespread role in proliferating cell types with putatively high demands for ribosome biogenesis 32 . Furthermore, fusing the RDP1 promoter to the uidA reporter gene encoding β-glucuronidase (GUS) to assess its activity confirmed the RDP1 expression pattern in stamens ( Supplementary Fig. 14) and demonstrated marked expression in root tips and young leaf primordia during the vegetative phase; these data are supported by quantitative reverse transcription PCR experiments ( Supplementary Fig. 8b). Consistent with RDP1 expression in proliferating tissues, the rdp1-3 null mutant showed pleiotropic phenotypes, including slower vegetative growth and reduced ovule numbers per flower ( Supplementary Fig. 15). Because these pleiotropic phenotypes would be deleterious in natural environments, these data indicate that natural alleles of RDP1 are not null variants (see below). In summary, these data suggest that RDP1 is required in proliferating A. thaliana cells, yet the natural variants we have identified predominantly affect the proliferation of sporogenous cells in the anthers, consistent with the function of its yeast homolog in cell proliferation. Natural variants of RDP1 confer pollen number variation. It has been difficult to experimentally determine whether a particular gene has natural alleles with subtle phenotypic effects on quantitative traits 33 . When allelic effects are subtle, transgenic analysis of natural alleles is not sufficiently powerful because the phenotypes of transgenic A. thaliana plants tend to be highly variable as a result of the variation between lines, e.g., owing to different transgene insertion sites. In contrast, a quantitative complementation test can identify responsible genes by testing the effect of natural alleles in a heterozygous state with a null allele if the effects of other loci are small, although this may be confounded by polygenic effects in the genetic background 33,34 . To conduct such quantitative complementation, we took advantage of the CRISPR/Cas9 technique to generate frameshift null alleles in nonstandard natural accessions, in which no prior mutant was available. We used Bor-4 and Uod-1, which have high and low pollen number phenotypes, respectively ( Fig. 2b; Fig. 3). There were a number of sequence differences between the two accessions in the region encompassing the RDP1 gene from 777 bp upstream of the start codon to 643 bp downstream of the stop codon. We found one non-synonymous and six synonymous substitutions in the coding region, and 62 substitutions and six indel mutations in the non-coding region (Supplementary Fig. 16). Yet, both accessions did not reveal obvious loss-of-function mutations (Supplementary Fig. 
16), and rdp1 CRISPR null mutants of each accession showed reduced pollen number compared with the corresponding wild type (P < 2.2 × 10 -16 for Bor-4, P = 9.84 × 10 -7 for Uod-1; Fig. 3a, b; Fig. 2h). These results show that both of the naturally occurring variants of the RDP1 gene are not null mutants but rather encode a functional protein. Disruption of RDP1 had a stronger effect on pollen number in Bor-4 than in Uod-1 (analysis of variance (ANOVA) interaction effect P = 1.07 × 10 -5 ; Fig. 2h). This finding supports the notion that the Bor-4 allele has a stronger promotive effect on pollen number than the Uod-1 allele, although other loci in the genetic backgrounds of Bor-4 and Uod-1 may contribute to this difference through epistasis. To test the allelic effect of RDP1, we utilized a quantitative complementation test that controls for genetic background (Fig. 3). Among F 1 plants obtained by crossing heterozygotes for the frameshift mutation in each genetic background, we compared two genotypes: RDP1 Bor /rdp1 Uod vs. rdp1 Bor /RDP1 Uod . These F 1 genotypes are identical except for the differences at RDP1, where they both carry a frameshift allele but differ with respect to the functional allele; because of the crossing design, any independently segregating off-target effects, resulting from CRISPR/Cas9 mutagenesis would be equally distributed between the two genotype cohorts of interest. We found that pollen number in plants with RDP1 Uod was significantly lower than in plants with RDP1 Bor (nested ANOVA, 468 flowers from 26 individuals of RDP1 Bor /rdp1 Uod and 368 flowers from 20 individuals of rdp1 Bor /RDP1 Uod , P = 4.85 × 10 -8 ; Fig. 3c). The significant difference cannot be attributed to stochastic individual differences, because the significant difference between plants bearing functional RDP1 Bor and RDP1 Uod alleles was also observed in an individual-based test using averaged data of each individual separately (P = 0.0331). Thus, in an otherwise identical genetic background, the respective functional RDP1 haplotypes cause differences in pollen number. We also measured rosette leaf size, flowering date, ovule number, dry weight, and seed weight of plants in the two cohorts, but none of these traits showed significant differences between RDP1 Bor /rdp1 Uod and rdp1 Bor / RDP1 Uod (Supplementary Fig. 17). Thus, in contrast to the experimentally generated null mutants that showed pleiotropic growth defects, these results indicate that natural allelic differences at RDP1 affect pollen number in the absence of any detectable deleterious pleiotropy. This finding is also supported by no genotype-phenotype associations for other traits, as described above. Discussion We here isolated the RDP1 gene underlying natural variation in male gamete numbers. Our study provides evidence for polygenic selection on pollen number-associated loci, including RDP1. Even though RDP1 encodes a ribosome-biogenesis factor that would be required globally for proliferative growth, the naturally selected alleles predominantly confer reduced pollen number. This is analogous to a hypomorphic allele of the human G6PD gene, which encodes an enzyme in the pentose phosphate pathway and features a long haplotype because of selection for malaria resistance 24 . The mean number of pollen grains of an outcrossing population of Arabidopsis lyrata is~18,000 (ref. 35 ) and thus several times higher than our counts in A. thaliana (~2000-8000; Fig. 1e). 
Although the evolutionary split between the lineages leading to A. thaliana and A. lyrata is estimated to have occurred 5 million years ago or before [36-38], several studies suggested that predominant selfing in A. thaliana evolved much more recently (0-0.413 million years ago based on the timing of the loss of a self-incompatibility gene, ~0.5 million years ago based on the abundance of transposable elements, and ~0.3-1 million years ago based on the pattern of genome-wide linkage disequilibrium) 6,39,40. Thus, in addition to presumably fixed differences to distantly related outcrossing congeners, it is quite conceivable that some underlying loci, not limited to but including RDP1, are still segregating within A. thaliana. These might reflect an ongoing selection process for the further reduction in pollen number, which has been considered a hallmark of the so-called selfing syndrome 6,8,14. Although we note the possibility that partial sweeps of RDP1 and other segregating loci are not directly related to the transition to predominant selfing, our analysis did not find evidence of other selective forces, including local adaptation, climate association, or pleiotropic selection on other traits. Therefore, our study supports the theoretical predictions that reduced investment in male gametes is advantageous in predominantly selfing species. Our work also illustrates that a combination of GWAS and functional analysis using a quantitative complementation test based on CRISPR/Cas9-generated alleles provides a powerful approach to dissect allelic differences underlying quantitative natural variation.

Methods Pollen and ovule counting for genome-wide association studies. To perform GWAS, numbers of pollen grains and ovules per flower were counted for 144 and 151 world-wide natural accessions, respectively (Supplementary Tables 1, 2 and Supplementary Data 1). Plants were grown at 21 °C under a 16 h light/8 h dark cycle without vernalization. We grew four plants per accession. Three flower buds per plant were harvested from the main inflorescence, and each flower bud was collected into a 1.5 mL tube and dried at 65 °C overnight. We sampled individual flower buds of young main inflorescences but avoided the first and second flowers of the inflorescence because these flowers tend to show developmentally immature morphologies. We collected flower buds with mature pollen but before the anthers had opened (flower stage 13), and added 30 μL of 5% Tween 20 (Sigma-Aldrich, St. Louis, MO, USA) to each tube. The tubes were sonicated using a Bioruptor (Diagenode, Seraing, Belgium) in high power mode with 10 cycles of 30 s of sonication on and 30 s off so that the pollen grains were released from the anther sacs. After a short centrifugation and vortexing, 10 μL of the solution was mounted on a Neubauer slide. We took three images per sample using a light microscope. The number of pollen grains per image was counted using the particle counting implemented in ImageJ (http://imagej.nih.gov/ij/) and in Fiji (http://fiji.sc/Fiji). We then estimated the total pollen number per flower based on the image size and the total volume.

[Figure 3 caption: Quantitative complementation test of the RDP1 gene. Violin plots with means and standard errors of means indicated by red bold bars and boxes, respectively. (a, b) Pollen number differences between wild-type plants and homozygotes for a frameshift allele generated with CRISPR/Cas9 in the Bor-4 background (a; flowers counted: RDP1Bor/RDP1Bor, N = 89; rdp1Bor/rdp1Bor, N = 77) and in the Uod-1 background (b; flowers counted: RDP1Uod/RDP1Uod, N = 47; rdp1Uod/rdp1Uod, N = 43) (same data sets as in Fig. 2h). (c) The difference in the effect on pollen number of the two natural alleles, RDP1Bor and RDP1Uod. Pollen number of plants with RDP1Uod was significantly lower than that of plants with RDP1Bor (nested analysis of variance; P = 4.85 × 10−8; flowers counted: RDP1Bor/rdp1Uod, N = 468 from 26 individuals; rdp1Bor/RDP1Uod, N = 368 from 20 individuals). The two alleles were compared in the heterozygous state with a frameshift CRISPR/Cas9 allele, with otherwise identical genetic backgrounds. F1 plants were obtained from the cross of two heterozygotes, RDP1Bor/rdp1Bor and RDP1Uod/rdp1Uod. Source data are provided as a Source Data file.]

Ovule numbers were counted by dissecting young siliques (5.3 siliques per accession on average) under a dissecting microscope. Because of limited chamber space, we split the plants into two batches. The two batches were treated under the same conditions in the same chambers, but at different times. We controlled for this potential batch effect for pollen number by setting equal medians and standard deviations for the two batches, and the result was used as the GWAS input for the pollen number phenotype (Source Data file, Supplementary Table 1). Sometimes there were no or very few pollen grains per image, mainly when anthers did not open. To eliminate these artefacts, we discarded flowers with pollen counts of <10 per image. We confirmed that such extremely low pollen numbers did not occur in specific accessions, indicating that this is not heritable.

Plant materials and growth conditions for functional analyses. For functional analyses, we mainly used Arabidopsis thaliana wild-type and mutant plants of the Col-0, Bor-4, and Uod-1 accessions. The T-DNA lines SALK_064854/N666274 (rdp1-1) from the Salk collection 41 and GK-879G09/N484369 (rdp1-2) from the GABI-Kat collection 42 were obtained from the European Arabidopsis Stock Centre 26. The T-DNA insertion in each line was confirmed using PCR with the primers listed in Supplementary Table 10, as described at http://signal.salk.edu/tdnaprimers.2.html. DNA was extracted from young leaves using the cetrimonium bromide (CTAB) method. Arabidopsis seeds were sown on soil mixed with the insecticide ActaraG (Syngenta Agro, Switzerland) and stratified for 3-4 days at 4 °C in the dark. The plants were grown under 16 h of light at 22 °C and 8 h of dark at 20 °C, with weekly insecticide treatments (Kendo Gold, Syngenta Agro) unless noted otherwise (for GWAS, see above).

Statistical analysis. Unless stated otherwise, statistical and population genetic analyses were performed using the statistical software R 43. In boxplots, bars indicate the median, boxes indicate the interquartile range, and whiskers extend to the most extreme data point that is no more than 1.5 times the interquartile range from the box, with outliers shown by dots.

Correlations with published GWAS results and climatic data. To examine whether pollen and ovule numbers were correlated with any of the other 107 published phenotypes 19, or with climate and geographic variables 20, Pearson's correlation coefficients were calculated (Supplementary Tables 3 and 5). We also surveyed whether there were any SNPs significantly associated with climate variables in the 10-kb window including the SNP of the highest GWAS score for pollen number (Chr1:8,850,000-8,860,000); significance was based on genome-wide empirical P values 20,25. Focusing on the same 10-kb window, we also surveyed whether there were significant SNPs (P < 10−5; minor allele frequency (MAF) > 0.1) in the GWAS for the 107 published phenotypes. The correlation of pollen number with the S-haplogroups 21 was also tested.
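The counting and batch-control arithmetic described above can be sketched in a few lines of Python. The counting geometry (30 μL of suspension, three images per sample, discarding counts below 10 per image) follows the text; the per-image field volume is a hypothetical placeholder, and the function names are inventions for illustration.

import numpy as np

def pollen_per_flower(counts_per_image, field_volume_ul, total_volume_ul=30.0):
    """Scale the mean per-image count up to the whole suspension volume."""
    counts = np.asarray(counts_per_image, dtype=float)
    if np.any(counts < 10):          # discard artefactual low counts
        return np.nan
    return counts.mean() * total_volume_ul / field_volume_ul

def match_batches(batch_a, batch_b):
    """Rescale batch_b so both batches share a median and a standard
    deviation, as in the batch-effect control described above."""
    a, b = np.asarray(batch_a, float), np.asarray(batch_b, float)
    b_std = (b - np.median(b)) / b.std()
    return a, b_std * a.std() + np.median(a)

print(pollen_per_flower([130, 142, 125], field_volume_ul=1.0))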
SNP imputation. To perform GWAS, we generated a set of dense SNP markers that overlapped with the phenotyped accessions using imputation 22. First, we constructed a reference set of 186 haplotypes from resequencing data 44,45. MaCH version 1.0.16.c 46 was then used to impute non-genotyped SNPs for 1311 accessions in the 250 k SNP array data set 25. The output was then merged and converted into the homozygous SNP data set.

Genome-wide association study. GWAS was performed to identify loci associated with pollen number variation in 144 natural accessions. We also performed GWAS for ovule number and for 107 published phenotypes 19 using the same SNP set as for pollen number. The median values of pollen number were used to represent each accession. We used the log-transformed pollen number values for GWAS, as they did not deviate significantly from a normal distribution (Shapiro-Wilk normality test: P = 0.438). To deal with confounding effects as a result of population structure, we employed the mixed model implemented in the software Mixmogam, in which a genome-wide kinship matrix was incorporated as a random effect (population structure-corrected GWAS) 47. Using Mixmogam, we also generated a quantile-quantile (Q-Q) plot, which shows the relationship between the observed and expected negative logarithm of P values (Fig. 1h). For the Manhattan plot (Fig. 1f, g) [...] (Supplementary Table 10). Data were collected using the StepOnePlus Real-Time PCR System (Thermo Fisher Scientific, Waltham, MA) in accordance with the instruction manual. Expression levels were normalized using EF1-α (AT5G60390), which was used as an internal control.

Measuring pollen number and size with a cell counter. To expedite pollen number counting, we established a rapid method using a cell counter (CASY TT, OMNI Life Science GmbH, Germany) 52. We found that the pollen numbers of flowers on side inflorescences and side branches of the main inflorescence were similar, but those of flowers on the main inflorescence were higher (Supplementary Fig. 18). To obtain a large number of replicates, we sampled flowers from the former two positions. We sampled during the first 3 weeks of flowering and excluded the first and second flowers on each branch; this yielded up to 40 flowers per individual. Collecting, suspending, and sonicating of flowers were conducted as described above for GWAS. All pollen solutions were suspended in 10 mL of CASYton (OMNI Life Science GmbH), and pollen numbers were counted with a CASY TT cell counter 53. Three 400 μL aliquots of each pollen solution were counted. We counted particles within a size range of 12.5-25 μm (estimated diameter) as pollen. Samples with a clear peak at 7.5-12.5 μm were discarded as broken or unhealthy samples.
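As a minimal sketch of the size gating just described, the following Python snippet counts particles in the 12.5-25 μm pollen window and flags samples dominated by smaller debris. The diameter thresholds follow the text; the "clear peak" heuristic and the synthetic diameter distribution are assumptions for illustration.

import numpy as np

def gate_pollen(diameters_um):
    """Count pollen-sized particles and flag broken/unhealthy samples."""
    d = np.asarray(diameters_um, dtype=float)
    pollen = np.count_nonzero((d >= 12.5) & (d <= 25.0))
    debris = np.count_nonzero((d >= 7.5) & (d < 12.5))
    healthy = debris < pollen        # crude stand-in for "clear debris peak"
    return pollen, healthy

rng = np.random.default_rng(1)
sample = rng.normal(18.0, 2.0, 5000)   # synthetic diameter distribution, um
print(gate_pollen(sample))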
In situ hybridization. Samples were embedded in paraplast using an embedding machine (ASP200, Leica, Germany). RDP1 cDNA was PCR-amplified using the primer pair At1g25260g2F (5′-tgcctaatcaaagcgagtagacc-3′) and At1g25260gR (5′-cagagcaagttcagcttgaaagtagc-3′) and cloned into the pCR4-TOPO vector (Thermo Fisher Scientific). The cloned cDNA was used as a template for in vitro transcription with a MAXIscript T7 labeling kit (Thermo Fisher Scientific) to generate probes for hybridization 54.

GUS assays. Plant samples were incubated in 90% acetone for 20 min at room temperature, washed with 50 mM phosphate buffer containing 0.1% Triton X-100, 2 mM potassium ferrocyanide, and 2 mM potassium ferricyanide, and incubated in the same buffer supplemented with 1 mg mL^-1 X-Gluc for 5 h at 37°C 55.

Phylogenetic analysis. Multiple sequence alignment was performed using ClustalW implemented in the CLC Workbench (version 7.7). A phylogenetic tree was generated with the neighbor-joining distance algorithm, using the aligned region (amino acid positions 44-153) and 1000 bootstrap replicates. Yvh1 of Saccharomyces cerevisiae was used as an outgroup. Accession numbers of the sequences used are listed in Supplementary Table 11.

Selection scan. For the selection scan, we used the imputed SNP data set that was also used for GWAS. We used 298 accessions (Supplementary Data 1), covering all the accessions used for our GWAS of pollen and ovule numbers and the GWAS of 107 phenotypes reported by Atwell et al. 19,25,56. We used the iHS statistic for the selection scan; this statistic compares the extended haplotype homozygosity (EHH) between two alleles while controlling for the allele frequency of each SNP 23. The iHS contrasts the EHH values of the two alleles at each SNP: iHS values deviate strongly from zero when one allele has a long haplotype (high EHH) and the other has a short one (low EHH). The R library rehh 57 was used to calculate the iHS statistic. The Arabidopsis lyrata reference genome 58 was used to infer the ancestral state of each SNP. We first calculated the iHS for each SNP. Then, we split the genome into 10-kb windows and used the maximum score from the iHS scan in each window as the test statistic, as LD is known to decay within ~10 kb on average in world-wide samples of A. thaliana 59. Empirical P values were calculated for all windows and for all SNPs, based on their ranks in the genomic distributions. To deal with geographically biased sampling, a possible confounding factor, we also performed the selection scan with the 144 accessions that were used for GWAS (see Supplementary Note 2 for details; Supplementary Fig. 5).

We then asked whether the GWAS-associated windows were enriched in the extreme tails of the iHS statistic. Enrichment analysis was performed across the 10-kb windows. To examine whether enrichment of the iHS was commonly observed in GWAS peaks, we also performed this analysis for the GWAS results of the other 107 publicly available phenotypes 19 and compared them with the GWAS of pollen and ovule numbers. Four thresholds for the empirical P value tails were considered: 10%, 5%, 2.5%, and 1%. GWAS SNPs were considered if their P values were smaller than 10^-4 and the MAF was >0.1. We also performed the same enrichment analysis with MAF > 0.15 (Supplementary Fig. 4). To assess the effect of using a 10-kb window size, we also performed the iHS enrichment analysis on a per-SNP basis, in addition to the 10-kb windows (Supplementary Fig. 6). The statistical significance of the fold-enrichment was assessed with permutation tests that preserve the linkage disequilibrium structure in the data; a minimal sketch of this procedure is given below.
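The following Python sketch implements the window-level fold enrichment and the circular-shift permutation described in the next paragraph (simulated stand-in data rather than the published iHS and GWAS results; not the authors' pipeline):

```python
# Minimal sketch: per-window max |iHS| enrichment of GWAS-associated windows,
# with a circular-shift permutation that preserves the order (and hence the
# local LD structure) of the windows. `ihs` and `gwas_hit` are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_windows = 12000
ihs = np.abs(rng.standard_normal(n_windows))      # stand-in for per-window max |iHS|
gwas_hit = rng.random(n_windows) < 0.01           # stand-in for windows with GWAS SNPs

def fold_enrichment(ihs, hits, tail=0.05):
    cutoff = np.quantile(ihs, 1 - tail)
    in_tail = ihs >= cutoff
    observed = (hits & in_tail).mean()
    expected = hits.mean() * in_tail.mean()
    return observed / expected

obs = fold_enrichment(ihs, gwas_hit)

# Circular shifts: rotate the hit labels by a random offset, keeping their
# relative positions intact, and recompute the enrichment each time.
perms = np.array([
    fold_enrichment(ihs, np.roll(gwas_hit, rng.integers(1, n_windows)))
    for _ in range(1000)
])
p_value = (np.sum(perms >= obs) + 1) / (len(perms) + 1)
print(f"fold enrichment = {obs:.2f}, permutation P = {p_value:.3f}")
```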
A set of windows was resampled for each permutation, preserving the relative positions of the windows but shifting them by a uniformly drawn random number of windows. A similar permutation method has been used in several population genomic studies 19,25,56. Permutation was performed 1000 times.

Measuring plant phenotypes. For measuring rosette leaf size, we took images of plants, including a ruler, at 3 weeks after germination. A minimum circumscribed circle was drawn manually on the picture using Fiji; the area was then measured and converted to physical units according to the scale. Flowering date was counted as days from sowing to flowering. The dry weight of plants was measured using the aerial parts, and seed weight was determined by collecting seeds from dried plants. P values for the quantitative complementation test (Supplementary Fig. 17) are shown in Supplementary Table 12.

Reporting summary. Further information on experimental design is available in the Nature Research Reporting Summary linked to this paper.

Data availability. Data supporting the findings of this work are available within the paper and its Supplementary Information files. A Reporting Summary for this Article is available as a Supplementary Information file. The data sets generated and analyzed during the current study are available from the corresponding author upon request. RDP1 gene sequence data generated in this article were registered in the GenBank (National Center for Biotechnology Information) databases under the following accession numbers: LC164158 (Mz-0), LC164159 (Bor-4), LC504218 (Uod-1), LC164160 (A. lyrata), and LC164161 (Arabidopsis halleri). P0 gene sequence data of A. halleri generated in this Article were registered in GenBank databases under the following accession numbers: LC164162 and LC164163. Raw and processed sequencing data for GWAS and population genetic analyses are publicly available at https://doi.org/10.5061/dryad.jh9w0vt7z (ref. 60).
7,925.4
2020-06-08T00:00:00.000
[ "Biology", "Environmental Science" ]
Endophytic bacterial dataset of the Cavendish banana grown in Dak Lak Province of Vietnam using 16S rRNA gene metabarcoding

The Cavendish banana (Musa cavendishii L.) is one of the main perennial crops grown in Dak Lak Province of Vietnam. However, no data on the endophytic bacterial community of this plant are available. In this work, a representative sample, pooled from five root samples collected from five banana gardens (the Dwarf Cavendish cultivar) in Dak Lak, was used to analyze the endophytic microbiome by 16S rRNA gene metabarcoding. In total, 5 phyla, 7 classes, 20 orders, 31 families, and 47 genera of endophytic bacteria were identified in the sample. Bacteria belonging to the phylum Proteobacteria were predominant, accounting for 72.64% of the endophytic bacterial community, and functions involved in biosynthesis were the most abundant, at 75.35% of the predicted functional profile. These data help to characterize the endophytic bacterial community of the Cavendish banana cultivated in Dak Lak, Vietnam, and can be useful for further experiments concerning relationships between the growth of the Cavendish banana and endophytic bacteria. This is the first report on the endophytic bacteria of the Cavendish banana cultivated in Dak Lak, Vietnam.

Value of the Data
• Data help to understand the taxonomic and functional profiles of the endophytic bacteria of the Cavendish banana cultivated in Dak Lak Province of Vietnam.
• Data can be useful for comparing the endophytic bacteria of the Cavendish banana with those of other plants.
• Data can be useful for further experiments concerning relationships between the growth of the Cavendish banana and the endophytic bacteria.
Background

Bananas are Vietnam's third-largest fruit export, after dragon fruit and durian. According to a report, Vietnam planted 157,600 hectares and produced 2,514,800 tons of bananas in 2022 [1]. Bananas are grown throughout Vietnam, including in Dak Lak Province. Among the banana cultivars, the Dwarf Cavendish is the primary cultivar grown in Dak Lak. To our knowledge, data on bacteria and their functional profiles in coffee, black pepper, sugarcane, and rice plants cultivated in this province have been reported [2-5]; however, those of the Cavendish banana are unknown. This work aimed to establish a dataset on endophytic bacteria of banana (the Dwarf Cavendish cultivar) cultivated in Dak Lak, using 16S rRNA gene metabarcoding, for further experiments on the relationships between the growth of the Cavendish banana and indigenous endophytic bacteria.

Sample collection

Root sample collection, surface treatment, and maintenance were done as described previously [5]. Briefly, 5 root samples (about 50 g each) of banana, the Dwarf Cavendish cultivar, were collected from five gardens in Dak Lak in October 2021. The samples were then combined to create a representative sample. The representative sample was kept at 4 °C during sampling, and the root surface was sterilized to remove surface microorganisms. The treated sample was stored at −80 °C until the metagenomic DNA of the sample was extracted.

Genomic DNA extraction, library preparation, and sequencing

Genomic DNA extraction, library preparation, and sequencing were conducted as described previously [6]. Briefly, the DNeasy PowerSoil Pro kit (Qiagen, Germany) was used to isolate the metagenomic DNA from 300 mg of the root sample. Primers [7] were used to amplify the 16S rRNA gene (regions V1 to V9) of the metagenomic DNA, and the Swift amplicon 16S plus ITS panel kit (Swift Biosciences, USA) was used to prepare the library. The Illumina MiSeq platform (2 × 150 PE) was used to sequence the library.

Analysis of data

Bioinformatic tools were used to analyze the data as described previously [6]. Briefly, Trimmomatic 0.39 and Cutadapt 2.10 were used to remove adapters, primers, and low-quality sequences. The q2-dada2 plugin of the QIIME2 pipeline (2020.8) was used to denoise and dereplicate reads into amplicon sequence variants. The SILVA database and PICRUSt2 2.3.0-b were used to predict the taxonomic and functional profiles of the endophytic bacteria, respectively.

Limitations: Not applicable.

Ethics statement: The current work does not involve human subjects, animal experiments, or any data collected from social media platforms.

Data availability: Endophytic bacterial dataset of the Cavendish banana grown in Dak Lak, Vietnam (Original data) (Mendeley Data).

Fig. 1. Taxonomic profiles of the endophytic bacteria of the Cavendish banana, at the levels of phyla (A), classes (B), orders (C), families (D), and genera (E).
Fig. 2. Functional profiles of the endophytic bacteria of the Cavendish banana.
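To reproduce relative-abundance percentages of the kind reported above (e.g., Proteobacteria at 72.64%), a short pandas sketch suffices (the input file and column names are hypothetical, not part of the published dataset):

```python
# Minimal sketch: collapse an ASV count table with taxonomy assignments to
# phylum-level relative abundances. Columns are hypothetical.
import pandas as pd

asv = pd.read_csv("asv_table.csv")        # hypothetical columns: asv_id, count, phylum, genus
phylum_pct = (asv.groupby("phylum")["count"].sum()
                 .pipe(lambda s: 100 * s / s.sum())
                 .sort_values(ascending=False))
print(phylum_pct.round(2))                # for this dataset, Proteobacteria should be ~72.64
```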
1,261.4
2023-12-01T00:00:00.000
[ "Environmental Science", "Biology" ]
High-resolution, high-reflectivity operation of lamellar multilayer amplitude gratings: identification of the single-order regime

High resolution while maintaining high peak reflectivity can be achieved for Lamellar Multilayer Amplitude Gratings (LMAG) in the soft-x-ray (SXR) region. Using the coupled waves approach (CWA), it is derived that for small lamellar widths only the zeroth diffraction order needs to be considered in LMAG performance calculations; this is referred to as the single-order regime. In this regime, LMAG performance can be calculated by assuming a conventional multilayer mirror with decreased density, which significantly simplifies the calculations. Novel analytic criteria for the design of LMAGs are derived from the CWA, and it is shown, for the first time, that the resolution of an LMAG operating in the single-order regime is not limited by absorption, as it is in conventional multilayer mirrors. It is also shown that the peak reflectivity of an LMAG can then still be as high as that of a conventional multilayer mirror (MM). The performance of LMAGs operating in the single-order regime is thus only limited by technological factors. ©2010 Optical Society of America

OCIS codes: (050.1950) Diffraction gratings; (230.4170) Multilayers; (340.0340) X-ray optics; (340.7480) X-rays, soft x-rays, extreme ultraviolet (EUV); (230.1480) Bragg reflectors

References and links
1. A. E. Yakshin, R. W. E. van de Kruijs, I. Nedelcu, E. Zoethout, E. Louis, F. Bijkerk, H. Enkisch, S. Müllender, and M. J. Lercel, "Enhanced reflectance of interface engineered Mo/Si multilayers produced by thermal particle deposition," Proc. SPIE 6517, 651701 (2007).
2. R. A. M. Keski-Kuha, "Layered synthetic microstructure technology considerations for the extreme ultraviolet," Appl. Opt. 23(20), 3534 (1984).
3. B. Vidal, P. Vincent, P. Dhez, and M. Neviere, "Thin films and gratings: theories to optimize the high reflectivity of mirrors and gratings for X-ray optics," Proc. SPIE 563, 142-149 (1985).
4. R. Benbalagh, J.-M. André, R. Barchewitz, P. Jonnard, G. Julié, L. Mollard, G. Rolland, C. Rémond, P. Troussel, R. Marmoret, and E. O. Filatova, "Lamellar multilayer amplitude grating as soft-X-ray Bragg monochromator," Nucl. Instrum. Methods 541(3), 590-597 (2005).
5. A. Sammar, M. Ouahabi, R. Barchewitz, J.-M. André, R. Rivoira, C. Khan Malek, F. R. Ladan, and P. Guérin, "Theoretical and experimental study of soft X-ray diffraction by a lamellar multilayer amplitude grating," J. Opt. 24(1), 37-41 (1993).
6. T. Peng, "Rigorous formulation of scattering and guidance by dielectric grating waveguides: general case of oblique incidence," J. Opt. Soc. Am. A 6(12), 1869-1883 (1989).
7. A. Coves, B. Gimeno, J. Gil, M. V. Andres, A. A. San Blas, and V. E. Boria, "Full-wave analysis of dielectric frequency-selective surfaces using a vectorial modal method," IEEE Trans. Antenn. Propag. 52(8), 2091-2099 (2004).
8. L. I. Goray, "Numerical analysis of the efficiency of multilayer-coated gratings using integral method," Nucl. Instrum. Methods 536(1-2), 211-221 (2005).
9. A. Sammar, J.-M. André, and B. Pardo, "Diffraction and scattering by lamellar amplitude multilayer gratings in the X-UV region," Opt. Commun. 86(2), 245-254 (1991).
10. K. Krastev, J.-M. André, and R.
Barchewitz, "Further applications of a recursive modal method for calculating the efficiencies of X-UV multilayer gratings," J. Opt. Soc. Am. A 13(10), 2027 (1996).
11. L. I. Goray and J. F. Seely, "Wavelength separation of plus and minus orders of soft-x-ray-EUV multilayer-coated gratings at near-normal incidence," Proc. SPIE 5900, 81-91 (2005).
12. A. I. Erko, B. Vidal, P. Vincent, Yu. A. Agafonov, V. V. Martynov, D. V. Roschupkin, and M. Brunel, "Multilayer gratings efficiency: numerical and physical experiments," Nucl. Instrum. Methods Phys. Res. 333(2-3), 599-606 (1993).
13. V. V. Martynov, B. Vidal, P. Vincent, M. Brunel, D. V. Roschupkin, Yu. A. Agafonov, A. I. Erko, and A. Yakshin, "Comparison of modal and differential methods for multilayer gratings," Nucl. Instrum. Methods Phys. Res. 339(3), 617-625 (1994).
14. R. Benbalagh, "Monochromateurs multicouches à bande passante étroite et à faible fond continu pour le rayonnement X-UV" [Narrow-bandpass, low-background multilayer monochromators for X-UV radiation], PhD thesis, University of Paris VI, Paris, 2003.
15. I. V. Kozhevnikov and A. V. Vinogradov, "Basic formulae of XUV multilayer optics," Phys. Scr. T17, 137-145 (1987).
16. R. Petit, ed., Electromagnetic Theory of Gratings (Springer-Verlag, Berlin, 1980).

Introduction

Multilayer mirrors (MM) are widely used as dispersive elements in the soft-x-ray (SXR) region. Typically, such mirrors have a spectral resolution E/ΔE in the range of 20 to 200 and a peak reflectivity of several tens of percent [1]. Absorption of SXR in the MM limits the number of bi-layers that can effectively contribute to the reflection of the incident beam and therefore limits the ultimate resolution. By fabricating a grating structure in the multilayer mirror, the penetration depth of the SXR can be increased such that more bi-layers contribute to the reflection, and a higher resolution can be obtained. Multilayer mirrors equipped with such a grating structure are referred to as Lamellar Multilayer Amplitude Gratings (LMAG) [2-5].

For the design of LMAGs with enhanced performance, in terms of resolution and reflectivity, an adequate theory for modeling the diffraction of the incident SXR beam is required. At present, several rigorous approaches, such as modal theory or the integral method, are used to simulate LMAGs [6-8], in particular in the soft X-ray region [3-5,9-14]. However, the modal theory is poorly suited for LMAGs with groove shapes that differ from rectangular, or for the case of smooth interfaces between neighboring materials arising due to implantation and interdiffusion of atoms. Although the integral method described in [8] overcomes these problems, it is also stated there that it is too slow to allow the modeling of gratings coated by hundreds of layers.
In this paper, we describe the results of a novel LMAG performance analysis using a coupled-waves approach (CWA) that does not have the aforementioned limitations. Its mathematical formulation is based on a general expansion of the field reflected from the LMAG in terms of the waves diffracted into different orders, and is very understandable from the physical point of view. This CWA allows the implementation of arbitrary lamellar shapes and arbitrary depth distributions of the dielectric permittivity in the multilayer structure. In addition, it can be used without the limitations on the grating period, lamellar width, or number of bi-layers in the multilayer structure that are imposed by other models [8,10,11,13]. The CWA is well suited for calculations of LMAG performance in the soft X-ray region because of the very small polarizability of matter in this region. This small polarizability results in very narrow reflection and diffraction peaks and negligible coupling of the reflected and diffracted waves outside the peaks. Therefore, the number of diffraction orders that needs to be considered in the CWA in this region is limited, and computation times are quite practicable, even for multilayer structures having several thousands of layers. We would like to emphasize that in this paper, the reflectivity (zeroth-order diffraction efficiency) of an LMAG is analyzed as a function of the incidence angle of the incoming beam. The resolution of the LMAG is then characterized by the angular width (full width at half maximum) of the zeroth-order peak.

Using the CWA presented in this paper, we derive that for small lamellar widths, LMAGs operate in a single-order regime in which there is no significant overlap of the zeroth-order diffraction efficiency with higher orders. Only the zeroth order then needs to be considered when calculating the LMAG performance. We show that the reflection of a SXR wave from an LMAG operating in this regime simply equals the reflection from a MM with a material density that is decreased by a factor equal to the ratio of the lamel width to the grating period. Sophisticated diffraction theories are thus not necessary for the proper calculation of LMAG performance in the single-order regime. In contrast to what was stated in [4], we demonstrate that it is possible to derive novel analytic design criteria for LMAGs operating in the single-order regime. We also show here, for the first time, that the resolution of an LMAG operating in the single-order regime is not limited by absorption, in contrast to the resolution of a conventional MM. A high resolution and a high reflectivity have been shown to be mutually exclusive for a MM [15], whereas the resolution of an LMAG is only limited by technological factors, and the peak reflectivity can then still be as high as for a conventional MM.

In this paper, we will first discuss the basic equations of the CWA in section 2. Next, in section 3, the results of our diffraction calculations for the LMAG will be presented in comparison to the results of other theories. Finally, in section 4, the conditions necessary for the operation of LMAGs in the single-order regime will be discussed and the advantages of this regime will be presented.

Basic equations of the coupled waves approach

In this section, we will first derive the basic equations of the coupled-waves approach (CWA). Let us begin by defining the parameters of an LMAG and its geometrical representation as shown in Fig.
1a. Inside the lamellas, we consider a two-component (absorber A and spacer S) periodic multilayer structure (with bi-layer period d and thickness ratio γ). For simplicity, we assume that the lamellas have rectangular shapes, although the coupled waves approach described below can be extended to any lamellar shape. For brevity, we will only consider reflection and diffraction of s-polarized radiation (with the plane of incidence perpendicular to the LMAG grooves) in this paper, and we will neglect the effects of interfacial roughness. The Z-axis is defined as directed into the depth of the substrate, and L is the total thickness of the multilayer structure. The piece-wise periodic function U, shown in Fig. 1b, describes the lamellar profile in the X-direction (normalized to unity). Other functions can also be used to describe different lamellar profiles, for instance trapezoidal. The spatial distribution of the dielectric permittivity ε is then written as follows:

ε(x, z) = 1 + U(x) χ(z),   0 < z < L,   (1)

where the function χ(z) is simply the complex polarizability, which varies with depth in the multilayer structure. Although Fig. 1a, for simplicity, displays a polarizability χ(z) that varies between two values associated with materials A and S, we note that arbitrary depth distributions of the polarizability can also be used. The lamellar-profile function U(x) can be expanded into the Fourier series

U(x) = Σ_n U_n exp(2πinx/D);   (2)

for the rectangular profile with lamel-to-period ratio Γ, U_0 = Γ and U_n = sin(πnΓ)/(πn) for n ≠ 0. To analyze the diffraction pattern, we use a plane-wave superposition and solve the 2D wave equation in which the dielectric permittivity is a periodic function of x, as defined in Eqs. (1) and (2). The general solution then has the following form (see chapter 1 of Ref. [16]):

E(x, z) = Σ_n F_n(z) exp(i q_n x),   q_n = k cos θ_0 + 2πn/D.   (3)

Here, θ_0 is the grazing angle of the incident monochromatic plane wave, q_n is the X-component of the wave vector for the n-th diffraction order, and k = 2π/λ is the wave number in vacuum. The boundary conditions for our problem constitute that the wave field in vacuum and in the substrate should represent a superposition of plane waves propagating at different angles to the X-axis. Putting Eqs. (1)-(3) into the wave equation, we obtain a system of coupled differential equations and boundary conditions for the wave functions:

F_n''(z) + (k² − q_n²) F_n(z) + k² χ(z) Σ_m U_{n−m} F_m(z) = 0,   (4)

with boundary conditions

F_n'(0) + i κ_n F_n(0) = 2 i κ_n δ_{n,0},   F_n'(L) − i κ̃_n F_n(L) = 0,   (5)

where κ_n = (k² − q_n²)^{1/2} and κ̃_n = (k² ε_sub − q_n²)^{1/2} are the Z-components of the wave vector for the n-th diffraction order in vacuum and in the substrate, respectively, and δ_{n,0} is the Kronecker symbol. The boundary conditions (Eqs. (5)) signify that a plane wave is only incident onto the LMAG from the vacuum, at a single grazing angle θ_0. The results discussed in the following sections were obtained by direct numerical integration of the system (4)-(5) without imposing any restriction on LMAG parameters. Equation (4) expresses how the diffracted waves of different orders are related to each other and to the incident wave through the coefficients U_{n−m}, which characterize the lamellar profile (see Eq. (2)). For the rectangular lamel shape discussed here, the coefficients U_{n−m} are numbers. The amplitudes of the waves diffracted into the vacuum, r_n, and into the depth of the substrate, t_n, can then be found after solving Eqs. (4) and (5) to be

r_n = F_n(0) − δ_{n,0},   t_n = F_n(L).

The interaction of the incident and diffracted waves with the multilayer structure is described in a very simple manner through the complex polarizability χ(z).
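As an illustration of how compact the CWA is in practice, the following Python sketch (ours, for illustration only; not part of the original analysis) evaluates the Fourier coefficients U_n of the rectangular lamellar profile and assembles the right-hand side of the truncated coupled system (4). The two-point boundary conditions (5) would then be handled by a standard boundary-value solver such as scipy.integrate.solve_bvp:

```python
# Minimal sketch of the CWA ingredients above: U_n for a rectangular profile
# with lamel-to-period ratio Gamma, and the coupled-mode system of Eq. (4)
# truncated to orders n = -N..N, written as a first-order ODE system.
import numpy as np

def U_coeff(n, gamma):
    """U_0 = Gamma; U_n = sin(pi n Gamma)/(pi n) for n != 0."""
    n = np.asarray(n)
    return np.where(n == 0, gamma, np.sin(np.pi * n * gamma) / (np.pi * n + (n == 0)))

def cwa_rhs(z, y, k, q, chi, gamma, N):
    """Eq. (4) as a first-order system: y = [F_-N..F_N, F'_-N..F'_N].
    q is the array of q_n; chi(z) is the depth-dependent polarizability."""
    M = 2 * N + 1
    F, dF = y[:M], y[M:]
    n = np.arange(-N, N + 1)
    coupling = U_coeff(n[:, None] - n[None, :], gamma) @ F   # sum_m U_{n-m} F_m
    d2F = -(k**2 - q**2) * F - k**2 * chi(z) * coupling
    return np.concatenate([dF, d2F])
```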
As a first test of the validity of this approach, we can insert Γ = 1 into Eq. (4), in which case the LMAG actually corresponds to a conventional multilayer mirror. All coefficients U_{n−m} then turn to zero, except the coefficient U_0, which equals 1. Equation (4) is then reduced to the simplest equation,

F_0''(z) + k² [sin²θ_0 + χ(z)] F_0(z) = 0,   (6)

which is indeed an ordinary 1D wave equation describing the reflection of a wave from a MM, as would be expected.

Calculation of LMAG diffraction efficiency

In order to obtain an indication of the validity of our CWA, we compared our calculations of the performance of LMAGs with the results of Benbalagh et al., who used a recursive modal method [4,14]. For the comparison, we considered an LMAG based on a Mo/B4C multilayer structure and operating at a SXR energy E of 183.4 eV. The parameters of the LMAG are: D = 2 µm, Γ = 0.3, N = 150, d = 6 nm, and γ = 0.33. Using Eq. (4) we numerically calculated the diffraction efficiency of the zeroth order (reflectivity), |r_0|², which is shown in Fig. 2 as a function of the grazing angle θ_0 of the incident wave. The grazing incidence angle at which the highest reflectivity is obtained corresponds to the Bragg angle, and for our example it amounts to about 34.5°, as shown in Fig. 2f.

To obtain sufficiently accurate results within an acceptable calculation time, we must carefully choose the number of diffraction orders taken into account in the calculations. In Fig. 2, we show the zeroth-order reflectivity curves for an increasing number of diffraction orders. As can be seen in the figure, the peak value of the zeroth-order diffraction efficiency first decreases and then approaches a constant value when increasing the number of diffraction orders up to 15, i.e., considering up to the ±7th diffraction order. Please note the difference in scale along the diffraction-efficiency axes in Fig. 2. A further increase in the number of diffraction orders changes neither the shape of the reflectivity curve nor the peak reflectivity. Such behavior of the reflectivity curves for an increasing number of diffraction orders is quite understandable from a physical point of view, as the incident energy must be distributed over all orders taken into account.

Figure 3 shows the diffraction efficiencies of higher orders, at all times taking up to the ±7th diffraction order into account. The figure clearly shows that the diffraction efficiency near the Bragg angle is high for the lower diffraction orders and rapidly becomes negligible for higher diffraction orders. This is a specific feature of the SXR spectral range, where the very small polarizability of materials results in very narrow reflection and diffraction peaks. We can now conclude that for an accurate calculation of the zeroth-order reflectivity curve for this specific LMAG, it is sufficient to consider up to the ±5th diffraction order (11 orders in total).

To test the validity of our method, we compared the calculated zeroth-order reflectivity curves from our coupled-waves approach to the calculated reflectivity curves of Benbalagh et al. [4,14]. For this comparison, we included up to the ±7th diffraction order and conclude that the shape of the curves as well as the peak reflectivity are nearly identical. As an illustration, a peak reflectivity value for the zeroth order of 0.103 (see Fig. 2f) was obtained in our calculations, and a value of 0.100 by Benbalagh. In our calculations the peak reflectivity decreases with an increasing number of diffraction orders, whereas the results of Benbalagh show an increase. As discussed previously, a decrease in peak reflectivity is physically more understandable, as energy needs to be conserved.

LMAG single-order operating regime

The reflectivity of an LMAG can be increased by reducing the overlap of the zeroth-order diffraction efficiency with higher orders. From Figs. 2 and 3, we can understand that if there is no significant overlap of these diffraction efficiencies, only negligible amounts of energy will be diffracted into higher orders and the zeroth-order reflectivity will be increased. This regime will be referred to as the single-order regime.
The angular distance between diffraction peaks increases with decreasing grating period. Therefore, we can expect that only the zeroth-order diffraction efficiency needs to be considered in the calculation of LMAG performance if the LMAG period is small enough. Equation (4) can then be reduced to

F_0''(z) + k² [sin²θ_0 + Γ χ(z)] F_0(z) = 0.   (7)

This equation only differs from Eq. (6) by the parameter Γ ≠ 1, which is inserted as a multiplier of the polarizability χ(z). As the polarizability in the SXR region is proportional to the material density, we can conclude that Eq. (7) describes the reflection of a wave from a conventional multilayer structure consisting of materials whose densities are effectively reduced by a factor of Γ.

Let us now first derive the condition for LMAG operation in the single-order regime. In this regime, the angular width (full width at half maximum) of the zeroth-order reflectivity peak (Δθ_LMAG) should be small compared to the angular distance, in terms of the incidence angle, between the zeroth- and first-order diffraction peaks. This angular distance equals

δθ_{±1} ≈ λ / (2D sin θ_B).

The width Δθ_LMAG is determined by the difference in the polarizabilities of the materials in the multilayer structure [15]. Hence, it decreases by a factor of Γ for a single-order LMAG, leading to Δθ_LMAG = Γ Δθ_MM, with Δθ_MM the width of the Bragg peak of the corresponding conventional multilayer mirror. The final condition for operation in the single-order regime is then written as

Γ Δθ_MM ≪ λ / (2D sin θ_B),   i.e.,   ΓD ≪ λ / (2 Δθ_MM sin θ_B).   (8)

Condition (8) actually depends on the lamellar width ΓD rather than on the grating period D. By comparing calculated reflectivity peaks of several LMAGs with different lamellar widths, we determined that "much less" in Eq. (8) means less by a factor of 3, at least. The peak reflectivity for calculations only considering the zeroth order then differs by less than 1% from calculations considering many (11) orders.

To investigate LMAG operation in the single-order regime, let us now consider the same Mo/B4C LMAG as before, but with a smaller lamel width (ΓD) of 100 nm and a reduced grating period (D) of 0.3 µm (i.e., Γ = 1/3). The incident photon energy E is kept at 183.4 eV. The diffraction efficiencies of the zeroth (LMAG 0) and first (LMAG ±1) orders are shown in Fig. 4. It is clearly visible that the angular distance (Δθ) between the diffraction orders increases by roughly a factor of 7 as compared to Fig. 3. As a result, the diffraction efficiency of higher orders is very low near the Bragg angle, where the zeroth-order reflectivity is high.

Figure 4 also demonstrates that the peak reflectivity of a short-period, single-order LMAG can reach much higher values than that of a long-period, multi-order LMAG. The peak diffraction efficiency of the short-period Mo/B4C LMAG (D = 0.3 µm) reaches 0.38, which is almost 4 times more than the peak reflectivity of the long-period LMAG (D = 2 µm) shown in Fig. 2f. This can be explained by the re-distribution of the incident intensity over the diffracted orders. In single-order operation, the incident intensity is diffracted almost entirely into one order (Fig. 4), whereas the intensity must be distributed over several orders for LMAGs with longer periods (Fig. 2). As stated previously, Eq. (7) describes the reflection of waves from a conventional multilayer structure with reduced density. This is also demonstrated in Fig. 4, where a comparison is shown between the calculated reflectivity curve for an LMAG (LMAG 0) and that for a conventional multilayer mirror (MM) with material densities reduced by a factor of Γ = 1/3.
As can be seen, the agreement between the curves is excellent. In the single-order regime, sophisticated diffraction theories are thus not necessary for a proper calculation of the reflectivity of an LMAG in the SXR region.

In the following, we will compare the LMAG performance, in terms of resolution and reflectivity, with the MM performance. MM performance has already been described in previous work [15], where it was shown that for a MM in the SXR spectral region, the peak value of the reflectivity is completely determined by two dimensionless parameters, f and g, which are ratios built from the real and imaginary parts of the polarizabilities χ_A and χ_S of absorber and spacer. Unfortunately, Ref. [15] also showed that a high reflectivity and a high resolution are mutually exclusive for a MM. The resolution of a MM can be enhanced in different manners, namely by decreasing the γ-ratio, decreasing the difference in polarizabilities of the bi-layer materials, or using a MM that operates in a higher-order Bragg reflection. However, all of these approaches result in a loss of peak reflectivity, and the angular resolution, which can be directly correlated to the spectral resolution, will eventually be limited by the absorption of the spacer material to a minimal width (Δθ_MM)_min set by Im(χ_S) (Eq. (9)).

However, the performance is quite different in the case of an LMAG designed to operate in the single-order regime. The angular width of an LMAG is Δθ_LMAG = Γ Δθ_MM, as was discussed when deriving Eq. (8), and the resolution is thus only limited by the Γ that can be obtained technology-wise. As stated previously, Γ can be interpreted as a reduction factor for the material density, and a proportional variation in the density of both bi-layer materials does not change the parameters f and g. Hence, the peak reflectivity of an LMAG operating in the single-order regime can be the same as that of a conventional MM consisting of regular-density materials. The number of bi-layers in the multilayer structure of the LMAG that is required to obtain the maximum reflectance is inversely proportional to |χ_A − χ_S| and so must be increased by a factor of 1/Γ as compared to a conventional multilayer mirror [15].

Figure 5 illustrates these conclusions. Curve 1 shows the reflectivity curve (for E = 183 eV) of a conventional Mo/B4C multilayer mirror with multilayer parameters as before (Fig. 2) and N = 100. The angular width of the Bragg peak is Δθ_MM = 0.82°. The three other curves show the reflectivity of LMAGs based on the same Mo/B4C multilayer structure, but with different parameters Γ and D, such that the lamellar width (ΓD = 70 nm) remains the same for all LMAGs. A lamellar width of 70 nm satisfies condition (8) and is quite practicable for existing fabrication technologies. From curves 2-4, it can be seen that the width of the reflectivity curve indeed decreases by a factor of 1/Γ. The angular width of curve 4 is only 0.083°, which is actually about 1.5 times less than the minimal possible angular width ((Δθ_MM)_min ≈ 0.13°) of this MM (Eq. (9)). Yet, the peak reflectivity of the LMAGs is still the same as that of the conventional MM, although the number of bi-layers required for this is very high.
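The numbers quoted above can be checked with a few lines of Python (a back-of-the-envelope script based on the values in the text and the form of condition (8) reconstructed above, not on the full CWA calculation):

```python
# Sanity check of the single-order condition (8): E = 183.4 eV,
# Bragg angle ~34.5 deg, Delta-theta_MM = 0.82 deg (values from the text).
import numpy as np

E_eV = 183.4
lam_nm = 1239.84 / E_eV                    # photon wavelength, ~6.76 nm
theta_B = np.radians(34.5)                 # Bragg angle of the Mo/B4C stack
dtheta_MM = np.radians(0.82)               # width of the conventional MM Bragg peak

# Condition (8): Gamma*D << lambda / (2 * dtheta_MM * sin(theta_B)),
# with "much less" meaning at least a factor of 3.
GD_bound = lam_nm / (2 * dtheta_MM * np.sin(theta_B))
print(f"lambda = {lam_nm:.2f} nm, bound on Gamma*D = {GD_bound:.0f} nm, "
      f"with factor-3 margin: {GD_bound/3:.0f} nm")
# -> roughly 420 nm, i.e. ~140 nm with the factor-3 margin, so the lamellar
#    widths of 70-100 nm used in the text comfortably satisfy the condition.
```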
We can now state novel analytic design rules for LMAGs. If the single-order condition (8) is fulfilled, the resolution as well as the number of bi-layers required for maximum reflectance simply scale with 1/Γ. There are no physical limitations on the ultimate resolution of an LMAG operating in the single-order regime, and the maximum reflectance can then still be as high as for a conventional MM. Evidently, the resolution will in practice be limited by technological factors, like the accurate deposition and etching of multilayer structures with very large numbers of bi-layers.

In this paper, we only considered the case of s-polarized radiation. However, all the main conclusions of the paper are also valid for p-polarized radiation, if Δθ_MM in Eq. (8) is taken to be the width of the Bragg peak for p-polarization.

Conclusions

Using our coupled waves approach (CWA), we have identified a high-resolution, high-reflectivity single-order operating regime of Lamellar Multilayer Amplitude Gratings (LMAG) for the soft-x-ray (SXR) region. In this single-order regime, the overlap of the zeroth-order diffraction efficiency with higher-order efficiencies is negligible. The performance, in terms of resolution and reflectivity, of LMAGs operating in the single-order regime can be calculated by assuming a conventional multilayer mirror (MM) of which the material densities have been reduced by a factor of Γ (lamel-to-period ratio). For LMAGs operating in the single-order regime, both the resolution of the LMAG and the number of bi-layers N required for maximum reflectance scale with 1/Γ in comparison to a conventional MM. This allowed us to define novel analytic design rules for LMAGs. We have also shown, for the first time, that the resolution and reflectivity of an LMAG are only limited by the number of bi-layers N and the lamel-to-period ratio Γ that can be obtained technology-wise. An LMAG can thus reach much higher resolutions than a conventional MM, without loss of peak reflectivity.

Fig. 1. Schematic of the cross section of an LMAG. (a): An incident beam from the left (In), under grazing angle θ_0, is reflected from the multilayer and diffracted into multiple orders (Out) by the grating structure. The multilayer is built up from N bi-layers with thickness d. Each bi-layer consists of an absorber material (A) with thickness γd and a spacer material (S) with thickness (1−γ)d. The grating structure of the LMAG is defined by the grating period D and lamel width ΓD. (b): The normalized function U(x) is used to describe the lamellar profile.

Fig. 3. Diffraction efficiencies of higher orders at E = 183.4 eV versus the grazing angle of the incident beam. Parameters of the LMAG are the same as for Fig. 2.
At all times, 15 diffraction orders (up to the ±7th order) were taken into account in the calculations.

Fig. 4. Diffraction efficiency (at E = 183.4 eV) of the zeroth (LMAG 0) and first (LMAG ±1) diffraction orders of a Mo/B4C LMAG versus the grazing angle of the incident beam. The grating period is D = 0.3 µm and the rest of the LMAG parameters are the same as for Fig. 2. In the calculations, 11 diffraction orders were taken into account. The reflectivity of a conventional Mo/B4C multilayer mirror consisting of materials with decreased density is also shown (MM); it is in excellent agreement with the zeroth-order LMAG diffraction efficiency.
6,148.8
2010-07-19T00:00:00.000
[ "Physics" ]
Tractor beams, pressor beams, and stressor beams in general relativity

The metrics of general relativity generally fall into two categories: those which are solutions of the Einstein equations for a given source energy-momentum tensor, and the "reverse engineered" metrics -- metrics bespoke for a certain purpose. Their energy-momentum tensors are then calculated by inserting these into the Einstein equations. This latter approach has found frequent use when confronted with creative input from fiction, wormholes and warp drives being the most famous examples. In this paper, we shall again take inspiration from fiction, and see what general relativity can tell us about the possibility of a gravitationally induced tractor beam. We will base our construction on warp drives and show how versatile this ansatz alone proves to be. Not only can we easily find tractor beams (attracting objects); repulsor/pressor beams are just as attainable, and a generalization to "stressor" beams is seen to present itself quite naturally. We show that all of these metrics would violate various energy conditions. This will provide an opportunity to ruminate on the meaning of energy conditions as such, and on what we can learn about whether an arbitrarily advanced civilization might have access to such beams.

To the best of our knowledge, no really focussed work has been carried out on putting tractor/pressor beams into a coherent general relativistic context. (Acoustic tractor beams [24-27], matter-wave tractor beams [28], or optical tweezers [29] seem to be the closest one gets in the current scientific literature.) Herein we shall analyze tractor/pressor/stressor beams from a general relativistic perspective. The basic idea is to significantly modify and adapt the "warp drive" spacetimes [13-18] in a suitable manner, giving them a "beam-like" profile, and analysing the induced stresses and forces. Instead of a spaceship riding inside a warp bubble, we will assume that the warp field is in the form of a "beam" generated to pull/repel a target. The mechanism by which this field is generated is beyond the scope of this article. We will assume that some arbitrarily advanced civilisation [30,31] might have developed the appropriate beam-generation technology. Specifically, we shall assume for convenience that the modified warp-drive space-times are oriented in the z direction, and give them a uniform transverse profile in the x and y directions, typically of the form f(x² + y²). Doing so, one obtains a "beam" rather than a "warp bubble". Note that in this work we will let the (t, z) dependence remain arbitrary.

As always when working in this area of speculative physics -- including wormholes, and warp drives, and now tractor/pressor/stressor beams -- a major justification for undertaking this exercise is to push general relativity to the breaking point, in the hope that the resulting wreckage will tell us something interesting, possibly even about quantum gravity [5,16].

After first analysing Natário's generic warp drive case [14], we will consider three special cases:
1. We modify the Alcubierre fixed-flow-direction warp field.
2. We modify the Natário zero-expansion warp field.
3. We modify the zero-vorticity warp field.
We shall also illustrate each of these three cases with some specific examples based on beams with a Gaussian profile.
A recurring theme in the analysis will be the use of the classical point-wise energy conditions (null, weak, strong, and dominant; abbreviated NEC, WEC, SEC, and DEC, respectively) [32-35]. They can be considered as an attempt to remain as agnostic as possible about underlying equations of state. While the energy conditions do not seem to be fundamental physics, they are at the very least a very good sanity check on just how weird the physics is getting [33,36,37]. We already know of examples of violations at microscopic scales (e.g., Hawking radiation) and mesoscopic scales (e.g., the Casimir effect). No macroscopic violations of the energy conditions are known up to this point, except at truly cosmological scales -- and those violate only some of the energy conditions (the accelerated expansion of the universe violates the strong and dominant energy conditions, but not the null and weak energy conditions [38-41]). Therefore, besides the violation of the energy conditions not being an absolute prohibition, it is an indication that one should look very carefully at the underlying physics [36,37]. For more background on the energy conditions, see the extensive literature on the subject.

For the sake of full transparency, we should also mention that our interest in these topics was rekindled and inspired by three recent papers [70-72]. Unfortunately, significant parts of those three papers are incorrect, misguided, and/or misleading. See reference [18] for details.

When things need to be moved

One of Wheeler's adages that became standard general relativity folklore is the famous saying that "space-time tells matter how to move; matter tells space-time how to curve". From many a practical point of view, questions regarding objects' movement are less about the how and more about the ought -- things are wanted elsewhere from where they are now. It is this logistical perspective that we shall address in the following: How can we ensure that general relativity does the job of moving an object (like a cow [preferably a spherical cow, in vacuum], or a Corellian CR90 corvette) for us? The key ingredient will be to limit ourselves to test-field cases, where we neglect the mass of the objects we want to move, and how they interact with space-time and with the matter we put in space-time to move them. This reduces the core physics question to one of forces: We want to use the pressures encoded in the stress-energy tensor of a beam-like field to move target test masses.

Figure 1 (caption): The field is assumed to be sourced by someone on the left at negative z, the target -- a flat cow in the tractor-field space-time -- on the right at positive z. Choosing the source and target provides for a distinction between tractor and pressor (or repulsor) fields. Details concerning this particular beam configuration can be found in section 6.2.1. The parameters of equation (6.37) that we have chosen are: A = 0.5, B = C = 1.0. The purple line in the density plot for the zero-vorticity beam indicates the location where the energy density is zero.

The primary force-related calculation we shall undertake is this: If the beam is pointed in the z direction, then one calculates the stress-energy component T^{zz}(t, x, y, z) and integrates it over the entire transverse x-y plane to find the net force:

F(t, z) = ± ∫_{R²} T^{zz}(t, x, y, z) dx dy.   (2.1)

Here the + sign corresponds to a beam impinging on the target from the left, whereas the − sign corresponds to a beam impinging on the target from the right.
There is an approximation being made here, namely that the beam is narrow with respect to the target, so that it is a good approximation to integrate over the entire transverse x-y plane. If the beam is instead wide compared to the size of the target, then one should instead use the approximation

F(t, z) ≈ ± T^{zz}(t, 0, 0, z) A.   (2.2)

Here T^{zz}(t, 0, 0, z) is the on-axis stress, and A is the cross-sectional area of the target. For a beam of intermediate width (comparable to the size of the target), one would in principle need to calculate

F(t, z) = ± ∫_{target} T^{zz}(t, x, y, z) dx dy,   (2.3)

but this is unnecessarily complicated for the primary issues we wish to address. The quantity F(t, z) is the net force the beam exerts on some target located at position z at time t. For convenience we shall henceforth assume that the field is generated by someone positioned on the left, and that the target is positioned to the right of the generator (see Figure 1), thus allowing us to restrict attention to the plus sign in equations (2.1)-(2.3). We shall furthermore assume that the target will move under the influence of the field, while the "generator" will not, and -- as mentioned above -- both behave as test fields. This setup provides for a simple characterization of the effect of the field: If F(t, z) < 0, corresponding to attraction, we call this a tractor beam. If F(t, z) > 0, corresponding to repulsion, we call this a pressor beam.

On the other hand, the definition of a stressor beam can be a little trickier. The reason for this is that, independent of the overall sign of F(t, z), one can quite generally define a beam which has significantly varying pressure across the cross-sectional area of the target. In this way, there might be a certain ambiguity about when a specific beam would be considered a tractor/pressor or a stressor beam, since this would depend on the properties of the target material -- such as its elasticity and ultimate yield strength, and so on. However, for most "applications", we expect the T^{zz} component of a tractor (pressor) beam to not vary too greatly over its region of influence on the target. A quick measure of when a beam would instead behave as a stressor beam is when the variation of T^{zz} over the cross-sectional area A of the target exposed to the beam becomes comparable to σ_material, the ultimate yield stress of the material making up the target.

While equations (2.1)-(2.3) are universally valid, both for standard general relativity and for modified theories of gravity, we will focus mainly on standard general relativity. Therefore, using the Einstein equations, we have for a narrow beam, in terms of the Einstein tensor,

F(t, z) = ± (1/8π) ∫_{R²} G^{zz}(t, x, y, z) dx dy,

while for a wide beam

F(t, z) ≈ ± (1/8π) G^{zz}(t, 0, 0, z) A.

These are the key equations we will be using in the following sections. As usual, we are using geometrodynamic units, where G_Newton → 1 and c → 1. If one wishes to reinstate SI units, then the conversion factor is the Stoney force, F_* = c⁴/G_Newton. It is worthwhile mentioning that the magnitude of the Stoney force is truly enormous -- some 1.2 × 10^44 Newtons. Accordingly, relatively small spacetime curvatures (weak-field gravity) can still lead to significant human-scale forces and stresses. It is beyond the scope of the present article to consider just how weak the weak fields can be before the test-field approximation for the target mass breaks down.

Kinematics

Our tractor/pressor/stressor beams will be based on modifications of Natário-style generic warp drives [14-18].
The generic form of the space-time metric line element is

ds² = −dt² + δ_{ij} (dx^i − v^i dt)(dx^j − v^j dt).   (3.1)

Note that the lapse is unity, N → 1, the spatial slices are flat, g_{ij} → δ_{ij}, and the "flow" vector v^i(t, x, y, z) is the negative of what is (in the ADM decomposition) usually called the "shift" vector [78-81]. A kinematically useful quantity is the vorticity of the flow field,

ω = ∇ × v,   (3.2)

and its square, ω · ω. The constant-t spatial slices have covariant normal n_a = −∂_a t = (−1, 0, 0, 0)_a, whose contravariant components give the future-pointing 4-velocity n^a = (1, v^i). Observers that "go with the flow", moving with 4-velocity n^a, are geodesics, and are often called Eulerian. In the current context the Eulerian stress-energy components can be recast as follows [18]:

• The Gauss equation yields the Eulerian energy density in terms of the extrinsic curvature K_{ij} of the flat slices:

16π ρ = K² − K_{ij} K^{ij},   with K = tr(K_{ij}).

• The Gauss-Mainardi equations yield the Eulerian energy flux:

8π f_i = ∇_j K^j{}_i − ∇_i K.

• The 3×3 stress tensor is somewhat messier, and can be expressed in terms of the extrinsic curvature and its Lie derivatives, Eq. (3.6) of [18]. For the various explicit examples we consider below, we shall often use ab initio calculations instead of this general (but relatively intractable) result.

• In contrast, the trace of the 3×3 stress tensor is somewhat easier to deal with: the average pressure p̄ can be written in terms of ρ and the divergence ∇_a(K n^a); see Eq. (3.8).

These are the key stress-energy components we need for the current task. For further discussion on these and related issues, see references [18,78-80]. An immediate consequence of these general results is that once appropriate fall-off conditions are imposed at spatial infinity one has

∫_{R³} ρ d³x = −(1/32π) ∫_{R³} (ω · ω) d³x ≤ 0,   and   ∫_{R³} (ρ + p̄) d³x ≤ 0.   (3.9)

This implies that violations of the WEC and NEC are unavoidable [18], and we shall see similar results repeatedly recurring in the subsequent discussion.

Beam profile

In this section, we will discuss the kinematics and general properties of the stress-energy tensor of such beams, including the forces key to our interpretation of them.

Beam kinematics

For our purposes we shall choose a factorized "beam" profile for the flow vector, one that respects axial symmetry around the z-axis:

v_x = k(t, z) x h(x² + y²),   v_y = k(t, z) y h(x² + y²),   v_z = v(t, z) f(x² + y²).   (4.1)-(4.3)

We shall refer to f(x² + y²) and h(x² + y²) as profile functions, whereas v(t, z) and k(t, z) will be referred to as envelope functions. Note the explicit presence of x and y in the flow components v_x and v_y, precisely to maintain axial symmetry. Furthermore, useful definitions of the average transverse width of the beam can be built from moments of the profile functions, e.g.

W_f² = ∫₀^∞ u f(u) du / ∫₀^∞ f(u) du,   and similarly W_h² for h,

with u = x² + y². Both of these characterizations of average width depend only on the profile functions, not on the envelope functions. Far away from the beam axis, as x² + y² → ∞, we will demand that both profile functions tend to zero, f(x² + y²) → 0 and h(x² + y²) → 0, in order that the beam asymptotically reduces to flat Minkowski space. All of the t and z dependence is encoded in the two functions v(t, z) and k(t, z). Since one wants the beam to be of finite length, and not stretch all the way across the universe, one should demand both lim_{z→±∞} v(t, z) → 0 and lim_{z→±∞} k(t, z) → 0, again ensuring an asymptotic approach to Minkowski space. More precisely, we shall demand sufficiently rapid fall-off at spatial infinity, which will then also allow integration by parts unrestricted by boundary terms. We shall also enforce smooth on-axis behaviour by demanding that the profile functions and their derivatives be finite on the beam axis. These structural assumptions for the flow vector are basically our definition of what we mean by a "beam" directed along the z-axis.
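For concreteness, a minimal Python sketch of such a factorized beam -- with Gaussian profile and envelope functions chosen purely for illustration -- and a finite-difference evaluation of its vorticity reads:

```python
# Minimal sketch (illustration only): the factorized beam flow field of
# Eqs. (4.1)-(4.3) with Gaussian profiles, and a numerical curl(v) check.
# The scales sigma and z0, and the envelope shapes, are assumptions.
import numpy as np

sigma, z0 = 1.0, 5.0

def envelope_v(t, z):  return np.exp(-(z / z0) ** 2)          # v(t,z), t-independent here
def envelope_k(t, z):  return 0.3 * np.exp(-(z / z0) ** 2)    # k(t,z)
def prof_f(u):         return np.exp(-u / sigma ** 2)         # f(x^2+y^2)
def prof_h(u):         return np.exp(-u / sigma ** 2)         # h(x^2+y^2)

def flow(t, x, y, z):
    u = x ** 2 + y ** 2
    return np.array([envelope_k(t, z) * x * prof_h(u),
                     envelope_k(t, z) * y * prof_h(u),
                     envelope_v(t, z) * prof_f(u)])

def curl(t, p, eps=1e-5):
    """Central-difference curl of the flow at point p = (x, y, z)."""
    def d(i, j):                                   # d v_i / d x_j
        dp = np.zeros(3); dp[j] = eps
        return (flow(t, *(p + dp))[i] - flow(t, *(p - dp))[i]) / (2 * eps)
    return np.array([d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)])

p = np.array([0.5, -0.3, 1.0])
print("omega =", curl(0.0, p))            # nonzero in general for such a beam
```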
The previously introduced vorticity (3.2) for our beam geometry reduces to

ω = (y, −x, 0) [2 v(t, z) f′(u) − ∂_z k(t, z) h(u)],   (4.9)

with u = x² + y², so that the square of the vorticity,

ω · ω = u [2 v(t, z) f′(u) − ∂_z k(t, z) h(u)]²,   (4.10)

will show up quite often in subsequent calculations.

Stress-energy basics

If we now additionally impose the factorization conditions (4.1)-(4.3) appropriate to a beam geometry, then the axial symmetry imposes additional constraints on the stress-energy tensor. Specifically, the components T^{xz} and T^{yz} are odd under the reflections x ↔ −x and y ↔ −y respectively, while T^{xy} is odd under either reflection. This implies in particular that the transverse integrals of these components vanish identically. Similarly, for the x-directed and y-directed fluxes we have f_x = x F_x(t, x, y, z) and f_y = y F_y(t, x, y, z), where the F_i(t, x, y, z) are specific scalar functions, even in x and y, that can be explicitly calculated when required. However, the F_i(t, x, y, z) are not the most interesting quantities for our purposes. We shall instead be more focussed on the comoving energy density ρ(t, x, y, z), the stress-energy component T^{zz}(t, x, y, z), the flux component f_z(t, x, y, z) directed along the beam axis, and the average stress p̄(t, x, y, z). We now continue our calculations using the generic beam-like flow (4.1)-(4.3). As yet, we impose no extra restriction on the four functions v(t, z), k(t, z), f(x² + y²), and h(x² + y²), apart from the previously mentioned asymptotic conditions, namely that f(x² + y²) → 0 and h(x² + y²) → 0 away from the beam axis, and that both v(t, z) and k(t, z) vanish as z → ±∞.

Force

In order to calculate the force (2.1), let us now investigate T^{zz}(t, x, y, z) for this factorized flow and integrate it over the x-y plane. For T^{zz}(t, x, y, z) we find a lengthy but explicit expression which, using the shorthand u = x² + y², can be organized into a small number of terms whose transverse integrals can be evaluated, or at least sign-characterized, without detailed calculation. Using this, we find that in the narrow-beam approximation the net force F(t, z) is a sum of negative-definite and positive-definite terms, thus allowing the generic beam to potentially be fine-tuned as either a tractor or a pressor (or even a stressor). In contrast, in the wide-beam approximation we need to evaluate the on-axis stress T^{zz}(t, 0, 0, z), Eq. (4.30). The force exerted on the target in the wide-beam approximation, Eq. (4.31), is then of indefinite sign, depending delicately on the envelope functions, potentially allowing either tractor or pressor behaviour.

Flux

The flux in the z-direction, as defined in equation (3.5), integrates to zero over the transverse plane:

∫_{R²} f_z(t, x, y, z) dx dy = 0.

For the x-direction, and similarly for the y-direction, by appealing to the anti-symmetry noted above,

∫_{R²} f_x dx dy = ∫_{R²} f_y dx dy = 0.

Consequently, for the general tractor/pressor/stressor beam we always have the net flux integrating to zero. Thence, at least in the narrow-beam approximation, we never need to worry about the net fluxes impinging on the target; they always quietly cancel. However, even if the net fluxes seen by Eulerian observers cancel, there might be significant fluctuations around zero over the cross-sectional area of the target. For instance, the on-axis flux f_z(t, 0, 0, z) is generically nonzero; it is now the envelope functions v(t, z) and k(t, z) that primarily drive the localized on-axis fluxes in the wide-beam approximation.

Off-diagonal stress components

Similar steps can be applied to equation (4.14) concerning the T^{xz} and T^{yz} components, implying (using anti-symmetry under x ↔ −x and y ↔ −y respectively)

∫_{R²} T^{xz} dx dy = ∫_{R²} T^{yz} dx dy = 0.

Finally, from equation (4.13) we get (now using either anti-symmetry under x ↔ −x, or anti-symmetry under y ↔ −y)

∫_{R²} T^{xy} dx dy = 0.

Combining all the above, the integral ∫_{R²} T_{âb̂} dx dy is purely diagonal; all off-diagonal elements vanish. This really is just a consequence of the assumed axial symmetry of our beam. These observations have the effect of focussing our attention on the diagonal components of the (integrated) stress-energy.
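The parity argument used repeatedly above can also be checked numerically; the following minimal Python check (with a Gaussian stand-in for the smooth axisymmetric factor) confirms that any component of the form x g(x² + y²) -- such as T^{xz} for this beam -- integrates to zero over the transverse plane:

```python
# Quick numerical illustration of the parity/antisymmetry argument.
import numpy as np
from scipy import integrate

g = lambda u: np.exp(-u)                          # stand-in axisymmetric factor

Txz_like = lambda y, x: x * g(x**2 + y**2)        # odd in x, like T^{xz}
val, err = integrate.dblquad(Txz_like, -10, 10, -10, 10)
print(f"integral = {val:.2e} (+/- {err:.1e})")    # consistent with zero
```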
Eulerian energy density

For the Eulerian comoving energy density in this generic beam we find a lengthy explicit expression. After an integration by parts in the transverse plane, and after also integrating over z and applying appropriate boundary conditions at z = ±∞ (where the beam has to switch off by definition) to discard the first term, which is a total derivative, the result takes a compact form suitable for sign analysis.

Weak energy condition

This puts us in a good position to have a first look at an energy condition, in this case the WEC. Doing another integration by parts, again invoking suitable boundary conditions, the surviving integrand becomes a (negative) perfect square, Eq. (4.55). That integrand is, up to a positive prefactor, just the square of the vorticity (ω · ω), see equation (4.10), so that this is equivalent to

∫_{R³} ρ d³x = −(1/32π) ∫_{R³} (ω · ω) d³x ≤ 0.

This should not come as a surprise, given that it is just equation (3.9). Accordingly, in this generic tractor/pressor/stressor beam configuration, if the Eulerian comoving energy density is positive anywhere, then it must be negative somewhere else -- so the WEC is certainly violated.

Null energy condition

Now consider the NEC. Take equation (3.8) and integrate over all space. The term involving ∇_a(K n^a) integrates to zero under our fall-off conditions, and so ∫_{R³} (ρ + p̄) d³x reduces to a (positive) multiple of ∫_{R³} ρ d³x. (We have already seen in the previous subsection that this last quantity is non-positive.) Accordingly, in this generic tractor/pressor/stressor beam configuration, if the quantity (ρ + p̄) is positive anywhere, then it must be negative somewhere else -- so the NEC is certainly violated. Now, given that the NEC is the weakest of all the standard, classical, point-wise energy conditions, all the other energy conditions will also be violated. This has to hold for all tractor/pressor/stressor configurations based on modifications of the generic Natário warp drive. Furthermore, this is completely in accord with what we saw happen for generic warp-drive space-times [18].

Special cases

We now consider three special cases that link our tractor/pressor/stressor discussion back to various previous warp drive analyses [13-18]. The connections between the envelope and profile functions of the generic Natário case described by equations (4.1)-(4.3) and those appearing in these special cases are summarised in Table 1.

Table 1: A summary of the connection between the generic Natário metric, its envelope functions k and v, and its profile functions h and f on the one hand, and the various functions appearing in the special cases considered in section 5 on the other.

Modified Alcubierre warp flow

For this particular special case we will assume the field to be oriented along a fixed direction, for convenience taken to be the z direction. This corresponds to taking the flow field to be

v_x = v_y = 0,   v_z = v(t, z) f(x² + y²).

For this modified Alcubierre flow field the vorticity is

ω = v(t, z) f′(u) (2y, −2x, 0),   and hence   ω · ω = 4 u v(t, z)² f′(u)²,

with u = x² + y². Now, using the result that for the Alcubierre warp field T^{zz} = 3ρ, obtained in [17], a standard computation yields [13,16-18]

ρ = −(1/8π) v(t, z)² u f′(u)² ≤ 0.

This is already enough to guarantee that both the weak energy condition (WEC) and the null energy condition (NEC) are violated in this space-time [17,18]. Calculating the net force for a narrow beam: given our factorization assumption, the stress reduces to T^{zz} = 3ρ, and the force factorizes to

F(t, z) = −(3/8) v(t, z)² ∫₀^∞ u [f′(u)]² du.

This is always a tractor beam. The x-y integral is just some positive dimensionless number characterizing the shape of the beam. (Recall that our convention was to always put the target to the right of the generator.
If we flip target and generator, so that the target is now on the left and the beam impinges on the target from the right, then there is a sign flip for the force F(t, z), and with F(t, z) > 0 the target is still attracted to the generator.) If we instead assume a wide beam, one can immediately deduce that in this case equation (4.31) will always reduce to zero, as either k or h is zero.

Zero-expansion beam

Now consider a zero-expansion flow field subject to ∂_i v^i = 0. Starting with the generic flow field appropriate to an axisymmetric beam, we have:

v_x(t, x, y, z) = k(t, z) x h(x^2 + y^2),   (5.11)
v_y(t, x, y, z) = k(t, z) y h(x^2 + y^2),   (5.12)
v_z(t, x, y, z) = v(t, z) f(x^2 + y^2).     (5.13)

Then, in order to ensure zero expansion, we must enforce a constraint that, after separating variables, holds for some separation constant C, which can be fixed without loss of generality. Therefore, the zero-expansion flow field can be rewritten in terms of only two free functions, v(t, z) and h(x^2 + y^2). This flow field automatically satisfies axial symmetry, a beam-like profile, and zero expansion. So this is indeed suitable for describing a zero-expansion "beam". The vorticity for this beam is easily evaluated, see (5.21).

Force

Let us now calculate T_zz(t, x, y, z) for this flow field, given in (5.22), and then integrate over the x-y plane in order to obtain the net force. Again using the shorthand u = x^2 + y^2, the individual terms can be explicitly calculated. Now consider the integrals over the x-y plane. Because you want the beam to die off far away from the beam axis, you want h(x^2 + y^2) = h(u) → 0 as x^2 + y^2 = u → ∞. So we can already extract some limited information regarding the integrals. Overall, for the zero-expansion narrow beam, the first term is indefinite (even though the coefficient is positive), the second term is positive semi-definite, and the third term is negative semi-definite. So the zero-expansion narrow beam can be tuned to be either a tractor or a pressor (or even a stressor). One cannot say more about the force F(t, z) without making a specific choice for the profile h(u) and the envelope function v(t, z). If we now consider a wide beam, then we should look on axis and evaluate T_zz(t, 0, 0, z), see (5.35); this yields the wide-beam limit of the zero-expansion beam force.

Energy conditions

For this zero-expansion space-time we have K = tr(K_ij) = 0, and so from equations (3.3) and (3.7) it is immediate that both the WEC and NEC are violated [18]; but for the sake of completeness we perform an explicit calculation.

WEC: For the Eulerian energy density, see (5.38), and again using the shorthand u = x^2 + y^2, we can explicitly calculate the relevant terms. The result is almost a (negative) perfect square; by performing an integration over the x-y plane, it can then be fully written as a sum of negative perfect squares. This is more than sufficient to guarantee WEC violation somewhere on each x-y plane. Now let us also integrate over dz. After an integration by parts, this implies that we have a (negative) perfect square built from (ω · ω), see (5.49), as expected. Again, this is more than sufficient to guarantee WEC violation somewhere on each spatial slice, apart from also verifying internal consistency of the formalism.
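A hedged reconstruction of the separation-of-variables step (our own computation from the ansatz (5.11)-(5.13); the normalization choice is an assumption): writing u = x^2 + y^2, the expansion is

$$\partial_i v^i = k(t,z)\,\big[2h(u) + 2u\,h'(u)\big] + \partial_z v(t,z)\, f(u) = 0,$$

so separating variables gives

$$\frac{2\,\big[h(u) + u\,h'(u)\big]}{f(u)} = -\frac{\partial_z v(t,z)}{k(t,z)} = C .$$

A convenient normalization (C = 2 here, our assumption rather than necessarily the paper's) then expresses everything in terms of the two free functions v and h: f(u) = h(u) + u h'(u) = [u h(u)]' and k(t, z) = −(1/2) ∂_z v(t, z).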
NEC: To prove the violation of the NEC, we must now look at the quantity [ρ + T_zz]. Integrating over the x-y plane, the result is a (negative) sum of squares, thereby guaranteeing violation of the NEC. This is again a useful consistency check on the formalism.

Zero-vorticity beam

Let us now consider a zero-vorticity beam. The stress component T_zz(t, x, y, z) will again be somewhat complicated. However, once one integrates over the x-y plane we shall soon see that the integral vanishes. That is, there is no net force once you integrate over the entire 2-plane. We shall soon see, however, that there are regions of both repulsion and attraction at various points on the 2-plane. This is best interpreted as a stressor beam.

Force

Explicitly calculating the stress component T_zz(t, x, y, z), and again using u = x^2 + y^2 for compactness, explicit computation yields a sum of terms P_i. Noting that each of the P_i is a pure derivative, one finds that the plane integral vanishes. So there is no net force. Note, however, that T_zz(t, x, y, z) itself will in general not equal zero. In this way we see that, while the force integrated over all of the x-y plane sums up to zero, this does not imply an identically zero force. On the contrary, different parts of the target will be pulled, while others are pushed, creating a perfect example of a stressor beam. Furthermore, this means that, ignoring matters of material properties, a wide zero-vorticity beam could potentially still act as a tractor or pressor beam, whereas a narrow zero-vorticity beam would not.

Energy density and null energy condition

Calculating the Eulerian (comoving) energy density, note that ∫_{R^2} R_0 dx dy = 0, whereas after an integration by parts the remaining contribution survives. Now, given the fall-off conditions on Φ(t, z), namely that v(t, z) = ∂_z Φ(t, z) → 0 as z → ±∞, we have the following: if the zero-vorticity stressor beam has positive energy density anywhere, then it must have negative energy density somewhere else. Thence, this zero-vorticity configuration violates the WEC. This is fully in agreement with the general warp-drive analysis presented in [18]. Furthermore, since we have already seen ∫_{R^2} T_zz dx dy = 0, it automatically follows that ∫_{R^3} T_zz dx dy dz = 0, and thence we have ∫_{R^3} (ρ + T_zz) dx dy dz = 0. So, just like before, if the zero-vorticity stressor beam has (ρ + T_zz) positive anywhere, then this quantity must be negative somewhere else. Therefore this zero-vorticity configuration also violates the NEC. Again, this is fully in agreement with the general warp-drive analysis presented in [18], and is a useful consistency check on the fact that zero-vorticity flow fields do indeed violate the NEC.

In lieu of direct knowledge of how one would actually build a tractor beam, one is left with two extremes: general considerations, or modelling of specific possibilities. Our discussion in sections 3 and 4 is based on the generic Natário warp drive, in a sense a compromise between the two. While fixing, for example, a certain (3+1) split and flat spatial slices in this split, it still retains a large amount of freedom. Section 5 then considered more constrained choices found in the literature, while still retaining some freedom to choose certain functions appearing therein. In this section, we shall illustrate the results of the preceding sections for specific profile functions f and h and specific envelope functions k and v.
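One possible reconstruction of the zero-vorticity flow field used above (an assumption on our part, though consistent with the appearance of v(t, z) = ∂_z Φ(t, z) in the fall-off conditions) is the potential flow

$$v_i = \partial_i\big[\Phi(t,z)\, f(x^2+y^2)\big]: \qquad v_x = 2x\,\Phi\, f', \quad v_y = 2y\,\Phi\, f', \quad v_z = (\partial_z \Phi)\, f,$$

which is curl-free by construction and matches the generic beam form with the identifications k = 2Φ and h = f'.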
As a first step, we shall start by imposing a Gaussian profile, fixing the functions f(x^2 + y^2) and h(x^2 + y^2) to be Gaussian functions. In a second step, we will then employ envelope functions that contribute to the stress-energy tensor between the positions of the generator (at z_generator) and the target (at z_target > z_generator), while vanishing exactly outside of some region (−b, b) on the z-axis. More specifically, we will adopt a kind of smooth "bump function". Naturally, these are by far not the only choices, and they contain a certain amount of arbitrariness. Nevertheless, this should give a good idea of what can be done if an arbitrarily advanced civilization could impose stress-energy sources in such a targeted way.

Gaussian beam profiles

As Gaussian beam profiles are very popular toy models in optics and acoustics, they are an obvious starting point for investigating our tractor beams. Let us then provide a few specific examples based on Gaussian beam profiles in the following discussion.

Generic Gaussian beam

Let us consider a generic Gaussian beam, where we set the two profile functions to be identical Gaussians with width parameter a. Then, for the net force exerted by this Gaussian beam (in the narrow-field limit), we find a particularly simple factorized form, equation (6.12). Note that the behaviour switches from pressor to tractor when the beam satisfies two critical conditions on the envelope functions. So, adjusting the two envelope functions is the determining factor in choosing tractor/pressor/stressor behaviour.

Alcubierre-based Gaussian beam

Let us now consider a Gaussian beam based on the modified Alcubierre flow field. Take f(x, y) = exp(−[x^2 + y^2]/a^2); then from (5.6) and (5.7) we ultimately see that this Gaussian profile implies that T_zz(t, x, y, z) is zero on the z-axis, rises to a maximum for (x^2 + y^2) ~ a^2, and then very rapidly decays as you move further off axis. For the total net force on the x-y plane this Gaussian beam gives a closed-form result; putting back all the appropriate dimensions, we obtain it in SI units, where F* is again the Stoney force. Note that, as expected, this is always a tractor beam.

Zero-expansion Gaussian beam

Looking now at a zero-expansion Gaussian beam, we set h(x^2 + y^2) = exp(−(x^2 + y^2)/a^2). Then, using (5.17)-(5.18)-(5.19) and (5.23)-(5.27), the relevant integrals can be evaluated explicitly. So, for the Gaussian zero-expansion beam, we see that the result can be either a pressor or a tractor beam, depending on the choice of the envelope function. Now consider the wide-beam limit. For a Gaussian zero-expansion beam, equation (5.35) for T_zz can be evaluated and expressed in SI units. Again, this can be either a pressor or a tractor beam, depending on the choice of the envelope function.

Zero-vorticity Gaussian beam

If we now take a specific Gaussian profile f(x^2 + y^2) = exp(−[x^2 + y^2]/a^2), then for a zero-vorticity beam the quantities P_i(x, y) can be explicitly checked. The sign of the P_i(x, y), and consequently the sign of T_zz(t, x, y, z), can and will change near x^2 + y^2 ~ a^2, so spatially the target will be alternately pushed and pulled, which is why we classify this case as a stressor beam. The calculation on-axis (x = y = 0) then yields the wide-beam limit. As we can see, this is another "tunable" case, which can behave either as a pressor or a tractor beam, depending on the choice of the envelope function.
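A small calculus identity, certain from elementary integration, makes the earlier "positive dimensionless number" remark concrete: for any axisymmetric integrand the plane integral collapses to a one-dimensional integral in u = x^2 + y^2,

$$\int_{\mathbb{R}^2} F(x^2+y^2)\, dx\, dy = \pi \int_0^\infty F(u)\, du, \qquad \text{e.g.} \quad \int_{\mathbb{R}^2} e^{-2(x^2+y^2)/a^2}\, dx\, dy = \frac{\pi a^2}{2}.$$

All the Gaussian-profile forces quoted in this section reduce to integrals of this type, with the a^2 scaling out.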
Envelope functions

In order to be able to visualize some of the properties of tractor/pressor/stressor beams, we shall now impose two different possibilities for the envelope functions v(t, z) and k(t, z). This will allow us to plot the force field generated by these functions and the energy density distribution necessary to create them. All of the calculations done in the previous sections remain completely valid here.

Illustrating Gaussian beams

In figure 1, used in the Introduction to describe where target and generator are located with respect to the tractor field, we also plotted the energy densities and forces of the nontrivial beam configurations described above. To produce those plots, we imposed a Gaussian envelope together with a Gaussian profile for the defining functions, where, for the plotting, we used A = 0.5, B = C = 1.0, and we evaluated the energy density and forces at t = 1. Note that F(t, z) for both the narrow Alcubierre and the wide zero-vorticity beams is always negative for this specific setup, implying tractor beam behaviour, while the other beam configurations allow for tractor/pressor behaviour depending on the positioning of the target. It is also nice to notice how nontrivial the cancellation of the energy density along the spatial 3-slices is for the zero-vorticity case, given by equation (5.71) and represented in figure 1(b).

Bump functions

Another, much more brutal, way of enforcing that the fall-off conditions be fulfilled is to build the envelopes out of smooth bump functions: one first defines a suitable smooth (but non-analytic) seed function, uses this to define a smooth step, and, in a last step, defines for real numbers a and b a function that vanishes identically outside a finite interval. As we are interested in functions satisfying appropriate fall-off conditions at infinity, this example fulfils them by construction in the most trivial way possible: it vanishes for sufficiently large positive or negative values of x. Furthermore, as we are specifying the metric by hand, the Einstein equation will tell us the required sources, just as in all the calculations of this paper. Neither the Gaussian beams nor beams based on such smooth bump functions differ in this regard from each other, and the general analysis of the previous sections still holds. Nevertheless, using such smooth bump functions for the envelope functions v or k is an intriguing way to model a tractor beam that only contributes to the stress-energy on the z-axis between "generating device" and "target". The algebra becomes arbitrarily involved in this case; for this reason we opt to only show our results and the functions we chose. The bump function used, which, depending on the specific (special) case plotted, was employed for v, k, or Φ, is parametrized by the numbers a, b, and D; a sketch of the construction is given below. The profile functions were again chosen to be the Gaussians described in section 6.1.1, which also allows an easier comparison with the plots shown in figure 1. In figure 3, the parameters are t = −1, a = 2, b = 10, and D = 1. In figure 4, the parameters are t = −1, a = 2, b = 4, and D = 1. Just this minor variation produced noticeable changes in the forces and energy density. The choice of t can also produce significant differences, but this is not shown here, as it adds little to the discussion. Again, note how non-trivial the distribution of the energy density is for the zero-vorticity beams, which sums to zero when integrated over any spatial 3-slice. It is also interesting to notice the different behaviour of the distinct types of beam, varying from constant pull forces (e.g. the Alcubierre case) up to elaborate push/pull behaviours (e.g. the wide zero-expansion case).
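A minimal sketch of the standard smooth-bump construction described above; the exact parametrization, in particular how a, b and the amplitude D enter, is our assumption rather than the paper's stated formula:

```python
import numpy as np

def e1(x):
    """Smooth but non-analytic seed: exp(-1/x) for x > 0, and 0 otherwise."""
    x = np.asarray(x, dtype=float)
    safe = np.where(x > 0, x, 1.0)            # avoid division by zero for x <= 0
    return np.where(x > 0, np.exp(-1.0 / safe), 0.0)

def smooth_step(x):
    """Rises smoothly (C-infinity) from 0 at x <= 0 to 1 at x >= 1."""
    return e1(x) / (e1(x) + e1(1.0 - x))

def bump(x, a, b, D=1.0):
    """Equals D on [-a, a] and vanishes identically outside (-b, b), for 0 < a < b."""
    return D * smooth_step((b - np.abs(x)) / (b - a))
```

Used as an envelope in z, such a function switches the stress-energy source on only in a finite region between generator and target, which is exactly the behaviour the figures illustrate.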
This reveals the great diversity of mechanisms one can create by varying the envelope functions only. Behaviour for different types of profile functions might possibly create yet other interesting scenarios, which we leave for the enthusiastic reader.

Setting aside the issue of the magnitude of the Stoney force (which can be taken care of by an appropriately small pre-factor in our functions), we would in particular like to draw attention to the force of the zero-expansion beam in figure 3a: a target positioned to the right at z ≈ 10 would be accelerated to the left, then travel for a while at near constant velocity, before being decelerated again. Sufficient fine-tuning thus allows for safe docking or boarding.

In this article we have seen how to analyze tractor/pressor/stressor beams within the framework of standard general relativity. The analysis was based on modified warp-drive spacetimes, created by imposing a "beam-like" profile. A general case based on Natário's warp field was analyzed, followed by specific cases and examples. As expected, we have seen that in this case, just like with warp drives and traversable wormholes, the violation of the NEC, and so of all the classical point-wise energy conditions, is unavoidable. A closely related statement remains true even if one moves beyond Einstein gravity. The key point is that it is ultimately the focussing properties of the tractor/pressor/stressor beams, warp fields and traversable wormholes that translate into convergence conditions [99-105], and thence into [effective] energy conditions. Whenever one can rearrange the equations of motion into the form of the Einstein equations with an effective source, the effective energy-momentum tensor [T_effective]_ab will consequently violate the NEC, and so violate all the classical point-wise energy conditions. However, a significant question remains open: are energy conditions truly fundamental physics? Probably not (indeed, almost certainly not). But the energy conditions are certainly good diagnostics for unusual physics, and, as we have seen, the physics of these tractor/pressor/stressor beams is certainly extremely unusual, comparable in weirdness to that of traversable wormholes and warp drives. This is not an absolute prohibition on tractor/pressor/stressor beams, but it is an invitation to think very carefully about the underlying physics.
8,796.2
2021-06-09T00:00:00.000
[ "Physics" ]
Hydrogel films of membrane type for biomedical application on the basis of polyvinylpyrrolidone copolymers

The kinetics of copolymerization of 2-hydroxyethyl methacrylate (HEMA) with polyvinylpyrrolidone (PVP) in aqueous-organic media was investigated. The optimal initiating systems and the temperature regimes of the polymerization for fabrication of hydrogel film membranes based on HEMA/PVP copolymers were developed. The influence of the structure of the hydrogel film membranes synthesized with rarely cross-linked copolymers on their basic operational properties was investigated.

Introduction

Hydrogel materials based on rarely cross-linked copolymers of 2-hydroxyethyl methacrylate with polyvinylpyrrolidone are used effectively for the manufacturing of contact lenses, polymer carriers of drugs, medical bandages, removable dentures in dentistry, as implants in plastic surgery, etc. [1]-[3]. Such materials are usually used in the form of films, which, at the same time, serve membrane functions. The most common method for obtaining film membrane hydrogels is the polymerization of compositions in an aqueous medium or a water-soluble organic solvent, which after synthesis is replaced with water or an aqueous solution [3], [4]. The operational and technological properties of hydrogel copolymers, including permeability and sorption ability, are largely determined by their composition and the structural parameters of the network. Therefore, it is important to search for effective methods of directed formation of the structure of the copolymers, which will allow the operational properties of hydrogels on their basis to be predicted. The aim of the work was to develop the foundations of the technology and the modes of formation of hydrogel membranes based on compositions of HEMA with PVP, with an adjustable structure and composition of the copolymers.

Experimental Set-up and Procedure

The following materials were used: 2-hydroxyethyl methacrylate HEMA (Bisomer® trademark), purified by distillation in vacuum (boiling point 78 °C at a residual pressure of 130 N/m^2), and polyvinylpyrrolidone PVP10 with a molecular weight of 10·10^3 (Sigma-Aldrich® trademark), of pharmaceutical purity. Experimental samples of hydrogel film membranes were obtained by copolymerization of HEMA with PVP in a solvent medium between two glass plates. The distance between the plates determined the required thickness of the film. The kinetics of polymerization was studied by the chemical method, following the decrease in the amount of unreacted monomer. The fraction of PVP which entered into the grafting polymerization reaction was determined by the photocolorimetric method. The sorption properties of hydrogels with respect to water were determined by the weight method, the mechanical properties by the method of film breakage in an aqueous medium, and the permeability of the membranes for water and substances dissolved in it by the method of osmosis.

Results and discussion

Studies of the kinetics of polymerization have found that the addition of PVP to the composition significantly increases the rate of polymerization. The polymerization is affected by the physical interaction between the components of the reaction medium through the so-called "matrix effect", with the formation of a charge-transfer complex. To investigate the initiator's influence on the polymerization of HEMA in the presence of PVP, and to choose the optimal initiator and its amount, the reaction was initiated by benzoyl peroxide (BPO), potassium persulfate (KPS) and azo-bis-isobutyric acid dinitrile (AAD).
The rate of polymerization increases in the series AAD - BPO - KPS. For the peroxide initiators, the rate of the polymerization reaction is greater, which is obviously due to the promotional effect of PVP on the decomposition of peroxides. Investigations of the influence of the nature of the solvent on the polymerization process revealed that the highest polymerization rate of HEMA in the presence of PVP is observed in an aqueous medium. In water, the compositions polymerize at high speed, even with a large dilution with solvent, at low temperatures (55…65 °C). This enables the synthesis of hydrogel membranes on their basis under mild conditions, avoiding unwanted exothermic effects. On the basis of the kinetic studies, a two-stage mode of formation of film hydrogels is substantiated: stage 1 at 55 °C (2.5 hours), stage 2 at 70 °C (3 hours). Hydrogels on the basis of PVP/HEMA copolymers are cross-linked and consist of PVP macromolecules onto which polyHEMA chains are grafted. The course of the graft polymerization is confirmed by IR spectroscopy and by differential-thermal and thermogravimetric analyses. For all compositions with different contents of PVP there is an increase in efficiency and an extreme change in the degree of grafting over time.

The porosity of the film hydrogels, which determines their permeability, can be controlled by polymerization in the presence of different amounts of solvents (Table 1). In this case, the porosity depends on the amount of solvent in the initial composition for a constant monomer:PVP ratio. The grid density, which is determined by the molecular weight of the internodal fragment M_n, is a measure of permeability in the case of a defect-free structure. However, a content of solvent in the initial composition which exceeds its maximum content at the equilibrium absorption of the polymer matrix leads to phase separation, which manifests itself as turbidity of the film. The research established that, by selecting the nature of the solvent (proton-donor or aprotic, or a mixture thereof), it is possible to adjust the density of the mesh of the hydrogel membrane [4]. Changing the density of the net affects the permeability of the membrane for low-molecular-weight substances (Table 1). The tensile strength of the membranes, at the same time, remains practically unchanged. The greatest influence on the structural parameters of the grid and the permeability was observed when small amounts of dimethyl sulfoxide were added to the water. These parameters remained virtually unchanged above 30% by weight of dimethyl sulfoxide in the total solvent. At the same time, hydrogel membranes based on HEMA/PVP copolymers are characterized not only by increased sorption properties (compared with HEMA homopolymers), as estimated by water content, but also by several times higher permeability for water and for an aqueous solution of a model substance (sodium chloride) (Table 1).
1,367.8
2019-01-01T00:00:00.000
[ "Materials Science" ]
The association of HLA-DQB1, -DQA1 and -DPB1 alleles with anti-glomerular basement membrane (GBM) disease in Chinese patients

Background: Human leukocyte antigen (HLA) alleles are associated with many autoimmune diseases, including anti-glomerular basement membrane (GBM) disease. In our previous study, it was demonstrated that HLA-DRB1*1501 was strongly associated with anti-GBM disease in Chinese. However, the association of anti-GBM disease with other HLA class II genes, including the HLA-DQB1, -DQA1 and -DPB1 alleles, has rarely been investigated in Asian, especially Chinese, patients. The present study further analyzed the association between anti-GBM disease and the HLA-DQB1, -DQA1, and -DPB1 genes. Apart from this, we tried to locate the potential risk amino acid residues for anti-GBM disease.

Methods: This study included 44 Chinese patients with anti-GBM disease and 200 healthy controls. The clinical and pathological data of the patients were collected and analyzed. Typing of the HLA-DQB1, -DQA1 and -DPB1 alleles was performed by bi-directional sequencing of exon 2 using the SeCore™ Sequencing Kits.

Results: Compared with normal controls, the prevalence of HLA-DPB1*0401 was significantly lower in patients with anti-GBM disease (3/88 vs. 74/400, p = 4.4 × 10^-4, pc = 0.039). Compared with normal controls, the combination of the presence of DRB1*1501 and the absence of DPB1*0401 was significantly prominent among anti-GBM patients (p = 2.0 × 10^-12, pc = 1.7 × 10^-10).

Conclusions: HLA-DPB1*0401 might be a protective allele for anti-GBM disease in Chinese patients. The combined presence of DRB1*1501 and absence of DPB1*0401 might confer an even higher risk of anti-GBM disease than HLA-DRB1*1501 alone.

Background

Anti-glomerular basement membrane (GBM) disease, defined by the presence of circulating autoantibodies against the α3 chain non-collagen 1 domain of type IV collagen [α3(IV)NC1] [1], is a severe autoimmune disease. It manifests as rapidly progressive glomerulonephritis; when accompanied by alveolar hemorrhage, it is termed Goodpasture's disease. Human leukocyte antigen (HLA) alleles, located on the short arm of chromosome 6, are well known to be associated with most autoimmune diseases [2]. HLA genes encode numerous molecules, including the HLA class I and II molecules, which have immunological functions. It was reported that anti-GBM disease was positively associated with HLA-DRB1*1501 and negatively associated with HLA-DRB1*07 in the Caucasian population [3]. In Asian populations, HLA-DRB1*1501 was also considered a risk allele for Japanese [4] and Chinese patients [5]. These data suggested that HLA-DRB1*1501 is a common risk allele for anti-GBM disease in various populations. However, the association of anti-GBM disease with HLA class II genes, including the HLA-DRB1, -DQB1, -DQA1, and -DPB1 alleles, has rarely been investigated in Asian, especially Chinese, patients [4,5]. Since our previous study had located the risk allele of HLA-DRB1 [5], to better understand the genetic background of this disease and to prepare for study at the peptide level, in the current study we further investigated the distribution and clinical associations of HLA-DQB1, -DQA1 and -DPB1. Moreover, we tried to locate the potential risk amino acid residues for anti-GBM disease.

Patients

Forty-four patients with anti-GBM disease, who were diagnosed at the Renal Division, Peking University First Hospital, from 1996 to 2007, were included in this study.
Anti-GBM disease was defined as glomerulonephritis and/or pulmonary hemorrhage with circulating anti-GBM antibodies in the patient's serum [6]. The onset of the disease was judged by renal or extra-renal signs and symptoms of anti-GBM disease, or by abnormalities related to anti-GBM disease detected on various examinations, including hemoptysis, oliguria or anuria, hematuria, or elevated serum creatinine [6,7]. All 44 patients received renal biopsy. Clinical and pathological data were collected at the time of renal biopsy. Two hundred ethnically matched healthy blood donors were employed as normal controls. The research was in compliance with the Declaration of Helsinki and approved by the ethics committee of the local hospital. Informed consent was obtained from each patient.

Detection of serum anti-GBM autoantibodies

Anti-GBM autoantibodies were measured by ELISA using bovine α(IV)NC1 as the solid-phase antigen, as described previously [8]. The results were expressed as the absorbance value relative to a known positive control serum, and values greater than 13% were regarded as positive.

Samples

Peripheral blood samples (10 ml) from patients with anti-GBM disease and from normal controls were collected in EDTA. Genomic DNA was obtained from peripheral blood leukocytes with a salting-out procedure [9].

Sequence based typing

Typing of the HLA-DQB1, -DQA1 and -DPB1 alleles was performed by bi-directional sequencing of exon 2 using the SeCore™ Sequencing Kits (Invitrogen, Brown Deer, WI, USA).

Statistical analysis

The difference in the frequencies of HLA alleles between disease samples and controls was compared using the Chi-square test or Fisher's exact test, as appropriate. To compare the HLA alleles of subjects stratified by various demographic and clinical parameters, the Chi-square test, Fisher's exact test, or a nonparametric test was used as appropriate. Bonferroni correction was applied to correct the p-value (p corrected, pc). A difference was considered significant if the pc value was less than 0.05. The statistical analysis was performed in the SPSS statistical software package (version 11.0, Chicago, IL, USA). The evaluation at the amino acid level, including the examination of polymorphic amino acid residues, amino acid pockets, zygosity, and tests for association, interaction, and linkage disequilibrium among amino acid epitopes of the same HLA molecule or between HLA isotypes, was conducted with the SKDM software program [10].

Demographic and clinicopathological features

Among the 44 patients with anti-GBM disease, 30 were male and 14 were female. The median age of the 44 patients was 27 (range 13-82) years on diagnosis. Sixteen out of 44 patients had pulmonary hemorrhage. All of the patients had hematuria and proteinuria; 17/44 (38.6%) patients had anuria or oliguria. On diagnosis, the level of serum creatinine was 765.4 ± 388.7 μmol/L. Renal biopsy was performed in all 44 patients. 41/44 (93.8%) patients had crescent formation in more than 50% of the glomeruli, and 30 (68.2%) had crescent formation in more than 85% of the glomeruli in the renal specimen. Direct immunofluorescence examination was performed in 35 cases. All of them showed linear or fine granular IgG and/or C3 deposition along the glomerular capillary wall. Outcome data were available for 40 of the 44 patients. At the end of one year after diagnosis, only 7/40 (17.5%) patients were dialysis-independent, and 33/40 (82.5%) patients were dialysis-dependent or had died.
HLA-DQB1, -DQA1 and -DPB1 alleles and their association with anti-GBM disease

The frequencies of each HLA-DQB1, -DQA1 and -DPB1 allele for the 44 patients with anti-GBM disease and the 200 ethnically matched healthy controls were determined by sequence based typing. A total of 9 HLA-DQB1, 9 HLA-DQA1 and 34 HLA-DPB1 alleles were typed in our study. Compared with normal controls, the prevalence of HLA-DPB1*0401 was significantly lower in patients with anti-GBM disease (3/88 vs. 74/400, p = 4.4 × 10^-4, pc = 0.039) (Table 1). However, the age (transformed by log10) of HLA-DPB1*0401-positive patients was significantly younger than that of DPB1*0401-negative patients (p = 0.011). Besides, the proportion of patients having hemoptysis was significantly higher in patients with DPB1*0401 than in patients without DPB1*0401 (p = 0.042) (Table 2). There was no significant difference between the patients and normal controls for the other HLA alleles. No significant difference in gender, age, level of anti-GBM autoantibodies, serum creatinine, or other clinical and pathological parameters was found between anti-GBM patients with and without HLA-DPB1*0401 (Table 2).

The combined analysis of HLA-DRB1 and -DPB1 alleles

When we analyzed our HLA-DPB1 typing data together with the HLA-DRB1 typing data from our previous study [5], we found that 2 patients and 19 controls had both HLA-DRB1*1501 and DPB1*0401 present. For those characterized by the combined presence of DRB1*1501 and absence of DPB1*0401, 32 patients and 39 controls were identified. Compared with controls, the prevalence of this combination was extremely prominent among anti-GBM patients (p = 2.0 × 10^-12, pc = 1.7 × 10^-10, OR = 11.01, 95% CI 5.2-23.31).

The evaluation at the amino acid level

No significant residue was found for HLA-DRB1, nor in HLA-DRB1 pockets. For HLA-DPB1, its products phenylalanine at position 35 (DPB1_F-35) and lysine at position 69 (DPB1_K-69) were observed at decreased frequencies in anti-GBM disease, but the difference was not significant after correction (Table 3). Besides, the evaluations of pocket residues, zygosity analysis and interaction analysis for DPB1_F-35 and K-69 found no significant difference between patients and controls.

Discussion

The current study analyzed the distribution of HLA class II alleles in patients with anti-GBM disease and their potential significance. Among the HLA class II loci, HLA-DRB1 and -DPB1 encode the relatively more variable gene products of the HLA-DR and -DP molecules respectively, while both HLA-DQB1 and -DQA1 are variable in the human population. Besides, previous studies have located some HLA-DRB1, -DQB1 and -DPB1 alleles associated with anti-GBM disease in Caucasian as well as Asian populations [5,11,12]. But in Chinese patients, few studies have addressed this topic [5]. Therefore, we chose to type the HLA-DQB1, -DQA1 and -DPB1 loci in this study, on the basis of our previous study on HLA-DRB1 [5]. Our typing results indicated that HLA-DPB1*0401 might be non-predisposing for anti-GBM disease. We stratified the patients with anti-GBM disease by the presence of DPB1*0401 and tried to investigate how this allele exerts its protective influence on the clinical and pathological characteristics of patients. However, we found that patients with HLA-DPB1*0401 were younger and were more likely to have hemoptysis. Since there were only three patients positive for HLA-DPB1*0401, a larger sample size is needed to investigate the association between this allele and the disease.
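As an illustration of the 2×2 contingency testing used above (not the authors' code; the choice of Fisher's exact test and the correction count are assumptions here), the reported DPB1*0401 comparison can be checked with scipy:

```python
from scipy.stats import fisher_exact

# HLA-DPB1*0401: 3 of 88 patient alleles vs. 74 of 400 control alleles
table = [[3, 88 - 3],
         [74, 400 - 74]]
odds_ratio, p = fisher_exact(table)  # two-sided by default

# Bonferroni correction: multiply by the number of comparisons performed.
# The exact number behind pc = 0.039 is not stated in this passage,
# so n_tests below is a placeholder.
n_tests = 34
p_corrected = min(1.0, p * n_tests)
print(odds_ratio, p, p_corrected)
```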
Our previous study [5] located HLA-DRB1*1501 as a risk allele for anti-GBM disease in the same population. When we analyzed the combined presence of DRB1*1501 and absence of DPB1*0401, we found that this combination conferred an even higher risk of anti-GBM disease (p = 2.0 × 10^-12) than HLA-DRB1*1501 alone (p = 1.6 × 10^-7). To further investigate how these alleles exert their influence on the disease, we used the SKDM software to evaluate their products at the amino acid level. However, no amino acid with a significant difference was found by this evaluation. Since little is known about the relation between HLA-DP and HLA-DR, it is difficult to know how these two alleles interact with each other at the molecular level in the pathogenesis of anti-GBM disease. Previous studies have focused on the association of the HLA-DR and -DQ genes and their haplotypes with anti-GBM disease [3,11,12]. The single allele DQB1*0302 and the haplotypes DQB1*0602-DRB1*1501 and DQB1*0201-DRB1*0301 were identified as risk alleles [3,11,12], while HLA-DQB1*0501 was considered a protective allele [3,12]. However, these potential associations were not observed in our study. Actually, no HLA-DQ allele was found to be significantly associated with anti-GBM disease in our study. HLA class II alleles have been shown to be connected with many autoimmune diseases [13][14][15]. Nevertheless, the underlying mechanism is still unknown. The HLA association in anti-GBM disease is believed to reflect the ability of certain class II molecules to bind and present peptides derived from the autoantigen to T helper cells [12]. Although our study showed no significance at the amino acid level, theories from other studies may offer us some clues. As far as we know, the strong positive association with DRB1*1501 [4,5,11,12] as well as negative associations with DRB1*01 and DRB1*07 [3] have been found in many studies [12,16,17]. According to the above findings, Phelps et al. [17] suggested that DR1/7 (encoded by DRB1*01/07) could protect by capturing α3(IV)NC1 peptides and preventing their display bound to DR15. Judging from this, we speculate that a similar protective mechanism might apply to HLA-DPB1*0401. We suppose that the beta chain of HLA-DP produced by DPB1*0401 prevents peptides such as α3(IV)NC1 from binding DR15, which otherwise leads to the disease. Nonetheless, the exact mechanism requires further research to be confirmed.

Conclusions

In conclusion, HLA-DPB1*0401 might be a protective allele for anti-GBM disease in Chinese patients. The combined presence of DRB1*1501 and absence of DPB1*0401 might confer an even higher risk of anti-GBM disease than HLA-DRB1*1501 alone.
2,773.4
2011-05-13T00:00:00.000
[ "Biology", "Medicine" ]
Defining a procedure for predicting the duration of the approximately isothermal segments within the proposed drying regime as a function of the drying air parameters

One of the main disadvantages of the recently reported method for setting up the drying regime based on the theory of moisture migration during drying lies in the fact that it is based on a large number of isothermal experiments. In addition, each isothermal experiment requires the use of different drying air parameters. The main goal of this paper was to find a way to reduce the number of isothermal experiments without affecting the quality of the previously proposed calculation method. The first task was to define the lower and upper inputs as well as the output of the "black box" which will be used in the Box-Wilkinson orthogonal multi-factorial experimental design. Three inputs (drying air temperature, humidity and velocity) were used within the experimental design. The output parameter of the model represents the time interval between any two chosen characteristic points presented on the Deff-t curve. The second task was to calculate the output parameter for each planned experiment. The final output of the model is an equation which can predict the time interval between any two chosen characteristic points as a function of the drying air parameters. This equation is valid for any value of the drying air parameters within the defined area designated by the lower and upper limiting values.

Introduction

Even though the drying of porous materials has been investigated for decades, it is still a current topic for researchers in many scientific areas, e.g. chemical engineering, civil engineering and soil science. Physics and engineering have provided the basic principles with which this aspect of science can be additionally examined and discussed. A comprehensive understanding of the way in which water is transported from within the porous medium up to its surface during drying can lead to many technical innovations and energy savings. In order to properly solve heat and mass transfer problems, both the transport in air and that in the porous material have to be modeled. This can be achieved at different complexity levels in both media. Calculation techniques (models) which are commonly used can be classified into four major groups: diffusion [1-3], receding front [4,5], macroscopic continuum models for coupled multiphase heat and mass transport in porous materials [6,7], and pore network models [8]. The conjugate modeling degree of each drying model is determined by the way in which the heat and mass transport in air are accounted for in the calculation procedure. The procedure for setting up a non-isothermal drying regime consistent with the theory of moisture migration during drying was recently reported [9]. In order to properly apply this procedure it is necessary to first determine the change of effective moisture diffusivity with moisture content or drying time (the Deff-MR or Deff-t curve) for each isothermal experiment, since these plots represent a good indicator for the evaluation and presentation of the overall mass transport properties of moisture during isothermal drying. In other words, all possible mechanisms of moisture transport, and their transitions from one to another during isothermal drying within a clay roofing tile, are visible on the previously mentioned plots.
Detailed information regarding the procedure for identification and quantification of the moisture transport mechanisms and their transitions during drying can be found in reference [10]. The optimal drying regime consists of five isothermal segments. The durations of the previously mentioned drying segments were detected from the relevant Deff-MR curves. One of the main disadvantages of the reported method for setting up the drying regime based on the theory of moisture migration during drying lies in the fact that it is based on a large number of isothermal experiments. In addition, each isothermal experiment requires the use of different drying air parameters. The main objective of this study was to find a way to reduce the number of isothermal experiments without affecting the quality of the previously proposed calculation method. In order to complete this task and to find a mathematical equation which can predict the time interval between any two chosen characteristic points (the duration of each characteristic drying segment) as a function of the drying air parameters, within the defined area designated by the lower and upper limiting values of the input parameters, the Box-Wilkinson orthogonal multi-factorial experimental design was used.

Materials and Methods

The raw material used in this study was obtained from the roofing tile producer "Potisije Kanjiža". Its detailed characterization was reported in the study [11]. The raw material was first dried at 60 °C and then crushed in a laboratory perforated-roll mill. After that it was simultaneously moistened and milled in a laboratory differential mill, first with a gap of 3 mm and then of 1 mm. Laboratory roofing tile samples of 120 × 50 × 14 mm were formed from the previously prepared clay in a laboratory extruder "Hendle" type 4, under a vacuum of 0.8 bar. Drying experiments were performed on the previously formed roofing tile samples in a laboratory recirculation dryer. The mass of the samples and their linear shrinkage were continually monitored and recorded during drying. The accuracies of these measurements were 0.01 g and 0.2 mm. The drying air parameters were regulated inside the dryer with accuracies of ±0.2 °C, ±0.2% and ±0.1% for temperature, humidity and velocity, respectively.

The response function in the Box-Wilkinson orthogonal multi-factorial experimental design is presented in the form of equation (1). This equation corresponds to a surface in a multidimensional space, called the response surface. The space in which the response surface exists is called the factorial space. In the general case, when k factors are covered, equation (1) describes the response surface in a (k+1)-dimensional space. This function is usually defined as a polynomial expression (2). The methodology valid for isothermal experiments, presented in [10], was used to calculate the functional dependence of the effective diffusivity on moisture content (Deff-MR), to divide the obtained curves into segments, and to identify all possible mechanisms of moisture transport. The obtained data were analyzed and used to predict the response y (the duration of the proposed non-isothermal drying segments). The parameters x1, x2, ..., xk represent the independent variables or factors (drying air velocity, temperature and humidity). The parameters b0, bi, bij represent the regression coefficients. When the regression coefficients are determined and the dependence defined by equation (2) is established, the resulting equation is called a mathematical model.
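Equations (1) and (2) are not reproduced in the text; a common form of such a response surface is y = b0 + Σ bi·xi + Σ bij·xi·xj (whether pure quadratic terms bii·xi^2 are also included is not recoverable here, so this sketch assumes main effects and two-factor interactions only). A minimal least-squares sketch of estimating the regression coefficients, with hypothetical coded factor levels:

```python
import numpy as np

def design_matrix(X):
    """Model matrix for y = b0 + sum(bi*xi) + sum(bij*xi*xj)."""
    n, k = X.shape
    cols = [np.ones(n)]                         # intercept b0
    cols += [X[:, i] for i in range(k)]         # linear terms bi*xi
    cols += [X[:, i] * X[:, j]                  # two-factor interactions bij*xi*xj
             for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    """Least-squares estimate of the regression coefficients b0, bi, bij."""
    A = design_matrix(X)
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

# Hypothetical example: coded levels (-1, 0, +1) for velocity, temperature,
# humidity, and measured segment durations y (minutes) for the planned runs.
X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1], [0, 0, 0]], float)
y = np.array([120, 95, 90, 70, 130, 100, 95, 75, 98], float)
print(fit_response_surface(X, y))
```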
The adequacy of the experiment reproducibility is checked using the Cochran (Kohren) criterion [12], while the adequacy of the model is checked using the Fisher criterion. The experimental conditions presented in Table 1 were used in the present study. Each experiment was repeated 2 times. The drying air parameters which were maintained in each of the proposed non-isothermal drying regimes are presented in Table 2.

Results and discussion

The drying segments, along with the mechanisms that can take place in them according to reference [10], are summarized in Table 3. All possible mechanisms of moisture transport, and their transitions from one to another during the constant and the falling drying period up to the "lower critical" point F, were identified for the isothermal experiments on the corresponding Deff-MR curves and are summarized in Table 4. The procedure for setting up the non-isothermal drying regime consistent with the theory of moisture migration during drying (see Table 2) was based on the principle of controlling the mass transport during the drying process, and required dividing the drying process into 5 segments. In each of these segments, approximately isothermal drying conditions were maintained. The main functions of the first drying segment were to restrain the moisture transport (evaporation) through the boundary layer between the material surface and the bulk air, and to heat the ceramic body to the temperature of the drying air. During the second drying segment, the external transport (surface evaporation) and the internal transport (of liquid water from the ceramic body up to the surface) should be increased and simultaneously harmonized in such a way that the drying surface remains fully covered by a water film. The main function of the third segment is to provide the conditions under which the partially wet surface sustains a constant rate of drying. Within the fourth drying segment, the liquid transport originating from the pores which are near or just below the "dry" patches on the surface, and are still in the funicular state, has to be simultaneously harmonized with the liquid flow originating from the surface "wet" patches. The duration of the first drying segment was equal to the time interval detected in the corresponding isothermal experiment from the beginning up to the characteristic point C. The duration of the second drying segment was equal to the time interval detected in the corresponding isothermal experiment between the characteristic points C and D. The duration of the third drying segment was equal to the time interval detected in the corresponding isothermal experiment between the characteristic points D and E. The duration of the fourth drying segment was equal to the time interval detected in the corresponding isothermal experiment between the characteristic points E and F. The duration of the fifth segment was limited to 90 minutes.

Table 3. Possible drying mechanisms according to reference [10].

The resulting model equation (3) is presented as an example. The data necessary for model evaluation are presented in Table 6. It can be seen that the evaluated model coefficients are very accurate and precise.
2,203.6
2017-08-01T00:00:00.000
[ "Physics" ]
Design and Implementation of a University Competition Integrated Management System based on EXT.NET

Mainly used to create front-end user interfaces, EXT.NET is fundamentally a front-end AJAX framework technology and is independent of the back end. Whether in terms of the beauty of the interface or the power of the functions, EXT form controls are at the top. The system is based on EXT.NET technology and makes use of the form development components provided by Visual Studio, which contributes to completing the University Competition Integrated Management System. The design pattern of the system and the application of the key technologies are expounded in this paper.

Introduction

Science and technology competitions are among the main practical activities for university students. The registration, arrangement and organization of a competition is a large and tedious task. The development of computer technology and the Internet has made people's life and work more convenient. By making full use of advanced network technologies and resources, we can informationalize students' competition management, freeing it from heavy, simple, repetitive work and moving it in a networked, user-friendly and intelligent direction. The University Students Competition Management System based on EXT.NET not only makes competition work networked and automatic, reducing the workload of managers and the errors in the work and improving efficiency and effectiveness, but also handles most of the participants' registration, queries and other matters.

The Analysis of Demand and the Design Framework of the System

The Students Competition Management System of NEDU (Northeast Dianli University), through investigation and analysis of the academic competition process and the detailed rules for its implementation, determined that the competition system mainly includes the following five links: a) race organizers issue a competition notice, race schedule and other information; b) participants sign up; c) official staff perform the qualification examination; d) participants enter the competition; e) official staff review and announce the competition results. Combined with the competition process and links, the Competition Management System based on EXT.NET consists of two main modules: one is the competing platform, including race information; the other is the integrated management platform, including the Dean and College ends. The main functions are shown in Figure 1.

Figure 1 The System Diagram

Corresponding to the system diagram, the main function of each block is as follows:

Participating User Platform Modules. These modules provide a platform for users to view race information and entries; students are free to register an account. Only a registered account can view race information and enter the contest. Besides, a race team can edit its own team information and track its performance, which not only facilitates the enrolment of students, but also facilitates the work of the organizers.

Integrated Management Platform Modules. The integrated management modules are divided into the Dean and College ends. The Dean end has the highest administrative authority and is responsible for website maintenance, system security management, participating department management, registration management, schedule management, results management, and so on. The College end has general authority, with which the college can manage its participating students, as well as view, manage and print information.
Client-side: Chrome, Internet Explorer and other Web browsers.

Technical Solution

According to the features and functionality analysis of the competition management system, the system development model is based on the Web B/S (Browser/Server) model. Compared with the C/S model, the B/S structure has the following advantages: a) unified interface, easy to use; b) easy to maintain; c) good extensibility, effective protection of investment; d) high degree of information sharing; e) good support and high security for WAN. The system is based on EXT.NET, which is an open-source ASP.NET (WebForm + MVC) component library that perfectly integrates the cross-browser JavaScript library Ext JS. A system based on EXT.NET is not only more powerful in function, but also more beautiful in appearance. The project needs to add references to the EXT.NET.dll and EXT.NET.Utilities.dll assemblies. The relevant parameters are configured in Web.config, and the main parameter configuration is shown in Table 1.

The data layer encapsulates the data access functions of the system, implements the various operations on the database tables, and provides services for the business logic layer. This layer communicates directly with the database and is the basis of the entire project.

The advantages of using a three-layer or multi-layer structure are obvious, mainly in several aspects: the number of database connections is significantly reduced through the middle tier, maintenance is improved, and the process is more flexible. But there are also disadvantages: data transfer passes through each layer, which lowers the data transmission efficiency. For a system whose real-time demands are not high, like the competition system, we pay more attention to convenient maintenance and extension.

The login and registration

The login and registration module is simple, but it plays a very important role. It is an important guarantee of data security and system management requirements and, more importantly, it manages the users. The login module's primary function is to validate the relevant user information. After verification, the system analyzes the user's permissions, records the user information, generates a URL path, and jumps to the page corresponding to those permissions. The registration module is targeted at entrant users, who learn about the competition through the published registration information and sign up for the race. The register and login modules are shown in Figure 2.

Figure 2 The module of login and registration

The login module ensures the safety of the system by storing the user's password as an MD5 hash, and this module builds the data exchange between the user and the system; a brief sketch is given below. The interface is shown in Figure 3.

Participating users' management platform

The participating users' management platform is the competition management module for the contestants, whose main function is to connect the players and the platform. The platform style is similar to a Windows desktop and includes open contests, basic account information, password changes, team lists, awards lists and other functions. Registration for an event is included in the list available to eligible students, where participants can add a group entry. The interface information is comprehensive and user actions are convenient, which greatly facilitates the registration process.
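As a minimal illustration of the password handling described above (a sketch only; the actual system is ASP.NET/C#, and a plain MD5 digest is no longer considered adequate for password storage):

```python
import hashlib

def md5_digest(password: str) -> str:
    # Hex digest stored in the users table instead of the plaintext password;
    # login recomputes the digest of the submitted password and compares.
    return hashlib.md5(password.encode("utf-8")).hexdigest()

stored = md5_digest("s3cret")
assert md5_digest("s3cret") == stored   # successful login check
```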
On the registration page, participants can see the available competitions, enter the corresponding competition and add their own information; they can even edit their own team information after registration is finished, instead of requesting changes directly from the appropriate department or the Dean's office. This not only facilitates the participants, but also facilitates the corresponding competition department. After the contest is completed and the contest results are announced, participants can query their awards online.

Management Platform

The integrated management platform is divided into the Dean end and the College end. The Dean end has the highest authority; its main tasks are race management, team management, category management, report creation, account management and higher-authority information management. The College end focuses on team management and award management, as well as exporting reports, and has general authority. The workflow diagram of the integrated platform is shown in Figure 4.

Figure 4 The work flow diagram of the integrated platform

A screenshot of the integrated management platform system is shown in Figure 5:

Figure 5 The integrated management platform

Strategy of the management system security

The science and technology competition management system of NEDU is a network-based management information system, related to the normal development of group competition management and the vital interests of the participating departments, so the security problem is particularly important. In order to ensure the security of the system, several measures have been taken. According to the needs of the actual situation, the users of the system are divided into three types: the teaching affairs department, the college and the participants, where each type plays a role and each role is given the appropriate authority. Access to or maintenance of a module can be performed only with that module's access or operating authority. The user's name and authority can be modified and maintained, which not only improves the security of the system, but also avoids over-privileging individual system users, improving the efficiency and flexibility of the system.

Conclusion

The science and technology competition management system of NEDU has practical significance for strengthening and implementing the networked management of group competitions in our school, and for improving the scientific level of the school's competition management and its work efficiency. With the rapid development of the Internet and information technology, and the need to build a conservation-minded society, the Competition Management System will be widely used in various competitions. Meanwhile, there will be some problems in the process of use and some aspects that are not mature enough. We will adapt to the development of advanced science and technology and improve the Competition Management System.

Figure 3 Registration

Table 1: Main parameter configuration of Web.config
1,953
2018-10-01T00:00:00.000
[ "Computer Science" ]
Purinergic control of inflammation and thrombosis: Role of P2X1 receptors

Inflammation shifts the hemostatic mechanisms in favor of thrombosis. Upon tissue damage or infection, a sudden increase of extracellular ATP occurs that might contribute to the crosstalk between inflammation and thrombosis. On platelets, P2X1 receptors act to amplify platelet activation and aggregation induced by other platelet agonists. These receptors critically contribute to thrombus stability in small arteries. Besides platelets, studies by our group indicate that these receptors are expressed by neutrophils. They promote neutrophil chemotaxis, both in vitro and in vivo. In a laser-induced injury mouse model of thrombosis, it appears that neutrophils are required to initiate thrombus formation and coagulation activation on inflamed arteriolar endothelia. In this model, by using P2X1−/− mice, we recently showed that P2X1 receptors, expressed on platelets and neutrophils, play a key role in thrombus growth and fibrin generation. Intriguingly, in a model of endotoxemia, P2X1−/− mice exhibited aggravated oxidative tissue damage, along with exacerbated thrombocytopenia and increased activation of coagulation, which translated into higher susceptibility to septic shock. Thus, besides its ability to recruit neutrophils and platelets on inflamed endothelia, the P2X1 receptor also contributes to limiting the activation of circulating neutrophils under systemic inflammatory conditions. Taken together, these data suggest that P2X1 receptors are involved in the interplay between platelets, neutrophils and thrombosis. We propose that activation of these receptors by ATP on neutrophils and platelets represents a new mechanism that regulates thrombo-inflammation.

Platelets form a major component of the developing thrombus; blood coagulation is initiated by endothelium-expressed tissue factor and leads to the generation of thrombin and fibrin. Under normal conditions, regulatory mechanisms restrain thrombus formation both temporally and spatially [1]. When pathologic processes overwhelm the regulatory mechanisms of hemostasis, excessive quantities of thrombin form, initiating thrombosis. Thrombosis is a critical event in arterial disease progression and is associated with myocardial infarction and stroke, accounting for considerable morbidity and mortality [2].

Platelet P2 receptors

Adenosine diphosphate (ADP) plays crucial roles in the physiological process of hemostasis and in the development and extension of arterial thrombosis. By itself, ADP is a weak agonist of platelet aggregation, inducing only reversible responses as compared to strong agonists such as thrombin or collagen. However, due to its presence in large amounts in the platelet dense granules and its release upon activation at sites of vascular injury, ADP is an important so-called secondary agonist, amplifying most of the platelet responses, which contributes to the stabilization of the thrombus [3][4][5]. More recent studies indicate that ATP, co-released with ADP, should be considered alongside ADP and thromboxane A2 as a significant secondary platelet agonist [6,7]. The receptors for extracellular nucleotides belong to the P2 family, which consists of two classes of membrane receptors: P2X ligand-gated cation channels (P2X1-7) and G protein-coupled P2Y receptors (P2Y1, 2, 4, 6, 11, 12, 13, 14) [8].
Starting from the concept of a unique P2T receptor (T for thrombocyte), originally postulated on the basis of pharmacological data, a model of three platelet P2 receptors progressively emerged. These are the P2X1 cation channel, activated by ATP, and two G protein-coupled receptors, P2Y1 and P2Y12, both activated by ADP [4,9]. Each of these receptors has a specific function during platelet activation and aggregation, which logically has implications for their involvement in thrombosis. Large-scale clinical trials have demonstrated the beneficial effects of thienopyridines, targeting P2Y12 receptors, in the prevention of major cardiac events after coronary artery stenting and in the secondary prevention of major vascular events in patients with a history of cerebrovascular, coronary or peripheral artery disease. More recently, new classes of P2Y12 inhibitors have been developed in order to circumvent the limitations of clopidogrel (i.e. variability of its platelet inhibitory effect) in the management of ischemic coronary syndromes [10][11][12]. Platelet P2X1 receptors The P2X1 receptor belongs to a family of ATP-gated ion channels comprising seven mammalian receptor subunits (P2X1-7) that assemble to form a variety of homotrimeric and heterotrimeric receptors widely expressed in the body. Each P2X subunit contains two transmembrane domains, intracellular amino and carboxy termini and a large extracellular ligand-binding loop. P2X receptors vary in their kinetics of desensitization and pharmacology, although all are activated by the physiological ligand ATP [13]. The function of P2X1 receptors in neurogenic smooth muscle contraction and in thrombosis has been well documented [14][15][16][17]. Mutagenesis studies identified residues important in agonist action, the inter-subunit nature of the binding site, the location of the channel gate, and interactions between the transmembrane regions [18][19][20][21]. The crystallization of a zebrafish P2X4 receptor in both resting and ATP-bound open states [22,23] demonstrated extensive conformational changes in the receptor associated with agonist binding and channel gating. Individual P2X receptor subunits have been described by analogy to a dolphin, with the ATP binding site formed predominantly from residues in the upper and lower body regions of adjacent subunits. Agonist binding induces movement of the dorsal fin, left flipper, and the cysteine-rich head regions, closing the ATP binding pocket. This movement is translated through the body region to the transmembrane regions and results in opening of the channel gate. The P2X1 receptor plays an important role in thrombus formation, especially under high-shear conditions. P2X1-deficient mice have no prolongation of bleeding time as compared to wild-type mice, indicating that they retain normal hemostasis [24]. In contrast, they display resistance to the systemic thromboembolism induced by the injection of a mixture of collagen and adrenaline and to localized laser-induced injury of the vessel wall of mesenteric arteries. Conversely, increased arterial thrombosis has been reported in the microcirculation of mice overexpressing the human P2X1 receptor [25]. The P2X1 antagonist NF449 [4,4′,4″,4‴-(carbonylbis(imino-5,1,3-benzenetriylbis-(carbonylimino)))tetrakis-benzene-1,3-disulfonic acid octasodium salt] has an inhibitory effect on platelet activation ex vivo and on thrombosis in vivo [26,27].
Platelet P2X1 receptor function can also be inhibited by using heat shock protein 90 inhibitors, which may be as effective as selective antagonists in regulating thrombosis [28]. About 10% of the current flow through the P2X1 receptor is mediated by Ca2+ [29]. These ion channels can therefore provide a significant source of direct Ca2+ influx into the cell following activation, as well as causing membrane depolarization. The time course of ATP-evoked P2X1 receptor-mediated currents is concentration-dependent, with low concentrations taking several seconds to reach a peak response, which can be sustained for more than 30 s. In contrast, at maximal agonist concentrations, P2X1 receptor currents peak within tens of milliseconds and desensitize completely within seconds [30]. In platelets, the P2X1-mediated increase in intracellular Ca2+ leads to the activation of ERK1/2 MAPK and MLCK, which phosphorylates myosin light chain (MLC), a process accompanying platelet shape change and degranulation [31]. P2X1 receptor signaling represents a significant pathway for early Ca2+ mobilization following activation of a variety of major platelet receptors through both G-proteins and tyrosine kinases [6,32]. Furthermore, P2X1 receptors seem to play a pivotal role in the activation of aspirin-treated platelets by thrombin and epinephrine [33]. Since aspirin is used extensively to manage cardiovascular diseases and since, in clinical research, much attention has been focused on "aspirin resistance" (meaning treatment failure), the finding that P2X1 receptors can circumvent the action of aspirin on platelet stimulation by thrombin is of major importance. P2X1-mediated Ca2+ mobilization has been involved in platelet responses to microbial pathogen-associated molecular patterns acting through Toll-like receptor 2 [34], suggesting a role for P2X1 in platelet-dependent sensing of bacterial components. Moreover, such P2X1 signals would be resistant to endogenous platelet-inhibiting agents, such as prostacyclin, which may be particularly important during early thrombotic or immune-dependent platelet activation [35]. These results clearly indicate that the P2X1 receptor might be considered as a potential target for antiplatelet strategies, with the interesting feature that P2X1 antagonists should be effective only at sites of severe stenosis where shear forces are very high, without having a deleterious effect on normal hemostasis. Neutrophil P2X1 receptors We recently showed that P2X1 receptors are also expressed on neutrophils [36]. P2X1 activation causes ROCK-dependent MLC phosphorylation, promoting cytoskeletal reorganization and neutrophil deformation during chemotaxis. Intriguingly, we found that P2X1 deficiency increases neutrophil NADPH oxidase activity [37]. Indeed, ex vivo stimulation of P2X1−/− neutrophils with various stimuli, including bacterial formylated peptides, phorbol esters, and opsonized zymosan particles, resulted in increased production of reactive oxygen species as compared to neutrophils isolated from wild-type mice. These results indicated that P2X1 would act to limit systemic neutrophil activation through a negative feedback loop, allowing them to migrate to the site of inflammation. In agreement with this proposition, intraperitoneal injection of a sub-lethal dose of lipopolysaccharide (LPS) in P2X1−/− mice led to an increased plasma myeloperoxidase (MPO) concentration, an indicator of systemic neutrophil activation, as compared to wild-type mice.
In addition, peripheral P2X1−/− neutrophils expressed higher levels of CD11b in response to LPS injection, reflecting their higher activation state. Concomitantly, we observed that the LPS-induced drops in platelet and lymphocyte counts were both worsened in the P2X1−/− mice as compared to their wild-type littermates. Immunohistochemistry and an MPO activity assay revealed exaggerated neutrophil relocalization into the lungs of P2X1−/− mice, where these cells formed large aggregates in the capillary lumen. Finally, upon intraperitoneal injection of a lethal dose of LPS, the P2X1−/− mice exhibited a shorter survival time than wild-type mice, most likely as a consequence of enhanced neutrophil-dependent ischemic events and subsequent multiple organ failure. Notably, this phenotype was not associated with altered plasma levels of the main LPS-induced cytokines, TNF-α, IL-6, IL-1β, and IFN-γ. Taken together, these findings support an important role for P2X1 receptors in the homeostatic regulation of circulating neutrophils and in their recruitment at sites of inflammation/infection. Platelet and neutrophil P2X1 receptors in thrombosis Several studies indicate that, besides their ability to kill pathogens, activated neutrophils promote coagulation in the microcirculation, trapping invading pathogens in a fibrin mesh and thereby restricting microbial dissemination [38]. Furthermore, in the absence of any bacterial challenge, the neutrophil serine proteases elastase and cathepsin G, together with externalized nucleosomes, contribute to large vessel thrombosis. Nucleosomes form a platform on which neutrophil serine proteases coassemble with the anticoagulant tissue factor pathway inhibitor (TFPI), supporting TFPI degradation and relieving the suppression of factor Xa, thereby fostering fibrin generation. In line with a contribution of activated neutrophils to coagulation, we observed increased thrombin generation and a shortened coagulation time in the plasma of LPS-treated P2X1−/− mice as compared to wild-type littermates. In a model of laser-induced injury of cremaster muscle arterioles, Darbousset et al. recently showed that neutrophils accumulate at the site of injury before platelets, contributing to the initiation of thrombosis. Neutrophils recruited to the injured vessel wall express tissue factor (TF), thereby promoting coagulation and thrombus growth. In collaboration with Dubois' team, we recently found that P2X1 deficiency or antagonism impairs neutrophil recruitment and activation on inflamed arteriolar endothelia, platelet accumulation and fibrin generation [39]. Infusion of wild-type neutrophils in P2X1−/− mice was sufficient to fully restore fibrin generation, whereas infusion of both wild-type platelets and neutrophils was required to allow normal thrombus growth. Thus, P2X1 expressed on neutrophils and platelets is required for thrombosis. The data reported so far assumed that the effects of platelet and neutrophil P2X1 receptors are mediated by homotrimeric P2X1 receptors. It should be noted that P2X1 can also interact with other P2X subunits, e.g. P2X5, to form heteromeric ion channels with distinct properties [40]. Though several studies indicate that only homomeric P2X1 receptors form ATP-gated ion channels in platelets [41][42][43], this may not be the case for neutrophils. Indeed, neutrophils express other P2X subtype mRNAs: P2X4, P2X5 and P2X7 [44][45][46][47].
However, the expression of functional P2X4 or P2X5 subunit-containing receptors has never been confirmed, and it appears that human neutrophils do not express functional P2X7 receptors. Determining whether the effects reported in P2X1-deficient neutrophils could be due to changes in the stoichiometry of putative heterotrimeric P2X receptors requires further investigation. Summary and outlook: P2X1 receptors in thrombo-inflammatory disorders In summary, our latest findings indicate that P2X1 receptors contribute to ATP-dependent thrombosis in the mouse microcirculation by promoting early neutrophil and platelet recruitment and subsequent fibrin generation, locally, at sites of endothelial injury (Fig. 1). Upon systemic inflammatory challenge, P2X1 receptors would act to dampen the activation of circulating neutrophils, thereby limiting oxidative tissue damage and disseminated intravascular coagulation. Targeting P2X1 receptors will not only inhibit platelets but also alter neutrophil function, and may therefore represent an innovative therapeutic strategy to prevent local thrombo-inflammation, provided that neutrophil regulatory homeostasis is preserved. Future research should focus on the role of P2X1 receptors in the pathophysiology of thrombo-inflammatory disorders such as ischemic stroke. Fig. 1. A role for platelet and neutrophil P2X1 receptors in thrombosis. Experimental data in mice indicate that activation of P2X1 receptors by extracellular ATP acts to maintain circulating neutrophils in a quiescent state (1), recruit neutrophils to the site of endothelial injury (2), and activate adherent neutrophils (3) and platelets (4), thereby promoting thrombus growth and fibrin generation. TF: tissue factor, ROS: reactive oxygen species. In stroke, thromboembolic occlusion of major or multiple smaller intracerebral arteries leads to focal impairment of the downstream blood flow and to secondary thrombus formation within the cerebral microvasculature [48]. Pathologic platelet activity has been linked to cerebral ischemic events [49]. Therapeutic thrombolysis (t-PA) is currently the only effective treatment for acute ischemic stroke, but it is restricted to the first few hours after disease onset [48]. The utility of current platelet aggregation inhibitors and anticoagulants is counterbalanced by the risk of intracerebral bleeding complications, and the development of novel antiplatelet agents with a more favorable safety profile, better efficacy and rapid action in acute events remains a challenge. After the interruption of cerebral blood flow, tissue injury begins with an inflammatory reaction, which is a common response of the cerebral parenchyma to various forms of insult. Moreover, not only ischemia but also reperfusion itself causes tissue injury. Infiltrating leukocytes, especially neutrophils, play a pivotal role in propagating oxidative stress-triggered tissue damage after cerebral ischemia and reperfusion [50]. In a mouse model of acute ischemic stroke (tMCAO), it appears that platelets contribute to stroke progression by mechanisms that at least partially differ from those involved in thrombus formation [51,52]. Indeed, inhibiting early steps of platelet adhesion and activation (i.e., VWF-GPIb, collagen-GPVI), but not aggregation (αIIbβ3 inhibitors), reduces infarct size. Platelets serve pro-inflammatory functions that are likely involved in infarct growth. However, the mechanistic links between platelets and inflammation remain largely unknown.
Our recent experimental data indicate that P2X1 receptors expressed on both platelets and neutrophils may represent such a link. It would therefore be interesting to determine whether the defective thrombus formation observed in the microcirculation of P2X1−/− mice would protect these mice from thrombo-inflammatory ischemic brain infarction.
3,406.2
2014-11-28T00:00:00.000
[ "Biology", "Medicine" ]
Green Corrosion Inhibition on Carbon-Fibre-Reinforced Aluminium Laminate in NaCl Using Aerva Lanata Flower Extract Aluminium-based fibre–metal laminates are lucrative candidates for aerospace manufacturers since they are lightweight and high-strength materials. The flower extract of Aerva lanata was studied as a means of mitigating corrosion of aluminium-based fibre–metal laminates (FMLs) in saline media. It is considered an eco-friendly corrosion inhibitor derived from natural sources. This flower species belongs to the Amaranthaceae family. The results of Fourier-transform infrared spectroscopy (FTIR) show that this flower extract includes organic compounds featuring aromatic links, heteroatoms, and oxygen, which allow it to act as an organic corrosion inhibitor. The effectiveness of the Aerva-lanata flower extract in inhibiting the corrosion of FMLs was studied in 3.5% NaCl solution. The inhibition efficiency was calculated over a range of inhibitor concentrations at room temperature, using the weight-loss method, potentiodynamic polarization measurements and electrochemical-impedance spectroscopy (EIS). The weight-loss results indicate an inhibition efficiency of about 87.02% in the presence of 600 ppm of inhibitor. The Tafel curves from the polarization experiments show an inhibition efficiency of 88%. The inhibition mechanism was adsorption on the FML surface, and the adsorption was described by the Langmuir adsorption isotherm. The complex protective film occupies a large area of the FML surface. Hence, by shielding the surface of the metallic layer from the corrosive medium, charge and ion transfer at the FML surface is reduced, thereby increasing the corrosion resistance. Introduction Nowadays, a large part of the automotive and aerospace industries is searching for lightweight components that increase the strength-to-density ratio, which implies the possibility of decreasing the weight of structures while simultaneously ensuring high performance in terms of strength, stiffness, flexibility, corrosion resistance, wear resistance, etc. The most famous automobile companies are looking for hybrid materials that could assist in achieving the above-mentioned properties [1,2]. Friction stud welding [3], diffusion bonding [4], friction welding [5], friction drilling, and friction riveting are some of the processes employed for joining multi-material structures. Nevertheless, multi-material structures developed by combining fibres and metal laminates have promising properties and applications. An analysis of recent works shows that sufficiently advanced protection methods, such as arc spraying with flux-cored wire [6], novel superhydrophobic coatings [7], and superdispersed polytetrafluoroethylene (SPTFE) and polyvinylidene fluoride (PVDF) coatings [8], can be used for various connected structural elements, including aluminium alloys, which makes them promising methods for the aerospace industry. Carbon-fibre-reinforced plastics (CFRPs) are characterised by an excellent specific strength, at least two times higher than that of steel. The weight of a component could be decreased by up to half if a CFRP is used instead of steel. For vehicle applications, this means the possibility of making lighter vehicles that consume less fuel. As a result, they are considered advanced structural materials [9].
Currently, the design of Al-based fibre-metal laminates is restricted to the use of glass fibre (GF), since the risk of galvanic corrosion prohibits the use of carbon-fibre-reinforced plastics. Previous research has explored galvanic corrosion between fibre composites and a variety of metals. Tavakkolizadeh et al. [9] suggested that galvanic corrosion occurs when carbon fibres are in electrical contact with steel in an electrolyte. Moreover, they showed that the use of a coating could reduce the rate of galvanic corrosion, but not eliminate it. Ireland et al. studied the effect of carbon nanotubes in GF-reinforced epoxy on galvanic corrosion with aluminium. They found that galvanic corrosion did occur between them, even when a polymer barrier existed between the two conductive materials. Plant extracts are highly available natural products that have corrosion-inhibition properties; they are a renewable source and, by nature, non-poisonous. The ample chemical constituents present in plant extracts, such as alkenes, polyphenols, and aromatics, are capable of inhibiting the corrosion process in mild steel [10]. Most of the herbal products containing functional groups such as C-Cl, C-O, NH2, C-H, C=O, O-H and CHO are potential inhibitors. These compounds become adsorbed and form a protective layer on the surface of the steel to restrict the formation of corrosion. Steven et al. [11] investigated the probability of galvanic corrosion of carbon fibre and aluminium in FMLs. In this examination, the authors studied the galvanic corrosion behaviour between a bulk metallic glass (BMG) and the CFRP. They found that the BMG showed less corrosion than Al when combined with CFRP. Mehdi Yari et al. [12] investigated the properties of carbon composites. The authors determined that when a considerable region of carbon composites is coupled to small metallic parts such as nuts, screws and clasps, galvanic corrosion is a significant threat. Perez et al. [13] researched the galvanic corrosion between carbon steel and a hardened metal treated with 1 M NaOH. The evaluation was carried out under two different conditions: with and without chloride. They determined that there is no significant threat or massive damage due to galvanic corrosion when carbon steel and hardened metal are electrically coupled in a strong, fortified structure. Akhil et al. [14] investigated the corrosion behaviour of mild steel in 0.5 M H2SO4 by using Saraca asoca (Ashoka). They proposed that the application of this extract, which contains epicatechin, helps in limiting the corrosion rate of mild steel. The excellent inhibition effect on mild steel in 0.5 M H2SO4 was evaluated at 100 mg/L by using electrochemical and weight-loss measurements. Atomic force microscopy (AFM) and scanning electron microscopy (SEM) were also used for analysing the surface morphology. The electrochemical studies showed a 95.48% inhibition efficiency at 100 mg/L inhibitor concentration. Table 1 describes the inhibition efficiencies exhibited by different plant extracts and the corresponding medium used during the process [15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30]. Jakubczak et al. [15] studied the interlaminar shear strength of CARALLs with different aluminium-surface preparations and various fibre combinations under the effect of thermal ageing.
Their study revealed that galvanic corrosion is reduced when a thin glass ply is inserted between the metal sheets and carbon laminates. However, this insertion does not have any effect on the ILSS or the thermal fatigue of the laminates. Pan et al. studied the influence of anodizing and annealing the aluminium sheets in an FML [16]. They found that the mechanical properties decreased due to the effect of annealing. Kim et al. (2010) carried out a systematic investigation of the adhesion strength of the CFRP/steel bond to determine the effect of the surface morphology of steel. They incorporated a micro-periodic line pattern on the steel surface for carrying out the investigation. Their findings show that the enhancement of strength is due to the transition from interfacial failure to cohesive failure. Reyes and Gupta (2009) included glass-fibre-reinforced polypropylene in place of the conventional thermosetting parts used in FMLs. They applied a zinc coating to the surface of the steel layer to achieve very good adhesion at the polymer-metal interface. Carbon-fibre-reinforced aluminium laminates were used for carrying out low-velocity impact tests on the specimens. Experimental and numerical simulations were performed on the prepared samples, both qualitatively and quantitatively. It was found that matrix fractures, carbon-fibre cracking, and delamination were the important modes of damage [31]. Novel superhydrophobic coatings were applied to the surface of an aluminium alloy to prevent corrosion by using Al2O3/siloxane hybrids grown in situ on the surface of the aluminium. The experiments proved that the aluminium alloy AA 2024 has excellent corrosion resistance in NaCl, alkaline and acidic environments [32]. Superdispersed polytetrafluoroethylene (SPTFE) and polyvinylidene fluoride (PVDF) were used as the coating medium for studying the anti-icing properties of samples that had been oxidised by plasma electrolytic oxidation. The coatings having combined PVDF-SPTFE layers in a ratio of 1:4 showed significant performance in hydrophobicity, ice-phobicity and electrochemical characteristics compared to all the other sample combinations. The use of this PVDF-SPTFE coating also proved that there was a reduction in the corrosion current density by about five orders of magnitude when compared with the uncoated aluminium alloy [32]. The cored-wire arc-spraying technique was used for spraying ultra-high-molecular-weight polyethene (UHMWPE) particles onto the surface of aluminium during the aluminium spray coatings. These particles act as sealants. A microstructure evaluation was carried out in the study. To study the effect of corrosion, neutral-salt spraying and electrochemical analysis were performed on the coated aluminium sample. The study revealed that the UHMWPE particles acted as sealants, helping to improve corrosion resistance [33]. The present paper aims to study the behaviour of Aerva lanata on aluminium-based FMLs, in terms of corrosion inhibition and adsorption. It is an Indian plant species and belongs to the family Amaranthaceae. The extract was prepared with a Soxhlet-extraction mantle apparatus, and its inhibition efficiency on carbon-fibre-and-aluminium-metal-based FMLs was investigated in 3.5% NaCl solution. The performance and mechanism of inhibition were also discussed.
The characteristics of the FMLs were studied by employing Fourier-transform infrared spectroscopy (FTIR), and SEM analysis was carried out on the prepared surface. Materials Preparation In the present study, carbon-fibre-and-aluminium-metal-based FMLs were studied. The fibre-metal laminates were made by stacking alternating layers of AA 6061 sheets and carbon fibre for a total of 5 layers (starting and ending with AA 6061, with carbon fibre in the alternate layers). The sheets and fibres were cut to dimensions of 150 mm × 250 mm, equal to the size of the die. The thickness (depth) of the die used was about 5 mm. The thickness of the AA 6061 sheets was 0.5 mm and that of the carbon fibre was 0.25 mm. The elemental composition of AA 6061 is given in Table 2. Table 2. Elemental composition of AA 6061. For preparing these FMLs, the carbon fibre and aluminium sheets were bonded together by depositing a layer of reinforcing adhesive. They were joined together by using an epoxy resin with the commercial name Araldite LY 556, purchased from the retailer Excel Trading Corporation, Pune, India. The hardener used was Aradur HY 951, purchased from the retailer Aerium Tech Private Limited, Mumbai, India. The epoxy resin and the hardener were mixed in the ratio of 10:1 parts by weight using rule-of-mixture calculations. Then, a pressure of about 10 bar was applied to the die using a compression moulding machine available at the Department of Mechanical Engineering, Mepco Schlenk Engineering College, Sivakasi, India. Figure 1 shows the prepared FML. After this, for carrying out the corrosion tests, samples of dimensions 5 mm × 5 mm were cut from the FML. Figure 1 also shows the cross-sectional view of the carbon fibre/aluminium 6061 FML sandwich. The preparation of the flower extract as a corrosion inhibitor is discussed herein. Initially, the Aerva-lanata flowers were dried for 7 sunny days. Then the extraction process was carried out in a Soxhlet-extraction mantle apparatus. A total of 10 g of Aerva-lanata-flower powder was mixed with 170 mL of distilled water. The powdered sample was refluxed for four hours in distilled water at 80 °C. It was then filtered to obtain 70 mL of extract solution. Finally, the filtered solution was heated on a hot plate maintained at 100 °C for more than 1 h, until 35–40 mL of the 70 mL remained; the concentrate was then poured into a Petri dish. The Petri dish was kept in the open atmosphere for three days. Finally, the Aerva-lanata-extract powder was collected. Weight-Loss Measurements For evaluating the rate of corrosion in aqueous solutions, the immersion method is an easy technique. In the present work, immersion corrosion tests on the FMLs were carried out for a period of 5 days in 3.5% NaCl medium. For the weight-loss estimation, the carbon-fibre-and-aluminium-metal-based FMLs were prepared according to the ASTM G 31-72 standard. Before the FMLs were exposed to the environment, they were cleaned, dried and weighed, and then exposed to 3.5% NaCl. The results were examined at 25 °C in the presence and absence of inhibitor for an immersion period of 5 days. Using this method, the corrosion rate and the efficiency of the inhibitor were also determined.
Electrochemical Measurements Electrochemical measurements were carried out using a simple three-electrode cell system, a Cyclic Voltammetry Electrochemical Cell manufactured by Ossila Ltd., Sheffield, UK. The FML specimen, a platinum electrode, and a saturated calomel electrode (SCE) served as the working, counter and reference electrodes, respectively. During the tests, the working electrode was immersed in the 3.5% NaCl test solution for 1 h to obtain a stabilised open-circuit potential (OCP). Electrochemical-impedance spectroscopy (EIS) was used for scanning from 100 kHz to 0.01 Hz, with a sinusoidal perturbation of 5 mV amplitude at OCP. From these measurements, the Tafel and Nyquist plots were drawn in order to determine the inhibition efficiency of the extracted Aerva-lanata sample on the FMLs. FTIR Spectroscopy In order to better understand the inhibition mechanism, the FTIR spectra of the Aerva-lanata extract were examined. The FTIR measurements were carried out on an IRSpirit FTIR spectrometer (Japan) purchased from Toshvin Analytical Pvt. Ltd., Mumbai, India. The Aerva-lanata extract was reduced to powder form for FTIR characterization by means of an FTIR-8400S spectrophotometer over the wavenumber range 500-4000 cm−1. Hardness Studies The hardness test was used to study the influence of the corrosive NaCl on the hardness of the FMLs. The hardness tests were carried out on a micro Vickers hardness tester purchased from Walter Uhl technische Mikroskopie GmbH & Co. KG, Asslar, Germany, with an indentation load of 500 g applied for 10 s, on the FMLs under three conditions. First, the FML was tested for hardness without being subjected to the corrosive NaCl. Secondly, an FML specimen that had not been treated with the Aerva-lanata-extract surface coating was subjected to NaCl corrosion and then its Vickers hardness was measured. Finally, the hardness value was also measured for a specimen that was first coated with the Aerva-lanata extract and then subjected to corrosion. SEM Imaging Scanning-electron-microscopy studies were carried out on the prepared samples. The SEM analysis was performed on a ZEISS GeminiSEM field-emission scanning electron microscope, made in Oberkochen, Germany. Images of the carbon-fibre-reinforced aluminium laminate, both of the bare material and of the flower-extract-coated specimen, before and after the immersion test, were obtained. Figure 2 reports the FTIR results. It shows that the Aerva-lanata sample had a stretching vibration of O-H, causing the peak centred at 3522.16 cm−1, which belongs to the amine functional group [28]. The stretching vibration of C-Cl caused the peak at 2362.34 cm−1, which belongs to the carbolic-acid functional group. The stretching vibration of C-O caused the peak at 1556.44 cm−1, which belongs to the useful alkene group. The stretching vibration of C-H caused the peak at 593.40 cm−1, which belongs to the aromatic functional group. The FTIR results confirm the presence of anticorrosive constituents of the Aerva-lanata extract that prevent corrosion on the surface. Since the extracts are mainly composed of a few low-molecular-weight compounds, FTIR analysis is a suitable method to identify them. Therefore, to examine the more prevalent compounds in the Aerva-lanata extract, FTIR analysis was used.
Weight-Loss Measurements Table 3 shows the weight-loss values, inhibition efficiency (η %) and surface coverage (θ) for the carbon-fibre FML at different concentrations of Aerva-lanata extract. From the weight-loss values, corrosion rates (CR) were calculated by the following equation: CR = (K × ΔW)/(ρ × A × t), where CR is the corrosion rate in mmpy, ΔW is the weight loss between before and after the immersion test, K is a constant (for mmpy, K = 8.75 × 10^4), ρ is the specimen density (g/cm3), A is the exposed area (cm2) and t is the exposure time (h). The inhibition efficiency (η) was calculated as η (%) = ((W0 − Wi)/W0) × 100, where Wi and W0 are the weight losses of the specimen in the presence and absence of the inhibitor, respectively. Table 3 also shows the corrosion rate (mmpy) and the inhibition efficiency (η %) of the FML in 3.5% NaCl at different concentrations of Aerva-lanata extract. From these results, it can be seen that the corrosion rate of the FML decreased as the concentration of Aerva-lanata extract was increased. This is due to the precipitation reaction caused by the adsorption of the active ingredients of the Aerva-lanata extract on the carbon-fibre-and-aluminium-metal surface. The average inhibition efficiency of the Aerva-lanata inhibitor on the FML was 87.03%. Table 4 indicates the weight of the specimens before and after coating. Figure 3 shows the weight loss of the FML due to corrosion as a function of the number of days, for both the bare and the coated FML in 3.5% NaCl. It indicates that there was only a small reduction in the weight of the coated FML when compared to the uncoated bare FML; the Aerva-lanata extract therefore yields a great efficiency by providing a protective covering over the surface of the FML. Figure 4 shows the corrosion rate of the FML as a function of the number of days of immersion in 3.5% NaCl. The Aerva-lanata extract was capable of yielding an efficiency of about 87% on carbon-fibre-and-aluminium-metal-based FMLs in 3.5% NaCl. Polarization Measurements The effect of the concentration of the Aerva-lanata extract on the polarization behaviour of the FML in 3.5% NaCl was analysed, and the Tafel plots were recorded for different inhibitor concentrations, as shown in Figure 5. The corrosion current densities were calculated from the intersection of the Tafel slopes at the corrosion potential. Higher inhibitor concentrations resulted in lower current densities, with the lowest value at 600 ppm [29]. The inhibition efficiency was calculated as η (%) = ((I0corr − Iicorr)/I0corr) × 100, where I0corr and Iicorr represent the corrosion current densities in the absence and presence of the inhibitor on the FML surface, respectively. Figure 5 clearly shows that the anodic metallic-dissolution and cathodic hydrogen-evolution reactions were inhibited when the concentration of Aerva lanata was increased in the aggressive medium. The Tafel plots were recorded for the FML under two conditions, viz.
before and after adding the Aerva-lanata-flower extract to the FML surface, and were controlled by charge transfer between the cathodic and anodic reaction mechanisms. The combination of physisorption and chemisorption causes the active ingredients of the Aerva-lanata-flower extract to be adsorbed strongly onto the FML surface. Corrosion of the surface was prevented by this flower extract, as proved by the fact that the corrosion current-density value decreased with increasing Aerva-lanata inhibitor concentration. The results in Table 4 suggest that increasing the concentration of the Aerva-lanata extract decreases the corrosion current density. The lowest corrosion current density of 2.335 × 10−3 A/cm2 was obtained at 600 ppm concentration, corresponding to an efficiency of 88% on the carbon-fibre-and-aluminium-metal-based FML. From Figure 5 it can be seen that there was a positive potential shift, which is due to the corrosion resistance conferred on the carbon-fibre-and-aluminium-metal-based FMLs by the Aerva-lanata extract on the surface. The maximum positive shift occurs at 600 ppm compared with the other inhibitor concentrations. Additionally, this 600 ppm solution yielded the highest efficiency of about 88%, as shown in Table 5. Electrochemical-Impedance Spectroscopy Electrochemical-impedance-spectroscopy (EIS) measurements were performed on the carbon-fibre-and-aluminium-metal-based fibre-metal laminates to study the impedance parameters in a 3.5% NaCl environment with different Aerva-lanata-flower-extract concentrations. Table 6 shows the results of the EIS measurements. Figure 6a shows the EIS Nyquist curves for different concentrations of Aerva-lanata extract. It can be seen that as the concentration of Aerva-lanata extract increased, the charge-transfer resistance (Rct) also increased, reaching a maximum value of 301.15 Ω cm2. This rise in charge-transfer resistance reflects a reduction in the number of active sites created by the adsorption of chloride ions on the surface of the FML, which led to the protection. The equivalent circuit is shown in Figure 6b. In the Nyquist plot, a semicircle is formed in each curve, which is due to the charge-transfer resistance and stands for a particular time constant. Increasing the concentration of the Aerva-lanata extract enlarged the capacitive-loop diameter, from the uncoated condition up to coating at about 600 ppm, from which it is understood that an inhibition effect was developed. The inhibition efficiency in the impedance study can be calculated using the formula η (%) = ((Rct − R°ct)/Rct) × 100, where Rct and R°ct are the charge-transfer resistances with and without the inhibitor, respectively. The inhibition efficiency is enhanced by increasing the Aerva-lanata extract concentration, reaching the highest value of 85.9 percent at 600 mg/L. Table 6 clearly shows that when the Rct value increases, the CPE value correspondingly increases. This is due to the formation of the protective film on the surface of the carbon-fibre-and-aluminium-metal laminates. The transition in Rct and CPE values was caused by the substitution of water molecules at the carbon-fibre-and-aluminium-metal-based FML surface by the adsorbed Aerva-lanata inhibitor.
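For concreteness, the efficiency formulas used in this study (weight loss, polarization, impedance) and the corrosion-rate expression can be collected into a short sketch. The function forms follow the equations given above; all numeric inputs are placeholders, except the 301.15 Ω cm2 Rct value quoted from Table 6, and the blank Rct is an assumed value back-calculated from the reported efficiency.

```python
# Minimal sketch of the efficiency and corrosion-rate calculations
# described above. Numeric inputs are placeholders, except Rct =
# 301.15 ohm cm^2 (Table 6); the blank Rct of 42.5 is back-calculated
# from the reported 85.9% efficiency, not a measured value.

def corrosion_rate_mmpy(weight_loss_g, density, area_cm2, hours, k=8.75e4):
    """CR = K * dW / (rho * A * t), with K chosen so CR is in mmpy."""
    return k * weight_loss_g / (density * area_cm2 * hours)

def eta_weight_loss(w0, wi):
    """Efficiency from weight losses without (w0) / with (wi) inhibitor."""
    return (w0 - wi) / w0 * 100.0

def eta_polarization(i_corr_blank, i_corr_inhibited):
    """Efficiency from corrosion current densities (A/cm^2)."""
    return (i_corr_blank - i_corr_inhibited) / i_corr_blank * 100.0

def eta_impedance(rct_inhibited, rct_blank):
    """Efficiency from charge-transfer resistances (ohm cm^2)."""
    return (rct_inhibited - rct_blank) / rct_inhibited * 100.0

print(f"EIS efficiency: {eta_impedance(301.15, 42.5):.1f} %")  # ~85.9 %
```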
The inhibited solutions had higher values of n than the uninhibited ones, due to a reduction in surface heterogeneity as a result of the adsorption of the Aerva-lanata inhibitor at the electrode-electrolyte interface of the carbon-fibre-and-aluminium-metal-based FML. Inductive loops were present at low frequency in the EIS curves of the blank solution, which is the result of the surface relaxation of adsorbed intermediate products. The inductive loops disappeared for the remaining concentrations. Mechanical-Hardness Test The Vickers hardness tests conducted on the carbon-fibre-and-aluminium-metal laminates that were immersed in the 3.5% NaCl corrosion test solution are shown in Figure 7. The raw specimen showed the highest hardness of 522 VHN, whereas when it was subjected to corrosion its hardness value drastically decreased to 209 VHN. At the same time, when the FML was treated with the Aerva-lanata extract before corrosion, the hardness only decreased to 299 VHN. Table 7 shows the Vickers hardness values for the FML under the various conditions. Surface Analysis Scanning Electron Microscope Figure 8 shows the carbon-fibre-and-aluminium-metal-based fibre-metal-laminate surface morphology in 3.5% NaCl solution with and without the Aerva-lanata-flower inhibitor. Figure 8a shows the bare carbon-fibre-and-aluminium-metal-based fibre-metal laminate that was polished before undergoing SEM, whereas Figure 8b shows the SEM image of the bare laminate after eight hours of immersion in 3.5% NaCl, and Figure 8c shows the SEM image of the laminate that was coated with the Aerva-lanata extract and immersed in 3.5% NaCl. The surface of the fibre-metal-laminate specimen was incredibly rough due to surface corrosion when the specimen was immersed in the 3.5% NaCl solution, but the specimen that was coated with the Aerva-lanata extract showed a much less damaged surface morphology than that in Figure 8b. This is due to the formation of a protective layer of the extract solution on the laminate surface. The smoother surface was obtained thanks to the presence of the green corrosion inhibitor on the surface of the carbon-fibre-and-aluminium-metal-based fibre-metal laminates [29]. The effect of microstructure was investigated by comparing the corrosion behaviour of the devitrified (crystalline) BMG to that of the non-crystalline BMG. The results were similar to those reported in a previous study by Peter et al., wherein the corrosion current density of the crystalline BMG was slightly greater than that of the non-crystalline structure. Surface oxide layers can exhibit different potentials from the base metal and thus affect corrosion behaviour. For example, the standard reversible electrode potential of titanium is negative, but in practice the electrode potential of titanium in the galvanic series is positive because of the passive oxide layer on its surface. Conclusions For the first time, the Aerva-lanata flower was successfully employed as a green corrosion inhibitor for carbon-fibre-and-aluminium-metal-based fibre-metal laminates in a saline medium. In fact, the flower of Aerva lanata contains aromatic rings, heteroatoms, and oxygen, which makes it a suitable candidate for acting as an inhibitor in this medium.
The inhibition efficiency reached a maximum of about 88 percent with the electrochemical techniques in the presence of 600 ppm inhibitor. SEM analysis was used to analyse the surface morphology of the carbon-fibre-and-aluminium-metal-based fibre-metal-laminate microstructure in the absence and presence of the Aerva-lanata-extract inhibitor. The Langmuir adsorption isotherm was identified as describing the inhibition mechanism. Due to the formation of a complex protective film between the inhibitor and the metal-surface ions, there was a decrease in charge and ion transfer at the metal surface, which was recognised as the reason for the inhibition property.
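Since the Langmuir isotherm is cited as describing the adsorption, it may help to sketch how such a fit is usually checked: C/θ is plotted against C, and a straight line with slope close to 1 supports Langmuir adsorption. The concentration-coverage pairs below are invented placeholders shaped like this study's trend, not its actual table values.

```python
# Minimal sketch of a Langmuir-isotherm check: plot C/theta against C;
# a straight line with slope near 1 supports Langmuir adsorption.
# The (C, theta) pairs below are placeholders, not the study's data.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

conc_ppm = [100, 200, 300, 400, 500, 600]        # inhibitor concentration C
theta = [0.55, 0.68, 0.75, 0.80, 0.84, 0.87]     # surface coverage (= eta/100)

slope, intercept = linear_fit(conc_ppm,
                              [c / t for c, t in zip(conc_ppm, theta)])
print(f"slope = {slope:.3f} (near 1 suggests Langmuir), "
      f"1/K_ads = {intercept:.1f} ppm")
```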
6,207.6
2022-04-21T00:00:00.000
[ "Materials Science" ]
Realization of the English Assisted Learning System Based on Rule Mining This paper first makes a brief introduction to the research progress of artificial intelligence, then introduces the basic structure of the whole English assisted learning system from the angle of system functional requirements, and finally discusses the realization of the functions of the English assisted learning system with the support of rule-based data mining, aiming to attract more attention. INTRODUCTION In the field of artificial intelligence, the rule-based expert system is approved and favored by a large number of researchers. It has now developed into one of the branches with the greatest application prospects and potential in the artificial intelligence field. A great number of studies have pointed out that combining expert system theory and data mining theory is worth trying. The internet platform provides teaching and research activities with opportunities for application so as to achieve twice the result with half the effort. Therefore, from the angle of English learning, this paper carries out a discussion and an analysis of the design and realization of an English assisted learning system based on rule-based data mining technology. RESEARCH OVERVIEW Artificial intelligence is one of the relatively new research directions at present, and its application fields are becoming increasingly extensive. Relevant research at home and abroad is becoming more and more mature. At present, research findings in artificial intelligence at home and abroad mainly include the following parts: First, the understanding of natural language, namely machines' understanding of human language, which is mainly realized by search engines. Second, database retrieval, which means the search for useful information with related tools and technologies in high-volume databases. Third, expert systems, namely expert systems established on the basis of rules. The fourth part is mechanical theorem proving. The fifth part is robotics. The sixth part is automatic programming, and the seventh part is combination and scheduling, namely the simplified decomposition of complex problems and the search for optimal solution processes. SYSTEM STRUCTURE The application of the English assisted learning system provides students with a targeted practice platform, where knowledge is tested against clear objectives. Meanwhile, teachers are able to analyze students' answers and performance levels by inputting examination data into the system, so as to guide the improvement of teaching plans. From this perspective, the English assisted learning system reduces the time spent on handling system data on the one hand, and is of great significance for improving students' learning autonomy and enthusiasm on the other. According to the requirements mentioned earlier, the basic procedure of the whole English assisted learning system is shown below (Figure 1). In the procedure shown in Figure 1, the users of the whole English assisted learning system can be classified according to their different identities, which include four types: (1) System administrators: The major responsibility of system administrators is to conduct comprehensive maintenance of users' information for the whole learning system.
(2) Domain experts: The major responsibility of domain experts is to maintain the relevant information and data of the knowledge base, which contains not only the basic knowledge base but also the question bank. The main object of the former is grammar, while the latter includes not only question information but also the related interpretation of questions. (3) Teachers: The major responsibility of teachers is to make full use of the whole English assisted learning system so as to bring its value into play. In the assisted learning system, the question bank provided by the system can be classified according to difficulty level. In practical teaching activities, teachers can not only compose test papers and input questions flexibly according to the information in the system but also compile test-question information. Examination scores are then uploaded to the learning system by teachers. Using the system functions, analyses are carried out on classes' and students' learning situations, with a class or a student as the unit. Relevant improvement opinions and suggestions are finally provided for teaching plans. (4) Students: The major responsibility of students is to carry out self-training actively with the English assisted learning system as a platform. In addition, the system also supports the query of recorded scores for students, as well as password modification and reset. SYSTEM IMPLEMENTATION From the perspective of system structure design, the client can be divided into two parts for the convenience and adaptability of the English assisted learning system application: the teachers' end and the students' end. The teachers' end can be further subdivided into three parts according to different user permissions and identities, namely system administrators, domain experts and teachers. After users log in with their names and passwords, the system analyzes the objective answers of students (the reference models are data uploaded to the server by teachers) through data mining on the database. There are numerous knowledge points in each objective question. Therefore, the function of the English assisted learning system that needs to be satisfied first is to integrate analyses of specialized knowledge, judge students' answers, and provide teachers with suggestions on adjusting the teaching plan based on the rules of the expert database. According to this requirement, the leading function of the English assisted learning system is rule-based data mining. This is also one of the most important goals of the system development. The users of the rule-based data mining function are teachers, who are in charge of entering the data of students' answers; the system calls the internal database (namely the knowledge points in the options) and the defined rules (namely the division of credibility), and outputs students' mastery of a certain knowledge point.
The English assisted learning system offers two different ways of diagnosis in view of the various requirements of students and teaching. One is the diagnosis established on the analysis of knowledge points. The other is the diagnosis established on the analysis of students' mastery of knowledge points. The former infers students' mastery of knowledge through the analysis of a certain knowledge point. During this process, the data mainly come from students' scores uploaded by teachers and the questions corresponding to a certain answer. Implicit knowledge points can be found in these questions through rule mining. The system can judge students' mastery of knowledge points by analyzing the overall knowledge structure of a test paper and the students' answering situation. The latter analyzes the examination situation of students in groups, with a class or a department as the unit. Teachers are able to grasp teaching situations in this way, which plays a key role in adjusting teaching arrangements. Figure 1. Basic flow of the English assisted learning system. Special attention is required in that the rule-based data mining function supports analyses of the knowledge points in questions: it summarizes knowledge points that are easily confused through the optimized Apriori algorithm and classifies the difficulty of questions according to the result. With the optimized Apriori algorithm introduced, the theoretical procedure of realizing the system function is as follows (a runnable sketch of this procedure is given at the end of this section). If the corresponding relation between questions and knowledge points is as shown in Table 1, the operating steps that must be followed in the association-rule mining of the data set are: Firstly, with the minimum support count set to 2, scan each item set and count the occurrences of each knowledge point, so as to obtain the support count of each item set and the candidate item set C1. Compare the candidate support counts with the given minimum support; carry on to the next step for those satisfying the condition of being at least the minimum support, and prune those that do not meet the condition, so as to obtain the frequent item set L1. Then connect all the data in this set through pair-wise combination to form a new candidate item set, defined as C2. Prune the item sets in C2 that do not satisfy the minimum-support condition so as to form L2. The frequent item set L3 can be obtained by repeating the above steps. After the connecting process of L3, the obtained candidate item set C4 has only one element, {I1, I2, I3, I5}; its support count is 1, which fails to satisfy the minimum-support condition, so the obtained frequent item set L4 is an empty set. The final structure output by the association-rule mining is the frequent item set L3. Illustration: Altogether 30 students in a class take part in a unit evaluation test. The teacher selects 5 questions as analytical samples after the test. First, the teacher logs in to the English assisted learning system to add the five questions to the system and then determines the corresponding knowledge points and the analysis results of the options. The questions are, respectively: 1) It was essential that the application forms ( ) back before the contract.
A: were sent; B: would sent; C: be sent; D: must be sent; 2) The match was cancelled because most of the members ( ) a match without a standard court. A: objected to having; B: were objected to have; C: were objected to having; D: objected to have; 3) The goals ( ) he had fought all his life no longer seemed important to him. A: with which; B: for which; C: after which; D: at which; 4) I would appreciate ( ) it a secret. A: you to keep; B: you keeping; C: that you keep; D: that you will keep; 5) The children went there to watch the iron tower ( ). A: be erected; B: to erect; C: being erected; D: erecting. The standard answers to the above five questions are, respectively: C; A; B; B; A. The corresponding knowledge points are defined as the vector y, whose meanings are: y1 stands for conjunctions, y2 stands for subject-predicate agreement, y3 stands for verbs, y4 stands for sentence patterns, and y5 stands for adjectives. After the examination, students' answers can be compiled into the statistical data shown in Table 2 according to the uploaded data results. In the English assisted learning system, the above data can be stored in a database after organization. The system first judges the correctness of the answers and then matches these answers with the rules defined in the system knowledge base so as to get the value of u, namely a quantitative processing of the knowledge-point vector y. The corresponding quantized value of the knowledge point yn is obtained from a formula in which yn stands for the knowledge-point vector corresponding to a certain question, x is the frequency level of students' answers for the specific knowledge point yn, and n is the set of all questions. The quantized value u can be further processed with weighting on this basis. The weighted vector is defined as W, and the basic principle that must be followed in the weighting process is: for the samples to be tested, the weighted value of a certain knowledge point is the result of the weighting and superposition of the credibility (defined as C). C is determined by rules in the system according to the experience of domain experts. Meanwhile, the values of the weighted vectors decrease progressively in units of 2^n. According to the above processing, the vector analysis table of students' answers after processing is shown in Table 3. It can be concluded from the above analysis that if all the students choose options corresponding to "mastered knowledge points" and the answer is a determined answer, the weighted value is the minimum, which is 0; if all the students choose options corresponding to "not mastered knowledge points" and the answer is a determined answer, the weighted value is the maximum, which is 30. Combining the above analysis results, the analysis of this unit's test results output by the system is given in Table 4. CONCLUSION This paper mainly carries out an analysis and a study of the design and realization of the English assisted learning system, introduces the expert system that has the greatest potential in the artificial intelligence field, optimizes data mining with the Apriori algorithm on the basis of rules, and conducts uncertainty reasoning on students' answers to enrich and improve the system functions. However, regarding objective problems such as the small scale of the database, further research and development still need to be carried out, on the premise of guaranteed network stability, so as to provide platform support for paperless teaching.
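As promised above, the following is a minimal runnable sketch of the level-wise Apriori procedure (candidate generation by pair-wise union, support counting, pruning) with a minimum support count of 2. The nine transactions are assumed stand-ins for the rows of Table 1, chosen to be consistent with the counts the text reports (C4 = {I1, I2, I3, I5} with support 1, so L4 is empty and L3 is the final output).

```python
# Minimal Apriori sketch: level-wise candidate generation and pruning
# with minimum support count 2, mirroring the steps described above.
# The transactions are assumed stand-ins for Table 1's rows
# (question -> knowledge points I1..I5).
from itertools import combinations

transactions = [
    {"I1", "I2", "I5"}, {"I2", "I4"}, {"I2", "I3"},
    {"I1", "I2", "I4"}, {"I1", "I3"}, {"I2", "I3"},
    {"I1", "I3"}, {"I1", "I2", "I3", "I5"}, {"I1", "I2", "I3"},
]
MIN_SUPPORT = 2

def support(itemset):
    """Count transactions that contain every item of the itemset."""
    return sum(itemset <= t for t in transactions)

# L1: frequent single items.
items = sorted({i for t in transactions for i in t})
frequent = [frozenset([i]) for i in items
            if support(frozenset([i])) >= MIN_SUPPORT]

k = 2
while frequent:
    print(f"L{k - 1}:", sorted(map(sorted, frequent)))
    # Join step: size-k unions of pairs of frequent (k-1)-itemsets,
    # then prune candidates below the minimum support count.
    candidates = {a | b for a, b in combinations(frequent, 2)
                  if len(a | b) == k}
    frequent = [c for c in candidates if support(c) >= MIN_SUPPORT]
    k += 1
```

Run as written, this prints L1 through L3 and stops, because the single size-4 candidate {I1, I2, I3, I5} has support 1 and is pruned, exactly as the worked procedure above describes.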
2,853.2
2015-01-01T00:00:00.000
[ "Computer Science", "Education" ]
On the Awkward Polysemy of the Verb risk As one of Karin Aijmer's fields of interest is that of semantics, a discussion of a polysemous word may not come amiss in this context. The difference between homonymy and polysemy is clear enough in principle, if often less clear in actual practice.1 As the semantic differences are greater, by definition, between homonyms than between different uses of a polysemous word, it might be thought that ambiguity, especially problematic ambiguity, is restricted to the sphere of homonyms. I will try to demonstrate with the help of the English verb risk that such is not the case. The verb risk is just one in a family of formally related words: risk n., risk v., risky, riskiness, riskful and perhaps risqué, and also of semantically related ones like danger, hazard, venture and gamble, invest, expose. In their plea for semantic frames, Fillmore and Atkins (1992) show that such relations will all have to be taken into account for an exhaustive description of a word to be possible.2 Pustejovsky (2000:77ff.) formalises the main meanings of risk by introducing a privative factor that acts as a coercion operator; he thus arrives at an abstract description of the meanings of the verb. My own aim here is much more modest. By studying a few examples of the interplay of two main senses of the verb, I hope to demonstrate ways in which ambiguity can arise and also to suggest some of the devices hearers/readers use to deal with it. Polysemous 'risk'. A person might refer to a very dangerous situation by saying "I might risk my life doing this", or, equally, by saying "I might risk my own death doing this". This anomaly points up the semantic duality of the verb to risk, a duality that is likely to pass unnoticed most of the time in conversation and in writing but which is sometimes apt to cause misunderstanding. The identification of a word as polysemous, and the distinction between its different senses, may not always be obvious; Fillmore and Atkins (2000: 101) hold that "even for lexicographers there are no objective criteria for the analysis of a word into senses". Nor is this only a matter of objective facts. Different semantic theories take different views of the relationship between extralinguistic factors and their linguistic representations, theories that can be ranged along a scale from reductionist, with few semantic entities, to expansionist, with many. The number of polysemous words in the language will therefore vary with the type of semantic theory. To many or most observers, however, the verb risk is undoubtedly a polysemous word. Its meaning can be understood as made up of two main components: (A) 'to hazard an act or something pleasant or valuable' (as in "to risk a guess" or "to risk money"); and (B) 'to possibly expose oneself to something unpleasant' (as in "to risk bankruptcy"). In both cases, A and B, the action implicit in risk is normally undertaken to achieve something worthwhile, a Goal.3 There is a difference between A and B in that the risk the subjects are running is not made explicit in A but is always explicit in B. If you risk a guess, the risk is that you may be wrong, and if you risk your money, the risk is that you will never see it again, but in either case the possible (unwanted) consequence is not specified. In B sentences, on the other hand, the risk the subject is running (e.g. "bankruptcy") functions as the object and is always specified.
Speakers can choose to focus either on what is being hazarded (A) or on its unwelcome possible consequences (B). Fillmore and Atkins (1992: 82) use the terms Valued Object/Victim and Harm, respectively. If speakers focus on what is being hazarded, they will say e.g. risk one's life, money, career; or risk a guess, blowing one's nose, etc., all the time leaving 'with possibly unwanted consequences' unexpressed. Here the object of risk is the word(s) denoting the act or the valuable or pleasant things. If, on the other hand, they focus on the possible consequences, they might say risk death, jail, humiliation, etc., where the object of risk is the word(s) denoting the danger or unpleasantness that may materialise. The entity that is being endangered in this perspective (normally a person or persons)4 is usually the actor, the subject of the clause.

Corpus evidence. In order to look a little more closely into the function of the verb risk in modern English I have made use of material in the CobuildDirect Corpus.

Risk (A). Examples of what will here be called risk A are usually easy to distinguish from the rest. What is being endangered is a valuable or pleasant thing, so noun phrases with a positive semantic prosody abound: her good looks, her job, her life, his freedom, his reputation, your money, my career, our most precious asset, peace and prosperity, their capital, etc. Note that the A noun is typically preceded by a possessive; it is something that belongs to you or that you have influence over that you may put at risk. The risk is implied but, as we saw, not specified. What is hazarded can also be an act (a "deed" in Fillmore and Atkins 1992: 94ff). Some typical A cases are the following:

(1) I also risk a pound a week on the lottery [...] (Corpus: today/11. Text: N6000941205.)
(2) A friend of Kevin said: "It was typical of him to risk his own life to help a child in his care." (Corpus: sunnow/17. Text: N9119980626.)

In such cases it is obvious that something valuable is being put at risk in order to achieve some goal. A few examples of acts, "deeds", with possibly unwanted consequences are these: (…)

Risk (B). B examples are those where the risk is specified as the object of risk and the subject is what/who is being exposed to it. Some examples are:

(5) Mobile phone users risk bad health after a mere two minutes of use. (Corpus: sunnow/17. Text: N9119980517.)
(6) Thieves don't want to risk being seen. (Corpus: ukephem/02. Text: E0000001244.)

With both the A and the B perspective the object of risk could be an act expressed as a gerund (but see below). Again, the difference between them is that in the A perspective the act is seen as possibly having undesirable consequences, but in the B perspective it is the act itself that represents those consequences. Cf. (7) and (8):

(7) [...] confine one's actions merely to what is possible but should try, dare and risk doing as much good as possible. (Corpus: ukbooks/08. Text: B0000001257) – (A)
(8) [A cricketer speaking:] When you have a finger injury you risk doing something else because you take the ball differently. (Corpus: oznews/01. Text: N5000950419.) – (B)

In (7) the possible unpleasant consequences of doing good are not spelled out, but in (8) the possible unpleasant consequences are precisely "doing something else" (than you should have done).

Formal characteristics.
Certain characteristics of the A and B categories are striking. It was suggested above that nominal objects of risk with the A sense are typically preceded by a possessive adjective. This is overwhelmingly the case: out of 95 nominal objects with a possessive in the material, 90 have the A sense and 5 the B one. Another, slightly less pronounced, tendency is that if the object of risk is a gerund, risk will be B in more than three cases out of four (168 out of 220); these proportions are tallied in the short sketch at the end of this article. Although there are no very obvious formal or syntactic dividing lines between the two types of risk, it is not unlikely that they will be more clearly differentiated in the future. As Carey Benom says:5

Semantic differentiation occurs, diachronically, prior to syntactic differentiation, as well as motivating it. Initially the semantics begin to diverge, and only later are the conceptual differences reflected structurally. However, when there are syntactic differences, there is stronger evidence for distinct senses, as we are not dependent on the linguist's intuition.

Ambiguity and context analysis. With two related but different meanings in the same word there are interpretational dangers arising out of ambiguity, "multiple meanings at the level of more complex syntactic structures".6 Let us follow the road into increasing ambiguity. As often happens, what is ambiguous in isolation can be quickly resolved by the textual or situational context. For instance, in (9):

(9) Blinder, of course, is an inflation dove, a Clinton-appointed Fed board member who would risk a bit of inflation to stimulate growth. (Corpus: times/10. Text: N2000951104.)

"A bit of inflation" would normally suggest an undesirable phenomenon that may occur if certain measures are taken or neglected, thus a B interpretation, but as a Fed board member Blinder is obviously in a position to bring on inflation in order to achieve the higher targets expressed in "to stimulate growth". "Risk a bit of inflation" thus means 'decide on creating a bit of inflation, risky as it may be', an A interpretation. In other cases, it is less evident whether the object refers to what is being put at risk or to the unwished-for consequences, and the context will have to be analysed fairly closely for the ambiguity to disappear:

(10) I thought I was going nowhere then but before the start of last season Huddersfield came in and I was totally sold on the place. It took courage for them to risk a transfer tribunal, which eventually put me around <KPD> 60,000, and I'll always be grateful to them. (Corpus: sunnow/17. Text: N9119980403.)

The speaker, presumably a footballer, could mean either that the club were courageous in deciding to appear before a transfer tribunal despite the unspecified uncertain outcome, which would be an A case, or that they were courageous in taking certain unspecified steps that might result in a tribunal; the danger that might materialise would thus be the tribunal itself, a B case. Given the seemingly bad repute of transfer tribunals,7 a B interpretation may seem marginally more likely.
The difference between the two types of interpretation can thus be slight or elusive, but it is important enough in the context of power politics:

(11) Although the Hungarians immediately appealed to the United Nations for support, the Soviets moved in quickly with troops and tanks. For once, John Foster Dulles was not willing to risk a confrontation, and the non-Communist world stood by as the Hungarians were brutally forced back into the Soviet camp. (Corpus: usbooks/09. Text: B9000001429.)
(A): 'The American Secretary of State John Foster Dulles was not willing to stage a confrontation, which might have disastrous consequences.'
(B): 'Dulles was not willing to take steps that might result in a disastrous confrontation.'

(12) But it is clear that Mr Assad is unwilling to risk meeting Shimon Peres, the Israeli Prime Minister, before a deal on ending the state of war between Israel and Syria. (Corpus: times/10. Text: N2000960312.)
(A): 'Assad is afraid his meeting Peres before the deal would have harmful consequences.'
(B): 'Assad is afraid he might happen to meet Peres before the deal.'

(13) […] unwilling to risk another fight with the military, reluctantly relented. (Corpus: ukbooks/08. Text: B0000000551.)
(A): 'LG, unwilling to provoke another fight with possibly dangerous consequences ...'
(B): 'LG, who would not accept the unwished-for possibility of a fight ...'

As we move into more and more palpable ambiguity we have to depend more and more on context for our interpretation. "Context" here means more than the immediate linguistic environment; it also comprises "general knowledge of the world, more specific knowledge that is shared between participants in a conversation, information that was introduced earlier in the discourse, and the specific circumstances in which a discourse or conversation takes place" (Tanenhaus 1992: 288-289). In the great majority of cases, what is ambiguous in a narrow perspective ceases to cause any problems when more contextual elements are taken into account. The general reader would presumably hesitate between an A and a B reading of (11)-(13), and it is easy to think of situations where an unintended reading could cause confusion or harm. However, an observer with a more intimate knowledge of the characters and individual histories of the protagonists of (11)-(13) might not therefore find the sentences ambiguous at all.

Distinguished and undistinguished senses. As we have seen, there is a clear distinction between senses A and B of risk, and most of the time there can be no discussion as to which sense is being meant. We have also seen that there are ambiguous cases where the speaker/writer intends either A or B but where the hearer/reader's choice between them is not obvious and where the outcome of the choice, which will significantly affect the final interpretation, is ultimately dependent on sometimes peripheral contextual information. But there is also a third case. The polysemy of risk can be awkward, simply because the perspective of the speaker/writer is not always obvious even to himself/herself. If what you risk is something valuable on the A reading ("risk life") but something undesirable on the B reading ("risk death"), neutral phrases like "risk a lot", "risk more", "risk a situation", "risk a very great deal", "risk anything", "much to risk" tend to blur the line between an A and a B interpretation. Consider sentence (14):

(14) Prayers are sometimes answered by the experience of more struggle, by our being plunged into situations where we must risk more than we ever dared before. (Corpus: usbooks/09. Text: B9000001453.)
(A): 'We have to hazard more, put more at risk' (more suggests something valuable)
(B): 'We have to expose ourselves to the possibility of more harm (damage, dangers, losses, etc.)' (more suggests something unpleasant)

In such cases readers or listeners can follow one of several interpretational avenues. The first is that of resorting to a default reading. One of the two possible readings will seem to be a generally more likely one, which in case of doubt readers will automatically prefer to the alternative reading. The default will in that case probably be the more frequent of the two. To find out which of them, if so, is the default reading, all the cases of the verb form risk in the Cobuild Corpus were analysed and classified as A or B occurrences. In the cases where an A classification seemed just as likely as a B one, the examples were classified as AB, and in the cases where either an A or a B classification seemed likely but less than certain a ? was added. The results of the analysis are summarised in Table 1 (Distribution of the verb risk over semantic classes) and tallied in the sketch at the end of this article. It seems, then, that if either of the two interpretations is to function as default, it is the B one, with 59.1 % as against 38.5 % for A. Intuitively it may also seem more natural that you should risk something unpleasant or dangerous than something valuable. Another avenue of interpretation becomes relevant when again either A or B can apply and it is likely that neither speaker/writer nor hearer/reader bothers to make a distinction between them. Consider (15):

(15) But when a person's
drug has potentially negative consequences and he continues using it anyway, we must conclude that he is deriving deeper, unacknowledged payoffs from it - secondary gains for which he is willing to risk a lot. (Corpus: usbooks/09. Text: B9000000440.)
(A): gains for which he is willing to hazard many things he values
(B): gains for which he is prepared to endure many hardships

The sentence is unproblematic as it stands because we don't feel called upon to distinguish between the two senses of risk. Both may apply at the same time. This is a case of vagueness or indeterminacy, discussed for instance by Geeraerts (1993). What happens here, I would suggest, is that hearers/readers, and probably also speakers/writers, desist from making the usual distinctions relevant to the word in question and rely on a more general or abstract meaning common to the two alternatives.8 As Ravin and Leacock put it in a different context (2000: 22), the two meanings "are not disjoint but rather components of a single word sense". In this case that meaning might be 'accept the possibility of negative consequences in order to gain something valuable'. This raises what seems at first to be an awkward question about the status of our verb. According to a common definitional test of polysemy, "[a] word is polysemous if more than a single definition is needed to account for its meaning. In classical terms, word is polysemous if a single set of necessary and sufficient conditions cannot be defined to cover all the concepts expressed by the word" (Ravin and Leacock 2000: 3). If, therefore, the definition of risk just given should be necessary and sufficient, the word would not be polysemous, according to the definitional test. However, precisely because the suggested definition is abstract and general, it is under-specified and therefore generally not sufficient. It will serve the hearer/reader who is only interested in a broad understanding of the subject-matter in question, and it will act as a kind of temporary measure to tide the hearer/reader who wants to understand the subject-matter in detail over a potential quirk in the written or spoken communication, but more information will be needed for the different senses expressed by risk to become clear. This may be said to illustrate the point raised earlier, viz. that the status of a word in a monosemous-polysemous dimension is not given once and for all. It will vary not only for theoretical reasons depending on which semantic theory is being applied, but also for practical reasons, because individual speakers will differ over semantic nuances in a word. The different possible types of relationship between senses A and B could perhaps be illustrated as in Figure 1 (Relations between senses A and B), where A/B is intended to represent the default method with either A or B as the preferred option, and where A+B represents the more abstract amalgamation of the two senses. A conclusion that one may draw after considering a polysemous verb like risk is that polysemy is in fact quite likely to give rise to ambiguity. In most cases, however, the potential awkwardness of ambiguity can be handled satisfactorily by hearers and readers, thanks to contextual information. If the context is not sufficiently informative and the ambiguity is therefore not resolved instantly, they will resort either to a default interpretation or to a general, less specific interpretation. In the business world of today, particularly in financial institutions such as insurance companies, specialists in finance devote themselves to "risk analysis" and "risk
management", terms that are frequent and often well defined.It is now obvious that competent users of English, i.e. specialists in lan-B A guage use, will also have to engage in both "risk analysis" and "risk management" in a linguistic sense, an activity that will assume a different kind of specialist knowledge, but one which is no less demanding and chiefly applied on the subconscious level.Language is a highly sophisticated system. confine one's actions merely to what is possible but should try, dare and risk doing as much good as possible.(Corpus: ukbooks/08.Text: B0000001257) -(A) (8) [A cricketer speaking:] When you have a finger injury you risk doing something else because you take the ball differently.(Corpus: oznews/01.Text: N5000950419.)-(B) unwilling to risk another fight with the military, reluctantly relented.(Corpus: ukbooks/08.Text: B0000000551.)(A): 'LG, unwilling to provoke another fight with possibly dangerous consequences..' (B): 'LG, who would not accept the unwished-for possibility of a fight ...' Figure 1 . Figure 1.Relations between senses A and B Risk (B).B examples are those where the risk is specified as the object of risk and the subject is what/who is being exposed to it.Some examples are: (5) Mobile phone users risk bad health after a mere two minutes of use.(Corpus: sunnow/17.Text: N9119980517.)(6) Thieves don't want to risk being seen.(Corpus: ukephem/02.Text: E0000001244.) For once, John Foster Dulles was not willing to risk a confrontation, and the non-Communist world stood by as the Hungarians were brutally forced back into the Soviet camp.(Corpus:usbooks/09.Text: B9000001429.) (A): 'The American Secretary of State John Foster Dulles was not willing to stage a confrontation, which might have disastrous consequences.'(B):'Dulleswasnot willing to take steps that might result in a disastrous confrontation.'(12)But it is clear that Mr Assad is unwilling to risk meeting Shimon Peres, the Israeli Prime Minister, before a deal on ending the state of war between Israel and Syria.(Corpus:times/10.Text: N2000960312.)(A):'Assad is afraid his meeting Peres before the deal would have harmful consequences.'(B): 'Assad is afraid he might happen to meet Peres before the deal.' Table 1 . Distribution of the verb risk over semantic classesIt seems, then, that if either of the two interpretations is to function as default, it is the B one, with 59.1% as against 38.5% for A. Intuitively it may also seem more natural that you should risk something unpleasant or dangerous than something valuable.Another avenue of interpretation becomes relevant when again either A or B can apply and it is likely that neither speaker/writer nor hearer/ reader bothers to make a distinction between them.Consider (15):
4,316.2
2007-05-11T00:00:00.000
[ "Linguistics" ]
Distinctive substantial self-knowledge and the possibility of self-improvement

Quassim Cassam distinguishes between trivial and substantial cases of self-knowledge. At first sight, trivial cases are epistemically distinctive insofar as the agent needn't provide any sort of evidence to ground her claim to knowledge. Substantial cases of self-knowledge such as 'I know I want to have a second child' do not seem to bear this distinctive relation to evidence. I will argue, however, that substantial cases of self-knowledge are often epistemically distinctive and, to this end, I will challenge a crucial assumption in the current debate about self-knowledge, namely: if a piece of self-knowledge is based on evidence, it must have been delivered by a detached, theoretical attitude toward oneself (The Detachment Assumption). My case against the Detachment Assumption combines a negative and a positive programme. Regarding the negative aspect, I will first present Cassam's case for Inferentialism; second, I will argue that this view about self-knowledge is at odds with the sort of self-improvement that he vindicates in his analysis of epistemic vices; and, third, I will conclude that only by allowing for an engaged relation to evidence can we make sense of that sort of self-improvement. Regarding the positive programme, I will first examine the sensitivity to the music that is specific to a graceful dancer and, on this basis, outline an attitude toward oneself that, despite involving evidence, is not detached or theoretical but engaged, so that it gives rise to a kind of substantial self-knowledge that is both transformative and epistemically distinctive.

Unlike what is customary in the current debate about self-knowledge, in Self-knowledge for Humans (Cassam, 2014), Quassim Cassam distinguishes between trivial and substantial cases of self-knowledge and explores why the current debate is almost exclusively concerned with trivial cases such as 'I believe there is yoghurt in the fridge' or 'I intend to go for a run at 9 am'. At first sight, such cases are epistemically distinctive insofar as the agent needn't provide any sort of evidence in order to ground her claim to knowledge; in fact, her self-ascriptions are all the more authoritative precisely because no evidence seems to be required. This certainly goes against standard expectations concerning claims to knowledge, where one's authority is proportional to the available evidence. Substantial cases of self-knowledge such as 'I know I want to have a second child' or 'I know I want to migrate to Europe' do not seem to bear this distinctive relation to evidence. There is plenty of room in such cases for self-ignorance and self-deception and, consequently, the agent's authority is surely proportional to the available evidence. My fundamental purpose in this paper is to argue that, contrary to what is customarily assumed, some central cases of substantial self-knowledge are epistemically distinctive and, therefore, that they involve an epistemic asymmetry between the first-person and the third-person perspectives. To this end, I grant that such cases of self-knowledge are based on evidence, but I will challenge what I regard as a crucial assumption in the current debate about self-knowledge, namely: if a piece of self-knowledge is based on evidence, it must have been delivered by a detached, theoretical attitude towards oneself (The Detachment Assumption).
By this means, I will make room for a sort of attitude towards oneself that is based on evidence and still distinctively first-personal insofar as it is also engaged. My case against the Detachment Assumption combines a negative and a positive programme, but, more specifically, the structure of the paper is as follows. In Sect. 1, I will present two common approaches to self-knowledge that, as we will see, are ultimately committed to the Detachment Assumption, namely, Rationalism and Inferentialism. Rationalist views associate the distinctiveness of self-knowledge with our capacity to deliberate about the world. 1 They stress, for instance, that one's reasons to claim that one believes that p are transparent to one's reasons to claim that p: that is, whatever reason one may have to claim that p will also count as a reason to claim that one believes that p. To use Cassam's phrase, we can refer to this means to self-knowledge as the Transparency Method (TM). Rationalists expect TM to go beyond the case of knowing one's own beliefs and include several other mental states, such as desires and intentions. Even though TM may not apply to all mental states, Rationalists will stress that TM delivers a kind of self-knowledge that is normal, and even indispensable. Cassam objects to TM and defends instead an Inferentialist view, that is, a view according to which normal, basic cases of self-knowledge are based on, or justified by, inferences concerning evidence about oneself. Inferentialism does not exclude the existence of cases of self-knowledge that are non-inferential, but regards such cases as exceptional. Cassam's case for Inferentialism relies not only on the implausibility of the available alternatives, especially Rationalism, but on his defence of Inferentialism from some crucial objections. In this paper, I will examine two. I will take advantage of Cassam's response to the first of these objections, which doubts the distinctiveness of one's inferential access to one's own mental states, to highlight his commitment to the Detachment Assumption, namely, to the idea that an inferential access to oneself is as detached as that of a third party. In connection with the second objection, concerning alienation from one's own mental states as a result of having an inferential access to them, in Sect. 2 I will present Cassam's defence of the possibility of self-improvement regarding one's epistemic vices. He is convinced that, in some recalcitrant cases, one's ability to detect one's own epistemic vices will not suffice to eliminate them and that, in such cases, only exposure to certain situations or engagement with some specific practices will help. From an Inferentialist perspective, it is rather mysterious how this benefit could be obtained, that is, what kind of awareness or sensitivity may come with such exposures and practices that could eventually have a transformative effect. More specifically, I will argue that the Detachment Assumption stands in the way of our ability to make sense of the kind of self-improvement that Cassam commends or, in other words, that we must allow for a sort of relation to evidence that is not detached but engaged if we are to make sense of the fact that transformative experiences are associated, as Cassam himself suggests, with the idea of insight or revelation. Here is where the negative programme ends.
Regarding the positive programme, I will examine the sort of sensitivity to the music and to one's own body that is specific to a graceful dancer as opposed to that of an unimaginative one. Based on this contrast, I will outline an attitude towards oneself that, despite involving evidence, is not detached or theoretical but engaged; moreover, I will suggest that this attitude, which I call 'receptive passivity', serves to identify the kind of self-awareness that renders epistemic self-improvement intelligible when mere inferential awareness of one's vice does not suffice. I will finally argue that receptive passivity provides a kind of substantial self-knowledge that is both distinctively first-personal and transformative.

Cassam's case for inferentialism

Cassam is perplexed that the current debate about self-knowledge is almost exclusively concerned with trivial cases, leaving aside substantial cases that are so important to our lives. The reason seems to be that trivial cases are epistemically distinctive while substantial ones are not. Cassam will argue, though, that trivial cases of self-knowledge are not so distinctive, while I will proceed the other way around, namely: I will defend the view that some central cases of substantial self-knowledge are distinctively first-personal, although, for this purpose, we would have to renounce the Detachment Assumption. But, prior to this, let us briefly consider how Cassam identifies substantial cases of self-knowledge.

Trivial and substantial cases of self-knowledge

Cassam lists a few features of substantiality that are almost exclusively epistemic. They have to do with fallibility, evidence, cognitive effort and so on. Only one feature on his list refers directly to the idea of practical significance, namely, the Value Condition: "substantial self-knowledge matters in a practical or even in a moral sense." (Cassam, 2014, p. 31). A further feature he mentions is connected with practical or existential concerns through the idea of identity or self-conception: "… this kind of [substantial] self-knowledge often 'tangles with' a person's self-conception." (Cassam, 2014, p. 30). We can then distinguish two aspects of substantiality regarding self-knowledge: an epistemic and an existential aspect. The epistemic aspect has to do with the idea that such pieces of self-knowledge are epistemically demanding in various respects, especially concerning the gathering of evidence and the ability to let one's deliberations be guided by it; the existential aspect refers, by contrast, to situations that are recognised – in a way still to be elaborated – as important to us, that is, as connected to things that matter. What I intend to defend is the view that existentially substantial cases of self-knowledge are not only epistemically substantial but, in some crucial cases, distinctively first-personal too. In what follows I will qualify my use of 'substantial' only exceptionally, since the emphasis on the epistemic or existential aspect of substantiality will usually be inferred from the context. According to Cassam, Inferentialism derives a significant portion of its strength from the implausibility of the available alternatives (Cassam, 2014, pp. 141ff). Regarding these alternatives, he focuses mainly on Rationalist approaches to self-knowledge. Let me briefly motivate these approaches and then present Cassam's case against them.
Rationalist approaches to self-knowledge

There are two ways, according to Richard Moran, in which the declaration 'I don't know how I feel about that' can be approached. It can be construed either as raising a theoretical question regarding one's actual feelings ('I don't know what it is that I do feel') or as expressing a practical concern ('I don't know what to feel about that') (Moran, 2001, p. 58). The theoretical question presents one's feelings as mere inner happenings that one must discover, whereas the practical concern involves a preoccupation with the appropriateness of one's feelings and the need to transform them if required. This practical concern involves, as we see, a deliberative attitude that does not reduce to the formation of a normative judgement but includes the corresponding commitment to transform one's feelings or, in general, one's psychological condition (Moran, 2001, p. 59). Two attitudes towards oneself are thus discerned, namely, one theoretical and the other deliberative:

In characterizing two sorts of questions, one may direct toward one's state of mind, the term 'deliberative' is best seen at this point in contrast to 'theoretical,' the primary point being to mark the difference between that inquiry which terminates in a true description of my state, and one which terminates in the formation or endorsement of an attitude. (Moran, 2001, p. 63)

The theoretical attitude involves, according to Moran, a detached view about oneself. One regards one's feelings and actions as a passing show and, like a third party, one must observe one's inner events and external behaviour to know them. The deliberative attitude comes instead with the idea of a commitment as a result of one's sensitivity to reasons. Moran articulates the contrast between these two attitudes in terms of the Transparency Condition. Even though this condition can be applied to various sorts of psychological states and attitudes, belief is the simplest case (Moran, 2001, p. 65). He thus begins with the question: (B) 'Do I believe that P?' and examines how it relates to a question about the world itself: (T) 'Is P true?' Despite the disparity of these two questions, it would sound quite weird that an agent might answer them differently, namely, that a yes to one of them would come with a no to the other. But such harmony in the answers to these questions seems to derive from the fact that one's answer to (B) is the product of a deliberation about the appropriate answer to (T), rather than the outcome of a detailed observation of one's inner states and events. In other words, to answer (B) one must inspect those aspects of the world that might be relevant to determine whether P is true. It follows that (B), thus construed, is transparent to (T) insofar as the agent must answer (B) "... by reference to (or consideration of) the same reasons that would justify an answer to the corresponding question about the world" (Moran, 2001, p. 62; see Dunn, 2006, pp. 38-43; Edgley, 1969, pp. 89ff; Evans, 1982, p. 225; Gallois, 2008, ch. 3; and Hampshire, 1957, ch. 3-4). Rationalist approaches tend to generalise over the belief case and present the Transparency Method (TM), to use Cassam's phrase, as the fundamental source of strictly first-personal self-knowledge. The most plausible formulation of TM goes, according to Cassam, as follows:

... the question of whether I believe that P is, for me, transparent to the question of what I ought rationally to believe. (Finkelstein, 2012, p. 103) 2
Cassam (2014) raises three main objections against this view, namely: the Generality Problem, the Substitution Problem, and the Matching Problem. 3 Let us explore them in some detail, given that they are not only central to Cassam's case for Inferentialism but serve to qualify Rationalism as well as to highlight how both approaches rely on the Detachment Assumption.

Cassam's case against rationalism

The Generality Problem has to do with the fact that TM may apply to beliefs but hardly to other mental states such as desires and hopes (Cassam, 2014, p. 103). Moran could certainly reply that desires or feelings might also be known by means of TM insofar as we might accept, like he himself, that there is a matter of fact as to what is worth desiring or what is appropriate to feel on a certain occasion. Regarding hopes, it is harder to defend TM, though, for it is unclear why there should always be an appropriate answer to the question as to whether one ought rationally to hope that P. But, as Cassam highlights, this concern could also apply to beliefs, desires and feelings, since it is quite often undetermined whether one ought rationally to have a certain belief, desire or feeling: "It's not just that it can be very hard to know which attitude one ought rationally to have but that in many cases there is no such thing as the attitude 'the reasons' require one to have." (Cassam, 2014, p. 104) In other words, we may say that, even though Rationalism might account for one's knowledge of those mental states that one ought rationally to have, there are many other cases of self-knowledge, both trivial and substantial, that hardly meet this constraint and, as a result, the Generality Problem seems to be confirmed. The Substitution Problem has to do with the fact that, on many occasions, it is easier for an agent to know, say, what she actually desires than whether she ought rationally to have a particular desire: "I have no idea what I ought rationally to want to drink, and if that is the case then why would I think that figuring out whether I ought to want to have a vodka martini is a good way of figuring out whether a vodka martini is what I want?" (Cassam, 2014, p. 104) Hence, TM – and, in general, any transparency condition that may involve some sort of deliberation (Boyle, 2015) – deprives us of the immediacy of trivial cases of self-knowledge that Rationalism is supposed to account for. Finally, we come to the Matching Problem, which hinges on the fact that quite often what an agent is convinced she ought rationally to believe fails to match her actual beliefs, since beliefs do have a dispositional component that may resist the agent's judgment to the contrary (Cassam, 2014, pp. 112-8). A certain agent may be afraid of flying and, nevertheless, acknowledge that it is the safest means to travel, or be frightened at the sight of a certain insect she judges entirely harmless. As we see, the transition from judging to believing is far from trivial, since the latter includes a dispositional pattern that goes beyond the fact that an agent has made a certain judgment. As Richard Moran puts it:

... It is hard to see how anything remotely like our concept of belief could fail to play a sort of dual role: as explanatory of behavior and as bearer of truth values.
(Moran, 2001, p. 129)

This dual role makes room for situations where there is a mismatch between what the agent sincerely claims to believe because of her deliberation about what she has most reason to believe and what her psychological dispositions (both linguistic and otherwise) may reveal she is believing. In such cases, (B) could be raised as a theoretical question: (B*) Is 'believing P' among my psychological dispositions? Transparency of (B*) to (T) would require that an agent's psychological dispositions should be sensitive to the same reasons that would favour one or another answer to (T). But quite often such transparency is not a trivial matter for the agent but a remarkable achievement (Moran, 2001, p. 67).

Normal cases of self-knowledge

Rationalist approaches could certainly reply to all three objections by reducing the scope of their approach to some specific cases of self-knowledge. This strategy might preserve the relevance of TM if combined with the idea that such cases are in a relevant sense fundamental. They might thus argue that knowledge of one's mental states by TM constitutes the basic or normal case while other cases should be regarded as deviant or exceptional – the normality claim; alternatively, one could say that this kind of self-knowledge, even though it is not basic or normal, is at least indispensable insofar as it is presupposed by all other sorts of self-knowledge – the indispensability claim. Thus, Boyle (2009) claims that, even though TM cannot plausibly account for all sorts of self-knowledge, this kind of self-knowledge is fundamental insofar as it grasps a capability that any other variety of self-knowledge presupposes:

If this is right, then we are in a position to say why the kind of self-knowledge that Moran characterizes is fundamental. It is fundamental because the ability to say what one believes in the way that Moran specifies is intimately connected with the kinds of representational abilities that must be possessed by a subject who can make comprehending assertions, and a subject who lacks these sorts of abilities cannot be a self-representer, in the sense we specified, at all. (Boyle, 2009, p. 151, see p. 156)

... knowledge of our own beliefs, desires, hopes, and other 'intentional' states is first and foremost a form of inferential self-knowledge' (Cassam, 2014: 137; my italics). It may be relevant to stress, as Cassam (2017) does, that Inferentialism is a view in epistemology and, therefore, that the claim for inferential knowledge is a normative one, namely, that normal cases of self-knowledge are inferential just because they must be based on, or justified by, some inferential process. This view must be distinguished from an empirical claim according to which normal cases of self-knowledge actually involve a psychological process of inference (Cassam, 2017, pp. 727-8; Cassam, 2014, pp. 145-6, see p. 124). Hence, Inferentialism may be true even though the subject has no experience of drawing an inference or no psychological mechanism can be individuated to this effect. We can then state the central Inferentialist claim about self-knowledge like this: normal cases of self-knowledge must be based on, or justified by, inferences concerning evidence about oneself.

Two objections against inferentialism rebutted

To elaborate his case for Inferentialism, Cassam examines in some detail four objections that could be raised against it.
I will confine myself, however, to the only two that are relevant to my line of argument, namely: the distinctiveness objection, that is, that Inferentialism can hardly honour the distinctiveness of self-knowledge, namely, the essential epistemic asymmetry between the first- and the third-person perspectives; and the alienation objection, that is, that Inferentialism involves an alienated view of one's psychological condition that is incompatible with our agency. Regarding the distinctiveness objection, Cassam develops a strategy that seeks to combine accommodation and denial (Cassam, 2014, p. 149; see Cassam, 2017). On the accommodation side, he argues that, after all, a third party does not have access to the same kinds of evidence about one's own mental states as oneself and, in this respect, there is an asymmetry between the first- and the third-person perspectives (Cassam, 2014, p. 149). Rationalists might reply, though, that this is not the sort of asymmetry they are interested in. The fact that there may be a difference between the kinds of evidence that can respectively be accessed from each perspective does not render self-knowledge epistemically distinctive. In response to this, Cassam turns to denial and calls into question the distinctiveness of the first-person perspective as a datum. There is no distinctive access to one's own mental states. The agent herself and a third party bear the same relation R to the available evidence; only the kinds of evidence available may differ in each case. Yet this relation R is precisely that of the detached observer and, therefore, Cassam's line of reply to the first objection seems to take the Detachment Assumption for granted. In Sect. 3, I will challenge this assumption and defend the distinctiveness of existentially substantial self-knowledge in terms of a relation R* to evidence, but let us move on now to the issue of alienation. The Alienation Problem runs as follows: "… the final objection to Inferentialism says that inferential self-knowledge is alienated rather than ordinary self-knowledge, and that Inferentialism doesn't account for ordinary, 'unalienated' self-knowledge." (Cassam, 2014, p. 157) Cassam replies that, for any given attitude that an agent may come to know inferentially, it does not follow that she is not identified with it:

Just because inferential or attributional self-knowledge can be of attitudes you don't identify with and can't endorse, it doesn't follow that this kind of self-knowledge has to be alienated in these ways. The mere fact that you self-ascribe an attitude on inferential grounds doesn't make the attitude alienated or impervious to reason. (Cassam, 2014, p. 157)

Moreover, Cassam insists that one's identification with a certain desire or attitude may survive one's deliberation to the contrary (Cassam, 2014, p. 157) and, consequently, that one may feel alienated from the conclusions of one's deliberation (Cassam, 2014, p. 158). This suggests that Inferentialism is at least on a par with Rationalism regarding alienation. I doubt, however, that Inferentialism could properly account for the experience of identification. I agree with Cassam that identification takes more than just reaching a conclusion considering the kind of deliberative attitude that Rationalists contemplate, since an agent's dispositions may not be sufficiently permeable to her decisions or intentions.
I am thus happy to grant that knowledge of what projects and attitudes one is identified with requires the contribution of evidence and, in this respect, I agree with Inferentialism. What I will dispute is that an agent's relation to those attitudes and projects she is identified with could be reduced to the deliverances of a detached, theoretical attitude (Boyle, 2015). Of course, a third party might detachedly discover what attitudes or desires a certain agent may feel identified with, and even the agent herself may begin to discern a given identification of hers by this inferential means. The question I want to raise is, however, whether this sort of discovery is all there is from the first-person perspective or, in other words, whether we should rather regard as weird or insane someone who were not able to access those attitudes of hers she identifies with from a different perspective. In the next section, I will consider this question while examining the sort of self-improvement concerning one's epistemic vices that, in Vices of the Mind (Cassam, 2019), Cassam vindicates as commendable and, therefore, as possible. I will suggest that the Detachment Assumption prevents us from understanding the kind of sensitivity that is involved in this sort of self-improvement. I will thus conclude that Cassam must make room for a kind of attitude that is both engaged and sensitive to evidence if we are to make sense not only of the idea of identification but of the sort of self-transformation that he promotes.

Three kinds of control: voluntary, evaluative, and managerial

In Vices of the Mind, Cassam explores several epistemic vices concerning our attitudes and traits of character. In the final chapters of the book, he addresses the issue of one's moral responsibility for such vices and also that of the means for self-improvement. He associates moral responsibility with the capacity to control and, in this respect, he distinguishes three kinds of control: voluntary, evaluative and managerial. Obviously, I cannot change my beliefs at will, that is, in the way I may decide to raise my arm and, as a result, raise it. In general, we could say that our epistemic attitudes – and, therefore, our epistemic vices – are not under direct voluntary control (Cassam, 2019, p. 126). We usually possess, though, evaluative control over our beliefs: "[We] control our beliefs by evaluating (and reevaluating) what is true. Call this 'evaluative' control." (Hieronymi, 2006: 53). This kind of control lies behind TM and seems to be connected with the fact that beliefs – and, in general, epistemic attitudes – must aim at tracking the world for them to count as beliefs at all. Besides these two types of control, Cassam, following Pamela Hieronymi, introduces a third kind of control, namely, managerial control:

One might combat one's arrogance by exposing oneself to superior intellects or one's closed-mindedness by forcing oneself to think long and hard about the merits of opinions with which one disagrees. Such self-improvement strategies are not guaranteed to succeed but nor are they guaranteed to fail. If they succeed they are examples of a person exercising managerial control over their vices. They see their character as something that can be reshaped, at least to some extent. Unlike managerial control over ordinary objects, managerial control over our own character vices is indirect. The layout of the furniture in my office can be changed by moving bits of furniture.
Changing one's character or a particular character trait is a matter of doing other things that hopefully have the desired effect. (Cassam, 2019, p. 130; see p. 127)

According to Cassam, a first step in the attempt to get rid of one's epistemic vices, or at least to attenuate them, is to become aware of their existence, which is not always an easy task, especially with regard to stealthy vices such as arrogance or closed-mindedness, insofar as they themselves diminish or undermine one's capacity to become aware of them; after all, some degree of open-mindedness is required to acknowledge one's closed-mindedness and, similarly, for one's ability to spot one's own arrogance. In any event, Cassam assumes that detecting one's epistemic vices is a first step towards overcoming them, but often just a first. For, even though one may become theoretically or inferentially aware of some epistemically vicious attitudes or character traits, they will typically keep on distorting the way one may assess the available evidence. Some further step is needed if one is to get rid of – or at least to attenuate – the impact of one's epistemic vices. What might this further step consist of?

Traumatic experiences and self-transformation

Cassam insists that one may try first to be more careful in the process of gathering evidence and make every effort to compensate for what one may have identified as one's vicious tendency to gullibility, to arrogance, and so on. This is, indeed, an effort of the will that could produce some immediate results on a particular occasion and eventually soften the corresponding epistemic vice. This softening would amount to a transformation whose grounds are, nevertheless, still to be elucidated, since it is unclear how mere inference, interpreted on the basis of the Detachment Assumption, could make sense of this transformative effect. Secondly, one might try to improve one's epistemic attitude by exposing oneself to certain situations in the hope that by this means one's sensitivity will gradually be altered. Traumatic experiences are a particular case of transformation by exposure that Cassam examines in some detail. By 'traumatic experience' Cassam understands "a sudden, unexpected potentially painful event" that "ruptures part of our way of being or deeply held understanding of the world" (Rushmer & Davies, 2004, p. ii14). He agrees that "talk of seeing oneself in a new light is appropriate because what the traumatic experience produces is a certain kind of insight" (Cassam, 2019, p. 161, my emphasis). Still, there is no guarantee that traumatic experiences are genuinely transformative, for "impressions require interpretation, and interpreting one's impressions or traumatic experiences is the work of the intellect. So, it seems that there is no getting away from the potential impact of stealthy vices: they can get in the way of self-knowledge by traumatic experience, by causing us to misinterpret these experiences or misunderstand their significance." (Cassam, 2019, p. 164). 5 We must reflect, however, on the conditions under which a particular traumatic experience may be individuated as transformative. It seems, to begin with, that this traumatic experience must carry within it some epistemic element, since it is claimed to produce or favour a transformation in one's epistemic attitudes and character as a result of a certain kind of insight.
But can we make sense of this transformation if our sensitivity to evidence is necessarily the product of a detached attitude and, therefore, motivationally inert? Didn't we conclude that the deliverances of such an attitude are at most a first step in the process of transformation and that, therefore, a further step is required? All this suggests that the sort of relation R* that an agent bears to evidence whenever a traumatic experience has a transformative effect must differ from the kind of relation R that she had in the first place, since the deliverances of R were assumed to be motivationally inert. To put it another way, if there is room for traumatic experiences to be transformative – even though quite often they are not, because epistemic vices, like all vices, die hard – then R* must be conceived of both as sensitive to evidence (otherwise the fact that such experiences are insightful or illuminating would be either mysterious or accidental) and as engaged (as the motivational inertness of the first step must be overcome). Acknowledging the indispensability of R* to account for the possibility of self-transformation implies the rejection of the Detachment Assumption, since R* provides a kind of access to evidence that is constitutively engaged. Moreover, the kind of self-knowledge that R* provides is not only manifestly substantial but distinctively first-personal too, since R* is crucially distinct from the sort of relation R that a third party is assumed to have. After all, R combines gathering of evidence with the sort of detachment philosophers tend to associate with a third-party perspective, while R* points to an engaged sensitivity to the situation. 6 In the next section, I will further motivate R* by exploring some fundamental experiences where this kind of sensitivity seems to be required.

5 See Paul (2014) for some epistemic paradoxes that decisions concerning transformative experience may generate. Such paradoxes derive from the fact that the preferences and values of the self that inspire a certain choice may significantly differ from those of the transformed self, that is, the self that emerges from undergoing a particular transformative experience. There is, first, the difficulty of ascertaining what it would be like to have such an experience for the transformed self and, second, the problem of determining whose set of preferences and values should prevail, that is, those of the pre-choice self or those of the transformed self. These are most pressing questions because, as Paul highlights, they reveal the epistemic situation we confront when dealing with the most significant decisions in our life, such as the decision to have a second child.

Receptive passivity, self-knowledge, and self-transformation

The epistemically vicious agent may sincerely accept, in light of evidence, that her deliberative capacities are hampered by some epistemic vices and still be unable to prevent such vices from distorting the way she perceives a situation or how she deliberates about it. Hence, the issue must be addressed as to what else she can do to improve her epistemic capabilities or, more specifically, what alternative sort of self-awareness may contribute to self-transformation. This is the issue I will approach in the present section. More specifically, in Sect.
3.1 I will introduce the notion of receptivity or 'being in tune with' in light of the experience of the graceful dancer as opposed to that of an unimaginative one. 7 This receptivity will, in turn, be elucidated in terms of a kind of imposition that has to do with the way the conclusion of a mathematical proof imposes itself upon the agent who understands it. Imposition, in turn, comes hand in hand with the idea of passivity. In Sect. 3.2, I will distinguish two sorts of passivity, namely: base and receptive passivity. The former relies on a divided conception of the self and regards an agent's passions as essentially base insofar as they are viewed as constitutively alienated from her true self, whereas the sort of passivity I will associate with imposition departs from this conception of the self and is anchored to a sense of one's agency that favours a certain kind of integration. Once the concept of receptive passivity has been thus elucidated, I will use it in Sects. 3.3 and 3.4 to shed some light on recalcitrant cases of epistemic vice and to account for the epistemic distinctiveness of some central cases of substantial self-knowledge.

6 Some may doubt the intelligibility of R*, partly because one should then admit that some psychological states or attitudes have a dual direction of fit, something which sounds absurd to many. This is a vexed issue that cannot be discussed here at length. See Dunn (2006), Frost (2014), Little (1997), and Zangwill (2008) for a challenge to the claim that the very idea of a mental state with dual direction of fit is incoherent. For further discussion, see Anscombe (1963), Gregory (2012), Humberstone (1992), Schueler (1995), and Smith (1994). Let me highlight, however, how the claim of unintelligibility may be partly motivated by some metaphysical assumptions closely connected to the Detachment Assumption, and therefore my case against the latter will also pose a problem for such assumptions. To begin with, I should mention Humean accounts of motivation according to which every action is to be explained by a proper combination of beliefs, which have a mind-to-world direction of fit, and desires, whose direction of fit is world-to-mind. No room is left then for a mental state or attitude that might contribute to explaining an action as the result of both providing a certain view of the world, as beliefs are meant to do, and having some motivational import, which desires constitutively possess. Humean accounts presuppose a divided conception of the self that Kantian approaches to morality come to confirm and that, if I am right, the current debate about self-knowledge takes for granted as well. According to this conception, the self is divided into two parts: a rational, deliberative part that is concerned with how the world is and how one should respond to it; and a dispositional part that motivates one to act in a certain way. The rational part is thus assumed to have only a mind-to-world direction of fit and the dispositional part a world-to-mind direction of fit and, consequently, no room is left for mental states or attitudes with a dual direction of fit. To sum up, I could say that the reluctance to accept the intelligibility of mental states with a dual direction of fit may partly derive from a commitment to the divided conception of the self. My line of argument in this paper can be regarded, however, as a challenge to this conception of the self, since the Detachment Assumption is just the epistemic correlate of this metaphysics of the self; to put it another way, if the structure of the self is as the divided conception claims it to be, then the Detachment Assumption must be granted. Hence, a case against the Detachment Assumption is, by modus tollens, also a case against such a conception of the self. More specifically, I have argued that the Detachment Assumption – and, therefore, the divided conception of the self – cannot account for a subject's capacity for self-transformation. It follows that only by renouncing the divided conception of the self, and therefore accepting that some mental states and attitudes could have a dual direction of fit, can we make sense of this capacity for self-transformation. See Sect. 3.2 for further discussion.

Receptivity and imposition

The dancer perceives an order in a piece of music which she seeks to express in her dancing, in the way her body moves, but how does this transition from the music to the dancing body take place? We may first consider the case of an unimaginative dancer. There are more ways than one in which a dancer may be unimaginative, but I will confine myself to a way of being dull and unimaginative that is closely connected to the Rationalist view about self-knowledge, namely: the dancer whose bodily movements are guided by a set of principles or rules. We may say, by contrast, that the graceful dancer has a certain experience of her body as she pays attention to the music and, as a result, she moves her body in a particular way. Attention to the music, but also to the emotions and bodily experiences that she senses as deriving from the music, are essential to her gracefulness, to her ability to dance the music beautifully. 8 Her experiences and movements will thus be finely and creatively in tune with the music. The unimaginative dancer, on her side, will also have some bodily experiences, and some emotions will surely accompany her performance; the worry is rather that she will be connected to the music in a rather stereotypical and rigid manner. 9 To be receptive to the music, the graceful dancer must also be receptive to her own emotional and bodily responses as being (or failing to be) finely in tune with it. This receptivity will ultimately show in her capacity to let her bodily movements be inspired by such experiences and the order thereby recognised in the music. In general, we can say that the notion of gracefulness involves, as we see, a certain sort of receptivity. To elaborate on this kind of receptivity, we may go back to the idea of evaluative control, as Cassam presents it, and the concept of imposition that comes with it. We do not have voluntary control over our views; there is no way in which one could change one's own beliefs at will. They tend instead to vary if some new evidence becomes available. 10 We should rather say that one is forced to change one's beliefs to track the newly available evidence or, in other words, that the newly available evidence imposes a certain change in one's beliefs. I am, for instance, forced to believe that my fingertips are on my keyboard as I type this sentence. I cannot intelligibly choose not to believe it because I see my fingertips in contact with the keys. This is what evaluative control amounts to: one's views are controlled by the evidence one may eventually access. Something similar happens with the conclusion of a mathematical proof when the proof is understood.
The conclusion imposes itself, but it is not a kind of imposition that degrades the self; on the contrary, one's agency is enhanced or enriched by accepting a theorem as the outcome of a mathematical proof. The same line of argument applies to several other social practices and institutions such as engaging in a meaningful conversation, cultivating a friendship, or some central experiences of parenthood. This reveals to what extent this sort of imposition plays a central role in our lives and cannot be discarded as marginal or ancillary. In all these cases, an order, a sort of necessity, is imposed upon the agent, but that imposition, far from oppressing or enslaving her, contributes to her expansion and flourishing. This sort of imposition involves, as we see, a sensitivity or receptivity to the way things are arranged out there in the world. We can thus conclude that the imposition of an order upon the agent and her receptivity to this order are two sides of the same coin. But imposition and receptivity point, in turn, to the idea of passivity. What sort of passivity is this?

9 […] contrasted 'graceful' with 'graceless' or 'clumsy' but I wanted (a) to preserve some continuity with John Ruskin's remark and (b) to use a less derogatory word than 'graceless', since the unimaginative dancer can still be a competent dancer.

10 There is, indeed, room for recalcitrant beliefs, that is, beliefs whose dispositional component persists despite the agent's acceptance in light of evidence of the contrary belief. There are also cases where one's epistemic vices prevent a proper assessment of the available evidence. My point is, however, that such cases are to be construed as exceptional or deviant and, therefore, that in paradigmatic cases agents have evaluative control over their beliefs.

Base vs. receptive passivity

We must distinguish between base and receptive passivity. 11 Base passivity is the standard notion of passivity, namely, the one concerned with an agent's yielding to the power of passions (and, therefore, that of her vices, epistemic or otherwise) to the detriment of reason. This kind of passivity is base insofar as indulging in it is assumed to degrade the self. The locus of this kind of passivity is a conception of the self as divided between those aspects of it which the self truly identifies with and those others that are alienated from it. Within this approach, all passions may be regarded as essentially base inasmuch as they constitutively belong to the alienated parts of the self. 12 No matter whether any such passion may eventually coincide with the dictates of reason, this will be only accidentally so, for there is always the chance that any given passion might lead us away from reason. For my purposes, I can just say that, within this framework, passions constitute a system of forces that the true self must make every effort to keep under control, and base passivity describes the eventual incapacity of the self to resist such forces. Receptive passivity, on the contrary, is at odds with the divided conception of the self. To substantiate this claim, we may turn again to the graceful dancer and consider the following question: what is the graceful dancer's true self? It is certainly hard to imagine what her true self might consist of, what parts of herself should be detached or alienated in her dancing. It seems instead that dancing gracefully has to do with the articulation of music, bodily experiences, emotions, decisions, and actions.
As we see, the point is not detachment but what we may call agential articulation, where 'agential' alludes both to what is articulated and to one's own contribution to this process of articulation; and, if I am right, this contribution has more to do with a certain kind of passivity, of letting oneself go once a particular kind of gestalt has been formed, than with the notion of activity associated with the effort of the will or with the formation of an intention. 13 After all, agential articulation could hardly be reached if the dancer adopted a detached attitude towards her own emotions and bodily experiences, for such an attitude presupposes a divided conception of the self, so that one's emotions and bodily experiences will merely be observed as a passing show. 14 To put it another way, the point of receptive passivity is not to vindicate that our emotions and bodily experiences belong to our true self but rather to call into question the divided conception of the self, since this notion of passivity only makes sense within a framework where the clear-cut contrast between the true self and its alienated parts has been abandoned. Once we give up the divided conception of the self, we may still regard some passions as base, but we can no longer dismiss all passions as such. The fact that we may identify a certain passion as base will partly depend on its role in the agent's outlook about what is worth pursuing. Moreover, receptive passivity may favour not so much a pure repression of that base passion as the search for a more appropriate expression of the needs lying behind it (Williams, 2002, ch. 10; Corbí, 2012, ch. 6; Corbí, 2017). We can thus understand how receptive passivity may contribute to agential articulation and, more specifically, to attenuating one's epistemic vices. Recalcitrant cases of epistemic vice have to do, as we have seen, with the mismatch between the agent's epistemic dispositions and the kind of epistemic attitude that the agent may judge appropriate. Receptive passivity is a kind of awareness that goes precisely in the direction of unifying these conflicting elements by, among other things, endowing one's bodily responses and experiences (including, hence, one's dispositions) with prima facie authority. Each experience, even those that have to do with one's epistemic vices, must be approached as expressing a need to be discerned and met rather than thoroughly dismissed.

Some might reply, however, that Inferentialism can make sense of the kind of self-examination that the graceful dancer illustrates, so that receptive passivity may ultimately be accommodated within an inferentialist framework. From this perspective, Krista Lawlor stresses, in dispute with Moran, "… that inference from internal promptings is a routine means by which we know what we want" (Lawlor, 2009, p. 48; see Cassam, 2014), which ought to be distinguished from knowing what to want. Among such internal promptings, Lawlor includes simple sensations but also imaged natural language sentences and visual images that may come up inadvertently or be deliberately prompted by the agent herself. Such promptings would require a causal interpretation to determine what desires lie behind them. Lawlor considers the case of a woman, Katherine, who catches herself imagining, remembering, and feeling a range of things connected to the idea of having a second child. She may thus be interested in answering the question 'Do I want a second child?' Lawlor argues that the way to proceed is to infer the cause of such promptings, that is, to infer whether their cause is or is not a desire to have a second child. If I mention this sort of self-examination, it is because one might think that what I call receptive passivity could reduce to this and, therefore, that my line of reasoning poses no problem for Inferentialism. Lawlor's focus on internal promptings points to a crucial phenomenon (Boyle, 2015, p. 344) whose proper significance I seek to elucidate in terms of the notion of receptive passivity; after all, such internal promptings should relevantly figure among the objects of the kind of attention I have identified as receptive passivity.

11 'Base' is primarily an evaluative term whereas 'receptive passivity' comes also with a descriptive element, and this asymmetry may sound problematic. The distinction between thin and thick concepts may be of some use here. I would say that 'base' is a thin concept that comes with a negative evaluative import, whereas 'receptive passivity' is a thick one, insofar as it involves both a descriptive and an evaluative component. It is the thickness of the latter term that makes it suitable to account for a kind of attitude towards oneself that departs from the Detachment Assumption, which, as we have seen, presupposes that mental states can have only one or another direction of fit.

12 Some may find the contrast between reason and passions inadequate insofar as it does not consider the subtleties of the current debate on the nature of emotions and, in general, of affective states. Still, my purpose here is not to elucidate the nature of affective states, which is clearly beyond the scope of this paper, but only to sketch a notion of passivity associated with the divided conception of the self (and, therefore, with the Detachment Assumption) that may be interfering with a proper understanding of the notion of receptive passivity that, in turn, I regard as indispensable to make sense of the kind of self-knowledge that might have a transformative effect. See Korsgaard (1996), Blackburn (1998), and Dunn (1996) for the combat between reason and passion that I have in mind.

13 I therefore depart from the Kantian idea that agency has mainly to do with being active while passivity leads one's agency astray. Boyle (2009) presents the Kantian sort of activity as constitutive of a fundamental kind of self-knowledge: "The Kantian contrast between an active and a passive form of self-knowledge has been a source of puzzlement to commentators. Our discussion, however, has equipped us to see a point in the distinction. For on the one hand, we have seen that it is attractive to understand our knowledge of what we believe as reflecting our capacity for a kind of agency -- the capacity to make up our minds on the basis of grounds for belief." (Boyle, 2009, p. 158; see pp. 133-4). I could grant Boyle's general point and still make room for a fundamental kind of self-knowledge derived from the sense of agency that receptive passivity enhances.

14 It follows that Inferentialism, insofar as it is committed to the Detachment Assumption, can hardly solve the alienation problem. It is only by renouncing this assumption and shifting to an alternative view of how a self may be sensitive to the nuances of her own experience of the world that we can make sense of the idea of identification.
But this attitude crucially departs from the Detachment Assumption, which is, nevertheless, central to the notion of causal interpretation as Lawlor presents it. For this notion presupposes (a) that the agent bears a relation R to her internal promptings and to their cause and (b) that one's desires are determined regardless of any of the normative constraints and forms of imposition that are constitutive of receptive passivity. So, it seems that, even though receptive passivity and causal interpretation point to the same phenomena, they provide crucially disparate accounts of them. Let me now examine two metaphysical concerns that some might have regarding receptive passivity. The first has to do with the sort of normativity involved in such an attitude and the second with how it is to be individuated.

Normativity and individuation

The graceful dancer, like the sensitive agent, is subject to some normative constraints. After all, there are ways of responding to the music that do not count as graceful. This is not to say that there is just one perfect or ideal way of dancing a particular piece of music to the detriment of the rest. A dancer's actual performance can be assessed in various ways, but not in terms of its distance from an ideal. We must then dispense with the notion of ideal performance to account for the sort of normativity that is involved in graceful dancing and, in general, in the attitude that I have identified as receptive passivity. 15 We may appeal instead to the idea of proportionality between a piece of music and the dancer's response to it (Brewer, 2009; Wiggins, 1987). This notion admits, and even invites, a plurality of approaches to any particular piece of music (Berlin, 2000). Each such approach will highlight one or another aspect of it. Still, some performances will not be recognised as authorised by the work (Walton, 1990, pp. 58-61).

Regarding the second question, that is, how receptive passivity is to be individuated, some progress has already been made: for an agent A to be receptively passive to some aspects of a certain situation, some normative constraints must be met. These constraints are to be identified not by reference to what an ideal agent ought to do, but in connection with the idea of the proportionality of A's response to the relevant aspects of the situation. This notion of proportionality implies that the aspects of the situation and those of A's response (including her self-examination) form a unit of intelligibility, that is, that they are individuated with regard to each other or, in other words, that they form a network of features each of which must be identified in the context of the rest. It follows that circularity is constitutively present in this process of individuation, even though this circularity may end up being epistemically virtuous insofar as it sheds light on some aspects of the situation that had so far been distorted or had even remained in the dark. In any event, I borrow the notion of a unit of intelligibility from David Finkelstein's analysis of Wittgenstein's contextualist views and, more specifically, of the sense in which an agent's experiences, her gestures, her self-ascription, and the situation they are a response to are conceptually interdependent (Finkelstein, 2003, ch. 5). Barry Stroud elaborates a transcendental argument regarding evaluative judgements that leads to a similar conclusion (Stroud, 2011, ch. 4). So much for how to individuate receptive passivity.
Let us now turn to how this attitude may shed some light on the epistemically vicious agent.

Distinctiveness and self-transformation

Like the neurotic agent, the epistemically vicious agent may at some point find herself divided. She may accept, in light of the evidence, that her deliberative capacities are hampered by some epistemic vices, and still be unable to put these vices aside while reasoning or inquiring. Receptive passivity is a kind of attention that permits us to make sense of how exposure to certain situations, or even traumatic experiences, may have a transformative impact in the direction of epistemic virtue, for it certainly makes room for the combination of sensitivity and engagement that such a transformation requires. In other words, we can say that receptive passivity meets the conditions established for R* and, therefore, that such a kind of passivity may intelligibly account for the sort of self-improvement that Cassam postulates.

Some may object, however, that, insofar as we are dealing with epistemic vices, they will still contaminate and distort our ability to focus our attention on one or another aspect of the situation; consequently, receptive passivity by itself could hardly be a way out of the trap the epistemically vicious agent is caught in. There is, of course, no safe route to epistemic self-improvement. Nevertheless, there are ways to practise the kind of attention that I have characterised as receptive passivity that may reduce our vulnerability to self-deception. Such ways have to do, for instance, with our ability to focus on rather formal aspects of our own experience and with the way some images and words may be sensed as anchored to certain parts of one's body. In general, I would say that such means are not alien to the way a dancer learns to be graceful, that is, by seeking to passively discern nuances of expression in the piece of music as well as in herself and then letting herself go. But I should leave the details for some other occasion, because my purpose in this section is just to outline a kind of attention (i.e., receptive passivity) that may put some flesh on the relation R* that I have been vindicating as indispensable in accounting for a certain kind of self-transformation and, consequently, for the kind of self-knowledge that it requires.

Such processes of self-knowledge are, indeed, existentially substantial. The question is whether they are also distinctively first-personal; apparently, they are, insofar as third-party access to someone else's experience is confined to the deliverances of a detached, theoretical attitude. Some could then argue that an agent's ability to perceive someone else's psychological attitudes often requires some sort of empathetic response that, in turn, I would have to conceptualise as a case of receptive passivity. After all, if this kind of passivity is involved in a graceful appreciation of a piece of music, it must also be present in our ability to have an intimate conversation with a friend, where a nuanced sensitivity to each other's psychological attitudes plays a crucial role. Seeing that one's friend is sad, angry, or kind might then require one to be passively receptive to how her gestures and behaviour resonate within oneself. We could finally generalise and conclude that receptive passivity is involved in one's access to other people's psychological condition and, therefore, that this kind of attitude could hardly make self-knowledge distinctive.
My reply to this objection is that what makes receptive passivity distinctively first-personal is not the abandonment of the Detachment Assumption as such, but the kind of agential articulation involved in this attitude, namely, the capacity to let oneself go once one has discerned and acknowledged the normative significance of one's own bodily experiences and emotional responses (Weil, 1963; Williams, 2002, ch. 2). The graceful dancer may thereby apprehend the order in a piece of music and the order in her own response too. The order in the music is not altered in the least by the dancer's effort to discern and acknowledge how best to respond to it given what she is. By contrast, she should expect her own response to the piece of music to be modified and shaped by this process of discernment and acknowledgement. Success in this process of self-transformation will count as a criterion of epistemic enlightenment, that is, of the fact that she has managed to apprehend how the piece of music resonates within her and, therefore, how she is forced to respond to it (Williams, 1981, 1993, 2002). Complementarily, failure in this process of self-transformation will count as a reason to engage in further processes of discernment along the lines suggested by the idea of receptive passivity. As we see, this process of agential articulation is both strictly first-personal and epistemically distinctive. It is strictly first-personal due to the agential component, once it is conceived of in terms of receptive passivity, and it is epistemically distinctive because (a) it includes an epistemic component insofar as one's discernment is vulnerable to all sorts of mistakes and distortions but (b), unlike an external order such as that of a piece of music, the order in one's bodily experiences, emotional responses, and character is shaped through the agential process of discernment and acknowledgement. In this respect, someone else's mental states and attitudes constitute an order as external to oneself as that of a piece of music. A third party may exercise receptive passivity to grasp such mental states and attitudes, but this party cannot shape them the way the graceful dancer does, once she has properly discerned the order in the piece of music that reveals and articulates who she is and what she is forced to become. 16

16 I must emphasise that the epistemically vicious agent, like the neurotic agent, may in the end be confronted with a rather tragic situation. She may find herself in some circumstances to which her epistemic vices may count as a proportional response, exactly in the same way in which an agent's neurosis might be the most appropriate attitude given her social environment and character. In other words, it may occasionally turn out that the preservation of a certain epistemic vice is the most appropriate response to the agent's plight. After all, there are further values than faithfulness to truth or to the available evidence, and they may eventually conflict with each other. An agent may confront a situation where a particular epistemic value or virtue could reasonably be neglected or denied for the sake of some other values. Some might reply that the source of an agent's epistemic vices, especially those associated with her character, could eventually be traced back to some sort of psychic impairment or trauma, so that receptive passivity might help her to recover from it and lead, in the end, to a higher sort of agential articulation with no epistemic cost. Still, this possibility, even though it may eventually work for a certain epistemic vice or blind spot, could hardly be granted as a general procedure unless one were ready to assume that all human values can ideally be squared, which I rather doubt (Berlin, 2000).

To sum up

In this paper I have distinguished between trivial and substantial cases of self-knowledge. In trivial cases, the impropriety of providing evidence enhances the agent's authority and makes such cases epistemically distinctive. On the contrary, substantial cases of self-knowledge must rely on evidence and, as a result, they should behave like any other sort of knowledge. This line of argument presupposes the Detachment Assumption, namely, that evidence must necessarily be gathered from a detached, theoretical perspective like that commonly attributed to a third party. Both Inferentialism and Rationalism are committed to the Detachment Assumption. In Cassam's case, this commitment shows both in his discussion of the distinctiveness of self-knowledge and in his analysis of the experience of alienation. Regarding the former, Cassam argues that the most we can do to accommodate the distinctiveness of self-knowledge is to emphasise that the agent and a third party do not have access to the same kinds of evidence, since, for instance, internal promptings are only available to the agent herself and not to a third party. Cassam seems then to assume that the agent and a third party should bear the same relation R to any sort of evidence and, therefore, that R must involve the kind of detachment that seems constitutive of a third-person perspective. Something similar happens with the idea of alienation. It is true that Inferentialism (and, in general, self-knowledge based on evidence) allows for an agent to be identified with an attitude that she has discovered inferentially, but it remains silent as to what should be added to the discovery for identification to occur. The worry is that, unless an alternative relation R* towards one's own attitude is properly elucidated, we cannot make sense of the experience of identification. Moreover, I have argued that the same relation R* is required to understand the sort of self-transformation that Cassam commends in his approach to epistemic vices, since, in recalcitrant cases, inferential awareness of one's epistemic vices will be a first step, but only a first. What stands in the way of discerning what other sort of self-awareness is required is precisely the Detachment Assumption, since this alternative kind of self-awareness involves a relation R* to oneself that is both sensitive to evidence and engaged. In the final sections I have tried to elucidate what this relation R* might look like. In this respect, I have distinguished two kinds of passivity, that is, base and receptive passivity. To explore the latter, I have considered the contrast between the graceful and the unimaginative dancers. The dance in the unimaginative case is governed by a set of rules and, therefore, it comes with a certain degree of rigidity. The graceful dancer is, instead, in tune with the order in the music and with her bodily experiences as well. There is no way in which the dancer could detach herself from her bodily experiences and still be graceful. This engagement with her bodily experiences is part of the process by which she discerns the order in the piece of music and articulates her dance.
Her attitude is both strictly first-personal and epistemically distinctive. It is strictly first-personal due to the agential component, once it is conceived of in terms of receptive passivity, and it is epistemically distinctive because her own bodily experiences are shaped through her process of discerning how the piece of music resonates within her.
Establishment of a Simple, Sensitive, and Specific Salmonella Detection Method Based on Recombinase-Aided Amplification Combined with dsDNA-Specific Nucleases

Salmonella is a common foodborne pathogen that can cause food poisoning, posing a serious threat to human health. Therefore, quickly, sensitively, and accurately detecting Salmonella is crucial to ensuring food safety. For the Salmonella hilA gene, we designed recombinase-aided amplification (RAA) primers and dsDNA-specific nuclease (dsDNase) probes. After optimization of the reaction conditions, the ideal primer and probe combination was identified, and a visual Salmonella detection technique (RAA-dsDNase), read out under UV light, was developed. The RAA-dsDNase assay was then modified to further reduce contamination risks and simplify operations, yielding the one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD methods, in which results are observed under UV light or with a lateral flow dipstick (LFD). One-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD detected Salmonella genomic DNA within 50 min and 60 min, respectively. One-pot RAA-dsDNase-UV had a detection limit of 10¹ copies/µL and 10¹ CFU/mL, while one-pot RAA-dsDNase-LFD had a sensitivity of 10² copies/µL and 10² CFU/mL. Both assays identified 17 Salmonella serovars without cross-reacting with the remaining 8 bacterial strains, which include E. coli. Furthermore, Salmonella in tissue and milk samples was reliably detected using both approaches. Overall, the detection method developed in this study can quickly, sensitively, and accurately detect Salmonella, and it is expected to become an important tool for the prevention and control of Salmonella in the future.

Introduction

Foodborne pathogen-induced infectious illnesses are considered a major global public health concern, posing serious risks to human and animal health and resulting in large financial losses [1-3]. Salmonella strains are considered the most common foodborne pathogen globally; they are primarily responsible for foodborne disease outbreaks and infections because of their frequency in the natural environment [4] and their stability in food production and retail supply chains [5]. Salmonella is categorized as a moderate-to-serious risk pathogen by the World Health Organization [6], and it is estimated to have caused over 80 million illnesses to date [7]. Thus, a quick, highly sensitive, and easy-to-use detection technique is urgently needed to stop the spread of Salmonella.
For diagnosing Salmonella, conventional laboratory culture techniques are regarded as the "gold standard". Unfortunately, these methods are extremely complicated and require several steps, including pre-enrichment, bacterial multiplication, and selective separation [8,9]. As a result, it can take more than 3 days to obtain results [10]. Salmonella detection has undergone a revolution with the development of nucleic acid amplification testing (NAAT) technology. NAAT-based techniques, such as polymerase chain reaction (PCR) [11], real-time quantitative PCR (qPCR) [12], and CRISPR/Cas13a [13], are now frequently used for Salmonella detection. Nevertheless, these techniques have intrinsic drawbacks that restrict the range of situations in which they can be used, such as the demand for costly machinery, strict environmental requirements, and skilled operators [14]. Moreover, loop-mediated isothermal amplification (LAMP) has been extensively utilized in the detection of Salmonella, although it necessitates an intricate primer design [15,16].

Recombinase-aided amplification (RAA) has a number of benefits, such as rapidity, low cost, high sensitivity, and the capacity to quickly amplify DNA at low temperatures (37-42 °C). In recent years, these characteristics have rendered RAA technology appropriate for the identification of viruses and bacteria [17,18]. Nevertheless, this method frequently calls for the addition of agarose gel electrophoresis, and the RAA instruction manual even states that purification of RAA products is required prior to electrophoresis, which considerably reduces the sensitivity and speed of RAA detection. Numerous investigations have demonstrated the possibility of utilizing RAA in conjunction with the CRISPR/Cas system to optimize detection sensitivity and efficiency while also accelerating the procedure [19,20]. However, the CRISPR/Cas technique usually necessitates pre-preparing crRNA and places more demands on the environment used for detection. As a result, the practical implementation of RAA in conjunction with other approaches is highly valuable for Salmonella identification.

Highly selective endonucleases, known as double-stranded DNA-specific nucleases (dsDNase), cleave phosphodiester bonds in double-stranded DNA to produce oligonucleotides with 3′-hydroxyl and 5′-phosphate termini. These enzymes cannot break down single-stranded DNA or RNA, but they are highly selective for double-stranded DNA [21]. One investigation used dsDNase to detect miRNA-10b and enhance the signal strength produced during the reaction [22]. As far as we are aware, no reports exist on combining RAA with dsDNase for target detection.
To identify Salmonella without requiring RNA in the reaction, we used dsDNase instead of CRISPR/Cas in this study and paired it with RAA, which simplified the environmental conditions needed for detection. The effectiveness of this technique depends on the choice of a suitable detection target. Based on the information at hand, a number of genes, including invA, stn, opmC, fimA, iroB, and agfA, have been used to identify Salmonella. Nevertheless, some of these genes are absent from particular Salmonella strains, which reduces their specificity and limits the range of situations in which they can be used. According to the research, all invasive strains of Salmonella contain Salmonella Pathogenicity Island 1 (SPI1), which is essential for Salmonella invasion and is an important virulence component in Salmonella invasion and infection. Among the positive transcriptional regulators encoded in SPI1 is the hilA gene [23,24]. Because it influences Salmonella colonization, the hilA gene is essential for the pathogenesis of Salmonella. The hilA gene is unique to Salmonella and has not been found in any other Gram-negative bacteria [25]. Using PCR or LAMP techniques that target the hilA gene, some researchers have successfully developed Salmonella detection methods [26]. This implies that concentrating research efforts on the hilA gene is reasonable.

Combining RAA with dsDNase allowed us to create a novel Salmonella detection technique called RAA-dsDNase. One-pot detection here refers to a simplified analytical approach in which the amplification and readout reactions are carried out in a single reaction vessel, eliminating the need for separate sample transfers and multiple reaction steps. We therefore developed the one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD procedures by placing the RAA and dsDNase reactions in one tube and observing the outcomes with UV light or an LFD.

Reagents and Materials

The primers and fluorescent probes used in this study were purified by high-performance liquid chromatography (HPLC) and synthesized by Sangon Biotechnology Co., Ltd. (Shanghai, China). The 2× M5 HiPer Taq HiFi PCR mix (with blue dye) was purchased from Mei5 Biotechnology Co., Ltd. (Beijing, China). The supplier of the RAA kit was Hangzhou ZC Bio-Sci & Tech Co., Ltd. (Hangzhou, China). We bought the dsDNase, along with an RNase inhibitor, from Yeasen Biotechnology Co., Ltd. (Shanghai, China). The supplier of the tissue DNA extraction kits and bacterial genomic DNA extraction kits was Tiangen Biochemical Technology Co., Ltd. (Beijing, China).

Bacterial Preparation and DNA Extraction

The bacteria used in this investigation were maintained in a freezer at -20 °C in 50% glycerol. Before being used, the bacteria were grown on Luria-Bertani (LB) agar plates. A single colony was picked, transferred to LB liquid medium, and grown in an incubator until the OD600 reached 0.8. At this stage, bacterial cells were used to extract DNA with a bacterial genomic DNA extraction kit.

Construction of the Target Gene pMD19-T-hilA Vector

The hilA gene was PCR-amplified, separated by agarose gel electrophoresis, and the product was recovered. This insert was ligated into the pMD19-T vector to generate the pMD19-T-hilA construct (Figure S1).
Design and Screening of RAA Primers and dsDNase Probes

To create the RAA primers and dsDNase probes, a highly conserved gene sequence was chosen by retrieving and evaluating the conservation of the hilA gene in the NCBI database. In all, nine primer pairs and two probes were created, and primer and probe screening was performed utilizing the hilA gene-containing plasmid as a template. The RAA reaction system was set up as follows: the RAA reaction dry powder tube was filled with 25 µL of A Buffer and 13.5 µL of ddH2O. After thorough mixing, the mixture was evenly divided into five new Eppendorf (EP) tubes. Following this, 1 µL of the template and 0.4 µL of each of the downstream and upstream primers (10 µM) were added to each EP tube. Lastly, each EP tube cap was filled with 0.5 µL of B Buffer, making a total reaction volume of 10 µL. The mixture was covered, gently inverted eight to ten times, and then centrifuged for 10 s at low speed. The reactions were run at 37 °C for 40 min. After the reaction, 2% agarose gel electrophoresis was used to examine the RAA products, and the bands were observed under UV light. Primer combinations yielding single, bright bands were selected for additional testing. The dsDNase reaction system (20 µL total volume, made up with water) included 10 µL of RAA products, 2 µL of 10× dsDNase buffer, 1 µL of probe (10 µM), 1 µL of dsDNase, and 2 µL of template DNA or ddH2O. The reaction mixture was incubated for 10 min at 37 °C, and the outcomes were examined under UV light.

Establishment of the RAA-dsDNase Detection Method

Primers F7R7 and probe-2 were used to create the RAA-dsDNase detection method. The reaction system and procedure were identical to those outlined above. As negative controls, the dsDNase reaction system without added dsDNase and the RAA reaction system without added DNA template were employed.

RAA-dsDNase Reaction Condition Optimization

The RAA reaction is completed prior to the dsDNase degradation reaction, and its reaction conditions are already close to optimal. Hence, the primary goal was to optimize the conditions for the dsDNase degradation reaction. Unless otherwise noted, the dsDNase reaction time was 10 min. For optimizing the reaction buffer, we added 0 µL, 0.5 µL, and 1 µL of 10× dsDNase buffer to the dsDNase reaction system and assessed the impact of the amount of buffer on the degradation ability of RAA-dsDNase. Dosage optimization for the dsDNase enzyme is required since the concentration of the enzyme directly influences its capacity for degradation. Therefore, we introduced 0.5 µL, 0.75 µL, and 1 µL of dsDNase into the reaction systems. Each reaction system's fluorescence intensity was measured under UV light after it had reacted for 10 min at 37 °C. For reaction time optimization, given dsDNase's high capacity for target degradation, we measured the fluorescence intensity in the reaction system at 0, 5, and 10 min of the dsDNase reaction, respectively.
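To keep the pipetting arithmetic of the two reaction systems explicit, the following minimal Python sketch tallies the per-tube RAA volumes and the 20 µL dsDNase mix. All volumes are taken from the protocol text above; the script itself is purely illustrative and is not software used in the study.

```python
# Volume bookkeeping for the RAA and dsDNase mixes described above.
# All volumes come from the protocol text; this is an illustrative
# sketch, not software used in the study.

RAA_TUBES = 5  # one dry-powder tube is split across five EP tubes

def raa_per_tube():
    """Per-tube composition (uL) of the RAA reaction, totalling 10 uL."""
    master_mix = (25.0 + 13.5) / RAA_TUBES  # A Buffer + ddH2O, 7.7 uL per tube
    return {
        "master mix": master_mix,
        "template DNA": 1.0,
        "upstream primer (10 uM)": 0.4,
        "downstream primer (10 uM)": 0.4,
        "B Buffer (in the cap)": 0.5,
    }

def dsdnase_mix(total=20.0):
    """Composition (uL) of the dsDNase reaction, topped up with water to 20 uL."""
    mix = {
        "RAA products": 10.0,
        "10x dsDNase buffer": 2.0,
        "probe (10 uM)": 1.0,
        "dsDNase": 1.0,
        "template DNA or ddH2O": 2.0,
    }
    mix["water (top-up)"] = total - sum(mix.values())  # 4.0 uL
    return mix

raa = raa_per_tube()
assert abs(sum(raa.values()) - 10.0) < 1e-6  # matches the stated 10 uL total
for name, vol in raa.items():
    print(f"RAA      {name:<26} {vol:4.1f} uL")
for name, vol in dsdnase_mix().items():
    print(f"dsDNase  {name:<26} {vol:4.1f} uL")
```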
Sensitivity and Specificity Analysis of RAA-dsDNase and PCR

After the optimization of conditions, we conducted a sensitivity and specificity analysis of RAA-dsDNase. We used a continuous 10-fold dilution method, with the pMD19-T-hilA plasmid and Salmonella typhimurium (ATCC 14028) genomic DNA as templates, for the RAA-dsDNase sensitivity analysis. Additionally, the genomic DNA of eight additional bacterial species, including E. coli, and of 17 serovars of Salmonella was utilized as templates for RAA-dsDNase and PCR analysis. The PCR reaction contained 10 µL of 2× PCR mix, 2 µL of genomic DNA, 0.5 µL of each of the upstream and downstream primers (10 µM), and 7 µL of ddH2O. The PCR program was as follows: 3 min of pre-denaturation at 94 °C; then 20 s of denaturation at 94 °C, 20 s of annealing at 56 °C, and 30 s of extension at 72 °C (35 cycles of these three steps); and finally 5 min of final extension at 72 °C and a hold at 4 °C.

Establishment of One-Pot RAA-dsDNase-UV and One-Pot RAA-dsDNase-LFD Detection Methods

Once the RAA-dsDNase specificity and sensitivity analysis was finished, we simplified the experiment to decrease the chance of aerosol contamination from frequent opening and closing of tube lids. In this investigation, the RAA and dsDNase reaction systems were loaded concurrently into a single EP tube. The RAA reaction system was positioned at the tube's bottom, and the cap was filled with dsDNase, the ssDNA fluorescent probe, and ddH2O. Following the completion of the RAA reaction, a quick centrifugation was carried out, and the tube was turned upside down to thoroughly mix the two reaction systems. After that, incubation was carried out for 10 min at 37 °C. The resulting methods were designated one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD, respectively.

Sensitivity and Specificity Analysis of One-Pot RAA-dsDNase-UV and One-Pot RAA-dsDNase-LFD

Apart from adding the RAA and dsDNase reaction systems separately to the bottom and cap of the EP tube, the sensitivity and specificity analysis of one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD followed the same steps outlined above. Additionally, for the LFD format, the 3′ modification of probe-2 was changed to biotin.

Detection of Salmonella in Real Samples

To test the practicality of one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD on suspected Salmonella samples, we dissected chicks clinically suspected of Salmonella pullorum infection and extracted tissue DNA from their organs, including the heart, liver, spleen, lungs, kidneys, and intestinal tissue. The main symptoms of chicks infected with pullorum disease include lethargy, curling up, drooping wings, anorexia, and excretion of white viscous diarrhea. The necropsy results showed that the livers of these animals were enlarged, blackened, fragile, and covered with severe nodules. We then analyzed the samples for Salmonella using PCR, one-pot RAA-dsDNase-UV, one-pot RAA-dsDNase-LFD, and conventional bacterial isolation and culture techniques.

To further evaluate the practicality of one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD in food, we purchased commercial milk from a local supermarket and confirmed its freedom from Salmonella contamination through the plate counting method. We then added Salmonella serotypes such as S. enteritidis, S. thompson, S. typhimurium, S. derby, and S. infantis to the milk to simulate natural Salmonella contamination, at a final concentration of 1 × 10⁶ CFU/mL. Subsequently, bacterial genomic DNA was extracted and analyzed by one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD. Throughout the process, milk without Salmonella contamination served as a control.
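Cycling programs like the one given in the PCR method above are easy to mis-transcribe, so the short sketch below encodes it as a plain data structure and estimates the block time. The step temperatures and durations come from the methods; the runtime estimate is an illustrative calculation that ignores ramp times.

```python
# The PCR cycling program stated above, encoded as data, with a rough
# block-time estimate (ramp times ignored).

PROFILE = {
    "pre_denaturation": (94, 180),   # (temperature C, seconds)
    "cycled_steps": [
        ("denaturation", 94, 20),
        ("annealing", 56, 20),
        ("extension", 72, 30),
    ],
    "cycles": 35,
    "final_extension": (72, 300),    # then hold at 4 C
}

def block_minutes(profile):
    per_cycle = sum(seconds for _, _, seconds in profile["cycled_steps"])
    total = (profile["pre_denaturation"][1]
             + profile["cycles"] * per_cycle
             + profile["final_extension"][1])
    return total / 60.0

print(f"Estimated PCR block time: {block_minutes(PROFILE):.0f} min")  # ~49 min
```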
Design and Screening of RAA Primers and dsDNase Probes

For the highly conserved sequences of the hilA gene in the NCBI database, nine pairs of RAA primers, one pair of PCR primers, and two ssDNA fluorescent probes were designed (Table 1). The electrophoresis results demonstrated that, with the same quantity of template DNA and consistent reaction conditions, primer pair F7R7 produced a single band that was also the brightest (Figure S2). Probe screening revealed that there was no fluorescence in the control group and that the fluorescence intensity of the probe-2 group was much higher than that of the probe-1 group (Figure S3). Primer pair F7R7 and probe-2 were therefore employed in the subsequent experiments.

RAA-dsDNase Principle and Feasibility Analysis

The principle of detecting Salmonella by the RAA-dsDNase method lies in massively amplifying Salmonella-specific genes using RAA technology. The fluorescent signal is released when the dsDNase breaks down the ssDNA fluorescent probe after it binds to the target region to produce a double strand (Figure 1A). Through feasibility analysis, we found that the RAA method worked well for large-scale gene amplification (Figure 1B). We then used dsDNase to analyze the degradation of the ssDNA fluorescent probes. The findings demonstrated that, in the absence of template DNA or dsDNase in the reaction system, no fluorescence signals could be seen (Figure 1C).

Optimization of Reaction Buffer

The reaction buffer must be optimized because variables in the reaction system, such as the concentration of magnesium ions, may have an impact on the activity of dsDNase. Amounts of 0, 0.5 µL, and 1 µL of 10× dsDNase buffer were added individually to the dsDNase reaction system. The findings demonstrated that there was no discernible variation in fluorescence intensity between the reaction systems, suggesting that the RAA-dsDNase reaction is not significantly impacted by the 10× dsDNase buffer (Figure S4). As a result, it was not included in the subsequent RAA-dsDNase reactions. This is primarily because all the ingredients needed for the dsDNase reaction are present in the RAA reaction buffer.
Optimization of dsDNase Dosage

The dsDNase activity is directly influenced by its dosage. Therefore, to maximize reaction activity while controlling cost, we optimized the dosage of dsDNase. When 1 µL of dsDNase was used, the fluorescence intensity in the reaction system was significantly higher than with 0.5 µL or 0.75 µL, suggesting that 1 µL of dsDNase is suitable for the experiments (Figure S5).

Optimization of Reaction Time

Fluorescent probe cleavage may be incomplete if the dsDNase reaction time is too short; on the other hand, an excessive reaction time reduces detection efficiency. We therefore ran the reaction for three different durations: 0 min, 5 min, and 10 min. As Figure S6 illustrates, fluorescence signals were strong at 5 and 10 min, with a small further rise at the latter time. The 10 min reaction time was chosen for the subsequent tests in order to guarantee the sensitivity of RAA-dsDNase.

Sensitivity and Specificity Analysis of RAA-dsDNase and PCR

The sensitivity investigation showed that the RAA-dsDNase reaction system exhibited strong fluorescence within the plasmid concentration range of 10² to 10³ copies/µL. The fluorescence signal was modest but distinguishable from the control group even at a plasmid concentration of 10¹ copies/µL (Figure 2A). RAA-dsDNase testing of the bacterial genome revealed that the fluorescence intensity at 10² and 10¹ CFU/mL was much higher than at 0 and 10⁰ CFU/mL (Figure 2B), although the variation in fluorescence intensity between 0 and 10⁰ CFU/mL is not discernible to the unaided eye. The sensitivity investigation thus showed that the detection limit of RAA-dsDNase is 10¹ CFU/mL (bacteria) and 10¹ copies/µL (plasmid).

The RAA-dsDNase reaction system produced a significant fluorescence signal when Salmonella genomic DNA (Figure 2C, nos. 1-17) was used as the template, according to the specificity analysis. No cross-reaction was seen with the other eight strains, including E. coli (Figure 2C, nos. 18-25). Similar results were obtained from PCR analysis; bands were generated in agarose gel electrophoresis only in the presence of Salmonella genomic DNA, while no bands were observed with the other bacterial genomic DNA (Figure S7). Table S1 displays the strains that correspond to numbers 1 through 25. This supports the high specificity of the RAA-dsDNase results.
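For readers who prefer a numerical rule to an eye-based call, one common way to formalize a detection limit from such a dilution series is a blank-plus-three-standard-deviations threshold. The sketch below is hypothetical: the study scored fluorescence visually under UV, and both the 3-sigma rule and the example readings are assumptions for illustration, not the paper's data.

```python
# Hypothetical numerical call of a detection limit from a 10-fold dilution
# series. The 3-sigma threshold and the readings below are illustrative
# assumptions; the study itself judged fluorescence by eye under UV.

def detection_limit(series, blank_mean, blank_sd, k=3.0):
    """series: (concentration, mean fluorescence) pairs; returns the lowest
    concentration whose signal exceeds blank_mean + k * blank_sd."""
    threshold = blank_mean + k * blank_sd
    positives = [conc for conc, signal in series if signal > threshold]
    return min(positives) if positives else None

# Made-up readings for a pMD19-T-hilA dilution series (copies/uL)
series = [(1e3, 950.0), (1e2, 420.0), (1e1, 130.0), (1e0, 55.0)]
print(detection_limit(series, blank_mean=50.0, blank_sd=10.0))  # -> 10.0
```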
Establishment of One-Pot RAA-dsDNase-UV and One-Pot RAA-dsDNase-LFD Detection Methods

Although RAA-dsDNase already showed high specificity and sensitivity, we conducted a one-pot RAA-dsDNase experiment, which reduced the possibility of contamination and simplified the reaction process. The findings were visualized using UV and LFD. The schematic diagram for the one-pot process is shown in Figure 3A. This method involved adding the fluorescent probe, ddH2O, and dsDNase to the tube lid after the RAA mixture was positioned at the tube's bottom. Fluorescence or detection bands were only created in EP tubes containing DNA templates, as shown in Figure 3B,C. This indicates that the one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD procedures are feasible.

Sensitivity and Specificity Analysis of One-Pot RAA-dsDNase-UV and One-Pot RAA-dsDNase-LFD

The detection limit of one-pot RAA-dsDNase-UV was determined by sensitivity analysis to be 10¹ copies/µL for plasmids and 10¹ CFU/mL for bacteria (Figure 4A,B). Additionally, Figure 4C,D show that the one-pot RAA-dsDNase-LFD detection limits were 10² copies/µL for plasmids and 10² CFU/mL for bacteria. The results of the specificity investigation were in perfect agreement with RAA-dsDNase: both one-pot formats specifically detected Salmonella, without cross-reactivity to the other bacterial strains (Figure 5).
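The totals of 50 min (UV) and 60 min (LFD) reported for the one-pot workflows are consistent with the step durations given in the methods; the sketch below makes the arithmetic explicit. The ~10 min LFD read-out step is an assumption inferred from the difference between the two stated totals, not a figure given in the text.

```python
# Back-of-envelope timing for the one-pot workflows. The 40 min RAA and
# 10 min dsDNase steps are from the methods; the LFD read-out duration
# is an assumption inferred from the difference between the two totals.

STEPS_UV = {"RAA amplification": 40, "dsDNase reaction": 10}        # minutes
STEPS_LFD = {**STEPS_UV, "LFD read-out (assumed)": 10}

print("one-pot RAA-dsDNase-UV :", sum(STEPS_UV.values()), "min")    # 50 min
print("one-pot RAA-dsDNase-LFD:", sum(STEPS_LFD.values()), "min")   # 60 min
```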
Results of Salmonella Analysis in Tissues Using One-Pot RAA-dsDNase-UV and One-Pot RAA-dsDNase-LFD

After the specificity and sensitivity tests were finished, suspected tissue samples from chickens infected with Salmonella pullorum were evaluated using one-pot RAA-dsDNase-UV, one-pot RAA-dsDNase-LFD, PCR, and conventional bacterial isolation culture. The clinical autopsy of dead chickens showed that their livers were more fragile, with many nodules on the surface and a noticeably darker color (Figure 6A). Bacterial isolation and culture showed that the isolated strain conformed to the typical morphology of Salmonella pullorum on XLT-4 agar medium (Figure 6B). A substantial fluorescent signal was obtained when tissue samples from ill chickens were analyzed using one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD, whereas neither technique produced any fluorescence with tissue samples from healthy chickens (Figure 6C). When PCR and agarose gel electrophoresis were used to examine the tissue samples of sick hens, specific bands were seen; however, no discernible bands were seen in the samples from healthy chickens (Figure 6D). The detection of Salmonella in milk using one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD showed that, when Salmonella contamination was present in the milk, both methods produced clear fluorescent signals or detection bands, indicating that they can be used for the detection of Salmonella in food (Figure 6E).

Discussion

Salmonella is a highly prevalent foodborne pathogen that can cause poisoning in both humans and animals, can contribute to the spread of drug resistance, and can result in serious economic losses. This has become a global public health problem [27-29]. Currently, over 2600 serovars of Salmonella have been reported [30], which poses a significant challenge to the prevention and control of Salmonella owing to the large number of serotypes and the extensive detection efforts required. In recent years, DNA molecular detection techniques such as PCR (RT-PCR) [31], LAMP [32], RPA [33], and CRISPR/Cas [34] have been widely developed for Salmonella diagnosis. However, these methods have some limitations, such as the need for expensive equipment, professional personnel, complex primer designs, and the participation of RNA in the detection process. Therefore, the development of a simple, visual procedure that does not impose high environmental requirements and can detect multiple Salmonella serotypes is of great significance for Salmonella prevention and control.
The hilA gene, a virulence regulator that plays a crucial role in the regulation of SPI-1, is a unique feature of Salmonella species and is absent in other Gram-negative bacteria. The upregulation of this gene has been linked to enhanced colonization and organ invasion [25], and some studies have utilized Salmonella enteritidis lacking the hilA gene to create attenuated vaccine strains [35]. Previous studies have shown that PCR detection methods targeting the hilA gene have high specificity for Salmonella [36]. Another study successfully differentiated 83 different serovars of Salmonella from 22 non-Salmonella strains by targeting the hilA gene. A study that evaluated the suitability of targeting different genes (agfA, sef, spvC, and hilA) for Salmonella detection showed that PCR methods targeting the hilA gene were positive for all Salmonella [37]. Based on these findings, it can be inferred that hilA serves as a highly specific target for Salmonella detection.

This work provided a new approach for detecting Salmonella by combining RAA with dsDNase. Large-scale Salmonella hilA gene amplification was accomplished quickly and effectively using RAA technology. A fluorescent signal was then released when dsDNase cleaved the probe bound to the amplified target. The results were read under UV light and by LFD, establishing the visual one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD methods, which had detection limits for Salmonella DNA of 10¹ CFU/mL (50 min) and 10² CFU/mL (60 min), respectively. The technique developed in this work can identify Salmonella without cross-reactivity with other strains, such as E. coli. Additionally, both methods were effective in identifying Salmonella in chick tissue samples and milk.

Numerous techniques for identifying Salmonella have been developed as research advances. In this study, we compared the detection method developed here with other Salmonella detection methods, following the presentation of Li et al. [38] (Table S2). Traditional cultivation methods are regarded as the gold standard for Salmonella detection, yet they are cumbersome, time-consuming, and of limited sensitivity [39,40]. ELISA is a commonly used approach for detecting Salmonella; however, the manufacture of its antibody is rather challenging [41,42]. The PCR/RT-PCR technique is simple and effective, but it requires expensive equipment and is prone to contamination [43,44]. Although LAMP for Salmonella detection is easy to use and amplifies effectively, it has drawbacks such as complicated primer construction and the possibility of false positives [45]. Simplicity, quick detection, and high specificity are among the advantages of the RAA-dsDNase methods developed in this study.

Figure 1. Schematic diagram and feasibility analysis of RAA-dsDNase detection for Salmonella. (A) The principle of RAA-dsDNase detection for Salmonella mainly includes three parts. (I) dsDNase has no degrading activity on the ssDNA fluorescent probe. (II) When dsDNase was absent from the reaction system, the ssDNA fluorescent probe remained intact. (III) When the template DNA was amplified by RAA and dsDNase was then added to the system, the ssDNA fluorescent probe was degraded and released a fluorescent signal. (B,C) Feasibility analysis of RAA-dsDNase detection for Salmonella.
Figure 2. Sensitivity and specificity analysis of RAA-dsDNase. (A,B) The detection limit of RAA-dsDNase was 10¹ copies/µL (plasmid) and 10¹ CFU/mL (bacteria). (C) When the genomic DNA of Salmonella was present, RAA-dsDNase released a strong fluorescent signal, while in the presence of genomic DNA of other species, including Escherichia coli, no fluorescent signal was produced. Strain numbers correspond to the details in Table S1.

Figure 3. Schematic diagram and feasibility analysis of one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD detection for Salmonella. (A) The components of the RAA and dsDNase reactions are added simultaneously into an EP tube, with RAA at the bottom of the tube and dsDNase and the fluorescent probe at the tube cap. After the RAA reaction was completed, the tube was briefly centrifuged and inverted to mix, and finally the results were observed using UV and LFD. (B,C) The feasibility analysis results showed that fluorescence or detection bands appeared only when the genomic DNA of Salmonella was present.

Figure 5. Specificity analysis of one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD. The results showed that one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD could specifically detect Salmonella, without cross-reactivity to other strains of bacteria. Strain numbers correspond to the details in Table S1.

Figure 6. Detection of Salmonella in real samples. (A) Naked-eye observation of liver tissue samples from sick and healthy chickens in clinical practice. The red arrow indicates the location of the nodule. (B) The traditional culture method was used to analyze Salmonella in the tissues of sick chickens, and the results showed the typical morphology of Salmonella pullorum. (C) One-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD analysis showed that significant fluorescent signals or detection bands were produced with the heart, liver, spleen, lung, kidney, and intestinal tissues of sick chicks, while no corresponding signal was observed with healthy chicken tissues. (D) Similar results were obtained from PCR analysis. (E) Detection of Salmonella in milk using one-pot RAA-dsDNase-UV and one-pot RAA-dsDNase-LFD showed that both methods produced clear fluorescent signals or detection bands.

Table 1. The nucleic acid sequences used in this study.
7,476
2024-04-29T00:00:00.000
[ "Environmental Science", "Biology", "Agricultural and Food Sciences" ]
Thermodynamic resources in continuous-variable quantum systems

A system's deviation from its ambient temperature has long been known to be a resource: a consequence of the second law of thermodynamics, which constrains all systems to drift towards thermal equilibrium. Here we consider how such constraints generalize to continuous-variable quantum systems comprising interacting identical bosonic modes. Introducing a class of operationally-motivated bosonic linear thermal operations to model energetically-free processes, we apply this framework to identify uniquely quantum properties of bosonic states that refine classical notions of thermodynamic resourcefulness. Among these are (1) a spectrum of temperature-like quantities; (2) well-established non-classicality measures with operational significance. Taken together, these provide a unifying resource-theoretic framework for understanding thermodynamic constraints within diverse continuous-variable applications.

The simple harmonic oscillator is an iconic system in quantum science, used to describe a diverse spectrum of bosonic quantum systems, from the optical modes of light to phononic excitations within trapped ions. These continuous-variable (CV) systems enable one to encode and process continuous quantum degrees of freedom, allowing CV variants of many quantum algorithms, as well as cryptographic and metrological protocols [1][2][3]. Such variants can exhibit significant practical advantages over discrete-variable counterparts, from the relative ease of creating ultra-large entangled clusters [4,5] to hybrid factoring algorithms that require only one pure CV mode [6]. In the context of thermal physics, quantum harmonic oscillators present a compelling mechanistic model for temperature: as we lower the temperature T of a harmonic oscillator, we also monotonically lower the variance η of its momentum and position quadrature fluctuations. Indeed, this one-to-one correspondence was a key ingredient in early attempts to understand the specific heat of solids [7]. While these initial studies considered only semi-classical settings, the subsequent flourishing of quantum technologies has made it imperative to consider more general CV states. An instructive case is that of squeezed states, which are thermal states whose statistical fluctuations in certain quadratures are suppressed below the zero-temperature level [8]. Such states also have definitive thermodynamic value: heat engines using squeezed thermal reservoirs, for example, appear to perform beyond Carnot efficiency [9,10].
This suggests that squeezing itself can be leveraged to do work, much like the temperature gradients that power conventional engines. What other quantum effects are thermodynamically useful, and is it meaningful to speak of temperatures in general, non-equilibrium settings? Such questions motivate the need for a systematic characterization of the thermodynamic resources contained within bosonic CV systems. A resource-theoretic treatment, which has catalyzed profound advances in understanding the thermodynamics of discrete-variable systems [11][12][13][14], could stimulate further developments in bosonic heat engines [9,[15][16][17][18][19], by singling out uniquely quantum resources that can be harnessed for work. Our approach draws inspiration from the second law of thermodynamics, which may be paraphrased as follows: "When constrained to operations that cannot access additional sources of free energy, temperature gradient is a non-increasing monotone." Here we ask: what other properties of a quantum system are monotones embodying different forms of free energy, or generalized notions of temperature? In particular, we construct a framework of quantum thermodynamics for identical, linearly-interacting bosonic CV systems. We start by defining bosonic linear thermal operations (BLTO): the processes that can be enacted in such systems without requiring additional sources of free energy. An operational restriction to BLTO leads to several families of second law-like statements. Firstly, we identify a spectrum of generalized temperatures for general bosonic states, all of which (1) align with standard notions of temperature for classical states, and furthermore, (2) equilibrate towards the ambient temperature under BLTO operations. Secondly, we illustrate that many existing indicators of operational performance and quantifiers of non-classicality, including phase-space signal-to-noise ratios, squeezing of formation [20], and phase-space sensing resolution [21], are all non-increasing under BLTO. This establishes that many well-known quantifiers of a state's resourcefulness for information-processing and sensing tasks are in fact types of thermodynamic currency.

I. FRAMEWORK

Notation and preliminaries. Continuous-variable quantum systems occur in many different physical media, but it is useful to adopt the terminology of one medium for clarity. Here we will adopt the terminology of quantum optics, with the understanding that the results presented can be readily adapted to other physical settings. In such contexts, a single continuous-variable (CV) system is known as a bosonic mode, or quantum mode (qumode). Each qumode is associated with a conjugate pair of quadrature operators $(\hat{q},\hat{p})$, analogous to the classical position and momentum and satisfying the canonical commutation relation $[\hat{q},\hat{p}] = i\hbar$. In the case of an $m$-mode system, we denote the quadrature operators by $\hat{\mathbf{x}} \equiv (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_{2m}) \equiv (\hat{q}_1, \hat{p}_1, \hat{q}_2, \hat{p}_2, \ldots, \hat{q}_m, \hat{p}_m)$. For a state whose density operator is $\rho$, we denote the associated first quadrature moments $\langle \hat{\mathbf{x}} \rangle_\rho \equiv (\langle \hat{x}_k \rangle_\rho)$. The vector $\langle \hat{\mathbf{x}} \rangle_\rho$ lives in a $2m$-dimensional phase space $\mathcal{V}$. The second phase-space moments are represented by the covariance matrix $V_\rho$ of $\rho$, defined by $(V_\rho)_{jk} = \frac{1}{2}\langle \{\hat{x}_j - \langle \hat{x}_j \rangle_\rho,\, \hat{x}_k - \langle \hat{x}_k \rangle_\rho\} \rangle_\rho$, where $\{\cdot,\cdot\}$ denotes the anti-commutator. We make a choice of units with $\hbar = 2$, whereby the covariance matrix of the vacuum state is the identity matrix. The uncertainty constraint on a state's covariance matrix reads $V_\rho + i\Omega \geq 0$, where $\Omega = \bigoplus_{j=1}^{m} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ is called the symplectic form on $m$ modes.
We denote by γ the density operator of a qumode in the standard thermal state at ambient temperature. Assuming the standard Hamiltonian $H = \frac{1}{2}\omega(\hat{p}^2 + \hat{q}^2)$ and ambient temperature $T$, the resulting thermal states are Gaussian with zero first moments and quadrature fluctuations $\eta = \coth(\omega/k_B T)$. When $T = 0$, the thermal state coincides with the vacuum state $|0\rangle$, whose uniform quadrature variance $\eta = 1$ is called the vacuum shot noise. The parameter η increases monotonically with increasing temperature, growing linearly with $T$ in the limit where $T \gg \omega/k_B$. This is in line with the semi-classical picture, where quadrature fluctuations are taken to be a proxy for temperature. Thus, we will treat η as our measure of temperature, with the understanding that it has a one-to-one correspondence with $T$.

Bosonic linear thermal operations. A key idea of the second law of thermodynamics is that, without additional energetic input, a system gravitates towards statistical equilibrium; in particular, equilibrium entails an equality between the system's temperature and that of its environment. Thus, when limited to energetically free operations (operations that have no access to external free energy), it is impossible to increase a system's temperature differential relative to its surrounding environment. In this vein, this temperature differential can be regarded as a thermodynamic resource: a quantity that cannot be created by free operations. To generalise these ideas to systems of identical bosonic modes, we thus need to first define the class of operations that we consider to be energetically free. From a practical perspective, the following operations appear natural: (1) the introduction of an additional bosonic mode initialized in state γ, the thermal state at the ambient temperature; (2) the coupling of any two modes through linear energy-conserving interactions; and (3) the discarding of any number of modes. The combination of these operations clearly preserves the set of thermal states at the ambient temperature, and thus cannot create free energy when there is none to begin with. This then leads us to a formal definition of bosonic linear thermal operations:

Definition 1 (Bosonic linear thermal operation [BLTO]). Denote the initial system by $S$, and the number of its constituent modes by $m \equiv m_S$. A bosonic linear thermal operation (BLTO) is a process realizable through the following steps: 1. Adding an ancillary system $A$ consisting of an arbitrary number $m_A$ of elementary modes in uncorrelated thermal states: $\gamma^{\otimes m_A}$. 2. Application of any passive linear unitary on the composite $SA$. 3. Partial trace over a subsystem $A'$ comprising an arbitrary number $m_{A'}$ of modes, leaving an output system $S'$.

As the definition suggests, our framework treats thermal states at the designated ambient temperature as "free of cost" in a resource-theoretic sense: the set $\{\gamma^{\otimes k}\}_{k \in \mathbb{N}}$ is closed under BLTO. Note that while the operations map Gaussian states to Gaussian states, our results under an operational restriction to BLTO apply just as well to non-Gaussian initial and final states.
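To make the three steps of Definition 1 concrete, here is a minimal numerical sketch (illustrative Python/NumPy code, not from the original work; all parameter values are arbitrary) of how a BLTO acts on the first and second moments of a Gaussian state: a thermal ancilla with covariance η·1 is appended, an orthogonal symplectic matrix (here a beamsplitter, a standard passive linear example) is applied, and the unwanted modes are traced out.

```python
import numpy as np

def beamsplitter(theta):
    """Orthogonal symplectic matrix of a two-mode beamsplitter
    (a passive linear unitary on quadratures ordered (q1,p1,q2,p2))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.block([[c * np.eye(2),  s * np.eye(2)],
                     [-s * np.eye(2), c * np.eye(2)]])

def blto(x, V, eta, M, keep):
    """Apply a BLTO with one thermal ancilla mode (covariance eta*I) to a
    Gaussian state with first moments x and covariance V, then discard
    all modes not listed in `keep`."""
    # Step 1: append the thermal ancilla (zero mean, covariance eta*1).
    x_tot = np.concatenate([x, np.zeros(2)])
    V_tot = np.block([[V, np.zeros((len(x), 2))],
                      [np.zeros((2, len(x))), eta * np.eye(2)]])
    # Step 2: the passive linear unitary acts symplectically in phase space.
    x_tot, V_tot = M @ x_tot, M @ V_tot @ M.T
    # Step 3: partial trace = drop the rows/columns of the discarded modes.
    idx = np.concatenate([[2 * k, 2 * k + 1] for k in keep])
    return x_tot[idx], V_tot[np.ix_(idx, idx)]

# Example: a displaced squeezed mode mixed with a thermal ancilla.
eta = 3.0                                # ambient thermal level (eta >= 1)
x0 = np.array([2.0, 0.0])                # initial first moments
V0 = np.diag([0.5, 2.0])                 # squeezed covariance matrix
x1, V1 = blto(x0, V0, eta, beamsplitter(np.pi / 4), keep=[0])
print(x1, np.linalg.eigvalsh(V1))        # attenuated signal, thermalized variances
```

Since the ancilla is thermal and M is orthogonal, iterating this map drags the retained variances towards the thermal level η, which is exactly the equilibration behaviour the laws below formalize.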
II. BOSONIC "SECOND LAWS"

We will now derive several laws governing the state transitions of modes subject to BLTO evolution. These laws in effect establish BLTO resource monotones: state functions that vary monotonically under BLTO, thus supplementing the classical thermodynamic free energies. In this sense, these laws generalize the second law of thermodynamics for pertinent physical systems, much like the "second laws" of Refs. [22] and related works. We present the laws in three categories: laws associated with temperature-like quantities, laws concerning the thermal degradation of phase-space displacement considered as a signal carrier, and laws of non-classicality degradation.

A. Thermalization of generalized temperatures

In equilibrium thermodynamics, a system's temperature determines how it exchanges heat with other systems. In particular, interaction with a heat bath causes the system's temperature to approach that of the bath. We define, and prove similar thermalization results for, several families of monotones that generalize the notion of temperature to non-equilibrium bosonic states. Recall that the thermal state has covariance matrix $\eta \mathbb{1}$, the fixed parameter η corresponding to the bath's temperature. In the context of generalized temperatures, we will refer to the value η as the thermal level. We will consider a variance value above η to be super-thermal, and one below η to be sub-thermal. The generalized temperatures will be based on the directional variances of a state: for a state ρ with covariance matrix $V_\rho$, the directional variance along some unit vector $v$ in the phase space $\mathcal{V}$ is given by $v^T V_\rho v$. This quantifies the variance in the measurement of a quadrature parallel to $v$. Note that all directional variances of a thermal state are identically thermal (i.e., equal to η).

Definition 2 (Principal directional temperatures). For an m-mode state ρ, we define its k-th principal directional temperature (principal temperature for short) $\tau_k(\rho)$, for k ∈ {1, 2, …, 2m}, as follows: $\tau_1(\rho)$ is defined as the largest directional variance in the entire phase space; $\tau_2(\rho)$ is the largest directional variance in the subspace orthogonal to a direction associated with $\tau_1(\rho)$, and so on, with each subsequent value defined by maximizing over the subspace remaining after the preceding ones.

The principal temperatures are in fact just the 2m eigenvalues of the covariance matrix $V_\rho$ of ρ, and therefore efficiently computable from $V_\rho$. Experimentally, they can be inferred from the statistics of quadrature measurements. Our first result (proof in Supplemental Material A 3) then states:

Theorem 1. Under bosonic linear thermal operations (BLTO), each of the principal temperatures shifts closer to the thermal level η, never passing the latter. Specifically, if a BLTO maps ρ → σ, then 1. ρ has no fewer super-thermal principal temperatures than does σ; 2. ρ has no fewer sub-thermal principal temperatures than does σ; 3. When arranged in decreasing order, each of σ's super-thermal principal temperatures is no higher than the corresponding one of ρ; 4. When arranged in increasing order, each of σ's sub-thermal principal temperatures is no lower than the corresponding one of ρ.

While the principal temperatures can be inferred from measurement statistics, their directions do not necessarily correspond to a set of phase-space quadratures. For example, if two thermal modes at different temperatures are coupled through an even beamsplitter, and one of the outgoing modes is then squeezed, the resulting state's principal temperatures correspond to directions in phase space whose simultaneous interpretation as mode quadratures is forbidden by the uncertainty principle. This motivates us to define another family of temperature-like measures, with a more direct physical meaning (Definition 3, given after the sketch below).
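As a quick illustration of Definition 2 and Theorem 1 (an illustrative sketch, not code from the paper; the example state is arbitrary), the principal temperatures are obtained simply by diagonalizing the covariance matrix, and split naturally into super- and sub-thermal families:

```python
import numpy as np

def principal_temperatures(V):
    """Principal directional temperatures tau_k: the eigenvalues of the
    covariance matrix, sorted in decreasing order."""
    return np.sort(np.linalg.eigvalsh(V))[::-1]

eta = 3.0                                  # thermal level of the bath
V = np.diag([9.0, 0.4, 2.5, 1.0])          # arbitrary two-mode example state
tau = principal_temperatures(V)
print("super-thermal:", tau[tau > eta])    # may only decrease under BLTO
print("sub-thermal:  ", tau[tau < eta])    # may only increase under BLTO
```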
Definition 3 (Principal mode temperatures). For an m-mode state ρ, we define its k-th principal mode temperature $\mu_k(\rho)$, for k ∈ {1, 2, …, m}, as follows: $\mu_1(\rho)$ is defined as the largest (arithmetic) mean principal temperature of a single mode that can be obtained from ρ by energy-conserving operations; $\mu_2(\rho)$ is the largest single-mode mean principal temperature obtainable from the remaining modes, and so on.

Theorem 2. Under bosonic linear thermal operations (BLTO), each of the principal mode temperatures shifts closer to the thermal level η, never passing the latter. The detailed implications of this law mirror the expanded explanation provided in Theorem 1. Figure 3 provides a visual summary of these two theorems.

Note that the principal mode temperatures are not the same as the symplectic eigenvalues: the latter correspond to the temperatures of thermal modes required in preparing the state, rather than ones that can be extracted from the state. The symplectic eigenvalues are subject to a somewhat weaker law under BLTO (proof in Supplemental Material A 4):

Theorem 3. Suppose a BLTO maps ρ → σ. Then: 1. ρ has no fewer sub-thermal symplectic eigenvalues than does σ; 2. When arranged in increasing order, each of σ's sub-thermal symplectic eigenvalues is no lower than the corresponding one of ρ.

It is well-known (see, e.g., [8]) that the symplectic eigenvalues quantify the temperatures of thermal states required in preparing a Gaussian state by Gaussian operations (cf. Fig. 4). The last theorem then tells us that the sub-thermal symplectic eigenvalues directly quantify the amount of sub-thermal temperature differential required in preparing the state under BLTO. The super-thermal symplectic eigenvalues, on the other hand, are not monotones, in that they may sometimes increase under BLTO, albeit not without the initial presence of squeezedness in the state.

B. Signal deterioration laws

Our next result is a straightforward observation about the phase-space quadrature moments:

Observation 4. If a BLTO maps ρ → σ, then $\|\langle \hat{\mathbf{x}} \rangle_\sigma\| \leq \|\langle \hat{\mathbf{x}} \rangle_\rho\|$.

Thus, if the phase-space displacement in the state is used as a medium to carry information, then the maximum strength of the signal deteriorates under BLTO. However, recall Theorem 1: the super-thermal variances undergo a diminution under BLTO, possibly counteracting the signal attenuation. Thus, we ask: can the noise reduction possibly compensate for the signal attenuation, resulting in an improvement of the signal-to-noise ratio (SNR)? In order to answer this question, we must formally define the SNR. For an m-mode state ρ with first moments $\langle \hat{\mathbf{x}} \rangle_\rho$, the first moment's component along the direction of an arbitrary unit vector $v \in \mathbb{R}^{2m}$ in phase space is given by $v^T \langle \hat{\mathbf{x}} \rangle_\rho$. The corresponding directional variance, in terms of the covariance matrix $V_\rho$, is $v^T V_\rho v$. Thus, we can define the directional SNR as the ratio between these two quantities, $\mathrm{SNR}_v(\rho) \equiv (v^T \langle \hat{\mathbf{x}} \rangle_\rho)^2 / (v^T V_\rho v)$. The optimal SNR of ρ then is the maximum directional SNR over the entire phase space. In fact, as with the generalized temperatures, we define an entire family of SNR's:

Definition 4 (Principal directional SNR's). For an m-mode state ρ, we define its k-th principal directional signal-to-noise ratio $\mathrm{SNR}_k(\rho)$, for k ∈ {1, 2, …, 2m}, as follows: $\mathrm{SNR}_1(\rho)$ is the optimal directional SNR over the entire phase space; $\mathrm{SNR}_2(\rho)$ is the optimum over the subspace orthogonal to a direction achieving $\mathrm{SNR}_1(\rho)$, and so on.
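The following sketch evaluates the directional SNR of Definition 4 (illustrative Python with an arbitrary example state; taking the squared first-moment component over the directional variance is our assumption for the precise form of the ratio, and the closed form $x^T V^{-1} x$ for the optimum then follows from the Cauchy-Schwarz inequality):

```python
import numpy as np

def directional_snr(x, V, v):
    """Directional SNR along unit vector v: the squared first-moment
    component divided by the directional variance (assumed convention)."""
    v = v / np.linalg.norm(v)
    return (v @ x) ** 2 / (v @ V @ v)

def optimal_snr(x, V):
    """SNR_1: maximizing the quotient above over all directions gives
    x^T V^{-1} x, attained along the direction V^{-1} x."""
    return x @ np.linalg.solve(V, x)

x = np.array([2.0, 0.0])                             # displaced single mode
V = np.diag([0.5, 2.0])                              # squeezed covariance
print(directional_snr(x, V, np.array([1.0, 0.0])))   # SNR along q: 8.0
print(optimal_snr(x, V))                             # 8.0 (optimum along q here)
```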
In the same spirit that the principal mode temperatures were defined, we define the following operationally-motivated variants of the principal directional SNR's, restricting the directions to be simultaneously obtainable as quadratures in the phase space:

Definition 5 (Principal mode SNR's). For an m-mode state ρ, we define its k-th principal mode SNR $\mathrm{MSNR}_k(\rho)$, for k ∈ {1, 2, …, m}, as follows: $\mathrm{MSNR}_1(\rho)$ is defined as the largest directional SNR in a single mode that can be obtained from ρ by energy-conserving operations; $\mathrm{MSNR}_2(\rho)$ is the largest directional SNR in a single mode obtainable from the remaining modes, and so on.

Note that all the principal directional and mode SNR's of a thermal state are zero, by virtue of the first moments being zero. In general, we have:

Theorem 5. Under bosonic linear thermal operations (BLTO), the principal directional and mode SNR's can never increase. Specifically, if a BLTO maps ρ → σ with an m′-mode output, then $\mathrm{SNR}_k(\sigma) \leq \mathrm{SNR}_k(\rho)$ for every $k \in \{1, \ldots, 2m'\}$, and $\mathrm{MSNR}_k(\sigma) \leq \mathrm{MSNR}_k(\rho)$ for every $k \in \{1, \ldots, m'\}$.

Thus, the SNR in every principal component of the phase-space displacement can only deteriorate under BLTO, showing that the signal attenuation effect always trumps any reduction in noise. It is important to note that this result is not of the "data-processing principle" type: that any specific information contained in the initial state could only possibly be lost would be true not only under BLTO but under any processing. Rather, Theorem 5 is about the usefulness of the displacement degrees of freedom as a potential information-encoding medium: if these degrees of freedom were used to carry information, then their usefulness for this purpose would deteriorate under BLTO. In particular, if we relaxed BLTO by allowing displacement unitaries, Theorem 5 would no longer hold, while of course the data-processing principle would still hold.

III. NON-CLASSICALITY DEGRADATION AND OTHER INHERITED LAWS

Some notable measures already defined in the literature, and known to have operational significance in other contexts, turn out to be BLTO monotones:

1. The recently-developed resource theory for CV non-classicality [21] identified passive linear circuits with classical ancillary systems and measurement-feed-forward as the class of operations that cannot increase non-classicality as manifested by the negativity of the Glauber-Sudarshan P function. Since BLTO fall within these operations, all non-classicality measures found in [21] are also BLTO monotones. These include convex roof extensions of phase-subspace variances, as well as Fisher information-based measures that quantify the usefulness of a state in the task of detecting phase-space displacement operations. The stronger constraints in BLTO imply that similar Fisher information-based results would hold in connection with the task of detecting a bosonic phase shift.

2. In any resource theory, the distance of a given state from the free states (under any contractive metric) is a monotone. Under BLTO, the thermal states are the only free states. Thus, we can construct numerous monotones of the form $D(\rho, \gamma)$, where $D$ is contractive. In particular, the "relative entropy of athermality", $S(\rho \| \gamma)$, has been identified as a direct analog of the classical Helmholtz free energy for discrete-variable systems [22]. This and all similar metric-based measures naturally function as BLTO monotones, provided they have well-defined values.

3. The squeezing of formation [20] is defined as the aggregate of the single-mode squeezing required for preparing a given state from unsqueezed resources. This is a BLTO monotone, since BLTO do not allow any squeezing operations or squeezed ancillary modes. Interestingly, it is known [20] that the squeezing of formation can in general be strictly (indeed, unboundedly) smaller than the squeezing resource determined by the canonical Euler (or Bloch-Messiah) preparation of a Gaussian state (Fig. 4), which we may call the squeezing of unitary formation. Since BLTO severely restrict the ancillary systems that can be used, it is plausible that the squeezing of unitary formation is also a BLTO monotone; this question remains open.

A short numerical sketch of the symplectic eigenvalues, which appeared in Theorem 3 above, follows.
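This minimal sketch (illustrative Python, not from the paper; the example covariance matrix is arbitrary) computes the symplectic eigenvalues entering Theorem 3 as the moduli of the eigenvalues of $i\Omega V$, a standard linear-algebra characterization:

```python
import numpy as np

def symplectic_form(m):
    """Symplectic form Omega on m modes (quadrature ordering q1,p1,...,qm,pm)."""
    w = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(m), w)

def symplectic_eigenvalues(V):
    """Symplectic eigenvalues of a covariance matrix V: the eigenvalues of
    i*Omega*V come in +/- pairs, so we keep one modulus from each pair."""
    m = V.shape[0] // 2
    nu = np.sort(np.abs(np.linalg.eigvals(1j * symplectic_form(m) @ V)))
    return nu[::2]

V = np.diag([9.0, 0.4, 2.5, 1.0])        # same two-mode example as above
print(symplectic_eigenvalues(V))          # approximately [1.58, 1.90]
```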
IV. ILLUSTRATIVE EXAMPLES

We now present some illustrations of our results. First, we depict the sets of single-mode states accessible via BLTO from a given initial state.

[Figure caption] Visualization of some of our thermodynamic laws. Each example in the top half is associated with a single-mode initial state (marked with a blue dot), while those in the bottom have two-mode initial states. The plot region contains potential single-mode states reachable from the given initial state, with the X axis parametrizing τ1 (the first principal directional temperature), and the Y axis τ2, of these states. The thermal state is marked with a red dot. The outer pink region marks unphysical states that must therefore be ignored. The blue-shaded region enclosed by solid blue lines depicts all the single-mode states accessible from the given initial state: notice that this region shrinks to just a line for single-mode initial states. The grey-shaded region enclosed by solid black lines contains all final states consistent with Theorem 1; the dotted region enclosed by the dashed black lines contains those consistent with Theorem 2; finally, the region enclosed by the solid yellow lines contains final states allowed by the monotonicity of the generalized non-equilibrium Helmholtz free energy (i.e. relative entropy with respect to the thermal state).

To simplify the illustration, we consider only the second moments of all states and ignore their other features. The initial states in these examples were chosen arbitrarily to represent a diverse range of cases. However, we shall now consider a practically relevant special case, wherein the initial state is a squeezed thermal state of the same temperature as the bath. In order to motivate this example, consider the semiclassical regime. Here the system's state can deviate from equilibrium with the bath in only one way, namely as a thermal state at temperatures different from the bath's. On the other hand, modes in their full quantum description can contain a fundamentally quantum-mechanical form of athermality: squeezing. Indeed, squeezed thermal states have been investigated as resources to power nano-scale engines at efficiencies surpassing classical bounds [9,10]. By considering squeezed thermal states at the bath temperature, we can study this quantum thermodynamic resource in isolation. Fig. 6 depicts some examples of this category. Evidently, the presence of squeezing in the initial state enables reaching states outside of the solid black set; this can be interpreted as the conversion of the quantum form of athermality, manifested by squeezing, to the classical form of a temperature differential relative to the bath. This interpretation is all the more vivid in the case of the two-mode initial state, where the accessible region contains thermal states at a range of temperatures higher than the bath's, a purely classical thermodynamic resource. In light of such examples, it is not surprising that squeezed thermal states can be used to overcome classical performance limitations in engines and other applications. The examples illustrate that a genuinely quantum resource in the form of squeezing can be converted to a classical form of resource: a temperature differential relative to the bath.

V. DISCUSSION

In this article, we have built a quantitative framework for isolating those features of a bosonic CV quantum system that could constitute thermodynamic resources.
Our approach takes inspiration from the second law, identifying quantifiers of thermodynamic resourcefulness by determining if they can ever increase under a practical class of bosonic linear thermal operations (BLTO). Our framework naturally retrieves temperature gradients as non-increasing monotones in the classical limit, while revealing a far richer spectrum of generalised temperature-like quantities when squeezing and entanglement are involved. Many of these quantities acquire immediate operational meaning in terms of phase-space fluctuations, while others are directly related to existing measures of non-classicality or figures of merit for operational tasks in metrology and communication. In applying our framework to two-mode squeezed states, we illustrated that quantum notions of non-classicality (squeezing, entanglement, etc.) can be directly converted to classical notions of free energy (temperature gradients), demonstrating that CV non-classicality has definitive thermodynamic value. There are many interesting avenues along which to extend our work. In particular, there can be many alternatives to what operations we consider to be thermodynamically free. Here our choice of bosonic linear operations was heavily motivated by practical considerations of the bosonic setting, where non-Gaussian operations, or those that involve interacting modes with different free Hamiltonians, would almost certainly involve expensive nonlinearity. However, in other contexts, these restrictions could be bypassed. It would certainly be interesting to see how our results change if we allowed bosonic nonlinear operations such as parametric down-conversion, or hybrid models such as the Jaynes-Cummings interaction. Meanwhile, what states one considers free provides another freedom of choice. Indeed, the recently-proposed resource theory of local activity posits that thermal states at all temperatures are free [23]. An exciting future direction would be to further understand the operational consequences of our generalised temperatures. One particularly promising avenue is in sensing and metrology. Indeed, closely related notions of non-classicality have already been found to capture the usefulness of a state for sensing phase-space displacement [21,24], while BLTO operations naturally emerge when considering sensing under energetic constraints.

Note. During the preparation of this article, we became aware of closely related work on Gaussian thermal operations [25], where arbitrary quadratic local and interaction Hamiltonians are considered free.

Towards proving our results, it will help to strip the definition (Def. 1) of a BLTO down to its bare mathematical form using the symplectic geometry of the phase space. Considering the generic BLTO depicted in Fig. 1, denote as before the m-mode phase space of the input system $S$ by $\mathcal{V} \cong \mathbb{R}^{2m}$ (acted upon by the symplectic group $\mathrm{Sp}(2m, \mathbb{R})$); let $\mathcal{V}' \equiv \mathcal{V}_{S'}$ denote the phase space of the output system $S'$, and $\mathcal{V}_A$, $\mathcal{V}_{A'}$ those of the ancillary systems. Being a passive linear unitary, $U$ induces on the composite phase space $\mathcal{V} \oplus \mathcal{V}_A$ a symplectic transformation $M$ that is, besides, orthogonal by virtue of the passivity of $U$. Denoting the phase-space quadrature operators of $S$ as $(\hat{x}_j)_{j \in \{1, 2, \ldots, 2m\}} \equiv (\hat{q}_1, \hat{p}_1, \hat{q}_2, \hat{p}_2, \ldots, \hat{q}_m, \hat{p}_m)$,
those of $A$ as $(\hat{x}_j)_{j \in \{2m+1, 2m+2, \ldots, 2(m+m_A)\}}$, those of $S'$ as $(\hat{x}'_k)_{k \in \{1, 2, \ldots, 2m'\}}$, and those of $A'$ as $(\hat{x}'_k)_{k \in \{2m'+1, 2m'+2, \ldots, 2(m'+m_{A'})\}}$, we have $\hat{x}'_k = \sum_j M_{kj} \hat{x}_j$. Noting that the phase-space first moments of thermal modes are identically zero, the resulting transformation of the system first moments reads $\langle \hat{\mathbf{x}} \rangle_\sigma = \Pi_{\mathcal{V}'} M \langle \hat{\mathbf{x}} \rangle_\rho$. Meanwhile, the second moments are encapsulated in the covariance matrix. In order to understand how the latter transforms, we note from the properties of the thermal state that $V_\sigma = \Pi_{\mathcal{V}'} M \left( V_\rho \oplus \eta \mathbb{1} \right) M^T \Pi_{\mathcal{V}'}^T$, where $\Pi_{\mathcal{V}'}$ is the projector onto the phase space $\mathcal{V}'$ of $S'$. It will be useful for the upcoming proofs to note that the combined operator $\Pi_{\mathcal{V}'} M$ effects a symplectic projection.

Proof of Observation 4. The orthogonality of $M$ implies the conservation of the Euclidean norm in phase space: $\sum_k \langle \hat{x}'_k \rangle^2 = \sum_j \langle \hat{x}_j \rangle^2$. Restricting the index $k$ to the output system $S'$ immediately yields Observation 4.

Proof of Theorems 1 and 2. We first translate our definitions and theorems to mathematical language; to this end, we start by introducing some notation.

Definition A.1 (Eigenvalues). For a symmetric matrix $V$ acting on a (finite $m$)-dimensional real vector space $\mathcal{V}$, the $k$-th largest eigenvalue of $V$, for $k \in \{1, 2, \ldots, m\}$, is given by the Courant-Fischer characterization $\lambda_k[V] = \max_{\mathcal{V}_k} \min_{v \in \mathcal{V}_k,\, \|v\|=1} v^T V v$, where $\mathcal{V}_k$ varies over all $k$-dimensional subspaces of $\mathcal{V}$.

Definition A.2. For a symmetric $V$ acting on a real symplectic vector space $\mathcal{V}$, define $\nu_k[V] = \max_{\mathcal{V}_{2k}} \min_{\mathcal{V}_2 \subseteq \mathcal{V}_{2k}} \frac{1}{2} \mathrm{Tr}\left[\Pi_{\mathcal{V}_2} V\right]$, where $\mathcal{V}_{2k}$ varies over all $2k$-dimensional symplectic subspaces of $\mathcal{V}$, and $\mathcal{V}_2$ over all 2-dimensional symplectic subspaces of each $\mathcal{V}_{2k}$. Note that the $\nu_k$ are not the symplectic eigenvalues of $V$. However, they can be expressed as the eigenvalues of an operator, following the line of argument used in Ref. [21], Appendix D:

Observation A.1. For any given $V$, define $W := \frac{1}{2}\left(V + \Omega V \Omega^T\right)$. Then the $\nu_k[V]$ are the (doubly degenerate) eigenvalues of $W$.

Proof. First, note that $\frac{1}{2}\mathrm{Tr}\left[\Pi_{\mathcal{V}_2} V\right] = \frac{1}{2}\left(q^T V q + p^T V p\right)$, where $q$ is an arbitrary unit vector in $\mathcal{V}_2$ and $p = \Omega^T_{\mathcal{V}_2} q$ is the quadrature conjugate to $q$. Thus, $\mathrm{Tr}\left[\Pi_{\mathcal{V}_2} V\right] = q^T \left(V + \Omega V \Omega^T\right) q = 2 q^T W q$. (A9) $W$ has a special structure in terms of $2 \times 2$ blocks, with the diagonal blocks satisfying $W^{I}_{i,i} = 0$. This makes the expression for $\nu_k[V]$ amenable to an isomorphism [26] onto a complex vector space of half the dimension: we form $\tilde{W} \in \mathbb{C}^{m \times m}$ with elements $\tilde{W}_{ij} := W^{R}_{i,j} + i W^{I}_{i,j}$, and similarly a vector $r = (r_{1,x}, r_{1,p}, r_{2,x}, r_{2,p}, \ldots) \in \mathcal{V}$ is mapped to $\tilde{r} = (r_{1,x} + i r_{1,p},\, r_{2,x} + i r_{2,p}, \ldots) \in \tilde{\mathcal{V}} \cong \mathbb{C}^m$. Then $\tilde{r}^\dagger \tilde{W} \tilde{r} = r^T W r$; in addition, an orthogonal basis in $\mathbb{C}^m$ corresponds to a symplectic basis in $\mathcal{V}$. Therefore, the $\nu_k[V]$ coincide with the eigenvalues $\lambda_k[\tilde{W}]$. That these are the doubly degenerate eigenvalues of $W$ is seen by inverting the isomorphism to map from the diagonalized form of $\tilde{W}$ back to the real $2m$-dimensional matrix $\mathrm{diag}\left(\lambda_1[\tilde{W}], \lambda_1[\tilde{W}], \lambda_2[\tilde{W}], \lambda_2[\tilde{W}], \ldots\right)$.

It is straightforward to see why this holds for the λ's, considering that they are the eigenvalues of a Hermitian operator in a finite-dimensional vector space. It also holds for the ν's, since by virtue of Observation A.1 they, too, are the eigenvalues of a Hermitian operator.

Note. In the remainder, any expression with ± and/or ∓ signs is to be interpreted as a conjunction of exactly two sub-expressions: the one obtained by consistently applying the top sign throughout, and the other by consistently applying the bottom one. The scope of every such consistent application will be clear from the context.

Definition A.3 (Principal directional temperatures). For an m-mode state ρ with covariance matrix $V_\rho$, we define its $k$-th largest principal directional temperature (principal temperature for short) $\tau_k(\rho)$,
for $k \in \{1, 2, \ldots, 2m\}$, as $\tau_k(\rho) := \lambda_k[V_\rho]$.

The second line follows from (A17), and the last line from the fact that the maximization therein subsumes the cases covered by that in the line before. We will now prove that the inequalities (A18) for 1 ≤ k ≤ m are collectively equivalent to the conjunction of (the symplectic parts of) conditions 1 and 2 in the statement of Theorem A.5. We shall first prove that the former implies the latter. Firstly, it follows from the definition of k Sp±
7,023.8
2019-09-16T00:00:00.000
[ "Physics" ]
A unified Green's function approach for spectral and thermodynamic properties from algorithmic inversion of dynamical potentials

Dynamical potentials appear in many advanced electronic-structure methods, including self-energies from many-body perturbation theory, dynamical mean-field theory, electronic-transport formulations, and many embedding approaches. Here, we propose a novel treatment for the frequency dependence, introducing an algorithmic inversion method that can be applied to dynamical potentials expanded as sum over poles. This approach allows for an exact solution of Dyson-like equations at all frequencies via a mapping to a matrix diagonalization, and provides simultaneously frequency-dependent (spectral) and frequency-integrated (thermodynamic) properties of the Dyson-inverted propagators. The transformation to a sum over poles is performed introducing $n$-th order generalized Lorentzians as an improved basis set to represent the spectral function of a propagator, and using analytic expressions to recover the sum-over-poles form. Numerical results for the homogeneous electron gas at the $G_0W_0$ level are provided to argue for the accuracy and efficiency of such a unified approach.

I. INTRODUCTION

Electronic-structure calculations have been and remain a powerful and ever-expanding field of research to understand and predict materials properties [1]. The development of methods, algorithms, and hardware brings continuous progress, allowing for computational materials discovery [2][3][4], accurate comparison with experiments [5,6], and even hybrid quantum-computation algorithms [7,8]. Due to the interaction between the electrons in a system, solving the many-body quantum problem is often at the core of many approaches. Focusing on condensed-matter systems, density-functional theory (DFT) has been one of the most used and successful methods so far [9]. The possibility to map exactly the ground-state solution of the N-body problem to the minimization of a density functional for the energy [10] offers great computational simplifications, allowing the accurate computation of ground-state quantities for most materials. Although DFT is mathematically well-defined [11] and computationally inexpensive, it remains challenging to improve its approximate functionals [12,13], which often yield incorrect predictions for complex or strongly-correlated systems [14], or to address spectroscopic properties [15,16]. Dynamical (i.e. frequency-dependent) theories like many-body perturbation theory, dynamical mean-field theory, and in general embedding theories offer the flexibility to overcome these limitations of DFT. While the type of embedding differs in different approaches, a common element is the appearance of dynamical potentials. As an example, many-body perturbation theory (MBPT) reduces the multi-particle electronic degrees of freedom to one via frequency embedding [17]. Dynamical mean-field theory (DMFT) couples a real-space impurity with the rest of the system, requiring self-consistency between the two self-energies acting on the impurity and on the bath [18]. Self-energy-embedding theory (SEET) calculates exactly the frequency-dependent self-energy of strongly-correlated manifolds in solids, and applies it to the remaining weakly interacting orbitals [19,20]. Coherent electronic-transport theories use a Green's function embedding to calculate the electronic conductance of, e.g.,
a conductor between two semi-infinite leads, coupling the three systems dynamically [21,22]. Clearly, handling frequency-dependent potentials properly is of central interest in the field. Using here MBPT as a paradigmatic example, we highlight that the difficulty in treating dynamical quantities has often led to different methodological approaches when calculating spectral or thermodynamic quantities (such as energies, number of particles, chemical potentials). Real-axis calculations are commonly performed to compute the frequency-dependent spectral properties [23][24][25], while the frequency-integrated thermodynamic properties are typically calculated using an imaginary-axis formalism [26][27][28][29][30][31]. In a series of papers [32][33][34][35] von Barth and coworkers have proposed a formalism partially able to tackle spectra and thermodynamics together for the homogeneous electron gas [36], by modelling the spectral function in frequency-momentum space using Gaussians with k-parametrized centers (quasi-particle energies), broadenings (weights), and satellites. Due to its model nature, the approach does not easily offer the flexibility to target realistic systems or, in general, to extend to embedding problems. Here we introduce a novel approach, termed algorithmic-inversion method, applied on sum-over-pole expansions (AIM-SOP), to address the simultaneous calculation of accurate spectral and thermodynamic quantities. Within AIM-SOP, dynamical (frequency-dependent) self-energies are expanded on sums over poles, and the exact solution, at all frequencies, of the Dyson equation is found via a matrix diagonalization. The transformation of a frequency-dependent propagator into a SOP, via a representation of its spectral function on a target basis set, is greatly improved with the introduction of n-th order generalized Lorentzians as a basis with improved decay properties. The SOP form allows one to compute analytically convolutions and moments of propagators for the calculation of spectral and thermodynamic properties. Owing to the fulfillment of all sum rules implied by the Dyson equation, we show that the AIM-SOP method becomes essential to have accurate frequency-integrated quantities in a real-axis (thus, spectral-oriented) formalism. As a case study, we consider the paradigmatic case of the homogeneous electron gas (HEG), for r_s from 1 to 10, treated at the G_0W_0 level [37][38][39]. The paper is organized as follows: In Sec. II we introduce the AIM-SOP approach, discussing its main goal and the SOP form for propagators and self-energies. In Sec. II A we provide an overview of the connection between a propagator and its spectral function, first for a continuum and then extending it to treat spectral functions represented on discrete basis sets, as will be used in this work. Then, we consider different basis sets to represent the spectral function and obtain a SOP representation introducing n-th order Lorentzians. In Sec. II B we provide the numerical procedure to transform a propagator sampled on a grid to a SOP representation, and vice versa. In Sec. II C we introduce several useful expressions when dealing with propagators on SOP, such as analytic convolutions and moments, and in Sec. II D we show with a numerical example the representation on SOP for a test propagator. Finally, in Sec. II E we present the algorithmic-inversion method on sum over poles to obtain exact solutions on SOP of any Dyson-like equation.
We first provide a mathematical proof for the case of a self-energy on SOP, then we discuss the case of the polarizability inversion, providing a numerical example as proof-of-concept for the procedure. In Sec. III we discuss the application of the method to the test case of the homogeneous electron gas. In Sec. IV we discuss the results obtained applying AIM-SOP to the homogeneous electron gas at the G_0W_0 level, first discussing the r_s = 4 case in detail and then presenting results for r_s from 1 to 10. Finally, in Sec. V we draw the conclusions for the paper. Technical aspects of the method are further presented in the Appendices.

II. METHOD: AIM-SOP FOR DYNAMICAL POTENTIALS

In this Section we introduce the algorithmic-inversion method to treat dynamical (frequency-dependent) potentials. The crucial goal for AIM-SOP is to solve exactly and at all frequencies Dyson-like equations for dynamical potentials expressed as sums over poles. For this purpose we express frequency-dependent propagators and self-energies (or, say, polarizabilities or screened Coulomb interactions) in a SOP form: $f(\omega) = A_0 + \sum_i \frac{A_i}{\omega - z_i}$, (1) where the constant term $A_0$ may be present for self-energies and potentials. Generally, we consider here having complex residues $A_i$ and poles $z_i = \epsilon_i + i\delta_i$. In order to provide the correct analytical structure respecting time-ordering, $\delta_i \gtrless 0$ when $\epsilon_i \lessgtr \mu$, where µ is the effective chemical potential of the propagator (for a Green's function µ is the Fermi energy of the system, for a polarizability or a screened potential µ = 0). Throughout this work we will use as case of study the homogeneous electron gas (HEG), also in view of the extensive algorithmic and numerical results in the literature. In the HEG, due to translational symmetry, the two-point operators (including Green's functions, self-energies, polarizabilities) are diagonal on the plane-wave basis, but the AIM-SOP can be generalized to non-homogeneous systems, as will be discussed in future work.

A. Spectral representations

Following Ref. [33], we consider the spectral representation of a propagator (here the Green's function for simplicity), where G is expressed in terms of its spectral function A by performing a time-ordered Hilbert transform (TOHT), $G(\omega) = \int_{\mathcal{C}} d\omega' \, \frac{A(\omega')}{\omega - \omega'}$, (2) where $\mathcal{C}$ is a time-ordered contour which is shifted above/below the real axis for ω ≶ µ, and where the shift is sent to zero after the integral is computed. Accordingly, the inverse relation to go from G to A is given by $A(\omega) = -\frac{1}{\pi} \, \mathrm{sgn}(\omega - \mu)\, \mathrm{Im}\, G(\omega) = \frac{1}{\pi} \left| \mathrm{Im}\, G(\omega) \right|$, (3) the last expression being valid for a scalar Green's function, as is the case for the HEG. Representing the spectral function on a (finite) basis set $\{b_j(\omega)\}$, $A(\omega) = \sum_j a_j\, b_j(\omega)$, (4) with $b_j(\omega)$ centred on $\epsilon_j$ and positive (negative) for $\epsilon_j \lessgtr \mu$, respectively, and $a_j > 0$, we also induce a representation of G. This is achieved by introducing a discrete time-ordered Hilbert transform (D-TOHT) as $G(\omega) = \sum_j a_j \int_{\mathcal{C}} d\omega'\, \frac{b_j(\omega')}{\omega - \omega'}$, (5) where the sign chosen for $b_j$ in Eq. (4) gives by construction the time-ordered analytic structure of the Green's function. In the case of all $\delta_j \to 0$ with the number of $b_j$ becoming infinite (continuum representation limit), Eq. (5) becomes the standard TOHT of Eq. (2) (with $\mathcal{C}$ shifted by $\pm i0^+$). A natural choice is to use a basis of Lorentzian functions centered at different frequencies $\epsilon_j$, according to $b_j(\omega) = \pm\frac{1}{\pi}\, \frac{\delta_j}{(\omega - \epsilon_j)^2 + \delta_j^2}$, (6) for which the D-TOHT of the single element is analytical, yielding a pole function $1/(\omega - z_j)$ with $z_j = \epsilon_j + i\delta_j$, with the sign convention for $\delta_j$ defined as discussed above according to time ordering. Thus, choosing $b_j$ as in Eq. (6) induces a SOP representation for G according to Eq. (1), with $A_i = a_i \in \mathbb{R}$.
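To fix ideas, the following sketch (hypothetical Python, with arbitrary pole parameters; not code from the AGWX suite) evaluates a scalar Green's function in the SOP form of Eq. (1) from a Lorentzian-represented spectral function, using the time-ordered sign convention δ_j ≷ 0 for ε_j ≶ µ, and checks the spectral-weight normalization via Eq. (3):

```python
import numpy as np

def sop_green(omega, amps, centers, deltas, mu):
    """Evaluate G(w) = sum_j a_j / (w - z_j), with time-ordered poles
    z_j = eps_j + i*delta_j for eps_j < mu, and z_j = eps_j - i*delta_j
    for eps_j > mu (Lorentzian representation of the spectral function)."""
    signs = np.where(centers < mu, 1.0, -1.0)
    poles = centers + 1j * signs * deltas
    return (amps / (omega[:, None] - poles[None, :])).sum(axis=1)

def spectral_function(G):
    """A(w) = (1/pi)|Im G(w)| for a scalar propagator, cf. Eq. (3)."""
    return np.abs(G.imag) / np.pi

mu = 0.0
centers = np.array([-2.0, -0.5, 1.0])    # pole positions eps_j (arbitrary)
amps = np.array([0.3, 0.5, 0.2])         # positive amplitudes a_j, summing to 1
deltas = np.full(3, 0.1)                 # broadenings delta_j
omega = np.linspace(-4.0, 4.0, 801)
G = sop_green(omega, amps, centers, deltas, mu)
A = spectral_function(G)
print(np.trapz(A, omega))                # close to 1, up to Lorentzian tail losses
```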
Once the SOP representation of G is known, i.e. poles and amplitudes are known, the grid evaluation (inverse of the above) is trivial and amounts to performing the finite sum in Eq. (1). This approach ensures a full-frequency treatment of the propagator (approaching the continuum representation limit where Lorentzians become delta functions), while preserving an explicit knowledge of the analytical structure and continuation of G. The main drawback of using Lorentzians to represent G is related to the slowly decaying tails ($1/\omega^2$ for $\omega \to \infty$) induced in the spectral function when using finite broadening values $\delta_j$. In order to improve on this, we introduce here n-th order generalized Lorentzians to obtain fast-decay basis functions. These are defined as $L^n_{\delta_j}(\omega - \epsilon_j) = \frac{1}{\pi N_n}\, \frac{\delta_j^{2n-1}}{(\omega - \epsilon_j)^{2n} + \delta_j^{2n}}$, (7) where $N_n = \left[n \sin\left(\frac{\pi}{2n}\right)\right]^{-1}$ is the normalization factor (see Appendix A). The D-TOHT of $L^n_\delta$ remains analytic and still yields a SOP representation for G (see Appendix A), now with $n$ simple poles per basis function, with residues $\alpha_m$ and poles $\zeta_{j,m}$ given by Eqs. (9) and (10). Importantly, the $\alpha_m$ are complex (and so become the residues $A_i = a_j \alpha_m$ in the SOP representation, $i$ being a combined index). Thus, the spectral function of this SOP receives contributions from both the real and the imaginary part of each Lorentzian pole $1/(\omega - \zeta_{j,m})$, resulting in an overall faster decay than each single Lorentzian. Also, it is worth noting that, as for standard Lorentzians, a normalized n-th-order Lorentzian approaches a Dirac delta for $\delta_j \to 0^+$. Owing to their fast decay and to this last property, using a SOP for $G_0$ in terms of n-th order Lorentzians provides a faster convergence for $\delta \to 0^+$, in comparison with a SOP representation built on ordinary Lorentzians, as will also be shown later. While the use of n-th order generalized Lorentzians to represent the spectral function A(ω) provides a faster decay in the imaginary part of the propagator, it results in a multiplication of the number of poles in the SOP for G (by the degree of the Lorentzian), and in having complex residues. As will be shown in Sec. II C, the decay properties are fundamental for evaluating the moments of a SOP representation, assuring absolute convergence up to order 2(n − 1). Also, the use of faster-decay basis elements when representing the spectral function improves the stability of the representation procedure, reducing the off-diagonal elements of the overlap matrix of the basis (see Sec. II B). Alternatively to n-th order Lorentzians, one could consider e.g. using Gaussian functions to represent A(ω), and consequently G(ω), as done in Refs. [32,33]. Gaussians also allow for an analytical expression of the D-TOHT, at the price, though, of invoking the Dawson [40] or Faddeeva [41] functions to evaluate the real part of the propagator. Because of this, SOP expressions are not available, and basic operations involving propagators (such as those described in Sec. II C) cannot be evaluated analytically and need to be worked out in other ways, e.g. numerically or by recasting the expressions in terms of propagator spectral functions [32].

B. Transform to a sum over poles

Once the SOP representation has been introduced, the next important step is to determine numerically the SOP coefficients $A_i$ in Eq. (1), given an evaluation of G on a frequency grid. According to the discussion of Sec. II A, the SOP representation can be seen equivalently as a representation for the Green's function G or for the spectral function A. As a first case, we consider representing A(ω) according to Eq. (4),
and we do it using the basis of n-th-order generalized Lorentzians introduced in Eq. (7). First, we obtain the coefficients $a_j$ of the representation by performing a non-negative least-squares (NNLS) fit [41,42], thus assuring the positivity of all $a_j$. Then, we use Eqs. (9) and (10) to get the SOP representation for the propagator. While the positions and broadenings $(\epsilon_j, \delta_j)$ of the n-th order Lorentzians could also be optimized by means of a non-linear NNLS fit, here we consider them centred at $\epsilon_j = \frac{1}{2}(\omega_j + \omega_{j-1})$ and broadened with $\delta_j = |\omega_j - \omega_{j-1}|$, and we just linearly optimize the $a_j$. Also, for numerical reasons we prefer to work with the bare imaginary part of G, i.e. without imposing the sign factor of Eq. (3), since this function is smoother than the actual spectral function A(ω) close to the Fermi level. Alternatively, one could consider the basis representation induced on G via Eq. (1) in order to directly obtain the $A_i$ and $z_i$ coefficients (residues and poles). As for A(ω), this can be achieved by a linear or non-linear LS fit (or interpolation) taking advantage of the knowledge of the whole G(ω) on a frequency grid (and not just of A). Interestingly, the SOP representation in Eq. (1) is a special case of a Padé approximant, written as the ratio of polynomials of order N − 1 and N, respectively. Because of this, one can exploit Padé-specific approaches to determine $(A_i, z_i)$, such as, for instance, Thiele's recursive scheme [43]. We found that this leads to a very efficient method when a few tens of poles are considered, becoming numerically unstable beyond. Moreover, since the residues are not constrained to be real and positive (the $A_i$ are actually complex), there is no control over the time-ordered position of the poles, and the procedure is non-trivial to extend to the case of n-th order Lorentzians. For the above reasons, in the present work we adopt the first approach, based on the representation of A(ω).

C. Analytical expressions

Once the SOP representation of a dynamical propagator is available, a number of analytical expressions hold. For instance, the convolution of propagators, such as those involved in the evaluation of the independent-particle polarizabilities in terms of the Green's functions, can be evaluated using Cauchy's residue theorem [Eq. (11)]: once the propagators are on SOP, the frequency integral reduces to a finite sum over their poles. Using the SOP for G, the following integrals can also be computed explicitly: $E_m[G] = \frac{1}{2\pi i} \int_{-\infty}^{+\infty} d\omega\, \omega^m\, G(\omega)\, e^{i\omega 0^+}$, (12) where we refer to the term $E_m[G]$ as the m-th (regularized) moment of G. We restrict the discussion to the first m = 0 and m = 1 moments, since those are of interest for calculating the number of particles and the total energy in MBPT (see Sec. III B for details). Higher-order moments would require a stronger regularization factor in Eq. (12) than $e^{i\omega 0^+}$. We underline that if one uses an n-th-order Lorentzian basis to represent G(ω) on SOP, the first 2(n − 1) moments coincide with the moments of its occupied spectral function, $\int_{-\infty}^{\mu} d\omega\, \omega^{m} A(\omega)$ with $m \leq 2(n-1)$. This is shown in Appendix B.

D. Numerical validation

In the following we highlight numerically some properties of the SOP representation. To this aim, we consider a propagator G obtained as the Hilbert transform (HT) of a Gaussian, analytically expressed via the Faddeeva [41] function (black curves in the top panels of Fig. 1). Here we have assumed the Fermi level to be far enough from the imaginary part of G that the retarded HT can be used. The objective of the validation is to transform the Faddeeva Green's function sampled on a finite grid to a SOP representation.
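The NNLS step of Sec. II B can be sketched as follows; this is an illustrative reconstruction in Python using scipy.optimize.nnls (the grid, broadenings, and target Gaussian are placeholders chosen here, and ordinary first-order Lorentzians are used for brevity instead of the 2nd-order basis of the actual validation):

```python
import numpy as np
from scipy.optimize import nnls

# Target: a Gaussian spectral function sampled on a coarse grid.
w = np.linspace(-3.0, 3.0, 10)
A_target = np.exp(-w**2) / np.sqrt(np.pi)

# Basis: ordinary Lorentzians centred at grid midpoints and broadened
# with the grid spacing (cf. Sec. II B).
centers = 0.5 * (w[1:] + w[:-1])
delta = w[1] - w[0]
B = (delta / np.pi) / ((w[:, None] - centers[None, :])**2 + delta**2)

# Non-negative least squares guarantees a_j >= 0, hence a valid
# (positive) spectral representation of the propagator.
a, residual = nnls(B, A_target)

# The fitted amplitudes together with poles z_j = centers + i*delta
# define the SOP of G through the discrete time-ordered Hilbert transform.
print(a, residual)
```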
Following Sec. II B, we represent the Gaussian spectral function on first- (ordinary) and second-order Lorentzians, use Eqs. (9) and (10) to get the resulting SOP representation, and then compare the results (orange and green lines for 1st- and 2nd-order basis, respectively) with the starting Faddeeva Green's function calculated on a much finer grid. We choose to feed the NNLS fitting algorithm with 10 sampling points (for the imaginary part of G) and to use 9 basis functions centered at the midpoints of the grid, and broadened with the width of the interval (in order to ensure an exhaustive cover of the domain). Then we use Eqs. (8)-(10) to obtain the SOP representation of G (in the case of n-th order Lorentzians) from the output of the NNLS procedure. Underscoring the quality of the representation, the upper panels of Fig. 1 show that using faster-decay 2nd-order Lorentzians provides a more accurate result for both the real (left) and imaginary (right) part of G. In the lower panel we also compare the moments of the Faddeeva function with those obtained from Eq. (12) and Eq. (B1). Since absolute convergence of the first and second moments is not ensured for 1st-order Lorentzians, meaning that Eq. (B1) does not hold, $E_{m \geq 1}[G]$ has to be calculated according to Eq. (12) and is in general complex. In order to obtain a meaningful result we take its real part, and consider the imaginary one as an error that must be controlled by extending the basis set towards completeness.

E. Algorithmic inversion on SOP

As anticipated in the introduction, within the SOP approach the exact solution at all frequencies of the Dyson equation can be remapped into the diagonalization of a static effective Hamiltonian (Hermitian only under special conditions), a procedure that we refer to as the "algorithmic-inversion method on sum over poles" (AIM-SOP); this is a central result of the present work. As mentioned, we will use the HEG as a paradigmatic test case, leaving the treatment of the non-homogeneous case to later work. Suppressing then the k momentum index for simplicity, let us suppose to have the SOP representations of the self-energy Σ(ω) and of the non-interacting Green's function $G_0(\omega)$, $\Sigma(\omega) = \sum_{i=1}^{N} \frac{\Gamma_i}{\omega - \zeta_i}, \qquad G_0(\omega) = \frac{1}{\omega - \epsilon_0}$. Taking advantage of these expressions, the Dyson equation $G(\omega) = [G_0^{-1}(\omega) - \Sigma(\omega)]^{-1}$ can be rewritten as $G(\omega) = \frac{\prod_{i=1}^{N} (\omega - \zeta_i)}{T_N(\omega)}, \qquad T_N(\omega) = (\omega - \epsilon_0) \prod_{i=1}^{N} (\omega - \zeta_i) - \sum_{i=1}^{N} \Gamma_i \prod_{j \neq i} (\omega - \zeta_j)$, in which the N+1 roots of the polynomial $T_N$ are the N+1 poles of the Green's function (as expected when the self-energy has N poles). Then, the key statement of this Section is that the roots of $T_N$ can be obtained as the eigenvalues of the (N+1)×(N+1) matrix $H^{\mathrm{AIM}} = \begin{pmatrix} \epsilon_0 & \sqrt{\Gamma_1} & \cdots & \sqrt{\Gamma_N} \\ \sqrt{\Gamma_1} & \zeta_1 & & \\ \vdots & & \ddots & \\ \sqrt{\Gamma_N} & & & \zeta_N \end{pmatrix}$. We prove this statement by observing that the characteristic polynomial of $H^{\mathrm{AIM}}$ is $T_N(\omega)$, and we proceed by induction. Since the N = 1 case is trivial, we move to the N-th case: using the Laplace expansion on the last line, the characteristic polynomial of the N-th case can be written in terms of that of the (N−1)-th case, where the induction hypothesis is used in the first term of the right-hand side; applying the same procedure to the last column of the second term completes the proof. Calling $z_s$ the poles of G, we calculate the residues by equating the two expressions for G and performing the limit $\lim_{\omega \to z_s} (\omega - z_s)$ on both sides (Heaviside cover-up method [44]), obtaining $A_s = \frac{\prod_{i=1}^{N} (z_s - \zeta_i)}{\prod_{s' \neq s} (z_s - z_{s'})}$. (20) We have thus proven that, knowing Σ represented on SOP, the SOP expression of G can be found by the diagonalization of the AIM-SOP matrix $H^{\mathrm{AIM}}$ followed by the evaluation of the residues using Eq. (20).
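The construction above maps directly onto a few lines of code. The sketch below (illustrative Python, not the heg_sgm.x implementation; parameter values are arbitrary) builds $H^{\mathrm{AIM}}$ for a self-energy on SOP, diagonalizes it, and extracts poles and residues of G from the eigenvectors via the resolvent; for complex residues or poles the same formula applies as long as the matrix is diagonalizable.

```python
import numpy as np

def algorithmic_inversion(eps0, gammas, zetas):
    """Solve the Dyson equation G(w) = 1/(w - eps0 - Sigma(w)) exactly for
    Sigma(w) = sum_i gammas[i]/(w - zetas[i]), by diagonalizing the
    (N+1)x(N+1) matrix H_AIM (Hermitian when the residues are real and
    positive and all poles share one broadening)."""
    gammas, zetas = np.asarray(gammas), np.asarray(zetas)
    N = len(gammas)
    H = np.zeros((N + 1, N + 1), dtype=complex)
    H[0, 0] = eps0
    H[0, 1:] = np.sqrt(gammas)            # couplings sqrt(Gamma_i)
    H[1:, 0] = np.sqrt(gammas)
    H[1:, 1:] = np.diag(zetas)            # self-energy poles on the diagonal
    poles, U = np.linalg.eig(H)
    # G(w) = <0|(w - H)^{-1}|0>, so the residues follow from the eigenvectors.
    residues = U[0, :] * np.linalg.inv(U)[:, 0]
    return poles, residues

# Example: one-pole self-energy -> two-pole Green's function.
poles, residues = algorithmic_inversion(0.0, gammas=[0.25], zetas=[1.0])
print(poles, residues, residues.sum())    # residues sum to 1
```

By construction the residues sum to one, so the spectral-weight normalization mentioned below is satisfied automatically.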
It is worth noting that the $H^{\mathrm{AIM}}$ matrix becomes (or can be made) Hermitian under special conditions. This happens when the self-energy residues $\Gamma_i$ are real and positive, and the self-energy poles all have the same imaginary part $i\delta$ (with the usual time-ordered convention according to Eq. (1)), also equal to the broadening assumed for the $G_0$ pole. Then it is possible to include the imaginary part of the poles in the frequency variable ω, and invert $G(\omega \in \gamma)$ on this time-ordered complex path, in order to have $H^{\mathrm{AIM}}$ with only the real parts of the poles along the diagonal. Finally, in order to have $G(\omega \in \mathbb{R})$ we analytically continue the solution to the real axis, obtaining the imaginary parts $\mathrm{Im}\, z_i = \delta_i$ for the SOP of the Green's function. We also stress that, given a self-energy represented on SOP, the solution provided by the algorithmic-inversion procedure is exact at all frequencies. This ensures the Green's function fulfills all the sum rules implied by the Dyson equation, including e.g. the normalization of the spectral weight, and the first- and second-moment sum rules of the spectral function derived in Ref. [32]. This result is crucial when evaluating frequency-integrated quantities of a Green's function, such as the number of particles or the total energy (see Sec. III B). Besides the solution of the Dyson equation for G, the AIM-SOP can also be used to solve the Dyson equation for the screened Coulomb interaction W(ω), i.e. to compute the SOP representation of W(ω) once a SOP for the irreducible polarizability P(ω) is provided: $W(\omega) = v_c + v_c P(\omega) W(\omega)$. (21) Here $v_c$ is the Coulomb potential (recalling that we are suppressing the momentum dependence for simplicity). By letting $P(\omega) = \sum_i \frac{S_i}{\omega - g_i}$, we can write $W(\omega) = \epsilon^{-1}(\omega)\, v_c$ with $\epsilon(\omega) = 1 - v_c P(\omega)$, and, following Eq. (21), we arrive at a ratio of polynomials for which the AIM-SOP matrix can be used to find the poles of $\epsilon^{-1}(\omega)$ and W(ω). The amplitudes of W are easily found using Eq. (20).

For validation, we apply the AIM-SOP approach to the paradigmatic case of the homogeneous electron gas (HEG), treated at the $G_0W_0$ level of theory [6,17,45]. Since we calculate propagators on the real axis, we can easily access spectral (frequency-dependent) properties. The calculation of frequency-integrated ground-state quantities (occupation numbers, total energies, and thermodynamic quantities in general) can be obtained directly from the SOP representation of the spectral quantities computed in the procedure. We stress that usually [26,46,47] thermodynamic properties are obtained via additional calculations of propagators (e.g. on the imaginary axis), while in this work spectral properties and integrated quantities are obtained simultaneously using the SOP representation of propagators computed on the real axis. While some quantities computed using the free propagator $G_0$ have known analytical expressions, as is the case for the irreducible polarizability $P_0$ expressed via the Lindhard function [48,49], here we recompute explicitly all the propagators needed to evaluate the GW self-energy, making the treatment suitable also for self-consistent calculations. Therefore, in the following, the only assumption we make is to consider the Green's function as represented on SOP.

A. HEG propagators on the real frequency axis

In order to solve a one-shot $G_0W_0$ cycle for the spin-unpolarized HEG, we first need to compute the irreducible polarizability at the independent-particle (or RPA) level, according to the convolution integral of Eq. (24), where k = |k| and q = |q| are the moduli of the electron and transferred quasi-momenta, respectively. To compute Eq. (24), the frequency integral (convolution) is performed analytically according to Eq. (11).
Then we integrate numerically in spherical coordinates by performing the variable change x = |k + q| on the azimuthal angle of k, which allows for the pre-calculation of the analytical convolutions on the two-dimensional (x, k) grid, instead of on the three-dimensional (k, q, θ) space. Exploiting the parity of P(ω), it is also possible to limit the k integration to the occupied states (see Appendix C). The numerical integration on the momentum is performed using the trapezoidal rule, which ensures exponential convergence for decaying functions [50]. In order to have a SOP representation for the screened potential W, we transform the polarizability calculated on a frequency grid (at fixed momentum q) to a SOP by performing an NNLS fit, following the procedure of Sec. II B. We then solve the Dyson equation using the algorithmic inversion for the polarizability (see Sec. II E) to obtain a SOP for W, and use it for the GW integral. An alternative possibility would be to solve the Dyson equation on a grid (which, due to homogeneity, is an algebraic inversion), and then transform W to a SOP representation. Even admitting an exact interpolation for the SOP of W on the calculated frequencies (where the Dyson equation is solved on the grid), this SOP would suffer from not having solved the Dyson equation at all other frequencies. Very differently, the SOP obtained from the algorithmic inversion provides an exact solution of the Dyson equation at all frequencies (see Sec. II E). Thus, the sum rules implied by the Dyson equation (moments of the spectral function) are all obeyed by the SOP obtained from the algorithmic inversion, this being the exact solution at all frequencies. Conversely, this is not true for the grid inversion, where the solution is exact only for isolated frequencies. Concerning the self-energy integral of Eq. (26), where $W_{\mathrm{corr}} = W - v_c$, we can still use Eq. (11), since we have the SOP representation of W. Again, in Eq. (26) we perform the x = |k + q| change of variable, obtaining Eq. (27), which allows for fewer convolutions (as for the polarizability integral), and use trapezoidal weights as in Eq. (25) for the momentum integration. The solution of the Dyson equation for the Green's function using the algorithmic inversion, and the calculation of frequency-integrated (thermodynamic) quantities, are discussed in the next section. In Fig. 3 we show the overall flow chart describing the process of going from the knowledge of the initial Green's function to the calculation of the corresponding self-energy (for the HEG in the GW approximation), as implemented in the heg_sgm.x program of the AGWX suite [51], by means of the SOP approach. As opposed to the path in red, where the Dyson equations are solved on grids, the green path highlights the protocol followed in the present work. The crucial difference between the two approaches is the use of the algorithmic-inversion method to solve the Dyson equation exactly, providing a SOP for W that obeys all sum rules implied by the Dyson equation, as previously discussed in this Section.
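Before turning to the frequency-integrated quantities of the next subsection, it is useful to sketch how the regularized moments of Eq. (12) are evaluated once the Green's function is on SOP: the factor $e^{i\omega 0^+}$ closes the contour in the upper half-plane, so only the time-ordered occupied poles (Im $z_i$ > 0) contribute, each with weight $A_i z_i^m$. The snippet below is an illustrative sketch under these assumptions, not code from the AGWX suite.

```python
import numpy as np

def sop_moment(amps, poles, m):
    """m-th regularized moment E_m[G] of a SOP propagator: the factor
    exp(i*w*0+) closes the contour in the upper half-plane, so only the
    occupied (Im z_i > 0, time-ordered) poles contribute."""
    occ = poles.imag > 0
    return np.sum(amps[occ] * poles[occ] ** m)

# Occupation and first occupied moment of a two-pole Green's function
# (one pole below and one above the chemical potential).
amps = np.array([0.8, 0.2])
poles = np.array([-1.5 + 0.05j, 2.0 - 0.05j])
n_k = sop_moment(amps, poles, 0).real        # k-resolved occupation: 0.8
eps_k = sop_moment(amps, poles, 1).real      # first occupied moment: -1.2
print(n_k, eps_k)
```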
B. Frequency-integrated quantities and thermodynamics

Having obtained the self-energy on a frequency grid following the procedure described in Sec. III A, we evaluate the Green's function together with some related frequency-integrated quantities. As mentioned, the SOP approach plays a central role here, enabling analytical integrals for the moments of G, such as those involved in the Galitskii-Migdal expression for the total energy [see Eq. (28) below], and thus accurate thermodynamic (frequency-integrated) quantities. Moreover, the use of the algorithmic inversion allows for the exact solution of the Dyson equation for the Green's function at all frequencies. The conservation of all the sum rules (implied by the Dyson equation, see Sec. II) guaranteed by the AIM-SOP is fundamental when calculating the occupied moments of the spectral function. As an example, the normalization condition of the spectral function is automatically satisfied when G on SOP is obtained using the algorithmic inversion, which removes the fitting constraints that would be required if, e.g., a grid inversion were used. In order to exploit the AIM-SOP to get G on a SOP, we obtain the SOP representation of the self-energy by performing an NNLS fit of Im Σ(ω) (see Sec. II B). Then, to compute the total energy from the knowledge of the Green's function G, we use the Galitskii-Migdal expression [17,48], Eq. (28), here in Hartree units, where V is the volume of the periodic cell of the electron gas. In this expression the frequency integrals are performed using the SOP for G and exploiting Eq. (12), with m = 1 and m = 0 for the first and second terms, respectively. Here n_k is the k-resolved occupation function, which sums to the total number of particles when integrated over momentum, and ε_k is the occupied band, i.e., the first moment of the occupied spectral function. For both the m = 0 and m = 1 moments, the equality between the moments of the Green's function and the moments of the occupied spectral function, Eqs. (12) and (B1), is assured by having used the algorithmic inversion when obtaining the SOP for the Green's function. Indeed, the knowledge of the self-energy on SOP and the use of the algorithmic inversion to solve the Dyson equation exactly ensure that the spectral function decays at least as fast as Im Σ, thereby making the first two occupied moments (see Sec. II C) converge. Similarly to the discussion in Sec. III A, the SOP approach combined with the algorithmic inversion allows one to follow the workflow highlighted by the green path in Fig. 4. Overall, the results presented in Sec. IV are obtained using an implementation of the above approach in the heg_sgm.x program of the AGWX suite [51].
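The occupied-moment bookkeeping entering the Galitskii-Migdal formula can be sketched as follows; since Eq. (28) is not reproduced here, the prefactors (spin sum, volume factor) in the per-k integrand are our assumptions for a scalar illustration.

```python
import numpy as np

def occupied_moments(amps, poles, mu):
    """m = 0 and m = 1 occupied moments of a spectral function on SOP:
    n_k is the summed weight below mu, e1_k the weight-averaged pole sum."""
    amps, poles = np.asarray(amps), np.asarray(poles)
    occ = poles < mu
    return amps[occ].sum(), (amps[occ] * poles[occ]).sum()

def galitskii_migdal_k(k, amps, poles, mu):
    """Per-k Galitskii-Migdal integrand (Hartree units; normalization is
    a placeholder): e_k = (k^2/2 * n_k + e1_k) / 2."""
    n_k, e1_k = occupied_moments(amps, poles, mu)
    return 0.5 * (0.5 * k ** 2 * n_k + e1_k)
```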
C. Numerical details

Here we discuss and report the parameters that control the numerical accuracy of the quantities (polarizability, self-energy, total energy) computed by means of Eqs. (25), (27) and (28).

[FIG. 4. Flow chart representing different strategies for the calculation of the total energy given a self-energy Σ, or a spectral function A, on a frequency grid as input for the heg_sgm.x code; the strategy used in this article is highlighted with green lines.]

[FIG. 5. Convergence study for the correlation energy per particle E_corr, obtained with the Galitskii-Migdal formula and using a Green's function from a G_0W_0 calculation for the HEG at r_s = 4. The parameters to converge are explained in Sec. III C; E_corr is converged for each parameter with all the others fixed at the converged (second-to-last) value, increasing the value step by step by 20% in the convergent direction.]

In practice, this corresponds to going from left to right in the flow diagram of Fig. 3, following the green path and performing all the calculations mentioned in the boxes. The first quantity to be computed is the polarizability P(q, ω). For each momentum q and frequency ω, we perform the integral of Eq. (25). As k in the integral is limited by k_f (see Sec. III A), the discretizations of the k- and x-grids, ∆k_P and ∆x_P, have to be converged to the zero-spacing limit. It is also necessary to converge to zero the spacings of the momentum and frequency points of the polarizability (q, ω) grid, controlled by ∆q and ∆ω_P, along with the grid upper limits (to infinity) q_max and ω_max,P. Moving to the central part of the flow chart in Fig. 3, the SOP representation of the polarizability is obtained following the method of Sec. II B, placing the centers of the 2nd-order Lorentzians at the midpoints of the frequency grid, which improves the accuracy of the fit as ∆ω_P → 0. Next, we employ the algorithmic-inversion method to go from the SOP representation of the polarizability to the SOP of the screened potential W (exact to machine precision, see Sec. II E). Using the SOP representations of W (and of G), the self-energy integral (right part of Fig. 3), Eq. (27), is formally identical to the integral in Eq. (25) for the polarizability. The remaining parameters to converge are therefore ∆x_Σ, ∆k_Σ, ∆ω_Σ, k_max, and ω_max,Σ (with the same notation as above). As for the screened potential W, we obtain the SOP representation of the self-energy following Sec. II B, placing 2nd-order Lorentzians at the midpoints of the frequency grid. Finally, we obtain the SOP representation of the Green's function by employing the algorithmic-inversion method. In principle, for each computed quantity that depends on the Green's function G, e.g. via the spectral function or its integrals, we should study the numerical stability of the computational procedure with respect to all of the above parameters. Our numerical approach allows for the evaluation of the Green's function and the related spectral quantities on the real axis, which are then used for the computation of thermodynamic quantities. In this work, we choose to converge the total energy (as obtained in Sec. III B), which is sensitive enough to guarantee a reasonable convergence of the other (spectral) properties of interest here. Changing each parameter individually (increasing or decreasing its value by 20% towards convergence), we study the stability of the total energy against the selected parameter, keeping the values of all the others fixed at a reference point (the baseline calculation of Fig. 5). Each target parameter is then converged separately, until a plateau in the subsequent values of the computed quantity is observed. We evaluate the error on the result from the two most distant values within the plateau. Importantly, it is possible to reduce the number of parameters to converge from 13 to 5 by linking all the grid-spacing and broadening parameters together into a single variable ∆, which ensures convergence for ∆ → 0⁺. Specifically, we bind these parameters together by setting ∆ = ∆k_P = 5∆x_P = (1/6)∆ω_W = (1/9)∆q = (1/25)∆ω_Σ = ∆x_Σ = (1/3)∆k_Σ = (5/4)δ_P = (1/100)δ_Σ. Together with ∆, the grid-limit parameters are converged separately, following the strategy designed above. The converged value obtained for all the calculated densities is ∆ = 0.004 k_f.
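The parameter tying can be expressed as a small helper; the ratios below transcribe the chain of equalities quoted above as we read it from the text, but the helper and its dictionary keys are hypothetical names.

```python
def grids_from_delta(delta):
    """Tie all grid spacings and broadenings to one refinement parameter
    Delta (momenta in units of k_f); ratios as quoted in the text."""
    return {
        "dk_P": delta,            "dx_P": delta / 5,
        "dw_W": 6 * delta,        "dq": 9 * delta,
        "dw_Sigma": 25 * delta,   "dx_Sigma": delta,
        "dk_Sigma": 3 * delta,    "delta_P": 4 * delta / 5,
        "delta_Sigma": 100 * delta,
    }

params = grids_from_delta(0.004)   # converged value quoted in the text
```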
IV. RESULTS

In this section we discuss the results obtained by applying the SOP approach to the one-shot G_0W_0 calculation in the HEG. First we extensively discuss the r_s = 4 case, also one of the most studied in the literature; then, in Sec. IV C, we provide the results for densities ranging from r_s = 1 to r_s = 10.

A. Spectral propagators on the real axis

We start by considering the independent-particle polarizability P_0(q, ω) computed at the G_0 level. In Fig. 6 we compare the imaginary part of P_0, calculated using Eq. (25) and represented on SOP (fitted to 2nd-order Lorentzians with NNLS and then evaluated on a frequency grid, see Secs. II B and II D), with its analytic expression [52] (note that this is the only analytic result we use as a check; all the others are evaluated numerically). The δ → 0⁺ broadening used in G_0 in order to converge the momentum integration does not sensibly affect the calculations. It is worth noting that the use of 2nd-order Lorentzians instead of simple Lorentzians eases this convergence, providing, for the same δ and k-grid spacing, better agreement with the analytic result at δ = 0 (thermodynamic limit, see Sec. II A). From the comparison of the plots we can qualitatively infer that the SOP approach, together with its numerical implementation, is working correctly. Next, we examine the self-energy numerical procedures directly through the G_0W_0 spectral function, shown in Fig. 7. This is obtained by evaluating Eq. (27), representing the self-energy on SOP with 2nd-order Lorentzians, using the algorithmic inversion for the self-energy, and then evaluating the Green's function on a frequency grid. Focusing on the lower satellite as well as on the quasi-particle band, we see that Fig. 7 compares well with Refs. [30,53] (note that, at variance with [30], we use a logarithmic scale to represent the intensity of the spectral function, in order to highlight its structure). The plasmaron peak [53] is clearly visible at small momenta, where the quasi-particle band broadens, while the satellite band in the occupied-frequency range (ω < μ) is sharper. As k approaches k_f, the plasmaron disappears and the quasi-particle band becomes more peaked. At k = k_f the spectral function presents the typical metallic divergence along the quasi-particle band, and the occupied and empty satellites have almost the same weight, in agreement with Ref. [32]. For k > k_f, satellites coming from empty states (ω > μ) become dominant along with the quasi-particle band, and no structure resembling a plasmaron hole appears.

B. Frequency-integrated quantities and thermodynamics

We now study the convergence and stability of the total energies. Following the prescription of Sec. III, we use the spectral function on SOP obtained in Sec. IV A and Eq. (12) to obtain analytically the occupation number n_k and the occupied-band energy ε_k (see Sec. III B), and finally integrate the momenta of Eq. (28) numerically to obtain the total energy. To perform the convergence study on the total energy, we follow the approach described in Sec. III C, which consists in converging all the parameters of the calculation separately. The HEG being a metal, the use of the algorithmic-inversion method to get a spectral function that obeys all the sum rules implied by the Dyson equation (see Sec. II E for details), including the normalization condition for the spectral function, is crucial for obtaining well-converged results. Indeed, the Luttinger discontinuity of n_k makes the value of the Galitskii-Migdal total energy very sensitive to the convergence parameters.
In Fig. 5 we show the convergence study for the correlation energy per particle (total energy minus Fock exchange): the converged value for r_s = 4 is 0.0381 ± 0.0003 Ha, in agreement with Ref. [26] (with a difference of 0.0003 Ha), where the calculations were done along the imaginary axis. In panel (a) of Fig. 8 we plot n_k, and in panel (b) ε_k (as defined in Sec. III B). The occupation number n_k presents a sharp Luttinger discontinuity, which indicates that the broadening used in Eq. (27) is well controlled and does not spoil the quality of the results. In panel (c) of Fig. 8 we plot the total energy resolved over k-contributions, e_k [rhs of Eq. (28)]. As previously mentioned, due to the presence of the Luttinger discontinuity this function is sharp and thus difficult to integrate, at variance with, e.g., the RPA Klein energy functional, which is expected to be smoother [54].

[TABLE II. Fit parameters of the correlation-energy function of Eq. (30) (same functional form as in [55]), obtained using the data of Table I, together with the covariance matrix of the fit; the fitted function is plotted in Fig. 11.]

C. G_0W_0 for a broad range of HEG densities

In this section we report results for the HEG with r_s ranging from 1 to 10, studied at the G_0W_0 level following the same approach used for r_s = 4. In Fig. 9 we show the computed spectral functions obtained with the AIM-SOP approach. In the chosen units (ε_f for the energy and k_f for the momentum), the spectral function for increasing r_s shows an increasing separation between the quasi-particle band and the occupied and empty satellite bands. Indeed, in these units r_s controls the interaction strength [see Eq. (3.24) of Ref. [52]], with the non-interacting gas obtained in the limit r_s → 0 and the strongly interacting gas corresponding to r_s → ∞. Accordingly, the plasmaron peak of the satellite band at small momenta weakens for smaller r_s. The same behaviour can be observed in the occupation factors of Fig. 10 for the different densities. For r_s → 0 the HEG approaches the non-interacting limit, and the occupation number drops from 1 to 0 with increasing k/k_f. Going toward r_s = 10 the jump becomes smaller, since the quasi-particle weight is reduced by the more pronounced satellite bands, as can be seen from Fig. 9. In Table I we report the corresponding total energies computed at the different densities, together with the results available (to our knowledge) in the literature. Since the calculations of Ref. [26] were done on the imaginary axis, we consider those the most accurate for the comparison. We refer to the Supplemental Material [57] for the convergence studies of the total energies at the different densities. We find the largest discrepancy (0.0059 Ha) with respect to the data of Ref. [26] at r_s = 1. This can be rationalized by noting, e.g., that n_k is there a steeper function, which enhances the numerical issues of the Galitskii-Migdal expression discussed in Sec. III B. To deepen the understanding of this numerical discrepancy, aside from the convergence study of Fig. 1 provided in the Supplemental Material [57], we performed an additional calculation refining the parameter ∆ by 20%, aiming at increasing the accuracy of the integration grids to target the steeper behaviour at r_s = 1. The result, 0.0736 Ha against the 0.0749 Ha of Table I, is acceptable considering the 0.0015 Ha error bar of Table I. Most importantly, we stress that, at variance with Ref. [26], our procedure provides not only accurate frequency-integrated quantities (e.g., the total energy) but also precise spectral properties on the real axis (key quantities for spectroscopy).

[FIG. 11. Correlation energy in Hartree units for several densities within the G_0W_0 approximation in the HEG. In green, the results found in this work (see Sec. IV C), compared with those found in [35] in blue and in [26] in orange; the green line is the correlation-energy fit of Eq. (30) (same functional form as in [55]) on the present data. For reference, in dashed grey, the Quantum Monte Carlo data obtained by Ceperley and Alder [56] in the fit made by Perdew and Zunger [55].]

In Fig. 11 we plot the correlation energy of Table I as a function of r_s, including the Perdew-Zunger (PZ) fit of the Quantum Monte Carlo (QMC) Ceperley-Alder data as a reference [55,56]. We also exploit the same PZ functional form to fit our data, providing in Table II the parameters γ, β_1, and β_2 of the fitting function for the correlation energy of the HEG (in Hartree), ε_c(r_s) = γ/(1 + β_1 √r_s + β_2 r_s) (30), together with the covariance matrix of the fit. In Fig. 11 we plot the result of the fit as a green line.
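A sketch of such a fit of Eq. (30) follows; only the r_s = 1 and r_s = 4 magnitudes are quoted in the text, so the remaining data points below are illustrative placeholders, and the overall negative sign is our convention for correlation energies.

```python
import numpy as np
from scipy.optimize import curve_fit

def eps_c(rs, gamma, beta1, beta2):
    """Perdew-Zunger form of Eq. (30): gamma / (1 + beta1 sqrt(rs) + beta2 rs)."""
    return gamma / (1.0 + beta1 * np.sqrt(rs) + beta2 * rs)

rs = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
# Only -0.0749 (rs = 1) and -0.0381 (rs = 4) come from the text;
# the other values are illustrative placeholders.
ec = np.array([-0.0749, -0.0540, -0.0381, -0.0310, -0.0265, -0.0230])
(gamma, beta1, beta2), cov = curve_fit(eps_c, rs, ec, p0=(-0.15, 1.0, 0.3))
```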
V. CONCLUSIONS

In this work we introduce the novel algorithmic-inversion method on sum over poles (AIM-SOP) to handle frequency-dependent quantities in dynamical theories. Specializing to the case of many-body perturbation theory, we show that the AIM-SOP provides a unified formalism for the spectral and thermodynamic properties of an interacting-electron system. Expanding all frequency-dependent quantities on SOP, we use the AIM-SOP to solve Dyson-like equations exactly and at all frequencies, obtaining analytic frequency-dependent (spectral) and frequency-integrated (thermodynamic) properties. This is made possible by mapping the Dyson equation onto an effective Hamiltonian whose dimension is controlled by the number of poles in the SOP of the self-energy (see Sec. II E). The transformation of frequency-dependent quantities into SOP is performed by exploiting the representation of their spectral functions on different basis sets: besides the standard choice of a basis of Lorentzians, we introduce n-th order generalized Lorentzian basis elements (see Sec. II A) with improved decay properties. This allows for better numerical stability when transforming a propagator to SOP (see Sec. II B), improved analytic properties for calculating the thermodynamic quantities (see Sec. II C), and an acceleration of the convergence to the thermodynamic limit (zero broadening and infinite k-space sampling). Moreover, once the SOP representation of a propagator is known, we use the Cauchy residue theorem to calculate convolutions and (occupied) moments, accessing both spectral and thermodynamic quantities (see Sec. II C). As a working example of the AIM-SOP approach, we apply it to the paradigmatic case of many-body perturbation theory at the G_0W_0 level for the HEG at several densities (r_s from 1 to 10). Using AIM-SOP, we are able to provide accurate spectra simultaneously with precise frequency-integrated quantities (e.g., occupation numbers and total energies). At the available densities, we find very good agreement with Refs. [30,53] for the spectral function. Moving to the total energy, we provide an in-depth study of the stability and convergence of our results, finding quantitative agreement, at the available r_s, with Ref. [26], where the calculations are performed on the imaginary axis.
Although in this article we study a homogeneous system as a test case, the AIM-SOP approach aims at treating realistic non-homogeneous systems in the more general framework of dynamical embedding theories, offering a full-frequency representation of potentials and propagators, the flexibility required for self-consistent calculations, and the exact solution of Dyson-like equations.

Appendix A

In this Appendix we obtain the SOP representation of a Green's function from an n-th order Lorentzian spectral function. Recalling Sec. II A, the discrete time-ordered Hilbert transform [Eq. (5)] of a (non-normalized) n-th order Lorentzian, Eq. (A1), induces a SOP representation for the Green's function (see Sec. II A). The expression in Eq. (A1) can be computed using the residue theorem. Closing the contour in the upper/lower half-plane for ω ≶ μ, the poles of the integrand, ζ_{j,m} = ε_j + e^{iπ(1+2m)/(2n)} δ_j, come only from the spectral function A. Using L'Hopital's rule, the residues of the integrand reduce to Eq. (A2). Thus, taking the limit of the contour onto the real axis, the poles and residues of the SOP for G are those given in Eqs. (10) and (9). The normalization of the n-th order Lorentzian is obtained by summing the α_m of Eq. (9) and using the geometric sum [Eq. (A3)].

Appendix B: Moments of a propagator and occupied moments of its spectral function

In this section we discuss the equality between the (regularized) moments of a propagator, Eq. (12), and the occupied moments of its spectral function. For simplicity of notation we restrict ourselves to the case of a single n-th order Lorentzian L^n_δ, as defined in Eq. (7), and focus on the m = 2(n − 1) case [again, we suppose here that the integral in Eq. (12) converges, which is assured for m = 0 and m = 1 but requires stronger regularization for higher degrees]; the resulting chain of equalities is Eq. (B1), where A(ω) is the spectral function of G. To go from the second to the third line of Eq. (B1), we used the 1/ω^{2n} decay of the n-th order Lorentzian and applied the dominated convergence theorem, which allows the 0⁺ limit to be performed inside the integral. The same derivation holds for the lower-degree moments. For the higher-order moments, m > 2(n − 1), it is not possible to discard the e^{iω0⁺} factor in the integral, so E_{m>2(n−1)}[G] becomes complex. The equality between E_{m>2(n−1)}[G] [first and second lines of Eq. (B1)] and the occupied moments of A [third line of Eq. (B1)] is then lost, the integral for the occupied moments of A being divergent. The divergence arises because we cannot exchange the limit of the finite representation (controlled by δ_i) with the lower bound a → −∞ of the integral ∫_a^μ dω ω^{2(n−1)} A(ω). Numerically, this translates into performing the two limits in order: fix the lower bound of the integral and control the stability of the integral for δ_i → 0, then lower a and converge the result again for δ_i → 0, repeating until both convergences are achieved.
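For concreteness, a normalized n-th order Lorentzian consistent with the pole pattern ζ_{j,m} quoted above is L^n_δ(ω) = (n sin(π/2n)/π) δ^{2n−1}/((ω − ε)^{2n} + δ^{2n}); the sketch below assumes this form (it reduces to the ordinary Lorentzian at n = 1 and decays as 1/ω^{2n}) and checks the normalization numerically.

```python
import numpy as np

def lorentzian_n(w, center, delta, n):
    """n-th order generalized Lorentzian, normalized to unit area
    (assumed form; poles at center + delta * exp(i*pi*(1+2m)/(2n)))."""
    x = w - center
    norm = n * np.sin(np.pi / (2 * n)) / np.pi
    return norm * delta ** (2 * n - 1) / (x ** (2 * n) + delta ** (2 * n))

w = np.linspace(-200.0, 200.0, 2_000_001)
dw = w[1] - w[0]
for n in (1, 2, 3):
    print(n, lorentzian_n(w, 0.0, 0.5, n).sum() * dw)
# Each area approaches 1; the n = 1 tail (1/w^2) converges slowest,
# illustrating the improved decay of the higher-order basis elements.
```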
Supplemental Material: A unified Green's function approach for spectral and thermodynamic properties from algorithmic inversion of dynamical potentials

This manuscript contains the supplemental material for the paper "A unified Green's function approach for spectral and thermodynamic properties from algorithmic inversion of dynamical potentials".

[FIG. 1. Correlation-energy (E_corr) convergence study, obtained with the Galitskii-Migdal formula using a Green's function from a G_0W_0 calculation for the HEG at r_s = 1; see Fig. 5 of [1] for further reference.]

I. CONVERGENCE STUDIES AT SEVERAL DENSITIES OF THE HEG

In this section we show the convergence studies at all the densities of [1]. Following the method presented in Sec. III, and particularly Sec. III C, of [1] for r_s = 4, we study the stability and convergence of the correlation (total minus Fock) energy per particle for r_s from 1 to 10. In Figs. 1-10 we plot the convergence studies at the different densities.
11,495.4
2021-09-16T00:00:00.000
[ "Mathematics" ]
Periodic matrix difference equations and companion matrices in blocks: some applications

This study is devoted to some periodic matrix difference equations, through their associated products of companion matrices in blocks. Linear recursive sequences in the algebra of square matrices in blocks and the generalized Cayley-Hamilton theorem are considered for working out some results about the powers of matrices in blocks. Two algorithms for computing the finite product of periodic companion matrices in blocks are built. Illustrative examples and applications are considered to demonstrate the effectiveness of our approach.

Introduction

It is well known that the scalar homogeneous linear difference equations of order r ≥ 2, defined by y_{n+r} = a_1(n) y_{n+r-1} + · · · + a_r(n) y_n, for n ≥ 0, (1.1), where the coefficients a_1(n), . . . , a_r(n) are functions of n, occur in several fields of mathematics and the applied sciences. Several methods have been provided in the literature for solving Eq. (1.1) (see, for example, [15,17] and references therein). Recently, the homogeneous linear difference equations (1.1) with periodic coefficients, i.e., a_j(n + p) = a_j(n), have been solved in [4,5], using properties of the generalized Fibonacci sequences in the algebra of square matrices. More precisely, in [4], Eq. (1.1) has been studied under its equivalent matrix form Y(n + 1) = C(n) Y(n), for n ≥ 0, (1.2), where Y(n) = (y_{n+r-1}, . . . , y_n)^T and C(n) is the companion matrix whose first row is (a_1(n), . . . , a_r(n)), with the identity below the first row and zeros elsewhere. In recent years, the product of companion matrices has attracted much attention, because this product occurs in various fields of mathematics and the applied sciences, such as the Floquet theory of linear difference equations (see [4,5,16,17]). Diverse methods for computing the product of companion matrices have been proposed in the literature. For instance, in [16], the authors developed an explicit formula for the entries of the product of companion matrices, and then applied their results to solve linear difference equations with variable coefficients. Another expression for the product of companion matrices was obtained in [17], based on the study of solutions of non-homogeneous and homogeneous linear difference equations of order N with variable coefficients. Recently, it was shown in [4,5] that the product of companion matrices plays a central role in investigating a large class of periodic discrete homogeneous difference equations via generalized Fibonacci sequences. Moreover, through the key tool of generalized Fibonacci sequences, there are still some interesting and relevant problems that can be examined. In this paper, we aim to study the linear matrix difference equations defined by Y_{n+r} = A_1(n) Y_{n+r-1} + · · · + A_r(n) Y_n, for n ≥ 0, (1.3), where Y_0, · · · , Y_{r-1} are in C^d and stand for the initial values, and the coefficients A_1(n), · · · , A_r(n) are square matrices in C^{d×d}, the algebra of square matrices of order d with complex coefficients, representing p-periodic matrix functions of n; that is, A_j(n + p) = A_j(n) for every n ≥ 0, where p = min{N ∈ N, N ≥ 1 : A_j(n + N) = A_j(n) for j = 1, ..., r and n ≥ 0}. The class of discrete linear matrix equations (1.3) appears in many applied fields, such as economics, population dynamics, and signal processing. For instance, periodic matrix models are often used to study the seasonal temporal variation of structured populations (see [6] for example). They can also occur in many practical control systems (see [20] for example).
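The same companion construction, with d × d blocks in place of scalars, underlies Eq. (2.2) below; as a forward illustration, the following sketch assembles a block companion matrix under one common convention (blocks in the first block-row, identity blocks on the subdiagonal), with a helper name of our own choosing.

```python
import numpy as np

def block_companion(A_blocks):
    """Assemble C = C[A_1, ..., A_r] of order r*d: the blocks A_j fill the
    first block-row, and identity blocks below shift the stacked vector
    (Y_{n+r-1}, ..., Y_n)^T one step forward."""
    r = len(A_blocks)
    d = A_blocks[0].shape[0]
    C = np.zeros((r * d, r * d))
    for j, Aj in enumerate(A_blocks):
        C[:d, j * d:(j + 1) * d] = Aj
    for i in range(1, r):
        C[i * d:(i + 1) * d, (i - 1) * d:i * d] = np.eye(d)
    return C
```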
In our exploration, we study the properties of the periodic matrix difference equations (1.3) through their close relation with the product of companion matrices in blocks. First, we formulate the main result on the solutions of the linear matrix difference equation (1.3) in terms of the product of companion matrices in blocks and the powers of a matrix in blocks. In doing so, we utilize the generalized Cayley-Hamilton theorem to derive a new result that allows us to compute the powers of matrices in blocks. Moreover, we outline a recursive method leading to two algorithms for computing the finite product of companion matrices in blocks. To highlight the importance of our results, special cases, significant examples and applications are provided. The outline of this study is as follows. Section 2 is devoted to some basic properties of the periodic matrix difference equations (1.3), where the product of periodic companion matrices in blocks is considered. Section 3 concerns the study of the powers of matrices in the algebra of square matrices in blocks. More precisely, using the generalized Cayley-Hamilton theorem and linear recursiveness in the algebra of square matrices in blocks, we give an explicit expression for the powers of a square matrix in blocks. Here, the Kronecker product (or tensor product) of matrices plays a central role. In Sect. 4, we develop two algorithms for computing the finite product of companion matrices in blocks, where a recursive sequence of matrices is considered. In Sect. 5, gathering the results of Sects. 2, 3 and 4, we employ them to examine some special classes of the periodic matrix difference equations (1.3).

Periodic matrix difference equations: general setting

In the same way as in the scalar case, the matrix equation associated with Eq. (1.3) is given by Y(n + 1) = C(n) Y(n), for n ≥ 0, (2.1), where Y(n) = (Y_{n+r-1}, ..., Y_n)^T ∈ C^{dr} and C(n) is a matrix of order dr, i.e., in C^{dr×dr}, given by (2.2), with the blocks A_1(n), ..., A_r(n) in its first block-row, identity blocks 1_{d×d} on the block subdiagonal and zero blocks 0_{d×d} elsewhere. We observe that the matrix C(n) is a companion matrix in blocks. In the sequel, we use the notation C(n) = C[A_1(n), A_2(n), · · · , A_r(n)]_{r×r} for these companion matrices in blocks of order r. As in the scalar case, the main problem in studying the matrix equation (2.1) reduces to the study of the product of companion matrices in blocks C(n) C(n − 1) · · · C(0). Since A_j(n + p) = A_j(n) for every j (1 ≤ j ≤ r) and n ≥ 0, we infer that C(n + p) = C(n) for every n ≥ 0, where p ≥ 1 is the period. Here we are concerned with the finite product of companion matrices in blocks. It is worthwhile to point out that this class of companion matrices in blocks arises in various mathematical and applied fields (see, for example, [18]); in this work, we emphasize its key role in providing the solutions of Eq. (2.1). Thus, for every n ≡ i [p], i.e., n = kp + i (0 ≤ i ≤ p − 1), the solution of the matrix equations (2.1) and (2.2) related to the periodic matrix difference equation (1.3) is given as follows.

Theorem 2.1. Let B = C(p − 1) · · · C(0). Then, for every n = kp + i (0 ≤ i ≤ p − 1), Y(n) = C(i − 1) · · · C(0) B^k Y(0). (2.3)

Theorem 2.1 shows that there is a close link between the periodic matrix difference equations and the product of companion matrices in blocks. More precisely, in expression (2.3) there appear a finite product of companion matrices in blocks and the powers of the matrix B = C(p − 1) · · · C(0), which is itself a finite product of companion matrices in blocks.
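A sketch of the solution formula (2.3): propagate Y(n) by splitting n = kp + i into k applications of B = C(p − 1) · · · C(0) plus the leftover factors; `C_of` is a hypothetical callback returning the dr × dr companion matrix C(m).

```python
import numpy as np

def propagate(C_of, p, Y0, n):
    """Solve Y(n+1) = C(n) Y(n) with p-periodic C(n) via
    Y(kp + i) = C(i-1) ... C(0) B^k Y(0), B = C(p-1) ... C(0)."""
    B = np.eye(len(Y0))
    for m in range(p):
        B = C_of(m) @ B          # accumulates B = C(p-1) ... C(0)
    k, i = divmod(n, p)
    Y = np.linalg.matrix_power(B, k) @ Y0
    for m in range(i):
        Y = C_of(m) @ Y          # leftover factors C(i-1) ... C(0)
    return Y
```

Validating this against a direct step-by-step iteration of (2.1) is a convenient consistency check for any concrete choice of the blocks.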
To establish more results concerning the explicit representation of the solutions of the periodic matrix equations (2.1) and (2.2), we are led to study the two following problems. The first one is related to the powers of matrices in blocks, and the second concerns the finite product of companion matrices in blocks. For the first problem, our approach revolves around the generalized Cayley-Hamilton theorem, whereas for the second problem we build two recursive algorithms for computing this finite product of companion matrices in blocks.

Kronecker product and linear recursive relations in the algebra of square matrices in blocks

In this subsection, we are interested in the use of the matrix Kronecker product for studying some linear recursive relations in the algebra of square matrices in blocks, and in their use for the computation of the powers of matrices in blocks through the generalized Cayley-Hamilton theorem. In fact, using the Kronecker product, we extend the results of [3] to the algebra of square matrices in blocks. For reasons of clarity, let us recall that the Kronecker product can be defined for two matrices of arbitrary size over any ring; in the sequel of this study, we consider only square matrices whose entries are in the field of real numbers R or of complex numbers C (see, for example, [11,19]). Let us start by recalling the definition of the Kronecker product. Let C^{d×d} and C^{r×r} be the algebras of square matrices of order d ≥ 1 and r ≥ 1, respectively.

Definition 3.1. The Kronecker product of the matrix A = (a_{ij})_{1≤i,j≤r} ∈ C^{r×r} with the matrix B = (b_{ij})_{1≤i,j≤d} ∈ C^{d×d} is defined as A ⊗ B = (a_{ij} B)_{1≤i,j≤r}. (3.1)

Note that there are other names for the Kronecker product, such as tensor product, direct product or left direct product (see, for example, [11]). For more details, an interesting overview of the Kronecker product is given by K. Schnack in [19]. The Kronecker product has several important algebraic properties; we recall only those that we will use in this section. Let us first remark that for r = 1 we have A = a_{1,1} ∈ C^{1×1} = C, so the tensor product (3.1) takes the form A ⊗ B = a_{1,1} B, which shows that the tensor product coincides with the usual multiplication of matrices by scalars, or, equivalently, that the tensor product can be viewed as an extension of the usual multiplication of matrices by scalars. Expression (3.1) shows that A ⊗ B is an element of GL(r, C^{d×d}), the algebra of square matrices of order r ≥ 1 with coefficients in C^{d×d}. Moreover, A ⊗ B can also be identified with an element of C^{rd×rd}, the algebra of square matrices of order rd with coefficients in C. Therefore, we have the known isomorphisms of algebras C^{r×r} ⊗ C^{d×d} ≅ GL(r, C^{d×d}) ≅ C^{rd×rd}. In the sequel, we will use the notation M^d_r to designate, without distinction, the previous realizations of C^{r×r} ⊗ C^{d×d}. A method for computing the powers of the matrices of C^{r×r}, the algebra of square matrices, has been considered in [3]. This method is based on linear recursive sequences of Fibonacci type in the algebra of square matrices C^{r×r}, and it can be extended here as follows. More precisely, for computing the powers of a matrix in blocks, we introduce the notion of linear recursive sequences of Fibonacci type in the algebra of square matrices M^d_r = GL(r, C^{d×d}).
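A quick numerical illustration of Definition 3.1 and of the mixed-product property (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), which underlies the identifications above; numpy's `kron` implements exactly the block pattern (a_ij B).

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.eye(3)
K = np.kron(A, B)                      # blocks a_ij * B, shape (6, 6)

# Mixed-product property used throughout: (A (x) B)(C (x) D) = (AC) (x) (BD)
C = np.array([[0.0, 1.0], [1.0, 0.0]])
D = np.diag([1.0, 2.0, 3.0])
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
```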
Let A_0, A_1, · · · , A_{r−1} be a family of commuting matrices in GL(r, C^{d×d}), and let B_0, B_1, · · · , B_{r−1} (r > 2) be a given sequence in GL(r, C^{d×d}); the sequence {Y_n}_{n≥0} defined by the order-r recursion with coefficients A_0, · · · , A_{r−1} and initial terms B_0, · · · , B_{r−1} is called a generalized Fibonacci sequence, where A_0, · · · , A_{r−1} are the coefficients and Y_0, Y_1, · · · , Y_{r−1} stand for the initial conditions. As was shown in [3], Y_n admits the closed expression (3.2), where W_s = A_{r−1} B_s + · · · + A_s B_{r−1} for s = 0, 1, · · · , r − 1, and where the combinatorial coefficients ρ(n, r) of (3.3) satisfy ρ(r, r) = 1_{r×r} = diag(1_{d×d}, ..., 1_{d×d}) = 1_{r×r} ⊗ 1_{d×d} (the r-by-r block-diagonal matrix whose main-diagonal entries are all 1_{d×d}) and ρ(n, r) = 0_{r×r} ⊗ 0_{d×d} if n < r. The preceding expressions (3.2) and (3.3), combined with the generalized Cayley-Hamilton theorem, are useful for computing the powers of the matrix in blocks B = C(p − 1) · · · C(0). For this purpose, we employ the result on generalized Fibonacci sequences, which allows us to obtain a tractable expression for the powers of a block matrix A of GL(r, C^{d×d}).

Generalized Cayley-Hamilton theorem and powers of companion matrices in blocks

We first recall the generalized Cayley-Hamilton theorem for block matrices given in [13,14]. Consider the square matrix in blocks A of (3.4). Following [14], the matrix characteristic polynomial of A is given by (3.5), where S ∈ C^{d×d} is the matrix (block) eigenvalue of A and ⊗ denotes the Kronecker product of matrices. The matrix determinant (3.5) is obtained by developing the determinant of the matrix, considering its commuting blocks as scalar entries (see [13,14]). More precisely, it was shown in [13,14] that the matrices D_i (i = 0, · · · , r − 1) are obtained by developing the determinant of the matrix [1_{d×d} ⊗ S − A], considering its blocks as scalar entries; this yields Eq. (3.6). We now turn our attention to the theory of generalized Fibonacci sequences, to extend some properties established in the case of matrices with scalar coefficients to the case of matrices in blocks. Equation (3.6) implies that the powers A^n satisfy, for every n ≥ r, a linear recursion of order r whose block coefficients are determined by the D_i; in other words, the sequence {A^n}_{n≥0} is nothing but a generalized Fibonacci sequence of order r with matrix coefficients. Following the same path as for matrices with scalar coefficients in [3], we obtain the following result for block matrices.

Theorem 3.3. Let A be a matrix in blocks with matrix characteristic polynomial P(S). Then A^n is given by the closed expressions (3.7) and (3.8), with ρ(r, r) = 1_{r×r} ⊗ 1_{d×d} and ρ(p, r) = 1_{r×r} ⊗ 0_{d×d} (i.e., the zero matrix) for p < r.

It seems to us that the result of Theorem 3.3 is not current in the literature. Compared with related results on this subject, we establish here a tractable expression that can be a key to resolving diverse questions in this area, notably those on similar matrix equations (see, for example, [2,7-9,13,14]). To shed more light on the content of Theorem 3.3, we examine the following special situation. Suppose that r = 2 and A = [[A_11, A_12], [0_{d×d}, A_22]], where A_11, A_12 and A_22 are matrices of order d that, in addition, commute pairwise. Employing expressions (3.7) and (3.8), we obtain A^n explicitly. What is more, in this case we have ρ(n, 2) expressed as a sum over indices k_0 + 2k_1 = n − 2, from which explicit expressions of ρ(n, 2) follow. Therefore, we have the following proposition.

Proposition 3.4. Under the preceding data, A^n takes the resulting closed form. Proposition 3.4 and its numerical application illustrate the effectiveness of Theorem 3.3.
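The r = 2 block-triangular setting of Proposition 3.4 can be checked numerically: with pairwise-commuting blocks, the matrix characteristic polynomial P(S) = S² − (A_11 + A_22)S + A_11 A_22 annihilates A when its block coefficients act through 1_{2×2} ⊗ D_i. The sketch below is our reconstruction of that check, not code from the paper.

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
M = rng.standard_normal((d, d))
# Commuting blocks: polynomials in the same matrix M commute pairwise.
A11, A12, A22 = M @ M + np.eye(d), 2 * M, M - np.eye(d)
Z = np.zeros((d, d))
A = np.block([[A11, A12], [Z, A22]])

# Block characteristic polynomial P(S) = S^2 - D0 S + D1 with
# D0 = A11 + A22 and D1 = A11 A22 (determinant over commuting blocks).
D0, D1 = A11 + A22, A11 @ A22
I2 = np.eye(2)
residual = A @ A - np.kron(I2, D0) @ A + np.kron(I2, D1)
print(np.abs(residual).max())   # ~1e-12: generalized Cayley-Hamilton holds
```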
Moreover, our main goal is to apply Theorem 3.3 to calculate the powers of the matrix B = C(p − 1) · · · C(0), with the aim of providing solutions of the periodic matrix difference equations (2.1) and (2.2) in some special cases, which are further exploited in Sect. 5.

Algorithm 1: product of companion matrices in blocks

In this section, we develop the first algorithm for computing the finite product of companion matrices in blocks. Recall that this product appears in the solution (2.3) of Theorem 2.1. Let us consider the companion matrix in blocks (2.2), namely C(m) = C[A_1(m), · · · , A_r(m)]_{r×r}, where A_1(m), · · · , A_r(m) are matrices of order d. We shall give an explicit formula for the matrix B^(m) = C(1) · · · C(m). The main idea behind this algorithm is to build an iterative formula that calculates the entries B^(m)_{ij} of the matrix B^(m) recursively, starting from the entries of the matrix C(1). More precisely, this recursive process is based on a sequence of matrices D^(k)(m), whose entries are constructed recursively from the given sequence A_j(m): we initialize D^(1)_j from C(1), define D^(2)_j(m) by a substitution relation, substitute D^(2)_j(m) into the formula for B^(m)_{ij}, and continue the same recurrent process; finally, by recurrence, we arrive at the following result.

Theorem 4.1. For every m ≤ r, the entries of B^(m) are given by Eq. (4.1), and for every k > 2 the auxiliary matrices satisfy Eq. (4.2).

It should be made clear that, since the product of matrices is not commutative, the order of the matrices in formulas (4.1) and (4.2) needs to be respected. For a further illustration of Theorem 4.1, we examine the following special case.

Proposition 4.2. Consider the companion matrix in blocks C(m) = C[A_1(m), 0_{d×d}, · · · , 0_{d×d}, A_r(m)]_{r×r}. Then, for every m ≥ 2, the entries of the matrix B^(m) are given by the recursion of Theorem 4.1; in particular, for m = 2 a straightforward computation gives B^(2) explicitly, and for m = 3 one obtains B^(3) in the same manner.

Consider the following numerical example. Suppose that r = d = 2, where 0_{2×2} is the null matrix of order 2 and 1_{2×2} is the identity matrix of order 2, and C_11(1) = diag(1, 2). To the best of our knowledge, no comparable formula is available in the literature for the entries {α^(m)_{ij}}_{1≤i,j≤r} of the product of companion matrices B^(m) = C(1) · · · C(m), our method being recursive and novel. We can now proceed analogously to Theorem 4.1 to obtain a new expression for α^(m)_{ij}, given by the following corollary.

Corollary 4.3. Let α^(m)_{ij} be the (i, j)-entry of the product B^(m). Then closed expressions hold for m ≤ r, in terms of the entries α^(1)_{ir} of C(1), and for m > r.

In order to shed more light on the result of Corollary 4.3, we study the case m = 3. Let {a_j(k); 1 ≤ j ≤ r, k = 1, 2, 3} be a set of real or complex numbers, and consider the three companion matrices built from these coefficients. Applying the result of Corollary 4.3, a direct computation leads to the entries of B^(3). We illustrate this situation by a numerical application.

Algorithm 2: product of companion matrices in blocks

In this section, we provide another recursive algorithm to calculate the entries of the matrix B^(m) = C(m) · · · C(1) = C(m) B^(m−1); this approach relies on the techniques of generalized Fibonacci sequences in the algebra of square matrices in blocks given in Sect. 3.2. We consider the family of generalized Fibonacci sequences (4.3), where A_1(k), · · · , A_r(k) are matrices of order d, with mutually different sets of initial conditions defined as follows: for s = 1, the initial conditions of the sequence {Y^(k)_n(1)}_{n≥0} are given by Eq. (4.4), and for 2 ≤ s ≤ m they are given, for every 0 ≤ l ≤ r − 1, by Eq. (4.5).
Therefore, using a straightforward computation, it ensues that the matrix B^(m−1) can be rewritten in the displayed form. Then, employing the recursive relation (4.3), satisfied at order r by the sequences Y^(k)_n(s), we observe by induction that for m < r (m ≥ 1) the entries of B^(m) are expressed through these sequences, and that for m ≥ r an analogous expression holds for every 1 ≤ i, j ≤ r. The main idea here is to take advantage of the fact that the sequences {Y^(k)_n(s)} all satisfy the same recursion of order r. We can also observe that, for the two preceding algorithms, no commutativity condition is necessary.

Applications

In this section, we make use of all the material provided in the above sections to explore some special cases of the p-periodic matrix difference equations in blocks. Some particular cases are treated and some examples are given, to make this study more accessible.

Solutions of the matrix equation Y_{n+2} = A(n)Y_n, where A(n) is p-periodic

Consider the periodic matrix difference equation Y_{n+2} = A(n) Y_n, n ≥ 0, (5.1), where A(n) is a p-periodic (with period p ≥ 2) square matrix of order d, and Y_0, Y_1 stand for the initial conditions. We assume that A(i)A(j) = A(j)A(i) for 0 ≤ i, j ≤ p − 1. Equation (5.1) can be written in the matrix form (5.2); the matrix C(n) there is p-periodic because A(n) is p-periodic. We consider the matrix B = C(p − 1)C(p − 2) · · · C(1)C(0). Employing Theorem 2.1, we need to distinguish the two cases p = 2 and p > 2. If p = 2, then for every n = 2k (k ≥ 1) the matrix equation (5.2) takes the displayed form. We start by giving the expression of B in terms of A(p − 1), · · · , A(1), A(0), using Algorithm 1 (see Sect. 4.1). For clarity, we consider the case r = 2. For p = 2, a straightforward application of Algorithm 1 gives the entries of the matrix B in terms of the auxiliary matrices D^(k)_1(2), where D^(0)_1(2) = 0_{d×d}; the resulting solution is recorded in Proposition 5.1.

Example 5.2. Consider the scalar linear difference equation x_{n+2} = a(n) x_n, where a : N → R is a 2-periodic scalar function of n and x_0, x_1 stand for the initial conditions. Then the matrix A(n) reduces to the single element a(n), and the unique solution of Eq. (5.1) is given by Proposition 5.1 as in (5.3). This class of equations has been studied in [5], where the method consists in transforming the equation into the two decoupled forms (5.5)-(5.6). Starting from (5.5) and (5.6), a direct computation implies that x_{2n} = a(0)^n x_0 and x_{2n+1} = a(1)^n x_1. Therefore, the two solutions (5.3) and (5.5)-(5.6) coincide.

We now turn to the case where r = 2 and the period is p ≥ 3. In this case, the entries of the matrix B = C(1)C(0) are given by the displayed expressions, and for every k > 2 a recursion holds. Two cases need to be distinguished. When p is even, a straightforward computation gives B^k directly for every k ≥ 2. When p is odd, the computation of the powers B^k for k ≥ 2 requires Theorem 3.3 of Sect. 3.2: in this case, it follows from Theorem 3.3 that for every k ≥ 1 the power B^k is expressed through ρ(k, 2), and once again we need to distinguish the two cases of k odd and k even when giving the expression of ρ(k, 2). With these results at our disposal, we can express the solution of the matrix equation Y_{n+2} = A(n)Y_n when A(n) is periodic with period p > 2, as follows.
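A direct iteration of Eq. (5.1) is a useful reference against which the closed forms of Propositions 5.1 and 5.3 can be validated numerically; the sketch assumes a hypothetical callback `A_of(m)` returning the d × d coefficient A(m).

```python
import numpy as np

def solve_order2(A_of, p, Y0, Y1, n):
    """Iterate Y(m+2) = A(m) Y(m) with p-periodic coefficient A(m).
    Even- and odd-indexed subsequences evolve independently, each as a
    product of the periodic matrices A(m) over its own index set."""
    Ys = [np.asarray(Y0), np.asarray(Y1)]
    for m in range(n - 1):
        Ys.append(A_of(m % p) @ Ys[m])
    return Ys[n]
```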
Proposition 5.3. Let p ≥ 3 be an even integer, and consider the p-periodic matrix equation Y_{n+2} = A(n)Y_n, n ≥ 0, with the initial-condition vector (Y_1, Y_0)^T. Then, for every n = kp + i (i = 0, · · · , p − 1), the unique solution is given by the displayed formulas, with a separate expression for every i = 3, · · · , p − 2. Similarly, when the period p is odd, two cases have to be considered: the vector solution takes one form when k is even and another when k is odd. For further illustration, we propose an example, again distinguishing the cases of k even and k odd.

Solutions of the matrix equation Y_{n+r} = A(n)Y_n, where A(n) is p-periodic

In this subsection, we apply Algorithm 2 to solve the equation Y_{n+r} = A(n) Y_n, (5.7), where A(n) is a p-periodic matrix (with period p ≥ 2) of order d and Y_0, · · · , Y_{r−1} stand for the initial conditions. We assume that the matrices A(i) commute pairwise. In a similar way as before, we can formulate Eq. (5.7) as the matrix equation (5.8). Consider the product of companion matrices in blocks B = C(p − 1) · · · C(0); for 1 ≤ m ≤ p − 1 we denote by B^(m) the partial product of companion matrices in blocks, and we point out that for m = p we have B = B^(p). To give the form of the matrix B, we propose to apply Algorithm 2 (Algorithm 1 can also be used here). We need to distinguish three cases. When p = r, a direct computation using Algorithm 2 allows us to obtain B, and therefore B^k for every k ≥ 1. In this case, the solution of the p-periodic matrix equation (5.7) is given by the following proposition.

Proposition 5.5. Consider the p-periodic matrix difference equation (5.7) with the initial-condition vector (Y_{r−1}, · · · , Y_0)^T, and suppose that the period satisfies p = r. Then, for n = kp, the solution of Eq. (5.7) is given by the first displayed formula; if n = kp + i for i = 1, · · · , p − 2, by the second; and for n = (k + 1)p − 1, by the third.

When p < r, a similar application of Algorithm 2 yields B, and a straightforward computation then gives B^k. To express the solution of Eq. (5.7) when p < r, we need to compute the powers B^k of the matrix B using Theorem 3.3. Unfortunately, it is not straightforward to derive the expression of the matrix polynomial P(S) = det[1_{r×r} ⊗ S − A] = S^r − D_0 S^{r−1} + · · · − D_{r−1} for B. We therefore propose to examine an example with p = 3 and r = 5; for instance, when k = 5k' + 4, we have Y_{kp+l+1} = D^{k'+1} Y_{l−2} for l = 2, 3, 4, 5, 6. Finally, for p > r, we once again need to discuss two cases. The first case is p = kr with k > 1: following the same approach, again using Algorithm 2, we get B = (B_{i,j})_{1≤i,j≤r} and, therefore, B^k for every k ≥ 1. The second case is p = kr + s with s = 1, · · · , r − 1: a direct computation then shows that the entries of the matrix B = (B_{i,j})_{1≤i,j≤r} are given by the displayed expressions, where 0_{k×m} is the null matrix of order k × m. For instance, when k = 3k' + 2, we have Y_{kp+l+1} = D^{k'+1} Y_{l−3} for l = 3, 4, 5, and Y_{kp+l+1} = (∏_j A(j)) Y_l for l = 0, 1, 2, where we denote D = A(3)A(2)A(1)A(0).

Discussion and concluding remarks

In this paper, we have been interested in the study of a class of periodic matrix difference equations. While formulating the result on the solutions of this class of equations, we were led to deal with two new problems. The first one concerns the expression of the powers of matrices in blocks. To this end, we proposed a method for computing the powers of matrices in blocks based on linear recursive sequences of Fibonacci type in the algebra of square matrices and on the generalized Cayley-Hamilton theorem. Here, the combinatorial expression for the linear sequences of Fibonacci type in the algebra of square matrices GL(r, C^{d×d}) and the Kronecker product play a central role.
The second problem deals with the computation of the product of companion matrices in blocks. To this end, we developed two recursive algorithms to calculate the entries of the resulting matrix product: Algorithm 1 is an iterative process based on a sequence of auxiliary matrices, while Algorithm 2 relies on a family of Fibonacci sequences in the algebra of square matrices. General results are established and special cases are considered. To the best of our knowledge, the results of this investigation constitute a pilot study for solving periodic matrix difference equations. It is worth noting that, for reasons of clarity and simplicity, the matrices in the examples illustrating our results are mostly small; however, the general results and algorithms show that our method can work for matrices of large size. On the other hand, the programming of the two algorithms is itself of interest, both for the purpose of treating matrices of large size and for studying, as a concrete application, the periodic matrix model of Samuelson-Hicks. Partial results have been established, which illustrate that this type of method can be used effectively. Finally, the recent literature shows that the generalized Cayley-Hamilton theorem constitutes an important tool for dealing with various applied and theoretical topics; in particular, it can be used as a new technique for solving some matrix and matrix differential equations (see, for example, [2,7-9,13,14]). As for the periodic matrix model of Samuelson-Hicks, it seems to us that our results and algorithms can also be used effectively for studying related topics connected with the generalized Cayley-Hamilton theorem.
6,909
2021-07-26T00:00:00.000
[ "Mathematics" ]
Evolutionary Optimization for Robust Epipolar-Geometry Estimation and Outlier Detection: In this paper, a robust technique based on a genetic algorithm is proposed for estimating the two-view epipolar geometry of uncalibrated perspective stereo images from putative correspondences containing a high percentage of outliers. The advantages of this technique are three-fold: (i) replacing random search with evolutionary search, applying new strategies of encoding and guided sampling; (ii) robust and fast estimation of the epipolar geometry via detecting a more-than-enough set of inliers without making any assumptions about the probability distribution of the residuals; (iii) determining the inlier-outlier threshold based on the uncertainty of the estimated model. The proposed method was evaluated both on synthetic data and on real images. The results were compared with the most popular techniques from the state of the art, including RANSAC (random sample consensus), MSAC, MLESAC, Cov-RANSAC, LO-RANSAC, StaRSAC, Multi-GS RANSAC and least median of squares (LMedS). Experimental results showed that the proposed approach performed better than the other methods regarding the accuracy of inlier detection and epipolar-geometry estimation, as well as computational efficiency, for datasets majorly contaminated by outliers and noise.

Introduction

Sparse image matching is one of the most critical steps in many computer vision applications, including structure from motion (SfM) and robotic navigation. In contrast to dense image matching, where image correspondences are established at nearly every pixel, sparse matching establishes correspondences at salient image points only. Recent research applies sparse matching to address a variety of problems, including simultaneous localization and mapping [1,2], feature tracking [3] and real-time mosaicking [4,5]. The results of sparse matching are usually contaminated with false correspondences. Given the recent advancements in the fields of low-altitude, oblique and ultra-high-resolution imagery, the rate of contamination has increased, and detecting the correct correspondences with high accuracy has become more challenging [6]. This is due to several factors, which include noisy measurements, the inefficiency of local descriptors, the lack of texture diversity and the existence of repeated and similar patterns that cause matching ambiguity [7-9]. Therefore, outlier detection should be substantially integrated into sparse matching. From now on, the term putative correspondence is used to refer to the raw results of sparse matching; the term inlier applies to true matches among the putative correspondences, and the term outlier refers to false matches. Besides, the terms correspondence and match have the same meaning throughout this paper. The problem of outlier detection implies the detection of inliers by eliminating outliers from the putative correspondences, given no prior information about the parameters of relative orientation or intrinsic camera calibration, assuming straight-line-preserving perspective camera models.
Outlier detection techniques are based on the fact that inliers have some spatial characteristics in common; correspondences that are not consistent with such spatial characteristics can be classified as outliers. The idea of using epipolar geometry as a spatial constraint to detect inliers/outliers has been proposed in several studies. In this regard, the outlier detection problem turns into two problems: (i) robust estimation of the epipolar geometry given the putative correspondences and (ii) detection of all of the inliers and outliers using the estimated model. In this paper, the term model refers to the two-view epipolar geometry, which is described here by the fundamental matrix. In particular cases, where approximations can be made to the perspective camera model or assumptions can be formulated about the planarity of the scene, the model can refer to affine transformations and homographies as well. Figure 1 presents a summary of the outlier detection techniques that are discussed in Section 2. In addition to the techniques using epipolar geometry, several methods are based on other spatial criteria, such as the distribution of parallax values [10,11], spatial patterns of outliers [12], orientations of the lines connecting inliers [13] and two-way spatial-order differences [14]. However, such methods use spatial consistency measures that are less general and valid only for specific imaging configurations. In this paper, we also focus on the problem of outlier detection based on the robust estimation of epipolar geometry. To this end, we use an integer-coded genetic algorithm (GA) followed by an adaptive inlier-classification method. The proposed technique can be considered an extension and generalization of RANSAC-like methods for handling a high percentage of outliers, varying amounts of noise and degenerate configurations. This technique has the following distinctive characteristics. First, random sampling is replaced with an evolutionary search. The evolutionary search brings a significant advantage: new sample sets are generated considering the feedback obtained by evaluating previous sample sets. Second, a guided sampling scheme based on the spatial distribution of the correspondences is proposed and applied to the evolutionary search; this sampling scheme increases the robustness of the solutions against degenerate configurations and local optima without requiring additional computation or prior information about the matches. Third, the objective function of the robust estimation is not defined based on the support cardinality; the robust least trimmed sum of squared residuals is used instead. Therefore, there is no need to set a threshold at each iteration for detecting the support of the estimated model, or to assume any specific probability distribution for outlier/inlier residuals. Finally, to identify all of the inliers, a detection method based on adaptive thresholding is proposed, as opposed to using a fixed threshold; in this approach, the uncertainty of the estimated model is taken into account to identify all of the inliers correctly. The rest of the paper is organized as follows. In Section 2, a review of the techniques of robust epipolar-geometry estimation is presented. The main problem of simultaneous inlier detection and epipolar-geometry estimation is formulated in Section 3.
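Before surveying related work, we note that a residual commonly used to gate correspondences against a fundamental-matrix model is the Sampson distance; the sketch below is a standard implementation, not code from the paper.

```python
import numpy as np

def sampson_sq(F, x1, x2):
    """Squared Sampson distance of homogeneous correspondences (3 x N)
    to the epipolar constraint x2^T F x1 = 0, the usual first-order
    approximation to the reprojection error for this model."""
    Fx1, Ftx2 = F @ x1, F.T @ x2
    num = np.einsum('ij,ij->j', x2, Fx1) ** 2
    den = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    return num / den
```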
Section 4 describes the solution using the genetic algorithm, followed by the method of detecting all of the inliers. The experimental results are discussed in Section 5, and the conclusion is presented in Section 6.

Related Works

Random sample consensus (RANSAC) techniques are popular approaches in the field of robust estimation [15]. RANSAC is a method to estimate the parameters of a mathematical model from a set of observations that contain outliers, assuming that the quality of the model and the observations are inter-dependent. More precisely, RANSAC aims at determining the optimal model from an outlier-free sample set of correspondences by maximizing the support size of the model. The inliers that support the model are detected as correspondences whose residuals from the estimated model are less than a given threshold. To find an outlier-free sample set, successive random sampling is performed. To ensure, with probability p, that at least one outlier-free set of m correspondences is drawn from a dataset containing an inlier fraction ε, at least k sample sets should be drawn, such that k ≥ log(1 − p)/log(1 − ε^m). This also means that, within RANSAC, approximately log(1/(1 − p)) good models are generated before the confidence p is achieved. By this definition, six major questions are involved in RANSAC-like techniques. (i) Is maximizing the support cardinality a robust objective function when no information about the rate of the outliers is available? (ii) How does one handle the large number of required samples in cases where ε is very small? (iii) Why does the algorithm not take advantage of the good hypotheses generated before reaching the termination criterion; i.e., why not include the feedback of previous samples in the sampling procedure? (iv) How can the robustness of the estimated model be ensured against the influence of noise, given that it relies on a minimal (just-enough) subset of inliers? (v) How does one control the effect of degenerate sample sets, which naturally maximize the support cardinality? (vi) Does the threshold used to detect the inliers reflect the uncertainty of the estimated model as well? Some of these questions are answered by different variants of RANSAC, which are discussed here.
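The sample-count bound translates directly into code; the function name is ours, and the example uses the 7-point minimal set for the fundamental matrix.

```python
import math

def ransac_iterations(p=0.99, inlier_ratio=0.5, m=7):
    """Minimum number of random sample sets k so that, with confidence p,
    at least one all-inlier minimal set of size m is drawn:
    k >= log(1 - p) / log(1 - eps^m), with eps the inlier fraction."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - inlier_ratio ** m))

# A 7-point fundamental-matrix sample with 50% inliers:
print(ransac_iterations(0.99, 0.5, 7))   # -> 588
```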
Unlike standard RANSAC, there are improved variants that use robust objective functions to score the support. In m-estimator sample consensus (MSAC), inliers are scored based on their fitness, and outliers are scored with a non-zero constant penalty [16]. Maximum likelihood estimation sample consensus (MLESAC) maximizes the log-likelihood of the solution via the RANSAC process, assuming that outliers are distributed uniformly and that the residuals of inliers follow a Gaussian distribution [17]. Maximum a posteriori estimation sample consensus (MAPSAC) is a refined version of MLESAC with Bayesian parameter estimation [18]. These objective functions make models with similar inlier scores more distinguishable. However, they make certain assumptions about the distribution of the residuals, either for inliers or for outliers. Besides, they still score the models and detect the final inliers by applying a hard threshold to the residuals. Generally, this threshold is determined from the standard deviation of the residuals themselves: assuming that the noise in the data points follows a Gaussian distribution N(0, σ) and that the residuals are expressed as point-to-model distances, the squared residuals follow a chi-square distribution. Therefore, the threshold can be expressed as χ⁻¹(p)σ², where χ(p) is the cumulative chi-square distribution with one degree of freedom at probability p, the fraction of the inliers to be captured (e.g., 0.95). However, this assumption is valid only when the uncertainty of the model itself is ignored. Besides, estimating the standard deviation σ at each RANSAC iteration is another problem. One of the most common methods is to estimate this variable from the median of the residuals of the potential inliers that support the best tentative model; it is therefore not necessarily robust against outliers [19]. Another method is to fit a Gaussian distribution to the smallest residuals in the dataset (the modified selective statistical estimator (MSSE) by [20]). While this method is more robust against outliers, it is sensitive to the distinction of inliers from outliers. One strategy for eliminating the requirement of a fixed threshold is to run RANSAC several times using a range of pre-determined thresholds (the stable random sample consensus (StaRSAC) method by [21]). However, depending on the range of thresholds to be tested and the number of RANSAC executions, this strategy can be computationally exhaustive. In the RANSAC with uncertainty estimation (Cov-RANSAC) algorithm, the uncertainty of the model estimated from the minimal sample set is used to determine a subset of potential inliers, to which ordinary RANSAC is applied afterwards [22]. However, the uncertainty of the estimated model depends strongly on the configuration of the points appearing in the minimal sample set. The sampling strategy in RANSAC is also an important factor, as it influences the algorithm's efficiency with respect to both the number of RANSAC iterations and the degeneracy of the estimated model. (Degeneracy in robust epipolar-geometry estimation occurs when one or more degenerate configurations exist in the scene. This usually happens when the majority of the correspondences belong to a dominant plane in the scene and the rest of the correspondences are not on the plane (planar degeneracy), or when the correspondences belong to a very small region of an image (ill-configuration).)
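The chi-square gate and the MSAC loss discussed above can be sketched as follows; the function names are ours, and squared residuals with 1-dof point-to-model distances are assumed.

```python
import numpy as np
from scipy.stats import chi2

def inlier_threshold(sigma, p=0.95):
    """Squared-residual gate chi2^{-1}(p) * sigma^2 for Gaussian noise
    N(0, sigma) and one-dof point-to-model distances."""
    return chi2.ppf(p, df=1) * sigma ** 2

def msac_score(sq_residuals, t):
    """MSAC loss: inliers contribute their squared residual, outliers the
    constant penalty t (lower score is better)."""
    return np.minimum(np.asarray(sq_residuals), t).sum()
```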
The sampling strategy in RANSAC is also an important factor, as it influences the algorithm efficiency with respect to both the number of RANSAC iterations and the degeneracy of the estimated model. (Degeneracy in robust epipolar-geometry estimation occurs when one or more degenerate configurations exist in the scene. This usually happens when the majority of the correspondences belong to a dominant plane in the scene and the rest of the correspondences are not on the plane (planar degeneracy), or when the correspondences belong to a very small region of an image (ill-configuration).) There are several methods to control each of these two factors. The main contributions in this regard are discussed here. To control the speed of the algorithm, two strategies can be applied. The first strategy is to enforce an initial consistency check to filter the putative correspondences. This consistency can be measured as the fraction of neighbouring features in a region around a point in one image whose correspondences fall into the similar region in the other image (the Spatially Consistent Random Sample Consensus (SCRAMSAC) method by [23]). This strategy is sensitive to the region size and the threshold used to define the spatial consistency. Alternatively, sampling can be guided by keypoint matching scores (the Progressive Sample Consensus (PROSAC) method [24]; the Efficient Guided Hypothesis Generation (EGHG) method by [25]). However, such a strategy is not effective when foreground motion occurs. In addition, scenes with repetitive textures may result in many false matches with high matching scores. Another example of such strategies is to assume that the correspondences have a natural grouping structure, in which some of the groups have a higher inlier probability than others (the GroupSAC method by [26]). However, finding a meaningful and efficient grouping among the correspondences is itself a significant challenge in different applications.

The strategies mentioned so far mostly require supplementary information about the scene or the matches. Comparatively, guided sampling based on the information from sorted residuals can be used to accelerate the hypothesis generation while avoiding any application-specific ordering or grouping technique (Multi-Structure Hypothesis Generation (Multi-GS) by [27]). In this method, sampling is guided towards selecting the points that arise from the same structure. This strategy speeds up the procedure of reaching an outlier-free sample set. However, this method incurs more computational complexity, since every point in the hypothesized sample set must be compared against all of the other points in the dataset in order to determine its intersection (in terms of structure) with them. The fast consensus sampling (FCS) method based on the residuals was developed by [28]. In this method, proposal probabilities are calculated for the correspondences based on their normalized residuals and a concentration score. Although this method accelerates the sampling, it is still sensitive to degeneracy, image noise and the uncertainty of the model estimation. This is due to the fact that it reduces the number of potential inliers by thresholding the proposal probabilities, which are, themselves, dependent on the robust estimation of normalized residuals. Several studies have attempted to apply evolutionary algorithms instead of the random search [29,30]. Although promising results were achieved, several limitations have not been addressed yet. For instance, their objective functions still require a hard threshold to distinguish outliers from inliers; the two-dimensional spatial configuration of the correspondences is ignored; the uncertainties of the estimated model are not taken into account; and the experiments are limited to small datasets. Genetic Algorithm Sample Consensus (GASAC) is, to the best of the authors' knowledge, the technique most similar to the one proposed in our study. GASAC differs from typical RANSAC in that the random sampling is replaced with an evolutionary search based on classic genetic algorithms (genetic algorithms are meta-heuristics inspired by biological evolution, which are used for solving optimization problems by relying on evolutionary search operators) [31].
That study showed that the computation could be sped up 13 times by applying the evolutionary search instead of the random one. However, according to their report, the technique is applicable to small datasets with less than 50% outliers; the objective function is still based on support cardinality; and it introduces no solution to avoid local optima (such as degeneracy). The second strategy proposed in the literature in order to speed up RANSAC is to reduce the solution space by only verifying the hypotheses with a higher probability of being optimal. These highly probable hypotheses can be selected by a T(d,d) test [32], a bail-out test [33] or a sequential probability ratio test (SPRT) [34,35]. In addition, the hypothesis verification can be performed preemptively in a breadth-first manner for only a fixed number of sample sets [36]. These techniques may increase the number of required hypotheses, as good models may wrongly be rejected by not being verified completely.

It has been observed that, in the case of degenerate configurations, RANSAC-like algorithms can result in a model with a very large support that is nevertheless completely incorrect. This behaviour can be explained by the fact that a high inlier support can be obtained even if the sample set includes some outliers and at least five inliers that belong to a dominant plane or to a very small area of the image. This large support causes the termination of RANSAC before a non-degenerate outlier-free sample set can be picked up [37]. The main strategy to control the degeneracy of solutions is to re-investigate the support of the best tentative model either locally or globally. For instance, the support of the best model can be re-sampled, and the models estimated from those subsamples can be compared to the best one to identify degenerate models (the Locally Optimized Random Sample Consensus (LO-RANSAC) method by [38]). Another example of such strategies is the Quasi Degenerate Sample Consensus (QDEGSAC) method (proposed by [39]), where a hierarchical RANSAC is performed by changing the number of the parameters in the model and verifying it over the entire dataset. The main issue concerning these techniques is that they cause additional operations, as a separate mechanism is added to the original RANSAC; i.e., they do not directly handle degeneracy in the sampling process. Universal RANSAC (USAC) is a modular fusion of some of the mentioned RANSAC algorithms, including PROSAC sampling, SPRT verification and LO-RANSAC local optimization [40]. In general, its performance is better than that of any of the single modules integrated into the universal implementation. However, it does not improve any of the modules individually.

To conclude, in most variants of RANSAC, the termination criterion is decided based on the size of the maximum consensus set found, which itself depends on the method of threshold selection. These methods usually ignore the effect of the uncertainties of the estimated models caused by noisy image observations and the spatial configurations of the matches in the minimal sample sets. Most RANSAC algorithms require extra operations or validations to increase the robustness of the results against degenerate configurations, and, finally, they do not provide an explicit solution to maximize the accuracy of inlier detection.
Problem Formulation

In this section, the main problem of inlier detection is mathematically formulated. First, the fundamental theories of two-view epipolar geometry are explained, and our fast solution to this problem is presented. Then, a threshold-free objective function for robust estimation is formulated.

Notation: Column vectors are represented by italic, bold lowercase letters, such as x. Therefore, x^T (the transpose of x) is a row vector. Matrices are denoted by italic uppercase letters, such as F. The elements of a matrix are denoted as F_ij, where i represents the row index and j represents the column index. Sets are denoted by italic, bold uppercase letters, such as U.

Fundamental Theories: Two-View Epipolar Geometry

Epipolar geometry defines the geometry of stereo vision, all elements of which can be captured by a matrix called the fundamental matrix. It can also be captured by an essential matrix in the case of calibrated images, where the parameters of intrinsic camera calibration are known. There are different methods for estimating the fundamental matrix: linear, iteration-based and robust techniques [41]. Robust techniques, which use linear techniques as their base, are the most applicable ones, since they can handle the presence of outliers and, finally, can detect the inlier correspondences required for structure reconstruction. This category of estimation is considered in this study. The following paragraphs present the theoretical background in this regard.

For any pair of homogeneous coordinates of correspondences u ↔ u′ in two images, the fundamental matrix (F) is defined by Equation (1):

u′^T F u = 0. (1)

It can be noticed that F is defined up to an unknown scale. It is also a rank-two matrix with a zero determinant. Consequently, it has only seven degrees of freedom [42]. Given m matches u_i ↔ u′_i, it is possible to form a linear homogeneous system of equations in the nine unknown coefficients of matrix F as:

A f = 0, (2)

where f = (F_11, F_12, F_13, F_21, F_22, F_23, F_31, F_32, F_33)^T, and A = [a_1, a_2, . . ., a_m]^T is the m × 9 coefficient matrix. The coordinates of the points are also usually normalized so that A gains a better condition number [43]. Given a pair of matching points with normalized coordinates u_i = (u_i, v_i, 1)^T and u′_i = (u′_i, v′_i, 1)^T, the row a_i corresponding to this match is defined as:

a_i = (u′_i u_i, u′_i v_i, u′_i, v′_i u_i, v′_i v_i, v′_i, u_i, v_i, 1)^T. (3)

In robust estimation, non-parameterized linear methods of fundamental-matrix estimation are used, due to the reasonable agreement between the speed and accuracy yielded by these methods [44]. Given exactly seven matches (m = 7), it is possible to determine f by spanning the two-dimensional nullspace of A and applying the rank-deficiency constraint to it (the seven-point algorithm). However, this results in up to three fundamental matrices. Thus, when the goal is the robust estimation of the fundamental matrix, the computational expense of hypothesis evaluation can increase up to three times. Depending on the total number of correspondences and the percentage of outliers, this can be a major drawback. Therefore, eight or more matches (m ≥ 8) are required to determine a single solution for the fundamental matrix (eight-point algorithms). Using at least eight points, the solution for F can be found from Equation (2) by linear least-squares methods. In the end, the rank-deficiency constraint must be applied to the estimated F by setting its smallest singular value to zero [43].
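The construction of A and the SVD-based eight-point solution can be sketched as follows (a minimal NumPy illustration of Equations (1)-(3), assuming the convention u′^T F u = 0 and Hartley-normalized input coordinates; the function names are ours):

```python
import numpy as np

def coefficient_matrix(u: np.ndarray, up: np.ndarray) -> np.ndarray:
    """Build the m x 9 matrix A of Equation (2) from matches u_i <-> u'_i,
    with f in row-major order; u, up are (m, 2) inhomogeneous coordinates."""
    x, y = u[:, 0], u[:, 1]
    xp, yp = up[:, 0], up[:, 1]
    one = np.ones_like(x)
    return np.column_stack([xp * x, xp * y, xp,
                            yp * x, yp * y, yp,
                            x, y, one])

def eight_point(u: np.ndarray, up: np.ndarray) -> np.ndarray:
    """Least-squares null vector of A under ||f|| = 1, then rank-2 enforcement."""
    A = coefficient_matrix(u, up)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)   # right singular vector of the smallest singular value
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                 # set the smallest singular value to zero (rank two)
    return U @ np.diag(S) @ Vt
```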
In the case of the eight-point algorithm, an additional constraint should be imposed to define an arbitrary scale factor for F and to prevent the trivial solution F = 0. There are two options in this regard. One is to fix the two-norm of the fundamental matrix (e.g., ‖f‖ = 1). The other is to fix one element of the matrix (e.g., F_33 = 1). In the first case (‖f‖ = 1), an orthogonal least-squares minimization should be applied. It can be shown that the solution is the right singular vector of A corresponding to the smallest eigenvalue of A^T A, which can be determined by singular value decomposition (SVD) of the matrix A. Technically, computing the SVD of the matrix A ∈ R^(m×9) requires at least 8019 + 162m flops (arithmetic operations) [45]. In the second case (F_33 = 1), a set of non-homogeneous linear equations with equation matrix A ∈ R^(m×8) is produced, the solving of which requires 170 + 64m flops [45]. Therefore, applying the linear scale constraint (F_33 = 1) is computationally 13 times faster than the non-linear constraint for the minimal case of m = 8. In addition, it is possible to add the observation weights directly and to use weighted linear least-squares techniques [46]. However, using this option is quite risky, as it influences the estimation of F if the coefficient F_33 approaches zero [47]. Such situations (F_33 → 0) can arise in cases that we will call poor camera models. In these cases, the above-mentioned assumption (F_33 = 1) fails; e.g., the rotation of the second camera coordinate system with respect to the first one is mainly planar, the camera motion contains pure translation, or the two cameras are only shifted along each other's optical axes. A way to avoid such exceptions is to examine all nine elements of the fundamental matrix for setting them to a constant non-zero value (i.e., F_ij = 1) and choosing the best solution [48]. However, this increases the computational time.

In this study, Gaussian elimination with partial pivoting is used to detect the free variable of the consistent linear system of Equation (2) and to solve it. The free variable corresponds to the element of the fundamental matrix whose corresponding column in the coefficient matrix (A) is not a pivot column. Therefore, that element cannot be zero and can take a fixed value, e.g., one, to resolve the scale deficiency of the fundamental matrix. In terms of complexity, partial pivoting requires (2/3)m³ flops. Therefore, its application is not efficient for large datasets. However, in the case of robust estimation, where only a minimal number of eight points is used, this method requires only 341 flops (compared to 9315 flops for the SVD decomposition).

Given an estimated fundamental matrix, it is possible to express how well the correspondences fit it by calculating the residuals of the correspondences [19]. There are various error measures to represent the residuals, including the algebraic distance, the epipolar-weighted distance, the two-sided point-to-epipolar-line distance and the Sampson distance:

d_k = (u′_k^T F u_k)² / (l_1² + l_2² + l′_1² + l′_2²), (4)

where (l_1, l_2, l_3)^T = F u_k and (l′_1, l′_2, l′_3) = u′_k^T F. The ideal measure for robust estimation should not be highly sensitive to image noise. As previous studies have shown, the Sampson distance is less sensitive to image noise in comparison with other error measures. Therefore, the Sampson distance is used in this study to represent the residuals.
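A vectorized evaluation of the Sampson distance of Equation (4) can be sketched as follows (NumPy, with our own helper name; d is returned in pixel², consistent with the precision criteria used later):

```python
import numpy as np

def sampson_distances(F: np.ndarray, u: np.ndarray, up: np.ndarray) -> np.ndarray:
    """Squared Sampson distances (u'^T F u)^2 / (l1^2 + l2^2 + l1'^2 + l2'^2)
    for (m, 2) arrays of matches, with l = F u and l' = F^T u'."""
    m = u.shape[0]
    uh = np.column_stack([u, np.ones(m)])    # homogeneous left points
    uph = np.column_stack([up, np.ones(m)])  # homogeneous right points
    l = uh @ F.T                             # rows: F @ u_k
    lp = uph @ F                             # rows: F^T @ u'_k
    e = np.sum(uph * l, axis=1)              # algebraic residuals u'^T F u
    return e ** 2 / (l[:, 0] ** 2 + l[:, 1] ** 2 + lp[:, 0] ** 2 + lp[:, 1] ** 2)
```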
Robust Estimation Problem

The main problem of robust estimation is to find a minimal sample set of inliers from which the fundamental matrix (the model) can be estimated correctly. To determine how correct an estimated model is, an objective function is required. In RANSAC robust estimation, this objective function is defined as the support of the estimated model. However, as mentioned in Section 2, this requires a threshold to decide whether a correspondence supports the estimated model or not. In this study, to avoid such a threshold, the concept of least trimmed squares (LTS) [49] is applied. The objective is to minimize the sum of squared residuals over a minimum number of inliers, which we call an inlier set of minimum cardinality, denoted by I. Given a candidate sample set of correspondences, a fundamental matrix, F, can be estimated via Equation (2). Therefore, the inlier set of minimum cardinality, I, is the set of the n* correspondences with the smallest residuals, d_k, leading to the cost function Cost(F):

Cost(F) = Σ_{k∈I} d_k, (5)

where I = {k | d_k < d_i for all i ∉ I} and |I| = n*. The cardinality of I, n*, can be hypothesized without loss of generality. For instance, one can assume that, in a dataset containing 1000 putative correspondences, at least 100 matches are inliers, without having any knowledge of the errors; i.e., n* = 100. The minimum number of inliers is a more relaxed assumption in comparison with the approximate ratio of outliers.
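Given the residual function above, the LTS objective of Equation (5) reduces to a partial sort (a minimal sketch reusing the hypothetical sampson_distances helper from the previous sketch):

```python
import numpy as np

def lts_cost(F: np.ndarray, u: np.ndarray, up: np.ndarray, n_star: int) -> float:
    """Sum of the n* smallest squared Sampson residuals (Equation (5));
    n_star is the assumed minimum number of inliers, e.g., n // 10."""
    d = sampson_distances(F, u, up)
    return float(np.partition(d, n_star - 1)[:n_star].sum())  # n* smallest residuals
```

No inlier/outlier threshold appears anywhere in this objective, which is the point of the formulation.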
Robust Estimation via the Genetic Algorithm

In this paper, a modified version of the integer-coded genetic algorithm originally proposed by [50] is applied to stochastically search for a sample set of correspondences that minimizes Equation (5). In GA terminology, each candidate sample set of correspondences is called an individual. The characteristics of an individual are represented by a chromosome. Each chromosome is made of a number of elements, called genes. In the context of this study, each gene accounts for a putative correspondence. Considering the minimal number of correspondences needed to find a solution for F, the length of a chromosome remains constant (e.g., eight points). A group of individuals, e.g., 30, forms a population. The first step in applying the GA is to generate an initial population. The size of the population and the way its individuals are drawn (the sampling strategy) are the most important factors to decide. Afterwards, in any iteration of the evolution, the model estimated from each individual is evaluated based on the objective function (Equation (5)). Then, the genetic reproduction operators are performed to reproduce a new population. This evolution continues in the same way until an optimal solution is reached. The overall pseudo-code of the proposed technique is given in Algorithm 1.

Algorithm 1: The proposed robust-estimation technique via the genetic algorithm.
1. Decide the minimum ratio of inliers, n*/n.
2. Initialize the first population by guided sampling (Section 4.2).
While the best solution improves do
3. Estimate the fundamental matrix from each individual via Equation (2) (Section 3.1).
4. Evaluate each individual by computing the sum of the n* smallest residuals (Section 3.2).
5. Perform genetic operators (selection, crossover, mutation, random exploration) on the individuals of the current population, and reproduce the next population through the replacement process (Section 4.3).
6. Save the best overall solution achieved so far and the inlier set of minimum cardinality associated with it (Î with cardinality n*).
End while
7. Re-estimate the fundamental matrix (F̂) using the inlier set of minimum cardinality from the best solution (Î) (Section 4.4).

Encoding

A significant step in the design of the GA is to find an appropriate representation of individuals, i.e., an encoding of candidate solutions to the problem as chromosomes. Assume that the input dataset contains n putative correspondences (u_k ↔ u′_k, k = 1, . . ., n). Each correspondence has an index, k, which is assigned to it based on a random permutation of the numbers from one to n. In previous studies [31], the genes are directly defined as the index of each correspondence. However, GA operators take these integers as inputs to create new solutions. Therefore, these integers should represent the correspondences in a way that makes geometrical or physical sense. To this end, in this study, each correspondence is labelled with a triplet of integers: one integer, k, as its index, as well as two integers, h and v, as the horizontal and vertical coordinates of the left point relative to the other points on the left image. In this study, the length of the chromosomes (the size of each minimal sample set) is set to 12. Therefore, each individual can be encoded as a sample set M = {m_i = (k_i, h_i, v_i), i = 1, . . ., 12}, where the values of h and v are bounded by the horizontal and vertical dimensions of an overlapping rectangle, defined as the rectangle that minimally bounds all of the putative correspondences on the left image (Figure 2a). Note that the reason why the size of M is set to 12 is explained in Section 4.2. Given this encoding, to access the coordinates of the i-th gene inside an individual M (i.e., u_{k_i} ↔ u′_{k_i}), the index k_i is used. However, the genetic operators are applied to h_i and v_i. One may note that applying the genetic operators to h_i and v_i can result in new integers at which no correspondence is located. To resolve this issue, a 2D lookup table is produced, by which an index is assigned to every empty pixel of the overlapping rectangle based on its proximity to the putative correspondences. In other words, the lookup table, T, finds the closest match to any arbitrary coordinates (h̃, ṽ) inside the overlapping rectangle:

T(h̃, ṽ) = arg min_k (|h_k − h̃| + |v_k − ṽ|). (6)

The lookup table in Equation (6) is identical to the indices in the Voronoi diagram of the points in the left image, measured with Manhattan distances. As an example, consider the matches in Figure 2b and their corresponding lookup table in Figure 2c. An instance of an encoded gene would be m = (5, 6, 7). Now, consider the application of the mutation operator to this gene. The mutation operator randomly changes the values of the genes. Assume that the application of the mutation operator to m (specifically, to the values h = 6 and v = 7) has resulted in the new integers h̃ = 8 and ṽ = 9. From the lookup table (Figure 2c), it can be seen that these integers correspond to k = T(8, 9) = 1. Therefore, the mutation operator changes m = (5, 6, 7) to m = (1, 7, 9).
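The lookup table of Equation (6) can be sketched with a brute-force Manhattan nearest-match search (fine for illustration; a distance transform would be preferable for large rectangles; the names are ours):

```python
import numpy as np

def build_lookup_table(hv: np.ndarray, width: int, height: int) -> np.ndarray:
    """2D table T over the overlapping rectangle: each cell stores the index
    (1-based) of the putative correspondence closest to it in Manhattan
    distance, so mutated (h, v) values always map back to a real match.
    hv: (n, 2) integer coordinates of the left points."""
    gh, gv = np.meshgrid(np.arange(width), np.arange(height), indexing="ij")
    d = np.abs(gh[..., None] - hv[:, 0]) + np.abs(gv[..., None] - hv[:, 1])
    return d.argmin(axis=-1) + 1  # (width, height) array of match indices

# e.g., T[8, 9] returns the index of the match nearest to coordinates (8, 9).
```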
Sampling GA Individuals

The first step to initiate the GA is to generate a random population of individuals. As mentioned in Section 3.1, at least eight points are required to form a minimal sample set from which the fundamental matrix can be calculated. As discussed in Section 2, random sampling can be done either uniformly or in an order based on the quality of the correspondences. However, neither of these sampling strategies avoids degenerate configurations. To decrease the number of degenerate solutions, guided sampling based on the spatial distribution of the correspondences is proposed in this paper. To this end, first, the overlapping rectangle (shown in Figure 3a) is divided into 12 sub-regions of equal area, as in Figure 3b. Then, the density of each region is calculated as the number of correspondences enclosed by it, normalized by the total number of correspondences. For the first half of the population, the 12 correspondences are picked from the regions that are selected successively in a roulette-wheel selection; a sketch of this selection follows at the end of this subsection. The density of a region determines its probability of participating in the sampling. In simple words, the wheel is turned 12 times, and each time a region is selected from which to draw a match. For the other half of the GA population, every individual is made of twelve correspondences such that at least one match is sampled from each region. The reason for introducing this two-step sampling is to prevent high-density regions from fully dominating the population. Figure 3c illustrates an example of putative correspondences distributed over the overlapping rectangle, and the corresponding roulette wheel is shown in Figure 3d. Once the population is formed, the correspondences of every sample set M in the population are substituted into Equation (2) to determine the fundamental matrix (F). Then, each fundamental matrix is evaluated using the cost function in Equation (5), and its fitness is decided; the lower the value of the cost function, the fitter the individual.

The primary goal of the proposed scheme for subdividing the overlapping rectangle is to decrease the risk of sampling ill-configured points. Evidently, more than 12 sub-grids could be considered for sampling. However, the greater the number of points, the higher the risk of encountering outliers and the lower the probability of reaching outlier-free sample sets. Besides, a sample size of 12 data points has already been proposed by other studies, such as [38,51]. Of course, there is still a probability that the sampled points are too close to each other and cause ill-configurations. Such cases may especially happen when the points are selected from regions close to the same edges or the same corners of the grid cells. It can be shown that the gridding scheme of Figure 3b is up to two times more robust to this type of ill-configuration compared to a simple regular 4 × 3 grid.
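The density-based roulette-wheel selection of regions can be sketched as follows (NumPy; the region counts and names are illustrative):

```python
import numpy as np

def roulette_regions(region_counts, draws=12, rng=None):
    """Select `draws` regions with probability proportional to their density
    (enclosed correspondences / total), as for the first half of the initial
    GA population; one match is then drawn from each selected region."""
    rng = np.random.default_rng() if rng is None else rng
    density = np.asarray(region_counts, dtype=float)
    density /= density.sum()
    return rng.choice(len(density), size=draws, p=density)

# e.g., densities of the 12 sub-regions of the overlapping rectangle:
print(roulette_regions([40, 3, 12, 25, 8, 30, 5, 17, 22, 9, 14, 6]))
```

Note that a region may be selected more than once, which is the intended roulette-wheel behaviour; the second half of the population then enforces at least one match per region.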
Genetic Operators

Once the individuals are evaluated, a selection operator is applied to the population to allocate the instances of fitter individuals for entering a mating pool as parents to reproduce a new generation (a generation in the context of genetic algorithms is equivalent to an iteration in the context of RANSAC). Tournament selection is used here due to its higher computational efficiency over other selection techniques [50]. Afterwards, new individuals are generated from the selected parents by applying crossover and mutation operators during a reproduction process. Crossover means creating two new individuals (called offspring) by combining two selected parents. Mutation means randomly changing the genes inside a chromosome. These procedures are explained below.

Let M¹ and M², with genes m¹_i and m²_i, i = 1, . . ., 12, be two selected parents. Auxiliary variables are defined from the coordinates of the parents' genes, and the offspring coordinates produced by the crossover are truncated, as in Equation (7), in order to respect the bounds of correspondences on the images.

The mutation operation should be applied with caution so as not to highly distort sound solutions [52]. Furthermore, it should take into account the fact that inliers usually tend to be closer to each other than outliers (as assumed in the N Adjacent Points Sample Consensus (NAPSAC) robust estimation method by [53]). Therefore, mutation should perform a random local search for possibly finding more inliers in the vicinity of the currently sampled points. A mutated solution, m̃_i, i = 1, . . ., 12, is generated using Equation (10), where x = (x_1, x_2) := (h, v) denotes the coordinates of a gene. In Equation (10), two random, uniformly distributed numbers between zero and one are used, and s = (s_1, s_2) := π ∘ π follows a power distribution, with π having random values between zero and one. At any iteration, a random exploration is also performed by generating a fixed number of individuals based on the guided sampling strategy proposed in Section 4.2. Generating random solutions as a fixed portion of the population reduces the chance of converging to local optima.

The replacement strategy applied in this study can be considered a combination of the steady-state and elitist replacement methods. It helps to keep the best solutions from older generations and to maintain the population diversity to avoid premature convergence. Assume that P⁻ is the population of the last generation, P⁺ is the population of the parents selected from P⁻, and P⁺⁺ is the population of the offspring reproduced using P⁺. Accordingly, q⁻ denotes the least fitness value among the best third quartile of individuals in P⁻. The new population P starts forming with the fittest individuals of P⁻ (the elites). Among elite individuals with similar fitness, those formed by correspondences coming from more distinct regions of the overlapping rectangle have priority in the replacement. This way, the chance of ending at a local optimum is highly reduced. In fact, this factor is used to distinguish models with similar fitness values. The remaining spots available in P are filled under the replacement condition of Equation (11), by which an offspring is accepted only if its cost does not exceed q⁻; here, P_i denotes the i-th individual in the population, i = {1, . . ., population size}. Equation (11) implies that an offspring whose quality is worse than 75% of the previous solutions is not qualified to replace its parents. The genetic algorithm iterates the procedures mentioned above until there is no improvement in the average of the elites' fitness values during a specified number of generations.
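Since Equation (10) defines a bounded, power-distributed local mutation, its behaviour can be illustrated with the following sketch; the exact update rule and exponent here are assumptions in the spirit of the integer-coded GA of [50], not the paper's verbatim operator:

```python
import numpy as np

def power_mutation(x, lo, hi, p_index=4.0, rng=None):
    """Hypothetical bounded mutation of a gene's coordinates x = (h, v):
    s = pi**p_index with pi ~ U(0, 1) follows a power distribution, so most
    steps are small and the search stays local (NAPSAC-like locality), while
    the bounds lo, hi of the overlapping rectangle are always respected."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    s = rng.random(2) ** p_index         # power-distributed step sizes s1, s2
    r = rng.random(2)                    # uniform numbers deciding the direction
    t = (x - lo) / (hi - lo)             # relative position within the bounds
    stepped = np.where(t > r, x - s * (x - lo), x + s * (hi - x))
    return np.rint(stepped).astype(int)  # back to integer pixel coordinates
```

The mutated coordinates would then be mapped back to an existing match through the lookup table T of Equation (6).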
Inlier Classification

Once the genetic algorithm terminates, the inlier set of minimum cardinality, Î, is found, and the final fundamental matrix, F̂, is re-estimated from these points by performing an iterative least-squares adjustment using the Gauss-Helmert model:

U(θ, l) := s′_k^T F s_k = 0, k ∈ Î, (12)

where the vector of observations, l, includes the coordinates of the correspondences u_k ↔ u′_k, k ∈ Î, which are normalized by T and T′ (the Hartley normalizing transformations) such that s_k = T u_k and s′_k = T′ u′_k. The vector of parameters is θ = (F_11, F_12, F_13, F_21, F_22, F_23, F_31, F_32, F_33)^T; note that one element of the fundamental matrix, here F_33, is assumed fixed based on the direct result of F obtained from the GA. Now, inliers can be distinguished as the correspondences whose residuals from F̂ are less than a given threshold. The important issue is determining this threshold. Standard RANSAC algorithms determine this quantity using maximum likelihood estimation based on the median of the residuals associated with the best tentative model. In this paper, the uncertainty of the final fundamental matrix is considered to calculate an adaptive threshold, as follows. First, the covariance matrix of the estimated parameters is derived using the covariance law as in Equation (13):

Σ_θ̂ = σ̂₀² (A^T (B Σ_l B^T)⁻¹ A)⁻¹, (13)

where A = ∂U/∂θ and B = ∂U/∂l are the Jacobian matrices calculated at the re-estimated coordinates of the correspondences û_k ↔ û′_k, k ∈ Î, and the variance factor σ̂₀² is computed from v, the vector of estimated residuals. From Equation (12), F = T′^T F̂ T := G(θ̂). We can determine the uncertainty of the estimated fundamental matrix using the rules of error propagation:

Σ_F = (∂G/∂θ) Σ_θ̂ (∂G/∂θ)^T. (14)

For each match u_k ↔ u′_k belonging to the inlier set of minimum cardinality, the Sampson distance d_k and its variance σ²_{d_k} can then be calculated via Equation (15). The average of these distances, μ̂_d = Σ_{k∈Î} d_k / n*, and their pooled standard deviation, σ̂_d = (Σ_{k∈Î} σ²_{d_k} / n*)^(1/2), represent the distribution of the residuals for the inliers. Considering Chebyshev's inequality, at least 95% of a population lies within 4.47 times the standard deviation from the mean, no matter what probability distribution it follows. Therefore, every match with index j is an outlier with 95% confidence if its residual, d_j, is greater than μ̂_d + 4.47 σ̂_d. To ensure a maximum set of inliers and avoid possible false positives or negatives, this whole procedure can be repeated a few times using the new set of detected inliers.
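The final thresholding step can be sketched as follows; the pooled form of σ̂_d is our reading of the text, and 4.47 ≈ √(1/0.05) comes from Chebyshev's inequality:

```python
import numpy as np

def chebyshev_threshold(d: np.ndarray, var_d: np.ndarray) -> float:
    """Adaptive outlier threshold mu + 4.47 * sigma over the inlier set of
    minimum cardinality; d holds the Sampson distances d_k and var_d their
    propagated variances sigma^2_{d_k} from Equation (15)."""
    mu = d.mean()
    sigma = np.sqrt(var_d.mean())            # pooled standard deviation (assumed form)
    return mu + np.sqrt(1.0 / 0.05) * sigma  # sqrt(20) ~= 4.47

# Every match with residual d_j above this value is flagged as an outlier
# with at least 95% confidence, regardless of the residual distribution.
```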
Experimental Results and Discussion

To demonstrate the efficiency of our algorithm and its individual components, we performed several experiments on simulated and real data. The variables tested in these experiments include: (i) the performance of our sampling scheme; (ii) the accuracy of our adaptive thresholding method for inlier classification; (iii) the effect of the GA population size on the performance of the algorithm and (iv) the performance of the overall algorithm under different levels of noise, outliers and degeneracy. Table 1 summarizes the criteria used to assess these variables.

The experimental results obtained from the proposed technique are compared with those of the following state-of-the-art techniques: RANSAC, MSAC, MLESAC, LO-RANSAC (Lebeda et al., 2012), StaRSAC, Cov-RANSAC, Multi-GS-RANSAC and least median of squares (LMedS). Note that these techniques were implemented by the authors of this manuscript, except for the following. The MATLAB built-in computer-vision toolbox was used for LMedS. For measuring the uncertainty of the fundamental matrix in Cov-RANSAC, the code was provided by the original authors [54]. The code for Multi-GS sampling was also provided by the authors [27].

Table 1. Symbols and descriptions of the performance criteria:
- Itr: number of iterations before the termination of robust estimation.
- µ_d: precision of estimation (pixel²); the average of squared Sampson residuals over the detected inliers, which shows how well the estimated fundamental matrix fits the detected inliers.
- α: accuracy of inlier classification (%); the percentage of correctly identified outliers and inliers among all the matches, calculated as the sum of true positives and true negatives divided by the total number of putative matches (applicable when the ground truth is available).
- TPR: sensitivity or true-positive rate (%); the percentage of correctly identified inliers, calculated as the true positives divided by the sum of true positives and false negatives (applicable when the ground truth is available).
- TNR: specificity or true-negative rate (%); the percentage of correctly identified outliers, calculated as the true negatives divided by the sum of true negatives and false positives (applicable when the ground truth is available).
- µ_d·CP: accuracy of estimation (pixel²); the average of squared Sampson residuals on control points or true inliers, which shows how well the estimated model fits the real inliers; the real inliers are noise-free (applicable when the ground truth is available).
- difference between the estimated fundamental matrix and the true one (expressed in pixels), measured using the method described by [46] (applicable when the ground truth is available).

Experiments on Synthetic Data

Several synthetic datasets were used to evaluate the performance of our algorithm. Using the synthetic data allowed us to control the imaging geometry, the fraction of outliers and the image noise. Besides, it allowed us to assess the accuracy of inlier detection and model estimation in comparison with the ground truth. Instead of creating random correspondences without any particular geometric or physical form, real 3D point clouds were used to generate synthetic images. The synthetic outliers were produced in a relatively small range of error, because large gross errors can be easily detected by statistical tests. Thus, we are interested in testing the performance of the proposed method in dealing with medium ranges of errors (smaller than 40 pixels). The synthetic datasets are described in Table 2. Different scenarios were chosen to cover diverse real-world cases, including close-range and aerial photography, narrow and wide baselines and degenerate configurations. The following paragraphs briefly explain the reasons for which each dataset was selected for these performance tests.

The Table dataset: This dataset is mainly generated to simulate a degenerate configuration, where a large number of correspondences is located on a planar object. Therefore, the performance of the sampling technique under degeneracies (Section 5.2.1) was tested on this dataset.
The Church dataset: This dataset does not have any degenerate configurations. This close-range stereo pair contains a small number of correspondences, which allows the application of the Multi-GS-RANSAC method. Furthermore, a low level of noise is simulated, which is usually the case for close-range images with static imaging platforms. The relative orientation of the two cameras is quite challenging from the photogrammetric point of view (a narrow baseline and large relative rotations). Therefore, the performance of the sampling technique under varying outlier ratios (Section 5.2.2), the stability and the effect of the GA population size were tested on this dataset.

The Urban dataset: This dataset represents the case of aerial imagery, where the level of image noise is considerably higher compared to close-range imagery. The lower spatial resolution of the images and the motion blur caused by the movements of an airborne platform are the main reasons for making such an assumption. Therefore, this dataset was used to test the performance under various noise levels.

The Multiview dataset: This dataset was designed to be unbiased and representative of general photogrammetric applications. It includes stereo images with short, long and moderate baselines. There is no specific degenerate configuration or any particular structuring pattern in the scene. Some of the images are simulated at very low altitudes (similar to close-range imagery), while the others are at higher altitudes (similar to aerial imagery); this also causes high variations of scale across the images. Therefore, the performance under various outlier ratios (Section 5.4) was tested on this dataset. The results obtained from the StaRSAC method are not represented in the graphs of Section 5.4, since they were not as good as the results of the other methods, and their representation would have caused mis-scaling of the graphs. Multi-GS-RANSAC was not tested on this dataset, since most stereo pairs contained thousands of correspondences, the processing of which with Multi-GS would have been too time consuming.

Furthermore, a stereo pair from this dataset with a large number of matches (4510 matches) and a low level of noise was used to test the performance of inlier classification (Section 5.3). There is no specific structuring pattern in this stereo pair. Furthermore, there is no challenge or complexity regarding the relative poses of the cameras; i.e., the baseline is neither too short nor too long, and the relative rotation between the images includes yaw differences only.

Table 2 (the Table dataset):
- focal length: 3500 pixels; sensor size: 1940 × 1460 pixels
- Gaussian noise: from 0 to 2 pixels
- the correspondences are either on the monitor plane or on other objects on the table; λ is the number of matches located on the monitor plane divided by the total number of matches
- six instances of the data were created by varying λ from 0.4 to 0.9
- 258 correspondences were placed on the surface of the monitor in each instance of the data
- a total of 100 random outliers were added in each instance of the data
* The urban 3D point cloud from which the images are synthesized belongs to the ISPRS benchmark datasets from the Toronto area.

Performance under Degeneracies

To verify the efficiency of our sampling algorithm in avoiding planar degeneracy, the Table dataset was used. In each instance of these data, 258 synthesized correspondences were placed on the surface of the monitor, and the number of matches from other objects of the scene was varied to obtain the ratio λ.
For instance, at λ = 0.4, the total number of correspondences was 645, of which 258 correspondences were located on the monitor plane. It should be noted that the synthetic images were captured so that all of the objects of the scene were visible in the images, and the monitor plane occupied only a small area of each image. For any instance of the dataset at the different ratios λ, we limited the total number of hypotheses to 1000 and compared the results of our method with those of RANSAC and Multi-GS-RANSAC. That is, each algorithm was stopped when exactly 1000 sample sets had been drawn. This limited number of sample sets is approximately six times more than the theoretical number of sample sets needed to achieve a 95% probability of drawing at least one outlier-free random sample set from data containing 40% outliers, which is the maximum outlier ratio in the dataset instances. The performance criteria used for this experiment were (i) the percentage of outlier-free sample sets (an outlier-free sample set is a set of matches in which all of the matches are inliers) among the 1000 sample sets; (ii) the percentage of non-degenerate and outlier-free sample sets among the 1000 sample sets (a degenerate sample set, due to planar degeneracy, has more than five points from the dominant plane, in this example the monitor plane; see Section 2), which we denote as non-degenerate sample sets for simplicity, and (iii) the estimation accuracy (µ_d·CP). The medians of the results obtained after five trials are represented in Figure 4. The proposed algorithm drew up to 75 times more outlier-free sample sets in comparison with RANSAC within the limited budget of 1000 sample sets (Figure 4a). The percentages of outlier-free samples for our algorithm and Multi-GS-RANSAC were very close. This shows that the Multi-GS sampling strategy performed well in the absence of degeneracy. However, Multi-GS sampling failed to draw non-degenerate sample sets as the ratio λ increased (Figure 4b). For ratios higher than 0.7, both RANSAC and Multi-GS-RANSAC failed to estimate the model correctly (Figure 4c), while the proposed algorithm estimated the model robustly even in the presence of serious degeneracy (λ = 0.9).

Performance under Varying Outlier Ratios

A similar test was performed to assess the performance of the sampling algorithm under various ratios of outliers. To this end, the Church dataset was used. The limited sampling budget was set to 1000 and 5000 sample sets for outlier rates less than or equal to 50% and higher than 50%, respectively. The percentage of outlier-free sample sets among the budgeted sample sets (either 1000 or 5000) and the estimation accuracy (µ_d·CP) are presented in Figure 5. For most of the outlier ratios, the outlier-free sampling rates of the proposed algorithm and those of Multi-GS were very close. RANSAC drew considerably fewer outlier-free sample sets in comparison with the other two methods. For outlier ratios higher than 50%, RANSAC completely failed to detect outlier-free sample sets and to estimate the model correctly. For an outlier ratio of 80%, only the proposed algorithm maintained good performance, drawing at least 22% outlier-free sample sets within the limited sampling budget. In conclusion, the sampling strategy based on the spatial distribution of correspondences, along with the evolutionary search, not only increased the speed of reaching an outlier-free sample set, but also decreased the probability of ending up with a degenerate solution.
Performance of Inlier Classification with Adaptive Thresholding

In order to verify the performance of the proposed thresholding method for inlier classification (Section 4.4), a stereo pair from the Multiview dataset with a baseline of 20 m was used. To eliminate the effect of the other components of the algorithm, such as the sampling and the objective function, no outliers were introduced to the images; i.e., the dataset was outlier-free. The results from our method were compared with those of the median-based and covariance-based algorithms. To this end, each algorithm was applied 500 times. In each trial, a minimal sample set of eight points was randomly drawn, and the fundamental matrix was estimated with the normalized eight-point algorithm. For our algorithm, the technique of Section 4.4 was applied to determine the inlier thresholds. For the median-based algorithm, the robust standard deviation of the residuals was defined as σ = 1.4826 (1 + 5/(n − 8)) median_i(r_i), i = {1, . . ., n}. Since the dataset was outlier-free, n was equal to the total number of matches, and r_i was the residual of any correspondence with respect to the estimated fundamental matrix. Then, the inlier threshold was calculated as 1.96σ. For the covariance-based algorithm, which is the core component of Cov-RANSAC, first, the uncertainty of the estimated model was used to narrow down the total set of matches to a set of potential inliers. Then, the median-based algorithm was applied to the potential inliers to determine the inlier threshold.

In order to evaluate the performance of these algorithms, the fraction of runs (from a total of 500 runs) in which a correspondence was classified as an inlier was calculated, namely the inlier probability of that match. The inlier probabilities (sorted in ascending order) are shown in Figure 6. Knowing that the dataset was outlier-free, the inlier probability for all of the points would ideally be one. Our thresholding algorithm resulted in the most stable and robust solutions. That is, for 92% of the points, the inlier probability was higher than 0.9. For the median-based algorithm, this percentage was only 15%. The covariance-based algorithm had poor performance in comparison with both of the other methods. Our algorithm resulted in inlier ratios higher than 90% for more than 88% of the runs. However, the median-based and covariance-based algorithms yielded inlier ratios greater than 90% in only 32% and 15% of the runs, respectively. To investigate the reason behind the performance of each algorithm, the threshold values determined at each run are illustrated versus the run index as black points in Figure 7. Then, the true residuals, which are the residuals of the matches from the true fundamental matrix, were calculated. Note that the true fundamental matrix is the one based on which the matches were synthesized. The maximum and the median of the true residuals are shown in Figure 7 as the red and green lines, respectively. Then, the parameter σ = 1.4826 (1 + 5/(n − 8)) median_i(r_i), computed using the true residuals, was used to calculate the true median-based threshold. The blue line illustrates this value in Figure 7.
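For reference, the median-based baseline threshold can be sketched as follows (our helper name; r holds the residuals):

```python
import numpy as np

def median_based_threshold(r: np.ndarray, k: float = 1.96) -> float:
    """Robust standard deviation sigma = 1.4826 * (1 + 5/(n - 8)) * median(r),
    scaled by k = 1.96, as used for the median-based baseline."""
    n = len(r)
    sigma = 1.4826 * (1.0 + 5.0 / (n - 8)) * np.median(r)
    return k * sigma
```

The comparison in Figure 7 suggests why this baseline struggles: it presumes the estimated fundamental matrix is essentially exact, which a minimal eight-point estimate rarely is.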
The true median-based threshold was slightly higher than the true maximum residual, and it would be an ideal choice of threshold only if the fundamental matrix were perfect. However, in these tests, the fundamental matrix was calculated from a minimal sample set of eight correspondences, which had different values of noise and did not necessarily have an ideal spatial configuration either. Because of the uncertainty of the fundamental matrix and the noise values of the points participating in its estimation, the threshold value should be larger than the ideal one in order to detect all of the inliers correctly. Although the covariance-based method tried to consider this effect, it underestimated the potential inliers. The main reason is that the uncertainty of the fundamental matrix was estimated only from the minimal sample set. As shown in Figure 7, our algorithm calculated the threshold adaptively. Frequently, the estimated threshold value was slightly higher than the ideal median-based threshold. This is reasonable, since the calculated fundamental matrix, although determined from an outlier-free sample set, was not necessarily perfect. However, for the other two methods, the threshold values were approximately around the true median-based threshold, which would be a suitable threshold value only if the estimated fundamental matrices were as accurate as the true one.

Performance under Various Outlier Ratios

To assess the performance of the overall algorithm under different ratios of outliers, the Multiview dataset was used. In the following experiments, the stall generation of the GA was set to 60. The upper bound of the standard deviation of image noise (σ_max) was set to three pixels. The GA population size was set to 27. The parameter n* was set to n/10 in all of our experiments. The algorithm was implemented directly in the MATLAB environment, without using its optimization toolbox. The averages of the results over all of the stereo pairs versus the outlier ratios are presented in Figure 8.
The results clearly illustrate the performance of the proposed algorithm. For instance, to reach 95% accuracy in inlier detection from a dataset with 70% outliers, at least 45,658 sample sets must be drawn in random sample consensus. However, the proposed algorithm reached 95% accuracy by drawing only 2100 sample sets. Similarly, our algorithm achieved 78% accuracy on the 80% contaminated dataset with only 1440 hypotheses, while 591,455 random sample sets would theoretically be required to reach that accuracy. A combination of factors arising from the evolutionary search and the sampling strategy produced this improvement. Furthermore, for all of the outlier ratios, the number of GA iterations was lower than for the other algorithms (an average Itr of 93). However, it should be noted that any iteration of the GA corresponds to a maximum of 27 hypothesis generations. In the case of data with a low percentage of outliers, there are many correct solutions, each of which can be slightly fitter; such slightly fitter solutions may violate the termination criterion mentioned in Section 4.3. Therefore, an additional condition based on the rate of improvement achieved by the new elites must be considered to avoid unnecessary iterations. One can use the following method: whenever the elite solutions of two generations have fitness values within 10% of one another, the fitter solution duplicates itself and the less fit solution is removed, to ensure a low standard deviation over the stall generations. From the accuracy point of view, the proposed algorithm was more robust to outliers in comparison with the other algorithms, especially when the outlier percentage grew over 40%. On average, the proposed technique achieved 91% ± 6% accuracy for inlier detection. The considerably high estimation accuracy (an average µ_d·CP of 0.376) confirms this as well. The improvement obtained by the proposed algorithm was also evident regarding the true-negative rate (an average TNR of 94%), which shows the efficiency of the algorithm in distinguishing outliers from inliers.

Figure 8 also indicates that LMedS performs robustly for outlier ratios of less than 50%. However, model estimation and, particularly, inlier detection for higher outlier rates are the bottleneck of this algorithm. Cov-RANSAC resulted in a better TNR compared to our algorithm. However, the spatial configuration and the level of noise in the sampled points are ignored in this method. As a result, the estimate of the model uncertainty becomes impractically too large or too small. Consequently, the number of inliers is usually over- or under-estimated. That is why this technique resulted in a very low TPR.
Performance Stability

Another variable that should be tested under various outlier ratios is the stability of the results obtained by running the algorithm multiple times. To this end, the Church dataset was used. The algorithm was repeated 50 times for each instance of the data. For outlier ratios ranging from 10 to 70%, the variations in accuracy were not considerable (α = 99% ± 2%). However, at an 80% outlier rate, the average accuracy decreased to 92% ± 8%. Although this was a noticeable change in the performance of the algorithm, the median of the accuracies was still reasonably high (95%). Regarding the number of models hypothesized before termination, a large range of changes was observed at lower outlier ratios. As explained earlier, this happened because of the stopping criterion in the GA. However, this was itself an advantage for large outlier ratios (more than 50% outliers).

Performance under Various Noise Levels

The Urban dataset was used to evaluate the robustness of the proposed algorithm against noisy image observations. To this end, the effects of image noise along with varying percentages of outliers were tested using 81 instances of the Urban dataset. The results are illustrated in Figure 9. From the accuracy point of view, the accuracy of estimation (described by µ_d·CP) did not decrease as the noise level increased. The noise synthesized on the putative matches varied between zero and four pixels. The robustness of the algorithm to noise means that the sample sets whose points have a lower level of noise can be recognized through the LTS-based objective function. Thus, the estimation of the fundamental matrix is based on the least-noisy correspondences of the dataset. Regarding the accuracy of inlier detection, the true-positive rate was not affected by the noise. However, the true-negative rate decreased slightly with increasing noise level. Since the outliers were synthesized by adding gross errors as low as 10 pixels to the correspondences, distinguishing real outliers from noisy inliers at higher noise levels became a more difficult task, and the false-positive rate increased. In terms of convergence speed, increasing the magnitude of image noise decreased the number of iterations. The total number of iterations required to find the final solution changed only from 80 to 250 as the ratio of outliers increased to 80% and the level of noise increased to four pixels. This shows the high computational efficiency of the overall algorithm in the presence of high ratios of outliers and noise.

Effect of the GA Population Size

To evaluate the effect of the GA population size on the performance of the algorithm, the Church dataset was used. The GA population size was varied from 20 to 70 in steps of five individuals, and the performance of the algorithm was assessed under various outlier ratios. The results obtained from this experiment are illustrated in Figure 10.
There was no considerable correlation between the accuracy and the size of the population. The average accuracy of inlier detection (α) with different population sizes was 99.1%, with a standard deviation of only 0.9%. The accuracy of estimation (µ_d·CP) also remained below 0.05, which confirms the stable accuracy of model estimation. The number of iterations before termination (Itr) seemed to depend more on the outlier percentage than on the population size. The average number of iterations at outlier ratios less than 50% was 122, while it was only 62 at higher outlier ratios. The number of generated hypotheses (N_model) increased with either a decreasing outlier ratio or an increasing GA population size, since it depends both on the population size and on the number of iterations. Given that the accuracy does not have any distinct correlation with the population size, a moderate population size is suggested, both to avoid the unnecessary generation of hypotheses and to keep the number of iterations small.

Experiments on Real Data

The proposed algorithm of robust estimation and inlier detection was compared with other RANSAC-like techniques on 15 stereo pairs (Table 3). These examples were chosen to cover various cases, such as close-range and aerial photography, narrow and wide baselines, degenerate configurations, scale variation, multi-platform photography and highly contaminated data. The first seven pairs and their putative matches were gathered from [40]. For these data, the percentages of true inliers in the dataset (ε) were manually determined in the reference article. For the 8th, 9th and 10th stereo pairs, we used signalized targets to provide ground control data when acquiring the images. The reference fundamental matrices were calculated from these control points. The 11th stereo pair belongs to the ISPRS datasets for urban classification (www2.isprs.org/commissions/comm3/wg4/detection-and-reconstruction.html). The 14th stereo pair belongs to the ISPRS benchmark for multi-platform photogrammetry (http://www2.isprs.org/commissions/comm1/icwg15b/benchmark_main.html) [55]. For these last two pairs, the exterior and interior orientation parameters provided by ISPRS were used as reference values for the evaluation. For the last five stereo pairs, SIFT keypoints were detected and matched using the VLFeat feature-based matching library (www.vlfeat.org). For the 2nd and the 12th stereo pairs, we could not apply Multi-GS, since the low inlier ratio and the high number of correspondences made that algorithm too slow to be executable.

It can be noticed from the results that our algorithm generally yielded solutions that were compatible with the ground truth in terms of either inlier-detection accuracy or estimation accuracy. For the 8th and the 9th stereo pairs, a dominant plane existed in the scene. Therefore, most of the algorithms ended up with a degenerate solution. However, the degeneracy was avoided quite efficiently by our algorithm. This was mainly due to the guided sampling based on the spatial distribution of matches. Finally, it was noticed that the algorithm performed well in challenging cases such as multi-platform photography. * ε is the inlier ratio, and n is the total number of correspondences.
Conclusions

In this study, we proposed an integer-coded genetic algorithm for the problem of accurate epipolar-geometry estimation from putative matches, followed by an adaptive thresholding algorithm for inlier classification. The proposed algorithm can be considered a solution to some of the drawbacks involved in conventional robust estimators, specifically RANSAC-like methods. Based on the experiments, the proposed approach showed robustness to high percentages of outliers, planar degeneracy and image noise. On a general note, the success of the proposed algorithm is due to a combination of elements: (i) the evolutionary behaviour of the search for outlier-free sample sets; (ii) the definition of the objective function to depend not on the maximum number of inliers, but on their minimum number; (iii) the integration of guided sampling and (iv) the uncertainty analysis in the final inlier-classification scheme. It should be noted that, except for the first element, the others could easily be integrated into different iterative robust-estimation algorithms as well. In the future, the proposed sampling and classification techniques will be integrated with the variants of RANSAC, and their improvement will be assessed. In general, the algorithm was able to detect the inliers with more than 85% accuracy, which is a remarkable success for large datasets containing over 80% outliers. Furthermore, the computational expenses of the algorithm did not increase with either the ratio of outliers or the magnitude of image noise. The efficiency of the proposed algorithm regarding speed was greater than that of the other methods for datasets with a high ratio of outliers (more than 50% outliers). In the future, the algorithm could also be made more robust to degeneracy by extending the proposed two-dimensional guided sampling to three dimensions, by finding probable three-dimensional structures.

Figure 1. Summary of outlier detection techniques in stereo sparse matching based on robust estimation of epipolar geometry.

Algorithm 1 (input, output and final steps). Input: a set of n putative correspondences. Output: the estimated fundamental matrix and the entire set of inlier correspondences. (a) Genetic algorithm; input: the lookup table of matches (Section 4.1); output: the fundamental matrix (F) and an inlier set of minimum cardinality (Section 4.4). (b) Estimate the uncertainty of the model (Σ_F) (Section 4.4). (c) Estimate the average and uncertainty of the Sampson residuals (μ̂_d, σ̂_d) for the matches belonging to Î to determine the outlying threshold (Section 4.4). (d) Compute and threshold the Sampson residuals on the other matches to identify the entire set of inliers (Section 4.4).

Figure 2. Encoding scheme: (a) the overlapping rectangle; (b) an example list of some putative correspondences, identified by their indices, and their positions relative to the overlapping rectangle and (c) a part of the 2D lookup table constructed using Equation (6); the bold numbers show the indices of the original matches, and the regular numbers show the indices assigned to the other pixels.

Figure 3. Guided sampling: (a) a stereo pair and the putative correspondences; the bounding rectangle on the left image represents the overlapping rectangle; (b) dividing the overlapping rectangle into 12 sub-regions of equal area; (c) the minimal rectangle and the distribution of putative matches over the sub-regions; (d) the density-based roulette wheel for region selection.
Figure 4. Performance of sampling methods on the Table dataset as the ratio λ increases: (a) percentage of outlier-free sample sets; (b) percentage of non-degenerate sample sets; (c) estimation accuracy.

Figure 5. Performance of sampling methods on the Church dataset as the outlier ratio increases: (a) percentage of outlier-free sample sets; (b) estimation accuracy.

Figure 7. Inlier thresholds determined by (a) our adaptive thresholding method; (b) the median-based algorithm; (c) the covariance-based algorithm.

Figure 8. Performance of different algorithms under various percentages of outliers for the Multiview dataset. In the graphs, the x-axis represents the percentage of synthetic outliers in the dataset.

Figure 9. Performance of the proposed algorithm with noisy images. The graph at the bottom of each surface plot represents the average of the respective performance criterion versus the amount of noise.

Figure 10. Performance of the proposed algorithm with varying GA population size. The graph at the bottom of each surface plot represents the average of the respective performance criterion versus the population size.

Table 1. Criteria for performance assessment.

Table 2. Description of synthetic datasets.

Table 3. Performance of the proposed algorithm and other techniques on real data.
15,394.4
2017-07-27T00:00:00.000
[ "Computer Science", "Engineering" ]
Effectiveness Evaluation Model of Digital Cost Management Strategy for Financial Investment of Internet of Things Enterprises in Complex Environment

In the current study, an in-depth evaluation of strategic cost management (SCM) for Internet of things (IoT) enterprises is performed. The SCM is analyzed under the umbrella of the replication environment and financial digitization. Our proposed evaluation method covers replication technology and the relationships among corporate finance, corporate performance, and SCM. In the second stage, we consider Xiaomi Corporation (Xiaomi), an IoT enterprise, as our case study and benchmark to analyze the existing issues related to cost management. Finally, the strategy and effect of Xiaomi's SCM based on the value chain are thoroughly investigated, and an in-depth insight is provided. Moreover, our results show that Xiaomi's value-chain-based SCM has achieved strong results, such as reducing the total cost per unit product and increasing market share. However, a few issues still need improvement, such as insufficient innovation ability, the incomplete scope of cost management, and a small amount of patent authorization. This article helps enterprises improve their strategic planning and cost management efficiency, and it provides a reference basis for the cost management problems faced by other IoT enterprises.

Introduction

The ongoing development of China's economy has promoted the continuous improvement of the whole financial system [1,2]. With the rise of China's capital market, Chinese companies have explored several financing channels, providing a driving force for China's economic development [3,4]. In this regard, China has formulated new preferential policies to stimulate economic development in recent years, including bond market policies [5]. This has made China's bond market more active. Digital finance has been a major innovation in promoting financial reform in recent years [6,7]; as a new way of financing, it has profoundly impacted traditional financing. The rapid progress of the mobile Internet has driven technological progress and huge market demand [8]. China's mobile Internet market is becoming increasingly mature and has broad prospects for development. Traditional cost management methods are not enough to realize the competitive advantage of enterprises, so enterprises must introduce strategic cost management (SCM) [9]. SCM is an efficient and advanced cost management method used in most enterprises to manage the overall flow of product supply; it actively streamlines the business supply process to maximize customer satisfaction and grow business value. By using SCM, companies can reduce extra costs and deliver products to consumers more quickly [10]. The components of individual supply chain orientation (SCO) are analyzed in [11] to examine the causal relationships between supply chain management and organizational SCO. The outcome of the applied method reveals the importance of individual SCO and its impact on organizational SCO. The classification of SCM activities into strategic and operational tasks confirms the causal relationship between the two concepts, and the use of an effective management philosophy inside a firm supports the successful implementation of SCM [11]. The enterprise thereby competes in the key links of the value chain and reasonably allocates its existing resources.
The value chain is a business framework used by companies to analyze the detailed procedure involved in each step of a business. The value chain is now a buzzword in the IoT industry, and several IoT companies adopt value chain frameworks to analyze their business procedures [12-14]. The close ties among countries have expanded the development of China's economic market, followed by increased pressure on domestic enterprises, which must face competition in the domestic market and the challenges of foreign enterprises simultaneously. Enterprises cannot make more profit simply by directly reducing material, labor, and manufacturing costs; they need to combine cost management with strategic thinking. Moreover, they should master the business objectives and external environment of the enterprise, maintain their advantages, conduct in-depth analysis, and find and solve existing problems in time so as to realize the all-round development of the enterprise. SCM meets the requirements of the times, but the key problem is how to realize and evaluate it [15]. Recently, numerous researchers [16,17] have studied this problem theoretically, but few empirical cases have explored the application of SCM [18-20]. Given the above analysis, under the background of the replication environment and financial digitization, Xiaomi Corporation (Xiaomi), an Internet of things (IoT) enterprise, is taken as an example to analyze its current situation of cost management. Xiaomi Corporation is a Chinese company founded in Beijing in 2010 to produce consumer electronics and related software. In 2011, Xiaomi introduced its first smartphone, and by 2014, it held the largest market share of smartphones sold in China [21]. In 2021, Xiaomi grew into the second largest smartphone supplier worldwide after Samsung [22]. Moreover, the strategy and effect of SCM based on Xiaomi's value chain are studied. Finally, the problems of SCM in Xiaomi and their solutions are put forward. This research is conducive to better strategic planning and the improvement of enterprise cost management efficiency, and it can provide a reference for the cost management problems faced by other IoT enterprises.

Main Forms of Replication Technology

Replication technology replicates a set of particular forms of securities and then uses it to investigate other security indicators [23]. It is essential to pay attention to the types of securities (preset or a group) and to keep the capital flow fully matched to their characteristics [24]. The purpose of increasing asset income can be realized when the capital flow is completely matched; these goals also contribute to research on replication technology. Generally, replication technology is widely used in financial markets, mainly in asset pricing, financial asset risk management, and the development of new financial instruments [25]. Applying replication technology to financial investment mainly serves to unify the profitability, security, and liquidity of financial assets [26,27]. Figure 1 presents the main forms of replication technology. (1) Index replication form of replication technology. In stock investment, the return on investment is taken as the criterion for judging whether an investment is successful: when it is higher than the return of the market index, the investment is successful [28,29].
The so-called index replication is a way for securities investors to establish a new portfolio that reflects a stock index, based on all the constituent stocks in the target index, and finally obtain a high yield. There are two methods of index replication: the complete replication method and the optimal replication method. In portfolio construction, the complete replication method determines the purchase proportions according to the weights of the various stocks in the underlying index so as to achieve index replication. Complete replication ensures that the constructed portfolio is highly consistent with the underlying index. This method is more suitable for a split share structure; it requires a large capital investment and is not suitable for weaker investors. The optimal replication method can effectively control tracking costs by setting parameters and indicators so that investors can invest within an acceptable range. Moreover, some stocks are eliminated according to the investor's actual situation, and in the process of portfolio allocation, the portfolio can be optimized according to the actual weights. Applying the optimal replication method to financial investment and financial management can effectively control both the position-building cost and the maintenance cost.

(2) Bond replication form of replication technology. Bond replication is a form of replicating other bonds based on existing bonds. In the construction of composite bonds, the cash flow of the bonds is first decomposed to a certain extent, and then the bonds are combined. After the correct bond price is determined, the trading opportunity is accurately grasped to realize appreciation. Cash flow replication is based on bond replication, and consistency among the cash flows must be ensured during replication. Thereby, in the process of copying and constructing synthetic bonds, the calculation can be based on the intrinsic value equation and the cash flows. The intrinsic value equation discounts the bond's cash flows:

$$p_0 = \sum_{t=1}^{T} F\, i\, D_t + F\, D_T,$$

where p_0 is the present value of the bond, F is the face value of the bond, i is the coupon rate, T is the holding period in years, and D_t is the discount factor at time t, with 0 ≤ t ≤ T. The matching equations are constructed so that, at each time, the cash flow of the bond to be replicated equals that of the replicating portfolio:

$$W_t = \sum_{q=1}^{Q} N_q\, C_{q,t}, \qquad t = 1, \dots, T,$$

where C_{q,t} is the period-t cash flow of market bond q, T is the number of time periods, Q is the number of bonds in the market, W_1, W_2, ..., W_T are the target cash flows at the different times, and N_1, N_2, ..., N_Q are the quantities of the existing market bonds used for replication. Bond replication can be realized through the above equations. This replication form is mainly aimed at fund products and currently takes two forms. The first targets a reference fund and replicates it through financial derivatives, which can reflect a segment of the financial market. The second achieves replication based on the relevant investment strategy; this form is applied to the fund products of financial companies to ensure the success of those products, thereby more effectively safeguarding the interests of investors and making the fund manager's investment management process more convenient.
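As a concrete illustration of the two equations above, the following sketch prices a target bond from its discounted cash flows and solves the cash-flow matching system for the bond quantities N_q. All of the numbers (face value, coupon rate, discount rate, and the market bonds' cash flows) are hypothetical and chosen only to make the linear system solvable.

```python
import numpy as np

# Hypothetical example: replicate a target bond's cash-flow vector W
# (one entry per period t = 1..T) with a portfolio of Q market bonds
# whose period-t cash flows form the columns of C (shape T x Q).
F, i, T = 100.0, 0.05, 3                        # face value, coupon rate, holding period
D = 1.0 / (1.0 + 0.04) ** np.arange(1, T + 1)   # discount factors D_t at a flat 4% rate

W = np.full(T, F * i)                           # annual coupons F*i ...
W[-1] += F                                      # ... plus the face value at maturity
p0 = W @ D                                      # present value of the target bond

C = np.array([[5.0, 105.0, 0.0],                # market bond cash flows, rows = periods
              [5.0, 0.0, 4.0],
              [105.0, 0.0, 104.0]])
N, *_ = np.linalg.lstsq(C, W, rcond=None)       # bond quantities N_1..N_Q
```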
Enterprise Financialization and Enterprise Performance

Existing academic work holds that enterprise financialization has both a crowding-out effect and a reservoir effect on real industries. With limited resources, financial investment increases as physical investment decreases. When an enterprise transfers resources intended for its real operations to financial investment, the enterprise is harmed; this is the crowding-out effect [30]. When an enterprise invests idle funds, it can improve the utilization rate of funds and contribute to the growth of business performance; this is the "reservoir" effect. Financial assets are the main form of enterprise financialization. Many real-economy enterprises take financial investment as a crucial way to improve their capital composition and alleviate financing difficulties. Therefore, many enterprises use idle funds to buy financial products with strong liquidity and low profitability so as to prevent the situation described above [31,32]. The impact of enterprise financialization on enterprise performance can be analyzed from two perspectives. On the one hand, from the perspective of the "reservoir" effect, financial assets are an effective means of maintaining enterprise capital liquidity. When an enterprise urgently needs cash, it can quickly liquidate these assets at low cost, which helps improve the efficiency of the enterprise's investment operations. Moreover, it can improve enterprise performance, increase financial revenue, and prompt the enterprise to reduce material expenditure and financial assets [33]. In the short term, financial returns can offset low production and operation performance and temporarily improve enterprise performance. On the premise of reasonable financial investment, financialization can not only increase enterprise value but also diversify asset investment, thereby effectively reducing investment risk and improving enterprise performance. Therefore, enterprises should invest in liquid financial assets in a reasonable manner. On the other hand, from the perspective of the "crowding-out" effect, the pursuit of profit may lead to excessive financial investment, which inevitably crowds out other investments. The operating profits of the enterprise then come mainly from financial investment, changing the enterprise's main business and hollowing out the real economy. Unrestricted investment in financial products hinders the development of the enterprise's main business and worsens its profitability, which negatively affects enterprise performance. Meanwhile, other investment projects in the enterprise will be delayed for lack of funds, resulting in low overall investment efficiency, so it is difficult to increase enterprise performance. In addition, the vulnerability of the financial system affects the business risk of enterprises, leading to a decline in enterprise performance that is not conducive to their long-term development.

SCM

SCM enhances original cost management in response to changes in the business environment. Since SCM combines strategic management and cost management, it also helps enterprises adapt to the external competitive environment.
Table 1 shows the differences between SCM and traditional cost management. The biggest difference is that SCM creates an environment within the enterprise that can reduce costs. Figure 2 displays the characteristics of SCM. SCM aims to enable enterprises to achieve sustainable development, so it is not limited to temporary profits and losses but focuses more on the long-term interests of the enterprise. SCM pays attention to the relationship between the enterprise and the external environment: in addition to internal considerations, it extends management outside the enterprise and optimizes the cost structure by improving the relationship with the external environment, thereby reducing the enterprise's cost level, improving its competitiveness, and promoting its sustainable development [34,35]. SCM should analyze not only the enterprise's own cost structure but also, continuously, that of its competitors, as the basis for strategy implementation. In this way, a cost management scheme suitable for the enterprise can be established, which helps enhance its competitiveness. SCM is also used to manage the internal environment of the enterprise so that the internal business environment can adapt well to a complex competitive environment, and it helps maintain the enterprise's position in its industry by providing useful information in time. At specific stages of enterprise growth, SCM focuses on the transition stages of the enterprise life cycle; the focus of cost management therefore varies with the stage of the enterprise. If enterprises adopt different competitive strategies, the focus of SCM changes accordingly.

Current Situation of Xiaomi Cost Management

Xiaomi focuses on the research and development (R&D) of intelligent hardware and electronic products. It is listed in Hong Kong and was the first company to build a mobile phone enterprise around the Internet, with the Internet as its core idea. Traditional mobile phone manufacturers focus mainly on the real economy, and few enterprises adopt the Internet model. Xiaomi did not have much capital, so it abandoned traditional marketing and opened up a new channel through network marketing. From its establishment, Xiaomi had a clear understanding of the company's positioning strategy: integrating hardware, software, and the Internet. The hardware is the Xiaomi mobile phone, the software is "MiTalk", and the Internet service system is the MIUI system. In addition, Xiaomi has released a series of products such as the Xiaomi router and Xiaomi finance. It has also entered the field of smart homes, launched the Xiaomi ecological chain brand "MIJIA", established the world's largest consumer IoT platform, and invested in about 400 enterprises covering hardware facilities, games, social networks, finance, and multiple other industries. From the available data, Xiaomi's costs lie mainly in sales, administration, R&D expenses, and promotion. Figures 3 and 4 present the sales costs of Xiaomi's products in 2018 and 2019.
Figures 3 and 4 show that Xiaomi's total sales cost in 2018 and 2019 was 152.7 billion yuan and 177.3 billion yuan, respectively; the total revenue across projects was 174.9 billion yuan and 205.8 billion yuan, respectively; and the gross profit was 22.2 billion yuan and 28.6 billion yuan, respectively. In 2019, the sales costs of smartphones and of IoT and consumer products increased substantially, owing to the increase in product sales. Although smartphones bring in the highest revenue, their gross profit is relatively low; the gross profit of Internet services is the highest. The main reason is that Xiaomi's profit depends mainly on software and the Internet, not hardware. Figure 5 shows Xiaomi's sales, promotion, and administrative expenses in 2018 and 2019. It reveals that Xiaomi's sales and promotion expenses in 2018 were 8 billion yuan and its administrative expenditure was 2.2 billion yuan; in 2019, sales and promotion expenses were 10.4 billion yuan and administrative expenditure was 3.1 billion yuan, increases of 30% and 40.9%, respectively, compared with 2018. The main reason is that in recent years Xiaomi has continually used celebrity endorsements to raise brand awareness, so advertising fees have increased; moreover, the rapid development of overseas business also raises logistics costs. Figure 6 compares Xiaomi's R&D expenses from 2018 to 2020: Xiaomi's R&D investment increased year by year over 2018-2020, mainly because Xiaomi insists on keeping technology at the center and has added many R&D projects. Meanwhile, the expansion of the R&D staff increases R&D salaries, driving R&D expenses further upward. Although Xiaomi's R&D investment reached 10 billion yuan in 2020, there is still a big gap compared with Huawei. Figure 7 shows the proportions of Xiaomi's business revenue from 2016 to 2019: the revenue of Xiaomi's smartphone business decreased in 2019 compared with 2018, while the revenue of IoT and consumer products increased. This suggests that Xiaomi has continuously adjusted its revenue structure in recent years, reducing the proportion of the smartphone business and expanding the proportion of IoT products.

SCM Strategy Based on the Value Chain

Xiaomi introduced the SCM mode to enhance its market competitiveness. First, the raw material procurement link: Xiaomi strengthens the management of the procurement department and strictly controls the procurement system. Second, the R&D link: Xiaomi pioneered a networked mode of developing its mobile phone operating system and then makes only extremely popular products, thus reducing the unit R&D cost. Third, the production and manufacturing link: on the one hand, Xiaomi outsources production to realize the optimal allocation of resources; on the other hand, it has established a quality committee and adopted international testing standards. Fourth, the marketing link: Xiaomi sells online, focusing on fan marketing, while offline marketing is carried out through the "Mi Fan Festival". Xiaomi's SCM strategy based on the external value chain mainly includes supplier-related, consumer-related, and competitor-related strategies.
The supplier-related strategy is, on the one hand, to cooperate with strong suppliers in multiple fields and, on the other hand, to adopt a zero-inventory mode. The consumer-related strategy is to accelerate market sinking. As for competitors, Xiaomi learns from the business model of its competitor Apple: regarding product strategy, Xiaomi releases only one mobile phone model every year.

SCM Implementation Effect Based on the Internal Value Chain

Some of the SCM strategies adopted by Xiaomi have achieved excellent results. The total cost per unit of product A, a smartphone, is illustrated as an example in Figure 8, which shows that the cost components of product A have decreased to different degrees. Among them, the cost of raw materials decreased the most, from 771 yuan to 726 yuan. This is due to the establishment of strategic partnerships between Xiaomi and its suppliers, which improved Xiaomi's bargaining power as a buyer and reduced the cost of raw materials. Moreover, as user demand increases, sales volume increases, and the fixed cost allocated to a single product is reduced accordingly. Figure 9 describes the changes in Xiaomi's gross profit and gross profit margin from 2016 to 2019. Xiaomi's gross profit increased year by year: in 2017 it exceeded 10 billion yuan, and the gross profit margin increased from 10.6% in 2016 to 13.23% in 2017; in the following years, the gross profit margin also kept increasing. The reason is that Xiaomi's products have become relatively diversified in recent years and its business relatively mature, so its profitability has also gradually improved.

SCM Implementation Effect Based on the External Value Chain

Figure 10 presents the growth of China's smartphone market share from 2019 to 2020. Smartphones are Xiaomi's main business, so market share reflects Xiaomi's competitiveness in the industry. Figure 10 shows that Xiaomi's market share in 2020 was 12%, an increase over 2019. Under the impact of the epidemic, Xiaomi's smartphone market share maintained relatively stable growth, suggesting that the enterprise's SCM based on the external value chain has achieved certain results. The analysis shows that the SCM effect of Xiaomi's value chain is obvious. However, there are still some deficiencies in the process of cost management, such as the incomplete scope of cost management and the small amount of patent authorization. The following tables describe the existing problems and their solutions in terms of the value chain, strategic positioning, and cost drivers. Table 2 presents the existing problems and solutions concerning product suppliers, the R&D technical level, and product after-sales in terms of the value chain. In addition to problems in the value chain, strategic positioning is also a rising problem in cost management: Table 3 presents the existing problems and solutions concerning the cost management organization and market positioning, and Table 4 presents the existing problems and solutions concerning inter-department contact and SCM awareness in terms of cost drivers.

Conclusion

SCM is considered an efficient and advanced cost management tool whose applications have been integrated into various domains. In this research, we carried out an intensive comparative analysis of our proposed method against benchmark models.
Moreover, to carry out our analysis and evaluation against the background of the replication environment and financial digitization, Xiaomi, an IoT enterprise, is considered as the cost management case. The strategy and effect of SCM based on Xiaomi's value chain are discussed in detail, and a comparative analysis is provided for evaluation purposes. Finally, solutions to the existing problems are provided, and the proposed evaluation method is justified and validated. The research results show that SCM based on Xiaomi's value chain has had a positive effect in reducing the total cost per unit product and increasing market share. From the literature, we found that some issues related to SCM remain; these include insufficient innovation ability, the incomplete scope of cost management, and limited patent authorization. This research is conducive to improving enterprise cost management efficiency and has practical value. A limitation is that the research is not very in-depth, owing to Xiaomi's incomplete information disclosure; the aim of this research is therefore to analyze the strategic cost management of the case through the accumulated knowledge. We have also recommended future solutions and evaluation methods to decrease cost and improve management. For future research, it is important to study the literature deeply, learn the relevant theories, and conduct field investigations to obtain more detailed data, which can improve the existing research problems of SCM.

Data Availability

The data used to support this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Table 2: Problems and solutions on the value chain.
- About product suppliers. Problem: excessive dependence on external suppliers and insufficient product supply capacity. Solution: improve the company's upstream supply chain and establish a supplier evaluation system.
- About the R&D technical level. Problem: relatively weak R&D capability and scientific and technological strength. Solution: increase R&D capital investment and improve the technical level and innovation ability of the enterprise.
- About product after-sales. Problem: imperfect after-sales service system. Solution: improve the after-sales service system and enhance the brand reputation.

Table 3: Problems and solutions on strategic positioning.
- About the cost management organization system. Problem: imperfect cost management organization and system. Solution: improve the cost management system and strengthen cost management.
- About market positioning. Problem: inaccurate enterprise market positioning. Solution: adjust the market positioning in a timely and targeted manner.
5,695.2
2022-04-05T00:00:00.000
[ "Computer Science", "Business" ]
Bulletin of Electrical Engineering and Informatics

Samer Alabed, Amer Alsaraira, Nour Mostafa, Mohammad Al-Rabayah, Ahmed Shdefat, Chamseddine Zaki, Omar A. Saraereh, Zakwan Al-Arnaout

Department of Biomedical Engineering, School of Applied Medical Sciences, German Jordanian University, Amman, Jordan; College of Engineering and Technology, American University of the Middle East, Egaila, Kuwait; Department of Electrical Engineering, Faculty of Engineering, Hashemite University, Zarqa, Jordan

INTRODUCTION

A recommendation system is an intelligent system that captures user behavior on internet portals in order to forecast user interest in future online product purchases, movie viewing, or music listening. Because of the growing popularity of the internet market, many websites now offer their customers numerous choices, and finding the appropriate item among thousands of items in a short amount of time has become quite difficult. Recommendation systems were introduced to meet this challenge. A recommendation system can build on a user's previous purchase history, search patterns, and online behavior: when a new item or user enters the website, the recommender system searches the existing data and recommends the best items based on the customer's preferences [1].

Based on their functioning behavior, recommendation systems are categorized into three types, so as to create the most efficient suggestions: content-based recommendation systems, collaborative recommendation systems, and hybrid recommendation systems. Approaches such as collaborative filtering (CF) and content-based filtering (CBF) have mainly been developed to gain insight into user preferences [2], [3]. In the CBF technique, items are suggested based on their content similarity to items previously rated by the user, while the CF technique exploits the similarity of users' tastes [4], [5]. The CF technique is further classified into two types: user-based and item-based. In user-based CF, the similarity between users is estimated from co-rated items; item-based CF instead assesses this similarity among items rather than users. Items similar to those a user has previously admired will pique their attention. Compared to the individual techniques, the hybrid strategy combines the two filtering procedures and achieves superior prediction accuracy [6].
Even though CF has drawn attention owing to its effectiveness and simplicity [4], it still faces the following challenges: data sparsity [7], computation time, accuracy of recommendations, scalability, and data volume. To address the issues of the CF method, hybrid recommendation strategies employ several information filtering techniques. The hybrid filtering approach is designed to produce more efficient and accurate recommendations than any single technique, overcoming the disadvantages of a single system by combining several techniques. In this research, we offer a recommendation approach that uses both ontological semantic filtering and an incremental algorithm to provide high scalability when dealing with massive increases in the size of the user-item matrix, as well as with sparsity issues, so as to obtain accurate predictions with a reduced running time of the recommendation system. The rest of the paper is organized as follows: section 2 examines the works relevant to this research, section 3 describes the proposed approach, section 4 discusses the practical implementation and evaluation of the proposed approach, and section 5 concludes the paper.

RELATED WORK

Several strategies for recommendation systems have been established in prior research. Nowadays, recommender systems are essential to speed up internet users' searches for relevant content. In the area of recommender systems, using ontologies as a knowledge base is becoming increasingly popular for modeling tasks, inferring new knowledge [8], [9], and computing similarity. Adopting ontologies in information systems aims to model information at the semantic level by structuring and organizing a set of hierarchical terms or concepts within a domain and modeling the relationships between these terms or concepts using relational descriptors [10], [11]. Recommender systems based on knowledge represented by ontologies then explicitly solicit user requirements for these elements and use an in-depth understanding of the underlying domain for similarity measures and prediction computation. Among the significant number of published studies, [11] enhances user profile representation by implementing an ontology-based recommendation system. By introducing domain ontologies into the system, the suggested technique is able to uncover relationships between users and their favorite items. The authors developed several offline experiments and compared the new recommendation approach to collaborative approaches. To improve the quality of the recommendations, Hassan et al. [12] used item semantic knowledge, creating a hybrid semantically enhanced recommendation strategy that combines inferential ontology-based semantic similarity (IOBSS) with the classic item-based CF method. Kermany and Alizadeh [13] suggested a multi-criteria recommender system using an adaptive neuro-fuzzy inference system (ANFIS) that relies on ontological item-based and user demographic information. Their method was tested on a Yahoo Movies platform dataset, and their results show that the accuracy of a multi-criteria recommendation system can be increased by incorporating semantic information.
Dimensionality reduction techniques have been widely used in the recommendation systems literature. Among the most successful are the singular value decomposition (SVD) and its variants, and principal component analysis (PCA) [14], [15]. These techniques reduce the dimensionality of the data, which helps in handling data sparsity and improves the efficiency and accuracy of the recommendation process. The literature shows that these techniques have improved recommendation performance on various datasets, especially after the challenge launched by Netflix. Indeed, many analyses of the challenge results, such as [16], demonstrated the superior accuracy of approaches applying dimensionality reduction over plain CF algorithms. Recent research related to our work has also used SVD within CF for recommendation systems. For instance, Wang et al. [17] proposed a CF algorithm that incorporates trust between users to improve recommendation accuracy; the algorithm combines the traditional SVD method with a trust factor matrix, and the results show that it outperforms other state-of-the-art CF methods in recommendation accuracy. Nilashi et al. [18] combine CF with ontology-based techniques and dimensionality reduction; their recommender system uses ontology and dimensionality reduction to improve the accuracy and coverage of CF, combining semantic similarity and matrix factorization to handle the sparsity problem and provide more personalized recommendations. Furthermore, incremental SVD has been proposed as a way to improve scalability and performance compared to non-incremental SVD [19]. Brand [20] uses an incremental SVD approach with incomplete data to handle uncertain new data with missing values and/or correlated noise. In comparison with the batch SVD technique, using incremental SVD in recommendation systems updates the factorization model using only the new information, instead of recomputing the entire model from scratch, which can be computationally expensive and time-consuming for large datasets. As a result, incremental SVD can reduce the training time and improve the efficiency of the recommendation system without sacrificing accuracy. Overall, our contribution lies in presenting a comprehensive recommendation method that combines dimensionality reduction, ontology-based techniques, and incremental SVD to address key challenges in recommendation systems. By leveraging these techniques, we aim to improve recommendation accuracy, scalability, and efficiency, ultimately enhancing user experience and satisfaction.
HYBRID RECOMMENDER SYSTEM PROPOSITION

Figure 1 shows a diagram illustrating how the proposed recommendation system works. The suggested system aims to provide efficient, scalable, and accurate recommendations, and its process has two significant phases. In the first phase, several tasks are performed during the construction of the recommendation model: clustering of items and users based on ratings, dimensionality reduction using the SVD algorithm, and construction of item-user similarity matrices. First, the system is supplied with a user-item matrix that specifies the rating given by each user to each item. Item clusters are then constructed using fuzzy c-means clustering, grouping items by their pairwise similarity; the overall similarity is obtained as the average of the item-based (rating) similarity and the ontology-based similarity.

Figure 1. Proposed system framework using ontology and incremental SVD

A new ontology-based algorithm is suggested to compute the item similarity. Following that, we created decomposition matrices using SVD for the user-item cluster matrix; note that we build SVD models for both items and users, so in each matrix the similarity computation is performed after the matrix decomposition step. After the similar-item clusters have been produced, a rating is predicted for each item the current user has not yet rated, to eliminate sparsity in the user-item matrix. The incremental SVD is employed in the second (online) phase of the recommendation process to perform prediction and recommendation tasks for targeted users and items; we follow the same procedure as for the item-based suggestion, and finally integrate the user-based and item-based predictions in a meaningful way. The approach is discussed in depth in the following subsections.

Preprocessing of data

The initial step in our research is to preprocess the dataset to make it suitable for the proposed method, carrying out the preparation that real-life data typically require before analysis. We begin by transforming movie ratings into a user-item matrix, often referred to as a rating utility matrix; this matrix captures the ratings provided by users for different movies (Figure 2). The matrix is typically sparse: many cells are empty because they represent movies the user has not rated, corresponding to new users, new movies, or movies not rated by anyone. CF algorithms typically work with dense matrices, so we convert the sparse matrix into a dense one by applying normalization techniques. Users who have expressed positive sentiment (indicating user preference) towards a movie are assigned ratings of 4 or 5, while users who have shown negative sentiment (indicating user disinterest) are assigned ratings of 1 or 2. Therefore, to address item and user bias in the ratings, we normalize the ratings using mean normalization.
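A minimal sketch of this preprocessing step, using illustrative rating triples rather than real MovieLens rows, might look as follows: build the user-item matrix, then subtract each user's mean rating from their rated cells only.

```python
import numpy as np

# Illustrative (user, movie, rating) triples; 0 marks an unrated cell.
ratings = [(0, 0, 5.0), (0, 2, 1.0), (1, 1, 4.0), (2, 0, 2.0), (2, 2, 4.0)]
n_users, n_items = 3, 3

R = np.zeros((n_users, n_items))
for u, i, r in ratings:
    R[u, i] = r

mask = R > 0                                          # which cells are rated
user_means = np.where(mask.any(axis=1),
                      R.sum(axis=1) / np.maximum(mask.sum(axis=1), 1),
                      0.0)
R_norm = np.where(mask, R - user_means[:, None], 0.0)  # mean-normalized ratings
```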
Movie ontology

In this research, we use the movie ontology (MO), created at the Department of Informatics of the University of Zurich based on the ontology web language (OWL) standard [21]. MO elucidates the semantic ideas and concepts related to the film domain. The class "movie" is the main class, and all movies are instances of it. Many studies have demonstrated that using an ontology-based semantic approach improves the prediction accuracy of recommendation systems [22], [23].

User-based clustering

In user clustering, users are grouped by similar preferences, as determined by their ratings. After clustering the users, the aggregate opinions within each cluster are used to predict unknown ratings for target users, i.e., to predict which items they will like or dislike. Since a cluster contains a restricted number of users, there is no need to evaluate all users, which improves performance.

Item-based clustering

3.4.1. Compute ontology-based item similarity

Ontologies supply immense knowledge on any topic, which can be highly valuable in a recommendation system [24]. Most studies have ignored the multilevel, complex structure of ontologies and used just one feature to determine ontology-based item similarity; for instance, several researchers have relied only on a movie's "genre" to identify related collections of films. In the context of a movie recommendation system, consider Figure 3. In this example, CL represents the movie class. Within this class, we have two attributes, At1 and At2, which could represent characteristics such as the release date and copyright information. Additionally, we introduce a subclass SCl representing the "movie origin"; this subclass includes attributes At3, At4, and At5, corresponding to regions such as North Africa, Asia, and Europe. Organizing the movie data in this hierarchical manner captures more detailed information about movies and their origins. This ontology-based approach allows us to categorize and represent movies based on their attributes, enabling more sophisticated recommendation algorithms that provide personalized movie suggestions to users. This work uses the binary Jaccard similarity coefficient to compute item-based semantic similarity: for two items to be similar, their attributes and the attributes of their subclasses must be similar [25]. The similarity between items is computed recursively, averaging the values level by level until a maximum depth, defined at the outset, is reached. In Eq. (1), the semantic similarity between the classes of two items for a specific attribute is given by the binary Jaccard coefficient; in Eq. (2), assuming no attribute in the ontology is a subclass, the ontology-based similarity between two items is the average of Eq. (1) over the total number N of attributes; Eq. (3) is computed when attributes and subclasses with their own attributes exist in the ontology, recursing until the maximum depth. Determining the ontology-based similarity between two objects in the domain requires two inputs: i) the ontology of items, with classes, properties, and relationships; and ii) the set I of all items. The result is a semantic similarity matrix (SSM) that measures the ontology-based semantic similarity between every pair of items.
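The recursive, attribute-level computation described above can be sketched as follows. The dictionary layout for items and the equal weighting of the attribute-level and subclass-level scores are illustrative assumptions, standing in for the exact form of Eqs. (1)-(3).

```python
def jaccard(a, b):
    """Binary Jaccard coefficient between two attribute sets (Eq. (1))."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def ontology_similarity(item1, item2, depth=0, max_depth=3):
    """Recursive ontology-based similarity: average the attribute-level
    Jaccard score with the similarity of the matching subclasses, down to
    max_depth. Items are dicts: {'attrs': set, 'subclasses': {name: item}}."""
    score = jaccard(item1['attrs'], item2['attrs'])
    shared = item1['subclasses'].keys() & item2['subclasses'].keys()
    if depth >= max_depth or not shared:
        return score                      # Eq. (2): no subclasses to recurse into
    sub = sum(ontology_similarity(item1['subclasses'][s],
                                  item2['subclasses'][s],
                                  depth + 1, max_depth)
              for s in shared) / len(shared)
    return 0.5 * (score + sub)            # Eq. (3): blend with subclass level
```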
Calculation of item similarity using explicit user ratings

The similarity between items is also determined from the explicit ratings supplied by users in the user-item rating matrix, where I is the set of items, U the set of users, and R_{u,i} the score given by user u to item i, as seen in Figure 4. The similarity metric used is given by Eq. (4):

$$\text{Similarity}(i, j) = \frac{\sum_{u=1}^{L} R_{u,i}\, R_{u,j}}{\sqrt{\sum_{u=1}^{L} R_{u,i}^2}\, \sqrt{\sum_{u=1}^{L} R_{u,j}^2}},$$

where R_{u,i} and R_{u,j} are the ratings given by user u to items i and j, respectively, 1 ≤ u ≤ L, and L is the total number of users who rated both items.

Total item similarity score

The total similarity between items is obtained by combining the ontology-based similarity of Eqs. (2) and (3) with the explicit-rating similarity of Eq. (4), as in Eq. (5):

$$\text{TotalSim}(i, j) = \alpha\, \text{SemSim}(i, j) + \mu\, \text{Similarity}(i, j), \qquad \alpha + \mu = 1.$$

The total item similarity matrix (TISM) is generated by calculating this overall similarity for every pair of items in the item set.

Method of item clustering

Fuzzy c-means clustering [26] was employed in this study to group similar items, since it works well with the sparse datasets found in most recommendation systems. This study combines content-based characteristics derived from ontologies with user rating data, to avoid the over-generalization, poor accuracy, and cluster overlap that would result from using only one of them. As detailed in the next section, similar items within a cluster are used to predict a target item's score; consequently, the number of items that need to be evaluated is significantly smaller than the total number of items in the system, which increases the system's performance [22]. Once the clusters have been constructed, a user-item cluster matrix (UICM) is produced, in which U represents the set of users, C the centers of all item clusters, and each entry the average rating given by user m to the items of cluster center z, as illustrated in Figure 5.

Prediction for the rating

Based on the generated clusters, a sorted list of the top T items most similar to a target item is produced, and the obtained values are used to fill the empty cells of the user-item rating matrix for the target user. The rating of each unrated (target) item is predicted from the active user's ratings of items similar to that unrated (target) item. Based on Eq. (6), the rating that a target user u would give an unrated item i is predicted as

$$\hat{R}_{u,i} = \frac{\sum_{j=1}^{T} \text{Similarity}(i, j)\, R_{u,j}}{\sum_{j=1}^{T} \left|\text{Similarity}(i, j)\right|},$$

where Similarity(i, j) is the similarity score between the target item i and item j, R_{u,j} is the rating of the similar item j by user u, and T is the total number of similar items considered. In certain cases, the current user may not have rated any of the top T items similar to a target item, leaving some cells of the user-item matrix empty after this filling step. To address this issue, an extended technique estimates the remaining sparse cells by also taking into account the active user's rating behavior on other items and other users' ratings of the unrated (target) item. Using the suggested method, the rating of an unrated item i by target user u is predicted by Eq. (7):

$$\hat{R}_{u,i} = \alpha \cdot \frac{1}{M} \sum_{m=1}^{M} R_{u,m} + \mu \cdot \frac{1}{n} \sum_{\substack{p=1 \\ p \neq u}}^{n} R_{p,i},$$

where α and µ are control parameters, M is the number of other items scored by the target user u (1 ≤ m ≤ M), R_{u,m} is the rating provided by u to those other items, n is the number of other users (1 ≤ p ≤ n, p ≠ u) who submitted a rating for the unrated target item i, and R_{p,i} is the rating given to the target item i by the other users p, excluding the target user u. The result of predicting the unrated values in the explicit user-item rating matrix UIM(U, I) is a dense, non-sparse user-item rating matrix DUIM(U, I).
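A compact sketch of Eqs. (4)-(6) follows; the helper names and the default values of alpha and T are assumptions, and the rating-based similarity is computed only over users who rated both items, as the equation's variable definitions suggest.

```python
import numpy as np

def rating_similarity(R, i, j):
    """Eq. (4): cosine-style similarity between items i and j over users
    who rated both (R is the user-item matrix with 0 for missing cells)."""
    both = (R[:, i] > 0) & (R[:, j] > 0)
    ri, rj = R[both, i], R[both, j]
    if ri.size == 0:
        return 0.0
    return float(ri @ rj / (np.linalg.norm(ri) * np.linalg.norm(rj)))

def total_similarity(sem_sim, rat_sim, alpha=0.5):
    """Eq. (5): weighted blend, with alpha + mu = 1."""
    return alpha * sem_sim + (1.0 - alpha) * rat_sim

def predict_rating(R, u, i, sims, T=20):
    """Eq. (6): predict user u's rating of item i from u's ratings of the
    top-T most similar items; sims[j] is the total similarity of (i, j)."""
    rated = [j for j in np.argsort(sims)[::-1] if R[u, j] > 0 and j != i][:T]
    if not rated:
        return 0.0
    num = sum(sims[j] * R[u, j] for j in rated)
    den = sum(abs(sims[j]) for j in rated)
    return num / den
```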
Singular value decomposition

According to Zhou et al. [19], a standard solution for sparsity issues is to use data dimension reduction techniques, notably SVD, a matrix factorization technique that can extract dataset features by decomposing the original user-item rating matrix into the product of three smaller matrices. Given an m × n matrix A ∈ ℝ^{m×n}, where m is the number of items and n is the number of users, the SVD of A with rank(A) = r is expressed as

$$\mathrm{SVD}(A) = U \times \Sigma \times V^{T},$$

where U ∈ ℝ^{m×r}, V ∈ ℝ^{n×r}, and Σ ∈ ℝ^{r×r}. The middle matrix Σ is diagonal, and its nonzero entries are the singular values of A. The truncated SVD is the best low-rank linear approximation of the original matrix, providing the optimum approximation of the utility matrix A.

Incremental singular value decomposition algorithm in the prediction task

The algorithms in the proposed study operate in two stages, offline and online. In the suggested CF recommendation system, the user-to-user mapping takes place offline, whereas the actual rating prediction for target users happens online. Offline prediction or recommendation is, in fact, a time-consuming procedure, whereas the online method is efficient in prediction and recommendation time owing to the use of the incremental SVD. The parallel design of the similarity formation method can be made highly scalable using SVD size reduction techniques while generating better results in most instances. This study presents incremental SVD algorithms that produce online recommendations for target users in the shortest possible time. The incremental algorithm's most essential quality is that it supports a large number of users, keeping the system scalable as the size of the user-item matrix grows.

Our recommender system operates in two distinct phases. First, the model is developed offline by calculating user-user or item-item similarity. Then, when a new user or item is introduced, the online process begins and the model generates predictions. In incremental SVD, the projection method is known as folding-in: new users are folded into the space of the previously reduced user-item matrix. For instance, Figure 6 shows that after running the SVD method on A1 in the offline process, yielding the three matrices U1, Σ1, and V1, the online process uses the incremental approach whenever a new matrix A2 is added, resulting in three updated matrices U2, Σ2, and V2.
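The offline/online split can be sketched as follows: a truncated SVD is computed once offline, and a new user is folded into the reduced space online by projection, without recomputing the factorization. The function names and the use of NumPy's dense SVD are illustrative assumptions.

```python
import numpy as np

def truncated_svd(A, k):
    """Offline phase: rank-k truncated SVD of the dense user-item matrix A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], np.diag(s[:k]), Vt[:k, :]

def fold_in_user(new_row, Sk, Vtk):
    """Online phase (folding-in): project a new user's rating row (length n)
    into the existing k-dimensional item space."""
    return new_row @ Vtk.T @ np.linalg.inv(Sk)      # length-k user factor

def predict(u_factor, Sk, Vtk, j):
    """Reconstructed rating of the folded-in user for item j."""
    return float(u_factor @ Sk @ Vtk[:, j])
```

Folding-in trades a small approximation error for a large speedup, which is what lets the online phase keep pace as new users arrive, instead of recomputing the entire model from scratch.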
Dataset description

The MovieLens dataset, available at [27], is one of the best-known datasets for evaluating recommender systems. It consists of 1 M ratings provided by 6,040 users for a total of 3,900 movies. Each rating is expressed on a scale of 1 to 5, where a rating of 1 indicates the least-liked and a rating of 5 the most-liked movies. The dataset offers a comprehensive collection of user reviews, allowing us to evaluate and enhance our recommendation system over a broad range of user preferences and movie ratings; detailed information about the dataset is presented in Table 1. WebSPHINX [28], a web crawler, was used in this study to collect material relevant to IMDb [29] items, and the gathered data were used to construct and populate the item ontology. For the experiments, 80% of the data was randomly selected as the training set, while the remaining 20% was used as the testing set.

Evaluation and discussion of the proposed system

The recommender system presented in this study was implemented using Python 3.9.7 on a PC with a 4 GHz processor, 8 GB RAM, and 64-bit Microsoft Windows 10. To thoroughly assess the system's performance, it was compared with various related approaches, including the Pearson nearest-neighbor algorithm, item-based CF with EM, SVD combined with ontology, and user-item-based EM and SVD with and without ontology integration. The evaluation was conducted from two perspectives, time throughput (recommendations per second) and accuracy, providing valuable insight into the system's efficiency and effectiveness compared to existing approaches.

Evaluation 1: predictive accuracy analysis

Mean absolute error (MAE) is a statistical metric used to evaluate prediction accuracy. In this experiment, the MAE computes the difference between the predicted and actual ratings, as in Eq. (8):

$$\mathrm{MAE} = \frac{1}{N_u} \sum_{i=1}^{N_u} \left| p_{u,i} - r_{u,i} \right|,$$

where N_u is the number of items on which a user u has given a score, p_{u,i} is the predicted rating, and r_{u,i} the actual rating. The suggested approach is assessed with MAE and compared to the state-of-the-art methods on the MovieLens dataset for different neighborhood sizes (Figure 7).

Evaluation 2: decision-support accuracy

Among the accuracy measurements, decision-support metrics are crucial for evaluating the overall performance of the hybrid recommender; several measures for this purpose are well known in the information retrieval area, including recall, precision, and the F-measure. Precision computes the fraction of relevant items in the list of returned results, while recall computes the fraction of relevant items that have been retrieved. The two metrics should be used together, since recall increases with the number of items retrieved, whereas precision often decreases as result sizes grow. The F-measure takes both values into account, as indicated in Eq. (11):

$$F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}.$$

The F1 measures and precision values of all methods for various top-N recommendations are shown in Table 2. The table shows that the precision achieved by the suggested technique is significantly higher than that of the nearest-neighbor algorithm and the other methods tested. In addition, the F1 measures of the proposed method, which combines dimensionality reduction using incremental SVD with ontology, outperformed the others. Compared to other methods, these findings support our claim that our recommendation system is considerably more efficient and scalable.
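For reference, the evaluation metrics of Eqs. (8) and (11) can be computed as in the following sketch (the function names are illustrative).

```python
import numpy as np

def mae(predicted, actual):
    """Eq. (8): mean absolute error over a user's rated items."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return float(np.abs(predicted - actual).mean())

def precision_recall_f1(recommended, relevant):
    """Decision-support metrics for a top-N recommendation list."""
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)   # Eq. (11)
    return precision, recall, f1
```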
Evaluation 3: scalability analysis

The first experiment evaluates the efficiency of the suggested approach, measured as throughput, i.e., the number of suggestions per second. We test our strategy on the MovieLens dataset to demonstrate its effectiveness in addressing the system's scalability problem. Figure 8 illustrates the performance results of our method compared with the state-of-the-art methods. According to the graph, the throughput of the methods that use dimensionality reduction techniques and clustering is considerably higher than that of the other methods. Moreover, the throughput of the proposed approach, based on clustering with expectation maximization (EM), ontology similarity, and incremental SVD, is slightly higher than that of the other methods, especially those that rely on the plain SVD reduction technique. Unlike systems that use the nearest-neighbor technique, which must scan all nearest neighbors, clustering allows the recommendation system to analyze just a part of the items/users; as a result, increasing the cluster size does not degrade throughput.

The results of the evaluations demonstrate the effectiveness and superiority of the proposed recommendation system. By incorporating ontology and dimensionality reduction techniques into CF, the system achieves improved predictive accuracy, decision-support accuracy, and scalability compared to existing methods. This shows that considering semantic relationships and reducing dimensionality enhances the system's ability to capture user preferences, provide accurate ratings, and handle large-scale datasets more effectively. The proposed method therefore not only provides accurate recommendations but also ensures that relevant items are retrieved. The system addresses the limitations of traditional CF approaches by providing accurate recommendations, assisting users in decision-making, and efficiently handling large datasets. These findings highlight the proposed system's potential for practical applications in the recommendation domain.

CONCLUSION

In this paper, we have presented a novel recommendation method that addresses the challenges of accuracy, scalability, and sparsity in CF-based recommender systems. Our approach incorporates dimensionality reduction using the incremental SVD algorithm, ontological item-based semantic similarity, and explicit user ratings to improve the prediction accuracy and scalability of the system. By adopting the incremental SVD method, we were able to handle the increasing size of the user-item matrix while maintaining computational efficiency; the folding-in technique employed in the incremental SVD algorithm significantly reduced the computation cost and allowed our system to achieve high scalability. The experimental results on a real-world movie recommendation dataset confirmed the effectiveness of our proposed method: the precision, F1, and MAE metrics demonstrated that our system provides accurate predictions while effectively addressing the sparsity and scalability issues commonly encountered in recommender systems. The incorporation of MO further improved the predictive accuracy and expanded the potential for applying our method to different semantic contexts and domains. Further research can explore additional evaluation metrics, investigate the system's performance on different datasets, and consider the impact of incorporating other factors such as user demographics or temporal dynamics. By continuously refining and enhancing the recommendation system, we can further improve the accuracy, relevance, and usability of the recommendations provided to users in various domains.

Figure 5. Construction of the UICM from the user-item rating matrix.

Figure 6. Phases of the recommendation process.
Table 1. Description of the dataset.

Table 2. Comparison of the F1 metric and precision values for the different methods.
5,863.2
2023-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Reinforcement learning pulses for transmon qubit entangling gates

The utility of a quantum computer depends heavily on the ability to reliably perform accurate quantum logic operations. For finding optimal control solutions, it is of particular interest to explore model-free approaches, since their quality is not constrained by the limited accuracy of theoretical models of the quantum processor, in contrast to many established gate implementation strategies. In this work, we utilize a continuous-control reinforcement learning algorithm to design entangling two-qubit gates for superconducting qubits; specifically, our agent constructs cross-resonance and CNOT gates without any prior information about the physical system. Using a simulated environment of fixed-frequency, fixed-coupling transmon qubits, we demonstrate the capability to generate novel pulse sequences that outperform the standard cross-resonance gates in both fidelity and gate duration, while maintaining a comparable susceptibility to stochastic unitary noise. We further showcase an augmentation in training and input information that allows our agent to adapt its pulse design abilities to drifting hardware characteristics, importantly with little to no additional optimization. Our results clearly exhibit the advantages of unbiased, adaptive-feedback, learning-based optimization methods for transmon gate design.

I. INTRODUCTION

Quantum computing holds immense potential to revolutionize various fields, such as optimization, simulation, and cryptography, in some cases promising exponential computational speedup over its classical counterpart. However, a central obstacle to harnessing this potential is the challenge of realizing reliable quantum operations. Achieving high-fidelity quantum gates is therefore a crucial prerequisite to unlocking the full potential of quantum computing for practical applications.

A common approach is to optimize control protocols based on effective models of the physical platform. With a suitable model at hand, high-fidelity strategies can often be achieved through analytical insights, gradient-based optimization methods, or error amplification techniques [1-6]. However, present-day quantum systems are characterized by substantial levels of noise, decoherence, and other environmental disturbances, for which accurate models are rarely known. Moreover, even when a good model is known, it is often not exactly solvable, limiting its usefulness for optimal quantum gate design. A route around these shortcomings is to resort to model-free approaches, which facilitate gate optimization through direct interactions with the quantum device.

Recently, an adaptive approach using reinforcement learning (RL) has become increasingly popular in quantum gate design due to its model-free nature, obviating the need for a precise description of all details of the system [7]. When trained in simulation, the RL approach rivals optimal control techniques in synthesizing high-precision quantum gates using discretized control for qubit-based [8-10] and qudit-based [11, 12] systems, including potentially drastic improvements in exploration [10] and sample efficiency [13]. Similarly, RL has been found successful in optimizing continuous controls for a generic qubit model [14] as well as a hardware-specific gmon model [15]. Ref.
[15] further demonstrates the resilience of the RL-designed pulse sequences to stochastic noise when optimized with knowledge of a noisy environment. Discrete control algorithms have also been adapted to successfully learn faster single-qubit gates from scratch using experimental data from IBM's superconducting platform [16,17]; Ref. [17] additionally uses RL to improve upon the standard structure of an analytical cross-resonance pulse sequence.

While many aspects of the RL approach are more broadly applicable, in this work we specifically address the gate design problem for fixed-frequency, fixed-coupling transmon qubits. For this platform, the model-based approach has yielded valuable insights in the pursuit of crafting high-fidelity entangling gates by utilizing the cross-resonance interaction [18,19]. An effective approximate analytical model, capable of capturing the dominant Hamiltonian terms generated by the primary cross-resonance drive as well as undesired cross-talk [20], has paved the way for the development of various error suppression techniques, including echo sequences [21], selective darkening/active cancellation [22][23][24], optimal control theory [25], rotary pulsing [26], and most recently derivative pulse shaping [27]. However, the intricate nature of real hardware and its inevitable imperfections persist, hindering our ability to achieve flawless quantum gate operations. Moreover, while the analytical insight motivates a specific family of control pulses, it remains unknown whether even better solutions can be found by expanding the considered protocol space.

As mentioned above, RL approaches have been explored to design entangling gates for the transmon platform. However, one of the inherent strengths of RL - its capability to discover innovative strategies free from the confines of theoretical protocol sequences - has remained underutilized. The absence of such flexibility results in lengthy pulse sequences which, in turn, impose severe limitations on the fidelity of these operations. Moreover, the ability of RL agents to learn adaptive strategies, including optimal reactions to the feedback received when they are deployed, has so far received little attention. For example, although RL solutions display a degree of temporal robustness due to exposure to changing underlying system characteristics during training [17], explicitly leveraging the adaptability of the RL agent to deal with such fluctuations remains largely unexplored.
In this work, we address these and related open questions by deploying a continuous-control RL algorithm to construct piece-wise constant (PWC) pulse sequences for cross-resonance and CNOT gates without any prior knowledge about the controlled system. We emphasize that this model-free approach only requires feedback from the environment (simulated or experimental) and has no information about the physical model underlying the environment's dynamics. Our RL training agent only has access to the quantum state and the gate fidelity, which, in principle, can be obtained experimentally via tomography and fidelity benchmarking; in this work, however, we train the RL agent in simulation. We tailor the simulated environment to fixed-frequency, fixed-coupling transmons using realistic system characteristics, in order to directly compare our RL results with the existing error suppression techniques for superconducting platforms. Note that in this particular transmon architecture, the qubit frequency is fixed by the fabrication of the transmon chip itself and cannot be controlled during the gate.

We first demonstrate that our unbiased RL agent is capable of generating novel high-fidelity control solutions that outperform current state-of-the-art cross-resonance pulse sequences. By effectively navigating the vast design space of multi-segment PWC functions to identify high-quality pulses for multiple continuous control drives simultaneously, our agent addresses an up to 120-dimensional control problem, as compared to the 20-dimensional problem considered in Ref. [17]. Without compromising the fidelity, our agent additionally discovers control solutions with large drive amplitudes that can reduce the gate duration by up to 30%, while remaining feasible to implement on modern NISQ devices. We further show how to augment our RL approach so that our agent can learn to adapt to drifts in the underlying hardware parameters (characteristics), a common issue that plagues near-term superconducting devices. This adaptation offers a twofold advantage: immediate, high-fidelity control solutions without any extra optimization when dealing with moderate drifts, or a reduction in the number of training iterations required to address more significant changes in hardware parameters. These findings underscore the practicality of the RL approach as a potent alternative for tackling the quantum gate design problem.

The remaining sections are organized as follows. We define the quantum gate design problem for one- and two-qubit gates in Sec. II. We give a brief overview of state-of-the-art gate implementations in Sec. III. We then present our reinforcement learning approach in Sec. IV and our simulated results in Sec. V. Finally, in Sec. VI, we conclude and discuss future directions.
II. QUANTUM GATE DESIGN PROBLEM

A quantum gate design task aims to realize a logical operation on one or more qubits by optimizing a set of available time-dependent external control fields {d_j(t)} over some gate duration T. The effect of these fields is described by a control Hamiltonian H_ctrl(t) = H_ctrl[{d_j(t)}], while the intrinsic dynamics of the qubits is captured by the system Hamiltonian H_sys. Together, they generate the full unitary evolution

U(T, 0) = 𝒯 exp( −i ∫_0^T [H_sys + H_ctrl(t)] dt ),

where 𝒯 denotes the time-ordering operator. We measure how accurately the resultant unitary U approximates the target operation U_target via the average gate fidelity [28]

F_avg(U, U_target) = [ Tr(U_qubit U_qubit†) + |Tr(U_target† U_qubit)|² ] / [ n(n + 1) ],

where U_qubit = Π_qubit U Π_qubit is the unitary map projected onto the qubit subspace of dimension n. The average is taken over all initial states |ψ_0⟩ distributed uniformly according to the Haar measure. Here we focus on superconducting qubit platforms where local Z rotations can be performed virtually [29], i.e., without incurring any additional time. We include this degree of freedom in the unitary by augmenting U_qubit to V_Z(θ)U_qubit, in which the near-optimal angles θ are given by the matrix elements of M = U_qubit U_target†; see App. A 2.

We consider a target gate fidelity of 99.9%, an order of magnitude beyond the 99% fidelity of the surface code threshold, for two reasons. First, this level of fidelity is expected to enjoy a drastic reduction in the number of physical qubits when using the surface code [30]. Second, for the typical two-qubit gate durations considered in this work, e.g., 248.9 ns and 177.8 ns (the smallest time unit is the inverse sampling rate dt = 2/9 ns for the considered device), the gate fidelity is coherence limited at 99.9% and 99.93%, respectively. These limits are determined by computing the average gate error under a channel with amplitude damping time T_1 = 300 µs and phase damping time T_2 = 300 µs [31], which are achievable in current devices [32]. Thus, having in mind any realistic decoherence in the near term, our target fidelity of 99.9% coincides with the coherence-limited fidelities for the considered range of gate durations.

In addition to the average gate fidelity, we also investigated the worst-case fidelity as an alternative figure of merit. However, we did not find any discernible advantage and report this additional result in App. E 2. In the following, we provide the explicit form of the Hamiltonian used to model superconducting transmon qubits, while state-of-the-art gate implementations are discussed next in Sec. III.

A. Single-qubit Hamiltonian

We begin by modeling a single transmon in the Duffing approximation [33], with the lab-frame Hamiltonian (ℏ = 1)

H = ω b†b + (α/2) b†b†bb,

where ω and α denote the |0⟩ ↔ |1⟩ transition frequency and the anharmonicity, respectively, and b, b† are ladder operators. This transmon can be driven at frequency ω_d via a control Hamiltonian

H_d(t) = Ω_d Re[ d(t) e^{iω_d t} ] (b + b†),

where we have factored out the drive strength Ω_d to keep the real and imaginary parts of the complex control signal d(t) normalized to [−1, 1]. Rotating to the driving frame via the transformation R(t) = e^{−iω_d t b†b} and ignoring fast-rotating terms (cf. App. A 1 for details), we arrive at the rotating-frame Hamiltonian

H_1(t) = δ b†b + (α/2) b†b†bb + (Ω_d/2) [ d(t) b† + d*(t) b ],   δ = ω − ω_d.
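As a concrete illustration of the figure of merit defined above, the following minimal Python sketch evaluates the average gate fidelity of a simulated unitary against an n-dimensional target. The closed-form expression and the projection-based handling of leakage follow the standard construction cited in Ref. [28]; function and variable names are our own illustration, not the paper's code.

```python
import numpy as np

def average_gate_fidelity(U_full, U_target, qubit_indices):
    """Average gate fidelity of a (possibly leaky) unitary against an
    n-dimensional target: F = (Tr[M M^dag] + |Tr M|^2) / (n(n+1)),
    with M the target-frame overlap map on the computational subspace."""
    n = len(qubit_indices)
    # isometry selecting the computational basis states of the full space
    P = np.zeros((n, U_full.shape[0]), dtype=complex)
    for row, idx in enumerate(qubit_indices):
        P[row, idx] = 1.0
    U_qubit = P @ U_full @ P.conj().T   # non-unitary if leakage occurred
    M = U_target.conj().T @ U_qubit
    return (np.trace(M @ M.conj().T).real + abs(np.trace(M)) ** 2) / (n * (n + 1))

# e.g. for two 3-level transmons, |00>,|01>,|10>,|11> sit at indices 0, 1, 3, 4:
# F = average_gate_fidelity(U, ZX_target, qubit_indices=[0, 1, 3, 4])
```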
To make the connection to single-qubit rotations explicit, we consider only the first two levels, under which the drive term reduces to

H_1(t) → (Ω_d/2) [ Re d(t) X + Im d(t) Y ],   (6)

where X and Y denote respectively the Pauli-X and Pauli-Y matrices. Evidently, turning on the complex control field d(t) induces qubit rotations around the x and y axes, which, for long enough gate durations, is sufficient for realizing any single-qubit gate. With the Euler-angle decomposition, one can achieve any desired single-qubit gate by merely calibrating the R_X(±π/2) rotations, and, as discussed above, Z gates can be implemented virtually. Note that in practice much shorter gate durations are desirable, in which case a number of errors (such as high-state population [34]) unavoidably arise and therefore need to be counteracted; see Sec. III.

B. Two-qubit Hamiltonian

We now extend the model to describe a pair of transmons by combining two Duffing Hamiltonians coupled by a resonator. The resonator acts as a bus for coherent communication between the quantum states of the two Hamiltonians that will define the qubits. Although a variety of different logical two-qubit gates are possible with this setup [35], we discuss here the cross-resonance interaction Hamiltonian, which is the current standard for the fixed-frequency architecture.

When the resonator's fundamental frequency is much larger than both |0⟩ ↔ |1⟩ transition frequencies of the individual transmons, we can project the Hamiltonian onto the zero-excitation subspace of the bus resonator to obtain the following lab-frame effective Hamiltonian:

H = Σ_{j=0,1} [ ω̃_j b_j†b_j + (α_j/2) b_j†b_j†b_j b_j ] + J (b_0†b_1 + b_0 b_1†).   (8)

Here, ω̃_j denotes the resonator-dressed qubit frequency and J denotes the effective coupling strength [20].

In addition to a standard on-resonance control field d(t) on each transmon, an entangling operation can be realized by driving one qubit at the frequency of another via a cross-resonance (CR) control field u(t). The two-transmon control Hamiltonian then becomes

H_ctrl(t) = Σ_{j≠k} Re{ [ Ω_dj d_j(t) e^{iω̃_j t} + Ω_ujk u_jk(t) e^{iω̃_k t} ] } (b_j + b_j†).   (9)

Here, u_01(t) refers to the cross-resonance pulse sent to qubit 0 when driven at the frequency of qubit 1, and vice versa for u_10(t). Moving into the frame rotating at ω_d for both transmons using the transformation R(t) = e^{−iω_d t (b_0†b_0 + b_1†b_1)}, and ignoring the fast-rotating terms, we obtain the rotating-frame Hamiltonian

H_2(t) = Σ_{j=0,1} [ δ_j b_j†b_j + (α_j/2) b_j†b_j†b_j b_j ] + Σ_{j≠k} (1/2) { [ Ω_dj d_j(t) + Ω_ujk u_jk(t) ] b_j† + h.c. } + J (b_0†b_1 + b_0 b_1†),   (10)

where δ_j = ω̃_j − ω_d defines the detuning of the j-th transmon. The pair of transmons, whose dynamics is described by H_2(t), is illustrated in Fig. 1. In this work, we simulate the dynamics in the frame rotating at the second transmon's frequency, i.e., setting δ_1 = 0. With the first transmon as control and the second as target, the main effect of the cross-resonance drive can be studied by setting u_01 to a constant value Ω and all other control fields to zero:

H_CR = H_2(t) |_{u_01 = Ω, d_0 = d_1 = u_10 = 0}.   (11)

To obtain the effective ZX interaction rate (or strength) within the qubit subspace while accounting for higher levels, one can employ perturbation theory [20] to obtain the following approximate effective CR Hamiltonian

H_eff = Σ_{A,B} (ω_AB/2) A ⊗ B,   (12)

where A ∈ {I, Z} and B ∈ {I, X, Z}. In the presence of classical cross-talk and incorrect phases in the control drives, B can be extended to include the Pauli-Y matrix [24]. Within perturbation theory for small coupling J and small drive Ω, the interaction rates have the following leading-order scaling:

ω_ZX, ω_IX ∝ Ω (odd in Ω),   ω_ZI ∝ Ω²,   ω_ZZ ∝ J².   (13)

The resultant dominant ZX term can then be used to implement the entangling operation

ZX(π/2) = exp(−i (π/4) Z ⊗ X),

known as the cross-resonance (CR) gate, which is locally equivalent to the popular CNOT gate. Such entangling operations, together with the capacity to realize any single-qubit gate, enable universal quantum computation on the superconducting transmon platform.
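To make the model concrete, here is a minimal numerical sketch of the truncated two-transmon Hamiltonian with a constant cross-resonance tone, in the spirit of Eq. 11. The parameter values are illustrative placeholders, not the Table II values, and the rotating-frame conventions are an assumption of this sketch.

```python
import numpy as np
from scipy.linalg import expm

d = 3                                        # levels kept per transmon
b = np.diag(np.sqrt(np.arange(1, d)), 1)     # lowering operator
num = b.conj().T @ b                         # number operator
I3 = np.eye(d)

def duffing(delta, alpha):
    # delta * n + (alpha/2) * n (n - 1): rotating-frame Duffing oscillator
    return delta * num + 0.5 * alpha * num @ (num - I3)

# illustrative parameters in rad/ns (placeholders, not Table II values)
delta0, alpha = 2 * np.pi * 0.1, -2 * np.pi * 0.33
J = 2 * np.pi * 0.004
Omega_u01 = 2 * np.pi * 0.05                 # CR drive strength

H0 = (np.kron(duffing(delta0, alpha), I3)    # control transmon
      + np.kron(I3, duffing(0.0, alpha))     # target transmon (delta1 = 0)
      + J * (np.kron(b.conj().T, b) + np.kron(b, b.conj().T)))
H_cr = 0.5 * Omega_u01 * np.kron(b + b.conj().T, I3)   # constant u01 = 1

T = 100.0                                    # ns
U = expm(-1j * (H0 + H_cr) * T)              # Eq. 11-style constant CR tone
```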
C. Leakage

Although only the first two levels of a transmon are used to represent a qubit, the higher levels are nevertheless still present and can be populated as the system evolves. We capture the most prominent leakage outside the ideal computational qubit subspace by including the second excited state of the anharmonic oscillator, |2⟩ (cf. Fig. 1). The full state space can be decomposed into a direct sum of the computational subspace χ_1 and the leakage subspace χ_2. The projectors onto these subspaces are denoted I_1 and I_2, respectively [36].

Under a unitary quantum channel E(ρ) = UρU†, the state leakage averaged over all initial pure states in the qubit subspace is given by

L = 1 − Tr[ I_1 E(I_1/dim(χ_1)) ].

Here, we have used the fact that the average of |ψ_0⟩⟨ψ_0| results in the maximally mixed state I_1/dim(χ_1). For a system of two transmons, the computational subspace χ_1 is spanned by {|00⟩, |01⟩, |10⟩, |11⟩}, and thus dim(χ_1) = 4. Intuitively, the average leakage L quantifies the population fraction initially prepared in the computational subspace that ultimately ends up outside of this subspace [34]. While a prerequisite for achieving a high-fidelity quantum gate is to minimize leakage at gate completion, conventional anti-leakage schemes typically suppress leakage throughout the entire gate duration. This is due to the difficulty of restoring population to the computational subspace at the end, as well as (historically) disproportionately larger decoherence rates for the higher levels. Nevertheless, high-fidelity control solutions with considerable excursion beyond the computational subspace during the gate do in fact exist and are achievable with RL optimization, as we will demonstrate later; see, e.g., the data reported for the RL protocols in Fig. 7 and the corresponding discussion in Sec. V B.

D. Entanglement

In addition to the fidelity, an important goal of a two-qubit operation is to generate entanglement. Among a number of different options, we select a simple metric called the linear entropy, which quantifies the entanglement of a joint density matrix ρ describing the pure state of qubits A and B as

S_lin(ρ) = 1 − Tr[ (Tr_B ρ)² ],

where Tr_A (Tr_B) denotes partially tracing out qubit A (B). To calculate the linear entropy of an initial state |ψ_0⟩ after a unitary operation U, we simply substitute ρ = U|ψ_0⟩⟨ψ_0|U†. When applied to a multilevel system like transmons, we make sure to normalize the final state after projecting to the qubit subspace.

To assess the entanglement capacity of a quantum gate, we draw inspiration from the widely adopted entangling power of unitary operations [37], defined as the average linear entropy produced by a unitary operator when acting on the space of all two-qubit product states. Since the average is taken over two single-qubit Haar measures instead of a single joint two-qubit Haar measure, it can be computed exactly using the set of tensor products of all six Pauli eigenstates {|0⟩, |1⟩, |±⟩, |±i⟩}. Of the resulting 36 two-qubit product states, 16 are maximally entangled, while 20 remain separable for gates in the class of locally equivalent CNOT operations, including the ZX(π/2) gate. Within the scope of our work, the linear entropy averaged over these 16 initial states is sufficient to capture the entangling power of a unitary operation. We shall therefore define this quantity as the average linear entropy S̄_lin.
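The two diagnostics just defined translate directly into code. The sketch below computes the average leakage of a unitary channel and the linear entropy of a pure two-qubit state; the projector indices for two three-level transmons (|00⟩, |01⟩, |10⟩, |11⟩ ↔ indices 0, 1, 3, 4) assume the 3 ⊗ 3 ordering used in the earlier sketch.

```python
import numpy as np

def average_leakage(U, comp_idx):
    """L = 1 - Tr[I1 U (I1/dim chi1) U^dag]: population that starts uniformly
    in the computational subspace and ends outside it."""
    dim = U.shape[0]
    P = np.zeros((dim, dim))
    P[comp_idx, comp_idx] = 1.0              # projector I1 onto chi1
    rho_out = U @ (P / len(comp_idx)) @ U.conj().T
    return 1.0 - np.trace(P @ rho_out).real

def linear_entropy(psi, dA=2, dB=2):
    """S_lin = 1 - Tr[rho_A^2] for a pure state psi of qubits A and B;
    for transmon states, project/renormalize to the qubit subspace first."""
    psi = psi / np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    rho_A = np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)
    return 1.0 - np.trace(rho_A @ rho_A).real

# for two 3-level transmons: chi1 = {|00>, |01>, |10>, |11>} -> [0, 1, 3, 4]
```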
In the context of driving one qubit at the frequency of another in order to implement an entangling gate, it is common to attribute the entanglement generated entirely to the use of such a cross-resonance drive. However, this might not be the case when an on-resonance drive is used simultaneously with the cross-resonance drive. As we shall see, studying the average linear entropy S̄_lin of optimized pulses implementing two-qubit gates reveals that, in some of the control solutions discovered by RL, the roles of the different drives are not as isolated as one might initially believe; e.g., see Fig. 9, indicating the existence of an entirely new class of solutions.

FIG. 2. Standard error suppression techniques for implementing gates on transmon-qubit devices. The analytical waveforms are discretized at the inverse sampling rate dt = 2/9 ns. (a) R_X(π/2) implemented using the DRAG scheme with an in-phase Gaussian pulse d(t) and its out-of-phase derivative (blue). ZX(π/2) implemented using (b) an echoed pulse and (c) an echo-free/direct pulse, consisting of a main cross-resonance pulse u_01(t) (orange) along with on-resonance drives, d_0(t) and d_1(t), on the control (blue) and target (green and red) qubits.

III. STATE-OF-THE-ART IMPLEMENTATIONS OF TRANSMON GATES

To provide a meaningful point of comparison for our approach, we review the conventional methods for implementing quantum gates on a superconducting platform, and specifically the theoretical foundations underpinning each ansatz. As concrete examples, we showcase the standard error suppression techniques for both the single-qubit R_X(π/2) gate and the two-qubit ZX(π/2) gate in Fig. 2.

The most basic implementation of a single-qubit rotation around the x-axis involves driving the target transmon resonantly with a real-valued pulse envelope d(t), according to Eq. 6. Common choices for the pulse shape include Gaussian and Gaussian Square waveforms, as they offer a smooth ramp-up and ramp-down. The standard error suppression approach utilizes an additional out-of-phase component equal to the derivative of the in-phase part (see Fig. 2a), which has been shown to significantly reduce gate error, including leakage to the second excited level. This is known as the Derivative Removal by Adiabatic Gate (DRAG) scheme [38,39], in which the amplitude of the real Gaussian pulse, the detuning, and the scaling factor of the imaginary derivative component can be optimized. Calibration of these parameters on current superconducting hardware can reliably achieve average gate fidelities above 99.95% [40].

For two-qubit entangling gates such as ZX(π/2), the standard implementation makes use of a cross-resonance pulse u_01 along with resonant drives (d_0, d_1) on the control and target qubits, according to Eq. 10. These components can be combined in an echoed or direct fashion [40]. As illustrated in Fig. 2b, the echoed scheme employs an echo pulse sequence in which the CR pulse is broken into two halves (yellow envelopes), with the second one inverted (Ω → −Ω) and positioned between two π-pulses applied to the control qubit. The amplitude inversion changes the sign of ω_ZX and ω_IX according to the relations in Eq.
(13), while the addition of the two π-pulses can be understood as a conjugation by XI of every term in the effective CR Hamiltonian, leading to the following contribution from the second half:

(XI) H_eff(−Ω) (XI) = Σ_{A,B} [ ω_AB(−Ω)/2 ] (X A X) ⊗ B.

When combined with the first half, this should ideally lead to a complete cancellation of the unwanted IX, ZI, and ZZ terms. Nevertheless, experimental results reveal a significant IY component as well as a small ZY term, which can be attributed to classical crosstalk. This issue can be rectified by applying an on-resonance tone to the target qubit with a waveform identical to that of the CR pulse, known as active cancellation [24]. The direct scheme, on the other hand, employs an echo-free sequence with the same symmetric active-cancellation tone, while introducing an additional asymmetric rotary component. In particular, the symmetric part reduces the effects of the IX and IY terms, whereas the asymmetric part helps offset the ZZ and ZY terms. For both schemes, calibration of the amplitudes and phases of the main CR pulse, in tandem with calibration of the additional tones, achieves between 99% and 99.7% average gate fidelity [24,40].

As seen in the above examples, the standard pulse designs rely heavily on a theoretical understanding of the platform, i.e., on the types of interaction induced when certain control drives are activated or when certain error processes are present. On a real device, however, deviation from the theoretical model is unavoidable, and closed-loop optimization is required to mitigate the unwanted effects. Additionally, the perturbative approach to deriving the effective interaction rates breaks down at high control amplitudes, preventing exploration of potential solutions in the strong-drive, short-time regime.

FIG. 3. Basic reinforcement learning loop. The agent interacts with its environment (different from the conventional definition of an environment in physics) by taking actions and in turn receiving information about the environment's new state. In addition, the agent receives a reward indicating the usefulness of the last action for achieving the given task.

While these theoretical ansätze offer the advantage of straightforward calibration procedures with a minimal number of parameters, they may also impose significant limitations and/or necessitate longer gate durations to compensate for errors not captured by the relevant theoretical model. Moreover, should previously unidentified errors come to light, it will be necessary to develop and implement novel error suppression strategies. Established alternative approaches usually involve gradient-based optimization, such as GRAPE [3], which still requires detailed knowledge of the model and access to the gradient of the loss function. A model-free approach like reinforcement learning is therefore highly desirable, since it offers adaptability to the system dynamics by learning from "direct interactions", which we will define in the next section. Even when equipped with a relatively simple but flexible design space, such as piece-wise constant pulses, RL has the potential to unearth control solutions that are out of reach for conventional methods [10]. Furthermore, RL leaves us with a representation of the knowledge gained from the control problem, i.e., the agent, which can be reused and analyzed for additional insights (cf. Sec. V).
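Schematically, the echoed construction composes four pieces. A minimal sketch, reusing the kind of H0/H_cr builder shown in Sec. II and an assumed control-qubit π-pulse unitary X_pi, could look as follows; this illustrates the sequence structure only, not the calibrated pulses.

```python
import numpy as np
from scipy.linalg import expm

def echoed_cr(H_of_Omega, X_pi, Omega, tau):
    """Compose the echoed sequence: half CR pulse, control pi-pulse,
    sign-inverted half CR pulse, closing pi-pulse (rightmost acts first).
    H_of_Omega builds the constant-drive Hamiltonian (e.g. H0 plus an
    Omega-scaled CR term from the earlier sketch); X_pi is assumed."""
    U_half = lambda sign: expm(-1j * H_of_Omega(sign * Omega) * tau)
    return X_pi @ U_half(-1.0) @ X_pi @ U_half(+1.0)
```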
IV. REINFORCEMENT LEARNING

Reinforcement learning operates on the simple principle of trial and error. A generic problem involves an agent learning to make decisions to complete a task by interacting with an environment. It is therefore natural to formulate a reinforcement learning problem as a finite Markov decision process, in which a decision is made based solely on the current state of the system and not the entire history. We illustrate the basic reinforcement learning loop in Fig. 3. In this framework, at every step i, the agent selects an action a_i based on a probability distribution, or policy, π(a_i|s_i), conditioned on the current state of the environment, s_i. After execution, the agent observes a new state s_{i+1} along with a reward r_{i+1}, which indicates progress toward completing a particular task. The process terminates once the task is completed or the number of steps reaches a set limit, defining the end of an episode. Training the agent then involves running many of these episodes to gather experience, while exploration is encouraged by adding randomness to the action selection procedure - the trial process. At the same time, the policy is iteratively adjusted to maximize the expected cumulative reward E[Σ_i r_{i+1}] at the end of each episode, guiding the agent away from unproductive actions - the error process. Together, trial and error allow the RL agent to explore new actions effectively and eventually arrive at highly rewarded behavior.

An agent trained exclusively on a single environment typically excels only within that specific context, making it less adaptable when confronted with a new environment. To mitigate this limitation, one effective strategy is to expand the agent's training scope to encompass a variety of environments. Moreover, equipping the agent with some context information about its current environment can significantly enhance its learning process and overall decision-making capability. This framework is commonly referred to as reinforcement learning with context [41], and it has been demonstrated to be particularly valuable for tasks that require generalization over a range of environment parameters.

We now adapt the RL framework to designing quantum gates, tasking an agent with building a piece-wise constant (PWC) pulse that realizes a target operation on a transmon environment, as illustrated in Fig. 4. In the following, we detail our simulated environment, motivate our choice of states, actions, and rewards, and describe the selected RL algorithm.

A. Environment

Our environment simulates the dynamics of two transmons according to the Hamiltonian in Eq. 10, treating them as directly coupled anharmonic oscillators that can be controlled via external microwave pulses (cf. the "Environment" box in Fig. 4). The first two levels of the oscillators act as qubits, and the main contribution of leakage out of the qubit subspace is captured by including the third level. The environment is then completely characterized by a set of system parameters, including the detunings {δ_0, δ_1}, anharmonicities {α_0, α_1}, control drive strengths {Ω_d0, Ω_u01, Ω_d1, Ω_u10}, and coupling J, which we collect into a single vector p⃗ = [J, Ω_u01, ...]. We denote the main set of system parameters used in this work as p⃗_0, whose components are summarized in Table II. Any drifts in the system characteristics are considered with respect to p⃗_0 via the relative change Δp⃗/p⃗_0 = (p⃗ − p⃗_0)/p⃗_0, where we use element-wise vector division.
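In code, the interaction pattern of Fig. 3 reduces to a short episode loop. The sketch below is generic: env and agent stand in for the transmon simulator and the DDPG policy described below, and none of the names are taken from the paper's implementation.

```python
def run_episode(env, agent, n_segments, explore=True):
    """One episode of the sparse-reward gate-design MDP: build an N-segment
    pulse step by step and record transitions for the replay buffer."""
    state = env.reset()                       # {psi_j(0)}, A_prev, context
    transitions = []
    for _ in range(n_segments):
        action = agent.act(state, noise=explore)  # windowed amplitude changes
        next_state, reward, done = env.step(action)
        transitions.append((state, action, reward, next_state))
        state = next_state
    return transitions                        # only the final reward is non-zero
```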
Action: The RL agent interacts with the transmon environment by directly modifying the complex-valued control pulses [u_01(t), d_1(t), ...]. With the PWC ansatz, pulse shaping is equivalent to picking an amplitude A at each discrete time step Δt until an N-segment pulse is complete, resulting in a gate duration T = NΔt. To maintain experimental viability and avoid unrealistic oversampling, the time step is chosen such that 1/Δt lies below the sampling rate (and bandwidth) of standard control electronics [42], i.e., Δt > dt = 2/9 ns in this work. From the RL point of view, each complete pulse constitutes an episode, after which the environment is reset so that a new pulse can be tried out. Allowing the pulse amplitude A to take any value at every step tends to result in highly volatile pulses, similar to those typically obtained from unconstrained optimal control using methods like GRAPE [3]. Instead, we aim for slowly varying solutions by defining the agent's action to be the relative amplitude change, restricted to a continuous window [−w, w]. Setting u_{01,i} ≡ u_01(iΔt) and d_{1,i} ≡ d_1(iΔt), we can write the action at step i as

a_i = [ u_{01,i+1} − u_{01,i}, d_{1,i+1} − d_{1,i} ],   (18)

where each component is restricted to a drive-dependent window, |a_u| ≤ w_u and |a_d| ≤ w_d. Hence, the dimension of the action space corresponds to twice the number of available control fields (the real and imaginary parts of each amplitude change), as shown in Fig. 4a. By choosing the windows w_u and w_d to be small, e.g., less than 10% of the maximum allowed range, we systematically restrict the action space, which additionally improves learning. Finally, we clip the resultant amplitudes to [−1, 1] to ensure that the control fields do not exceed the maximum allowed drive strengths.

State: The evolution of the transmon system due to the external control fields and the internal dynamics is characterized by a unitary map U(t, 0) computed from the Hamiltonian in Eq. 10. Given a set of basis states {ψ_j}, the evolution of an arbitrary pure initial state reads

|ψ(t)⟩ = U(t, 0) |ψ(0)⟩ = Σ_j c_j U(t, 0) |ψ_j⟩ = Σ_j c_j |ψ_j(t)⟩.

Thus, tracking the time-evolved unitary map is equivalent to tracking the time-evolved basis states {ψ_j(t)}. As we aim to design target operations between two-level systems, we assume no occupation beyond the qubit subspace initially. That means, for example, that in the single-qubit gate case it is sufficient to track only the computational basis states initially prepared in the qubit subspace, where we have truncated our simulation at three levels.
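A minimal sketch of the action mechanics described above - windowed relative amplitude changes followed by clipping to the maximum drive strength - might look as follows; the window values are hypothetical, chosen only to mirror the w_u = 10 w_d choice discussed later.

```python
import numpy as np

def apply_action(prev_amps, action, window):
    """Windowed relative amplitude update followed by clipping to [-1, 1]
    (Eq. 18 plus the drive-strength clip). Arrays hold [Re, Im] pairs
    for each drive."""
    delta = np.clip(action, -window, window)  # drive-dependent window [-w, w]
    return np.clip(prev_amps + delta, -1.0, 1.0)

window = np.array([0.10, 0.10, 0.01, 0.01])   # hypothetical w_u = 10 * w_d
amps = np.zeros(4)                            # [Re u01, Im u01, Re d1, Im d1]
amps = apply_action(amps, np.array([0.2, -0.05, 0.02, 0.0]), window)
```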
FIG. 4. Reinforcement learning for designing high-fidelity quantum gates. The RL framework involves two main entities: the environment, which is a system of two coupled transmons simulated as anharmonic oscillators truncated at three energy levels, and the RL agent, which uses the DDPG algorithm for learning continuous control drives. We focus on learning two control drives (cross-resonance u_01 and qubit-1 rotation d_1) in the main text, and report additional results for including a third control drive (qubit-0 rotation d_0) in App. E 1. (a) Step 1: Collecting data. At every step, the current state s of the environment is characterized by the time-evolved quantum state of the transmons {ψ_j(t)}, the previous control pulse amplitudes A_prev, and the relative changes in the system parameters Δp⃗/p⃗_0. Based on that state s, the RL agent proposes an action a that determines the control drive amplitudes which evolve the transmon environment forward in time. The environment outputs the next state s′ and a fidelity-based reward r (cf. Eq. 24), and the transition tuple (s, a, r, s′) is stored in an Experience Replay Buffer. An episode is complete when the RL agent has fully constructed an N-segment pulse, and data from many episodes are collected for training. Here we consider a sparse reward scheme, meaning that a non-zero reward is given only at the end of each episode. In addition, during data collection, some noise is injected into the RL agent's actions to encourage exploration of new control solutions (cf. Alg. 1). (b) Step 2: Training. Transition data from the Experience Replay Buffer are randomly sampled for batch-training the two networks of the DDPG algorithm: a value network Q, which learns to accurately predict the expected cumulative reward Q(s, a) of taking an action a from a state s, and a policy network µ, which learns to propose an action a = µ(s) that maximizes this Q-value. Outside of this training process, "RL agent" typically refers to the policy network µ, because it generates all of the agent's actions. (c) Step 3: Testing. Once trained, the RL agent can deterministically construct pulses with fidelity ≳ 99.9%, not only for a fixed environment but also for environments whose parameters have drifted.

Complete knowledge of the evolved basis states {ψ_j(t)} at every step allows the agent to discern the effect of its actions on the environment. Because we restrict the action space to relative changes in the control fields, as in Eq. 18, we also include the pulse amplitudes of the previous time step in the state provided to the agent:

s_i = [ {ψ_j(iΔt)}, A_prev ].

In addition to designing gates for a fixed environment, we also wish to generalize the agent's design capability to environments in which the system parameters have drifted from their original values. While the agent can indirectly discern this change through the evolution of the quantum state, we have observed that furnishing the agent with explicit information about the current system characteristics enhances its learning process. Instead of directly feeding the agent the system parameter vector p⃗, whose entries can take a wide range of values, we provide the same context information via the relative change in the system parameters Δp⃗/p⃗_0, transforming the RL input state into

s_i = [ {ψ_j(iΔt)}, A_prev, Δp⃗/p⃗_0 ],

where p⃗_0 denotes the original values in Table II.

Reward: RL approaches typically utilize a reward provided at every step to incentivize the agent to learn the correct actions. Alternatively, the agent can learn from a single reward granted at the end of each episode. For a fidelity-based objective, and when considering a closed-loop implementation on an actual device, this sparse reward scheme demands fewer measurements during intermediate steps, making it more experimentally favorable. With this in mind, we have opted for the sparse reward scheme and have found it adequate for the agent's learning process. As the fidelity approaches unity, improvements tend to slow down, yielding increasingly marginal gains. To enhance the discernibility of positive signals, we define the reward to be the negative log infidelity at the final time step:

r_N = −log_10(1 − F_avg),   r_{i<N} = 0.   (24)

Here, it is important to note that an improvement of one unit in the reward corresponds to a one-order-of-magnitude enhancement in fidelity, e.g., r: 2 → 3 corresponds to F: 0.99 → 0.999.
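The sparse reward of Eq. 24 is a one-liner; the sketch below just makes the log-scale bookkeeping explicit.

```python
import numpy as np

def sparse_reward(fidelity, is_final_step):
    """Eq. 24: negative log infidelity at the final step, zero otherwise;
    r = 2 corresponds to F = 0.99 and r = 3 to F = 0.999."""
    return -np.log10(1.0 - fidelity) if is_final_step else 0.0
```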
B. Algorithm

In our pursuit of designing quantum gates via pulse shaping, we have established a large design space of PWC functions for the RL agent to explore. We require an algorithm capable of handling continuous-valued actions, both to fully harness the flexibility of this design space in achieving high-fidelity solutions and to effectively utilize the continuous control resources of realistic hardware. Furthermore, given the limited access to near-term quantum devices, an algorithm that uses training data efficiently is highly desirable. We therefore select the Deep Deterministic Policy Gradient (DDPG) algorithm, which satisfies all of these criteria [43].

We begin by laying the groundwork for DDPG, which is rooted in the concept of Q-learning. A Q-value quantifies the expected cumulative reward associated with taking an action from a specific state and subsequently following a particular policy π. In reinforcement learning, the expected cumulative reward is commonly discounted over future time steps in order to incentivize the agent to complete its objective faster and thereby receive a higher reward. Formally, the Q-value of a state-action pair (s_i, a_i) at time step i under a policy π(a|s) is defined as

Q^π(s_i, a_i) = E[ Σ_{k≥0} γ^k r_{i+k+1} ],   (25)

where γ ∈ [0, 1] is the discount factor and the expectation value E is taken over actions selected using the policy π. With the current action a_i already selected, there is only one possible value for the immediate reward r_{i+1} = r_{i+1}(s_i, a_i), allowing us to take it out of the expectation and substitute the Q-value definition for the next state-action pair. The optimal strategy is then to pick the highest-valued action at every step, which leads to the recursion relation for the optimal Q-value Q*,

Q*(s_i, a_i) = r_{i+1} + γ max_a Q*(s_{i+1}, a),   (26)

also known as the Bellman optimality equation [44]. As the dependence on the policy π is removed, the optimal Q-value can be iteratively updated using any transition data tuple (s_i, a_i, s_{i+1}, r_{i+1}), regardless of the collecting policy - a process commonly known as off-policy training. In practice, observed transitions are stored in a replay buffer from which a mixture of new and old transitions is sampled to train the agent. A typical replay buffer stores about half a million transitions, allowing much more efficient re-use of old data compared to other RL methods.

When the numbers of discrete states and discrete actions are not too large, the corresponding Q-values are stored in a finite-size table that can be iteratively updated. As the state space becomes continuous, one instead approximates the optimal Q-value by a deep neural network with parameters ϕ, Q_ϕ ≈ Q*. To adapt this deep Q-learning method to continuous actions, DDPG additionally utilizes a deterministic policy network with parameters θ for action generation, a_i = µ_θ(s_i). During training, a noise process is injected into the agent's actions to encourage exploration, as seen in Fig. 4a. With the main goal of maximizing the expected cumulative reward, we want not only a policy network that generates actions with high Q-values, but also a value network that approximates the Q-values well according to Eq. 26, which leads to the following update rules:

ϕ ← argmin_ϕ [ Q_ϕ(s_i, a_i) − (r_{i+1} + γ Q_{ϕ′}(s_{i+1}, µ_{θ′}(s_{i+1}))) ]²,
θ ← argmax_θ Q_ϕ(s_i, µ_θ(s_i)),   (27)

for each transition data tuple (s_i, a_i, s_{i+1}, r_{i+1}). From these update rules, it is clear that updating one network changes the loss function of the other, creating a moving-target problem.
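For concreteness, one DDPG update step consistent with the update rules above (including the target networks introduced next) might look as follows in PyTorch. The network and optimizer objects are assumed, and terminal-state masking is omitted for brevity.

```python
import torch

def ddpg_update(Q, mu, Q_targ, mu_targ, batch, q_opt, pi_opt,
                gamma=0.99, tau=0.005):
    """One gradient step on a sampled batch (s, a, r, s')."""
    s, a, r, s2 = batch
    with torch.no_grad():                       # Bellman target, target nets
        y = r + gamma * Q_targ(s2, mu_targ(s2))
    q_loss = ((Q(s, a) - y) ** 2).mean()        # value regression, Eq. 27
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()

    pi_loss = -Q(s, mu(s)).mean()               # ascend the learned Q-value
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()

    for net, targ in ((Q, Q_targ), (mu, mu_targ)):   # Polyak (soft) updates
        for p, p_t in zip(net.parameters(), targ.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)
```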
Therefore, when computing the targets on the right-hand sides of Eq. 27, we employ target networks (ϕ′, θ′) that slowly track the learned networks (ϕ, θ), which minimizes the effect of fast-moving targets. The complete algorithm is detailed in App. B.

DDPG is known to struggle when the action space gets too large, which leads to exploding Q-values during training. One remedy is to train two Q-networks and use the smaller of the two values when computing the targets in Eq. 27, mitigating Q-value overestimation (the twin-network trick). Another is to delay the policy network update so that the Q-network converges better in between (the delayed-policy trick). These techniques, combined with standard DDPG, result in an augmented algorithm commonly known as Twin Delayed DDPG (TD3) [45]. Unfortunately, we could not obtain conclusive evidence as to whether TD3 outperforms DDPG in all situations for our problem. Therefore, in the main text we focus on DDPG and delegate the discussion of a case in which TD3 provides an advantage to App. E 1.

V. RESULTS

Here we report our main results of utilizing the DDPG algorithm for continuous control to solve the two-qubit gate design problem for superconducting transmon qubits. To ensure the consistency and reproducibility of the RL approach, we repeat each case multiple times with different seeds. Each seed leads to a different set of random generators used for initializing and optimizing the neural network parameters, sampling experience from the replay buffer, and injecting exploration noise into the agent's actions during training. Unless stated otherwise, the reported results are typical of a handful of realizations. The best cases are discussed here in the main text, while training data over multiple seeds are included in App. C. We summarize the relevant training hyperparameters in Table I and the main set of system parameters of our quantum simulator in Table II.

We juxtapose the RL-designed strategies with the conventional error suppression schemes of Sec. III, for which we employ the Nelder-Mead method to optimize the relevant parameters of each ansatz so as to maximize the average gate fidelity. For the single-qubit DRAG scheme, we simultaneously optimize 2 parameters for d(t): the amplitude of the real Gaussian pulse and the scaling factor of the imaginary derivative pulse. For two-qubit entangling gates, we find that first optimizing the amplitudes of the Gaussian Square pulses for u_01(t) and d_1(t), and then optimizing their phases, yields the best result. In particular, the echoed scheme consists of two tunable Gaussian Square pulses, the cross-resonance pulse (u_01) and the target cancellation tone (the symmetric part of d_1), constituting a 4-parameter optimization problem. Meanwhile, the direct scheme contains an additional target rotary tone (the asymmetric part of d_1), increasing the number of optimizable parameters to 6. These are in contrast with the 18-dimensional (single-qubit gate) and 80-dimensional (two-qubit gate) control problems addressed by the RL approach that we will see shortly.

FIG. 6. Pulses designed by our RL agent appear considerably different from the direct scheme, in both pulse shape and quantum state dynamics. Furthermore, our RL agent manages to shorten the gate duration to 177.8 ns without compromising 99.9% fidelity. RL training hyperparameters are given in the "Fixed Environment" section of Table I.
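The baseline calibration described above is a small derivative-free optimization. A sketch using SciPy's Nelder-Mead, with a hypothetical gate_infidelity wrapper around the pulse simulator, is shown below.

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_ansatz(gate_infidelity, x0):
    """Derivative-free calibration of the few ansatz parameters (amplitudes,
    then phases) by minimizing 1 - F_avg; `gate_infidelity` is a hypothetical
    wrapper around the pulse simulator."""
    res = minimize(gate_infidelity, x0, method="Nelder-Mead",
                   options={"xatol": 1e-4, "fatol": 1e-7, "maxiter": 2000})
    return res.x, 1.0 - res.fun

# direct scheme: x0 would hold, e.g., CR, cancellation, and rotary amplitudes
```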
Our main results can be summarized as follows. First, we demonstrate that an RL agent can be trained via direct interaction with a simulated environment to successfully explore the vast design space of PWC functions (Sec. V A). Although the environment is treated as a black box, the PWC time step is chosen such that 1/Δt lies below the sampling rate (and bandwidth) of standard control electronics [42], to avoid unrealistic oversampling. The discovered strategies are unbiased by prior theoretical knowledge and distinct from the established analytical solutions of Sec. III. Second, we illustrate the benefit of RL optimization with the flexible PWC ansatz in finding high-fidelity control solutions at shorter gate durations (Sec. V B). We then assess the novelty of the roles played by each drive (Sec. V C), followed by the robustness of the optimized pulses to short-timescale stochastic noise (Sec. V D). Finally, we augment our agent to generalize and adapt to drifts in the system characteristics (Sec. V E), making use of the left-over representation of gained knowledge, which is an advantage of RL over conventional control algorithms.

A. Learning without prior knowledge

Single-qubit gate in a two-transmon setting

As a benchmark for our approach, we start by tasking the RL agent to learn the single-qubit π/2 rotation around the x-axis in a two-transmon setting, given by

IX(π/2) = I ⊗ R_X(π/2).

In Fig. 5, we report an RL-designed 9-segment pulse that implements the IX(π/2) gate using a single control drive d_1. The agent learns to construct both the real and imaginary parts of the pulse, thus tackling an 18-dimensional optimization problem. The RL-designed pulse achieves a 10 ns gate duration, which is over 3× faster than the 35.6 ns DRAG pulse. With triple the maximum pulse amplitude, the 10 ns pulse maintains a comparable fidelity of 99.9%, despite having leakage larger by a few orders of magnitude during intermediate steps, as seen in Fig. 5d. The data therefore suggest that the RL agent learns to exploit the presence of the second level to its advantage in reducing the gate duration. Such a speed-up already offers a significant reduction in operating time, as general quantum circuits consist mostly of single-qubit gates.

Two-qubit entangling gates

Applying the same algorithm to 20-segment waveforms, our RL agent successfully learns 248.9 ns pulses that implement the ZX(π/2) and CNOT gates completely from the ground up, achieving fidelity F > 99.9%. The agent constructs complex-valued pulses for a cross-resonance drive u_01 and an on-resonance drive on the target transmon d_1, constituting an optimization problem of dimension 80 (20 segments × 2 drives × 2 real numbers). In fact, our agent also finds equally high-fidelity solutions to an even higher-dimensional optimization problem when given access to three drives (d_0, u_01, d_1). However, these 3-drive control solutions require the ability to send two pulses at different frequencies simultaneously to the same transmon, which, to the best of our knowledge, is not a commonly used technique. We therefore defer the discussion of the 3-drive results to App. E 1 and focus on pulses constructed from only the two drives (u_01, d_1) in our main results section.

In Fig. 6, we present a clear-cut contrast between the state-of-the-art direct scheme and the RL approach for the ZX(π/2) gate. First, the RL pulse envelope goes beyond the Gaussian Square structure of the direct scheme, having a higher maximum amplitude and a more prominent imaginary part, as seen in Fig. 6a-b. In Fig.
6c-d, we illustrate the evolution of the Bloch coordinates of the target qubit initialized in |0⟩, for two cases: when the initial control qubit state is either |0⟩ (blue) or |1⟩ (red). Maximal entanglement is achieved when these two time-evolved target qubit states are exactly opposite on the Bloch sphere. In the direct scheme, the conditional target qubit states start out rotating together around the x-axis at slightly different speeds. They increase their distance after a couple of revolutions and stop at their final destinations at each end of the y-axis. By contrast, the RL scheme appears to bring the states directly to their respective destinations, with minor course corrections in between; this has to do with our choice of allowed action windows w_u and w_d for the u_01 and d_1 drives. We find that setting w_u = 10 w_d yields the best training performance in this case, which inadvertently restricts our agent to solutions with a considerably weaker drive on the target qubit. More interestingly, the evolution roughly splits into two parts (cf. the middle and right columns of Fig. 6c-d): the state conditioned on |1⟩ (red) moves while the state conditioned on |0⟩ (blue) remains approximately stationary in the first half of the pulse sequence, and vice versa in the second. These observations suggest that the strategies learned by our RL agent are fundamentally different from the standard analytical protocols.

B. Achieving shorter gate duration

By extending our gate design study to different gate durations, we find that the RL approach, coupled with the flexibility of the PWC waveform, consistently outperforms the optimized direct and echoed schemes.

FIG. 7. Fidelity and leakage of optimized ZX(π/2) pulses at different gate durations. We compare results for the direct, echoed, and RL schemes. (a) Best fidelity achieved over a dozen runs as a function of gate duration. Short vertical lines mark the approximate entangling times obtained via numerical block-diagonalization for constant pulses with the average amplitudes of the three data points marked by the colored boxes. (b) Maximum population leakage throughout the gate duration for the same set of pulses. For the full evolution of the population leakage throughout the gate duration, see Fig. 15. The increase in maximum population leakage coincides with the decrease in fidelity for the direct and echoed schemes at shorter gate durations. RL-designed pulses, however, maintain 99.9% fidelity down to a 177.8 ns gate duration, potentially making use of large population leakage. RL training hyperparameters are given in the "Fixed Environment" section of Table I.

In Fig. 7a, we observe that the fidelities obtainable using these standard approaches drop below 99.9% when their gate durations approach 213 ns and 320 ns, respectively. Meanwhile, the RL pulse duration can be shortened significantly, down to 177.8 ns, while maintaining the same performance.

We first compare the gate durations of the optimized control solutions with the approximate entangling time τ defined for a constant pulse of amplitude Ω. For the ZX(π/2) gate, we have τ(Ω) = (π/2)/ω_ZX(Ω), where ω_ZX is the effective ZX interaction rate obtained via numerical block-diagonalization of the two-transmon Hamiltonian in Eq. 11 (cf. Ref. [20] for more details). We compute τ(Ω) for the three cases considered in Fig. 6 using their average amplitudes, which are 58 MHz, 65 MHz, and 136 MHz, respectively. These approximate entangling times are displayed as short vertical lines in Fig.
7a and are color-coded to the corresponding points on the graph. On the one hand, we observe that τ practically equals the gate duration for the 248.9 ns direct pulse, which is not surprising, as its Gaussian Square waveform is well approximated by a constant pulse. On the other hand, for the RL pulses, the gate durations and approximate entangling times no longer agree, which can be attributed to their considerably more complicated PWC waveforms. This observation suggests non-trivial dynamics in the control solutions discovered by our RL agent, which we investigate further in the following by examining the amount of population leakage as well as the evolution of the entanglement generated in several initial states and of the rotation angles.

While the target operation involves only the first two levels, these are not isolated from the other excited states, and input control fields inevitably drive some population out of the computational subspace. In Fig. 7b, we show the maximum leakage at different gate durations and observe that an increase in leakage for the direct and echoed pulses coincides with a decrease in fidelity. By contrast, RL-designed pulses manage to preserve their performance despite experiencing large state leakage, which inevitably arises as the agent explores high-amplitude solutions in order to shorten the necessary entangling time. Our results suggest a prominent presence of leakage processes beyond the effective model, and that our RL agent has found a way to make good use of them. Indeed, such behavior is supported by prior research indicating that improved gate speed can be attributed to the more strongly coupled higher energy levels [46].

Within the computational subspace, we first note the deviation from the effective model by looking at the entanglement generated in two initial states, |00⟩ and |10⟩; they are expected to remain separable throughout the evolution under the effective CR Hamiltonian given in Eq. 12. We show the evolution of the linear entropy S_lin of these states in Fig. 8a for both the direct and RL schemes, and observe small but non-zero amounts of entanglement. For the former, we observe more entanglement generated in the |10⟩ state, which can be attributed to the pulse ramp-up and ramp-down not being taken into account in the effective Hamiltonian analysis [20]. For the RL-designed pulses, on the other hand, especially the one with shorter gate duration, both states become considerably more entangled at intermediate time steps, indicating the presence of entangling processes beyond those expected from the effective CR model. This suggests that our RL agent has managed to remove these unwanted entangling processes by gate completion in order to achieve a high final fidelity.

For a more detailed picture of both the single-qubit rotation and the entangling processes, we take a closer look at the strengths of the different interactions in the Pauli basis as a function of time. To do so, we first compute the averaged Hamiltonian by taking the logarithm of the unitary U(t, 0) at time t and projecting it onto the qubit subspace:

t H_avg(t) = i ln U(t, 0),   t H_avg^qubit = Π_qubit [ t H_avg(t) ] Π_qubit.

We expand t H_avg^qubit in the Pauli basis P_i ∈ {I, X, Y, Z},

t H_avg^qubit = Σ_{ij} (θ_ij/2) P_i ⊗ P_j,

where θ_ij defines the rotation angle that depends on the P_i ⊗ P_j interaction strength and the duration t. We can then invert the relation and compute the rotation angle as

θ_ij = (1/2) Tr[ (P_i ⊗ P_j) t H_avg^qubit ].

Computing ln U(t, 0) amounts to choosing an appropriate branch cut to obtain sensible results for the time-dependent rotation angles, a procedure we discuss in App. D 2.
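A minimal sketch of this diagnostic: extract t·H_avg from the projected unitary with a matrix logarithm and read off the Pauli rotation angles. Branch-cut tracking (App. D 2) is omitted, and the normalization convention is the one written above rather than necessarily the paper's.

```python
import numpy as np
from scipy.linalg import logm

PAULI = {"I": np.eye(2, dtype=complex),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def rotation_angles(U_qubit):
    """theta_ij = (1/2) Tr[(P_i x P_j) * i log U], cf. the expansion above;
    branch-cut continuity across time steps is not handled here."""
    G = 1j * logm(U_qubit)                   # t * H_avg^qubit (Hermitian)
    return {a + c: 0.5 * np.trace(np.kron(Pa, Pc) @ G).real
            for a, Pa in PAULI.items() for c, Pc in PAULI.items()}
```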
It is important to note that these branch cuts are chosen to reveal a semi-stable increase in θ_ZX, which, in turn, results in clear discontinuities in the other angles. We can now analyze these rotation angles θ_ij to reveal the arising interactions in greater depth. In Fig. 8b, we show the time evolution of the rotation angles for the XX and YX interactions. Their non-negligible presence, even in the direct scheme, is not expected from the effective CR model given in Eq. 12, which further supports our previous observation of the linear entropy in Fig. 8a. The saw-tooth time evolution of these interactions corresponds to the off-resonance precession of the control qubit, and this pattern differs for all three cases presented. It is worth noting that the short 177.8 ns RL pulse results in the largest precession rate as well as the most prominent XX and YX interactions, which we attribute to its significantly higher drive amplitude, as seen in Fig. 6. We thus confirm that the more complex waveforms of the RL pulses feature an increased presence of entangling processes beyond the ZX term.

In Fig. 8c-d, we also display several other interactions that are expected from the effective CR model, including ZX, IX, IY, and IZ. First, we observe a similar gradual accumulation of θ_ZX across the board. The target qubit rotations, however, look completely different. In the direct scheme, an active cancellation tone is introduced mainly to suppress a large unwanted IX term generated by the bare CR pulse. The inclusion of the asymmetric rotary part provides additional degrees of freedom to suppress more unwanted terms. While having some success, these techniques are limited by the rigidity of the Gaussian Square waveform. This can be seen from the monotonic evolution of the IX, IY, and IZ rotation angles of the direct pulse, as illustrated in the left column of Fig. 8c-d. By contrast, the evolution of these rotation angles for the RL pulses, shown in the middle and right columns of Fig. 8c-d, is non-monotonic and significantly more flexible throughout the gate duration, suggesting a much more powerful error suppression potential arising from the PWC waveform. Indeed, we observe that our RL agent successfully brings all unwanted interactions close to zero at gate completion [cf. Fig. 17 for the remaining terms], even in the case of the significantly higher-amplitude 177.8 ns pulse, where the effective CR model completely breaks down. These findings reveal a considerable deviation of the environment characteristics beyond the perturbative effective dynamical model, and yet our PWC-equipped, model-free RL agent remains unbiased and adjusts accordingly to obtain high-fidelity control solutions.
With the training setting used above, i.e., learning pulses for only two drives (u_01, d_1) via the standard DDPG algorithm, our RL agent only manages to find one 99.9%-fidelity, 177.8 ns solution out of a dozen runs. This is because we need to increase the allowed relative change in pulse amplitudes at each step, effectively broadening the action space, in order to compensate for such a short gate duration. A larger action space tends to result in Q-value overestimation, which ultimately destabilizes the training process. By employing a few additional modifications to the training setting, such as expanding the agent's access to three drives (d_0, u_01, d_1), implementing the TD3 tricks, and training the agent for longer, we observe more stable training and an improved probability of success on some occasions, but not universally. We therefore postpone the discussion of these additional results to App. E 1.

C. Novelty in the roles of drives

We can further highlight the novelty of the control solutions found by our RL agent by analyzing the role played by each drive in implementing a two-qubit operation. We do so by taking each optimized pulse sequence, removing the on-resonance component d_1, and comparing the left-over cross-resonance component u_01 with the original pulse. In the weak-drive regime, we expect the cross-resonance drive u_01 to be the sole entanglement generator and the on-resonance drive d_1 to only affect the local rotation of the target qubit. By examining the changes in the linear entropy and the qubit control fidelity when the on-resonance drive is removed, we confirm that, in the direct scheme, d_1 has little to no effect on the entanglement generated and the motion of the control qubit, as illustrated by the overlapping orange curves in Fig. 9.

FIG. 9. Effect of removing the target qubit drive d_1 from optimized ZX(π/2) pulses. (a) Average linear entropy S̄_lin as defined in Sec. II D. (b) Fidelity of the control qubit averaged over the initial states |00⟩ and |10⟩. The target qubit drive practically only affects the target qubit rotation in the direct scheme. Meanwhile, the RL agent discovers solutions where this on-resonance drive works in tandem with the cross-resonance drive to generate entanglement and rotate the control qubit.

This suggests that optimizing the Gaussian Square pulses in the direct scheme leads to control solutions exhibiting a clear separation of roles: the cross-resonance drive generates almost all the entanglement and ensures that the control qubit ends up in the intended state, while the on-resonance drive focuses on correcting the target qubit state. Interestingly, our RL agent discovers additional strategies in which these roles mix to different degrees, represented by how much the blue curves in Fig. 9 deviate from one another. In these cases, the on-resonance drive d_1 actually works in tandem with the cross-resonance drive u_01 to generate entanglement and rotate the control qubit. Such behavior can be attributed to the high driving amplitudes, which activate interactions beyond the desired ZX term, leading to the observed novelty of the solutions discovered by our RL agent.
D. Robustness of optimized pulses

When implemented on a real device, the performance of optimized control solutions inevitably suffers from a variety of error sources, such as imperfect controls and noisy readouts. We simulate these effects by introducing Gaussian fluctuations of the system parameters p⃗, which allows us to assess the robustness of the control solutions discussed so far. Specifically, at every step of the size of the inverse sampling rate dt = 2/9 ns, fluctuations are sampled from a zero-mean Gaussian distribution with a standard deviation of up to 3% of the original system parameters p⃗_0 listed in Table II.

FIG. 10. Robustness of optimized ZX(π/2) pulses to short-timescale stochastic noise (fidelity F versus Gaussian noise variance in % at rate dt = 2/9 ns, for the 248.9 ns direct, 248.9 ns RL, and 177.8 ns RL pulses). We simulate stochastic noise by adding uncertainty to the system parameters at the inverse sampling rate dt = 2/9 ns during evaluation. RL-designed pulses outperform the direct scheme up to 0.5% noise. For larger noise, the same-duration RL solution behaves similarly to its direct counterpart, whereas the shorter-duration RL solution degrades at a faster rate. RL training hyperparameters are given in the "Fixed Environment" section of Table I.

For each deviation value, we collect 50 samples and report the results for the optimized ZX(π/2) pulses in Fig. 10. RL-designed pulses outperform the direct implementation up to a 1% deviation at the same gate duration (248.9 ns) and up to a 0.5% deviation at the shorter gate duration (177.8 ns). The shorter-duration RL solution degrades quickly as the deviation increases, possibly due to large jumps in its non-smooth PWC waveform. Since the 248.9 ns RL pulse has a much lower amplitude, the jumps in its PWC waveform are not as detrimental. As a result, it shows no discernible difference in performance degradation rate when compared to its smooth Gaussian Square counterpart in the direct scheme. Overall, the mean fidelities drop to around 99-99.5% at a 3% deviation, which translates to about 66 kHz for the coupling and to several MHz for the remaining parameters. These results suggest a trade-off in the RL solutions between fidelity and gate duration in the presence of stochastic noise, which could stem from the non-smooth features of PWC pulses as well as from the proximity to the smallest duration necessary to generate sufficient entanglement.
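The robustness test described above amounts to a small Monte Carlo loop. The sketch below assumes a hypothetical evaluate_fidelity hook that accepts one parameter vector per dt step.

```python
import numpy as np

def robustness_curve(evaluate_fidelity, p0, sigmas, n_steps,
                     n_samples=50, seed=0):
    """Mean/std fidelity under Gaussian parameter fluctuations redrawn at
    every dt = 2/9 ns step; `evaluate_fidelity` is a hypothetical hook."""
    rng = np.random.default_rng(seed)
    curve = []
    for sigma in sigmas:                     # e.g. np.linspace(0, 0.03, 7)
        fids = [evaluate_fidelity(p0 * (1.0 + rng.normal(
                    0.0, sigma, size=(n_steps, p0.size))))
                for _ in range(n_samples)]
        curve.append((np.mean(fids), np.std(fids)))
    return curve
```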
E. Adapting to drifting system characteristics

In addition to Markovian, short-timescale fluctuations, the system characteristics of superconducting transmon qubits are also known to drift over longer times, owing to, e.g., dilution refrigerator temperature or fabrication defects. Therefore, in the following, we explore the idea of generalizing a single RL agent to a range of system parameters so that no additional training is required when re-calibration is needed [47]. We additionally discover that knowledge of the change in system parameters can be utilized to strengthen the generalization capability of the agent. Finally, from this point on, we switch to the objective of learning the CNOT gate to further highlight our approach's applicability to different target gates. Traditionally, a CNOT gate can be achieved by performing a ZX(π/2) gate followed by a target qubit rotation. Since our RL agent learns to perform these gates from scratch, its control solution for the CNOT gate does not have to follow the aforementioned decomposition and can be learned directly, resulting in the pulses reported in Fig. 18.

FIG. 11. Fidelity of fixed-environment control solutions in the presence of system drifts. We sample drifts in system parameters (see legend) of a single type (solid curves) and of all types simultaneously (dashed black curve), and then bin the data points according to the maximum drift. The binned mean fidelities (curves) and their standard deviations (shaded areas) are displayed. (a-b) Fidelity of pulses from the direct implementation and RL optimization, evaluated on new environments. The direct pulse is susceptible mostly only to drift in drive strength, whereas the RL solution is susceptible to drift in all parameters. (c) Fidelity of adaptive pulses found by the RL agent via interaction with each new environment; the solution remains susceptible to drift in detuning and anharmonicity while generalizing well for drift in drive strength. Data represents optimizing a 248.9 ns CNOT pulse, with RL training hyperparameters given in the "Drifting Environment" section of Table I.

Evaluating generalizability

We first discuss our method of evaluating the generalizability of an RL agent on multiple systems whose parameters have drifted from the original values ⃗p0. We study two main situations: when only a single type of parameter has changed (only the coupling, or only the drive strengths), or when all parameters have changed. We gather the relative drifts in system parameters into a context vector ⃗c = ∆⃗p/⃗p0. We then sample these changes randomly, evaluate the fidelity of the tailored control solutions, and bin these data points according to the drift with the largest absolute value. Using a bin size of 0.2%, we collect samples such that the number of data points increases from ∼15 points in the central bins to ∼60 points in the ±7% bins. Finally, we study the binned mean and standard deviation of the fidelity as a function of the maximum drift; a sketch of this binning procedure is given below.

We can now examine the generalizability of the control solutions discussed so far, namely the direct and RL pulses optimized for the original system parameters, as shown in Fig. 11. We focus on a 6% range of maximum drift on each side of the original values. In our simulation, since the coupling value is much smaller, a drift of the same percentage for this parameter has significantly less effect than for the others (green curves).
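The sketch referenced above reproduces the binning of fidelity samples by signed maximum drift (bin size 0.2%, range ±7%); array shapes and names are our own, not the paper's.

```python
import numpy as np

def binned_fidelity(drifts, fidelities, bin_size=0.002, span=0.07):
    """Bin fidelity samples by the drift with the largest absolute value.

    drifts:      (n_samples, n_params) relative drifts, e.g. 0.03 = +3%
    fidelities:  (n_samples,) evaluated gate fidelities
    Returns bin centers with per-bin mean and standard deviation.
    """
    rows = np.arange(len(drifts))
    max_drift = drifts[rows, np.argmax(np.abs(drifts), axis=1)]  # keep the sign
    edges = np.arange(-span, span + bin_size, bin_size)
    idx = np.digitize(max_drift, edges)
    centers, means, stds = [], [], []
    for b in range(1, len(edges)):
        sel = idx == b
        if sel.any():
            centers.append(0.5 * (edges[b - 1] + edges[b]))
            means.append(fidelities[sel].mean())
            stds.append(fidelities[sel].std())
    return np.array(centers), np.array(means), np.array(stds)
```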
While generalizing well for most parameters thanks to its smoothness, the direct pulse in Fig. 11a is highly susceptible to drift in drive strength, as the rotation angle is directly affected. The RL-designed pulse in Fig. 11b, on the other hand, degrades quickly for drifts in all parameters, as expected for a non-smooth PWC waveform. Adaptive control solutions from the RL agent for different system parameters behave more interestingly, as can be seen in Fig. 11c. We find a large susceptibility to changes in detuning and anharmonicity, likely because they directly change the spacing between the transmon energy levels and fundamentally alter the system's internal physics, e.g., the resonant frequencies. Meanwhile, the drive strengths are connected to external controls over which the RL agent has direct influence, resulting in much more robust behavior. Overall, the previous control solutions exhibit poor generalization performance when drift in all system parameters is considered.

FIG. 12. Improved generalization fidelity when using the augmented RL approach. As reference, we import fidelity curves (solid) from an agent trained on a fixed environment from Fig. 11c. We display improved results for training on an environment with drifting system parameters, when the agent has no knowledge (dotted) or full knowledge (dashed) of the drifting parameters, i.e., context information. (a) Drift in detuning frequency only. Training on environments with drifting system parameters is sufficient to improve the fidelity to ≳ 99.9% within a 5% drift. Having context information provides a slight improvement in performance while cutting training time in half. (b) Drift in all parameters. Context is essential to stabilize training, and provides the best generalization result. RL training hyperparameters are given in the "Drifting Environment" section of Table I.

Learning to generalize

We now identify several ingredients necessary for improving the generalization performance of our agent. We start with a simpler problem where we task the agent to adapt to drift in detuning while all other parameters remain fixed. By simply allowing the agent to interact with many systems with different detuning values during training, we observe an immediate and significant performance increase (cf. dotted blue curve in Fig. 12a). We achieve this by sampling the detuning from a uniform distribution spanning a 5% range around its original value at the beginning of each episode. As a result, our agent can observe the changing effects of its actions on drifted systems and thus learn to adapt accordingly. To further help the agent discern different systems more effectively, we can provide it with specific knowledge about the system with which it is currently interacting, namely the size of the detuning drift relative to its original value. We refer to this piece of information as context; this additional input to the agent remains constant within each episode, implying that the system characteristics remain constant throughout the entire gate duration. While having little effect on generalization performance here (dashed blue curve), the training time needed for a context-aware agent is actually cut in half.

We apply our findings to the full problem with drift in all parameters and plot the result in Fig. 12b. Training on a drifting environment remains necessary but is no longer sufficient to achieve good generalization results (dotted black curve). In fact, the best result we report when training without context is obtained in only a few training iterations; after that, the performance drops precipitously (cf. black curve in Fig. 14b).
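Concretely, the agent's observation described in Fig. 4 (evolved states, previous amplitudes, and, for a context-aware agent, the relative drifts) can be assembled as in this sketch of our own design:

```python
import numpy as np

def build_observation(psi, prev_amplitudes, context=None):
    """Assemble the agent's input, mirroring the state described in Fig. 4.

    psi:             evolved state vector(s) of the simulated transmons
    prev_amplitudes: control amplitudes applied at the previous step
    context:         relative drifts (p - p0) / p0, held constant within an
                     episode; pass None for a context-blind agent
    """
    psi_parts = np.concatenate([np.real(psi).ravel(), np.imag(psi).ravel()])
    if context is None:
        context = np.zeros(0)   # design choice: drop the context entries entirely
    return np.concatenate([psi_parts, np.ravel(prev_amplitudes), np.ravel(context)])
```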
We believe that feedback from the quantum state alone is no longer enough for our agent to distinguish different environments. For example, different environments can reach the same state through different sets of actions. Therefore, without additional information, the agent encounters considerable confusion during training.

Providing our agent with context information about the drifts in all parameters greatly alleviates the problem (dashed black curve). Here, instead of sampling from a uniform distribution as in the previous case, we sample drifts for all parameters from a zero-mean Gaussian distribution with a 2% standard deviation, in order to highlight the effect on the generalization results. Indeed, instead of collapsing at 5% drift (blue dashed curve), the fidelity in this case decreases gradually (black dashed curve), since the agent actually gets to interact with system drifts beyond 5% during training under the Gaussian distribution. Overall, extending the training environment to include system drifts, and providing the agent with context information about those drifts, significantly stabilizes the generalization task when all system parameters are involved. More importantly, the resulting agent can immediately propose pulses with 99.9% fidelity at up to 4% drift without any further training. These simulations support the practicality of RL in the presence of reasonable system drift on near-term devices.

When dealing with more substantial drift, we find that fine-tuning is necessary to achieve 99.9% fidelity, although the number of training episodes required can be notably smaller than when starting from scratch. To investigate this phenomenon, we first reiterate that the RL agent used in Fig. 12b has been trained to generalize to system parameters sampled from a Gaussian distribution with a 2% standard deviation around the original values ⃗p0, as in Fig. 13a (grey distribution). We then select a set of drifted system parameters, denoted as ⃗p_drifted [48], with a maximum drift of −5.7%, for which the generalized agent suggests a control solution achieving 95% fidelity. Subsequently, we specialize our agent by training it on a fixed environment with these specific system parameters (blue delta distribution). Comparing this approach to training a separate agent entirely from scratch within the same environment, we observe a 1.3× reduction in the number of training episodes required to reach 99.9% fidelity, as illustrated in Fig. 13b.

FIG. 13. Reduction in training episodes when using a generalized agent for fine-tuning. a) Training distribution of system parameters for three cases: generalizing to a Gaussian distribution P0 of 2% standard deviation (gray) as a starting point (same agent as the dashed black curve in Fig. 12b), then either fine-tuning to a narrower distribution P1 of 0.2% standard deviation (purple), or fine-tuning to a fixed environment, i.e., a delta distribution P2 (blue). P0 is centered at the original system parameters ⃗p0, whereas P1 and P2 are centered at ⃗p_drifted. b) Mean training fidelity as a function of training episodes, comparing fine-tuning (to P1 and P2) with training from scratch (under P2). The shaded region is bounded by the minimum and maximum training fidelity. For each case, we perform 3 runs with different seeds and report the best learning curve. RL training hyperparameters are given in the "Drifting Environment" section of Table I. Fine-tuning to P1 and P2 offers an 8.1× and a 1.3× reduction, respectively, in the episodes required to reach 99.9% mean fidelity, implying great potential for transfer learning.
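The per-episode drift sampling behind the distributions P0, P1, and P2 can be sketched as follows; the function name and the seed are ours.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_episode_params(p0, dist="gaussian", width=0.02):
    """Draw one episode's system parameters and the matching context vector.

    dist="uniform":  drift uniform in [-width, +width] (detuning-only study)
    dist="gaussian": zero-mean Gaussian drift with std = width (all parameters,
                     the P0 distribution of Fig. 13a); a delta distribution P2
                     corresponds to returning fixed drifted values instead.
    """
    if dist == "uniform":
        drift = rng.uniform(-width, width, size=np.shape(p0))
    else:
        drift = width * rng.standard_normal(np.shape(p0))
    return p0 * (1.0 + drift), drift    # drifted params, context c = dp/p0
```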
Intriguingly, when we instead specialize our agent to a drifting environment characterized by a Gaussian distribution with a 0.2% standard deviation and mean at ⃗p_drifted (cf. purple distribution in Fig. 13a), the episode reduction jumps to 8.1×. These preliminary findings hint at the potential for substantial transfer learning in certain cases. However, further investigations are needed to understand the underlying causes of such a wide range of effectiveness, which we plan to pursue in future studies.

VI. CONCLUSION AND OUTLOOK

In this study, we have showcased the advantages of harnessing reinforcement learning for the design of cross-resonance gates, fully independent of known theoretical protocols and pre-existing error suppression techniques. Our unbiased approach employs an off-policy agent to customize continuous control parameters, shaping complex-valued pulses concurrently for both the cross-resonance and the target on-resonance drives. Compared to established optimal control methods, RL has the advantages of i) being suited for closed-loop optimization due to its model-free nature, ii) being capable of nonlocal exploration, and iii) generating a representation of the gained knowledge as a valuable by-product.

Using the RL methodology, we demonstrated the discovery of novel control solutions that fundamentally differ from conventional error suppression techniques for two-qubit gates, such as the direct and echoed schemes, while surpassing them in both fidelity and execution time. At the typical gate duration of 248.9 ns (for transmon devices), where the direct and echoed schemes achieve 99.937% and 99.501% fidelity respectively, RL-designed solutions cut the error roughly in half compared to the better scheme, achieving F_RL ≳ 99.966% for both ZX(π/2) and CNOT gates. Moreover, our agent identified a potential reduction of up to 30% in gate duration while maintaining fidelity exceeding 99.9%. This can be attributed to the flexibility of the piece-wise constant ansatz, capable of managing leakage out of the computational subspace as well as unwanted coherent processes that inevitably arise at large drive amplitudes.

Furthermore, we illustrated the possibility of augmenting our approach to enable our agent to flexibly adapt its design capability to accommodate drifts in the underlying hardware. We found that exposing the agent to an environment with drifting system parameters during training, while providing it with context information about these drifts, allows the agent to learn the appropriate control solutions and generalize well across a range of drifted system parameters. Concretely, our context-aware agent can readily propose control solutions with ∼99.9% fidelity when all system parameters, including detuning, anharmonicity, coupling strength, and drive strength, are allowed to drift within a 4% range around their original values. In instances of more substantial drift, our generalized agent serves as a valuable starting point for fine-tuning, resulting in a remarkable 1.3-8.1× acceleration in optimization iterations compared to starting from scratch.
Based on these findings, we can assert that the RL approach alleviates the necessity for a precise model, presenting a versatile framework applicable to designing various cross-resonance-based gates. When combined with the piece-wise constant protocol space, our RL agent demonstrates its capacity to devise innovative pulse shapes that surpass the capabilities of conventional ansätze in terms of both fidelity and gate execution duration. The quest for shorter, high-fidelity pulses is particularly significant, given that various calibration methods are nearing the coherence limit imposed by state-of-the-art gate durations and qubit relaxation times. Furthermore, our context-aware RL approach effectively addresses hardware drifts, indicating the possibility of reducing and even eliminating additional training, and thus expensive calibration experiments, as long as the system characteristics remain within a reasonable range.

When applied to experiments conducted on real-world hardware, our off-policy method carries the potential for significant data efficiency gains, as the agent can be trained on data collected by any policy. Consequently, while the initial training phase may incur high costs, subsequent retraining can be expedited thanks to the collected dataset. Additionally, the actual drives delivered to the qubits are generally smoothed out from the raw jagged PWC input pulses, which should enhance the robustness of the optimized solutions to control fluctuations.

As the quantum computing community progresses toward larger platforms, the capacity of a single agent to extend its design capabilities across diverse system characteristics becomes increasingly pivotal for scalability [47]. In fact, as the number of qubits grows, it becomes inevitable that certain qubits will exhibit overlapping system characteristics [49]. In such a scenario, our context-aware agent, trained to generalize within a specific region of system parameters, can readily be applied to a group of qubits sharing similar characteristics. Moreover, these experiments can be conducted simultaneously, as qubits with similar parameters are likely to be positioned at a considerable distance from each other in the first place, further enhancing the efficiency of our RL agent.

In the immediate future, we are eager to integrate our approach into established gate optimization procedures for superconducting devices, as well as to extend its utility to various quantum computing platforms. At the same time, we aim to broaden the applicability of our RL agent to handle more intricate operations, such as the SWAP gate, multi-qubit gates, or gates on qudits. With recent advancements enabling better control of the |1⟩ ↔ |2⟩ transition [50,51] and the potential of improving the quantum speed limit by expanding beyond the qubit subspace [52], the synthesis of even faster qubit gates as well as qutrit gates emerges as an intriguing and imminent application for our RL protocol. On the algorithmic front, we emphasize the significance of enhancing generic RL algorithms through generalization and transfer learning techniques to bolster the method's practicability, especially for large-scale platforms. With the field of reinforcement learning, and machine learning in general, growing at an unprecedented rate, we hope to continue leveraging these powerful advancements toward the development of practical quantum computers.
yields a near-optimal solution θ* = (θ*_0, θ*_1) that is sufficiently accurate for the fidelity to reach the threshold we set: indeed, we find that the error in computing the maximum average fidelity using this protocol, as compared to numerically optimizing for both angles simultaneously, remains below 10^−5, which is negligible for the fidelity levels discussed in our work. In our simulation, where we work in the rotating frame of the second transmon, more Z error accumulates on the first transmon. We observe that optimizing for the larger angle yields a more accurate result, whence the order in Eq. A9.

Evolving method

We provide details on our simulator of transmon quantum dynamics, which is unitary in the absence of decoherence processes. Under piece-wise-constant controls, the unitary map naturally simplifies into a product of time-local propagators,

U(T, 0) = ∏_{n=1}^{N} U(n∆t, (n−1)∆t),

where ∆t is the discretized time step and N is the number of segments in the PWC pulse. Each propagator, corresponding to one segment, can be computed by solving the time-dependent Schrödinger equation (TDSE) from t to t + ∆t. For example, in the two-transmon setting we set H(t) = H_2(t) as given in Eq. 10. In general, the unitary is given by the time-ordered integral

U(t + ∆t, t) = T exp( −i ∫_t^{t+∆t} H(t′) dt′ ),

which we numerically obtain using QuTiP's TDSE solver [55]. This is necessary because of the detuning phase factors e^{iδt}, even though the control fields u and d are constant within each segment. For the majority of this work, however, we focus on shaping only two control fields, u01 and d1, and the phase factors drop out in the frame rotating at the target qubit frequency. As a result, the piece-wise Hamiltonian is now constant and can be directly exponentiated to obtain the unitary that solves the time-independent Schrödinger equation (TISE),

U(t + ∆t, t) = exp(−iH ∆t),

providing a significant computational speedup. We verified that the TISE solution converges to the TDSE solution as the step size approaches zero.
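The TISE branch of this evolving method reduces to a time-ordered product of matrix exponentials. Below is a minimal NumPy/SciPy sketch under our own array conventions (the paper's actual implementation uses QuTiP); the drift/control split of the Hamiltonian is an assumption of the sketch.

```python
import numpy as np
from scipy.linalg import expm

def pwc_propagator(h_static, h_controls, amplitudes, dt):
    """Total propagator under piece-wise-constant controls (TISE branch).

    h_static:    (d, d) drift Hamiltonian in the rotating frame
    h_controls:  list of (d, d) control operators, one per drive quadrature
    amplitudes:  (N, n_controls) real PWC amplitudes, one row per segment
    dt:          segment duration
    Valid when the rotating frame removes the explicit time dependence, so
    each segment's Hamiltonian is constant and can be exponentiated directly.
    """
    dim = h_static.shape[0]
    u_total = np.eye(dim, dtype=complex)
    for amps in amplitudes:                        # time-ordered product
        h = h_static + sum(a * hc for a, hc in zip(amps, h_controls))
        u_total = expm(-1j * h * dt) @ u_total     # later segments act on the left
    return u_total
```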
Appendix B: DDPG Algorithm

In this work, we employ Deep Deterministic Policy Gradient (DDPG), an off-policy Q-learning algorithm suitable for continuous action spaces. We summarize the training procedure in Algorithm 1.

Algorithm 1: Deep Deterministic Policy Gradient
Require: initial Q-network and policy parameters ϕ and θ
Require: initial target network parameters ϕ′ and θ′
1: for step = 1, 2, ..., M do
2:   Reset environment to state s0 if at the end of an episode
3:   Sample an exploration noise N
4:   Select action a_i = µ_θ(s_i) + N
5:   Execute a_i; observe reward r_{i+1} and next state s_{i+1}
6:   Store transition (s_i, a_i, s_{i+1}, r_{i+1}) in replay buffer B
7:   if learning has started then
8:     Sample a batch of transitions from B
9:     Compute targets y_i = r_{i+1} + γ Q_{ϕ′}(s_{i+1}, µ_{θ′}(s_{i+1}))
10:    Update the Q-network by minimizing the loss L(ϕ) = (1/N_b) Σ_i [y_i − Q_ϕ(s_i, a_i)]²
11:    Update the policy by maximizing the Q-values J(θ) = (1/N_b) Σ_i Q_ϕ(s_i, µ_θ(s_i))
12:    Update the target networks: ϕ′ ← τϕ + (1 − τ)ϕ′, θ′ ← τθ + (1 − τ)θ′
13:  end if
14: end for

The two neural networks, estimating the optimal Q-value and the agent's deterministic policy, are randomly initialized at the beginning of training. For a continuous action space, exploration is implemented by adding noise N directly to the policy network's output: a_i = µ_θ(s_i) + N. The exploration noise is scaled down over time using the Ornstein-Uhlenbeck process, as implemented in the DDPG paper [43]. Transitions (s_i, a_i, s_{i+1}, r_{i+1}) are collected by following the agent's noisy policy and stored in a large replay buffer B. For the first 10000 steps, the buffer is filled without learning. After that, batches of transitions are used, along with two independent Adam optimizers, to update both networks. To maintain quasi-stable targets throughout training, we soft-update both target networks (ϕ′, θ′) via Polyak averaging; see Eq. A14.

We employ the DDPG (and TD3) implementations from RLlib, an open-source, industry-grade library for RL [54]. We conduct a routine exploration of hyperparameters to identify an effective setting, which we maintain consistently throughout the study. Due to the complex interplay between hyperparameters in high-dimensional analysis, altering one may impact others, making it challenging to provide a comprehensive account. Our focus is on identifying an effective set of hyperparameters and, after that, minimizing additional adjustments to maintain the stability of our approach. The detailed hyperparameters used in this work are summarized in Table I. Any other hyperparameters not mentioned are unchanged from the default settings of RLlib version 2.0.0.

Appendix C: Training procedure

We report learning curves for the RL results discussed in the main text, plotting the mean fidelity of the pulses encountered as a function of training episodes. Fig. 14a shows the learning curves for RL training on a fixed environment for different gate durations. Our DDPG agent consistently finds ≥ 99.9% fidelity control solutions after about 150,000 episodes for gate durations ≥ 248.9 ns, which corresponds to about 18 hours of training. Below this duration, training becomes more challenging as we increase the action windows w_u and w_d to compensate for the shorter physical time. An increase in action space often leads to exploding Q-values, resulting in a lower success rate over multiple runs. It should be pointed out that implementing the TD3 tricks [45] stabilizes training but, in general, slightly degrades the achievable fidelity compared to DDPG.

Fig. 14b-c show learning curves for RL training on an environment with changing system parameters. Unlike the two stable blue curves, the black curve is only stable when context is included, suggesting the importance of context information for adapting RL to the more realistic situation where all system parameters can drift away from their original values. The generalized agent typically converges around 500,000 episodes, which corresponds to about 4 days of training. Each training instance initiates 4 workers for sampling interactions with the simulated environment and 1 worker for agent training, utilizing 5 cores simultaneously. Without any significant difference in runtime, training is done either on a typical laptop (M1 3.2 GHz or Intel i7 2.7 GHz) or on a node within the JUWELS cluster (Intel Xeon Platinum 8168 2.7 GHz).

Appendix D: Dynamics of optimized pulses

1. Leakage for RL pulses

We report the full evolution of the population leakage throughout the gate duration for RL pulses with different gate durations in Fig. 15. We display the high-resolution evolution of the leakage value as well as its moving average. Crosses mark the maximum leakage data points reported in Fig. 7. Larger population leakage is observed for shorter gate durations, which can be attributed to the correspondingly higher drive amplitudes.
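For reference, the leakage traced in Fig. 15 is simply the population outside the computational subspace; a sketch with our own basis-ordering convention (|q0 q1⟩, each level in {0, 1, 2}) follows.

```python
import numpy as np

def population_leakage(psi, levels=3, n_transmons=2):
    """Probability of finding the system outside the computational subspace.

    psi: state vector of n three-level transmons, shape (levels**n_transmons,)
    """
    probs = np.abs(np.asarray(psi).ravel()) ** 2
    leak = 0.0
    for idx, p in enumerate(probs):
        digits = np.base_repr(idx, base=levels).zfill(n_transmons)
        if any(int(d) > 1 for d in digits):   # any transmon in |2> or above
            leak += p
    return leak
```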
2. Rotation angles

Any two-qubit unitary map U_qubit(t, 0) can be expressed in terms of a generating averaged Hamiltonian as

U_qubit(t, 0) = exp(−i H^qubit_avg t) = exp( −(i/2) Σ_{ij} θ_{ij}(t) P_i ⊗ P_j ),

where we have expanded H^qubit_avg in the Pauli basis given by P_i ∈ {I, X, Y, Z}, and the rotation angle in the ij direction depends on the duration t and the P_i ⊗ P_j interaction strength. For example, the cross-resonance gate can be written as ZX(π/2) = exp(−iπZX/4).

For three-level transmons, we first compute the averaged Hamiltonian by taking the logarithm of the unitary U(t, 0), and then project it onto the qubit subspace. This allows us to quantify the strength of the different interactions in the unitary at time t via the rotation angle

θ_{ij}(t) = Tr[ i Π_qubit ln U(t, 0) Π_qubit (P_i ⊗ P_j) ] / 2,   (D2)

where Π_qubit is the projector onto the qubit subspace. Computing the logarithm of a matrix is non-trivial due to the existence of branch cuts, which lead to different θ_{ij} from the same U(t, 0). To see this, we let V be the unitary that diagonalizes H_avg and write

U(t, 0) = exp(−i H_avg t)
        = V diag(e^{−iE_1 t}, e^{−iE_2 t}, ...) V†
        = V diag(e^{−iE_1 t − 2i n_1 π}, e^{−iE_2 t − 2i n_2 π}, ...) V†
        = exp( −i V diag(E_1 t + 2n_1 π, E_2 t + 2n_2 π, ...) V† )
⇒ i ln U(t, 0) = V diag(E_1 t + 2n_1 π, E_2 t + 2n_2 π, ...) V†,   (D3)

where we have taken into account the periodicity of the complex exponential via a list of integers {n_i} corresponding to the eigenvalues {E_i}. Note that a choice of {n_i} specifies a particular branch cut, where the principal branch cut corresponds to {n_i = 0, ∀i}. Since we are most interested in the ZX interaction, we would like to pick a branch cut where θ_ZX behaves nicely and without large jumps. With that goal in mind, we first consider the principal branch for a rough idea of how θ_ZX evolves. After that, we search through {n_i = ±1} at every time step, via brute force, to find the branch cuts that result in a smooth evolution of θ_ZX, as seen in Fig. 16. These branch cuts can then be applied to obtain the evolution of the rotation angles for the other interactions. Even though our approach is not perfect, as can be seen from the outliers in Fig. 8 and Fig. 17, the resultant data points are sufficiently accurate to reflect the main features of each evolution.
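A brute-force version of this branch-cut search could look as follows; the eigendecomposition-based logarithm follows Eq. (D3), while the basis ordering and function layout are our own (the 3^9 shift loop per time step is deliberately unoptimized).

```python
import numpy as np
from itertools import product

PAULIS = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]

# Indices of |00>, |01>, |10>, |11> inside a two-qutrit space ordered |q0 q1>.
QUBIT_IDX = [0, 1, 3, 4]

def smooth_theta(u, i, j, theta_prev):
    """theta_ij of Eq. (D2), with branch integers n_k in {-1, 0, 1} chosen to
    keep the angle continuous with its value theta_prev at the previous step."""
    evals, v = np.linalg.eig(u)
    v_inv = np.linalg.inv(v)
    base = -np.angle(evals)                    # E_k t modulo 2 pi
    p_ij = np.kron(PAULIS[i], PAULIS[j])
    best = None
    for shifts in product((-1, 0, 1), repeat=len(evals)):
        phases = base + 2 * np.pi * np.asarray(shifts)
        h_t = v @ np.diag(phases) @ v_inv      # i ln U for this branch, Eq. (D3)
        h_qubit = h_t[np.ix_(QUBIT_IDX, QUBIT_IDX)]   # project onto qubits
        theta = np.real(np.trace(h_qubit @ p_ij)) / 2
        if best is None or abs(theta - theta_prev) < abs(best - theta_prev):
            best = theta
    return best
```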
…within the same run time. Despite slower training, TD3 on 3 drives exhibits an improved probability of successful runs. These additional findings suggest potential benefits when simultaneous control of all three drives is accessible.

RL optimization with worst-case fidelity reward

Here we summarize our investigation of using the worst-case fidelity as an alternative figure of merit. Let us first discuss the standard approach of estimating the worst-case fidelity over an ensemble of initial states restricted to the qubit subspace. The restriction is valid as we focus on implementing quantum logic operations between two-level systems. Under this assumption, an arbitrary pure initial state can be written in terms of the computational basis as |ψ0⟩ = Σ_i c_i |i⟩, where i ∈ {0, 1} for one qubit and i ∈ {0, 1, 2, 3} for two qubits. The worst-case fidelity of a unitary map U w.r.t. U_target is defined as

F_wc = min_{|ψ0⟩} |⟨ψ0| U_qubit U†_target |ψ0⟩|² = min_{{c_i}} | Σ_{ij} c*_i c_j ⟨i| U_qubit U†_target |j⟩ |²,   (E1)

where U_qubit is the unitary map projected onto the qubit subspace. The complex-valued coefficients {c_i} can be recast into 3 (7) real values for one (two) qubit(s), where we have subtracted a global-phase degree of freedom. Numerical optimization is then carried out via the Sequential Least Squares Programming (SLSQP) method, which we find to be the fastest and most stable of all the methods available in SciPy's library. It should be emphasized that the worst-case fidelity can be estimated by simply evolving a few states initially in the computational basis, suggesting a straightforward implementation on near-term devices.

In fact, the estimation of the worst-case fidelity for a single qubit can be further improved by adopting a density matrix perspective. Working with a three-level system, a general qutrit density matrix is written as

ρ = (1/3) I + (1/√3) r · λ,

where r is an 8-dimensional Bloch vector for the qutrit state and λ is a vector of Gell-Mann matrices [56]. The normalization condition for a pure state implies |r| = 1. Restricting the initial state to the qubit subspace leads to r_i = 0 for i = 4, ..., 7 and r_8 = 1/2, resulting in

ρ(0) = (1/2)(Π_qubit + n · σ),

where we have rescaled the Bloch vector to the 3-dimensional unit sphere via r_i = √3 n_i / 2, so that |n| = 1. The relevant Gell-Mann matrices embed the Pauli matrices {σ_i} in the qubit subspace, λ_i = σ_i ⊕ 0 for i = 1, 2, 3, together with λ_8 = diag(1, 1, −2)/√3. The fidelity of ρ(0) evolved under a unitary U = U(t, 0) w.r.t. the target evolution then takes the quadratic form

F(n) = Tr[ U ρ(0) U† U_target ρ(0) U†_target ] = nᵀ A n + bᵀ n + c,

where we have symmetrized the quadratic term and used nᵀn = |n|² = 1. Evidently, minimizing the fidelity over all possible initial qubit states is equivalent to minimizing a quadratic function over a sphere. This spherically constrained quadratic programming (SCQP) problem can be efficiently solved using the algorithm outlined in Ref. [57]. Indeed, the algorithm requires no initial guess, converges to a single solution within machine precision over multiple runs, and enjoys a ∼10× speed-up compared to the standard SLSQP method.

Finally, we note that a similar analysis for two qubits results in a quadratic programming problem for a 15-dimensional Bloch vector with highly non-trivial constraints beyond the normalization condition [58], rendering the efficient SCQP algorithm inapplicable. Moreover, optimizing for 15 parameters with multiple convoluted constraints turns out to be much harder and less stable than optimizing for 7 parameters as in Eq. E1. Therefore, we deem the reparameterization unnecessary for two qubits and adhere to the standard approach using the SLSQP algorithm.
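A minimal version of the Eq. (E1) minimization with SciPy's SLSQP is sketched below; we normalize the coefficient vector inside the objective instead of carrying the explicit global-phase and normalization reduction described above, which is a simplification of the paper's 3- (7-)parameter setup.

```python
import numpy as np
from scipy.optimize import minimize

def worst_case_fidelity(u_qubit, u_target, n_restarts=10, seed=0):
    """Estimate Eq. (E1) by minimizing over pure initial qubit states."""
    m = u_qubit @ u_target.conj().T
    dim = m.shape[0]
    rng = np.random.default_rng(seed)

    def neg_fidelity(x):
        c = x[:dim] + 1j * x[dim:]
        c = c / np.linalg.norm(c)          # normalization handled in-objective
        return -abs(np.vdot(c, m @ c)) ** 2

    best = 1.0
    for _ in range(n_restarts):            # random restarts against local minima
        res = minimize(neg_fidelity, rng.standard_normal(2 * dim), method="SLSQP")
        best = min(best, -res.fun)
    return best
```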
With the outlined methods, we train our RL agent to learn both single- and two-qubit gates using the worst-case fidelity as the figure of merit, and find similarly high-fidelity control solutions. Due to the additional optimization, training with the worst-case fidelity is slightly slower than training with the average fidelity. Moreover, the uncertainty in its estimation using the SLSQP solver appears to destabilize training occasionally. This issue is fixed for learning a single-qubit gate when a more robust solver like SCQP is employed. Despite having no obvious advantage within this work, the worst-case fidelity remains an interesting alternative figure of merit to be studied in future investigations.

FIG. 4. Reinforcement learning for designing high-fidelity quantum gates. The RL framework involves two main entities: the environment, a system of two coupled transmons simulated as anharmonic oscillators truncated at three energy levels, and the RL agent, which uses the DDPG algorithm for learning continuous control drives. We focus on learning 2 control drives (cross-resonance u01 and qubit 1 rotation d1) in the main text, and report additional results for including a third control drive (qubit 0 rotation d0) in App. E 1. a) Step 1: Collecting data. At every step, the current state s of the environment is characterized by the time-evolved quantum state of the transmons {ψj(t)}, the previous control pulse amplitudes A_prev, and the relative changes in system parameters ∆⃗p/⃗p0. Based on that state s, the RL agent proposes an action a to determine the control drive amplitudes that evolve the transmon environment forward in time. The environment outputs the next state s′ and a fidelity-based reward r (cf. Eq. 24), and the transition tuple (s, a, r, s′) is stored in an Experience Replay Buffer. An episode is complete when the RL agent fully constructs an N-segment pulse, and data from many episodes are collected for training. Here we consider a sparse reward scheme, meaning a non-zero reward is given only at the end of each episode. In addition, during data collection, some noise is injected into the RL agent's action to encourage exploration of new control solutions (cf. Alg. 1). b) Step 2: Training. Transition data from the Experience Replay Buffer are randomly sampled for batch-training the two networks of the DDPG algorithm: a value network Q, which learns to accurately predict the expected cumulative reward Q(s, a) of taking an action a from a state s, and a policy network µ, which learns to propose an action a = µ(s) that maximizes this Q-value. Outside of this training process, "RL agent" typically refers to the policy network µ, because it generates all of the agent's actions. c) Step 3: Testing. Once trained, the RL agent can deterministically construct pulses with fidelity ≳ 99.9%, not only for a fixed environment, but also for environments whose parameters have drifted.

FIG. 5. Optimization for the single-qubit gate X(π/2) in the two-transmon setting. (a) IBM 35.6 ns DRAG pulse. (b) 10 ns RL-optimized pulse, with training hyperparameters given in the "Fixed Environment" section of Table I. (c) Corresponding evolution of the Bloch coordinates of the controlled qubit. (d) Population leakage to |2⟩ for the RL pulse is up to a few orders of magnitude higher than for DRAG during the evolution. The RL-optimized pulse is 3× faster with a similar average gate fidelity above 99.9%, and makes use of the presence of level |2⟩, at the expense of accessing three times larger amplitudes.
FIG. 6. Optimization for the cross-resonance gate ZX(π/2). We display results with fidelity over 99.9% for the direct and RL approaches at gate durations of 248.9 ns and 177.8 ns. (a-b) Optimized pulse envelopes for the cross-resonance drive u01 and the target qubit drive d1. (c-d) Corresponding evolution of the Bloch coordinates of the target qubit when the control state is |0⟩ or |1⟩. Pulses designed by our RL agent appear considerably different from the direct scheme, in both pulse shape and quantum state dynamics. Furthermore, our RL agent manages to shorten the gate duration to 177.8 ns without compromising 99.9% fidelity. RL training hyperparameters are given in the "Fixed Environment" section of Table I.

FIG. 16. Evolution of the ZX rotation angle under different branch cuts. θ_ZX is first computed in the principal branch cut (blue), i.e., without any phase shifts. Then, a phase shift is determined and added at every time step, resulting in the shifted branch cut (orange), which ensures a well-behaved evolution of the rotation angles. Without shifting the branch cut, the observed large jumps obscure meaningful interpretation of the accumulated rotation angles.
FIG. 17. Remaining rotation angles of optimized ZX(π/2) pulses. Complementary to Fig. 8 in the main text, we display the remaining rotation angles, categorized into: a) control qubit rotations, b) small entangling interactions expected from the CR Hamiltonian, and c) small entangling interactions not expected from the CR Hamiltonian. The distinct evolution of the rotation angles implies distinct physical processes in all three control solutions.

FIG. 18. RL-optimized pulses for 3 drives. With the single-qubit drive d0 on the control transmon included, in contrast to the main text, our DDPG RL agent effectively solves a 120-dimensional optimization problem. The control solutions found by our agent retain fidelity above 99.9%. RL training hyperparameters are given in the "3 drives" section of Table I.

FIG. 19. Coupled transmons simulated as Duffing oscillators. The simulation is truncated at three energy levels per transmon (the faded fourth level is shown but not considered) and performed in a rotating frame. The first two levels act as qubits (dashed boxes). External control drives (purple) include on-resonance and cross-resonance complex control fields, denoted by d(t) and u(t) in the main text, respectively. The full Hamiltonian in Eq. 10 is completely characterized by the detuning δj and anharmonicity αj of each transmon, the drive strengths {Ω_d0, Ω_u01, Ω_d1, Ω_u10} of the 4 external controls, and the direct coupling J.

TABLE I. Training hyperparameters for training an RL agent to design quantum gates on the simulated transmon environment. Unmentioned hyperparameters necessary for the DDPG algorithm are set to their default values in RLlib's implementation [54], version 2.0.0. *When studying the adaptability of our RL agent to drifting system characteristics, we discover a particular region around +2% drift on all system parameters where the 20-segment ansatz yields no solution with fidelity better than 99.5%. The problem goes away when we increase the number of segments to 28, whose result was reported in the main text.

FIG. 15. Population leakage throughout the gate duration for RL ZX(π/2) pulses with different gate durations. As expected, shorter RL pulses exhibit larger leakage.
21,778.4
2023-11-07T00:00:00.000
[ "Physics", "Computer Science" ]
High Frequency Solution‐Processed Organic Field‐Effect Transistors with High‐Resolution Printed Short Channels

Organic electronics is an emerging technology that enables the fabrication of devices with low-cost and simple solution-based processes at room temperature. In particular, it is an ideal candidate for the Internet of Things, since devices can be easily integrated into everyday objects, potentially creating a distributed network of wirelessly communicating electronics. Recent efforts have boosted the operational frequency of organic field-effect transistors (OFETs), which is required to achieve efficient wireless communication. However, in the majority of cases, in order to increase the dynamic performance of OFETs, mask-based lithographic techniques are used to reduce critical device dimensions, such as the channel and overlap lengths. This study reports the successful integration of direct-written metal contacts defining a 1.4 µm short channel, printed with an ultra-precise deposition technique (UPD), in fully solution-fabricated n-type OFETs. An average transition frequency as high as 25.5 MHz is achieved at 25 V. This result demonstrates the potential of additive, high-resolution direct-writing techniques for the fabrication of organic electronics operating in the high-frequency regime.

Introduction

[3] The main advantage is the use of low-cost, solution-based, and high-throughput fabrication processes. In particular, digital additive direct-writing approaches offer efficient patterning of different materials on a large variety of substrates without the use of any mask, strongly reducing the cost and time required for pattern changes and reducing the waste of precious materials. However, despite these advantages, current direct-writing techniques such as inkjet and higher-resolution jetting strategies show several limitations, especially where high operational frequencies are required, as for Internet of Things (IoT) applications. This can be understood by considering the expression for the so-called transition frequency (f_T), which is widely adopted to determine the maximum operational frequency of an organic field-effect transistor (OFET), a fundamental building block of any organic electronic circuit and an ideal component for rectifiers in the IoT field. f_T is defined as the frequency at which the small-signal gate and drain currents become equal, and in saturation it can be expressed as follows [4]:

f_T = g_m / (2π C_g) ≈ µ_eff (V_g − V_TH) / (2π L_ch (L_ch + 2 L_ov)),

where g_m is the device transconductance, C_g the total gate capacitance, µ_eff the effective charge carrier mobility, [5] V_TH the threshold voltage, and L_ch and L_ov the channel and overlap lengths, respectively.
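As a quick numerical illustration of this expression (our assumed inputs, not values from ref. [4]: µ_eff ≈ 1 cm² V⁻¹ s⁻¹, a 20 V overdrive, L_ch = 1.4 µm, and L_ov ≈ 3.3 µm, i.e., one electrode width taken as the full gate overlap), the geometry targeted in this work already points to the tens-of-MHz range:

```python
import math

def transition_frequency(mu_eff, v_ov, l_ch, l_ov):
    """f_T = mu_eff * (V_g - V_TH) / (2*pi*L_ch*(L_ch + 2*L_ov)), SI units."""
    return mu_eff * v_ov / (2 * math.pi * l_ch * (l_ch + 2 * l_ov))

# Assumed: mu_eff ~ 1 cm^2/Vs = 1e-4 m^2/Vs, 20 V overdrive,
# L_ch = 1.4 um, L_ov ~ 3.3 um.
print(f"{transition_frequency(1e-4, 20, 1.4e-6, 3.3e-6) / 1e6:.0f} MHz")  # ~28 MHz
```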
Hence, to make faster printed circuits one should employ semiconductors performing well with short channels and downscale the transistor dimensions. The first is made difficult by the contact resistance R_C, responsible for performance degradation, in terms of µ_eff reduction, when the channel length is shortened. [6] The second point is more of a technological limitation: finding a solution-based, additive, direct-written fabrication method characterized by high resolution, able to preserve at the same time a low level of complexity and a high throughput. Inkjet is a widely used direct-writing printing technique and has the advantage of being very versatile; it has already been demonstrated to be scalable at the industrial level. [7,8] However, the achievable resolution with state-of-the-art tools is typically ≈10 μm. Some strategies have been developed to improve the resolution. One way is to locally modify the surface properties of the substrate, such as the ink wettability, creating an alternation of hydrophobic and hydrophilic regions by either a physical or a chemical treatment. [9,10] Along this strategy, self-aligned printing (SAP) exploits the wettability modification of the first printed electrodes, for example using self-assembled monolayers (SAMs), in order to repel the second ones, leaving a small gap with a length in the range of hundreds of nm. [11,12] However, such methods do not improve the resolution of the printed lines, but only their mutual alignment. This does not allow to directly tackle the scaling of the parasitic overlap among electrodes (L_ov). [15,16] However, EHD (electrohydrodynamic) printing presents some constraints: the ink must be conductive, or at least polar, and, very importantly, the method relies on fine control of the electric field distribution over the whole substrate, complicating the design of multi-nozzle systems and the scaling up of the technique. [17] Moreover, the high electric fields used can be deleterious for previously patterned components. [20-22] In this case the waste of materials is higher with respect to inkjet, since it is not a fully additive process, and its scaling up requires a more complex opto-mechanical system. Recently, a novel printing method called Ultra-Precise Deposition (UPD) was introduced for high-resolution patterning; it retains the simplicity and versatility of inkjet printing with an improved spatial resolution, comparable with laser-assisted printing. It allows, without the use of external electric fields as in EHD, high-resolution printing of either conductive or insulating inks on different types of substrates, which may also present high-aspect-ratio topographical features. Mateusz Łysień et al. recently introduced this technique to print silver lines with resolution down to 700 nm with a constant and well-defined line separation, also down to 700 nm. [23] The process can be described as a pressure-assisted inkjet printing of a highly concentrated paste placed inside a nozzle with a fine diameter (between 0.5 and 10 μm). Thanks to the non-Newtonian properties of the ink at the tip of the nozzle, the paste can be ejected in an extremely confined way, with resolution ranging from hundreds of nm up to tens of μm. Typically, the thickness of UPD-printed silver electrodes is around hundreds of nanometers, much higher than what is obtained with inkjet. This feature is advantageous in several cases, as it allows low electrical resistance, but it may represent an issue in the case of OFETs, which are characterized by functional layers in the tens-of-nanometers range. Since such high-aspect-ratio electrodes are not common in OFETs, UPD has not been adopted before to fabricate high-performance, downscaled organic transistors.
In this work, we report fully solution-fabricated, n-type, polymer-based OFETs operating in the high-frequency (HF) range thanks to the adoption of UPD-printed silver electrodes with micron-scale resolution. The frequency performance is enabled by an average electrode line width as low as 3.3 μm, reducing the parasitic overlap length, and by a controlled channel length of 1.4 μm on average, increasing the device transconductance, in combination with a suitable interfacial modification to improve charge injection. Our results show that, despite the unusual electrode thickness (maximum thickness of ≈330 nm), performing OFETs for HF applications can be fabricated with digital high-resolution patterning methods such as UPD. An important aspect in achieving optimized devices with UPD-patterned electrodes is the use of a molecular n-type dopant to retain excellent transport properties in the thin semiconducting layer, with ideal and reproducible device characteristics. An average transition frequency as high as 25.5 MHz was measured, which is the best achievement obtained so far in organic devices with direct-written, printed source and drain electrodes. In perspective, by further reducing the overlap length through scaling of the gate line as well, a transition frequency above 100 MHz appears feasible, opening a path for printing-based, direct-writing schemes for the manufacturing of high-performance OFETs for HF and even ultra-HF applications, relevant for communication purposes in the IoT.

Results and Discussion

We printed interdigitated silver (XTPL Ag Nanopaste CL85) contacts by UPD on glass substrates, to be used as source and drain electrodes in transistors (Figure 1a). In Figure S1 (Supporting Information), an example of an array of interdigitated contacts is shown. For each transistor, five silver electrodes were printed with an average line width of 3.34 ± 0.19 μm, for a total channel width of 2 mm and an average channel length of ≈1.47 ± 0.24 μm. As reported in ref. [23], direct-written silver contacts printed by UPD with a resolution down to 2 μm are characterized by an electrical conductivity of ≈2.5 × 10⁷ S m⁻¹ (40% of bulk Ag). Therefore, the expected electrical resistance of the UPD contacts deposited in this work is ≈74 Ω. The electrodes are characterized by a rounded shape and an average maximum thickness of 330 ± 30 nm. After printing, a short drying of the silver contacts was performed, heating the substrates at 200 °C for 10 min. A more detailed description of the printing method and of the parameters needed to achieve such spatial resolution can be found in a previous work. [23]
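The resistance estimate follows from R = L/(σA); in the sketch below, the conductivity is the quoted value, but the finger length and the fill factor of the rounded cross-section are our own guesses, so the printed number is indicative only and need not match the ≈74 Ω estimate above.

```python
def line_resistance(length, width, thickness, conductivity, fill=1.0):
    """Resistance of a printed line, R = L / (sigma * A).

    `fill` < 1 accounts for the rounded, non-rectangular cross-section.
    All inputs in SI units; returns ohms.
    """
    area = fill * width * thickness            # effective cross-section
    return length / (conductivity * area)

# Illustrative only: 1 mm long finger, 3.34 um wide, 330 nm peak thickness,
# sigma = 2.5e7 S/m, ~60% fill; same order of magnitude as the quoted value.
print(f"{line_resistance(1e-3, 3.34e-6, 3.3e-7, 2.5e7, fill=0.6):.0f} ohm")
```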
To assess the potential use of the UPD-printed silver contacts in OFETs, we fabricated staggered top gate-bottom contact (TGBC) field-effect transistors based on the well-known reference co-polymer poly[N,N′-bis(2-octyldodecyl)-naphthalene-1,4,5,8-bis(dicarboximide)-2,6-diyl]-alt-5,5′-(2,2′-bithiophene) (P(NDI2OD-T2)) as the electron-transporting semiconductor. A sketch of the device architecture is presented in Figure 1b. P(NDI2OD-T2) was deposited from a 7 g l⁻¹ toluene solution by off-centered spin coating, perpendicularly to the channel, in order to achieve uniaxial alignment, which ensures optimal charge transport. [18,20,24] Poly(methyl methacrylate) (PMMA) was used as a solution-processable dielectric, since it is known to form an optimal interface with the chosen semiconductor. [25,26] A gate contact made of poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) was inkjet-printed on top of the dielectric, in full overlap with the source and drain electrodes. The realized device with downscaled channel length is therefore fully solution-fabricated; the process flow is schematized in Figure 1d. [30] P(NDI2OD-T2) was selected as the semiconductor since it is a well-studied material characterized by good electron-transporting properties, with field-effect mobilities exceeding 1 cm² V⁻¹ s⁻¹ when the polymer backbones show directional alignment. [24,31] Moreover, thanks to the non-linear injection properties of P(NDI2OD-T2) with the lateral electric field, the increase in the weight of the contact resistance upon decreasing the channel length, at fixed bias, is milder than expected in TGBC devices. [27,32] However, despite this effect, retaining good electrical characteristics at short channels often requires additional strategies. In our case, to improve the injection properties we exploited the combined effect of a self-assembled monolayer (SAM), to lower the work function of the electrodes, and of mild doping of the semiconductor, to fill deep trap states. In particular, as SAM we used dimethylamino(benzenethiol) (DABT), known to decrease the work function of noble metals like silver. [33] To dope the semiconductor, we used a derivative of 4-(2,3-dihydro-1,3-dimethyl-1H-benzimidazol-2-yl)-N,N-dimethylbenzenamine (DMBI-H), which is a well-known n-type dopant for P(NDI2OD-T2). [34] In Figure 1c, the chemical structures of the molecule adopted for the SAM and of the dopant are depicted, along with the adopted polymer semiconductor. The modified benzimidazoline dopant is 4-(1,3,5,6-tetramethyl-2,3-dihydro-1H-benzoimidazol-2-yl)-N,N-diisopropylaniline (N-DiPrBI-Me2); the introduction of a diisopropyl group on the aniline moiety has already been reported to improve the solid-state solubility of the compound in the host polymer, with respect to the standard DMBI-H, leading to a more efficient intercalation in its semicrystalline structure. [35] The presence of two methyl groups on the benzimidazoline core, on the other hand, aims at tuning the electron-donating properties of the compound, as similarly presented in the work of Fabiano and co-workers. [36] The combination of these chemical modifications of the DMBI-H structure is expected to enhance the doping efficiency. A more in-depth investigation of this novel dopant molecule and of its performance will be addressed in a future publication. Doping was performed by solution mixing at 1 mol% with respect to the monomeric unit of the polymer. In the Supporting Information, the synthetic procedures and the NMR characterization of the dopant (Figures S2 and S3, Supporting Information) are reported.
The cross-sectional profile of the fabricated transistors was obtained by FIB-SEM imaging. A typical device cross-section is shown in Figure 2a, while in Figure S4 (Supporting Information) images of different electrodes on various samples are collected. In Figure 2b,d, magnifications of a single electrode and of the channel region are shown. From Figure 2b it is possible to notice that the electrode thickness varies smoothly along its profile, without any abrupt change, which is beneficial for the continuity of the polymer film between the contact and channel regions. The metal nanoparticles are still visible in the dried electrodes, and their granulometry was quantified by measuring the average diameter of the particles across various cross-sectional images (an example is reported in Figure 2c). A value of ≈40 nm was estimated, close to the diameter of the silver nanoparticles composing the starting silver ink (nominal average diameter ≈50 nm). While the different organic films cannot be distinguished from one another in Figure 2b,d, it is possible to notice a localized thinning of their stack in the contact region with respect to the channel area. The maximum film thinning was quantified as ≈36% on average, by considering the average maximum (h_max) and minimum (h_min) heights of the films, equal to 640 and 410 nm, and an average maximum thickness of the electrodes (t_max) equal to 330 nm.

We characterized the electrical properties of our OFETs by first measuring DC characteristic curves. In Figure 3a, a comparison of the transfer curves of a doped (blue) and an undoped (black) device is shown. Upon doping, it is possible to notice a clear increase in the OFF current of four and three orders of magnitude in the linear and saturation regimes, respectively, due to the increase in bulk conductivity. Also, the ON current at the same gate voltage is increased by more than one order of magnitude. The minimum OFF and ON resistances of the doped transistor are ≈230 kΩ and 2.3 kΩ, respectively, much higher than that of the printed contacts. The threshold voltages in both the linear and saturation regimes were extracted; they are equal to 19 V (V_d = 5 V) and 15 V (V_d = 40 V) for the pristine device and to 4.2 V (V_d = 5 V) and 3.9 V (V_d = 23 V) for the doped device. Using the gradual channel approximation, field-effect mobility curves were computed in both regimes (Figure 3b); a sketch of this extraction is given below. For the pristine device there is a marked gate-voltage dependence of the mobility, with maximum values of ≈0.025 cm² V⁻¹ s⁻¹ at V_d = 5 V and 0.065 cm² V⁻¹ s⁻¹ at V_d = 40 V, respectively. We have not deeply investigated the origin of this non-ideality, but it is a typical fingerprint of the presence of trap states, which can tentatively be ascribed to the marked granularity and roughness of the electrodes, as well as to their thickness. Upon molecular doping, the device characteristics in full accumulation are much more ideal, at the expense of an increased OFF current, which is in any case not detrimental to improving a figure of merit like f_T. The observed effects are compatible with the introduction of excess electrons upon doping, which fill trap states in the semiconductor and contribute to an increased film electrical conductivity in the OFF state. As a consequence, the charge mobility of the doped device is characterized by a markedly reduced bias dependence with respect to the undoped transistor and by an improved reliability factor (r), [5] which goes from 0.7 and 0.6 for the linear and saturation mobility to 0.85 and 0.75 (Figure 3b).
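A compact version of the gradual-channel extraction behind Figure 3b, and of the threshold-voltage fits, is sketched here; the array names and the finite-difference derivative are our choices.

```python
import numpy as np

def mobility_linear(v_g, i_d, v_d, c_diel, w, l_ch):
    """Linear-regime mobility: mu = (L / (W * C_diel * V_d)) * dI_d/dV_g."""
    g_m = np.gradient(i_d, v_g)
    return l_ch * g_m / (w * c_diel * v_d)

def mobility_saturation(v_g, i_d, c_diel, w, l_ch):
    """Saturation mobility: mu = (2L / (W * C_diel)) * (d sqrt(I_d)/dV_g)^2."""
    d_sqrt = np.gradient(np.sqrt(np.abs(i_d)), v_g)
    return 2 * l_ch * d_sqrt**2 / (w * c_diel)

def threshold_voltage_sat(v_g, i_d, v_lo, v_hi):
    """V_TH from a linear fit of sqrt(I_d) vs V_g in saturation."""
    sel = (v_g >= v_lo) & (v_g <= v_hi)
    slope, intercept = np.polyfit(v_g[sel], np.sqrt(np.abs(i_d[sel])), 1)
    return -intercept / slope           # x-axis intercept of the fitted line
```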
In this case, mobilities as high as 0.4 cm² V⁻¹ s⁻¹ (V_d = 5 V) and 1.4 cm² V⁻¹ s⁻¹ (V_d = 23 V) are obtained. These values are comparable to what was already obtained in optimized short-channel P(NDI2OD-T2) OFETs with fs-laser direct-written contacts treated with the same self-assembled monolayer used in this work. [18] In Figure S5 (Supporting Information), the output characteristic curves of the doped device are shown. It is possible to observe that there is no clear saturation of the drain current with the drain voltage, probably due to a short-channel effect. [37] Furthermore, it can be seen that no S-shape at low drain voltages is present, which is an additional indication of good electron injection, in accordance with what was deduced before from the mobility curves.

In Figure 4a, the average transfer curve and its standard deviation, obtained by measuring ten doped transistors, are presented, indicating good reproducibility. In Figure 4b, the corresponding average mobility curves in both the linear and saturation regimes are reported, with mean mobility values of ≈0.35 and 1 cm² V⁻¹ s⁻¹, respectively, in the V_g range between 12 and 23 V. Looking at Figure S6a,b (Supporting Information), it is possible to appreciate the good linear behavior of I_d and √I_d with gate voltage, an indication of good device ideality. By performing a linear fit in the same voltage range used for the mobility extraction, it is possible to derive also the average threshold voltages, equal to 2.3 and 2.7 V in the linear and saturation regimes, respectively. The average quasi-static width-normalized DC transconductance g_m/W, an important parameter for predicting the transition frequency of the devices, reaches 0.5 mS cm⁻¹ at V_d = V_g = 23 V. Furthermore, to preliminarily assess the intrinsic shelf-life stability of the devices, they were kept in a nitrogen glovebox after the first measurement and then re-measured after 2 weeks, without any sign of degradation (Figure S7, Supporting Information).

To characterize the semiconductor film on both the contacts and the channel region, and to understand the relation between the electrical and morphological properties, we used Atomic Force Microscopy (AFM) on samples prepared following the same procedure as for the transistors, but without any dielectric or gate electrode. Such characterization is relevant, as the high electrode thickness may impair the formation of an optimal film for charge transport. As a reference, an AFM image of the bare direct-written contacts is shown in Figure 5a, characterized by a root-mean-square (RMS) roughness of 10 ± 0.5 nm (see details in Figure S8, Supporting Information). Figure 5b shows the AFM topography of the coated polymer films on top of the contacts across the channel area: it is possible to notice that between the electrodes the material assumes a highly oriented fibrillar structure, aligned perpendicularly to the channel. The latter is typical of P(NDI2OD-T2) when deposited using directional solution-shearing techniques, such as off-centered spin coating and bar coating, from solvents like toluene or mesitylene, which allow strong pre-aggregation in solution. [24,38] Because of the high contrast between the electrode thickness and any feature characterizing the semiconductor morphology, we performed a magnification inside the channel region to clearly distinguish the film morphology. The polymer film topography is very similar to what was previously observed in the literature for oriented P(NDI2OD-T2), and it is characterized by a surface RMS roughness of 1.8 ± 0.4 nm. [24,31]
This proves the successful polymer alignment between the two rather thick direct-written electrodes and explains the good transport properties observed in the fabricated transistors. Figure 5c reports a magnification of the contact region coated with the P(NDI2OD-T2) film, with a surface RMS roughness of 20 ± 0.5 nm. The contacts appear only partially covered by the polymer, likely reflecting a different surface interaction. The topography of the polymer is drastically different from that in the channel, presenting apparently interconnected ribbon-like features. The non-ideal surface coverage and the different topography could be the reason for the inefficient carrier injection from the contacts observed in the pristine OFETs, which we solved by molecular doping of the semiconductor.

To finally assess the dynamic behavior of our devices, we measured the transition frequency fT. Since the undoped transistors suffer from severe contact limitations, only devices with doped P(NDI2OD-T2) were considered for dynamic measurements. We adopted the measurement set-up introduced by Perinot et al., which allows the channel transconductance gm and the gate-to-source and gate-to-drain capacitances (Cgs and Cgd) to be measured separately. [19] The fT value is then identified as the crossing point between the total gate admittance, which depends on the total gate capacitance Cg, and gm. In the frequency range from 1 kHz to 10 MHz, we estimated a mean width-normalized transconductance of 0.53 mS cm⁻¹, averaged over ten devices at Vd = Vg = 25 V, and of 0.61 mS cm⁻¹ for the best-performing transistor, in line with the previous estimations based on quasi-static transfer curves (Figure 6a). Average values of Cgd, Cgs, and Cg equal to 0.16, 0.45, and 0.63 pF were then obtained by performing a fit in the frequency range between 100 kHz and 10 MHz (Figure 6b). The gate-to-source capacitance is higher because the source is made of three fingers in the interdigitated structure while the drain is made of two, and because the measurement is performed in saturation, with the source end of the channel in charge accumulation and the drain end in depletion. As an approximation, considering a simple parallel-plate model, it is possible to estimate theoretically what the total gate capacitance should be, knowing the geometrical overlap (Aov) and channel (Ach) areas and the dielectric capacitance per unit area Cdiel. We estimated that Cg should nominally fall within a range from 0.54 to 0.76 pF, as experimentally observed. Details about the estimation are reported in the Supporting Information. With this prediction for the capacitances and the quasi-static gm, Equation (1) yields a theoretical transition frequency between 22 and 31 MHz. Figure 6c,d shows the experimental extraction: an average fT value as high as 25.5 MHz (over ten transistors), and of 28 MHz for the best device, is achieved at 25 V, in agreement with the previous estimations. This is the highest transition frequency obtained so far for organic transistors with printed source and drain electrodes.
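A minimal numerical rendering of this extraction, assuming fT = gm/(2πCg) as the crossing condition between the gate admittance and the transconductance. The gm and Cg values below are the averages reported in the text; the parallel-plate helper is one plausible reading of the Supporting Information estimate (with Cdiel, Aov and Ach as stand-in symbols), not its exact procedure.

```python
import math

def f_t(gm, c_g):
    """Transition frequency from the crossing of the total gate admittance
    2*pi*f*C_g with the transconductance: f_T = gm / (2*pi*C_g)."""
    return gm / (2.0 * math.pi * c_g)

def cg_parallel_plate(c_diel, a_ov, a_ch):
    """Rough parallel-plate bounds on the total gate capacitance: the lower
    bound counts only the source/drain overlap area, the upper bound also
    counts the (accumulated) channel area. Assumed form, for illustration."""
    return c_diel * a_ov, c_diel * (a_ov + a_ch)

gm = 0.53e-3 * 0.2   # gm = (gm/W)*W with gm/W = 0.53 mS/cm and W = 0.2 cm [S]
c_g = 0.63e-12       # measured average total gate capacitance [F]
print(f_t(gm, c_g) / 1e6)  # ~26.8 MHz, consistent with the 25.5 MHz average
```

Plugging the measured capacitance bounds (0.54 and 0.76 pF) into f_t reproduces the 22-31 MHz theoretical window quoted above.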
Conclusion

In this work, the possibility of using high-resolution direct-written metallic contacts deposited by an Ultra-Precision Deposition method for the development of fully solution-fabricated organic transistors is demonstrated. In particular, by downscaling the channel length to 1.4 μm and the width of the silver source and drain contacts to 3.3 μm in TGBC staggered n-type OFETs based on the model polymer P(NDI2OD-T2), it was possible to achieve an average transition frequency fT of 25.5 MHz at 25 V, the highest fT achieved with OFETs where the critical feature is defined by a printing method. A key aspect for the achievement of such AC performance is the combination of a SAM treatment of the silver contacts, a mainstream approach to fine-tuning charge injection, with molecular doping of the semiconductor by means of a benzimidazoline derivative. We highlighted that the printed contacts, characterized by a rounded profile, a maximum height (>300 nm) much larger than the semiconductor film thickness (40 nm), and a high surface RMS roughness of 10 ± 0.5 nm, do not preclude the formation of an optimal film microstructure in the channel area. Yet, they do not allow ideal coverage of the contacts by the semiconductor, producing non-idealities in the electrical characteristics of OFETs based on the pristine polymer. Such non-idealities disappear upon molecular doping, likely owing to the filling of deep trap states, allowing the achievement of reproducible devices with excellent performance.

Our results qualify a fully solution-based process for the realization of OFETs in which the downscaling of the channel is achieved with a printing, direct-written technique, for applications in the HF bandwidth. Moreover, they pave the way to even higher operational frequencies, which could be achieved by downscaling the gate line on the one side and by increasing the effective mobility with higher-performing semiconductors on the other. Back-of-the-envelope estimations suggest that the UHF band could be reached in the future with direct-written OFETs, making them candidates for cost-effective and sustainable IoT.

UPD Printing of High-Resolution Electrodes: The glass substrates used (low-alkali 1737F Corning glasses, purchased from Apex Optical Services) were cut into pieces of 1 cm × 1 cm and then cleaned by sonication in acetone and isopropyl alcohol (both purchased from Sigma-Aldrich) for 10 and 5 min, respectively. The printing of high-resolution electrodes using the UPD method was performed on the XTPL Delta printing system. Silver contacts were printed using the XTPL Ag Nanopaste CL85; the properties of the adopted Ag ink were reported in ref. [23]. In order to obtain silver contacts as source and drain electrodes, the process parameters had to be optimized: the nozzle diameter, printing pressure, and printing velocity had a crucial influence on the whole process. The outer diameter of the nozzle was 3.5 μm. In addition, appropriate humidity (40-60%) and temperature (21-24 °C) had to be kept constant during fabrication. Sintering of the printed lines was performed in air on a hotplate (CAT H 17.5D).
Transistor Fabrication: To fabricate our n-type OFETs, the glass substrates with the previously printed silver electrodes were cleaned with isopropyl alcohol (Sigma-Aldrich), and then a plasma treatment in argon (100 W for 1 min) was performed. The substrates were then immediately immersed for 25 min in a 0.0014 v/v solution of dimethylamino(benzenethiol) (TCI Chemicals) in IPA. P(NDI2OD-T2) (Polyera Corporation ActivInk N2200) was dissolved in toluene at a concentration of 7 g L⁻¹ by stirring for 3 h. For the doped devices, an N-DiPrBI-Me2 solution was prepared in a glovebox at a concentration of 1 g L⁻¹ and left stirring for 1 h; a given amount was then added to the P(NDI2OD-T2) solution to obtain 1 mol% with respect to the monomeric unit of the polymer. The semiconductor solution was deposited by off-centered spin coating in a glovebox and immediately put on a hot plate for 30 min at 120 °C for the pristine devices or 180 °C for the doped ones. After that, PMMA (Sigma-Aldrich, average MW ≈ 120 000) was dissolved in n-butyl acetate (Sigma-Aldrich) and deposited by centered spin coating, followed by annealing at 80 °C for 10 min. PEDOT:PSS (Clevios PJ700 formulation, purchased from Heraeus) gate electrodes were inkjet-printed by means of a Fujifilm Dimatix DMP2831. At the end, the devices were annealed in a glovebox for 6 h at 110 °C.

AFM Characterization: The surface topography of the semiconductor films and printed electrodes was characterized with a Keysight 5600LS Atomic Force Microscope operating in tapping mode. Gwyddion software was used for image processing and for the calculation of surface roughness. In particular, to evaluate the RMS roughness of the electrodes, the image was planarized by removing a polynomial background associated with the curvature of the contact (Figure S5, Supporting Information).

Cross-Section FIB-SEM Imaging: FIB-milled cross-sections and SEM imaging of the devices were obtained with a Dual Beam FIB/SEM Helios NanoLab 600i (FEI) equipped with a field-emission source. The sample surface was prepared by depositing a thin Au film (t ≈ 20 nm) by means of a Q150R sputter coater (Quorum Technologies). FIB milling and polishing of the cross-sections were performed with a Ga+ source at 30 kV acceleration voltage and 120 pA current. SEM images of the cross-sections were obtained at a sample tilt angle of 52° and an accelerating voltage of 5 kV. The thicknesses of the device layers and the electrode dimensions were evaluated on the cross-section images using ImageJ version 1.53a. Statistics were performed on 51 images from seven different device cross-sections. The granulometry of the Ag nanoparticles composing the electrodes was quantified by measuring the average diameter of the particles on three high-magnification cross-section images (23 sample measures for each image).

Static and Dynamic Electrical Characterization: OFET transfer and output characteristic curves were measured with a semiconductor parameter analyzer (Agilent B1500A) inside a glovebox with a Wentworth Laboratories probe station. The dynamic characterization of the OFETs was carried out in the glovebox using a custom setup that includes an Agilent ENA Vector Network Analyzer and an Agilent B2912A Source Meter; for more information, see ref. [19].
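To make the roughness evaluation concrete, here is a small sketch of an RMS roughness computed after polynomial background removal on a synthetic height map. The row-wise fit is a simplification of Gwyddion's planarization step, and all map values are synthetic, not measured data.

```python
import numpy as np

def rms_roughness(z, order=2):
    """RMS roughness of a height map after subtracting a polynomial
    background (fitted row by row here, for simplicity), mimicking the
    planarization performed before roughness evaluation."""
    x = np.arange(z.shape[1])
    flat = np.empty_like(z, dtype=float)
    for i, row in enumerate(z):
        coeffs = np.polyfit(x, row, order)     # per-row polynomial fit
        flat[i] = row - np.polyval(coeffs, x)  # remove contact curvature
    return float(np.sqrt(np.mean(flat ** 2)))

# Synthetic 256x256 height map [nm]: parabolic contact curvature + texture
rng = np.random.default_rng(0)
y, x = np.mgrid[0:256, 0:256]
z = 2e-3 * (x - 128.0) ** 2 + rng.normal(0.0, 10.0, (256, 256))
print(rms_roughness(z))  # ~10 nm once the curvature is removed
```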
Figure 1. a) Optical images of the bottom source and drain electrodes realized by UPD printing, b) sketch of the device architecture, c) chemical structures of the materials employed, and d) scheme of the process flow.

Figure 2. a) Typical cross-section FIB-SEM image of a device and b) magnification of a single electrode region. c) Detail of a silver electrode highlighting its granulometry. d) Channel region between two direct-written contacts. All images were acquired at a sample tilt angle of 52°.

Figure 3. a) Transfer characteristics in both the linear and saturation regimes for a pristine (black curve) and a doped (blue curve) OFET; the two devices have the same channel length and width (1.7 μm and 2 mm, respectively) but different dielectric thicknesses: 350 nm for the pristine and 550 nm for the doped transistor. The difference is due to the different stages of optimization of the devices; the thicker dielectric in the doped case ensures a high yield while reducing the coupling capacitance by ≈1.6 times. b) Field-effect mobility curves computed from the transfer characteristics using the gradual channel approximation.

Figure 4. a) Transfer characteristics and b) field-effect mobility curves averaged over ten devices in the linear (Vd = 5 V) and saturation (Vd = 23 V) regimes, with standard deviations.

Figure 5. a) AFM image of a bare direct-written Ag contact printed with the UPD technique, b) AFM image of a P(NDI2OD-T2) film deposited on the printed electrodes, with a magnification of the channel region, and c) AFM image of the contact region of a P(NDI2OD-T2) film. The scale bar is 1 μm.

Figure 6. a) Average channel transconductance in the saturation regime (Vd = Vg = 25 V), b) average total source-to-gate and drain-to-gate admittances in the saturation regime; c,d) estimation of the average (over ten devices) and maximum transition frequencies at 25 V.
Understanding how access shapes the transformation of ecosystem services to human well-being with an example from Costa Rica

Abstract

Increasingly, ecosystem services have been applied to guide poverty alleviation and sustainable development in resource-dependent communities. Yet, questions of access, which are paramount in determining benefits from the production of ecosystem services, remain theoretically underdeveloped. That is, ecosystem assessments typically have paid little attention to identifying real or hypothetical beneficiaries and the mechanisms by which benefits may be realized. This limits their ability to guide policy and interventions at the local scale. Through a qualitative mixed-methods approach, this article analyzes how access to different aspects of the production of provisioning services is negotiated in Bribri communities (Costa Rica) of small-scale plantain farmers with alternative modes of agricultural production. The analysis considers access to land, labour, knowledge, tools, markets, and credit. Our analysis reveals how institutions of access are organized differently in traditional vs. conventional systems of agriculture and how these shape power dynamics and pathways to well-being. We conclude that understanding the institutions regulating access to ecosystem services provides more useful insights for poverty alleviation than approaches that assume homogeneous access to benefits.

Introduction

The concept of ecosystem services is increasingly used to understand and improve human well-being in resource-dependent communities. However, in communities where livelihoods are closely tied to a dominant provisioning ecosystem service - such as those communities that produce plantain, soya, or fish as commodities - there appears to be a disconnect: despite high yields of provisioning services, many communities face significant socio-economic challenges (e.g. Béné, 2003). The disparity between producing ecosystem services and attaining a good standard of living opens a fundamental question about how residents in resource-dependent communities access the benefits of ecosystem services, which are often assumed to flow automatically (Nahlik et al., 2012). In this article, we take a closer look at the question of access in resource communities in Costa Rica, paying particular attention to how local beneficiaries experience and negotiate the relationship between ecosystem services and human well-being.

Large-scale, aggregate assessments of ecosystem services are limited in their ability to guide policy and interventions at the local scale, especially in development settings where there exist complex linkages between poverty, vulnerability and ecosystems. The assumption that increases in the output of ecosystem services correspond to increases in human well-being simply does not hold, at least not in any straightforward manner. Indeed, the conclusions of the MA (2005) indicate that human well-being increased globally at the same time as the majority of ecosystem services around the world declined (Raudsepp-Hearne et al., 2010), and it is possible to have the supply of ecosystem services increase at the expense of the well-being of particular groups in society.
Indeed, several critiques have called for a more nuanced understanding of the complexities associated with the distribution of benefits and the role of trade-offs (see Daw et al., 2011, 2015; Fisher et al., 2013; Bennett et al., 2015; Felipe-Lucia et al., 2015; van Hecken et al., 2015; Berbés-Blázquez et al., 2016; Wieland et al., 2016). Conceptualizations of ecosystem services commonly describe them as contributing to well-being without constituting benefits by themselves (Daily, 1997; de Groot et al., 2002). Many of these conceptualizations depict ecosystem services as cascades to emphasize the intermediary steps between an ecosystem's biophysical processes and the eventual improvement in human well-being (e.g. de Groot et al., 2010; Potschin and Haines-Young, 2011). Each step in the cascade represents a social-ecological transformation, which grants an opportunity to examine additional factors mediating the relationship between ecosystems and well-being. Among the many factors that play a role in shaping the successive transformations, access has a crucial impact on how stakeholders experience ecosystem services (Daw et al., 2011; Hicks and Cinner, 2014). In this paper, we focus on the institutions that regulate access to ecosystem services as key elements in the process of actualizing human well-being in resource-dependent communities. Our analysis compares the organization of access in indigenous Bribri communities that produce plantain in Costa Rica. Farmers in the Bribri Territory provide an interesting case study because two agricultural systems of small-scale farming co-exist, one based on traditional practices, the other based on conventional practices. While both systems produce the same provisioning ecosystem service, i.e. plantains, the institutional organization of access differs, thus creating an opportunity for comparison. The rest of the paper is organized as follows: In the next section, we introduce a framework for analyzing access applicable to ecosystem services; we then describe our methods and the research site in the Bribri Territory before launching into the details of how access is organized, using the small-scale production of plantain as an example of a provisioning ecosystem service; we finish with a discussion of the implications of access for our understanding of ecosystem services and the further applicability of this framework.

Access

This paper focuses on the institutions that mediate access in the production of ecosystem services. We define institutions broadly as the regularized patterns of behaviour among groups and individuals in society (after Leach et al., 1999), and access as all the possible mechanisms by which a person is able to benefit from things (after Ribot and Peluso, 2003). The aforementioned ecosystem service cascades generally distinguish between the following transformations (Fig. 1): 1) An initial transformation of the biophysical flows of an ecosystem into an ecosystem service. This transformation is mediated by a system of production that includes capital, technology packages and labour. Similar provisioning ecosystem services can be obtained following different production processes; for instance, a crop can be industrially farmed using agrochemicals and migrant labourers, or it can be farmed organically in smallholdings by members of a single household. 2) In the second transformation, the ecosystem service becomes a benefit to someone. A single ecosystem service can produce an array of benefits for different stakeholders.
For instance, those who consume the crop will gain nutrition, while those who sell the crop will obtain an economic gain. 3) A third transformation concerns how human well-being is impacted by a given ecosystem benefit depending on the personal circumstances of individuals or groups. Factors such as health, gender, or culture shape how people enjoy ecosystem services; e.g. flour may provide nutrients to one person and cause an allergic reaction in another. This is very much in line with understandings of well-being based on Sen's capabilities approach (Sen, 1988, 1999). Sen's approach to development deemphasized utilitarian ideas of well-being and highlighted instead the diversity of contexts and human experiences (Forsyth, 2015). Thus, Sen posited that improving well-being depended on removing the obstacles that stood in the way of expanding people's freedom to achieve what they value being or doing (Deneulin and Shahani, 2009).

Access barriers may exist at each step of the ecosystem service cascade, as illustrated in Fig. 2. Although this is generally acknowledged, there is a dearth of empirical work and a lack of methodologies applicable to characterizing access. Notable exceptions are Daw et al. (2011) and Hicks and Cinner (2014), who conducted analyses in which they disaggregated the recipients of ecosystem benefits living in coastal communities in the Indian Ocean into distinct stakeholder groups. Thus, empirical research to date has focused mainly on the last step of the cascade (step 3 in Fig. 2) to understand how the positionality of different actors shapes the way in which they experience ecosystem services. Our article is complementary to previous efforts to characterize access, but it focuses on access with regard to the system of production applied to obtaining an ecosystem service and the corresponding socio-economic organization that oversees the distribution of the benefits and impacts of that service. Our analysis draws on Ribot (1998) and Ribot and Peluso (2003), whose analytical framework suggests thinking of access as a bundle made up of interwoven strands that together create the 'web of benefits' experienced by an individual or a group at a given time. Some of the strands that are essential for ecosystem services include having access to: land, tools and technology, capital and credit, markets, knowledge and information, and labour opportunities. We use these categories to guide the analysis of the Bribri agricultural social-ecological system. Our work considers the mechanisms used by farmers to gain, maintain and control access to these aspects of production, where maintaining access refers to the efforts dedicated to keeping a particular benefit; gaining access refers to the initial process by which access is established; and controlling access refers to the ability to regulate other people's access. The analysis of access is presented as a comparison between the traditional and the conventional farming systems that co-exist in the Bribri Territory.

Methods

A number of methods can be used to obtain information that answers the following questions with respect to the systems for ecosystem service production and distribution: 1. Who controls access to the land (or sea, in the case of fisheries)? 2. Who controls access to the knowledge and information required to produce an ecosystem service (e.g. best cropping practices or reliable weather information)? 3. Who controls access to the tools and the technology associated with the production of ecosystem services (e.g.
agrochemicals, seeds or farming utensils)? 4. Who controls access to markets to commercialize provisioning ecosystem services (e.g. local farmers' markets, co-operatives, or food retailers)? 5. Who is able to labour, or has access to labourers, to work in the production of ecosystem services? Without being prescriptive, adequate methods to answer the above questions may include participant and/or field observations (Mason, 2009; DeWalt and DeWalt, 2011), qualitative interviews (Kvale, 1996; Kvale and Brinkmann, 2009), surveys, reviews of secondary sources, ethnographic approaches (Atkinson et al., 2001) and mixed-methods approaches. Our analysis is supported by a review of secondary sources from the peer-reviewed literature combined with field observations. Field observations encompass a variety of techniques with the idea of immersing oneself in a research setting to experience and observe first-hand a range of dimensions pertinent to that setting (Mason, 2009). Observable events include people's daily routines, interactions, relationships, norms, spatial arrangements and so on. Field observations deliver nuanced and complex data that are difficult or impossible to capture otherwise (Mason, 2009). They also allow researchers to draw on their own lived experience of the place while being aware of their position as outsiders to the community. In this case, field observations were conducted through several short stays (1-2 weeks at a time) in the communities of Suretka, Shiroles and Amubrë between June and November 2012. Typical community spaces visited during these stays included grocery stores, family restaurants, agricultural farms, cooperatives, sports events, and people's homes. Notes and personal reflections were recorded daily while in the field, and insights from these inform the analysis below.

Research site: The Bribri Indigenous Territory

Our analysis of access considered communities in the Bribri Indigenous Territory, situated in Talamanca county on the South-Atlantic coast of Costa Rica, one of the poorest regions of the country. The Bribri Indigenous Territory was recognized in 1977 as a result of the passing of the Costa Rican Indigenous Law. The territory spans 437 km² and has a population of approximately 8500 residents. The Bribri Indigenous Territory is governed by a local Indigenous government known as the Indigenous Bribri Association for the Integrated Development of Talamanca (ADITIBRI, Spanish acronym). Our study focuses on the communities of Lower Talamanca, primarily Suretka, Shiroles and Amubrë, which encompass the areas below 500 meters above sea level (Fig. 3). Talamanca county supplies over half of the plantain production of Costa Rica (Municipality of Talamanca, 2003). Plantains from the Talamanca region are sold to both national and international markets, although we focus here solely on the former. To sell nationally, Bribri farmers bring their harvest to a sales point in Suretka for middlemen to purchase and transport to the central depot in the capital city of San Jose (5-6 hours away by road), where large- and medium-size food retailers purchase the fruit. Middlemen are outsiders to the territory and are generally non-Indigenous. There is a mix of traditional and more intensive forms of agriculture in Talamanca.
For the purposes of this paper, we follow the definitions of FAO (2009), where the term "conventional agriculture" refers to agriculture characterized by monocultures, mechanization and the use of agrochemicals, and the term "traditional agriculture" refers to indigenous forms of farming, usually diversified agricultural systems that rely on local knowledge and non-synthetic inputs. Two examples of what these agricultural systems look like in the Bribri Indigenous Territory are shown in Fig. 4.

Land

Access to land in the Bribri Territory is regulated by Costa Rican Indigenous Law, which states that Indigenous reserves are "non-transferable and exclusive for the indigenous communities living on them". Furthermore, all of the land of the territory is registered under the name of the local Indigenous government, ADITIBRI, which subsequently grants residents access to plots of land. Decisions about land use are taken at the household level. Initial access to the land is therefore gained either through matrilineal inheritance or through purchase between Indigenous residents. In terms of maintaining access to land, residents are protected by Costa Rican Indigenous Law, which stipulates that Indigenous people can negotiate and transfer land only among themselves. Yet, approximately 35 percent of the land in the Bribri Territory is currently in non-indigenous hands, particularly around the more accessible communities of Shiroles and Suretka (Guevara-Viquez, 2011). Rapid population growth in Talamanca presents an additional challenge to maintaining access to land. In some cases, growth has reduced the size of family plots to the point where subsistence and commercial agriculture have become unviable.

Tools and technology

The upkeep of plantains requires cleaning debris at the base, deleafing, removing suckers and rotted stems, weeding and pest control (Robinson and Saúco, 2010). In the traditional system, plantain maintenance is conducted using manual labour and simple tools, such as machetes. Pests are controlled by managing shade to prevent their proliferation, which means pruning and interspersing tree species of varying heights (Polidoro et al., 2008). Some farmers use vegetable-based insecticides made out of sandbox tree sap. By and large, all of these activities involve simple, affordable tools that farmers are able to purchase and know how to use. Hence, it is relatively easy for farmers to gain, control and maintain access to the tools and technologies needed in traditional agriculture. Conventional plantain farming involves additional tools to deal with pests that proliferate in monocrops. The cost of an agrochemical often determines the extent to which it is used. For instance, fungicides used to combat black Sigatoka are expensive; hence, they might be applied sparingly. Chlorpyrifos-coated bags are used by 98% of conventional farmers (Polidoro et al., 2008). Chlorpyrifos protects the fruit against thrips but, more importantly, the bag keeps the peel of the plantain looking lighter and free of black spots. While this is purely aesthetic, middlemen pay more for plantains that come in the bags (sometimes twice as much), giving farmers a strong incentive to switch to conventional practices. At the same time, the cost of additional fertilizers and agrochemicals is an entry barrier for many indigenous farmers.
While this presents a barrier to obtaining initial access to the tools and the technology necessary for conventional agriculture, middlemen are eager to finance the switch in exchange for the farmer's harvest (more on this later).

Knowledge and information

Traditional agriculture is based on knowledge that is codified within a set of cultural practices that regulates which plants and animals can be harvested and when (Rojas-Conejo, 2002; García-Serrano and Del Monte, 2004). Some of this information is common enough that a person living in the territory might encounter it throughout their upbringing. More sophisticated information might be the purview of certain individuals, such as elders (awapas) and healers (sikuas), or of specific clans that are the keepers of certain knowledge. Gaining and maintaining access to this information involves partaking in cultural rituals and fulfilling prescribed roles. However, since colonization, external pressures have undermined Bribri cultural beliefs significantly. The push towards economic integration and the change in lifestyle is evident across generational lines. This means that traditional knowledge could eventually become inaccessible to future generations. Conventional plantain agriculture requires knowledge and information on agrochemical pest control. Indigenous farmers are generally not well versed in the use of agrochemicals (Polidoro et al., 2008; Barraza et al., 2011) unless they have had previous experience working in large-scale plantations outside of the territory. Hence, most conventional Bribri farmers rely on middlemen to access information about agricultural practices. However, the advice dispensed by middlemen is based on commercial requirements for the sale of the plantains rather than on best farming practices. For instance, the insistence on the use of chlorpyrifos-coated bags responds not so much to the need for pest control as to keeping the plantain peels looking lighter and more appealing to consumers.

Markets

Given their remoteness, Bribri producers who want to sell in the national market have little choice but to sell through the middlemen. Therefore, middlemen control access to the market by controlling the transportation route. Farmers who sell to middlemen have an incentive to switch to conventional agricultural practices because conventional plantain sells at a higher price. Having few buyers relative to the number of sellers gives middlemen an undue advantage in determining the sale price. Yet, bypassing the middlemen to gain access to the national market is next to impossible for indigenous producers because middlemen and retailers in the central depot in San Jose maintain a tight system of reciprocal loyalties. What maintains these loyalties is a combination of long-standing relations, the possibility of monitoring each other (i.e. a retailer would know if a middleman sold the plantain to someone else and vice versa), and likely some degree of prejudice against indigenous peoples (Christian, 2013). In an attempt to gain access to alternative markets, and to better prices, Bribri farmers began organizing into cooperatives in the 1990s. Cooperatives were set up by producers who practiced traditional agriculture to sell bananas, plantains and cocoa destined for international markets and retailers of organic and fair-trade products. Access to the international market depends on the viability of the cooperatives themselves.
Whereas some cooperatives have been operating for years, many have dissolved, in part due to predatory pricing, as middlemen increase their purchase price temporarily to draw farmers away from the cooperative.

Capital and credit

Residents of the Bribri Territory cannot access credit from Costa Rican financial institutions because they are unable to provide a land title as collateral. This is a consequence of the status of the region as an Indigenous reservation. Given this situation, middlemen act as a source of informal credit to finance the switch to conventional agriculture in exchange for the farmers' harvest (Whelan, 2005). There are no official records on the amount of informal lending or interest rates, but Dahlquist et al. (2007) determined that 26% of the households in the territory received credit from plantain middlemen. This percentage increased to 53% for households in the more accessible parts of the territory, such as the communities of Suretka and Shiroles. It is unclear whether other informal credit systems, such as loans from family, peer-to-peer lending or micro-credit schemes, exist in the territory.

Labour

Plantain farms are family-run operations where over 90% of the producers rely on the work of their household members to tend to the production (Orozco et al., 2008). The remaining 10% of producers hire day-workers, especially for tasks that are physically demanding or unpleasant, such as spraying pesticides (Orozco et al., 2008). While, in theory, the majority of farmers can easily gain and control access to labour, in practice it can be difficult to secure reliable labour. Day-workers tend to be the young and landless who, for the most part, are not interested in pursuing the farming lifestyle and would prefer to work outside of the territory, especially in the ballooning ecotourism industry. Even when youth are interested in agriculture, producers indicated that state institutions, such as the National Child Welfare Board, may interfere with their ability to involve their own children in agricultural activities. In-kind exchanges are part of the traditional practices but are becoming less common. These practices include the mano vuelta (literally 'returned hand'), where two people agree to help one another, and chichadas, where a large job is done collectively and everyone gets invited to drink chicha (an alcoholic drink) afterwards. These transactions involve no exchange of money and are less structured than a formal job; e.g. a person might come to help one day, but the next day go to help someone else.

Summary of access

The organization of access in the conventional and the traditional systems of agriculture in the Bribri Territory differs with respect to the number of actors who control the access of others (Table 1). Actors who have the ability to control aspects of provisioning ecosystem service production may be thought of as gatekeepers. We note that in the conventional system middlemen are the main gatekeepers regulating access to most aspects of plantain production. The two exceptions are access to land, which is defined by Costa Rican law and locally managed by the local indigenous government, and decisions about hiring labour, which are relegated to the household sphere. By contrast, in the traditional system there is a wider group of actors that behave as gatekeepers controlling the different aspects of ecosystem service production.
Discussion

Using the Bribri case study, we note that although both traditional and conventional farmers produce plantains, each method of production is mediated by alternative institutional arrangements with respect to its organization of access. We highlight several important ways in which the role of access is revealed and the implications that this has for ecosystem management and the improvement of human well-being in resource-dependent communities.

First, the analysis of access shows how power is distributed among stakeholders. In the Bribri case, we note that the conventional and the traditional systems of plantain agriculture distribute power and influence differently among local actors. In the conventional system, middlemen become key gatekeepers by virtue of controlling access to most aspects of plantain production. Consequently, with the advancement of conventional agriculture in the territory, middlemen gain importance because they are able to determine not only pricing but also the technology that farmers should use; they become information sources for farming practices, and they provide credit when needed. Hence, conventional agriculture tends to concentrate power in fewer hands, whereas in traditional agriculture power is shared among a wider group of gatekeepers. Beyond concentrating power, the advancement of conventional agriculture in the Bribri Territory has the effect of extending non-indigenous influences. Gatekeepers in the traditional system, such as elders or the indigenous government, come from the local context and therefore have a common history and cultural background. This contrasts with gatekeepers in the conventional system, who are outsiders to the community and predominantly non-indigenous. Given the extent to which people's identities in farming communities are tied to their agricultural practices, a switch in land management practices needs to be understood as having the potential to reshape the identity of individuals in the community. Thus, the current expansion of conventional agricultural practices may be linked to the erosion of indigenous institutions as non-indigenous actors become key gatekeepers.

Second, when ecosystem services are used in the context of poverty alleviation, a black-box characterization of well-being is insufficient for guiding meaningful interventions and could even result in perverse outcomes. While our proposed analysis focuses on barriers at the level of production and distribution, these barriers interact with the personal circumstances of actors (see Fig. 2). Better characterizations of well-being should provide a sense of the contextual details and personal circumstances of actors that influence their ability to benefit from ecosystem services. For instance, the two farming approaches correspond to two forms of approaching and fulfilling well-being needs. The switch to the conventional model is clearly motivated by the higher price that farmers can obtain for the sale of conventional plantains. Hence, one can assume that conventional farmers place a greater emphasis on securing the material dimensions needed for a good life. Moreover, the switch furthers the integration of the territory into the market economy, which is a desire often expressed by younger residents.
On the other hand, traditional farmers see plantain agriculture as satisfying their economic needs to a degree, but their agricultural practices also contribute to other dimensions of their well-being, namely, satisfying their desire to live according to their traditions.

Table 1. Dimensions of access in plantain production and corresponding gatekeepers in the traditional and conventional systems of plantain agriculture in the Bribri Territory. (Columns: Dimension; Agricultural system; How is access gained?; How is access maintained?; How is access controlled?)

Understanding the logic under which ecosystem users operate is crucial in guiding local-level interventions aimed at alleviating poverty and reducing vulnerability. An approach to sustainable development predicated solely on improving yields of provisioning ecosystem services may not result in livelihood improvements for all local residents. That is, interventions geared at increasing plantain yields through intensification will benefit those farmers who value material wealth over other dimensions of well-being, but they can be counterproductive for those farmers who place a higher value on tradition. The two pathways to building well-being need not be mutually exclusive; e.g. an intervention to obtain better prices for organic plantains can boost material standing while fostering cultural practices. The point remains that a careful consideration of implicit and explicit trade-offs is required to assess the differential impacts of policies on groups of local actors.

Third, and related to the previous point, an analysis of access offers an entry point to understanding path dependency in the institutional arrangements that shape the future choices that ecosystem users will face. This is because either agricultural system will set off feedback loops that determine the range of future options that will become available to the communities. For example, we established that farmers who value material welfare are likely to move towards conventional agriculture; this means that, over time, the assets of conventional farmers will come to reflect this choice. That is, the farmer will develop loyalty to certain middlemen who trade in conventional plantain, these farmers will rely more heavily on stores to purchase food staples, and within a few generations conventional farmers may lack the know-how to farm traditionally. The same is true for traditional farmers, who respond to an alternative set of incentives. An important consequence is that, while switching back and forth between the two forms of agriculture is possible, it becomes more difficult the more established a system becomes. Therefore, the switch in agricultural systems should be interpreted as a deeper change in the possibilities and opportunities that will shape future well-being in the communities.

Fourth, the analysis of access highlights the degree to which ecosystem services are co-produced. In other words, ecosystem services are the product of natural and social processes, yet the importance of the human labour provided by farmers, fishers, or ecotourism operators often goes unaccounted for (Kosoy and Corbera, 2010; Gómez-Baggethun and Ruiz-Pérez, 2011; Palomo et al., 2016; Berbés-Blázquez et al., 2016). This omission is important because labour relations shape the dynamics within a social group, especially in resource-dependent communities where productive activities are deeply intertwined with identity and well-being (Bernstein, 2010).
Our analysis shows that different institutional arrangements in relation to access can give rise to forms of unequal exchange and exploitation. For example, the ability of middlemen to control access to the national plantain market gives them an opportunity to use predatory pricing practices to their advantage. Hence, when assessing provisioning ecosystem services, it is crucial to consider the form of production alongside the yields. In thinking of the applicability of this type of analysis, the case of plantain farmers in the Bribri Territory contains insights transferable to communities that share similar characteristics, that is: First, resource-dependent communities where livelihoods depend on the production of provisioning ecosystem services, particularly when there are intermediaries that control access to markets (e.g. Daw et al., 2011; Wamukota et al., 2015; Hicks and Cinner, 2014), or regulating services, such as communities trying to benefit from payments for ecosystem services schemes (e.g. Corbera and Brown, 2010; Pascual et al., 2014). Second, populations where there are cultural differences and a history of colonialism or marginalization, such as Indigenous peoples in many parts of the world (Ramirez-Gomez et al., 2015). Third, communities where there are significant power differentials among stakeholders (e.g. Felipe-Lucia et al., 2015). Fourth, regions undergoing economic and ecological transitions where the likelihood of trade-offs among stakeholders is high (e.g. McShane et al., 2011; Hartel et al., 2014; Daw et al., 2015).

Conclusion

Equating ecosystem service production with benefits is true in general, but not very useful in specific cases and places. An analysis of access demonstrates how ecosystem service assessments based on agricultural yields or land use provide important, yet incomplete, information for understanding prosperity in resource-dependent communities. We have argued for and presented an analytical approach that can unravel the social-ecological interactions that shape ecosystem use and benefits. In the Costa Rican communities that we studied, the switch from traditional to conventional agriculture concentrated power in the hands of middlemen who, for the most part, were outsiders to the community. Far from uniformly improving human well-being, the production of ecosystem services seemed to boost certain dimensions of well-being, such as economic gain, while undermining others, such as agency or identity. Therefore, our research suggests that an understanding of access to ecosystem services in resource-dependent communities can highlight trade-offs between the different dimensions of human well-being. This article invites reflection on our understanding and depiction of human well-being as applied to ecosystem services, highlighting the importance of institutions that regulate access. By using the theory of access developed by Ribot and Peluso (2003), this paper presented a framework for the study of ecosystem services that can reveal inequality and power dynamics. We believe that our analytical approach could be applied and extended to understand how institutions shape the co-construction of ecosystem services and their benefits in many places, and such analyses could inform development and management practices that fulfill the promise of ecosystem services by broadly enhancing human well-being while sustaining the capacity of ecosystems to continue to support humanity.
Insight on the Reduction of Copper Content in Slags Produced from the Ausmelt Converting Process

The reduction of copper content in converting slag through process control is significant for copper smelters. In this study, the slags produced from the Ausmelt Converting Process for copper matte were analyzed using X-ray diffraction and chemical analysis. Thermodynamic calculations were performed, and the effects of various conditions, including the lance submergence depth in the molten bath, the molten bath temperature, the addition of copper matte, and the airflow rate, were investigated to lower the copper content in the slag. The thermodynamic analysis indicates that the decrease in copper content is achieved by reducing Fe3O4, CuFe2O4 and Cu2O in the slag, decreasing the magnetism of the slag and lowering its viscosity, which is feasible at the operating temperature of the molten bath. Experiments show that the optimal combination of operating conditions is an addition of copper matte between 5000 and 7000 kg/h, a lance airflow rate of 13000-14000 Nm³/h and a lance submergence depth into the molten bath of 700-900 mm, with which the copper content in the slag can be effectively reduced from 22.74 wt.% to 7.70 wt.%. This study provides theoretical support and technical guidance for promoting the utilization of slags from the Ausmelt Converting Process.

Introduction

More than 80% of copper (Cu) is produced by pyrometallurgical processing worldwide [1-5], with the main converting unit processes including flash converting, Peirce-Smith (PS) converting, bottom blowing, top blowing and other improved converting processes. The copper content in the slag is typically less than 5 wt.% in the operation of PS converters and other improved furnaces [5-7], while it is higher than 16 wt.% Cu in some converting processes [8-10]. Numerous studies on reducing the copper content in converting slags have been carried out [11-13], mainly focused on increasing the feed matte grade, process control system improvements of the converting process, converting slag reduction, converting slag beneficiation and copper recovery. Increasing the matte grade [14,15] will only reduce the output rate of slag; without substantive improvement of the process control system for the blowing process, the copper content in the slag will not be reduced effectively. PS converter slag was re-reduced in a modified PS converter by Yunnan Copper Corporation, but the effect of the reduction is not significant because the copper content in PS converter slag is only about 3 wt.% prior to the re-reduction operation [16,17]. Copper was recovered in the form of a copper concentrate by the copper slag flotation process used at most PS converter operating sites in China. This method has many advantages when the copper content in the slag is low, but it is not economically sound when the copper content in the slag is more than 16 wt.%, owing to the low recovery efficiency of copper into the copper concentrate from the slag. Currently, studies on the reduction of copper content in converting slag using process control technology are rarely reported. (Foundation item: Project 51734006, supported by the National Natural Science Foundation of China; project supported by the Academician Free Exploration Fund of Yunnan Province.)
Additionally, the toxicity and environmental issues of solid wastes, including copper slags, have received wide attention in recent years [18]. To efficiently remove heavy metals, including Cu2+, from wastewater and soil, adsorption has been used. However, purification methods such as adsorption [19,20] and bioremediation [21-23] usually face high costs, and valuable metal resources are wasted if high copper-bearing slag is discarded in the dump. As an improved converting unit process, the Ausmelt Top Submerged Lancing (TSL) matte converting technique was adopted at the Copper branch of Yunnan Tin Company Limited (YTCL). During the converting operation, limestone and quartz fluxes are fed into the furnace, with oxygen-enriched air and pulverized coal injected into the molten bath via the Ausmelt lance. The resulting SO2 in the flue gas is transported to the sulfuric acid production system to produce concentrated sulfuric acid, whilst the dust containing valuable metals in the off-gas is collected by the ESP system [24]. The matte converting process is operated in a batch mode, with the Converting 1 stage used for slag making/converting and the Converting 2 stage for the final production of blister copper. During Converting 1, solid copper matte is continuously fed to the Ausmelt furnace at about 60 tonnes per hour with a feeding time of 4 hours. Slag is tapped from the furnace twice during this stage while matte feeding continues; the first slag tap occurs after 100 tonnes of matte have been fed, and the second after 180 tonnes. The bath temperature is maintained at 1543 to 1593 K. In the Converting 2 stage, matte feeding is stopped and the operating parameters are adjusted for converting the bath to produce blister copper. At the completion of this stage, the Converting 2 slag is tapped out of the Ausmelt furnace; blister copper is then tapped at the end of slag tapping and is sent to the anode furnace for copper refining via the blister launder. Herein, to gain a deeper understanding of this technical aspect, a study of the reduction of copper content in the slag produced from the Ausmelt TSL top-blown converting process of matte was carried out. The slags produced from the Ausmelt Converting Process for copper matte were analyzed using X-ray diffraction (XRD) and chemical analysis. Thermodynamic calculations were performed, and the effects of various conditions, including the lance submergence depth in the molten bath, the molten bath temperature, the addition of copper matte, and the airflow rate, were investigated to lower the copper content in the slag. This study provides theoretical support and technical guidance for promoting the utilization of mineral resources.

Materials

In this study, the slag was produced in the converting process of YTCL, i.e. Converting stage 1 slag and Converting stage 2 slag. The Converting stage 1 slag is tapped continuously from the furnace while solid copper matte is fed continuously into the furnace. The Converting stage 2 slag is tapped from the furnace when the final converting process is finished. The granulated copper matte produced upstream by the Ausmelt TSL smelting process is added into the Ausmelt converting furnace with belt conveyor systems. Limestone and quartz are of industrial grade. Other chemicals, including hydrochloric acid (HCl, 36%, Xilong Chemical Group Co. Ltd.) and sulfuric acid (H2SO4, 98%, Xilong Chemical Group Co. Ltd.), are of analytical grade.
Experimental Methods

The experiments were conducted in an industrial experimental facility (inner diameter, 5 m; height, 12 m) located at the Copper Branch of Yunnan Tin Company. During the experimental period, the lance smelting factor and the lance submergence depth were controlled by changing the depth of the lance in the molten pool, to investigate the effect of these two parameters on the copper content in the resulting Converting 1 slags. A reduction of the converting slag was carried out before the Converting 2 slag was tapped at the end of the converting cycle. The bath temperature at the end of converting, the reductant matte rate, the total amount of matte fed, the lance airflow rate and the lance submergence depth were controlled at different levels to investigate their effects on the copper content in the final tapped slag. First, representative slags from the converting process, including Converting stage 1 and Converting stage 2, were collected. Then, thermodynamic calculations were used to evaluate the reduction of copper in the Ausmelt Converting process. Subsequently, the effects of various conditions, including the lance submergence depth in the molten bath, the molten bath temperature, the addition of copper matte, and the airflow rate, were investigated to lower the copper content in the slag. The phase composition of the Converting 1 and Converting 2 slags was investigated using XRD (Rigaku IV, Japan) in the 2θ range of 5-90° at a scan rate of 5°/min. The metallic copper particles contained in the slag samples are difficult to grind during sample preparation; therefore, these particles were screened out through a 400-mesh sieve and the undersize fraction was used for XRD analysis. To determine the content of magnetic components in the slag, a magnetic separator was used to select the magnetic components, and the obtained components were weighed to calculate their content in the slag. In addition, 0.10 g of Converting 1 or Converting 2 slag was totally dissolved using 20 mL of concentrated HCl and 15 mL of 50% v/v H2SO4 in a 300 mL beaker for the chemical analysis of copper content using an atomic absorption spectrophotometer (AAS, Z-8200, Japan).

Results and Discussion

3.1. XRD analysis of the samples

As can be seen from Fig. 1, at the Converting 1 stage the strongest peak is the magnetite phase (Fe3O4), followed by cuprous oxide (Cu2O) and a small amount of white matte (Cu2S). At the Converting 2 stage, the strongest peak is also Fe3O4, followed by CuFe2O4 and Cu2O.

3.2. Theoretical assessment of reducing copper content in converting slag

Based on the XRD analysis of the slag along with chemical analysis, it can be found that during the copper matte converting process the strongest peak is the Fe3O4 phase, and about 6 wt.% of metallic copper particles was suspended in the slag in both the Converting 1 and Converting 2 stages, which indicates that the content of magnetite in the slag was high. When a magnetic analyzer was used to test the slag, the measured results show that more than 40 wt.% of the slag is magnetic, suggesting that the viscosity of the slag is very high [17]. This is the main reason for the slag being contaminated with metallic copper particles, and it is due to the highly oxidizing atmosphere in the TSL converting process. This high oxygen potential of the slag leads to some oxidation of Cu to Cu2O during the Converting 1 stage [24]. In addition, some Cu2O reacts with Fe3O4 and forms CuO·Fe2O3 during the Converting 2 stage [25,26].
Therefore, in order to reduce the copper content in the slag during the TSL copper converting process, the magnetic strength of the slag must be lowered, so that Fe3O4 is reduced to FeO to provide a lower viscosity and form a low-melting-point slag, fayalite (2FeO·SiO2) [26]. When the viscosity of the slag is decreased, metallic copper particles can settle and be separated from the slag. Furthermore, when the oxygen potential is decreased, the Cu2O and CuFe2O4 dissolved in the slag are reduced to Cu2S, which is insoluble in the slag. Cu2S can then react with Cu2O to form metallic copper, which settles from the slag and reports to the blister copper. The related chemical reactions, numbered (1)-(6) [24,26-28], involve the reduction of Fe3O4, Cu2O and CuFe2O4 by carbon and FeS. Except for reaction (6), the FeO generated in these reactions reacts with silica in the molten bath to make slag, as shown in reaction (7):

2FeO + SiO2 = 2FeO·SiO2 (7)

The relationships between the standard Gibbs free energy (ΔG°) and temperature for reactions (1)-(6) were calculated and are shown in Fig. 2. Fig. 2 shows that the initial reaction temperatures are 850 K (reaction 1), 560 K (reaction 3), 400 K (reaction 4) and 955 K (reaction 6). At the operating temperature, the reaction tendency follows the descending sequence (2) > (5) > (4) > (3) > (1) > (6). The Gibbs free energy of reactions (2) and (5) is always negative [29,30], indicating that both reactions occur readily. Therefore, Fe3O4 and Cu2O in the slag would be easily reduced by FeS when FeS exists in the molten bath.

Figure 2. Relationship between Gibbs free energy and temperature for reactions (1)-(6).

Considering that the temperature of the molten bath reaches 1523-1573 K during the TSL copper converting process, the above-mentioned reactions (1)-(6) would proceed from left to right. Thus, at such temperatures, Fe3O4 and Cu2O in the slag would be easily reduced by carbon (C) or FeS.
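To connect the onset temperatures above with Fig. 2, the sketch below evaluates the simplest linear Gibbs free energy model, ΔG°(T) = ΔH° − TΔS°, and the temperature at which ΔG° crosses zero. The ΔH° and ΔS° numbers are illustrative placeholders tuned to the 955 K onset quoted for reaction (6); they are not tabulated thermodynamic data.

```python
def delta_g(t, d_h, d_s):
    """Linear Gibbs free energy model dG(T) = dH - T*dS [J/mol], treating
    dH [J/mol] and dS [J/(mol K)] as temperature-independent."""
    return d_h - t * d_s

def onset_temperature(d_h, d_s):
    """Temperature at which dG crosses zero, i.e. where a reaction with
    dH > 0 and dS > 0 becomes thermodynamically favourable."""
    return d_h / d_s

# Hypothetical values chosen only to reproduce the 955 K onset of reaction (6)
d_h, d_s = 191.0e3, 200.0
print(onset_temperature(d_h, d_s))      # 955.0 K
print(delta_g(1573.0, d_h, d_s) / 1e3)  # ~ -123.6 kJ/mol at bath temperature
```

At the 1523-1573 K bath temperature, any reaction whose ΔG°(T) is already negative proceeds from left to right, which is the criterion used in the discussion above.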
3.3. Effects of operating conditions on the copper content in the slag

3.3.1. Effect of the smelting air factor on the copper content in the slag at the Converting 1 stage

The Converting 1 stage of the TSL copper converting process is a continuous matte-feeding and slag-making operation, during which Cu2S is oxidized to blister copper; because of the high smelting air factor of the lance air and the strong oxygen potential of the molten bath, part of this copper is further oxidized to Cu2O. The vigorous stirring by the TSL lance also entrains some blister copper particles in the Converting 1 slag when it is tapped from the furnace. Moreover, since part of the blister copper is strongly oxidized to Cu2O, whose solubility in the slag is much higher than that of Cu2S, the total copper content of the slag is increased further. During the Converting 1 stage, copper matte of 58-60 wt.% Cu is fed at 55-60 tonnes per hour with a lance smelting air factor of 800 Nm3/tonne, under which the FeS in the molten bath is strongly oxidized to magnetite [24]. Some Cu2S is also oxidized, which increases the copper content of the slag. When the air factor is decreased to 700-750 Nm3/tonne, the oxygen potential of the bath is reduced, so a small amount of FeS survives to help reduce Fe3O4 and Cu2O, effectively decreasing the copper content of the slag. If the smelting air factor is reduced further, however, the heat released by the slagging reactions becomes insufficient, the bath temperature drops, and normal slag making at the C1 stage of the TSL process is impaired.

Therefore, lowering the lance smelting air factor decreases the oxygen potential of the molten bath, so less Cu2S is oxidized to blister copper during slag making in the Converting 1 stage. At the same time, the FeS from the copper matte must be protected from excessive oxidation. Only then does some FeS remain in the molten bath to promote reactions (2), (4) and (5), reducing the Fe3O4 content of the slag, keeping the slag fluid, and helping metallic copper particles settle out. The Cu2O in the slag is likewise reduced by FeS to Cu2S [26-28], which separates from the slag and lowers its copper content.

3.3.2. Effect of lance submergence depth in the molten bath on the copper content in the slag at the Converting 1 stage

In addition to controlling the smelting air factor during the Converting 1 stage, the lance submergence within the molten bath must also be adjusted. If the lance is submerged too deeply, blister copper or Cu2S beneath the slag layer is stirred up, and copper is lost with the discharged slag. If the lance submergence depth is too shallow, the stirring delivered by the lance flow is used ineffectively, the slagging reactions are incomplete, the bath temperature falls, the slag flows poorly, and the copper content of the slag rises [31]. Experimental studies during production have shown that, as the bath level gradually rises during feeding, the optimal lance submergence depth during the Converting 1 stage is 200-300 mm into the molten bath. If the lance is deeper than this, the lance flow stirs blister copper from the bottom of the bath into the slag, the copper content of the slag increases, and the settling and separation of metallic copper from the slag is disrupted.

3.3.3. Effect of molten bath temperature on the copper content in the slag during the Converting 2 stage

Keeping the other conditions approximately constant and adding reductant coal or copper matte to reduce the converting slag before blister tapping, the effect of the initial bath temperature on the copper content in the slag at the end of the Converting 2 stage was examined. The relationship between the initial molten bath temperature and the copper content of the slag after reduction is shown in Fig. 3. When the initial bath temperature is below 1563 K, the reduction is ineffective and the copper content of the slag remains above 15.67 wt.%. When the bath temperature is above 1573 K, however, the copper in the slag falls below 13.65 wt.%. The reason is that the reduction reactions with coal or matte are endothermic, so the initial bath temperature is critical to the reduction effect [32,33]. A low initial temperature combined with endothermic reactions drives the bath temperature down further, producing poor fluidity and high viscosity, so copper particles and Cu2S cannot settle and instead remain entrained in the slag. Therefore, to obtain a good reduction effect, the initial bath temperature must be held at around 1573 K before reduction begins, ensuring that adequate heat is available for the endothermic reactions of the reduction process.

Figure 3. Relationship between Converting 2 initial molten bath temperature and copper content in the slag after reduction
3.3.4. Effect of copper matte addition on the copper content in the slag at the Converting 2 stage

Keeping the other conditions approximately stable, the effect of copper matte addition, used as the reductant, on the copper content in the slag at the Converting 2 stage was examined; the results are shown in Fig. 4. The copper content of the Converting 2 slag decreases from 18.88 wt.% to 7.79 wt.% as the copper matte addition rate is increased from 1000 to 9000 kg/h. This demonstrates that the effect of copper matte as a reductant on the copper content is significant, owing to reactions (2), (4) and (5). A precondition for this is that the specific gravity of copper matte is close to that of Fe3O4 and CuFe2O4, while the stirring action of the air injected through the lance accelerates these reactions. The magnetite content of the slag is thereby lowered and the slag fluidity improved, allowing blister copper to settle and separate from the slag, while the oxidized copper dissolved in the slag is reduced by FeS to Cu2S [26-28]. All of these factors support a lower copper content in the slag. Although copper matte addition has a significant effect on the reduction of the copper content during the Converting 2 stage, the matte cannot be added in excess. The copper matte particles fed into the molten bath absorb heat as they melt into the liquid phase; if the amount is excessive, the bath temperature drops, which is unfavorable for reactions (2), (4) and (5). Under these conditions, the best copper matte addition rate lies in the range of 5000-7000 kg/h. It should also be recognized that excessive copper matte addition prolongs the operation and raises the sulfur content of the blister copper.

3.3.5. Effect of lance airflow rate on the copper content in the slag at the Converting 2 stage

The lance airflow rate is an important factor in the reduction of the Converting 2 stage slag. Its impact on the copper content of the slag after reduction was examined with the copper matte addition held at 7000 kg/h and the reduction time fixed at 15 min; the results are shown in Fig. 5.

Figure 5. Relationship of airflow rate with copper content in the slag after reduction

When the airflow rate is in the range of 18000-20000 Nm3/h, the reduction of the slag is ineffective and the copper content decreases only slightly. The main reason is that the FeS in the copper matte is preferentially oxidized [24,34], depleting the FeS required by the slag-reduction reactions (2), (4) and (5) and weakening the extent of slag reduction. When the airflow rate is decreased to 13000-14000 Nm3/h, the copper content of the slag begins to decrease significantly; further decreases in airflow produce only a slight additional reduction. Note also that differences in how deeply the lance is inserted into the melt change the effective lance airflow and the stirring intensity of the melt, and hence the quiescent settling and separation of copper from the slag [31]. Too low an airflow rate is undesirable in operation, as it can block the lance tip, weaken the stirring of the molten bath, and make stable normal operation difficult to achieve. For this case, the best airflow rate was found to lie in the range of 13000-14000 Nm3/h, which ensures that the reduction reactions at the end of the Converting 2 stage proceed smoothly to completion.
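As a rough feel for the trend of Sec. 3.3.4 above, the sketch below linearly interpolates between the two quoted endpoints of Fig. 4. The linearity is an illustrative assumption only; the measured curve need not be linear, and the practical optimum also reflects the heat balance discussed above.

```python
import numpy as np

# Quoted endpoints of Fig. 4: 18.88 wt.% Cu at 1000 kg/h of added matte,
# falling to 7.79 wt.% at 9000 kg/h. A straight line between them is a
# simplification for illustration only.
rates = np.array([1000.0, 9000.0])   # kg/h of copper matte added
cu = np.array([18.88, 7.79])         # wt.% Cu in slag after reduction

for r in (5000.0, 7000.0):           # recommended operating window
    est = np.interp(r, rates, cu)
    print(f"{r:.0f} kg/h -> ~{est:.1f} wt.% Cu in slag (linear estimate)")
```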
3.3.6. Effect of lance submergence depth on the copper content in the slag during the Converting 2 stage

The effect of the lance submergence depth in the molten bath on the copper content in the slag during the Converting 2 stage was examined with the copper matte addition rate fixed at 7000 kg/h, the reduction time set to 15 min and the lance airflow rate controlled at 14000 Nm3/h; the results are shown in Table 1. A lance submergence depth that is too deep (>900 mm) or too shallow (<700 mm) is not beneficial for slag reduction. If the lance is submerged too deeply, blister copper at the bottom of the molten bath is stirred up and mixed into the slag phase, raising the copper content of the slag. If it is too shallow, the mixing of the molten bath is poor, which hinders the reaction of the copper matte with the slag. The best submergence depth is 700-900 mm, at which the copper content of the slag is held at around 12.54 wt.%, i.e. it is effectively decreased.

3.4. Analysis of the copper content in the mixed slag from the Converting 1 and Converting 2 stages

Following adjustment of the smelting air factor and the lance submergence depth, and after reduction of the final Converting 2 slag, a mixed slag sample combining the Converting 1 and Converting 2 stages, referred to here as the comprehensive slag, was obtained. The sample was examined by XRD, with the result shown in Fig. 6. The strongest peak was the Fe3O4 phase, followed by a magnetic iron phase, and a small amount of a metallic copper phase was also observed. Comparing Fig. 1 and Fig. 6, the CuFe2O4, Cu2O and Cu2S in the slag were no longer detected after the slag reduction at the end of the Converting 2 stage, indicating that the measures taken are effective in reducing the copper content of the slag: the copper-containing phases in the slag are effectively diminished and the chemically dissolved copper is lowered. Chemical analysis of the slag samples showed a copper content of 7.70 wt.%, far below the value of 22.74 wt.% measured prior to the experiment.

Conclusions

In this study, theoretical analysis showed that lowering the copper content of the slag by reducing its Fe3O4, CuFe2O4 and Cu2O, thereby diminishing the magnetism of the slag and lowering its viscosity, is feasible at the process operating temperatures. When the smelting air factor is set to 700-750 Nm3/tonne, the oxygen potential of the bath is reduced, so a small amount of FeS survives to inhibit the formation of the Fe3O4 phase and to reduce the Cu2O, which effectively lowers the copper content of the slag. To obtain a good reduction efficiency, the initial bath temperature must be maintained at 1573 K and adequate heat must be provided. The best combination of operating conditions for this case was found to be a copper matte addition of 5000-7000 kg/h, a lance airflow rate of 13000-14000 Nm3/h and a lance submergence depth of 700-900 mm into the molten bath. Under these conditions, the copper content of the converting slag can be held at 7.70 wt.%. This study provides theoretical support and technical guidance for promoting the utilization of mineral resources. The distribution of other metals and the energy flow in the system will be considered in future studies.
Figure captions
Figure 2. Relationship between Gibbs free energy and temperature for reactions (1)-(6)
Figure 3. Relationship between Converting 2 initial molten bath temperature and copper content in the slag after reduction
Figure 4. Relationship between copper content in the slag and the amount of copper matte added at the Converting 2 stage
Figure 5. Relationship of airflow rate with copper content in the slag after reduction
Figure 6. XRD analysis of the comprehensive slag

Table captions
Table 1. Relationship of lance injection depth with copper content in the slag after reduction
Electro-mechano-optical detection of nuclear magnetic resonance

Signal reception of nuclear magnetic resonance (NMR) usually relies on electrical amplification of the electromotive force caused by nuclear induction. Here, we report up-conversion of a radio-frequency NMR signal to the optical regime using a high-stress silicon nitride membrane that interfaces the electrical detection circuit and an optical cavity through the electro-mechanical and opto-mechanical couplings. This enables optical NMR detection without sacrificing the versatility of the traditional nuclear induction approach. While the signal-to-noise ratio is currently limited by the Brownian motion of the membrane as well as additional technical noise, we find it can exceed that of conventional electrical schemes by increasing the electro-mechanical coupling strength. The electro-mechano-optical NMR detection presented here opens the possibility of mechanical parametric amplification of NMR signals. Moreover, it can potentially be combined with laser cooling techniques applied to nuclear spins.

INTRODUCTION

Electrical signals can be up-converted from the radio-frequency (rf) to the optical regime using a high-Q metal-coated silicon nitride membrane, which serves both as a capacitor electrode and as a mirror of an optical interferometer [1]. There, the mechanics of the membrane, the electronics of the rf circuit, and the optics of the interferometer interact with one another through the opto-mechanical and electro-mechanical couplings. Even though the principle of such membrane-based, rf-to-light signal transduction has now been established, its power has yet to be harnessed in the various rf-relevant fields. In this work, we report the first rf-to-light up-conversion of nuclear magnetic resonance (NMR) signals. NMR [2-4] is a powerful analytical tool, offering access to structure and dynamics in liquid and solid materials of physical, chemical and biological interest. Usually, NMR signal reception relies on nuclear induction [5] producing an electromotive force across the detection coil, followed by electrical amplification of the rf signals [6]. For a given signal strength, which can be significantly enhanced by nuclear hyperpolarization techniques [7,8], the sensitivity is limited by noise, namely the Johnson noise of the resistive components of the circuit as well as the inevitable noise of the amplifier. While the noise levels in unconventional optical NMR schemes, such as Faraday rotation [9,10], force detection [11], fluorescence [12,13], and atomic magnetometry [14], are much lower than in traditional NMR and can in principle be quantum-noise-limited, all existing optical NMR detection schemes lack the wide applicability of the traditional induction approach, which allows measurements of any bulk sample, including living organisms, placed inside the detection coil. Here, we put forward a versatile approach to optical NMR readout, applicable straightforwardly to chemical analysis as well as magnetic resonance imaging (MRI) diagnosis, by exploiting the membrane signal-transducer system that we designed and fabricated to meet the specific needs of pulsed NMR spectroscopy. In the following, we demonstrate the electro-mechano-optical (EMO) NMR detection scheme with proton (1H) spin echoes [15] in water.
The signal-to-noise ratio, albeit currently limited by the thermal noise due to the Brownian motion of the membrane as well as additional technical noise, is expected to increase with the electro-mechanical coupling strength. We show that the EMO NMR approach can offer better sensitivity than the conventional all-electrical scheme with realistic improvements in the experimental parameters. The EMO approach opens the possibility of mechanically amplifying NMR signals [16] and even of laser cooling nuclear spins [17-19] to further enhance the sensitivity of NMR.

EXPERIMENT

Experimental setup

We aimed at transducing 1H NMR signals induced in a magnetic field of ≈ 1 T from the original rf domain (ωs/2π ≈ 43 MHz) to the optical domain (Ωc/2π ≈ 300 THz) as a demonstration of EMO NMR. Figure 1 illustrates the experimental setup. For the opto-mechanical and electro-mechanical couplings, the mechanically compliant part was a high-stress silicon nitride (Si3N4) membrane (Norcada) with lateral dimensions of 0.5 × 0.5 mm and a thickness of 50 nm. On the membrane was deposited a circular Au layer with a diameter of 0.45 mm and a thickness of 100 nm. The effective mass m of the Au-coated membrane oscillator was 8.6 × 10^-11 kg. We found the fundamental (1,1) drum-mode oscillation of the Au-coated membrane at ωm/2π ≈ 180 kHz. The Q factor was about 1,800 in vacuum with no air damping. Counter electrodes were patterned on a silica plate, and the membrane capacitor was assembled with a designed gap d0 of 800 nm between the electrodes. The actual gap was estimated to be d0 ≈ 1.4 µm (see Appendix for details). The magnetic field was provided by a nominally 1 T permanent magnet, in which a pair of orthogonal rf coils were embedded for pulsed excitation of the nuclear spins and NMR signal reception, respectively. The excitation coil was a 2-turn saddle coil, while the detection coil was a 10-turn solenoid coil with a diameter of 3 mm (L = 150 nH). In addition, a pair of planar coils (not shown) were placed outside the rf coil pair to vary the static magnetic field around the resonance condition of the proton spins by application of a dc current. The membrane capacitor was connected in parallel with the detection coil, together with additional trimmer capacitors with capacitances Ct = 98 pF and Cm = 21 pF, forming a balanced resonant circuit at ωLC/2π ≈ ωs/2π ≈ 43 MHz with a Q factor of 26.7. The excitation coil was also impedance-matched at the same frequency. The isolation between these two separate circuits was 22.5 dB at the resonance frequency. The design of the optical Fabry-Pérot cavity is described in the Appendix. Here, the metal-coated membrane served as one of the two mirrors of an optical cavity for a laser beam with a wavelength of 780 nm. The other mirror, with a reflectance of 97% and a radius of curvature of 75 mm, was attached to a ring piezo actuator. The cavity length, coarsely adjusted to 17.5 mm, was locked by feedback on the piezo to the position where the amplitude of the reflected laser beam drops to half the depth of the cavity-resonance dip, so that the membrane oscillation resulted in amplitude modulation of the laser and was thus imprinted in the optical sideband signal at ωm. Note that the cutoff frequency of the piezo servo system is far below ωm, so that the mechanical response, which includes the rf signal contribution, is safely transduced to the optical sideband signal at ωm.
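Two of the membrane numbers quoted above can be cross-checked directly from the stated parameters. The sketch below evaluates the mechanical linewidth implied by the quoted Q and resonance frequency, and the zero-point fluctuation z_zpf = sqrt(ħ/(2mω0)) used later in the Appendix.

```python
import math

hbar = 1.054571817e-34        # J s
m_eff = 8.6e-11               # kg, quoted effective mass
f_m = 180e3                   # Hz, fundamental drum mode
Q = 1800                      # quoted mechanical quality factor

omega_m = 2.0 * math.pi * f_m
gamma_m = omega_m / Q                              # intrinsic linewidth (angular)
z_zpf = math.sqrt(hbar / (2.0 * m_eff * omega_m))  # zero-point fluctuation

print(f"gamma_m / 2pi ~ {gamma_m / (2.0 * math.pi):.0f} Hz")  # ~100 Hz, cf. the fit below
print(f"z_zpf ~ {z_zpf:.2e} m")                               # ~7e-16 m
```

The resulting gamma_m/2π of 100 Hz matches the Lorentzian linewidth fitted to the membrane spectrum later in the text.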
Electro-mechano-optical signal transduction

The rf signal developed in the detection LC circuit was parametrically transduced to the membrane oscillation in the presence of a drive signal at either the sum or the difference angular frequency, ωD = ωs ± ωm, applied to bridge the mismatch between the 1H resonance frequency ωs/2π ≈ 43 MHz and the membrane resonance frequency ωm/2π ≈ 180 kHz. The resultant membrane oscillation was then probed by light. To examine the EMO signal transduction, we applied to port A in Fig. 1 a continuous-wave rf signal at a frequency ωs/2π + 500 Hz, instead of the real emf signal, together with drive irradiation at various powers. Figure 2 shows the acquired optical sideband spectra, in each of which the mechanical response of the membrane to the noise (blue) as well as the delta-function-like rf signal tone (red) are visible. With increasing drive power, the mechanical resonance frequency shifts downward [1]. In addition to the Johnson noise and the Brownian noise of the mechanical oscillator, we found an increase in the noise floor with the drive power, which we ascribe to the phase noise of the drive, as described below.

1H spin echo experiment

1H NMR experiments were then carried out at room temperature using a 0.1 mol/L aqueous solution of CuSO4 in a glass test tube (inner diameter 1 mm) containing ≈ 2.2 × 10^20 1H spins of water molecules; the paramagnetic copper ions accelerate 1H spin relaxation, allowing rapid repetition for signal averaging. The spin-echo measurement [15] was performed by applying rf pulses with a power of +17 dBm to the tuned excitation coil through port B in Fig. 1, with widths of the π/2 and π pulses of 140 µs and 280 µs, respectively, and a pulse interval of 1.5 ms. The inset of Fig. 3 shows a conventional electrical signal of the 1H spin echo, obtained by connecting port A in Fig. 1 to a low-noise amplifier, so that the amplified electrical nuclear induction signal could be sent to the conventional demodulation circuit of the NMR spectrometer. The maximum intensity of the NMR echo signal was -93 dBm at the input of the low-noise amplifier. The observed decay, with a time constant T2* ≈ 320 µs, was dominantly caused by the inhomogeneity of the magnetic field. Next, the low-noise amplifier at port A in Fig. 1 was replaced with a drive source for down-conversion of the NMR signal to the mechanical frequency, and the optical output from the Fabry-Pérot cavity was measured under a drive power of +15 dBm. During the rf pulses, the frequency of the drive was detuned by +400 kHz, so as to decouple the electro-mechanical interaction and thereby prevent the membrane from being shaken by the excitation rf pulse leaking into the detection circuit, which, in spite of the 22.5 dB isolation, was still orders of magnitude more intense than the NMR signals induced in the receiving LC circuit (-93 dBm).

Figure 2. The spectra are plotted with vertical offsets proportional to the drive power. The baselines (horizontal broken lines) indicate the corresponding drive power (right axis) as well as the reference power spectral density of -114.5 dBm/Hz. Along with the membrane spectra (blue lines), the peaks corresponding to the tone signals (red lines) appear 500 Hz off resonance from the mechanical resonance frequency ωm (black points). The observed downward shifts of the mechanical resonance frequency were fitted with the model discussed in the Appendix (orange line).
Figure 3 shows the electro-mechano-optically detected spin-echo signal (blue line) accumulated 5000 times with a repetition interval of 20 ms. For comparison, we performed another measurement with identical experimental parameters except for a slight shift of the static magnetic field (≈ 0.06 mT), making the 1H spins off-resonant by 2.5 kHz, and verified that the signal disappeared (red line), convincing ourselves that the profile of the optically detected signal (blue line in Fig. 3) really does originate from the nuclear induction signal. The difference between the spin-echo profile obtained by the EMO approach and that of the conventional electrical scheme can be explained by the transient response of the high-Q membrane. That is, the response b(t) of the membrane to an excitation a(t), which in the present case is the profile of the electrically detected spin echo, is determined by the response function h(t) of the membrane through the convolution b(t) = ∫_{-∞}^{t} h(t - τ) a(τ) dτ. Since the spectrum of the fundamental mode of the membrane was well fitted by a Lorentzian with width γm/2π ≈ 100 Hz, we approximated the response function h(t) by an exponentially decaying function with time constant 2/γm and calculated the response b(t), which was found to reproduce the measured profile of the EMO NMR signal (broken line in Fig. 3).

THEORY AND DISCUSSION

Dynamics of the EMO system

Figure 4 schematically shows the pathway of successive signal transduction through a chain of three harmonic oscillators, namely the LC circuit, the membrane oscillator, and the optical cavity. Here, q and φ are the charge and flux of the LC circuit, z and p are the displacement and momentum of the mechanical oscillator, and X and Y are the canonical quadratures of the optical cavity field. Gem and Gom are the electro-mechanical and opto-mechanical coupling strengths. γi, γm, and γo are the intrinsic dissipation rates of the LC circuit, the mechanical oscillator, and the optical cavity. Associated with these dissipations there are rotating-frame thermal fluctuation inputs qin and φin for the LC circuit and a laboratory-frame thermal fluctuation input fin for the mechanical oscillator. The thermal fluctuation inputs for the optical cavity, xin and yin, are negligible and thus omitted. κi and κo are the external coupling rates of the LC circuit and the optical cavity, and, in addition to the NMR signal input S, the associated fluctuation inputs are Qin and Φin for the LC circuit and Xin and Yin for the optical cavity. The total dissipation rates are thus κiT = κi + γi for the LC circuit and κoT = κo + γo for the optical cavity. Using the input-output formalism in the rotating-wave approximation [20], we obtain the Heisenberg-Langevin equations of motion, Eqs. (1)-(6), where Δi = ωD - ωLC is the difference between the drive signal frequency and the LC resonance frequency, and Δo = ΩD - Ωc is the detuning of the optical cavity from the drive laser frequency. We note that the electro-mechanical coupling Gem increases quadratically with decreasing gap d0 between the electrodes of the capacitor (see Appendix). Now we shall see how the rf NMR signal, S in Eq. (1), is transduced to the optical output Xout, which is given by the input-output relation. By taking the time derivative of Eq. (3) and using Eq. (4), we obtain the equation of motion for the displacement, where fin ≡ pin + żin/ω0 is the mechanical thermal noise input.
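Before proceeding with the derivation, the transient-response argument used above to explain the EMO echo profile is easy to reproduce numerically. The sketch below convolves a surrogate echo envelope (a Gaussian with the quoted T2* ≈ 320 µs; the measured profile itself is not reproduced here, and the echo time is an assumed value) with an exponential membrane response of time constant 2/γm.

```python
import numpy as np

dt = 1e-5                                    # s, 10 us time step
t = np.arange(0.0, 20e-3, dt)                # 20 ms window
T2_star = 320e-6                             # s, quoted decay constant
t_echo = 3e-3                                # s, assumed echo time (placeholder)
a = np.exp(-((t - t_echo) / T2_star) ** 2)   # surrogate echo envelope a(t)

gamma_m = 2 * np.pi * 100.0                  # rad/s, fitted membrane linewidth
h = np.exp(-gamma_m * t / 2.0)               # impulse response, time constant 2/gamma_m

# b(t) = integral of h(t - tau) a(tau) d tau, evaluated discretely:
b = np.convolve(a, h)[: len(t)] * dt

print(f"echo envelope peaks at {t[np.argmax(a)] * 1e3:.2f} ms")
print(f"membrane response peaks at {t[np.argmax(b)] * 1e3:.2f} ms (delayed, broadened)")
```

As expected from the convolution, the EMO response is delayed and stretched relative to the electrically detected echo, consistent with the broken line in Fig. 3.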
In the frequency domain, the equation of motion for the displacement can be written as Eq. (9), where the mechanical susceptibility χm(ω) is defined by Eq. (10). In a similar fashion, we obtain the frequency-domain representations for q and X, where the LC susceptibility χLC(ω) and the optical susceptibility χc(ω) are defined accordingly. In our experiment, Δi ≈ ωm was much smaller than the resonant bandwidth of the LC circuit, so we consider the case of resonant application of the drive, Δi → 0. In addition, to detect the membrane displacement through amplitude modulation of the optical output, we detuned the optical cavity by approximately half its bandwidth, i.e., Δo ≈ κoT/2. Further, since the frequency ωm of interest in the optical output signal is much smaller than Δo, we set ω → 0 in Eqs. (12) and (14). Neglecting G²om ≪ 1, we obtain Eq. (15) after some algebra. In the laboratory frame, the linearized rotating-frame signal Xout in Eq. (15) has to be modified to X̃out, where the last term comes from the displacement by the optical drive, which oscillates at frequency ΩD. Here, ND is the intracavity photon number (see Appendix). Note that Yout, now appearing in X̃out, is given by Eq. (17). In the photo-detected signal |X̃out|² in the laboratory frame, the components oscillating at ω ∼ ωm, produced by the interference between the term oscillating at ΩD and those at ΩD ± ωm, are of interest. These components constitute the optical signal output, O(ω), which amounts to the magnitude of the quadrature-demodulated signal (see Appendix) and can be written as Eq. (18). This indeed contains the rf signal input S along with various noises, faithfully transduced from the mechanical response, Eq. (9), with an amplification factor proportional to Gom, as seen in Eqs. (15) and (17). The added noise here is just the optical shot noise, which can be quantum-noise-limited. One of the potential advantages of EMO NMR detection over conventional NMR is thus the fact that both the Brownian noise and the optical shot noise can be suppressed by increasing the electro-mechanical coupling Gem as well as the opto-mechanical coupling Gom [1].

Figure 4. Schematic diagram of the electro-mechano-optical signal transduction of NMR. The three harmonic oscillators, namely the LC circuit, the membrane, and the optical cavity, are represented by circles, each of which has input and output channels, with coupling strengths κi, Gem, Gom, and κo, and dissipation to the bath with rates γi, γm, and γo. The rf signal S generated by nuclear induction at frequency ωs ≈ ωLC is transduced to the membrane oscillation through the LC circuit via the electro-mechanical coupling under application of the drive signal at ωD = ωLC + ωm. The resultant membrane oscillation is in turn read out optically with the optical cavity through the opto-mechanical coupling.

Noise spectral densities

Since the mean value of the noise is zero, each noise shall be evaluated in terms of its spectral density. For the Brownian noise of the mechanical oscillator, the noise spectral density SFF is defined as SFF = |fin|².
The Johnson noise in the LC circuit can come from the bath as well as from the input channel, and its spectral density Sqq is given by the sum of the two contributions. Assuming that the noise spectra SFF(ω) and Sqq(ω) are white within the bandwidth of the mechanical resonance, we obtain the Nyquist-type noise spectra, Eqs. (19) and (20). Here, we assume that the electric bath temperature T is 300 K, while the mechanical bath temperature Teff is not necessarily equal to 300 K but can be higher, given that the quality factor is good, so that ambient noise can easily bring the mechanical oscillator out of thermal equilibrium. We note that the LC circuit and the mechanical oscillator are both in the high-temperature regime, where kB·Teff ≫ ħωm and kB·T ≫ ħωLC. Conversely, we can expect the noise spectral densities SXX ≡ |Xin|² and SYY ≡ |Yin|² of the optical part to be much smaller. From Eq. (18), the single-sided spectral density Soo(ω) of the optical signal at frequencies ω close to ωm can be written in terms of the opto-mechanical cooperativity Com and the frequency-dependent electro-mechanical cooperativity Cem(ω), which we introduce here [Eqs. (22)-(24)].

Signal-to-noise ratio

In the under-coupling limit κo ≪ κoT, the signal-to-noise ratio S/N in units of photon number within a narrow frequency range Δ ≪ ωm around ω = ωm, i.e., from ωm - Δ/2 to ωm + Δ/2, is given by Eq. (25), where we used γ²m|χm(ωm)|² = 1. The form of the signal-to-noise ratio consolidates the aforementioned potential advantage of EMO NMR: all the noise, except for the Johnson noise, which is intrinsically inseparable from the rf signal, is suppressed by increasing the electro-mechanical coupling Gem and thus the electro-mechanical cooperativity Cem [1]. Note that yet another figure of merit, the signal transfer rate [21], can also be defined for the current EMO NMR.

Comparison to the experiments

We calibrated the parameters characterizing the EMO signal transduction (see Appendix) from the acquired optical sideband spectra shown in Fig. 2. In the presence of a +15 dBm drive, an electro-mechanical cooperativity Cem of 0.019 was attained, whereas the opto-mechanical cooperativity Com was 0.32 × 10^-3. With these values, the signal transfer rate amounts to ∼ 1.1 × 10^-7. As the drive power is increased to make Cem much larger, however, the phase noise of the drive becomes conspicuous, as mentioned above. In terms of the offset angular frequency ω, the profile of the phase noise can be expressed as a Lorentzian with spectral linewidth δP [Eq. (26)], where 1/f noise and frequency-independent noise are ignored. The photon flux associated with the phase noise of the drive at sideband frequency ω is then given in terms of the drive power PD. Thus, the spectral density at frequency ω in Eq. (22) needs to be modified when the phase-noise contribution is appreciable. To deduce the expected signal-to-noise ratio, one missing element is the bandwidth of the NMR signal. In the echo experiment, the effective detection bandwidth is determined by 1/(πT2*) ≈ 1 kHz, where T2* ≈ 320 µs. Since the bandwidth of the electro-mechano-optical NMR detection is limited by the mechanical response, Δ/2π ≅ γm/2π ≈ 100 Hz, the bandwidth mismatch roughly leads to a reduction of the signal strength by a factor of γm·T2*/2. The signal-to-noise ratio expected for the echo experiment shown in Fig. 3 is thus given by Eq. (28) for the single-shot measurement,
where the parameter ηp in Eq. (28) characterizes the phase noise around ωm with respect to the carrier at ωD; it was evaluated to be ηp ≈ 10^-11 (see Appendix). The number of total noise quanta [the denominator of Eq. (28)] is estimated to be 2.4 × 10^10, while the signal quanta [the numerator of Eq. (28)] for the echo experiment are on the order of 3.6 × 10^8; these are proportional to the noise and signal voltages squared, respectively. The noise budget of the current EMO NMR detection is shown in Table I. With 5000-fold averaging, the signal-to-noise ratio becomes roughly 8, in good agreement with the signal-to-noise ratio of the acquired data (S/N ≈ 5.4) shown in Fig. 3.

Prospects

Even though the signal-to-noise ratio in the present proof-of-principle EMO NMR demonstration is lower than that of the conventional electrical NMR approach, there is plenty of room for improving the sensitivity. In particular, the electro-mechanical cooperativity Cem ∝ 1/d0⁴ would increase dramatically upon reducing the capacitor gap. With realistic revisions, including the capacitor design, we have the prospect of attaining an effective noise temperature as low as 6 K for room-temperature operation of the transducer with a +30 dBm drive (see Appendix), which would outperform the conventional NMR approach. If the membrane is placed in a cryogenic environment, further improvement is expected. In addition, the effect of the phase noise of the drive can be made negligibly small by increasing the mechanical oscillation frequency and thereby the difference ωD - ωs. One way to do this would be to reduce the weight of the metal layer deposited on the membrane. Filters can also be arranged to prevent the phase noise of the drive from exciting the mechanical oscillator. Moreover, as the electro-mechanical cooperativity Cem increases, the signal transduction would be accompanied by parametric signal amplification. So far, in NMR and MRI, parametric amplification has been realized using an LC circuit containing a varactor diode, whose capacitance can be varied electrically [22,23]. The present work could lead to electro-mechanical parametric amplification of NMR/MRI signals. The use of the Fabry-Pérot optical cavity in this work opens the possibility of exploiting radiation-pressure cooling [24-26]. If the opto-mechanical and electro-mechanical couplings as well as the laser power are large enough, the membrane's oscillation modes, and thereby the eigenmode of the LC circuit, can be cooled [17], implying the possibility of cooling nuclear spins through the electro-mechanical and opto-mechanical couplings without physically lowering the temperature of the experimental system. If the expected challenges, such as the insufficient Q factor and the finite dissipation rates to the bath, can be overcome, laser cooling of nuclear spins would provide a route toward further enhancing NMR sensitivity. It is worth noting that nuclear-spin laser cooling would not require doping the sample of interest with paramagnetic impurities, in contrast to current dynamic nuclear polarization schemes [27]. For the coupling between a microwave cavity and an ensemble of electron spins [28-30], an analogous population exchange has been theoretically proposed [18,19] and experimentally reported [31,32]. Its extension to nuclear spins with a cold mechanical nano-resonator has also been suggested [33].
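The averaging arithmetic quoted above can be checked in a few lines, assuming, as is standard, that averaging N identical shots improves the power signal-to-noise ratio by a factor of N:

```python
import math

signal_quanta = 3.6e8    # numerator of Eq. (28), quoted above
noise_quanta = 2.4e10    # denominator of Eq. (28), quoted above
n_avg = 5000             # number of accumulated echoes

snr_single = signal_quanta / noise_quanta        # single-shot power ratio, ~0.015
snr_averaged = math.sqrt(n_avg * snr_single)     # amplitude SNR after averaging

print(f"single-shot power SNR ~ {snr_single:.3f}")
print(f"amplitude SNR after {n_avg} averages ~ {snr_averaged:.1f}")  # ~8.7, cf. ~8
```

The result, about 8.7, agrees with the roughly 8 quoted above and is of the same order as the measured S/N ≈ 5.4.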
With the separate coil used for rf excitation, pulsed-NMR techniques for coherent manipulation of nuclear spin interactions can be applied straightforwardly [34], whereas on the receiving side the bandwidth is limited by that of the membrane oscillator (≈ 100 Hz). This can be rather narrow compared to the spectral width of interest in NMR analysis, where the resonance lines can spread owing to broadening and/or distributions of isotropic shifts. In this context, the EMO approach is compatible with traditional continuous-wave NMR [7] as well as the recently reported field-sweep NMR [35,36], where the frequency of interest is fixed throughout the measurement and the external magnetic field is varied instead. It is also worth noting that the aforementioned enhancement of the electro-mechanical coupling would damp the membrane's oscillation and thereby increase the accessible bandwidth.

SUMMARY

Rf signals of nuclear induction can be up-converted to light through a membrane oscillator that forms part of both the LC resonant circuit and the optical cavity. The EMO NMR approach presented here potentially offers better sensitivity than the conventional electrical detection scheme.

Membrane capacitor [see Fig. 1(c)]

Counter electrodes made of Al were patterned on a 0.4 mm-thick silica substrate by photo-lithography and wet etching, as depicted in Fig. A1. To support the membrane frame with the designed 800 nm gap between the capacitor electrodes, Al pillars were made on the substrate. In addition, we mechanically carved a burr-free recess with a depth of 50 µm serving as a pocket for dust particles, which can otherwise get stuck between the membrane and the substrate and render the gap far bigger than designed. Even with this precaution, the actual gap was estimated to be larger (≈ 1.4 µm) than designed (see below).

Optical-cavity design

A short summary of ray optics

A Gaussian beam (propagating along the z axis) is characterized completely by a pair of real parameters z and z0, or equivalently by a single complex parameter q, often called the q parameter, defined as q(z) = z + i·z0. Here, z represents the distance from the beam waist, i.e., from the position at which the radius of the beam is minimal. By convention, z is taken to be positive if the beam waist is on the left side. z0 is called the Rayleigh range, which measures the distance from the beam waist to the position where the beam diameter is √2 times larger than at the waist. Alternatively, the Gaussian beam is characterized by the inverse 1/q(z) = 1/R(z) - iλ/(πW²(z)), with R(z) = z[1 + (z0/z)²] being the radius of curvature of the wavefront and W(z) = W0√(1 + (z/z0)²) being the beam radius, where W0 = √(λz0/π) is the waist radius.

Optical cavity for EMO NMR

The optical cavity, one of the key components of the EMO NMR system, is composed of a pair of mirrors, one of which is the metal layer deposited on the membrane, the other being a concave mirror. To build a stable hemispherical laser resonator (Fig. A2), the mirrors need to be placed such that the laser beam reflecting back and forth forms a waist at the membrane, with the wavefront's radius of curvature matching that of the cavity mirror. In addition, the reflectance of the concave cavity mirror matches that (97%) of the Au mirror on the membrane, so as to approach critical coupling. Note that owing to loss mechanisms other than the absorption loss of the Au mirror (such as diffraction loss stemming from beam misalignment), the desired critical coupling could not be fully realized in the experiment.
In this work, the wavelength λ was 780 nm, and we employed a concave mirror with a radius of curvature R of 75 mm, aiming to set the beam diameter 2W0 at the membrane mirror to 180 µm so as to fit safely within the small Au mirror of diameter 0.45 mm. From Eqs. (A3)-(A5) we found that a cavity length z of 17.5 mm fulfils the requirements for the hemispherical resonator.

NMR experiments

In this work, we detected 1H NMR signals in a 0.1 mol/L aqueous solution of CuSO4 containing ≈ 2 × 10^20 1H spins. The spin echo experiments were performed by applying successive π/2 and π pulses with a common power of +17 dBm, widths of 140 µs and 280 µs, and an interval of 1.5 ms. For generating the rf signals and detecting the NMR signals, we used a home-built NMR spectrometer equipped with multi-channel rf transmitters and a receiver [37]. Each transmitter is capable of generating rf signals of up to 600 MHz with arbitrary amplitude, phase, frequency, and pulse modulation, and the receiver serves for frequency conversion, digital quadrature demodulation, and digital filtering. In the conventional electrical detection of the 1H spin echo demonstrated in the inset of Fig. 3 for comparison, the rf pulses were fed to port B of Fig. 1, while the nuclear induction was detected by monitoring the signal coming out of port A through a low-noise amplifier with a noise figure of 1.1 dB. Conversely, in the EMO approach the drive signal was fed through port A during acquisition of the photo-detected signal, and the signal from the photo-detector was amplified by another low-noise amplifier (SR560, Stanford). The drive signal was generated with a home-built direct digital synthesizer (DDS) board equipped with a DDS chip AD9858 (Analog Devices) [37]. Fig. A3 shows a schematic diagram of the quadrature demodulation of the photo-detected NMR signal with respect to the mechanical frequency ≈ ωm, performed in such a way that phase coherence with the excitation rf pulses is retained. Note that the degradation of the signal-to-noise ratio in the process of mandatory signal amplification and frequency conversion is dominated by the amplifier at the first stage, i.e., in the present case, the shot-noise-limited photo-detector composed of a Si photo-diode and low-noise operational amplifiers.

Theoretical description

To understand the up-conversion mechanism of the signals from the rf to the optical regime with the membrane, we recapitulate the theory developed in Ref. [1] with the modifications necessary for the present experiments. The theory also serves for the evaluation of the signal-to-noise ratio.

Hamiltonian

The Hamiltonian of an LC circuit and a metal-coated membrane oscillator, which is coupled to the former by being part of the capacitor, can be written as Eq. (A6). Here, Q, Φ, C(Z), and L are the charge, the flux, the capacitance, and the inductance of the LC circuit. Z and P are the displacement and momentum of the membrane oscillator, m is the effective mass, and ω0/2π is the eigenfrequency of the unloaded membrane oscillator. The coupling between the LC circuit and the membrane oscillator stems from the first term of Eq. (A6), containing the displacement-dependent capacitance. The last term represents the drive applied to the LC circuit with voltage V. The equations of motion then follow. For convenience, let us introduce the dimensionless variables Q̃ = [L/(ħ²C0)]^(1/4) Q, Φ̃ = [C0/(ħ²L)]^(1/4) Φ, Z̃ = √(mω0/ħ) Z, and P̃ = P/√(ħmω0), where C0 is the capacitance at the equilibrium displacement.
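Returning briefly to the cavity design above, the hemispherical-resonator numbers follow from Eqs. (A3)-(A5) alone: imposing R(z) = 75 mm at z = 17.5 mm fixes the Rayleigh range and hence the waist at the membrane. The short check below reproduces the targeted 180 µm beam diameter.

```python
import math

lam = 780e-9     # m, laser wavelength
R = 75e-3        # m, radius of curvature of the concave mirror
z = 17.5e-3      # m, cavity length (membrane to concave mirror)

z0 = math.sqrt(z * (R - z))           # Rayleigh range, from R = z + z0**2 / z
W0 = math.sqrt(lam * z0 / math.pi)    # waist radius at the membrane

print(f"Rayleigh range z0 ~ {z0 * 1e3:.1f} mm")
print(f"waist diameter 2*W0 ~ {2 * W0 * 1e6:.0f} um")  # ~180 um, as targeted
```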
Using the LC resonance frequency ωLC = 1/√(LC0), we write the equations of motion for the dimensionless variables as Eqs. (A11)-(A14). Let us now introduce the oscillating drive voltage at ωD with amplitude V0, that is, Ṽ = V0 cos(ωD·t), which constitutes the major difference from the previous analysis of Ref. [1], where a DC voltage was applied to realize the electro-mechanical coupling. To linearize the equations of motion, let us suppose that the mean values and the fluctuations can be separated as Q̃ = Q0 + q, and so on. The steady-state charge amplitude diverges at ωD = ωLC; this singularity is avoided once dissipation is taken into account. From Eq. (A14), the mean value of the displacement Z0 is obtained, where the equilibrium capacitance C(Z0) is used in the second equality. For the fluctuations q, φ, z, and p, the equations of motion linearized around the mean charge Q0 and the mean displacement Z0 follow. With Q0 as given in Eq. (A15), and introducing the membrane eigenfrequency ωm ≡ ω0 + δω, we obtain the linearized equations of motion and rewrite the Hamiltonian, Eq. (A26), as Eq. (A27). Now, by invoking the rotating-wave approximation and eliminating the counter-rotating terms a·e^(-iωD·t) and a†·e^(iωD·t), we arrive at Eq. (A28). Then, by performing the unitary transformation U = exp(iωD·t·a†a), we obtain a time-independent Hamiltonian, which is recast in terms of the quadratures, where -Δi = ωLC - ωD and the electro-mechanical coupling rate Gem is given by Eq. (A31). By similar arguments, we can establish the effective Hamiltonian for the opto-mechanically coupled system, where Ωc/2π and ΩD/2π are the optical cavity frequency and the optical drive frequency, respectively, X and Y are the mutually orthogonal quadratures of the intra-cavity optical field, and Gom is the opto-mechanical coupling rate. Putting the electro-mechanically and opto-mechanically coupled systems together, we obtain the effective Hamiltonian, Eq. (A33), from which the Heisenberg-Langevin equations of motion, Eqs. (1)-(6), are derived with the ad hoc dissipation and fluctuation input terms added.

Electro-mechano-optical signal transduction

rf signal input, Johnson noise, Brownian noise, and optical back-action noise

The mechanical response, Eq. (9), contains the emf signal S along with various noises; for Δi = 0, Δo = κoT/2, and ω → 0 it can be rewritten, using Eqs. (11) and (12), as Eq. (A34). Here, we see that the rf signal and the Johnson noise are faithfully transduced to the mechanical response with an amplification factor proportional to Gem. The added noises are the Brownian noise of the mechanical oscillator and the optical back-action noise; the latter can be neglected here since Gom is small. The Brownian noise corresponds to the amplifier noise of the conventional NMR detection scheme. The contribution of the Brownian noise can, however, be made negligibly small in principle if the electro-mechanical coupling Gem becomes large. We shall discuss the signal-to-noise issue in more detail below.

Optical shot noise and optical signal output

Now let us see how the mechanical response, Eq. (A34), appears in the optical readout. Using Eq. (12) and Eq. (A34) with Δo = κoT/2 and ω = 0 (a valid assumption, since ω ≈ ωm ≪ Δo ≪ ΩD ≈ Ωc), X can be written down explicitly. This X has been tacitly displaced by (α + α*)/√2 = √ND·cos θ from the lab frame in the linearized rotating-frame Hamiltonian, Eq. (A33), where we deal only with the fluctuations above the non-zero mean value.
Here, ND is the intracavity photon number. Since we are in the rotating frame at the drive frequency, α = √(ND/2)·e^(iθ) is time-independent.

Signal-to-noise ratio

In the conventional electrical detection, the NMR signal acquisition process is inevitably accompanied by amplifier noise, which is characterized by the equivalent amplifier noise temperature Tn = √(SVV·SII)/kB, where SVV and SII are the voltage and current noise spectral densities of the amplifier [6]. For the low-noise amplifier used in our conventional electrical NMR detection, Tn was measured to be 84 K. In the EMO approach, where the signals are acquired through a membrane displacement measurement, the additional noises come from the Brownian motion of the membrane, the shot noise of the laser beam, and the back-action of the photons impinging on the membrane. Since the last contribution is negligible here, the optical shot noise with spectral density SXX and the displacement noise with spectral density SFF contribute to the net noise.

Brownian noise and Johnson noise

First, let us evaluate the Johnson noise and the Brownian noise of the mechanical oscillator. From Eq. (A34), the double-sided velocity noise spectral density Sżż of the mechanical oscillator is obtained, and the rms displacement noise ⟨z²n⟩ follows. We can see from Eqs. (19), (20), and (A37) that the total noise can be dominated by the Johnson noise when the electro-mechanical cooperativity is sufficiently large.

Optical shot-noise back-action

Next, we shall evaluate the optical shot noise. Equation (A34) leads to the portion of the double-sided velocity noise spectral density of the mechanical oscillator that stems from the optical shot noise SXX(ω) and SYY(ω). If these noise spectra do not include classical noise around the mechanical frequency ω ≈ ωm, SXX(ω) and SYY(ω) are said to be shot-noise-limited; since kB·T ≪ ħΩc, nth(Ωc, T) ≈ 0. The resulting noise spectrum of the mechanical displacement is given by Eq. (A42), where the opto-mechanical cooperativity Com is defined by Eq. (A43). The total mechanical displacement driven by the various noise sources, the Brownian noise and the Johnson noise [see Eq. (A37)] and the shot noise [see Eq. (A42)], then follows.

Signal-to-noise ratio

In the presence of appreciable phase noise of the drive signal, the single-sided spectral density at frequency ω in Eq. (22) is modified accordingly, and the signal-to-noise ratio of Eq. (25) is modified as well.

Parameter calibrations

Equilibrium distance d0 between the electrodes of the membrane capacitor

The electro-mechanical cooperativity Cem can be deduced from the shift of the eigenfrequency of the membrane oscillator as a function of the drive power. The frequency shift δω is given in terms of the dimensionless variables as in Eq. (A20). Here, Q0 is the quantity given by Eq. (A15) with a loss term included, and the dominant contribution to Q0² comes from the rectified DC term. With this Q0², the frequency shift can be written with the dimensionless variables using the second derivative of the capacitance with respect to displacement, which holds when the contribution of the displacement-dependent membrane capacitance to the total capacitance is small. In terms of the physical variables, δω then follows. The capacitance C(Z) is the total capacitance of a series LCR circuit equivalent to the impedance-matched probe LCR circuit shown in Fig. 1, with the trimmer capacitance Ct ≈ 98 pF, the parasitic capacitance Cp ≈ 21 pF, and the membrane capacitance Cm(Z).
Here, the membrane capacitor can be approximated as two parallel-plate capacitors in series; its capacitance follows from ε0, the vacuum permittivity, A, the area of the capacitor, and d, the nominal distance between the electrodes. Eq. (A52) then becomes an expression in terms of d0 = d + Z0, the equilibrium distance between the electrodes for a given drive power PD = V²/R, where R = 50 Ω is the impedance of the circuit seen from port A at the drive frequency.

Electro-mechanical cooperativity Cem

The electro-mechanical coupling rate Gem can then be estimated. From Eqs. (A20), (A31), and (A49), we obtain Gem in terms of the dimensionless variables, and hence in terms of the physical variables. Using the approximate identity of Ref. [1] together with Eq. (A54), we arrive at the final expression, where we used 2R/L = κiT and z_zpf = √(ħ/(2mω0)), the zero-point fluctuation of the membrane oscillator. The form of Gem has a clear physical interpretation: the factor 1/2 stems from the rotating-wave approximation performed in Eq. (A28); g_em ≡ ωLC·(z_zpf/(2d0))·(Cm(Z)/C0) is the so-called single-photon electro-mechanical coupling rate [20], where the factor η ≡ Cm(Z)/C0 signifies the contribution of the membrane capacitor to the total capacitance C0; and √(PD/(ħωLC·κiT)) is the square root of the intra-LC-resonator photon number for a resonant drive.

Phase noise and mechanical bath temperature

Now let us evaluate the phase noise and the mechanical bath temperature. To this end, we examine how the noise and the signal grow as a function of the electro-mechanical cooperativity Cem(ω) ∝ PD. Here we neglect the shot noise and any other noise contributing to the noise floor, as it can be subtracted from the data. As we have seen in Eq. (A45), the sum of the Johnson noise, the mechanical Brownian noise, and the phase noise of the drive takes a definite spectral form. When a narrow-band external tone at ωT = ωD + ωm + ΔT with power PT is applied, on the other hand, the resulting spectrum can also be written down. Comparing these noise powers (areas), we obtain a ratio, Eq. (A62), that does not contain the optical quantities ND, Com, κo, and κoT. Here, the integration range δ for the denominator is slightly wider than the bandwidth over which the tone signal is appreciable, so the phase-noise contribution can be modeled as L(ωm + ΔT)·δ·PD/ωD, while the integration range Δ (Δ ≫ δ) for the numerator is slightly wider than the bandwidth γm of the mechanical response, so the phase-noise contribution becomes γ²m|χm(ω)|²L(ω). Figure A4 shows the ratio ⟨X²n⟩/⟨X²t⟩ as a function of the drive power PD, generated from the data of Fig. 2. From these data, the mechanical bath temperature given by Eq. (19) is estimated to be Teff ≈ 205 K, more or less consistent with the environment temperature of 300 K, indicating no appreciable heating effects. As for the phase noise, we deduce L(ωm + ΔT)·δ ≈ 5.8 × 10^-10 and ηp ≈ 9.6 × 10^-12 from the data. The former corresponds to a phase-noise bandwidth [see Eq. (26) for the definition] of δP/2π ≈ 19 Hz, while the latter corresponds to δP/2π ≈ 31 Hz. These values are sensible, given that our model of the phase noise in Eq. (26) is a simple one, ignoring 1/f noise and frequency-independent noise.

Opto-mechanical cooperativity Com

The opto-mechanical coupling rate Gom can now be estimated.
The strategy is to use the optical shot-noise level as a reference [38] and evaluate the experimentally obtained noise spectral density ⟨X²n⟩ and that of the tone, ⟨X²t⟩ in Eq. (A61), from the data shown in Fig. 2. Blue points in Fig. A5 represent ⟨X²n⟩, calibrated in units of (photon number flux)², at the respective drive powers PD. From these data and the already-known parameters, we deduce the opto-mechanical cooperativity Com ≈ 0.32 × 10^-3. Red points in Fig. A6, on the other hand, represent ⟨X²t⟩, calibrated in units of (photon number flux)², at the respective drive powers PD. From these data, we deduce Com ≈ 0.33 × 10^-3, in good agreement with the former value. We can then estimate Gom from the definition given in Eq. (A43). With the independently measured values of κoT/2π = 1.1 GHz (γo/2π = 1.1 GHz; κo/2π = 43 MHz) and γm/2π = 100 Hz, we obtain Gom/2π = 6.0 kHz. The single-photon opto-mechanical coupling rate g_om is given by Eq. (A63); with the above Gom for an optical input power of PD = 1.2 mW, we have g_om/2π ≈ 55 Hz. When the opto-mechanical coupling stems purely from radiation pressure, g_om is given by [20]

g_om = Ωc·z_zpf/l, (A64)

with l ≈ 18 mm being the cavity length, which results in g_om/2π ≈ 16 Hz. Thus, the opto-mechanical coupling could be partly due to radiation pressure and partly due to the photo-thermal effect [39].

Prospect

There is plenty of room for reducing the added noises with realistic improvements of the parameters. For instance, if the Au layer coating the membrane were replaced by an aluminum layer, the weight would be reduced by a factor of ≈ 7 and the frequency of the mechanical resonance would be much higher. The phase noise of the drive at ωm could then be significantly smaller. Moreover, we could then use a notch filter that prevents the phase noise around ωm from entering the LC circuit. A drive power PD of +30 dBm could then be applied to increase Cem ∝ PD. If the cavity and the LC circuit are both assumed to be overcoupled, i.e., κo ≈ κoT and κi ≈ κiT, the signal-to-noise ratio of Eq. (25) simplifies to Eq. (A65). We can see that as the electro-mechanical cooperativity Cem increases, the contributions of the shot noise, SXX + SYY, and of the mechanical Brownian noise, SFF, to the total noise decrease, and the dominant noise becomes the intrinsic Johnson noise, Sqq, of the LC circuit. In addition, a larger opto-mechanical cooperativity Com helps to minimize the shot-noise contribution further. A major improvement of the signal-to-noise ratio can be achieved by reducing the gap d0 of the membrane capacitor. If d0 is reduced from the current value of ≈ 1.4 µm to 100 nm, the electro-mechanical coupling scales as Gem ∝ 1/d0², so Cem ∝ 1/d0⁴ increases from the current value of ≈ 0.02 to ≈ 700. With a drive power of +30 dBm, Cem can further improve to ≈ 20000. Consequently, the noise quantum numbers of the membrane thermal vibration and of the shot noise in Eq. (A65) are reduced by a factor of ≈ 10⁶. The noise budget of the prospective EMO NMR detection is shown in Table A2, where all the aforementioned improvements are taken into account. The intrinsic Johnson noise aside, such improvements would lead to an effective added noise temperature of the transducer of 6 K, outperforming state-of-the-art low-noise amplifiers.
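The scaling estimates of the preceding paragraph follow from Gem ∝ 1/d0² and Cem ∝ PD alone. The sketch below reproduces the quoted ≈ 700 and ≈ 20000 from the stated starting point (Cem ≈ 0.019 at d0 ≈ 1.4 µm with a +15 dBm drive).

```python
C_em_now = 0.019                    # at d0 ~ 1.4 um with a +15 dBm drive (quoted)
d0_now, d0_new = 1.4e-6, 100e-9     # m, current and prospective capacitor gaps
P_now_dBm, P_new_dBm = 15.0, 30.0   # dBm, current and prospective drive powers

gap_gain = (d0_now / d0_new) ** 4                       # C_em scales as 1/d0**4
power_gain = 10.0 ** ((P_new_dBm - P_now_dBm) / 10.0)   # C_em scales as P_D

print(f"C_em with a 100 nm gap:         ~{C_em_now * gap_gain:.0f}")
print(f"C_em with 100 nm gap, +30 dBm:  ~{C_em_now * gap_gain * power_gain:.0f}")
```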
Glossary of symbols

a, a†: LC annihilation and creation operators
A: capacitor area
b, b†: membrane annihilation and creation operators
B0: magnetic field
C: capacitance
Cem: electro-mechanical cooperativity
Com: opto-mechanical cooperativity
γi: LC dissipation rate
γm: mechanical dissipation rate
γo: optical dissipation rate
δω: membrane frequency shift
Δ: tone offset frequency
ηp: phase noise
κi: LC input coupling constant
κiT: net LC dissipation rate (= κi + γi)
κo: optical output coupling constant
κoT: net optical dissipation rate (= κo + γo)
φ: linearized flux
Φ: flux
Φ0: equilibrium flux
φin: thermal flux fluctuation input
Φin: flux fluctuation input
χc: cavity susceptibility
χLC: LC susceptibility
χm: mechanical susceptibility
Ωc: frequency of light
ω0: unloaded membrane frequency
ωD: LC drive frequency
ΩD: cavity drive frequency
ωLC: resonance frequency of LC circuit
ωm: membrane resonance frequency
ωs: NMR frequency
Use of the spatial phase of a focused laser beam to yield mechanistic information about photo-induced chemical reactions

Two-pathway quantum mechanical interference was used to control the photoionization and photodissociation of a number of polyatomic molecules. The phase lag between different pairs of products obtained from acetone and dimethyl sulfide was altered by translating the focus of the laser beam along an axis normal to the molecular beam axis. This effect was derived quantitatively from the spatial Gouy phase of the laser beam. Details of the chemical reaction mechanisms were deduced from the channel phase lags, obtained when the laser was focused on the axis of the molecular beam, and from the variation of the phase lag produced by axial translation of the laser focus.

Introduction

In conventional photochemical reactions, the phase of the radiation field does not play a role in the reaction mechanism. It has been known for some time, however, that if two coherent light sources are used to initiate competing reaction paths, quantum interference between those paths can alter the yield and the product branching ratio [1]. By varying the relative phase of a bichromatic field, it is possible to modulate the product yield, in analogy to Young's two-slit experiment. The appearance of a phase lag between different pairs of products is the hallmark of coherent control [2]. In typical two-path quantum control experiments, all the products are formed at the same total energy. In such a situation, relatively little dynamical information is learned from the phase lag. In a recent series of papers, however, we showed that if the formation of some of the products requires the absorption of additional photons after the control step, it is possible to extract information from the phase lag about the reaction paths leading to those products [3]-[5]. Consider the case of bichromatic excitation by n photons of frequency ωm and m photons of frequency ωn, such that nωm = mωn. The spatially dependent phase of the ωn electric field is given by

φ_sp,ωn(z) = φn + kn·z - tan⁻¹(z/zR), (1)

where z is the axial distance from the focal point of the laser, zR is its Rayleigh range, φn is the constant part of the phase, and kn is the wavenumber. The third term on the right-hand side, known as the Gouy phase, increases by π as the electromagnetic wave passes through a focus [6]. The overall phase for bichromatic excitation is given by

φ_sp(z) = (mφn - nφm) + (mkn - nkm)z + (n - m)·tan⁻¹(z/zR). (2)

This phase appears in the probability of obtaining product S,

P_S = P_S^m + P_S^n + P_S^mn·cos(φ_sp - δ_S^mn), (3)

where P_S^m and P_S^n are the reaction probabilities for the individual fields, P_S^mn is the amplitude of the cross-term, and δ_S^mn is the molecular phase associated with that channel [5]. The branching ratio may be controlled by varying φ_sp, so that the yield of channel S is maximized when φ_sp = δ_S^mn. Usually φ_sp is varied by adjusting the first term on the right-hand side of equation (2). This is typically accomplished by passing both beams through a medium of variable optical density and taking advantage of the wavelength dependence of the refractive index of the medium. The second term usually vanishes because of conservation of linear momentum. The third term, the Gouy phase shift, also vanishes if the products are formed at the same energy, because n - m is the same for all channels.
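Equation (2) can be evaluated directly for the 3-versus-1 photon scheme used below, with m = 3 photons of the 532 nm field and n = 1 photon of its third harmonic. In the sketch that follows, the Rayleigh range and the constant phase term are assumed, illustrative values; with exact tripling the linear term (mkn - nkm)z vanishes, leaving only the Gouy term (n - m)·tan⁻¹(z/zR).

```python
import numpy as np

m, n = 3, 1                          # 3 photons of omega_n vs 1 photon of omega_m
lam_n = 532e-9                       # m, fundamental wavelength
lam_m = lam_n / 3.0                  # m, tripled field (exact 3:1 ratio assumed)
k_n, k_m = 2 * np.pi / lam_n, 2 * np.pi / lam_m
z_R = 1e-3                           # m, assumed Rayleigh range (illustrative)
phi_const = 0.0                      # assumed constant phase term of equation (2)

def phi_sp(z):
    """Spatial phase of equation (2) at axial distance z from the focus."""
    linear = (m * k_n - n * k_m) * z          # exactly zero for lam_m = lam_n / 3
    gouy = (n - m) * np.arctan(z / z_R)       # -2 * atan(z / z_R) for this scheme
    return phi_const + linear + gouy

for z in (-2e-3, -0.5e-3, 0.0, 0.5e-3, 2e-3):
    print(f"z = {z * 1e3:+.1f} mm: phi_sp = {np.degrees(phi_sp(z)):+.1f} deg")
```

The Gouy contribution is antisymmetric in z and sweeps through 2(n - m) × 90° across the focus, which is the handle exploited in the experiments below.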
As explained below, however, the Gouy phase shift need not cancel out if one of the products requires the absorption of additional photons following the control step. We designate the number of additional photons required by the integer l. Chen and Elliot [7] demonstrated that the modulation of the signal produced by one- and three-photon ionization of mercury atoms undergoes a Gouy phase shift as the probed region passes through the focal point of the laser beams. This effect in itself is insufficient to affect the branching ratio of a chemical reaction because the phase contribution of the l additional photons to the two interfering paths cancels exactly, and the contribution of the (n − m) tan⁻¹(z/z_R) term at any given point in space is equal for all channels. The effect of the additional photons in some particular channel is to confine the products to a smaller volume, so that a spatial average of the product yield may have a channel-dependent contribution from the Gouy phase. This effect averages to zero if the laser is focused onto the axis of the molecular beam, because the Gouy phase is an antisymmetric function of z. If the laser is focused off axis, however, the spatial symmetry of the products is broken, and a spatial average of the yield contains a nonzero Gouy phase contribution. In a recent study [3], we demonstrated a Gouy phase contribution to the branching ratio of the photoionization versus photodissociation of vinyl chloride (CH2CHCl, VCl) in three- versus one-photon excitation at 532 and 177 nm. Phase control of the reaction products for this molecule is of interest because it shows that the high density of states of a multicenter polyatomic molecule need not destroy the coherence effects that were previously observed for simpler molecules. From the Gouy effect we determined that breaking of the C-Cl bond in VCl occurs by fragmentation of the neutral molecule at the 3ω1 level rather than by fragmentation of the VCl+ ion at the 5ω1 level. Moreover, we found that the axial dependence of the phase lag is in quantitative agreement with the calculated value for l = 2, m = 3, and n = 1 without any adjustable parameters. Here we extend our studies to additional polyatomic molecules in order to test our understanding of the effect of the Gouy phase. Acetone (CH3COCH3), like VCl, requires three 532 nm photons to fragment and five photons to ionize (i.e. l = 2). The density of states of acetone is much greater, however, because of the presence of four heavy atoms. Dimethyl sulfide (DMS, CH3SCH3) provides an interesting contrast to both molecules because it requires only one excess photon (l = 1, IP = 8.69 eV [8]) to ionize. This molecule is of interest also because its control was investigated previously by Nagai et al [9] at a lower photon energy, although without consideration of the Gouy effect. Last, we briefly investigate the phase response of a number of other polyatomic compounds to see how general the controllability of complex molecules is. Experimental The experimental setup was described previously [3]. Briefly, a pulsed molecular beam of the neat sample gas was simultaneously excited with 532 and 177 nm radiation. The 532 nm light was the output of a 4 ns, frequency-doubled Nd:YAG laser, whereas the 177 nm was obtained by frequency tripling the 532 nm radiation in a Hg oven. The energy of the 532 nm laser was The molecular and laser beams intersected at 90°.
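The off-axis averaging argument above can be illustrated numerically. The sketch below is a simplified one-dimensional version of the calculation in [4], with a symmetric Lorentzian axial intensity profile instead of the astigmatic focus and with illustrative parameters (lengths in units of z_R, molecular beam FWHM of 0.4 z_R); it shows how confinement by l extra photons, combined with a displacement z_m of the molecular beam, leaves a channel-dependent net Gouy phase:

```python
import numpy as np

# Simplified 1-D sketch of the spatial averaging described in the text;
# not the authors' calculation. A channel needing l extra photons is
# confined near the focus by a weight I(z)**l, so averaging the Gouy phase
# over a molecular beam displaced by z_m gives a channel-dependent result.

def effective_gouy_phase(z_m, l, z_R=1.0, fwhm=0.4, n_minus_m=2):
    z = np.linspace(-10.0, 10.0, 4001)            # axial coordinate in units of z_R
    intensity = 1.0 / (1.0 + (z / z_R) ** 2)      # on-axis intensity of a focused beam
    sigma = fwhm / 2.3548
    mol_beam = np.exp(-0.5 * ((z - z_m) / sigma) ** 2)  # Gaussian molecular beam
    weight = mol_beam * intensity ** l            # l extra photons confine the channel
    gouy = n_minus_m * np.arctan(z / z_R)
    avg = (weight * np.exp(1j * gouy)).sum() / weight.sum()
    return np.angle(avg)

# apparent lag between an l = 2 channel (e.g. parent ion) and an l = 0 channel;
# it vanishes at z_m = 0 (antisymmetric Gouy phase) and grows off axis
for z_m in (-1.0, -0.5, 0.0, 0.5, 1.0):
    lag = np.degrees(effective_gouy_phase(z_m, l=2) - effective_gouy_phase(z_m, l=0))
    print(f"z_m = {z_m:+.1f} z_R: apparent Gouy phase lag ~ {lag:6.1f} deg")
```

At z_m = 0 the weighted average of the antisymmetric Gouy phase vanishes for every channel, reproducing the statement above; the displaced cases give the monotonic trend seen in the measured Gouy profiles.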
The coaxial laser beams were focused onto the target by a pair of UV-coated spherical mirrors, one concave and one convex. The off-axis configuration of the mirrors produces two astigmatic, elliptical foci, one perpendicular to the plane defined by the laser and molecular beam axes and the other in the plane of intersection. All the data reported here used the in-plane focus, where the intensity was approximately 1 × 10¹³ W cm⁻². The mirrors were mounted in a cell containing hydrogen gas, which was used to control the relative phase of the laser beams. The reaction products were detected with a home-built time-of-flight mass spectrometer. The molecular beam had a Gaussian profile with a full width at half maximum of 397 µm at the intersection point. This quantity was determined by deconvoluting the profile measured by scanning one of the focusing mirrors along a direction perpendicular to the molecular beam axis. In the Gouy phase experiment, this scan was repeated ten times to determine the precise location and standard deviation of the peak maximum. This peak location defines the origin of the z-axis used in equation (2) to define the Gouy phase. Next, the ion yield versus H2 pressure in the tuning cell was measured for the fragment ions of interest at different positions of the laser focus, with the axis of the molecular beam located at a distance zm from the focal line. DMS (>99% purity), 1,1-dichloroethylene (99%), iodobenzene (98%) and toluene (>99.5%) were obtained from Sigma-Aldrich Inc., and acetone (Certified ACS Spectranalyzed) was obtained from Fisher Scientific. Results Figure 1 shows the mass spectrum of acetone obtained with just the 532 nm laser. The mass spectrum is dominated by three peaks corresponding to CH3+, CH3CO+ and CH3COCH3+. Strong modulations were observed for all three ions and are shown in figure 2. At zm = 0, a phase lag of δ(CH3COCH3+, CH3CO+) = 6.3 ± 4.0° was measured, with CH3COCH3+ leading CH3CO+. The phase lag between CH3+ and CH3CO+ is 0.2 ± 5.6°. Another experiment was performed with acetone mixed with VCl, and the modulations in CH3CO+, CH3COCH3+ and VCl+ were recorded simultaneously. The phase lag between CH3COCH3+ and VCl+ is not significantly different from zero. The modulation depths for CH3CO+, CH3+, CH3COCH3+ and VCl+ are 14, 25, 35 and 28%, respectively. We note that the acetyl peak is the most intense in the mass spectrum but has the smallest modulation depth. The phase lag between the acetyl and acetone ions obtained at five values of zm (the 'Gouy profile') is plotted in figure 3. Also shown in this figure is the previously published [3] Gouy profile for VCl, which is seen to be very similar to that of acetone. The mass spectrum for DMS is shown in figure 4. The 532 nm photoionization spectrum includes all the peaks observed previously by Nagai et al [9] at 602.5 nm, but much greater fragmentation is evident. The parent ion is just barely visible, and its phase modulation was too weak for quantitative study. Peaks that showed strong modulation are CH3+, SCH3+ and CH3SCH2+, which are produced by scission of the S-C and S-H bonds. Figure 3. The phase lag as a function of the distance of the molecular beam axis from the focal line of the laser. A positive phase lag corresponds to a parent ion signal (VCl+ or CH3COCH3+) leading the fragment (C2H3+ or CH3CO+). The vinyl chloride data are taken from [3]. The solid curve is the numerical calculation of the spatial phase lag using the methods described in [4], taking into account the astigmatic focus of the laser beam and the Gaussian profile of the molecular beam. The modulation curves for these products at zm = 0 are plotted in figure 5, displaying modulation depths of 9, 6 and 6%, respectively. Values of the phase lags measured at different zm are listed in table 1. The variation between runs on different days was much larger than has been observed for other molecules, but the slopes of the phase lag versus zm were reproducible. In the subsequent analysis we use the first line of each data set in the table, which contains five values of zm. These Gouy profiles are shown in figure 6. We also measured the phase modulation between fragmentation channels of 1,1-dichloroethylene (CH2CCl2, DCE) at zm = 0. Figure 7 shows the time-of-flight mass spectrum obtained with just the 532 nm beam. Notably absent in the mass spectrum is the parent ion. Figure 8 shows strong modulation of the C2H2, C2HCl and Cl fragments. No phase lag was observed between any of the products at zm = 0. No modulation was found for trimethyl amine, iodobenzene and toluene. Figure 6. Gouy phase shifts for various pairs of fragment ions measured in the coherent control of dimethyl sulfide. The dashed, solid and dash-dotted lines were calculated for an astigmatic focus using a Gaussian molecular beam profile, with m = 1 and n = 3. The dashed curve corresponds to l = 0 for one channel and l = 3 for the second channel, the solid curve corresponds to l = 0 for one channel and l = 2 for the second channel, and the dash-dotted curve corresponds to l = 2 for one channel and l = 3 for the second channel. A phase lag of 39°, 26° and 6° was added to the dashed, solid and dash-dotted curves, respectively, to bring them into agreement with the channel phase lags measured at zm = 0. Acetone Photodissociation of acetone is a classic Norrish type I reaction resulting from α-cleavage of a carbon-carbon bond. The three lowest-energy pathways are accessed by S1 ← S0 (π* ← n), S2 ← S0 (3s ← n) and S3 ← S0 (4s ← n) transitions, involving non-bonding (n), anti-bonding (π*) and Rydberg (ns) orbitals [10]. The experimental onset of the π* ← n transition occurs at 3.75 eV, with a maximum at 4.38 eV. The threshold shifts down to 3.56 eV when singlet-triplet mixing is taken into account [11]. The vertical energies for transitions to the Rydberg states are 6.35 eV for n = 3 and 8.46 eV for n = 4 [12]. The ionization potential of acetone is 9.70 eV [8]. In the lowest energy (n, π*) channel, acetone predissociates into CH3CO and CH3 radicals. An early study of the S2 channel indicated that at 193 nm the molecule dissociates synchronously into three fragments, CH3 + CO + CH3 [13]. Later studies, however, showed that the reaction proceeds through a stepwise mechanism [10,14]. The acetyl radical forms in about 50 fs and then decomposes into CO + CH3 fragments, with the lifetime of the radical intermediate decreasing with increasing excitation energy. Three distinct mechanisms for the formation of the acetyl cation (CH3CO+) are possible: (i) dissociative ionization of neutral acetone, (ii) photodissociation of acetone ions and (iii) photoionization of neutral acetyl radicals.
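Before examining these pathways, a quick photon-energy bookkeeping shows why three 532 nm photons suffice for dissociation while five are needed for (dissociative) ionization. The snippet below is just arithmetic on the thresholds quoted in the surrounding text (the 10.38 eV dissociative-ionization threshold is cited in the next paragraph):

```python
# Photon-counting check of the acetone energetics quoted in the text.
E_532 = 1239.84 / 532          # photon energy (eV) from E = hc/lambda, ~2.33 eV
E_177 = 3 * E_532              # tripled light, ~6.99 eV

thresholds = {
    "pi* <- n onset": 3.75,            # eV
    "3s Rydberg (vertical)": 6.35,
    "4s Rydberg (vertical)": 8.46,
    "ionization potential": 9.70,
    "dissociative ionization": 10.38,  # CH3CO+ from neutral acetone
}

for n_photons in (1, 3, 5):
    total = n_photons * E_532
    reachable = [name for name, E in thresholds.items() if total >= E]
    print(f"{n_photons} x 532 nm = {total:.2f} eV reaches: {reachable}")
# 3 photons (~6.99 eV) reach the 3s Rydberg dissociation level (= one 177 nm photon);
# 5 photons (~11.66 eV) exceed both the IP and the dissociative-ionization threshold.
```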
The threshold for pathway (i), measured by photoionization with a single VUV photon, was reported to be 10.38 eV [15]. Mechanism (ii) was studied by photofragment imaging using one 118 nm photon to produce the acetone cation and two 355 nm photons to dissociate the ion via a metastable state [16]. The data revealed that acetyl cations and methyl radicals are the primary products. Competition between paths (ii) and (iii) was observed using sub-ps 585 nm radiation [17]. A mechanism that is consistent with the previous experiments and our current results consists of the following elementary reactions: CH3CO* → CH3 + CO (R5) Neutral methyl radicals produced in reactions (R2), (R4) and (R7) may absorb additional photons to yield CH3+ ions. Similarly, metastable CH3CO* produced in reaction (R4) may survive long enough to be photoionized. The strong acetyl peak in figure 1 is expected because of the multiple paths for its formation discussed above. The much lower ionization potential of acetyl [18] versus methyl [19] radicals (7.01 versus 9.84 eV) helps to explain the greater relative intensity of the acetyl peak. The preponderance of acetyl versus methyl ions in the mass spectrum could also be explained by the selectivity of reaction (R7) [17], but we will argue later that this reaction is negligible under our conditions. Our phase modulation data may be understood in terms of this mechanism. Consider first the data at zm = 0. Modulation of the acetyl and parent ion signals results from two-pathway interference in both reactions (R3) and (R6). The zero phase lag between CH3CO and CH3 is consistent with their originating from the same precursor in reaction (R4) (see footnote 4). The very small phase lag between the acetyl and parent ions, if real, could be due to the channel phase that results from coupling of the 3s Rydberg bound state to an antibonding state [20]. The smaller modulation depths of the acetyl and methyl products might be explained by a large two-photon contribution from reactions (R1) and (R2), for which there is no coherently competing path. We turn now to the effect of the Gouy phase. The curve in figure 3 was calculated by averaging equation (3) over the radial and axial coordinates of the laser beam, as described previously [4], taking into account the astigmatic focus of the laser and the Gaussian profile of the molecular beam. The calculation was performed assuming interference between the ω3 and 3ω1 paths in the dissociation channel and between the ω3 + 2ω1 and 5ω1 paths in the ionization channel. Good agreement is obtained between the data and the theory. From our results we infer that acetone dissociates by absorbing either one photon of 177 nm radiation or three photons of 532 nm radiation, to yield the neutral products in reactions (R3)-(R5). The molecule may absorb two additional 532 nm photons to produce the parent ion in (R6). Dissociative ionization (R7) does not appear to play a significant role, because there is no Gouy phase lag between the acetyl ion produced in this channel and the parent ion, contrary to the data in figure 3. DMS The photodissociation dynamics of DMS has been studied extensively in the near UV [21] ([22] and references cited therein). The primary excitation step for wavelengths >190 nm is the promotion of a non-bonding sulfur electron (from 3b1, the highest occupied molecular orbital) to a 4s Rydberg-like orbital (9a1) or a valence-like C-S anti-bonding (6b1) orbital [23,24].
Only the X¹A1 → 1¹B1 transition is dipole allowed. The vibronic coupling, however, mixes the ¹A1 and ¹B1 states, which cross at a conical intersection to become the 1¹A and 2¹A states. Dissociation on the 1¹A surface yields SCH3 + CH3. At the shorter wavelength used in the present study (177 nm, corresponding to ω3), higher-energy Rydberg orbitals, such as the 5s(a1), 4p(b1), 3d(a1) and 3d(b1), are populated [25], and presumably higher-lying conical intersections involving the C-S reaction coordinate provide efficient paths to form the same products. As shown in figure 9, electronically excited SCH3 can also be formed at this energy. Another reaction channel yields the products CH3SCH2 + H [26]. The pathway for this reaction presumably involves a different set of conical intersections along the C-H stretch coordinate. We shall argue later, however, that under our experimental conditions the CH3SCH2+ ion is formed primarily by dissociative ionization. In our experiment, the parent ion is barely visible, whereas the primary fragment ions, CH3SCH2+, SCH3+ and CH3+, are present with comparable intensities (figure 4), and all show strong phase modulation. We consider first the channel phase lags for pairs of these products, measured at zm = 0. The 30° phase lag between SCH3+ and CH3+ proves that these fragments must come from different reaction channels (see footnote 4). We assign the CH3+ signal to the three-photon (3ω1) dissociation of DMS (subsequently ionized by five additional ω1 photons), and SCH3+ to 5ω1 coherent dissociative ionization of DMS. A possible source of the channel phase is the vibronic coupling of the bound and continuum states at the conical intersection that leads to neutral SCH3 and CH3. As we propose in the following paragraph, an elastic dissociative ionization continuum has a channel phase of zero and therefore does not contribute to the phase lag [27]. Figure 9. Schematic energy level diagram of the control mechanism of dimethyl sulfide. Energy levels were taken from [21], [25] and [26]. The 37° channel phase lag between CH3SCH2+ and CH3+ likewise shows that these products come from different pathways, with CH3SCH2+ again the product of coherent dissociative ionization. The small, possibly zero, phase lag between CH3SCH2+ and SCH3+ at zm = 0 is consistent with both products coming from elastic dissociative ionization. More information about the reaction mechanism may be obtained from the Gouy profiles. In particular, we find that the slopes of these profiles support the assignments made in the previous two paragraphs. Figure 6 shows that the slope of δ(CH3SCH2+, CH3+) versus zm is greater than the slope of δ(SCH3+, CH3+), which is in turn greater than the slope of δ(CH3SCH2+, SCH3+). The solid curve, calculated assuming that l = 2 for SCH3+ and l = 0 for CH3+, is in fair agreement with the data and supports the mechanism proposed above. That is, SCH3+ is produced at the five-photon level and CH3 is produced at the three-photon level. The question arises what happens to SCH3 produced at the three-photon level. Nourbakhsh et al [26] reported the secondary dissociation of the neutral SCH3 fragment at 193 nm. The potential energy of the SCH3 radical along the C-S reaction coordinate [16] shows an Ã ²A1 bound state coupled to a B̃ ²A1 repulsive state, which can be accessed by two photons of 532 nm.
We propose that here the neutral SCH3 fragment undergoes a rapid secondary dissociation by absorbing two 532 nm photons, and hence has a smaller contribution to the peak observed in the mass spectrum. The S+ observed in our 532 nm photoionization mass spectrum of DMS (figure 6) supports this argument. From the above discussion we conclude that SCH3+ and CH3+ are produced in different reactions: SCH3+ is produced by five-photon dissociative ionization of DMS and neutral CH3 by three-photon dissociation of DMS. (Although an additional five photons are required to ionize CH3, they do not contribute to the phase lag [3].) The ionization potential of DMS is 8.69 eV [8], which corresponds to four 532 nm photons. Nourbakhsh et al [26] reported an appearance energy of 10.67 eV for SCH3+. This finding supports the assignment of SCH3+ to dissociative ionization. The methyl fragment produced from the dissociative ionization of the parent ion could also contribute to the methyl peak observed in the mass spectrum. The phase lag data indicate, however, that the contribution of neutral methyl fragments from this channel is negligible. That is to say, the relative number density of the methyl fragments produced by dissociative ionization is much smaller than that of the neutral methyl fragments produced at the three-photon level. Let us look now at the CH3SCH2+ product. Figure 6 displays a spatial phase lag between CH3SCH2+ and SCH3+, which indicates that CH3SCH2+ is produced at a higher energy than SCH3+. A possible explanation is that a vertical transition producing CH3SCH2+ by dissociative ionization requires the absorption of an additional ω1 photon. The dash-dotted line in figure 6 corresponds to l = 2 for the SCH3+ channel and l = 3 for the CH3SCH2+ channel. The higher experimental phase lag obtained for CH3SCH2+ and SCH3+ as compared to the calculated dash-dotted line suggests the possibility that some fraction of SCH3+ formed at the three-photon level also contributes to this phase lag. If this explanation is correct, the SCH3+ and CH3+ phase lag profile should lie slightly below the solid curve. Our observations are consistent with the following control mechanism for DMS: The energetics of this mechanism are depicted in figure 9. This scheme is necessarily an oversimplification. It is unlikely, for example, that all of the SCH3 produced in (R9) is dissociated and that none of the CH3 produced in (R10) contributes to the phase lags. Also, it is likely that some parent ions produced by 4ω1, ω3 + ω1 interference are dissociated incoherently by the absorption of one more ω1 photon and contribute to the observed phase lags of the fragments. Nevertheless, this mechanism accounts qualitatively for the observed channel phase lags and for the slopes of the three Gouy profiles. We note that a third pathway, produced by the absorption of two ω3 photons, is possible in (R11), but there is no evidence for it in the sinusoidal modulation curves. We also note that 2ω3 excitation could produce all three products without quantum interference. This incoherent pathway could explain the small modulation depths in figure 5. The present study complements the earlier work of Nagai et al [9], who measured the phase lag between CH3SCH2+ and SCH3+ at fundamental wavelengths between 600.0 and 602.5 nm. A maximum phase lag of 8° was observed at 601.5 nm.
The falloff of the phase lag at longer and shorter wavelengths could be explained by a three-photon resonance embedded in the dissociative continuum of one of the product channels [27]. The CH3SCH2+ signal was attributed to H atom ejection following ω3, 3ω1 excitation, rather than the dissociative ionization mechanism proposed here. The two mechanisms are not necessarily inconsistent, considering that different electronic states of DMS are accessed in the two experiments. It would be interesting to test the mechanism at the longer wavelengths by measuring the Gouy phase lag. Other molecules Of the various other large molecules that we studied, 1,1-dichloroethylene is the only one that showed phase modulation. The photochemistry of this molecule is complex because of its many competing reaction channels, including H and Cl ejection and H2 and HCl elimination [28]. The strong modulation of the three observed fragments demonstrates that two-pathway control is possible even in such complex systems. Gouy phase measurements were not performed, and without such data it is difficult to draw any mechanistic inferences. The other molecules we studied, trimethyl amine, iodobenzene and toluene, did not display any modulation. The absence of modulation in these molecules may be explained by the following analysis. The modulation depth, M, is given by [27] where with an equivalent expression for g2 using D(n), and In these equations, |X⟩ is the ground state and |E,S,k̂−⟩ is the excited state for channel S, energy E and momentum unit vector k̂. The integrals are over the scattering angles of the products, θ and φ. Writing g2 = λ²f2, where λ is a real number, we obtain The Schwarz inequality assures that 0 ≤ M ≤ 1. Equation (7) also shows that |λ| > 0 is a necessary condition for modulation. In other words, the signal produced with both laser beams present must be greater than that obtained with either laser alone, which was the case for all the molecules studied here. (Averaging the signal over the axial coordinate of the laser in the region where the laser and molecular beams overlap reduces M by approximately a factor of two [4]. This effect limits the ability to measure very weak modulations but otherwise is not essential for the present discussion.) Next, we expand f and g in a spherical basis, f = Σ_j,m A_j,m Y_jm(θ, φ) and g = Σ_j,m B_j,m Y_jm(θ, φ), where the Y_jm are spherical harmonics. The magnitude of M depends on the distribution over m. For a uniform distribution over m with λ = 1, 100% modulation is obtained. On the other hand, if, for example, the nonzero A_j,m coefficients are clustered around |m| = j, and the nonzero B_j,m coefficients are clustered around m = 0, there will be little modulation. The distributions over m are determined by the symmetry properties of the intermediate states in the multiphoton dipole transitions, which borrow their intensities from available rotational, vibrational and electronic states [30]. Differential cross-section measurements with just one of the laser fields present could provide these distributions. We expect these distributions to depend on the energies of the intermediate states. We note that Wang et al [31] observed modulation of the parent ion of trimethyl amine produced by interference between 5ω1 and 2ω1 + ω3 pathways at fundamental wavelengths between 601.5 and 602.8 nm, whereas no modulation was detected here at 532 nm (4ω1 versus ω3 + ω1).
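The overlap argument above can be reduced to a coefficient computation. The sketch below is our own illustration of the Schwarz-inequality bound, not the paper's exact expressions: it treats the modulation as the normalized overlap of the A_j,m and B_j,m distributions, using the fact that orthonormality of the spherical harmonics turns the angular integrals into sums over matching coefficients:

```python
import numpy as np

# Illustrative overlap model (an assumption, not the paper's equations (4)-(7)):
# with f = sum A_jm Y_jm and g = sum B_jm Y_jm, orthonormality of the Y_jm gives
# <f, g> = sum conj(A_jm) * B_jm, and Cauchy-Schwarz bounds the normalized
# cross term |<f, g>| / (||f|| * ||g||) by 1.

def overlap_modulation(A, B):
    """A, B: dicts {(j, m): coefficient}. Returns |<f,g>| / (||f|| * ||g||)."""
    cross = sum(np.conj(A[k]) * B[k] for k in A if k in B)
    norm_f = np.sqrt(sum(abs(c) ** 2 for c in A.values()))
    norm_g = np.sqrt(sum(abs(c) ** 2 for c in B.values()))
    return abs(cross) / (norm_f * norm_g)

# aligned m-distributions: maximal modulation
A = {(1, 0): 1.0, (2, 0): 1.0}
B = {(1, 0): 1.0, (2, 0): 1.0}
print(overlap_modulation(A, B))   # 1.0

# A clustered at |m| = j, B clustered at m = 0: no overlap, no modulation
A = {(1, 1): 1.0, (2, 2): 1.0}
B = {(1, 0): 1.0, (2, 0): 1.0}
print(overlap_modulation(A, B))   # 0.0
```

The second case mirrors the clustering scenario described in the text, in which mismatched m-distributions suppress the cross term even though both one-field signals are strong.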
Conclusions Coherent phase control of the photoionization and photodissociation of acetone, DMS, and 1,1-dichloroethylene was obtained, using 532 and 177 nm radiation to initiate two-pathway interference. Strong phase modulation of the products demonstrated that the high densities of states and multiple reaction channels do not reduce the controllability of these complex molecules. The phase lag between different pairs of products from DMS and acetone was measured as a function of the location of the laser focus. Information obtained from the channel phases of the photoproducts and the spatial phase of the laser was used to elucidate the reaction mechanisms. The absence of modulation for other molecules may be a consequence of the symmetry of intermediate states.
7,124.6
2009-10-30T00:00:00.000
[ "Chemistry", "Physics" ]
Protective Molecular Passivation of Black Phosphorus Black phosphorus (BP) is one of the most interesting layered materials, bearing promising potential for emerging electronic and optoelectronic device technologies. The crystalline structure of BP displays in-plane anisotropy in addition to the out-of-plane anisotropy characteristic of layered materials. Therefore, BP supports anisotropic optical and transport responses that can enable unique device architectures. Its thickness-dependent direct bandgap varies in the range of around 0.3-2.0 eV (from single-layer to bulk, respectively), making BP suitable for optoelectronics in a broad spectral range. With high room-temperature mobility, exceeding 1,000 cm²V⁻¹s⁻¹ in thin films, BP is also a very promising material for electronics. However, BP is sensitive to oxygen and humidity due to its three-fold coordinated atoms. The surface electron lone pairs are reactive and can lead to structural degradation upon exposure to air, leading to significant device performance degradation in ambient conditions. Here, we report a viable solution to overcome degradation in few-layer BP by passivating the surface with self-assembled monolayers of octadecyltrichlorosilane (OTS) that provide long-term stability in ambient conditions. Importantly, we show that this treatment does not cause any undesired carrier doping of the bulk channel material, thanks to the emergent hierarchical interface structure. Our approach is compatible with conventional electronic materials processing technologies, thus providing an immediate route toward practical applications in BP devices. The unique and desirable optoelectronic properties of BP 1,3 have motivated much recent work to ameliorate its problematic air and moisture instability. Promising strategies to address this problem include the formation of a protective capping layer or a controlled stable native oxide 15 at the surface. Capping of BP with 2D layered materials such as graphene and hBN 16 has provided stability for a period of 18 days. Alternatively, a 25 nm thick layer of alumina 10 formed by atomic layer deposition was also found to be efficient, especially with the addition of a hydrophobic fluoropolymer layer that improved the stability over several weeks 17 . More recently, organic functionalization of BP with layers of aryl diazonium has been shown to provide effective chemical passivation, although this is accompanied by p-doping of the channel material. 18 Here, we passivate BP with self-assembled monolayers (SAMs) of OTS, thus gaining important advantages in terms of stability, oxidation resistance, and elimination of electronic device degradation. SAMs are known for their effective surface passivation capabilities, particularly in semiconductor nanostructures. 19,20 OTS molecules, comprising an eighteen-carbon chain backbone, a trichlorosilane head group and a methyl (CH3) functional group, can form smooth and uniform SAMs on oxidized substrates; these crystalline-like, close-packed SAMs 21 can substantially reduce oxygen penetration toward the underlying reactive substrate. A scheme of our device structure is shown in Fig. 1a, where a thin native phosphorus oxide (BPO) layer bridges between OTS molecules and a BP crystal that forms the channel of a field-effect transistor (FET).
The source and drain electrodes were defined by electron beam lithography on exfoliated BP flakes, followed by metallization (Ti/Au). After liftoff, devices are cleaned with solvents and then immersed in a hexane solution of OTS (see methods for details) for one hour to obtain the OTS coating on BP. X-ray photoelectron spectroscopy (XPS) characterization, applied on bulk BP samples, clearly differentiates between the OTS-coated and uncoated BP, both exposed to air for the same period of time. Normalized P 2p core level spectra of coated (blue) and uncoated (red) BP are presented in Fig. 1b, from which two pronounced differences are observed. First, in coated samples, the relative intensity of the broad peak around 134 eV, associated with BP oxide (BPO), is significantly smaller than in uncoated samples, indicating a thinner P-oxide layer as compared to bare BP. Because OTS coating of BP was performed in ambient conditions, a very thin layer of BPO still exists at the surface. Second, a pronounced spectral broadening of the low binding-energy line (at ~130 eV) towards higher binding energies is imposed upon the OTS coating. A secondary, smaller broadening of the highly oxidized regime (at ~134 eV) towards lower binding energies is also seen. These spectral signatures of OTS-coated BP are further analyzed in detail and provide a plausible mechanism for the surface reaction, as elaborated upon below. The efficiency of OTS passivation in preventing the typically rapid degradation of uncoated BP is further confirmed by Raman spectroscopy, where the intensity of the A¹g peak of BP (normalized to the intensity of the Si substrate peak at 520 cm⁻¹) provides a measure of the sample's structural stability. In uncoated BP (Fig. 1c) a rapid decrease of the A¹g signal is observed, associated with the loss of long-range order, 12 due to oxidation and subsequent amorphization of the BP. In contrast, the spectra of OTS-coated samples remain essentially unchanged over the same period; notably, in contrast with other molecular layers such as aryl diazonium on BP 18 , they consistently show that BP preserves its original properties under the OTS coating. Fig. 2d further demonstrates that the transconductance (and hence, the hole mobility) is retained during the process of OTS self-assembly. Contrary to the decay of transport with time in bare BP (Fig. 2a), the stability of OTS-coated BP is remarkably persistent, suggesting that a firm corrosion-resistant molecular film is formed on the BP surface. The success of OTS coatings in stabilizing BP and preserving its original electronic properties was studied in further detail by inspecting the SAM quality and its interaction with BP. Based on XPS (see SI for additional details) we conclude that a nearly vertically-aligned, ordered SAM is formed on the BPO (see Fig. 1a). We find a C:Si atomic ratio (extracted after elimination of background signals) of ~24, which is in excellent quantitative agreement with the value expected after correcting for the standard signal attenuation across the (CH2)17CH3 backbone of the molecules. More precisely, the theoretical C:Si ratio for a perfect vertical orientation of the molecules is ≈26.4; hence, on average, slightly tilted molecules (~25° to the surface normal) better fit the experimentally-extracted value. Beneath the OTS layer, a thin (~2 nm) BPO layer is detected, consisting of low O-P stoichiometry, with a significant amount of partially-oxidized P components, as shown in Fig. 3c.
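The tilt estimate from the C:Si ratio above can be reproduced with a simple attenuation model. The sketch below is our own back-of-the-envelope version, not the authors' quantitative analysis; the carbon rise per CH2 along the chain (d ≈ 1.27 Å) and the inelastic mean free path (λ ≈ 34 Å) are assumed values:

```python
import numpy as np

# Hedged attenuation model for the C:Si XPS ratio of an OTS monolayer.
# Si 2p photoelectrons from the head group are attenuated by the full overlayer;
# a carbon k steps up the chain is attenuated only by the chain above it, so the
# ratio reduces to sum_k exp(k * d * cos(tilt) / lam). d and lam are assumptions.

def c_to_si_ratio(tilt_deg, d=1.27, lam=34.0, n_carbons=18):
    k = np.arange(1, n_carbons + 1)
    return np.exp(k * d * np.cos(np.radians(tilt_deg)) / lam).sum()

print(c_to_si_ratio(0.0))    # ~26 for vertical molecules (the paper quotes ~26.4)
print(c_to_si_ratio(25.0))   # ~25, closer to the measured ~24
```

Under these assumed inputs the model reproduces the trend quoted in the text: a modest average tilt lowers the attenuation-corrected C:Si ratio toward the measured value.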
Importantly, the total signal of oxidized P 2p components (including those components of intermediate oxidation that appear in the coated sample only, as in Fig. 3c), normalized to the bulk BP signal, is roughly conserved in the coated samples with respect to uncoated ones. This fact suggests that limited reduction of pre-formed BPO takes place, a process involving removal of oxygen atoms while the number of corresponding phosphorus atoms remains unaffected. In fact, the chemical reaction with OTS involves the release of Cl atoms that are expected to cause mild reduction of the top BPO layer, 21 analogous to surface oxide reduction and corrosion of Al contacts in electronic devices. 22 Remarkably, the amount of oxygen depleted from the BPO layer is comparable to the amount of oxygen required for the Si-Ox interface between BPO and OTS (see the SI for details). We therefore infer that the binding process of OTS is self-supported by the pre-existing BPO layer, which enables the success and robustness of our treatment. Our proposed mechanistic interpretation is supported by multiple independent results. Firstly, based on our electrical data we infer that no doping of the BP bulk takes place, which suggests that the bulk BP is indeed unaffected directly by the OTS. In other words, the thin oxide layer that is formed spontaneously, prior to OTS binding, seems to provide efficient protection against BP doping by any of the applied chemical agents. Also, no apparent traces of Cl atoms were observed on the BP surface, indicating that the hydrolysis reaction of OTS was complete. Secondly, controlled surface charging (CSC) data (Fig. S1) show that BP remains highly conductive. The recorded line-shifts as the electron flood gun (eFG) was switched on and off were about 250 meV for the oxidized P, whereas the BP shifts were as small as 70 meV (see SI for details). As a reference, surrounding regimes of the adhesive tape shifted by ~600 meV under the very same conditions. This analysis suggests that the top BPO layer is slightly reduced during the exposure to OTS, such that the local stoichiometry changes from about PO2.9 to about PO1.5 on average. Finally, computed binding energy shifts indicate (as discussed hereafter) a range of intermediate-oxidation states at the BPO/OTS interface that fit well our measured intermediate P signals in Fig. 3d, signals that are missing from the corresponding P 2p spectrum of uncoated BP. Ab initio density functional theory (DFT) calculations were applied as an independent probe of the OTS binding to BP. The structural models considered consist of a unit cell of BPO in a low oxidation state, 15 with a SAM of OTS in either a polymerized or a non-polymerized configuration (Figs. 3a, 3b). From the DFT calculations, medium binding-energy states (between 130 and 133 eV) were obtained, which is in line with the measured spectrum of the OTS-coated BP (Fig. 3c), where signals from partially oxidized states were resolved, as shown in Fig. 3d and further elaborated in the SI tables. The calculated chemical shifts (Tables S3 and S4) roughly fall into two categories: one set of states that are ~0.3-0.6 eV above the bulk P 2p level and a second set at ~1.8-1.9 eV above the lowest P 2p levels.
While the actual structure of the surface oxide is most likely amorphous and hence significantly more complex, the general agreement between core level energies for the surface-functionalized theoretical model and the XPS data supports the picture of chemical bonding, rather than mere physisorption, between the OTS molecules and the BPO. To conclude, we have demonstrated a simple and effective strategy for efficient, long-term stabilization of BP surfaces against humidity and corrosion using OTS SAMs. This method can be applied to stabilize wafer-scale black phosphorus thin films in the future. In the process of attachment to BP, OTS partially reduces the pre-existing surface oxide, a process quantitatively evaluated by XPS. The stabilization of BP is independently confirmed by transport measurements, Raman spectroscopy and DFT calculations. We have shown that the native oxide layer of BP plays multiple critical roles in the surface functionalization process. Firstly, the BPO layer enables the binding of ordered, close-packed OTS layers by providing the oxygen for the hydrolysis process and presenting a flexible template for assembly. Secondly, the remaining thin oxide layer provides the necessary screening against undesired doping effects associated with inter-diffusion and charge transfer between BP and OTS. Finally, the oxide layer itself is stabilized by OTS and is part of the overall protective capping over the BP channel (see SI for more detail). Overall, this study provides an inexpensive, reliable, and scalable solution to the vexing stability problems of BP, thus paving the way for future technological applications. OTS coating OTS encapsulation was achieved by immersion of devices in 3 ml of dry hexane solution containing 30 μl of OTS for 1 h in a sealed tube to minimize air and moisture exposure. The samples were then washed in cold hexane and soaked for 10 minutes in hot hexane to remove OTS residues. Finally, samples were blow-dried with nitrogen. For coating bulk BP samples (HQ Graphene, 7803-51-2), the same anchoring procedure was performed, following immersion of BP for one minute in 5% acetic acid solution (in acetone) and blow drying with nitrogen prior to soaking in the OTS solution. Raman spectroscopy Raman spectra were collected on a Horiba Labram HR spectrometer with an exciting laser line at 532 nm. Device fabrication BP flakes were mechanically exfoliated on a clean surface of a 90 nm SiO2/Si wafer. Electrodes were patterned by electron beam lithography and metallized with 3/50 nm Ti/Au. XPS measurements XPS measurements were carried out on a Kratos Axis Ultra DLD spectrometer, using a monochromatic Al Kα source at a relatively low power, in the range of 15-75 W. Samples were kept under an inert atmosphere prior to their insertion into the vacuum, such that their exposure to air was limited to less than 1 min. The base pressure in the analysis chamber was kept below 10⁻⁹ Torr. Controlled surface charging (CSC) 23,24 was used in order to differentiate between sample domains, as well as to eliminate signals originating from the underlying adhesive tape (see SI). The CSC data further provide information on the electrical properties of the resolved domains. Complementary measurements were performed on BP flakes deposited on Ti and exposed to OTS at various conditions (not shown).
Curve fitting was done using the Vision software, referring to control measurements on the bare adhesive tape and on samples. Dipole corrections were applied along the vacuum direction of the supercell (normal to the BPO layer). Core-level shifts were calculated in the initial-state approximation. Figure S1 demonstrates the application of CSC (controlled surface charging). 23,24 CSC was used as a means for differentiating between signals of different domains and, to start with, for the elimination of all adhesive tape signals. Second, it was used to exclude differential charging artifacts, thus verifying that the complex P 2p line-shape does indeed reflect various P-oxidation states. The P 2p lines in Fig. S1 were recorded under two markedly different charging conditions and, yet, both samples exhibit only small shifts (much smaller than those encountered at the adhesive tape). Slight differences in shifts characterize the inspected domains: the bulk phosphorus, its oxide and the OTS coating. They are summarized in Table S1. It is also noted that the CSC line-shifts provided a useful cross-check of consistency with the interpretation of atomic concentrations, information that is given in Table S2. Figure S1. XPS P 2p spectra of the OTS-coated BP sample, acquired with (blue dash-dot) and without (red dash) the application of an electron flood gun (eFG) on the sample. Note the limited line shape changes and shifts. Details of XPS analysis Electrically, the CSC data indicate relatively high conductance of the BP bulk, a feature retained in both samples. Values are given in Table S1. The P-oxide signal manifests slightly larger CSC shifts, which reflects the enhanced dielectric character of the very thin overlayer. The shifts of OTS signals in the coated sample, Table S1, are consistent with their nearly vertical organization, as proposed in the paper. Slight deviations can yet be observed, reflecting our limited accuracy in the offline technical work, as well as possible non-uniformities in sample response. It is noted that straightforward comparisons can be made between elements within each sample. On the other hand, comparing shift values of different samples should be considered more carefully, due to possible variations in the eFG-sample alignment. Specifically, one should be critical of the measured P-ox shift in the coated sample, found to be larger than in the uncoated one. With this reservation, which in any case deals with minor effects, the observed results are in line with the expected behavior, due to additional attenuation and, possibly, extra capturing of eFG electrons at the OTS. Testing the subsequent exposure of the samples to air (~24 hours, not shown) resulted in increased oxidation in the uncoated samples, while the coated ones exhibited a significant, though not perfect, oxide-protective feature. Quantitatively, we find it very likely that the initially formed P-oxide serves as the main source of O atoms that participate in the bonding of OTS molecules (via Si-O-P bonds), as well as in the evolution of lateral polymerization at the OTS (via Si-O-Si bonds). Due to signal overlapping, the level of our quantitative analysis was limited; hence, further study is needed to improve our understanding of the OTS-BP interface details. DFT Methodology The initial BPO structure was based on the work of Edmonds et al. 7 Atomic positions and in-plane cell dimensions were relaxed with a force tolerance of 0.02 eV/Å.
All systems had approximately 20 Å of vacuum normal to the BPO layer to prevent spurious interaction between images. For relaxation calculations, the Brillouin zone was sampled with a 4×3×1 Γ-centered k-point mesh for the polymerized OTS case and a 2×3×1 Γ-centered k-point mesh for the non-polymerized case; this leads to approximately the same k-point density in both cases. The core level shifts of the BPO monolayer upon OTS adsorption were calculated using the initial state approximation, and referenced against the vacuum level. As all cases require an appreciable dipole correction, the average of the two vacuum levels was used as the reference for electronic energies. Core level shift calculations for all systems were performed at higher accuracy using a 600 eV kinetic energy cutoff and double the k-point mesh density. Electronic stability of OTS-BP devices Transfer curves of OTS-coated devices acquired over 28 days from fabrication show relatively small deviations. In a stringent comparison, the passivated BP devices prove to even outperform the stability of bare MoS2 FET devices. 8 Oxidation of MoS2 typically requires high temperature and plasma to drive and enhance the reaction. 9,10 The I-V transfer curves of two devices are shown in Fig. S3, establishing the consistency of the results shown in Fig. 2a. Figure S3. I-V transfer curves similar to those of Fig. 2a, acquired from two additional OTS-passivated BP devices. In addition to the transconductance shown in Fig. 2d, the stability in performance of the passivated devices is summarized in the trends of the threshold voltage, as in Fig. S4. Figure S4. A summary of the measured threshold voltage of three BP-OTS devices over 28 days in ambient conditions.
4,095
2016-11-10T00:00:00.000
[ "Materials Science" ]
A glimpse of the ERM proteins In all eukaryotes, the plasma membrane is critically important as it maintains the architectural integrity of the cell. Proper anchorage and interaction between the plasma membrane and the cytoskeleton are critical for normal cellular processes. The ERM (ezrin-radixin-moesin) proteins are a class of highly homologous proteins involved in linking the plasma membrane to the cortical actin cytoskeleton. This review takes a succinct look at the biology of the ERM proteins, including their structure and function. Current reports on the regulation that leads to their activation and deactivation are examined, before taking a look at their different interacting partners. Finally, the emerging roles of each of the ERM family members in cancer are highlighted. The ERM proteins are an evolutionarily conserved group of three related proteins (ezrin, radixin and moesin) that share band Four point one (4.1) as a common origin [1]. They interact with the plasma membrane through a common FERM (Four point one, ERM) domain [2]. The ERM proteins are located in cellular structures such as filopodia, lamellipodia, apical microvilli, ruffling membranes, the cleavage furrow of mitotic cells, retraction fibres, and adhesion sites, where the plasma membrane interacts with F-actin [3]. ERMs are critical for structural stability and for maintaining the integrity of the cell cortex by coupling transmembrane proteins to the actin cytoskeleton [1]. These proteins also play very pivotal intracellular scaffolding roles that aid signal transduction between the intracellular and extracellular compartments of the cell, as well as interacting with other membrane phospholipids [4]. Thus, ERMs are involved in regulating several cellular processes including reorganization of the actin cytoskeleton, cell survival, membrane dynamics, cell migration, adhesion and regulation of membrane protrusion [4,5]. Structure of ERM proteins Structurally, at the amino terminus of ERM proteins is a FERM domain of approximately 296 amino acids (residues 1-296), also known as the N-terminal ERM association domain (N-ERMAD), through which they interact with cell membranes. X-ray crystallography revealed that the FERM domain consists of F1, F2 and F3, also respectively referred to as A, B and C subdomains, which fold and join together to form a cloverleaf structure; these subdomains are homologous to ubiquitin, acyl-CoA binding protein and pleckstrin homology domains, respectively [4,5] (Fig. 1). The FERM region is closely flanked by a central (approximately 200 amino acid) α-helical domain that forms coiled coils [4] and mediates interaction with protein kinase A (PKA) [6]. The C-terminal tail consists of 107 residues, and this terminus contains the F-actin binding site through which ERMs interact with the actin cytoskeleton [7]. In all ERM family members, distinct domains within the N-terminal head and C-terminal tail, known as the N- and C-ezrin-radixin-moesin association domains (N-ERMAD and C-ERMAD, respectively), mediate homotypic and heterotypic head-to-tail interaction [4,8,9]. The N-ERMAD is distinct from the C-ERMAD in that it is a labile domain that is inactivated by chemical agents such as sodium dodecyl sulfate (SDS) treatment, and its activity is negatively affected by freeze-thawing, whereas the C-ERMAD is unaffected by chemical treatment [10].
ERMs exist in a dormant, inactive closed conformation within the cytosol, in which the C-ERMAD stretches from the F-actin binding site through F2 and F3 to part of the FERM region, thereby concealing both the F-actin and the membrane binding sites from other binding partners [7,10,11]. This covering of the FERM domain by the C-ERMAD is bolstered by the central α-helical domain, in that it binds the FERM domain to facilitate masking of both domains [4]. Activation of ERMs requires opening up the binding sites in the FERM domain and the F-actin binding sites in the C-terminal domain. This is achieved by phosphatidylinositol 4,5-bisphosphate (PIP2)-mediated uncoupling of the C-terminal domain from the FERM domain [1]. Regulation of ERM proteins ERM protein function is regulated by a two-step process of open (active) and closed (inactive) conformations [12]. They are mainly regulated through conformational changes induced by phospholipids and kinase-mediated phosphorylation, and this results in activation of the proteins [13]. Recruitment of ERMs to areas of the plasma membrane containing a high amount of phosphoinositides such as PIP2 exposes a conserved regulatory threonine phosphorylation residue (T567, T564 and T558 in ezrin, radixin and moesin, respectively) located in the C-ERMAD domain [14], and this induces a successive activation mechanism whereby PIP2 first binds to a subdomain in the N-terminal FERM domain, followed by plasma membrane translocation and phosphorylation of the threonine residues [15]. There are three lysine-rich consensus sites known to bind phosphoinositides on the FERM domain of ERM proteins, and mutation of any of these sites inhibited PIP2-ERM interaction and translocation to the plasma membrane [16]. PIP2-mediated recruitment of ERMs to the plasma membrane is sufficient not only for the phosphorylation of these proteins by other kinases, but also for the formation of microvilli [17]. Binding of ERM proteins to the cytoskeleton is in many cases strengthened by phosphorylation of the proteins [25]. Activation of the small Rho GTPase RhoA, but not Rac or Cdc42, was able to induce phosphorylation of both radixin and moesin, and this paralleled the formation of membrane protrusions in Swiss 3T3 cells [26]. Also, upon activation of platelets with thrombin, the phosphorylation status of moesin on threonine 558 (a residue also phosphorylated by PKCθ) was enhanced, and this bolstered the interaction of moesin with the cytoskeleton; moesin was found localized at the spreading filopodia [27]. Phosphorylation of radixin on threonine 564 in the C-terminal half by Rho-kinase had no effect on the ability of the C-ERMAD to bind F-actin, but attenuated the ability of the C-ERMAD to bind the N-ERMAD [28], suggesting that the activated state of ERM proteins, during which the intramolecular interaction between the N- and C-terminal domains is inhibited, can be sustained by the phosphorylation of threonine 564 in radixin [25] and threonines 558 and 567 in moesin and ezrin, respectively [29]. ERM proteins can also be phosphorylated by receptor tyrosine kinases. The epidermal growth factor (EGF) receptor can phosphorylate ezrin at tyrosines 145 and 353 (Y145 and Y353) [30]. In epithelial kidney cells, Y353 phosphorylation is required not only for binding of phosphatidylinositol 3-kinase to ezrin, but also for the activation of the Akt signaling pathway [31].
Similarly, stimulation of ezrin-transfected LLC-PK1 cells with hepatocyte growth factor (HGF) resulted in increased phosphorylation of ezrin at the same tyrosine residues, and this not only promoted cell migration, but also enhanced intracellular signal transduction [32]. Ezrin Y145 phosphorylation was demonstrated in Jurkat T-cells expressing Lck (a Src family kinase), but not in Lck-deficient cells [33]. It is now known that ERM proteins are also activated by sphingolipids. For instance, in several cell lines such as A549, HEK, MEF, MCF7 and MDA cells, expression of the bioactive sphingolipid sphingosine-1-phosphate (S1P), both endogenously and exogenously, resulted in phosphorylation and activation of ezrin in a time- and dose-dependent manner [34,35], and in HeLa cells, S1P-mediated phosphorylation was found to be through S1P receptor 2 (S1PR2). This was required for filopodia formation [34]. In a PKC-dependent manner, S1P stimulation of pulmonary endothelial cells resulted in activation of ezrin and moesin, but not radixin [36,37]. However, contrary to its known functions, via unclear mechanisms, S1P phosphorylation of ezrin resulted in inhibition of cell invasion, and this could be attributed to the ability of S1P to act on different receptors [35]. In HeLa cells, generation of plasma membrane ceramide through breakdown of sphingomyelin by the action of sphingomyelinase resulted in dephosphorylation of ERM proteins, while decreasing plasma membrane levels of the sphingolipid resulted in ERM protein hyperphosphorylation [12]. Regulation of ERM proteins is also brought about by dephosphorylation and inactivation of the proteins through the activities of phosphatases, and via PIP2 hydrolysis. In HeLa cells, ceramide drives ERM dephosphorylation through activation of protein phosphatase 1α (PP1α), with the resultant effect of inactivating ERM and subsequent dissociation from the plasma membrane [38]. Similarly, overexpression of the small protein tyrosine phosphatase, phosphatase of regenerating liver-3 (PRL-3), in the HCT116 colon cancer cell line resulted in dephosphorylation of ezrin [39]. Moesin can be downregulated by myosin light chain phosphatase through dephosphorylation of threonine 558 [40]. Although in phorbol 12-myristate 13-acetate (PMA)-stimulated leucocytes ezrin is inactivated through calpain-mediated cleavage, moesin and radixin are insensitive to cleavage by calpain [41], suggesting that distinct regulatory mechanisms exist for each protein in the same cell. Interaction of ERMs with other proteins There are several proteins within the plasma membrane that interact with activated ERM proteins through the FERM domain. In a manner dependent on PIP2, ERM proteins can associate with the cytoplasmic tails of the intercellular adhesion molecules -1 and -3 (ICAM-1 and -3) [42] and -2 (ICAM-2), as well as the hyaluronan receptor CD44 and CD43 [43]. ERMs are also known to bind PDZ (postsynaptic density protein)-containing proteins such as transporters and ion channels through other anchoring proteins like NHERF1 (Na+/H+ exchanger regulatory factor), also known as ERM-binding phosphoprotein 50 (EBP50), and NHERF2 [44,45]. They also interact with membrane glycoproteins such as P-selectin glycoprotein ligand-1, which tethers white blood cells to injured tissues [42]. The α-helical domain in the central portion of ERMs can also bind subunits of the HOPS (homotypic fusion and protein sorting) complex as well as the regulatory RII subunits of protein kinase A [46].
Binding of ERM proteins to PKA tethers it to downstream targets to effect cAMP-mediated biological processes such as cell differentiation, proliferation, metabolism, apoptosis, exocytosis, T cell and B cell activation, and muscle contraction [6]. In COS-1 cells, ezrin was shown to bind and link syndecan-2 to the cortical cytoskeleton [47]. The death receptor Fas/CD95, unlike other membrane proteins that bind all ERM proteins, did not bind to moesin but only to ezrin in T lymphocytes [48]. ERM proteins and cancer Cancer cell migration is a coordinated process involving different steps that bring about loss of cell-cell adhesion and deregulation of cell-matrix interaction. Several reports have outlined different factors, such as localization of ERMs within the cell, their level of phosphorylation as well as their expression profile, to be responsible for ERM protein-mediated promotion of tumourigenesis [2]. Below is a brief discussion of each of the ERM proteins, with the aim of highlighting their significance in different human cancers. Ezrin Abnormal localization of ERM proteins is a leading factor resulting in aberrant intracellular signal transduction triggered by growth factors. For instance, in breast carcinoma, ezrin, which is normally situated at apical structures in normal cells, was found translocated to the cytoplasm and plasma membrane, and this aberrant localization resulted in the acquisition of an epithelial-mesenchymal transition (EMT) in which cells lose their normal differentiated, planar and apical-based polarity and anchorage-dependent architecture and instead acquire a metastatic phenotype that correlated with poor prognosis [49]. In epithelial cells, interaction of ezrin with Fes kinase causes recruitment and activation of the latter at the cell membrane, where it facilitates HGF-mediated loss of cell-cell and cell-ECM contacts resulting in cell migration, as revealed by a wound healing assay [32,50]. In this interaction, ezrin not only localized to the leading edge of migrating epithelial cells [50], but also promoted the formation of membrane protrusions [2]. Similarly, upon phosphorylation of the ERM proteins by PKCα, ERMs can act as downstream effectors of PKC to mediate cell migration when the latter is stimulated with phorbol ester [51]. PKC activation by phorbol ester also caused a switch in the phosphorylation site of the transmembrane receptor CD44 from Ser325 to Ser291, and this phosphorylation regulated the association of ezrin with CD44 to promote directional cell migration triggered by CD44 [52]. Ezrin binds the L1 cell adhesion molecule (L1-CAM) to promote progression of colorectal cancer, in that RNA interference of ezrin activity inhibited tumour metastasis mediated by L1-CAM [53]. In a 3D matrigel matrix, perturbation of ezrin activity with small hairpin RNA technology reduced the metastatic and invasive capabilities of the MDA-MB-231 and MCF10A breast cancer cell lines [54]. Ezrin Y145 phosphorylation in the mouse mammary carcinoma cell line SP1 [55] and in pig kidney epithelial LLC-PK1 cells [56] resulted in cell spreading. Elevated expression of ezrin has been reported in the LTE, BE1, H446 and H460 lung cancer cell lines, and a substantial decrease in migration, proliferation and invasion was observed upon siRNA-mediated knockdown of ezrin [57]. In high-grade prostate cancers, overexpression of ezrin has been reported [58,59], and this was attributed to increased expression of oncogenic c-Myc [60].
Interestingly, ezrin itself, through a feedback loop involving the Akt/PI3K pathway, can regulate c-Myc levels, and this is crucial for cell migration and invasion [60,61]. Ezrin overexpression has been shown in other cancers such as pancreatic carcinoma, rhabdomyosarcoma and osteosarcoma [62][63][64]. Radixin Although much less is known about the role of radixin in cancer than about that of ezrin, radixin has been implicated in prostate cancer progression [65]; and impairment of radixin in a human pancreatic cancer cell line by shRNA not only significantly attenuated cell proliferation, survival, adhesion and invasion, but also enhanced expression levels of the cell-cell adhesion molecule E-cadherin [66]. In a manner dependent on the activity of Vav (a guanine nucleotide exchange factor for Rac1), downregulation of radixin levels resulted in an increase in Rac1 activity [67]. In radixin, phosphorylation of the conserved threonine 564 residue is sufficient to prevent the interaction of the N-terminal FERM domain with the C-terminal F-actin binding domain (C-ERMAD), and this results in constitutive opening of the membrane- and F-actin-binding domains [68]. Indeed, in Madin-Darby canine kidney (MDCK) epithelial cells, phosphorylation of radixin on this site (T564) by the G protein-coupled receptor kinase 2 (GRK2) was able to induce membrane protrusions as well as increased migration of the cells, as determined by wound healing assay [69]. In contrast to the aforementioned positive roles of radixin in tumourigenesis, a novel role in which the protein appears to inhibit metastasis has been reported. According to this report, perturbation of radixin activity in the metastatic prostate cancer cell line PC3 by siRNA technology resulted in markedly increased cell spreading, enhanced cell-cell adhesion and acquisition of an epithelial phenotype [70]. Moesin Expression of moesin has been correlated with increased tumour size and invasive capability, and there was aberrant trafficking of the protein from the plasma membrane to the cytosol in oral squamous cell carcinoma (OSCC) cells in which moesin was knocked down [71]. Similarly, whereas high-grade glioblastoma showed high expression levels of moesin, there was no change in the expression levels of ezrin and radixin [72]. Moesin promoted tumour cell invasion, in that in vitro 3D cell migration assays revealed that moesin-depleted cells exhibited reduced invasiveness [61,73]. Moesin is considered an important promoter of metastasis, as it has been shown to induce EMT in the human mammary cell line MCF10A [74], and there are now emerging reports that moesin is upregulated in different human cancer cell lines [75,76] as well as being a marker of EMT [74,77,78]. In the same vein, high levels of moesin were also found in head and neck squamous cell carcinoma [79]. Whereas both moesin and radixin were found upregulated in lymph node metastases of pancreatic cancer, the level of ezrin expression was unaffected, but its phosphorylation status did change [80].
Conclusion and future perspectives ERM proteins play a vital role in maintaining cellular integrity and in mediating signal transduction from different extracellular inputs through their interactions with receptor tyrosine kinases (RTKs) such as EGFR and HGFR, with adhesion and adaptor proteins such as E-cadherin, ICAM-1, -2, -3, NHERF and CD44, and with other signaling pathways such as PI3K/Akt, cAMP/PKA and the Rho GTPases, all of which have been implicated in tumorigenesis, thus making ERM proteins important targets in the development of novel therapeutics against cancer progression. Although there are ample reports on the involvement of ERMs in cancer, a detailed understanding of the mechanisms of their interactions with other proteins, as well as of their activation, is still lacking and requires further investigation. Competing interests The author declares that there are no competing interests. Authors' contribution GAP conceptualized and wrote the manuscript.
3,675
2016-03-17T00:00:00.000
[ "Biology" ]
Deep learning-based lesion subtyping and prediction of clinical outcomes in COVID-19 pneumonia using chest CT The main objective of this work is to develop and evaluate an artificial intelligence system based on deep learning capable of automatically identifying, quantifying, and characterizing COVID-19 pneumonia patterns in order to assess disease severity and predict clinical outcomes, and to compare the prediction performance with respect to human reader severity assessment and whole lung radiomics. We propose a deep learning-based scheme to automatically segment the different lesion subtypes in nonenhanced CT scans. The automatic lesion quantification was used to predict clinical outcomes. The proposed technique has been independently tested in a multicentric cohort of 103 patients, retrospectively collected between March and July of 2020. Segmentation of lesion subtypes was evaluated using both overlap (Dice) and distance-based (Hausdorff and average surface) metrics, while the proposed system to predict clinically relevant outcomes was assessed using the area under the curve (AUC). Additionally, other metrics including sensitivity, specificity, positive predictive value and negative predictive value were estimated, and 95% confidence intervals were calculated. The agreement between the automatic estimate of parenchymal damage (%) and the radiologists' severity scoring was strong, with a Spearman correlation coefficient (R) of 0.83. The automatic quantification of lesion subtypes was able to predict patient mortality, admission to the Intensive Care Unit (ICU) and the need for mechanical ventilation with an AUC of 0.87, 0.73 and 0.68, respectively. The proposed artificial intelligence system enabled a better prediction of those clinically relevant outcomes when compared to the radiologists' interpretation and to whole lung radiomics. In conclusion, deep learning lesion subtyping in COVID-19 pneumonia from noncontrast chest CT enables quantitative assessment of disease severity and better prediction of clinical outcomes with respect to whole lung radiomics or radiologists' severity score. Numerous studies have been published successfully identifying patients with COVID-19 [9][10][11] or automatically scoring disease severity based on CT findings [12][13][14][15][16]. However, despite significant inroads of artificial intelligence (AI) into the acquisition, segmentation and diagnosis of data in COVID-19 17, little attention has been paid to automatic lesion subtyping. Parenchymal tissue lesions change during the progression of and recovery from COVID-19 pneumonia 18,19. Therefore, the automatic quantification and characterization of lesion subtypes based on deep learning may improve patient classification and management. Recent studies have addressed the characterization of COVID-19 lung lesions using radiomic features. A seminal publication using CT data to predict outcomes focused on radiomic lobar analysis in order to characterize disease severity in a small cohort 20. Other publications have proposed and compared several prediction models based on radiologist scores, CT whole lung radiomics and clinical variables 21. The whole lung radiomic model may be superior to radiologists' assessment in predicting outcomes and disease severity and in improving patient triage. Other authors have compared CT lesion radiomics, clinical and combined models to predict the progress and outcome of COVID-19 8. More recently, Chassagnon et al.
proposed an AI scheme for disease quantification, staging and outcome prediction using an ensemble of architectures 7. The image features included radiomic analysis of the COVID-19-related CT abnormalities as a single class, as well as cardiopulmonary whole-organ features. The image-based biomarkers were combined with clinical and biological variables in a severity and outcome prediction assessment. Scant attention has been paid to the design of severity and prognosis assessment models based on COVID-19 lung lesion subtyping. Only one prior work has proposed a deep learning model which differentiates between several classes 14; other methods proposed postprocessing the resultant complete lesion segmentation through thresholding 22 or the use of texture and intensity features 23. In this context, this work contributes to the definition and comparison of COVID-19 prognosis prediction algorithms based directly on deep learning lesion subtyping from CT images. For this purpose, we specifically designed a deep learning algorithm, based on convolutional neural networks (CNN), for the automatic subtyping of COVID-19-compatible radiological patterns from CT images and the quantification of the extent of pulmonary involvement. Lesion characterization is used to assess disease severity and predict clinical outcomes, including mortality, admission to Intensive Care Units (ICU) and the need for mechanical ventilation. The proposed quantification and prediction models have been evaluated in a completely independent and multicentric cohort of 103 patients and compared to the radiologists' assessment and to a whole lung radiomics-based model. Materials and methods Patient selection. This retrospective and multi-center study included 103 randomly selected, hospitalized patients diagnosed with COVID-19 confirmed by RT-PCR who underwent noncontrast chest CT for the assessment of disease severity at four different Spanish hospitals between March and July of 2020. The institutional review boards (IRBs) of all involved institutions approved this retrospective study ("Comité Ético de Investigación Clínica del Hospital Clínic de Barcelona", "Comité Ético Hospital Universitario La Paz", "Comité de Ética de la Investigación de la Universidad de Navarra", "Comité Ético de Investigación Clínica, Instituto de Investigación Sanitaria - Fundación Jiménez Díaz" and "Comité de Ética de la Universidad Politécnica de Madrid"). Informed consent was obtained from all study participants. All experimental protocols complied with all relevant guidelines and regulations. Only CT acquisitions performed at the time of hospital admission were included in this study. Chest CT examinations were performed using standard-of-care chest CT protocols on 12 different CT models from 5 different manufacturers. All CT exams were reconstructed with a 512 × 512 matrix and a slice thickness varying from 0.5 to 5 mm (mean of 1.5 mm). A deep inspiration breath-hold technique was used whenever feasible. Acquisition details are shown in Supplementary Table S1. Medical records were reviewed at each institution to collect clinical data, including patient sex, age, mortality outcome (deceased or not deceased), ICU admission and length of stay, and the need for mechanical ventilation. All collected data were anonymized according to local guidelines. We assessed chest CT quality to discard scans with significant artefacts. Demographic and clinical data were obtained from digital records.
Participants with missing demographic or clinical data were excluded (Fig. 1). Patient characteristics of the study cohort, used as independent testing for the lesion subtype segmentation, severity assessment, and outcome prediction analysis, are shown in Table 1. Dataset for training the AI model. The dataset used for training the AI model for segmenting COVID-19 lesion subtypes comes from a publicly available database collected and published by the Italian Society of Medical and Interventional Radiology. This database is composed of 100 axial CT slices belonging to 60 COVID-19-confirmed patients, which were manually segmented by an expert radiologist considering two different types of parenchymal injuries: ground glass opacities and consolidation. The images, which were in JPG format, were transformed to Hounsfield Units (HU) by establishing an intensity-transformation function from original values to HU based on the intensity values of air and fat areas. COVID-19 lesion subtype segmentation algorithm. A 2D multiclass CNN-based segmentation architecture was specifically designed for segmenting COVID-19 radiographic subtypes. Our motivation to adopt a 2D architecture was our interest in analyzing independent 2D axial CT slices. This makes the approach invariant to slice thickness and, consequently, robust to the variability of scanning protocols across centers and vendors. Additionally, it allows training the network on annotated 2D images, which is an advantage with respect to manual 3D annotations, as these imply a high burden on experts and are not always available. The network architecture is illustrated in Fig. 2. Each encoder block consists of a convolutional dense block which iteratively concatenates feature maps from preceding layers. Dense connections alleviate the vanishing gradient problem and allow stronger feature propagation throughout the network architecture, thereby preventing the loss of informative features 24. These dense blocks are subsequently down-sampled by an Efficient-Net block which combines the traditional max-pooling operation with strided convolutional operations to avoid representation bottlenecks and loss of information when performing the sub-sampling 25. It has been previously shown that the integration of these blocks into pure 3D and 3D-to-2D segmentation architectures outperforms state-of-the-art CNNs such as the well-known U-Net 26,27. In the decoding pathway of the proposed architecture, transposed convolutional (deconvolutional) features, together with higher-resolution feature maps from the encoder passed through skip-connections, are processed by a convolutional dense block. This decoding pathway ends with a convolutional operation with a kernel size of 1 × 1 producing 4 feature maps corresponding to the background, normal tissue, ground glass opacities and consolidation. The last convolutional operation has a softmax activation function which produces the final probability map. The final label map is then computed by assigning to each pixel the label with the highest probability. For the purpose of training the segmentation algorithm, we designed a class-weighted Dice-based loss function that accounts for the class imbalance caused by differences in the prevalence of the different COVID-19 lesion subtypes (a minimal sketch of such a loss is given below).
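The paper reports that the network was implemented in Keras with TensorFlow but does not publish the loss code, so the following is a minimal sketch of a class-weighted multiclass soft-Dice loss under that assumption. The weight values, the smoothing constant, and all names are illustrative; the inverse number-and-size weighting the authors actually used is described in the next paragraph.

```python
import tensorflow as tf

def weighted_dice_loss(class_weights, smooth=1e-6):
    """Class-weighted soft Dice loss for one-hot targets (a sketch).

    class_weights: one weight per class, e.g. inverse class frequencies
    computed on the training set (hypothetical values below).
    """
    w = tf.constant(class_weights, dtype=tf.float32)

    def loss(y_true, y_pred):
        # y_true, y_pred: (batch, H, W, n_classes); y_pred from softmax.
        axes = (0, 1, 2)                       # sum over batch and space
        intersection = tf.reduce_sum(y_true * y_pred, axis=axes)
        denom = tf.reduce_sum(y_true + y_pred, axis=axes)
        dice_per_class = (2.0 * intersection + smooth) / (denom + smooth)
        # Weighted mean of (1 - Dice) over the 4 classes: background,
        # normal tissue, ground glass opacities, consolidation.
        return tf.reduce_sum(w * (1.0 - dice_per_class)) / tf.reduce_sum(w)

    return loss

# Example: weights inversely proportional to (hypothetical) class frequencies.
freq = tf.constant([0.90, 0.07, 0.02, 0.01])   # illustrative only
weights = (1.0 / freq) / tf.reduce_sum(1.0 / freq)
loss_fn = weighted_dice_loss(weights.numpy().tolist())
```

Such a loss would be passed to `model.compile(loss=loss_fn, ...)`; normalizing the weights keeps the loss on a comparable scale regardless of the weighting scheme chosen.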
We inversely weighted the loss function according to the number and size of the different classes in the training population to avoid frequency biases and to avoid discriminating against underrepresented COVID-19 lesions. For optimizing the loss function, we used the Adam stochastic optimizer with an initial learning rate of 1e-4 and plateau learning rate decay by a factor of 0.2 when the validation loss did not improve for 5 epochs. Training was terminated when the validation loss did not improve for 10 epochs. The model was trained from scratch, and no transfer learning was used to initialize model parameters. The network was implemented in Keras with TensorFlow, using a PC with an NVIDIA Quadro P6000 GPU (24 GB). All CT scans were preprocessed by clipping the intensities outside the range [-1024, 50] HU, and the remaining values were scaled between zero and one. Two different experts (one radiologist and one expert in pulmonary imaging) manually edited the segmentation results obtained by the algorithm on a subset of CT scans using ITK-SNAP software (www.itksnap.org), and several 3D overlap and surface metrics were computed to evaluate the performance of the segmentation algorithm. Additionally, all segmentation results were visually evaluated by experienced radiologists. This visual segmentation evaluation was performed using scores ranging from 0 to 10 according to the degree of under- or over-estimation of the different parenchymal subtypes (score 0: significant under-estimation, score 5: neither under- nor over-estimation, score 10: significant over-estimation). Evaluation of segmentation results. Automatic segmentation of disease subtypes using the AI model was performed in all 103 CT scans. Three chest radiologists with expertise in COVID-19 diagnosis and one expert in pulmonary imaging (MFV, CPM, GGM and MJLC, all with more than 10 years of experience in chest CT) participated in the revision of 3402 axial CT images covering full lung regions in 20 CT scans, including the segmentation results obtained by the algorithm. Two independent readings of this CT subset were performed. The 20 CT scans were randomly selected, making sure that all ranges of disease severity were included. The experts manually supervised and corrected all the segmentation label maps produced by the proposed algorithm. The Dice coefficient, the Hausdorff distance and the average surface distance for each parenchymal subtype were calculated to evaluate the similarity between the manual segmentation (manually corrected annotations) of each CT slice and the automated segmentation generated by the AI model, on the randomly selected set of 20 CT scans. The mean and the standard deviation were calculated. Additionally, to complete the evaluation over the whole testing cohort of 103 cases, the segmentation results obtained by the algorithm were visually evaluated by expert radiologists (MFV, CPM, GGM, GB, MS, MB) using a scoring scale from 0 to 10 to assess, per case, the degree of under- and over-estimation of each parenchymal subtype. Disease quantification. All test CT scans were preprocessed by segmenting and masking the lung regions using a robust and publicly available model for lung parenchyma segmentation 28. Volumetric quantification of each parenchymal pattern, including healthy tissue, ground glass opacities and consolidation pattern areas, was performed in all CT scans after being processed by the COVID-19 segmentation algorithm; a sketch of this quantification step follows.
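The quantification code is not published; the NumPy sketch below shows one plausible implementation of the per-class, per-third percentages described next, assuming labels are encoded 0-3 as in the network output, that the z axis runs cranio-caudally, and that "equivolumetric" thirds are defined by cumulative lung-voxel counts. Function and variable names are illustrative.

```python
import numpy as np

def quantify_lesions(label_map, lung_mask):
    """Relative % of each parenchymal class in three equivolumetric
    thirds of one lung (a sketch, not the authors' code).

    label_map: (z, y, x) int array from the segmentation CNN
               (0 background, 1 normal, 2 ground glass, 3 consolidation).
    lung_mask: (z, y, x) bool array for one lung, from a separate
               lung-segmentation model; z assumed cranio-caudal.
    """
    voxels_per_slice = lung_mask.sum(axis=(1, 2))
    cum = np.cumsum(voxels_per_slice)
    total = cum[-1]
    # Slice indices splitting the lung volume into three ~equal parts.
    cut1 = int(np.searchsorted(cum, total / 3))
    cut2 = int(np.searchsorted(cum, 2 * total / 3))
    bounds = [(0, cut1 + 1), (cut1 + 1, cut2 + 1),
              (cut2 + 1, lung_mask.shape[0])]

    report = {}
    for name, (z0, z1) in zip(("upper", "middle", "lower"), bounds):
        labels = label_map[z0:z1][lung_mask[z0:z1]]   # voxels inside lung
        n = max(labels.size, 1)
        report[name] = {cls: 100.0 * np.count_nonzero(labels == code) / n
                        for cls, code in (("normal", 1),
                                          ("ground_glass", 2),
                                          ("consolidation", 3))}
    return report
```

Summing the ground-glass and consolidation percentages per third then yields the "relative lesion involvement" features used later for prognosis.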
Relative percentage volume affected by each parenchymal pattern was calculated for each lung independently. Additionally, each lung was divided into three equivolumetric regions, and disease quantification was computed in each third. Disease extent was expressed as a percentage. Visual severity scoring of chest CT. One experienced radiologist per hospital, blinded to all patient information (including the three outcomes considered), reviewed each CT scan, recording the extent of COVID-19 lesions (ground glass and consolidation), total lung involvement, and the percentage of each pulmonary lobe (right upper, right middle, right lower, left upper, and left lower) affected. On CT scans, ground-glass opacity is defined as a hazy increased opacity of lung parenchyma with preservation of bronchial and vascular margins, and consolidation is a homogeneous increase in pulmonary parenchymal attenuation that obscures the margins of vessels and airway walls 29. Visual severity scores based on lesion extent ranged from 1 to 5 for each subtype (score 1: < 5%, score 2: 5-25%, score 3: 25-50%, score 4: 50-75%, score 5: > 75%) 30. Additionally, total lung involvement (for each lung independently) was also scored. Patient prognosis. The automatic volumetric lesion quantification performed by the AI algorithm was used to assess patient severity and predict clinical outcomes. Relevant clinical outcomes included patient mortality, ICU admission and the need for mechanical ventilation. A total of 27 features were included in the regression model to predict clinical outcomes. These features included the total percentage of extension of each parenchymal subtype (normal parenchyma, ground glass opacities and consolidation pattern), the relative percentage of each parenchymal subtype for each lung (right, left) and for each lung third (upper, middle, lower), and the relative lesion involvement (combination of both ground glass and consolidation patterns) for each lung third. The performance of our proposal was compared to the ability of the radiologist scores to predict the same clinical outcomes, and to the use of CT whole lung radiomic signatures which have been previously proposed for predicting ICU admission and patient outcome 21. The signature for predicting ICU admission included the run entropy of the GLRLM, zone entropy of the GLSZM, large dependence low gray level emphasis of the GLDM and correlation of the GLCM. The signature for predicting patient outcome was composed of the cluster tendency of the GLCM, long run low gray level emphasis of the GLRLM, busyness of the NGTDM and large dependence low gray level emphasis of the GLDM. We used the ICU admission signature for predicting the need for mechanical ventilation, since no specific signature was proposed for this outcome. Logistic regression models were used to evaluate the ability of extent-based parenchyma subtype features, whole lung radiomic features, and radiologist scores to predict patient outcomes using a five-fold cross-validation strategy. The performance of each model was primarily evaluated using the mean area under the ROC curve (AUC). Other metrics, including sensitivity (SN), specificity (SP), positive predictive value (PPV) and negative predictive value (NPV), were also reported. The Youden index was used to determine the optimal threshold. 95% confidence intervals (CIs) were calculated using the appropriate t-score and the standard deviation estimated with the cross-validation strategy.
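To make this prognosis pipeline concrete, here is a scikit-learn sketch of a five-fold cross-validated logistic regression with fold-wise AUC, a t-based 95% CI, and Youden-index thresholds. The 27-column feature layout and all names are assumptions, and the authors' exact configuration (solver, regularization, fold stratification) is not reported.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import StratifiedKFold

def evaluate_outcome(X, y, n_splits=5, seed=0):
    """Five-fold CV of a logistic-regression outcome model (a sketch).

    X: (n_patients, 27) float array of extent-based lesion features
       (per-class %, per lung and per lung third) -- layout assumed.
    y: (n_patients,) binary outcome (mortality, ICU, ventilation).
    """
    aucs, thresholds = [], []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in cv.split(X, y):
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        p = model.predict_proba(X[test])[:, 1]
        aucs.append(roc_auc_score(y[test], p))
        fpr, tpr, thr = roc_curve(y[test], p)
        thresholds.append(thr[np.argmax(tpr - fpr)])  # Youden J = SN + SP - 1
    # 95% CI from the fold-wise spread via the t distribution (4 dof here).
    m, s = np.mean(aucs), np.std(aucs, ddof=1)
    half = stats.t.ppf(0.975, len(aucs) - 1) * s / np.sqrt(len(aucs))
    return m, (m - half, m + half), thresholds
```

With only nine deaths among 103 patients, each test fold contains very few events, so fold-wise AUCs are necessarily noisy; this is consistent with the wide confidence intervals reported in the Results below.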
Results Evaluation of COVID-19 lesion subtype segmentation. The evaluation metrics of the segmentation results, compared to the manually corrected annotations made by the experts on the subset of 20 CT scans, are reported in Table 2. It should be noted that no significant differences in the performance of the segmentation algorithm were observed among the different institutions. The most common errors identified were the overestimation of ground glass opacities and consolidation lesions in basal zones, caused by motion artifacts as well as confusion with pleural effusion areas. Figure 3 shows the corrections the experts made to the results produced by the algorithm in 3 cases with different degrees of parenchymal involvement. Disease quantification. The proposed method aims to provide radiologists with a valid and practical tool to quantify disease severity. The algorithm automatically generates a report in an easy-to-read PDF format (see Supplementary Figure S1), including the lung involvement by COVID-19 disease subtypes. This solution provides detailed quantification of disease subtypes by reporting the percentage of affected volume for each whole lung and lung zone with respect to the total lung volume. The report also includes a visual glyph representation which sums up the volume metrics, allowing intuitive estimates of the affected areas at diagnosis and during patient follow-up. Figure 4 shows two cases, with moderate and mild lung involvement; the corresponding glyphs are shown on the right. The segmentation algorithm was executed on the entire test cohort, and lung involvement by disease subtype was calculated for all cases. Figure 5a shows the relation between the AI-predicted percentage of each disease subtype and the radiologists' score. Good agreement between them was found, although the visual assessment overestimated disease severity compared to the AI-based prediction. It should be noted that visual assessment of lesion involvement is a subjective procedure, and it has been previously shown that visual readings tend to overestimate the extent of the disease 31. The agreement between the percentage of total affected lung tissue and the total severity score assigned to each patient through visual inspection was also assessed (Fig. 5b). The total visual severity score was calculated as the sum of the scores for each subtype. Visual scoring correlated well with the predicted total percentage of lesion, with a Spearman correlation coefficient (R) of 0.83 (95% CI: 0.755, 0.884). Patient prognosis. Twenty-one patients in our study were admitted to the ICU, thirteen required mechanical ventilation and nine died during hospitalization. Regression models including the features based on the automatic quantification of parenchyma subtypes predicted mortality with an AUC of 0.874 (95% CI: 0.790, 0.959), ICU admission with an AUC of 0.726 (95% CI: 0.582, 0.871), and mechanical ventilation with an AUC of 0.679 (95% CI: 0.496, 0.862) (Table 3). Automatic and objective measurements of lung lesion volumes performed better than radiologist-based visual scoring (Table 3). Simple features such as automatically detected volumes of parenchymal subtypes were also better outcome predictors than more complex CT radiomic features. The results presented in Table 3 show a slight tendency of the proposed method to overestimate the different clinical outcomes, yielding higher PPV values without decreasing the specificity of the system.
We consider this a useful feature in prognostic models for clinical decision guidance, especially in the context of the management of patients with COVID-19, as it is necessary to ensure surveillance and proper management of all potentially severe cases, and it is not desirable to miss any of them. The most prominent features in all three outcome prediction models were the relative percentages of normal parenchyma and of consolidation pattern for each lung third. The percentage of normal tissue had a negative weight in the model, while there was a strong positive relation between the percentage of consolidation pattern and all clinical outcomes. The relative percentages of ground glass opacities also had a positive relation with the outcomes in the prediction models, albeit with a lower impact than that of the consolidation pattern. Figure 5. Disease severity assessment. A: boxplots representing the relationship between the automatic AI-predicted percentage of each lesion subtype and the severity scores visually determined by radiologists. The horizontal line in each box illustrates the median, and the whiskers represent the 5th and 95th percentiles. B: relation between visually and automatically defined CT severity scores considering total lesion involvement. Visual severity scores ranged from 1 to 5 for each subtype (score 1: < 5%, score 2: 5-25%, score 3: 25-50%, score 4: 50-75%, score 5: > 75%). Discussion Artificial intelligence can provide tools capable of estimating COVID-19 disease severity and predicting clinically relevant outcomes such as mortality, ICU admission, and the need for mechanical ventilation. CT findings in the lungs of infected patients are one of the earliest indicators of disease. Therefore, the quantification of each disease subtype may play an important role in the management of COVID-19 patients. We hereby report our findings using an AI system to automatically segment lung lesion subtypes in COVID-19 pneumonia from CT images. Our results demonstrate that AI is a valid tool for the identification and quantification of lesion subtypes in COVID-19 pneumonia (Dice coefficients of 0.985 ± 0.02 for healthy tissue, 0.912 ± 0.15 for ground glass and 0.84 ± 0.25 for consolidation; visual assessment confirmed no relevant under- or overestimation, with mean absolute errors of 0.55 ± 0.64 for healthy tissue, 0.81 ± 0.82 for ground glass and 0.97 ± 1.14 for consolidation), and its results are associated with the visually determined presence of parenchymal injuries and disease severity as assessed by radiologists (Spearman correlation coefficient (R) of 0.83; 95% CI: 0.755, 0.884). Furthermore, the use of simple metrics in prediction models of relevant outcomes outperforms whole lung radiomic models and radiologists' scoring for predicting the aforementioned clinically relevant outcomes, with AUCs of 0.874 (95% CI: 0.790, 0.959), 0.726 (95% CI: 0.582, 0.871) and 0.679 (95% CI: 0.496, 0.862) for predicting mortality, admission to the ICU and the need for mechanical ventilation, respectively. An extended performance analysis using different algorithms to predict these clinical outcomes is presented in Table S3. Parenchymal lung disease subtyping has previously been used to characterize emphysema, interstitial lung abnormalities and interstitial lung disease using deep learning as well as other histogram-based local methods 15,[32][33][34]. Despite the abundance of COVID-19 manuscripts published to date, few studies have focused on lesion subtyping 17.
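As an aside, the severity-agreement statistic quoted above (Spearman R = 0.83, 95% CI 0.755-0.884) can be reproduced from paired per-patient scores with a short sketch. The bootstrap CI below is one common choice, not necessarily the authors' method, and all names are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def spearman_with_ci(visual_scores, ai_percentages, n_boot=2000, seed=0):
    """Spearman correlation with a percentile-bootstrap 95% CI (a sketch).

    visual_scores: radiologists' total severity scores per patient.
    ai_percentages: AI-predicted total lesion extent (%) per patient.
    """
    x = np.asarray(visual_scores, float)
    y = np.asarray(ai_percentages, float)
    rho, _ = spearmanr(x, y)
    rng = np.random.default_rng(seed)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))   # resample patients with replacement
        b, _ = spearmanr(x[idx], y[idx])
        boot.append(b)
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return rho, (lo, hi)
```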
Our results not only demonstrate the efficacy of COVID-19 lesion subtyping using deep learning techniques, but also its potential role in patient stratification and in predicting different outcomes. Previous works have addressed disease severity assessment based on clinical and biological parameters 2,3, imaging studies 4,5 or a combination of both imaging and clinical data 6,7,14. Yue et al. 20 defined a prediction model of hospital stay using lobar radiomics from CT data. Homayounieh et al. 21 proposed and compared several prediction models based on radiologist scores, CT whole lung radiomics, and clinical variables. In our work we compared our deep learning approach with their CT whole lung radiomics model. Lesion subtyping demonstrated superiority in terms of AUC with respect to whole lung radiomics for the three outcomes considered. The results of the whole lung radiomics model in our cohort are comparable to those reported in the original publication. Chassagnon et al. 7 proposed an AI scheme including radiomic lesion features and cardiopulmonary whole-organ features together with clinical and biological variables in their severity and outcome prediction assessment. In comparison to that work, our approach presented a similar performance in terms of AUC with a simpler image processing approach and without including clinical variables. The integration of clinical variables into our models is expected to improve prediction accuracy, as has been previously reported in other works 14,20. Zhang et al. 14 focused their work mostly on differential diagnosis, but they also proposed a critical illness prediction model considering different lesion subtypes, extracted using a 2D deep learning model with seven classes, as well as clinical parameters. Critical illness was defined as the combined outcome of ICU admission, mechanical ventilation or death. As in our work, lesion subtypes were identified as the most relevant variables in their prediction model. The performance in terms of AUC of their model based on lesion features is comparable to the one found in our work. Table 3. Performance analysis of prediction models based on DL-based lesion subtyping, full lung radiomics or radiologist assessment for the three outcomes studied (mortality, ICU admission and need for mechanical ventilation), using five-fold cross-validation in a cohort of 103 subjects with RT-PCR-positive COVID-19 pneumonia. AUC, SN, SP, PPV, NPV and 95% confidence intervals are reported for each model. In that work, they did not compare the performance with respect to a predictor based on radiologist scores, nor did they consider differentiated outcomes. Therefore, in the context of the previous literature, we highlight the following strengths. Our work confirms the importance of lesion subtyping in prognosis analysis, and the proposed prediction model based on deep learning segmentation shows high AUC scores for several clinical outcomes, specifically mortality and admission to the ICU. The completely independent multicentric evaluation with data from 12 different scanner models confirms the robustness of the segmentation model with respect to the variability introduced by the medical devices. The quantitative segmentation evaluation shows a very high Dice coefficient for the segmentation of ground glass opacity regions. Our study had several limitations.
First, regarding the training cohort, only axial slices from 60 patients were available, limiting the exploitation of the three-dimensional nature of the CT datasets. This dataset was selected as it was one of the few resources for COVID-19 lesion subtype labeling during the early days of the pandemic, including ground glass, consolidation pattern and pleural effusion areas. Our system was designed and ready to include the three lesion subtypes. Pleural effusions could not be evaluated given their low prevalence in the training dataset and in the study cohort. To date, this is the only publicly available dataset that we are aware of that includes labeling of pleural effusion, evidencing a clear need for the community to provide CT datasets with accurate and complete labeling of COVID-19 lesion subtypes (see Supplementary Table S2 for an up-to-date list of publicly available datasets). As presented in the Results section, the consideration of the pleural effusion subtype as a separate class in the AI system is important, as its absence may lead to overestimation of consolidation patterns; more importantly, it would enable the quantification and follow-up of pleural effusion, with important implications for patient management. Our deep learning architecture is two-dimensional, as we prioritized that the method be invariant to slice thickness and, consequently, robust to the variability of scanning protocols across centers and vendors. We acknowledge that spatial consistency would improve, and basal segmentation could be better tackled, by using a three-dimensional approach. However, the proposed lesion subtype segmentation of axial slices performed well for whole lung assessment and was also considered an advantage in time-constrained scenarios when only a few slices need to be evaluated. Another limitation is related to the evaluation of the segmentation technique, which was performed quantitatively on a subset of images (20 cases) and visually on the whole testing cohort, following an over- and underestimation score for each lesion subtype. Additionally, scoring was regional as opposed to lobar, and therefore not anatomical. Automatic lobar segmentation is very challenging in diseased lungs, and we preferred to ensure a robust partition to avoid biasing the results. Recent COVID-19 studies have successfully demonstrated accurate lobar segmentation 12, and integration with the presented AI system would certainly be feasible and could be considered in the future. A final limitation concerns the number of cases in the training and testing cohorts. As previously noted, the training database was one of the only databases available at the beginning of the pandemic with subtype labeling. Although the number of slices and patients included could be considered low, it has recently been demonstrated that properly configured semantic segmentation can be trained effectively with low numbers of cases 35, which is consistent with the evaluation results of our proposal in the independent testing cohort from four different hospitals. Similarly, the testing cohort could be considered to have a low number of cases. In this sense, as data collection is a labor-intensive task, we prioritized the multicentric nature of the cohort to ensure sufficient variability (see Supplementary Table S1 for acquisition details) to evaluate the results for both the segmentation and outcome prediction models.
Considering publicly available datasets, Supplementary Table S2 presents an updated list after a careful literature review. Only two additional datasets 14,36 include subtype labeling, in a limited number of cases or slices, that could be considered for extending the training cohort of the presented segmentation algorithm; given the good evaluation results, however, this does not seem necessary. On the other hand, no dataset includes subtyping together with detailed outcome information (ICU stay, need for mechanical ventilation and mortality), hampering the extension of the testing cohort for both the segmentation and outcome prediction tasks. Despite the considerable efforts in data collection and algorithm development during the pandemic, there is still a clear need for complete COVID-19 databases that include CT imaging data, lesion subtype labeling, clinical data covering demographics, symptoms and hemogram-based biomarkers, as well as sufficient follow-up information and differentiated clinical outcomes. Only when such data are available can a robust comparison of diagnostic and prognostic algorithms and methods in large cohorts be performed. The main implication of our study is the demonstration that deep learning techniques can automatically characterize COVID-19 pneumonia lung lesion subtypes to better predict mortality and admission to the ICU and, to a lesser extent, the need for mechanical ventilation. The radiologists participating in this study confirmed the validity of the designed quantification and prediction tools (including the automatic reporting) and expressed an interest in using them in their routine clinical practice during the pandemic. Moreover, further studies using our methodology could enable an objective and quantitative understanding of disease progression and response to therapies, as well as the objective evaluation of drug efficacy in clinical trials. In conclusion, our study demonstrates that an AI system can identify COVID-19 lung lesion subtypes in nonenhanced CT scans with a performance comparable to expert radiologists' scoring. Lesion subtyping enables better stratification and risk assessment of patients based on the prediction of clinically relevant outcomes.
6,889.4
2022-06-07T00:00:00.000
[ "Medicine", "Computer Science" ]
MaPMT Relative Efficiency Measurements for the LHCb RICH Upgrade The Large Hadron Collider beauty experiment (LHCb) at CERN is aimed at the study of flavor physics. The Ring Imaging Cherenkov detector system (RICH), which provides particle identification, has been operating successfully since 2010. During the second Long Shutdown of the LHC in 2019-2020, the RICH detectors will be upgraded to maintain the excellent PID performance with a five-fold increase in instantaneous luminosity. In addition, the detector will be read out at the full LHC bunch-crossing rate of 40 MHz using a flexible software-based trigger. To cope with these changes, the current hybrid photon detectors will be replaced by Hamamatsu R13472 multi-anode photomultipliers (MaPMT) with new external front-end electronics. The new photodetectors and the associated electronics have been subjected to calibration procedures. The high-voltage working point determination, the relative efficiency measurements of MaPMT pixels and their calibration procedure for the RICH detector system are presented. Introduction The Ring Imaging Cherenkov detector system (RICH) upgrade [1] includes the replacement of hybrid photon detectors (HPDs) [2] by new Hamamatsu [3] R13472 multi-anode photomultipliers (MaPMTs). The scheme presented in Fig. 1 was used for MaPMT behavior studies. The setup comprises 16 MaPMTs combined into groups of four called elementary cells (EC), the front-end electronics including the CLARO ASICs [4] used to digitize the signal, and two digital boards. The readout is performed using the prototype for the DAQ architecture of the upgraded LHCb experiment, namely the MiniDAQ2 [5] system. A stable illumination system is provided by a laser. This opto-electronics chain is the first level of modularity of the RICH upgrade detectors that can be integrated with the new LHCb readout architecture working at 40 MHz. One MaPMT consists of 64 pixels, each pixel being coupled to a CLARO channel providing the digitization of the signal. The new photodetectors and the associated electronics have been subjected to calibration procedures that include the implementation of an algorithm to define the optimal working point for the RICH upgrade detectors. Operating voltages are investigated in order to optimize the signal-to-noise ratio and the ageing rate. This is achieved via determination of the average efficiency of the MaPMT channels as a function of the high voltage (HV) applied to the MaPMT. Algorithm Development The photon rate is measured for each programmable threshold of the readout chip to obtain the so-called S-curve (Fig. 2, right), which is the integral of the single-photon spectrum (Fig. 2, left). The single-photon spectrum varies from pixel to pixel due to the different gains, so MaPMT pixels have different S-curve shapes. Additionally, each associated CLARO channel has its own intrinsic offset. Therefore, pedestal positions and working points are different for each pixel of each MaPMT. The first step in the efficiency measurements was to develop an algorithm to define the working point specifically for each pixel. The working point is the threshold at which the spectrum should be cut off in order to reject noise and maximize the photon detection efficiency. This point lies in the valley region. The CLARO allows visualizing the integrated spectrum through threshold scans. In order to zoom into the relevant region, an offset was applied.
To define a working value with high signal detection efficiency and good rejection of spurious counts, the data are fitted with a straight line to identify the sharp increase of the occupancy. This increase is associated with closeness to the pedestal and therefore produces nonlinear behavior. From the single-photon spectrum in Fig. 2 it can be seen that the valley region is almost flat. The S-curve is the normalized integral under that spectrum at each threshold step; therefore, in the valley region the occupancy (the integral under the curve) does not change significantly. Linear fits were performed on smoothed data, starting with a fixed right end and moving the left end of the fit range by one threshold step at a time. The χ2 of the fit was chosen as an indicator of the nonlinearity. The value of χ2 increases closer to the pedestal, and analyzing its evolution with respect to the threshold step shows that the slope of the χ2 curve increases dramatically at some point (Fig. 3). The cutting point was defined as the threshold step at which χ2 exceeds 4; to stay in the safe region, the working point was chosen one threshold step closer to the signal than that cutting point. A code sketch of this procedure is given at the end of this article. Algorithm Validation The necessity of the algorithm is proven by the fact that the working point changes from pixel to pixel, as does the pedestal position. The algorithm was validated using the pedestal identification along with the width between the pedestal and the working value. Distributions of the pedestals and of the width of this range were used for validation at different voltage values. The range should be consistent for all pixels, and therefore its distribution should be narrow. Fig. 4 demonstrates narrow distributions, which confirms the validity of the working point identification algorithm. Occupancy-HV Dependence Using the distribution of differences between pedestals and working points (the valley width), the evolution of the valley width with the applied voltage was studied. The valley width decreases with decreasing voltage, which makes the algorithm ineffective at voltages below 900 V. Therefore, a working point 5 threshold steps from the pedestal was chosen for the 850 V and 800 V runs. The occupancy at the working point is considered the effective occupancy of the MaPMT pixel. Results of runs at the same voltage but at different times were averaged in order to include laser instability effects. The occupancy of each MaPMT was taken as the average over all 64 of its pixels. Occupancies for different voltages were normalized with respect to the value at 1000 V; the resulting plot can be seen in Fig. 5. It shows an almost linear response of the PMT occupancy to the applied voltage. The large error bars arise from averaging 3 runs at each HV taken at different times during the day. The sharp difference between 850 V and 900 V is due to the different method of working point definition and, therefore, of the average occupancy calculation.
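For illustration, the working-point search described under Algorithm Development might be reconstructed as the following Python sketch. This is a hypothetical reading of the procedure, not the collaboration's code: the smoothing kernel, the placement of the fixed right end, and the χ2 proxy (per-threshold uncertainties are not given in the text) are all assumptions.

```python
import numpy as np

def find_working_point(thresholds, occupancy, chi2_cut=4.0):
    """Chi^2-based working-point search on a smoothed S-curve (a sketch).

    thresholds: ascending DAC threshold steps; the pedestal sits at low
                thresholds and the valley/signal at higher ones (assumed).
    occupancy:  normalized integral spectrum (S-curve) per threshold.
    """
    # Light smoothing, as the text says the fits are done on smoothed data.
    kernel = np.ones(3) / 3.0
    occ = np.convolve(occupancy, kernel, mode="same")

    right = len(thresholds) - 1              # fixed right end of the fit
    for left in range(right - 2, -1, -1):    # extend fit one step at a time
        x, y = thresholds[left:right + 1], occ[left:right + 1]
        coeffs = np.polyfit(x, y, 1)         # straight-line fit of the valley
        resid = y - np.polyval(coeffs, x)
        # Crude chi^2 proxy: per-point errors are not specified in the text.
        chi2 = np.sum(resid**2) / max(np.var(y), 1e-12)
        if chi2 > chi2_cut:
            # Nonlinearity -> pedestal region reached; step back one
            # threshold toward the signal to stay in the safe region.
            return thresholds[left + 1]
    return thresholds[0]
```

Running this per pixel yields the per-channel working points whose pedestal-to-working-point widths are what the validation step above expects to be narrowly distributed.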
1,446.2
2019-11-13T00:00:00.000
[ "Physics" ]
Palmitoylation-regulated interactions of the pseudokinase calmodulin kinase-like vesicle-associated with membranes and Arc/Arg3.1 Calmodulin kinase-like vesicle-associated (CaMKv), a pseudokinase belonging to the Ca2+/calmodulin-dependent kinase family, is expressed predominantly in brain and neural tissue. It may function in synaptic strengthening during spatial learning by promoting the stabilization and enrichment of dendritic spines. At present, almost nothing is known regarding CaMKv structure and regulation. In this study we confirm prior proteomic analyses demonstrating that CaMKv is palmitoylated on Cys5. Wild-type CaMKv is enriched on the plasma membrane, but this enrichment is lost upon mutation of Cys5 to Ser. We further show that CaMKv interacts with another regulator of synaptic plasticity, Arc/Arg3.1, and that the interaction between these two proteins is weakened by mutation of the palmitoylated cysteine in CaMKv. Introduction Calmodulin kinase-like vesicle-associated ("CaMKv"), a member of the Ca2+/calmodulin-dependent protein kinase family, is almost exclusively expressed in brain and endocrine tissues (Human Protein Atlas). Because it lacks key residues required in other kinases for ATP binding, as well as the consensus autophosphorylation motif (RXXS/T) required for maintenance of CaMKII activity, CaMKv is believed to be a pseudokinase. Consistent with this view, kinase activity was not detected in in vitro assays (Godbout et al., 1994), and mutation of a key ATP-binding residue did not interfere with its ability to rescue activities that were lost in CaMKv knockdown neurons (Liang et al., 2016). In a report describing CaMKv function in the nervous system, Liang et al. (2016) showed that CaMKv is critical for the maintenance of dendritic spines. They further showed that CaMKv expression is induced by sensory experience in vivo and by synaptic activity in cultured neurons, and that overexpression of CaMKv increases spine density, whereas its depletion reduces spine density and results in impaired spatial memory. Although the mechanisms underlying these synaptic functions remain to be fully elucidated, they are likely to involve regulation of the actin cytoskeleton in dendritic spines, as CaMKv was shown to interact with and inhibit the Rho guanine nucleotide exchange factor (GEF), GEF-H1, thereby suppressing RhoA activation and the RhoA-dependent reduction in spine density (Liang et al., 2016). Despite the evident importance of CaMKv in neuronal function, almost nothing is known about the protein itself. Indeed, since its discovery in 1994 by Sutcliffe's group (Godbout et al., 1994, who named it "1G5"), only three publications focusing on CaMKv have appeared (Won et al., 2006; Liang et al., 2016; Sussman et al., 2020). CaMKv was first identified as a protein associated with neuronal vesicles (Godbout et al., 1994) and was more recently found to partially localize to the plasma membrane (Liang et al., 2016; Sussman et al., 2020). However, our analysis with secondary structure prediction programs, e.g., TMHMM-2.0 (Krogh et al., 2001) and CCTOP (Dobson et al., 2015), failed to reveal a transmembrane domain. Thus, the mechanism that underlies CaMKv binding to membranes is not understood. A relatively large percentage of peripheral membrane proteins in dendrites undergo palmitoylation, the covalent modification of cysteines with 16-carbon palmitoyl chains (Zaręba-Kozioł et al., 2018).
CaMKv was detected in three large-scale screens aimed at identifying palmitoylated proteins (Kang et al., 2008; Wan et al., 2013; Collins et al., 2017). In the present study, we confirm that endogenous CaMKv undergoes palmitoylation in neurons and that recombinant CaMKv undergoes palmitoylation in heterologous cells. We further show that mutation of the palmitoylation site, Cys5, results in the displacement of CaMKv from the plasma membrane. Finally, we report for the first time that CaMKv interacts directly with Arc (Activity-regulated cytoskeleton-associated protein) (Lyford et al., 1995), also known as Arg3.1 (Activity-regulated gene 3.1) (Link et al., 1995), an activity-dependent immediate-early gene product that regulates synaptic plasticity and is required for the formation of long-term memories (Epstein and Finkbeiner, 2018; Zhang and Bramham, 2021). Our data indicate that the CaMKv-Arc interaction is influenced by CaMKv palmitoylation. Materials Mouse CaMKv cDNA (Myc-DDK-tagged at the C-terminus) and mouse monoclonal anti-DDK antibody (TA50011) were from Origene. Fluorescently labeled secondary antibodies for the Infrared Imaging System were from LI-COR. Cloning reagents were from Thermo Scientific. Primers and Lipofectamines were from Invitrogen. Reagents for electrophoresis and immunoblotting were from Bio-Rad. Phosphatase inhibitor cocktail (PhosSTOP) was from Roche. Thiopropyl Sepharose, reagents for analysis of palmitoylation, tissue culture, and other reagents were from Sigma. Generation of mutant and fluorescently-tagged constructs The Arc-mCherry construct was generated as described in Hedde et al. (2022). Fluorescently tagged CaMKv was generated by subcloning into the pEGFP-C1 vector (Clontech) using CaMKv-Myc-DDK as a template. This CaMKv-pEGFP construct was then used as a template to generate the C5S point mutant by site-directed mutagenesis. All DNA constructs were verified by sequencing. Analysis of palmitoylation Palmitoylation was detected using the Acyl-Resin Assisted Capture (Acyl-RAC) method (Forrester et al., 2011), as described in detail in Barylko et al. (2018). Briefly, cells or whole brain were solubilized with 2.5% SDS in 100 mM HEPES (pH 7.5), 1 mM EDTA, 0.2 mM PMSF, protease inhibitor cocktail, and 50 mM dithiothreitol (DTT). Figure 1. Predicted secondary and tertiary structure of calmodulin kinase-like vesicle-associated (CaMKv). (A) Domains of CaMKv. The intrinsically disordered region was predicted using the PONDR (Predictor of Natural Disorder Region) program; the entire sequence from residues 325-351 had PONDR scores ranging from 0.8 to 1.0. (B) Structural alignment (Cα r.m.s.d. = 1.21 Å) of the full-length crystal structure of human CAMK2A (green; PDB: 3SOA) and a predicted model of residues 1-329 of human CaMKv derived from the AlphaFold Protein Structure Database (magenta; entry Q8NCB2). Unstructured residues 330-501 of CaMKv were omitted. (C) Close-up view of the predicted active site of CaMKv with modeled placement of ATP and Mg2+ based on an alignment to a crystal structure of CAMK2A bound to ATP (PDB: 6XBX). Residues located at positions that are important for catalysis in protein kinases are colored blue. Residues labeled in red differ from those typically found in active kinases. Cells were then incubated at 40 °C for 0.5 h (to reduce potential S-S bonds), then for an additional 4 h with methyl methanethiosulfonate (MMTS) to block free thiols. Excess MMTS was removed by protein precipitation and washing with acetone.
Dried pellets were resolubilized in 1% SDS and mixed with thiopropyl-Sepharose resin. Half of the sample was incubated with hydroxylamine (NH2OH) to cleave thioester bonds; the other half was incubated with 2 M NaCl (to control for false positives). Proteins with free thiols (i.e., from cysteines that were originally palmitoylated before NH2OH treatment) are captured on the resin. After extensive washing, proteins released from the resin were analyzed by SDS-PAGE and identified by immunoblotting. GST pull-down assay of the interaction between calmodulin kinase-like vesicle-associated and activity-regulated cytoskeleton-associated protein CaMKv-Myc-DDK was expressed in HeLa cells, and cell lysates were incubated overnight with GST-Arc or GST alone (control) bound to glutathione resin. After low-speed centrifugation (1 min at 500 × g), samples were washed, and proteins were eluted with glutathione and electrophoresed. CaMKv and Arc were detected by immunoblotting with anti-DDK and anti-GST antibodies, respectively. Fluorescence imaging and fluorescence lifetime imaging MCF-10A cells plated on imaging dishes coated with fibronectin were transfected with wild-type and mutant CaMKv-EGFP and Arc-mCherry using Lipofectamine 3000. Cells were imaged at room temperature 20-24 h after transfection, for a maximum duration of 90 min, with a Zeiss LSM880 laser scanning microscope set up for FLIM. EGFP fluorescence was excited at 880 nm (two-photon excitation, 80 MHz) and detected at 510-560 nm with a 40×, NA 1.2 water immersion lens in non-descanned mode using a hybrid photomultiplier detector (HPM-100, Becker & Hickl, Germany) coupled to a FLIMBox (ISS, Champaign, IL, United States). Per data set, 35 frames of 256 × 256 pixels were acquired with a pixel dwell time of 16 µs. Data were analyzed in SimFCS. Before FLIM, the presence of [...]. Other methods HEK293, HeLa, and MCF-10A cells (ATCC) were cultured in DMEM supplemented with 10% fetal bovine serum and antibiotics. They were transfected with either Lipofectamine 2000 (HEK293, HeLa) or Lipofectamine 3000 (MCF-10A) according to the manufacturer's instructions and were used 20-24 h after transfection. Protein concentration was determined using the modified Lowry method according to Peterson (1977) with BSA as a standard. SDS-PAGE was carried out according to Laemmli (1970). For immunoblotting, proteins were transferred to nitrocellulose and immunoblotted with the indicated antibodies. Bound primary antibodies were detected and quantified using fluorescently labeled secondary antibodies in the LI-COR Odyssey system. Results Analysis of the putative active site of calmodulin kinase-like vesicle-associated CaMKv contains an N-terminal kinase domain with 39% identity and 59% similarity to the kinase domain of CaMKIIα (CaMK2a), a central calmodulin-binding domain, and an extended C-terminal intrinsically disordered domain (~residues 330-501) (Figure 1A). Figure 1B shows a structural alignment of CaMK2a (obtained by X-ray crystallography) with residues 1-329 of CaMKv (predicted by AlphaFold; Jumper et al., 2021; Varadi et al., 2022). Residues T7-K300 of CaMK2a align well, both structurally and by sequence, with residues S18-K314 of CaMKv. However, the two proteins diverge in sequence and structure in the remaining C-terminal regions.
The expanded view of the putative CaMKv active site (Figure 1C) shows that His168 replaces the glycine of the Asp-Phe-Gly (DFG) motif, which is highly conserved among kinases and plays a critical role in catalysis. In addition, Asn145 in the catalytic loop is an aspartate in active kinases; we note that mutation of this Asp to Asn is commonly used to abolish kinase activity. Thus, it is unlikely that CaMKv expresses kinase activity. Palmitoylation-dependent recruitment of calmodulin kinase-like vesicle-associated to the plasma membrane Using the Acyl-RAC approach, we confirmed that endogenous CaMKv is palmitoylated in mouse brain (Figure 2A) and that recombinant CaMKv is palmitoylated in HeLa cells (Figure 2B). As expected, endogenous palmitoylated CaMKv was detected in mouse brain membranes, but not in cytosolic fractions (Supplementary Figure 2A). However, only 5-10% of membrane-associated CaMKv was palmitoylated, indicating that this modification is not essential for membrane binding but may instead contribute to specific subcellular targeting. When synaptosomal membranes were centrifuged on a sucrose step gradient, a portion of CaMKv distributed to low buoyant density fractions, commonly termed "lipid rafts", which are enriched in palmitoylated proteins (Levental et al., 2010; Supplementary Figure 2B). To determine whether palmitoylation influences the subcellular distribution of CaMKv, we localized wild-type CaMKv-EGFP and a palmitoylation-deficient CaMKv-EGFP mutant in heterologous (MCF-10) cells (Figure 3). Cysteine 5 (within the N-terminal sequence MPFGCVTLGD, residues 1-10) was identified as the palmitoylation site of mouse CaMKv in a thioacylation screen (Collins et al., 2017). As shown in Figures 3A,B, wild-type CaMKv-EGFP undergoes palmitoylation in HeLa cells, whereas C5S-CaMKv-EGFP does not. In MCF-10 cells, wild-type CaMKv-EGFP shows a diffuse cytoplasmic distribution, but with a pronounced enrichment on the plasma membrane (Figure 3C, top four panels). This plasma membrane enrichment is essentially abrogated in cells expressing C5S-CaMKv-EGFP (Figure 3C, bottom four panels). During our studies, we found that the position of the EGFP tag influenced the distribution of wild-type CaMKv. Whereas CaMKv-EGFP displays pronounced plasma membrane localization in HEK-293 cells, as it does in MCF-10 cells, EGFP-CaMKv is almost entirely cytoplasmic (Supplementary Figure 1), suggesting that an N-terminal tag may suppress palmitoylation of Cys5. Interaction of calmodulin kinase-like vesicle-associated with activity-regulated cytoskeleton-associated protein The finding that Arc binds directly to CaMKII (Donai et al., 2003) prompted us to examine whether it may also bind to CaMKv. Indeed, we found that CaMKv expressed in HeLa cells is pulled down by purified GST-Arc but not by GST alone (Figure 4A). To test whether CaMKv and Arc are likely to interact directly, we turned to measurements of Förster resonance energy transfer (FRET), which occurs if an acceptor fluorophore comes within a few nm of a donor. Energy transfer causes a reduction in both the intensity and the lifetime of the donor; however, lifetime measurements are generally a more robust reporter of FRET in living cells (Padilla-Parra and Tramier, 2012). We expressed wild-type- and C5S-CaMKv-EGFP (the donor), either alone or together with Arc-mCherry (the acceptor), in MCF-10A cells and acquired lifetime images by pulsed 880 nm two-photon excitation.
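As background for the phasor analysis described next (standard FLIM phasor theory, not taken from this article): each pixel's decay I(t) is mapped, at the laser repetition (angular) frequency ω, to a pair of coordinates on a 2D plot.

```latex
% Phasor transform of a pixel's decay I(t), with \omega = 2\pi f_{rep}:
G(\omega) = \frac{\int I(t)\cos(\omega t)\,dt}{\int I(t)\,dt}, \qquad
S(\omega) = \frac{\int I(t)\sin(\omega t)\,dt}{\int I(t)\,dt}
% For a single-exponential decay I(t) = A e^{-t/\tau}:
G = \frac{1}{1+(\omega\tau)^2}, \qquad
S = \frac{\omega\tau}{1+(\omega\tau)^2}
% so that G^2 + S^2 = G: all single-exponential lifetimes lie on the
% "universal semicircle" of radius 1/2 centered at (1/2, 0).
% \tau \to 0 gives (G, S) = (1, 0); FRET shortens \tau and pulls pixels
% toward that zero-lifetime point.
```

This is why, in the data below, unquenched EGFP (single-exponential) sits on the semicircle while FRET-shortened lifetimes produce a tail extending toward (G, S) = (1, 0).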
The fluorescence lifetime of each pixel was calculated and displayed using phasor plots (Malacrida et al., 2021; Figures 4B-E), which are 2D histograms of the pixels of cell images. In cells expressing only CaMKv-EGFP (Figures 4C,E), the center of mass of the pixel distribution falls on the universal semicircle, indicating a single-exponential lifetime decay, as expected for unquenched EGFP. In contrast, for cells that were co-transfected with Arc-mCherry (Figures 4B,D), a comet tail extending from this region toward the zero-lifetime point (S = 0, G = 1) was observed, indicating a reduced donor lifetime and, hence, the presence of FRET. Comparing the number of pixels falling off the universal semicircle when Arc-mCherry is co-expressed with wild-type CaMKv (Figure 4B) vs. the palmitoylation-deficient C5S mutant (Figure 4D), it is evident that mutation of the palmitoylation site reduces the extent of FRET between Arc and CaMKv. Quantification is provided in Figure 4F. Although palmitoylation of CaMKv apparently enhances its localization to the plasma membrane (Figure 3), we note that Arc is predominantly cytoplasmic when expressed in MCF-10A cells and that FRET between Arc-mCherry and CaMKv-EGFP was most evident in the cytoplasm (Supplementary Figure 3).

Discussion

Here we confirm results from proteomic screens showing that CaMKv undergoes palmitoylation in cells and show that this modification is important for its targeting to the plasma membrane. Unlike most other forms of lipidation, palmitoylation is reversible and, hence, may be responsive to changes in neuronal conditions. Indeed, CaMKv was identified as one of 121 proteins that were differentially palmitoylated in the mouse hippocampus in response to context-dependent fear conditioning (Nasseri et al., 2021; preprint). In that study, an increase in palmitoylation of CaMKv was observed, suggesting that it may translocate to the plasma membrane upon neuronal activation. Although only about 5% of CaMKv was palmitoylated in unstimulated brain tissue, this specific pool of acylated CaMKv may play an important role in synaptic function, as was already shown for the relatively small pools of palmitoylated Arc (∼5-10%) (Barylko et al., 2018) and PICK1 (∼1%) (Thomas et al., 2013). We also found that CaMKv interacts with Arc in a palmitoylation-sensitive manner. Arc is a positive regulator of AMPA receptor (AMPAR) endocytosis and, hence, plays a critical role in LTD. However, Arc is also required for late-stage LTP, and its deletion interferes with the formation of spatial, taste, and fear memories. As for CaMKv, Arc's promotion of late-stage LTP has been ascribed to its ability to regulate the actin cytoskeleton within dendritic spines (Bramham, 2008). There are several striking similarities between Arc and CaMKv (Liang et al., 2016). First, translation of both CaMKv and Arc is induced in dendrites in response to synaptic activity. Second, both proteins are required for late-stage LTP and spatial learning. Third, they both localize to the plasma membrane and postsynaptic density. Fourth, they both have been implicated in regulation of the actin cytoskeleton. The functional significance of the CaMKv-Arc interaction remains to be determined. We suggest that palmitoylation of Arc and CaMKv induces their colocalization to the same membrane subdomains, perhaps lipid rafts, where their coordinated activities function to regulate the postsynaptic actin cytoskeleton.
A recent report identified CaMKv as a potential immunotherapeutic target in MYCN-amplified neuroblastoma, owing to its inordinately high expression in these tumors compared to normal human tissues (Sussman et al., 2020). CaMKv was found in both membrane and soluble fractions of neuroblastoma cell lines, and both plasma membrane and cytoplasmic staining were detected. Notably, the study suggested that CaMKv, as a potential transmembrane protein, may be susceptible to therapeutic targeting by anti-CaMKv antibodies. However, in light of the presence of CaMKv in both membrane-bound and soluble pools, the absence of a predicted membrane-spanning motif, and our finding that palmitoylation is a likely plasma membrane targeting mechanism, it will be important to experimentally test whether or not CaMKv is indeed a transmembrane protein.

Figure 4. Interaction of calmodulin kinase-like vesicle-associated (CaMKv) with Arc. (A) CaMKv binding to GST-Arc (see "Materials and methods" section). HeLa cell extract expressing CaMKv-myc-DDK was incubated with purified GST (control) or GST-Arc (10 µM) with glutathione beads. Samples were centrifuged, and pelleted proteins were electrophoresed and blotted with anti-DDK or anti-GST. Lane 1: GST alone (control); Lane 2: GST-Arc. Input represents 20% of CaMKv in the incubation mixture. Similar results were obtained in three separate experiments.

Data availability statement

The original contributions presented in this study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

JA, BB, DJ, and KH planned and designed the study. BB, PH, CT, DB, Y-KH, and GM performed the experiments. All authors participated in writing and editing the manuscript and approved the submitted version.
3,879.4
2022-07-28T00:00:00.000
[ "Biology" ]
Probing Light Mediators and Neutrino Electromagnetic Moments with Atomic Radiative Emission of Neutrino Pairs: We present the novel idea of using the atomic radiative emission of neutrino pairs to test physics beyond the Standard Model, including light vector/scalar mediators and anomalous neutrino electromagnetic moments. With O(eV) momentum transfer, atomic transitions are particularly sensitive to light mediators and can improve their coupling-strength sensitivity by 3-4 orders of magnitude. In particular, the massless photon belongs to this category. The projected sensitivity with respect to neutrino electromagnetic moments is competitive with dark matter experiments. Most importantly, neutrino pair emission provides the possibility of separating the electric and magnetic moments, and even of identifying their individual elements, which is not possible with existing observations.

Introduction

Neutrino oscillations, and hence the fact that neutrinos are massive, have been experimentally established as the first new physics beyond the Standard Model (BSM) of particle physics. However, the mechanism behind neutrino masses, and in particular possible non-standard interactions (NSIs), leaves open various possibilities [1-4]. This is especially true for light mediators. A mediator with mass m and coupling g produces a matrix element |M|² ∼ g⁴/(q² + m²)². For m² ≪ q², the contribution of a light mediator is suppressed by the momentum transfer, |M|² ∼ g⁴/q⁴. To improve the sensitivity, a smaller momentum transfer, q² ≲ m², is therefore better. Laboratory-based experiments typically have momentum transfers at the keV-MeV scale, which makes it difficult to constrain very light mediators. In this work, we present a new possibility: we propose using the atomic radiative emission of neutrino pairs (RENP) [5-8] to look for light mediators. With intrinsically O(eV) energy, RENP provides a suitable environment to significantly improve the sensitivity of searches for light particles.

Our study proceeds in two steps: (1) a general light vector/scalar mediator between the electron and the neutrino [9] and (2) the neutrino electric and magnetic moment interactions with the massless photon [10]. The latter is particularly interesting: energy thresholds can be used to separate the electric from the magnetic moments and to probe their individual elements. With a suitable design, future RENP experiments can greatly improve the sensitivity to BSM mediators and NSI.

Atomic Radiative Emission of Neutrino Pairs

The RENP is an atomic E1×M1 transition from an atomic excited state |e⟩ to a ground state |g⟩ via an intermediate virtual state |v⟩ with energies E_v > E_e > E_g [5-8,11,12]. A neutrino pair is emitted in the transition |e⟩ → |v⟩ and a photon is emitted in |v⟩ → |g⟩.
The whole reaction chain is |e⟩ → |v⟩ + ν_i ν̄_j followed by |v⟩ → |g⟩ + γ. Because E_v > E_e, the transition |e⟩ → |g⟩ is second order in perturbation theory, and the two steps cannot happen separately. In the SM, RENP involves both electromagnetic and weak interactions. The E1-type photon emission |v⟩ → |g⟩ + γ is driven by the electric dipole moment, while the M1-type neutrino pair emission |e⟩ → |v⟩ proceeds through the left-handed neutrino currents ν̄_iL γ^µ ν_jL of the weak Hamiltonian H_W, which contains both vector (v_ij) and axial-vector (a_ij) coefficients built from the PMNS matrix elements U_ei. The M1 transition selects only the axial part of H_W. The atomic spontaneous decay rate Γ ∼ G_F² (E_e − E_g)⁵/(15π³) ≈ 10⁻³⁴ s⁻¹ is extremely small [5]. Enhancing the decay is possible via two quantum-mechanical effects [12]: stimulated photon emission and macroscopically coherent atoms [13]. The enhancement factors are proportional to the photon number density n_γ and to the square of the coherent atom number density, n_a², respectively. The total decay width [14-16] scales linearly with the material volume V and is proportional to a spectral function I(ω), with momentum transfer q² ≡ E_eg² − 2E_eg ω and E_ab ≡ E_a − E_b. The Heaviside Θ function in I(ω) imposes kinematic requirements on the frequency ω: energy-momentum conservation allows the process to occur only if the trigger laser frequency ω is smaller than a frequency threshold ω_ij^max, which is a function of the emitted neutrino masses m_i and m_j.

The last term in the square bracket of Equation (4) is non-zero if neutrinos are Majorana particles (δ_M = 1) and zero otherwise (δ_M = 0). The first term appears in both cases and is additionally a function of the neutrino masses.

Non-Standard Interactions with Light Mediators

The W/Z gauge boson masses are much larger than the momentum transfer, and their propagators shrink to a contact term, 1/(q² − m²_{Z/W}) ≈ −1/m²_{Z/W}. The relative size between the m²_{Z/W} scale and the atomic momentum transfer is ∼10²¹. For NSI with a mediator mass of 1-100 keV, the decay width can therefore be significantly enhanced over the SM one. In other words, RENP is very sensitive to light mediators.

Vector Mediators

The relevant new interactions with a vector-like mediator Z′ are of the schematic form L_Z′ = Z′_µ [ē γ^µ (g_e^V + g_e^A γ₅) e + Σ_ij ν̄_i γ^µ (g_{L,ij} P_L + g_{R,ij} P_R) ν_j]. Because the neutrino pair emission transition (|e⟩ → |v⟩) is of the M1 type, it has even parity and selects the γ^µγ₅ component of the electron coupling. The neutrino interaction may contain both left and right axial currents, which generalizes Equation (2), and the spectral function I(ω) in Equation (4) is modified accordingly.

In Figure 1, we present the I_Z′ spectral function versus the trigger laser frequency ω. The lines have sudden dips around the kinematic thresholds. As expected, even for a tiny coupling |g_e g_{L,ij}| ∼ 10⁻¹⁶ (dashed red), the presence of a Z′ mediator results in a sizable contribution when compared with the SM one (the black curve). We estimate the sensitivity to the combination |g_e g^ν_ij| using the experimental setup proposed in [15]. The target is exposed to the trigger laser beam at three different frequencies, ω_i = (1.0688, 1.0699, 1.0711) eV. The total number of events for each trigger laser frequency is obtained from the total decay width times the exposure time T. For T = 2.3 days, a target volume V = 100 cm³, and n_a = n_γ = 10²¹ cm⁻³, we expect O(20) events at each ω_i. We use a Poisson χ² [17] to compare the expected event numbers with and without new physics. The sensitivity curves are shown in the middle panel of Figure 1. Across almost the whole range from meV to keV, RENP can improve the sensitivity by 2-3 orders of magnitude.
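The threshold structure exploited here follows directly from the q² relation above: pair emission requires q² ≥ (m_i + m_j)², i.e., ω ≤ ω_ij = E_eg/2 − (m_i + m_j)²/(2E_eg). A short Python sketch (the level splitting E_eg and the mass-squared splittings are illustrative values, chosen only to reproduce thresholds near the trigger frequencies quoted above):

```python
import numpy as np

# Thresholds from q^2 = E_eg^2 - 2*E_eg*omega >= (m_i + m_j)^2, giving
# omega_ij = E_eg/2 - (m_i + m_j)^2 / (2*E_eg).
E_eg = 2.1432  # eV; illustrative level splitting (thresholds near 1.07 eV)

# Normal ordering with m_1 = 0.01 eV, as in the figures; the squared-mass
# splittings below are assumed typical best-fit values.
m1 = 0.01
m2 = np.sqrt(m1**2 + 7.4e-5)   # eV
m3 = np.sqrt(m1**2 + 2.5e-3)   # eV
masses = {1: m1, 2: m2, 3: m3}

for i in range(1, 4):
    for j in range(i, 4):
        omega_ij = E_eg / 2 - (masses[i] + masses[j])**2 / (2 * E_eg)
        print(f"omega_{i}{j} = {omega_ij:.5f} eV")
```

Scanning the trigger frequency across these closely spaced thresholds is what allows the individual (i, j) contributions to be switched on and off one at a time.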
Scalar Mediator

A general spin-0 particle φ can couple to the electron via both scalar (y_e^S ēe φ) and pseudo-scalar (y_e^P ēγ₅e φ) interactions. The M1 transition selects only the pseudo-scalar coupling. However, the neutrino side can have both types: L_φ ≡ i y_e^P ēγ₅e φ + ν̄_i (y^S_{ν,ij} + iγ₅ y^P_{ν,ij}) ν_j φ + h.c. Correspondingly, the spectral function acquires a correction term δI_ij(ω). The left panel of Figure 1 also shows the new contribution to the spectral function, I_φ. Notice that the scalar coupling constant has to be as large as |y_e y^ν_ij| ∼ 10⁻⁹, compared with the vector one (|g_e g^ν_{L,ij}| ∼ 10⁻¹⁵), to produce a similar change to the SM curve. This happens because non-relativistic atomic transitions with pseudo-scalar couplings are suppressed by q²/m_e², since ⟨e|γ₅|v⟩ ∼ q · ⟨e|S|v⟩/m_e, where S is the spin operator. Consequently, the sensitivity to |y_e y^ν_ij| in the right panel of Figure 1 is five orders of magnitude less stringent than in the vector case, although it remains much better than existing limits.

Neutrino Electromagnetic Moments

In comparison, RENP does not suffer from any of these limitations. The decay rate for the magnetic (electric) moment is a function of the individual elements (µ_ν)_ij (respectively (ε_ν)_ij). These spectral functions are shown in the left and middle panels of Figure 2. The kinematic thresholds allow each non-zero contribution of (µ_ν)_ij and (ε_ν)_ij to be pinpointed, as well as µ_ν to be separated from ε_ν. To accomplish this, instead of scanning three different frequencies, as in Section 3.1, we need to scan six: ω_i = (1.069, 1.07, 1.0708, 1.0712, 1.0716, 1.07164) eV. In this way we can identify the individual matrix elements, and one additional frequency, ω = 1.068 eV, serves to disentangle µ_ν from ε_ν. The sensitivity in the right panel of Figure 2, shown as a function of the exposure time, is competitive with current experiments and can be further improved with longer exposure times.

Figure 2. The first two panels show the spectral functions for the electric (left) and magnetic (middle) moments; also shown are the projected sensitivities for various elements of the neutrino electromagnetic moments (right). In all cases, we use normal ordering and take the lightest neutrino mass to be m_1 = 0.01 eV.

Conclusions

We have presented the novel idea of using the atomic radiative emission of neutrino pairs to probe neutrino NSI with light mediators. The typical O(eV) atomic transition energy improves current constraints by several orders of magnitude for vector and scalar mediators with masses below the keV scale. With the photon being a massless mediator, the search for neutrino magnetic moments additionally benefits from the eV momentum transfer of RENP. Most importantly, the kinematic thresholds allow different coupling constants to be separated and their individual matrix elements to be identified, which is not possible with current probes.
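As a back-of-the-envelope check of the several-orders-of-magnitude improvement quoted in the conclusions, the following sketch assumes, for simplicity, that the attainable experimental reach in |M|² is comparable between a keV-scale laboratory probe and the eV-scale atomic probe; the momentum-transfer values are illustrative:

```python
import math

# For a very light mediator (m^2 << q^2): |M|^2 ~ g^4/q^4, so at a fixed
# experimental reach in |M|^2 the attainable coupling limit scales as
# g_limit ~ q.  Moving from keV-scale to eV-scale momentum transfer then
# tightens the coupling limit by roughly the ratio of the two scales.
q_lab = 1e4   # eV; ~10 keV, typical laboratory momentum transfer (assumed)
q_renp = 1.0  # eV; atomic momentum transfer of RENP

gain = q_lab / q_renp
print(f"coupling-sensitivity gain ~ 10^{math.log10(gain):.0f}")
```

This crude scaling reproduces the 3-4 orders of magnitude quoted in the abstract; the detailed sensitivity curves in Figure 1 of course depend on the full spectral-function analysis.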
Notes Added

Constraints on light mediators and neutrino electromagnetic properties exist from various sources. A detailed survey of the current bounds can be found in our third paper on RENP [27], which appeared after the submission of this proceeding. While the RENP experiment can achieve bounds similar to those from coherent scattering within a few days, it should take more than a hundred days to obtain bounds comparable to the DM direct-detection measurements [29]. However, the neutrino mass eigenstates cannot be disentangled in either DM or coherent-scattering experiments. Consequently, those bounds always involve a combination of the neutrino mixing matrix elements. In contrast, the RENP process does not suffer from such degeneracies: the detection of the kinematic thresholds, together with scanning of the photon frequency spectrum, allows for the separation of the individual matrix elements of each interaction type. In this sense, the RENP experiment is a unique probe of neutrino interactions and light mediators.

Figure 1. (Left) The spectral function I(ω) of Yb as a function of the trigger laser frequency ω for normal ordering and lightest neutrino mass m_1 = 0.01 eV, in the presence of light vector/scalar mediators; also shown are the projected sensitivities for vector (middle) and scalar (right) mediators.
2,479.4
2023-09-06T00:00:00.000
[ "Physics" ]
Identification, functional prediction, and key lncRNA verification of cold stress-related lncRNAs in rat liver

Cold stimulation reduces the quality of animal products and increases animal mortality, causing huge losses to the livestock industry in cold regions. Long non-coding RNAs (lncRNAs) take part in many biological processes through transcriptional regulation, intracellular material transport, and chromosome remodeling. Although cold stress-related lncRNAs have been reported in plants, no research is available on the characterization and functional analysis of lncRNAs after cold stress in rats. Here, we first built a cold stress animal model. Six SPF male Wistar rats were randomly divided into an acute cold stress group (4 °C, 12 h) and a normal group (24 °C, 12 h). lncRNA libraries were constructed by high-throughput sequencing (HTS) using rat livers. 2,120 new lncRNAs and 273 differentially expressed (DE) lncRNAs were identified under the low-temperature environment. The target genes of the DElncRNAs were predicted in cis and in trans, and functional and pathway analyses were then performed on them. GO and KEGG analyses revealed that the lncRNA targets mainly participated in the regulation of nucleic acid binding, the cold stimulation response, metabolic processes, immune system processes, the PI3K-Akt signaling pathway, and pathways in cancer. Next, an interaction network between the lncRNAs and their targets was constructed. To further reveal the mechanism of cold stress, DElncRNAs and DEmRNAs were extracted to reconstruct a co-expression sub-network, in which we found the key lncRNA MSTRG.80946.2. Functional analysis of the key lncRNA targets showed that they were significantly enriched in fatty acid metabolism, the PI3K-Akt signaling pathway, and pathways in cancer under cold stress. qRT-PCR confirmed the sequencing results. Finally, the hub lncRNA MSTRG.80946.2 was characterized, and its relationship with related mRNAs was verified by antisense oligonucleotide (ASO) interference and qRT-PCR, confirming the accuracy of our analysis. To sum up, our work was the first to perform a detailed characterization and functional analysis of cold stress-related lncRNAs in rat liver. lncRNAs played crucial roles in energy metabolism, growth and development, immunity, and reproductive performance in cold-stressed rats. MSTRG.80946.2 was verified by network analysis and experiments to be a key functional lncRNA under cold stress, regulating ACP1, TSPY1 and Tsn.

In cold regions, animals are prone to cold stress, and their physical development is affected. In pregnant animals, cold stress can result in symptoms such as miscarriage and even infertility. The stress response caused by cold stimulation can damage the nervous, cardiovascular, and immune systems 12,13. However, reports on the function of cold stress-related lncRNAs are rare. Su et al. showed that overexpression of lncRNA TUG1 (taurine up-regulated gene 1) in mice can prevent cold-induced damage 14. lncRNAs associated with cold stress have also been reported recently in cabbage and cassava 15,16. Kidokoro found that soybean CBF/DREB1 (C-repeat binding factor/dehydration response element binding protein 1) regulates gene expression in the cold response process 17. This process activates many defense mechanisms, including molecular chaperones, metabolite biosynthesis enzymes, and so on. Freezing and low-temperature stress can cause plant metabolic disorder and increase the production of various reactive oxygen species.
H2O2 can regulate gene expression under cold stress, affecting signal transduction in wild-type and catalase (ΔkatG)/thioredoxin peroxidase (tpx) cells treated with cold stress 18. Keeping body temperature constant in a cold environment requires heat production and heat conservation. These mechanisms are affected by various neurotransmitters and hormones and are regulated by the nervous system 19. Numerous studies have shown that cold stress affects multiple metabolic and molecular regulatory processes in vivo. PACAP (pituitary adenylate cyclase activating polypeptide) plays a pivotal part in peripheral and central physiological stress responses. Cline found that PACAP is involved in thermostimulated sympathetic signaling and may be a crucial regulator of lipid metabolism 20. Environmental factors such as cold stress may lead to hippocampal apoptosis in mammals in late pregnancy and enhance phosphorylation of P65 at Ser536 in a caspase-3-independent manner 21. In addition, some lncRNAs can regulate cell function through other pathways. For example, Kang et al. found that the energy-stress-induced lncRNA HAND2-AS1 (heart and neural derivatives expressed 2-antisense 1) inhibits HIF-1α (hypoxia-inducible factor-1α)-mediated energy metabolism and inhibits osteosarcoma development 22. However, the regulatory involvement of lncRNAs in cold stress in rat liver remains unclear. As an important heat-producing organ, the liver increases its activity and heat production during acute cold stimulation to maintain the body's normal temperature 23. How lncRNAs play regulatory roles in this process needs further study. Here, we analyzed and identified the characteristics of lncRNAs in the liver of cold-stressed rats, predicted the target genes of the differential lncRNAs in cis and in trans, and explored the roles of lncRNAs in rat liver under cold stress. Our data will help to better understand the mechanisms of lncRNAs in rat liver under cold stress.

Results

Identification of lncRNAs in rat liver. Six rats were randomly assigned to the normal group (L01, L02 and L03) and the stress group (L04, L05 and L06). Total RNA was extracted from the liver samples, and six cDNA libraries were constructed. After quality control of the raw data from each sample, nearly 21.10 Gb of high-quality data remained, accounting for approximately 96.39% of the total. Reliable candidate lncRNAs were then screened from the assembled transcripts using the processing pipeline for high-throughput sequencing data (Fig. 1). Statistics on the data quality and the proportion of raw data are shown in Fig. 2A. The distribution of lncRNAs on each chromosome is shown in Fig. 2B; these lncRNAs are evenly distributed across all chromosomes. It is worth noting that the numbers of lncRNAs on chromosomes 1 and 2 were relatively high, at 513 (8.5%) and 304 (5.04%), respectively. Four different tools, the Coding-Potential Assessment Tool (CPAT), the Coding-Non-Coding Index (CNCI), Pfam-scan and the Coding Potential Calculator (CPC), were used to assess the protein-coding potential of the transcripts (Fig. 2C).

Characterization of cold stress-related lncRNAs. We described the genomic characteristics of the acquired cold stress-related lncRNAs in rat liver. A large number of the lncRNAs contained two exons (Fig. 3A). Consistent with the size distribution pattern of the lncRNA library, lncRNAs of 300-500 nt were the most abundant, with 1,611 lncRNAs in this length range.
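The intersection logic of this four-tool screen can be sketched as follows (a minimal illustration; the file names, column names, and labels are hypothetical, not the study's actual outputs; a transcript is retained only if all four tools deem it non-coding):

```python
import pandas as pd

# Each tool labels transcripts as coding or non-coding; a transcript is kept
# as a candidate lncRNA only if all four tools agree it is non-coding.
def noncoding_ids(path, id_col, label_col, noncoding_label):
    df = pd.read_table(path)
    return set(df.loc[df[label_col] == noncoding_label, id_col])

cpc  = noncoding_ids("cpc_results.txt",  "ID", "label", "noncoding")
cnci = noncoding_ids("cnci_results.txt", "ID", "label", "noncoding")
cpat = noncoding_ids("cpat_results.txt", "ID", "coding", "no")

# For Pfam-scan, transcripts with any protein-domain hit are treated as coding.
pfam_hits = set(pd.read_table("pfam_hits.txt")["ID"])
assembled = set(pd.read_table("assembled_transcripts.txt")["ID"])

candidates = cpc & cnci & cpat & (assembled - pfam_hits)
print(f"{len(candidates)} candidate lncRNAs")
```

The same sets are what a Venn diagram such as Fig. 2C visualizes, with the candidate lncRNAs corresponding to the central intersection.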
There were 723 lncRNAs longer than 3,000 nt (Fig. 3B). Most lncRNAs (85%) contained only a short ORF (open reading frame) of approximately 20-60 amino acids, which is shorter than that of coding RNA (Fig. 3C). As shown in Fig. 3D, 6,025 candidate lncRNAs were captured, including 3,729 lincRNAs (61.9%), 889 antisense lncRNAs (14.8%), 1,129 intronic lncRNAs (18.7%) and 278 sense lncRNAs (4.6%). Of these lncRNAs, 317 (5.2%) were identified as known by BLAST alignment against the rat lncRNAs in the NONCODEv5 database 24, and we obtained a grand total of 451 novel lncRNAs. We assessed the conservation of the rat lncRNAs (Fig. 4A); most lncRNAs scored ≤0.4, indicating poor conservation. Moreover, the overall distribution of lncRNA expression was presented by the FPKM (fragments per kilobase of transcript per million mapped reads) density distribution plot and the FPKM box plot. The expression profiles of the three biological replicates in the cold stress group and in the normal group were relatively close, indicating that the replicate data were reliable (Fig. 4B). The FPKM values of lncRNA expression levels spanned six orders of magnitude, from 10⁻² to 10⁴ (Fig. 4C). In Fig. 4D, we quantified the expression level of the lncRNAs using StringTie. The clustering plots showed significant differences in lncRNA expression between the groups, but the differences within each group were small.

Differential expression of lncRNAs. Expression levels were analyzed using the 'ballgown' R package to screen differentially expressed (DE) lncRNAs between the cold stress group and the normal group. We identified 146 up-regulated lncRNAs (53.5%) and 127 down-regulated lncRNAs (46.5%), a total of 273 significant DElncRNAs. The top 20 significant DElncRNAs are listed in Table 1. The volcano plot in Fig. 5A depicts the approximate distribution of the DElncRNAs. In our study, 2,120 novel lncRNAs were acquired, including 435 antisense lncRNAs, 1,618 lincRNAs and 67 intronic lncRNAs. The targets of the DElncRNAs were predicted based on cis- and trans-acting mechanisms. Furthermore, the targets of 58 lncRNAs had functional annotations. The correlation plot shows that lncRNA expression is highly correlated within each group (Fig. 5B).

Functional and pathway analysis of DElncRNA targets. The DAVID software was used to predict the functions of the DElncRNA targets. GO (Gene Ontology) analysis yielded 19 cellular component (CC) terms, 21 molecular function (MF) terms and 22 biological process (BP) terms. Among them, cell part and membrane part (CC); binding, catalytic activity and signal transduction activity (MF); and biological regulation, metabolic process and response to stimulation (BP) had high percentages of genes (Fig. 6). The GO results highlighted terms such as regulation of nucleic acid binding, cold stimulation response, regulation of cytokines, regulation of protein complex stability, regulation of protein ubiquitination, metabolic processes and multicellular biological processes. To clarify the specific signaling pathways affected by the targets, a KEGG (Kyoto Encyclopedia of Genes and Genomes) analysis was conducted. As shown in Fig. 7, the target genes were assigned to 50 KEGG pathways.
The PI3K-Akt signaling pathway, insulin signaling pathway, T cell receptor signaling pathway, pathways in cancer, fatty acid metabolism, the HIF-1 signaling pathway, glucose metabolism and lipid metabolism were mainly involved in the regulation of the liver in cold-stressed rats. These results suggest that the DElncRNA targets may play crucial roles in energy metabolism, immunity, growth and development, and proliferation and apoptosis. To compare the functions of targets predicted in different ways, GO and KEGG analyses were performed separately on the cis and trans targets of the DElncRNAs. The GO classification of cis and trans targets from up- and down-regulated lncRNAs is shown in Supplementary Figs. S1-S4, and the corresponding KEGG classification in Supplementary Figs. S5-S8. The key pathways and functions (marked in red) appear for both up- and down-regulated lncRNAs, indicating that the targets were mainly involved in metabolic processes, responses to stimuli, multicellular biological processes, immune system processes, reproductive processes, transport activity, molecular function regulation, nucleic acid binding transcription factor activity, fatty acid metabolism, pathways in cancer and the PI3K-Akt signaling pathway under cold stress.

Construction of a co-expression network to reveal hub lncRNAs. First, an interaction network of lncRNAs and their targets was constructed using the limma platform (Fig. 8). A total of 723 interactions were identified among 273 DElncRNAs and 415 target genes. There were 517 positive and 206 negative interaction pairs within the network, so most lncRNA-gene pairs were positively correlated. In addition, one lncRNA could be associated with 1 to 35 mRNAs, and one mRNA with 1 to 30 lncRNAs. To reveal the most significant hub DElncRNAs (P < 0.01), we extracted the connections between DElncRNAs and DEmRNAs to reconstitute a co-expression network. As shown in Fig. 9, the two lncRNAs MSTRG.80946.2 and MSTRG.7147.72 were at the center of the network. Compared with MSTRG.7147.72, MSTRG.80946.2 had a higher node degree and more interaction pairs, suggesting that MSTRG.80946.2 may be a key functional lncRNA in rat liver under cold stress (the hub-screening step is sketched below). To better understand the functions of the hub lncRNA MSTRG.80946.2, functional analyses were performed on its co-expressed DEmRNAs. GO analysis revealed the top three significantly enriched "Biological Process" (BP) terms, namely regulation of protein complex stability, metabolic processes and multicellular biological processes (Fig. 10). In addition, the top ten pathways were listed according to the most significant p-values. As shown in Fig. 11, KEGG enrichment analysis showed that the top three pathways enriched with the most targets were pathways in cancer, the PI3K-Akt signaling pathway and fatty acid metabolism; these included the mRNAs TSPY1, ACP1, Tsn, Hsp90ab1, and so on. Pathways in cancer are closely related to immunity, cell proliferation and apoptosis. The PI3K-Akt signaling pathway is involved in many biological processes, such as protein synthesis, glycolysis, and apoptosis. Studies have reported that lncRNA TUG1 (taurine up-regulated 1) inhibits apoptosis in mice and thus has a protective effect against cold-induced liver injury 25. Translin is a regulator that responds to metabolic changes 26, and metabolic status has a major impact on the regulation of biological rhythms.
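A minimal sketch of that hub-screening step in Python, using networkx (the pairs shown are a tiny illustrative subset, not the full 723-edge network from Fig. 8/9):

```python
import networkx as nx

# Toy subset of DElncRNA-DEmRNA co-expression pairs (illustrative only).
pairs = [
    ("MSTRG.80946.2", "ACP1"), ("MSTRG.80946.2", "TSPY1"),
    ("MSTRG.80946.2", "Tsn"), ("MSTRG.80946.2", "Hsp90ab1"),
    ("MSTRG.7147.72", "Igf2"), ("MSTRG.7147.72", "Ins2"),
]

g = nx.Graph()
g.add_edges_from(pairs)

# Rank lncRNAs by node degree; the highest-degree node is the hub candidate.
lncrnas = {u for u, _ in pairs}
hubs = sorted(lncrnas, key=g.degree, reverse=True)
print(hubs[0], g.degree(hubs[0]))  # -> MSTRG.80946.2 4
```

On the full network, the same degree ranking singles out MSTRG.80946.2 over MSTRG.7147.72, which is the criterion described above.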
Hsp90ab1 is an ATP-dependent, highly conserved molecular chaperone. It interacts with the epidermal growth factor receptor (EGFR) and human epidermal growth factor receptor-2 (HER2), plays an important role in cancer pathways, and participates in various pathophysiological processes of cells 24. ACP1 is a marker enzyme of lysosomes. The lysosome, an organelle with digestive function, contains large amounts of acid hydrolases, which play an important role in the metabolism of substances inside and outside the cell 27. These results showed that the lncRNA targets were prominently enriched in metabolic disorder and cancer pathways under cold stress.

Quantitative analysis verified sequencing accuracy. We selected several DElncRNAs (including MSTRG.80946.2) from cold-stressed rat livers to verify the accuracy of the sequencing results by qRT-PCR (Fig. 12). The relative expression changes of these lncRNAs were in conformity with the high-throughput sequencing results, indicating that the expression assessment and identification of the lncRNAs were reliable. Among all DElncRNAs, MSTRG.80946.2 was the most significantly differentially expressed (P < 0.001) under cold stress; therefore, further functional verification of this key lncRNA was performed.

Characteristic analysis of the key lncRNA. We compared MSTRG.80946.2 with the known rat sequences in the NONCODEv5 database; it was closest to NONRATT021477.2 in length and chromosomal location, with 99% homology. So far there is little information about NONRATT021477.2 beyond its sequence length of 712 bp and its location on chr4. The full length of MSTRG.80946.2 was amplified by RACE (rapid amplification of cDNA ends). As shown in Fig. 13, the 5′ RACE product was 583 bp and the 3′ RACE product was 383 bp. After the linker sequences were removed, the full-length transcript was spliced to 746 bp. The RACE products were subjected to agarose gel electrophoresis, which showed a distinct band at 746 bp. Next, we performed a BLAST alignment of the full-length MSTRG.80946.2 against the known rat sequences in NCBI GenBank and found that its sequence is inversely complementary to that of acid phosphatase 1 (ACP1) (Supplementary Fig. S9). Consistent with the co-expression network (Fig. 9, analyzed with Cytoscape 3.60), ACP1 may thus be a target gene of MSTRG.80946.2, indicating that our analysis is aligned with the sequencing results. Regulating the expression of adjacent genes is one way in which lncRNAs act 28. Thus, we hypothesized that MSTRG.80946.2 may play a part in rat liver under cold stress by regulating ACP1 expression. The subcellular localization of MSTRG.80946.2 in BRL (rat liver) cells was then examined by fluorescence in situ hybridization (FISH). Fig. 14 shows that MSTRG.80946.2 was mostly expressed in the nucleus and at a low level in the cytoplasm. The silencing efficiency at 24 h was higher than at 48 h. After silencing of lncRNA MSTRG.80946.2, the adjacent genes ACP1 and Tsn were significantly down-regulated (Fig. 15B,C), while TSPY1 expression was markedly up-regulated (Fig. 15D). These results showed that lncRNA MSTRG.80946.2 does regulate the protein-coding genes ACP1, TSPY1 and Tsn, playing an important part in rat liver under cold stress.

Discussion

Cold stimulation is the most common stressor in cold regions.
The slow growth of animals, poor disease resistance, and even death due to cold are important factors restricting the development of animal husbandry in cold regions. As high-throughput sequencing technology continues to improve, cold stress-related lncRNAs have been found in fish and plants 29-31. However, little research has been done on mammalian cold stress-related lncRNAs. We previously established a cold stress rat model and confirmed that the liver is a key target organ for cold stress injury. This study used HiSeq 2500 high-throughput sequencing technology to construct differential expression profiles of cold stress-related lncRNAs in rat liver, identifying 6,025 lncRNAs and 273 DElncRNAs. GO and KEGG analyses of the DElncRNA targets indicated that the targets were mainly related to the PI3K-Akt signaling pathway, positive regulation of cell division, pathways in cancer, fatty acid metabolism, multicellular biological processes, immune system processes, reproductive processes, transport activity, molecular function regulation and nucleic acid binding transcription factor activity. In addition, a co-expression sub-network containing 11 DElncRNAs and 24 DEmRNAs was reconstructed to reveal the underlying mechanisms of cold stress. Functional analyses were further performed on the target genes of the hub lncRNA MSTRG.80946.2 in this sub-network. KEGG enrichment analysis showed that the top three pathways enriched with the most targets were pathways in cancer, the PI3K-Akt signaling pathway and fatty acid metabolism. Liu et al. found that brain damage caused by cold stress can be prevented by inhibiting TRPV1 (transient receptor potential vanilloid subtype 1) and the PI3K/Akt inflammatory pathways 32. We also found that some lncRNAs regulated the expression of their target genes through a cis mechanism. For example, lncRNA MSTRG.7147.72 (1,910 bp, chr 1) was down-regulated under cold stress and can target two cis genes, Igf2 and Ins2. Many of the identified lncRNAs were not found in public databases, and there was little information describing the functional annotations of the co-expressed genes. For example, MSTRG.62962.1, ENSRNOG00000042133 and ENSRNOG00000059588 showed strong co-expression, but the annotation of these target genes requires further investigation. MSTRG.80946.2 was the hub lncRNA in the network; currently, there are few reports on this lncRNA. In this study, it was identified as a known lncRNA with 99% homology to NONRATT021477.2 and had 10 target gene pairs (Fig. 9), including TSPY1, ACP1, Tsn, Il2rb, SZT2, Lpin1, EPAS1, Hsp90ab1, Alb and Ccdc107. Testis-specific protein Y-encoded (TSPY) is expressed in sperm cells of adult animal testes 33; TSPY1 is related to the male testis and fertility. TSPY and its homolog TSPX act as a proto-oncogene and a tumor suppressor gene, respectively; they have opposite effects on cell proliferation and on the degradation of the viral HBx (hepatitis B virus X protein) oncoprotein 34. Recently, human TSPY has been reported to inhibit USP7 (ubiquitin-specific peptidase 7)-mediated p53 function and promote spermatogonial proliferation 35. Deletion of Il2rb causes mice to develop immune disease and NK cell dysfunction, including severe autoimmunity 36. Loss of function of SZT2 leads to over-activation of mTORC1 signaling even under amino acid deficiency 37. There is increasing evidence that SZT2 is associated with neurological diseases such as growth retardation and epilepsy. SDC belongs to the family of HSPGs (heparan sulfate proteoglycans) 38. Giuseppina et al.
found that SDC4 may regulate lipid homeostasis and play a key role in human health and longevity 39. Studies have shown that the translin/trax RNase complex can degrade microRNAs, thereby regulating energy metabolism 40. Lack of Lpin1 can lead to severe disorders of metabolic homeostasis, such as fatty liver and cardiovascular disease; Lpin1 regulates cellular triacylglycerol levels and participates in cellular signaling pathways 41. Lipins as targets for therapies against inflammatory or metabolic diseases require further investigation. Herui et al. established a mouse model to confirm that EPAS1 mutation is causative for somatostatinoma 42. Studying the HIF-2α function associated with EPAS1 helps to uncover its role in tumors. HSPs (heat shock proteins) can form multi-protein chaperone complexes involved in the proliferation of animal cells and the folding of apoptotic substrates 43. Inhibition of Hsp90 leads to ubiquitination of its client proteins and their degradation via the proteasome pathway 44; Hsp90ab1 is one of its isoforms. Ccdc107 is a member of the coiled-coil domain-containing (Ccdc) protein family. Ccdc proteins have many important biological functions and can regulate various biological behaviors, such as the invasion and metastasis of malignant tumor cells 45. It has been confirmed that Ccdc proteins are abnormally expressed in prostate cancer, breast cancer and other cancers 46, and they are directly linked with tumor cell migration and invasion. These target gene reports suggest that the lncRNA may affect the physiological processes of energy metabolism, reproductive performance, immunity, apoptosis and proliferation in rats under cold stress. It is well known that the effects of low-temperature environments on the energy metabolism of living organisms are considerable 47. Cold stress can damage macromolecules such as proteins, nucleic acids and lipids in body cells; this molecular damage leads to metabolic disorders and changes in redox potential 48. In a further step, lncRNA MSTRG.80946.2, which is closely related to cold stress and shows a significant expression change, was selected for verification in cells to elucidate its mechanism of action. We found that MSTRG.80946.2 is 99% homologous to NONRATT021477.2 from the NONCODEv5 database. Because FISH (fluorescence in situ hybridization) showed that MSTRG.80946.2 is mainly expressed in the nucleus of BRL cells, the subsequent gene silencing experiments preferred ASO technology. The full length of MSTRG.80946.2 was amplified by RACE, and comparison with the NCBI database showed it to be an antisense lncRNA of rat ACP1. ACP1 is a lysosomal marker enzyme that is involved not only in intracellular digestion and the endocytosis of phagocytic cells, but also in important life activities such as nucleic acid and protein metabolism, immune regulation and signal transduction 49. Wang et al. found that ACP has strong activity in fish liver and is widely involved in energy metabolism 50. Interestingly, ACP1 was one of the DEmRNAs co-expressed with MSTRG.80946.2 in the co-expression network. Therefore, we further verified the relationship between MSTRG.80946.2 and its targets in BRL cells, confirming that MSTRG.80946.2 does regulate ACP1, TSPY1 and Tsn expression in rats under cold stress.

In conclusion, this study was the first to systematically identify cold stress-associated lncRNAs in rat liver and construct lncRNA DE profiles.
lncRNAs played crucial roles in energy metabolism, reproductive performance, growth and development, and immunity in rats by regulating mRNAs under cold stress. MSTRG.80946.2 was verified by network analysis and experiments to be a key cold-responsive lncRNA, regulating the protein-coding genes ACP1, TSPY1 and Tsn. However, the detailed mechanisms of lncRNAs under cold stress still require further experimental verification.

In vitro lncRNA silencing assay. A specific ASO interference sequence targeting lncRNA MSTRG.80946.2 was synthesized by Ribobio Biotechnology (Guangzhou, China). The ASOs were transfected into BRL cells at 200 nM for 24 h, and the cells were then subjected to qRT-PCR. The ASO sequences are: ASO-MSTRG.80946.2, 5′-TTAACTTCACCAACCTGTTG-3′; ASO-NC, 5′-TTAAATGGAAGGCTGCCATG-3′. Transfection was performed using Lipofectamine RNAiMAX (Thermo Scientific, USA).
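The qRT-PCR readout behind Fig. 15 is typically quantified with the 2^(-ΔΔCt) method; the paper does not spell out its analysis, so the following is only a sketch with made-up Ct values (the reference gene and all numbers are illustrative):

```python
import numpy as np

# Standard 2^(-ddCt) relative quantification: normalize the target gene's
# Ct to a reference gene, then compare treated (ASO) vs. control (ASO-NC).
def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_treated = np.mean(ct_target) - np.mean(ct_ref)
    d_ct_control = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Example: ACP1 after ASO knockdown vs. ASO-NC (hypothetical Ct triplicates).
fold = rel_expression([26.1, 26.3, 26.2], [18.0, 18.1, 17.9],
                      [24.9, 25.0, 25.1], [18.0, 18.2, 17.9])
print(f"ACP1 relative expression: {fold:.2f}")  # < 1 means down-regulated
```

A fold change below 1 for ACP1 and Tsn, and above 1 for TSPY1, would correspond to the directions of regulation reported in Fig. 15B-D.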
5,170.2
2020-01-16T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Enhanced cutoff energies for direct and rescattered strong-field photoelectron emission of plasmonic nanoparticles: The efficient generation, accurate detection, and detailed physical tracking of energetic electrons are of applied interest for high-harmonics generation, electron-impact spectroscopy, and femtosecond time-resolved scanning tunneling microscopy. We here investigate the generation of photoelectrons (PEs) by exposing plasmonic nanostructures to intense laser pulses in the infrared (IR) spectral regime and analyze the sensitivity of PE spectra to competing elementary interactions for direct and rescattered photoemission pathways. Specifically, we measured and numerically simulated emitted PE momentum distributions from prototypical spherical gold nanoparticles (NPs) with diameters between 5 and 70 nm, generated by short laser pulses with peak intensities of 8.0 × 10¹² and 1.2 × 10¹³ W/cm².

Introduction

The characterization of photoexcitation and -emission of plasmonic nanostructures is of basic-research and applied interest for efficient harmonic up-conversion [1,2], femtosecond time-resolved scanning tunneling microscopy and spectroscopy [3,4], electron-impact spectroscopy [5,6], and the development of compact electron sources [7]. We here show that prototypical plasmonic NPs exposed to intense IR-laser pulses emit PEs over a large kinetic-energy range, owing to an intricate dynamical interplay of distinct electronic and photonic interactions. Extensively investigated during the past two decades [8,9], metal NPs have remarkable optical properties that primarily arise when incident light in the IR to visible frequency range drives the collective motion of their conduction electrons. This light-driven excitation of localized surface-charge plasmons (LSPs) controls the particles' light absorption, reflection, and skin depth [10]. It results in a nanoplasmonic field near the NP surface that can greatly amplify the incident-laser electric field [11,12]. The LSP resonance of metal NPs can be tuned from IR to visible frequencies by variation of their shape, size, composition, and dielectric environment [8,9,13,14]. This tunable enhanced light absorption and scattering is key to powerful diagnostic methods, such as surface-enhanced Raman spectroscopy [15], time-resolved nanoplasmonic-field microscopy [12,16-18], and biomedical and chemical sensing [19,20]. The present single-pulse PE imaging investigation is expected to promote future two-pulse pump-probe experimental schemes for the spatiotemporal imaging of induced plasmonic-field distributions near the surface of metal nanoshells, as recently proposed in classical [21] and quantum-mechanical [12,18,22] numerical simulations.
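As a rough quantitative illustration of this near-field amplification, the following sketch evaluates the quasi-static (small-sphere) dipole limit rather than the full Mie treatment used later in this work; the gold permittivity value is an assumption for illustration:

```python
# Quasi-static (ka << 1) dipole polarizability of a sphere of radius a:
#   alpha(omega) = a^3 * (eps - eps_m) / (eps + 2*eps_m),
# and the on-axis surface field along the polarization direction is
#   E_out/E_inc = 1 + 2*alpha/a^3.
a = 15e-9            # m; nanosphere radius (30 nm diameter)
eps = -22.5 + 1.4j   # assumed bulk-gold relative permittivity near 780 nm
eps_m = 1.0          # vacuum environment

alpha = a**3 * (eps - eps_m) / (eps + 2 * eps_m)
enhancement = abs(1 + 2 * alpha / a**3)
print(f"surface field enhancement ~ {enhancement:.2f}")  # ~3 off resonance
```

Even this off-resonant, few-fold field enhancement translates into an order-of-magnitude increase of ponderomotive energy scales, since the cutoff energies discussed below scale with the square of the enhancement factor.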
In this work, we have employed 2-dimensional velocity-map-imaging (VMI) spectroscopy to investigate strong-field electron emission from metal NPs. VMI spectroscopy provides projections of PE momentum distributions onto the plane of a 2-dimensional PE detector. It is established as a powerful technique for studying intense-light interactions with atoms and molecules [23-25]. Over the last decade, this technique was applied to study strong-field photoemission from isolated NPs by intense linearly polarized laser pulses [26-28]. During strong-field emission from atoms and molecules [29], PEs can gain a significant amount of energy while propagating in the oscillating laser electric field. PEs that are "directly" emitted from gaseous atomic targets by linearly polarized laser pulses (without being driven by the external light field to return to the residual ion) gain up to 2U_p(I₀) in kinetic energy, while PEs that are accelerated back to the residual ion by the laser electric field and "rescatter" elastically accumulate up to 10U_p(I₀) [30-33]. The ponderomotive energy U_p(I₀) = I₀/(4ω²) is the cycle-averaged quiver energy of a free electron in a laser field of frequency ω and peak intensity I₀. Unless indicated otherwise, we use atomic units throughout this work. For strong-field PE emission and rescattering from solids [34-40] and nanostructures, such as nanotips [3,41-46], isolated clusters [47-51], and dielectric NPs [27,28,52], enhanced cutoff energies are found; for directly emitted and rescattered photoemission from dielectric NPs they are approximately 2α²U_p(I₀) and 10α²U_p(I₀), respectively [27,53]. Compared to atomic targets, these limiting PE energies are enhanced by the square of the near-field plasmonic-enhancement factor α. In this work, we (i) measured and numerically modeled VMI spectra resulting from strong-field PE emission from metal NPs by intense IR-laser pulses and (ii) validated a recent extension [17] of the three-step model for atomic strong-field ionization [54] to metal NPs. Owing to the plasmonic near-field enhancement of the incident-laser electric field and to PE correlation, we found measured and calculated cutoff energies for metal NPs that exceed typical cutoff energies of gaseous atoms and dielectric NPs by two and one orders of magnitude, respectively. Interestingly, the cutoff energy for direct electron emission from metal NPs reaches up to 93% of the corresponding value for rescattered PEs, dramatically exceeding the well-known proportion of 20% discussed earlier for gaseous atoms and dielectric NPs [27,53].

Experimental setup

The laser system and VMI electron-detection apparatus at the James R. Macdonald Laboratory at Kansas State University are described in more detail in [26,55]. Briefly, the experiments used a Ti:Sapphire-based chirped-pulse-amplification (CPA) system generating 25 fs (FWHIM, 10 optical cycles) pulses with central angular frequency ω = 2.415 PHz (corresponding to a central wavelength λ = 780 nm).
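For orientation, the ponderomotive scales quoted throughout can be evaluated with the standard SI shortcut for U_p (the enhancement factor below is an assumed value for illustration only; the 121 U_p figure anticipates the 30 nm simulation result discussed later):

```python
# Ponderomotive energy: U_p = I0/(4*omega^2) in atomic units, or in practical
# units U_p[eV] ~ 9.33e-14 * I0[W/cm^2] * (lambda[um])^2.
def up_ev(intensity_w_cm2, wavelength_um):
    return 9.33e-14 * intensity_w_cm2 * wavelength_um**2

I0 = 8.0e12            # W/cm^2, lower peak intensity used in this work
up = up_ev(I0, 0.780)  # ~0.45 eV

print(f"U_p(I0)       = {up:.2f} eV")
print(f"atomic cutoffs: direct ~ {2*up:.1f} eV, rescattered ~ {10*up:.1f} eV")

alpha = 1.6  # assumed near-field enhancement factor, illustration only
print(f"dielectric-NP-like cutoffs: {2*alpha**2*up:.1f} eV, "
      f"{10*alpha**2*up:.1f} eV")
print(f"121*U_p(I0)   = {121*up:.0f} eV")  # ~55 eV, within the 350 eV range
```

The resulting tens-of-eV cutoff energies are comfortably within the 350 eV acceptance of the spectrometer described below.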
The laser pulses intersect the stream of isolated single NPs with diameters of 5, 30, or 70 nm that are injected by aerodynamic-lens focusing [28,55-58]. As shown in the sketch of the experimental setup in Figure 1, PEs are projected onto the detector by the static electric field between the repeller and extractor. This allows the recording of the 2D projection of the PE momentum distributions as VMI maps. PE spectra were captured in a thick-lens, high-energy VMI spectrometer [59] capable of collecting electrons with kinetic energies up to 350 eV. The NPs were purchased from Cytodiagnostics [60]. The NP samples were custom synthesized, characterized for monodispersity (typical polydispersity index < 0.1) and sphericity (> 95%) to ensure sufficient reproducibility between interactions, and extensively purified to remove any source of contamination. We carefully chose the initial NP concentration to avoid the formation of clusters in the NP beam [61].

Laser-intensity characterization

The peak laser intensity was determined by analyzing the above-threshold-ionization (ATI) PE energy distribution from gaseous Xe atoms with the VMI spectrometer described above and for the same laser parameters selected for the strong-field-ionization studies reported in this work. To determine the absolute value of the intensity, the ponderomotive shift of the Xe ATI comb was measured as a function of the input-laser-pulse energy. From this shift, we deduced the ponderomotive energy, U_p, for a given pulse energy. Since U_p is proportional to the peak laser intensity I₀, the latter could be directly determined from this measurement. We determined the values of the intensities used in this work as I₀ = 8.0 × 10¹² W/cm² and 1.5I₀, and estimated the accuracy of the intensity calibration to be better than 15% (see Refs. [55,62] for details).
Theoretical model

We numerically investigated PE emission from metallic NPs by IR-laser pulses with a Gaussian temporal profile. Propagating along the x axis and linearly polarized along the z axis, their electric field is given by E_inc(x, t) = ẑ E₀ exp[−2 ln 2 (t − x/c)²/τ²] cos[ω(t − x/c) + φ], where τ is the pulse length at full-width-half-intensity maximum (FWHIM), ω the pulse's central frequency, φ the carrier-envelope phase, and c the speed of light in vacuum (Figure 1).

Figure 1: Schematic of the velocity-map-imaging spectrometer coupled to the nanoparticle source. The dilute beam of isolated gas-phase nanoparticles is injected into vacuum and focused by an aerodynamic lens to intersect 800 nm, 25 fs, 10 kHz-repetition-rate linearly polarized laser pulses. Emitted electrons are focused onto the microchannel plate (MCP)/phosphor assembly. V_R and V_E are the respective voltages on the PE repeller and extractor plates needed to guide photoelectrons to the MCP/phosphor detector. The MCP is coupled to a phosphor screen, of which a camera records the spatial distribution of photoelectron hits for every laser shot.

During the laser-NP interaction, LSPs are excited and induce an inhomogeneous plasmonic field near the NP surface. At the same time, and most significantly at the LSP resonance frequency [63,64], electrons are excited to electronic states above the Fermi level. Sufficiently high laser intensities generate multiply ionized NPs [26,56]. The incident laser pulse induces a transient dipole in the NP. Within the electric-dipole approximation, the corresponding transient induced plasmonic-dipole moment, P_pl(t) = ε₀ α_Mie(ω) E_inc(r, t), generates the plasmonic electric field [65] of an oscillating point dipole, where k = 2π/λ = ω/c. We calculate the complex NP polarizability, α_Mie(ω), within Mie theory [66], following Ref. [67], which restricts the applicability of Eq. (2) to size parameters ka ⪅ 0.6 for nanospheres of radius a [68]. We describe strong-field ionization from metal NPs by extending the semi-classical three-step model (also known as the "simple-man model") for atomic strong-field ionization to metal NPs [17]. Our extended three-step model consists of (1) electron release based on quantum-mechanical tunneling, (2) PE propagation from the NP surface to the detector by sampling over classical trajectories, and (3) PE rescattering and recombination at the NP surface. In comparison with gaseous atomic targets, each of these steps is significantly more intricate for metal NPs, owing to their more complex electronic structure, the added morphological structure, and the emission of a much larger number of electrons, which emphasizes the effects of PE-PE correlation, residual charges, and PE-nanoplasmonic-field interactions.
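Step (1) is implemented via modified Fowler-Nordheim tunneling rates, as detailed next. As a minimal illustration of the generic, unmodified Fowler-Nordheim scaling with the instantaneous surface field (atomic units; the modified rates of Ref. [17] contain further corrections not reproduced here):

```python
import math

# Generic Fowler-Nordheim-type tunneling through a triangular surface barrier
# of height phi under an instantaneous field F (atomic units): the dominant
# factor is exp(-2*(2*phi)**1.5/(3*F)), with a prefactor ~ F^2.
EV = 1 / 27.2114          # eV -> hartree
AU_FIELD = 5.142e11       # atomic field unit, V/m

phi = 5.1 * EV            # gold work function (value used in this work)
for F_vm in (1e9, 3e9, 1e10):          # illustrative surface fields, V/m
    F = F_vm / AU_FIELD
    rate = F**2 * math.exp(-2 * (2 * phi)**1.5 / (3 * F))
    print(f"F = {F_vm:.0e} V/m  ->  relative rate ~ {rate:.3e}")
```

The steep exponential dependence on F is what confines electron release to the field maxima of each optical half-cycle and to the surface regions of strongest nanoplasmonic enhancement.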
We represent the NPs' static electronic structure in terms of the surface-potential step V₀ = ε_F + Φ, with the work function Φ = 5.1 eV and Fermi energy ε_F = 8.0 eV for bulk gold [69]. Our dynamical numerical simulation divides the NP surface into small surface elements. During successive small time intervals, the surface elements are modeled as spherical square-well potentials. Bound electrons close to the NP surface tunnel out along the radial component of the total electric field at the NP surface, F · ê_r, where F comprises the incident-laser, induced plasmonic, and residual-charge fields. The residual-charge field F_res results from the accumulation of positive residual charge on the NP during electron emission in preceding time intervals. We account for strong-field electron release from the NP by employing modified [17] Fowler-Nordheim tunneling rates [70,71]. Subsequently, we Monte Carlo sample over the initial phase-space distribution of released electrons and solve Newton's equations of motion for the PE propagation outside the NP in the presence of all electric fields, F + F_e−e, where F_e−e is the repulsive Coulomb field between PEs. In each laser half-cycle the direction of the incident-laser electric field changes, such that emitted PEs can be driven back toward the NP and either rescatter from or recombine at the NP surface. For 5, 30, and 70 nm diameter gold nanospheres, we include and numerically evaluate the effects of PE repulsion, residual positive charges on the NP, PE recollisions and recombination at the NP surface, and the nanoplasmonic enhancement of the incident-laser-pulse electric field. More details about this numerical model are given in the Supplementary Information (SI) and in Ref. [17].

In our numerical applications in Section 3, we distinguish and compare specular and diffusive PE rescattering at the NP surface. For diffusive rescattering, we uniformly randomize the polar and azimuthal scattering angles relative to the surface normal at the impact site on the NP surface, modeling rescattering in all accessible directions with equal probability.

Influence of nanoplasmonic field, rescattering, residual-charge interactions, and photoelectron correlation

VMI spectra are sensitive to all PE interactions included in our simulation. In order to track the effects of the different electronic interactions on the propagation and rescattering of released PEs, we leave the modeling of the tunneling release of electrons at the NP surface unchanged when selectively switching off individual PE interactions (for identical laser-pulse parameters), assuming identical tunneling-ionization rates for all calculated VMI maps (Eq. (S1.5) in the Supplementary Information). The comparison of simulations in which we selectively include and exclude specific PE interactions during PE propagation and rescattering allows us to quantify their specific effects on VMI maps.

Figure 2 shows simulated VMI spectra compared to experimental results for gold nanospheres with 30 nm diameter, for the experimental setup depicted in Figure 1. The VMI spectra are projections of the PE momentum distribution onto the x-z plane of the MCP detector and show the projected PE yields as functions of the PE asymptotic velocities, v_x and v_z, along the laser-propagation and -polarization directions. The first, second, and third columns show, respectively, the direct, rescattered, and net PE yields. To allow for a quantitative comparison of direct and rescattered PE yields, we normalized the yields in each row to the corresponding net PE yield in the third column and display the normalized integrated yield (yield factor) in each graph of Figure 2.
We calculated this yield factor as the v_x- and v_z-integrated yield of the simulated VMI maps in each row, divided by the corresponding integrated yield of the VMI maps in the third column. The comparison of the VMI spectra in Figure 2 allows us to assess the influence of the distinct PE interactions on the VMI spectra, as we discuss next.

Plasmonic-field interactions

The simulated VMI spectra in the first and third rows of Figure 2 are calculated under the assumption that released electrons interact solely with the incident-laser and induced plasmonic fields while propagating to the detector. These PE distributions are aligned with the laser-polarization direction and have a dipole-like appearance, owing to the dipole character transferred from the induced plasmonic field and tunneling ionization.

The comparison of Figure 2(a) and (b) with Figure 2(c) for specular rescattering, and of Figure 2(g) and (h) with Figure 2(i) for diffuse rescattering, reveals that directly emitted PEs dominate the low-energy part of the photoemission spectra. Rescattered PEs, in contrast, can gain additional energy from the laser and induced plasmonic fields and establish the higher-energy part of the PE spectrum. Rescattering boosting PE energies is a well-understood phenomenon in strong-field ionization: for gaseous atomic targets, elastically rescattered PEs reach kinetic energies up to 10U_p(I₀) [30-33], and larger energies occur for dielectric (SiO₂) NPs [26,27,56]. By comparing the yield factors in the first and second rows, we find that approximately 83% of the detected PEs are directly emitted, while 17% have rescattered at the NP surface at least once.

Effect of all interactions

The second (specular rescattering) and fourth (diffuse rescattering) rows of Figure 2 show simulated VMI spectra including all PE interactions, i.e., E_inc (Eq. (1)), E_pl (Eq. (2)), F_res, and F_e−e. As noted above, in the absence of PE-PE interactions and diffuse rescattering, the linearly polarized incident-laser and induced plasmonic electric fields imprint their dipole character on the VMI spectra. The inclusion of PE-PE interactions and diffuse rescattering partially removes the dipolar emission character and results in more isotropic VMI spectra [17]. For metal NPs, attractive residual-charge interactions are thus much less influential than PE-PE interactions in shaping PE momentum distributions and determining PE cutoff energies.
Comparing the VMI spectra in rows one and two of Figure 2 for specular rescattering, and in rows three and four for diffuse rescattering, we notice that the combined effect of F_e-e and F_res considerably increases the final energy of directly emitted electrons, while decreasing the direct-emission yield from 83% to 33% (specular rescattering) and 35% (diffuse rescattering). On average, directly emitted PEs are slower than rescattered PEs and thus spend more time near the NP. They are therefore (i) more likely to recombine with the NP, reducing the direct PE yield, and (ii) subject to stronger PE-PE Coulomb repulsion, leading to higher acceleration and larger final kinetic energies. Due to influential PE-PE interactions, direct photoemission reaches a cutoff energy of 121U_p(I_0) for 30 nm diameter NPs. This is 85% of the cutoff energy for rescattered PEs [cf. Figure 2(d) and (j)]. Thus, PE-PE interactions significantly contribute to the high-energy part of the PE spectra, even for direct emission, resulting in cutoff energies significantly larger than the known 2U_p(I_0) limit of atomic targets [31] and even the enhancement-scaled direct-emission cutoff energy of dielectric NPs [27]. The increase of the PE cutoff energies due to rescattering, as compared to direct emission, is less pronounced for metal NPs than for gaseous atomic targets and dielectric NPs.

Figure 2(n) shows our experimental VMI spectrum. With regard to yield, cutoff energy, and the isotropic shape of the PE momentum distribution, Figure 2(m) (including all interactions and with diffuse rescattering) is our most comprehensive simulation result and matches the experiment well. The VMI spectra in Figure 2 clearly show that all PE interactions are relevant for shaping the PE angular distribution in the measured VMI spectrum in Figure 2(n).

Influence of nanoparticle size and laser intensity

Figure 3 shows simulated and experimental VMI spectra for gold nanospheres with diameters of 5, 30, and 70 nm. The first, second, and third columns are simulated VMI spectra for, respectively, the direct, rescattered, and net PE yields for peak laser intensities I_0 (first, second, and third row) and 1.5I_0 (fourth, fifth, and sixth row). Experimental results corresponding to the simulations in the third column are shown in the fourth column. The VMI spectra in Figure 3 are (slightly) elongated along the laser-polarization direction, with PE cutoff energies that increase with NP size. As discussed in Section 3.1, isotropic VMI spectra are promoted by PE-PE interactions and diffuse PE rescattering from the NP surface, while incident-laser and induced plasmonic-field interactions tend to imprint a dipolar shape. The detected number of PEs per laser shot for the experimental data shown in Figure 3 varies from 140 for 5 nm diameter NPs at the lower peak laser intensity (I_0) to 600 for 70 nm NPs at 1.5I_0. However, as discussed in detail in Ref. [55], these numbers do not directly reflect the number of PEs that hit the detector, due to the PE-energy-dependent detector saturation in our experiment. The saturation effect is most prominent in the central detection area, where low-energy electrons (which dominate the total PE yield) hit the MCP.

To allow for a quantitative comparison of direct and rescattered PE yields, we normalized the direct and rescattered PE yields in each row to the corresponding net PE yield in the third column and display the normalized integrated yield, which depends on the NP size a and the intensity I_0, in each graph.
Figure 3 reveals that the yield of direct PEs decreases as a function of the NP size and intensity, being more sensitive to the size. This observation is compatible with PEs having a higher probability to rescatter off larger NPs. In addition, increasing laser intensity leads to a stronger radial attractive force, due to an increase in the number of residual charges on the NP surface, leading to more PE rescattering events. The direct and rescattered PE yields can thus be controlled by the intensity of the laser pulse and the size of the NP. The measured and simulated VMI maps also reveal a large increase in the direct and rescattered PE cutoff energies with the laser peak intensity and NP size. We quantify this laser-intensity- and NP-size-dependent effect in the following subsection.

Angle-integrated photoelectron yields and cutoff energies

Figure 4 shows simulations corresponding to the VMI spectra in Figure 3. It includes (i) all interactions for the direct PE yield (denoted as "All_Direct"), (ii) all interactions for the rescattered PE yield ("All_Rescat"), and (iii) all interactions for the net PE yield ("All_Net"). In addition, Figure 4 displays (iv) simulations only including incident- and plasmonic-field interactions for the net PE yield (denoted as "Inc + Pl_Net") and (v) integrated experimental yields as a function of the PE kinetic energy. Due to the detector saturation at the center of the MCP phosphor detector (Figure 1), the experimental yields for kinetic energies below approximately 8 eV (corresponding to PE velocities below 0.8 a.u.) are not accurate. To be able to compare experimental integrated yields to one another and to the simulation results, we removed the low-energy part of the integrated yields from both experimental and simulated data.

The overall agreement between experimental and simulated integrated PE yields in Figure 4 is not perfect, for several reasons. With regard to the simulation, an important uncertainty derives from our implementation of approximate modified Fowler-Nordheim tunneling rates. With regard to the experiment, the above-mentioned detector saturation decreases the reliability of the low-energy portion of our spectra. While the low-energy portion of the simulation data was truncated to allow for a better comparison with the experiment, the detection uncertainty due to saturation is not completely removed and tends to affect predominantly our measurements for the largest NP size (70 nm diameter) and the higher laser peak intensity (1.5I_0), due to larger numbers of emitted PEs per laser shot. This is consistent with the agreement between simulation and measurement being better for 30 nm NPs at the lower peak intensity (I_0) in Figure 4(b) than for 70 nm NPs in Figure 4(c) and at the higher laser intensity of 1.5I_0 in Figure 4(d)-(f). However, in view of hardly avoidable inaccuracies in the detailed modeling of this complex interaction scenario and NP-size- and laser-intensity-dependent experimental background noise, we cannot exclude that the exceptionally good match between experimental and simulated results shown in Figure 4(b), compared to the other graphs in this figure, is serendipitous.

Integration of the VMI-projected PE momentum distributions y(v_x, v_z) in Figure 3(a)-(x) over the PE detection angle in the VMI plane results in PE yields as functions of the PE energy. The yields shown in Figure 4(a)-(f) are normalized individually to their maxima, except for the simulations labeled "Direct_Net" and "Rescat_Net", which are normalized to the maxima of the "All_Net" simulation results.
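As a concrete illustration of this angle integration, the following sketch bins a two-dimensional VMI map y(v_x, v_z) into an energy spectrum. The Gaussian map, the velocity grid, and the bin count are hypothetical placeholders, and atomic units are assumed, so E = v^2/2.

```python
import numpy as np

# Hypothetical VMI map y(vx, vz) on a uniform velocity grid (atomic units).
v = np.linspace(-3.0, 3.0, 201)
VX, VZ = np.meshgrid(v, v, indexing="ij")
ymap = np.exp(-(VX**2 + VZ**2) / 1.5)      # placeholder yield distribution

# Angle integration in the VMI plane: bin every grid point by its energy E = v^2/2.
energy = 0.5 * (VX**2 + VZ**2)
bins = np.linspace(0.0, energy.max(), 60)
spectrum, _ = np.histogram(energy.ravel(), bins=bins, weights=ymap.ravel())

spectrum = spectrum / spectrum.max()       # normalize to the maximum, as in Figure 4
centers = 0.5 * (bins[1:] + bins[:-1])     # energies at the bin centers
```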
For simulated yields, we define the PE cutoff energy E_cutoff as the energy up to which 99.5% of the net PE yield has accumulated (illustrated in the sketch below). The experimental cutoff energy was extracted from the experimental VMI maps as described in Refs. [26,55], for which the upper energy boundaries of the full 3D momentum sphere and the 2D projection are identical. The radial distribution of these projections along the polarization direction accurately determines the maximum PE energy.

The PE cutoff energies, shown as red dashed circles in Figure 3 in Section 3.2, increase with the NP size and peak laser intensity. Figure 5(a) and (b) display cutoff energies as functions of NP size for peak intensities of I_0 = 8.0 × 10^12 W/cm^2 and 1.5I_0, in units of the incident-laser ponderomotive energies U_p(I_0) and U_p(1.5I_0), respectively. Blue diamonds and red circles show, respectively, simulated cutoff energies (including all interactions) for the direct (denoted as "All_Direct") and net ("All_Net") PE yields. Gray squares with error bars are experimental cutoff energies ("Experiment"). Simulation results including all interactions for rescattered PEs are not shown, because they coincide with the "All_Net" yield. For gaseous atomic targets, the cutoff energy is equal to 10U_p [30-33]. Cutoff energies obtained by scaling this well-known expression by the plasmonic intensity enhancement of the incident-laser pulse, i.e., by the squared field-enhancement factor, are shown as yellow "plus" markers. As expected, they tend to merge with the cutoff energies computed while only including incident-laser-pulse and plasmonic-field interactions (represented as green triangles). We calculated the applied field-enhancement factor within Mie theory at the poles (relative to the laser-polarization direction) of the NPs [66-68]. In contrast to this enhancement-scaled 10U_p prediction, the comparison of Figure 5(a) and (b) shows that our theoretical cutoff energies predict intensity-dependent changes that become more pronounced for larger NPs. Within the experimental error, this theoretical prediction is compatible with our experimental results.
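A minimal sketch of this cutoff definition, applied to a hypothetical exponentially decaying toy spectrum (all numbers are placeholders):

```python
import numpy as np

def cutoff_energy(energies, yields, fraction=0.995):
    """Energy below which `fraction` of the total yield has accumulated."""
    order = np.argsort(energies)
    e, y = np.asarray(energies)[order], np.asarray(yields)[order]
    cum = np.cumsum(y) / np.sum(y)
    return e[np.searchsorted(cum, fraction)]

# Toy spectrum on a grid of energies (in units of Up), decaying with a 40 Up scale.
e_grid = np.linspace(0.0, 400.0, 4000)
y_grid = np.exp(-e_grid / 40.0)
print(cutoff_energy(e_grid, y_grid))   # about 212 Up for this toy spectrum
```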
Based on the discussion in Section 3.1 of different PE interactions and their influence on VMI maps, we investigated two plausible causes for the numerically predicted increase of the PE yield and cutoff energy with the NP size. First, the lowering and narrowing of the surface-potential barrier by the more significant nanoplasmonic-field enhancement near larger NPs [12,18,21,68] promotes strong-field tunneling ionization. This not only tends to augment the measured PE yield; since PEs gain more energy in a more strongly enhanced field, it also entails higher cutoff energies for larger NPs. Second, as the NP size increases, a larger surface area becomes available from which more electrons are emitted, increasing the PE yield. The cutoff energy rises with the PE yield due to the increased repulsive Coulomb energy between PEs upon their release from the NP surface. In principle, a third cause for larger yields and cutoff energies could be laser-pulse-propagation effects inside the NP that result in higher local-field enhancements for larger NPs [56]. However, for the NP sizes investigated here, we did not find this effect to be relevant.

As discussed in Section 3.1, the consequences of residual-charge interactions and PE-PE interactions oppose each other. While attractive residual-charge-PE interactions reduce both PE yields and cutoff energies, PE Coulomb repulsion increases them. A detailed numerical comparison of these competing interactions is shown in Figure 3 of Ref. [17]. Our numerical results indicate that, with regard to the cutoff energy, PE Coulomb repulsion overcompensates residual-charge-PE interactions, leading to an overall cutoff-energy increase, especially for larger NPs.

The green triangles (denoted as "Inc + Pl_Net") in Figure 5 are cutoff energies calculated under the assumption that released electrons solely interact with the incident-laser and induced plasmonic fields while propagating to the detector. In Sec. (S4) of the SI, we derive a closed-form analytical heuristic expression for the cutoff energy in direct photoemission, based on a simplified central-field approximation of residual-charge interactions and PE correlation. By comparing Eq. (S4.12) in Sec. (S4) with the known respective 2U_p and 10U_p limits for direct and rescattered emission in atomic strong-field ionization, we infer the cutoff energy for rescattered PEs in Eq. (5), where R_eff(a, I_0) models, on average, the effect of plasmonic-field enhancement on rescattered PEs (indicated by the label "R"), while taking all PE interactions into account, and t_f designates the effective interaction time (determined at numerical convergence). We introduce the effective Coulomb interaction factor, R_C(a, I_0) = R_e-e(a, I_0) - R_res(a, I_0), in analogy to the plasmonic-field-enhancement factor. In the central-field approximation, R_e-e(a, I_0) and R_res(a, I_0) represent PE-PE repulsion and the decelerating effect of residual-charge interactions, respectively. Note that direct PEs, on average, are more strongly affected by repulsive PE-PE Coulomb interactions and plasmonic-field enhancement than rescattered electrons, as mentioned earlier. For the NP and laser parameters we considered, this leads to comparable cutoff energies for direct emission and rescattering. R_C(a, I_0) is a measure for the magnitude of the effective counteracting attractive (decelerating) residual-charge and repulsive (accelerating) PE-PE interactions. For the present work, R_C(a, I_0) > 0, indicating the dominance of PE repulsion over residual-charge attraction in determining the cutoff energy. The heuristic Eq. (5) qualitatively explains all experimental results in Figure 5.
For very small R_C(a, I_0) and weak plasmonic-field enhancement, Eq. (5) approaches the familiar 10U_p(I_0) scaling for rescattering ionization of gaseous atomic targets, as expected. This condition is satisfied for dielectric NPs of appropriate particle sizes at suitable laser intensities. Including field enhancement simply as a multiplicative factor, i.e., modeling cutoff energies as 10U_p(I_0) scaled by the squared field-enhancement factor, fails to reproduce the cutoff energies we measured and simulated for metal NPs when plasmonic-field enhancement and PE Coulomb interactions are relevant. If R_C(a, I_0) is negligible, the cutoff energy, 10[R_eff(a, I_0)]^2 U_p(I_0), is smaller than this enhancement-scaled value, since R_eff(a, I_0) is smaller than the Mie field-enhancement factor [see Sec. (S4) in the SI]. For R_C(a, I_0) > 0, PE correlation dominates the residual-charge deceleration, and cutoff energies tend to increase rapidly with the NP size and laser intensity.

For the present numerical applications, even small R_C(a, I_0) enable very large cutoff energies, because the coefficient t_f in Eq. (5), the effective PE interaction time, is large: assuming t_f ≈ 2τ, with τ the pulse duration, the laser parameters used in this study give ωt_f = 120.75.

An unexpected and interesting result derives from the fact that the simulated cutoff energies for direct PEs ("All_Direct") and rescattered PEs ("All_Net") are comparable. The direct cutoff energies for 5, 30, and 70 nm NPs are, respectively, 93, 85, and 89% of the rescattered-PE cutoffs at the intensity I_0, and 93, 87, and 84% at 1.5I_0. The corresponding ratio is about 20% for strong-field ionization of gaseous atoms, molecules, and dielectric NPs.

Summary and conclusions

We measured and numerically simulated VMI maps to model strong-field ionization from metal NPs. Our experimental and simulated results scrutinize a complex dynamical interplay of PE emission, propagation, recombination, and rescattering. Augmented by strong plasmonic-field enhancement, a large number of PEs tunnel ionize from metal NPs, resulting in high PE yields and cutoff energies. We analyzed the size and laser-intensity dependence of PE angular distributions in light of competing contributions from various PE interactions.

We observed that the dipolar shape, imprinted on VMI maps by the incident-laser and induced plasmonic fields, is mostly erased by PE correlation and diffusive rescattering at the NP surface, yielding almost isotropic VMI maps. While for gaseous atomic targets directly emitted PEs acquire no more than about 20% of the cutoff energy of rescattered PEs [10U_p(I_0)], we found direct photoemission from metal NPs to yield cutoff energies up to 303U_p(I_0), reaching between 84 and 93% of the cutoff energy for rescattered PEs. Due to the (exponentially) laser-intensity-dependent PE emission, the effects of residual charges and PE-PE interactions are strongly intensity dependent. This leads to a nonlinear intensity dependence of the PE yield and of the cutoff-energy scaling with U_p(I_0), contrary to the known linear intensity scaling for gaseous atomic targets.

Our joint experimental and theoretical investigation of a prototypical light-driven nanoplasmonic system supports the use of plasmonic nanostructures for the development of tunable compact electron and radiation sources for PE and radiation imaging in basic research and for novel photoelectronic detection, catalytic, and light-collecting devices.
Figure 2: (a-m) Photoelectron VMI spectra simulated for 30 nm diameter gold nanospheres for direct (first column), rescattered (second column), and all (denoted as "Net", third column) photoelectrons, including either specular (first and second row) or diffuse rescattering (third and fourth row) for incident 780 nm laser pulses with a pulse length of 25 fs (FWHM) and 8.0 × 10^12 W/cm^2 peak intensity. The number in each panel designates the integrated yield, normalized to the simulated net PE yield. First and third row: simulations where only the incident-laser and plasmon fields (E_inc and E_pl) are included. Second and fourth row: VMI spectra including all the interactions, E_inc, E_pl, photoelectron interactions with residual positive charges (F_res), and repulsive photoelectron Coulomb interactions (F_e-e). (n) Corresponding measured VMI spectrum.

Figure 3: Comparison of simulated direct (first column), rescattered (second column), and net (i.e., including direct and rescattered yields, third column) photoelectron VMI spectra with experimental (fourth column) VMI spectra for gold nanospheres with 5, 30, and 70 nm diameter and laser peak intensities of I_0 = 8.0 × 10^12 W/cm^2 (first-third row) and 1.5I_0 (fourth-sixth row). The laser-pulse length and wavelength are 25 fs and 780 nm. Red dashed circles in (a-x) indicate simulated and experimental photoelectron cutoff energies. The number in each panel is the integrated photoelectron yield normalized to the integrated net yield in the third column.

Figure 5: Comparison of simulated and experimental photoelectron cutoff energies scaled by the incident-laser ponderomotive energy U_p(I_0) for 5, 30, and 70 nm diameter gold nanospheres and laser peak intensities of (a) I_0 = 8.0 × 10^12 W/cm^2 and (b) 1.5I_0. The laser-pulse length and wavelength are the same as in Figure 4. Simulated cutoff energies including all interactions are shown for, respectively, direct ("All_Direct") and net ("All_Net", i.e., direct and rescattered) photoemission. Simulations only including incident- and plasmonic-field interactions are denoted as "Inc + Pl_Net". Yellow "plus" markers show atomic cutoff energies, 10U_p, scaled by the plasmonic intensity enhancement.
Consonant and Vowel Identification in Cochlear Implant Users Measured by Nonsense Words: A Systematic Review and Meta-Analysis

Purpose: The purpose of this systematic review and meta-analysis was to establish a baseline of the vowel and consonant identification scores in prelingually and postlingually deaf users of multichannel cochlear implants (CIs) tested with consonant-vowel-consonant and vowel-consonant-vowel nonsense syllables. Method: Six electronic databases were searched for peer-reviewed articles reporting consonant and vowel identification scores in CI users measured by nonsense words. Relevant studies were independently assessed and screened by 2 reviewers. Consonant and vowel identification scores were presented in forest plots and compared between studies in a meta-analysis. Results: Forty-seven articles with 50 studies, including 647 participants (581 postlingually deaf and 66 prelingually deaf), met the inclusion criteria of this study. The mean performance on vowel identification tasks for the postlingually deaf CI users was 76.8% (N = 5), which was higher than the mean performance for the prelingually deaf CI users (67.7%; N = 1). The mean performance on consonant identification tasks for the postlingually deaf

The offering of multichannel cochlear implants (CIs) to profoundly deaf and hard-of-hearing adults and children is a well-established medical procedure today, and there are more than 600,000 CI users in the world (The Ear Foundation, 2017). The CI is offered to patients with a large variety of causes for their hearing loss and leads to a considerable improvement in hearing for the majority of users. There is, however, large variability in speech perception outcomes after cochlear implantation (Dowell, Dettman, Blamey, Barker, & Clark, 2002; Rotteveel et al., 2010; Välimaa & Sorri, 2000). Thus, it is critical to have precise measures of how well CI users can perceive different speech sounds. Such measures are important for the fitting of CIs and the testing of new implant technology, but also for planning and assessing the effects of listening training and speech therapy. In recent years, traditional speech perception tests using sentences and words as stimuli have increasingly produced ceiling or near-ceiling effects in CI users (Blamey et al., 2013). This may be due to a number of factors, such as shorter duration of deafness before implantation, increased residual hearing of the implant candidates, and better hearing preservation in CI surgery. There is therefore an increasing need for more difficult tests, which provide fine-grained information on the perception of consonants and vowels. Speech perception tests with nonsense words, which are more difficult than real-word tests and less reliant on prior experience with a specific language, appear to be a valuable alternative for future clinical practice and research. However, in order for nonsense word tests to be maximally useful, it is necessary to establish a baseline of the typical level of consonant and vowel perception that CI users achieve on these tests. Additionally, it is important to determine how this baseline relates to performance on other speech perception tests for both prelingually and postlingually deaf CI users. The present systematic review and meta-analysis investigates the typical performance of CI users in nonsense word tests and the influence of some clinically relevant background factors on performance in these tests.
Testing of Speech Perception in CI Users

In the first years after the advent of the CI, speech perception in CI users was assessed more thoroughly and frequently than today, as the CI technology was new and regarded as experimental by many. In these assessments, the CI users were asked to repeat monosyllabic and bisyllabic words to assess their word perception and their consonant and vowel perception, and to repeat sentences with and without audiovisual support. Later, with improved implant technology, modified indications for implantation and, thus, improved hearing in the implantees, the test batteries were supplemented with sentences-in-noise tests.

The test batteries for clinical assessment of the quality of hearing in adults and children with CIs today typically consist of monosyllabic words and sentences presented in quiet and with added noise in free field, sometimes also with pure-tone audiometry in free field (Berrettini et al., 2011; Faulkner & Pisoni, 2013; Lorens et al., 2016). Usually, these tests are conducted without the possibility of lipreading, except for the poorest performers.

Testing of the speech perception of CI users is normally done with test lists of real-word monosyllables and sentences in the implantees' native language. Because 80% of the articles included in our meta-analysis involve English-speaking participants, we focus on tests with English words in the following paragraphs. Speech perception tests in other languages follow the same principles as the tests in English.

A common monosyllabic test is the consonant-vowel nucleus-consonant test created by Peterson and Lehiste (1962). This test is a special case of the consonant-vowel-consonant (CVC) test, which tests the perception of both real words and speech sounds. The consonant-vowel nucleus-consonant word lists are a set of 10 lists of 50 phonemically balanced words. The test has been controlled for text-based lexical frequency across lists. The Northwestern University Auditory Test No. 6 (NU-6) monosyllable test is another test of word and speech sound recognition with monosyllables in the CVC format, consisting of 50 words and 150 speech sounds (Tillman & Carhart, 1966). Yet another commonly used test is the Phonetically Balanced Kindergarten Word Test (Haskins, 1949). The test contains four 50-word lists and is still extensively used for assessing the speech perception of children who have hearing impairment. All three tests are commonly used in English-speaking countries and have been adapted to many other languages.

Real-word monosyllable recognition scores have been shown to correlate strongly with audiometric thresholds. In a study by Dubno, Lee, Klein, Matthews, and Lam (1995), a confidence limit for maximum word recognition scores of the NU-6 was obtained from 407 ears in a large group of young and aged subjects with confirmed cochlear hearing losses. The relationship between the pure-tone averages and the maximum word recognition scores on the basis of this study is displayed in a table by Stach (2009, p. 296).
As part of the development of implant technology, the implant companies regularly run clinical studies to test the benefits of new implants, speech processors, or speech-processing strategies. New technology is also tested in CI clinics, where company-supported or independent studies are conducted. Standard speech perception tests are used in this testing, typically repetition of words or sentences, but also more sophisticated tests involving, for instance, consonant and vowel identification or discrimination (Carlyon, Monstrey, Deeks, & Macherey, 2014; Frijns, Briaire, De Laat, & Grote, 2002; McKay, McDermott, Vandali, & Clark, 1992). A common test design for deciding which of two or more speech-processing strategies gives the best speech perception for the CI user is to measure consonant and vowel identification with each of the strategies and then compare the scores.

Open- or Closed-Set Tests

Speech perception is usually measured in either open- or closed-set/forced-choice test conditions, depending on what kind of information the clinician is seeking. Open-set tests provide a collection of detailed information about speech perception, listening capacity, and acoustic properties but require a substantial effort from the test leader for posttest analysis. Open-set tests have relatively small learning effects for the patient and can therefore be performed reliably at desirable intervals.

Closed-set tests are quickly performed and easily administered but give limited information about the perception of individual speech sounds. The person being tested responds by pushing a button or touching a screen, and the results are interpreted automatically and instantly by a computer. However, the learning effect is considerably larger than in open-set tests because of the limited number of possible answers (Drullman, 2005). In closed-set tests, all participants should perform significantly above chance level.

Tests of monosyllabic and bisyllabic words and sentences have traditionally been performed in open-set conditions, whereas vowel and consonant identification tests have been performed in closed-set conditions. Some commonly used closed-set tests of consonant and vowel identification are those by Hillenbrand, Getty, Clark, and Wheeler (1995); Shannon, Jensvold, Padilla, Robert, and Wang (1999); Tyler, Preece, and Tye-Murray (1987); and Van Tasell, Greenfield, Logemann, and Nelson (1992). An open-set test of phoneme recognition and confusion in Finnish is described by Välimaa, Määttä, Löppönen, and Sorri (2002a, 2002b).

Consonant and Vowel Identification

Consonants are part of a heterogeneous group of speech sounds characterized by voicing, duration, manner, and place of articulation. Phonetically, consonants are speech sounds produced with the air stream passing one or more constrictions on its way from the lungs through the vocal tract.
Vowels are characterized by the tongue position in the mouth cavity and by lip-rounding. Tongue position can be high, low, back, or front. Normally, vowels are voiced, and the air stream passes frictionless along the middle of the mouth cavity while the tongue is in a static position. The vowel is the nucleus of a syllable, and a syllable can be one vowel alone or a vowel with surrounding consonants. Consonants carry more varied types of phonetic information than vowels, but many of them have shorter duration and less acoustic energy. Because of this, vowel sounds are often easier to perceive than consonants, and it is widely accepted that vowels carry most of the intelligibility information in sentences (e.g., Kewley-Port, Burkle, & Lee, 2007).

Previous research has confirmed that CI users have more difficulties identifying consonants and vowels than persons with normal hearing, who typically achieve a score of 95%-100% on consonant and vowel identification tests (Kirk, Tye-Murray, & Hurtig, 1992; Sagi, Kaiser, Meyer, & Svirsky, 2009). In addition, consonant identification scores have usually been measured to be lower than vowel scores. For instance, two Finnish studies of CI users showed that 24 months after switch-on of the CIs, the average vowel recognition score was 80% and the average consonant recognition score was 71% (Välimaa et al., 2002a, 2002b).

Postlingually deaf CI users often have substantial problems identifying vowels, despite their long duration and high acoustic energy. The reason might be that the first and second formants (F1 and F2) are altered by the implant compared with what the users once used to hear. The same problem applies to the voiced consonants. Therefore, the failure rate in vowel identification by CI users may be as large as, or even larger than, the failure rate for voiced consonant identification.

Consonant and vowel identification tests provide more detailed information about the hearing of CI users than word or sentence tests. Identification of consonants and vowels can be measured with real-word or nonsense-syllable identification tests, and the scoring can be done by counting the number of correctly identified speech sounds. Other commonly used consonant and vowel identification tests have vowel-consonant-vowel (VCV) or consonant-vowel (CV) nonsense syllables as stimuli, and the consonants are typically presented in an [ɑ], [i], or [u] context with the target consonant in medial or initial position.

Different vowel contexts give somewhat different test results for the identification of consonants, because the formant transitions of the first and second formants differ in the vowel-consonant or consonant-vowel transition phase for the different vowels and consonants. The advantages and disadvantages of the different vowel contexts have been thoroughly evaluated by Donaldson and Kreft (2006), who concluded that the choice of vowel context has small but significant effects on consonant-recognition scores for the average CI listener, with the back vowels /ɑ/ and /u/ producing better performance than the front vowel /i/.

In typical vowel identification tests, vowels are presented in CVC or CV contexts, for example, in hVd, bVd, wVb, or bVb context, or alone. The hVd vowel test (Hillenbrand et al., 1995; Tyler, Preece, & Lowder, 1983) has been widely used with English-speaking CI users, although vowels in hVd context form real words in English (Munson, Donaldson, Allen, Collison, & Nelson, 2003).
Although a large number of studies have been published on the subject of speech perception in CI users, there is no international consensus or standard on how to measure the identification of vowels and consonants. Several countries use nationally standardized tests for speech perception measurements. An overview of different speech perception tests (sentence identification, CVC words, and number triplets) in Danish, Dutch, (British) English, French, German, Polish, and Swedish is given in a report from the European HEARCOM project (Drullman, 2005). However, this document only reports the use of meaningful CVC words (i.e., not nonsense words) for consonant and vowel identification.

In vowel and consonant recognition studies of postlingually deaf adult CI users, some predominant confusions have been identified. Van Wieringen and Wouters (1999) tested vowel and consonant recognition in Flemish-speaking CI users and found that /y/ was often confused with /e/ and that /ɪ/ was often confused with /ə/, showing that vowel length was recognized correctly. The consonant /t/ was often confused with /k/, and /ɤ/ was often confused with /z/, indicating that voicing and manner of articulation were recognized correctly. Munson et al. (2003) found that English-speaking CI users often confused /ɛ/ with /ɪ/ and /ɪ/ with /ɛ/, concluding that they recognized vowel length. Moreover, /d/ was confused with /g/ and /θ/ with /f/, indicating that voicing and manner of articulation were recognized. Välimaa et al. (2011) presented longitudinal data on vowel recognition and confusion patterns in Finnish informants from before CI surgery until 4 years post implantation. They also studied the effect of the duration of profound hearing impairment before implantation and the effect of the use of different implant devices after implantation. After 4 years, the most frequent confusions were /ø/ perceived as /ae/ and /e/ perceived as /ø/ or /ae/, which led to the conclusion that the Finnish front vowels were the most difficult to distinguish. This is in agreement with previous studies showing that vowels with smaller spectral differences are often the most difficult to identify (Munson et al., 2003; Skinner, Fourakis, Holden, Holden, & Demorest, 1996; Van Wieringen & Wouters, 1999).

A widely used method for the evaluation of the transmission of speech features is described in an article by Miller and Nicely (1955). Their method of classifying the consonant confusions by arranging them into confusion matrices (CMs) and calculating the information transmission of the linguistic features voicing, nasality, affrication, duration, and place of articulation is still in use.
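As an illustration of the Miller and Nicely (1955) approach, the following sketch collapses a consonant CM onto a single feature (voicing) and computes the relative information transmitted as the mutual information between presented and perceived feature values, divided by the stimulus entropy. The confusion counts and the four-consonant inventory are hypothetical.

```python
import numpy as np

# Hypothetical consonant confusion counts: rows = presented, cols = responded.
consonants = ["p", "b", "t", "d"]
cm = np.array([
    [30,  5, 10,  3],
    [ 4, 28,  2, 12],
    [ 9,  3, 33,  4],
    [ 2, 11,  5, 31],
], dtype=float)

voicing = {"p": 0, "b": 1, "t": 0, "d": 1}   # feature value per consonant

# Collapse the consonant matrix into a 2x2 feature confusion matrix.
fm = np.zeros((2, 2))
for i, ci in enumerate(consonants):
    for j, cj in enumerate(consonants):
        fm[voicing[ci], voicing[cj]] += cm[i, j]

p = fm / fm.sum()                            # joint probabilities p(x, y)
px, py = p.sum(axis=1), p.sum(axis=0)

# Mutual information T(x;y) and stimulus entropy H(x), both in bits.
mask = p > 0
t_bits = np.sum(p[mask] * np.log2(p[mask] / np.outer(px, py)[mask]))
h_bits = -np.sum(px[px > 0] * np.log2(px[px > 0]))
print(f"relative information transmitted for voicing: {t_bits / h_bits:.2f}")
```

The same collapse-and-measure step, repeated per feature (nasality, affrication, duration, place), yields the feature-wise transmission profile described above.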
Nonsense Syllable Test Words

Nonsense syllables have no meaning but are typically phonotactically legal in the language of the listener. The primary advantage of using nonsense syllables instead of real words to measure vowel and consonant identification is that the informant cannot guess which word is presented but has to rely on his or her hearing alone. Thus, the influence of other cognitive factors, such as vocabulary and inferential skills, is reduced compared with conducting the test with real words. Consequently, nonsense syllable tests tend to be more difficult than real-word tests, as the stimuli ideally do not match any existing representation in the user's mental lexicon.

Another advantage of nonsense syllable tests is that learning effects in multiple experiments with the same stimuli are very small compared with tests using real-word stimuli (Dubno & Dirks, 1982). Thus, it is possible to use the same nonsense syllable test for repeated examination of speech perception in the same individual to check for progress in listening ability.

Nonsense syllables are convenient to use in experiments measuring speech perception. In his classical article, Glaze (1928) showed that experiments using nonsense syllables evoke fewer associations in the participants and thus reduce between-participants variability in test results compared with experiments using real words.

Studies using nonsense syllables as stimuli can be compared across languages as long as the speech sounds included in the tests exist in both languages, and a few such studies have been conducted (e.g., Pelizzone, Cosendai, & Tinembart, 1999; Tyler & Moore, 1992).

Nonsense words used in studies of speech perception usually contain only one or, at most, two syllables, to avoid the influence of a possibly poor phonological working memory span on performance. However, some studies have used tests such as the Children's Test of Nonword Repetition (Gathercole, Willis, Baddeley, & Emslie, 1994) and other nonsense word tests primarily constructed to assess children's working memory span and cognitive abilities to study speech perception (Burkholder-Juhasz, Levi, Dillon, & Pisoni, 2007; Casserly & Pisoni, 2013; Nakeva von Mentzer et al., 2015). The nonsense word test battery of Gathercole et al. (1994) contains nonsense words with two, three, four, and five syllables, but even the bisyllabic nonsense words are poorly suited to measure vowel and consonant identification, as the same vowel or consonant can be found several times in the same word in different positions and several times in the same test sequence. This makes it more complicated to measure the prevalence of consonant or vowel confusions.

Milestones in the Development of CI Technology

A significant advance in CI technology was the transition from single-channel to multichannel implants in the beginning of the 1980s. The single-channel implants provided limited spectral information and very rarely gave open speech understanding, as only one site in the cochlea was stimulated. Multichannel implants with four channels or more, however, provide electrical stimulation at multiple sites in the cochlea with an electrode array and can also convey frequencies covering most of the frequency range of the speech sounds. All multichannel strategies are spectral resolution strategies, as they convey spectral information to the implantees.

The stimulation strategies of the early multichannel implants were either analog or pulsatile. The main difference between the two groups of strategies is that the first employs simultaneous stimulation, whereas the latter employs sequential stimulation. A major disadvantage of the analog stimulation strategy is channel interaction, an effect that obstructs speech perception through sound distortion. This problem is less prevalent in pulsatile, nonsimultaneous stimulation. All the stimulation strategies currently in use are pulsatile.
The discontinued implants from Ineraid/Symbion and from the University of California, San Francisco/Storz employed the compressed analog (CA) stimulation strategy. The CA strategy was also employed by Advanced Bionics in their earlier implants. Some years later, Advanced Bionics released simultaneous analog stimulation, which is a modified CA strategy. This strategy was applied until the mid-2000s. Several clinical studies have demonstrated open speech understanding with analog stimulation strategies (e.g., Dorman, Hannley, Dankowski, Smith, & McCandless, 1989), and several studies have also compared implants running pulsatile and analog stimulation (Tyler et al., 1996; Tyler, Lowder, Parkinson, Woodworth, & Gantz, 1995; Xu, Zwolan, Thompson, & Pfingst, 2005). The results have pointed toward better speech perception with pulsatile stimulation than with analog, although there has been large variability in the outcomes. Analog strategies are not used in CI processors today.

Variables Influencing Speech Perception in CI Users

It has been shown in many studies that there is large variability in the speech recognition performance of CI users (Dowell et al., 2002; Rotteveel et al., 2010; Välimaa & Sorri, 2000). For a given type of implant, auditory performance may vary from 0% to 100% correct, and thus the individual differences between CI users appear to be vastly larger than the effect of implant manufacturer. Auditory performance is here understood as the ability to discriminate, detect, identify, or recognize speech. A typical measure of auditory performance is the percentage correct score on open-set speech recognition tests. The review article by Loizou (1999) lists the following factors that have been found to affect auditory performance: the duration of deafness prior to implantation (a long duration appears to have a negative effect on auditory performance), age at onset of deafness (younger age is associated with better outcome), age at implantation (earlier implantation is associated with better outcome for prelingually deaf subjects), and duration of CI use (longer CI experience is associated with better outcome). Other factors that may affect auditory performance include the etiology of hearing loss, number of surviving spiral ganglion cells, electrode placement and insertion depth, electrical dynamic range of the CI, cognitive abilities, duration of hearing aid use before implantation, and signal-processing strategy (Blamey et al., 2013, 2015; Rotteveel et al., 2010; Spencer, 2004; Wie, Falkenberg, Tvete, & Tomblin, 2007).

It is critical to be aware of the influence of these factors when assessing and evaluating speech perception outcomes in CI users. Furthermore, it should be kept in mind that the influence of these and other factors on speech perception may be different for prelingually and postlingually implanted children and adults.

Some studies have even found that age at implantation is not a significant predictor of speech perception outcome for prelingually deaf children (e.g., Geers, Brenner, & Davidson, 2003; Wie et al., 2007). Wie et al. (2007) found that the variations in performance on speech perception tasks could be explained by daily user time, nonverbal intelligence, duration of CI use, educational placement, and communication mode (use of sign language or spoken language). The authors explained this result by the relatively high age at implantation of the participants in the study, as only one participating child was implanted before 24 months of age.
For a group of 65 postlingually implanted adults, Plant, McDermott, van Hoesel, Dawson, and Cowan (2016) identified different factors that predicted word recognition scores for unilaterally and bilaterally implanted CI users. For the unilaterally implanted group, predictors included a shorter duration of severe-to-profound hearing loss in the implanted ear and poorer pure-tone-averaged thresholds in the contralateral ear. For the bilateral group, a shorter duration of severe-to-profound hearing loss before implantation, lower age at implantation, and better contralateral hearing thresholds were associated with higher bilateral word recognition in quiet and speech reception thresholds in noise.

Transmission of Consonants and Vowels in an Implant

The transmission of consonants and vowels in CIs is designed to reproduce a speech signal that closely resembles the original by means of electrical stimulation patterns in the CI electrode. Failure to resemble the original signal is always explained from two viewpoints: limitations in the hearing system of the implant user caused by different variables (cf. the previous section) and technical limitations in the CI system. In a CI user with optimal conditions for the reception of speech, some important factors for the transmission of speech are the speech coding, the length and insertion depth of the implant, the input dynamic range and input frequency range of the speech processor, and the implant electrode properties.

Vowels are characterized by long duration and high energy compared with consonants, and as such, they are easily perceived by the implantees. Furthermore, vowels are characterized mainly by F1 and F2, the first two formants, which can be found in the frequency range between 200 Hz and 2500 Hz. Thus, provided the input frequency range of the implant includes frequencies as low as 200 Hz, all vowels should be possible to recognize.

For the perception of pitch, the insertion depth of the implant plays an important role. The tonotopy of the cochlea is organized with the low-frequency sounds in the apical region and the high-frequency sounds in the basal region. When the more apical part of the cochlea is stimulated, a darker pitch is perceived by the implantee. Thus, one should expect that users of the implants with the longest electrodes, like Med-El's, would obtain the best pitch perception. However, this is not always the case.

Some stimulation strategies are supposed to be better than others for the perception of voiced sounds. For example, the FSP/FS4/FS4-p strategies from Med-El code the fundamental frequencies on the most apical electrodes in addition to running ordinary continuous interleaved sampling (CIS) stimulation. The HiRes120 strategy from Advanced Bionics is marketed as improving the spatial precision of stimulus delivery and being more suitable for the perception of pitch and music than spectral envelope strategies like CIS or the Advanced Combination Encoder (Wouters, Francart, & McDermott, 2015).

The microphone sensitivity of the speech processors plays an important role in the perception of soft sounds: the higher the microphone sensitivity, the better these speech sounds are picked up. None of the implants have problems picking up soft speech sounds, as long as the sounds are within the input frequency range of the speech processor.
Consonants are a more heterogeneous group of speech sounds than the vowels. They can be characterized, for example, by long or short duration, by voicing or nonvoicing, or by being nasal or nonnasal. Many of the consonants, especially the unvoiced stops and fricatives, have high-frequency parts, which are easily picked up by the CI speech processors. Earlier research has shown that acoustic similarity of the consonants is the most important reason for confusion (Fant, 1973), as implant users most frequently confuse consonants that are pronounced in the same manner but with a constriction in different places in the mouth cavity. Consonants that are pronounced in a different manner in the same place are seldom confused. Furthermore, CI users have more trouble distinguishing between voiced consonants than between unvoiced ones and have the most trouble distinguishing between nasals and laterals.

Cognitive explanatory factors obviously play an important role in the perception of consonants and vowels but are outside the scope of this discussion.

Aim and Research Questions

The aim of this systematic review and meta-analysis was to examine previous research in order to investigate how well users of multichannel CIs identify consonants and vowels in tests using monosyllabic and bisyllabic nonsense words as stimuli. We wanted to ascertain the baseline of consonant and vowel perception in previous nonsense word research, use aggregated empirical findings and measurements to increase the statistical strength, and pool these studies in a meta-analysis. Specifically, we aimed to investigate the following three research questions:

1. What are the typical vowel and consonant identification scores in CI users when measured by nonsense syllables, and how do the typical vowel and consonant identification scores differ between prelingually and postlingually deaf implantees?

2. Which consonants and vowels are most frequently confused by CI users, and which consonants and vowels are most frequently identified correctly?

3. How do clinically relevant background factors, such as duration of CI use, influence performance on nonsense word tests?

To our knowledge, a systematic review and meta-analysis of studies on consonant and vowel identification in CI users tested by nonsense syllables has not been published before.

Method

This systematic review was conducted in accordance with the 27-item checklist in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (Moher, Liberati, Tetzlaff, & Altman, 2009).

Details of the systematic review protocol were registered with PROSPERO, the international prospective register of systematic reviews, on December 15, 2014. The protocol is available online at: http://www.crd.york.ac.uk/prospero/display_record.asp?ID=CRD42014015141.

The systematic review was performed in the following steps:
• Screening of articles for inclusion and exclusion.
• Extraction of information from the articles (coding).
• Pooling of data for statistical analysis.

A flow diagram displaying the process from searching, via screening and eligibility, to the final number of included articles is shown in Figure 1. The diagram is based on a template designed by PRISMA (Moher et al., 2009).

The forest plots displayed in Figures 2, 3, and 4 were generated by means of the software Comprehensive Meta-Analysis (CMA; Borenstein, Hedges, Higgins, & Rothstein, 2014).
Literature Searches

Detailed searches for primary and retrospective studies were performed in the following six databases: EMBASE, MEDLINE, PsycINFO, ERIC, Web of Science/Web of Knowledge, and Scopus. Initially, the databases Cochrane Library, Speech Bite, Svemed, Pubpsych, Proquest, Norart, ResearchGate.com, and Academia.edu were also searched by the review team, but these searches returned no results.

The searches were run three times, on August 13, 2014, April 6, 2015, and October 9, 2016, and were limited to peer-reviewed journal articles written in English, in Scandinavian languages (Norwegian, Swedish, and Danish), and in Finnish. The search strings consisted of two elements: (a) various terms referring to nonsense words and speech discrimination and (b) terms referring to CIs. All the search elements were truncated in order for the searches to include all conjugations of the nouns. Truncation was represented by an asterisk (*). Because "cochlear implant" is an unambiguous concept, unlike "nonsense word repetition," the number of search terms in (b) turned out to be considerably lower than in (a). The complete search syntaxes for the four Ovid databases EMBASE, MEDLINE, PsycINFO, and ERIC, as well as for Web of Science and Scopus, are listed in the Appendix.

Screening of Abstracts and Review of Full-Text Articles

The search results were imported into EndNote, v. X7.7.1 (Thomson Reuters), for removal of duplicates, books and book chapters, dissertations, editorials, systematic reviews, and articles in languages other than Danish, English, Finnish, Norwegian, and Swedish. Thereafter, the references were imported into the web-based systematic review software DistillerSR (Evidence Partners), which was used for the screening process.

Assessment of articles was performed in two phases: (a) screening of abstracts and titles and (b) full-text review of the remaining articles, as described in Figure 1. In Phase (a), two researchers (the first author, AKR, and the fourth author, MAS) independently evaluated all the identified titles and abstracts and excluded the studies missing one or both of the search terms cochlear implants and nonsense words with synonyms. Disagreements were resolved by discussion or by reading the full text of the articles. Furthermore, the abstracts were screened by AKR for the number of participants, and studies with fewer than three participants were excluded, as case studies with one or two participants did not fit the methodology of the systematic review.

In Phase (b), full-text articles were reviewed according to exclusion Criteria IV and V in Figure 1. During this phase, some of the articles were also excluded according to Criterion I, II, or III when this applied. Further details on the inclusion and exclusion criteria are found in the subsequent paragraphs.

The included articles described studies with three participants or more. We focused on the outcome of consonant and vowel identification tests measured by nonsense words in free field 6 months or more after implantation. If the use of repeated measures in longitudinal studies was reported in the article, we registered the most recent nonsense word scores. If different nonsense word tests were used for the same groups of participants, for example, in Kirk et al. (1992), we included the test that provided the results with the highest score. If the article referred to other articles by the same authors for more details about the tests, we extracted the necessary information from these.
• Studies on participants with single-channel CIs were excluded. This was based on research showing that implants need at least four channels to provide adequate speech perception in quiet (Cohen, Waltzman, & Fisher, 1993; Tyler et al., 1988).
• Studies measuring consonant or vowel scores with real-word stimuli and not with nonsense syllables were excluded.
• Studies measuring consonant or vowel scores with nonsense words of three or more syllables were excluded, as it is difficult to disentangle effects of working memory span from hearing when interpreting these results. In addition, the same target consonants or vowels are often presented more than once in such multisyllable test words.
• Studies assessing the identification of less than about 50% of the national inventory of vowels and consonants were excluded, as these studies presented vowel and consonant identification scores on the basis of too few consonants and vowels to represent the phoneme inventory of the language. For instance, there are 20-24 consonants in English, depending on the dialect, and for a study to be included, at least half of these had to be used to calculate a consonant identification score.
• Studies in which means and standard deviations of the consonant and vowel identification scores were not reported, were only reported graphically in diagrams, or could not be calculated from confidence intervals or standard errors were excluded (a conversion sketch follows after this list). For those excluded studies published less than 10 years ago, we wrote to the corresponding author to ask for the raw data from the study. Studies from which the raw data were received were included in the meta-analysis.
• Studies in which nonsense words were presented live instead of recorded were excluded, because of lower expected consistency in the test results than with recorded materials (Mendel & Owen, 2011).
• Studies in which the stimuli were presented with lipreading support were excluded.
• Studies using synthesized or electronically generated test stimuli were excluded.
• Studies reporting speech sound scores not separated into a vowel score and a consonant score were excluded.
• Studies in which the identification score for consonants was only reported as categories according to consonant properties like place, manner, or voicing (e.g., Nelson, Van Tasell, Schroder, Soli, & Levine, 1995) were excluded.
• In those cases where different articles were based on the same study participants and/or the same data, all but one of these articles were excluded. The article that included the highest number of participants was selected for further analysis.
• Studies including participants with a contralateral hearing aid in addition to an implant were excluded, unless it was clearly stated in the article that the benefit of the implant was better than the benefit of the hearing aid.
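Where only a standard error or a 95% confidence interval of the mean was reported, the SD can be recovered with the standard conversions sketched below. The example numbers are hypothetical, and the normal approximation in the CI conversion is a simplification; for small samples, a t quantile should replace z.

```python
import math

def sd_from_se(se: float, n: int) -> float:
    """Recover the SD from a reported standard error of the mean: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def sd_from_ci(lower: float, upper: float, n: int, z: float = 1.96) -> float:
    """Recover the SD from a reported 95% confidence interval of the mean
    (normal approximation): SE = width / (2 * z), then SD = SE * sqrt(n)."""
    se = (upper - lower) / (2 * z)
    return se * math.sqrt(n)

# Hypothetical example: a study reporting a mean of 74% with 95% CI [68, 80], n = 20.
print(round(sd_from_ci(68.0, 80.0, 20), 1))   # about 13.7
```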
Risk of Publication Bias

Risk of publication bias was commented on qualitatively and assessed by inspection of funnel plots generated in CMA. A symmetrical funnel plot could indicate the absence of publication bias. However, an asymmetrical funnel plot could indicate several conditions, for instance, heterogeneity, publication bias, or chance, and the interpretation of such asymmetry with regard to publication bias has been highly disputed in previous research (Lau, Ioannidis, Terrin, Schmid, & Olkin, 2006; Sterne et al., 2011). Although it is common in meta-analyses to correct the asymmetry in funnel plots by the "trim-and-fill" method, we chose not to make use of this technique in our study, as there are substantial methodological problems related to it (Lau et al., 2006). Effect sizes may be underestimated when publication bias does not exist and overestimated when publication bias does exist, and thus it can be argued that the method is inadequate as a corrective technique (Simonsohn, Nelson, & Simmons, 2014). Therefore, we chose not to draw definite conclusions about publication bias in the case of asymmetry.

Quality Assessment

Publications considered by the review team to be of weak overall quality were excluded from the systematic review. The quality criteria were:
• inconsistent presentation of results;
• errors in the analyses; and
• lack of transparency, for example, a missing description of the study methods.

Selection and Coding of Data

A pilot coding of 11 articles was performed by MAS to test the strength of the categories in the coding form. After this, an evaluation of the pilot coding was performed by the review team to develop the final coding form, in which the selection of coding parameters was based on our research questions. The following data were extracted from the articles: author, title of article, publication year, journal, aim, language, study design, and absence or presence of a control group. For studies including participants with an implant, the following measures were coded: number of participants; number of postlingually/prelingually implanted participants; number of participants with auditory neuropathy spectrum disorder; implant type; speech-processing strategy; age at testing; age at implantation; duration of implant use; duration of deafness before implantation; age at onset of deafness; stimulation level; number of unilaterally or bilaterally, sequentially, or bimodally implanted participants; identification score for vowels; most confused vowel; identification score for consonants; most confused consonant; monosyllabic real-word identification score; and score from postoperative audiometric measurements.

For participants with normal hearing serving as control groups, the following measures were coded: number of participants, identification score for vowels, most confused vowel, identification score for consonants, most confused consonant, and monosyllabic real-word identification score. The data were extracted to the form by AKR.

Strategy for Data Synthesis

Both aggregate and individual participant data were used. We used quantitative methodology on the included studies, which were sufficiently homogeneous. Vowel and consonant identification scores and vowel and consonant confusions were compared between studies and between languages, despite cross-linguistic differences (Tyler & Moore, 1992).
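As a sketch of the random-effects pooling described in the Analysis subsection below, the following illustrates how per-study mean scores could be weighted and combined with a DerSimonian-Laird estimate of the between-study variance. All study values are hypothetical, and CMA's exact implementation may differ in detail.

```python
import numpy as np

# Hypothetical per-study mean identification scores (%), SDs, and sample sizes.
means = np.array([76.0, 81.5, 69.0, 74.2])
sds = np.array([12.0, 9.5, 14.0, 11.0])
ns = np.array([20, 15, 32, 11])

var_within = (sds / np.sqrt(ns)) ** 2        # variance of each study mean

# Fixed-effect weights and the Q heterogeneity statistic.
w_fe = 1.0 / var_within
mean_fe = np.sum(w_fe * means) / np.sum(w_fe)
q = np.sum(w_fe * (means - mean_fe) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2.
df = len(means) - 1
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled mean score, and its standard error.
w_re = 1.0 / (var_within + tau2)
pooled = np.sum(w_re * means) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled score = {pooled:.1f}%, 95% CI half-width = {1.96 * se_pooled:.1f}")
```

Adding tau^2 to every study's within-study variance is what distinguishes the random-effects weights from fixed-effect weights: it widens the confidence interval to reflect genuine between-study differences in the true scores.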
Analysis
Our meta-analysis included studies reporting means and standard deviations. A random effects model was chosen over a fixed effects model to average the effect sizes across studies, as it does not assume a shared common true effect (Borenstein, Hedges, Higgins, & Rothstein, 2009).
Research Question 1, "What are the typical vowel and consonant identification scores in cochlear implanted participants when measured with nonsense syllables, and how do the typical vowel and consonant identification scores differ between prelingually and postlingually deaf implantees?" was answered statistically by pooling the studies in CMA. Individual consonant and vowel identification scores were weighted by the random effects model, averaged across studies, and presented as forest plots in Figures 2, 3, and 4.
To answer Research Question 2, "Which consonants and vowels are most frequently confused by CI users, and which consonants and vowels are most frequently identified correctly?" we constructed meta-CMs to display the three most common vowel and consonant confusions from the 11 studies in which this information was available. In some articles, this information was given qualitatively, and in these cases our presentation of the results was also qualitative.
To answer Research Question 3, only users with postlingual deafness were included in the analysis, as very few studies reported consonant and vowel scores for the prelingually deaf group. We performed a univariate regression analysis of the weighted mean consonant identification score against duration of CI use. Real-word monosyllable score and vowel identification score were omitted as independent and dependent variables in the analyses because they were only reported in 17 studies and 6 studies, respectively. We obtained beta regression coefficients to characterize the univariate relationship and quantified the percentage of between-studies variance explained by the covariates using R² (Borenstein et al., 2009).

Study Characteristics
The results are based on analyses of the 50 studies reported in the 47 included articles, and the study characteristics are summarized in Table 2 and below. The articles that met our inclusion criteria were published between 1989 and 2016. Three of these articles were treated as two independent studies each in the meta-analysis, with different participants in each study (Kirk et al., 1992; Munson et al., 2003; Tyler & Moore, 1992). In 38 of the studies, the participants spoke English, and 32 of these studies had participants with American English as their mother tongue. In eight of the remaining nine studies, the participants spoke Flemish, French, German, Italian, or Japanese. In the final study, the participants reportedly spoke one of seven mother tongues, namely, Albanian, French, German, Italian, Russian, Spanish, and Swahili (Pelizzone et al., 1999). The large majority of participants (581 of 647) were reported as postlingually deaf and the rest (66) as prelingually deaf. As the criteria for prelingual and postlingual deafness differed between studies, and often were not reported, we used the studies' own reports of prelingual and postlingual deafness in our statistics.
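As a companion to the Analysis section above, here is a hedged Python sketch of random-effects pooling of study means. It uses the DerSimonian–Laird estimator, one common implementation of the Borenstein et al. (2009) framework; the review used CMA, whose exact computations may differ, and the input means and standard errors below are hypothetical.

```python
# Random-effects pooling of study means (DerSimonian-Laird estimator).
import numpy as np

def random_effects_pool(means, ses):
    """Return the pooled mean and its standard error under a random effects model."""
    means, ses = np.asarray(means, float), np.asarray(ses, float)
    w = 1.0 / ses**2                          # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * means) / np.sum(w)
    q = np.sum(w * (means - fixed) ** 2)      # Cochran's Q
    df = len(means) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-studies variance estimate
    w_star = 1.0 / (ses**2 + tau2)            # random-effects weights
    pooled = np.sum(w_star * means) / np.sum(w_star)
    return pooled, np.sqrt(1.0 / np.sum(w_star))

mean, se = random_effects_pool([42, 55, 38, 61], [6.0, 3.5, 8.0, 2.5])
print(f"pooled = {mean:.1f}%, 95% CI = [{mean - 1.96*se:.1f}, {mean + 1.96*se:.1f}]")
```

The random-effects weights shrink the influence of very precise studies relative to a fixed-effect analysis whenever the between-studies variance is positive, which is the property motivating the model choice above.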
Six hundred thirteen participants were unilaterally implanted, 10 bilaterally, and 24 bimodally. The number of participants per study varied between three and 56. Three articles described CI users with a hearing aid on the contralateral ear (bimodal users; Gani, Valentini, Sigrist, Kos, & Boex, 2007; Incerti, Ching, & Hill, 2011; Sheffield & Zeng, 2012). From these articles, we included in our meta-analysis only the results obtained without a hearing aid. In one of the articles, the participants' vowel perception was tested both with wVb and with bVb words (Kirk et al., 1992). In accordance with our inclusion criterion that participants should not be represented in the material more than once, we chose to use the bVb words in our analyses, as these gave the highest mean score of vowel perception.
The participants used implants from the CI manufacturers Advanced Bionics, Cochlear, Digisonic/Neurelec, Ineraid/Symbion, Laura, and Med-El. Many studies reported results from participants with implants from more than one manufacturer, or results in which one implant used several stimulation strategies; thus it was not always possible to pool results per implant model or per stimulation strategy.
The mean age at onset of deafness was 31.6 years (SD = 18.0 years, range = 2.6–52.4 years), reported in 28 studies, and the mean duration of profound deafness before CI was 14.8 years (SD = 8.1 years, range = 2.7–38.9 years), reported in 29 studies.
Only two of the included studies had children or adolescents as participants (Arisi et al., 2010; Tyler, 1990). In the study by Tyler (1990), the five children who participated had a mean age of 8.5 years (SD = 1.6 years, range = 6.8–10.3 years) and obtained a consonant identification score of 30% (SD = 13.2%, range = 19%–50%). In the study by Arisi et al. (2010), 45 adolescent participants had a mean age of 13.4 years (SD = 2.6 years, range = 11–18 years) and obtained a consonant identification score of 53.5%.

Research Question 1: What Are the Typical Vowel and Consonant Identification Scores in CI Users When Measured by Nonsense Syllables, and How Do the Typical Vowel and Consonant Identification Scores Differ in Prelingually and Postlingually Deaf Implantees?
Table 3 shows the vowel and consonant identification scores for the studies with prelingually deaf participants, the studies with postlingually deaf participants, and the whole sample of 50 studies. All scores are weighted by the random effects model (Borenstein et al., 2009). Only five studies reported scores on vowel identification for the postlingually deaf (Cosendai & Pelizzone, 2001; Gani et al., 2007; Ito, Tsuji, & Sakakihara, 1994; Kirk et al., 1992; Pelizzone et al., 1999). Four of these studies (including 30 participants) reported both consonant and vowel identification scores. For the prelingually deaf, a vowel score for one CI user was reported in only one article, which also reported a consonant score for the same user (Gani et al., 2007). Another article reported the consonant score of one prelingually deaf CI user (Bhattacharya & Zeng, 2007). These scores could not be included in the analyses because of an SD of 0. Finally, vowel identification scores for the normal-hearing group were only calculated in one study, with a reported mean score of 98.3% (SD = 1.0%) (Kirk et al., 1992).
Consonant identification scores were reported in 46 articles (48 studies). Four of these articles had to be excluded because the consonant scores could not be split into one score for the prelingually deaf and one for the postlingually deaf (Kirk et al., 1992; Munson et al., 2003; Stacey et al., 2010; Van Wieringen & Wouters, 1999). Consonant identification scores were not reported for any of the normal-hearing control groups, which were included in 13 of the studies. In many of these studies, the control group was used for calibrating the consonant and vowel identification test in the local dialect. This was done by requiring a score of 95% or higher on the test by the control group before the test could be used for testing cochlear-implanted participants. If the score for the control group turned out to be lower than the limit set in the study, the consonant identification test was modified to bring the score above the limit, for instance, by removing nonsense syllables with high failure rates from the test, such as certain test words pronounced in a dialect little known to the participants.
In Figures 2, 3, and 4, the vowel and consonant identification scores are presented as forest plots, showing the weighted mean and the 95% confidence interval for each study, arranged in ascending order. Ceiling effects were observed in the individual scores of the included studies, especially in the vowel scores.
Only five studies reported consonant identification scores for both the prelingually and postlingually deaf CI users, and no studies reported vowel identification scores for both groups. Consonant identification scores for the postlingually deaf users were on average 10.9% better than for the prelingually deaf users (SD = 39.7%, range = −22.5% to 47.5%, z[5] = 0.61). This difference in scores was not statistically significant (p = .54, df = 4). Hence, it is unclear whether there is a difference in consonant perception between prelingually and postlingually deaf CI users.

Research Question 2: Which Consonants and Vowels Are Most Frequently Confused by CI Users, and Which Consonants and Vowels Are Most Frequently Identified Correctly?

Vowel Confusions
Details on individual vowel confusions were reported in only one of the included articles (containing two studies; Kirk et al., 1992) but were based on quantitative data from 27 CMs. This article reports results from participants with normal hearing and two groups of CI users: Ineraid and Nucleus users. Vowel stimuli were given both in bVb context and in wVb context. Identifications and misidentifications were reported qualitatively, and the subjects with normal hearing made only a few errors. In the bVb context, mean vowel identification was 50.5% (SD = 4.8%, range = 30.0%–77.7%) for Nucleus CI users and 52.0% (SD = 4.0%, range = 32.5%–82.5%) for Ineraid CI users. In the wVb context, the vowel identification scores were somewhat lower than in the bVb context for both implants. In summary, the long vowels /iː, æː, ɑː/ and /uː/ were seldom misidentified, but the short vowels /ɪ, ɛ, ʌ/ and /ʊ/ were often confused with other short vowels. /ʊ/ was, however, sometimes also confused with /ɑː/ in wVb context. Additionally, more short vowels were confused in the wVb context than in the bVb context.
Consonant Confusions
Details about consonant confusions were reported in 13 of the included articles (15 studies; Donaldson & Kreft, 2006; Dorman & Loizou, 1996; Dorman et al., 1990; Doyle et al., 1995; Incerti et al., 2011; McKay et al., 1992; Munson et al., 2003; Pelizzone et al., 1999; Sagi et al., 2009; Teoh, Neuburger, & Svirsky, 2003; Tyler, 1990; Tyler & Moore, 1992; Van Wieringen & Wouters, 1999). In 11 of these articles, the consonant confusions were reported in CMs. Table 4 gives an overview of these 11 articles. Detailed results for the three most frequently correctly identified consonants from the 11 articles are shown in Table 5, and details about the most common consonant confusions from the 11 articles are presented in a meta-CM in Table 6. Because of the low number of articles presenting CMs (11), we chose to base our study's matrices on the nine consonants that were used in all 15 studies: /b, d, p, t, k, n, m, s/, and /z/. We also chose to pool articles reporting studies conducted in different languages (Australian English, American English, and Flemish) and to pool those with different kinds of stimuli: Cɑ, Ci, Cu, ɑCɑ, iCi, and uCu. We also pooled the only article that included children as participants (Tyler, 1990) with the remaining articles.
In two studies (Dorman et al., 1990; Munson et al., 2003), the participants were divided into poor and better performers; in one study, the participants were divided into poor, intermediate, and better performers (Van Wieringen & Wouters, 1999); and in two studies, the participants were divided into three groups according to type of implant (Doyle et al., 1995) or according to the participants' native language (Tyler & Moore, 1992). In each of these studies, the data from the CM of each group were plotted into the table and the meta-CM. Thus, a total of 17 CMs were pooled into Table 5 and the meta-CM in Table 6.
In three of the articles, several consonant identification tests were given to the same participants. We chose the better of the two outcomes when two speech processors were compared (Dorman & Loizou, 1996; McKay et al., 1992). We chose the outcomes based on use of the CI alone when one CM was based on the CI alone and one on CI + hearing aid (Incerti et al., 2011). In one article (Donaldson & Kreft, 2006), the consonant identification tests were performed in six contexts, Cɑ, Ci, Cu, ɑCɑ, iCi, and uCu, and averaged over all conditions; we included the pooled data in our analyses. When several CMs were presented, obtained with and without background noise and with and without lipreading (Incerti et al., 2011), testing in the quiet, auditory-only condition was chosen.
As Table 5 shows, the consonants that were most frequently identified correctly were the unvoiced stops /t/ and /k/. The meta-CM in Table 6 shows that the most frequent confusions were /k/ confused with /t/ and /m/ confused with /n/.
Table 3. Means, standard deviations, and ranges of the study variables for the prelingually and postlingually deaf CI users.

Research Question 3
(a) The weighted scores of age at implantation and duration of implant use for the prelingually and postlingually deaf CI users are reported in Table 7. The monosyllable scores are reported in Table 3. Because only six studies report results for prelingually deaf CI users, a bivariate metaregression was not carried out, and Research Question 3(a) could not be answered.
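Returning to the meta-CM construction described earlier in this section, the pooling procedure (per the notes to Tables 5 and 6) can be sketched in a few lines of Python: each study contributes its top confusions, weighted by its number of participants. The study data below are made up for illustration.

```python
# Sketch of pooling per-study confusion-matrix (CM) tops into a meta-CM,
# weighting each study's confusions by its number of participants.
from collections import Counter

CONSONANTS = ["b", "d", "p", "t", "k", "n", "m", "s", "z"]  # the nine pooled consonants

# (n_participants, top confusions as (presented, perceived) pairs) per study -- toy data
studies = [
    (10, [("k", "t"), ("m", "n"), ("p", "t")]),
    (7,  [("k", "t"), ("d", "b"), ("m", "n")]),
    (12, [("m", "n"), ("s", "z"), ("k", "t")]),
]

meta = Counter()
for n_participants, confusions in studies:
    for pair in confusions:
        meta[pair] += n_participants  # weight confusion by study size

total = sum(meta.values())
for (presented, perceived), weight in meta.most_common(3):
    print(f"/{presented}/ heard as /{perceived}/: {100 * weight / total:.1f}%")
```

With the toy numbers above, /k/→/t/ and /m/→/n/ dominate, mirroring the pattern the review actually found in Table 6.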
(b) Only five studies reported a vowel identification score for the group of postlingually deaf CI users. This is too few to adequately represent the included studies, and further analyses were therefore not performed on this group. The vowel identification scores can be examined in Table 3.
We decided to omit monosyllable scores from the multiple regression model with postlingually deaf CI users because of the small number of studies (N = 14). A univariate regression model was then constructed with duration of implant use as the moderator variable and consonant identification score as the dependent variable. The results of the univariate regression were β = 2.6, SE = 1.4, 95% confidence interval = [−0.22, 5.3], z[36] = 1.81, and not significant (p = .071). The proportion of total between-studies variance explained by the model was R² = .59, N = 36.

Publication Bias
To optimize the quality of our included study sample, we only included peer-reviewed, published studies written in English, Finnish, or a Scandinavian language. Although we performed searches in a number of grey-material databases at the beginning of our systematic review process, without finding any relevant studies, some unpublished and even published research may still be missing from our searches. Also, relevant studies may have experienced delayed publication for various reasons. Thus, there might be some publication bias in our systematic review.
By visual inspection of the funnel plot for the consonant identification scores of the postlingually deaf, we noticed that the studies were slightly scattered to the left of the mean of the funnel plot. The asymmetry in the funnel plot may be a sign of publication bias, heterogeneity, or chance.
Note. Ca = Consonant-a; Ci = Consonant-i; Cu = Consonant-u; aCa = a-Consonant-a; iCi = i-Consonant-i; uCu = u-Consonant-u.
Table 5. Overview of the three most frequently correctly identified consonants in the included studies.

Discussion
The purpose of this systematic review and meta-analysis is to establish a baseline of the vowel and consonant identification scores in prelingually and postlingually deaf users of multichannel CIs tested with CVC and VCV nonsense syllables.
The mean consonant and vowel identification scores for the prelingually and postlingually deaf CI users show that performance was well below ceiling for both groups and that scores were higher for vowels than for consonants. The mean differences between the consonant identification scores for the prelingually and postlingually deaf CI users were not statistically significant.
Details of the vowel confusions were given qualitatively and in only one article. Details of the consonant confusions were given in CMs in 11 articles. Our meta-CM showed that the most frequently confused consonants were /k/ confused with /t/ and /m/ confused with /n/.
In a univariate regression model of consonant identification score on duration of implant use for postlingually deaf CI users, duration of implant use explained 59% of the variance in effect sizes. The model was not statistically significant (p = .071).
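The univariate metaregression summarized above (β = 2.6, SE = 1.4) is, at its core, an inverse-variance weighted least squares fit. The following Python sketch shows that core computation; it is a simplified version of what CMA does (for example, it uses a model-based covariance rather than a random-effects metaregression with estimated between-studies variance), and the data are illustrative, not the review's actual study values.

```python
# Inverse-variance weighted least squares: consonant score vs. years of CI use.
import numpy as np

def weighted_metaregression(x, y, ses):
    """Fit y = a + b*x with weights 1/SE^2; return the slope and its SE."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(ses, float) ** 2
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    cov = np.linalg.inv(X.T @ W @ X)      # model-based covariance of the coefficients
    return beta[1], np.sqrt(cov[1, 1])    # slope and its standard error

b, se = weighted_metaregression(
    x=[1.0, 2.5, 0.5, 4.0, 3.0],          # duration of implant use, years (hypothetical)
    y=[45.0, 52.0, 38.0, 60.0, 55.0],     # consonant score, % (hypothetical)
    ses=[4.0, 3.0, 6.0, 2.5, 3.5])        # per-study standard errors (hypothetical)
print(f"slope = {b:.2f} %/year (SE = {se:.2f})")
```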
Research Question 1: Typical Vowel and Consonant Identification Scores
We could not draw definite conclusions about differences in consonant identification between prelingually and postlingually deaf CI users because of the large difference in sample size between the groups (six studies with prelingually deaf and 44 studies with postlingually deaf participants). For the same reason, Research Question 1 could not be answered with regard to vowel identification score, as only one article with one participant reported a vowel score for prelingually deaf CI users and five articles reported vowel scores for postlingually deaf CI users.
Visual inspection of Table 3 shows that the vowel identification scores were substantially higher than the consonant identification scores for both prelingually and postlingually deaf CI users and that the total vowel score was approximately 16% higher than the total consonant score. This can be explained by the known fact that vowels carry more acoustic energy than most consonants. The vowels in the CVC test words also have longer duration than the consonants in the VCV test words and may therefore be easier to perceive, as the participants have more time to listen to them.
The consonant score for the prelingually deaf implant users was below 50% and more than 10% lower than the consonant score for the postlingually deaf (see Table 3). When we examined the six included studies with prelingually deaf participants, we noticed that they all had participants with a high age at implantation (range = 6.8–31.5 years) and, thus, a long duration of deafness before implantation. Many studies have shown that prelingually deaf individuals implanted before 2 years of age are more likely to obtain greater benefit from the implant for open speech perception than those implanted at a higher age (May-Mederake, 2012; Quittner, Cejas, Wang, Niparko, & Barker, 2016; Tobey et al., 2013). Studies of prelingually deaf children implanted before 1 year of age even show that their speech perception measures are superior to the corresponding measures for postlingually deaf CI users, for prelingually deaf later-implanted children, and for CI users with a progressive hearing loss before implantation (Colletti, Mandalà, & Colletti, 2012; Dettman et al., 2016; Holman et al., 2013).
Research Question 2: Vowels and Consonants Most Frequently Confused and Most Frequently Correctly Identified
In 11 of the articles included in our meta-analysis, consonant confusions were presented in CMs, making the results easy to quantify. In the spirit of the meta-analytic approach, the CMs from the 11 articles were merged into one meta-CM displaying the three most frequently confused consonants from each CM. It is a well-known phenomenon in phonetic and audiologic research that confusions between speech sounds most frequently happen within a group of sounds with different places of articulation but similar manner of articulation. Fant (1973) showed that the acoustic similarities of consonants grouped according to manner of articulation, for instance, stops, fricatives, and nasals, are significant for speech sound perception. The most frequently confused consonants in this study had the same manner of articulation and were thus acoustically similar, differing only in place of articulation. /t/ is an unvoiced dental/alveolar stop, and /k/ is an unvoiced velar stop; /m/ and /n/ are voiced nasals. In both confusions, different places of articulation were confused within the same category of manner of articulation.
The relatively high percentage of correct identification of the unvoiced stop consonants /t/ and /k/ in VCV context displayed in Table 5 can be explained by the fact that CI users listen to formant transitions in the adjacent vowels for identification. Consonants with the same manner but different place of articulation would be difficult to identify if formant transitions were not available. Moreover, the quality of the aspiration of the unvoiced stops also makes them easier to recognize than the voiced stops. There is a distinct audible difference between the aspiration following the release of /p/, /t/, and /k/, resembling the sound of the corresponding fricatives produced at the same place.
/k/ and /t/ were found to be the most frequently correctly identified consonants, but /k/ was also the consonant most frequently confused, namely, with /t/. This may seem contradictory, but the most likely explanation is that the other consonants in the CMs of the included studies, /b, d, n, m, s, z/, are confused more broadly and more frequently with a number of other speech sounds, including those not included in our study, whereas the three unvoiced stops are almost exclusively confused among themselves. Apparently, CI users perceive the unvoiced stops as the most audibly distinct group among the consonants included in this study.

Research Question 3: The Association Between Age at Implantation, Duration of Implant Use, and Real-Word Monosyllable Score on Vowel and Consonant Identification Scores in Prelingually and Postlingually Deaf CI Users
Because of the low number of included studies reporting a consonant or vowel identification score for the prelingually deaf, a statistical analysis of the associations with the moderators could not be performed for this group. However, many previous studies have investigated this, and it is well known that age at implantation plays an important role in the outcome of speech perception tests for prelingually deaf CI users (Holman et al., 2013; Tobey et al., 2013). Presumably, this is also the case for vowel and consonant tests measured with nonsense words.
For the postlingually deaf CI users, we constructed a univariate regression model in which duration of implant use could explain 59% of the variance in consonant score. After implantation, CI users need a period of adaptation to the implant sound, which in most cases varies from 3 months to 1 year. Until stability of the fitting parameters is reached, the implantees experience a gradual improvement in the benefit of the implants. Schmidt and Griesser (1997) showed that this stability was reached after about 1 year.
Earlier studies have shown a close relationship between consonant and vowel identification scores and real-word monosyllable scores (e.g., Rødvik, 2008). Because of the low number of studies that reported real-word monosyllable scores in quiet for the postlingually deaf implantees (N = 14), we could not confirm this relationship in the meta-analysis. It also needs to be pointed out that three different real-word monosyllable tests were used in the included studies, and the consistency of the pooled means may therefore not be satisfactory.

Limitations
Exclusion of Studies Reporting Vowel Identification Scores Measured by Real Words
Our criterion of only including studies that measured vowel and consonant scores with nonsense words demanded the exclusion of studies in which real words were used. The hVd nine-vowel test by Tyler et al. (1983) and the hVd 12-vowel test by Hillenbrand et al. (1995) were used to calculate vowel identification scores in 28 of the included studies, in which consonant identification scores were also measured. These test scores were excluded from this meta-analysis, as all of the hVd combinations produced real English words and also included diphthongs. Among the six included studies in which vowel scores were measured using nonsense words, three described Swiss (French-speaking) participants (Cosendai & Pelizzone, 2001; Gani et al., 2007; Pelizzone et al., 1999), one described Japanese participants (Ito et al., 1994), and two described English-speaking participants from the United States (Kirk et al., 1992). It appears that many studies conducted with English-speaking participants use real-word tests for vowel identification but nonsense-syllable tests for consonant identification, whereas studies conducted with participants with other native languages more often use nonsense syllables for obtaining the vowel identification score as well. The reason might be the lack of a validated nonsense-syllable vowel test in English, or that other languages do not have as many minimal pairs or triplets as English.
The consequences of excluding studies in which real words were used to measure consonant and vowel identification scores can be considered both positive and negative. On the positive side, consonant and vowel scores are collected from a homogeneous material and can be compared cross-linguistically. On the negative side, the collected material is smaller than it would have been if consonant and vowel scores measured with real words had been included, and thus the statistical power is lower.
Use of Nonsense Syllable Tests to Avoid Ceiling Effects in Speech Perception Testing
When the outcomes of speech perception tests approach ceiling, the tests should be replaced with more difficult tests. This is usually done in one of two ways: either by adding noise to test words and sentences or by exchanging the real-word tests for nonsense-syllable tests. These are two very different approaches to increasing the level of difficulty, and both have advantages and disadvantages. A speech-in-noise test is most frequently preferred in clinics, and one reason may be that such tests allow for the assessment of speech perception in everyday situations, which often involve a degree of environmental noise. Although the nonsense-syllable identification test does not correspond closely to everyday speech perception situations, it has a major advantage in its relative independence of cognitive and contextual factors, such as language abilities, language experience, inferential skills, working memory capacity, and use of sentence context for comprehension. Such a test is valuable in research and in clinics, as it provides information about minute details of the speech sound perception of the implantees, details that cannot easily be obtained with other tests. This is useful for the fitting of implants and for the planning of individual listening therapy.

Choice of Time Frame for the Inclusion of Articles
The articles included in the meta-analysis range in publication year from 1989 to 2016 and report test results on CI users with multichannel implants of four channels or more. The validity of our choice is supported by Figure 5, which shows that the correlation between publication year and consonant score in the included articles is low and not statistically significant (.187; p = .202). Hence, factors other than implant technology would probably explain the consonant score, or dominate in a regression model with consonant score as the dependent variable.
Since 1989, there has been a transition from analog strategies in Symbion/Ineraid and feature-extraction strategies in earlier Cochlear devices (F0F2 and F0F1F2) to n-of-m and CIS-derived stimulation strategies. More recently, there has been a transition to the fine-structure stimulation strategies from Med-El, which convey the fundamental frequency in the coding algorithm. All these modern strategies are spectral-resolution strategies and can thus deliver pitch information to the inner ear, unlike the earlier single-channel implants. The spectral-resolution strategies are mainly pulsatile (except for the analog strategies), delivering information to the electrodes as a train of narrow, nonsimultaneous pulses. Some of the recent stimulation strategies from Advanced Bionics have even employed combined pulsatile and simultaneous (analog) stimulation.
There has been development in microphone technology since the early years of CIs. The input frequency range has increased, and the overall microphone quality has improved. However, microphone sensitivity and internal microphone noise have not improved noticeably, although the availability of good microphones has increased. The benefit of an increased frequency range in the speech processors for the postlingually deaf is also debatable, because perceived pitch depends on where the implant is located in the cochlea rather than on the input frequency range of the microphone. Thus, the improvements in speech processor technology may not be of great importance in a clinical test situation with a good signal-to-noise ratio.
The largest improvements and developments of implant technology since 1989 have followed the advances in conventional hearing aids by integrating a large amount of technology from the hearing aid industry. For instance, refined and further developed automatic gain controls with new noise-reduction and compression algorithms have been implemented in the speech processors from all implant manufacturers. There has also been a trend toward smaller processors and toward controlling the speech processors by remote controls or by "apps" on the users' smartphones. All this may have had a substantial impact on speech perception in daily life but probably only a minor impact on speech perception in a clinical environment.

Conclusions
This systematic review and meta-analysis included peer-reviewed studies using nonsense syllables to measure the consonant and vowel identification scores of CI users, both with and without control groups.
The mean performance on consonant identification tasks for the postlingually deaf CI users, from 44 studies, was higher than for the prelingually deaf users, reported in six studies. No statistically significant difference between the scores for prelingually and postlingually deaf CI users was found.
The consonants that were not correctly identified were typically confused with other consonants with the same acoustic properties, namely, voicing, duration, nasality, and silent gaps.
A univariate metaregression model of consonant score against duration of implant use for postlingually deaf adults indicated that duration of implant use predicts a substantial portion of their consonant identification ability. No statistical significance was found for this model.
Tests with monosyllabic and bisyllabic nonsense syllables have been employed in research on CI users' speech perception for several decades. Such studies reveal information about the hearing of cochlear-implanted patients that the standard test batteries in most audiologic clinics do not, information that is very useful for the mapping of CIs and for the planning of habilitation and rehabilitation therapy. Such tests may also give valuable information for the further development of CI technology. We therefore propose that nonsense-syllable tests be used as part of the standard test battery in audiology clinics when assessing the speech perception of CI users.

Appendix (p. 5 of 5): Search Syntax
or/22-23,32,37 (532)
39. TS = ("prostheses and implants" or "sensory aids" or "hearing aids" or "hearing loss" or "hearing disorders" or (implant* or prosthes*))
Database: Scopus (Elsevier)
((TITLE-ABS-KEY("Cochlear implant*")) OR (TITLE-ABS-KEY((cochlear or auditive or auditory or hearing) PRE/2 (implant* or prosthes*))) or ((TITLE-ABS-KEY("prostheses and implants" or "sensory aids" or "hearing aids" or "hearing loss" or "hearing impaired persons" or "hearing disorders" or (implant* or prosthes*))) and (TITLE-ABS-KEY(cochlea*))) or ((TITLE-ABS-KEY(implant* or prosthes*)) and (TITLE-ABS-KEY("hearing aids" or "hearing loss" or "hearing impaired persons" or "hearing disorders")))) and (((TITLE-ABS-KEY("speech sound" PRE/2 (repetition or recognition or confusion or identification or discrimination or perception or test or score)) OR TITLE-ABS-KEY(phoneme PRE/2 (repetition or recognition or confusion or identification or discrimination or perception or test or score)) OR TITLE-ABS-KEY(consonant PRE/2 (repetition or recognition or confusion or identification or discrimination or perception or test or score)) OR TITLE-ABS-KEY(vowel PRE/2 (repetition or recognition or confusion or identification or discrimination or perception or test or score)))) or ((TITLE-ABS-KEY("nonsense word*" or nonword* or "pseudo word*") OR TITLE-ABS-KEY("nonword* syllable*" or "nonsense syllable*" or "pseudo syllable*"))) or (((TITLE-ABS-KEY("speech sound" PRE/2 (repetition or recognition or confusion or identification or discrimination or perception or test or score)) OR TITLE-ABS-KEY(phoneme PRE/2 (repetition or recognition or confusion or identification or discrimination or perception or test or score)))) or ((TITLE-ABS-KEY("nonsense word*" or nonword* or "pseudo word*") OR TITLE-ABS-KEY("nonword* syllable*" or "nonsense syllable*" or "pseudo syllable*")))))
(a) Nonsense word repetition with the synonyms nonword*, NW*, nonsense word*, pseudo word*, nonsense syllable*, nonword syllable*, pseudo syllable*, CV* word*, VC* word*, speech sound repetition*, speech sound recognition*, speech sound confusion*, speech sound identification*, speech sound discrimination*, speech sound perception*, phoneme repetition*, phoneme recognition*, phoneme confusion*, phoneme identification*, and phoneme discrimination*.
(b) Cochlear implants with the synonyms CI, cochlear prosthes*, hearing aid*, sensory aid*, hearing instrument*, and hearing device*.
Figure 1. Flow diagram, searches for "nonsense words" with synonyms and "cochlear implants" with synonyms. CI = cochlear implant; EMBASE = Excerpta Medica Database; MEDLINE = Medical Literature Analysis and Retrieval System Online; PsycINFO = Psychological Information Database; ERIC = Education Resources Information Center; WOS = Web of Science. Copyright © 2009 Moher et al. (Creative Commons Attribution License).
Figure 2. Forest plot of vowel identification scores for postlingually deaf cochlear implant users. The primary studies are represented by boxes, which are bounded by the confidence interval (CI) for the effect sizes in each study. The effect sizes are measured in percent.
Figure 3. Forest plot of consonant identification scores for prelingually deaf cochlear implant users. The primary studies are represented by boxes, which are bounded by the confidence interval (CI) for the effect sizes in each study. The effect sizes are measured in percent.
Figure 4. Forest plot of consonant identification scores for postlingually deaf cochlear implant users. The primary studies are represented by boxes, which are bounded by the confidence interval (CI) for the effect sizes in each study. The effect sizes are measured in percent.
Note. The means and standard deviations are given with one decimal, except when the included articles reported these values without decimals. Em dashes indicate data not obtained. CI = cochlear implant; NH = normally hearing; VCV = vowel-consonant-vowel; aCa = a-Consonant-a; V = Vowel; iCi = i-Consonant-i; uCu = u-Consonant-u; Ca = Consonant-a; Ci = Consonant-i; Cu = Consonant-u; NU-6 = The Northwestern University Auditory Test No. 6; CNC = the consonant-vowel nucleus-consonant test; wVb = w-Vowel-b; bVb = b-Vowel-b; PBK = The Phonetically Balanced Kindergarten Word Test. a Not tested with the consonant test. b Tests performed using a CI simulator, and the test results therefore not included.
Figure 5. Scatter plot of consonant mean scores versus publication year in the 48 included studies reporting consonant scores. The consonant mean scores are measured in percent. The cases are weighted by number of participants.
Table 1. PICOS criteria for inclusion in the systematic review and meta-analysis.
Table 2. Study characteristics, task characteristics, and results for the 50 included studies. Note. In three of the studies, the real-word monosyllable scores could not be separated into separate scores for the groups of prelingually and postlingually deaf CI users. Em dashes indicate data not obtained. CI = cochlear implant.
Table 4. Description of the articles presenting consonant confusions in matrices.
Note. The three most frequently correctly identified consonants in each study were picked, assigned an index weighted by the number of participants in the study, added together with the results from the other studies, and included in this table. The percentages in the second column were calculated by dividing the number of correct identifications of each consonant by the total number of correct responses. The consonant with the highest percentage was the most frequently correctly identified of the nine consonants. The consonants are arranged in descending order according to percentage of correct identification.
Table 7. Means, standard deviations, and ranges for the moderator variables for the prelingually and postlingually deaf CI users.
Table 6. Confusion matrix of the three most frequently confused consonants pooled across 13 studies. Note. The three most frequently confused consonants in each confusion matrix were picked, assigned an index equal to the number of participants in the study, added together with the results from the other matrices, and included in this table. The percentages in the table cells were calculated by dividing the number of confusions in each cell by the total number of confusions. The cell with the highest percentage shows the most frequent consonant confusion across the 13 studies.
15,974
2018-04-17T00:00:00.000
[ "Linguistics", "Physics" ]
Optimizing Dead Mileage in Urban Bus Routes: Dakar Dem Dikk Case Study
This paper studies the assignment of buses from their depots to their routes' starting points in an urban transportation network. It describes a computational study to solve the dead mileage minimization problem to optimality. The objective of this work is to assign buses to depots while optimizing the dead mileage associated with pull-out trips and pull-in trips. To do so, a new mixed-integer programming model with 0–1 variables is proposed, which takes into account the specificity of the buses of Dakar Dem Dikk (the main public transportation company in Dakar). This company manages a fleet of buses some of which, depending on road conditions, cannot run on parts of the network. Thus, buses are classified into two categories and are assigned based on these categories. The related mixed-integer 0–1 linear program is solved efficiently to minimize the cumulative distance covered by all buses. Numerical simulations on real datasets are presented.

Introduction
Dakar Dem Dikk (3D) is the main public urban transportation company in Dakar. It manages a fleet of buses with different technical characteristics, so that a large part of the bus fleet cannot access the whole network. Buses are parked overnight at the Ouakam and Thiaroye depots (see Figure 1). The starting points of routes (or terminals) are different from the depots. A bus has to cover the distance from its depot to the starting point of its route before undertaking regular service. This distance is known as "dead mileage" or "dead heading" because no service is provided while covering it.
The objective is to minimize the cumulative distance covered by all the buses from their depots to the starting points of their routes and vice versa. End points of the routes are taken to be the same as the starting points.
In addition, depending on road conditions and the technical characteristics of buses, some of them cannot circulate on some roads of the network. The capacities of the respective depots and the number of buses are known.
More precisely, the 3D urban transportation network can be defined by the valued graph G = (N, A, C) (see Figures 1 and 2, where all depots, terminals, bus stops, and routes — 158 for the Ouakam depot and 131 for the Thiaroye depot — are represented):
• N is the set of nodes, representing depots, terminals (starting points), and bus stops.
• A is the set of arcs in N × N, representing the routes connecting terminals and depots, terminals and bus stops, or two bus stops.
• C is the set of distances between nodes.
There are both good and bad routes; that is, there exist pairs (i, j) ∈ A such that the route connecting i and j is in bad condition. A workday for a given bus is defined as a sequence of trips, dead mileage, parking stops, and pull-out/pull-in trips from/to the assigned depot. Given the diversity of transportation modes in Dakar and the high customer demand (in terms of service quality and network coverage), 3D decision makers must manage multiple resources: buses, employees, depots, etc. They make several important decisions such as network design, timetabling, vehicle scheduling, construction of new depots, etc. The 3D company focuses on efficient use of resources, especially buses and drivers. Decision makers here play a crucial role. However, two important aspects are taken into account for the determination of frequencies and schedules: facility supply and transport demand.
In practice, due to their complexity, planning problems are often divided into three levels of decisions which are solved sequentially, see Boctor et al. [1]:
1) The strategic planning level, involving long-term decisions: network design, the bus stop problem, etc.
2) The tactical planning level, concerning medium-term decisions: trip frequency scheduling, timetabling, etc.
3) The operational planning level, concerning short-term decisions and dealing with current transportation operations: vehicle scheduling, driver scheduling, etc.
The inevitable problems related to current operations can be addressed through adjustments in timetables. In the past, there were no problems related to dead mileage that warranted serious attention. Even when they cropped up, they were tackled by applying techniques based on common sense and experience. Nowadays, because most urban transportation companies have large fleets of buses, bus stops, and terminals, problems related to dead mileage have become so complex that 3D's rudimentary techniques seem unsuitable. Thus, the need to use analytical tools to deal with urban transportation system problems has caught the attention of 3D decision makers. Since dead mileage trips create an additional cost factor, minimizing the total distance covered by all the buses will reduce this cost. Thus, the minimization of this cost is an important optimization goal.
The strategic-level decision problem of assigning buses to depots while optimizing dead mileage is known to be NP-hard, see e.g. Boctor et al. [1]; Hsu [2]; Prakash et al. [3]. Several researchers have employed analytical tools to deal with urban transportation system problems in the recent past. Sharma and Prakash [4] applied analytical tools to a prioritized bicriterion dead mileage problem in an urban bus transportation system. The problem is to determine an optimal number of buses to be parked overnight at the respective depots and an optimal schedule to take buses from the depots to the starting points of their routes. The capacities of the respective depots and the number of buses required at the starting points of routes are known. The minimization of the cumulative distance covered by all the buses, and that of the maximum among the distances covered by individual buses from the depots to the starting points of their routes, are the first- and second-priority objectives, respectively. Hsu [2] extended Sharma and Prakash's problem by introducing an additional all-or-nothing constraint which stipulates that all the buses plying a route must come from the same depot. Prakash et al. [3] developed an algorithm to obtain the set of nondominated solutions of a two-objective problem. The two objectives were to minimize the cumulative distance covered by all the buses and the maximum among the distances covered by individual buses from the depots to the starting points of their respective routes. Kasana and Kumar [5] proposed an extension that makes it possible to treat p objectives, taking up the data of Prakash et al.
[3]. It should be noted that these contributions take into account the capacity of depots over a period corresponding to a workday, and also require buses to leave from and return to the same depot. In Agrawal and Dhingra [6], a single-objective dead mileage problem is considered. The problem is to determine an optimal program for increasing the depot capacities and the number of buses to be parked overnight at the respective depots, together with a schedule to take the buses from the depots to the starting points of their routes.
Prakash and Saini [7] considered a bicriterion dead mileage problem: the problem is to select an optimal site for a new depot of specified capacity from among several potential sites. An optimal schedule to take buses from the depots to the starting points of their routes, and the spare capacity available at each of the depots after the construction of the new depot, are also determined. Moreover, Pepin et al. [8] developed a heuristic approach for the vehicle scheduling problem with multiple depots. The objective is to determine schedules of minimum cost for vehicles assigned to depots. Boctor et al. [1] studied the problem of minimizing dead mileage, with two models, in order to select a new depot among several potential sites. Their first model gives the bus assignment under the requirement that all buses return to their own depot. The model takes into account the storage capacity by dividing the day into T periods. In their second model, a bus can end its route at a depot different from its initial one. However, neither the bus types nor the road conditions are integrated in this model. 3D, for its part, plans while taking these two constraints into account. Sometimes the decision maker is not able to assign priorities to both bus types and road conditions. In such a situation, decision makers can be helped by presenting them with a set of solutions.
In the present paper, an attempt is made to deal with this type of situation in the context of dead mileage. We propose a mixed-integer programming problem with 0–1 variables to deal with the dead mileage minimization problem of 3D. The new model is based on the second model developed in Boctor et al. [1]. The model assigns each pull-in trip and each pull-out trip to a depot so as to minimize the associated mileage. It takes into account both the bus types and the network condition [9,10], and we show that this technique is of practical importance. With some processing techniques, we solve instances of 3D which could not be solved by the company's rudimentary techniques. By dividing the day into periods, this model is applied to 3D's scenarios. We obtain through numerical simulations (using the commercial MILP solver IBM CPLEX [11]) the dead mileage reduction.
This paper is organized as follows: In Section 2, we present the model that minimizes the mileage associated with the pull-out trips and the pull-in trips. Computational analyses using IBM CPLEX V12.3 and the frequency assignment of buses are carried out in Section 3. Finally, a summary and conclusions are presented in Section 4.

Definition of the Dead Mileage Optimization Problem
The problem can be stated as follows. Given
• the bus types,
• a set of depots,
• a set of routes,
• and a set of terminals,
assign buses so as to minimize the dead mileage traveled by buses for their pull-out trips and their pull-in trips. Dead mileage corresponds to a route segment over which no service is provided.
The following assumptions are introduced to formulate the dead mileage problem:
• A certain number of routes are assigned to each depot, and consequently a certain number of buses.
• Each depot has a capacity.
• There are several bus types, divided into r categories, with K the set of bus types.
• All buses on which there is a restriction (those belonging to K1) come from the same depot.
• Taking into account the capacities of depots, buses of type k ∈ K1 can return to the nearest depot (K1 is the set of restricted bus types).
• A bus is said to be compatible with a depot if, at the end of its route, it can return to this depot.
The following notation is used to describe the dead mileage optimization problem. i: depots (i = 1, ..., m); j: routes (j = 1, ..., n); t: periods (t = 1, ..., T). e_ij: dead mileage associated with the pull-out trip of route j from depot i. f_ij: dead mileage associated with the pull-in trip of route j to depot i. c_i: capacity of depot i. M: total number of available buses for the m depots. S_it: number of buses leaving depot i at period t. R_it: number of buses going to depot i at period t. K: the set of bus types; K1: the set of bus types with restrictions.

Decision Variables
x_ij: binary variable equal to 1 if the pull-out trip of route j is associated with depot i, and 0 otherwise. y_ijk: binary variable equal to 1 if the pull-in trip of route j is associated with depot i for bus type k (the bus performing the route is compatible with this depot), and 0 otherwise. n_it: number of buses in depot i at the beginning of period t.
The dead mileage problem may be formulated as follows. The objective function (1) minimizes the mileage associated with the pull-out trips and pull-in trips while taking into account the bus that covers the route; the departure depots may differ from the return ones. It is a linear objective function. Constraint (2) implies that the pull-out trip of a route is associated with a single depot. Constraint (3) ensures that the pull-in trip is also associated with a single depot and takes bus compatibility into account. Constraint (4) is effective only when y_ijk equals one; it restricts the number of type-k bus pull-in trips to exactly p, where p is the number of bus types with restrictions. Constraint (5) makes it possible to track, in real time, the number of buses at each depot. Constraint (6) implies that the number of buses in depot i, at each period, is less than or equal to the capacity of this depot. Constraint (7) ensures that the number of buses over all depots equals the number of available buses. Constraint (8) ensures the feasibility of the model: the number of buses in the morning (starting period) must equal the number of buses in the evening (end period). Constraints (9), (10) and (11) define the domains of the variables.

Numerical Experiments
This section shows how the model is applied to real 3D data. Using the model, the operational cost savings resulting from the reduction of dead mileage in urban buses can be calculated. Dead mileage not only reduces revenue but also increases fuel consumption, operational costs, pollution, etc. Since all these adverse effects can be reduced along with dead mileage, optimizing it is essential.
The operational cost of dead mileage, per vehicle per kilometer, was evaluated jointly with 3D. Considering that the average bus speed is 60 kilometers per hour, the cost of vehicle use is 1000 FCFA (≈ 1.5 euro) per kilometer.
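To make the model concrete, below is a minimal sketch of its assignment core — objective (1) and constraints (2) and (3), with a crude stand-in for the capacity constraint (6) — written with the open-source PuLP modeler rather than CPLEX. The depot and route names, distances, and the compatibility rule are toy data, not 3D's, and the per-period bus-flow constraints (4), (5), (7), (8) are omitted for brevity.

```python
# Minimal sketch of the dead-mileage assignment core (toy data, not 3D's).
import pulp

depots = ["Ouakam", "Thiaroye"]
routes = ["r1", "r2", "r3"]
bus_types = [1, 2]
# Toy compatibility rule: restricted type-2 buses cannot return to Thiaroye.
compatible = {(i, j, k): not (k == 2 and i == "Thiaroye")
              for i in depots for j in routes for k in bus_types}
e = {("Ouakam", "r1"): 4.5, ("Ouakam", "r2"): 7.0, ("Ouakam", "r3"): 12.0,
     ("Thiaroye", "r1"): 10.0, ("Thiaroye", "r2"): 3.0, ("Thiaroye", "r3"): 5.0}
f = e  # assume pull-in mileage equals pull-out mileage in this sketch
cap = {"Ouakam": 2, "Thiaroye": 2}

prob = pulp.LpProblem("dead_mileage", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (depots, routes), cat="Binary")             # pull-out
y = pulp.LpVariable.dicts("y", (depots, routes, bus_types), cat="Binary")  # pull-in

# Objective (1): pull-out plus pull-in dead mileage.
prob += (pulp.lpSum(e[i, j] * x[i][j] for i in depots for j in routes)
         + pulp.lpSum(f[i, j] * y[i][j][k]
                      for i in depots for j in routes for k in bus_types))
for j in routes:
    prob += pulp.lpSum(x[i][j] for i in depots) == 1                       # (2)
    prob += pulp.lpSum(y[i][j][k] for i in depots for k in bus_types) == 1  # (3)
for i in depots:
    for j in routes:
        for k in bus_types:
            if not compatible[i, j, k]:
                prob += y[i][j][k] == 0    # compatibility part of constraint (3)
for i in depots:  # crude stand-in for the per-period capacity constraint (6)
    prob += pulp.lpSum(x[i][j] for j in routes) <= cap[i]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in depots:
    for j in routes:
        if pulp.value(x[i][j]) > 0.5:
            print(f"route {j}: pull-out from {i}")
```

Because departure and return depots are decoupled (x and y are separate variables), the sketch reproduces the key freedom of the second Boctor et al. model that the paper builds on: a bus may pull in to a depot other than the one it left.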
Located at km 4.5, Cheikh Anta Diop Avenue, the 3D company manages a fleet of 410 vehicles using two depots: one in Ouakam (km 4.5, Cheikh Anta Diop Avenue, the head office of the company) and one in Thiaroye (in the suburbs of Dakar, 19 kilometers from Dakar city).
To ensure network coverage, 3D manages its services using 17 lines (see Tables 1 and 2), with 11 operating from the Ouakam depot and 6 from the Thiaroye depot. Each line serves a certain number of routes. Presently, the total number of routes in the network is 289. Thirty (30) permanent terminals and 810 bus stops are used (see Figure 1, where bus stops are not represented due to their number). The maps in Figures 1 and 3 were produced with the EMME software [12]. A workday is divided into 38 periods of 30 minutes. The first route of the day should arrive at its terminal by 6:00 am. Given the distance between the depot and this terminal, the first vehicle must leave its depot at 5:45 am. Therefore, the first period of the model begins at 5:30 am. The first and last periods correspond to 05:30 am–06:00 am and 11:30 pm–00:00 am, respectively. Relevant information, such as the storage plan of the depots and the time interval in which each bus is available (for pull-out trips and pull-in trips), is usually given. In addition, the following information is known in advance:
• capacities of the Thiaroye and Ouakam depots,
• distances between depots and end points of routes.
Tables 3 and 4 give the total number of buses to be parked overnight and the number of buses to be sent from depots to the starting points of routes. Recall that all scenarios are given by 3D.
To represent the pull-out trips and the pull-in trips, we list only the periods in which pull-in trips and pull-out trips of routes actually occur. Periods 8 to 27, which correspond to the intervals from 08:30 am–09:00 am through 06:00 pm–06:30 pm, contain no entries; that is why they are not represented in the tables.
Tables 3 and 4 show the number of pull-out trips and pull-in trips for all routes on the network, and Figure 1 illustrates the locations of the Ouakam and Thiaroye depots and the starting points. Looking at this map, one can notice sites where demand is high. The cumulative distance (for the 17 lines) covered by all the buses from the depots to the starting points of their routes and from the end points back to their depots is 7881.9 kilometers. Therefore, the daily cost is 7,881,900 FCFA (≈ 11,822.85 euro) for the 38 periods (from 05:30 am to 00:00 am). As the 38 periods correspond to a workday, the total cost is 39,409,500 FCFA (≈ 59,114.25 euro) for 5 workdays (Monday to Friday). Therefore, the total cost is 157,638,000 FCFA (≈ 236,457 euro) per month (only workdays are counted).
Recall that our proposed model takes into account both the bus types and the network condition. The numerical experiments were executed on a computer with 2× Intel(R) Core(TM)2 Duo CPU at 2.00 GHz and 4.0 GB of RAM, under a UNIX system.
The numerical tests were performed using IBM CPLEX's MILP solver [11], with a total of 1114 variables, of which 1036 are binary. The solution obtained is an optimal one (for this problem), wherein priority is assigned to the minimization of the dead mileage.
The total distance covered by all the buses from depots to starting points of routes and from the end points back to their depots is 5268.8 kilometers. Therefore, the total cost is 5,268,800 FCFA (≈ 7903.2 euro) for a workday.
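The cost figures above follow from simple arithmetic on the 1000 FCFA/km rate; the short check below reproduces them, assuming 20 workdays per month (which is what the quoted monthly figures imply):

```python
# Quick arithmetic check of the quoted dead-mileage cost figures.
COST_PER_KM_FCFA = 1000
FCFA_PER_EURO = 1000 / 1.5  # implied by the paper's "1000 FCFA is about 1.5 euro"

def costs(km_per_day: float) -> dict:
    """Daily, weekly (5 workdays) and monthly (20 workdays) costs in FCFA."""
    daily = km_per_day * COST_PER_KM_FCFA
    return {"daily": daily, "weekly": 5 * daily, "monthly": 20 * daily,
            "daily_euro": daily / FCFA_PER_EURO}

before = costs(7881.9)   # 3D's current assignment
after = costs(5268.8)    # optimized assignment
print(before)            # daily = 7,881,900; weekly = 39,409,500; monthly = 157,638,000
print(after)             # daily = 5,268,800; weekly = 26,344,000; monthly = 105,376,000
print(f"monthly saving: {before['monthly'] - after['monthly']:,.0f} FCFA")  # 52,262,000
```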
The total cost is 26,344,000 FCFA (≈ 39,516 euro) for 5 workdays and, therefore, 105,376,000 FCFA (≈ 158,064 euro) for a month. Tables 6 and 7 show the pull-out trips and the pull-in trips, respectively. First, some modifications in the pull-out trips are pointed out. With 3D's assignment, 131 and 158 buses are assigned to the Thiaroye and Ouakam depots, respectively. From our numerical experiments, we obtain 137 and 152 buses assigned to the Thiaroye and Ouakam depots, respectively. Therefore, 6 routes are removed from the Ouakam depot and assigned to Thiaroye. But the main difference appears in the assignments by period.
Indeed, for the first period, the 3 Thiaroye pull-out trips, corresponding to the starting points Parcelles Assainies, Case, and Palais 1, are assigned to the Ouakam depot. During period 2, the Ouakam depot loses 2 pull-out trips to the benefit of the Thiaroye depot, which has 27. The same scenario occurs in periods 3 and 4, where 4 and 31 additional routes, respectively, are assigned to the Thiaroye depot. On the other hand, 18, 8, and 2 routes are moved from the Thiaroye depot and assigned to the Ouakam depot in periods 5, 6, and 7, respectively.
For the pull-in trips, there is no change in the 28th period. From periods 29 to 34, we have 3, 2, 1, 7, 20, and 20 additional routes, respectively, assigned to the Thiaroye depot. From periods 35 to 38, we obtain 26, 13, 7, and 1 additional routes, respectively, assigned to the Ouakam depot.
Figure 3 illustrates the new pull-in and pull-out trips at all terminals except depots. At each terminal, the cumulative pull-in and pull-out trips are represented; trips from/to depots are ignored in this figure. One can notice that the numbers of trips are higher at terminals located on the northern side; the main reason is that most of 3D's customers come from the suburbs. So during the morning, 3D assigns a large number of buses to the suburban terminals to carry people to the offices, factories, and firms located downtown. In the evening, the reverse phenomenon takes place. Sites where demand is high correspond to the starting points of Dieupeul, Liberte 6, Palais 1, Guediawaye, Camberene 1, Daroukhane, and Keur Massar. The starting points of Airport, Camberene 2, Mbao, and Malika form a second category of concentration. The results show that by allowing buses to return to depots different from their starting ones, we can significantly reduce the dead mileage and thus obtain greater savings. This entire database was validated by 3D officials.

Concluding Remarks
In this paper, we described how the new dead mileage optimization model can be used to compute an optimal solution of the 3D urban transportation problem. We developed a mixed-integer programming problem with 0–1 variables which takes the road network into account and achieves greater savings. The results showed that 3D can significantly reduce the cumulative distance covered by all the buses from their depots to the starting points of their routes and from the end points of their routes back to their depots. To make the company more competitive, road repairs, fleet renewal, and the construction of new depots would further improve performance. Furthermore, the model considered in this work can be used in the case where the end points of routes differ from the starting points.
Figure 3. New pull-in/pull-out trips given by the model.
The input data needed to use the model are:
- current number of depots: m = 2 (Ouakam and Thiaroye, see Figure 1);
- total number of routes: n = 289;
- number of routes associated with the Ouakam depot: 158 (see Figure 2);
- number of routes associated with the Thiaroye depot: 131 (see Figure 2);
- set of bus types: {1, 2};
- set of bus types subject to a restriction: {2};
- number of periods: T = 38;
- number of buses of type 1: 229;
- number of buses of type 2: 60 (the buses on which there is a restriction);
- number of terminals (starting points): 30 (see Figure 1).

A minimal solver sketch using these inputs is given below.

Table 4. Current situation of pull-in trips.
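The following is a minimal sketch of the depot-assignment core of such a dead-mileage model, written with PuLP as an open-source stand-in for the CPLEX setup used in the paper. The distances and capacities below are placeholders, and the 38 periods and the bus-type restriction are omitted for brevity, so this is an illustration of the structure rather than the paper's actual formulation.

```python
# Sketch of the depot-assignment core of a dead-mileage MILP (assumed form).
# Placeholder data: the paper's real distances, period structure and bus-type
# restriction are not reproduced here.
import random
import pulp

random.seed(0)
depots = ["Ouakam", "Thiaroye"]
routes = range(289)                                  # n = 289 routes
capacity = {"Ouakam": 160, "Thiaroye": 140}          # hypothetical overnight capacities

# dead[d, r]: pull-out + pull-in distance (km) if route r is served from depot d
dead = {(d, r): random.uniform(5.0, 40.0) for d in depots for r in routes}

prob = pulp.LpProblem("dead_mileage", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(d, r) for d in depots for r in routes], cat="Binary")

# objective: total dead mileage over all routes
prob += pulp.lpSum(dead[d, r] * x[d, r] for d in depots for r in routes)

# each route is served by exactly one depot
for r in routes:
    prob += pulp.lpSum(x[d, r] for d in depots) == 1

# depot capacity limits
for d in depots:
    prob += pulp.lpSum(x[d, r] for r in routes) <= capacity[d]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("total dead mileage (km):", pulp.value(prob.objective))
for d in depots:
    print(d, "serves", sum(int(x[d, r].value()) for r in routes), "routes")
```

In the full model, the assignment variables would additionally be indexed by period and bus type, which is what allows the period-by-period reassignments between Ouakam and Thiaroye described above.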
On the deformation theory of structure constants for associative algebras

An algebraic scheme for constructing deformations of structure constants for associative algebras generated by deformation driving algebras (DDAs) is discussed. An ideal of left divisors of zero plays a central role in this construction. Deformations of associative three-dimensional algebras with the DDA being a three-dimensional Lie algebra, and their connection with integrable systems, are studied.

Introduction

The idea of studying deformations of structure constants for associative algebras goes back to the classical works of Gerstenhaber [1,2]. As one of the approaches to deformation theory he suggested "to take the point of view that the objects being deformed are not merely algebras, but essentially algebras with a fixed basis" and to treat "the algebraic set of all structure constants as parameter space for deformation theory" [2].

Thus, following this approach, one chooses a basis P_0, P_1, ..., P_N for a given algebra A, takes the structure constants C^n_{jk} defined by the multiplication table P_j P_k = C^n_{jk} P_n, j, k = 0, 1, ..., N, and looks for their deformations C^n_{jk}(x), where x = (x^1, ..., x^M) is the set of deformation parameters, such that the associativity condition (2) or a similar equation is satisfied.

A remarkable example of deformations of this type, with M = N+1, was discovered by Witten [3] and Dijkgraaf-Verlinde-Verlinde [4]. They demonstrated that the function F which defines the correlation functions <Φ_j Φ_k Φ_l> = ∂³F/∂x^j∂x^k∂x^l etc. in the deformed two-dimensional topological field theory obeys the associativity equation (2) with the structure constants given by (3), where the constants η^{lm} = (g^{-1})^{lm}, g_{lm} = ∂³F/∂x^0∂x^l∂x^m, and the variable x^0 is associated with the unit element. Each solution of the WDVV equation (2), (3) describes a deformation of the structure constants of the (N+1)-dimensional associative algebra of primary fields Φ_j.

The interpretation and formalization of the WDVV equation in terms of Frobenius manifolds proposed by Dubrovin [5,6] provides a method to describe a class of deformations of the so-called Frobenius algebras. An extension of this approach to general algebras and the corresponding F-manifolds has been given by Hertling and Manin [7]. The beautiful and rich theory of Frobenius and F-manifolds has various applications, from singularity theory to quantum cohomology (see e.g. [6,8,9]).

An alternative approach to the deformation theory of the structure constants for commutative associative algebras has been proposed recently in [10-14].
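The displayed formulas (1)-(3) do not survive in this extract. The following is a hedged reconstruction in the standard WDVV conventions (repeated indices summed over 0, ..., N); signs and index placement may differ from the source by convention.

```latex
% Hedged reconstruction of the undisplayed formulas (1)-(3).
P_j\,P_k \;=\; C^{n}_{jk}\,P_n                                % multiplication table (1)
\qquad
C^{m}_{jk}\,C^{n}_{ml} \;=\; C^{m}_{kl}\,C^{n}_{jm}           % associativity condition (2)
\qquad
C^{l}_{jk} \;=\; \eta^{lm}\,\frac{\partial^3 F}{\partial x^m\,\partial x^j\,\partial x^k},
\quad \eta^{lm}=(g^{-1})^{lm},\quad
g_{lm}=\frac{\partial^3 F}{\partial x^0\,\partial x^l\,\partial x^m}.  % WDVV ansatz (3)
```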
Within this method the deformations of the structure constants are governed by the so-called central system (CS). Its concrete form depends on the class of deformations under consideration, and the CS contains, as particular reductions, many integrable systems, such as the WDVV equation, the oriented associativity equation, and integrable dispersionless, dispersive and discrete equations (the Kadomtsev-Petviashvili equation etc.). The common feature of the coisotropic, quantum and discrete deformations considered in [10-14] is that for all of them the elements p_j of the basis and the deformation parameters x^j form a certain algebra (Poisson, Heisenberg etc.). A general class of deformations considered in [13] is characterized by the condition that the ideal J = <f_jk> generated by the elements f_jk = -p_j p_k + Σ_{l=0}^{N} C^l_{jk}(x) p_l, which represent the multiplication table (1), is closed. It was shown that this class contains a subclass of so-called integrable deformations, for which the CS has a simple and nice geometrical meaning.

In the present paper we discuss a purely algebraic formulation of such integrable deformations. We consider the case when the algebra generating the deformations of the structure constants, i.e. the algebra formed by the elements p_j of the basis and the deformation parameters x^k (the deformation driving algebra, DDA), is a Lie algebra. The basic idea is to require that all elements f_jk = -p_j p_k + Σ_{l=0}^{N} C^l_{jk}(x) p_l are left divisors of zero and that they generate the ideal J = <f_jk> of left divisors of zero. This requirement gives rise to the central system which governs deformations generated by the DDA.

Here we study the deformations of the structure constants for three-dimensional algebras in the case when the DDA is given by one of the three-dimensional Lie algebras. Such deformations are parametrized by a single deformation variable x. Depending on the choice of DDA and on the identification of p_1, p_2 and x with the elements of the DDA, the corresponding CS takes the form of a system of ordinary differential equations or of a system of discrete equations (multi-dimensional mappings). In the first case the CS contains third-order ODEs from the Chazy-Bureau list as particular examples. This approach also provides the Lax form of the above equations and their first integrals.

The paper is organized as follows. The general formulation of the deformation theory for the structure constants is presented in section 2. Quantum, discrete and coisotropic deformations are discussed in section 3. Three-dimensional Lie algebras as DDAs are analyzed in section 4. Deformations generated by general DDAs are studied in section 5. Deformations driven by the nilpotent and solvable DDAs are considered in sections 6 and 7, respectively.
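For later reference, the two objects that recur throughout the paper, exactly as introduced above, in display form:

```latex
f_{jk} \;=\; -\,p_j\,p_k \;+\; \sum_{l=0}^{N} C^{l}_{jk}(x)\,p_l ,
\qquad
J \;=\; \big\langle\, f_{jk} \,\big\rangle .
```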
2 Deformations of the structure constants generated by DDA

We consider a finite-dimensional noncommutative algebra A with (or without) unit element P_0. We restrict ourselves to the class of algebras which possess a basis composed of pairwise commuting elements P_0, P_1, ..., P_N. The table of multiplication (1) defines the structure constants C^l_{jk}. The commutativity of the basis implies that C^l_{jk} = C^l_{kj}. In the presence of the unit element one has C^l_{j0} = δ^l_j, where δ^l_j is the Kronecker symbol.

Following Gerstenhaber's suggestion [1,2], we treat the structure constants C^l_{jk} in a given basis as the objects to deform, and denote the deformation parameters by x^1, x^2, ..., x^M. For the undeformed structure constants the associativity conditions (2) are nothing else than the compatibility conditions for the table of multiplication (1). In the construction of deformations we should first specify a "deformed" version of the multiplication table and then require that this realization is self-consistent and meaningful. Thus, to define deformations we
1) associate a set of elements p_0, p_1, ..., p_N, x^1, x^2, ..., x^M with the elements of the basis P_0, P_1, ..., P_N and the deformation parameters x^1, x^2, ..., x^M;
2) consider the Lie algebra B of dimension N+M+1 with basis elements e_1, ..., e_{N+M+1} obeying the commutation relations (4);
3) identify the elements p_0, p_1, ..., p_N, x^1, x^2, ..., x^M with the elements e_1, ..., e_{N+M+1}, thus defining the deformation driving algebra (DDA); different identifications define different DDAs. We assume that the element p_0 is always a central element of the DDA. The commutativity of the basis in the algebra A implies commutativity between the p_j, and in this paper we assume the same property for all x^k. So we consider DDAs defined by commutation relations of the type [p_j, x^k] = α^k_{jl} x^l + β^{kl}_j p_l, where α^k_{jl} and β^{kl}_j are some constants;
4) consider the elements f_jk = -p_j p_k + Σ_{l=0}^{N} C^l_{jk}(x) p_l of the universal enveloping algebra U(B) of the DDA(B). These f_jk "represent" the table (1) in U(B); note that f_{j0} = f_{0j} = 0;
5) require that all f_jk are non-zero left divisors of zero and have a common right zero divisor.

In this case the f_jk generate the left ideal J = <f_jk> of left divisors of zero. We recall that non-zero elements a and b are called left and right divisors of zero if ab = 0 (see e.g. [15]).

Definition. The structure constants C^l_{jk}(x) are said to define deformations of the algebra A generated by a given DDA if all f_jk are left zero divisors with a common right zero divisor.

To justify this definition we first observe that the simplest possible realization of the multiplication table (1) in U(B), given by the equations f_jk = 0, j, k = 0, 1, ..., N, is in general too restrictive. Indeed, for instance, for the Heisenberg algebra B [12] such equations imply that [p_l, C^m_{jk}(x)] = 0 and, hence, no deformations are allowed. So one should look for a weaker realization of the multiplication table. A condition that all f_jk are merely non-zero divisors of zero is a natural candidate. Then the compatibility condition for the corresponding equations f_jk · Ψ_jk = 0, j, k = 1, ..., N, where the Ψ_jk are right zero divisors, requires that the l.h.s.
of these equations, and hence the Ψ_jk, should have a common divisor (see e.g. [15]). We restrict ourselves to the case where Ψ_jk = Ψ · Φ_jk, j, k = 1, ..., N, with the Φ_jk invertible elements of U(B). In this case one has a compatible set of equations; that is, all left zero divisors f_jk have the common right zero divisor Ψ. These conditions impose constraints on the C^m_{jk}(x). To clarify these constraints we use the basic property of the algebra A, i.e. its associativity. First we observe that, due to the relations (4), one has the identity (5), where the Δ^{mt}_{jk,l}(x) are certain functions of x^1, ..., x^M only. Then, taking into account (4), one obtains the identity (6). The identity (6) implies that for an associative algebra the relations (7) hold. Due to the relations (5), equations (7) imply equations which are satisfied if (8) holds.

This system of equations plays a central role in our approach. If Ψ has no left zero divisors linear in the p_j, and U(B) has no zero elements linear in the p_j, then the relation (8) is a necessary condition for the existence of a common right zero divisor for the f_jk. At N ≥ 3 it is also a sufficient condition. Indeed, if the C^m_{jk}(x) are such that equations (8) are satisfied, then one arrives at the system (9). Generically, this is a system of (1/2)N²(N-1) linear equations for the N(N+1)/2 unknowns f_st with noncommuting coefficients K^{st}_{klj}. At N ≥ 3, for generic (nonzero, non-zero-divisor) K^{st}_{klj}(x,p), the system (9) implies the relations (10), where α_jk, β_lm, γ_jk are certain elements of U(B) (see e.g. [16,17]). Thus all f_jk are right zero divisors. They are also left zero divisors. Indeed, by Ado's theorem (see e.g. [18]) a finite-dimensional Lie algebra B and, hence, U(B) are isomorphic to matrix algebras, and for matrix algebras zero divisors (matrices with vanishing determinants) are both right and left zero divisors [15]. Then, under the assumption that all α_jk and β_lm are not zero divisors, the relations (10) imply that the right zero divisor of one of the f_jk is also a right zero divisor for the others. At N = 2 one has only two relations of the type (10), and a right zero divisor of one of f_11, f_12, f_22 is a right zero divisor of the others. We note that it is not easy to control the assumptions mentioned above; nevertheless, equations (5) and (8) are certainly fundamental for the whole approach.

We shall refer to the system (8) as the central system (CS) governing deformations of the structure constants of the algebra A generated by a given DDA. Its concrete form depends strongly on the form of the brackets [p_t, C^l_{jk}(x)], which are defined by the relations (4) for the elements of the basis of the DDA. For stationary solutions (Δ^t_{jk,l} = 0) the CS (8) reduces to the associativity conditions (2).

For the quantum deformations of a noncommutative algebra one has M = N, and the deformation driving algebra is given by the Heisenberg algebra [12].
The elements of the basis of the algebra A and the deformation parameters are identified with the elements of the Heisenberg algebra via the relations (12), where ħ is the Planck constant. For the Heisenberg DDA one has the bracket (13) and, consequently, the quantum CS (14). The quantum CS (14) governs deformations of the structure constants for an associative algebra driven by the Heisenberg DDA. It has a simple geometrical meaning: the vanishing of the Riemann curvature tensor for the torsionless Christoffel symbols Γ^l_{jk} identified with the structure constants (C^l_{jk} = Γ^l_{jk}) [12]. In a representation of the Heisenberg algebra (12) by operators acting in a linear space H, left divisors of zero are realized by operators with nonempty kernel. The ideal J is the left ideal generated by the operators f_jk which have a nontrivial common kernel or, equivalently, for which the equations (15), f_jk |Ψ> = 0, have nontrivial common solutions |Ψ> ⊂ H. The compatibility condition for equations (15) is given by the CS (14). The common kernel of the operators f_jk forms a subspace H_Γ of the linear space H. So, in the approach under consideration, the multiplication table (1) is realized only on H_Γ, not on the whole of H. This type of realization of constraints is well known in quantum theory as Dirac's recipe for the quantization of first-class constraints [19].

In the quantum-theory context, equations (15) serve to select the physical subspace of the whole Hilbert space; within the deformation theory one may refer to the subspace H_Γ as the "structure constants" subspace. In [12] the recipe (15) was the starting point for the construction of the quantum deformations. The quantum CS (14) contains various classes of solutions which describe different classes of deformations. An important subclass is given by the iso-associative deformations, i.e. deformations for which the associativity condition (2) is valid for all values of the deformation parameters. For such quantum deformations the structure constants should obey equations (16). These equations imply that C^n_{jk} = ∂²Φ^n/∂x^j∂x^k, where the Φ^n are some functions, while the associativity condition (2) takes the form (18). This is the oriented associativity equation introduced in [20,5]. Under the gradient reduction Φ^n = Σ_{l=0}^{N} η^{nl} ∂F/∂x^l, equation (18) becomes the WDVV equation (2), (3).

Non-iso-associative deformations, for which the condition (16) is not valid, are of interest too. They are described by some well-known integrable soliton equations [12]. In particular, among them one finds the Boussinesq equation for N = 2, and the Kadomtsev-Petviashvili (KP) hierarchy for the infinite-dimensional algebra of polynomials in the Faa' de Bruno basis [12]. In the latter case the deformed structure constants are given by (19), where τ is the famous tau-function of the KP hierarchy and the P_k(t_1, t_2, t_3, ...) are Schur polynomials defined by the generating formula exp(Σ_{k≥1} t_k λ^k) = Σ_{k≥0} P_k(t) λ^k.

Discrete deformations of noncommutative associative algebras are generated by the DDA with M = N and the commutation relations [p_j, p_k] = 0, [x^j, x^k] = 0, [p_j, x^k] = δ^k_j p_j, j, k = 1, ..., N. In this case one has the bracket (21), where for an arbitrary function φ(x) the action of T_j is defined by T_j φ(x_0, ..., x_j, ..., x_N) = φ(x_0, ..., x_j + 1, ..., x_N). The corresponding CS is of the form (22), where the matrices C_j are defined as (C_j)^l_k = C^l_{jk}, j, k, l = 0, 1, ..., N.
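The displayed equations (21) and (22) are lost in this extract. As a hedged sketch: the shift action of T_j follows directly from the commutation relations just stated, and the one-variable discrete CS studied later (equation (65) below) is naturally read as a similarity transformation, a guess consistent with the trace and determinant first integrals quoted there, not a verbatim reconstruction of (22).

```latex
% Shift action implied by [p_j, x^k] = \delta^k_j p_j:
p_j\,\varphi(x) \;=\; \big(T_j\varphi\big)(x)\,p_j .
% Hedged reading of the one-variable resolved discrete CS:
T\,C_2 \;=\; C_1^{-1}\,C_2\,C_1
\;\Longrightarrow\;
T\,\mathrm{tr}\,C_2^{\,k} = \mathrm{tr}\,C_2^{\,k},
\qquad
T\det C_2 = \det C_2 .
```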
The discrete CS (22) governs discrete deformations of associative algebras. It contains, as particular cases, the discrete versions of the oriented associativity equation, the WDVV equation and the Boussinesq equation, the discrete KP hierarchy, and the Hirota-Miwa bilinear equations for the KP τ-function.

For coisotropic deformations the elements p_j and x^k are realized as Darboux coordinates on R^{2(N+1)}, where {,} is the standard Poisson bracket. The algebra U(B) is then the commutative ring of functions, and divisors of zero are realized by functions with zeros. So the functions f_jk should be functions with a common set Γ of zeros. Thus, in the coisotropic case the multiplication table (1) is realized by the set of equations f_jk = 0, j, k = 0, 1, 2, ..., N [10]. The well-known compatibility condition for these equations is (25). The set Γ is a coisotropic submanifold of R^{2(N+1)}. The condition (25) gives rise to the system of equations (26), (27) for the structure constants, which forms the CS for coisotropic deformations [10]. In this case C^l_{jk} transforms as a tensor of type (1,2) under general transformations of the coordinates x^j, and the whole CS (26), (27) is invariant under these transformations [14]. The bracket [C,C]^m_{jklr} appeared for the first time in the paper [21], where the so-called differential concomitants were studied. It was shown in [18] that this bracket is a tensor only if the tensor C^l_{jk} obeys the algebraic constraint (27). In the paper [7] the CS (26), (27) appeared implicitly as the system of equations characterizing the structure constants of F-manifolds. In [10] it was derived as the CS governing the coisotropic deformations of associative algebras.

We would like to emphasize that for all the deformations considered above the stationary solutions of the CSs obey the global associativity condition (2).

4 Three-dimensional Lie algebras as DDA

In the rest of the paper we study deformations of associative algebras generated by three-dimensional real Lie algebras L. The complete list of such algebras contains 9 algebras (see e.g. [18]). Denoting the basis elements by e_1, e_2, e_3, one has the following nonequivalent cases: 1) the abelian algebra L_1; 2) the general algebra L_2; ... . In virtue of the one-to-one correspondence between the elements of the basis of the DDA and the elements p_j, x^k, an algebra L should have an abelian subalgebra, and only one of its elements may play the role of the deformation parameter x. For the original algebra A and the algebra B one has two options: 1) A is a two-dimensional algebra without unit element and B = L; 2) A is a three-dimensional algebra with unit element and B = L_0 ⊕ L, where L_0 is the algebra generated by the unit element p_0.

After the choice of B one should establish a correspondence between p_1, p_2, x and e_1, e_2, e_3 defining the DDA. For each algebra L_k there are, in general, six possible identifications if one avoids linear superpositions; some of them are equivalent. An incomplete list of nonequivalent identifications is: 1) algebra L_1: p_1 = e_1, p_2 = e_2, x = e_3; the DDA is the commutative algebra with the relations (28); 2) algebra L_2, case a): p_1 = -e_2, p_2 = e_3, x = e_1; the corresponding DDA is the algebra L_2a with the commutation relations (29); 3) algebra L_3: ...; 5) the solvable algebra L_5 at α = 1, β = 0, γ = 0, δ = 1: ... .

For the second choice of the algebra B = L_0 ⊕ L mentioned above, the table of multiplication (1) consists of the trivial part P_0 P_j = P_j P_0 = P_j, j = 0, 1, 2, and the nontrivial part (34). For the first choice, B = L, the multiplication table is given by (34) with A = D = L = 0.
It is convenient also to arrange the structure constants A, B, ..., N into the matrices C_1, C_2 defined by (C_j)^l_k = C^l_{jk}. In terms of these matrices the associativity conditions (2) take a matrix form.

5 Deformations generated by general DDAs

The commutative DDA (28) obviously does not generate any deformation. So we begin with the three-dimensional commutative algebra A and the DDA L_2a defined by the commutation relations (29). These relations imply a bracket for an arbitrary function φ(x), which leads to the CS (38). In terms of the matrices C_1 and C_2 defined above, this CS has the form of the Lax equation (39). The CS (39) has all the remarkable standard properties of Lax equations (see e.g. [20,21]): it has three independent first integrals, and it is equivalent to the compatibility condition for a pair of linear problems, in which Φ is a column with three components and λ is a spectral parameter. Though the evolution in x described by the second linear problem (41) is rather simple, the CS (38), or (39), has the meaning of iso-spectral deformations of the matrix C_2, which is typical of integrable systems (see e.g. [22,23]). The CS (39) is the system (42) of six equations for the structure constants D, E, G, L, M, N with free A, B, C, where D' = x ∂D/∂x etc.

Here we consider only simple particular cases of the CS (42). The first corresponds to the constraint A = 0, B = 0, C = 0, i.e. to nilpotent P_1. The corresponding solution involves arbitrary constants α, β, γ, δ, µ; the three integrals for this solution are given in (44). The second example is given by the constraint B = 0, C = 1, G = 0, for which the quantum CS (14) is equivalent to the Boussinesq equation [12]. Under this constraint the CS (42) reduces to the single equation (45), the other structure constants being given by (46), where α, β, γ are arbitrary constants. Among the corresponding first integrals, I_3 reproduces the well-known first integral of equation (45). Solutions of equation (45) are given by elliptic integrals (see e.g. [24]). Any such solution, together with the formulae (46), describes a deformation of the three-dimensional algebra A driven by the DDA L_2a.

Now we consider deformations of the two-dimensional algebra A without unit element, according to the first option mentioned in the previous section. In this case the CS has the form (39) with 2 × 2 matrices (48), or, in components, (49). In this case there are two independent integrals of motion, and the corresponding spectral problem is given by (41). The eigenvalues of the matrix C_2, i.e. λ_{1,2} = (1/2)(E + N ± √((E − N)² + 4GM)), are invariant under the deformations, and det C_2 = (1/2)I_1² − I_2. We also note the obvious invariance of equations (42) and (49) under rescalings of x.

In the last two cases the CS (49) is equivalent to simple third-order ordinary differential equations. At B = 0, C = 1, with the additional constraint I_1 = 0, one gets the first of them, while at B = 1, C = 1 and I_1 = 0 the system (49) becomes equation (53); the second integral for these ODEs is (55). Equation (53), with G' = ∂G/∂y, is the Chazy V equation from the well-known Chazy-Bureau list of third-order ODEs having the Painleve property [25,26]. The integral (55) is known too (see e.g. [27]).
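A hedged sketch of the Lax structure described above, with ' denoting x d/dx as in the text; the precise ordering and sign inside the commutator in (39) are a convention and may differ from the source.

```latex
C_2' \;=\; \big[\,C_2,\;C_1\,\big],
\qquad
I_k \;=\; \tfrac{1}{k}\,\mathrm{tr}\,C_2^{\,k},\quad k=1,2,3,
\qquad
I_k' \;=\; \mathrm{tr}\big(C_2^{\,k-1}\,[C_2,C_1]\big) \;=\; 0 ,
```

the vanishing following from the cyclicity of the trace. In the 2 × 2 case (48), with C_2 having entries E, G, M, N, this gives I_1 = E + N and I_2 = (1/2)(E² + N² + 2GM), which indeed satisfy det C_2 = (1/2)I_1² − I_2 as quoted above.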
The appearance of the Chazy V equation among the particular cases of the system (49) suggests that for other choices of B and C the CS (49) may be equivalent to other notable third-order ODEs. This is indeed the case. Here we consider only the reduction C = 1 with I_1 = N + E = 0. In this case the system (49) reduces to equation (56), where Φ = B' + (1/2)B². The second integral is (57), and λ_{1,2} = ±√(I_2/2).

Choosing a particular B or Φ, one gets equations from the Chazy-Bureau list. Indeed, at Φ = 0 one has the Chazy V equation (53). Choosing Φ = G', one gets the Chazy VII equation. At B = 2G, equation (56) becomes the Chazy VIII equation. Choosing the function Φ such that 6Φe..., one gets the Chazy III equation. In the above particular cases the integral I_2 (57) reduces to those given in [27]. All the Chazy equations presented above have the Lax representation (39) with appropriate matrices. Solutions of all these Chazy equations provide deformations of the structure constants (48) of the two-dimensional algebra A generated by the DDA L_2a.

2. Now we pass to the DDA L_2b. The commutation relations (30) imply the bracket (62), where φ(x) is an arbitrary function and Tφ(x) = φ(x + 1). Using (62), one finds the corresponding CS, where Δ_1 = T − 1, Δ_2 = 0. In terms of the matrices C_1 and C_2 this CS is (64); for a nondegenerate matrix C_1 one has the resolved form (65). The CS (65) is the discrete version of the Lax equation (39) and has similar properties: it has three independent first integrals and represents the compatibility condition for a pair of linear problems. Note that det C_2 is a first integral too.

The CS (64) is a discrete dynamical system in the space of the structure constants. For the two-dimensional algebra A with the matrices (48) it is a system in which B and C are arbitrary functions. For a nondegenerate matrix C_1, i.e. at BG − CE ≠ 0, one has the resolved form (65). This system defines discrete deformations of the structure constants.

6 Nilpotent DDA

For the nilpotent DDA L_3, in virtue of the defining relations (32), one has the bracket (72); in matrix form the CS is (73), and for an invertible matrix C_1 it can be resolved. This system of ODEs has three independent first integrals and is equivalent to the compatibility condition for a linear system. So, as in the previous section, the CS (73) describes iso-spectral deformations of the matrix C_1. This CS governs the deformations generated by L_3.

For the two-dimensional algebra A without unit element the CS is given by equation (73) with the matrices (48). The first integrals in this case are I_1 = B + G, I_2 = (1/2)(B² + G² + 2CE) and det C_1 = (1/2)I_1² − I_2. Since det C_1 is constant on the solutions of the system, at det C_1 ≠ 0 one can always introduce the variable y defined by x = y det C_1, such that the CS (74) takes a rescaled form.

7 Solvable DDAs

The corresponding CS is (89). Choosing B and C as free functions and assuming that BG − CE ≠ 0, one can easily resolve (89) with respect to TE, TG, TM, TN. For instance, with B = C = 1 one gets an explicit four-dimensional mapping.

2. In a similar manner one finds the CS associated with the solvable DDA L_5. Since in this case [p_1, φ(x)] = (T − 1)φ(x) p_1 and [p_2, φ(x)] = (T^{-1} − 1)φ(x) p_2 (91), the CS takes the corresponding form. For nondegenerate C_2 it is equivalent to
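Since the displayed Chazy equations are lost in this extract, it may help to recall the classical form of the most famous member of the list named above; this is a standard fact, not a reconstruction of the paper's own equation.

```latex
% Chazy III equation in its classical form:
y''' \;=\; 2\,y\,y'' \;-\; 3\,(y')^{2} .
```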
T-Lymphocytes Enable Osteoblast Maturation via IL-17F during the Early Phase of Fracture Repair

While it is well known that the presence of lymphocytes and cytokines is important for fracture healing, the exact role of the various cytokines expressed by cells of the immune system in osteoblast biology remains unclear. To study the role of inflammatory cytokines in fracture repair, we studied tibial bone healing in wild-type and Rag1−/− mice. Histological analysis, µCT stereology, biomechanical testing, calcein staining and quantitative RNA gene expression studies were performed on healing tibial fractures. These data support the Rag1−/− mouse as a model of impaired fracture healing compared to wild-type. Moreover, the pro-inflammatory cytokine IL-17F was found to be a key mediator of the cellular response of the immune system in osteogenesis. In vitro studies showed that IL-17F alone stimulated osteoblast maturation. We propose a model in which the Th17 subset of T-lymphocytes produces IL-17F to stimulate bone healing. This is a pivotal link in advancing our current understanding of the molecular and cellular basis of fracture healing, which in turn may aid in optimizing fracture management and in the treatment of impaired bone healing.

Introduction

The molecular and cellular regulation of fracture healing is not completely understood, yet such knowledge is critical to developing treatments that optimize bone repair and remodeling. There is growing evidence that inflammation plays a crucial role in early fracture repair [1-5]. In mouse models, interleukin (IL)-1, IL-6 and tumor necrosis factor α (TNFα) expression is present at the fracture site within the first 24 h post-injury, with both TNFα and IL-6 knockout mice demonstrating delayed endochondral repair and callus remodeling [3,6]. As in other injury states, such as wound healing [7], the immunologic response is pivotal in initiating the triggers for the cellular differentiation required for successful bone healing.

Although most osteoimmunology research has centered on inflammatory and metabolic bone processes [8], in diseases such as rheumatoid arthritis and osteoporosis, interest within the context of other pathological conditions, such as fracture healing, is a more recent focus. The molecular links between bone and the immune system emerged from the identification of receptor activator of nuclear factor-kappaB (RANK) and its ligand RANKL as key osteoclastogenic molecules. However, immunological regulation of the osteoblast has remained particularly poorly understood. While it is well known that osteoblasts and osteoclasts interact mutually through RANK-RANKL signaling [8-12], the role of lymphocytes and cytokines in osteoblast biology, with respect to osteoblast activation and maturation during fracture healing, remains unknown [13-15].

In normal fracture healing, osteoblasts synthesize osteoid matrix, which is eventually mineralized to produce bone. Cells of the osteoblast lineage include bone-lining cells and osteocytes, the latter of which become embedded in lacunae as the surrounding bone forms. Runt-related transcription factor 2/core binding factor 1 (Runx2) and Osterix (OSx) are both transcription factors known to be essential for early osteoblast differentiation.
Similarly, activation of the canonical Wnt and bone morphogenetic protein (BMP) signaling pathways is known to play a role in osteoblast differentiation [16]. Yet exactly how these pathways are affected by the immune system remains an ongoing area of investigation.

Mice lacking the recombinase activating genes Rag1 or Rag2 are unable to form T-cell or B-cell receptors and hence completely lack mature T and B lymphocytes [17,18]. The Rag1 and Rag2 proteins act in combination as a heterodimer to facilitate the rearrangement of the variable (V), diversity (D) and joining (J) genes required for the generation of immunoglobulin and T-cell receptors. This rearrangement is necessary for diversity in antigen recognition and is permissive in allowing developing B-cells and T-cells to mature and enter the circulation. In turn, Rag1-deficient mice are devoid of any lymphocytic sources of interleukins and provide a preclinical model of fracture healing in the absence of these secreted factors. Thus, the Rag1−/− mouse is a useful animal in which to elucidate the physiological role of T-cells and their subtypes in osteoblast differentiation during fracture healing.

Ethics Statement

All studies followed the Canadian Council on Animal Care (CCAC) guidelines and all procedures were approved by the Sunnybrook Health Sciences Centre Animal Care Committee (AUP #09-407). All surgery was performed under isoflurane gas anesthesia, and all efforts were made to minimize suffering.

Mice

B6.129S7-Rag1tm1Mom/J (Rag1−/−) and C57BL/6J wild-type (WT) male 12-week-old mice were used (Jackson Laboratory, Bar Harbor, Maine). A longitudinal incision was made over the knee, and a 0.5 mm hole was made just proximal to the tibial tubercle and lateral to the patellar tendon. The tibia was pre-stabilized by placing a 0.9 mm intramedullary pin (Fine Science Tools, http://www.finescience.com/) in the marrow space as previously reported [19], with the following modifications. The fracture was generated by an open osteotomy in the mid-shaft of the tibia through a separate small anterolateral incision with minimal soft tissue dissection. Previous data show that a fracture generated in this manner heals through both endochondral and intramembranous ossification [20]. This technique allows consistency in fracture generation with respect to fracture level and orientation (i.e., transverse vs. multi-fragmentary), and keeps the fibula intact, thus minimizing variability between specimens and allowing a more homogeneous assessment of fracture callus cell types. The animals were allowed unrestricted weight-bearing immediately following surgery. Mice were euthanized at different time points post fracture and the limbs were harvested for analysis.

In vitro Studies

To determine whether the presence or absence of mature T-cells influences osteoblast differentiation, primary mesenchymal stromal cells (MSC) were harvested from the bilateral femurs, tibias and humeri of 12-week-old WT and Rag1−/− mice. After lysis of red blood cells using ACK lysis buffer, 5.0 × 10^6 cells/mL were seeded per 12-well plate for 7 days in a 37 °C incubator (Becton Dickinson) in αMEM (Wisent, St-Bruno, Quebec) containing high glucose supplemented with 100 U/ml penicillin, 100 µg/ml streptomycin, and 10% fetal calf serum. At day 4, half of the media containing nonadherent cells was exchanged with fresh media, and at day 7 the media was changed completely to osteoblast differentiation media (αMEM supplemented with 50 µg/ml ascorbic acid (Sigma-Aldrich, St. Louis, MO),
10^−8 M dexamethasone (Sigma-Aldrich, St. Louis, MO), and 8 mM β-glycerophosphate (Sigma-Aldrich, St. Louis, MO)). Cells were harvested at 20 days for RNA extraction and analysis with the RNeasy Plus Mini Kit (Qiagen, Valencia, CA) for quantitative real-time RT-PCR, and for staining with Alizarin red.

The murine pre-osteoblast cell line MC3T3-E1 was maintained in αMEM supplemented with 2.5% fetal bovine serum (FBS; Gibco, Invitrogen), 100 U/ml penicillin and 100 µg/ml streptomycin at 37 °C in a 5% CO2 atmosphere. The spontaneous differentiation of MC3T3-E1 cells and primary mesenchymal stromal cells into osteoblasts was induced by osteoblast differentiation media as above. For IL-17F treatment, 2.5 × 10^5 MC3T3-E1 cells and 2.5 × 10^5 primary mesenchymal stromal cells prepared as above were seeded in 6-well plates in αMEM with 20 ng/ml of IL-17F (R&D Systems).

Fracture Analysis

Tibias were harvested at 3, 7, 14, 21, 28 and 35 days post-fracture. Tibias harvested at 3 and 7 days were fixed in 4% paraformaldehyde for histological analysis, decalcified in 10% EDTA (pH 7.4) for 3 weeks, dehydrated and embedded in paraffin. Sections 10 µm thick were prepared with Safranin O staining (Sigma) (n = 3) for histologic analysis. Immunohistochemistry (IHC) was performed using the ABC Kit (Vector Laboratories, Burlingame, CA) following the standard manufacturer protocol. The monoclonal anti-CD3 with goat anti-rabbit and anti-mouse immunoglobulin (Ig) from Abcam (Cambridge, CA) was used, as was the anti-mouse IL-17F antibody from R&D Systems (Minneapolis, MN).

Tibias harvested at 3 and 7 days also underwent total RNA isolation from the callus. The callus was snap frozen in liquid nitrogen, processed with a BioPulverizer (MidSci), and total RNA was extracted using TRIZOL (Invitrogen). Quantitative real-time reverse transcriptase polymerase chain reaction (RT-PCR) was performed with the StepOnePlus system (Applied Biosystems) using SYBR Green (Bio-Rad). The primers are listed in Table 1.

Fractured and unfractured tibias harvested at 14, 21 and 28 days from both WT and Rag1−/− mice were fluorochrome labeled with calcein green as described by van Gaalen et al., 2010 [21] (n = 3 per group). At 2 and 9 days prior to harvest, mice were given 30 mg/kg calcein green (Sigma-Aldrich, St. Louis, MO) via peritoneal injection to sequentially label new bone deposition and bone remodeling during a one-week healing period. Briefly, harvested tibias were fixed in 70% ethanol. In vacuum jars, specimens were sequentially dehydrated using ascending acetone concentrations. Following dehydration, infiltration was performed with increasing concentrations of Spurr resin (50%, 80%, 2 × 100%) (SPI-Pon 812 Embedding Resin System). Sections of 7 µm were cut along the long axis of the tibia using a rotary microtome (Leica RM 2165) for fluorescent microscopy imaging (495 nm/521 nm, FITC) and analysis (Bioquant, Nashville, TN).
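The gene-expression comparisons reported in the results below are GAPDH-normalized qPCR data. The paper does not state its quantification formula; the following is a minimal sketch of the conventional 2^−ΔΔCt calculation, with invented Ct values, purely for illustration.

```python
# Illustrative 2^-ΔΔCt relative-expression calculation for GAPDH-normalized
# qPCR comparisons (e.g., WT vs Rag1-/- callus). Ct values are made up; the
# paper reports fold differences but not raw Ct data.
import numpy as np

def rel_expression(ct_gene, ct_gapdh, ct_gene_ref, ct_gapdh_ref):
    """Fold change of a target gene vs a reference condition (2^-ΔΔCt)."""
    d_ct = np.asarray(ct_gene) - np.asarray(ct_gapdh)                        # ΔCt, sample
    d_ct_ref = np.mean(np.asarray(ct_gene_ref) - np.asarray(ct_gapdh_ref))  # ΔCt, reference
    return 2.0 ** -(d_ct - d_ct_ref)                                        # ΔΔCt -> fold change

# hypothetical triplicates: a bone marker in WT callus relative to Rag1-/- callus
wt = rel_expression([22.1, 22.4, 21.9], [17.0, 17.2, 16.9],
                    [25.3, 25.6, 25.1], [17.1, 17.0, 17.3])
print("fold change, WT vs Rag1-/-:", wt.round(2))
```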
Micro-computed Tomography Analysis

WT and Rag1−/− mouse tibias harvested at 28 and 35 days were used for micro-computed tomography (µCT)-based stereologic analysis and standard torsional biomechanical testing. Callus geometry and mineralization were characterized using high-resolution µCT. Samples were scanned at an isotropic voxel size of 8 µm (SkyScan 1172, SkyScan, Belgium) using a voltage of 50 kV, a current of 160 µA, and a 0.5 mm aluminum filter. Samples were placed with the long axis of the tibia coincident with the vertical axis of the scanner, and scanned together with calibrated bone density phantoms to allow direct measurement of bone density from the scans using a linear relationship. Three-dimensional callus properties of geometry and mineralization were calculated using the custom processing option available in CTAn software (SkyScan, Belgium) and in-house code in AmiraDEV (VGS, Germany). Structural and material parameters were calculated by first segmenting the images with a global threshold value of 0.2 g HA/cm³ to define voxels corresponding to bone [22]. Following a despeckling filter to remove noise from the images, a region of interest (ROI) was fitted around the callus on subsequent axial slices using an adaptive contouring algorithm, which ignored holes of fewer than 50 pixels on the outside of the callus. Measurements of the callus properties in the ROI included Bone Volume (BV, mm³), Total Callus Volume (TV, mm³), Bone Volume Fraction (BV/TV, %), Trabecular Thickness (Tb.Th., mm), Trabecular Number (Tb.N., 1/mm), Trabecular Separation (Tb.Sp., mm), Mean Bone Mineral Density (BMD, g HA/cm³), Mean Tissue Mineral Density (TMD, g HA/cm³) and Torsional Rigidity (CTRA, kN·mm²) [23].

Mechanical Testing

Torsional strength and torsional stiffness were measured at 28 and 35 days using an MTS Bionix 858 (MTS Systems, MN, USA) materials testing system. Prior to testing, intramedullary pins and fibulas were removed from all tibias using sharp bone scissors. Each tibia was then aligned longitudinally with the loading axis of the MTS and potted proximal and distal to the fracture callus in polymethylmethacrylate. The gauge length, defined as the length between the two potting casings, was kept consistent for each sample. Torque was measured during the application of angular displacement (1°/second) until failure or to a maximum displacement of 30°. The maximum torque, twist angle at failure, and torsional stiffness were calculated from the generated load-displacement data. Torsional strength was defined as the maximum load sustained during loading, and torsional stiffness as the slope of the line extending to the point of maximum sustained torque.

Statistical Analysis

Data are expressed as mean ± standard deviation. Statistical differences were calculated using a Student t test. Unless stated otherwise, 10 animals were used per group for the RT-PCR expression data; 9 animals (WT) and 8 animals (Rag1−/−) were used for the µCT stereology and biomechanical analyses. A p value below 0.05 was considered statistically significant.

T-cells are Present in the Early Phase of Fracture Repair and Correlate with Bone Marker Gene Expression

To confirm the role of T-cells as early responders in the fracture healing cascade, immunohistochemistry of WT mouse fracture callus was analyzed. CD3, a marker ubiquitously expressed by T-cells, was used to identify these cells in the fracture region. A low-magnification Safranin O section (Fig. 1A) was used to illustrate the region of interest; higher-magnification images corroborated the histology (Fig. 1B) and showed positive CD3 staining of T-cells, which characteristically localized adjacent to the endosteum in the fracture hematoma at 3 days post fracture in all sections (Fig. 1C). The Rag1−/− mice at 3 days post fracture did not show any specific T-cell staining in multiple high-power-field regions of the fracture callus (Fig. 1D).
The distribution of T-cells, predominantly in the endosteal fracture hematoma, was representative of all the WT mouse CD3 immunohistochemistry during this early phase of healing. Following the inflammatory phase, at 7 days post fracture, immunostaining showed a reduction in the presence of T-cells at the fracture site in similar sections during the later stages of bone healing (data not shown). Although the presence of B-cells in the fracture hematoma has been observed [12], antigen processing and clonal expansion are not known early responses of fracture healing. While CD45R antibody staining is commonly used as a marker of B-cells, there are reported examples of cross-reactivity in the mouse, with positive immunofluorescence and signal transduction of T-cells using CD45R [24]; therefore, any positive CD45R staining would be equivocal in concluding B-cell specificity.

Interestingly, while the relative absence of T-cells in the later phases of fracture healing (7, 14 and 28 days post fracture) was accompanied by no significant differences in mature bone marker expression between WT and Rag1−/− mice (data not shown), the presence of T-cells at 3 days post fracture correlated with the level of expression of mature bone markers, detected using real-time quantitative RT-PCR normalized to GAPDH. Expression of Collagen 1 (Col1) (p < 0.01), Collagen 2 (Col2) (p < 0.05), bone sialoprotein (BSP) (p < 0.01), BMP2 (p < 0.01) and Runx2 (p < 0.05) was significantly increased in WT compared to Rag1−/− mice at 3 days post fracture (Fig. 1E). The Rag1−/− mice, which lack mature T-cells, generally failed to increase bone marker expression at 3 days post fracture; both Col1 and BSP were the most significantly decreased (p < 0.01). BSP has the biophysical and chemical properties of a nucleator, and its temporo-spatial expression has been shown to coincide with de novo mineralization in bone [25]. The significant decrease in BSP suggests an impaired ability of Rag1−/− mice to up-regulate BSP in osteoblasts to form bone.

Pro-inflammatory Cytokine Expression is Decreased and Anti-inflammatory Cytokine Expression is Increased in the Fracture Callus of Rag1−/− Mice

To gain further insight into the cytokines involved during early T-cell activation and osteoblast maturation, thirty-two cytokines and chemokines were profiled and their expression analyzed using a protein assay of mouse serum 2 days post fracture. The cytokines IL-6 and granulocyte colony-stimulating factor (G-CSF) were significantly lower in Rag1−/− mice compared to WT mice (p < 0.05) (Fig. 2A). As well, eotaxin (p < 0.05), IL-12 (p40) (p < 0.05) and CXC motif chemokine 10 (IP-10) (p < 0.05) were elevated in Rag1−/− mice compared to WT, whereas CXC chemokine ligand 5 (LIX) (p < 0.01) and CXC motif ligand 9 (MIG) (p < 0.01) were decreased in Rag1−/− mice (data not shown). The remainder of the cytokine/chemokine panel showed no significant differences in expression between WT and Rag1−/− mice at this 2-day time point (n = 3 per group). The systemic changes in cytokine and chemokine expression during the early phase of healing were further assessed locally at the fracture callus. Quantitative RT-PCR of the extracted RNA at 3, 7, 14 and 28 days was screened using several cytokines previously identified in fracture healing (Table 1). The regulatory patterns of the cytokines at 3 days showed a characteristic grouping.
The pro-inflammatory cytokines IL-6 (p < 0.05), IL-17F (p < 0.05) and IL-23 (p < 0.05) were expressed at significantly lower levels in the Rag1−/− mice compared to WT, while the anti-inflammatory cytokines IL-10 (p < 0.05) and TGFβ (p < 0.05) were expressed at much higher levels relative to WT mice (Fig. 2B). Of note, IL-17A, which has an important role in regulating osteoclastogenesis, was undetectable in WT and Rag1−/− mice at any of the early or late time points studied, indicating that IL-17A is not induced during the fracture repair process. Immunohistochemistry with an IL-17F antibody was performed to confirm the presence of this pro-inflammatory cytokine at the fracture site during the early phase of healing in WT and Rag1−/− mice 3 days post fracture (Fig. 2C). IL-17F was present in the WT fracture hematoma, in a distribution similar to the previous CD3 staining of T-cells. Predictably, the Rag1−/− mice showed no appreciable IL-17F staining in this same region. Unfractured tibias and tibias at 7 days post fracture demonstrated no appreciable positive IL-17F staining (data not shown). However, the level of TGFβ expression in Rag1−/− mice was comparable to that of baseline, unbroken limbs, suggesting a constitutive activity of TGFβ in WT mice that is down-regulated post fracture; this TGFβ down-regulation was not observed in Rag1−/− mice. Similarly, the up-regulation of IL-6 and IL-17F was absent in Rag1−/− mice. In fact, IL-10 was the only cytokine up-regulated post fracture in the Rag1−/− mice. At 7, 14 and 28 days, no significant differences in cytokine expression were seen between the Rag1−/− and WT mice, except for IL-6, which remained decreased in Rag1−/− compared to WT mice (data not shown). Thus, the observed regulatory grouping of cytokines at 3 days was specific to the early phase of fracture healing.

To determine the effects of IL-17F and TGFβ on Rag1−/− and WT mice, primary mesenchymal stromal cell cultures were studied. Rag1−/− mesenchymal stromal cells treated with IL-17F showed promoted osteoblast maturation and increased bone marker expression of Col1, Col2 and Runx2 (Fig. 3B left). Primary mesenchymal stromal cells from WT mice treated with TGFβ showed suppressed osteoblast maturation and a significant decrease in the expression of all the bone markers assessed (Fig. 3B right).

There is a Reduction in Osteogenesis during Fracture Repair in Rag1−/− Mice

In vitro cultures of primary mesenchymal stromal cells treated with osteoblast differentiation media were established from unfractured WT and Rag1−/− mice to assess any inherent differences in mineralization potential. RNA extracted from Rag1−/− primary bone cultures (Fig. 4A) demonstrated decreased expression of the mature bone markers Col1 (p < 0.01), Col2 (p < 0.005), BSP (p < 0.05) and osteocalcin (p < 0.01) compared to WT, normalized to GAPDH. The bone markers ALP, BMP2 and Runx2 showed no significant differences in expression. Hence, the Rag1−/− primary bone cultures showed a lower capacity for osteoblast differentiation and bone marker expression than those from WT mice. Alizarin red staining of colonies (Fig. 4B) from primary mesenchymal stromal cells of Rag1−/− and WT mice differentiated to osteoblast colony-forming units showed a significant decrease in mineralization at 20 days of culture in Rag1−/− mice compared to WT, suggesting an overall lack of mature osteoblast activity in this T-cell-deficient model.
No differences were observed in the confluence or number of colonies between Rag1−/− and WT cultures. Subsequent studies of Rag1−/− cultures (data not shown) with incubation periods longer than 20 days, followed by Alizarin red staining, showed eventual mineralization similar to that of the WT cultures at 20 days shown previously. Thus, mineralization did occur in the Rag1−/− mice, but in a delayed fashion. Supporting in vivo calcein green fluorochrome labeling studies (Fig. 4C) showed continued decreased mineralization and bone formation in Rag1−/− mice in the subsequent phases of fracture healing, at the 14-, 21- and 28-day time points, compared to WT. Calcein was administered at 2 and 9 days prior to harvest of the limbs. With each fluorochrome label marking an injection time point, the amount of bone formation during this 7-day period was directly reflected by the distance measured between the two labels.

The Fracture Callus from Rag1−/− Mice Shows Less Healing than Observed in WT Mice

The differences in callus healing were assessed. Histologic data indicated the persistence of a cartilage template and less endochondral bone formation at 28 days in the Rag1−/− mice compared to WT mice (Fig. 5A). More proteoglycan staining and less bridging of the fracture gap with bone were apparent in the Rag1−/− mice. This was further corroborated by longitudinal µCT scan images of the fracture callus, which were analyzed using current stereologic/histomorphometric methods (Fig. 5B). Qualitatively, the results indicated a larger, less healed bridging fracture callus, which quantitatively exhibited lower tissue mineral density (TMD, p = 0.004) and higher total callus volume (TV, p = 0.002) in Rag1−/− mice compared to WT calluses at 28 days post fracture (Fig. 5C). The WT mice demonstrated more advanced remodeling characteristics, with a smaller, higher-density fracture callus. There were no differences (p > 0.2) in bone volume fraction (BV/TV), trabecular thickness (Tb.Th.), trabecular number (Tb.N.) or trabecular separation (Tb.Sp.) between groups at 28 days. At 35 days, no differences were observed between the two groups for any of the parameters (p > 0.2). Thus, it appears that bone healing eventually occurs in the Rag1−/− mice but is delayed in comparison to WT mice. Biomechanical testing showed higher ultimate torque and torsional stiffness (p = 0.002) in the Rag1−/− mice at 28 days compared to WT (Fig. 5D); this elevation was no longer significant at 35 days. Although unexpectedly higher torsional stiffness resulted from the biomechanical testing of Rag1−/− mice, better healing cannot be concluded from these data: the wider distribution of bone deposition around the diaphyseal (neutral) axis, despite its reduced density, resulted in an increased polar moment of inertia, leading to greater torsional rigidity (p = 0.002) [23].

Discussion

Here we found that a lack of T-cells delayed osteoblast maturation and prolonged the proliferative phase of fracture healing, resulting in more immature osteoblasts and decreased bone formation. Quantitative RNA expression studies of the fracture callus and of primary bone cultures confirmed that the overall decrease in mature osteoblasts correlated with the absence of T-cells and their secreted factors in the early phases of fracture repair.
T-cells are selectively recruited in control human fractures during the early phase of fracture repair [26], and T helper cells and macrophages are disproportionately elevated at the fracture site compared to what is seen systemically [4]. This indicates a crucial role for T-cells in osteoblast regulation, subsequently uncoupling the proliferative and remodeling phases of fracture healing, such that a lack of T-cells results in delayed and/or impaired bone healing. This is commonly seen clinically in immune-deficient conditions such as those related to trauma, autoimmune disorders or malignancy, as well as malnutrition and advanced age [14,27-31]. While the factors that determine fracture repair in immunocompromised patients are complex, focusing on the T-cell-mediated maturation of osteoblasts may provide the critical link to understanding the actual fracture repair mechanism. Our data define a new role for T-cells in regulating osteoblast differentiation, expanding on previous investigations of the role of T-cells in the bone of Rag1−/− [22] and γδ T-cell-deficient mice [32].

Based on the µCT analysis, the Rag1−/− mice demonstrated a larger, lower-density callus compared to WT at 28 days, indicating a more immature state of healing. While this wider but lower-density pattern of bone distribution led to an increased polar moment of inertia (PMI) and greater torsional rigidity, it did not indicate a more advanced state of bone healing. Torsional rigidity is particularly relevant to heterogeneous materials such as fracture callus/healing bone. Calculated torsional rigidity values will not equal PMI values, but will show the same trends, in that a wider cross-sectional area yields both higher PMI and higher torsional rigidity. Torsional rigidity, however, is the more appropriate metric, as it relates more directly to experimentally measured torsional stiffness, incorporating both geometric and material parameters, whereas PMI represents the geometric distribution alone. As such, torsional rigidity was used in this study; it has been applied successfully in the recent biomechanics literature on rodent long bones, including the fidelity of healed fractures in rat femora [23,33].

The cytokine expression data support the concept that an imbalance of pro- and anti-inflammatory cytokines may be the underlying mechanism of osteogenesis. For this reason, we directly treated osteoblast cells in culture with these cytokines to observe their effects on maturation. Indeed, IL-17F alone promoted increased expression of osteoblast bone markers, highlighting for the first time a key role for the pro-inflammatory cytokine IL-17F in fracture healing. This suggests a model of fracture healing whereby IL-17F, known to be secreted by the T helper cell 17 (Th17) subset of T-cells, stimulates osteoblast maturation. More importantly, the impaired osteoblast maturation and decreased bone marker expression observed in Rag1−/− primary mesenchymal stromal cells could be rescued by direct IL-17F treatment, further supporting its osteoinductive role. Immunodeficient Rag1−/− primary mesenchymal stromal cells demonstrated an impaired osteogenic potential compared to WT mice in vitro. The lack of T-cells, including Th17 cells, in the Rag1−/− mice is an important difference, as WT mice have Th17 cells in the bone marrow which have the potential to be stimulated.
Although Th17 cell stimulation was not fracture-induced here, the addition of osteoblast differentiation media to the cultures after the initial incubation period for cell adherence did in fact influence bone marker expression in WT cells. As Rag1−/− mice lack Th17 cells from the outset, the addition of osteoblast differentiation media had little effect in inducing the markers of bone formation. Thus, these results in part support the conclusion that T-cells are important in osteoblast differentiation. However, mesenchymal stromal cells do not express IL-17F, and it was also not found to be expressed in osteoblasts differentiated from primary mesenchymal cells in vitro (data not shown). Hence, it is postulated that a fracture stimulus in vivo is key to the up-regulation and expression of this essential pro-inflammatory cytokine to promote bone formation and healing. Factors to which the cells are exposed in the animal in vivo before they are harvested will influence cell number or differentiation potential; this is illustrated by data showing that even short-term ovariectomy influences osteoblast differentiation when cells are assayed in vitro [34].

Although IL-6 has only modest direct effects on osteoblast differentiation [35], the upstream increase in IL-6 early post fracture likely promotes naïve CD4+ T-cells into Th17 cells, stimulating pre-osteoblast cell differentiation. Activated IL-6 (IL-6 bound to the IL-6 receptor (IL-6R)) interacts with gp130, a signal-transducing subunit, to phosphorylate Janus kinase (Jak) and thereby activate cytoplasmic transcription factors in Th17 cells for the production of IL-17 [14,36,37]. Concomitantly, IL-6 has a dual role in inhibiting the regulatory T-cell (Treg) population [38,39], which prevents Treg inhibition of osteoblast activity, further allowing osteoblast activity to increase beyond its equilibrium state and thus permitting net bone formation to occur during fracture healing (Fig. 6). This is supported by our qPCR expression data from 3- and 7-day fracture callus RNA, which revealed a sustained down-regulation of IL-6 in Rag1−/− mice compared to WT beyond the early phases of healing, resulting in less mature osteoblasts and less mineralization.

Figure 5. (A) ... compared to WT mice. (B) µCT analysis confirmed the presence of a wider, lower-density callus in Rag1−/− mouse tibias at 28 days. (C) This was reflected by increased total callus volume (TV) measurements (p = 0.002) and decreased tissue mineral density (TMD) at 28 days in the Rag1−/− compared to WT mice (p = 0.004). Torsional rigidity (TR) was significantly higher in Rag1−/− mice compared to WT (p = 0.002). (D) Mechanical assessment of the samples using torsional mechanical testing showed significantly higher ultimate torque and torsional stiffness in the Rag1−/− mice at 28 days compared to WT (p = 0.002). doi:10.1371/journal.pone.0040044.g005

The decrease in pro-inflammatory cytokines and subsequently impaired fracture healing is substantiated by other studies in the literature. A mouse model lacking IL-6 expression was found to have biomechanically weaker calluses during early fracture healing [40]. Furthermore, TNFα, another pro-inflammatory cytokine, was found to facilitate fracture repair, albeit through its actions on the muscle-derived stromal cell population rather than direct actions on the osteoblast population [1]. Another study suggests that the pro-inflammatory cytokine IL-1β accelerates osteoblast differentiation and callus mineralization.
However, the authors ultimately did not find any net effect of IL-1β deficiency alone on fracture healing, which they attributed to multifactorial contributions from the other pro-inflammatory cytokines [41]. It could be suggested that an alternative explanation to T-cell modulation as the mechanism of osteoblast maturation is that the Rag1 gene itself is responsible for the maturation effects on the osteoblast. It is unlikely, however, that expression of the Rag1 protein in the osteoblast accounts for these findings. Expression of the Rag1 gene is very tightly regulated and has no significant activity beyond developing T-cells and B-cells. Moreover, given that the Rag1 protein is a recombinase and cleaves DNA, its more ubiquitous expression would result in increased tumorigenesis in bone, which has not been reported [42].

A mechanism proposed from these findings is illustrated in Figure 6. In this model of early fracture repair, the secretion of IL-17F by Th17 T-cells is thought to enable osteoblast maturation and activation, permitting bone synthesis to occur. IL-17F is a more recently recognized addition to the family of pro-inflammatory cytokines [14,38,43]. To date, IL-17A and IL-17F are known to have roles in immunity augmenting the effects of IL-6 and TNFα, with IL-17F known to be expressed in Th17, natural killer (NK) and γδ T-cells [38]. Here, these data suggest a novel link between T-cells and osteoblast biology, with IL-17F as a key element. It is possible that this is a common regulatory point for bone metabolism, as there are studies in the literature supporting its role in promoting as well as inhibiting bone formation. One study [44] found that IL-17A suppressed osteoclast differentiation when applied at high concentrations in vitro, while another report [45] found IL-17 to induce osteoclast formation. Similar to the interaction between the osteoblast and the osteoclast mediated by the RANK-RANKL pathway, whereby secretion of RANKL by osteoblasts and its binding to the RANK receptor on the osteoclast surface is required for osteoclast activation, IL-17 may stimulate osteoblasts at different concentrations, producing either net osteoblast or net osteoclast action through its direct actions on the osteoblast. Furthermore, these molecules act within a network of other proteins and signaling pathways: a change in the balance of activity among any of the pathways upstream of IL-17 may produce either net bone formation or net resorption. A different balance of overall pathway activity and IL-17 levels may also explain the differences between the γδ T-cell-deficient model [32], which exhibited improved fracture healing, and our results in the Rag1−/− mice. The Rag1−/− mouse model used in our study produces a more global depletion of lymphocytes, including Th17 cells, and therefore a different cellular and cytokine fracture-healing environment. Moreover, γδ T-cells can produce IL-10 and TGFβ, certainly suggesting that they have the potential to inhibit osteoblast differentiation [46]. While a recent study [22] also examined fracture repair in Rag1−/− mice, our study focuses on the mechanism by which the effectors of the adaptive immune system alter mesenchymal progenitor differentiation via an imbalance of pro- and anti-inflammatory cytokines.
A decrease in pro-inflammatory cytokines and an increase in anti-inflammatory cytokines, especially IL-10, in the Rag1−/− mice, reported in that study, agree with our results. However, their conclusion of a negative effect of lymphocytes on fracture healing was not reproduced in our study. Nevertheless, the immune system and its cellular activation and expression through cytokines as a mechanism of osteoblast regulation during fracture repair are only now being investigated. Further studies are needed to explore the specific signaling pathways involved in osteoblast differentiation and maturation as modulated by T-cells. In conclusion, our work has shown that loss of Rag1 expression, leading to a global depletion of lymphocytic activity, appears to be detrimental to the processes of fracture healing, consistent with the assumption that T-cells are essential in fracture healing. IL-17F not only can stimulate and promote osteoblast maturation, but has also been shown to directly rescue impaired healing in vitro. In doing so, it may be the key mediator regulating the balance between net osteoblast and net osteoclast activity. As such, future studies aimed at further elucidating the regulation of IL-17F and its actions on mesenchymal progenitor cells may prove pivotal in understanding the interactions between the immune system and bone healing. Significance This study provides a novel mechanism of interaction between the immune system and bone healing, specifically on how T-cells enable osteoblast maturation via IL-17F. This may provide future molecular targets with clinical applications to improve bone healing in those with nonunion or at risk of impaired fracture healing.
7,782.6
2012-06-29T00:00:00.000
[ "Biology", "Medicine" ]
Bioluminescent RIPoptosome Assay for FADD/RIPK1 Interaction Based on Split Luciferase Assay in a Human Neuroblastoma Cell Line SH-SY5Y Different programed cell death (PCD) modalities involve protein–protein interactions in large complexes. Tumor necrosis factor α (TNFα)-stimulated assembly of the receptor-interacting protein kinase 1 (RIPK1)/Fas-associated death domain (FADD) interaction forms the RIPoptosome complex, which may cause either apoptosis or necroptosis. The present study addresses the interaction of RIPK1 and FADD in TNFα signaling by fusion of C-terminal (CLuc) and N-terminal (NLuc) luciferase fragments to RIPK1 (RIPK1-CLuc, R1C) and FADD (FADD-NLuc, FN), respectively, in a caspase 8-negative neuroblastic SH-SY5Y cell line. In addition, based on our findings, an RIPK1 mutant (R1C K612R) had less interaction with FN, resulting in increased cell viability. Moreover, presence of a caspase inhibitor (zVAD.fmk) increases luciferase activity compared to the Smac mimetic BV6 (B), TNFα-induced (T) and non-induced cells. Furthermore, etoposide decreased luciferase activity, but dexamethasone was not effective in SH-SY5Y. This reporter assay might be used to evaluate basic aspects of this interaction as well as for screening of necroptosis- and apoptosis-targeting drugs with potential therapeutic application. Introduction Programed cell death (PCD) is a regulated cellular suicide through events inside a cell, promoting cell death through protein-protein interactions in supramolecular complexes that conduct the cell fate toward either survival or death [1]. Several types of PCD have been discovered in the past decades, including autophagy, necroptosis, ferroptosis, apoptosis and pyroptosis [2]. Analysis of different PCD pathways provides evidence relating to disorders and the discovery of suitable drugs for their protein targets [3,4]. FADD is a bipartite adaptor protein containing both a death effector domain (DED) and a death domain (DD). RIPK1 also possesses a DD at its C-terminus and binds to FADD via homotypic DD:DD interactions, and caspase 8 contains tandem DEDs that allow the recruitment of FADD via DED:DED interactions. The RIPK1-DD:FADD-DD complex forms the core part of the oligomeric platform of the ~2-MDa RIPoptosome, while the FADD-DED:caspase 8-DED interaction is responsible for caspase 8 recruitment [10,15]. So far, negative-stain electron microscopy, modeling, immunoblotting (caspase 8 IP) and gel filtration have confirmed the RIPoptosome platform [10,11,16]. Because of differences in cellular contents in various cell types, the understanding of cell death platforms has been rather complex. Therefore, developing useful reporters can support better detection of these pathways, with the aim of better understanding and curing numerous diseases such as cancer and neurodegenerative diseases [17,18]. Currently, progress in the molecular area has been highly effective in the discovery of some important interactions in PCD complexes; however, some conundrums remain to be elucidated. Consequently, many techniques have been developed based on protein-fragment complementation assays using luciferase-based biosensors [19] to detect protein-protein interactions in apoptosis, necroptosis, pyroptosis and autophagy [20-24]. The development of bioluminescent reporters for protein complexes involved in cell death has enabled the screening of compounds against large protein complexes in different cell death modalities [25-30].
In this study, we describe the interaction between FADD and RIPK1 in the presence of effective inducers and inhibitors of TNFα signaling by generating reporters based on a split luciferase complementation assay in SH-SY5Y, a caspase 8-negative cell line. Additionally, the effect of zVAD.fmk, BV6 and TNFα signaling pathways on this interaction and its ensuing outcome was investigated. Reporter Constructs and Site Directed Mutagenesis The constructs of mouse RIPK1 (mRIPK1), mouse FADD (mFADD) and the ubiquitin (Ub) promoter were first inserted in pEntry vectors (gifted by VIB, Ghent University). We first introduced SacI and SacII restriction sites, using Phusion High-Fidelity DNA polymerase (Finnzymes, Life Technologies), into the pEntry3C vector (containing attL1 and attL2 sites) and the pEntryR2L3 vector (containing attR2 and attL3 sites), respectively, to prepare them for cloning genes with the Gateway® system (please see Supplementary Table S1 for nomenclature and a complete list of plasmids). NLuc (amino acids 1-416) and CLuc (amino acids 395-550) of the luciferase sequence originated from the pGL3-Control Vector (Promega). PCR was performed using forward and reverse primers (Table S1). The sequences coding for NLuc or CLuc with the GS-rich linker were introduced by ligation in the pEntryR2L3 vector using the SacII and XhoI restriction sites. The sequences coding for mRIPK1 and mFADD were introduced in the pEntry3C vector using the CloneEZ PCR cloning kit (GenScript) with the SacI and SalI or BamHI and XhoI restriction sites, respectively. The sequences coding for the Ub promoter were introduced by ligation in the pEntryL4R1 vector using the BamHI and XhoI restriction sites. All used fragments were amplified by PCR as shown in Table S2 and Figure S1A. The vectors were transformed into the MC1061 E. coli strain and cultured at 37 °C for 24 h. Positive clones were selected using LB plates containing kanamycin. After plasmid extraction from broth cultures, the obtained clones were validated by double digestion as well as sequencing (Figure S1B). These sequences were then recombined into the pLenti6-R4R3-puromycin destination vector using the LR Gateway recombination system (Invitrogen). The NLuc and CLuc sequences of luciferase, under the Ub promoter, were fused to the coding sequences to give mFADD-NLuc (FN) and mRIPK1-CLuc (R1C) (Figure S3). Proper orientation of the Gateway cassette was confirmed by DNA sequencing. For validation of the reporter, RIPK1 K612R was generated by QuikChange mutagenesis (Agilent Genomics) of R1C via PrimeSTAR® HS DNA Polymerase (Takara). The vectors were transformed into the DH5α E. coli strain and cultured for 48 h at 28 °C on ampicillin plates. Positive clones were screened for the correct sequence by digestion and full-length sequencing after plasmid extraction from broth cultures. Final constructs are illustrated in Figures 1 and S3. Transient Transfection, Cellular Treatments and Extract Preparation SH-SY5Y cells were seeded at 5 × 10^5 cells/well in six-well plates, and R1C and FN were co-transfected 24 h later with a complex containing branched polyethyleneimine (PEI, 25 kDa). Reagents and 2 µg of each plasmid were added to each well [PEI/DNA ratio (w/w) = 3:1] at 70-90% confluency [31]. The media were changed after 3 h with fresh media. After 18 h, cells were either prestimulated with zVAD.fmk (25 µM), Nec-1 (5 µM) or BV6 (3 µM) alone or in the respective combinations for 1 h, followed by stimulation with TNFα (100 ng/mL).
Moreover, cells were treated with fresh media including BOR, ETPO and DEXA for 24 h and collected 48 h post-transfection. After treatment, cells were trypsinized from the plates and rinsed twice with ice-cold phosphate-buffered saline (PBS), and cells were lysed in a hypotonic lysis buffer containing 20 mM HEPES-KOH (pH 7.6), 1.5 mM MgCl2, 10 mM KCl, 1 mM EDTA, 1 mM DTT, 100 mM sucrose and 1 mM PMSF, which keeps mitochondria intact. After three freeze-thaw cycles, the insoluble material was removed by centrifugation for 10 min at 13,000× g and 4 °C. The supernatant was collected for our analysis, and total proteins were quantified by the Bradford assay [32]. Western Blot Analysis Supernatants were resolved by 12% SDS-PAGE and electrophoretically transferred onto a nitrocellulose membrane at 250 mA for 150 min, and the membrane was then blocked with 5% skim milk in a PBS buffer containing 0.1% Tween 20 (PBS-T) for 2 h. After blocking, the membrane was incubated in primary antibody diluted 1:10,000 overnight at 4 °C. After washing with PBS-T 3 times, the membrane was incubated with secondary antibody solution for 1 h at room temperature. After 3 further washes, the membrane was incubated with enhanced chemiluminescence (ECL) reagents (Lumigen, Southfield, MI, USA) for 5 min and the blot was exposed using an Alpha Innotech imager. Luciferase Activity Measurements and Cell Death Assays For analysis of split luciferase complementation activity, cells were stimulated for 24 h. The cells were analyzed for cell death induction 48 h post-transfection. For investigation of the effects of drugs, cells were treated with zVAD.fmk and TNFα and then ETPO, BOR and DEXA. Ten µL of cell lysate were added to 10 µL of substrate solution (10 mM MgSO4, 2 mM D-luciferin potassium salt (Resem, Lijnden, The Netherlands), 4 mM ATP, 50 mM Tris-HCl, pH 7.8) in a Sirius tube luminometer (Berthold Detection Systems, Germany), and luciferase complementation activity was reported as relative light units (RLU). Caspase 3 Activity Measurement To probe apoptosis induction, cells were cultured in 12-well plates (0.3 × 10^5 cells per well) and harvested and lysed in hypotonic lysis buffer after the indicated times as mentioned. Ac-DEVD-AMC (Enzo) was the substrate for caspase 3 activity.
First, 15 µL of cell extract were mixed with 100 µL of assay buffer containing 50 mM HEPES, 1 mM DTT, 5 mM EGTA and 10 µM DEVD-AMC. The increase in AMC fluorescence (excitation at 360 nm, emission at 460 nm) was followed for up to 30 min. The slope of the linear part of each curve was normalized to the protein concentration of the lysate in each reaction. Measurement of Reactive Oxygen Species Intracellular accumulation of reactive oxygen species (ROS) was measured using the cell-permeant fluorescent probe 2',7'-dichlorofluorescin diacetate (DCFH-DA, Sigma). Briefly, at selected times, cells were collected by trypsinization, washed with PBS and then incubated with a solution of DCFH-DA (10 µM) in the dark for 30 min at 37 °C. At the end of the treatment period, the fluorescence intensity was measured with a microplate reader at excitation/emission wavelengths of 480/530 nm. ROS fold changes were normalized to cell number. Statistical Analysis GraphPad Prism 8 software was used to analyze the data and construct statistical graphs. One- and two-way ANOVA tests were used to compare differences between treated groups and their paired controls, respectively. Differences between compared groups were considered statistically significant with p values lower than 0.05; * p ≤ 0.05; ** p ≤ 0.01; *** p ≤ 0.001; **** p ≤ 0.0001. All experiments were repeated at least three times, and the data are expressed as the mean ± SD from representative experiments. The pEntry and Final Constructs Were Generated by Gateway Cloning In order to create split luciferase-tagged proteins, different pEntry constructs were made. pEntry3C was prepared for mFADD and mRIPK1; pEntryR2L3 was prepared for CLuc and NLuc; and pEntryL4R1 was made containing only the Ub promoter, so that all final constructs were under the Ub promoter (Figure S3). The Ub promoter prevents overexpression of the tagged proteins and keeps their levels close to endogenous, in contrast to the CMV promoter [33]. After bacterial transformation of pEntry vectors in MC1061 E. coli and plasmid extraction, clones were evaluated using enzymatic digestion. pEntry3C, pEntryR2L3 and pEntryL4R1 constructs were double digested, respectively, by SalI and SacI; PvuII; SacII and XhoI; and BamHI and XhoI for 2 h at 37 °C. In Figure S1B, the double-digested products are shown by gel electrophoresis. To investigate the homotypic interaction among the DDs of FADD and RIPK1, both proteins were tagged at the C-terminus with NLuc or CLuc. Additionally, flexibility for the accurate folding of the fused proteins was provided by a 5-amino-acid linker (GGSGS) between the luciferase tag and the protein [34]. After bacterial transformation in the DH5α E. coli strain, the plasmids were extracted after 48 h incubation at 28 °C for the NLuc and CLuc constructs (Figure S2). Primary validation was conducted using restriction analysis with EcoRV for 2 h at 37 °C. The developed constructs are summarized in Table S1. zVAD.fmk Triggers Luciferase Activity and Protein Expression in Caspase 8-Deficient SH-SY5Y Neuroblastoma Cells The interaction between RIPK1 and FADD mediates the extrinsic apoptosis pathway in the presence of caspase 8 [35]. The interaction was examined in a caspase 8-deficient neuroblastoma cell line, originally derived from a metastatic bone tumor biopsy as a sub-line of the parental line SK-N-SH [36].
Previous data on SH-SY5Y showed that this cell line, with low expression of RIPK3, MLKL and caspase 8, probably has a novel necroptosis-like type of cell death [37,38]. Thus, we selected this cell line to better understand the role of the FADD/RIPK1 interaction in necroptosis or apoptosis in the absence of caspase 8. To evaluate this protein-protein interaction, after 24 h of transfection with PEI, we pre-treated SH-SY5Y cells with the pan-caspase inhibitor zVAD.fmk (Z) and BV6 as a bivalent Smac mimetic (B) for 2 h, followed by addition of TNFα (T) alone or combined (combination hereafter referred to as TBZ). After 48 h, lysed cells were used for experimental data. Based on previous studies, unexpectedly, we could not find any change in the FN and R1C interaction in SH-SY5Y in the presence of B; interestingly, however, Z increased the interaction between them relative to untreated cells, as expressed by luciferase reconstitution activity (Figure 2A). Western blot displayed increased expression of both R1C and FN in the presence of TZ and Z treatment, leading to the increase in luciferase activity (Figure 2C-E), in contrast to TB, which induced minimal interaction. However, the highest caspase 3 activity was observed in TB-treated cells, whereas TZ-treated cells indicated a low level of apoptosis as displayed by caspase 3 activity (Figure 2B). K612R Mutation in the DD of RIPK1 Reduces the FADD/RIPK1 Interaction A recent study with immunoblotting data showed that K612 in mouse RIPK1 is a promising ubiquitination site affecting other DD-mediated interactions, especially with the FADD protein [39]. To validate the FADD/RIPK1 interaction reporter assay, we made a K612R mutation in RIPK1 to confirm the specificity of complementation. In comparison with the luciferase activity of the native construct, K612R showed lower activity (Figure 3A). Interestingly, zVAD.fmk had no effect on caspase 3 activity based on western blot data as well as cleavage of a fluorescent caspase-3 substrate (Figure 3B,D). These results support the idea that caspase 8 is not the main caspase in SH-SY5Y. In contrast to zVAD.fmk treatment, BV6 increased caspase 3 activity in R1C (WT and mutant) alone or in combination with zVAD.fmk. Furthermore, ROS generation can occur in necroptosis [3]. The ROS assay revealed a reduction in cell death in K612R-transfected cells compared to WT (Figure 3C). These findings show that R1C K612R, with reduced interaction with FADD, probably decreased either apoptotic or necroptotic cell death.
Genotoxic Drugs such as Etoposide Decrease the FADD/RIPK1 Interaction Etoposide, an IAP-depleting compound for the induction of cell death [40], can induce the binding of caspase 8 to RIPK1 and FADD only in cancer cells with low expression of caspase 8 [10]. We examined the effects of etoposide on the induction of cell death in SH-SY5Y; 5, 50 and 100 µM of etoposide were used, which brought about a decrease in split luciferase complementation activity at the 50 µM etoposide concentration (Figure 4A). Based on caspase 3 activity and western blot, the highest caspase 3 activity was observed at 5 µM, and the highest cleavage rate was observed at 50 µM of ETPO (Figure 4B,C). Normalized protein amounts showed that the highest amounts of R1C and FN were observed at 50 µM of ETPO and in the control, respectively (Figure 4D,E). FN/R1C Interaction in the Presence of Bortezomib and Nec-1 Since anti-apoptotic molecules such as those of the IAP family are degraded by the proteasome [40], an increase in anti-apoptotic molecules via proteasome inhibitors such as bortezomib induces cell death, which in combination with other drugs can improve therapies [10]. The kinase activity of RIPK1 requires the assembly of the RIPoptosome, so that Nec-1, the RIPK1-targeted kinase inhibitor of necroptosis, can inhibit kinase domain activity and RIPoptosome formation [10]. Exploring the role of etoposide in the presence of other treatments such as BOR and Nec-1, luciferase activity decreased in the presence of Nec-1, as with ETPO; however, luciferase activity with bortezomib was higher than with either. The combinations ETPO + BOR and ETPO + Nec-1 decreased luciferase activity compared to each agent alone (Figure 5A). Caspase activity data showed the lowest level in the presence of BOR alone or in combination with ETPO (Figure 5B). On the other hand, the use of Nec-1 with ETPO increased caspase activity, and western blot also showed a higher concentration of pro-caspase 3 in the presence of Nec-1 (Figure 5C). In addition, in the presence of BOR, expression of RIPK1 leads to the increase in luciferase activity. Some studies showed that etoposide promotes RIPoptosome formation by triggering XIAP depletion [10,40]. In addition, western blot displayed that the amount of XIAP with ETPO is higher than with BOR, as further reflected in the luciferase activity.
Dexamethasone Had No Effect on the FN/R1C Interaction A previous study showed that dexamethasone does not have any influence on the caspase 8 interaction with FADD [10]. We treated SH-SY5Y cells with 10, 100 and 200 µM of DEXA in the presence of TZ (Figure 6A). The data showed that the FN/R1C interaction was not affected by DEXA treatment. In addition, a synergistic combination of BV6 and dexamethasone has a significant effect on RIPoptosome formation [41]. Therefore, we increased the concentration of DEXA to 500 and 1000 µM in the presence of BV6. Based on luciferase activity, no significant change in the FN/R1C interaction was observed (Figure 6B). However, with microscopic imaging after 18 h of treatment, cell death was observed (Figure 6C). Discussion A split luciferase complementation assay has been used to keep track of RIPoptosome complex formation between RIPK1 and FADD (Figure 1). Upon activation of TNFR1 in the presence of a pan-caspase inhibitor (TZ) (Figure 2A), the increase in split luciferase activity indicates proper juxtaposition of the complex subunits in the presence of zVAD.fmk, unlike with TNFα alone, which might indicate resistance to TNFα in the absence of caspase 8. Likewise, lack of expression of caspase 8 has been documented as a resistance mechanism to TRAIL and chemotherapy in neuroblastoma cells [42]. However, to confirm that the reporter captures genuine interactions, mutation of the critical residues involved was implemented. There are many post-translational modification (PTM) sites, such as ubiquitination sites, that are important in the TNFα signaling pathway [12].
RIPK1, as a mediator with different PTM sites, is prone to ubiquitination at different sites, especially in the DD, which conducts the TNFR1 pathway toward cell death or survival [43]. K612 of RIPK1 has a pivotal role in its interaction with FADD [39]. The interaction of FADD/RIPK1 contributes to complex IIb, which contains activated RIPK1, FADD and caspase 8 to mediate the activation of caspase 8 and apoptosis [44]. Mutation of K612 to R brought about a loss of luciferase complementation activity, presumably due to the loss of proper interactions (Figure 3). Several studies have shown that Fas or TNFR activation leads to necrotic cell death upon caspase inhibition in various cell lines [6,45]. Previous studies indicate that RIPK1/FADD have dual roles in extrinsic apoptosis and necroptosis [46]. Furthermore, FADD and caspase 8 deficiency activates necroptosis [47]. Lack of caspase 8 expression is frequent in several kinds of tumor models, such as lung carcinoma, neuroblastoma and hepatocellular carcinoma [48]. Our results showed that zVAD.fmk treatment as a pan-caspase inhibitor not only increased FADD and RIPK1 expression but also elevated split luciferase complementation activity, which is an indicator of higher RIPK1 and FADD interaction or at least their closer proximity (Figure 2A). Some studies show that zVAD.fmk increases the stability of the caspase 8-RIPK1 complex, most likely by protection from caspase-dependent apoptosis [13,49]. Likewise, overexpression of RIPK1 leads to spontaneous formation of the RIPoptosome [15]. However, Tenev et al. showed that treatment with zVAD.fmk did not affect RIPoptosome formation [10], so the effect could depend on the cellular context of different cell types. Furthermore, low expression of RIPK1 makes cells resistant to cell death [13]. However, some data show that SH-SY5Y cells do not have caspase 8 activity and expression [3]. Thus, the increase in the FADD/RIPK1 interaction probably drives cells toward necroptosis pathways. Based on the split luciferase complementation assay, no significant change in the R1C and FN interaction with BV6 treatment was observed (Figure 2A). Based on some studies, depletion of IAPs by teniposide/etoposide and Smac mimetics such as BV6 promotes the RIPK1/FADD/caspase 8 interaction and spontaneous assembly of the RIPoptosome [41,50,51]. Moreover, cIAPs adjust the amount of RIPK1 in a cell type-dependent manner [50], and the intracellular levels of cIAPs and RIPK1 are balanced in RIPoptosome formation [13]. The lack of extrinsic apoptotic cell death could be attributed either to cell resistance against Smac mimetic compounds, as reported earlier [51,52], or to the cells' RIPK1 content. In spite of previous reports of synergy between BV6 and dexamethasone in hypersensitizing cell lines to apoptosis [41], no significant changes in the R1C and FN interaction were observed by the split luciferase complementation assay (Figures 4B and 6A), even though more morphological cell death was observed. Genotoxic drugs, such as etoposide, can trigger cell death due to stabilization of the RIPoptosome complex (FADD/RIPK1 and caspase 8) [10]. The decrease in split luciferase complementation activity in the presence of etoposide may be due to the lack of caspase 8 in the applied cell line. Therefore, it may be concluded that the RIPK1/FADD interaction may lead either to a necroptosis pathway or to the mitochondrial pathway of apoptosis, as indicated by the higher caspase 3 activity.
We co-treated the cells with bortezomib and Nec-1, and the results showed that co-treatment with bortezomib merely increased FN/R1C luciferase activity and decreased caspase-3 activity (Figure 5A,C), indicating the involvement of proteasomal degradation. Interestingly, the combination of ETPO + BOR decreased split luciferase activity without changes in caspase-3 activity (Figure 6A,C). These data lend some support to caspase 8 being an indispensable component of the RIPoptosome complex. Conclusions In summary, according to the results presented in this manuscript, it can be concluded that the FN/R1C bioluminescent reporter can be considered an alternative approach to investigate basic aspects of the RIPoptosome protein complex and to find effective RIPoptosome-disrupting or -activating compounds. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/bios13020297/s1. Figure S1. (A) Agarose gel electrophoresis (1%) of PCR products of (mFADD and mRIPK1), (CLuc and NLuc) and the Ub promoter for ligation in pEntry3C, pEntryR2L3 and pEntryL4R1 vectors, respectively. (B) Double digestion of constructs for validation of cloning, containing mFADD-pEntry3C (BamHI and XhoI) and mRIPK1-pEntry3C (PvuII), NLuc-pEntryR2L3 (SalI and SacI), CLuc-pEntryR2L3 (SalI and SacI) and Ub promoter-pEntryL4R1 (BamHI and XhoI); M (10 kb); Figure S2. Digestion patterns of some final constructs using EcoRV; white-marked clones are positive based on the band size. (A) R1C; (B) FN; M (10 kb); Figure S3. The Gateway cloning procedures for generating luciferase-tagged proteins based on the split luciferase assay using the pLenti6 destination vector. First, the fragments encoding Ub, mFADD, mRIPK1, NLuc and CLuc are inserted into a pEntry vector. Three pEntry vectors were described in this study: pEntry3C, which was used for mFADD and mRIPK1; pEntryL4R1, which was used for Ub; and pEntryR2L3, which was used for the luciferase fragments. In the end, the constructs in the pEntry vectors are recombined into pLenti6R4R3 vectors to generate (A) pLenti6-FN and (B) pLenti6-R1C; Table S1: Primers used for the amplification of tags; Table S2: PCR programs of the used genes.
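As a concrete illustration of the caspase-3 quantification described in the Methods above (the slope of the linear portion of the AMC fluorescence curve, normalized to lysate protein), a minimal Python sketch follows. It is not part of the original study; all numbers and variable names are hypothetical placeholders.

```python
# Minimal sketch (not from the paper): normalize the slope of the linear
# part of an AMC fluorescence kinetic read to the lysate protein content.
# The example values below are hypothetical placeholders.
import numpy as np

time_min = np.arange(0, 31, 5)   # one reading every 5 min, up to 30 min
fluorescence = np.array([120., 410., 702., 995., 1287., 1580., 1710.])
protein_mg_per_ml = 2.4          # from the Bradford assay

# Use only the visually linear part of the curve (here, the first 6 points).
linear = slice(0, 6)
slope, intercept = np.polyfit(time_min[linear], fluorescence[linear], 1)

# Caspase-3 activity as fluorescence units per minute per mg protein.
activity = slope / protein_mg_per_ml
print(f"slope = {slope:.1f} AFU/min, activity = {activity:.1f} AFU/min/mg")
```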
6,552
2023-02-01T00:00:00.000
[ "Biology", "Medicine" ]
Generalization of the Lieb–Thirring–Araki Inequality and Its Applications: The matrix eigenvalue is very important in matrix analysis, and it has been applied to matrix trace inequalities, such as the Lieb–Thirring–Araki theorem and the Thompson–Golden theorem. In this manuscript, we obtain a matrix eigenvalue inequality by using the Stein–Hirschman operator interpolation inequality; then, according to the properties of exterior algebra and the Schur-convex function, we provide a new proof for the generalization of the Lieb–Thirring–Araki theorem and the Furuta theorem. Introduction As an important branch of mathematics, matrix theory has been widely applied in the fields of mathematics and technology, such as optimization theory ([1]), differential equations ([2]), numerical analysis, operations research ([3]) and quantum theory ([4]). In this manuscript, let $\mathbb{C}^n$ be an n-dimensional complex vector space with the inner product $\langle x, y\rangle = x^*y = \sum_{i=1}^n \bar{x}_i y_i$ for $x = (x_1, \cdots, x_n)^\top, y = (y_1, \cdots, y_n)^\top \in \mathbb{C}^n$, where the superscripts $*$ and $\top$ denote the conjugate transpose and the matrix transpose, respectively. Let $M_n$ denote the set of all n × n matrices with complex entries, and we call $x \in \mathbb{C}^n$ an eigenvector of $A \in M_n$ when $Ax = \lambda x$ (where λ is called an eigenvalue of A). We denote by $H_n$ the set of all Hermitian matrices. For any $A \in H_n$, we have $A = \sum_{i=1}^n \lambda_i P_i$, where $\lambda_i$ is an eigenvalue of A, $\sum_{i=1}^n P_i = \mathrm{Id}$ and $P_iP_j = 0$ ($i \neq j$); in particular, when $x^*Ax \geq 0$ for any $x \in \mathbb{C}^n$, we write $A \in H_n^+$ ($H_n^+$ being the set of n × n positive semidefinite Hermitian matrices, whose eigenvalues are nonnegative). Let f be a function with domain (0, +∞); for any $A \in H_n^+$, the matrix function is defined as $f(A) = \sum_{i=1}^n f(\lambda_i) P_i$. On the basis of this definition, we have a formula relating the trace of f(A) to the eigenvalues of A: $\mathrm{Tr}[f(A)] = \sum_{i=1}^n f(\lambda_i(A))$, where $\lambda_i(A)$ is the i-th eigenvalue of A. Thompson and Golden independently discovered an inequality called the Thompson–Golden theorem (refer to [5-7]): $\mathrm{Tr}\, e^{A+B} \leq \mathrm{Tr}(e^A e^B)$. In general, the following limit holds (called the Lie–Trotter formula [8]): $e^{A+B} = \lim_{n\to\infty} (e^{A/n} e^{B/n})^n$. Furthermore, the following inequality holds when p ≥ 1: $\mathrm{Tr}[(A^{1/2} B A^{1/2})^p] \leq \mathrm{Tr}[A^{p/2} B^p A^{p/2}]$, which is the Lieb–Thirring–Araki theorem ([9,10]). Since the function $F(A) = \mathrm{Tr}\, e^{B + \ln A}$ is Fréchet differentiable for any positive definite A, the concavity of F(A) implies the Thompson–Golden theorem. At the same time, one can also obtain the Thompson–Golden theorem from the Lieb–Thirring–Araki inequality combined with the Lie–Trotter formula. By using the matrix exterior algebra and the convexity of $\mathrm{Tr} \wedge^k e^A$, Huang proved a further inequality of this type ([12]). With this motivation, we utilize the Stein–Hirschman operator interpolation inequality to show that $\lambda_1(A^{\alpha/2} B^\alpha A^{\alpha/2})^{1/\alpha}$ is a monotone increasing function of α for any α > 0. Then, we generalize the Lieb–Thirring–Araki theorem and provide a new proof of the Furuta theorem ([13]). The rest of the paper is organized as follows. In Section 2, some general definitions and important conclusions are introduced. In Section 3, a new proof of the monotonicity of $\lambda_1(A^{\alpha/2} B^\alpha A^{\alpha/2})^{1/\alpha}$ and some general results are offered. Preliminary In this section, we recall some notions and definitions from matrix analysis and introduce some important results on matrix-monotone functions, which are used throughout the article (refer to [14-17]). Tensor Product and Exterior Algebra The tensor product, denoted by ⊗, is also called the Kronecker product.
It is a generalization of the outer product from vectors to matrices, so the tensor product of matrices is also referred to as the outer product in some contexts. For an m × n matrix A and a p × q matrix B, the tensor product of A and B is the mp × nq block matrix defined by $A \otimes B = (a_{ij}B)_{1\le i\le m,\,1\le j\le n}$, where $A = (a_{ij})_{1\le i\le m,\,1\le j\le n}$. The tensor product is different from matrix multiplication, and one of the differences is commutativity: in general, $A \otimes B \neq B \otimes A$. For convenience, we denote $\otimes^k A = A \otimes \cdots \otimes A$ (k factors). In addition to the tensor product, there is another common product, named the exterior algebra ([18]). The exterior power, denoted by ∧, is defined for any n × n matrix A by $(\wedge^k A)(\xi_{j_1} \wedge \cdots \wedge \xi_{j_k}) = A\xi_{j_1} \wedge \cdots \wedge A\xi_{j_k}$, where $\{\xi_j\}_{j=1}^n$ is an orthogonal basis of $\mathbb{C}^n$ and the wedge product is the antisymmetrization of the tensor product over $\sigma_n$, the family of all permutations on $\{1, 2, \cdots, n\}$. Let $\wedge^k \mathbb{C}^n$ be the span of $\{\xi_{i_1} \wedge \xi_{i_2} \wedge \cdots \wedge \xi_{i_k}\}_{1\le i_1<\cdots<i_k\le n}$; a simple calculation shows that the eigenvalues of $\wedge^k A$ are the products $\lambda_{i_1}(A)\cdots\lambda_{i_k}(A)$, $1 \le i_1 < \cdots < i_k \le n$. (1) Schur-Convex Function Let $x = (x_1, \cdots, x_n)^\top, y = (y_1, \cdots, y_n)^\top \in \mathbb{R}^n$, and denote by $x_{[1]} \ge \cdots \ge x_{[n]}$ the entries of x arranged in decreasing order; if $\sum_{i=1}^k x_{[i]} \le \sum_{i=1}^k y_{[i]}$ for $k = 1, \cdots, n-1$ and $\sum_{i=1}^n x_{[i]} = \sum_{i=1}^n y_{[i]}$, then x is said to be majorized by y, denoted by $x \prec y$. Suppose f is a real-valued function defined on a set $\mathcal{A} \subseteq \mathbb{R}^n$; then, f is said to be a Schur-convex function on $\mathcal{A}$ if, for any $x, y \in \mathcal{A}$ with $x \prec y$, one obtains $f(x) \le f(y)$ ([11]). If f is differentiable and defined on $I^n$ ($I \subset \mathbb{R}$ being an open interval), then the following lemma holds (refer to [11]). Lemma 1 (Schur's criterion). A symmetric differentiable function f is Schur-convex on $I^n$ if and only if $(x_i - x_j)\big(\frac{\partial f}{\partial x_i} - \frac{\partial f}{\partial x_j}\big) \ge 0$ for all $x \in I^n$. The Matrix-Monotone Function For a matrix $A \in H_n^+$, according to the spectral theorem ([19]), it can be decomposed as $A = P \Lambda_A P^*$, where P is a unitary matrix and $\Lambda_A := \mathrm{diag}\{\lambda_1, ..., \lambda_n\}$ is a diagonal matrix with the eigenvalues as elements. Associated with a function f(x) on (0, +∞), the matrix function f(A) is defined as $f(A) = P\,\mathrm{diag}\{f(\lambda_1), ..., f(\lambda_n)\}\,P^*$. Since the matrix-monotone function is a special type of operator-monotone function, we present the following general conclusion about operator-monotone functions, which can be found in [20,21]. Lemma 2. The following statements for a real-valued continuous function f on (0, +∞) are equivalent: (i) f is operator-monotone; (ii) f admits an integral representation in which α is a real number, β is non-negative and µ is a finite positive measure on (−∞, 0). The Main Results For any $A, B \in M_n$, it is known that $\ln(AB)$ is not equal to $\ln A + \ln B$ in general when $AB \neq BA$. Therefore, many people pay much attention to studying the relation between $(A^{1/2}BA^{1/2})^\alpha$ and $A^{\alpha/2}B^\alpha A^{\alpha/2}$. A famous result regarding trace inequalities is the Lieb–Thirring–Araki theorem: $\mathrm{Tr}[(A^{1/2}BA^{1/2})^\alpha] \le \mathrm{Tr}[A^{\alpha/2}B^\alpha A^{\alpha/2}]$, where α ≥ 1. In the following, we further study this relation at the level of the largest eigenvalue. Theorem 1. For any 0 < α ≤ β and $A, B \in H_n^+$, the following inequality holds: $\lambda_1^{1/\alpha}(A^{\alpha/2}B^\alpha A^{\alpha/2}) \le \lambda_1^{1/\beta}(A^{\beta/2}B^\beta A^{\beta/2})$. Proof. By using the Cauchy inequality, denoting by $\lambda_1(A)$ the maximum eigenvalue of A and using the fact that $\lambda_1(AB) = \lambda_1(BA)$, a simple deformation yields the claimed bound for 0 < α ≤ β. This completes the proof of Theorem 1. Although Theorem 1 has been obtained from the Cauchy inequality, the argument can be sharpened. In the following, we obtain Theorem 1 by using operator interpolation. First, let us introduce the Stein–Hirschman operator interpolation inequality ([12]) (Lemma 3). From Lemma 3, we can improve the result in Theorem 1 and obtain the following theorem. Theorem 2. For any $A, B \in H_n^+$ and 0 < t < 1, the following inequality holds: $\lambda_1(A^{t/2}B^t A^{t/2}) \le \lambda_1^t(A^{1/2}BA^{1/2})$. Proof. Let f be an analytic function in $\mathbb{C}$; then, for any $f \in L_t^1(\mathbb{C})$, the Stein–Hirschman inequality applies, and this implies the claim for any 0 < t < 1, where the first "≤" is obtained by the Jensen inequality ([22]). This completes the proof of Theorem 2. Theorem 2 is very useful.
On the one hand, when α < β, letting t = α/β in Theorem 2 (applied to $A^\beta$ and $B^\beta$), we can obtain Theorem 1. On the other hand, using the matrix exterior algebra, we obtain the analogous inequality for $\wedge^k$; furthermore, one may ask whether the corresponding inequality is true for any k ≤ n, and such an inequality can be regarded as a generalization of the Lieb–Thirring–Araki theorem. Generalization of the Lieb–Thirring–Araki Theorem According to Theorem 1 and Formula (1), we can show the corresponding eigenvalue-product inequality (6) when α ≤ β; it specializes when β = 1. We know that the Lieb–Thirring–Araki theorem can be obtained from the Schur-convex function; generally, we can prove the following conclusion (Theorem 3). From Theorem 3, we can deduce the following inequality immediately. Corollary 1. For any α ≥ 1, any k ≤ n and A, B > 0, the following inequality holds: $\mathrm{Tr} \wedge^k (A^{1/2}BA^{1/2})^\alpha \le \mathrm{Tr} \wedge^k (A^{\alpha/2}B^\alpha A^{\alpha/2})$, or the corresponding form for any γ > 0. From (8), it can be seen that, when k = 1, Corollary 1 is just the Lieb–Thirring–Araki theorem. Using Theorem 1, we can obtain the corresponding bound for any 0 ≤ α ≤ β, and this is a generalization of the Thompson–Golden theorem. For some other generalizations of the Thompson–Golden theorem, see [8,23]. Moreover, for 0 < α ≤ 1 and r ≥ 1, we can obtain the following corollary. Corollary 2. For any r ≥ 1 and A, B > 0, the corresponding inequality holds. Applications to Matrix-Monotone Functions In this subsection, we obtain some other corollaries from Theorem 1 associated with the matrix-monotone function; in particular, we recover the Löwner–Heinz theorem ([4]), and, repeating the argument over successive exponent intervals (such as [1, 3/2]), we finish the proof of the Furuta theorem. Some Other Applications In this subsection, we obtain a corollary associated with the matrix determinant. We suppose $A, B \in H_n$ with $\ln \lambda_i(e^{A/2} e^B e^{A/2}) \ge 0$ ($i = 1, 2, \cdots, n$) and 0 < α ≤ β. Then a straightforward calculation indicates that the associated function $d(x_1, \cdots, x_n)$ is Schur-concave, and the corresponding determinant inequality holds ([8]). In fact, the statement holds for any $A \in H_n$; hence, Corollary 5 can be generalized as the following corollary (Corollary 6). Proof. We can finish the proof if we show that the function $a(x_1, x_2, \cdots, x_n) = \sum_{1 \le i_1 < i_2 < \cdots < i_k \le n} x_{i_1} x_{i_2} \cdots x_{i_k}$ is Schur-concave for any $x_i \ge 0$; this is the k-th elementary symmetric polynomial, and its Schur-concavity on the nonnegative orthant follows from Lemma 1. This completes the proof of Corollary 6. Conclusions In this paper, we discuss the relationship between $\lambda_1\big((A^{1/2}BA^{1/2})^\alpha\big)$ and $\lambda_1(A^{\alpha/2}B^\alpha A^{\alpha/2})$ by using the Stein–Hirschman operator interpolation inequality. Through in-depth study, we obtain eigenvalue inequalities such as generalizations of the Golden–Thompson theorem and the Lieb–Thirring–Araki theorem. Moreover, the Furuta theorem is also shown by using the eigenvalue inequality. Finally, we generalize an important determinant inequality by using the matrix exterior algebra.
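Because the inequalities above are finite-dimensional matrix statements, they are easy to spot-check numerically. The following Python sketch is illustrative only (it is not part of the paper; the matrix size, exponent choices and tolerances are arbitrary assumptions): it verifies the Lieb–Thirring–Araki trace inequality for p = 2 and the monotonicity asserted in Theorem 1 on random positive definite matrices.

```python
# Numerical spot-check (not from the paper) of two inequalities above,
# on random positive definite matrices: the Lieb-Thirring-Araki trace
# inequality with p = 2, and the monotonicity of
# a -> lambda_1(A^{a/2} B^a A^{a/2})^{1/a} asserted in Theorem 1.
import numpy as np

rng = np.random.default_rng(0)

def rand_pd(n):
    """Random positive definite matrix M M* plus a small shift."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return m @ m.conj().T + 0.1 * np.eye(n)

def mpow(a, p):
    """p-th power of a Hermitian positive definite matrix."""
    w, v = np.linalg.eigh(a)
    return (v * w**p) @ v.conj().T

for trial in range(100):
    A, B = rand_pd(5), rand_pd(5)
    # Lieb-Thirring-Araki, p = 2: Tr[(A^{1/2} B A^{1/2})^2] <= Tr[A B^2 A]
    s = mpow(A, 0.5)
    lhs = np.trace(mpow(s @ B @ s, 2)).real
    rhs = np.trace(A @ mpow(B, 2) @ A).real
    assert lhs <= rhs * (1 + 1e-10), (lhs, rhs)
    # Theorem 1: the quantity below is nondecreasing in the exponent a.
    vals = []
    for a in (0.5, 1.0, 2.0):
        h = mpow(A, a / 2)
        vals.append(np.linalg.eigvalsh(h @ mpow(B, a) @ h)[-1] ** (1 / a))
    assert vals[0] <= vals[1] * (1 + 1e-10) <= vals[2] * (1 + 1e-10), vals

print("all 100 trials passed")
```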
2,633.4
2021-03-26T00:00:00.000
[ "Mathematics" ]
Exploring students' views about basic concepts in introductory quantum mechanics Literature in physics education shows that students still experience difficulties learning quantum mechanics, although it is now part of the high school curriculum and many research-based proposals are available. Prior works mostly focused on specific misconceptions, and a clearer picture of students' ideas on general quantum concepts is still lacking. We addressed these issues by inspecting, through multiple correspondence analysis and cluster analysis, the responses given by 408 Italian high school and undergraduate students to a Likert scale questionnaire on quantum physics. From our preliminary results, we can conclude that the majority of students leave high school and enter university without a sound model of the quantum world. Introduction and aims Literature in physics education has shown that students have difficulty learning quantum physics (QP), in particular wave-particle duality, the wave function and atoms [1]. The abstractness and counter-intuitiveness of QP are at the origin of students' conceptual difficulties [2]. Other reasons for such difficulties are: shifting from a deterministic to a probabilistic worldview, and relating the mathematical formalism of QP to experiences in the physical world [3]. Recently, also in Italy, QP has been introduced into the high school curriculum [4]. Targeted concepts include, e.g., the quantum model of light, Planck's hypothesis, discrete energy levels, the de Broglie wavelength, the uncertainty principle and the photoelectric effect. Some of these topics are also addressed in the chemistry course at an earlier stage. However, curricular teaching mostly relies on a traditional lecture-based approach, and there is limited evidence of the effectiveness of these reforms. Moreover, physics teaching in Italy differs across high school strands (science-focused, math- and physics-oriented, humanities, ...). Such inhomogeneity leads more interested students to attend extracurricular physics activities focused on quantum topics. It also impacts students' initial preparation for undergraduate courses. Hence, it is possible that, also among STEM freshmen, knowledge about quantum topics differs widely. Finally, while prior research work mostly focused on specific misconceptions about, e.g., the uncertainty principle, the Schrödinger equation and the wave function at undergraduate level, research still has to provide a clearer picture of how students combine concepts such as the stability of atoms, the behaviour and properties of photons and electrons, and probabilistic vs. deterministic viewpoints into coherent mental models. Therefore, the specific research questions that guided this study were: what are the students' ideas about the properties and behaviour of atoms, electrons and photons? What are the students' profiles that can be identified from these ideas? How are these profiles associated with different groups of students?
Sample and methods To address the above research questions, we involved a convenience sample composed of 408 students that can be divided into four groups: 148 students from a math- and physics-oriented high school, 59 from the same high school stream who were attending extracurricular activities in physics, 127 freshmen in engineering, and 74 freshmen in biology. We considered those students who participated in extracurricular activities as a separate group, because they can be regarded as more interested in physics [5]. Prior to our study, all high school students had received instruction on quantum physics during their regular physics lessons. No intervention was included in our study. The reason for involving engineering and biology students is that they represent STEM undergraduates. Moreover, biology students are exposed to basic quantum concepts in their first-year chemistry course. Our research instrument was a questionnaire about students' views on quantum mechanics, designed from prior instruments proposed in [6,7,8] and [9]. The first version of the instrument, featuring 54 statements on a 5-point Likert scale, was developed by Mashhadi and Woolnough [6]. Their questionnaire was administered to 319 pre-university A-level physics students to gain a qualitative insight into students' understanding of quantum mechanics. They found that their sample could be divided into three clusters: Mechanistic, Intermediate and Quantum, respectively. The instrument was then adopted by Ireson [7], who administered a refined scale of 29 items to 338 first- and second-year undergraduate students in the UK. Students' responses were factorized into two latent dimensions of conceptual understanding: 'Absolute thinking - Dual thinking' and 'Simple atom/deterministic mechanics - Complex atom/indeterministic mechanics'. Three clusters were formed using these factorial dimensions: Mechanistic thinking, Intermediate thinking and Quantum thinking. The questionnaire was also administered to 342 pre-university A-level physics students, of which only about 50% had already studied quantum physics [8]. For the group that had not yet been exposed to quantum phenomena, four clusters were found: structure and mental image of entities, mechanistic thinking, quantum thinking and conflicting mechanistic thinking. For the group that had been exposed to quantum physics, three clusters were found: quantum thinking, conflicting quantum thinking and conflicting mechanistic thinking. By comparing the two groups, the author concluded that, even if there were differences between them, "only a minority of the significant changes can be traced back to statements in the syllabus. The majority of changes must, therefore, be due to factors outside the direct teaching of the syllabus material." [8, p. 20]. In a follow-up study [9], the same author analysed all the data obtained in one of the previous studies [6], now also including 11 statements on the conceptual understanding of models (e.g., "models are constructions of human minds"). While the same two factors were obtained from the analysis, the clusters' interpretation was refined as follows: quantum thinking/descriptive models, conflicting thinking/conflicting models and mechanistic thinking/complete models.
For the present study, we kept the original formulation of most of the statements used in [6,7,8] and [9]. However, we rephrased some items to eliminate ambiguities (e.g., "The energy of an atom can have any value" was changed to "The energy emitted or absorbed by an atom can take on any value") and designed new items to account for other students' misconceptions (e.g., "Electrons move at the speed of light" or "The electron spins around its axis"). Therefore, the final version of the questionnaire featured 36 items on a 4-point Likert scale (see Appendix A for the complete list). Data analysis We first calculated Cronbach's alpha (α = 0.50) to measure the internal consistency of the instrument. This value can be considered acceptable, because the items concern different quantum entities; hence, some problems of internal consistency are foreseeable [10]. Descriptive statistics of students' answers were used to identify the most frequent misconceptions. To answer the research questions, a multiple correspondence analysis (MCA), a hierarchical cluster analysis and a chi-square analysis were performed. In agreement with the studies described in [7] and [9], we used MCA to obtain the latent factors that describe students' views about QP. MCA can be thought of as an extension of principal component analysis to categorical variables, which allows one to identify one or more latent dimensions (factors) that explain most of the variance in the data. In our case, the variables are represented by the degree of agreement (modality) with a certain item. From the mathematical viewpoint, instead of the correlation matrix used in factor analysis, MCA uses a contingency table of frequencies, with entries equal to 1 or 0, depending on whether a certain modality has been selected in one or more questions (for more details, see, for instance, [11]). As in factor analysis, the interpretation of the emerging factors takes into account the loading of the item on the factor (called weight in MCA). However, differently from factor analysis, the interpretation of the retained factors also has to take into account the observed modalities, since the items were not scored as in factor analysis. Therefore, in our case, to interpret the retained factors we took into account first the specific topic targeted by the item (e.g., photon) and then the modality with which the item weighs in the factor. In general, different researchers could give different interpretations of the retained factors. For this reason, to label each factor from MCA, we interpreted the output of the analysis independently and separately, and, when necessary, we discussed the results to reach consensus.
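As an illustration of the MCA step just described, the following sketch performs a basic multiple correspondence analysis on the one-hot indicator coding of Likert answers using plain NumPy. It is a simplified stand-in for the SPAD analysis actually used, and all data and dimensions in it are hypothetical.

```python
# Minimal MCA sketch (illustrative, not the authors' SPAD workflow):
# correspondence analysis of the one-hot indicator matrix built from
# categorical Likert answers.  All data below are hypothetical.
import numpy as np

def mca_row_coordinates(Z, n_components=3):
    """Return respondents' factorial scores from an indicator matrix Z."""
    P = Z / Z.sum()                                     # correspondence matrix
    r = P.sum(axis=1)                                   # row masses
    c = P.sum(axis=0)                                   # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, sig, _ = np.linalg.svd(S, full_matrices=False)
    # principal row coordinates: D_r^{-1/2} U Sigma
    return (U[:, :n_components] * sig[:n_components]) / np.sqrt(r)[:, None]

# Toy data: 6 respondents x 2 items, each answered on a 4-point scale (0-3).
answers = np.array([[0, 3], [1, 2], [3, 0], [2, 1], [3, 3], [0, 0]])
Z = np.concatenate([np.eye(4)[answers[:, j]] for j in range(answers.shape[1])],
                   axis=1)
scores = mca_row_coordinates(Z)
print(scores.shape)  # (6, 3): per-respondent scores, later used for clustering
```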
Then, we used the factorial scores obtained from MCA to build students' profiles using hierarchical cluster analysis. This analysis involves an iterative procedure to obtain the best partition of the data set. Clusters were identified following a hierarchical-divisive procedure, which started from a subdivision of the sample into two main groups and then proceeded in successive steps, according to a hierarchical tree structure, which was cut to obtain a final solution that explains a reasonable percentage of the data variance. The elements belonging to each cluster were classified in order of importance with the aid of a statistical criterion (test-value), to which a probability is associated: the larger the test-value, the lower the probability of exclusion of the element and the better the cluster is defined by that element. Finally, to better describe the obtained clusters, through a process of independent interpretation and successive comparison among the authors, only the elements with the highest test-values were considered. To evaluate the association between each cluster and the groups of involved students, the chi-square test was performed and the corresponding p-value calculated. In particular, the null hypothesis is rejected if the p-value is less than or equal to a predefined threshold value, set to 0.05. The software SPAD and SPSS were used to perform the statistical analysis. Preliminary descriptive statistical analysis The analysis of the frequencies of wrong answers signaled common students' misconceptions (cf. Table 1). An answer is considered wrong when the respondent chooses strongly disagree or disagree for a right statement, i.e., "A free electron can assume any energy value", or chooses strongly agree or agree for a false statement, i.e., "The electron moves at the speed of light." Emerging factors and their interpretation Three factors were retained from MCA. Indeed, in most cases, three factors are enough to describe the variance of the data [12]. They are reported in Table 2 with the most significant statements and related modality that guided our interpretation of the factors. These latent factors can be regarded as the dimensions of the Euclidean space in which to represent the data set (see Figure 1). In order of importance, the three extracted factors are: • Factor 1, labelled Level of agreement, from partially agree or disagree to completely agree or disagree. It can be interpreted as a metacognitive factor, because it identifies the extent to which the student agrees with the statement, regardless of the topic targeted by the item; • Factor 2, labelled Photon vs electron, from a quantum view about electrons but a deterministic view about photons to a quantum view about photons but a deterministic view about electrons. It considers the different views (deterministic vs quantum) about electron and photon behaviour; • Factor 3, labelled Deterministic vs quantum view, from a classical mechanistic and deterministic view to a completely quantum view about all quantum entities.
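Before turning to the clusters themselves, a minimal sketch of the clustering step is given below. It is illustrative only: the authors used a hierarchical-divisive procedure in SPAD, whereas this sketch uses SciPy's agglomerative Ward clustering on stand-in factorial scores and cuts the tree into five clusters.

```python
# Minimal sketch (illustrative only) of cutting a hierarchical tree into
# five clusters; the real analysis used a divisive procedure in SPAD.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
scores = rng.normal(size=(408, 3))   # stand-in for the MCA factorial scores

Z = linkage(scores, method="ward")                 # agglomerative tree
labels = fcluster(Z, t=5, criterion="maxclust")    # cut into 5 clusters
print("cluster sizes:", np.bincount(labels)[1:])
```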
[Table 2 (excerpt): statement (modality with which it weighs in the factor). Factor 2: "A photon has neither mass nor charge" (to a great extent); "Photon energy depends on the colour of the electromagnetic wave" (completely); "Nobody knows the position accurately of an electron in orbit around the nucleus because it is very small and moves very fast" (to a great extent); "As light is emitted by an atom, the electron jumps from one orbit to another, i.e., it is not anywhere in between the two orbits" (completely); ...; "If we perform an experiment with a double slit and know quite precisely the initial conditions, then we can predict where the electron will hit the screen" (not at all); "The electron spins around its axis" (not at all); "How one thinks of the nature of light depends on the experiment being carried out" (not at all); "A photon moves at the speed of light" (not at all). Factor 3: "Nobody knows the position accurately of an electron in orbit around the nucleus because it is very small and moves very fast" (completely); "The atom is more or less like a small sphere" (completely); "The structure of the atom is similar to the way planets orbit the Sun" (completely); "Light travels as a wave but is absorbed as a packet of energy or photon" (not at all); "Electrons move in a non-deterministic way around the nucleus within a certain region or at certain distance" (not at all); "Electromagnetism and Newtonian mechanics cannot explain why atoms are stable" (not at all); ...; and the same six statements with the opposite modalities (not at all; not at all; not at all; completely; completely; completely). Note (a): for Factor 1, the statements corresponding to the modalities are not reported; see text for further details.] Clusters and their interpretation A 5-cluster solution has been adopted. Two quantitative criteria were used for choosing the final number of clusters: (i) a subsequent subdivision of the clusters produces a limited increase of the explained data variance; (ii) a subsequent subdivision in the dendrogram sequence produces at least one cluster with less than 5% of the cases of the sample. This choice was aimed at avoiding the identification of clusters with low face validity and, hence, harder to interpret. Table 3 reports the statements and related modality with the highest test-value for each cluster. The emerging clusters can be described as follows: • Cluster 1: Students adopting a full quantum view about all entities (17.2%); • Cluster 2: Students adopting a deeply deterministic view about all entities (17.0%); • Cluster 3: Students adopting a partial quantum view on specific entities, like electrons and atoms (30.1%); • Cluster 4: Students adopting a quantum view only about photons (18.9%); • Cluster 5: Students adopting a partial deterministic view about all entities (16.8%). Table 3: Elements with the highest test-value for each cluster.
[Table 3 (excerpt): cluster, item (modality). Cluster 1: "Electron is always a particle" (not at all); "Electrons move in a non-deterministic way around the nucleus within a certain region or at certain distance" (completely); "The wave or particle nature of the electron depends on the particular experiment carried out" (not at all); "How one thinks of the nature of light depends on the experiment being carried out" (completely); "A photon moves at the speed of light" (completely); "Electrons move along well determined orbits around the nucleus" (not at all). Cluster 2: "Electrons move in a non-deterministic way around the nucleus within a certain region or at certain distance" (not at all); "Electron is always a particle" (completely); "Electrons move along well determined orbits around the nucleus" (completely); "Nobody knows the position accurately of an electron in orbit around the nucleus because it is very small and moves very fast" (to a great extent); "The atom is more or less like a small sphere" (completely); "The structure of the atom is similar to the way planets orbit the Sun" (completely); "The atom is stable due to a balance between the attractive electric force and the centrifugal force" (completely).] Clusters' representation in the factorial space (see Figure 2) shows that the labels of factors and clusters are coherent. As for factor 1 [see Figure 2(a)], on the right side of the horizontal axis we find two clusters that group students who completely agree or disagree with the statements (and, therefore, hold a quantum or deterministic view). On the left side, we find the other clusters, which are more or less shifted according to the level of agreement expressed by the students they group. As for factor 2 [see Figure 2(a)], clusters 1 and 2 are practically independent of this factor. On the contrary, clusters 4 and 3 are shifted along the vertical axis according to the students' quantum view about photons and electrons, respectively. Lastly, regarding factor 3 [see Figure 2(b)], it is evident that clusters 3, 4 and 5 are independent of it. On the contrary, it clearly discriminates between clusters 1 and 2, which are very far apart along this axis. Chi-square test The association between the four groups and the emerging clusters is statistically significant: χ² = 45.410, d.f. = 12, p < 0.001. In particular, as shown in Figure 3, it has been found that, among the students interested in extracurricular activities in physics, about 30% have developed a full quantum view on all entities and 34% a quantum view at least about photons. On average, this happens only for one third of the students in the other groups.
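The reported chi-square association can be recomputed directly; as a sketch, the survival function of the chi-square distribution with 12 degrees of freedom evaluated at 45.410 gives the p-value.

```python
# Quick check of the reported association (chi-square = 45.410, d.f. = 12).
from scipy.stats import chi2

p_value = chi2.sf(45.410, df=12)   # survival function = 1 - CDF
print(f"p = {p_value:.1e}")        # about 9e-06, i.e., p < 0.001
```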
Discussion and conclusions
First, we note that our results concur with previous studies [13] showing that curricular activities do not allow students to achieve a full quantum view. Only the more motivated students seem to achieve a more complete quantum view about all entities. Differently from previous studies, our analysis shows that conflicting views (e.g., probabilistic vs deterministic) about different quantum entities (e.g., electrons vs photons) may coexist, perhaps because of how wave-particle duality is taught. In addition, it is likely that the classical models used by teachers to simplify some explanations, to apply classical formulas to different contexts, or to follow a quasi-historical framework [4,14,15] (i.e., the application of a balance between the electric and centrifugal forces in Bohr's atom model), as suggested by several textbooks (e.g., see [16,17]), prevent students from acquiring a correct quantum forma mentis. However, further research is needed to support our interpretation. Moreover, as also pointed out in the context of photons and quantum optics [18,19], students often seem unable to catch the difference between the nature and the description of a quantum object. Indeed, also students belonging to the quantum cluster expressed conflicting views on these two statements. Actually, it would be interesting to see whether our results could be framed within the theoretical perspective proposed in [18]. Future developments include the analysis of students' confidence, the analysis of teachers' views, and the design of a teaching-learning sequence about the emerged issues.
Figure 2: Cluster representation in factorial dimensions: (a) Factor 2 vs 1 and (b) Factor 3 vs 1. The point size is proportional to the cluster size.
(Table 3, continued)
Cluster 3:
- Electron is always a wave: not at all
- A photon has neither mass nor charge: not at all
- Light travels as a wave but is absorbed as a packet of energy or photon: to a small extent
- Electrons move at the speed of light: not at all
- A photon moves at the speed of light: to a great extent
Cluster 4:
- Electrons move at the speed of light: to a great extent
- Photon energy depends on the colour of the light: to a great extent
- Photons can assume any energy value: to a great extent
- The electron spins around its axis: completely
- Electron is always a wave: to a great extent
- How one thinks of the nature of light depends on the experiment being carried out: to a great extent
Cluster 5:
- Electron is always a wave: to a small extent
- Electrons move at the speed of light: to a small extent
- Nobody knows the position accurately of an electron in orbit around the nucleus because it is very small and moves very fast: to a small extent
- How one thinks of the nature of light depends on the experiment being carried out: to a great extent
- During the emission of light from an atom, the electrons follow a definite path as they move from one energy level to another: to a small extent
• Cluster 4: Students adopting a quantum view only about photons (18.9%);
• Cluster 5: Students adopting a partial deterministic view about all entities (16.8%).
Figure 3: Association between students' curriculum and emerging clusters.
Table 1: Common students' misconceptions found in the present study.
Table 2: Factors retained from multiple correspondence analysis (MCA).
4,508
2024-04-01T00:00:00.000
[ "Physics" ]
Robust and Fragile Majorana Bound States in Proximitized Topological Insulator Nanoribbons
Topological insulator (TI) nanoribbons with proximity-induced superconductivity are a promising platform for Majorana bound states (MBSs). In this work, we consider a detailed modeling approach for a TI nanoribbon in contact with a superconductor via its top surface, which induces a superconducting gap in its surface-state spectrum. The system displays a rich phase diagram with different numbers of end-localized MBSs as a function of chemical potential and magnetic flux piercing the cross section of the ribbon. These MBSs can be robust or fragile upon consideration of electrostatic disorder. We simulate a tunneling spectroscopy setup to probe the different topological phases of top-proximitized TI nanoribbons. Our simulation results indicate that a top-proximitized TI nanoribbon is ideally suited for realizing fully gapped topological superconductivity, in particular when the Fermi level is pinned near the Dirac point. In this regime, the setup yields a single pair of MBSs, well separated at opposite ends of the proximitized ribbon, which gives rise to a robust quantized zero-bias conductance peak.
Introduction
Three-dimensional (3D) topological insulators (TIs) have received a lot of attention in the last decade due to their interesting electronic properties, in particular their topologically protected surface states with a spin-momentum-locked Dirac-cone energy spectrum [1]. This interest has only increased since the possibility emerged of realizing exotic forms of superconductivity by combining TIs and ordinary s-wave superconductors in heterostructures [2][3][4][5][6][7][8], exploiting the superconducting proximity effect [9]. Due to strong spin-orbit coupling in the TI, the induced superconductivity transforms into p-wave pairing for the TI surface states [10]. A promising application of p-wave superconductivity is to realize Majorana bound states (MBSs) in a spinless fermionic channel [11], forming a so-called Majorana wire. These MBSs come in pairs of states at zero energy, which are localized at opposite ends of the wire. A pair of MBSs can also be understood as a pair of quasiparticles forming equal-weight superpositions of a particle and a hole state. As such, an MBS is a quasiparticle that is its own anti(quasi)particle, with an associated creation/annihilation operator that is self-adjoint [12]. Furthermore, these MBSs are anyons with non-Abelian exchange statistics [13]. By combining these exotic properties, MBSs can be exploited for quantum information processing with the promise of being immune to the most common sources of decoherence [14][15][16][17][18]. With the abovementioned properties, a superconductor-TI nanowire or nanoribbon heterostructure appears to be a natural Majorana wire candidate. Unfortunately, the surface-state spectrum suffers from a spin degeneracy, which naturally arises due to confinement quantization of the spin-momentum-locked Dirac cone spectrum with antiperiodic boundary conditions for the Dirac spinor solutions [19]. This prevents the realization of topological p-wave superconductivity with the single spinless channel that underlies the realization of a Majorana wire. The unwanted degeneracy can be lifted, however, by applying an external magnetic field along the wire [19][20][21][22].
The magnetic flux through the cross section of the TI nanowire modifies the boundary condition for the surface states and thereby lifts the degeneracy that prevents the formation of a topologically nontrivial regime [23,24]. Even when the proximitized TI nanowire is brought into the topological regime with an external magnetic field, there is no guarantee that well-separated MBSs form at opposite ends of the wire. For this, a sizeable proximity-induced superconducting gap must open in the surface-state spectrum of the TI nanowire. In recent works, the realization of such a fully gapped nontrivial regime was brought into question, as it was found that the particle and hole states fail to couple due to a mismatch in transverse momentum and, hence, the proximity effect fails to induce a superconducting gap [25]. To overcome this mismatch, it has been proposed to consider a vortex in the superconducting condensate that envelops the TI wire [25], or to break the transverse symmetry of the wire with an electric field by introducing an electrostatic gate in the device layout [26]. In this article, we scrutinize the conditions for fully gapped topological superconductivity in a proximitized TI nanoribbon (i.e., a nanowire with rectangular cross section) structure, considering a realistic device layout that is compatible with state-of-the-art nanofabrication processes [6]. In particular, we consider a selectively grown TI nanoribbon that is covered from the top by a conventional superconductor (see Figure 1a), e.g., Nb [27]. Through careful consideration of the proximity effect, we find that this setup naturally yields optimal conditions for fully gapped topological superconductivity when the ribbon cross section is pierced by (close to) half a magnetic flux quantum, with a single pair of robust MBSs appearing at the ends of the nanoribbon. For other flux values, we identify different gapless and gapped phases with different numbers of end-localized MBSs, depending on the position of the Fermi level with respect to the Dirac point. Some of these MBSs appear to be fragile when disorder is introduced in the system and suffer from hybridization with other MBS solutions on the same end of the ribbon. We also consider a tunneling spectroscopy setup for distinguishing robust and fragile MBSs and identifying these different phases in experiments. Starting with this introduction, the article is divided into six sections. In Section 2, we cover the details of our simulation approach, including the continuum model Hamiltonian (Section 2.1), the treatment of the superconducting proximity effect (Section 2.2), and the tight-binding modeling approach (Section 2.3). In Section 3, we discuss the (proximity-induced) spectral gap and the topologically trivial and nontrivial regimes of a top-proximitized TI nanoribbon. In Section 4, we discuss the different phases of this system, with different numbers of robust and fragile MBSs, which can be probed with tunneling spectroscopy (Section 4.1). We proceed with a discussion of the simulation results in Section 5 and conclude in Section 6.
Topological Insulator
For obtaining the TI nanoribbon spectrum, we consider a tight-binding model (see Section 2.3) that is derived from a 4-band continuum model Hamiltonian H_0(k) for the Bi_2Se_3 family of TI materials (Equation (1)) [28,29]. H_0(k) is a 4 × 4 matrix that is written as a linear combination of tensor products of Pauli matrices s_a and σ_b (a, b ∈ {x, y, z}) acting on the spin and orbital (pseudospin) subspaces, respectively. Note that the identity matrices are not written explicitly here. The parameters C_0, C_⊥, C_z, M_0, M_⊥, M_z, A_⊥, A_z can be obtained for different TI materials, such as Bi_2Se_3 or Bi_2Te_3 [29]. Here, we will neglect in-plane (⊥) versus out-of-plane (z) anisotropy (in terms of model parameters, A_⊥ = A_z ≡ A, C_⊥ = C_z ≡ C, M_⊥ = M_z ≡ M) and the asymmetry between valence and conduction bands (C = 0) for simplicity (these simplifications will not significantly affect the surface-state spectrum near the Dirac point [30], which is the focus of this work). We consider the remaining model parameters A = 3 eV·Å, M = 15 eV·Å², and M_0 = 0.3 eV to represent a 3D TI material with an inverted (direct) bulk band gap at the Γ point equal to 0.6 eV and a Dirac velocity for the 3D TI surface states equal to v_D = A/ħ ≈ 4.6 × 10^5 m/s, which are both comparable to Bi_2Se_3 [28] (note that 0.6 eV reflects the band separation at the Γ point and not the overall bulk band gap, which is closer to 0.3 eV). For describing charge carrier density fluctuations in the ribbon (i.e., electrostatic disorder), we can add a disorder term S_dis φ(r) to the parameter C_0 (µ = C_0 corresponds to the chemical potential being pinned at the Dirac point in a pristine system without disorder). This disorder is characterized by a fluctuation amplitude S_dis and the function φ(r), which we consider to be a unit-normalized white-noise profile [⟨φ(r)φ(r′)⟩ = δ(r − r′)] or a Gaussian random field [⟨φ(r)φ(r′)⟩ = e^{−(r−r′)²/(2λ²)}] with spatial correlation length λ. Here, we consider an electrostatic disorder strength of the order of the TI surface-state subband spacing (∼2πA/P, see below, which is in the 10 meV range) and a spatial correlation length in the few-nm range [31].
Proximity-Induced Superconductivity
In our setup, we consider a 3D TI nanoribbon that is proximitized via its top surface by an s-wave superconductor. We make use of the Bogoliubov-de Gennes (BdG) formalism to treat the proximity-induced superconducting pairing [32], which yields the model Hamiltonian of Equation (2), written in terms of the Nambu spinor of Equation (3). We consider conventional s-wave pairing (induced by the superconductor on top of the TI nanoribbon) in the BdG formalism, which is given by a momentum-independent pairing with complex-valued pairing potential ∆ = Re{∆} + i Im{∆}. Combining Equations (1)-(3), we obtain an 8-by-8 BdG Hamiltonian matrix H_BdG(k) (Equation (4)), with τ_c (c ∈ {x, y, z}) Pauli matrices acting on the particle-hole subspace. Note that the superconducting proximity effect encompasses several aspects, such as an induced superconducting gap, an induced pairing potential, and induced particle-hole correlations [33]. For our simulation approach, based on the BdG Hamiltonian in Equation (4), we do not explicitly include the superconductor on top and only consider the pairing potential that is induced at the interface (see Ref. [34] for an explicit derivation of such a pairing term at the interface) as input. With this input, we can calculate the surface-state quasiparticle spectrum at low energies, with the induced spectral (superconducting) gap, as well as particle-hole correlations throughout the complete TI nanoribbon, for example [35].
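The disorder model described above is straightforward to generate numerically. A minimal sketch (not the authors' code), producing a Gaussian-correlated random field on the cubic simulation grid by smoothing white noise; the smoothed field is renormalized to unit variance so that S_dis keeps its meaning as the fluctuation amplitude:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
shape = (1000, 10, 10)      # ribbon grid: L x W x H sites (a = 1 nm)
lam = 3.0                   # correlation length lambda in lattice constants

white = rng.standard_normal(shape)          # <phi(r) phi(r')> ~ delta(r - r')
smooth = gaussian_filter(white, sigma=lam)  # correlated over ~lambda sites
smooth /= smooth.std()                      # restore unit amplitude

S_dis = 0.1                                 # in units of the subband spacing
onsite_shift = S_dis * smooth               # disorder added to C_0 on each site
```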
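Tight-Binding Model
For simulating a proximitized TI nanoribbon, we discretize the BdG Hamiltonian of Equation (4) via the standard procedure onto a regular cubic grid with lattice constant a = 1 nm, resulting in a tight-binding model Hamiltonian with on-site and nearest-neighbor hopping matrices, H_onsite and H_hop^{x,y,z}, respectively. For the BdG Hamiltonian H_BdG(k) described in Equation (4), this results in the on-site and hopping matrices of Equations (5) and (6), with comparable hopping matrices along the y and z directions. All the simulation results presented in this work are obtained with this tight-binding model (see Appendix A for details on the implementation of the simulation approach and the retrieval of the spectral gap). For our setup, we consider a nanoribbon with either infinite length or length L = 1 µm along the x-direction, and a square cross section (width W = 10 nm along y, height H = 10 nm along z, perimeter P = 2W + 2H = 40 nm) that is proximitized by an s-wave superconductor covering its top surface (see Figure 1a). Because the TI is not an intrinsic superconductor, the pairing potential decays quickly away from the TI-superconductor interface, i.e., over atomic distances [35]. It is therefore reasonable to assume a nonzero ∆ (here, we consider ∆ to be real, without loss of generality, and equal to 5 meV) only on the topmost layer of the TI nanoribbon lattice model (∼1 nm thick) that is considered to be in direct contact with the superconductor, while ∆ = 0 elsewhere in the lattice. For the orbital effect of an external magnetic field, we consider the Peierls substitution method for the hopping terms: t_{i→j} → t_{i→j} exp[(iq/ħ) ∫_{r_i}^{r_j} A · dl]. Here, t_{i→j} represents the hopping matrix from site i with position r_i to site j with position r_j, which is modified by a Peierls phase depending on the vector potential A with corresponding external magnetic field B = ∇ × A, the reduced Planck constant ħ, and the charge q = ∓e (with e the elementary charge) of the charge carrier (either a particle or a hole). We consider a constant external magnetic field oriented along the nanoribbon (in the frame of reference considered here, along the x-direction) with A = (0, |B|(z − H), 0). We consider this vector potential such that it vanishes on the topmost layer of the nanoribbon (z = H) and is compatible with A = B = 0 for z > H, avoiding any supercurrent ∝ |∆|A in our description [32]. These assumptions correspond to a simplified treatment of the experimental setup: a magnetic field that vanishes completely inside the superconductor on top of the TI nanoribbon, while neglecting any shielding current.

For a linear vector potential such as A = (0, |B|(z − H), 0), the line integral in the Peierls phase can be evaluated exactly with the midpoint rule on each straight bond. A minimal sketch of this bookkeeping (an illustration, not the authors' code):

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
e = 1.602176634e-19      # C

def peierls_phase(r_i, r_j, B, H, q=-e):
    """Phase factor multiplying t_{i->j}; positions in m, B in T, q = -e (+e) for particles (holes)."""
    r_i, r_j = np.asarray(r_i, float), np.asarray(r_j, float)
    mid = 0.5 * (r_i + r_j)
    A_mid = np.array([0.0, abs(B) * (mid[2] - H), 0.0])  # gauge vanishing at z = H
    return np.exp(1j * q / hbar * (A_mid @ (r_j - r_i)))

# hopping along y between neighboring sites 5 nm below the top surface (H = 10 nm):
print(peierls_phase((0, 0, 5e-9), (0, 1e-9, 5e-9), B=1.0, H=10e-9))
```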
Spectral Gap
In Figure 1c, we present the gap in the quasiparticle spectrum of a top-proximitized TI nanoribbon as a function of the magnetic flux piercing the cross section of the ribbon and of the chemical potential (note that µ = 0 corresponds to the position of the Dirac point of the TI surface-state Dirac cone). The parameter space is divided into different gapped regions that are separated by gapless phase boundaries or regions. In general, the gap lies somewhere between zero and |∆|, with ∆ the superconducting pairing potential considered on the top surface of the TI nanoribbon (in direct contact with the proximitizing superconductor), which provides a natural upper bound for the proximity-induced superconducting gap. Only near µ = 0 and integer multiples of the flux quantum does the gap exceed |∆|. Here, the evaluated spectral gap is a trivial insulating gap due to confinement quantization ∼πA/P, rather than a proximity-induced superconducting gap. The spectral gap is either topologically trivial or nontrivial, and the top-proximitized TI nanoribbon only forms a quantum wire with unpaired MBSs in the nontrivial regime. The nature of the gap can be determined with a Z_2 topological invariant M [11] (Equation (7)), defined in terms of the Pfaffian (Pf) of the tight-binding model Hamiltonian H_TINR(k) over the cross section of the TI nanoribbon, with wave number k along the direction of the ribbon. The trivial and nontrivial regions are indicated by color and are in good qualitative agreement with the diamond-tiled phase diagram that can be obtained analytically for a cylindrical TI nanowire model with ∆ = 0 [23]. A (nonproximitized) cylindrical TI nanowire has the surface-state (particle) spectrum E_l(k) = ±ħv_D √[k² + (2π/P)²(l + 1/2 − η)²], with k the wave number (for propagation along the nanowire), l = 0, ±1, . . . the quantum number for quantized angular momentum, v_D the Dirac velocity of the Dirac cone, η = Φ/Φ_0 the total magnetic flux piercing the nanowire cross section in multiples of flux quanta (Φ_0 ≡ h/e), and P = 2πR the perimeter (circumference) of the cylindrical nanowire with radius R. From this expression, it can be seen that the spectrum is a subband-quantized Dirac cone that is flux quantum-periodic. By evaluating the number of Fermi points ν with k > 0 (or k < 0), i.e., the number of forward (or backward)-propagating surface states at zero energy from the different subbands that cross the chemical potential, the topological invariant above can be obtained in an alternative way by evaluating M = (−1)^ν. In other words, the system becomes topologically nontrivial when there is an odd number of such Fermi points and corresponding propagating modes. Each diamond in the phase diagram represents a bounded region with a given number of Fermi points, which is even for the diamonds in grayscale and odd for the diamonds in redscale. Hence, for a piercing magnetic flux that is a half-integer multiple of one flux quantum, the system always has an odd number of Fermi points and remains in the nontrivial regime for all values of the chemical potential µ (within the TI nanoribbon bulk gap, as we are only considering the topological surface states inside the bulk gap). Conversely, the system is always in the trivial regime without an external magnetic field, or when the piercing magnetic flux is an integer multiple of Φ_0. Note that Fermi points, i.e., zero-energy surface states, can only be considered in general for ∆ = 0, as the states can otherwise gap out around zero energy. Therefore, we resort to the more general topological invariant of Equation (7), and the perfect diamond tiling gets slightly deformed, with different diamonds in the phase diagram becoming connected. Further note that the diamond height (its extent as a function of chemical potential) is equal to the subband energy spacing and smaller than 2πA/P (with A = ħv_D), which is the expected spacing based on the cylindrical wire model when substituting the circumference with the perimeter of the square cross section. This can also be seen in the surface-state spectrum presented in Figure 2a and has been reported before [30]. It may originate from the pile-up of wave function density near the corners, which appears to increase the effective perimeter of the cross section.
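The Fermi-point counting argument is easy to reproduce from the spectrum quoted above. A small sketch (in units of 2πA/P, so that the subband bottoms sit at |l + 1/2 − η|), treating the dispersion above as given:

```python
import numpy as np

def n_fermi_points(mu, eta, lmax=20):
    """Number of subbands crossing mu at k > 0 (energies in units of 2*pi*A/P)."""
    l = np.arange(-lmax, lmax + 1)
    bottoms = np.abs(l + 0.5 - eta)          # subband bottoms |E_l(k = 0)|
    return int(np.sum(bottoms < abs(mu)))    # one Fermi point per crossing subband

for mu, eta in [(0.3, 0.5), (0.8, 0.5), (0.8, 0.25), (1.3, 0.35)]:
    nu = n_fermi_points(mu, eta)
    print(f"mu = {mu}, eta = {eta}: nu = {nu}, M = {(-1) ** nu:+d}",
          "(nontrivial)" if nu % 2 else "(trivial)")
```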
[Figure 2 caption, fragment: spectra at the parameters indicated in Figure 1c, with the color indicating the particle-hole (p-h) mixing (blue for hole, red for particle); the spectrum for ∆ = 0 is presented with thin black lines, and the extent of the spectral gap is indicated with horizontal pink dashed lines.]
An interesting finding is that the spectral gap in the nontrivial region is maximal for the diamond closest to the Dirac point (µ = 0), which corresponds to the region with a single Fermi point, and remains large for the diamonds at higher µ for a piercing flux close to a half-integer flux quantum (see Figure 1d). The size of this spectral gap agrees well with the perturbative estimate E_gap ≈ ⟨ψ_p|∆|ψ_h⟩ ∼ (W/P)∆ ≈ ∆/4 for our TI nanoribbon with square cross section, with |ψ_p⟩ and |ψ_h⟩ the particle and hole surface states at the Fermi point for ∆ = 0. The reduction factor 1/4 originates from the ratio of the section of the perimeter with nonzero ∆ (only the top surface) to the complete perimeter, which gets enveloped by the particle and hole surface states. In addition to the separation into trivial (grayscale) and nontrivial (redscale) regions, the spectral gap also reveals different gapped phases within the trivial and nontrivial regions themselves, separated by gapless phase boundaries (note that gap closings without a change of topological invariant were also reported in Ref. [34]). This suggests that the regions with and without unpaired MBSs subdivide further into different phases with additional distinct properties. As some of the regions stretch out over multiple diamonds, we can already rule out that the properties are strictly related to the number of Fermi points at ∆ = 0. To reveal the properties of the different phases in the phase diagram, based on the spectral gap, we take a closer look at four marked points (the marker symbols are shown in Figure 1c), which lie in different diamonds or in regions separated by a gapless boundary in Figure 1c. Their quasiparticle spectra are presented in Figure 2b. In general, we see the expected number of Fermi points of the different subbands, based on the diamond to which each point belongs (a single Fermi point for the point in the lowest diamond, two Fermi points for the point marked •, and three Fermi points for the two remaining points), with local minima of the proximity-induced superconducting quasiparticle gap forming near them. Interestingly, one of the two three-Fermi-point points belongs to the same phase that stretches out over the complete single-Fermi-point diamond below it, in which the single-Fermi-point point also lies, while the other three-Fermi-point point sits in the same diamond yet represents a different phase, separated from the first by a gapless boundary. The qualitative difference between the two phases in the quasiparticle spectrum can be narrowed down to the number of local minima that can be attributed to a single spinless channel. In the diamond with three Fermi points where these two points lie, there is either a single such local minimum or three of them, depending on the relative positioning of the Fermi points in reciprocal space. For the point belonging to the phase of the diamond below, two Fermi points overlap in momentum, which effectively turns these channels into a trivial spinful channel. Hence, the number of spinless channels is reduced to one, which is the same number as in the diamond below. This spectral property has consequences for the formation of MBSs, as will be discussed in the following section.
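As a quick numerical check of the perturbative estimate quoted above for the square cross section considered in this work:

```python
# E_gap ~ (W / P) * Delta: only the top surface (width W) of the perimeter P
# carries a nonzero pairing potential.
W, H = 10, 10                 # nm
Delta = 5.0                   # meV
P = 2 * (W + H)               # perimeter, 40 nm
print(f"E_gap ~ {W / P * Delta:.2f} meV")   # 1.25 meV = Delta / 4
```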
Robust and Fragile Majorana Bound States
In Figure 3a, we present the low-energy quasiparticle spectrum as a function of flux of a top-proximitized TI nanoribbon with finite length. In this way, we also reveal the states that are localized at the ends of the ribbon. We fix the chemical potential to two different values to explore the different trivial and nontrivial phases, as discussed in the section above. Near Φ = nΦ_0 (n ∈ Z), there is a completely trivial insulating phase without any subgap states. For other values of the flux, however, subgap states appear in the spectrum, even in regions that are trivial according to the topological invariant of Equation (7), and they are localized at both ends of the nanoribbon (see Figure 4a).
[Figure 3 caption, fragment: chemical potential indicated in Figure 1c at µ ≈ 1.3 × 2πA/P (µ ≈ 0.75 × 2πA/P); the results are obtained (a,d) without disorder and (b,c,e,f) with disorder (considering S_dis = 0.1 × 2πA/P in (b,e) and S_dis = 2πA/P in (c,f), with the different disorder strengths also indicated in Figure 1c).]
Close to Φ = (n + 1/2)Φ_0 and for the complete nontrivial diamonds nearest to the Dirac point (the connected region containing the single- and three-Fermi-point points discussed above, for example), there are only two subgap states. These can be identified as a pair of MBSs forming at opposite ends of the top-proximitized TI nanoribbon. In other words, this phase corresponds to the conventional Majorana quantum wire system. This phase and its MBSs are robust against local disorder, as can be seen in Figure 3b,c, where it is presented how the low-energy spectra of Figure 3a are affected by increasing electrostatic disorder throughout the nanoribbon. In the region of the point marked •, four subgap states form, two on each end of the ribbon. In this case, the TI nanoribbon effectively has two independent spinless channels (see the previous section and Figure 2), which both give rise to a pair of MBSs forming at opposite ends of the nanoribbon. However, as there are two MBSs on each wire end, these MBSs can couple to the other MBS on the same end when they are exposed to local electrostatic disorder. Hence, in the presence of disorder, these MBSs hybridize into non-self charge-conjugate bound states with finite energy. We refer to such MBSs as fragile MBSs. In contrast, a single unpaired MBS can only suffer from hybridization with the MBS on the opposite end of the ribbon, which can be suppressed by making the proximitized section significantly longer than the MBS localization length ∼ħv_D/E_gap. Therefore, we refer to it as a robust MBS pair. The low-energy spectrum of the remaining three-Fermi-point point shows six subgap states, which can be identified as three pairs of MBSs that form at opposite ends of the TI nanoribbon. When electrostatic disorder is present, two out of the three MBSs can hybridize locally and are thus fragile, while a single robust pair should remain protected as long as the TI nanoribbon remains in the nontrivial regime (similar to what happens in a tri-junction of Majorana wires [16]). When the electrostatic disorder strength becomes of the order of the diamond height ∼2πA/P, the TI nanoribbon is not guaranteed to remain in the same phase when the flux is not an integer (Φ = nΦ_0) or half-integer [Φ = (n + 1/2)Φ_0] multiple of the flux quantum. In this strongly disordered regime, the spectral gap will fluctuate strongly along the disordered nanoribbon and locally cross gapless phase boundaries. Because of this, many states with near-zero energies can appear, which renders it difficult to interpret the spectrum in terms of end-localized MBS pairs.
[Figure 4 caption, fragment: (see Figure 1c) for four different combinations of chemical potential and piercing magnetic flux (also indicated in Figure 1c); the tunneling conductance in (b) corresponds to the vertical gray dashed line cuts in Figure 3d-f; the spectral gap of the TI nanoribbon considering infinite length and no disorder is indicated by vertical pink dashed lines.]
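The counting of robust versus fragile MBS pairs can be illustrated with a much simpler stand-in model than the paper's 3D nanoribbon. The toy sketch below (a single-channel Kitaev chain, not the authors' model) shows how one pair of end-localized MBSs appears as two near-zero eigenvalues of a real-space BdG Hamiltonian and survives weak on-site disorder:

```python
import numpy as np

def kitaev_bdg(N, mu, t, delta, disorder):
    """Real-space BdG matrix of a Kitaev chain, basis (c_1, c_1^dag, c_2, ...)."""
    H = np.zeros((2 * N, 2 * N))
    for n in range(N):
        H[2 * n, 2 * n] = -mu + disorder[n]          # particle on-site
        H[2 * n + 1, 2 * n + 1] = mu - disorder[n]   # hole on-site
    for n in range(N - 1):
        H[2 * n, 2 * n + 2] = H[2 * n + 2, 2 * n] = -t          # hopping
        H[2 * n + 1, 2 * n + 3] = H[2 * n + 3, 2 * n + 1] = t
        H[2 * n, 2 * n + 3] = H[2 * n + 3, 2 * n] = delta       # p-wave pairing
        H[2 * n + 1, 2 * n + 2] = H[2 * n + 2, 2 * n + 1] = -delta
    return H

rng = np.random.default_rng(1)
H = kitaev_bdg(200, mu=0.5, t=1.0, delta=0.3,
               disorder=0.1 * rng.standard_normal(200))
E = np.linalg.eigvalsh(H)
print(np.sort(np.abs(E))[:4])   # two ~zero MBS modes, then the bulk gap
```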
Now, we can put all these findings together with the phase diagram and the spectral properties obtained in the previous section. It becomes clear that the different gapped phases can be identified by the number of MBS pairs forming at opposite ends of the proximitized TI nanoribbon, with the number always being even (odd) in the trivial (nontrivial) regime. When there is some amount of electrostatic disorder in the proximitized TI nanoribbon, however, it is expected that any even number of MBSs will hybridize locally with the other MBSs at the same end, while a single pair of MBSs should remain immune to hybridization in a nontrivial regime with an odd number of pairs in total. This single pair will then survive near zero energy at opposite ends of the proximitized TI nanoribbon. We can thus identify and label the phases in Figure 1c (also see Figure 1b) by their number of robust MBS pairs (zero or one) and fragile MBS pairs (an even number). As the phase with a single robust MBS pair and zero fragile pairs stretches out over a large chemical potential window near Φ = (n + 1/2)Φ_0, it is the most robust phase with respect to electrostatic disorder. It suffers neither from fragile MBSs nor from strong fluctuations of the spectral gap. In the subsection below, we discuss characteristic signatures of these different phases and their MBSs in a tunneling spectroscopy setup.
Tunneling Spectroscopy
In this subsection, we consider the tunneling conductance of a metallic tunneling probe that is attached to an uncovered end of a proximitized TI nanoribbon (see Appendix A for details), as a function of the piercing magnetic flux and the energy of the carriers injected from the tunneling probe (corresponding to the bias voltage across the tunneling junction). Note that, in the experimental setup, the uncovered end should be shorter than the induced coherence length ∼ħv_D/E_gap [9] for obtaining a hard proximity-induced gap and clearly revealing the subgap states. The results of these simulations are shown in Figure 3d-f for the same TI nanoribbon and parameters as in Figure 3a-c, and in Figure 4b for fixed values of the piercing magnetic flux. We consider the conductance normalized to G_0 ≡ e²/h. Overall, we find that the tunneling conductance reveals the subgap spectrum of the top-proximitized TI nanoribbon without disorder well, with a quantized conductance peak of 2e²/h near zero bias (corresponding to perfect Andreev reflection) when a single unpaired MBS is localized on the side of the tunneling probe. Due to the finite length of the ribbon, there is hybridization of MBS pairs across the length of the ribbon, resulting in a splitting of the conductance peak away from zero bias. This splitting is modulated by the piercing magnetic flux. When electrostatic disorder is introduced, the tunneling conductance of the phase with only a single (robust) MBS pair remains qualitatively the same. The zero-bias conductance peak is wider in the case of strong disorder, but it remains quantized and pinned at zero energy (see Figure 4b). The tunneling conductance of the phases with (fragile) MBS pairs gets heavily affected by disorder, however, and a quantized conductance peak cannot be easily identified. Instead, the conductance near zero bias displays irregular signatures that are very sensitive to the bias and the piercing flux.
This can be expected, as the subgap spectrum itself is heavily affected by the disorder. Hence, it will be harder to identify phases with fragile MBSs via tunneling spectroscopy, and to determine whether the phase lies in the trivial or nontrivial regime, in particular when the electrostatic disorder strength is of the order of the subband spacing or larger.
Discussion
It is important to note here that the realization of fully gapped topological superconductivity in proximitized TI nanoribbons has been considered before with comparable simulation approaches [23][24][25][26][34][36]. It was pointed out by de Juan et al. in Ref. [25] that some form of transverse asymmetry is required to open up a superconducting gap in the TI nanoribbon surface-state quasiparticle spectrum. Without asymmetry, the induced gap is expected to vanish (E_gap ≈ ⟨ψ_p|∆|ψ_h⟩ ≈ 0) because of a mismatch of the quantized transverse momentum. Transverse asymmetry can be induced by a superconducting vortex enveloping the nanoribbon [25], by electrostatic gating [26], or by considering more sophisticated hybrid structures with multiple superconductors and gates, which may also enhance the proximity-induced gap away from the Dirac point [34], for example. Our results, however, suggest that the strong decay of the pairing potential away from the intrinsic superconductor is already sufficient to realize the required transverse asymmetry when only bringing one of the side surfaces (e.g., the top surface) in direct contact with the superconductor. In this way, a sizeable proximity-induced gap, ∼(W/P)|∆| or ∼(H/P)|∆|, can be naturally achieved near a half flux quantum of piercing magnetic flux, especially when the Fermi level is close to the Dirac point [34]. Hence, our findings suggest that a rather straightforward device layout (a TI nanoribbon onto which superconducting material is deposited) should already be ideally suited for realizing the topologically nontrivial regime with well-separated unpaired MBSs. Furthermore, we note that multiple (fragile) MBSs are not unique to top-proximitized TI nanoribbons. They also appear in proximitized semiconductor nanowires with a multi-subband treatment [37], for example. In that case, however, the nontrivial regions with a different number of Fermi points are disconnected, such that electrostatic disorder can more easily push the system into a trivial regime [38]. For future work, the consideration of material- and sample-specific parameters for, e.g., the TI model Hamiltonian, the ribbon dimensions, and the disorder strength would be interesting to explore. Equally relevant is the consideration of more complicated MBS architectures that allow for braiding, with multiple proximitized nanoribbons with different orientations as building blocks (e.g., a Y-junction [24]). For such structures, the orbital effect from an external magnetic field that is misaligned with one of the ribbons should also be considered. This has been shown to induce a steering effect in TI nanoribbon structures [30,39], and it also affects the topological phase in semiconductor nanowires, for example [40][41][42][43]. Regarding the experimental feasibility, we expect that the phase diagram presented here can be resolved in state-of-the-art TI nanoribbon samples.
Quasi-ballistic transport of topological surface states [44,45] (also in combination with tunability of the Fermi level with respect to the Dirac point via electrostatic gating [46][47][48][49][50]), surface-state subband quantization [51], as well as proximity-induced superconductivity [8] have all been demonstrated in TI nanowires or ribbons. Aside from electrostatic gating, heterostructure engineering can be considered to tune the intrinsic Fermi level to within a few meV of the Dirac point [52,53]. Finally, we comment on the consideration of tunneling spectroscopy to probe the different topological phases of top-proximitized TI nanoribbons. From alternative MBS platforms (in particular, semiconductor nanowires), we know that a (quantized) zero-bias conductance peak as an MBS signature must be considered with care, as such a signature can also have a trivial origin [54]. Nonetheless, the conditions for such false positives are less likely to appear in top-proximitized TI nanoribbons because of two important differences. First, topological TI surface states have more intrinsic robustness against disorder than the low-energy modes of semiconductor nanowires due to spin-momentum locking [55]. Second, a quantum dot at the end of the ribbon, which is one of the important mechanisms for retrieving a zero-bias conductance peak with a trivial origin in semiconductor nanowires [54], is less likely to form because of the linear Dirac-cone spectrum.
Conclusions
With a detailed three-dimensional tight-binding model, we investigate numerically the spectral gap of three-dimensional topological insulator nanoribbons with a piercing magnetic flux and proximity-induced superconductivity, induced by a superconductor on the top surface. The spectral gap reveals a rich phase diagram as a function of flux and chemical potential, with different gapped phases with paired and unpaired Majorana bound states appearing at opposite ends of the nanoribbon. These Majorana bound states can be robust or fragile with respect to local hybridization due to electrostatic disorder. When the Fermi level in the topological insulator nanoribbon is close to the Dirac point and the piercing magnetic flux is close to half a flux quantum, we retrieve the optimal conditions for realizing fully gapped topological superconductivity. Under these conditions, there is a single pair of robust MBSs at opposite ends of the nanoribbon over an extended range of chemical potential and flux values. This phase gives rise to a robust quantized zero-bias conductance peak in tunneling spectroscopy.
Conflicts of Interest: The authors declare no conflict of interest.
Appendix A. Simulation Approach
For the tight-binding simulations, we use the Python package Kwant [56] with the parallel sparse direct solver MUMPS [57] and Adaptive [58] for parameter sampling (e.g., flux and chemical potential) on a nonuniform adaptive grid. The Pfaffian is calculated numerically with the algorithm of Ref. [59]. The source code and raw data of our simulations are archived in a public repository [60]. For the calculation of the spectral gap E_gap, we use an algorithm that is based on that of Ref. [41], but slightly modified to speed up the convergence. The modification consists of checking whether modes exist for ∆ = 0 at E = 0 (i.e., a Fermi point). If not, the algorithm from Nijholt et al. is used [41]. If modes exist at E = 0, we look for a local minimum of the spectral gap as a function of k around the Fermi point with an adaptive search algorithm [58].
For ∆ small compared to the subband spacing (as is the case in our setup), the difference between the k value of the minimum and that of the subband crossing is small (see Figure A1), so the search for this local minimum quickly yields a precise result. For the tunneling spectroscopy simulations, we consider a metal lead with the continuum model Hamiltonian H_metal(k) = ħ²|k|²/(2m*) − µ, representing a free electron gas with effective mass m* = 0.01 m_e and Fermi level µ = 20 eV. These artificial model parameters allow us to consider a lattice-matched few-channel metallic tunneling probe with which the tunneling conductance can be resolved, also when exceeding 2e²/h (due to Andreev reflection signatures of multiple MBSs adding up, for example). We discretize this Hamiltonian on the same cubic grid as for the TI nanoribbon model, with lattice constant a = 1 nm, and consider a metal lead with translational invariance along x and a square cross section of 2-by-2 lattice sites. We attach this lead at one end of the proximitized TI nanoribbon, directly at the center of its cross section. We consider a 10-nm-long section of the TI nanoribbon that is not covered by a superconductor, separating the metallic tunneling probe from the part of the ribbon that is in direct contact with the superconductor (with ∆ ≠ 0 on the top surface).
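The modified gap search of Appendix A can be paraphrased as a one-dimensional minimization around the Fermi point. A sketch of that step, with a stand-in dispersion in place of the lowest BdG eigenvalue that the real code would obtain by diagonalizing the cross-section Hamiltonian at each k (function name and parameters are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lowest_positive_energy(k, k_F=0.3, gap=1.25e-3, v=3.0):
    # stand-in for min |E| of H_BdG(k); eV and nm^-1 units are illustrative
    return np.sqrt((v * (k - k_F)) ** 2 + gap ** 2)

res = minimize_scalar(lowest_positive_energy, bracket=(0.25, 0.3, 0.35))
print(f"E_gap ~ {res.fun * 1e3:.3f} meV at k = {res.x:.3f}")
```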
8,128.4
2022-12-29T00:00:00.000
[ "Physics" ]
Sustainable Supply Chain Finance and Supply Networks: The Role of Artificial Intelligence
Supply chain finance (SCF) is receiving increasing attention in research as a result of uncertainties in global financing for supply chains (SCs). Studies of the implementation of financial services in SC management remain limited and fragmented. This article builds on the recovery from the financial crisis of 2008 and the post-COVID-19 pandemic period, in which uncertainties crippled the services of SCF providers and brokers. At the same time, cutting-edge technological advancements such as Artificial Intelligence (AI) are revolutionizing the processes of the business ecosystem in which SCF is entrenched. This article thus adopts a fuzzy set theoretical approach to unpack the validity of the entity relationships in a sustainable SCF meta-framework, and draws on the originality of AI concepts for sustainable SCF to identify the issues and inefficiencies. The results indicate that AI contributes significant economic opportunities and delivers the most effective utilization of supply networks. In addition, the article provides a theoretical contribution to financing in SCs and broadens the managerial implications for improving performance.
I. INTRODUCTION
In the last two decades, technological advancements in the supply chain (SC), underpinned by computerized shipping and tracking, enterprise resource planning (ERP), and big data, are still emerging, with innovations contributing not only to human intelligence, data analytics, and systems thinking but also to the efficiency of SC management. Of particular interest is the potential of SC networks as assets for SC companies, combining technologies and systems applications with SC modules. However, the application of Artificial Intelligence (AI) technologies in SCs remains slow and limited, while distributed enterprise environments are at a higher stage of implementation in their operations [1]. Conventionally, SC finance (SCF) focuses mainly on the financial aspects of SC management, particularly defining inventories as cash flows in view of an application for financial services in this sector of global business [2]. Furthermore, AI is one of the enablers of global financing in business services, hence the role of AI in building a relationship between SCF and SC networks [3]-[5]. According to Du et al. [6], the introduction of AI technologies primarily reduces SCF and SC network (SCN) challenges; for example, the lack of consideration of the operational assets available in supply networks during financing applications limits the capacity of SCNs. Nevertheless, with the emerging opportunities offered by AI-enabled SCNs, sustainable financial services can be implemented [4], [7]-[10]. Thus, SC operations exist in multiple environments, categorized by technology, organizational culture, and systems, that vary depending on the policies of the region in which they operate [11]. Therefore, with respect to the major challenges that exist in SCF and SC network environments, such as increasing regulations imposed by financial providers, AI can provide pathways to overcome these barriers by analyzing information and data flows and providing alternatives in SC operations. In addition, SCF has become the hub for processing global SC financial services in this age of digital transformation. Global markets and SC operations now face the challenge of developing innovations and technologies to integrate SC networks with financial services.
Past SCF studies have examined the impact of the last economic recession on financing in SCs; they proposed interorganizational management of financial flows and the advantages of infrastructure sharing as working models for SCs [12], [13]. Most of the previous work focused on interorganizational SCF; however, SC networks and technological advancements such as AI have received very limited consideration. As SC networks continue to grow, leveraging technology-driven financing methods such as procure-to-pay, which integrate both financing functionalities and purchase management systems, is becoming a preferred alternative for SC financing [14]. Furthermore, large financial brokers and institutions are supporting these emerging initiatives worldwide, and current studies project AI systems as the tool to advance financing in SC management after the COVID-19 pandemic, offering more reliable partnerships between financiers and SC companies [15]. Furthermore, SC networks act as a single colossal system of interconnected SC companies and financial institutions/brokers, providing a link to control and manage financial services, tracking, and cash flows. Nevertheless, gaining continuous access to suppliers' networks requires direct relationships with companies' operations and a higher level of SC integration, which also has a direct impact on the independence of an individual supplier's security. The role of AI as a technological tool is to bridge and stimulate SC financing through existing SC networks, minimizing the complications experienced by valuable SC companies as a result of tougher financing application requirements, and to aid the understanding of the designs and operations of SC networks [4], [5], [16], [17]. Thus, this article investigates the theoretical research on SCF, SC networks, and AI in SC management, which leads to two primary research questions:
RQ1: What are the components of SCF and SC networks that are required for an AI system?
RQ2: Can AI simplify SC financing by understanding the relationship between SCF and SC networks?
To achieve the objectives of this article, a fuzzy set theoretical approach is adopted for the complementarity and equifinality of the entity relationships proposed in the conceptual meta-framework, because it can evaluate consistency and coverage thresholds among the criteria. This article responds to the need for theoretical insights into SC financing and the importance of SC networks. The article first explores the theoretical background, then presents an in-depth study, the data analysis, and the findings. Finally, it concludes with a discussion of the implications of this article for research and practice, limitations, and future research directions.
II. THEORETICAL BACKGROUND
Sustainable SC financing, as a continuous process, tackles the challenges posed since the 2008 economic recession and the post-COVID-19 pandemic period by holistically connecting financial institutions and brokers with SC companies; past studies argued for collaborative resource sharing, financing models, and government grants for the SC sector. Whereas prior studies examined single factors such as SCF risks [18], SCF opportunities [19], and SC firms [7], this section provides an in-depth review of SCF, SC networks, and AI with a perspective on a conceptual meta-framework.
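The consistency and coverage thresholds mentioned above follow the standard fuzzy-set (fsQCA) definitions: for "condition X is sufficient for outcome Y", consistency = Σ min(x_i, y_i) / Σ x_i and coverage = Σ min(x_i, y_i) / Σ y_i over the cases' membership scores. A minimal sketch with illustrative placeholder scores (not this article's data):

```python
import numpy as np

x = np.array([0.9, 0.7, 0.4, 0.8, 0.2, 0.6])   # membership in condition, e.g., AI adoption
y = np.array([0.8, 0.9, 0.5, 0.7, 0.3, 0.9])   # membership in outcome, e.g., sustainable SCF

overlap = np.minimum(x, y).sum()
consistency = overlap / x.sum()   # how reliably X implies Y
coverage = overlap / y.sum()      # how much of Y is explained by X
print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
# solutions are commonly retained above a consistency threshold of ~0.75-0.80
```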
A. Sustainable SC Finance
According to past studies, it is a common phenomenon in SCs that information, product, and financial flows are relevant factors, both in theory and for practitioners, in understanding how to improve financing across SC companies [12]. According to Caniato et al. [20], SC companies consist of entities that operate in SC management, including suppliers, transportation, retailers, etc. Based on this understanding, considerable effort has been devoted to studying product mobility and data flows [21]. However, the same does not apply to financial flows, where optimization is not as advanced as for product mobility and data flows in terms of the integration of SC operations and financing. In general, a new stream of literature is emerging on related topics to bridge this gap in the research.
In this age of digitalization, SC companies face enormous pressure regarding the operation of their business activities and processes, providing the best service without disruption and meeting the needs of their customers [22]. Evolving innovation in information technology provides a new paradigm for SC operations, and some challenges for SC companies are becoming manageable [7]. Nevertheless, the recent financial crisis and the COVID-19 pandemic led to various difficulties for SC companies; customer demand has skyrocketed with limited turnaround times, creating the need for SC companies to seek more financial resources to meet the ever-growing demand in the market [23]. Unfortunately, after the last financial crisis, financial institutions and brokers raised the standards and requirements for financing applications [12], making access to financing extremely difficult for companies with inadequate cash flows. Furthermore, to meet this level of demand, SC companies require consistent and stable cash flows for sustainable and efficient daily operations. Carnovale et al. [24] argued that technology-driven SCF is an innovative method that solves the financing problems of SC companies by considering cash flows and other activities in their operations. In addition, Lam [18] further explains the principle of SCF as a fundamentally integral component of financing in SC processes, whereby financial institutions and brokers provide credit and trade financial services to facilitate and support SC companies' operations. Another study [25] argued that SCF, as a financing solution for SCs, provides alternative solutions for credit issues, improving SC companies' performance by working in partnership with other companies and leveraging joint resources to reduce the risk of interruptions while supporting financing and operational opportunities in SCs.
Hence, SC financing can take advantage of the commercial finance environment by combining technological advancement and financial solutions into a single system for financial and operational integration. Osadchiy et al. [26] argued that SC financing methods such as business-to-business (B2B) trade credit and crowdfunding are, in practical terms, expanding as their customer networks grow at an exponential rate. Nonetheless, the challenge of cash flow deficiency remains among SC companies' financial and operational problems. Hence, exploring technological innovations for SC financing is important not just for research but also for SC cash flows, as SC companies constantly seek investment from multiple sources in the financial capital market and from stakeholders to sustain their operations and improve their partnerships [27].
As SCF is the most important financing solution for most SC companies struggling with access to steady and readily available cash flow, Zhao and Huchzermeier [28] further categorize SCF into collateral SCF, time-based SCF, and credit SCF. According to financial economics theory, SC companies have the ability to achieve specific organizational goals and to excel when a financial mechanism is in place to support their goals and objectives [29]. Lekkakos and Serrano [25] provided a detailed outline of the role of financial institutions and brokers in granting financial facilities to SC companies by managing the information asymmetry in cash flows.
B. SC Networks
SC networks represent the new integrative innovation in SC financing processes for SC partnerships working towards a beneficial pool of resources and improved products and services; SC companies are investing in innovative processes, as their operations are directly linked with financial services [30]. In the SC management context, the environment is fundamental for SC networks, especially when SC companies interlink both their associated suppliers and customers [31], [32]. Consequently, the relationships among partners (SC companies, suppliers, and customers) define the overall structure of a sustainable SC network, assuming significant integration of new structures with existing interconnections. Scholars have discussed how SC management studies continue to improve SC network theory and help to tackle SC challenges [33].
The need to establish sustainable SC networks for SC financing has led to the search for more knowledge through research on SC network-based theories and applications. Studies on SC networks explored SC procurement and sourcing networks and found that they can have a positive effect on the responsiveness of SC suppliers and customers [34], [35]. Building on the fundamental research output showing the history of the structure and development of SC networks, there are opportunities to construct SCF networks and future advancements in SC networks; these new opportunities, such as sustainable financing, depend on innovation. Hence, SC network-based theories proposed new network perspectives that revealed innovative network structures and compositions in global SC financing [36], [37]. Prior studies showed significant connections between SC network structures and the implementation of SC companies' operations, following a line of inquiry similar to that presented on the role of network brokerage [38]. Specifically, SC companies have the ability to expand SC network structures globally as network features grow with advancing technologies and information flows; the important role of SC companies' positions in the operation of networks is that they can increase a company's governing and negotiating power and facilitate financing through financial institutions and brokers [39].
SC companies that maintain a consistent, reliable, and operational set of activities within SC networks experience momentous advantages and benefits in obtaining resources, such as funding through crowdsourcing [40]. Predominantly, from the view of resource dependency theory, SC companies struggle to operate autonomously, as they require networks to accommodate the interdependencies in product and service flows, resource flows, and information flows [41]. These dependencies in SC markets create opportunities for SC companies to use these links to make considerable commitments to building sustainable, technology-driven SC networks. Some studies indicated that the interdependencies can either positively or negatively affect SC operations, and highlighted opportunities for further research [42]. According to Pfeffer and Salancik [43], interdependence is a continuous process in which SC companies can foster inter-firm cooperation based on resource and information sharing. However, further studies demonstrated that the degree of interdependence is also a risk in resource dependency theory, so putting mitigating parameters in place to address disconnections within the network is an important condition. Basole et al. [31] discussed that, as SC networks are a globally emerging field, risk management and business continuity packages are being rolled out simultaneously. Therefore, the initial concerns raised in [26] are considered in global SC networks. SC companies are taking advantage of the SC structure, practices, and resources in a single network, however, with multi-layered hosts in SC management databases, particularly SC financial institutions and brokers. The extensive research on SC management supports this concept, suggesting that SC companies are competent at managing high levels of operational and risk controls, including the ability to forecast SC econometric trends [44]. Furthermore, recent research showed that the direct financial outcomes associated with SC networks, such as cost savings, result from network sharing, which brings SC companies and customers together on a technologically driven platform [45]. Indeed, the study in [31] found increasing support for SC networks in implementing resource management and distribution in purchasing at an early stage. In addition, it is important to understand whether or not there are benefits for SC companies that operate in a shared global SC network. However, a few studies showed that there are strategic performance rewards, such as financial benefits, in a single multi-layered SC network that connects SC companies in a unified, technology-driven resource system [33], [36], [39].
C. Artificial Intelligence in SCs
SC management is encountering complex supply financing challenges, such as cash flow shortages and tougher access to financial credit. SC success is rooted in a company's ability to innovate, implement, and operate new ideas that benefit the entire SC network with end-to-end SC operations and information flows [46]. Thus, the introduction of AI to SCF and SC networks supports technological advancements in SC management, such as technology-driven materials acquisition, digitalized cash flow systems, and automated networks to meet customer demand [47], [48]. The significance of digitization in SC management is that it enhances end-to-end SC operations and processes. Cutting-edge SC innovations can create the foundation for implementing AI and gaining the benefits of enriched data analytics tools consisting of intelligent networks and systems [49], [50]. SC financing is becoming more data-driven and focuses on alternative asset evaluations in which inventory, equipment, and warehouses become real substitute data [51], [52]. In addition, given the increasing significance of information in SC management, SC researchers and experts must continue to explore the benefits and challenges of managing large amounts of information [53], [54]. According to Martínez-López and Casillas [51], AI has existed for decades, though it has not reached its full potential, especially in the SC management sector of the global economy. However, it is worth noting that cyber risks such as cyberattacks, malicious spying, and tampering are common to technological advancements such as AI, and most of these cyber risks are difficult to detect in SCs [55]. According to studies carried out by Radanliev et al. [56], cyber systems such as AI technologies are transactional environments for the exchange of valuable information on products and services, and the safeguarding of these interactions and information is, in essence, significant to SC companies. Furthermore, providers of technological advancements such as AI, big data, and the Internet of Things are continuously investing in data security and developing new methods of shielding companies' valuable information from cyber risks, increasing confidence in AI technologies.
1) Artificial Intelligence Networks: The theory of artificial neural networks (ANNs) was developed to reflect the human brain, using the analogy of brain cells (neurons) in its design [57], [58].
Building on this concept, AI networks are connected like human memories and have the ability to learn and improve over time, which characterizes their experience, distinct features, and complex analysis processes [59]. ANNs consist of several nodes that represent human neurons [60], with multiple links connecting these nodes, where each link has a set of algorithms programmed into it for efficiency and for processing complex commands. Furthermore, the links connecting the nodes carry weights that are the core of long-term memory storage, data processing, and data analytics. AI networks process data with systematic methods in which the output of one neuron is transformed into the input for another, making every single process a prerequisite for a new process [61]. According to Russell and Norvig [58], one function of the weights in AI networks is to determine the strength or weakness of data passing through the links. The links provide an environment that hosts the values of the combined weights to form an AI process for learning (a minimal numerical sketch is given at the end of this subsection). The learning capabilities of AI networks create an opportunity for deployment in the SC management sector, specifically by integrating SCF, SC companies, and suppliers' data, and creating patterns for interrelationships among the data [62]. From initialization onward, the AI network continues to improve its intelligence and performance through built-in learning algorithms, by understanding SC operations and analyzing the optimum efficiency and required resources.
2) Artificial Intelligence Systems: AI systems are technologically driven systems with the ability to simulate human cognitive skills, such as analyzing complex problems, visual analytics, optimizing performance, and providing solutions [63]. Cheung et al. [64] reported that AI systems have the capacity to perform analytic reasoning in complex problem-solving in contrast to human expert problem-solving abilities. There are three fundamentals in AI systems: knowledge networks; inference engines; and user interfaces. Knowledge networks are the depository for data, facts, and rules of engagement during human activities, and are the basis for the resources that build AI systems [34], [37]. The inference engine is a collection of algorithms for problem-solving reasoning, often referred to as the brain of AI systems; it is primarily responsible for conducting complex analyses, such as solution search and algorithmic reasoning, and for providing an interface for the knowledge networks to draw on in an AI environment [65], [66], while the user interface connects the users with the system and supports user queries for interaction and communication [67]. Overwhelmingly, AI systems are designed around the concepts and operations of the domain in which they will be implemented. Thus, experts and practitioners who are knowledgeable about the tasks and role of the AI systems and about human-system interaction will be effective in problem solving [53], [68]. In particular, AI systems have shown tremendous progress in terms of increasing performance in most sectors [69], such as manufacturing, specifically in the automobile industry. Tesla's car manufacturing reached 75% automation of the entire production process after AI systems were implemented, leading to higher performance and less waste. The application of AI technologies and systems in SC management, specifically the integration of SC operations and financing, is emerging, as evidenced by the successes of AI implementation in logistics and manufacturing.
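To make the node-and-weight account of ANNs above concrete, the following minimal sketch shows a two-layer forward pass in which the weighted output of one layer of neurons becomes the input of the next. It is an illustration only: the layer sizes, random weights, and sigmoid activation are assumptions, not a model taken from this article.

    % Minimal feedforward ANN sketch: each link carries a weight, and the
    % output of one layer of neurons becomes the input of the next.
    % All sizes and values are illustrative, not the article's model.
    x  = [0.5; 0.8; 0.1];              % input "neurons" (e.g., normalized SC indicators)
    W1 = rand(4, 3); b1 = rand(4, 1);  % weights/biases: input -> hidden layer
    W2 = rand(1, 4); b2 = rand(1, 1);  % weights/biases: hidden -> output layer

    sigmoid = @(z) 1 ./ (1 + exp(-z)); % activation function

    h = sigmoid(W1 * x + b1);          % hidden-layer outputs become...
    y = sigmoid(W2 * h + b2);          % ...the input of the output neuron
    disp(y)

In a trained network, the weights W1 and W2 would be adjusted by a learning algorithm, which is the mechanism the text describes as strengthening or weakening the data passing through the links.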
III. RESEARCH META-FRAMEWORK
This article develops a meta-framework based on the discussion of the theoretical background on three key perspectives: SCF; SC networks; and AI. These perspectives are later combined into associations to find possible relationships. Table I summarizes how previous studies contributed to this article. To answer the research questions, this article first conceptualizes the SCF [70], SC networks [31], and AI [71] perspectives.
A. SCF Perspective
While prior studies provided many different descriptions of SCF, they commonly state that its purpose is to provide cash flows for SC companies [12], [13]. This article therefore identifies three components in this perspective: financial orientation (FO); SC orientation (SCO); and cash flows (CF). The FO of the SCF perspective consists of a set of innovative solutions that financial institutions and brokers can rely on when assessing applications by SC companies and suppliers, as they are the controlling actor in the SCF decision-making process. FO focuses on financing solutions that are important for payables or receivables and that are viable for the benefit of both the financial provider and the SC companies and partners [74]. Thus, FO is a significant trigger in the SCF perspective, with the main objective of supporting sustainable SC operations.
The SCO component in the SCF perspective manages the records in the inventories, such as the optimization of customer and supplier inventories, thus ensuring sustainable working capital to support daily SC operations and ensuring that market demands are met [75]. In addition, SC companies and their partners prioritize effective control and monitoring of financing and working capital, as shown in Fig. 1. The SCO ensures the sustainable availability of working capital or financing at the lowest rate to maintain SC operations.
Cash flow (CF) is a vital resource for the daily operations that support the company's activities and keep the business afloat [36]. In addition, CF demonstrates SC operations performance and indicates the direction in which cash is applied, allowing decision-makers to implement sustainable CF for SC operations; this is an important factor when seeking financing from financial institutions or brokers.
B. SC Networks and AI Perspectives
As shown in Fig. 1, the SC networks and AI perspectives combine to design sustainable networks consisting of strategic entities that integrate the SC associations of the members to create SC networks built on AI. There are three components associated with the SC networks perspective and two components associated with the AI perspective. According to Martinez et al. [8], traditional SC networks are studied with a focus on understanding the existing connections to SC operations, leading to the strategic development of possible blockchain integration through existing channels. With this understanding, this article proposes an advanced SC networks implementation driven by AI technologies. It is already known that SC networks support innovative technology in SC management areas such as SC operations. However, there are emerging opportunities to develop sustainable SC networks for SC financing driven by AI technologies. Fig. 1 shows that the AI-related components are embedded in the existing SC networks, indicating that existing information flows in the network are seamlessly transferred to AI knowledge networks for intelligence analysis.
IV. RESEARCH METHOD
A. Research Design and Data Collection
Following the design method of [15], this article used an online survey to test the relationships and associations in the proposed meta-framework. The cross-sectional online survey was conducted in 2019; we selected active participants through research conferences, SC-specific events, and online platforms such as LinkedIn. The survey targeted members, employees, and managers of SC organizations across the globe. Participants were also drawn from SC-associated organizations, such as technology providers for operations management. The questionnaire was developed from the research gaps identified in the SCF, SC networks, and AI literature; the associations identified in Fig. 1 were transformed into sections of the survey.
We distributed the survey to 3185 active targeted participants and received 432 surveys, including both partial and completed responses. This corresponds to a response rate of 13%, which is consistent with extant research [76]. Since partially completed surveys could not be selected for analysis, the final sample consists of 205 completed surveys. The sample includes participants from across the globe, with North America accounting for 29% of participants, the largest share. Regarding experience with SCF platforms, 28% of the participants engage with SCF platforms more than five times daily, while 22.7% have five to six years of experience working with SCF platforms. The online survey used stratified sampling, and the participants were proficient professionals in SC operations consistent with SC financing who have experience working with AI technologies. The participants were divided into specific demographic groups. Table II gives the expert profiles, consisting of gender, age, work location, SCF/SC networks/AI usage, and SCF/SC networks/AI experience.
B. Data Variables
We obtained both dependent and independent variables using multiple items measured on five-point Likert-type scales, ranging from 1 ("strongly disagree") to 5 ("strongly agree"). The use of five-point Likert-type scales ensures that the survey responses conform to statistical variability, given the difficulties in proving objective data relationship outcomes shown in past studies [77], [78]. Therefore, as prior studies created composite scales to capture relational and scalable dimensions of supply relationships, this article follows a similar approach so that the scales represent what we intend to measure.
C. Non Response Bias
Nonresponse analysis is a frequently applied technique for assessing bias in a research method. In this article, 75% of the participants responded to the survey in the first month, while 25% of the responses were completed later in the study. A one-way nonresponse bias test, performed at the entry level, suggests that there are no significant differences between the data gathered at the earlier stage and the later responses, with only 1 in 26 (1.73% of the study variables) differing. We therefore conclude that any apparent nonresponse bias at the beginning of the participation period is due to chance.
D. Common Method Variance
To minimize the impact of common method bias associated with data reported from a single source, such as a survey, we took precautions in gathering the data and followed the procedures suggested by Kave [79]. The initial step was to ensure that most of the participants had experience working in the SC industries and were familiar with the technological platforms used in the sector. Most of the participants who responded to the survey have at least three years of work experience in the SC industries, with substantial managerial roles and knowledge about the increasing use of technology in the sector. Participants in the survey were reassured of the diligent ethical process for keeping their data anonymous. Because the inclusion of additional independent variables tends to reduce common method variance, the questions were organized strategically to intersperse entities.
E. Analytical Technique
According to Oyemomi et al. [80] and Chen et al. [81], a fuzzy set is a set-theoretic approach that evaluates theories, frameworks, and models with a deductive strategy driven by a positivist paradigm. Fuzzy sets are not a new technique in the pure sciences and engineering, but they are an emerging method in the management and social sciences, as researchers without a science and engineering background encounter problems, such as approximate reasoning. However, the introduction of hybrid analytic techniques with fuzzy set logic that support fuzzy analyses in management and social sciences addressed these initial problems [82]. This article adopted relationship and association testing, as suggested in earlier work, to test Boolean expressions in the fuzzy set-theoretic approach for the four intersections in Fig. 2. This article proposes an eight-step process flowchart (see Fig. 3). It consists of four loop relationships (represented by a double-line diamond) and three straw-in-the-wind relationships (represented by a single-line diamond) and shows the subsequent relationships used to discuss the outcomes of the analysis [76], [83], [84]. The flowchart is described as follows. 1) A loop relationship for an expression that a solution pathway is reliable checks whether the consistency of the sufficiency analysis is greater than 0.7 for the solution pathways, as defined in this article for the consistency threshold analysis. Any relationship that falls below the set threshold is eliminated from further analysis testing, as that relationship does not meet acceptable reliability. A loop relationship for an expression with an accepted solution pathway checks whether the consistency of Q1 is greater than 0.7, suggesting that any relationship that falls below the acceptance criteria in the solution pathway must be rejected with no further analysis. 2) A double-line diamond relationship for an expression that is strongly supported checks whether the consistency of Q2, Q3, and Q4 is less than or equal to 0.7, suggesting that any relationship that passes the acceptance criteria does not have significant contradictory proof. 3) A single-line diamond relationship for an expression that is not supported by itself, though it would benefit subsequent relationships, is described by a consistency of Q3 less than or equal to 0.7. Furthermore, Q3 represents the type I consistency error, which usually has a lower acceptance threshold.
4) A loop relationship for an expression for which a solution pathway is weakly supported checks whether the sufficiency analysis result for Q1 is greater than that for Q3 in the solution pathways, as defined for the consistency threshold analysis. Any relationship that falls below the set threshold is eliminated from further analysis testing, as the relationship does not meet acceptable reliability. 5) A double-line diamond relationship for a supported expression checks whether the consistency of Q4 is less than or equal to 0.7, suggesting that any relationship that passes the acceptance criteria does not have a significant error reported during the analysis and supports the classification. 6) A loop relationship for an expression for which a solution pathway is not weakly supported checks whether the consistency of Q2 is greater than 0.7, suggesting that any relationship that falls below the acceptance criteria in the solution pathway can be improved and that there is weak support for the classification. 7) A double-line diamond relationship for a supported expression checks whether the consistency of Q2 is greater than or equal to that of Q4, suggesting that any relationship that passes the acceptance criteria and partially supports the condition for Q2 and Q4 reflects the type II consistency error, which is usually equal to or higher than the acceptance threshold. (A sketch of the underlying consistency and coverage computation is given below.)
F. Data Analysis and Results
According to [83], complementarity and equifinality are two underlying features of the fuzzy set-theoretic approach. It displays patterns of attributes and different results depending on the structure of the perspectives. The attributes in the perspectives are concerned with present or absent conditions and the associations formed during conceptualization, rather than isolating the attributes from the perspectives. Furthermore, complementarity exists if there is proof that causal factors match in their attributes and the results indicate a higher level, while equifinality exists if at least two non-identical pathways, known as causal factors, show the same level of results [85].
The results in Table III for the different perspectives indicate which relationships show empirical evidence for rejection and which for support. The results demonstrate that the relationships are more likely to yield rejection than support in this analysis. The solution pathways shown in the results confirm the relationships. Consequently, supporting prior findings [86], [87], Fig. 3 illustrates that a higher consistency level directly results in higher reliability of the relationship. The three combinations of attributes in the sufficiency analysis show that the input efficiency either fails or passes the set consistency threshold requirement (consistency and coverage are 0.72 and 0.44, respectively).
In Table IV, the relationships indicate support, with the analysis generating attributes in the perspectives above the combined solution pathways reported in Table III. As shown, the type II error of a false negative is one form of contradiction between the relationships and the results and is ignored, as defined in Fig. 3.
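For reference, the consistency and coverage figures used in these tables are, in standard fuzzy-set (fsQCA) practice, computed from set memberships as sketched below. The formulas are the textbook fsQCA definitions, not reproduced from this article, and the membership values are illustrative:

    % Standard fsQCA sufficiency measures (textbook definitions, not the
    % article's own code): X = membership in the causal combination,
    % Y = membership in the outcome, both in [0, 1].
    X = [0.9; 0.7; 0.3; 0.8; 0.2];   % illustrative memberships
    Y = [0.8; 0.9; 0.4; 0.7; 0.3];

    consistency = sum(min(X, Y)) / sum(X);   % how reliably X implies Y
    coverage    = sum(min(X, Y)) / sum(Y);   % how much of Y is explained by X

    isReliable = consistency > 0.7;          % threshold used in the flowchart
    fprintf('consistency = %.2f, coverage = %.2f\n', consistency, coverage);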
The findings in Table IV indicate that even the least likely attributes in the perspectives show that the existing relationships hold, supporting the higher consistency level of the associations and stronger support for further relationships. Hence, this analysis can introduce additional causal conditions with similar attributes not yet shown in the current relationships by tracking back to the relationship mapping data, thus finding common attributes in the existing perspectives that may explain the undefined variance in the existing relationships.
The results in Table V for the combined solution pathway for consistency and coverage indicate support for most attributes in the perspectives, indicating a type I error (or a false positive) in the form of contradicting variances in the relationships. In addition, the higher consistency level of the associations supports higher values to delimit the relationships. Thus, some unconfirmed attributes indicate a restriction of the current relationships.
The analysis in Table VI of the combined solution pathway indicates that neither the predictions in the relationships nor the coverage by the attribute definitions for the perspectives is strongly supported in SCF for the role of AI technologies in SC networks. Therefore, alternative variances, as understood by experts and researchers, provide better supporting conditions for the definitions of the relationships in Q4. Five out of the six pathways are equal to or greater than the defined threshold, indicating that the relationships between the perspectives can benefit from tradeoffs. Furthermore, there are similarities in the results for the unique coverage, signaling significantly high input efficiency linked directly with the variance from the causal conditions. Considering the outcomes from Q1, Q2, and Q3 simultaneously, Q1 and Q2 alone are not adequate to support high input efficiency, indicating that AI will fade out without a correlation with SC networks. Therefore, the combination of the two perspectives is highly significant for the relationships to create high input efficiency. However, Q3, which considers all attributes in the AI perspective, rejects the associated attributes from Q1 but shows weak support for A2, indicating that these conditions are peripheral or have less supporting variance. This explains the weak support in the attributes of their relationships. The Q4 outcomes show that this article considers the relationships of the attributes of the relations between Q1 and Q2, as the role of Q3 has explanatory control over the outcomes by redefining the impact of both associations.
This article developed a meta-framework for the role of AI in building sustainable SC financing using SC networks that currently operate in SC activities, exploring novel findings that individually or in combination established links to build on for the three perspectives. An online survey was carried out with a stratified sample to test the meta-framework, and the data were used to further categorize the relationships among the perspectives. The empirical analysis shows important results that further the understanding of these associations.
The findings in Table III for Q1: FO_SCO_CF/AIN/AIS show that the relationships of both the AI and SCF constructs in the solution pathway are supported. Cheung et al.
[64] highlight the significant role of AI in aiding innovative organizational operability and providing sustainable competitive advantages. The findings in Table IV for Q2: SCS_SO_SCR/AIN/AIS demonstrate support for the construct associations. More specifically, a section of Table IV, Q2: SCS_SO_SCR/AIN (S2), indicates strong support for implementing AI networks within existing SC networks. Where AI technologies have been implemented, there have been significant improvements to operations and processes, and complex tasks are simplified using AI algorithms.
V. DISCUSSION
The findings in this article demonstrate the important role of AI, as shown in the associations of the construct with SCF and SC networks; in practice, AI acts as a tacit control of the SC networks, serving as a resource for secure access to financial resources and unavoidably drawing in other resources that benefit the SC. Consequently, AI brings together SC networks and the SCF criteria set by financial institutions and brokers, suggesting two themes. First, ensure that the dependence controls are balanced and that access to resources is mutually beneficial to all parties through consistent monitoring of performance. Second, network system homogeneities, structure, and operations become a unified network that identifies resource usage and efficiency. The purpose of this article was to determine the role of AI as a technology tool for stimulating SC financing through existing SC networks. Explicitly, we carried out a complementarity analysis to explore the role of an AI-enabled SCN in facilitating sustainable SC financing for SC companies. Three perspectives (SC finance, SC networks, and AI) were developed following the development of the SCF meta-framework, and four intersections (Q1 = association one; Q2 = association two; Q3 = association three; and Q4 = association four) between the three perspectives were further developed.
The outcomes gathered after analysis of the validated questionnaire data show that an AI-enabled SCN is important in minimizing the issue of financing, given the limited assets available in supply networks. Consequently, the complementarity of the three perspectives (SCF, SCN, and AI technology) was further developed into a four-intersection relationship mapping by constructing entity associations from each perspective, as shown in Fig. 2. The results in Table III suggest that the tested relationships between the AI and SCF perspectives and their entities (financial orientation, SC orientation, and cash flows for SCF; AI networks and AI systems for AI) are supported. Furthermore, the outcomes in Table III concur with the fuzzy set relationship mapping, which applied the consistency and coverage requirements in Fig. 3, suggesting that the implementation of AI technology in SC financing provides vital information on how the distribution of financial services can influence performance in SCs. This result is consistent with the findings of [15], which discussed the significant role of AI in financing the food and drink industry. Hence, the relationship between the AI and SCF perspectives illustrates support for the implementation of AI-enabled solutions in the financial services available in SCs.
In Table IV, the results for AI and SC networks and their entities (artificial intelligence networks and artificial intelligence systems; SC structure, SC resources, and supply operations) show strong support for the relationship, as the consistency and coverage suggest that the association significantly influences the financial services for SCs. The positive complementary association further supports the relationship mapping of the AI and SCN perspectives, with the constructs' consistency and coverage meeting the requirements set in Fig. 3. Furthermore, the results in Table V are supported. The findings strengthen the importance of applying an AI-enabled SCN to support SC financial services.
In practice, an AI-enabled SCN advances the understanding of the challenges in financial services for SCs, suggesting the exploration of the assets available through the SC [2], [15], [88], [89]. Other opportunities available with SC networks include partnerships in financial services; suppliers, financial service institutions, and the SC industry all benefit from AI-driven financial services integrated into the network systems in the SC.
A. Implications for Research
This article proposed the complementarity of SCF, SC networks, and AI technologies to understand their explanatory influence by linking theoretical views that did not previously consider these connections. This article used the perspective of complex causality to analyze the data and generate empirical findings. It provided a new understanding of the proposed complementarity by contributing a holistic evaluation of all attributes of the three perspectives, building relationships, and presenting findings that identify the significance of each association in an effort to build sustainable SC financing using AI-driven SC networks. Therefore, this research builds on existing studies [9], [90] that call for further work on SCF and SC networks, while contributing to the role of AI by exploring the conditions under different scenarios and complementarity values. The online survey data support the solution coverage across attribute dimensions by analyzing complementarity efficiency using defined threshold requirements. This article answers the call for enquiries into how SC networks (the environment) and SC companies can strategically allocate all resources for cascading SC financing. Most importantly, the fuzzy set theory technique accounts for complex causality to yield novel empirical findings.
This article contributes to the SCF, SC networks, and AI literature by developing a meta-framework that examines the integration of AI technology into existing SC networks, which can provide alternative SC financing by relying on the available resources and enabling financial institutions and brokers to partner with SC companies and suppliers through AI-enabled networks.
B. Implications for Practice
The comprehensive theoretical review and in-depth empirical analysis of complex causality in the role of AI in building sustainable SC networks for SC financing allow SC companies and suppliers to consider their organizational strategies in their effort to create cascading networks and implement compatible sustainable solutions. As proposed in the relationships, the attribute combinations from each perspective demonstrate support for solution pathways in the outcomes, with SC companies prioritizing innovative resources to ensure that AI-driven SC networks are sustainable assets for SC financing, as untapped potential resources are hidden within the layers of the networks in which SC operations are embedded. SC companies have long been searching for alternative sources of financing that consider current assets, such as operations and networks, in SCF. With an innovative deployment of AI, financial institutions and brokers can support SC operations through AI technology, providing financial services based on transactions through AI-enabled networks. Therefore, financial risks are reduced, and AI-enabled networks can filter through complex and risk-exposed operations within SCs. The results reported here are important for financial opportunities for both short- and long-term sustainability in SCs.
C. Limitations and Future Research Directions
Given the research aims and scope, this article has limitations that offer opportunities for future research. This article identified and analyzed SCF, SC networks, and AI technologies, focusing on sustainable SC financing through SC networks, though it does not address other perspectives, such as SC companies' policies, political strategies, and negotiation strategies. Similarly, the sample during the data collection process targeted SC management experts and researchers, specifically those focusing on SC networks and financing, who engage most frequently in SC innovations. However, financial analysts may be of relevance for future research. Given that previous research focuses on SCF risk management and financial challenges, examining the influence of AI as a possible sustainable solution to the risks around SC financing will permit future research to proceed with new datasets. Along the same lines, this article did not consider the financial impact of implementing AI technologies, which is another interesting area for future research.
This cross-sectional research aimed to provide an in-depth understanding of the relationships among the three perspectives, using a balanced sample to mitigate gaps in previous studies by analyzing the data in terms of the diverse significant roles of AI in SC financing. The relationships are supported, except for the relationship mapping in Table VI for the AI perspective entities (artificial intelligence networks and artificial intelligence systems), which shows a rejected relationship. The condition S1 is rejected, as it is the only association tested with only one condition.
9,597
2022-01-01T00:00:00.000
[ "Business", "Computer Science", "Economics" ]
Analytical solution for a hybrid Logistic-Monod cell growth model in batch and continuous stirred tank reactor culture
Abstract Monod and Logistic growth models have been widely used as basic equations to describe cell growth in bioprocess engineering. In the case of the Monod equation, the specific growth rate is governed by a limiting nutrient, with a mathematical form similar to the Michaelis-Menten equation. In the case of the Logistic equation, the specific growth rate is determined by the carrying capacity of the system, which could reflect growth-inhibiting factors (i.e., toxic chemical accumulation) other than the nutrient level. Both equations have been found valuable in guiding us to build unstructured kinetic models to analyze the fermentation process and understand cell physiology. In this work, we present a hybrid Logistic-Monod growth model, which accounts for multiple growth-dependent factors including both the limiting nutrient and the carrying capacity of the system. Coupled with substrate consumption and a yield coefficient, we present the analytical solutions for this hybrid Logistic-Monod model in both batch and continuous stirred tank reactor (CSTR) culture. Under high biomass yield (Yx/s) conditions, the analytical solution for this hybrid model approaches the Logistic equation; under low biomass yield conditions, the analytical solution for this hybrid model converges to the Monod equation. This hybrid Logistic-Monod equation represents the cell growth transition from substrate-limiting conditions to growth-inhibiting conditions, which could be adopted to accurately describe the multiple phases of cell growth and may facilitate kinetic model construction, bioprocess optimization, and scale-up in industrial biotechnology.
In the Monod model (Equation (1)), under nutrient-limited conditions the specific growth rate is proportional to the nutrient level. Under unlimited nutrient conditions (S → +∞), cells could reach their maximal growth potential (the cell is therefore saturated by the substrate) and follow zeroth-order kinetics. The specific growth rate follows a monotonically increasing pattern as we increase the concentration of the limiting nutrient (S).
The Logistic model (Equation (2)) was first introduced by the UK sociologist Thomas Malthus to describe the "law of population growth" at the end of the 18th century. This model was later formulated and derived by the Belgian mathematician Pierre François Verhulst to describe the self-limiting growth of a biological population in 1838. With little self-limiting factor (X → 0), the population attains the maximal growth rate (μmax). As the cells grow, the population starts inhibiting itself (which could be considered a negative auto-regulation loop). With sufficient self-limiting factors (X → Xm), the population reaches the carrying capacity of the system and the growth rate approaches zero. In the Logistic equation (Equation (2)), the specific growth rate follows a linearly decreasing pattern as the cell population (X) increases.
Both the Monod and Logistic models have been used extensively to analyze the fermentation process and study microbial consortia interactions. For example, an expanded form of the Monod equation was proposed to account for product, cell, and substrate inhibitions (Han & Levenspiel, 1988; Levenspiel, 1980; Luong, 1987). When the Monod equation was coupled with the Luedeking-Piret equation (Luedeking, 1959), analytical solutions for cell growth, substrate consumption, and product formation could be derived (Garnier & Gaillet, 2015).
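For reference, the model equations referenced above are not reproduced in this excerpt; the standard forms consistent with the descriptions are:

    \mu = \mu_{\max}\,\frac{S}{K_s + S} \quad \text{(Monod, Equation (1))}

    \mu = \mu_{\max}\left(1 - \frac{X}{X_m}\right) \quad \text{(Logistic, Equation (2))}

    \mu = \mu_{\max}\,\frac{S}{K_s + S}\left(1 - \frac{X}{X_m}\right) \quad \text{(hybrid, Equation (3))}

    \frac{dX}{dt} = \mu X, \qquad \frac{dS}{dt} = -\frac{1}{Y_{x/s}}\,\frac{dX}{dt} \quad \text{(Equation (4))}

Here μmax is the maximal specific growth rate, Ks the half-saturation constant, Xm the carrying capacity, and Yx/s the biomass yield on substrate.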
A square-root boundary between cell growth rate and biomass yield has been proposed (Wong, Tran, & Liao, 2009). Coupled Monod equations were applied to describe the complicated predator-prey (oscillatory) relationship between Dictyostelium discoideum and Escherichia coli in a chemostat (Tsuchiya, Drake, Jost, & Fredrickson, 1972). Much earlier than the Monod equation, Logistic growth was used by the American biophysicist Alfred J. Lotka and the Italian mathematician Vito Volterra to describe the famous Lotka-Volterra predator-prey ecological model (Lotka, 1926; Volterra, 1926). More interestingly, the solutions of the discrete Logistic growth model were elegantly analyzed by the Australian ecologist Robert May (Baron May of Oxford) in the early 1970s. It was discovered that complex dynamic behaviors could arise from this simple Logistic equation, ranging from stable points to bifurcating stable cycles to chaotic fluctuations, all depending on the initial parameter conditions (May, 1976). Both models help us analyze microbial processes and explore unknown biological phenomena.
To account for both substrate-limiting and self-inhibiting factors, herein we propose a hybrid Logistic-Monod model (Equation (3)). Coupled with substrate consumption kinetics (Equation (4)) and the yield coefficient (Yx/s), the implicit form of the analytical solution for cell growth (X, Equation (5)) and substrate (S, Equation (6)) can be obtained by separation of variables or Laplace transformation. A typical Monod-type kinetics was plotted for batch culture (Figure 1a). It should be noted that the initial conditions are prescribed as S = S0 and X = X0 at the beginning of cultivation (t = 0). In the case of the Logistic model, we can also arrive at the analytical solutions for cell growth (X, Equation (7)) and substrate (S, Equation (8)) when coupled with the substrate consumption kinetics (Equation (4)). It should be noted that cell growth is independent of substrate consumption in the Logistic model, but the substrate will deplete proportionally with cell growth (Figure 1b). Due to the simplicity of the Logistic equation, we could arrive at the explicit solution for cell growth (X) and substrate (S). Similarly, by coupling Equation (3) with Equation (4), the implicit solutions for the hybrid Logistic-Monod model (Equation (3)) could be derived analytically with the aid of the symbolic computation package of MATLAB. This hybrid Logistic-Monod model (Equation (3)) retains the form of an elementary differential equation and can be solved analytically by either separation of variables or Laplace transformation, although the derivation process is nontrivial. The exact solutions for cell growth (X, Equation (9)) and substrate (S, Equation (10)) follow accordingly.
We next explore the steady-state solutions of the three growth models in CSTR culture. Based on mass balance and the substrate concentration in the feeding stream (SF), we can write the mass balances for cell growth (Equation (11)) and substrate consumption (Equation (12)). When the CSTR mass balance equations (Equations (11) and (12)) are coupled with the Monod growth kinetics (Equation (1)), it is easy to arrive at the steady-state substrate and cell concentrations in the CSTR (Equations (13) and (14)), which have been widely taught in biochemical engineering and bioprocess engineering textbooks. As the dilution rate increases, the substrate concentration increases with decreasing cell concentration at the outlet flow of the CSTR (Figure 2a).
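Since Equations (13) and (14) are not reproduced in this excerpt, the widely taught textbook steady-state result for the Monod case is quoted here for reference, where D is the dilution rate:

    \bar{S} = \frac{K_s D}{\mu_{\max} - D}, \qquad \bar{X} = Y_{x/s}\left(S_F - \bar{S}\right)

This holds for dilution rates below the washout point, D < D_w = \mu_{\max} S_F / (K_s + S_F); beyond D_w the biomass is washed out of the reactor.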
Similarly, when the mass balance equations (Equations (11) and (12)) are coupled with the Logistic growth kinetics (Equation (2)), the steady-state solutions for substrate and biomass can be derived analytically (Equations (15) and (16)). As the dilution rate increases, the substrate concentration increases linearly, accompanied by a proportionally decreased cell concentration at the outlet flow of the CSTR (Figure 2b). Finally, for the hybrid Logistic-Monod model, we can also derive the steady-state solutions for the substrate and biomass concentrations (Equations (17) and (18), Figure 2c) when the CSTR mass balance equations (Equations (11) and (12)) are coupled with the hybrid Logistic-Monod model (Equation (3)). All three models are plotted together (Figure 2) for both the batch and CSTR cultures, assuming that all the substrate can be converted to biomass. When biomass is the only product, the optimal dilution rate (Dopt) and the washout dilution rate (Dw) can also be analytically derived. Operation at Dopt will maximize biomass productivity (P = DX), and Dw is the maximal dilution rate at which engineers could possibly run the CSTR system (biomass will be washed out at Dw). The MATLAB implicit function fplot was used to draw most of the solutions for Figures 1 and 2. The MATLAB code has been compiled into a supplementary file and uploaded to the journal website (Tables 1 and 2).
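As a complement to the implicit fplot solutions, the batch kinetics of the hybrid model can also be checked by direct numerical integration. In this sketch the parameter values are illustrative assumptions, not those used in the article:

    % Numerical check of the hybrid Logistic-Monod batch model (Eqs. (3)-(4)).
    % Parameters are illustrative, not taken from the article.
    mumax = 0.6;  Ks = 0.5;  Xm = 5.0;  Yxs = 0.4;   % h^-1, g/L, g/L, g/g
    X0 = 0.05;  S0 = 10.0;                           % initial biomass, substrate

    % y(1) = X (biomass), y(2) = S (substrate)
    mu  = @(X, S) mumax * (S ./ (Ks + S)) .* (1 - X ./ Xm);
    ode = @(t, y) [ mu(y(1), y(2)) * y(1);           % dX/dt = mu*X
                   -mu(y(1), y(2)) * y(1) / Yxs ];   % dS/dt = -(1/Yxs)*dX/dt

    [t, y] = ode45(ode, [0 40], [X0; S0]);
    plot(t, y(:,1), t, y(:,2));
    legend('X (biomass)', 'S (substrate)'); xlabel('time (h)');

Because the growth rate vanishes both when S is depleted and when X approaches Xm, the simulated trajectory reproduces the transition from substrate-limited to growth-inhibited behavior described in the text.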
1,748.6
2019-11-23T00:00:00.000
[ "Biology" ]
Implementation of the new easy approach to fuzzy multi-criteria decision aid in the field of management
Decision-making is one of the most important management functions and a critical task for managers. The tools that support decision makers in making decisions are Multi-criteria Decision Making/Aid/Analysis (MCDM/MCDA) methods. Since most decisions are made under conditions of uncertainty, fuzzy MCDM/MCDA methods are particularly important, as they allow capturing the uncertainty and imprecision of the information used in making decisions. One such method is the Fuzzy Preference Ranking Organization Method for Enrichment Evaluation (Fuzzy PROMETHEE) and its extension, the New Easy Approach to Fuzzy PROMETHEE (NEAT F-PROMETHEE). However, the unavailability of software using the NEAT F-PROMETHEE method significantly reduces its ease of use and may discourage potential users and researchers considering using it in their studies. Therefore, to facilitate the use of this MCDA method, the article presents the implementation of NEAT F-PROMETHEE in the MATLAB environment. Moreover, the verification of the developed implementation and its application to a management decision-making problem is presented, together with an analysis of the operation of the mapping correction function used in NEAT F-PROMETHEE. The results obtained with NEAT F-PROMETHEE were compared with the results of the Fuzzy PROMETHEE method, which does not apply correction. The analysis shows that the correction applied in NEAT F-PROMETHEE allows obtaining results with a smaller error than non-corrected implementations of Fuzzy PROMETHEE; therefore, a more accurate solution of the decision problem is obtained.
• improving the process of mapping fuzzy numbers in the Fuzzy PROMETHEE method
• implementing a correction mechanism while mapping trapezoidal fuzzy numbers
Method details
Decision-making is inseparable from management [4], and some researchers even claim that it is the most important function of management [5] and the most important task of managers [6]. In turn, most nontrivial management decision-making problems are of a multi-criteria nature, and Multi-criteria Decision Making/Aid/Analysis (MCDM/MCDA) methods in fuzzy or crisp forms are used to solve them. Fuzzy methods, unlike crisp methods, allow capturing uncertainty and imprecision [7], which usually occur in management decisions [6]. Therefore, fuzzy methods are widely used in management [8]. One of the MCDA methods often used in management problems [9] is the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) [2], and its fuzzy or stochastic developments [10, 11]. This is due to the ease of use and universality of the PROMETHEE method, the transparency of its calculation procedure, and the high usefulness of the fuzzy versions of PROMETHEE in decision-making problems characterized by uncertainty [10, 12, 13]. One fuzzy development of PROMETHEE is the New Easy Approach to Fuzzy PROMETHEE (NEAT F-PROMETHEE). This method, based on trapezoidal fuzzy numbers, has similar transparency and ease of use to the crisp PROMETHEE method. Moreover, it meets the methodological assumptions of the original PROMETHEE method, and thus gives the possibility to apply preference functions according to crisp PROMETHEE, as well as retaining appropriate scales for preference degrees and outranking flows.
NEAT F-PROMETHEE uses six different preference functions, allows the use of linguistic, crisp, and fuzzy values in their natural scales, and gives the possibility of obtaining partial and total orders of alternatives, thus offering great versatility. Finally, by applying a correction in the preference functions, it reduces the approximation errors that arise in other fuzzy PROMETHEE implementations when mapping a fuzzy deviation to the form of a unicriterion preference degree. As a result, the NEAT F-PROMETHEE method is widely used in solving management decision problems [1, 10, 14, 15]. On the other hand, the unavailability of software using the NEAT F-PROMETHEE method significantly reduces its ease of use and discourages potential users and researchers considering using NEAT F-PROMETHEE in their studies. Therefore, an important practical issue is the implementation of the NEAT F-PROMETHEE method in a programming language commonly used by researchers. This article presents the theoretical basis, technical details, and MATLAB implementation of the NEAT F-PROMETHEE method. The following part of the article describes the calculation procedures used in the method, together with the relevant mathematical equations and the codes implementing these procedures in the MATLAB environment. The article ends with both the validation of the method, based on the application of the developed MATLAB implementation to solve a decision problem, and the analysis of the obtained results.
Input data
The NEAT F-PROMETHEE method is a discrete MCDA method that addresses the problem of ranking m fuzzy decision alternatives belonging to the set Ã = {a1, a2, ..., am} using n criteria belonging to the set C = {c1, c2, ..., cn}. It is based on trapezoidal fuzzy numbers (TFNs) of the form FN = (n1, n2, n3, n4), with n1 ≤ n2 ≤ n3 ≤ n4, for which the membership function is described by formula (1). In turn, Fig. 1 shows the structure of the file alternatives.xlsx, from which the values of the alternatives are loaded into the performance matrix. The structure of the performance matrix E is described by formula (2), where e(i,j) = cj(ai) and e(i,j) = (e(i,j)1, e(i,j)2, e(i,j)3, e(i,j)4) = (cj(ai1), cj(ai2), cj(ai3), cj(ai4)); therefore e(i,j) represents the performance level of an alternative ai according to a criterion cj. In addition to the values of the alternatives and the weights of the criteria, preference directions are defined at the beginning (for the 'profit' criteria the maximum is preferred, and for the 'cost' criteria the minimum is preferred), and the preference functions and the thresholds related to the preference functions (indifference (q), preference (p), Gaussian (s)) are also defined for the individual criteria. As a result, a complete model of the decision maker's preferences and, more broadly, a model of the decision-making problem is constructed. For the model developed in this way, in subsequent stages, the calculations of the NEAT F-PROMETHEE method are performed, the alternatives are ranked, and the results are displayed. The script code, including the indicated actions, is presented below.
Calculations of NEAT F-PROMETHEE
Calculations for the NEAT F-PROMETHEE method are performed in several steps. First, the performance matrix E is transformed in such a way that the direction of preferences for each criterion is maximum.
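As a hedged illustration of this first step (the article's own listing is not reproduced in this excerpt), one common way to unify the preference directions is to reflect the TFNs of 'cost' criteria so that larger values are always preferred; the function name and data layout below are assumptions:

    % Transform the performance matrix so every criterion is of 'max' type.
    % E is m x n x 4 (TFN components); dirs is a 1 x n cell array with
    % 'profit'/'cost'. A sketch only -- not the article's original function.
    function E = toMaxDirection(E, dirs)
        for j = 1:numel(dirs)
            if strcmp(dirs{j}, 'cost')
                % reflect the TFN: (e1,e2,e3,e4) -> (-e4,-e3,-e2,-e1)
                E(:, j, :) = -E(:, j, end:-1:1);
            end
        end
    end

Reflecting rather than, say, inverting preserves the ordering of the TFN components, so the transformed numbers remain valid trapezoidal fuzzy numbers.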
In the subsequent steps, fuzzy deviations are calculated and mapped to the form of unicriterion preference degrees, the weights are defuzzified and normalised, preferences are aggregated, fuzzy outranking flows are calculated, and the calculated outranking flows are defuzzified. The code of the main NEAT F-PROMETHEE function is presented below. The calculation of fuzzy deviations is carried out for each pair of alternatives and each criterion, according to formula (5). Then, the obtained fuzzy deviations are mapped using the appropriate preference function. In the classic crisp PROMETHEE method, six preference functions are used, shown in Fig. 2. The mapping of crisp numbers consists in calculating the value of the preference function Pk(dk). The comparison of the mapping of crisp numbers and TFNs is shown in Fig. 3. In the case of crisp numbers, used in the classic PROMETHEE method, the preference functions allow precise mapping of the deviation value dk to the form of the unicriterion preference degree Pk(dk). But in the case of TFNs, used in many fuzzy versions of PROMETHEE, approximation errors can occur during mapping. Therefore, the NEAT F-PROMETHEE method extends the mapping process with a function to correct mapping errors. The preference functions together with the correction functions used in NEAT F-PROMETHEE are given in formulae (6)-(17): the usual criterion (6) and its correction (7); the U-shaped criterion (8) and its correction (9); the V-shaped criterion (10) and its correction (11); the level criterion (12) and its correction (13); the V-shaped criterion with indifference area (14) and its correction (15); and the Gaussian criterion (16) and its correction (17).
Fig. 4 shows an example of an approximation error that occurs during TFN mapping, the correct mapping result, and the operation of the correction mechanism used in the NEAT F-PROMETHEE method. In the case of precise mapping, since none of the preference functions used is an injection (a preference function can take the same values for two different values on the x-axis), the mapping function described by formula (18) [18, 19] should be used to determine the values of the fuzzy number at the indicated points. Based on formula (18), the maximum values of dk at the points qk and pk, and the membership function defined for a TFN (formula (1)) [16, 17], plain TFN mapping generates a relatively large approximation error. That is why, in the NEAT F-PROMETHEE method, a shape correction of the obtained TFN was introduced to make it as close as possible to the result of precise mapping. The code of the MATLAB function, which calculates the fuzzy deviation and the values of the corrected preference function, is shown below. A separate MATLAB function contains the procedure to check whether a correction is required. From formulae (6)-(17), it can be seen that the conditions of correction for all preference functions are similar and can be recorded as formula (19). The t and u variables allow distinguishing indifference, weak preference, and strict preference relationships.
Depending on the preference function used, t and u take different values:
• for the usual criterion: t = 0, u = 0,
• for the U-shaped criterion: t = qk, u = qk,
• for the V-shaped criterion: t = 0, u = pk,
• for the level criterion and the V-shaped criterion with indifference area: t = qk, u = pk,
• for the Gaussian criterion: t = 0, u = ∞.
In the case of u = ∞ for the Gaussian criterion, it should be clarified that this value is due to the property of the Gaussian preference function, which asymptotically tends to 1 and therefore does not allow obtaining strict preference [20]. In the case of u = ∞ there is an obvious contradiction, because for a correction to occur, the value dk4 would have to be greater than infinity. Therefore, this correction function is not used for the Gaussian criterion. The code of the MATLAB function for checking the conditions of correction is as follows.
After the mapping and correction process, the weights of the criteria wfj = (wfj1, wfj2, wfj3, wfj4) are defuzzified and normalised. As a result, a new vector of the weights of the criteria W = {w1, w2, ..., wn} is obtained. These actions are necessary to keep the scale [−1, 1] for the obtained solution, as in the classic crisp PROMETHEE method. The defuzzification is performed using the Centroid method, described by formula (20). The Centroid method, unlike the Bisector method, does not allow defuzzifying crisp numbers (where wj1 = wj2 = wj3 = wj4), so for such numbers a simple assignment wj = wj1 should be used instead of formula (20). The purpose of the normalisation is to bring the sum of all weights to 1 (Σ wj = 1 over j = 1, ..., n) and to define the weights of the criteria proportionally. It is performed according to formula (21). The defuzzification and normalisation have been implemented in an appropriate MATLAB function.
In the next step, preferences are aggregated between the different pairs of decision alternatives and fuzzy outranking flows are calculated for each alternative. The aggregation of preferences is done according to formula (22). After the aggregation of preferences, fuzzy outranking flows are calculated according to formulae (23)-(25). The given operations have been implemented as a function in the MATLAB environment. The obtained values of fuzzy outranking flows are then defuzzified using the Centroid method (formulae (26)-(28)), similarly to the fuzzy weights. As with the defuzzification of the weights of the criteria, if the outranking flows are crisp numbers (e.g., φ+(ai)1 = φ+(ai)4), one should use a simple assignment (e.g., φ+(ai) = φ+(ai)1). The MATLAB code responsible for the defuzzification of outranking flows is shown below.
Generating rankings and displaying the results of the method
On the basis of the defuzzified values φnet, a full NEAT F-PROMETHEE II (total order) ranking is generated, while the values φ+ and φ− are the basis for constructing the rankings that are later used in the NEAT F-PROMETHEE I (partial order) ranking. The MATLAB function responsible for this assigns each alternative an appropriate rank in the full ranking and in the φ+ and φ− rankings. After the three rankings have been constructed, they are presented to the decision maker using the showResults function, together with the defuzzified values of the outranking flows (φnet, φ+, φ−), which are the basis for building these rankings.
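Formula (20) itself is not reproduced in this excerpt; the sketch below uses the standard centroid of a trapezoid together with the crisp-number special case noted above and the normalisation of formula (21). The function and variable names are illustrative, not the article's originals:

    % Centroid defuzzification of TFN weights (w1,w2,w3,w4), with the special
    % case for crisp numbers (w1 = w2 = w3 = w4), followed by normalisation.
    % Uses the standard closed-form centroid of a trapezoidal membership function.
    function W = defuzzifyAndNormalise(Wf)         % Wf is n x 4
        n = size(Wf, 1);
        W = zeros(n, 1);
        for j = 1:n
            a = Wf(j,1); b = Wf(j,2); c = Wf(j,3); d = Wf(j,4);
            if a == d                              % crisp number: simple assignment
                W(j) = a;
            else                                   % centroid of the trapezoid
                W(j) = (d^2 + c^2 + c*d - a^2 - b^2 - a*b) / (3*(d + c - a - b));
            end
        end
        W = W / sum(W);                            % bring the weights to sum to 1
    end

For a symmetric triangular number such as (0, 1, 1, 2), the closed form returns 1, matching the intuitive center of mass.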
In addition to presenting the rankings in the form of numerical values, the NEAT F-PROMETHEE implementation also presents the results in graphic form. This is performed by the plotResults function, which shows the positions of the alternatives in the rankings.
Table 3. The values of outranking flows and rankings of alternatives (NEAT F-PROMETHEE).
The chart of φ+ shows how much a given alternative outranks the others, while the chart of φ− depicts how much a given alternative is outranked by the others. In turn, the graph of φnet illustrates the total order of alternatives; in other words, it presents a solution to a decision-making problem using the NEAT F-PROMETHEE II method. It should be added that there may be two preference relationships in the total order of alternatives: (1) indifference between ai and aj (ai I aj) when φnet(ai) = φnet(aj), and (2) preference of ai over aj (ai P aj) when φnet(ai) > φnet(aj). The plotPartialOrder function presents in graphic form a partial order of alternatives, constructed on the basis of the φ+ and φ− rankings. There may be three preference relationships in the partial order of alternatives: (1) indifference between ai and aj (ai I aj) when φ+(ai) = φ+(aj) and φ−(ai) = φ−(aj), (2) preference of ai over aj (ai P aj) when φ+(ai) ≥ φ+(aj) and φ−(ai) ≤ φ−(aj), where one of the inequalities is strict, and (3) incomparability between ai and aj (ai R aj) otherwise. The partial order presents an order of alternatives using the indicated preference relationships. It should be noted that the graphic presentation of the partial order shows indifference and preference relations in the form of edges connecting the alternatives directly or indirectly, while incomparability is represented by the lack of a direct or indirect connection.
Table 4. The values of outranking flows and the rankings of alternatives obtained without mapping correction (Fuzzy PROMETHEE).
In the developed implementation of the NEAT F-PROMETHEE method, apart from the proprietary functions, the distinguishable_colors [21] and line2arrow [22] functions were also used.
Method validation
The correctness of the implementation of the NEAT F-PROMETHEE method has been verified by solving a decision problem based on selecting a "green" supplier of electronic items for a manufacturing company in order to reduce costs at the manufacturing stage of finished products. In the decision-making process, 4 suppliers, Ã = {a1, a2, a3, a4}, have been considered. Table 1 shows the parameters of the different decision alternatives and Table 2 includes the preference model used, i.e., the weights of the criteria, preference directions, preference functions, and thresholds. The application of the NEAT F-PROMETHEE method presented in the article has made it possible to obtain a solution to the decision problem, presented in Table 3 and Figs. 5 and 6. Figs. 5 and 6, generated using the plotResults.m and plotPartialOrder.m functions, enable an analysis of the obtained solution. Fig. 5 shows the fuzzy and defuzzified values of the alternatives, as well as the order of the alternatives, separately for the φ+, φ− and φnet rankings. The φ+ ranking allows us to conclude that the alternative that outranks all others the most is a3. It should be noted that the supports of the fuzzy numbers indicate that a3 could be overtaken in the φ+ ranking by alternative a2, or even a4. In turn, according to the φ− ranking, the alternative most outranked by the others is a4, although the analysis of the fuzzy numbers indicates the possibility that the other alternatives could be outranked by a4. Finally, when analysing the φnet ranking, the solution to the decision problem is the following total order of alternatives: a3 ≻ a2 ≻ a1 ≻ a4.
However, the total order of alternatives obtained is characterised by a relatively high degree of uncertainty, as evidenced by the wide kernels and supports of the fuzzy numbers obtained. As regards Fig. 6, which shows the partial order of the alternatives, it should be concluded that the dominant alternatives are a3 and a2, which are preferred over a1 and a4. This calculation example shows the usefulness of the fuzzy approach for interpreting the degree of uncertainty of the obtained solution.
Apart from the verification of the correctness of the implementation of the NEAT F-PROMETHEE method in the MATLAB environment, the operation of the correction of mapping errors (see formulae (6)-(19)) and the impact of the correction on the obtained solution were also verified. For this purpose, the presented decision problem was solved using fuzzy PROMETHEE without correction. The solution obtained in this way is shown in Table 4 and Figs. 7 and 8.
[Figs. 9-12 (captions): error and correction during the mapping of deviations d2(a1, a3) and d2(a3, a1); d5(..., a1) and d5(a4, a2); d4(a2, a1); d3(a2, a4) and d3(a4, a2).]
A comparison of Table 3 and Figs. 4 and 5 (the solution with correction) with Table 4 and Figs. 6 and 7 (the solution without correction) shows that, in the absence of correction, there was a change in the φnet ranking at positions 1-2 and 3-4. In addition, in the partial order of alternatives, the preference relationships a3 ≻ a1 and a2 ≻ a4 were converted into the incomparability relationships a1 R a3 and a2 R a4. In order to clearly determine which solution is correct, the approximation errors resulting from the use of trapezoidal fuzzy numbers instead of accurate fuzzy mapping were examined. As a result of the study, it was found that in the decision problem under consideration the approximation error occurs relatively often: in 32 cases out of the 72 mappings performed, i.e., in 44% of cases. The correction, in turn, is made in 9 mappings, i.e., 12.5% of all cases and 28% of the mappings affected by an error. The mappings for which the correction is made are shown in Figs. 9-12. In addition to the analysis of the mappings during which correction is applied, the cumulative mapping errors were also examined, with and without correction. Mapping errors were calculated for each of the preference functions, by defuzzifying the fuzzy numbers obtained using precise mapping (formula (29)) and the trapezoidal fuzzy numbers obtained using non-corrected and corrected mapping (formula (30)). Then the errors for each criterion were summed up, separately for each preference function P, where P ∈ {usual criterion, V-shaped criterion, level criterion, V-shaped criterion with indifference area, Gaussian criterion} (formula (31)); the U-shaped criterion function was not applied in the decision-making model under consideration. The error values obtained during mapping with and without correction are shown in Table 5. The results presented in Table 5 clearly show that the solution obtained by applying the correction is less error prone. This is confirmed by the diagram of errors during mapping, shown in Fig. 13. The analysis of Fig. 13 indicates that, in the decision problem under consideration, whenever a correction is made, it reduces the mapping error. The presented analyses allow us to conclude that the solution of the decision problem obtained using the NEAT F-PROMETHEE method (with correction) has a smaller error than the solution obtained using Fuzzy PROMETHEE (without correction).
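One plausible reading of the error computation in formulas (29)-(31), which are not reproduced here, is sketched below: the precisely mapped result and the TFN-mapped result are both defuzzified, and the absolute differences are accumulated per preference function. The exact form and the names are assumptions, not the article's original code:

    % Sketch of a cumulative mapping-error computation (one interpretation
    % of formulas (29)-(31)).
    % dPrecise, dTFN: vectors of defuzzified unicriterion preference degrees
    % obtained from precise mapping and from (corrected or non-corrected)
    % TFN mapping, for all mappings of a given preference function.
    function err = cumulativeMappingError(dPrecise, dTFN)
        err = sum(abs(dPrecise - dTFN));   % total absolute defuzzified error
    end

Computed once with the non-corrected TFN degrees and once with the corrected ones, such totals would support the comparison reported in Table 5.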
Therefore, it can be concluded that the NEAT F-PROMETHEE II ranking (a3 ≻ a2 ≻ a1 ≻ a4) and the partial order shown in Fig. 6 are correct. Additionally, the presented calculation example shows that a correction made even for a small number of mapped deviations can significantly change the ranking of the considered alternatives.

Conclusion
The article presents the methodological basis of the NEAT F-PROMETHEE method and the details of its implementation in MATLAB. Moreover, the calculation results of the NEAT F-PROMETHEE method were compared with those of the standard Fuzzy PROMETHEE method based on TFNs. This comparison was made using a management decision-making problem, which was solved using both multi-criteria decision support methods. The results of the conducted research indicate that the NEAT F-PROMETHEE method makes it possible to obtain more precise results, with a lower error resulting from the use of TFNs. The NEAT F-PROMETHEE implementation developed in the MATLAB environment will make the method easier to use. This will allow users to focus on better modelling of the decision problems under consideration, instead of worrying about the details related to the correct implementation of the method. As for further directions of research on the NEAT F-PROMETHEE method, in the context of sustainable management it seems interesting to combine this method with the PROSA method [11, 12]. This would allow uncertainty and imprecision to be taken into account in the decision-making problems of sustainable development, where the balance between economic, social and environmental factors is important. Yet another interesting research challenge is the development of GAIA (Geometrical Analysis for Interactive Assistance) [23] for NEAT F-PROMETHEE using TFNs. This would allow the fuzzy decision problem to be analysed from a descriptive perspective.

Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
5,464.2
2021-04-13T00:00:00.000
[ "Computer Science" ]
Neural regions discriminating contextual information as conveyed through the learned preferences of others

The human brain consists of a network of regions that are engaged when one observes the movements of others. Observing unexpected movements, as defined by the context, often elicits greater activity, particularly in the right posterior superior temporal sulcus (pSTS). This implies that observers use contextual information to form expectations about an agent's goal and subsequent movements. The current study sought to identify regions that support the formation of these context-dependent expectations, with the pSTS being one candidate, given the consistent contextual modulation of its activity. We presented participants with fictitious individuals who had emotion-dependent food preferences, and instructed participants to indicate which food they expected each individual to choose based on the individual's current emotional state. Each individual's preference and emotional state therefore created a context that informed the observer's expectation of the individual's choice. Multi-voxel pattern analysis (MVPA) was used to assess if these different contexts could be discriminated in the pSTS and elsewhere in the brain. No evidence for context discrimination was found in the pSTS. Context discrimination was found instead in a network of other brain regions, including the anterior medial prefrontal cortex (amPFC), bilateral parietal cortex, left middle temporal gyrus (L MTG) and left anterior temporal lobe (L ATL), which have been previously associated with context processing, and with semantic and memory retrieval. Altogether, these regions possibly support the formation of context-dependent expectations of an agent's goal.

Introduction
The human brain consists of a network of regions that are engaged when one observes the movements and actions of other living things. These regions are involved in processing the form and kinematics of motion, and in identifying the actions performed (Thompson and Parasuraman, 2012). The brain, however, does not merely react to observed movements, but also seems to predict the movements of an agent, based on inferred goals and intentions. Evidence for this idea comes from studies showing that the same observed movements elicit greater activity, particularly in the right posterior superior temporal sulcus (pSTS), when the context renders the movement unexpected than when it renders the movement expected. For example, Pelphrey et al. (2003) found that pSTS activity to shifts in an avatar's eye gaze was greater when the gaze shift did not occur in the direction of a preceding flashing checkerboard than when it did. In another study, Brass et al. (2007) found greater pSTS activity to the same action when the action seemed implausible than when it seemed plausible, for example, an actress flipping a light switch with her knees when her hands were free compared to when her hands were occupied. Vander Wyk et al. (2009, 2012) found greater pSTS activity when an actress' action was incongruent with her expressed emotion (i.e., reaching toward a cup that she had previously expressed dislike for) than when it was congruent (i.e., reaching away from the cup that she had previously expressed dislike for). Increased pSTS activity to actions that are unexpected has been found in other studies as well (Pelphrey et al., 2004; Saxe et al., 2004; Shultz et al., 2011).
The differences in neural response to observing identical actions embedded within different contexts suggest several stages of processing. That the pSTS shows different responses to expected and unexpected actions necessitates that the observer must have first formed an expectation about the agent's goal. Forming an accurate expectation, in turn, depends on the observer having assessed the context preceding the action. Indeed, according to the predictive coding framework of action observation, context provides priors from which predictions about an agent's intentions are formed, which in turn inform predictions about the immediate goal of an agent's subsequent movements, and the kinematics of those movements (Kilner et al., 2007). Therefore, it seems that assessing context and forming expectations about intentions and goals can occur prior to observing an action. Here, our operational definition of context is any situation-specific information that informs an observer's expectation of an agent's intention. For example, in Vander Wyk et al. (2009), the actress' particular emotional expression directed at a particular cup served as the contextual information that allowed the observer to expect that she would choose either that cup or the other cup. What are the neural substrates of these earlier stages of processing? That is, which regions are involved in assessing the context, thus allowing the observer to predict an agent's goal? To investigate this question, we reasoned that if a brain region uses contextual information to inform expectations about an agent's goal, then this region should be able to discriminate between different contexts. Therefore, in this study, participants were presented with unique contexts that led to specific expectations. To avoid using spatial cues as context, as the pSTS has also been implicated in attention reorienting (Corbetta et al., 2008), participants' expectations were instead informed via the learned preferences of fictitious individuals. To this end, we used an ecologically valid manipulation of assigning different food preferences to these fictitious individuals depending on their emotional state (Lyman, 1982). Specifically, one individual would choose to eat meat when he was happy, and vegetables when he was sad. The other individual had the opposite preference. During the experimental task, participants were presented with each individual and his current emotional state, and were asked to indicate which food they expected the individual to choose based on the individual's current emotional state. Each individual's preference and his current emotional state therefore created a context that would inform the observer's expectation of the individual's choice. Multi-voxel pattern analysis (MVPA) was used to assess if these different contexts could be discriminated from one another. Unlike previous studies, neither spatial cuing in the form of motion nor an outcome was presented in this study, because our aim was to investigate context assessment and expectation formation prior to observing an outcome. Given the robust and consistent influence of context on pSTS activity reported in the literature, the pSTS served as a region-of-interest (ROI) on which we performed a targeted analysis.
The role of assessing context is also plausible for this region, given that the surrounding cortex in the inferior parietal lobules has been proposed as a convergence zone for multimodal contextual information to support semantic (Binder and Desai, 2011) and episodic (Shimamura, 2011) memory. However, it is also possible that contextual information is represented not in the pSTS, but in other regions. In particular, the medial prefrontal cortex (mPFC) has been suggested to use contextual associations to form predictions about possible subsequent stimuli (Bar, 2009). We therefore also conducted a whole-brain searchlight analysis to uncover other brain regions that could discriminate between contexts.

Materials and Methods
Participants
Twenty-one right-handed, healthy adults (14 male, mean age 23.2 ± 3.9 years) participated in the study. All participants had normal or corrected-to-normal vision and had no history of neurological or psychiatric illnesses. The protocol was approved by the Yale Human Investigation Committee and all participants gave informed consent. Data from one participant were excluded because the timing files were corrupted, and from another participant because of excessive artifacts in the data. Therefore, results from nineteen participants are reported.

Stimuli and Design
Stimuli consisted of colored pictures of three male faces with neutral expressions, obtained from the NimStim database (Tottenham et al., 2009), along with 36 colored pictures of meat dishes and 36 colored pictures of vegetable dishes obtained from the Internet. Stimuli were presented using Psychtoolbox 3.0.8 (Brainard, 1997; Pelli, 1997) in MATLAB 7.8 (The MathWorks, Inc., Natick, MA, USA). The stimuli were presented using an event-related design. In each trial, one of the three faces was presented along with a text cue above the face indicating the person's emotional state ("happy" or "sad"), and pictures of a meat dish and a vegetable dish on the left and right of the face (Figure 1). Each trial was presented for 2 s and trials were separated by a 4-10 s jittered fixation interval. Each run consisted of six trials per condition (i.e., each face paired with each emotion) to give a total of 36 trials per run, and a run duration of 5 min. The program "optseq2" was used to generate the optimal sequence and separation of trials for maximal statistical efficiency of rapid-presentation event-related hemodynamic response estimation for each run (Dale, 1999).

[Figure 1: Schematic illustration of the experimental paradigm. During each trial, the neutral face picture of one of the three white male individuals was displayed. As the NimStim faces used cannot be published, sample faces generated with FaceGen (Singular Inversions, Toronto, ON, Canada) are shown here instead. Pictures of a meat dish and a vegetable dish were presented on the left and right of the face. The word "happy" or "sad" was displayed above the face to indicate the individual's current emotional state. Participants' task was to indicate, using the left and right button presses, which dish the individual would choose based on the individual's emotional state. Trials were presented for 2 s and were separated by a 4-10 s jittered interval during which a fixation cross was displayed (not shown). The red circles indicate each individual's emotion-dependent food preferences and were not displayed during the task.]
The position of the meat and vegetable dishes on the left and right of the face was counterbalanced across trials within each condition and each run. Ten runs were presented.

Experimental Procedure
Prior to scanning, participants were introduced to three fictitious male individuals ("John", "Alex", and "Rick"). They were briefed that each individual had different food preferences depending on his emotional state. When John was happy ("H1"), he would choose to eat vegetables, but when he was sad ("S1"), he would choose to eat meat. The exact description presented to participants read, "This is John. He is into healthy living so when he's feeling happy, he'll choose to eat vegetables because they are refreshing. However, when he's sad, he'll indulge and choose to eat meat instead." Alex, however, had the opposite preference; he would choose to eat meat when he was happy ("H2"), but vegetables when he was sad ("S2"). The description of Alex read, "This is Alex. Unlike John, when he's happy, he'll indulge and choose to eat hearty meat meals. However, when he's sad, he'll want something refreshing so he'll choose vegetables instead." These two individuals had opposite preferences so that the discrimination of context would not be confounded with the discrimination of emotion (i.e., happy vs. sad) or food choice (i.e., meat vs. vegetables). These trials were considered the "Preference" trials, where participants had to rely on information about each person's preference and emotional state to form expectations about his choice. Rick had no particular preference and could choose to eat either meat or vegetables when he was happy ("H3") or sad ("S3"). The description of Rick read, "This is Rick. He doesn't have a strong preference for either type of food. When he's happy, he sometimes chooses to eat meat and he sometimes chooses to eat vegetables. Likewise, when he's sad, he sometimes chooses to eat meat and he sometimes chooses to eat vegetables." These were the "No Preference" trials and served as control trials, since there was no contextual information from which the participants could form an expectation about the person's choice. Participants' task in the scanner was to indicate on each trial, using their right index and middle fingers corresponding to the left and right response buttons respectively, which food item they expected each person would choose based on his emotional state. No feedback was given during the in-scanner task. However, participants were familiarized with the preferences by performing a practice task, which included feedback, until they achieved an accuracy of at least 75%. A 2 (Person: John, Alex) × 2 (Emotion: Happy, Sad) repeated measures analysis of variance (ANOVA) revealed that participants performed equally well in the "Preference" trials for which there were correct answers (M = 93.7%); there was no main effect of Person or Emotion, and no Person × Emotion interaction (all ps > 0.5). However, there was a marginal main effect of Emotion on response times (F(1,72) = 3.078, p = 0.084); participants took longer to respond to Sad trials (M = 1486 ms) than to Happy trials (M = 1362 ms).

Image preprocessing was performed using the FMRIB Software Library (FSL). Structural and functional images were skull-stripped using the Brain Extraction Tool (BET). The first six volumes (6 s) of each functional dataset were discarded to allow for MR equilibration.
Functional images then underwent motion correction (using MCFLIRT linear realignment) and high-pass filtering with a 0.01 Hz cut-off to remove low-frequency drift. No spatial smoothing was applied to the functional data. The functional images were registered to the coplanar images, which were in turn registered to the high-resolution structural images for subject-level analyses. Subject-level results were later normalized to the Montreal Neurological Institute's MNI152 template, using non-linear registration, for group-level analyses.

Multi-Voxel Pattern Analysis (MVPA)
To obtain data samples for the classification analysis, each participant's preprocessed functional data were first normalized to their structural image (resampled to the resolution of the functional data) using the transformation matrix from preprocessing. Regression analyses were then performed to obtain beta estimates for each trial, using least-squares-sum estimation (AFNI's 3dLSS), which is recommended for classification analyses involving fast event-related designs (Mumford et al., 2012). The model consisted of separate regressors for each 2-s trial from each condition, convolved with a hemodynamic response function, along with the six motion parameters obtained from preprocessing as nuisance regressors. Estimates were obtained for each run separately, and then concatenated to form a beta series for each participant. All classification analyses were implemented in PyMVPA (Hanke et al., 2009), using a Gaussian Naïve Bayes (GNB) classifier and a leave-one-run-out cross-validation scheme. Only correct trials were included in the analysis, and PyMVPA's Balancer function was used to ensure an equal number of trials across conditions for each cross-validation fold. To determine if a region could discriminate between the different contexts, we used a GNB classifier to perform a four-way classification to discriminate the correct "Preference" trials (i.e., H1, S1, H2, S2). A significant four-way classification can arise from accurate classification of some categories but not others. Therefore, we focused our discussion on regions where the classifier made the correct prediction about the actual target category on the majority of trials, that is, where the diagonal elements of the confusion matrix had the highest numerical value in each row. For each participant, the confusion matrices from all voxels within each searchlight cluster were averaged. The mean confusion matrix was then scaled such that each cell in the resulting confusion matrix reflected the percentage of trials in each category that were classified as each of the four potential categories (e.g., the percentage of H1 trials classified as H1 trials, S1 trials, H2 trials, or S2 trials). The cells in each row therefore add up to 100 (or approximately 100 due to rounding). The group-level confusion matrix for each searchlight cluster was obtained by averaging the subject-level confusion matrices. To verify that a successful four-way classification of the "Preference" trials indeed reflected context-dependent expectations (i.e., each individual's preferences and his emotional state), we also conducted a two-way classification on the control "No Preference" trials (i.e., H3, S3). Here, we expected that these trials should not be successfully discriminated, since there was no preference and therefore no contextual information from which participants could form an expectation about the individual's choice. Only trials with behavioral responses were included in the analysis (i.e., missed trials were excluded).
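As an illustration of this classification scheme, here is a minimal Python sketch using simulated data. The study itself used PyMVPA's GNB classifier; the scikit-learn calls below are an assumed equivalent, the data dimensions (10 runs, 6 trials per condition, 123 features) follow the text, and the random "beta estimates" are purely synthetic.

```python
# Sketch of the four-way, leave-one-run-out classification with a
# Gaussian Naive Bayes classifier, plus the row-scaled confusion matrix
# described in the text. Data are simulated, not real beta estimates;
# the study itself used PyMVPA rather than scikit-learn.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_runs, n_voxels = 10, 123
labels = ["H1", "S1", "H2", "S2"]                 # the four "Preference" conditions
y = np.tile(np.repeat(labels, 6), n_runs)         # 6 trials/condition per run
runs = np.repeat(np.arange(n_runs), 24)           # run index = cross-validation fold
X = rng.normal(size=(y.size, n_voxels))           # simulated beta series

cm = np.zeros((4, 4))
for train, test in LeaveOneGroupOut().split(X, y, groups=runs):
    clf = GaussianNB().fit(X[train], y[train])
    cm += confusion_matrix(y[test], clf.predict(X[test]), labels=labels)

# Scale each row to percentages, as in the paper's confusion matrices.
cm_pct = 100 * cm / cm.sum(axis=1, keepdims=True)
print(np.round(cm_pct, 1))   # diagonal near 25% for random data (chance level)
```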
ROI-Based MVPA
An independent pSTS ROI was obtained from the Atlas of Social Agent Perception (Engell and McCarthy, 2013). Briefly, this atlas included results from a biological motion localizer (consisting of blocks of point-light figures and blocks of their scrambled counterparts) that was run on 121 participants. The probability map of the Biological Motion > Scrambled Motion contrast, which localizes the pSTS, was thresholded at 0.1 and intersected with the right supramarginal gyrus from the Harvard-Oxford Atlas to obtain a liberal pSTS mask. The mask was further edited manually to remove voxels spreading into the parietal operculum. The resulting ROI of 751 voxels (Figure 2, in yellow) was then transformed into subject-space for each participant. The beta estimates within the ROI were mean-normalized by z-scoring within each sample to remove mean differences between samples. Feature selection was performed on the samples in the training set of each cross-validation fold by conducting a one-way ANOVA on the beta estimates for the four "Preference" trial types for each voxel in the pSTS ROI. The top 123 voxels (to match the number of voxels used for the searchlight analysis described later) that showed the greatest variance between the four trial types were selected as features for that cross-validation fold. The accuracies from all participants were then averaged to obtain the group-level classification accuracy. Significance testing at the group level was implemented using a combination of permutation and bootstrap sampling methods (Stelzer et al., 2013). Specifically, the data labels for each participant were permuted (within each run) 100 times and the classification analysis was repeated using each permuted label set to yield 100 chance accuracies for each participant. We then randomly drew one of the chance accuracies from each participant and averaged these accuracies to obtain a chance group-level accuracy. This random sampling (with replacement) was repeated 10^5 times to create a group-level null distribution. The true group-level classification accuracy was then compared to the null distribution to obtain the p-value associated with the accuracy.
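The logic of that permutation-and-bootstrap test can be condensed as follows. The sketch below uses simulated chance accuracies around the 25% four-way chance level and shows only the group-level null construction and p-value computation, not the subject-level label permutations themselves.

```python
# Sketch of the group-level significance test (after Stelzer et al., 2013):
# each subject contributes 100 chance accuracies from label permutations;
# each group-level null value is the mean of one randomly drawn chance
# accuracy per subject. Chance accuracies here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_perms, n_bootstrap = 19, 100, 100_000

# Simulated per-subject chance accuracies (rows: subjects, cols: permutations).
chance = rng.normal(loc=0.25, scale=0.02, size=(n_subjects, n_perms))

# Bootstrap the group-level null: draw one chance accuracy per subject
# (with replacement across bootstrap samples) and average across subjects.
draws = rng.integers(0, n_perms, size=(n_bootstrap, n_subjects))
null = chance[np.arange(n_subjects), draws].mean(axis=1)

observed = 0.31   # hypothetical true group-level accuracy
p_value = (null >= observed).mean()
print(f"p = {p_value:.5f}")
```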
Whole-Brain Searchlight Analysis
To identify other brain regions that discriminate context-specific information, we conducted a whole-brain searchlight analysis in subject-space for each participant, with a three-voxel-radius searchlight consisting of 123 voxels centered on every non-zero voxel in an MNI152 brain mask. The four-way classification analysis performed for each searchlight followed the method used in the ROI-based analysis, except that no feature selection was conducted. The classification accuracy for each searchlight was assigned to the voxel at the center of the searchlight, yielding a whole-brain classification accuracy map for each participant. Each participant's accuracy map was transformed back into MNI152 template space. The group-level classification accuracy map was obtained by averaging the accuracy maps from all participants. Significance testing of the whole-brain classification results also used permutation and bootstrap sampling methods, along with cluster thresholding to correct for multiple comparisons (Stelzer et al., 2013). Specifically, we ran the searchlight classification analysis for each participant an additional 100 times, each time using a random permutation of the data labels (within each run), thus producing an accuracy map of chance classification. Each participant therefore had 100 chance accuracy maps. Each of these maps was then normalized to the MNI152 template space. To obtain a null distribution for the group-level classification accuracies, we generated 10^5 group-level chance accuracy maps, each of which was obtained by choosing a random chance accuracy map from each participant and averaging those randomly chosen maps. A whole-brain threshold of p < 0.001 at each voxel was then applied to the group-level accuracy map. Cluster thresholding was used to correct for multiple comparisons. Each of the 10^5 group-level chance maps was also thresholded at voxel-wise p < 0.001. We recorded the number of clusters of each cluster size occurring in each of these 10^5 thresholded chance maps and generated a null distribution of cluster sizes. Each recorded cluster across all 10^5 chance maps was then assigned a p-value based on the occurrence of its size in the chance-level cluster distribution. Significant clusters were those whose probability survived a false discovery rate (FDR) of q < 0.05. To verify that a significant four-way classification reflected accurate discrimination of all four categories, a cluster-level confusion matrix was obtained by averaging the confusion matrices of all searchlights in each significant cluster. We also conducted a whole-brain searchlight analysis performing a two-way classification using the two "No Preference" trial types in each searchlight, to verify that regions that discriminated the four "Preference" trials did not also discriminate the two "No Preference" trials.

Results
Classification Analysis on the pSTS Region-of-Interest (ROI)
No significant four-way classification of "Preference" trials was found in the pSTS ROI (M = 25.49%, p = 0.314). There was also no significant two-way classification for the control "No Preference" trials (M = 48.89%, p = 0.792). To assess if the four-way classification would improve with a larger number of features, the classification analysis was also run with the top 200, 300, and 400 voxels from the feature selection, but no improvement in the four-way classification accuracy was found (200 voxels: M = 25.36%, 300 voxels: M = 25.34%, 400 voxels: M = 25.71%, all ps > 0.2). We also performed a separate two-way classification, using only "Preference" trials, to assess if the pSTS could discriminate the expected outcome (in this case food choice, i.e., meat vs. vegetables). No successful discrimination of expected outcome was found with any feature selection size (all ps > 0.5).

Whole-Brain Searchlight Analysis
Regions that successfully discriminated the "Preference" trials in the whole-brain searchlight four-way classification analysis included the left inferior parietal lobule/intraparietal sulcus (L IPL/IPS) spanning from the angular gyrus to the intraparietal sulcus, the precuneus, the right intraparietal sulcus (R IPS), the anterior medial prefrontal cortex (amPFC), the left middle temporal gyrus (L MTG), the dorsal anterior cingulate cortex (dACC), the superior frontal gyrus (SFG), the left anterior temporal lobe (L ATL) at the anterior MTG, and the right inferior frontal sulcus (R IFS; Figure 2, in red and orange; coordinates of peaks are reported in Table 1).
Of these regions, the L IPL/IPS, R IPS, amPFC, L MTG, and L ATL (Figure 2, in red) yielded confusion matrices where the diagonal elements had the highest numerical value in each row (Figure 3). No regions successfully discriminated the "No Preference" trials in the whole-brain searchlight two-way classification analysis.

[Figure 2: The posterior superior temporal sulcus (pSTS) region-of-interest (ROI) obtained from the Atlas of Social Agent Perception (Engell and McCarthy, 2013) for the ROI-based MVPA is displayed in yellow. Clusters of searchlight centers with significant four-way classification of the "Preference" trials in the whole-brain searchlight analysis are displayed in red and orange. Regions in red (i.e., L IPL/IPS, R IPS, amPFC, L MTG, and L ATL) had confusion matrices in which the diagonal elements had the highest numerical value in each row.]

Discussion
The current study sought to investigate the neural substrates of assessing contextual information to form expectations about an agent's goal. To this end, participants were presented with fictitious individuals who had emotion-dependent food preferences, and were asked to indicate which food they expected each individual to choose given the individual's emotional state. Here, knowledge about each individual's emotion-dependent food preferences and the individual's current emotional state served as a unique context that informed the observer's expectation of the individual's food choice (i.e., his goal). We assessed if the different contexts could be discriminated based on the spatial pattern of activity in different brain areas. Given the consistently observed influence of context on pSTS activity, the pSTS served as a ROI on which we performed a targeted analysis. We also conducted a whole-brain searchlight analysis to identify other regions in the brain that might discriminate between contexts. Despite using a liberal mask and selecting the voxels that varied the most between trials to optimize classification performance, no evidence for context discrimination was found in the pSTS. However, we found robust evidence for context discrimination in three-voxel-radius searchlights centered in a network of other regions in the brain, including the left IPL/IPS, right IPS, amPFC, left MTG, and left ATL. The positive finding in the whole-brain analysis demonstrates that our task was sensitive to our experimental manipulation, but the lack of a positive finding in the pSTS does not rule out the possibility that the pSTS may still represent contextual information. A recent study found that MVPA failed to find information about face identity in macaques, even when single-unit recordings revealed the presence of this information in the underlying neural populations (Dubois et al., 2015), demonstrating the limitations of the method. The different contexts presented in this study may not be represented in a spatially organized or consistent way in the pSTS, which is what a successful classification analysis using MVPA requires. Alternatively, the pSTS could represent contextual information, but only information conveyed through visual or other sensory modalities, as was used in previous studies, and not information conveyed through linguistic, conceptual means, as was used in this study. Similarly, the pSTS may not represent information about an agent's stable preferences, which is only one type of contextual information, but may represent other types of contextual information that are conveyed through the stimulus, such as facial expressions.
Indeed, the analysis rested on the assumption that, regardless of the nature of the context, there should be a point of convergence where the contextual information is interpreted and translated into an expected outcome. Relatedly, the searchlights that discriminated the different contexts were centered in regions associated with semantic processing and retrieval. The left ATL is involved in semantic processing (Visser et al., 2010), and has been shown to be particularly important for processing person-specific semantic information (Brambati et al., 2010), which could refer to each individual's context-specific preferences in this study. Meta-analyses have also found that the parietal lobules and MTG regions are involved in episodic (Spaniol et al., 2009) and semantic (Binder et al., 2009) retrieval. However, previous studies that have used scenes to convey context have instead implicated the retrosplenial cortex and parahippocampal gyrus, which are associated with scene processing (Bar, 2009). The differences in the regions implicated suggest that the regions that successfully discriminated the different contexts in this study may not necessarily be involved in all types of context processing, but could reflect the specific type of contextual information that is used in this task. In our study, the regions that showed successful context discrimination have previously been implicated in semantic processing and retrieval, which may reflect the retrieval of learned person knowledge required for the task. Similarly, Zaki et al. (2010) also found greater engagement of the amPFC and left temporal and parietal regions when participants used contextual cues (e.g., text describing affective events) to infer a person's emotional state than when watching a silent video of the person describing the events. Notably, the region that was commonly implicated in both types of context studies was the amPFC, which may suggest that this region is critical for context processing more generally, regardless of domain. Indeed, the mPFC has been proposed to use contextual associations to form predictions (Bar, 2009). The mPFC has also been implicated in integrating context and past experience, albeit for guiding an organism's own responses (Euston et al., 2012). One question that can be raised from this observation is whether the same neural mechanisms are also used to guide predictions about another's response. Interestingly, in a similar study, participants assessed how four individuals, each with different personalities, would react in a given situation (Hassabis et al., 2014). Successful discrimination of the four personalities was found in the mPFC. In our study, we also found successful within-personality discrimination, that is, of each person and his emotional state, suggesting that the mPFC may make more fine-grained discriminations than personality models. It is possible therefore that the four personalities in Hassabis et al. (2014) represented four different contexts that informed participants' expectations about the agents' reactions. If the pSTS is not involved in re-evaluating contextual information, then what might explain the commonly observed increase in activity to unexpected actions? Given that this region also shows a greater response in attention reorienting tasks (Corbetta et al., 2008; Lee and McCarthy, 2014), the increased activity could reflect attention reorienting, or prediction error signals (Koster-Hale and Saxe, 2013).
[Figure 3: Confusion matrices from each significant cluster from the four-way classification. Each cell reflects the group-level proportion of each type of trial (in rows) that was classified as each of the four types of trials (in columns). The cells in each row therefore add up to 100 (or approximately 100 due to rounding). Cells are colored according to a gradient ranging from the lowest (gray) to the highest numbers (red). Successful classification of all four categories is reflected through strong red colors in the diagonal from top left to bottom right. The first five regions (i.e., L IPL, R IPS, amPFC, L MTG, and L ATL) had confusion matrices in which the diagonal elements had the highest numerical value in each row. H1: John-happy, H2: Alex-happy, S1: John-sad, S2: Alex-sad.]

One study, however, dissociated attention reorienting from stimulus evaluations and suggested that the pSTS at the temporoparietal junction is involved in stimulus evaluation instead of reorienting attention (Han and Marois, 2014). Therefore, the increased activity could also reflect greater stimulus evaluation, given the unexpectedness of the stimulus. Another possibility is that the pSTS represents the expected outcome (e.g., a specific action), and when the outcome violates expectations, the region re-represents the outcome, leading to increased activity. However, we also found no evidence that the pSTS could discriminate between expected outcomes (in this case, the meat dish or vegetable dish) in this study. Indeed, the target object of an agent's reach was found to be encoded in the left IPS instead (Hamilton and Grafton, 2006). It is also possible that the pSTS' representation of expected outcomes could be specific to the domain of motion information and not static pictures as were used here, especially since the pSTS is known to respond robustly to biological motion (Allison et al., 2000; Puce and Perrett, 2003). For example, Said et al. (2010) found successful discrimination of dynamic facial expressions in the pSTS. However, motion was not presented in this study because our aim was to investigate the expectation phase of observation with no feedback, and goal-directed motion would inevitably hint at an outcome. A study that investigates whether the pSTS can discriminate between different expected actions could address this issue.

Limitations
One limitation of this study is the rule-based nature of the task. That is, participants could have learned and applied the face-emotion-food combinations without reflecting on the person's goal. The left IPL/IPS has been found to represent event-specific (i.e., specific word-picture pairings) information (Kuhl and Chun, 2014), which resembles the face picture and emotional word pairings in the current study. We did not, however, find successful classification of the two types of "No Preference" trials, which suggests that there was additional information being represented in the four-way classification than just the face-emotion combination (perhaps the more subtle face-emotion-word combination). Other studies have also found decoding of task rules in the IPS (Woolgar et al., 2011; Zhang et al., 2013). It is possible, though, that the same mechanisms underlie action observation. For example, in Vander Wyk et al. (2009), an observer who sees a person scowling at an object presumably expects the person to retrieve the other object due to some internal rule, for example, Bayesian models for cue integration (Zaki, 2013).
Conclusion
In summary, we found no evidence that the right pSTS, a region that has been shown to be sensitive to the context in which the observed movements of others occur, discriminates contextual information. We did, however, identify a network of other brain regions, commonly associated with context processing and with semantic and memory retrieval, that successfully discriminated contexts. These regions possibly support the formation of context-dependent expectations of an agent's goal.
7,322
2015-09-08T00:00:00.000
[ "Biology" ]
Commentary: Voluntary Agreement in Multi-use Climate Adaptation in the Oekense Beek from a Politico-Economic Perspective

Introduction
Recent high-impact floods and droughts were experienced across the EU, where the economic and social impact was significant (Guha-Sapir et al. 2016): more than five times the losses incurred between 2000 and 2012. Driven by climate change, extreme flooding events, such as those experienced across the EU, are expected to increase in frequency, with models suggesting that average annual economic losses will exceed EUR 23 billion (Jongman et al. 2014). Preparing for and building resilience against future natural hazard events is challenging and resource-intensive (e.g., in time, finances, etc.), with key difficulties for practical application. This era of climate change calls for new robust (i.e., inclusive of known or probable risks) and flexible (i.e., incorporating uncertain or possible risks) risk management approaches. The desire to manage land and water sustainably, introduce resilience to climate change, assess risk and implement sustainable environmental management strategies has broad support, but defining "sustainable" management has proven difficult for policymakers. One adaptation strategy might be Nature-based Solutions (NBS). NBS aim to harness ecosystems, through both their resilient adaptive ecoservices and their sustainable integrity, to provide short-, medium- and longer-term solutions to managing the risks associated with the climate-driven extreme hydrometeorological events being experienced across Europe and globally. Nature-based solutions for hydrometeorological risk mitigation and adaptation in river catchments involve, for example, Natural Water Retention Measures (NWRM), space for the rivers, or measures for resilient cities (i.e., green infrastructure in cities, green roofs, decentralized rainwater management). These solutions are also referred to as "green and blue infrastructure". Nature-based solutions to water-related risks cannot entirely substitute for traditional measures, both structural and behavioral, that address flood pathways and receptors (e.g., flood walls, channels, flood warnings), but their potential value for mitigation and adaptation has been recognized (Lafortezza et al. 2018). As such, NBS can often be easily designed in engineering terms and provide a good complement to a local climate adaptation strategy, but a limiting factor is the area of land required to provide sufficient storage in the appropriate place to be useful. Nevertheless, implementing and paying for NBS, and for grey or mixed (grey and green) infrastructure, requires not only appropriate technical data, risk analysis and functional testing, but also funding for building and maintaining the various targeted options over variable time-space continuums. There are significant differences between countries worldwide on how NBS must be implemented, but generally at least three main barriers can be summarized:
• Cultural and social barriers: for example, in the flood risk management (FRM) policy of England and Wales, it has so far not been publicly acceptable as a mainstream strategy to use private (upstream) land, sacrificing it for the benefit of downstream communities (Thaler 2015).
• Uncertainties in frequency and magnitude: using flood storage as a key FRM scheme also entails significant uncertainties about the next event, which cause large concerns among private landowners about how to use the land.
• Mechanisms of compensation: flood storage includes the challenge of transferring risk and benefit to others. This causes a complicated discourse about the preferred form and institutional set-up of compensation. For example, Ungvari and Kis (2013) showed, for the implementation of flood storage in the Tisza river basin, that farmers and government have different views on how to organise the payment scheme (small fixed annual amounts, or a large amount based on the event).

Use of Economic-Policy Instruments in Flood Risk Management
As a result, a change in ownership, such as land buying or taking land by expropriation, might be one approach to implementing NBS. Another solution might be to use Economic Policy Instruments (EPIs) to manage water-related risks more easily. EPIs have become more popular in the past decade, in particular with the implementation of various EU regulations and directives. The range of EPIs can be as follows: (1) innovative payment schemes (i.e., compensation by public administration or insurance); (2) financial incentives for land-use changes (e.g., agro-environmental schemes); (3) flood risk pooling schemes; (4) financing schemes for urban development for stormwater management; (5) voluntary agreements, for example between urban and rural areas; and (6) cap and trade schemes, like the insurance bonus-malus system (Thaler 2015). In the Oekense Beek study area, the aim was to use the EPI of voluntary agreement between private landowners (i.e., farmers) and users, such as the regional water authority, the province and the municipality of Brummen. A voluntary agreement (OECD 2003) is often simply defined as any approach that does not involve a legally enforceable requirement that is imposed on one party to take action in the interests of another. Nevertheless, capturing, analyzing and understanding differences of success and failure in the adoption of voluntary agreements is challenging (Thaler 2015), yet necessary to encourage and support stakeholders' engagement in this direction and to understand limits in the transferability of success from one case to another. The successful implementation of voluntary agreements is clearly influenced (positively or negatively) by a number of factors. These factors may constrain the feasibility and acceptability of a project but also determine its efficiency. Providing and pre-empting an exhaustive list of factors would be misguiding. However, these factors can be grouped into four categories: (a) natural capital, (b) social capital, (c) institutional framework and (d) socio-economic activities (see Table 23.1). Natural capital refers to the stock of resources that provides environmental services. The catchment characteristics play a central role in implementing flood storage areas. The retention volume capacity, for instance, will depend on various natural factors such as the landscape profile, the soils and the geological conditions (EA 2009). The volume capacity also needs to match the hydrological behaviour of the catchment, the considered river and the upstream and downstream tributaries. A central aim of using catchment-wide NBS is to protect high-value/high-vulnerability (usually urban) areas in the lower part of the catchment.
Therefore, the central question and conflict arise of how to compensate low-intensity agricultural areas and how to motivate farmers to provide the land, as in the Oekense Beek example. Behind this simple transfer, interesting from a flood management and an economic perspective, lie potential sources of social tension and resistance to the project, as the transfer from one location to another (e.g., often from one community or administrative boundary to another) means transferring risk. The pre-existence, or prior lack, of shared knowledge, trust and social connection may influence the acceptability of such a transfer. A critical element is the interaction (social capital) between farmers and other stakeholders from urban areas. Most of the time, the examples show a lack of social capital between these two groups, reflected in a lack of risk culture and/or solidarity. One recurrent conflict is the impact that adopting environmentally friendly measures may have on the socio-economic activities of farmers, as internalising an externality reduces the profitability of a business. There is the question of how to organise the compensation, which is often based on a negotiation process. However, such negotiations are regulated by institutional frameworks (Ostrom 1986; Scott 1995), formal and informal, which may affect the implementation of a policy instrument. An understanding of the possibilities and limits within the institutional framework is crucial. In the context of implementing flood storage areas, it is essential to investigate the questions of property rights (e.g., the right to flood/the right to be protected), of land use planning (e.g., flood-prone areas defined or not) and of existing policy on the funding mechanism (e.g., the right to compensate). Voluntary agreements involving a form of compensation are often preferred; yet their implementation differs from one place to another (in particular, challenging the upscaling or transfer of the lessons learned to other cases). In seeking to construct a flood storage area, there are alternative forms of power that might be used, with associated advantages and disadvantages. Because flood storage requires space and place, a purely market-based approach cannot always be relied upon to assemble the necessary area in the appropriate place. In addition, the rules covering action by a particular administrative unit commonly will not allow it to buy land outside of its administrative boundaries. This is usually even more the case when the expropriation of land is concerned; for example, the Netherlands does not have the legal powers to acquire land in Germany through compulsion in order to construct a flood storage area. In strongly federal states, the same is true between federal states. Hence, a voluntary agreement may be the only viable option when action is desired outside of the boundaries of the proposing body. Alternatively, if an attempt to use power is likely to be met by resisting power, there are two reasons why a voluntary approach may be preferable. Firstly, even if the resistance can be overcome, this will involve costs and time delays. Secondly, creating an adversarial context in one case can incur long-term costs by creating the anticipation by one or several parties that the future will also be one of conflict, even when the interests of the parties actually coincide. Conversely, establishing cooperation in one instance may create a precedent for future cooperation and a norm of reciprocity (Nowak and Highfield 2011).
Conclusion
In FRM, voluntary agreements are now also receiving a lot of interest as complements to the existing policy instruments, in order to achieve the objectives of the EU Water Framework Directive and of the EU Floods Directive, such as the implementation of flood storage and the use of natural retention areas. Whilst the issues of scale and the fit of administrative units to physical problems have been identified (Underdal and Young 1997), if it is impractical or inappropriate to change the boundaries of the administrative units, it is necessary to create bridging mechanisms (Kohn 2008) to enable co-action across the boundaries of the existing administrative units. Thus, the central problems addressed in the Oekense Beek and many other examples are the appropriate use of power and bridging across the boundaries to power created by rules. At the same time, the use of economic instruments has been questioned from the perspective of social equity (Thaler and Hartmann 2016). The importance of equity and distributional issues (be it between water use sectors, social groups or regions) is in fact receiving increasing attention in many policy discussions and research activities in different parts of the world. Not all these objectives are fulfilled to the same extent by the different economic instruments. More often than not, in practical policy implementation, more attention is paid to the use of EPIs as a means to raise revenues than to the efficient allocation of water use/water service delivery.

Dr. Thomas Thaler is a Research Fellow at the Institute of Mountain Risk Engineering, University of Natural Resources and Life Sciences. His research focuses on the topic of politico-economics and natural hazards in Europe, with a particular emphasis on questions relating to the design and effectiveness of governance systems, as well as integrating European environmental policies into national and local institutions.

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
2,713.2
2019-01-01T00:00:00.000
[ "Environmental Science", "Political Science", "Economics" ]