Irrigation Quantification Through Backscatter Data Assimilation With a Buddy Check Approach
Irrigation is an important component of the terrestrial water cycle, but it is often poorly accounted for in models. Recent studies have attempted to integrate satellite data and land surface models via data assimilation (DA) to (a) detect and quantify irrigation, and (b) better estimate the related land surface variables such as soil moisture, vegetation, and evapotranspiration. In this study, different synthetic DA experiments are tested to advance satellite DA for the estimation of irrigation. We assimilate synthetic Sentinel‐1 backscatter observations into the Noah‐MP model coupled with an irrigation scheme. When updating soil moisture, we found that the DA sets better initial conditions to trigger irrigation in the model. However, DA updates to wetter conditions can inhibit irrigation simulation. To address this limitation, we propose an improved DA algorithm using a buddy check approach. The method still updates the land surface, but the irrigation trigger is no longer primarily based on the evolution of soil moisture; it relies instead on an adaptive innovation (observation minus forecast) outlier detection. The new method was found to be optimal for more temperate climates where irrigation events are less frequent and characterized by higher application rates. The DA outperforms the model‐only 14‐day irrigation estimates by about 20% in terms of root‐mean‐squared differences when frequent (daily or every other day) observations are available. With fewer observations or high levels of noise, the system strongly underestimates the irrigation amounts. The method is flexible and can be extended to other DA systems and to real‐world cases.
Introduction
Irrigation represents more than 70% of freshwater withdrawals (Gleick et al., 2009), making it the most important human activity impacting the terrestrial water cycle. Over the last decades, irrigated areas have expanded almost sixfold (Siebert et al., 2015) and contributed significantly to the increase in global crop production over the same time period (Foley et al., 2011). Under a growing population, food demand will continue to rise, which will inevitably lead to a further expansion and intensification of irrigated agriculture (Foley et al., 2011). In parallel, climate change will impact irrigation water needs as a result of the expected rising temperatures and drier periods in many regions (Busschaert et al., 2022; Döll, 2002; Fischer et al., 2007). Conversely, irrigation also plays an important role in weather and climate dynamics (Bonfils & Lobell, 2007; Hirsch et al., 2017; Mahmood et al., 2014; Thiery et al., 2017, 2020), but it is still not, or only poorly, included in Earth system models (Cook et al., 2015; Gormley-Gallagher et al., 2022; Valmassoi & Keller, 2022). Not only is there a call to monitor irrigation in order to ensure that the available water meets future irrigation demands, but future climate-related research, and Earth system models in general, could also significantly benefit from large-scale irrigation estimates.
In recent years, several methods have been developed to map and quantify irrigation by making use of satellite remote sensing data (Massari et al., 2021). These observations (optical, microwave, and gravimetric measurements) are used alone, combined with each other, or with models. Optical (visible or thermal) observations were first used to map irrigation relying on the difference in spectral responses between irrigated and non-irrigated areas (e.g., Ozdogan & Gutman, 2008; Pervez et al., 2014; Salmon et al., 2015; Xie & Lark, 2021; L. Zhang et al., 2022), and more recently using machine learning-based methods (e.g., Jin et al., 2016; Magidi et al., 2021; Nagaraj et al., 2021; C. Zhang et al., 2022). Optical data have further been used to quantify irrigation amounts, mostly using estimates of actual evapotranspiration (ET) based on vegetation indices, sometimes also including models (land surface, water and energy balance), or combining visible and thermal bands (e.g., Bretreger et al., 2022; Brombacher et al., 2022; Droogers et al., 2010; Le Page et al., 2012; Maselli et al., 2020; Olivera-Guerra et al., 2020; van Eekelen et al., 2015; Vogels et al., 2020). Furthermore, satellite-based leaf area index (LAI) and ET products have been assimilated into the Noah-MP land surface model (LSM; Niu et al., 2011) with a focus on improving irrigation estimates by updating the land surface (Nie et al., 2022; J. Zhang et al., 2023).
While irrigation detection methods based on visible and thermal data have progressed and shown some promising results, they typically rely on proxies and are limited by cloud cover. By contrast, microwave signals can be directly related to water and are less limited by atmospheric conditions. Despite their coarse resolutions, soil moisture retrievals from passive L-band radiometers or from active C-band scatterometers can detect wetter soil conditions when irrigation water is applied at large scales. The first microwave-based irrigation estimates were derived by inverting the soil water balance (SM2RAIN algorithm; Brocca et al., 2014) using several surface soil moisture (SSM) products (Soil Moisture Active Passive [SMAP], Soil Moisture Ocean Salinity [SMOS], Advanced SCATterometer [ASCAT], Advanced Microwave Scanning Radiometer 2 [AMSR2]) at a 25-km resolution (Brocca et al., 2018). They found satisfactory results in terms of irrigation quantification, but the outcome strongly depended on the revisit time and the uncertainty of the SSM retrievals. Following the same approach, Dari et al. (2020) achieved finer-resolution quantification by downscaling SMAP and SMOS data using the Disaggregation based on Physical and Theoretical scale Change algorithm (DisPATCh; Merlin et al., 2008). Jalilvand et al. (2019) applied this method in a more arid climate (Iran). SSM retrievals (containing irrigation in the signal) were also contrasted against LSM simulations (without irrigation) in order to estimate the amounts of water applied (Zaussinger et al., 2019; Zohaib & Choi, 2020). Despite these methodological advances toward irrigation estimation, the currently most accurate microwave-based satellite SSM retrievals are, for the time being, only available at resolutions coarser than most irrigated fields in Europe.
C-band synthetic aperture radar (CSAR) observations provide data at finer (field-scale) resolutions, and they are also sensitive to soil moisture, albeit with less penetration depth than L-band observations. The Sentinel-1 (S1; Torres et al., 2012) mission from the European Space Agency (ESA) offers the opportunity for frequent (∼2-3 days revisit in Europe) and fine-scale (10 m) observations, which are required for irrigation detection and quantification purposes. The S1 mission comprises a constellation of two satellites (S1-A and S1-B) sensing in two polarizations over land: co-polarized VV (vertically transmitted, vertically received) and cross-polarized VH (vertically transmitted, horizontally received). The S1 CSAR instruments on board S1-A and S1-B have respective radiometric accuracies (error standard deviation) of 0.25 and 0.32 dB (varying with the acquisition mode and polarization; Miranda et al., 2017). In December 2021, S1-B became unresponsive, resulting in fewer observations from that time onwards. High-resolution SSM estimates retrieved from S1 backscatter have been developed in recent years. Zappa et al. (2021) used the TU Wien S1 SSM product (Bauer-Marschallinger et al., 2019) to detect and quantify irrigation at a local scale, based on spatiotemporal variations in SSM. The method showed promising results in terms of detection and correlation, but systematic underestimations of the irrigation water amounts were found when the observation interval was longer than 1 day (in a follow-up synthetic experiment; Zappa et al., 2022). The first regional datasets of high-resolution irrigation water use based on S1 data have been released by Dari et al. (2023) using the soil moisture-based inversion approach. While the backscatter itself has already been used in irrigation mapping and timing studies at the local scale (Bazzi, Baghdadi, Fayad, Charron, et al., 2020; Bazzi, Baghdadi, Fayad, Zribi, et al., 2020), the direct use of S1 data to quantify irrigation is only in its infancy, as changes in backscatter are affected by the water in the topsoil, but also by the vegetation (water, volume, density, and geometry) and terrain roughness (McNairn & Shang, 2016).
The optimal and most spatio-temporally complete estimates of irrigation could theoretically be expected to result from a combination of observations (e.g., microwave observations) with models through data assimilation (DA; De Lannoy et al., 2022). Abolafia-Rosenzweig et al. (2019) performed SMAP SSM DA into the variable infiltration capacity (VIC) LSM, using a particle batch smoother. With the intent of going to a finer resolution, Jalilvand et al. (2023) used a similar approach with the S1-SMAP SSM product (Das et al., 2019). Ouaadi et al. (2021) assimilated S1-derived SSM data into the FAO-56 (Allen et al., 1998) model with a particle filter. In the three aforementioned studies, irrigation was treated as a model input and not explicitly simulated. A series of synthetic DA experiments was carried out to evaluate the impact of, for example, the time interval between the assimilated observations and their level of error. They concluded that the proposed techniques could accurately estimate irrigation amounts and timing (the latter only for Ouaadi et al., 2021), but that small errors (levels of noise) and frequent observations are crucial.
The above studies assimilated SSM products, but retrieval assimilation requires rescaling to remove the bias between the forecast (modeled) and observed soil moisture to achieve an optimal DA system. Some of these rescaling approaches can remove irrigation from the signal (Kwon et al., 2022). Moreover, retrievals can introduce errors and inconsistencies into the DA system (De Lannoy et al., 2022), and might suppress irrigation signals. Indeed, microwave-based retrievals often rely on ancillary data and, in the case of active measurements, on empirical change detection algorithms. Based on these limitations, Modanesi et al. (2022) decided to directly assimilate the S1 backscatter signal into the Noah-MP LSM (Niu et al., 2011) equipped with a sprinkler irrigation scheme (Ozdogan et al., 2010), in which irrigation is dynamically modeled and triggered based on a soil moisture deficit approach. The DA updated SSM and LAI using an Ensemble Kalman Filter (EnKF) and a calibrated Water Cloud Model (WCM; Attema & Ulaby, 1978; Modanesi et al., 2021) as observation operator to map between SSM, LAI, and backscatter. The idea is to provide the model with a better initial state, in terms of soil moisture and vegetation, to improve the triggering and estimation of irrigation. However, the method also showed several limitations related to the model (soil texture, crop type, irrigation parametrization) and to the DA system itself. An important problem is that irrigation events could be missed when the DA updates soil moisture to wetter conditions, thereby preventing irrigation simulation.
We set up synthetic experiments based on the system of Modanesi et al. (2022), with the goal of investigating the exact benefits and shortcomings of S1 backscatter DA and of addressing its main shortcomings (Section 2). In this context, synthetic backscatter observations are generated from a nature run (also called the "truth") with a calibrated WCM as observation operator (Modanesi et al., 2021). These observations are then assimilated into erroneous model simulations, for which the forcings are altered compared to the reference nature run. We then propose and test a novel method based on an innovation (observation minus forecast) buddy check approach. The land surface state is still updated to provide better initial conditions to estimate irrigation, but anomalously high backscatter observations are not assimilated; instead, they are used to flag an unmodeled process and to trigger irrigation. This new method is evaluated for three different sites, under different forcing errors, observation intervals, and observation errors in Section 3. Finally, we discuss (Section 4) possible future developments of the method, along with the opportunity to bring this system to a real-world experiment.
The irrigation scheme, coupled to the Noah-MP LSM, was initially developed by Ozdogan et al. (2010) and is based on a soil moisture deficit approach. In the case of a fully irrigated pixel, the irrigation scheme depends on two conditions for irrigation to be triggered: (a) the day must fall within the growing season, and (b) the rootzone soil moisture must reach a certain depletion. First, the growing season is defined by a greenness vegetation fraction (GVF [-]) threshold, GVF_irr, as suggested by Ozdogan et al. (2010):
GVF_i ≥ GVF_irr = GVF_min + 0.40 (GVF_max − GVF_min)    (1)
In this study, GVF is based on a monthly climatology, and GVF_min and GVF_max are respectively the minimum and maximum monthly GVF. Second, the soil must be dry enough, meaning that the rootzone soil moisture has to reach a certain depletion (MA_irr). In the irrigation scheme, the depletion is defined by the moisture availability (MA [-]) as follows:
MA_i = Σ_{l=1}^{lroot} (θ_l − θ_WP) RD_l / Σ_{l=1}^{lroot} (θ_FC − θ_WP) RD_l    (2)
where θ_l [m³ m⁻³] and RD_l [m] are the soil moisture content and rooting depth (RD) of the l-th soil layer, and θ_WP [m³ m⁻³] and θ_FC [m³ m⁻³] are the water contents at wilting point and field capacity of the corresponding soil texture. lroot is the number of soil layers considered in the computation of MA. This number varies over the growing season, given that RD_i (the sum of RD_l at day i) directly depends on GVF_i, that is,
RD_i = (GVF_i / GVF_max) RD_max    (3)
in which RD_max [m] is the maximum rooting depth (a vegetation parameter), and GVF is based on a monthly climatology (as in Equation 1). When both conditions (growing season and dry soil) are fulfilled, irrigation is triggered and the amount of water required to bring the rootzone soil moisture back to field capacity is applied. The irrigation rate (Irr_rate [mm s⁻¹]) is then defined as follows:
Irr_rate = 1000 Σ_{l=1}^{lroot} (θ_FC − θ_l) RD_l / Irr_time    (4)
where Irr_time (in seconds) corresponds to the period when irrigation is allowed. This time frame is set to 06:00 to 10:00 LT following Ozdogan et al. (2010). The Irr_rate is then added to the precipitation at each model time step.
In our study, we report the total irrigation amount per day, which is effectively applied at each model time step within the 4-hr irrigation period.
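To make the trigger logic and water amount computation concrete, the short Python sketch below illustrates Equations 1-4 for a single day and a three-layer rootzone. All parameter values, layer depths, and function names are illustrative assumptions for this sketch and do not correspond to the Noah-MP/LIS implementation or to any site in this study.

```python
import numpy as np

def moisture_availability(theta, rd, theta_wp, theta_fc):
    """Rootzone moisture availability MA (Eq. 2): 0 = wilting point, 1 = field capacity."""
    num = np.sum((theta - theta_wp) * rd)
    den = np.sum((theta_fc - theta_wp) * rd)
    return num / den

def irrigation_rate(theta, rd, theta_fc, irr_time=4 * 3600.0):
    """Irrigation rate [mm/s] that refills the rootzone to field capacity (Eq. 4)."""
    deficit_m = np.sum(np.maximum(theta_fc - theta, 0.0) * rd)  # rootzone deficit [m of water]
    return 1000.0 * deficit_m / irr_time                        # spread over the 4-h window

# Illustrative inputs (assumed values, not site parameters).
theta = np.array([0.18, 0.20, 0.22])       # layer soil moisture [m3/m3]
rd = np.array([0.10, 0.30, 0.60])          # layer rooting depths [m]
theta_wp, theta_fc = 0.10, 0.32            # wilting point / field capacity [m3/m3]
gvf, gvf_irr, ma_irr = 0.60, 0.40, 0.70    # growing-season and depletion thresholds [-]

ma = moisture_availability(theta, rd, theta_wp, theta_fc)
if gvf >= gvf_irr and ma <= ma_irr:        # both trigger conditions fulfilled
    rate = irrigation_rate(theta, rd, theta_fc)
    print(f"MA = {ma:.2f} -> irrigate: {rate * 4 * 3600:.1f} mm applied between 06:00 and 10:00 LT")
else:
    print(f"MA = {ma:.2f} -> no irrigation today")
```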
To simulate realistic Irr_rate values at the field scale, the θ_WP and θ_FC were chosen to be in line with the Cosby et al. (1984) soil parameters, that is, they are derived with the Campbell (1974) relation:
θ = θ_s (h / h_s)^(−1/b)    (5)
where θ [m³ m⁻³] is the volumetric soil water content for a defined pressure head h [m H₂O], and the water content at saturation θ_s, the air entry pressure h_s [m H₂O], and the shape parameter b are taken from the Noah-MP v3.6 parameter table. For all soil classes, θ_WP is derived for a pressure head of 150 m (pF 4.2). For field capacity, we used the water content at pF 2.5, which is a common reference, as introduced by Colman (1947). Note that the resulting θ_WP and θ_FC values differ from those in the default Noah-MP v3.6 parameter table created by Chen and Dudhia (2001). In their work, θ_WP and θ_FC were based on previous literature but then further adapted to intentionally, artificially increase the total available water (TAW). This was motivated by the need to indirectly account for the effect of subgrid soil moisture variability on ET dynamics in large-scale simulations. However, we noticed that those adapted values lead to unrealistic irrigation amounts per event compared to the field reference data. The default and updated parameters are presented in Appendix A. Note that the choice was made to use the Community Land Model (CLM) type soil hydraulic scheme (Oleson et al., 2004; Yang & Dickinson, 1996) to simulate the stomatal response to soil moisture, as this scheme is not a function of the parameters θ_WP and θ_FC but solely depends on soil matric potential parameters (Li et al., 2021), taken from the Noah-MP v3.6 parameter table.
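As a minimal illustration of how Equation 5 yields the two reference water contents, the sketch below evaluates the Campbell retention curve at pF 4.2 (h = 150 m) for θ_WP and at pF 2.5 (h ≈ 3.16 m) for θ_FC. The θ_s, h_s, and b values are placeholders for a loamy soil, not entries of the Noah-MP v3.6 parameter table.

```python
def campbell_theta(h, theta_s, h_s, b):
    """Campbell (1974) retention curve (Eq. 5): theta = theta_s * (h / h_s) ** (-1 / b)."""
    return theta_s * (h / h_s) ** (-1.0 / b)

# Placeholder soil parameters (assumed values, not the Noah-MP table).
theta_s, h_s, b = 0.44, 0.35, 5.3       # [m3/m3], [m H2O], [-]

h_wp = 150.0                            # pF 4.2 -> wilting point
h_fc = 10 ** 2.5 / 100.0                # pF 2.5 -> ~3.16 m H2O -> field capacity

theta_wp = campbell_theta(h_wp, theta_s, h_s, b)
theta_fc = campbell_theta(h_fc, theta_s, h_s, b)
print(f"theta_WP = {theta_wp:.3f} m3/m3, theta_FC = {theta_fc:.3f} m3/m3")
```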
Study Areas
Experiments are performed on three sites with different climates and soil textures, which consequently introduce variation in the irrigation volumes and number of applications per season. Table 1 presents the location of the study sites with their main characteristics (Köppen climate class, soil texture, and irrigation characteristics). The MA threshold was chosen to simulate realistic irrigation events compared to benchmark data at these sites. For instance, the measured sprinkler irrigation applications at the Italian site are typically around ∼20 mm/day. The simulated irrigation with an MA threshold of 0.70 [-] resulted in similar volumes per application.
Nature Run and Synthetic Observations
For all sites, the Noah-MP model with the irrigation module was used to create a nature run (also called the "truth") that provides reference data of soil moisture, LAI, and irrigation, along with all other variables. The model was run at a temporal resolution of 15 min and at a 0.01° × 0.01° lat-lon spatial resolution. The setup can be readily expanded to other domains. Meteorological data to force the nature run were extracted from the Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA2; Gelaro et al., 2017), which were remapped from a spatial resolution of 0.5° × 0.625° to the resolution of this study by bilinear interpolation. Soil texture parameters were taken from the 1-km Harmonized World Soil Database (HWSD v1.21). Irrigation was triggered when the MA reached the site-specific threshold MA_irr (Table 1). The nature simulation ran over the period from 2010 through 2019 after a model spin-up starting on 1 January 2000. The growing season was defined based on the GVF climatology (0.144° spatial resolution; Gutman & Ignatov, 1998) and is specified in Table 1.
Based on these Noah-MP simulations, synthetic γ⁰_VV observations were generated daily (at 06:00 LT) by propagating the SSM and LAI estimates through a WCM. The WCM describes the local soil and vegetation scattering processes through semi-empirical formulas (Attema & Ulaby, 1978), using a simple linear relationship between SSM and backscatter (Ulaby et al., 1978). The WCM calibration was done for each site separately, based on Noah-MP SSM and LAI simulations and real S1 γ⁰_VV observations, following Modanesi et al. (2021, 2022). The synthetic observations are assimilated after perturbing them with different levels of Gaussian white noise, with standard deviations ranging from 0 to 0.7 dB (see Section 2.4). Such observation errors could partly reflect sensor error and mild white noise errors in the observation operator (WCM), including roughness and incidence angle effects (for different orbits). Future sensitivity studies could, for example, explicitly consider differences between the "truth" and the assumed LSM and WCM.
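For illustration, a minimal WCM forward operator of this kind is sketched below: it maps SSM and LAI to γ⁰_VV using the Attema and Ulaby (1978) formulation with a linear soil term (Ulaby et al., 1978) and then perturbs the result with Gaussian white noise, as done for the synthetic observations. The parameter values (A, B, C, D), the incidence angle, and the use of LAI as the vegetation descriptor are assumptions for this sketch, not the site-calibrated models of this study.

```python
import numpy as np

def wcm_gamma0_vv(ssm, lai, theta_inc_deg, A, B, C, D):
    """Water Cloud Model: total backscatter [dB] from SSM [m3/m3] and LAI [m2/m2].

    sigma0_veg  = A * V * cos(theta) * (1 - T2)   vegetation contribution (linear units)
    T2          = exp(-2 * B * V / cos(theta))    two-way vegetation attenuation
    sigma0_soil = C + D * ssm                     bare-soil backscatter (dB, linear in SSM)
    """
    theta = np.deg2rad(theta_inc_deg)
    V = lai                                        # vegetation descriptor (assumption: LAI)
    T2 = np.exp(-2.0 * B * V / np.cos(theta))
    sigma_soil_lin = 10.0 ** ((C + D * ssm) / 10.0)
    sigma_veg_lin = A * V * np.cos(theta) * (1.0 - T2)
    return 10.0 * np.log10(sigma_veg_lin + T2 * sigma_soil_lin)

# Synthetic observation: propagate nature-run SSM/LAI and add Gaussian white noise.
rng = np.random.default_rng(0)
gamma0_true = wcm_gamma0_vv(ssm=0.25, lai=2.0, theta_inc_deg=37.0,
                            A=0.08, B=0.10, C=-18.0, D=30.0)   # placeholder parameters
gamma0_obs = gamma0_true + rng.normal(0.0, 0.3)                # sigma = 0.3 dB
print(f"true = {gamma0_true:.2f} dB, observed = {gamma0_obs:.2f} dB")
```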
Model-Only Run
An overview of all experiments is given in Figure 1. Model-only runs, also called open loop (OL), were performed with the same settings and inputs as the nature run, but with the introduction of forcing errors. Specifically, the meteorological forcings were altered in two different ways: (a) all forcings were kept identical to the nature run (MERRA2) except for precipitation, which was shifted in time (using 2000-2009 instead of 2010-2019), referred to as OL_H (high forcing error); and (b) all MERRA2 forcings were replaced with the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis version 5 (ERA5; Hersbach et al., 2020), referred to as OL_M (mild forcing error). For the first case, shifting the MERRA2 precipitation (also performed by Girotto et al., 2021) introduces long-term errors, since the interannual variability of the precipitation deviates from the truth. By contrast, the experiments with mild forcing errors use monthly and annual precipitation patterns that are closer to the truth, and the errors mostly represent short-term deviations. The OL experiments served as a reference to assess the skill gain of the DA experiments.
Default DA Experiments
In the default DA experiments (DA_def), daily synthetic γ⁰_VV observations, without any added noise, are assimilated to update the SSM at all time steps. An additional experiment assimilating an observation every other day with 0.3 dB of white noise is performed for all sites. The irrigation scheme is primarily triggered by the (updated) soil moisture deficit, similar to the OL. In line with the OL simulations, the DA_def experiments were also run for high (DA_def,H) and mild (DA_def,M) forcing errors. The aim of DA_def is to identify the strengths and weaknesses of the approach of Modanesi et al. (2022).
Buddy Check DA Experiments
The last series of experiments aims at testing our new buddy check approach (DA_BC), described in Section 2.5.2. Again, the DA was tested for the two setups of forcing error, that is, DA_BC,H and DA_BC,M. The method was tested for daily perfect (no white noise) observations, as well as for different overpass intervals (one observation every 1, 2, 3, and 7 days) and different white noise levels in the assimilated observations. White noise is added to the observations in time through a Gaussian distribution with mean zero and different standard deviations (σ): 0, 0.3, 0.5, and 0.7 dB. This range in total observation error (measurement + representativeness error) was chosen considering the radiometric accuracies of the S1 CSAR instruments and the fact that the observation operator is assumed to be perfectly calibrated (i.e., the calibrated WCM is the truth), which limits the representativeness error (van Leeuwen, 2015). When white noise was added to the signal, experiments were run for three different seeds of random noise. Note that all observation interval and noise combinations are only tested for the German site.
Ensembles
For all experiments (OL_H, OL_M, DA_def,H, DA_def,M, DA_BC,H, and DA_BC,M), a total of 24 ensemble members were used to estimate the forecast uncertainty. The ensembles were generated by perturbing the model forcings (rainfall, incident longwave radiation, incident shortwave radiation) and only the SSM state variable (i.e., the LAI is not additionally perturbed), with the same perturbation parameters for all experiments. For further details on the perturbation parameters, the reader can refer to Modanesi et al. (2022). It should be noted that, in contrast to the setup of Modanesi et al. (2022), a perturbation bias correction method was applied in this study. This adjustment was proposed by Ryu et al. (2009) to avoid unintended biases in the soil moisture forecast. To be able to use this option with the soil moisture deficit irrigation approach (OL and DA_def), the conditions under which irrigation is triggered required slight modifications. In the Noah-MP v3.6 LSM, irrigation is triggered by considering the MA of each ensemble member individually. This is not compatible with the perturbation bias correction option, as the correction can bring the soil moisture of several ensemble members to an irrigation state sooner, while other members have not yet reached the MA threshold. Therefore, in this study and for all experiments, irrigation was triggered based on the ensemble mean MA, and the same amount of irrigation water (also calculated from the ensemble mean) is applied to each ensemble member. This was already corrected in the irrigation module of the latest Noah-MP version (4.0.1) implemented in LIS. The observation error standard deviation was set to 1 dB, as in Modanesi et al. (2022).
Synthetic γ⁰_VV observations were assimilated to update the SSM, and all other variables via model propagation. An ensemble Kalman filter (EnKF) was employed to ingest the γ⁰_VV observations into an erroneous version of the Noah-MP LSM (i.e., different from that of the nature run, see Section 2.4). The "true" calibrated WCM was used as observation operator to produce observation predictions based on the erroneous LSM simulations of SSM and LAI. The update equation of the EnKF can be written as follows:
x⁺_i = x⁻_i + K_i [y_obs,i − h_i(x⁻_i)]    (6)
for which x⁺_i is the ensemble of updated model states at time step i, x⁻_i is the ensemble forecast state, y_obs,i is the assimilated observation (γ⁰_VV), K_i is the Kalman gain, and h_i(.) is the WCM observation operator. The innovation at time i (innov_i) is defined as the residual between the observed and the forecast γ⁰_VV and is expressed in decibels (dB):
innov_i = y_obs,i − h_i(x⁻_i)    (7)
Even though the observation predictions use both SSM and LAI as input, the update is limited to SSM here for simplicity (unlike Modanesi et al., 2022).
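A schematic version of this analysis step for a single γ⁰_VV observation and a scalar SSM state per member is given below. It uses the standard perturbed-observation EnKF with a sample-covariance Kalman gain; the linearised observation operator and all numerical values are assumptions for illustration, not the LIS implementation.

```python
import numpy as np

def enkf_update_scalar(x_ens, y_obs, obs_err_std, h):
    """EnKF analysis (Eq. 6) for an ensemble of scalar states and one observation.

    x_ens : (N,) ensemble of forecast SSM states
    y_obs : scalar observed backscatter [dB]
    h     : observation operator mapping state -> predicted backscatter [dB]
    """
    rng = np.random.default_rng(42)
    N = x_ens.size
    hx = np.array([h(x) for x in x_ens])               # observation predictions
    innov = y_obs - hx.mean()                           # ensemble-mean innovation (Eq. 7)

    # Sample covariances from ensemble anomalies.
    x_anom, hx_anom = x_ens - x_ens.mean(), hx - hx.mean()
    p_xy = (x_anom * hx_anom).sum() / (N - 1)
    p_yy = (hx_anom * hx_anom).sum() / (N - 1)
    K = p_xy / (p_yy + obs_err_std ** 2)                # Kalman gain

    # Perturbed-observation update of each member.
    y_pert = y_obs + rng.normal(0.0, obs_err_std, N)
    x_analysis = x_ens + K * (y_pert - hx)
    return x_analysis, innov

# Toy example: linearised operator gamma0 = -20 dB + 35 dB per m3/m3 of SSM (assumption).
h = lambda ssm: -20.0 + 35.0 * ssm
x_f = np.random.default_rng(1).normal(0.20, 0.03, 24)   # 24-member SSM forecast
x_a, innov = enkf_update_scalar(x_f, y_obs=-11.5, obs_err_std=1.0, h=h)
print(f"innovation = {innov:.2f} dB, mean SSM {x_f.mean():.3f} -> {x_a.mean():.3f}")
```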
Buddy Check Approach
Because Modanesi et al. (2022) reported that DA updates to a wetter soil moisture could possibly lead to missed simulated irrigation events, we tested a novel approach in this study, illustrated in Figure 2 for a case with daily observations. We still update SSM as in Equation 6. However, the triggering of irrigation simulation is now not merely based on a modeled soil moisture deficit, but instead always requires a high positive difference between the observed and forecast γ⁰_VV (innovation; Equation 7). In other words, the timing of the irrigation is now primarily observation-based. The new method builds on a buddy check approach, commonly used in atmospheric DA (e.g., Dee et al., 2001), avoiding the assimilation of outlier observations. In this case, outliers are detected when the γ⁰_VV innovation is suddenly large and positive (highlighted blue dots in Figure 2a). These sudden "jumps" in the innovations can be detected by looking at the difference between two successive innovations, that is, Δinnov_i, defined as:
Δinnov_i = innov_i − innov_{i−T}    (8)
where T is the overpass time interval [days]. For days in the growing season, irrigation is triggered when:
Δinnov_i > 2 SD_innov,n m    (9)
An observation is thus detected as an outlier (a) if Δinnov_i exceeds a multiple of the standard deviation of the innovations (SD_innov,n, always positive) computed over the antecedent n days (excluding the outliers), and (b) when the moisture conditions, expressed through the factor m, are likely to support irrigation.
By definition, the expected error standard deviation of Δinnov is √2 times that of the innovations. SD_innov,n is computed over a moving window to adaptively account for the natural variability of the model and observation errors. In this study, windows of 30 (SD_innov,30) and 60 days (SD_innov,60) were considered.
A strong positive Δinnov_i (outlier) hints at an unmodeled process, that is, irrigation. To ensure that irrigation is limited to realistic conditions and to avoid over-irrigation, the dynamic threshold is modulated by a conditional factor m (Equation 9), which is a function of the MA of the antecedent day and of the site MA_irr parameter. This rescaling factor gives a higher chance to an irrigation event when the soil is dry than when the rootzone moisture is close to field capacity (MA = 1). A fixed MA threshold for irrigation is thus avoided and replaced by an observation-based trigger; for example, a farmer can irrigate before the uncertain model reaches a critical MA.
Furthermore, unlike in the nature run (or the OL and DA_def cases), the DA_BC approach allows the MA to decrease freely, mimicking the reality that farmers might sometimes irrigate later than expected. The MA is illustrated in Figure 2c. For each irrigation event, the rootzone soil moisture is increased to field capacity, corresponding to MA = 1. However, this is not visible in the MA time series (Figure 2c), as the plotted MA is computed from the daily averaged soil moisture contents. Likewise, m is computed from the daily average MA of the antecedent day.
In short, an outlier innovation is not used to update the soil moisture state, but to correct the model input by adding water as irrigation. The amount of irrigation water is computed by the irrigation scheme coupled to the Noah-MP LSM described in Section 2.1, and it benefits from the updated soil moisture prior to the irrigation trigger. This method was implemented in the LIS framework itself, allowing an online modeling of irrigation with this approach. However, since irrigation starts at 06:00 LT in the irrigation scheme, and the synthetic γ⁰_VV observations were produced at the same time of day, irrigation is not yet part of the observations when they are checked for assimilation (and either assimilated or flagged as an outlier): the irrigation of day i is only visible in the observation of day i + 1 (see Figure 2). Hence, with this buddy check approach, irrigation events are always delayed by a day compared to the truth, and a negative innovation can be expected on the day following an irrigation event. This technical detail is tied to the observation and irrigation times, and could be overcome via postprocessing or model rewinding in future work.
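The trigger logic can be condensed into a few lines, as sketched below in Python. The moving-window standard deviation, the factor of 2, and the exclusion of previously flagged outliers follow the description above; the exact functional form of the MA-based rescaling factor m (here simply taken as the antecedent-day MA divided by MA_irr) is an illustrative assumption rather than the definition used in the study.

```python
import numpy as np

def buddy_check_trigger(innov_hist, innov_new, ma_prev, ma_irr,
                        outlier_mask, window=30, factor=2.0):
    """Decide whether a new innovation is an outlier that should trigger irrigation.

    innov_hist   : array of past innovations [dB], most recent last
    innov_new    : innovation of the current observation [dB]
    ma_prev      : antecedent-day moisture availability [-]
    ma_irr       : site irrigation threshold on MA [-]
    outlier_mask : boolean array, True where past innovations were flagged as outliers
    """
    d_innov = innov_new - innov_hist[-1]               # Eq. 8 (T = one observation interval)

    # Adaptive noise level: SD of the last `window` non-outlier innovations.
    recent = innov_hist[-window:][~outlier_mask[-window:]]
    sd_innov = np.std(recent, ddof=1)

    m = ma_prev / ma_irr        # assumed form of the MA rescaling (dry soil -> m < 1)
    threshold = factor * sd_innov * m                   # right-hand side of Eq. 9

    trigger = d_innov > threshold   # growing-season check omitted in this sketch
    return trigger, d_innov, threshold

# Toy example: a sudden +2 dB jump against ~0.4 dB of natural innovation variability.
rng = np.random.default_rng(3)
hist = rng.normal(0.0, 0.4, 60)
mask = np.zeros(60, dtype=bool)
trigger, d, thr = buddy_check_trigger(hist, innov_new=hist[-1] + 2.0,
                                      ma_prev=0.55, ma_irr=0.70, outlier_mask=mask)
print(f"delta_innov = {d:.2f} dB, threshold = {thr:.2f} dB, irrigate = {trigger}")
```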
Evaluation Metrics
The experiments were evaluated in terms of irrigation, soil moisture (through the MA), ET, and LAI. The metrics used in this study to evaluate these variables are the Pearson correlation (R), the percentage bias (PBIAS), and the root-mean-square difference (RMSD), defined as follows:
R = Σ_{n=1}^{N} (x_n − x̄)(y_n − ȳ) / [√(Σ_{n=1}^{N} (x_n − x̄)²) √(Σ_{n=1}^{N} (y_n − ȳ)²)]
PBIAS = 100 Σ_{n=1}^{N} (x_n − y_n) / Σ_{n=1}^{N} y_n
RMSD = √[(1/N) Σ_{n=1}^{N} (x_n − y_n)²]
where x_n is the value of the simulated land surface variable from the OL or DA experiment, y_n is the reference value (from the nature run), and N is the number of reference data points in time (n = 1, …, N). x̄ and ȳ represent the temporal mean values. The land surface variables are evaluated using a 3-day smoothing window to account for the technical 1-day delay of irrigation. MA is evaluated for all the growing seasons over the years 2010 through 2019 (10 years). LAI and ET are evaluated over the entire 10 years, and the R is computed on the anomalies (anomR), because these land surface variables have a clear climatological pattern that naturally results in high R values.
The normalized information contribution in R (NIC_R) and RMSD (NIC_RMSD) are commonly used to describe the improvement or degradation of the estimates compared to a model-only (OL) run:
NIC_R = (R_DA − R_OL) / (1 − R_OL)
NIC_RMSD = (RMSD_OL − RMSD_DA) / RMSD_OL
Positive NIC values correspond to an improvement, while negative NIC values indicate poorer estimates than those of the OL.
Irrigation is evaluated for the growing season only with the same metrics, considering different levels of smoothing where the antecedent n daily irrigation amounts are averaged. Smoothing windows of different lengths (n days) are considered to better assess for which time intervals (e.g., daily, weekly, monthly) the irrigation events can be accurately simulated. Additionally, binary metrics are considered to assess the ability to detect the irrigation events (in terms of timing). The probability of detection (POD) and the false alarm ratio (FAR) are computed on a daily basis and were defined by Roebber (2009) as follows:
POD = TP / (TP + FN)
FAR = FP / (FP + TP)
where TP, FN, and FP are the true positive (detected), false negative (missed), and false positive (false) irrigation events. Both metrics range from 0 to 1 and should, in an ideal case, equal 1 for the POD and 0 for the FAR. Note that POD and FAR were computed on the daily irrigation estimates ±1 day, therefore accounting for the technical delay of irrigation with the buddy check approach (see Section 2.5.2).
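The metrics above translate directly into code; the sketch below is a plain transcription of these definitions (with the NIC normalised against the OL as described) plus a toy usage example, and is not the evaluation code of the study.

```python
import numpy as np

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

def pbias(x, y):
    """Percentage bias [%] of simulation x against reference y."""
    return 100.0 * np.sum(x - y) / np.sum(y)

def rmsd(x, y):
    return np.sqrt(np.mean((x - y) ** 2))

def nic_r(r_da, r_ol):
    """Normalized information contribution in R: 1 = perfect, < 0 = worse than the OL."""
    return (r_da - r_ol) / (1.0 - r_ol)

def nic_rmsd(rmsd_da, rmsd_ol):
    return (rmsd_ol - rmsd_da) / rmsd_ol

def pod_far(sim_events, true_events):
    """Probability of detection and false alarm ratio from daily boolean event series."""
    tp = np.sum(sim_events & true_events)
    fn = np.sum(~sim_events & true_events)
    fp = np.sum(sim_events & ~true_events)
    return tp / (tp + fn), fp / (tp + fp)

# Toy usage with random daily irrigation flags (illustrative only).
rng = np.random.default_rng(7)
truth = rng.random(365) < 0.05
sim = truth.copy()
sim[rng.random(365) < 0.02] = True        # add a few false alarms
print("POD, FAR:", pod_far(sim, truth))
```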
Results
First, the performance of the default DA_def and the new DA_BC in terms of irrigation and MA is shown for the three different sites, with daily assimilation of perfect observations and with assimilation every other day of noisy observations. The impact of both DA approaches on LAI and ET is also presented. Second, a detailed sensitivity analysis of the results to the observation noise and frequency, and to the smoothing level, is performed for the German site only (Sections 3.2 and 3.3).
Irrigation and Soil Moisture
In Figure 3, irrigation and MA estimates are evaluated for the three sites and for four different experiments with high model errors: DA_def,H and DA_BC,H, both assimilating daily perfect observations (1 day, 0 dB) and assimilating observations every 2 days containing some white noise (2 days, 0.3 dB). The DA_BC experiments used a 30-day window for the outlier detection threshold (SD_innov,30). The results are presented in terms of NIC_R and NIC_RMSD, that is, relative to the corresponding model-only run (OL_H) of each site, and for two levels of smoothing (3 days, 14 days).
For irrigation (Figures 3a-3d), the DA_BC outperforms the DA_def for two of the three sites when daily perfect observations are available. DA_def shows the best short-term performance at the Italian site, where the irrigation application volumes are smaller (Table 1). When averaged over 14 days, the performance of DA_def increases in the scenario with high model errors. When noisy observations are used every other day in DA_BC, the NICs become lower and even negative, with a very poor performance in Spain. In this drier region with frequent (weekly) irrigation, the events become harder to detect when only a limited number of observations are available between two consecutive irrigation applications. When an irrigation event is missed, the soil can quickly dry out, leading to severe dry biases, conditions in which outliers become hard to detect since the innovations remain large and positive.
For the MA (Figures 3e-3h), the NIC_R values for the DA_BC experiments are mostly larger than for the irrigation. This can be explained by the fact that the DA_BC also catches the true rainfall events that were not in the meteorological forcings of the DA_BC run. The water signal contained in the synthetic observation can be attributed to rainfall or irrigation, and therefore corrects the MA but does not necessarily improve the irrigation estimates. Less frequent and noisier observations again limit the added value of the satellite data.
The effect of the model error is summarized in Table 2 and Appendix B. Appendix B shows that, compared to the high model error case (DA_def,H), DA_def performs relatively worse under mild model errors, so that DA_BC always outperforms DA_def when assimilating daily perfect observations. The irrigation detection skill (without considering volumes) of DA_BC assimilating daily observations without noise is shown in Table 2 in terms of POD and FAR, for high and mild model errors. Whereas DA_def does not consistently improve the POD for all sites, DA_BC significantly increases the POD for all sites relative to the OL. Overall, the POD is higher under a mild model error and the FAR is decreased. Falsely detected irrigation events are most common in Germany. In this region, the rainfall is typically larger compared to the other, drier sites, and the total number of irrigation events over the 10-year experiment is significantly smaller than for the other sites (56 vs. >100), allowing the proportion of falsely detected irrigation events to increase more rapidly.
Impact on Other Land Surface Variables
The quality of the irrigation estimates has an impact on the other land surface variables. Figure 4 shows the anomR values between 3-day smoothed results for selected OL and DA experiments, relative to the truth. The DA_BC results are shown for an outlier threshold based on SD_innov,30. First, the difference between the two forcing errors is reflected in the lower anomR values for OL_H (Figures 4a and 4c) than for OL_M (Figures 4b and 4d), suggesting less room for improvement in the latter case. DA_BC using daily observations without noise (DA_BC 1 day, 0 dB) is generally superior or equivalent to DA_def (with daily, perfect observations). The anomR values for DA_BC with less frequent and noisy observations vary across sites, with a good performance for Germany and poorer or equivalent anomR values compared to the OL for Italy and Spain. LAI is less impacted by irrigation over the German site due to the choice of the soil parameters and of the model for the stomatal response to water stress (see Section 2.1). On this silt loam site, the vegetation is less sensitive to water and MA levels are generally kept higher. The LAI is more responsive to missed irrigation events (a) at sites with, for example, a sandy loam texture (such as the Italian site), and (b) in drier and warmer regions, where crop growth mainly relies on irrigation, in contrast to more temperate regions, such as Germany, where irrigation is used to supplement the precipitation.
DA def : Soil Moisture Updating Can Limit Irrigation Estimation
In the DA_def experiments, irrigation is primarily triggered when the modeled (or analysis) soil moisture deficit exceeds a threshold. The limitation of this method is obvious at the German site and is illustrated in Figure 5a. First, it can be seen that the γ⁰_VV assimilation brings the soil moisture, and consequently the MA, to a state that is closer to the nature run, sometimes correctly moving irrigation events closer to the nature run compared to the OL (e.g., in early July). In contrast, another true irrigation event (in June) is delayed in DA_def,H compared to the OL. The MA of DA_def,H does not reach the 0.60 threshold before the true event, and the irrigation simulation is consequently prevented by updates to wetter soil moisture conditions as a result of large positive innovations (Figure 5). This effect can also clearly be observed in August, where neither of the two irrigation events is simulated based on improved initial conditions; positive soil moisture increments are applied instead. More generally, over the 10-year experiment, the DA irrigation events that are not estimated before the true irrigation are typically delayed or skipped, resulting in low POD values for the daily irrigation estimates (±1 day; Table 2).
The missed or delayed irrigation events can be identified in the innovation time series. Large innovations occur on true irrigation days (Figure 5a), highlighting that a process is missed by the model. Instead of avoiding or delaying the event in a DA run, DA_BC can identify and trigger such irrigation events, as described next.
Assimilation of Daily Perfect Backscatter Observations
The results of the new buddy check approach are first presented for daily assimilation of perfect observations and using a 30-day window to compute the outlier threshold (SD_innov,30) at the German site. Figure 6 shows the innovations and irrigation results of DA_BC,H for the period 2014-2016 (panel a), with the associated MA time series for 2015 in panel b. The DA_BC,H irrigation estimates (dashed blue bars) capture all of the true irrigation events (full black lines) over these 3 years. The falsely detected irrigation events correspond to true rainfall events, which lead to an occasional large positive innovation so that irrigation is consequently triggered (if the MA is not too high). This shows that, when the forcings are erroneous (i.e., rainfall is missed in the DA run), irrigation cannot be dissociated from rainfall in the γ⁰_VV signal. By capturing these true rainfall events, the irrigation estimation R may deteriorate, but in turn the soil moisture follows the nature run more closely (here represented by the MA in Figure 6b). Note that the false irrigation events can sometimes lead to a slight wet bias in the MA. Over the whole 10-year experiment, irrigation estimates are strongly improved by DA_BC, significantly increasing the POD of the irrigation estimates and also decreasing the FAR (Table 2).
Effect of Irrigation Smoothing
For DA_def, NIC values are mostly positive but improvements remain limited. When the model error is high (DA_def,H), NIC_R values increase with smoothing (peaking from bimonthly estimates onwards, Figure 7a). However, for the NIC_RMSD, even yearly irrigation estimates remain poor (Figure 7b). When the model error is mild (DA_def,M), NICs stay below 0.2 for all smoothing levels. This can be explained by the difference in forcings. The ERA5 meteorology follows the seasonal patterns of the MERRA2 meteorology used for the truth; therefore, the seasonal amount of irrigation from a model-only OL_M run is close to the nature run, as indicated by the high R and low RMSD values, especially for the longer smoothing windows in Figures 7c and 7d.
For mild model errors, DA_BC,M is superior to DA_def,M for all smoothing intervals. When model errors are high, DA_def,H shows a better performance for longer smoothing intervals, likely due to the detection of true rainfall events in DA_BC,H (expressed in the high FAR value). Between a 3-day and a monthly smoothing interval, R and RMSD values are improved by more than 30% and 10%, respectively, for all forcing errors. The poor effect on irrigation quantification at a daily scale can be attributed to the timing of the observation and irrigation application (see the technical 1-day delay, Section 2.5.2).
Effect of Observation Interval and White Noise
DA_BC was tested for different observation intervals (1, 2, 3, and 7 days) and observation noise levels (0, 0.3, 0.5, and 0.7 dB) at the German site, to assess which observation configuration would be ideally suited for irrigation estimation. The 14-day smoothed daily DA_BC irrigation estimates are evaluated relative to the OL through the NIC_R and NIC_RMSD. The PBIAS (difference between the daily simulated and nature irrigation relative to the nature irrigation, in %) is also assessed to indicate whether there is a general over- or underestimation of the irrigation amounts. The results for the DA_BC,H and DA_BC,M experiments are shown in Figure 8. Two thresholds were tested to trigger irrigation, using two different window sizes for the calculation of SD_innov: 30 days (SD_innov,30) and 60 days (SD_innov,60). Note that the experiments assimilating observations with a 7-day interval could only be performed with a 60-day window in order to compute a standard deviation with enough data points. This longest observation interval is also the reason why 14-day irrigation estimates are shown, and not estimates smoothed over shorter intervals, which resulted in slightly higher NICs (Figure 7). All experiments with white noise in the observation signal were performed for three random seeds of added noise, and the average metric is presented.
The performance of the buddy check approach degrades with longer observation intervals, reaching negative NICs for the assimilation of weekly observations (Figures 8a, 8b, 8d, and 8e). As already shown in Figure 3, the DA_BC does not always result in a higher performance compared to the DA_def. For the Italian site (shown in Figure 3), DA_def already outperforms DA_BC when observations are not assimilated on a daily basis. With frequent observations, there is a slight positive bias (Figures 8c and 8f), meaning that more irrigation is simulated compared to the nature run. This effect can be attributed to the detection of true rainfall events in addition to the irrigation events. Less frequent observations lead to stronger underestimations of the irrigation amounts, as shown by the negative PBIAS (Figure 8f). This has consequences for the LAI (and other land surface variables), as already demonstrated for the Spanish and Italian sites in Section 3.1.2. The largest underestimation is found for DA_BC,M with a 7-day observation interval and noise. Counter-intuitively, underestimations are less severe for DA_BC,H, likely due to the 14-day smoothing window, in which false irrigation events (more frequent for DA_BC,H) compensate for the missed true events. This compensation results in a larger MA improvement for DA_BC,H than for DA_BC,M (shown in Appendix C).
The white noise in the signal strongly affects the performance under all observation intervals by increasing the number of missed irrigation events, as shown by the decreasing PBIAS when noise is added (Figures 8c and 8f).
Figure 8. NIC (a, b, d, e) and PBIAS [%] (c, f) of 14-day irrigation amount estimates for the different DA_BC experiments, for 1-, 2-, 3-, and 7-day observation intervals, at the German site. The colors correspond to the level of Gaussian white noise added to the signal (σ, dB), and for all experiments with noise, the mean metric is taken from the three runs. The rows are associated with the window size taken to compute the standard deviation for the irrigation threshold, where (a-c) are based on a 30-day window (SD_innov,30), and (d-f) relate to experiments with a 60-day window (SD_innov,60). Dots correspond to DA_BC,H and crosses to DA_BC,M.
Sorted from the most to the least important factor, the added noise tends to degrade the ability to detect an outlier (and hence an irrigation event) by (a) increasing the SD_innov,n, (b) occasionally decreasing the irrigation signal in the observation, affecting the Δinnov, and (c) resulting in a biased MA at a certain time step (wetter or drier), affecting the rescaling of the model error (2 SD_innov). The threshold for the outlier is strongly affected, increasing (as 2 SD_innov,30 m) from 1 dB for perfect observations to 1.7 dB for observations with 0.7 dB white noise for daily DA_BC,H. For reasonable levels of noise (≤0.3 dB) and frequent observations, the RMSD is reduced by at least 15% for both forcing errors compared to the OL, and the R is increased by at least 25% (Figures 8d and 8e).
Increasing the window size for the computation of SD_innov did not significantly alter the NIC values. However, the PBIAS is overall decreased, meaning that the irrigation overestimation is more limited for shorter observation intervals (1 and 2 days), but this also results in more severe underestimations for sparse observations (Figure 8e). More irrigation (or true rainfall) events remain undetected because the SD_innov considers innovations up to 60 days before the event. This is problematic especially at the beginning of the season, as the natural variation in the innovations is larger in late winter and spring (wet season), which is then included in the computation of SD_innov,60. A window size of 30 days seems more appropriate, in the sense that SD_innov should capture the natural variation of the innovations, which is mainly induced by forcing errors but also by vegetation.
Novel Approach to Estimate Irrigation in a DA System
The new method shows good performance over the three sites when daily observations are available. However, when the observations become sparse in time, the skill of the buddy check approach is highly influenced by the irrigation frequency and volumes (mainly defined by the climate, the soil texture, and the irrigation threshold in this synthetic setup). The poor irrigation estimation skill over the sites in Italy and Spain was mainly related to the high frequency of irrigation events (up to one application every 3 days). In these regions, the system could greatly benefit from model rewinding to avoid missing irrigation events when the previous estimation is delayed. Another determining factor is the level of white noise in the observations, affecting the irrigation detection skill of the presented method, especially where irrigation application volumes are small (∼20 mm), such as for the Italian site. In this region, DA_def showed large improvements (Figure 3), likely related to the fact that positive innovations following an irrigation event are smaller, limiting the shortcoming of DA_def (explained in Section 3.2).
In Germany, the buddy check approach still works reasonably well when observations are available every 2 or 3 days, corresponding to the initial revisit interval of the S1 constellation over Europe. Weekly observations would not be sufficient to guarantee the irrigation detection skill and lead to severe underestimations of irrigation water. The failure of S1-B halved the number of available observations (one every ∼4 days in Europe, 12 days elsewhere), making the buddy check approach (used within an S1 DA system) unsuitable outside of Europe until the launch of the next satellite (S1-C, expected in the near future). Other synthetic studies assimilating S1-related SSM products (Abolafia-Rosenzweig et al., 2019; Jalilvand et al., 2023; Ouaadi et al., 2021; Zappa et al., 2022) also highlighted the importance of frequent observations. Zappa et al. (2022) reported large irrigation underestimations when observations are too sparse in time. Similar to our study using Δinnov to detect outliers, their approach is based on observed differences in soil moisture (ΔSM). Both methods are observation-based, making them sensitive to observation noise, and to underestimation of the irrigation amounts with less frequent observations, because the irrigation signal fades away in the observations with time. In short, our buddy check approach underestimates irrigation more for infrequent observations or when a large time window is used to compute the Δinnov threshold (SD_innov,60), but the remaining detected irrigation events are identified accurately in time and with the correct amounts of irrigation water, or they compensate for missed rainfall.
Two features of the new approach are worth emphasizing. First, the adaptive outlier threshold accommodates both levels of forcing error, which correspond to precipitation RMSD values of around 5 mm day⁻¹ and 2 mm day⁻¹, respectively (also for the growing season): for daily observations, good performances were found across the three sites without introducing a new location-dependent parameter. Second, using the MA to rescale the outlier threshold is more realistic than the use of a fixed threshold. For irrigation to be triggered, the MA can be off from the truth and an irrigation event can still be detected if the signal is strong enough in the innovations. This is more in line with a real-world situation, where irrigation is not necessarily determined by a fixed soil moisture deficit value, but depends on the agricultural practices and more generally on the water availability (Nie et al., 2021).
Limitations and Opportunities
The main limitation of the buddy check approach is the missing of irrigation events, especially for longer observation intervals or in drier regions where the soil moisture dries out rapidly and irrigation events are frequent. A first improvement would be to implement a model rewind system to minimize the technical 1-day delay of irrigation events. However, even with this development, irrigation events will still be missed if satellite observations are not frequent enough. We could then consider a hybrid DA system, where the buddy check approach is supplemented with a pure MA-based irrigation model trigger, if the observation interval exceeds the surface soil memory of an irrigation event. The missed irrigation events have a strong impact on vegetation in drier regions, where LAI strongly declines. This issue could be tackled by jointly updating SSM and LAI, as done by Modanesi et al. (2022). Vegetation updating would require the assimilation of backscatter in cross-polarization (γ⁰_VH, or the ratio γ⁰_VH/γ⁰_VV), as this signal has been shown to be more affected by vegetation (Patel et al., 2006; Vreugdenhil et al., 2018). A joint assimilation of γ⁰_VV and γ⁰_VH would require a combination of both innovations in our buddy check approach.
Future system developments could involve leveraging the ensembles within the EnKF. The ensemble spread before assimilation can serve as a basis to determine whether irrigation should be triggered in the system. For example, if the observation plus its associated uncertainty falls outside the forecast ensemble, irrigation could be applied. An alternative development entails refraining from triggering irrigation for all ensemble members simultaneously and rather applying irrigation to each member individually (similarly to Modanesi et al., 2022), allowing for irrigation uncertainty estimates. However, this alternative would not be compatible with the perturbation bias correction option proposed by Ryu et al. (2009), as described in Section 2.4.4. In general, the use of ensemble information to trigger or estimate irrigation will require a careful optimization of the ensemble perturbations.
The newly proposed buddy check approach could also be used to estimate irrigation with other (e.g., particle) filters or with observations other than backscatter data. High-resolution L-band soil moisture data would be interesting to guide the estimation of irrigation amounts. Such data can be obtained from, for example, downscaled SMOS or AMSR-E retrievals with DisPATCh (Malbéteau et al., 2016; Merlin et al., 2013) or from future missions such as the Copernicus ROSE-L (Davidson & Furnell, 2021) and SMOS-HR (Rodríguez-Fernández et al., 2019). Nevertheless, the outcome would strongly depend on the quality of the retrievals, and an appropriate bias treatment is needed to avoid the attenuation of the irrigation signal (Kumar et al., 2015; Kwon et al., 2022). Instead of changing the type of assimilated observations, the buddy check method could also be used in systems with other models, possibly crop models that are originally designed for agriculture, offering new opportunities in such fields of application.
Future Real World Experiment
The success of a real-world DA experiment with the buddy check approach will depend on the observability of irrigation and on model-related limitations. First, satellite observations need to be available at high spatial and temporal resolution, and the actual type of irrigation method needs to be detectable. There is a chance that irrigation is applied on consecutive days over different fractions of the observed satellite footprint (one or a few fields receiving irrigation per day). In that case, "jumps" in the innovations will not be detected and the backscatter signal is likely to remain high over these consecutive days. Future research is necessary to counter this limitation, or higher-resolution input data and observations are needed. Similarly, some types of irrigation will be easy to detect, whereas others will not. Punctual, large sprinkler events, as simulated in this study, are more easily detectable than, for example, drip irrigation, which is typically applied in smaller amounts and more frequently.
Second, the LSM and WCM are assumed to be perfect in our synthetic study, but the model-related limitations already mentioned in Modanesi et al. (2022) will be important when going to a real-world experiment. Concerning the LSM, the quality of the input data is crucial. Erroneous crop rooting depth, soil texture, or irrigation fraction would automatically lead to a bias in the irrigation amounts, since these factors directly influence the volumes of irrigation water (see the LSM equations in Section 2.1). Though more flexible than a rigid soil moisture threshold to trigger irrigation, the MA_irr parameter, used to rescale the outlier threshold through the factor m (Equation 9), will need some calibration, as this value varies across regions. Likely, the MA_irr parameter will determine the sensitivity of the model to true rainfall events, as the latter cannot be distinguished from true irrigation events (if they are of the same magnitude). However, this shortcoming benefits the DA_BC in its ability to improve the MA and consequently the water balance (when frequent observations are available). The observation operator (e.g., WCM) could also pose a limitation when directly assimilating microwave signals. Rather than calibrating an empirical model, novel machine learning-based observation operators could improve the system (de Roos et al., 2023; Rains et al., 2022), but in both cases, the observation operator training might suffer from an inaccurate match between irrigation simulation (and its effect on soil moisture) and irrigation observed in the satellite signals.
In short, a controlled field-scale experiment will be needed to bring the buddy check approach to the real world and to consider the further developments discussed above. Compared to DA_def, the new method relies on observations to trigger irrigation, which has the potential to improve field-scale irrigation estimates by estimating the timing and volumes more accurately, but it is likely not suited for coarser-resolution applications, or when the interval between two observations becomes too large. Even if DA_def presents a strong shortcoming, it will likely be better able to estimate irrigation at the regional level when irrigation amounts are aggregated over biweekly or monthly time scales, or when observations are not frequent compared to the irrigation frequency.
Conclusions
Irrigation detection and quantification are major challenges. New methods based on remote sensing data are now emerging, including the use of microwave observations in combination with models through data assimilation (DA). Modanesi et al. (2022) assimilated Sentinel-1 backscatter observations into Noah-MP version 3.6, coupled to a sprinkler irrigation scheme. The soil moisture and vegetation states were updated to set better initial conditions to trigger irrigation simulation, but the system also had limitations, especially when large updates to wetter conditions delayed or completely inhibited the process-based modeling of irrigation events.
In this study, we conducted synthetic experiments for the assimilation of backscatter observations (γ 0 VV) to update soil moisture in a system with erroneous meteorological forcings. After illustrating the shortcoming of blindly assimilating all data for state updating, a new method was developed based on a buddy check approach, in which unexpected changes in innovations (observation minus forecast) are detected and not assimilated. The method still updates the land surface to guarantee the best possible initial conditions to estimate irrigation amounts, but when an outlier in the Δinnov (difference between two consecutive innovations) is detected, an unmodeled process is assumed and the large innovation is not assimilated. Consequently, the "missed" irrigation is triggered if the rootzone soil moisture is dry and the day falls within the growing season. The new method is now primarily observation-based and better adapts to the timing of real irrigation events. The threshold value to identify outlier innovations was made dependent on the locally and temporally varying errors in the system. The method was tested on three different sites (in Germany, Italy, and Spain) with different climates, soil textures, and irrigation thresholds. A detailed evaluation was then performed for the German site, where the method was tested for several observation intervals and noise levels. The main results can be summarized as follows:
1. When daily observations are available with reasonable levels of noise (≤0.3 dB), the method shows good performance for all three study sites. The probability of irrigation detection more than doubles for two sites when assimilating perfect daily observations. When the observations become sparser in time, or when they contain larger noise levels, the performance decreases rapidly for regions where irrigation events are very frequent (weekly or less), or when application rates are lower.
2. For biweekly aggregated irrigation estimates in Germany, and compared to a model-only run, the new DA method reaches about 40% and 20% of improvement in terms of Pearson R and RMSD, respectively, when frequent observations (daily or every other day) are assimilated. From a 3-day observation interval onward, the performance degrades but remains reasonable (NICs > 10%), and for weekly observations there is no improvement compared to a model-only run (NICs close to zero or negative).
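To make the buddy-check trigger logic concrete, the sketch below translates the steps described above into minimal Python. It is an illustration only, not the study's LIS/Noah-MP implementation: the exact rescaling of the outlier threshold (Equation 10) is not reproduced in this section, so a simple multiplicative form is assumed, and the variable names, the tuning factor k, and the dryness criterion used to trigger irrigation are hypothetical.

```python
import numpy as np

def rescale_threshold(sd_innov, ma_prev, ma_irr, k=1.0):
    # Placeholder for Equation 10: the outlier threshold is the rolling SD of the
    # innovations rescaled by the antecedent moisture availability relative to
    # MA_irr. The exact functional form is not reproduced here, so a simple
    # linear scaling is assumed purely for illustration.
    return k * sd_innov * (ma_prev / ma_irr)

def buddy_check(innov, ma_prev, growing_season, ma_irr, k=1.0, window=30):
    """Daily buddy-check logic (illustrative only).

    innov          : daily innovations (observation minus forecast) [dB],
                     np.nan on days without an observation
    ma_prev        : antecedent-day moisture availability [-]
    growing_season : boolean flag per day
    ma_irr         : site-specific moisture-availability parameter
    Returns boolean arrays: whether to assimilate, and whether to trigger irrigation.
    """
    n = len(innov)
    assimilate = np.zeros(n, dtype=bool)
    trigger = np.zeros(n, dtype=bool)

    for t in range(1, n):
        if np.isnan(innov[t]) or np.isnan(innov[t - 1]):
            continue
        d_innov = innov[t] - innov[t - 1]           # jump between consecutive innovations

        past = innov[max(0, t - window):t]
        valid = past[~np.isnan(past)]
        if valid.size < 3:                          # not enough history yet
            assimilate[t] = True
            continue
        threshold = rescale_threshold(valid.std(), ma_prev[t], ma_irr, k)

        if d_innov > threshold:
            # Unexpected positive jump: assume an unmodeled (irrigation) event.
            # Do not assimilate the large innovation; trigger irrigation instead
            # if the root zone is dry and the day lies in the growing season.
            assimilate[t] = False
            trigger[t] = growing_season[t] and (ma_prev[t] < ma_irr)
        else:
            assimilate[t] = True
    return assimilate, trigger
```

In a real system, the two flags would feed back into the filter update and the sprinkler scheme, respectively, rather than being returned as arrays.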
Figure 1 .
Figure 1. Overview of the experiments. All experiments are repeated for high forcing error (OL H, DA def,H, DA BC,H) and mild forcing error (OL M, DA def,M, DA BC,M). All combinations of overpass and white noise for DA BC are tested for the German site only. For the DA BC over the other sites and DA def, two experiments are performed: (1) assimilating daily perfect γ 0 VV
Figure 2 .
Figure 2. Illustration of the buddy check approach for the Spanish site under a mild model error (DA BC,M). (a) Irrigation time series [mm day⁻¹] of the nature run and the DA run with the buddy check approach, with the innovations [dB] in gray. (b) Δinnov [dB] time series with the dynamic threshold (based on a 30-day window SD innov,30). (c) Moisture availability (MA [-]) of the antecedent day of the nature run (and corresponding MA irr threshold in gray) and of the DA buddy check. The shaded blue stripes highlight the irrigation events. The green arrows illustrate (1) the rescaling of the threshold by the MA and (2) the outlier that triggers irrigation. The blue arrow (3) links the irrigation event to the increase in MA.
Figure 3 .
Figure 3. NIC R and NIC RMSD for irrigation [mm day⁻¹] (a-d) and moisture availability [-] (e-h) smoothed over two time windows (3 and 14 days; columns). DA def,H and DA BC,H experiments are presented in green and blue, respectively, with full bars corresponding to the assimilation of daily observations without noise (1 day, 0 dB), and the stippled bars presenting experiments assimilating observations every 2 days with 0.3 dB of white noise (2 days, 0.3 dB). For the experiments with white noise, the bar represents the mean of the metric across the three seeds and the whiskers extend to the minimum and maximum NIC. The metrics (R and RMSD) of the OL H are shown in the plot for each domain.
some irrigation events of the nature run ("truth"), OL H, and DA def,H along with the γ 0 VV innovations. The corresponding MA time series are shown in Figure 5b with the MA irr threshold of 0.60 [-].
Figure 4 .
Figure 4. anomR [-] for 3-daily LAI [-] (a, b) and ET [mm day⁻¹] (c, d) for the entire simulation period. The columns correspond to the model error (high or mild). DA def and DA BC experiments are presented in green and blue, respectively, with full bars corresponding to the assimilation of daily observations without noise (1 day, 0 dB), and the stippled bars presenting experiments assimilating observations every 2 days with 0.3 dB of white noise (2 days, 0.3 dB). For the experiments with white noise, the bar represents the mean of the metric across the three seeds and the whiskers extend to the minimum and maximum anomR.
Figure 7
Figure 7 further compares the performance of daily DA BC and DA def with perfect observations, as a function of model error and temporal smoothing at the German site. The NIC R and NIC RMSD values are shown after smoothing the daily irrigation estimates with various time windows. For the DA def experiments (green), NIC
Figure 6 .
Figure 6. (a) Time series for the German site of the irrigation [mm day⁻¹] of the nature run, DA def,H, and DA BC,H for daily observations without noise and an outlier threshold based on SD innov,30. Innovations of DA BC,H are shown in the background in gray. (b) Time series for the growing season of 2015 (shaded in blue in a) of the moisture availability (MA [-]) of the nature run, DA def,H, and DA BC,H for the irrigation months (April through October). In (b), the full line corresponds to the threshold for the nature run and DA def,H (0.60 [-]).
Figure 7 .
Figure 7. (a) NIC R [-] and (b) NIC RMSD [-] for irrigation smoothed with different window sizes for the German site. (c) Pearson R [-] and (d) RMSD [mm day⁻¹] of the OL. The dots and the crosses correspond to the high and mild forcing error experiments, respectively. DA BC were performed by assimilating daily perfect observations. All metrics were computed on the irrigation months only (April through October) over the 10-year experiment.
Figure 8 .
Figure 8. (a, d) NIC R [-], (b, e) NIC RMSD [-], and (c, f) PBIAS [%] of 14-day irrigation amount estimates for the different DA BC, for 1, 2, 3, and 7-day observation intervals, for the German site. The colors correspond to the level of Gaussian white noise added to the signal (σ, dB), and for all experiments with noise, the mean metric is taken from the three runs. The rows are associated with the window size taken to compute the standard deviation for the irrigation threshold, where (a-c) are based on a 30-day window (SD innov,30) and (d-f) relate to experiments with a 60-day window (SD innov,60). Dots correspond to DA BC,H and crosses to DA BC,M.
The main advantages of the new buddy check approach are (a) the flexibility of the outlier detection method to different situations (and different errors), and (b) the substitution of the strict model-based soil moisture threshold for irrigation by a rescaling of the outlier threshold, making the irrigation estimates more in line with what happens in reality (as observed by the satellite). First, the standard deviation of the γ 0 VV innovations for the DA H and DA M experiments reaches average values of 0.8 and 0.6 dB, respectively, during the growing season for the German site. These values are in line with the magnitude of the expected high and mild forcing errors, with
Figure C1 .
Figure C1. (a, d) NIC R [-], (b, e) NIC RMSD [-], and (c, f) PBIAS [%] of 14-day MA estimates for the different DA BC, for 1, 2, 3, and 7-day observation intervals, for the German site. The colors correspond to the level of Gaussian white noise added to the signal (σ, dB), and for all experiments with noise, the mean metric is taken from the three runs. The rows are associated with the window size taken to compute the standard deviation for the irrigation threshold, where (a-c) are based on a 30-day window (SD innov,30) and (d-f) relate to experiments with a 60-day window (SD innov,60). Dots correspond to DA BC,H and crosses to DA BC,M.
Table 1
Coordinates (Lat-Lon) of the Different Sites With the Corresponding Köppen Climate Class, Soil Texture, Chosen Irrigation Threshold (MA irr [-]), Average Irrigation Interval [Days] Over the Summer Months (June, July, August), and Average Irrigation Application (Irr rate [mm day⁻¹]) Over the Growing Season, Based on the Nature Run
Table 2
POD and FAR for Daily Irrigation Estimates ±1 Day for High Model Error (OL H, DA def,H, DA BC,H) and Mild Model Error (OL M, DA def,M, DA BC,M) Experiments | 2024-03-24T15:08:19.778Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "b449207fdf38227fedc95cbd8b3a3450c5da6260",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2023MS003661",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "bb0090ab55d22cebdeebc29c9242cd16a91d18a7",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
55311099 | pes2o/s2orc | v3-fos-license | Moroccan Cedar softwood study : Application of FT-Raman spectroscopy
As a non-destructive technique, FT-Raman spectroscopy has been used to study the molecular structure and monitor changes in the composition of the carbohydrate and lignin components of wood materials. For this purpose, four samples originating from Moroccan cedar wood were analyzed. From the FT-Raman spectra, carbohydrates were identified by the bands at 898, 1098, 1123 and 1456 cm⁻¹, while the lignin matrix was evaluated by the bands at 1657, 1598 and 1267 cm⁻¹. The decrease of the intensities of these feature bands reflects the effects of the natural degradation phenomenon and shows evidence of chemical changes and rapid deterioration of these constituents with exposure time to the natural degradation process. Thus, FT-Raman has the potential to be a crucial tool to characterize composite materials and to evaluate the chemical changes occurring in their structures under the influence of physico-chemical or biological attack, without causing any damage to the wood surfaces or their supports.
Introduction
Moroccan softwood, one of the most abundant materials on earth, has provided a resource of great value for construction and the production of objects since antiquity. It is extensively used for many applications, such as artworks, the packaging industry, shipbuilding, furniture, paper pulp and eating utensils.
These materials have a heterogeneous and complex structure, consisting primarily of cellulose, lignin and hemicellulose components, a group of materials with a well-known susceptibility to natural deterioration. Exposure to combined conditions of physical, chemical and microbial attack, such as ultraviolet (UV) light, solar irradiation, moisture (humidity), temperature and fungi, can cause molecular degradation of their main components. This results in a loss of fiber strength and rigidity, manifested in lower mechanical stability that may sometimes lead to full disintegration of wooden materials [1] and, consequently, to a loss of cultural heritage [2].
Hence, an accurate characterization and examination of the different changes occurring in these materials, through spectroscopic study with non-destructive techniques, is extremely important for optimal safeguarding and preservation.
Very few works have been conducted on the structural characterization and/or structural degradation of wood by Fourier transform Raman spectroscopy (FT-Raman) [3][4][5][6], in contrast to the other spectroscopic techniques most commonly used for material study, such as Fourier-transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), scanning electron microscopy (SEM) and nuclear magnetic resonance (NMR). In this field, Agarwal and Ralph [3] applied the FT-Raman technique to identify the major constituents of black spruce wood (lignin, cellulose and hemicelluloses), while Ona et al. [7] performed interesting research on Eucalyptus wood properties with the same technique.
Recently, FT-Raman has been used more widely for the chemical analysis of biomaterials such as wood [8][9]. It provides rich information about the polymer chains and fundamental knowledge at both the molecular (micro) and macro levels [10]. In addition, it has been shown to be a valuable technique for analyzing structural changes in fibers arising from physical, chemical or mechanical processing [11][12]. It also allows fast, non-destructive and non-invasive measurements without extensive sample preparation [9].
As this non-destructive spectroscopic method appears to be a promising instrument for studying the composition of wood materials, the main goal of the present work is to investigate in detail the distribution of the chemical composition in softwood materials and to understand the structural rearrangement caused by the natural degradation process.
Sampling
The analyzed softwood samples were collected from four archaeological cedar wood pieces (Cedrus atlantica) dating to the 18th, 19th, 20th and 21st centuries, kept in the Ecomuseum of Tazekka under a standard climate. The pieces originate from Tazekka National Park (WGS84: 34°6′0″N, 4°11′0″W), located in the Middle Atlas of Morocco near the city of Taza (Bab Boudir region). The samples were dated by specialist researchers using the radiocarbon dating method. The dimensions of the wood samples are 200×200×100 mm³ (tangential × radial × longitudinal directions). The characteristics of the experimental materials are presented in Table 1. The FT-Raman analysis was performed directly on the surface of the samples.
FT-Raman spectroscopy
The FT-Raman study was conducted with a Bruker (USA) MultiRAM stand-alone FT-Raman spectrometer. The instrument is equipped with a diode-pumped Nd:YAG excitation source with a strong emission line at 1064 nm. The signal was collected with a liquid-nitrogen-cooled germanium detector. For each FT-Raman measurement, 100 scans were averaged, with a resolution of 4 cm⁻¹ and a measurement time of 3 min per spectrum. All FT-Raman spectra were recorded from 4000 to 250 cm⁻¹. Three analyses were performed at several locations on each sample. The room temperature and humidity were controlled during the analysis.
Experimental results
The common band assignments of the four wood samples (D 1 , D 2 , D 3 and D 4 ) are given in Table 2. Band attribution was difficult due to the overlapping of some cellulose and lignin bands; its confirmation was therefore based on literature data [5,6,13] focused on the study of wood and the investigation of degradation effects using FT-Raman spectroscopy.
Cellulose and hemicelluloses
For the first region, the detected bands are mainly due to hydroxyl group, methyl and methylene stretching vibrations. In the second range, the bands correspond to methylene and methyl bending, wagging, rocking, C-O-H in-plane bending, and skeletal bending vibrations (CCC, COC, OCC and OCO). Thus, the deterioration of the cellulose and hemicellulose fractions is reflected by the decline in intensities of these bands (Fig. 1) upon exposure to natural atmospheric effects.
The recent samples dating to the 21st and 20th centuries (Fig. 1, D 1 and D 2 ) clearly display a feature band at 2895 cm⁻¹. According to Barnette et al. [13], this band is attributed to symmetric stretching vibrations of the CH 2 group in the glucopyranose ring of cellulose I β . Its presence in the spectra of the degraded samples D 3 and D 4 (Fig. 2), dating to the 19th and 18th centuries respectively, suggests that upon long exposure to the degradation phenomenon the crystalline fraction decomposes into a disordered fraction, which in turn re-crystallizes and forms a new ordered fraction. This interpretation is supported by the decline in intensities of the feature bands typical of amorphous cellulose at 1456 cm⁻¹ and 898 cm⁻¹, assigned to HCH bending (with a small proportion of HOC bending) in amorphous cellulose and to CH deformation in amorphous cellulose, respectively [14]. In addition, the C-H and CH 2 deformations in cellulose and hemicellulose compounds were observed at 1378 cm⁻¹.
From the spectra of samples D 1 and D 2 (1200-1000 cm⁻¹), a doublet of peaks at 1123 and 1092 cm⁻¹ is easily distinguished, assigned to the combined stretching vibration of the C-O ring and the C-O-C glycosidic linkages in cellulose and hemicellulose [13,15], providing information about the breaking of the cellulosic chains at the β-1,4-glycosidic ether bonds. The disappearance of these two bands in the spectra of the oldest samples (Fig. 2, D 3 and D 4 ) thus indicates the serious degradation of cellulose and hemicelluloses.
Referring to literature data [16], the band at 379 cm⁻¹ can be unambiguously attributed to CCC deformations in the crystalline fraction of cellulose. Fig. 2 shows a discernible decrease of this band in proportion to the age of the sample, indicating a loss of mechanical rigidity and toughness for these materials and, consequently, the vulnerability of the lignocellulosic biomass to deconstructive processes.
The weak features at 379 and 440 cm⁻¹ might be explained as the result of intermolecular interactions between lignin and carbohydrates, which can cause small shifts in peak positions and/or changes in band shapes [3].
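As a simple illustration of how such intensity comparisons could be made reproducible, the sketch below extracts baseline-corrected intensities of the carbohydrate marker bands from a measured spectrum and normalizes them to a reference band. The choice of reference band, the band half-widths, and the data-loading step are assumptions for illustration only, not the authors' procedure.

```python
import numpy as np

def band_intensity(wavenumber, intensity, center, half_width=10.0):
    """Baseline-corrected peak intensity within +/- half_width cm^-1 of a band
    centre. Assumes wavenumber is sorted in ascending order; the straight-line
    baseline between the window edges is a deliberately simple choice."""
    mask = np.abs(wavenumber - center) <= half_width
    x, y = wavenumber[mask], intensity[mask]
    baseline = np.interp(x, [x[0], x[-1]], [y[0], y[-1]])
    return float(np.max(y - baseline))

# Marker bands discussed in the text (cm^-1); labels are shorthand, not formal assignments.
MARKER_BANDS = {"amorphous cellulose 1456": 1456,
                "C-O / C-O-C 1123": 1123,
                "C-O / C-O-C 1092": 1092,
                "CH deformation 898": 898}

def compare_samples(spectra, bands=MARKER_BANDS, reference=1598):
    """spectra: dict mapping a sample label (e.g. 'D1') to (wavenumber, intensity)
    arrays loaded from exported instrument files. Intensities are normalized to a
    reference band (1598 cm^-1 here, an arbitrary illustrative choice)."""
    table = {}
    for label, (wn, counts) in spectra.items():
        ref = band_intensity(wn, counts, reference)
        table[label] = {name: band_intensity(wn, counts, c) / ref
                        for name, c in bands.items()}
    return table
```

Comparing such normalized intensities across the four samples would quantify the band-by-band decline described qualitatively in the text.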
Lignin
In order to estimate the lignin fraction, different bands were studied. In the region between 3100 and 2800 cm⁻¹, the band at 2943 cm⁻¹ was attributed to the C-H stretching of the methoxy groups in lignin [5,6]. It appears less pronounced in the oldest samples dating to the 19th and 18th centuries (Fig. 1, D 3 and D 4 ), indicating a lower lignin presence compared to the youngest ones (Fig. 1, D 1 and D 2 ). The detection of a feature band at 1717 cm⁻¹ in the spectra of samples D 2 , D 3 and D 4 (Fig. 2) indicates the presence of carbonyl groups related to the residual lignin resulting from delignification of the wood samples upon natural degradation. Its relative intensity appears unchanged for all of the oldest samples, while it is absent in the D 1 spectrum (Fig. 2). The low sensitivity of this residual fraction to degradation events is the most likely explanation in this case.
Furthermore, the combined band in the region between 1657 and 1598 cm⁻¹ originates mainly from the guaiacyl (coniferyl alcohol units, for softwood) and syringyl (sinapyl alcohol units, for hardwood) matrix of the lignin compound. The band detected at 1657 cm⁻¹ is attributed to the conjugated C=C stretching vibration of coniferyl alcohol (guaiacyl) in lignin, which overlaps with the C=O stretch of coniferyl acid after oxidation of the alcohol in the side chain [6,17]. According to Kihara et al. [18], this band can also be assigned to marker bands for conjugated carbonyl groups (α,β-unsaturated C=O).
The most intense peak at 1598 cm -1 (Fig. 2 D 1 and D 2 ) is attributed to stretching vibration of polar aromatic C=C in phenolic compounds [19] related to guaiacyl and syringyl monomers in lignin [5,20].
The other predominant lignin band was detected at 1267 cm⁻¹ and corresponds to the C aromatic -O of guaiacyl lignin in softwood [17]. Its intensity decreased only gradually with exposure time to the natural degradation process (Fig. 2), because guaiacyl lignin is less susceptible than syringyl lignin; in hardwood spectra, by contrast, this band shows a rapid decline in intensity. Thus, we can confirm that our cedar samples belong to a softwood species.
Conclusions
The present work has demonstrated the crucial role of FT-Raman spectroscopy as a non-destructive method to characterize and study the effect of natural degradation on the chemical structure of cellulose, hemicelluloses and lignin, the major components of softwood, by providing accurate information about their chemical structures. Based on the obtained results, the gradual decline in the intensities of the feature bands related to these constituents suggests their sensitivity to combined degradation agents and, consequently, irreversible losses of softwood material in archaeological sites.
Figure 1
Figure 1 reports the representative FT-Raman spectra for each sample (D 1 , D 2 , D 3 and D 4 ) between the spectral region of 3500-500 cm -1 .
Figure 1 .
Figure 1.FT-Raman spectra acquired from the four samples of softwood: D 1 -Wood sample dating to 21 st century; D 2 -Wood sample dating to 20 th century; D 3 -Wood sample dating to 19 th century; D 4 -Wood sample dating to 18 th century.
Figure 2 .
Figure 2. FT-Raman spectra 1750-250 cm -1 range acquired from the four samples of softwood: D 1 -Wood sample dating to 21 st century; D 2 -Wood sample dating to 20 th century; D 3 -Wood sample dating to 19 th century; D 4 -Wood sample dating to 18 th century.
It has been reported that the decomposition of hemicelluloses and/or extractives can lead to a decrease in the quantity of lignin and, consequently, a simultaneous deterioration of wooden materials.
"year": 2018,
"sha1": "c197cd9fd3ae047df6fe7ae288c462a6fdd3afc9",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/50/matecconf_ndecs2017_00014.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c197cd9fd3ae047df6fe7ae288c462a6fdd3afc9",
"s2fieldsofstudy": [
"Materials Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
86405569 | pes2o/s2orc | v3-fos-license | Tumor Lysis Syndrome: A Rare Complication of Chemotherapy for Metastatic Breast Cancer
Tumor lysis syndrome (TLS) is a potentially fatal complication of chemotherapy. It is rarely seen in the treatment of solid tumors, particularly breast cancer. We present the case of a chemo-naïve 58-year-old Caucasian woman who developed TLS after a single treatment dose of gemcitabine for metastatic breast cancer. Despite optimal management, the patient clinically deteriorated and was referred to inpatient hospice. Although targeted chemotherapy options have become increasingly effective, physicians should be aware of this rare, yet often fatal, complication. Similarly, physicians should be able to quickly recognize the development of TLS to ensure swift and effective prophylaxis or treatment.
Introduction
Tumor lysis syndrome (TLS) is an oncologic emergency characterized by massive tumor cell lysis and the release of large amounts of potassium, phosphate, and nucleic acids into the systemic circulation. It is most often seen after chemotherapy for lymphomas and T-cell acute lymphoblastic leukemias [1]. In one report, the incidence of TLS in acute myeloid leukemia (AML) was found to be around 17 percent. TLS is very rare in solid tumors, including breast cancer, and is mostly described in case reports [1]. Here, we describe a rare case of TLS that occurred in a chemo-naïve patient with metastatic breast cancer after a single treatment of gemcitabine, an agent that only rarely causes TLS in solid tumors [1].
Case Presentation
A 58-year-old Caucasian woman was admitted to our hospital with complaints of generalized weakness, lethargy, anorexia, and weight loss. She had been diagnosed with metastatic breast cancer 17 days prior to this admission. She also had a past medical history of treated hypertension and chronic back pain. She had noticed a breast lump in the previous year but had never had it examined. The primary breast tumor was found on ultrasound to be approximately 4 cm by 5 cm and was an invasive, poorly differentiated ductal carcinoma with extensive necrosis. It had no expression of the hormone receptors estrogen and progesterone and was human epidermal growth factor receptor 2 (HER2) positive. At the time of presentation, the cancer was advanced, with innumerable hepatic metastases and multiple bilateral pulmonary metastases. There was also a small-to-moderate right pleural effusion. The cancer had spread to the spine, causing a bony lytic lesion at the T9 vertebra. On physical examination, she also had jaundice of the skin and mild splenomegaly, likely secondary to extensive liver disease.
The patient had undergone the planned chemotherapy four days prior, which was a treatment of gemcitabine 1600 mg. Gemcitabine has long been shown to be an effective agent in the treatment of metastatic breast cancer [2]. A Port-A-Cath had been placed successfully without any complications two days before the first chemotherapy treatment. On this present admission, her blood tests showed high uric acid levels (18.2 mg/dL), hyperphosphatemia (6.7 mg/dL), hyperkalemia (5.4 mmol/L), calcium (9.6 mg/dL), increased creatinine (3.38 mg/dL) and decreased glomerular filtration rate (14 mL/min). Nephrologists were consulted and they recognized this as TLS. It was recommended to give the patient vigorous intravenous (IV) fluid hydration with normal saline at 125 cc/hr as well as transfuse packed red blood cells to maintain the hemoglobin levels above 8 g/dL. Allopurinol 100 mg three times a day was also given. Hematologists/oncologists were consulted and they recommended chemotherapy treatment to be on hold for now until the patient's labs become more stable.
By day two of admission, the patient appeared jaundiced and lethargic but was still alert and oriented. Her blood tests showed high uric acid levels (15.1 mg/dL), hyperphosphatemia (6.1 mg/dL), potassium (4.7 mmol/L), calcium (9.0 mg/dL), increased creatinine (2.69 mg/dL), and decreased glomerular filtration rate (18 mL/min). Rasburicase was not started at this time because it was not readily available at the current medical facility. Over the course of the next few days, the patient's platelet count continued to drop, likely due to the initial chemotherapy treatment. The creatinine levels remained elevated, and the patient's bilirubin and other liver function enzymes continued to rise, making the option of chemotherapy less feasible.
By day six of admission, the patient's blood tests showed high uric acid levels (11.1 mg/dL), potassium (4.0 mmol/L), calcium (8.7 mg/dL), increased creatinine (2.71 mg/dL), and decreased glomerular filtration rate (18 mL/min). On examination, clinical deterioration was evident and the patient appeared even more lethargic and sleepy. She was difficult to wake with verbal stimuli.
Despite optimal management, by day seven of admission, she was drowsy and minimally responsive and had a slow response to any stimuli. At times, she could not open her eyes. At this time, it was decided by the patient, husband, and daughter that the patient would have a 'do not resuscitate' order and would be transferred to inpatient hospice when stable.
Discussion
Only a few published cases of TLS developing in patients with breast cancer are available, owing either to underreporting or to its rarity. Moreover, the reason why TLS is rarely seen in solid tumors is currently unclear. TLS is more likely to occur with the rapid cell turnover seen in leukemias or lymphomas [3]. In a report evaluating TLS in breast cancer patients, most of the published cases involved metastatic breast adenocarcinomas. The age of the patients ranged from 31 to 94 years, with an average of 54.1 years. The majority of these patients had a baseline increase in lactate dehydrogenase (LDH) as well as a baseline increase in uric acid levels [4].
TLS is diagnosed both clinically and through laboratory values. The Cairo-Bishop definition, proposed in 2004, provides specific criteria for the diagnosis of TLS [5]. Clinically, the symptoms associated with TLS reflect the underlying metabolic abnormalities and include nausea, vomiting, diarrhea, anorexia, lethargy, hematuria, heart failure, cardiac dysrhythmias, seizures, muscle cramps, tetany, syncope, and possible sudden death [6]. Clinical TLS is defined as laboratory TLS plus one or more of the following: increased serum creatinine concentration (>1.5 times the upper limit of normal), cardiac arrhythmia/sudden death, or seizures. Laboratory TLS is defined as two or more abnormal serum values, as shown in Table 1, presenting within three days before or seven days after chemotherapy treatment [5]. Based on these definitions, the patient in this case can be diagnosed with TLS in terms of both laboratory TLS and clinical TLS. The prevalence of TLS occurring after chemotherapy for solid tumors is low, but TLS may be underreported in many patients [7].
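To make the two-level definition concrete, the sketch below encodes the logic in Python. The numeric laboratory cut-offs are commonly cited Cairo-Bishop values inserted for illustration only (the paper's Table 1 is not reproduced in the text), and this is not intended as clinical decision support.

```python
def laboratory_tls(labs):
    """Two or more abnormal serum values within 3 days before to 7 days after
    chemotherapy. Cut-offs are commonly cited Cairo-Bishop values, used here as
    placeholders; a >=25% change from baseline also qualifies but is not coded."""
    abnormal = 0
    abnormal += labs["uric_acid"] >= 8.0      # mg/dL
    abnormal += labs["potassium"] >= 6.0      # mmol/L
    abnormal += labs["phosphate"] >= 4.5      # mg/dL (adult cut-off)
    abnormal += labs["calcium"] <= 7.0        # mg/dL
    return abnormal >= 2

def clinical_tls(lab_tls, creatinine, uln_creatinine, arrhythmia=False, seizure=False):
    """Clinical TLS = laboratory TLS plus >=1 of: creatinine >1.5x the upper limit
    of normal (ULN), cardiac arrhythmia/sudden death, or seizure."""
    return lab_tls and (creatinine > 1.5 * uln_creatinine or arrhythmia or seizure)

# Day-of-admission values reported in this case:
admission = {"uric_acid": 18.2, "potassium": 5.4, "phosphate": 6.7, "calcium": 9.6}
lab = laboratory_tls(admission)                                 # True: uric acid and phosphate abnormal
print(clinical_tls(lab, creatinine=3.38, uln_creatinine=1.2))   # the ULN of 1.2 mg/dL is an assumption
```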
Most patients who received chemotherapy treatment for solid tumors did not develop TLS. A strong risk factor for TLS is the patients' health status. This includes the presence of hypotension, dehydration, acidic urine, oliguria, and nephropathy [10]. Certain medications may be additional risk factors for TLS due to their side effects of increasing uric acid levels in the body. These are shown in Table 2. The patient discussed in this case was not on any of these substances. Lastly, additional risk factors for TLS include the tumor's size and expansion. For example, bulky tumors with wide metastatic dispersal and bone marrow involvement would put a patient at a higher risk [10]. In the present case, the patient had large primary breast cancer with numerous metastatic tumors to the liver, lungs, and spine, putting her at higher risk for the development of TLS.
Prophylaxis with rasburicase or allopurinol is often initiated for patients with lymphomas or acute leukemias to prevent TLS [6]. In a phase III trial comparing rasburicase with allopurinol, 280 patients with hematologic malignancies at risk for TLS were assigned to prophylaxis with rasburicase alone (0.2 mg/kg daily), rasburicase (0.2 mg/kg daily) plus oral allopurinol (300 mg daily), or allopurinol alone (300 mg daily). Both rasburicase groups were superior to allopurinol alone in controlling serum uric acid levels. Rasburicase and allopurinol together appeared to control uric acid levels better than rasburicase alone; however, this result was not statistically significant (p = 0.06) [11]. Rasburicase has been shown to be a more effective way of reducing hyperuricemia and has thus begun to replace allopurinol as prophylaxis. Because TLS is less common with solid tumors, prophylaxis is often not given. Patients who develop TLS during chemotherapy should receive intensive supportive care with continuous cardiac monitoring and measurement of electrolytes, creatinine, and uric acid levels every four to six hours. It is also important to treat specific electrolyte abnormalities, to administer rasburicase at 0.2 mg/kg, and to wash out obstructing uric acid crystals with fluids, with or without a loop diuretic [1]. It should be noted that patients with solid tumors at a higher risk for TLS could also benefit from prophylaxis with rasburicase or allopurinol.
Conclusions
In this report, we describe one of the first literature-documented cases of TLS in a patient diagnosed with metastatic breast cancer as a result of a single dose of gemcitabine treatment. Although TLS is not an extremely common outcome resulting from chemotherapy treatment in patients with solid tumors, it is important for physicians to recognize patients who may be at a higher risk of developing TLS. Proper knowledge of the prevalence and outcomes is important for physicians to help recognize and prevent potentially fatal outcomes. Physicians should be aware of the clinical and laboratory diagnostic criteria of TLS as well as the options for management of TLS.
Additional Information Disclosures
Human subjects: Consent was obtained by all participants in this study.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared | 2019-03-28T13:33:27.645Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "0731f111f9b6b3aced67698bba8e1adc193940a6",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/17079/1554758870-20190408-11743-18u0eqj.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0731f111f9b6b3aced67698bba8e1adc193940a6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53024043 | pes2o/s2orc | v3-fos-license | Dynamic cerebral autoregulation is impaired in Veterans with Gulf War Illness: A case-control study
Neurological dysfunction has been reported in Gulf War Illness (GWI), including abnormal cerebral blood flow (CBF) responses to physostigmine challenge. However, it is unclear whether the CBF response to normal physiological challenges and regulation is similarly dysfunctional. The goal of the present study was to evaluate the CBF velocity response to orthostatic stress (i.e., a sit-to-stand maneuver) and to an increased fractional concentration of inspired carbon dioxide. 23 cases of GWI (GWI+) and 9 controls (GWI-) volunteered for this study. Primary variables of interest included an index of dynamic autoregulation and cerebrovascular reactivity. Dynamic autoregulation was significantly lower in GWI+ than GWI-, as reflected in the autoregulatory index (2.99±1.5 vs 4.50±1.5, p = 0.017). In addition, we observed greater decreases in CBF velocity both at the nadir after standing (-18.5±6.0 vs -9.8±4.9%, p = 0.001) and during steady-state standing (-5.7±7.1 vs -1.8±3.2%, p = 0.042). In contrast, cerebrovascular reactivity was not different between groups. In our sample of Veterans with GWI, dynamic autoregulation was impaired, consistent with greater cerebral hypoperfusion when standing. This reduced CBF may contribute to cognitive difficulties in these Veterans when upright.
Introduction
More than 25 years after Operations Desert Storm and Shield (Gulf War), deployed Gulf War Veterans continue to report substantially poorer health relative to non-deployed Veterans of the same era, including a higher prevalence of chronic illnesses [1]. Approximately 25-32% of Gulf War Veterans are afflicted with a particular chronic illness characterized predominantly by fatigue, musculoskeletal pain, and cognitive impairment-referred to as Gulf War Illness (GWI) [2]. Although both the etiology and underlying pathophysiology in GWI remains unresolved, neurotoxicant exposure and neurological dysfunction have received the greatest attention, respectively. Neurological dysfunction in the form of cognitive impairment is one of the hallmark symptoms of GWI, and problems with memory [3], executive function [3][4][5], and mood [6] have all been described in this population. Cognitive function is modulated by changes in cerebral blood flow (CBF), as evidenced by a relationship between cerebral hypoperfusion and cognitive impairment in older adults with [7,8] and without [9] dementia. Further, reversible cognitive impairment has been described following transient occlusion of CBF in patients with cardiovascular disease [10] as well as concussion-induced reductions of CBF among collegiate athletes [11]. Therefore, cognitive symptoms in Veterans with GWI may be attributable, in part, to reductions in CBF and/or CBF dysregulation.
Haley and colleagues have studied CBF in Veterans with GWI using both single photon emission computed tomography [12], as well as magnetic resonance imaging based arterial spin labeling [13,14]. In comparison to controls, Veterans with GWI had lower CBF in distinct regions at rest [12,13], and subgroups of Veterans with GWI demonstrated abnormal responses to cholinergic challenge [12][13][14]. Impaired cholinergic control of CBF is one of several mechanisms that may affect cerebral vasodilation [15], and work from our laboratory has recently demonstrated that enhancement of cholinergic activity attenuates the decrease in CBF observed during orthostatic stress [16].
The objective of the present study was to examine the cerebrovascular response to changes in arterial blood pressure and inspired carbon dioxide (CO 2 ) to determine the role of the endothelium and smooth muscle on CBF in Veterans with GWI. The response of CBF to blood pressure changes is accomplished predominately through a myogenic response (i.e., smooth muscle) of the arterioles in the cerebral vasculature, whereas the CBF response to changes in CO 2 occurs through both endothelial and smooth muscle signals [17][18][19]. By indirectly assessing the smooth muscle through manipulation of arterial blood pressure (i.e., orthostatic stress) and the endothelium (i.e., manipulating inspired CO 2 ), we can better understand the disruptions in CBF at the neurovascular unit. We hypothesized that Veterans with GWI would demonstrate impaired CBF in comparison to Veterans without GWI, and that this impairment would be greatest during the CO 2 challenge given the contributions of both smooth muscle and endothelium.
Participants
We studied 32 Gulf War-era Veterans, including 23 cases of GWI (GWI+) and 9 controls (GWI-). Case status was assigned using the Kansas criteria [20], which involve endorsement of moderate-to-severe symptoms in ≥3 domain areas (i.e., fatigue, pain, neurological/cognitive/mood, skin, gastrointestinal, and respiratory) that began after 1990 and persisted for ≥1 year. Comorbid conditions (i.e., diabetes, heart disease, stroke, lupus, multiple sclerosis, cancer, etc.) were excluded per the case definition. Participants provided written informed consent after receiving verbal instruction, acknowledging their understanding of the procedures and risks. All procedures were approved by the Department of Veterans Affairs New Jersey Health Care System Institutional Review Board (IRB# 01094) and conducted under the guidelines established by the Declaration of Helsinki.
Procedures
Participants arrived at the laboratory for a single testing session having abstained from caffeine and ≥2 hours post-prandial. Participants were instrumented to obtain: 1) cerebral blood flow
Sit-to-stand maneuver and cerebral autoregulation
Following a rest period (≥5 min), all participants performed an orthostatic challenge test designed to induce a rapid change in blood pressure, as previously described [21]. Mean values of physiological signals (CBFV, MAP, end-tidal CO 2, and heart rate) were calculated while seated (50 s epoch) and standing (mean of 5 consecutive values at the nadir of blood pressure). The average of 3 maneuvers was used for analysis, and the difference between seated and standing values (Δ scores) was also computed. An autoregulatory index (ARI) was computed for each sit-to-stand maneuver using a predicted curve-fit model [22], and the average of the three stands was used for comparison. To assess the effect of normal fluctuations in blood pressure on CBFV, we performed transfer function analysis on 3-minute segments while seated. Three-minute segments were used because our laboratory has previously found that transfer function estimates from 3- and 5-minute segments are similar [23]. For this analysis, beat-by-beat values were linearly interpolated to a 100 Hz sampling rate, and transfer function analysis was then performed using the Welch method with a 45 s Hanning window and two-thirds overlap, resulting in 10 windows to average. No smoothing of windows was performed. Gain, coherence, and phase were calculated in three frequency bands: very low frequency (0.03-0.07 Hz), low frequency (0.07-0.2 Hz), and high frequency (0.2-0.5 Hz) using custom MATLAB code [24]. Two participants with GWI and one control were excluded from transfer function analysis due to the presence of ectopic beats, which may affect estimation of the power spectral density [25].
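As a rough illustration of this processing chain, the sketch below reproduces the described Welch-based transfer function settings in Python (scipy); the study itself used custom MATLAB code [24], so the function choices, default detrending, and input arrays here are assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy import signal

def tfa_map_to_cbfv(map_mmHg, cbfv_cms, fs=100.0, win_s=45.0):
    """Transfer function between mean arterial pressure (input) and CBFV (output)
    using 45-s Hanning windows and two-thirds overlap, as described in the text.
    Band-averaged gain, phase, and coherence are returned for the VLF, LF, and HF bands."""
    nperseg = int(win_s * fs)
    noverlap = int(nperseg * 2 / 3)
    win = signal.get_window("hann", nperseg)
    f, pxx = signal.welch(map_mmHg, fs=fs, window=win, nperseg=nperseg, noverlap=noverlap)
    _, pxy = signal.csd(map_mmHg, cbfv_cms, fs=fs, window=win, nperseg=nperseg, noverlap=noverlap)
    _, cxy = signal.coherence(map_mmHg, cbfv_cms, fs=fs, window=win,
                              nperseg=nperseg, noverlap=noverlap)
    gain = np.abs(pxy) / pxx                 # cm/s per mmHg
    phase = np.degrees(np.angle(pxy))        # degrees
    bands = {"VLF": (0.03, 0.07), "LF": (0.07, 0.2), "HF": (0.2, 0.5)}
    return {name: {"gain": gain[(f >= lo) & (f < hi)].mean(),
                   "phase": phase[(f >= lo) & (f < hi)].mean(),
                   "coherence": cxy[(f >= lo) & (f < hi)].mean()}
            for name, (lo, hi) in bands.items()}
```

With a 3-minute segment at 100 Hz, these settings yield the 10 averaged windows mentioned above.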
Cerebrovascular reactivity
Cerebrovascular reactivity was assessed while participants breathed normally at rest (normocapnia), while inspiring 8% CO 2, 21% O 2, balance nitrogen (hypercapnia), and during mild hyperventilation (hypocapnia) [26,27]. Two-minute periods for hyper- and hypocapnia were selected based on prior work [26], as well as to avoid the changes in middle cerebral artery diameter that occur during extended periods (≥4 min) [28,29]. The change in CBFV per mmHg CO 2 was computed via linear regression for the entire period, and separately during the periods of hyper- and hypocapnia.
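The regression step can be sketched as follows; the input arrays and any normalization of CBFV to its baseline are assumptions made only for illustration.

```python
from scipy import stats

def co2_reactivity(etco2_mmHg, cbfv_cms):
    """Cerebrovascular reactivity as the slope of CBFV on end-tidal CO2
    (cm/s per mmHg), fitted by ordinary least squares over the chosen period."""
    fit = stats.linregress(etco2_mmHg, cbfv_cms)
    return fit.slope, fit.rvalue ** 2, fit.pvalue
```

Passing only the hypercapnic or hypocapnic samples gives the stimulus-specific reactivities, while passing all samples gives the combined estimate.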
Heart rate and blood pressure variability, and baroreflex estimate
Time- and frequency-domain measures were determined as recommended by the Task Force report [30] in the low-frequency (LF: 0.04-0.15 Hz) and high-frequency (HF: 0.15-0.4 Hz) bands on three-minute segments while seated, to assess autonomic function. In addition, we used the Lomb-Scargle periodogram, which does not require interpolation of the heart period signal and has been shown to be more robust with short datasets [31]. Blood pressure variability was determined from beat-by-beat blood pressure values that were re-interpolated to 4 Hz and then analyzed using the Welch method with a 50-second Hanning window and two-thirds overlap, resulting in 8 windows in a 3-minute data set. Baroreflex function was estimated from transfer function gains in the LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands as well as by the sequence method [32]. For the sequence method, we examined three-beat segments within the 3-minute period. Slopes were calculated for segments in which the SBP value and the RR interval both increased or both decreased over consecutive beats. Each three-beat segment was plotted and a slope was calculated using least squares. All segment slopes were then averaged to determine a spontaneous baroreflex slope. In addition, the number of mismatched segments was calculated, defined as segments in which increases in blood pressure were associated with decreasing RR intervals, or the reverse.
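A minimal sketch of the sequence method as described is given below; no minimum-change criteria are applied to the three-beat segments because none are specified in the text, and the variable names are hypothetical.

```python
import numpy as np

def sequence_brs(sbp_mmHg, rr_ms, seg_len=3):
    """Spontaneous baroreflex sensitivity by the sequence method: least-squares
    slopes (ms/mmHg) over three-beat segments in which SBP and RR interval both
    rise or both fall, averaged across segments; segments in which the two move
    in opposite directions are counted as mismatches."""
    slopes, mismatches = [], 0
    for i in range(len(sbp_mmHg) - seg_len + 1):
        s = np.asarray(sbp_mmHg[i:i + seg_len], dtype=float)
        r = np.asarray(rr_ms[i:i + seg_len], dtype=float)
        ds, dr = np.diff(s), np.diff(r)
        concordant = (np.all(ds > 0) and np.all(dr > 0)) or (np.all(ds < 0) and np.all(dr < 0))
        discordant = (np.all(ds > 0) and np.all(dr < 0)) or (np.all(ds < 0) and np.all(dr > 0))
        if concordant:
            slopes.append(np.polyfit(s, r, 1)[0])   # slope in ms per mmHg
        elif discordant:
            mismatches += 1
    brs = float(np.mean(slopes)) if slopes else float("nan")
    return brs, mismatches
```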
Statistical analysis
Group means and standard deviations (SD) for arterial blood pressure (mmHg), end-tidal CO 2 (mmHg), and CBFV (cm/s and %) were calculated for GWI+ and GWI- at rest and during the orthostatic, hypo-, and hypercapnic challenges. Means (SD) were also calculated for the transfer function assessment of cerebral autoregulation, heart rate and blood pressure variability, and the baroreflex estimates. Data from all outcome measures were split by group and checked for normality (Kolmogorov-Smirnov test). Between-group comparisons were conducted with a series of independent-samples t-tests for normally distributed data and Mann-Whitney U tests for non-normally distributed data (α = 0.05). Equality of variances between groups was checked prior to the analyses. Hedges' g (g) effect sizes were calculated for independent-samples t-tests and point-biserial r (r pb) was calculated for Mann-Whitney U tests. Effect sizes of 0.8 [33] and 0.37 [34] were considered large for g and r pb, respectively. Pearson correlation coefficients were calculated to determine associations between CO 2 reactivity and autoregulation. Spearman's rho correlations were calculated to determine associations between changes in CBFV and transfer function gains.
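For reference, the two effect-size measures can be computed as sketched below. The small-sample correction used for Hedges' g and the Z-based point-biserial conversion for the Mann-Whitney comparison are common conventions assumed here, since the exact formulas are not given in the text.

```python
import numpy as np
from scipy import stats

def hedges_g(x, y):
    """Hedges' g: Cohen's d with the pooled SD and the small-sample correction
    J = 1 - 3 / (4*(n1 + n2) - 9)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    sp = np.sqrt(((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2))
    d = (x.mean() - y.mean()) / sp
    return d * (1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0))

def point_biserial_r(x, y):
    """Effect size for a Mann-Whitney U comparison via r = Z / sqrt(N),
    using the normal approximation to U (one common convention)."""
    n1, n2 = len(x), len(y)
    u, _ = stats.mannwhitneyu(x, y, alternative="two-sided")
    z = (u - n1 * n2 / 2.0) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return z / np.sqrt(n1 + n2)
```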
Participant characteristics
Demographics and medication use are reported in Table 1. Veterans with GWI+ (49.0 ± 5.9 years) were younger than Veterans without GWI (54.1 ± 6.4 years), p = 0.040; however, all other measures were similar between groups. Summary scores for each domain of the Kansas screening questionnaire are also reported for descriptive purposes.
Sit-to-stand maneuver
Group-averaged responses to the sit-to-stand maneuver are illustrated in Fig 1. Values for the hemodynamic variables and their between-group comparisons are reported in Table 2. CBFV obtained during seated rest (baseline) was similar between groups; however, upon standing, the drop (Δ) in CBFV from the seated to the standing position was significantly larger in GWI+ than in GWI-, despite a similar drop in blood pressure. Examining the steady-state changes from seated to standing (30-55 s after the stand), mean arterial pressure was lower in the GWI+ group (p = 0.014), although by only ~3 mmHg, which was likely not physiologically significant. Similarly, CBFV was lower in the GWI+ group (p = 0.042).
Cerebral autoregulation
Estimated ARI was significantly lower in GWI+ than GWI- (Table 2; Fig 2). Examination of the transfer function response (Fig 3) demonstrated that gain in the low-, but not the high-frequency band, was higher in GWI+ (Fig 3). Coherence and phase were similar in both groups in both the low- and high-frequency bands (S2 Table). Similarly, there were no differences in gain, phase, or coherence in the very-low-frequency band between groups.
Cerebrovascular reactivity
Hemodynamic variables and CBFV were similar between groups at rest as well as during hyper-and hypocapnic stimulus. CO 2 reactivity was also similar when assessed separately for hyper-and hypocapnic stimulus as well as in combination (Table 2).
Heart rate and blood pressure variability, and baroreflex estimate
Time-and frequency-domain results for heart rate and blood pressure variability as well as baroreflex estimates are reported in the S1 and S2 Tables, respectively. Few differences were observed between-groups for heart rate and blood pressure variability (S1 Fig). In contrast, baroreflex sensitivity gains were significantly higher in the GWI+ group in the low frequency band (S2 Table and S1 Fig). We also examined how baroreflex sensitivity was related to changes in CBFV, and observed a significant correlation with the decrease in CBFV when standing (5 beat average at nadir), R 2 = 0.146, p = 0.044 (Fig 4B).
Discussion
This is the first study to assess dynamic cerebral autoregulation in Veterans with GWI, which we found was significantly impaired relative to Veterans without GWI (Table 2; Figs 2 and 3); the level of impairment was similar to that reported for patients with carotid artery stenosis [35] and ischemic stroke [36]. Contrary to our hypothesis, CO 2 reactivity was similar between Veterans with and without GWI. Given that CBF dysregulation in the present study was observed during the active transition to an upright posture, we suggest that disturbances among other systems (e.g., autonomic, vestibular, bioenergetics) may contribute to these results.
Table 2. Comparison of blood pressure, end-tidal carbon dioxide, cerebral blood flow velocity (CBFV), and cerebrovascular resistance (CVR) values among cases with Gulf War Illness (GWI+; n = 23) and controls (GWI-; n = 9) during a baseline seated position (25 s), during the transition from seated to standing (first 30 s after initiation of the stand), and during a steady-state standing period (30-55 s after initiation of the stand). For the transition period, the change from baseline (Δ) was determined from the 5-beat average at the nadir of CBFV or mean arterial pressure (MAP). For the standing period, changes were derived from sitting steady state to standing steady state. The Autoregulation Index was derived from the dynamic change in blood pressure and CBFV based on a best-fit curve as previously described [22]. CBFV, MAP, and end-tidal CO 2 at baseline, hypercapnia, and hypocapnia are also reported, as well as the calculation of cerebrovascular reactivity [26]. A Effect sizes are reported as Hedges' g for independent-samples t-tests and point-biserial correlations for Mann-Whitney U tests. B Data analyzed with independent-samples t-test. C Data analyzed with Mann-Whitney U test. D Missing data for n = 2 GWI+ participants; results are based on a reduced sample of n = 21 for GWI+ and n = 9 for GWI-. https://doi.org/10.1371/journal.pone.0205393.t002
Dynamic autoregulation is impaired in Gulf War Illness
Prior studies have assessed the response to orthostatic stress in GWI using tilt-table testing with [37,38] and without isoproterenol [39], as well as a clinical wall-lean test [40], with mixed results in terms of hypotension but similar reports of orthostatic symptoms. However, none of these prior studies assessed CBF, which is noteworthy as orthostatic symptoms in the absence of hypotension may reflect an underlying orthostatic cerebral hypoperfusion [41]. For example, Novak recently described a tilt-induced drop in CBF of approximately -24% in 102 patients with idiopathic orthostatic symptoms, compared to a -4.2% decrease in controls, in the absence of orthostatic hypotension [41]. Direct comparison of our results is not possible as we did not employ similar tilt-table protocols. Rather, the present study utilized the sit-to-stand maneuver, which, unlike passive tilt that may take up to 12 s to transition from supine to upright, takes <5 s to transition from sitting to standing, thereby affording assessment of dynamic cerebral autoregulation [21]. In this work, Veterans with GWI displayed a marked reduction in CBFV in comparison to controls that is not fully explained by the concomitant decreases in blood pressure and end-tidal CO2 (Fig 1). Moreover, we also found that impaired autoregulation (i.e., lower ARI values) was associated with a greater reduction in steady-state CBFV when standing (Fig 4A).
Further support for impaired dynamic cerebral autoregulation in this group is derived from the transfer function analysis of steady-state sitting values. Veterans with GWI had higher transfer function gains in both the low- and high-frequency bands, suggesting they were less effective at minimizing the impact of changes in blood pressure on CBF (Fig 3). Similarly, the greater coherence in the low-frequency band also suggests that CBF changes were related to blood pressure changes. However, phase, another indicator of autoregulation, was unchanged. While no previous data have examined transfer function gains in GWI, we have previously published that a cohort of 193 males with a mean age of 78.3 years had mean gains of 1.34 [42], less than the mean seen in these Veterans with GWI. Thus, despite the Veterans with GWI having a mean age of 49 years, their CBF regulation was worse than that of an elderly cohort.
To assess the vasodilatory capacity of the cerebral vasculature, we examined changes in CBF velocity of the middle cerebral artery in response to changes in end-tidal CO 2 [43]. We hypothesized that Veterans with GWI would demonstrate attenuated CO 2 reactivity relative to controls, suggesting a decreased capacity to dilate the cerebral vessels. However, we observed similar CO 2 reactivity between-groups. This suggests that factors other than CO 2 reactivity contribute to the autoregulatory response, most notably cholinergic activation of the cerebral vasculature [44]. In support, we recently demonstrated in a randomized double-blind study that physostigmine infusion enhanced CBF velocity despite the presence of hypocapnia [16]. An active cholinergic vasodilatory reflex has been proposed to counter sympathetic vasoconstriction in the cerebral circulation [15], and this reflexive response appears disturbed in GWI [13,14] which may be consistent with broader autonomic dysfunction in this population [39].
To measure autonomic function in this group, we examined heart rate and blood pressure variability. We found no evidence of differences in indicators of cardiac parasympathetic control unlike previous works that have demonstrated reduction in high frequency HRV [45][46][47]. This might be due to the fact that previous work examined 24 hour recordings, compared to our short 3-min periods. However, Haley et al. [47] also found no difference during the day when Veterans were awake, which supports our daytime recordings. To examine indicators of sympathetic control of the heart and periphery, we examined LF HRV and blood pressure variability. We found that cardiac sympathetic control was elevated (LF HRV power) but there was no difference in peripheral sympathetic activity (LF blood pressure power). Our results contrast Stein et al. [45] who found reduced LF HRV in 24 hour recordings. Our peripheral results are similar to those of Haley et al. [46] who found no significant difference in sympathetic nerve activity between groups. The difference in our HRV findings may be due to the analysis method. When using traditional HRV FFT methods we did not find a significant increase in LF HRV. However, using a novel analysis method that has been shown to be more sensitive since it does not require interpolation of the RR interval sequence [31], we did detect a difference. In addition, both LF HRV and BP variability are non-invasive indicators of sympathetic activity. However, they are not direct measures and have many other inputs which affect the response. Thus, these data are only indicative of changes in autonomic function. Regardless, our data as well as previous data suggests that autonomic function is affected in GWI. However, none of our measures examined autonomic control of the cerebrovasculature.
Another novel finding in this work is the effect of GWI on baroreflex sensitivity. To our surprise, we found that baroreflex sensitivity assessed by transfer function was significantly greater in Veterans with GWI. Despite this improved baroreflex sensitivity, steady-state standing blood pressure was slightly, though significantly, lower (~3 mmHg) in the Veterans with GWI. The reason for this remains unclear, and further work on central cardiac function and changes in total peripheral resistance when upright is necessary. It is also unclear why the baroreflex would be improved in Veterans with GWI. One theoretical possibility is that, to help maintain CBF when cerebral autoregulation is impaired, tighter regulation of blood pressure is necessary to ensure adequate perfusion of the brain. Several previous studies have found an inverse relationship between baroreflex sensitivity and cerebral autoregulation in healthy young individuals [48][49][50]. Based on our findings, this same relationship holds for Veterans with GWI, since those with the lowest ARI values had the highest baroreflex transfer function gains (Fig 4B).
As impairments in dynamic autoregulation were observed upon standing, our findings may also suggest a role for vestibular input to the cerebrovasculature. In support of this possibility, Haley and colleagues [51] found that Veterans with GWI classified as having vestibular ataxia and vertigo attacks not only exhibit the greatest functional impairment [52], but also demonstrate the largest reduction in CBF at baseline and largest increase in CBF secondary to physostigmine challenge [12]. Though the relationship between the vestibular system and CBF remains unclear in humans, we have previously found that stimulation of the vestibular system results in modulation of CBF and that this regulation was independent of blood pressure and end-tidal CO 2 [53]. Prior studies in GWI have also noted a role for vestibular dysfunction in GWI symptomatology [54], that is independent of stress and/or anxiety [55], and objectively worse in Veterans with GWI in comparison to controls [55]. Moreover, episodes of syncope [56] and dizziness [57,58], reported in GWI could be interpreted by either impaired CBF regulation and/or vestibular dysfunction. No published work to date has comprehensively examined vestibular function in GWI or whether vestibular dysfunction affects cerebral autoregulation in GWI.
An important aspect of this work is that measures of CBF were performed upright. Our recent work demonstrating that cholinergic enhancement was only effective in improving CBF in healthy subjects when upright highlights the importance of orthostatic stress in considering CBF response [16]. Further support for the importance of orthostatic stress comes from work in chronic fatigue syndrome in which patients not only demonstrated greater drops in CBFV during head up tilt but also impaired cognitive performance when upright but not supine [59]. Since cognitive complaints in Veterans with GWI are made when they are upright, we must consider the importance of studying the upright posture. If cholinergic inputs are involved in cerebral vasodilation when upright and cholinergic function is impaired in GWI, enhancing cholinergic function could provide a target for treatment. Note that in our previous work the improvement in CBF did not occur with neostigmine but only with physostigmine, a cholinesterase inhibitor that can cross the blood brain barrier. Future work is needed to examine if cerebrovascular cholinergic mechanisms are impaired in Veterans with GWI when upright.
Notwithstanding the limitations of a cross-sectional study, there are other factors that may impact the interpretation of our findings. GWI is a heterogeneous disorder; therefore, our relatively small sample size, poor representation of female Veterans, and unequal group sizes are limitations. Human subject research in GWI is challenging, as common conditions of advanced age are exclusions for GWI case status. Despite this, we observed a large effect size for a primary variable of interest, ARI. However, we were likely underpowered to detect a meaningful change in certain secondary variables. Another possible explanation for differences between groups is physical activity. GWI is a fatiguing illness that leaves most Veterans unable to exercise and thus results in their becoming extremely sedentary. Thus, impaired autoregulation in the Veterans with GWI could be the result of deconditioning. While we did not have measures of physical activity or fitness in the two groups, previous work examining fit versus sedentary elderly adults found no difference in cerebral autoregulation [60].
Conclusions
In summary, Veterans with GWI demonstrate dysregulation of CBF during transition from a seated to standing position and when standing. Reasons for the disrupted regulation of CBF are likely multifactorial and may be manifested by disturbances in other physiological systems (i.e., autonomic and vestibular) and bioenergetics (i.e., mitochondrial dysfunction). Future studies are needed to expand on this work, including association of impaired cerebral blood regulation when upright and impaired cognitive function as well as in relation to symptoms at rest and during symptom exacerbation (i.e., post-exertion malaise).
Supporting information
S1 Table. Heart rate & blood pressure variability. Comparison of measures of heart rate and blood pressure variability among Veterans who screened positive for Gulf War Illness (n = 23) and healthy controls (n = 9) during a 2-3 min steady state period while seated. For time domain measures, mean values of RR interval (Mean RR) as well as the standard deviation (SDNN) and root mean square of successive differences between RR intervals (rMSSD) were obtained. Heart rate variability was derived using the Power Spectrum (Welch) and Power Spectrum (Lomb-Scargle) periodograms in the low frequency (LF: 0.04-0.15 Hz) and the high frequency (HF: 0.14-0.4 Hz) bands. Values were calculated for power in absolute units as well as % of total power. The ratio of low-frequency to high-frequency power (LF/HF) was derived. Variability of systolic blood pressure (SBP) was also calculated in the LF and HF bands. A Effect sizes are reported as Hedges' g for independent samples t-tests and point-biserial correlations for Mann-Whitney U tests; B Data analyzed with independent samples t-test; C Data analyzed with Mann-Whitney U test. (DOCX)
S2 Table. Baroreflex estimate. Comparison of measures of baroreflex sensitivity (BRS) among Veterans who screened positive for Gulf War Illness (n = 23) and healthy controls (n = 9) during a 2-3 min steady state period while seated. Transfer function estimates of gain, coherence and phase were obtained in the low frequency (LF: 0.04-0.15 Hz) and the high frequency (HF: 0.14-0.4 Hz) bands. Spontaneous baroreflex was also estimated from 3-beat segments of systolic blood pressure and RR interval to obtain a mean slope as well as the number of mismatched segments. A Effect sizes are reported as Hedges' g for independent samples t-tests and point-biserial correlations for Mann-Whitney U tests; B Data analyzed with independent samples t-test; C Data analyzed with Mann-Whitney U test. | 2018-11-01T18:46:31.979Z | 2018-10-15T00:00:00.000 | {
"year": 2018,
"sha1": "d586e9252b4d40d8e39e1f3af346793c3fc0a6e0",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0205393&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2582e7b8142a13865e84678fb2a05109f0dc34ef",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
199380027 | pes2o/s2orc | v3-fos-license | Garcinol inhibits esophageal cancer metastasis by suppressing the p300 and TGF-β1 signaling pathways
Metastasis is the main cause of death in esophageal cancer patients. Garcinol, a natural compound extracted from plants of the genus Garcinia, is a histone acetyltransferase (HAT) inhibitor that has shown anticancer activities such as cell cycle arrest and apoptosis induction. In this study, we investigated the effects of garcinol on the metastasis of esophageal cancer in vitro and in vivo. We found that garcinol (5–15 μM) dose-dependently inhibited the migration and invasion of the human esophageal cancer cell lines KYSE150 and KYSE450 in wound healing, transwell migration, and Matrigel invasion assays. Furthermore, garcinol treatment dose-dependently decreased the protein levels of p300/CBP (transcriptional cofactors and HATs) and of p-Smad2/3 in the nucleus, thus impeding tumor cell proliferation and metastasis. Knockdown of p300 could inhibit cell metastasis, but CBP knockdown did not affect cell mobility. It has been reported that TGF-β1 stimulates the phosphorylation of Smad2/3, which directly interact with p300/CBP in the nucleus, and upregulates the HAT activity of p300. We showed that garcinol treatment dose-dependently suppressed TGF-β1-activated Smad and non-Smad pathways, inhibiting esophageal cancer cell metastasis. In a tail vein injection pulmonary metastasis mouse model, intraperitoneal administration of garcinol (20 mg/kg) or 5-FU (20 mg/kg) significantly decreased the number of lung tumor nodules and the expression levels of Ki-67, p300, and p-Smad2/3 in lung tissues. In conclusion, our study demonstrates that garcinol inhibits esophageal cancer metastasis in vitro and in vivo, which might be related to the suppression of the p300 and TGF-β1 signaling pathways, suggesting the therapeutic potential of garcinol for metastatic tumors.
INTRODUCTION
Esophageal cancer is the eighth most common cancer in the world, and it has a poor prognosis and uneven geographic distribution [1]. There are two dominant histological types of esophageal cancer: esophageal adenocarcinoma and esophageal squamous cell carcinoma (ESCC). Historically, ESCC has shown a trend of increasing incidence in eastern Asia and Africa [2]. According to statistics from China, the United States and Europe, the 5-year survival rate of esophageal cancer is generally less than 21% [3][4][5]. Furthermore, over 50% of patients present with either unresectable tumors or radiographically visible metastases upon the preliminary diagnosis of esophageal cancer [6]. Cancer metastasis is the main cause of death among cancer patients [7]. Although chemoradiotherapy is the standard therapy for esophageal cancer and offers a statistically significant extension of survival, the therapeutic benefit of this treatment is unsatisfactory for most patients, and its potential for toxic effects should draw attention [8][9][10]. Thus, it is crucial to find an improved treatment strategy that impedes the development of this fatal disease.
Histone acetylation is an important protein modification. Usually, histone modification acts on the N-terminus of histones, altering gene transcription, translation and cell regulation [11,12]. Histone acetyltransferases (HATs) and histone deacetylases (HDACs) antagonize each other to maintain a dynamic balance of histone acetylation levels and participate in gene expression regulation [13]. p300/CBP is an important member of a group of acetyltransferases that are ubiquitously expressed transcriptional coactivators. The abnormal expression of p300/CBP leads to a series of effects, such as the induction of tumor cell proliferation and metastasis [14,15], and affects the activities of downstream pathways [15,16]. While p300 is a closely related paralog of CBP, the functions of p300 and CBP are different [17]. The high expression of p300 is associated with poor prognosis and an unfavorable impact on survival in non-small cell lung cancer (NSCLC) [18] and ESCC patients [19]. Linc00460, which is upregulated by CBP/p300 through histone acetylation, promotes carcinogenesis in ESCC [20]. p300 and CBP may be new targets for the treatment of cancer and metastasis [21,22].
Transforming growth factor-β (TGF-β) signaling enhances metastasis to promote malignancy during cancer development, and the mechanism remains unclear [23]. TGF-β mediates cell metastasis through Smad-mediated transcription regulation [24] and non-Smad pathways. The protein level and acetylation ability of p300 are also increased after TGF-β1 stimulation. TGF-β1 enhances the transcription of epithelial-mesenchymal transition (EMT)-related genes by activating transcription factors.
Natural compounds play a leading role in the development of anticancer drugs. Recent studies have indicated that some of the natural compounds that target HATs, especially p300, exhibit anticancer activity [25][26][27]. For instance, allspice extracts and Rosa rugosa methanol extract inhibit both p300 and CBP activity and reduce prostate cancer cell growth [28,29].
Garcinol, which is extracted from Garcinia yunnanensis Hu [30], is regarded as a potent inhibitor of HATs, especially p300 [31]. Anti-inflammatory, antioxidant, antitumor, and other activities of Garcinol have been reported [32][33][34]. Garcinol can reverse EMT toward mesenchymal-epithelial transition via the Wnt signaling pathway in breast cancer [35]. Garcinol can alter the expression and acetylation of the tumor suppressor p53, which results in growth arrest in breast cancer [36]. However, the mechanism by which Garcinol inhibits cell metastasis requires further research, and the process by which p300 mediates downstream signaling and influences cell metastasis has not been reported.
In the present study, we provided evidence that Garcinol inhibits cell migration and invasion by suppressing p300, but not its paralog CBP, and that the silencing of p300 can decrease EMT marker protein levels. The TGF-β1-related pathway is also suppressed by Garcinol, and p-Smad2/3, which forms a complex with p300 in the nucleus, is decreased after Garcinol treatment, suggesting that Garcinol is a potent antimetastatic drug candidate for preventing and treating esophageal cancer.
Plant material
Garcinol was obtained from G. yunnanensis Hu as previously described [37]. Its structure was determined by 1H-NMR and 13C-NMR spectral analysis, and the purity of Garcinol was more than 98% based on HPLC analysis. The compound was prepared by dissolving it in dimethyl sulfoxide (DMSO), and the final concentration of DMSO was adjusted to 0.1% (v/v) in the culture media. DMSO was used as the control in all cases.
Cell culture
The human esophageal cancer cell lines KYSE150 and KYSE450 were provided by the Fudan University Shanghai Cancer Center. The cells were maintained in a humidified atmosphere containing 5% CO 2 at 37°C. These cells were cultured in RPMI-1640 (Invitrogen, NY, USA) with 10% fetal bovine serum (FBS, Invitrogen, NY, USA), 100 U/mL penicillin, and 100 mg/mL streptomycin (Invitrogen, NY, USA).
Wound healing assay
A total of 1.5 × 10^5 cells were seeded into 24-well culture plates. When the cells reached 80%-90% confluence, a scratch was made through the confluent monolayer with a sterile plastic pipette tip. The cells were incubated in fresh complete medium at 37°C with or without Garcinol and TGF-β1. The migration distance of the cells was monitored and imaged under an Olympus IX83 microscope (Tokyo, Japan).
Transwell and Matrigel invasion assays
Cell migration and invasion were determined using a transwell chamber (Corning, NY, USA) with a pore size of 8 μm. For the migration assay, 5 × 10^4 cells were seeded in FBS-free medium in the upper chamber, and complete medium was added to the lower chamber. For the invasion assay, a total of 2 × 10^5 cells were plated in serum-free medium in the upper chamber of a Matrigel-coated transwell, and complete medium was added to the lower chamber. After incubation for 24 h at 37°C, the cells on the upper surface of the chamber were removed using cotton swabs, and then the migrated cells on the bottom surface were fixed in ethyl alcohol and stained with crystal violet. The membrane was scored under a light microscope in five random fields.
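As a minimal illustration of the quantification step just described (counting cells in five random fields per membrane), the sketch below averages hypothetical field counts, which are not data from this study, and expresses a treated group relative to its control.

import numpy as np

def mean_field_count(counts):
    # average number of migrated/invaded cells over the scored fields
    return float(np.mean(counts))

control_fields  = [112, 98, 105, 120, 101]   # five random fields, vehicle (invented)
garcinol_fields = [54, 61, 49, 58, 52]       # five random fields, treated (invented)

ctrl = mean_field_count(control_fields)
trt = mean_field_count(garcinol_fields)
print(f"migration relative to control: {100 * trt / ctrl:.1f}%")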
MTT assay
The cells were treated with various concentrations of Garcinol for 24 h. At the end of the incubation period, 10 μL of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) solution was added to each well of a 96-well plate for 4 h at 37°C, and then 150 μL of dimethyl sulfoxide (DMSO) was added to dissolve the purple crystals. The optical densities were measured at 570 nm, and cell viability was normalized as a percentage of the control.
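Normalizing viability "as a percentage of the control" amounts to a simple ratio of blank-corrected optical densities. The snippet below is an assumption about that arithmetic, not the authors' script, and the OD570 values are invented.

import numpy as np

def viability_percent(od_treated, od_control, od_blank=0.0):
    # blank-corrected mean OD570 of treated wells as a percentage of control
    treated = np.mean(np.asarray(od_treated, dtype=float) - od_blank)
    control = np.mean(np.asarray(od_control, dtype=float) - od_blank)
    return 100.0 * treated / control

print(viability_percent([0.81, 0.79, 0.83], [0.85, 0.88, 0.84], od_blank=0.05))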
Immunofluorescent staining
A total of 5 × 10^4 cells were grown on glass coverslips overnight and treated with or without 15 μM Garcinol in a culture of 5 ng/mL TGF-β1 for 2, 6, and 12 h. The cells were fixed with 4% paraformaldehyde, washed with PBS for 5 min three times, and then permeabilized using 0.3% Triton X-100 in PBS. After permeabilization, the cells were blocked with 5% bovine serum albumin for 1 h and then incubated with a p-Smad2/3 antibody (diluted 1:500) overnight. The coverslips were washed with PBS and then incubated with a Cy3-labeled goat anti-rabbit IgG secondary antibody (diluted 1:500; Beyotime) for 1 h. The coverslips were then washed and mounted using 4′,6-diamidino-2-phenylindole, and images were obtained using an Olympus microscope.
(Displaced figure legends: Fig. 1h, mRNA levels of p300 and CBP detected by RT-PCR after Garcinol or C646 treatment, *P < 0.05, **P < 0.01. Fig. 2, KYSE150 cells transfected with control siRNA or siRNAs targeting p300 and/or CBP for 24 h and analyzed by Western blotting, RT-PCR, wound healing, Transwell migration, and Matrigel invasion assays with or without 10 μM Garcinol; relative protein levels normalized to GAPDH with ImageJ (n = 5); data are means ± S.D., *P < 0.05, **P < 0.01.)
Cytosol/membrane fractionation
KYSE150 cells were treated with or without Garcinol for 6, 12, and 24 h. Nuclear and cytoplasmic protein extractions were obtained using the Nuclear and Cytoplasmic Protein Extraction Kit (Beyotime). A protease and phosphatase inhibitor cocktail was added to the protein extract.
Pulmonary metastasis assay in mice
Briefly, 5-week-old male BALB/c nude mice were purchased from the Experimental Animal Center of the Chinese Academy of Sciences (Shanghai, China) and maintained in a pathogen-free environment. The experimental procedures were approved by the Shanghai University of Traditional Chinese Medicine Committee on the Use of Live Animals for Teaching and Research. The mice were intravenously injected with 1 × 10^6 KYSE150 cells via the tail vein. After the injection of the tumor cells, the mice were randomly divided into three groups and received an intraperitoneal injection of saline, Garcinol or 5-fluorouracil (5-FU) once every two days for five weeks.
(Displaced legend for Fig. 3: KYSE150 cells transfected with siRNAs targeting p300 and/or CBP, with or without Garcinol; p300, CBP, E-cadherin, vimentin, and snail protein levels by Western blotting (n = 5), cell viability and counts by trypan blue staining (n = 3), and p300/CBP mRNA levels; relative protein levels normalized to GAPDH with ImageJ; *P < 0.05, **P < 0.01.)
HE staining and immunohistochemistry
After 35 days of treatment, the mice were sacrificed, and the lungs were immediately removed and fixed in 10% neutral buffered paraformaldehyde at 4°C for 48 h. Selected samples were embedded in paraffin, sectioned and stained with hematoxylin and eosin. The p300, p-Smad2/3, and Ki-67 primary antibodies were used at 1:100 dilutions. The sections were finally mounted with DPX mountant (317616, Sigma, MO, USA) for histological analysis.
Statistical analysis
Statistical comparisons were performed using Student's t test and repeated-measures one-way ANOVA followed by post hoc Dunnett's test with GraphPad Prism 5 software (GraphPad, CA, USA). Values of P < 0.05 were considered significant. All of the results are expressed as the means ± SD, and the experiments were repeated three times independently.
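For readers who do not use GraphPad Prism, the same kinds of comparisons can be sketched in Python with SciPy. In the example below, ordinary one-way ANOVA stands in for the repeated-measures design, and scipy.stats.dunnett (available in SciPy 1.11 or later) provides the control-referenced post hoc test; all numbers are illustrative and are not the study's measurements.

import numpy as np
from scipy import stats

# Invented group data (e.g., lung metastatic nodule counts, n = 7 per group)
vehicle  = np.array([18, 21, 17, 20, 19, 22, 18])
garcinol = np.array([ 9, 11,  8, 10, 12,  9, 10])
fu_5     = np.array([ 7,  9,  8,  6, 10,  8,  9])

# Two-group comparison (Student's t test)
t, p = stats.ttest_ind(vehicle, garcinol)
print(f"t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA across the three groups (stand-in for the repeated-measures ANOVA)
f, p_anova = stats.f_oneway(vehicle, garcinol, fu_5)
print(f"F = {f:.2f}, p = {p_anova:.4f}")

# Dunnett's post hoc test: each treatment compared against the vehicle control
res = stats.dunnett(garcinol, fu_5, control=vehicle)
print(res.pvalue)   # one p-value per treatment group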
Garcinol inhibits metastasis in ESCC cells
Previous studies showed that Garcinol has the ability to inhibit metastasis in several cancer cell lines, such as HT-29 and PANC-1 cells [38,39]. Therefore, we examined whether Garcinol affects metastasis in ESCC cells. In the wound healing assay, Garcinol inhibited KYSE150 cell migration at a concentration of 5 μM (Fig. 1b). The inhibitory effects of Garcinol on migration and invasion were investigated using Transwell and Matrigel assays. As shown in Fig. 1c, the number of migrating and invading KYSE150 cells was decreased after treatment with 5, 10, and 15 μM Garcinol; the compound C646, a commercial p300 inhibitor, was applied as a positive control. The numbers of migrating and invading cells are counted in Fig. 1d. To eliminate the possibility that the suppression of cell metastasis by Garcinol was due to the inhibition of cell proliferation, we examined the cytotoxicity of Garcinol in KYSE150 and KYSE450 cells using the MTT assay. Fig. 1e, f shows that 15 μM Garcinol did not suppress cell proliferation, suggesting that Garcinol suppresses cell metastasis rather than growth. We next determined how Garcinol influences metastatic signals. As shown in Fig. 1g, Garcinol decreased the protein levels of p300 and CBP in a dose-dependent manner. In addition, we observed that Garcinol upregulated the EMT-related protein E-cadherin and downregulated vimentin and snail. The mRNA levels of p300 and CBP were not affected by Garcinol treatment (Fig. 1h). These results suggest that Garcinol inhibits metastasis in ESCC cells.
Garcinol inhibits metastasis in a manner that is dependent on the downregulation of p300
To investigate whether Garcinol affects metastasis by inhibiting p300 and CBP, we evaluated p300/CBP levels upon Garcinol treatment in KYSE150 cells. siRNAs targeting p300 and CBP were transfected into KYSE150 cells, and the protein and mRNA levels of p300 and CBP were decreased (Fig. 2a, b). p300-1 siRNA, p300-2 siRNA, and CBP-2209 siRNA were selected for subsequent experiments. The wound healing assay showed that the knockdown of p300 inhibited the migration of KYSE150 cells, but the knockdown of CBP did not (Fig. 2c). The Transwell and Matrigel assays showed effects of the p300 and CBP siRNAs similar to those in the wound healing assay. As shown in Fig. 2d, e, the number of migrating and invading cells was further decreased after treatment with 10 μM Garcinol. The statistical analysis of migrating and invading cells is shown in Fig. 2f, g. Thus, the expression of p300 is related to the mobility of KYSE150 cells, and the antimetastatic effect of Garcinol may depend on the downregulation of p300.
(Displaced legend for Fig. 4: KYSE150 and KYSE450 cells treated with or without Garcinol, with or without siRNAs targeting p300 and/or CBP, for 24 h; E-cadherin, snail, p-Stat3, p-Src, p-AKT, p-Smad2/3, p-MEK, p-S6, and GAPDH analyzed by Western blotting (n = 5); relative protein levels normalized to GAPDH with ImageJ.)
We next examined the detailed mechanism of the effect of p300 on metastasis and detected the related changes in proteins after siRNA transfection. The protein levels of p300 and CBP were decreased after transfection with p300 and CBP siRNAs for 24 h and 48 h, and the cell numbers were counted (Fig. 3a, b). The results indicated that the knockdown of p300 suppressed cell growth at 48 h. The expression of EMT markers was also regulated by the p300 and CBP siRNAs. The knockdown of p300 increased the protein level of E-cadherin and decreased the protein level of snail, while the knockdown of CBP did not influence the expression of E-cadherin but decreased the protein level of snail (Fig. 3c). The expression of EMT markers decreased after p300 knockdown or Garcinol treatment (Fig. 3d). Taken together, our results indicate that p300 is essential for the mediation of KYSE150 cell metastasis and that Garcinol inhibits metastasis by downregulating the expression of p300.
The mRNA levels of p300 and CBP were also determined in p300 and CBP knockdown cells after Garcinol treatment. The expression of p300 mRNA was lower after Garcinol treatment than after vehicle treatment in p300 or CBP knockdown cells (Fig. 3e). However, the expression of CBP mRNA did not change in p300 knockdown cells after Garcinol treatment (Fig. 3f). Garcinol further decreased the expression of p300 mRNA after the knockdown of p300. These data indicate that the metastatic inhibition by Garcinol may be related to the expression of p300 but not CBP.
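The "relative protein levels normalized to GAPDH" reported in the figure legends correspond to a simple ratio-of-ratios calculation on band intensities. The snippet below is a hedged sketch of that calculation using hypothetical ImageJ readouts; it is not taken from the paper.

def relative_level(band, gapdh, band_ctrl, gapdh_ctrl):
    # band intensity normalized to its lane's GAPDH and then to the control lane
    return (band / gapdh) / (band_ctrl / gapdh_ctrl)

# Hypothetical densitometry values (arbitrary units)
print(relative_level(band=1500, gapdh=5200, band_ctrl=2400, gapdh_ctrl=5100))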
Garcinol inhibits TGF-β1-induced metastasis in ESCC
p300 and p-Smad2/3 can form a complex in the nucleus and activate downstream metastasis signaling [40]. As shown in Fig. 4a, b, the activation of several protein kinases, including p-Stat3, p-AKT, p-Src, p-Smad2/3, p-MEK, and p-S6, was decreased upon Garcinol treatment in both KYSE150 and KYSE450 cells. The knockdown of p300 increased the protein level of p-Stat3 and at the same time decreased the protein level of p-MEK, but 15 μM Garcinol reversed the expression of p-Stat3 (Fig. 4c). TGF-β1 can mediate EMT by stimulating the phosphorylation and acetylation of Smad2 and Smad3 in cancer cells [40]. Therefore, we evaluated the expression of proteins downstream of TGF-β1 after Garcinol or p300 siRNA treatment. Garcinol inhibited cell migration with or without TGF-β1 treatment in KYSE150 and KYSE450 cells (Fig. 5a, b). After TGF-β1 stimulation for 24 h, Garcinol decreased the protein levels of vimentin, snail, p-Smad2/3, p-Stat3, p-Src, p-AKT, p-MEK, and p-S6 in KYSE150 and KYSE450 cells (Fig. 5c, d). We then determined whether Garcinol can inhibit cell metastasis by inhibiting the nuclear expression of p-Smad2/3. The expression of p-Smad2/3 in the nucleus was decreased in a time- and dose-dependent manner after treatment with Garcinol (Fig. 6a, b). Immunofluorescence staining showed that the expression of p-Smad2/3 was lower after Garcinol treatment for 2, 6, and 12 h (Fig. 6c). Taken together, our data suggest that Garcinol can inhibit TGF-β1-induced metastasis in ESCC.
(Displaced figure legends: Fig. 5, KYSE150 and KYSE450 cells induced with 5 ng/mL TGF-β1 and treated with or without Garcinol for 24 h, analyzed by wound healing assay and Western blotting of E-cadherin, snail, p-Stat3, p-Src, p-AKT, p-Smad2/3, p-MEK, p-S6 and GAPDH (n = 5); relative protein levels normalized to GAPDH with ImageJ; data are means ± SD. Fig. 6, nuclear and cytosolic fractions of KYSE150 cells induced with 5 ng/mL TGF-β1 and treated with or without 15 μM Garcinol for 6, 12, or 24 h; p300, p-Smad2/3, lamin A/C and tubulin analyzed by Western blotting (n = 5) and p-Smad2/3 by immunofluorescence; scale bars, 10 μm.)
Garcinol inhibits pulmonary metastasis in mice
To explore the metastasis-inhibiting effect of Garcinol in vivo, we used a mouse model of pulmonary metastasis induced by tail vein injection. After KYSE150 cells were injected, the mice were randomly divided into three groups and administered vehicle, Garcinol, or 5-FU via intraperitoneal injection (n = 7 in each group). Thirty-five days after cell injection, the mice were sacrificed, and pulmonary metastasis was examined by HE and immunohistochemistry staining. As shown in Fig. 7a, lung tumor nodules were observed in the control group, whereas both Garcinol and 5-FU reduced the number of tumor nodules. The quantitative analysis is shown in Fig. 7b; the number of nodules in the Garcinol- and 5-FU-treated groups was significantly decreased compared with that in the vehicle group. The weight of the lungs in the Garcinol- and 5-FU-treated groups was decreased compared to that in the vehicle group (Fig. 7c). Ki-67 is a marker of cell proliferation, and the inhibition of Ki-67 expression indicates the suppression of proliferation. After Garcinol or 5-FU injection, the expression of Ki-67 was lower than that after vehicle injection (Fig. 7d). Consistent with the in vitro experiments, Garcinol did not have significant effects on the weight of the mice (Fig. 7e) or other tissues. After Garcinol or 5-FU injection, the levels of p300 and p-Smad2/3 were also decreased in the lung tissues (Fig. 7d). In summary, our study indicates that Garcinol is an interesting natural compound that affects non-Smad and Smad pathways in esophageal cancer cells.
(Displaced legend for Fig. 7: Garcinol inhibits pulmonary tumor metastasis in mice. Six-week-old male nude mice were injected in the tail vein with 1 × 10^6 KYSE150 cells and treated with vehicle, Garcinol (20 mg/kg every 2 days), or 5-FU (20 mg/kg every 2 days) for 5 weeks (n = 7 per group). Representative lungs and HE staining (a); quantification of metastatic nodules (b); lung weight (c); immunohistochemical staining for Ki-67, p300, and p-Smad2/3 (d); body weights measured every two days (e). *P < 0.05, **P < 0.01.)
DISCUSSION
p300 and CBP function as transcriptional cofactors and HATs. The high expression of p300 is associated with poor survival in esophageal cancer patients, and p300 has been an important drug target in cancer research [41]. In this study, we found that Garcinol may be an effective inhibitor of metastasis in esophageal cancer; its effect was investigated using wound healing, Transwell migration and invasion assays. Our data also suggested that Garcinol inhibits p300 and CBP without downregulating their mRNA levels (Fig. 1h). Our further studies indicated that Garcinol inhibits TGF-β1 signaling pathways, including Smad and non-Smad pathways. Phosphorylated Smad2 and Smad3 associate with Smad4, translocate to the nucleus and act as transcription factors [42]. Garcinol treatment leads to a decrease in p-Smad2/3 and p300 in the nucleus, which causes the downregulation of EMT markers. p300 and CBP have multiple functional domains that accommodate diverse protein-protein interactions with a large number of disparate transcription factors [43]. Although p300/CBP have been implicated in cancer development, the specific mechanisms have been less precisely defined [44]. Research has shown that p300 promotes proliferation, migration, and invasion in NSCLC [45]. The overexpression of p300 is a poor prognostic factor in breast cancer, prostate cancer, hepatocellular carcinoma, and esophageal squamous cell carcinoma [46][47][48]. In our previous studies, we found that Garcinol has a potential effect on p300 and CBP, leads to p-Stat3, p-Src and p-AKT downregulation, and inhibits EMT functions and characteristics. We then investigated the role of p300/CBP in inhibiting ESCC metastasis, and the results indicated that p300, but not CBP, has an antimetastatic effect. We also attempted to overexpress p300 to explore its effect; however, due to the high molecular weight of p300, it was difficult to obtain sufficiently effective and convincing experimental results, and we will try to verify these results in the future. In our animal study, Garcinol, like the positive control 5-FU, inhibited pulmonary metastasis (Fig. 7).
TGF-β1 stimulation results in the phosphorylation of the (Serine-Serine-X-Serine) SSXS motif of Smad2 and Smad3, which leads to nuclear translocation [49]. Both Smad2 and Smad3 directly interact with p300/CBP in the nucleus [50]. In this study, we determined that the phosphorylation of Smad2 and Smad3 can be suppressed by Garcinol, which can also decrease the p300 protein level in the nucleus. This activity of p300/CBP is considered a novel target for the prevention of EMT and fibrosis [51]. TGF-β1 can upregulate the acetyltransferase activity of p300 in some sensitive ESCC cells, in which Garcinol inhibits the stimulation of p300 activity by TGF-β1 [49]. The results suggested that the regulation of TGF-β1 signaling by Garcinol is also important for the chemoprevention of invasion and metastasis in cancer. Thus, our findings highlight that p300/CBP and Smad2/Smad3 in the TGF-β1 signaling pathway are essential components for the regulation of the TGF-β1-induced transcriptional activation of EMT markers in human ESCC cells (Fig. 6).
In conclusion, we demonstrated that Garcinol can inhibit p300 and p-Smad2/3 in the nucleus to block the transcription of metastasis-related genes. We also found that the knockdown of p300 can downregulate p-MEK and p-S6, which are related to EMT. Garcinol can also suppress Smad and non-Smad pathways, which are activated by TGF-β1. Thus, Garcinol decreases EMT marker gene expression by inhibiting the TGF-β1 pathway and p300. As a result, our findings reveal a new mechanism by which Garcinol inhibits cell metastasis through the inhibition of p300 and p-Smad2/3 activity in human ESCC cells. | 2019-08-03T13:03:25.863Z | 2019-08-01T00:00:00.000 | {
"year": 2019,
"sha1": "77737f2398a0fad9ca9d0a5090bc015a758db62d",
"oa_license": null,
"oa_url": "https://www.nature.com/articles/s41401-019-0271-3.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "e5a7905f28d009a0678b42c3ce048fe46ff439dc",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
216123665 | pes2o/s2orc | v3-fos-license | A Conversation Analysis of Repair Strategies in Indonesian Elementary EFL Students
This study aimed to investigate the types of repair strategies and the techniques of repair initiation used by Indonesian Elementary EFL students during classroom interaction with their teacher. The participants were Elementary EFL students at the beginner level. Using a qualitative design, the study applied the four types of repair strategies proposed by Schegloff, Jefferson, and Sacks (1977) and the techniques of repair initiation from Finegan (2008). The data sources were video recordings of classroom interactions, which were transcribed using the Jefferson Transcription Notation (2004). The findings revealed that the students used all types of repair strategies. The most frequent is OISR, with 23 occurrences (37.1%). In addition, three techniques of repair initiation were found in the conversation; asking a question toward the problem was the dominant one, occurring 31 times (50.0%). Another technique revealed was offering a possible understanding of the problem. The results indicate that the speakers produced most of the trouble sources, which led the recipients to initiate the repair: the trouble source was identified by the teacher, but the students did the repair. The trouble sources that appeared were affected by the students' proficiency and their limited knowledge of the topic. Also, the teacher initiated requests for explanation to develop the students' English knowledge and speaking fluency. However, the teacher should ensure that the students have a chance to repair their own trouble sources.
INTRODUCTION
Conversation is the way people communicate with others. It also shows how they interact with others and exchange information. However, conversation is not only about maintaining relationships and exchanging information; there are many other features of conversation that can be studied. In recent times, the study of conversation has been extended to spoken discourse such as doctor-patient consultations, news interviews, talk shows, and classroom interaction (Paltridge, 2006). To examine conversation, Conversation Analysis (CA) is a suitable approach because it studies the organization of social action through talk (Mazeland, 2006).
It is an approach to social interaction and action that investigates interaction by analyzing how the participants construct it. Paltridge (2006) believes that CA is an analysis of talk which focuses on how people maintain their everyday conversational interaction, and also the study of spoken discourse that looks at how people manage their conversational interaction. Furthermore, CA focuses on the practical details of how talk-in-interaction is organized (Schegloff, 2007). In the study of CA, the phenomenon described above is called repair. It is an aspect of conversational interaction and a crucial element of conversation. Schegloff, Jefferson, and Sacks (1977) define repair as a tool used in conversation to correct an error made by the speaker, the trouble source, and state that repair deals with recurrent problems in speaking, hearing, and understanding. Speakers also check what they have understood in a conversation (Clark & Schaefer, 1989; Paltridge, 2006).
Sometimes, speakers do not realize that they have made a mistake. Therefore, the recipient should give a signal to inform them and initiate the repair of the previous statement (Tiara, 2018). To study repair in conversation, Schegloff, Jefferson, and Sacks (1977) categorize repair into several types.
Thus, this study aims to see how Indonesian beginner EFL students solve miscommunication problems involving speaking, hearing, and understanding in their class, and also how they initiate the repairs.
RESEARCH METHODS
This study was qualitative because the data were conversational interactions in the classroom. Qualitative research is a kind of social science research that deals with non-numerical data. It focuses on the micro-level of social interaction that composes everyday life (Crossman, 2018). Whereas, Wray and Bloomer (2006) (Schegloff et al., 1977).
According to them (as cited in Liddicoat, 2007), the bold clause marks the repairable segment, that is, the trouble source, the thing in talk which needs to be repaired. Meanwhile, the repairing segment is the segment of the utterance that repairs the trouble source. It must also follow the initiation given by another participant. The repair can be done in several ways, for example by asking a question, repeating the misheard or misunderstood item, or using a particle or expression. After the data were analyzed through the steps above, the results were interpreted and conclusions were drawn.
FINDINGS AND DISCUSSION
The study aims to examine the types of repair strategies and techniques of repair initiation used by Indonesian Elementary EFL students at the beginner level. The analysis was conducted through the framework of types of repair strategies developed by Schegloff, Jefferson, and Sacks (1977) and the techniques of repair initiation by Finegan (2008). The findings of this study revealed 62 occurrences of repair strategies used by the EFL students, which are shown and discussed below.
The Types of Repair Strategies
The analysis of the types of repair strategies used the theory proposed by Schegloff et al. (1977). The results showed that the participants used all types of repair strategies during the conversation, namely self-initiated self-repair (SISR), self-initiated other-repair (SIOR), other-initiated self-repair (OISR), and other-initiated other-repair (OIOR) (table 2).
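The four strategies form a simple two-by-two taxonomy defined by who initiates the repair and who completes it. The short Python sketch below is an illustration, not part of the study's procedure: it labels coded repair events accordingly and tallies counts and percentages in the way the findings are reported; the example events are hypothetical.

from collections import Counter

def classify(initiator, repairer):
    # initiator / repairer: 'self' (speaker of the trouble source) or 'other'
    tag = {"self": "S", "other": "O"}
    return f"{tag[initiator]}I{tag[repairer]}R"   # e.g. ('other', 'self') -> 'OISR'

# Hypothetical coded repair events from a transcript
events = [("other", "self"), ("other", "self"), ("self", "self"),
          ("other", "other"), ("self", "other"), ("other", "self")]

counts = Counter(classify(i, r) for i, r in events)
total = sum(counts.values())
for strategy, n in counts.most_common():
    print(f"{strategy}: {n} ({100 * n / total:.1f}%)")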
Other-Initiated Self-Repair (OISR)
According to Schegloff et al. (1977), OISR occurs when the repair is initiated by the recipient of the trouble source but completed by its speaker. In this excerpt, the students were discussing the past tense. One of the students (student 1) tried to answer the question, but the teacher and another student (student 2) identified a problem in student 1's utterance. In a conversation with more than two participants, it is possible for the repair to be initiated by more than one recipient (Tiara, 2018). The teacher used the particle "huh?" when he noticed the trouble source, and student 2 initiated the repair by asking student 1 a question, "bukannya went?" ("shouldn't it be 'went'?"). This is the pattern described by Schegloff, Jefferson, and Sacks (1977): the teacher identified the trouble source, and the student did the repair (Chalak & Karimi, 2017). Besides, the student answered the question in Indonesian because he was afraid that the answer would be incorrect if he gave it in English.
Other-Initiated Other-Repair (OIOR)
There are 16 occurrences of other-initiated other-repair (OIOR). In OIOR, the trouble source is both identified and repaired by the interlocutor or recipient. According to Schegloff et al. (1977), other-initiated other-repair occurs when the recipient completes the repair. In the analysis, this type appeared 16 times (20.8%). The following is an example of how this type occurred in the conversation.
Teacher : I fell off a bicycle.
In this excerpt, the participants were discussing the past tense. The teacher asked the students to give an example of a sentence in past tense form.
In the next turn, one of the students gave an example. The trouble source appeared when the student uttered the example by saying, "I fell (2.0) bicycle." The teacher here indicated there was a trouble source in the previous turn in terms of grammar.
However, the student was unaware of the mistake in his utterance. Then, the teacher simultaneously initiated and repaired it into the correct form for the student by saying, "I fell off a bicycle."
The conversation between the teacher and the student above shows that the teacher initiated and repaired the student's utterance into the correct form. As Schegloff et al. (1977) state, when the interlocutor both initiates and completes the repair of the trouble source, it is categorized as other-initiated other-repair. Tiara (2018) in her study states that OIOR occurs when the initiation and completion are done simultaneously. Sometimes, the initiation from the interlocutor is disguised as a solution to the trouble source. Also, this strategy is used to correct the problem produced by the current speaker as well as to give the correct answer.
Self-Initiated Self-Repair (SISR)
Self-initiated self-repair ( In excerpt 3, the teacher asked the students what they already studied in the last meeting. They were discussing the material first before the class was started. The teacher asked the students to translate the sentence into English, "how do you say 'saya pergi ke sekolah dengan motor tadi pagi?'" When the student tried to answer the question in the next turn, he was repeating the word "I'm go (1.0) I'm (3.0)", and cuts-off for three seconds. But after he got the answer, he immediately repaired his utterance to make the message was conveyed well to the interlocutor by saying "I go to school (2.0) I go to the school with motorcycle". In the student's statement, he realized that there was a trouble source in terms of his grammar that needed to be corrected.
Therefore, he initiated the repair of his utterance by repeating his statement.
In accordance with the theory of Schegloff et al. (1977), the excerpt shows how self-initiated self-repair (SISR) was used by the student in the conversation. According to them, SISR takes the form of initiation with a non-lexical initiator, followed by the repairing segment. To repair errors in conversation, language users repeat words to achieve their communication goals.
Besides, SISR appears when the speaker who is responsible for the trouble source both initiates and completes the repair. Also, Rahayu (2016) states that SISR occurs when the speaker is aware of the problem in his or her utterance and directly resolves it in his or her turn of speaking by cutting off, repeating, or replacing the incorrect word or statement.
Self-Initiated Other-Repair (SIOR)
Self-initiated other-repair (SIOR) is the least frequently used type of repair strategy among the EFL students. This type refers to the situation in which the initiation of repair is produced by the speaker of the trouble source, while the recipient completes the repair (Schegloff et al., 1977). This strategy emerged in 24 occurrences (31.2%).
The following excerpt is a sample of SIOR. In it, the student initiated the repair by asking another question, "'was' itu untuk apa?" ('what is "was" for?'). Then, the teacher repaired the trouble source by answering "for 'is.'" The excerpt shows that the speaker acted as the trouble maker and was also the one who initiated the repair.
However, the person who completed the repair was the interlocutor. This is called self-initiated other-repair (SIOR) (Schegloff et al., 1977). The SIOR strategy also occurs when the speaker wants to confirm the recipient's answer to the speaker's question by asking another question (Tiara, 2018). She also states that this strategy aims to confirm something that the speaker already knows but is unsure about.
This strategy also appears when speakers cannot resolve the error by themselves, so the interlocutor repairs the error.
The Techniques of Repair Initiation
Besides the types of repair strategies, this study also investigated techniques of repair initiation. The analysis was based on the framework of techniques of repair initiation proposed by Finegan (2008).
Asking Question
Asking a question toward the problem is the most frequently used technique, occurring 31 times (50.0%). The explanations are shown below.
Asking question in OISR
In the conversation, asking a question occurred in other-initiated self-repair. The participants used this technique in order to get a clarification of the trouble source.
Therefore, when the recipient initiates the repair to the speaker by giving a question, the speaker will correct the trouble source.
The following excerpt is the example of asking question toward the problem in other-initiated self-repair.
Student : It's easy! Teacher : Huh? Is it easy?
Student : Yes!
In excerpt 5, the students were doing an exercise in the form of a puzzle. One of the students thought that the task given by the teacher was too easy, so he said, "it's easy!". Hearing the student's statement, the teacher initiated the repair by asking the student a question, "Huh? It's easy?" In the next turn, the student repaired by clarifying his statement to the teacher, answering, "Yes!" Finegan (2008) identifies the technique of repair initiation above as asking a question toward the problem. This technique begins with an interrogative word.
Besides, when the participants find the trouble source in the conversation, they will actively offer a question to get more explanations or clarifications for proper understanding (Tiara, 2018).
Asking question in SIOR
Asking a question toward the problem did not only occur in other-initiated self-repair; it also appeared in self-initiated other-repair. In SIOR, the speaker used this technique to get an explanation and clarification of the trouble source from the recipient. Excerpt 6 shows an example of SIOR in the conversation.
Excerpt 6
Student : (Schegloff et al., 1977). In the example of the repeat-part-of-the-utterance technique above, the participants were talking about prepositions. The teacher asked the students for the Indonesian translation of the word "between" by asking, "between, what is between?". The student tried to answer the question, but she recognized the trouble source in her utterance. Therefore, she acted as the one who both initiated and repaired her statement by herself, repeating "Di antara (1.0) di tengah-tengah!" ("In between (1.0) in the middle!") because she wanted the interlocutor to understand her intended meaning.
In excerpt 8, the student used the self-initiated self-repair strategy. She acted as the initiator and the person who repaired her utterance by herself. According to Finegan (2008), the repeat-part-of-the-utterance-to-be-repaired technique is one in which the repair initiation appears in the same turn as the speaker's talk. Moreover, repetition occurs when participants recognize their trouble source in the conversation and repair it for the other participants (Tiara, 2018). Also, Rieger (2003) states that repetition is the type of self-repair in which the repairable and repairing segments happen in the same turn, and the repair is performed by the initiator of the repairable.
Use particle and expression 'uhh'
The particle and expression 'uhh' appeared only twice (3.2%). It occurs when, in the middle of conveying the message to the interlocutor, there is a pause of less than a tenth of a second in the speaker's utterance. The following excerpt shows an example of the use of the particle and expression 'uhh.'
Excerpt 9
Teacher : Which one is odd one? In excerpt 9, the participants were discussing the vocabulary. When the teacher said, "which one is odd one?" the student answered, "kick." Then the (Utami, 2018).
CONCLUSIONS
Other-initiated self-repair (OISR) is the most frequently used repair strategy among the students at the beginner level. | 2020-04-23T09:02:25.366Z | 2020-04-09T00:00:00.000 | {
"year": 2020,
"sha1": "fccf6b0934a5e848169a1c38ee2896ef4a1a2e1b",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125938655.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "83301065606544d00c5033253584a0cf3dad15bf",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
212596707 | pes2o/s2orc | v3-fos-license | Investigation of soil tillage practices and weed control methods on Zea may farms in North West of Iran
Introduction
Nowadays, discussions about optimizing fuel usage are very prominent, and most research projects are conducted in order to offer methods and useful techniques for better use of energy resources. Soil tillage practices are the most important field operations in agriculture and generally account for more than 50% of the total energy consumed for crop production. Therefore, applying different plowing methods can be an effective way of reducing energy consumption in agricultural activities. Compared with slender (chisel-type) ploughs, reversible ploughs used for primary tillage require more energy and are more time consuming. Moreover, with these methods more soil moisture is evaporated and the soil structure is destroyed, so the soil is exposed to wind erosion. With continued plough use, the soil surface is disturbed, which results in the formation of plough pans at a constant soil depth, reducing the penetration and extension of plant roots. On the other hand, slender ploughs provide less control of weeds. 1 A study of an alternating corn and soybean cropping system over a period of 10 years found that tillage with the reversible plough, regardless of the rotation crop, resulted in the maximum crop yield. In none of the tillage systems, whether conventional or conservation, was any reduction in crop yield observed over that period. 2 This demonstrates that minimum tillage systems do not cause any reduction in crop yield. The effect of year on yield in both conventional and conservation tillage was not significant. In continuous corn planting, tillage with the reversible plough gave the maximum return on investment, while in corn-soybean rotations, the slender plough showed the highest return on investment. This study demonstrated that conservation tillage can be practiced without any noticeable loss in return on investment and without a significant increase in chemical pesticide applications. 3 It has been reported that, in a comparison of tillage methods on silt-loam soils, corn yield without tillage was 8.4 tons per hectare in the first year, whereas tillage with the reversible plough and with the slender plough resulted in 10.5 and 9.3 tons per hectare, respectively. In the second year, the effect of tillage on corn yield was not significant; therefore, conservation tillage (slender plough or no tillage) has been recommended for sloping fields. 4
Abstract
In order to study the influence of tillage methods on weed control in corn (single cross 704), an experiment was conducted at the Miandoab research station for 3 years, from 2014 to 2017. The experiment was a split-plot design based on a randomized block design. The main factor was the tillage method, with chisel and mouldboard ploughing to a depth of 25 cm, and the sub-factor was the weed control method at four levels: chemical control using 2 L/ha of Nicosulfuron 4% SC (Cruz), mechanical control with two inter-row cultivations at the 2- and 8-leaf stages of corn, a weed-free control (with three hand weedings), and a weedy control. Analysis of variance showed that the effect of tillage method on corn yield was not significant; average corn yields under the different tillage methods were 9.422 and 9.148, respectively. The effect of the weed control method, however, was significant at the 1% level. Means comparison by Duncan's multiple range test showed that chemical control and hand weeding, with 11.285 and 10.85 respectively, were placed in the same group, whereas mechanical control and no control, with 8.654 and 6.357 respectively, were placed in other groups. Analysis of variance also indicated that the effect of tillage method on the density of common purslane and barnyard grass was significant at the 1% level, but tillage method had no effect on the density of the other weeds. The average density of barnyard grass over the 3 years was 43.083 with the mouldboard plough and 12 with the chisel plough, and the average density of common purslane was 6.917 with the mouldboard plough and 14.5 with the chisel plough; these were placed in separate groups by Duncan's multiple range test.
They also suggested that, in corn planting, tillage with the reversible plough caused declines in useful soil aggregation, total soil porosity, and fertility compared with conservation tillage. 5 It has been declared that using the slender plough in autumn over a period of 5 years resulted in 5% yield losses compared with the reversible plough. 6 After investigating six tillage methods, it was found that, over a duration of 3 years, only the no-tillage system performed worse than the other systems. According to the reports of 7, tillage with the slender plough produced a yield equal to that with the reversible plough, while requiring less fuel and energy (about 40%) and less time. Janzen HH 8, in his economic studies, found that the application of the slender plough gives a better and more direct return on investment than the other methods. 9 The apparent specific gravity (bulk density) and the soil cone index were reported to be higher with slender-plough tillage than with the reversible plough. Also, soil penetration resistance at shallower depths increased more under reversible-plough tillage than under slender-plough tillage. Griffith DR et al. 6 reported that the effect of tillage on soil penetration resistance in a silt-loam soil was limited to the plough layer (about 23 cm); the cone index below 23 cm was similar for all treatments and was less than 2 MPa. Helgason BL et al. 10 reported that conservation tillage systems allow higher soil moisture content, lower soil temperature, and higher bulk density compared with conventional tillage. In sloping fields with good drainage and low organic matter, conservation tillage methods produced crop yields equal to or greater than conventional tillage in corn and bean plantings.
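As an illustration of the kind of means comparison reported in the abstract, the sketch below groups invented plot-level yields by weed-control treatment and runs a Tukey HSD multiple comparison from statsmodels as a stand-in for Duncan's multiple range test, which is not provided by SciPy or statsmodels; the yield values are for demonstration only and are not the experiment's data.

import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented plot-level yields for the four weed-control sub-plot treatments
data = pd.DataFrame({
    "treatment": ["chemical"] * 4 + ["hand_weeding"] * 4
               + ["mechanical"] * 4 + ["weedy_check"] * 4,
    "yield_value": [11.1, 11.4, 11.3, 11.3,
                    10.6, 11.0, 10.9, 10.9,
                     8.5,  8.8,  8.6,  8.7,
                     6.2,  6.5,  6.3,  6.4],
})

print(data.groupby("treatment")["yield_value"].mean())
print(pairwise_tukeyhsd(data["yield_value"], data["treatment"], alpha=0.05))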
In fields with poor drainage, low organic matter and weak soil structure, conservation tillage improved the soil structure over time, so that soil organic matter and aggregation increased; corn yield also improved with time and was often higher than under conventional tillage. It has further been noted that shallow tillage systems, compared with mouldboard ploughing, result in: I. an increase in bulk density in the lower soil layers; II. an increase in soil organic matter, which improves the water-holding capacity and aggregate stability of the soil; and III. an increase in earthworm populations, which promotes water infiltration into the soil and reduces erosion and soil loss.3
Griffith DR, et al.6 reported that with lower tillage intensity the maximum resistance occurs closer to the soil surface; the depth to the maximum cone index was greater for the mouldboard plough than for the chisel, with the maximum resistance at about 22-25 cm under chisel tillage and at 23-40 cm under mouldboard ploughing. Another study reported that the reduction in soil bulk density after mouldboard ploughing was 22%, versus about 17% after chisel ploughing.11 It has also been demonstrated that soil bulk density is not significantly affected by the type of tillage but is affected by depth, with significant variation in bulk density across depths,12 and that among the different tillage methods, especially at depths of 8-16 cm, significant differences (P<0.05) between the chisel and mouldboard systems have been observed. Others reported that the cone index is affected by the type and depth of tillage, and that the differences in bulk density across soil depths and tillage types were significant; a plough pan was observed in all treatments except chisel tillage.13 Finally, it has been reported that applying herbicides together with inter-row cultivation 2-4 weeks after corn emergence economically increased crop performance, giving higher yield and better weed control.14
Helgason BL et al.10 reported that after mechanical weed control the weed density was 41% higher than with chemical control, but the reduction in corn yield under mechanical control was only 22%. Compared with chemical control of weeds within the rows using herbicides, weed density was 8% higher but only a 1% loss in crop yield was observed. Weed control is the most beneficial effect of ploughing; various experiments have shown that in weed-free soils no significant yield increase is obtained in ploughed fields. Corn grows rapidly under high temperature and light but slowly at low temperatures, so it is most sensitive at the beginning of its growth, and whenever the weather is unfavourable it becomes more sensitive to weed competition.9 Studies estimating weed damage have reported losses of about 45% in Germany, 30% in Russia, 50% in Indonesia, and 41% to 86% in the USA.4 The soil seed bank (the seeds of various species stored in the soil) is the most important source of weed infestation in most cultivated fields. Tillage, by disturbing the soil structure, directly affects the seed bank; tillage combined with herbicide use also affects it indirectly by reducing weed seed production. These operations alter seed bank characteristics such as seed number, viability, dormancy, and species composition, and changes in these characteristics often lead to shifts in species composition and in the weed flora. The tillage method also influences the longevity and distribution of seeds in the vertical soil profile: with mouldboard ploughing, weed seeds are distributed more uniformly through the ploughed layer than with the chisel plough, whereas with chisel ploughing the seeds are concentrated near the soil surface. Changing from a conventional tillage system to a conservation one therefore leads to changes in weed species composition, in which the germinating species may resist the weed control practices applied.4 The results of several studies have shown that the effects of tillage on species composition differ and depend mostly on the cropping system and its duration. Tremblay G15 conducted an experiment to evaluate the effects of primary tillage (chisel and mouldboard ploughing), secondary tillage (cultivator use) and herbicide application on weed control, changes in weed species populations, and the soil seed bank over a period of three years. The rotations studied were continuous corn for 3 years (CN), continuous green bean for 3 years (PB), and sugar beet for 2 years followed by corn in the third year (SB). Comparing the chisel with the mouldboard plough, the weed seeds after chisel ploughing were concentrated nearer the soil surface than after mouldboard ploughing, and the seed densities of some annual weeds remained high in the seed bank throughout the three-year period after chisel ploughing. The most important species showing a notable increase are listed below:
Chenopodium album and Amaranthus retroflexus, and, for the SB rotation, Eragrostis cilianensis and Solanum sarrachoides.
Conversely, in plots ploughed with the chisel under the SB cropping system, the seed density of Kochia scoparia decreased rapidly. Cultivator use, compared with no cultivation, reduced the seed bank density in the soil. Herbicide use for weed control in each cropping sequence did not change the density of the weed species that showed resistance to the herbicides. Seeds of S. sarrachoides were most prominent in the PB rotation, and seeds of K. scoparia, A. retroflexus and Chenopodium album were very marked in the SB rotation. Grover KK, et al.3 investigated the effects of tillage and herbicides on weed flora composition in irrigated cropping systems. In that study, the effects of primary tillage (chisel and mouldboard ploughing), inter-row cultivation and different herbicide levels on changes in the weed flora were evaluated in three irrigated rotations over 5 years: continuous corn for 5 years (CN), continuous green bean for 3 years followed by corn for 2 years (PB), and sugar beet for 2 years followed by corn for 3 years (SB). Total weed densities over the 5 years ranged from 1 to 245 plants per square meter in the PB rotation, 100 to 209 in SB, and 2 to 190 in CN. Density reductions in the mouldboard-ploughed treatments were not significant. Overall, in the last year of the study, Setaria viridis was most prevalent in the CN rotation, A. retroflexus and S. sarrachoides in PB, and A. retroflexus and S. viridis in SB.16 Differences in weed species due to the rotation were observed only in treatments with chisel primary tillage. Furthermore, the density of A. retroflexus was reduced by rotation combined with chisel ploughing and high rates of herbicide application. Cultivator use was more effective in disrupting the weed species composition than leaving plots untreated. In that study, only during the drought that occurred in July 1994 did ploughing markedly affect the germination of A. retroflexus. Soil temperature and moisture at a depth of 2.5 cm were measured and showed that ploughing had no significant effect on them.
Materials and methods
The present study was conducted at the agricultural research center in Miandoab, West Azerbaijan, Iran (46°9' E, 36°58' N, 1371 m above sea level). The station has dry and semi-dry moisture regimes and a mesic thermal regime. Average annual rainfall is about 286-330 mm, and the soil is a silt loam (river sediments) with pH 8.9 and an electrical conductivity of 1.21 mmhos/cm. The experiment was arranged as a split plot in a randomized complete block design with 4 replications in each of the three years, with the tillage treatments as main plots and the weed control methods as subplots, as described above. Fertilizers were applied according to the soil analysis results. Corn was planted in 4 rows spaced 75 cm apart, with 20.5 cm between plants within each row. The plot area was 9.225 square meters (harvesting was from the 2 middle rows). Blocks were 4 m apart and plots were separated by one planting row. Single cross 704 corn seed was sown in rows with a corn planter at a density of 65,000 seeds per hectare. The chemical control treatment was applied at the 2-4 leaf stage of corn with Cruz (Nicosulfuron 4% EC, 2 L/ha). The first and second cultivations in the mechanical control treatments were carried out at the 4- and 8-leaf stages of the corn. Weed density and dry mass were measured 30 days after herbicide spraying and again at the end of the experiment before harvest. In the mouldboard treatments, ploughing was done to a depth of 25 cm in spring, followed by discing and levelling before planting; chisel ploughing was done in spring to the same 25 cm depth, likewise followed by discing and levelling before planting.
Effects of soil practice methods on weeds density
As shown in Table 1, in the first year of the experiment the main effect of tillage method (chisel versus mouldboard ploughing at 25 cm depth) was not significant for any of the weeds except common purslane. In the second year, tillage had no significant effect on weed density (Table 2), but in the third year the effects were significant (P<0.05). The combined analysis of variance showed that tillage method had a significant effect on the density of barnyard grass and common purslane (P<0.01) (Table 3). The average barnyard grass densities under mouldboard ploughing and chisel tillage were 43.083 and 12 plants per square meter, and the average common purslane densities were 6.917 and 14.5 plants per square meter, respectively. For redroot pigweed, the results agreed with other studies that investigated the effects of different ploughing methods on the germination, phenology and density of redroot pigweed: although ploughing is necessary for the germination of most weeds, in the case of amaranth it is of less importance for predicting and controlling population dynamics. As in the experiments of Oryokot JO et al.16 mentioned above, chisel tillage, compared with mouldboard ploughing, causes changes in annual weed density.
Effects of controlling methods on weeds density
Over the three years, the weed control methods had significant effects on weed density in the corn field (P<0.01) (Table 4). Comparison of the mean weed densities over the three years showed that, for barnyard grass, lambsquarters and field bindweed, mechanical control and the uncontrolled check were placed in the same group, while chemical control and hand weeding together were placed in another group. For barnyard grass, the control methods were placed in different groups (Table 5).
The results showed that the mechanical control treatments were 27.41% effective in controlling weeds in corn fields in the Miandoab area of West Azerbaijan province, and that this control method resulted in a yield loss of about 42.01% compared with the chemical control treatment (the highest yield). Chemical control gave the best weed control under both tillage methods. In addition, changing the tillage system from the conventional (mouldboard plough) to the conservation system (chisel) changed the composition of the weed species through the germination response of the species' seeds to the different tillage operations. These species are mostly annual weeds whose seeds accumulate near the soil surface under chisel ploughing, whereas mouldboard ploughing disperses them uniformly through all the soil layers.
"year": 2020,
"sha1": "f8371356c7ca7b804c2da695c09de135deb4192f",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/OAJS/OAJS-04-00144.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f8371356c7ca7b804c2da695c09de135deb4192f",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Geography"
]
} |
Response of bread wheat to integrated application of vermicompost and NPK fertilizers
A greenhouse pot experiment was conducted to determine the effects of vermicompost, inorganic fertilizers and their combinations on nutrient uptake, yield and yield components of wheat. A factorial combination of four levels (0, 2, 4 and 6 t ha-1) of vermicompost and four levels (0, 33.33, 66.67 and 100%) of the recommended NPK fertilizers was laid out in an RCB design with three replications. The bread wheat variety Kekaba was used as the test crop. Main effect results indicated that both vermicompost and NPK fertilizers significantly increased the yield components, yield and nutrient uptake of wheat. Vermicompost applied at 2, 4 and 6 t ha-1 increased the grain yield of wheat by 11, 17 and 26% over the control, respectively, whereas 33.33, 66.67 and 100% NPK fertilizers increased the grain yield by 10, 24 and 30% over the control, respectively. Vermicompost applied at 6 t ha-1 resulted in the highest nutrient uptake, increasing the grain uptake of N, P and K by 51, 110 and 89% over the control, respectively, whereas among the fertilizer rates the highest uptake was produced by the 100% NPK treatment, which increased the N, P and K uptake in the grain by 79, 100 and 96% over the control, respectively. Combined application of vermicompost and NPK fertilizers also significantly increased the nutrient uptake, yield and yield components of wheat. It is concluded that wheat responds significantly to the application of vermicompost and NPK fertilizers, suggesting that the nutrient content of the experimental soil is low for optimum wheat production. Field verification and demonstration of the results are recommended.
INTRODUCTION
Bread wheat is one of the major cereal crop produced in Ethiopia.According to central statistics authority (CSA) of Ethiopia, it is ranked fourth in terms of area cultivated and total production in 2014/2015 main cropping season (CSA, 2015).Wheat grains are used to prepare traditional food and beverages such as Dabbo (homemade bread), Enjera and Nifro, Tela, etc.It is also being used by food processing industries to prepare local bread, biscuits, pasta and macaroni.Despite, large area of land cultivated and suitable climate for wheat production in Ethiopia, the country is unable to produce sufficient amount of wheat grain to meet its annual domestic need.Thus, it is forced to import 30 to 50% of its annual demand for wheat grain (White et al., 2001).The low productivity of wheat (<2 tha -1 ) is the main reason for the current wide gap between demand and supply for wheat grain in Ethiopia.
Decline in soil fertility among others is the main cause of very low productivity wheat in the country.Application of inorganic fertilizer especially those containing N and P have long been practiced to improve soil fertility for enhanced wheat and other crop production as these nutrients are the most limiting nutrients in almost all Ethiopia soils (ATA, 2014).However, fertilizers were applied irrespective of soil and crop types as well as agroecology.Such kind of blanket application of fertilizers are unrealistic due to the fact that the amount and type of fertilizer that should be applied can widely vary based on soil and crop type, and agroecology.Thus, developing site specific fertilizer recommendations are important for economic and environmental sound use of these inputs.
However, inorganic fertilizers were found to be more effective in increasing crop productivity when they are applied along with organic fertilizers.This is especially important for Ethiopia as nearly all soils in the arable lands of the country are highly depleted of organic matter.According to Gete et al. (2010), despite five times increase in fertilizer application in the Ethiopia, national cereal yields increased only by 10% since the 1980s.This was attributed to declining soil organic matter (IFPRI, 2010).This is because soil organic matter (SOM), in addition to improving the physicochemical properties of the soil and serving as nutrient sources, they hold nutrients from fertilizers applied in such a way that they are protected from loss through leaching and other pathways, and taken up by plants.
Organic fertilizers such as farm yard manure (FYM) and vermicompost can serve as a source of SOM and source of nutrients needed for the growth and production of crops.However, it is difficult to have sufficient amount of FYM that can supply adequate amount of nutrients needed by crops in smallholder famers' fields.Thus, integrate applications of inorganic and organic fertilizers are import to ensure adequate and balanced supply of nutrient to crops.With integrated nutrient management approach, the inorganic fertilizer can supplement with readily available nutrients to plants at early stages whereas organic fertilizers at later growth stages of plant that can boost yield and reduce the associated risks of chemical fertilizers (Mitiku et al., 2014).Integrated application of inorganic and organic fertilizers increases fertilizer use efficiencies, ensure balanced nutrient supply to crops, improve soil sustainability, etc.There are several literatures indicating the multiple advantages with integrated application of organic and inorganic nutrient sources over that obtained with sole application of either source (Kumar et al., 2015;Singh et al., 2011;Sangiga and Woomer, 2009).Therefore, the objectives of this experiment were to determine the effects of integrated Hadis et al. 15 applications of vermicompost and NPK fertilizers on the yield components and yields of wheat and to determine the effect of integrated application of vermicompost and NPK fertilizers on the uptake of N, P and K by wheat.
Brief descriptions of the study site
The experiment was conducted in the greenhouse at tissue culture micro-propagation laboratory, Mekelle, Northern Ethiopia.Composite soil samples for pot experiment were collected randomly from farmlands of Mekan village, Enda-Mehoni district, Southern Tigray, Ethiopia.The sampling sites were located between 12°43'28" to 12°46'12'' N and 39°29'18'' to 39°33'35" E.
Physicochemical properties of soil and vermicompost used in this experiment
Prior to starting the experiment, the soil and vermicompost samples were analysed for selected physicochemical properties following standard laboratory procedures (Jones, 2002), and the results are summarized in Table 1. The soil belongs to the sandy clay loam textural class. The soil reaction (pH) was moderately alkaline and that of the vermicompost was neutral (Tekalign, 1991). The organic carbon (OC) content of the soil was low, whereas it was very high in the vermicompost. The cation exchange capacity (CEC) of the soil was high as outlined by Hazelton and Murphy (2007). The soil TN content was in the medium range, but it was very high in the vermicompost (Berhanu, 1980). The available and total P contents of the soil and vermicompost were rated as medium (Cottenie, 1980) and very high (Murphy, 1968), respectively. Moreover, the total and exchangeable K contents of the vermicompost and the soil, respectively, were in the medium range (FAO, 2006).
The vermicompost was produced by earthworms (Eisenia fetida) using cow manure, Lantana camara leaves and wheat straw as the main feedstock. Urea, TSP and KCl were used as the nitrogen (N), phosphorus (P) and potassium (K) sources in this experiment. Plastic pots of 30 × 20 × 28 cm, perforated at the bottom, were filled with 4 kg of soil. Eight seeds of the wheat variety Kekaba were then planted per pot as the test crop and later thinned to five seedlings after germination. The moisture content of the pots was regularly monitored and the pots were watered with distilled water as required.
Plant sampling and nutrient analysis
Plant samples were collected at harvest to determine the uptake of nitrogen, phosphorus, and potassium in the plant tissue.The above ground biomass of all the five plants from each pot were collected and partitioned into grain and straw.The grain and straw samples were washed with distilled water to clean contaminants, separately air-dried and oven dried to remove the moisture until constant weight was attained.The plant sample was ground and passed through 0.5 mm sieve for laboratory analysis.Plant phosphorus and potassium concentrations were analyzed through wet digestion method as described in Jones (2002).The P in the digest was determined by spectrophotometer, K by flame photometer and total nitrogen was analysed by Kjeldahl method (Bremner and Mulvaney, 1982).
Data collection
Data on total number of tillers per plant (TNTPP), effective number of tillers per plant (ENTPP), plant height (PHT), spike length (SPL), number of seeds per spike (NSPSP), above-ground biomass (AGBYLD), and grain yield (GYLD) were collected. The harvest index (HI) of wheat was calculated by dividing the grain yield by the biological yield and multiplying by 100%. Furthermore, nutrient (NPK) uptake data were obtained by multiplying the concentration of each nutrient in the straw and grain of wheat in each pot by the corresponding straw and grain yields.
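For illustration, the harvest index and nutrient uptake calculations described above translate directly into code. The following Python sketch is not part of the original analysis; the units (g per pot, nutrient concentrations in % of dry matter) and the example numbers are assumptions.

```python
# Illustrative helpers for the HI and nutrient uptake calculations described
# above. Units are assumptions for this sketch: yields in g per pot and
# nutrient concentrations as percentages of dry matter.

def harvest_index(grain_yield, biological_yield):
    """Harvest index (%) = grain yield / above-ground biological yield x 100."""
    return grain_yield / biological_yield * 100.0

def nutrient_uptake(concentration_pct, dry_yield):
    """Nutrient uptake per pot = concentration (% of dry matter) x dry yield."""
    return concentration_pct / 100.0 * dry_yield

# Hypothetical example for one pot (numbers are not from the study):
grain, straw = 9.4, 12.8                      # g per pot
hi = harvest_index(grain, grain + straw)      # biological yield = grain + straw
n_uptake = nutrient_uptake(2.1, grain) + nutrient_uptake(0.6, straw)
print(f"HI = {hi:.1f} %, total N uptake = {n_uptake:.3f} g per pot")
```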
Statistical data analyses
Data on yield components, AGBYLD, GYLD, HI, and nutrient uptake were subjected to analysis of variance (ANOVA) using SAS software version 9.0 (SAS, 2002). Means were separated using the least significant difference (LSD) method at the 0.05 probability level using the same software.
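As an illustration of the analysis described above, the factorial ANOVA could also be reproduced outside SAS, for example in Python with statsmodels; the data file and its column names ("block", "vc", "npk", "grain_yield") are hypothetical and would need to match the recorded data.

```python
# Hypothetical sketch of the factorial ANOVA in Python (statsmodels) rather
# than SAS 9.0. File and column names are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("wheat_pot_experiment.csv")   # hypothetical data file

# Vermicompost x NPK factorial with replications treated as blocks
model = smf.ols("grain_yield ~ C(block) + C(vc) * C(npk)", data=df).fit()
print(anova_lm(model, typ=2))                  # main effects and interaction

# Mean separation (LSD at the 0.05 level) would then use the residual mean
# square from this model; it is not shown here.
```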
Effects on yield components of wheat
The results for the main effects showed that both vermicompost and NPK fertilizers significantly affected the yield components of wheat grown in the greenhouse experiment (Table 2). VC3 produced the highest PHT, SPL, and NSPSP of wheat, followed by VC2 and VC1 in that order, and the lowest values of these parameters were recorded in the control treatment. VC3 increased these parameters by 6, 16 and 36% over the control, respectively. However, vermicompost did not significantly affect TNTPP and ENTPP (Table 2).
On the contrary, NPK fertilizers significantly increased the TNTPP and ENTPP of wheat relative to the control treatment. The highest TNTPP (2.1) and ENTPP (1.9) were produced by the NPK3 treatment. The results are in agreement with the findings of Niamatullah et al. (2011), who observed a significant difference in the number of tillers and productive tillers of wheat due to NPK levels. This could be due to the priming effect of chemical fertilizers on the availability of nutrients, especially mineral N, which could have contributed to the vegetative growth and tiller initiation of wheat, unlike vermicompost. Similarly, PHT, SPL and NSPSP of wheat were significantly increased by all NPK fertilizer doses. The highest values of these parameters were produced by NPK3, which increased PHT, SPL and NSPSP by 14, 43 and 43% over the control, respectively. The magnitudes of the increases in these parameters are far higher than those produced by the highest dose of vermicompost, VC3. This happened because the nutrients contained in chemical fertilizers are more immediately available than those in organic fertilizers such as vermicompost.
PHT, SPL and NSPSP of wheat were also significantly affected by the interaction effects of vermicompost and NPK fertilizers (Table 3). All treatment combinations of vermicompost and NPK fertilizers significantly increased PHT, SPL, and NSPSP of wheat compared with the control. However, the largest increases in these parameters were obtained with the treatments involving VC3 + NPK3, VC2 + NPK2, and VC2 + NPK3, in that order, although these treatments were statistically at par with each other with respect to their effects on these parameters. The results are in agreement with several reports indicating that combined application of organic and inorganic fertilizers produces significantly higher values of crop yield components, including in wheat, than those obtained from sole application of organic or inorganic fertilizers (Dastmozd et al., 2015; Yavarzadeh and Shamsadini, 2012).
Effects on biomass, grain yield, and HI
The main effects of vermicompost and NPK fertilizers on the biomass and grain yield of wheat are presented in Table 4. All vermicompost rates produced significantly higher AGBYLD and GYLD of wheat than the control, with the highest values obtained with VC3, followed by VC2 and VC1 in that order. This is in agreement with the findings of Joshi et al. (2013) and Yousefi and Sadeghi (2014), who reported that application of vermicompost to soil significantly increases the yield of wheat. Besides, different studies have also demonstrated the beneficial effect of vermicompost applied at different rates on the yields of other crops such as tomato (Arancon and Edwards, 2005; Kashem et al., 2015), maize (Reshid, 2016), and barley (Mitiku et al., 2014). As vermicompost is a source of different essential plant nutrients, its application to a soil with low nutrient content, especially in NPK, will definitely increase the growth, yield and yield components of crops, including wheat. In addition to being a source of different nutrients, vermicompost is also thought to contain growth-promoting hormones (Edwards et al., 2004), which might facilitate higher nutrient uptake by plants; this could be an additional factor behind the positive effect of vermicompost on crops. Both vermicompost and NPK fertilizers significantly increased the HI of wheat (Table 4). VC3 produced a higher HI than all other treatments including the control. Similarly, NPK3 produced a higher HI than all other fertilizer treatments.
Similarly, all NPK fertilizers rates produced significantly higher ABGYLD and GYLD of wheat than the control (Table 4).NPK3 produced the highest yield than that produced by all other fertilizer treatments and it increased the AGBYLD and GYLD by 22.8 and 30.5% over the control, respectively.It also resulted in significantly higher HI value of wheat.
The positive effects of vermicompost and NPK fertilizers application on wheat seen in this experiment suggest that the study soils are low in its nutrient contents particularly of NPK.The result of initial soils analyses data (Table 1) also proves this claim.
The vermicompost by NPK fertilizer interaction effect was highly significant (P<0.001) for the biomass yield of wheat (Table 5). The highest AGBYLD was produced by the treatment involving VC2 + NPK3, which was statistically at par with the biomass yields produced by VC3 + NPK2, VC1 + NPK3, VC2 + NPK2, and VC3 + NPK3; all these treatments were statistically at par with each other with respect to the AGBYLD of wheat they produced. However, they produced significantly higher AGBYLD of wheat than sole application of vermicompost or NPK fertilizers. The result suggests that there was a synergistic interaction between the two nutrient sources in availing nutrients to the growing wheat, and the finding is in agreement with the reports of Davari et al. (2012) and Davis et al. (2011). In line with the current finding, Seal et al. (2014) reported that straw yield, which is the major constituent of biological yield, was also significantly increased by the combined application of vermicompost and NPK fertilizers.
Effects on nutrient uptakes
The uptakes of N, P and K by the straw and grain yield of wheat were significantly affected (P ≤ 0.01) by the main effects of vermicompost and chemical fertilizers (Table 6).VC1, VC2, and VC3 increased grain N uptake by 22, 35, and 51%, respectively over the control.Similarly, these treatments increased the grain P uptake by 22, 45, and 71% over the control, respectively and the grain K uptake by 33, 48, and 53% over the control, respectively.There were also significant increases in the straw uptake of N, P and K due to vermicompost application.The apparent increased uptake of nutrients due to application VC indicates that there was net mineralization of nutrients from vermicompost.Similarly, all NPK treatments have significantly increased the uptake of N, P and K by the straw and grain of wheat compared with the control or NPK0 (Table 6).However, the highest uptake of N, P and K by straw and grain of wheat was produced by NPK3 followed by NPK2 and NPK1 in that order.These treatments increased the grain N uptake by 79, 50 and 25% over the control (NPK0), respectively.These treatments increased the grain P uptake by 100, 67, and 22% over the control, respectively and the grain K uptake by 96, 60, and 20% over the control, respectively.The finding is in line with Sheoran et al. (2015) and Laghari et al. (2010) who reported that applications of NPK have significantly increased grain nutrient uptake of wheat.
Conclusion
Application of vermicompost significantly increased the yield components, yield and nutrient uptake of wheat grown in the greenhouse, suggesting that there was net mineralization of the nutrients contained in the vermicompost, which were made available to the growing wheat. The results also suggest that the soil used in the experiment was low in essential plant nutrients. Similarly, application of NPK fertilizers significantly increased the yield components, yield and nutrient uptake of wheat, indicating insufficient amounts of N, P and K in the soil used in the study. This was confirmed by the initial soil analysis data of the experimental soil, which showed low levels of N, P and K as well as a low soil organic matter content. There was a significant interaction between vermicompost and NPK fertilizers for the above-ground biomass yield of wheat, and optimum yield was produced by the treatment combination of VC2 + NPK2. The result suggests that there was a synergistic interaction between vermicompost and NPK fertilizer in increasing nutrient availability to the growing wheat. The finding further indicates that the full recommended NPK dose can be decreased to 67% and the vermicompost dose can be decreased by 50% to achieve the same yield produced by the 100% vermicompost and NPK fertilizer doses applied alone. Further verification and demonstration of the current results in the field are recommended.
Table 1 .
Some initial physicochemical properties of the soil and vermicompost used in the pot experiment.
*Means followed by the same letter(s) are not significantly different from each other at the 0.05 probability level.
Table 3 .
Interaction effects of vermicompost and NPK fertilizers on PHT, SPL and NSPP.
*Means followed by the same letter(s) are not significantly different from each other at the 0.05 probability level.
Table 4 .
Main effects of vermicompost and NPK on biomass and grain yield and harvest index (HI).
*Means followed by the same letter(s) are not significantly different from each other at the 0.05 probability level.
Table 5 .
Interaction effects of vermicompost and NPK fertilizers on AGBYLD (g pot -1 ) of wheat.
Table 6 .
Effects of vermicompost and NPK fertilizers on uptake of N, P and K by grains and straw of wheat.
*Means followed by the same letter(s) are not significantly different from each other at the 0.05 probability level.
"year": 2018,
"sha1": "11f700f9651c914e06f2d38b4b86476cbeb443e1",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJAR/article-full-text-pdf/950B53B55267.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "11f700f9651c914e06f2d38b4b86476cbeb443e1",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Longitudinal changes in the volume of residual lung lobes after lobectomy for lung cancer: a retrospective cohort study
It is unclear how the residual lobe volume changes over time after lobectomy. This study aims to clarify the temporal patterns of volume changes in each remaining lung lobe post-lobectomy. A retrospective review was conducted on patients who underwent lobectomy for lung cancer at Yueyang Central Hospital from January to December 2021. Lung CT images were reconstructed in three dimensions to calculate the volumes of each lung lobe preoperatively and at 1, 6, and 12 months postoperatively. A total of 182 patients were included. Postoperatively, the median total lung volume change rates relative to preoperative values were -20.1%, -9.3%, and -5.9% at 1, 6, and 12 months, respectively. Except for the right middle lobe in patients who underwent right upper lobectomy, the volumes of individual lung lobes exceeded preoperative values. The volume growth of the lung on the side of the resection was significantly more than that of the lung on the opposite side. For left lobectomy patients, the right lower lobe’s volume change rate exceeded that of the right upper and middle lobes. Among right lobectomy patients, the left lower lobe and the relatively inferior lobe of right lung had higher volume change rates than the superior one. Right middle lobe change rate was more in patients with right lower lobectomy than right upper lobectomy. Six months postoperatively, FEV1% and right middle lobectomy were positively correlated with the overall volume change rate. One year postoperatively, only age was negatively correlated with the overall volume change rate. 75 patients had pulmonary function tests. Postoperative FEV1 change linearly correlated with 1-year lung volume change rate, but not with theoretical total lung volume change rate or segmental method calculated FEV1 change. Time-dependent compensatory volume changes occur in remaining lung lobe post-lobectomy, with stronger compensation observed in the relatively inferior lobe compared to the superior one(s). Preoperative lung function and age may affect compensation level.
Ethics approval
This retrospective study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Yueyang Central Hospital (approval number: 2023-026).Due to the retrospective nature of the study, the need of informed consent was waived by the Ethics Committee of Yueyang Central Hospital (approval number: 2023-026).
Patients
We conducted a review of thoracic surgery database at Yueyang Central Hospital, examining all patients who underwent lung resection surgery in our hospital from January to December 2021.Prior to surgery, all patients were subjected to a chest CT scan within a two-week window.Post-surgery, patients who had undergone lung resection due to malignant diseases were advised to have follow-up chest CT scans at the 1, 6, and 12-month marks.Any patients who missed any of these CT scans were excluded from our review.Reasons for exclusion encompassed failure to follow up in a timely manner, undergoing a CT scan at a different hospital, a decision by the doctor to forego a CT scan, or death within a 12-month period.For patients who had undergone pulmonary function test one year after surgery, the results were recorded.
Data collection and definitions
Clinical and demographic data, such as age, gender, body mass index (BMI), smoking history, comorbidities, history of lung resection, lung function parameters, extent of resection, surgical approach [thoracotomy or video-assisted thoracic surgery (VATS)], pathological type and stage, and postoperative complications, were all collected.Both current and former smokers ceased smoking prior to surgery.Chronic obstructive pulmonary disease (COPD) was defined as an FEV1/forced vital capacity (FVC) ratio of less than 0.7 after the inhalation of a bronchodilator.Overweight was defined as BMI over 24.0.We routinely performed single-port VATS surgeries of 3-4 cm, and when single-port operation proved challenging, we added 1 to 2 auxiliary incisions of about 2 cm each.Cases with more than 3 incisions or a total incision length exceeding 10 cm were classified as open surgeries and were consequently excluded from the study.We routinely performed systematic lymph node dissection, which involved clearing groups 5, 6, 7, 9, 10, and 11 lymph nodes during left lung surgery, and groups 2, 4, 3a, 7, 9, 10, and 11 lymph nodes during right lung surgery.All surgeries were carried out by two senior doctors within the department.Other exclusion criteria included: pulmonary fibrosis, previous lung surgery, intraoperative diaphragmatic nerve injury, and the presence of pneumothorax or pleural effusion requiring drainage or pulmonary infection requiring antibiotic treatment as revealed in follow-up CT scans.
CT parameters and image processing
Chest CT scanning was implemented using a 16-slice CT system (Lightspeed 16, GE Healthcare) at our institution.The patient was positioned supine.During a deep inspiratory breath hold, we obtained highresolution CT images that were 1.25-mm-thick and covered the entire lungs in a 512 × 512 matrix.This was achieved using a 20-mm collimation (16 × 1.25 mm), with a rotation time of 0.5 s, at 120 kVp and 100-440 mA.Subsequently, the transaxial CT images were reconstructed using the lung algorithm.
Three-dimensional (3D) lung volume images were created using Mimics Medical 21 software (Materialise NV, Leuven, Belgium). First, the "Segment Airways" tool was used to semi-automatically segment the airway by indicating the trachea. Then, with the "New Centerline Label" tool, a new centerline was created from the 3D model of the airway and the names of the centerline branches were automatically assigned. Next, the "Segment Lung and Lobes" tool was used to segment the lungs and detect the lung-separating fissures; subsequently, the lungs were cut into lung lobes. This operation creates masks and 3D models of the left and right lung, as well as separate 3D models for each of the lung lobes. The tool relied on both the centerline of the airway and the detected fissures to separate the lobes.
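As a hedged sketch of the volumetry step, once each lobe has been segmented and exported as a labelled mask, its volume reduces to counting voxels and multiplying by the voxel volume. The code below does not use the Mimics API; the NIfTI file name and the label numbering (1-5 for LUL, LLL, RUL, RML and RLL) are assumptions for illustration only.

```python
# Hedged sketch (not the Mimics API): lobe volumes from a labelled mask.
# File name and label numbering are assumptions for illustration.
import numpy as np
import nibabel as nib

img = nib.load("lobe_labels.nii.gz")                             # hypothetical export
data = np.asarray(img.dataobj)
voxel_ml = float(np.prod(img.header.get_zooms()[:3])) / 1000.0   # mm^3 -> mL

labels = {1: "LUL", 2: "LLL", 3: "RUL", 4: "RML", 5: "RLL"}
volumes = {name: float((data == lab).sum()) * voxel_ml
           for lab, name in labels.items()}
volumes["total"] = sum(volumes.values())
print(volumes)                                                   # volumes in mL
```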
Statistical analysis
Statistical analyses were performed using STATA 17.0 software (Stata Corp., College Station, TX, USA). The volume change rate (VCR) is defined as (postoperative actual lung volume/preoperative lung volume) − 1. The theoretical VCR of the total lung is equal to 0 minus the preoperative volume of the resected lobe divided by the preoperative total lung volume. The postoperative FEV1 change rate is defined as (postoperative FEV1/preoperative FEV1) − 1. The theoretical FEV1 change rate is equal to 0 minus the number of functional lung segments intended to be removed divided by the total number of functional lung segments. Continuous variables are presented as median and interquartile range (IQR), and categorical variables are presented as number and percentage. The signed-rank test was used for paired continuous variables (e.g., comparing the VCRs of different lung lobes within patients who underwent the same lobectomy), while the Wilcoxon rank-sum test was used for unpaired continuous variables (e.g., comparing the lung VCRs of patients who underwent different lobectomies). Multiple linear regression analysis was performed to analyse the factors influencing the clinical variables. Linear regression analysis was used to test the relationship between two continuous variables. We first explored the potential independent variables to see which of them could have collinearity with each other in the model. All statistical testing was two-sided, and P values < 0.05 were considered statistically significant.
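The change-rate definitions above can be written out explicitly; the following Python sketch is only illustrative and is not the STATA code used in the study. The example values are assumptions, and the segment count is taken as a required input rather than a fixed constant.

```python
# Illustrative Python translation of the definitions above (not the STATA
# analysis). Volumes are assumed to be in mL and FEV1 in L.

def vcr(post_volume, pre_volume):
    """Volume change rate: postoperative volume / preoperative volume - 1."""
    return post_volume / pre_volume - 1.0

def theoretical_vcr(resected_lobe_pre_volume, pre_total_volume):
    """Theoretical VCR of the total lung: 0 - resected lobe volume / total volume."""
    return -resected_lobe_pre_volume / pre_total_volume

def fev1_change_rate(post_fev1, pre_fev1):
    """Postoperative FEV1 change rate: postoperative FEV1 / preoperative FEV1 - 1."""
    return post_fev1 / pre_fev1 - 1.0

def theoretical_fev1_change_rate(resected_segments, total_functional_segments):
    """Segment-counting method: 0 - resected functional segments / total functional segments."""
    return -resected_segments / total_functional_segments

# Hypothetical example (numbers are not from the study): right upper lobectomy
# with RUL = 1100 mL out of a 5000 mL total lung, measured 4700 mL at one year.
print(vcr(4700, 5000))              # actual 1-year VCR: -0.06
print(theoretical_vcr(1100, 5000))  # theoretical VCR: -0.22
```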
Patients and characteristics
Figure 1 shows the patient selection flowchart.A total of 182 patients who underwent lobectomy due to malignant disease in our institute were finally included in this study.Complete descriptions of participants are listed in Table 1.None of the patients included had undergone neoadjuvant treatment, and none had complications of grade 3 or above according to the Clavien-Dindo system.The volumes of individual lung sections and their respective proportions of the total lung volume are displayed in Table 2.
Changes in remaining lung volume after lung lobectomy
For the entire cohort, the median overall VCRs at 1, 6, and 12 months postoperatively were − 20.1% (− 28.2 to 10.7%), − 9.3% (− 18.4 to 1.8%), and − 5.9% (− 13.7 to 6.0%), respectively.There was no statistical difference between the overall theoretical VCR and the VCR at the first month after surgery (P = 0.4149), but the VCRs at 6 and 12 months postoperatively were significantly greater than the theoretical VCR (both P < 0.001).As shown in Fig. 2 and Table 3, patients' total lung volume exhibited time-dependent compensatory changes within one year after respective lung lobectomy.However, except for patients who underwent right middle lobectomy, the majority of patients who had other lobes removed showed a lower total lung volume after surgery compared to preoperative levels.
Compensatory patterns of remaining lung lobes after lobectomy
In general, most patients showed a consistent volume increase in the retained lung lobes within one year, resulting in final volumes higher than the preoperative levels. An exception was the patients who had undergone right upper lobectomy, 52.86% of whom had consistently lower right middle lobe volumes after surgery compared with the preoperative levels (Fig. 2 and Table 3). Among patients who underwent right upper lobectomy, there was no statistically significant difference in the proportion of preoperative RUL volume to total lung volume between those whose middle lobe volume increased and those whose middle lobe volume decreased one year after surgery [22.2% (19.8-23.8%) vs 21.7% (19.9-23.9%), P = 0.564]. There was also no statistically significant difference in the VCR of the RLL between those whose middle lobe volume increased and those whose middle lobe volume decreased one year after surgery [44.2% (27.7-66.4%) vs 68.6% (20.1-88.8%), P = 0.438].
Pulmonary function and lung volume
Among the 182 patients who were studied, 75 had undergone pulmonary function tests one year after surgery.The postoperative FEV1 change rate had a linear relationship with the 1-year postoperative total lung VCR (Y = 0.801X − 0.301; adjusted R-squared = 0.391, P < 0.001; Fig. 5A).The postoperative FEV1 change rate had no linear relationship with the Theoretical VCR of the total lung (Y = 0.316X + 0.019; adjusted R-squared = − 0.005, P = 0.431; Fig. 5B).The postoperative FEV1 change rate had no linear relationship with the Theoretical FEV1 change rate that was calculated by the segmental method (Y = 0.090X − 0.290; adjusted R-squared = − 0.013, P = 0.831; Fig. 5C).
Discussion
In this study, we investigated the changes in the volume of the remaining lung lobes over time within one year after lobectomy due to malignant tumors.We found that the compensatory growth of the volume of each lobe was significant, although the compensatory capacity of each lobe was not exactly the same.
In China, the Mimics Medical software is widely used for preoperative planning of lung resection.Through three-dimensional reconstruction, this software helps surgeons understand the anatomical variations of the airways and blood vessels inside the lung, and predicts the distance between the tumor and the cutting edge [14][15][16] .Some previous studies have also used similar three-dimensional reconstruction software to calculate lung volume, and the results showed that there was a significant correlation between the lung volume calculated and the lung function parameters [8][9][10]17 . Chages in lung function parameters after lung resection have been widely studied.In the early stages following surgery, the traditional segmental method tends to overestimate a patient's FEV1 measurement.However, the gap between the actual and predicted values quickly narrows, with the actual value surpassing the predicted one within a span of 3 months 18,19 .Our data suggested that the traditional segmental method could not accurately predict lung function one year after surgery, but the actual change in lung volume after surgery was closely related to the change in lung function.In our cohort, the total lung volume and the theoretical volume of the remaining lung were comparable one month postoperatively.But this included two opposite situations: first, a subset of patients immediately showed significant volume expansion in the ipsilateral lung after surgery, which was more common in the left lower lung of patients with left upper lobectomy and the right lower lung of patients with right upper lobectomy; second, some patients still had a small amount of pleural effusion and localized atelectasis, potentially resulting in the overall lung volume being below the theoretical value.As for the recovery of long-term lung function, Shibazaki et al. conducted a retrospective study on 104 patients who underwent VATS lobectomy and found that the average values of FEV1 at 3, 6 and 12 months after surgery were 85.8%, 87.9% and 89.2% of the preoperative values, respectively 6 .Shin et al. reported in a prospective study that the average values of FEV1 at 2 weeks, 6 months and 12 months after lobectomy 75.7%, 86.7% and 89.8% of the preoperative values, respectively 3 .This is consistent with the patterns of lung volume changes we observed, where rapid recovery occurred within six months and slow recovery between six months and one year after substantial reduction in lung tissue due to the surgery.
Based on existing reports regarding the impact of different types of lobectomy on lung function, except for the RML, there are no significant differences in the long-term effects on lung function parameters following resections of various lung lobes 6,7,9 .For this phenomenon, our study may provide a more detailed explanation.Firstly, according to the principles of the traditional segmental method, the LUL, LLL, RUL, RML, and RLL should contribute 26.3%, 21.1%, 15.8%, 10.5%, and 26.3% of lung function, respectively 4,5 .However, the actual proportion of the RUL is not as small as predicted.Shibazaki et al. 17 also found that 3D-CT volumetry could predict postoperative FEV1 independent of the resected lobe when predicting postoperative lung function, but the subsegment counting method could not.Secondly, although the volume of the RUL is relatively small, the volume of the RML shrinks after the right upper lobectomy, resulting in a greater actual volume loss than the volume of the RUL itself.Thirdly, compensatory growth of the lungs occurs not only in the ipsilateral lung but also in the contralateral lung.Therefore, even if there is less residual lung tissue on the ipsilateral side, it can be compensated by the growth of the contralateral lung.
The phenomenon of reduced RML volume after right upper lobectomy may share similarities with some situations in segmentectomy. Nomori et al. 20 demonstrated that left upper division segmentectomy (which is functionally equivalent to right upper lobectomy) leads to only marginal improvement in lung function parameters compared with lobectomy. Additionally, imaging revealed that the lingular segment preserved during segmentectomy did not function optimally. Tane et al. 21 also reported that left S1 + 2 and upper division segmentectomy caused more lung function loss than lingular segmentectomy. Although the volume of segments that had been expected to be rescued by segmentectomy was usually less than the theoretical value 10, Yoshimoto et al. 22 reported that segmentectomy of the RUL caused less function loss of the RML compared with right upper lobectomy, which was reportedly due to less displacement of the RML after segmentectomy than after lobectomy. Our data suggest that the reduction in middle lobe volume seems unrelated to the original size of the RUL. Our center typically did not tie the RML and RLL together during right upper lobectomy. Thus, we speculate that the reasons for the reduced RML volume include the following. 1. Many patients exhibit incomplete horizontal fissure development, and among these patients interlobar veins often lie between the RUL and RML; when a linear stapler is used to divide the fissure, the staples directly compress the middle lobe, reducing its volume, and the interlobar veins may be damaged or severed, hindering venous return from the middle lobe and thus affecting its growth. 2. After the RUL is resected, the remaining vessels and bronchi of the middle lobe are twisted owing to the loss of support. Ueda et al. 23 performed postoperative CT with three-dimensional airway reconstruction in 50 patients who underwent upper lobectomy, revealing bronchial kinking in 42% of patients; disruption of airflow in the airway for this reason may impair compensatory lung growth 24. 3. The expansion of the lower lobe, combined with the original shape constraints of the thoracic cavity, transforms the middle lobe into a flat and elongated shape (Fig. 3), limiting its volume increase.
Furthermore, we found that the lung in the relatively lower part of the thoracic cavity always has a stronger volume compensation ability than the lung above.The growth of the residual lung following pulmonary resection is primarily initiated by mechanical stimuli.This includes not only the direct mechanical force exerted by the negative pressure in the thoracic cavity on the alveoli but also the increased blood flow within the remaining vessels due to the reduction in the vascular bed 11,25,26 .In humans, the lungs relatively positioned downward experience not only a more direct and sustained influence from the contraction force of the diaphragm but also a relatively richer blood flow compared to the superior ones.
We used multiple linear regression to analyze the impact of various factors on the overall VCR. As mentioned earlier, in the early postoperative period various confounding factors might have affected the total lung volume, so none of the factors studied was found to be significantly related to the overall VCR at 1 month. Six months after surgery, FEV1% and right middle lobectomy were found to be significantly correlated with the rate of lung volume change, which might suggest that patients with better preoperative lung function and less resected lung tissue were more likely to achieve better recovery in the early stage. The overall VCR one year postoperatively was found to be significantly related only to age, suggesting that age might be a more important factor affecting the upper limit of lung compensation. This is consistent with previous research 27,28. Aging itself may lead to reduced lung regenerative capacity, increased alveolar volume, changes in chest wall shape, and decreased respiratory muscle strength.
Figure 2 .
Figure 2. Box plot of lung volume change rates at different postoperative time.The y-axis represents the volume change rate, which is defined as: postoperative actual volume/preoperative volume − 1.The x-axis represents time [month(s)].TV: theoretical volume, whose change rate is defined as 0 − preoperative volume of the resected lobe/preoperative total lung volume.LUL left upper lobe, LLL left lower lobe, RUL right upper lobe, RML right middle lobe, RLL right lower lobe, IQR interquartile range.
Figure 5 .
Figure 5. Linear regression analysis of the postoperative FEV1 change rate with the 1-year postoperative total lung volume change rate (A), theoretical volume change rate of the total lung (B), and theoretical FEV1 change rate calculated by the segmental method (C).FEV1 forced expiratory volume in 1 s.
There was a greater change rate observed in the relatively inferior lobe compared to the superior one in the right lung [i.e., after right upper lobectomy, the RLL showed more growth than the RML [46.0%(23.
Table 4
presents the results of multivariable analysis of the factors associated with the VCR of total lung at different postoperative time.At 1 month postoperatively, none of the factors was found to be related to the
Table 1 .
Demographics and clinical characteristics.Values in the table are presented as median (IQR) or n (%).BMI body mass index, FEV1 forced expiratory volume in 1 s, FVC forced vital capacity, DLCO diffusing capacity of the lungs for carbon monoxide, COPD chronic obstructive pulmonary disease, IQR interquartile range.
Table 3 .
Volume change rate of various parts of the lung at one year after surgery.Values are presented as median (IQR).LUL left upper lobe, LLL left lower lobe, RUL right upper lobe, RML right middle lobe, RLL right lower lobe, IQR interquartile range.
Table 4 .
Multiple linear regression analyses of factors affecting the volume change rate of the total lung at different postoperative times. BMI body mass index, FEV1 forced expiratory volume in 1 s, Std. err. standard error.
"year": 2024,
"sha1": "28ae083fbf940e8f22f8a439c5096967f5c37427",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f1a0f316f1ea17b870a2b04e88cfdbb80cb34a5b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
COCH predicts survival and adjuvant TACE response in patients with HCC
The aim of the present study was to measure the expression of Cochlin (COCH) and analyze its association with survival, recurrence and the benefits from adjuvant transarterial chemoembolization (TACE) in patients with hepatocellular carcinoma (HCC) following hepatectomy. Patients with high COCH expression levels had a poorer prognosis in terms of overall and disease-free survival rate compared with those with low COCH expression levels. Further analysis revealed that patients with low COCH expression who received TACE experienced markedly lower early recurrence rates compared with those who did not receive TACE. However, patients with high COCH expression with and without adjuvant TACE after resection experienced no difference in disease recurrence rates. The expression of COCH was found to be associated with hepatitis B virus infection, portal vein tumor thrombosis and Barcelona Clinic Liver Cancer stage in HCC. Therefore, the findings of the present study indicated that clinical detection of COCH expression may help estimate the prognosis of patients with HCC, as well as determine whether to administer TACE after surgery to prevent recurrence.
Introduction
Hepatocellular carcinoma (HCC) is the fifth most common type of cancer worldwide and the cause of 8.2% of cancer-related fatalities in 2008 (1).Hepatectomy remains the first option for patients with HCC, while non-surgical approaches, such as chemotherapy, radiation, radiofrequency ablation (RFA), transarterial chemoembolization (TACE) and percutaneous ethanol injection, have been used to inhibit tumor progression and recurrence (2).Due to inadequate resection, unprecedented tumor formation or intrahepatic metastases that were not detected during resection, the majority of patients experience recurrence within 5 years of surgery (3).
TACE is globally performed as an effective treatment for HCC, as it inhibits residual tumor growth, suppresses metastasis, prevents relapse and prolongs patient survival time (4).The patients with large HCC tumor size, Child Pugh A/B or intrahepatic metastases are considered as candidates for receiving TACE 1-2 months after resection (5,6).However, not all patients are suitable candidates for TACE, as it can also result in a deterioration of liver function and prognosis after surgery (7).Individual analysis of the molecular mechanism, prediction of the effects of different treatments and selection of the most appropriate treatment based on the histological characteristics is the main focus of personalized and precision medicine (8).
Cochlin (COCH) is a secreted protein identified in glaucomatous but not normal trabecular meshwork, which has been shown to be responsive to altered fluid shear dynamics (9). COCH is mainly detected in the normal inner ear and its mutation has been found to be associated with hearing loss, glaucoma and DFNA9 (an autosomal dominant cause of non-syndromic adult-onset sensorineural hearing loss with associated variable vestibular dysfunction), while it is expressed at lower levels in the eye, spleen, cerebellum, lung, brain and thymus (10-12). Through RNA-seq analysis of 20 HCC tissues and adjacent non-neoplastic tissues, we observed that COCH was highly expressed in HCC samples (preliminary research, data not shown). However, to the best of our knowledge, whether COCH is associated with the tumorigenesis and progression of HCC has not been reported to date. Therefore, the aim of the present study was to investigate the prognostic value of COCH and its association with the effects of TACE in patients with HCC.
Materials and methods
Patients and tissue samples. A total of 135 patients with HCC were recruited from the Shanghai Eastern Hepatobiliary Surgery Hospital (Shanghai, China) between January 2005 and December 2007. All the patients underwent hepatectomy with or without postoperative TACE. The tumor tissues were embedded in paraffin and underwent tissue microarray (TMA) analysis. The patients were selected according to the following inclusion criteria: World Health Organization performance status 0-1, Child-Pugh class A, absence of ascites, no chemotherapy or radiotherapy prior to curative resection, and confirmation of HCC diagnosis by pathological examination (13,14). The following histological features were examined: thin beam, thick beam or pseudoglandular duct, the degree of differentiation, the degree of necrosis and infiltration, cell type and microvascular invasion. Hepatectomy was performed as previously described, and the Tumor-Node-Metastasis (TNM) stage was then determined (5,15). Tumor tissue, adjacent non-neoplastic tissues and non-neoplastic distant tissues were collected after hepatectomy. The tissues outside the capsule (distance ≤1 cm) were defined as adjacent non-neoplastic tissues, while the tissues outside the capsule (distance >1 cm) were defined as non-neoplastic distant tissues. The protocol of the present study was approved by the Ethics Committee of Shanghai Eastern Hepatobiliary Surgery Hospital. Patients provided written informed consent for the publication of any associated data and accompanying images.
Adjuvant TACE. Patients received hepatic arterial angiography and adjuvant TACE within 1-2 months of hepatectomy. Patients without a tumor in the residual liver received preventive TACE (10 mg hydroxycamptothecin, 20 mg pirarubicin and 1 ml lipiodol). Patients with a tumor in the residual liver received therapeutic TACE (10 mg hydroxycamptothecin, 20 mg pirarubicin, 100 mg oxaliplatin and 5 ml lipiodol). Positron emission tomography-computed tomography (PET-CT) or magnetic resonance imaging evaluation was performed 1 month after the treatment, in order to decide whether subsequent TACE treatment should be performed.
Follow-up. The follow-up visits took place once every 3-6 months in the first 5 years after surgery. A complete physical examination was performed at each follow-up visit. Serum α-fetoprotein measurements, liver function tests and an abdominal ultrasound were performed. Furthermore, PET-CT or magnetic resonance imaging was performed upon suspicion of recurrence or metastasis. Patients with recurrence received repeat hepatectomy, chemotherapy, radiotherapy or local ablative therapy, depending on the size, location and number of recurrent tumors, as well as the liver function. Overall survival (OS) time was defined as the time from hepatectomy to the date of death or the date of the last follow-up. Disease-free survival (DFS) time was defined as the time from hepatectomy to recurrence or the date of the last follow-up.
TMA and immunohistochemical analysis. The clinical tissue samples were fixed with 10% formaldehyde at room temperature for 24 h and embedded in paraffin. The section thickness was 3-5 µm. Hematoxylin and eosin staining (room temperature for 50 sec for both) was performed on the tumor tissues, adjacent non-neoplastic tissues and non-neoplastic distant tissues to determine optimal contents. Tissue samples (1 mm in diameter) were punched from paraffin-embedded tissues and then arranged in a TMA module with 0.2-mm intervals (Shanghai Biochip Company, Ltd.). An immunohistochemical assay was performed as previously reported (16). The antibody against COCH was purchased from Abcam (cat. no. ab171410; 1:100 dilution). The secondary antibody was purchased from Agilent Technologies, Inc. (anti-rabbit-HRP; cat. no. K400311-2; 1:100 dilution). Stained sections were evaluated by three different researchers who were blinded to the clinical characteristics. The immunohistochemical staining was scored on the basis of the coloration intensity and the percentage of stained cells: the staining intensity was scored as 0, negative; 1, weak; 2, moderate; or 3, strong, and the percentage of positive cells was scored as 0-100%. The percentage and staining intensity scores were multiplied to yield the immunoreactive score: scores of 0 or 1 were defined as low expression of COCH, while scores of 2 and 3 were defined as high expression of COCH (17). Cases in which there were disagreements on the immunohistochemistry staining intensity score were discussed with other researchers until a consensus was reached.
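A minimal sketch of this scoring rule is given below (in Python). The function names and the handling of fractional intermediate scores are our own illustration, since the text only fixes the end points (scores of 0-1 called low, 2-3 called high); the threshold used here is therefore an assumption.

# Immunoreactive score = staining intensity (0-3) x fraction of positive cells;
# final scores <= 1 are labelled "low" COCH expression (assumed cut-off).
def immunoreactive_score(intensity: int, fraction_positive: float) -> float:
    assert intensity in (0, 1, 2, 3) and 0.0 <= fraction_positive <= 1.0
    return intensity * fraction_positive

def coch_group(score: float) -> str:
    return "low" if score <= 1 else "high"

print(coch_group(immunoreactive_score(3, 0.8)))   # strong staining in 80% of cells -> "high"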
Statistical analysis. All the statistical analyses were conducted using SPSS version 20.0 (IBM Corp.). The differences between tumor tissues, adjacent non-neoplastic tissues and non-neoplastic distant tissues were determined by the Kruskal-Wallis test, followed by Dunn's test. The associations between COCH expression and clinical data were determined using the χ2 test (when expected values were ≤5, Fisher's exact test was used). The differences in OS and DFS times between groups were determined by Kaplan-Meier analysis with log-rank tests. A univariate analysis was performed to determine the variables with statistical significance. The Cox regression model was used to analyze the effect of independent factors on OS and DFS time, based on the variables selected by univariate analysis. P<0.05 was considered to indicate a statistically significant difference.
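For readers who want to reproduce this style of analysis outside SPSS, the sketch below shows the same workflow (Kaplan-Meier curves, a log-rank test and a multivariate Cox model) with the Python lifelines package; the file name and column names are hypothetical placeholders, not part of the original study.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("hcc_cohort.csv")   # hypothetical per-patient table

# Kaplan-Meier curves and log-rank test for high vs. low COCH expression
km = KaplanMeierFitter()
high, low = df[df.coch_high == 1], df[df.coch_high == 0]
km.fit(high.os_months, event_observed=high.death, label="COCH high").plot_survival_function()
km.fit(low.os_months, event_observed=low.death, label="COCH low").plot_survival_function()
lr = logrank_test(high.os_months, low.os_months,
                  event_observed_A=high.death, event_observed_B=low.death)
print("log-rank P =", lr.p_value)

# Multivariate Cox model restricted to variables retained from the univariate step
cox = CoxPHFitter()
cox.fit(df[["dfs_months", "recurrence", "tace", "tumor_size", "tumor_number"]],
        duration_col="dfs_months", event_col="recurrence")
cox.print_summary()   # hazard ratios with 95% CI and P values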
Results
Patient histological characteristics. The characteristics of the patients (n=135) are summarized in Table I. All the patients were diagnosed through radiological and pathological examination, and had undergone hepatectomy with or without TACE. Reverse transcription-PCR analysis of 27 patients revealed that the mRNA level of COCH was higher in the tumor tissues compared with that in the adjacent and distant non-neoplastic tissues (Fig. 1A). The patients were divided into two groups according to the expression of COCH, which was determined by the immunostaining intensity of the TMA slides (Fig. 1B). The immunostaining results were analyzed and evaluated by three individual researchers independently.
High COCH expression predicts a poor prognosis of HCC.
In all patients, COCH expression levels were found to be significantly associated with portal vein tumor thrombosis (PVTT; P=0.039) and BCLC stage (P=0.049) (Table II). Kaplan-Meier analysis showed that patients with high COCH expression had poorer OS and DFS times than those with low COCH expression (Fig. 1C and D).
COCH expression level may predict the effect of adjuvant TACE. The aim of adjuvant TACE is mainly to prevent HCC recurrence. As shown in Fig. 2A and B, adjuvant TACE significantly prolonged both the OS and the 5-year DFS times of the patients.
Discussion
Partial hepatectomy is the recommended first-line treatment for primary HCC. However, the local recurrence rate in the first 5 years following resection is as high as 70% (20). Satellite lesions, cirrhosis and tumor size are considered to be closely associated with postoperative recurrence (21). Several treatment options may be used to prevent the recurrence of HCC, including repeat hepatectomy, RFA and TACE (3). RFA and TACE may be considered more suitable for patients with Child-Pugh grade A or B, and for those with a greater size or number of tumors (22). However, based on the heterogeneity of HCC, not all patients will benefit from TACE. Patients with large tumors or venous invasion are at higher risk of recurrence and are advised to receive TACE in clinical practice (23). Molecular analysis of in situ and recurrent tumors may improve our understanding of the mechanisms underlying recurrence and help identify prognostic biomarkers (24,25).
Several systematic analyses based on >10,000 patients with HCC demonstrated that patients receiving TACE experienced a survival benefit compared with the control group (26,27). However, other studies have reported different results regarding recurrence and survival after receiving TACE. A Cochrane analysis of 6 trials observed no superior effectiveness of TACE compared with the control group (28,29). This controversy focuses not only on patient recruitment for TACE, but also on the need for more large-scale trials (30,31). Therefore, patient selection is crucial when considering TACE. The present study demonstrated that COCH was a suitable predictor of survival and recurrence in patients with HCC. The expression of COCH was closely associated with PVTT, HBV infection and BCLC stage (Table II: Association between COCH protein expression and clinicopathological characteristics). However, when the efficacy of adjuvant TACE was analyzed, only patients with low COCH expression appeared to benefit from the treatment. To the best of our knowledge, the present study is the first to report that COCH is associated with recurrence and may be useful in evaluating prognosis. It may also serve as a factor determining whether TACE should be administered to prevent recurrence and prolong OS time. Measuring COCH expression may also help evaluate the effect of TACE following hepatectomy and determine whether to select TACE as a first-line adjuvant therapy. However, the univariate analysis in patients with high COCH expression revealed that the recurrence rate was not associated with any variables, including BCLC and TNM stage, which have been reported to be associated with recurrence (3,32). This discrepancy may be due to the limited number of included patients. A larger study is required to confirm the results of the present study. The mechanisms underlying the beneficial effect of TACE treatment on patients with low COCH expression remain elusive. The results of immunohistochemical analysis revealed that COCH was expressed in both the nucleus and the cytoplasm. It has not yet been reported whether COCH is more highly expressed in HCC and whether its expression is associated with the survival and recurrence of HCC, but the expression of COCH in normal liver tissue is low (33). TACE inhibits recurrence mainly by suppressing the early metastasis of tumor cells (34). However, it is difficult to detect the small intrahepatic metastases that contribute to early tumor recurrence, before or after hepatectomy. Theoretically, therapies focusing on undetected intrahepatic metastases are crucial for preventing the recurrence of HCC. However, some studies highlight the need for the careful selection of patients for TACE, as the treatment may damage liver cells and compromise liver function, which is important to help optimize the benefit of the overall HCC treatment course (7,35). The side effects of TACE may affect patient survival, which may explain why patients with high COCH expression do not benefit from TACE treatment.
COCH expression in HCC
In the present study, COCH was identified as a potential biomarker of HCC prognosis. Patients with high COCH expression exhibited poor OS times, early recurrence and no obvious response to adjuvant postoperative TACE. By contrast, patients with low COCH expression exhibited better OS and DFS times, as well as a better response to TACE. However, the predictive value of COCH for the clinical selection of TACE usage requires further verification by large-scale clinical trials, and the underlying mechanism must also be further investigated.
Figure 1. COCH expression is associated with OS and DFS. (A) mRNA level of COCH in tumor tissues, adjacent non-neoplastic tissues and distant non-neoplastic tissues from 27 patients with HCC. ***P<0.05. (B) Immunohistochemical analysis of COCH in patients with HCC. (C) Kaplan-Meier analysis of OS in patients with HCC and different COCH expression levels. (D) Kaplan-Meier analysis of DFS in patients with HCC and different COCH expression levels. COCH, cochlin; HCC, hepatocellular carcinoma; OS, overall survival; DFS, disease-free survival; K, tumor tissues; L, adjacent non-neoplastic tissues; N, distant non-neoplastic tissues.
Figure 2. Prognostic significance of postoperative adjuvant TACE. (A) Kaplan-Meier analysis of the overall survival in patients with and without TACE. (B) Kaplan-Meier analysis of the 5-year disease-free survival in patients with and without TACE. TACE, transarterial chemoembolization.
Figure 3. Prognostic value of COCH for postoperative adjuvant TACE efficacy. (A) Kaplan-Meier analysis of the association between adjuvant TACE therapy and OS in patients with HCC and high COCH expression. (B) Kaplan-Meier analysis of the association between adjuvant TACE therapy and 5-year DFS in patients with HCC and high COCH expression. (C) Kaplan-Meier analysis of the association between adjuvant TACE therapy and OS in patients with HCC and low COCH expression. (D) Kaplan-Meier analysis of the association between adjuvant TACE therapy and 5-year DFS in patients with HCC and low COCH expression. COCH, cochlin; HCC, hepatocellular carcinoma; TACE, transarterial chemoembolization; OS, overall survival; DFS, disease-free survival.
Table I. Patient characteristics (n=135).
Adjuvant TACE prolonged the OS (adjuvant TACE group: median OS time, 25.647 months; 95% CI, 19.250-32.044; control group: median OS time, 12.396 months; 95% CI, 9.045-15.693; P<0.001; Fig. 2A) and 5-year DFS (adjuvant TACE group: median DFS time, 19.836 months; 95% CI, 13.250-26.422; control group: median DFS time, 10.103 months; 95% CI, 5.211-14.994; P<0.001; Fig. 2B) times of the patients. The results shown in Fig. 1 indicated that COCH predicted a poor patient prognosis and early cancer recurrence. The present study also investigated the association between COCH expression and the effectiveness of TACE. TACE treatment did not decrease the recurrence rate of patients with high COCH expression compared with that of patients who did not receive TACE (control group vs. TACE group: median DFS time, 8.254 months vs. 12.402 months; 95% CI, 2.725-13.782 vs. 4.915-19.890; P=0.087; Fig. 3B). However, patients with low COCH expression exhibited a significantly lower recurrence rate after TACE (median DFS time, 27.348 months vs. 12.386 months; 95% CI, 17.310-37.385 vs. 3.895-20.878; P=0.002; Fig. 3D). As recurrence significantly affects the prognosis of patients with HCC, the OS time of patients in different COCH expression groups, with and without TACE, was analyzed. TACE treatment was found not to be suitable for patients with high COCH expression, as it did not reduce recurrence or prolong OS time (control group vs. TACE group: median OS time, 11.000 months vs. 14.214 months; 95% CI, 7.574-14.426).
Univariate and multivariate analysis of prognostic factors. Cox regression analysis was employed to analyze the association between COCH level and the effects of TACE. As shown in Table III, univariate Cox regression analysis indicated that adjuvant TACE, the size and number of tumors, completeness of the tumor capsule and HBV infection (shown as HBsAg in the table) were associated with recurrence in patients with low COCH expression. Multivariate Cox regression analysis based on the factors identified as statistically significant on the univariate Cox regression analysis revealed that TACE was an independent biomarker for 5-year DFS in HCC patients with low COCH expression (hazard ratio, 0.4727; 95% CI, 0.3503-2.139; P=0.0324). In addition, tumor number (P=0.0033) and tumor size (P=0.0393) were independent predictors of 5-year DFS. However, the univariate analysis revealed that no variable was significantly associated with tumor recurrence in patients with high COCH expression.
Table III. Univariate and multivariate Cox regression analyses of 5-year disease-free survival in patients with different COCH expression levels. | 2021-03-17T05:18:37.044Z | 2021-02-10T00:00:00.000 | {
"year": 2021,
"sha1": "77c6a4422de8e7cc8ca4fd06fc1ab37f46bd55f3",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/ol.2021.12536/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "77c6a4422de8e7cc8ca4fd06fc1ab37f46bd55f3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119263869 | pes2o/s2orc | v3-fos-license | Globular clusters hosting intermediate-mass black-holes: no mass-segregation based candidates
Recently, both stellar mass-segregation and binary-fractions were uniformly measured on relatively large samples of Galactic Globular Clusters (GCs). Simulations show that both sizeable binary-star populations and Intermediate-Mass Black Holes (IMBHs) quench mass-segregation in relaxed GCs. Thus mass-segregation in GCs with a reliable binary-fraction measurement is a valuable probe to constrain IMBHs. In this paper we combine mass-segregation and binary-fraction measurements from the literature to build a sample of 33 GCs (with measured core binary fractions), and a sample of 43 GCs (with a binary fraction measurement in the area between the core radius and the half-mass radius). Within both samples we try to identify IMBH-host candidates. These should have relatively low mass-segregation, a low binary fraction (<5%), and a short (<1 Gyr) relaxation time. Considering the core binary fraction sample, no suitable candidates emerge. If the binary fraction between the core and the half-mass radius is considered, two candidates are found, but this is likely due to statistical fluctuations. We also consider a larger sample of 54 GCs where we obtained an estimate of the core binary fraction using a predictive relation based on metallicity and integrated absolute magnitude. Also in this case no suitable candidates are found. Finally, we consider the GC core-to half-mass radius ratio, which is expected to be larger for GCs containing either an IMBH or binaries. We find that GCs with large core-to half-mass radius ratios are less mass-segregated (and show a larger binary fraction), confirming the theoretical expectation that the energy sources responsible for the large core are also quenching mass-segregation.
INTRODUCTION
Theoretical arguments suggest that Intermediate-Mass Black Holes (IMBHs) may be present in Globular Clusters (GCs; Miller & Hamilton 2002; Portegies Zwart et al. 2004; Freitag et al. 2006), even though a definitive observational confirmation is still elusive. The presence (or the absence) of IMBHs in GCs would have important implications for cosmology, especially for the formation of Super-Massive Black Holes (SMBHs; e.g. see Ebisuzaki et al. 2001), and for gravitational wave detection (Bender & Stebbins 2002; Will 2004; Baumgardt et al. 2004; Gültekin et al. 2004; Mandel et al. 2008; Konstantinidis et al. 2013). A promising indirect method for detecting IMBHs in GCs is based on their effect on mass-segregation: in GCs, IMBHs are expected to spend most of their time in a binary with other massive objects, such as stellar-mass black holes, thus injecting energy in the GC core and quenching stellar mass-segregation (Trenti et al. 2007; Gill et al. 2008). Primordial binaries behave in a somewhat similar way to an IMBH, also reducing mass segregation dynamically, as shown by Beccari et al. (2010). This leads to an IMBH/binary degeneracy problem in the mass-segregation indicator, which can be solved by measuring the core binary fraction independently. The interplay between mass-segregation and the binary fraction measured in the GC core is further complicated by the fact that mass-segregation may lead to an increased binary fraction in the GC core, because binaries are heavier than single stars and thus tend to sink to the center. The radial mass-segregation profile was compared to N-body simulations to rule out an IMBH in NGC 2298 by Pasquato et al. (2009), while in M10, instead, mass-segregation data would have been compatible with an IMBH if the core binary fraction were below ≈ 3% (Beccari et al. 2010), but this was later shown not to be the case (Dalessandro et al. 2011). Recently, Goldsbury et al. (2013) used star counts to derive a uniform measure of mass-segregation by comparing the core radii of King (1966) models fit to stars in different mass bins, over a sample of 54 GCs. Star counts are not affected by the large fluctuations introduced by the relatively few, luminous stars that dominate surface-brightness measurements, and make it possible to measure mass-segregation in a cluster by comparing the radial distribution of stars of different masses. Photometric binary fractions for a sample of 59 GCs from Milone et al. (2012), based on uniform HST ACS/WFC photometry (Sarajedini et al. 2007; Anderson et al. 2008), are also available, resulting in a combined sample of 33 GCs where both core binary-fractions and mass-segregation are measured, and in a combined sample of 43 GCs for which mass-segregation and binary-fractions measured between the core and the half-mass radius are available. In this paper we use this information to identify clusters that:
• are dynamically old, with a relaxation time < 1 Gyr,
• have low mass-segregation (based on criteria discussed in the following), and
• have a binary-fraction < 5%.
These would be candidates for more in-depth testing, either by a tailored application of the mass-segregation method or by more direct approaches, such as radial velocity and proper motion searches. However, we fail to identify strong candidates. This may be due to shortcomings of our sample or to a genuine lack of GCs where IMBHs are responsible for mass-segregation quenching in the absence of a large binary fraction.
2. THE DATASET
Goldsbury et al. (2013) measured the mass-segregation of main-sequence stars in 54 Milky Way GCs by fitting King (1966) models to star counts binned in stellar mass. They found that the scale radius r_0 of the King (1966) model fitting stars of mass M follows a simple law governed by two parameters, A and B. The parameter A is the scale radius of solar-mass stars. The parameter B is a measure of mass-segregation: if it were 0, all the stars would be distributed equally, independent of mass, while for negative values, heavier stars have a smaller scale radius. So in order to measure mass-segregation we adopted the B parameter from Goldsbury et al. (2013), Table 2. We adopt photometric binary fractions for a sample of 59 GCs from Milone et al. (2012), based on uniform HST ACS/WFC photometry (Sarajedini et al. 2007; Anderson et al. 2008). In particular we adopted the total binary fraction in the core (last column of their Table 2, r_C sample) and the total binary fraction in the area comprised between the core and the half-mass radius (last column of their Table 2, r_C−HM sample).
We obtained the half-mass relaxation time from Harris (1996), and the core-to half-mass radius ratio from Miocchi et al. (2013). We also considered ratios between the A parameter and the half-mass radius from Harris (1996) when a core-to half-mass radius ratio was not available in Miocchi et al. (2013). We use the A parameter instead of the Harris (1996) core radius because we favour star-count based indicators, as shot noise due to bright stars negatively affects surface-brightness-based indicators. This issue impacts the core radius much more than the half-mass/half-light radius. We have assigned a symbol to each quantity we considered in order to keep a consistent and compact notation throughout tables and figures. A quick-look table for the adopted notation is provided in Tab. 1.
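Purely as an illustration of how such a combined sample could be assembled, the following sketch cross-matches the three catalogues on the cluster identifier with pandas; the file names and column names are hypothetical and not part of the original compilation.

import pandas as pd

seg = pd.read_csv("goldsbury2013.csv")    # columns: cluster, A, B
bins = pd.read_csv("milone2012.csv")      # columns: cluster, fC, fC_HM
harris = pd.read_csv("harris1996.csv")    # columns: cluster, logTh, Rh, MV, FeH

# clusters with both a mass-segregation measurement and a core binary fraction
core_sample = seg.merge(bins.dropna(subset=["fC"]), on="cluster").merge(harris, on="cluster")
# clusters with a binary fraction measured between the core and the half-mass radius
chm_sample = seg.merge(bins.dropna(subset=["fC_HM"]), on="cluster").merge(harris, on="cluster")
print(len(core_sample), len(chm_sample))   # 33 and 43 for the real catalogues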
TABLE 1. Summary of the adopted parameters (Col. 1) and our notation (Col. 2). The number of GCs also in the mass-segregation sample for each parameter is reported in Col. 3, together with the corresponding references.
Additionally, in order to extend our study to the largest possible number of GCs, i.e. to the whole sample with a measure of mass-segregation by Goldsbury et al. (2013), we derived an empirical relation to predict f_C as a function of the cluster's integrated absolute magnitude M_V and its metallicity [Fe/H]. In this way we can fill in the values of f_C for all the 54 clusters in Goldsbury et al. (2013). The relation was obtained by linear regression over the sample of 36 clusters with a measured f_C from Milone et al. (2012). We used metallicity and magnitude values from Harris (1996), which are available for all the clusters in the sample from Milone et al. (2012). The best fit relation we obtained (Eq. 2) has a standard deviation of residuals (over the dataset used for its derivation) of 0.05. The scatter is driven mainly by clusters with large f_C, while the relation is tighter for the low-f_C regime we are interested in (see Fig. 3). This relation was obtained empirically by looking for parameters that correlate with f_C on the Milone et al. (2012) sample, but is likely to reflect regularities of the underlying physics of binary formation and evolution in GCs. Milone et al. (2012) already pointed out that absolute magnitude and binary fraction correlate in their sample, and suggested an explanation based on theoretical models (Sollima 2008; Fregeau et al. 2009). Sollima (2008) predicts that binary ionization efficiency is proportional to cluster mass, so that higher magnitude (lower mass) GCs are less efficient in destroying binaries dynamically. On the other hand, that metallicity may influence binary fractions through the cross-section for binary formation via tidal capture was suggested by Bellazzini et al. (1995) and Ivanova (2006) in the context of low-mass X-ray binary studies.
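A hedged sketch of this kind of fit is shown below; the input arrays are placeholders rather than the actual Harris (1996) and Milone et al. (2012) values, so the coefficients it returns are not those of the paper's Eq. 2.

import numpy as np

M_V = np.array([-7.3, -8.1, -6.5, -9.0])   # placeholder integrated absolute magnitudes
FeH = np.array([-1.5, -0.7, -2.1, -1.2])   # placeholder metallicities [Fe/H]
f_C = np.array([0.06, 0.04, 0.12, 0.02])   # placeholder core binary fractions

# ordinary least squares: f_C = intercept + a * M_V + b * [Fe/H]
X = np.column_stack([np.ones_like(M_V), M_V, FeH])
coef, *_ = np.linalg.lstsq(X, f_C, rcond=None)

predicted = X @ coef
residual_std = np.std(f_C - predicted)     # compare with the quoted 0.05 scatter
print(coef, residual_std)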
RESULTS
In Fig. 1 we plot the mass-segregation parameter B as a function of the log half-mass relaxation time, dividing the GCs in core binary-fraction (f_C) bins. In this plot a strong IMBH-host candidate would be a relaxed GC (i.e. a GC with short relaxation time, less than 1 Gyr), with a low f_C and low mass-segregation. Such a GC would lie in the upper-left corner of Fig. 1, and be represented by a filled circle. The upper-left corner of the figure is however devoid of filled circles, suggesting that in this sample we do not have a clear cut situation where binaries can be excluded and IMBHs are left as the only plausible cause for low observed mass-segregation. Considering f_C−HM instead of f_C, we obtain Fig. 2. Quantitatively, a candidate can be defined as being relaxed (log T_h < 9, left of the dashed line), having f_C−HM < 0.05 (filled circles), and lying one sigma above the best fit regression line for mass-segregation as a function of relaxation time, thus being less segregated than expected based on its relaxation time. Given the relatively large number of GCs with f_C−HM < 0.05 (18, as opposed to 4 with f_C < 0.05), it is unsurprising that two GCs match this criterion. They are represented by filled, slightly bigger, red circles. They are NGC 6397 and NGC 6254. These GCs are also part of the f_C sample, but have in all cases f_C > 0.05. Were the threshold set at just two sigma, we would again have no candidates even for f_C−HM. We conclude that there are no strong candidates for IMBH hosts that can be spotted by mass-segregation quenching alone, at least in this sample. This may be due to the fact that we have few GCs with a low binary fraction in our adopted sample. We do not know whether this is a chance occurrence or a systematic selection effect, but the best we can do in both cases is to increase the number of clusters in our sample. Therefore, we considered the full sample of 54 clusters with a measurement of B from Goldsbury et al. (2013), by using estimated values of f_C based on Eq. 2. While the scatter on that relationship is relatively large, as can be seen in Fig. 3, it is still a sufficiently good approximation for the purposes of our paper. We show the results obtained on this larger sample in Fig. 4. Also in this case no suitable candidates for hosting an IMBH emerge, i.e. there is no GC with f_C < 0.05 that deviates more than 1-sigma from the mass-segregation vs. relaxation time relationship in the direction of low mass-segregation. Actually GCs with f_C < 0.05 appear to be located systematically below the best fit relation represented by the solid line in Fig. 4, suggesting that in the absence of a large binary fraction GCs tend to undergo a larger amount of mass segregation for a given dynamical age.
Figure 1. Mass-segregation parameter B as a function of the log half-mass relaxation time; empty circles are GCs with a core binary fraction (from Milone et al. 2012) over 5%, and filled circles below 5%. According to the dynamical arguments discussed in Gill et al. (2008) and Pasquato et al. (2009), a good candidate for hosting an IMBH would have low mass-segregation despite being dynamically old, in the absence of a sizeable core binary population. If present in this sample, such a candidate would be represented by a filled circle lying in the upper-left corner in this plot, but there is none. The black solid line is a linear least-square fit, the oblique gray solid line is one-sigma above the best-fit, the horizontal gray solid line is the median of B, and the dashed line represents the boundary (arbitrarily chosen at 1 Gyr) between relaxed (on the left side) and non-relaxed (on the right side) clusters.
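The quantitative selection just described can be written down compactly; the snippet below is only a sketch, assuming a table with columns named B, logTh and fC (our naming, not from the paper).

import numpy as np
import pandas as pd

def imbh_candidates(gc: pd.DataFrame) -> pd.DataFrame:
    # least-squares regression of B on the log half-mass relaxation time
    slope, intercept = np.polyfit(gc.logTh, gc.B, 1)
    resid = gc.B - (slope * gc.logTh + intercept)
    one_sigma = resid.std()
    mask = (
        (gc.logTh < 9.0)          # dynamically old: T_h < 1 Gyr
        & (gc.fC < 0.05)          # low binary fraction
        & (resid > one_sigma)     # less segregated than expected (B closer to 0)
    )
    return gc[mask]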
Relaxation and energy sources in the core
We also find a correlation between the core binary fraction and mass-segregation, i.e. that more mass-segregated clusters have a larger binary fraction in their cores, as shown in Fig. 5. The correlation is expected, because binaries are heavier than single stars and tend to segregate to the core, so that core binary fractions are understandably higher in clusters more affected by mass-segregation. Miocchi et al. (2013) show that the ratio of GC core-to half-mass radius correlates with an indicator of dynamical age derived from the BSS radial distribution, which is interpreted in terms of mass-segregation. It is therefore no surprise that the scatter plot of the core-to half-mass radius against the mass-segregation indicator B shown in Fig. 6 suggests that clusters with a large core are less mass-segregated, because of their younger dynamical age. Binary stars also are expected to play an important role in determining the dynamical evolution of the core, but, unfortunately, the subsample of clusters with a measured binary fraction by Milone et al. (2012) within the Miocchi et al. (2013) sample is too small (n = 13) to divide into binary-fraction bins. So we extended the sample to n = 33 (i.e. the clusters for which both B and f_C are available) by calculating the core-to half-mass radius ratio using the A parameter from Goldsbury et al. (2013) (in place of the core radius) and Harris (1996) half-mass radii, which are available for all clusters. On this sample we show the relation of A/R_e (which we still denote as R_c/R_e in the figure for consistency) with the mass-segregation parameter in Fig. 7. The relation between mass-segregation and core-to half-mass radius ratio still holds, except for a few outliers with extremely high mass segregation. There are, instead, no outliers with low mass segregation, which may be candidates for hosting an IMBH, especially in combination with a low binary fraction. Clusters with binary fractions below 5% (filled circles in Fig. 7) instead appear to generally fit the overall trend and tend to have small cores.
Figure 2. Same as Fig. 1, but using the total binary fraction (from Milone et al. 2012) in the region comprised between the core and the half-mass radius to label clusters instead of the core binary fraction. Candidates with binary fraction under 5% and at least 1-sigma away from the best fit line for mass-segregation as a function of relaxation time are represented by a large red filled circle.
Figure 4. Same as Fig. 1, but using the estimated core binary fraction from Eq. 2 on the whole Goldsbury et al. (2013) sample. Also in this case there are no candidates with binary fraction under 5% and at least 1-sigma away from the best fit line for mass-segregation as a function of relaxation time.
Figure 5. The solid line is the least-square linear regression. Two outliers are identified and the regression is re-run without them, resulting in the dashed line. Mass-segregated clusters tend to have higher binary fractions in the core, likely due to mass-segregation of the binaries.
CONCLUSIONS
In this paper we considered the uniform measure of stellar mass-segregation in GCs obtained by Goldsbury et al. (2013) and the core binary fraction (f_C) and the binary fraction measured between the core and the half-mass radius (f_C−HM) by Milone et al. (2012). We find that:
• as expected, mass segregation and relaxation time are anticorrelated, with non-segregated GCs usually having longer relaxation times (i.e. being dynamically young),
• the few outliers to this trend tend to be more mass-segregated than expected based on their dynamical age,
• those GCs that are, instead, slightly less mass-segregated than expected based on their dynamical age all have a core binary fraction f_C > 0.05, consistent with the binaries being responsible for the reduced mass-segregation, both on a sample of 33 GCs with measured f_C and on an extended sample of 54 GCs where we estimated f_C by means of a linear relationship with metallicity and total absolute magnitude,
• we find two clusters that have f_C−HM < 0.05 and are over one sigma less mass-segregated with respect to the dynamical age expectation: NGC 6397 and NGC 6254; this finding is compatible with a statistical fluctuation and probably does not indicate that these clusters contain an IMBH,
• the binary fraction f_C is correlated with mass-segregation, with GCs that are very segregated having a large binary fraction in the core,
• mass-segregation anticorrelates with the ratio of core-to half-mass radius measured by Miocchi et al. (2013), confirming that the energy sources (binaries, segregation of dark remnants, or, potentially, IMBHs) that bring about the swelling of the core also inhibit mass-segregation, as expected theoretically (see e.g. Trenti et al. 2007).
Figure 7. Mass-segregation parameter against the ratio between the Goldsbury et al. (2013) A parameter and the half-mass radius from Harris (1996). The solid line shows the linear fit including all points, while the dashed line excludes the outliers (marked with their NGC number in the plot). Filled circles are GCs with a total core binary fraction (from Milone et al. 2012) below 5%, empty circles are the remaining GCs. The size of the empty circles increases with their binary fraction. GCs with a lower binary fraction tend to have a smaller core, so they lie towards the left of the plot. Clusters with a large core (with respect to the half-mass radius) and low mass-segregation (i.e. those lying in the upper-right corner of the plot) despite a low binary fraction would be candidates for hosting an IMBH. However no such clusters are found on this plot, as all clusters in the upper right corner of the plot have a core binary fraction exceeding 10%.
Therefore, we conclude that the samples we considered do not include any GC that qualifies as a strong candidate for hosting an IMBH, based on their core binary-fraction. The reason for this is that core binary fractions f_C are high enough, in relaxed clusters that display low mass-segregation, to be responsible for the low mass-segregation observed. This may be due to low-f_C clusters being underrepresented in our adopted sample due to selection effects, but is also confirmed on the larger sample of all clusters with a measured mass-segregation from Goldsbury et al. (2013) when we use an estimated f_C. The consistency of our result over this extended sample casts doubts over selection effects playing a significant role in our negative finding. | 2016-04-12T20:00:05.000Z | 2016-04-12T00:00:00.000 | {
"year": 2016,
"sha1": "38c9af51f962db884c14dee9d80d6e00fbf78ade",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1604.03554",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "38c9af51f962db884c14dee9d80d6e00fbf78ade",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
13490097 | pes2o/s2orc | v3-fos-license | Innovating for women’s, children’s, and adolescents’ health
Innovation is central to reaching the sustainable development goals on women’s, children’s, and adolescents’ health. The task now is to scale up these innovations in a sustainable way, say Haitham El-Noush and colleagues
The progress report on the UN secretary general's Global Strategy for Women's and Children's Health, Saving Lives, Protecting Futures, notes that "innovation is essential to achieving the ultimate goal of ending preventable deaths among women and children and ensuring they thrive." 1 The report advocates for integrated innovation, which combines science and technology and social, business, and financial innovation to enable sustainability and the scaling up of interventions. 2 Innovation is required in all aspects of the Every Woman Every Child initiative (www.everywomaneverychild.org), including health systems, social determinants of health, human rights, leadership, finance, and accountability, to help to achieve the United Nations' sustainable development goals.
Strategically, innovation forges non-traditional partnerships among the public and private sectors, attracts new sources of funding through investment opportunities for the private sector and governments, and stimulates creative ways for countries to use innovation to accelerate attainment of their health goals. Innovation complements programmes that achieve results in the near term but that may not be sustainable without ongoing support from donors.
Alongside Every Woman Every Child in 2010 the UN secretary general, Ban Ki-moon, launched an associated Innovation Working Group to advocate for, identify, and support innovations to accelerate progress on the health targets in the millennium development goals. Meanwhile, global partners of the secretary general's strategy were developing a pipeline of innovations in women's, children's, and adolescents' health. Research conducted for Saving Lives, Protecting Futures showed that more than 1000 innovations totalling over $255m (£165m; €235m) had been supported in the research and development pipeline.
We are in a watershed year. The transition from the millennium development goals to the sustainable development goals provides a pragmatic opportunity to advance the innovation agenda to ensure that the best innovations are scaled up and have maximum impact on saving and improving the lives of women and children by 2030.
In this paper we propose challenges and solutions for the post-2015 period, aimed at meeting the goals of the Global Strategy for Women's, Children's and Adolescents' Health and the sustainable development goals.
Methods
Evidence for this article was gathered from the published literature, UN reports, and the authors' experiences in development innovation. While we cannot claim consensus, this paper was reviewed by members of the Every Woman Every Child Innovation Working Group and other global health experts, whose feedback was used to modify it.
What is the problem?
Despite important progress, unfortunately each year 6.3 million children still die before the age of 5 and 289 000 women die in pregnancy and childbirth. A third of children, meanwhile, fail to reach their full potential. Innovation is needed to rectify this situation and help us reach the new sustainable development goals. In the past five years over 1000 innovations in women's, children's, and adolescents' health have been supported. Most of these, however, are at proof of concept stage, with only a few being fully scaled up.
A major gap is the lack of a smooth pathway along which innovations can be scaled up sustainably. Every Woman Every Child is uniquely positioned to bridge any gaps by providing a platform to deliver strong political and leadership commitments, mobilise resources, and connect the stakeholders needed to successfully scale up an innovation. These stakeholders include innovators, universities, small and medium enterprises, incubators and accelerators, foundations, development agencies, civil society organisations, multinational corporations, investment banks, high net worth individuals, and governments.
EWEC innovation marketplace
The Innovation Working Group aims to smooth the innovation pathway in a sustainable manner by establishing the Every Woman Every Child innovation marketplace to facilitate the four interlinked elements of innovation: the pipeline, curation, brokering, and investment. The group seeks to create links to already existing resources and initiatives, thus establishing a more coherent system for scaling up innovations in a sustainable manner. But it does not propose to replicate what is already being done well by others in the innovation ecosystem. Every Woman Every Child provides investors with a trustworthy source of investment opportunities that is free from conflicts of interest, developed by a trusted partner that used transparent criteria and governance processes. It catalyses the convergence of initiatives and stakeholders in a way that might not otherwise be possible.
Priority interventions
The goal of the EWEC innovation marketplace is to scale up 20 investments in women's, children's, and adolescents' health by 2020 and to enable at least 10 of these innovations to be widely available and having a significant effect by 2030.
One inspiring example of innovation is the African meningitis vaccine project, which took 15 years to start saving lives but has now been used to immunise more than 215 million people. By 2020 the vaccine is expected to protect more than 400 million people and prevent one million cases of meningitis A, 150 000 deaths, and 250 000 cases of severe disability. 3 The time frame for innovation means that the full impact may not be felt for five, 10, or even 15 years. 4 Examples of innovations that are in the process of being scaled up are in box 1.
Key messages
• Innovation in healthcare is essential to the achievement of the post-2015 sustainable development goals
• Over the past five years over 1000 innovations in women's, children's, and adolescents' health have been supported but few have been fully scaled up in a sustainable manner
• To tackle the scaling challenge in innovation, the Every Woman Every Child Innovation Working Group builds networks of investors and links innovations to private sector commitments and national resources, through the brokering function of its innovation marketplace
Four interlinked aspects of innovation
Pipeline
The pipeline comprises early stage innovations supported by investments of $100 000 to $250 000 to reach the proof of concept stage. There are more than 1000 innovations in the pipeline for women's, children's, and adolescents' health. Examples of key sources of innovations in the pipeline are shown in box 2.
Although the innovation pipeline is robust, it is difficult to access and analyse. For example, 1689 innovative projects (including but not limited to women's, children's, and adolescents' health) in 80 countries are listed on grandchallenges.org. This level of information is an advance, but it is difficult to search for all the projects on a specific topic, access project level information (potentially including results), analyse individual projects, or allow other qualified funders to deposit projects. The Bill and Melinda Gates Foundation, USAID, Grand Challenges Canada, and the Results for Development Institute are working together to improve the interoperability of these data.
The Innovation Working Group's role is to stimulate funders to refresh the pipeline, to monitor it, and to encourage the consolidation of pipeline information to make it easier to access and analyse. A specific example is the use of common data elements, allowing project information and updates to be easily transferred from one repository to another.
Curation
Curation is the comparative analysis of innovations in the pipeline. It answers the question of which of the innovations are best. It is a critical step in distilling dozens of innovations that might be in the pipeline for a women's, children's, and adolescents' health sub-topic such as pneumonia down to a few of the best to present to an investor who may be interested in supporting an innovation for pneumonia. Naturally, what is "best" depends on the intended audience, and the curation process needs to take this into account. The figure shows a taxonomy of sub-topics developed through consultation by the Innovation Working Group .
Currently there is not enough comparison of innovations. The provenance of initial funding at proof of concept stage often determines which investments are scaled up. Curation activity must focus on conditions with the greatest disease burden and on innovations with the greatest potential to save and improve lives.
A process and criteria are needed to enable comparison among innovations, especially those vying for further investment in certain sub-topics. A good example of an attempt to do this is the PATH Innovation Countdown 2030 report, funded by the Norwegian Agency for Development Cooperation (Norad), US Agency for International Development (USAID), and the Bill and Melinda Gates Foundation (see http:// ic2030.org). Many groups, from foundations to companies to venture capital firms, do their own curation when deciding on investments, but there is no system to share and build on these efforts.
Curation may show that some innovations are not quite ready for investment because they have not reached the stage of scientific proof of concept or because their business plan is poorly developed. This highlights the need for bridge financing in the range of $250 to $1m and also for mentoring through investment readiness programmes such as Lemelson/Venture Well, Duke SEAD, Villgro, GSBI, and NESsT.
A neutral body associated with the UN can gain the confidence of investors and governments. The Innovation Working Group can stimulate, organise, and finance curation exercises in the sub-topics shown in the figure so that the most promising innovations can be scaled up through brokering and investment, ultimately achieving impact. WHO has a track record of providing technical assistance to governments and can lend expertise. The working group's neutrality is crucial, because investors seek a trustworthy list of investment opportunities that is free of conflicts of interest and has transparent criteria and governance processes.
Preventing infection among newborns
With investment from the Saving Lives at Birth partners, John Snow International has pioneered the use of the antiseptic compound chlorhexidine in Nepal as a safer, more effective alternative than existing methods for disinfecting a newborn's umbilical cord stump. Research indicates that routine use of chlorhexidine could reduce the incidence of newborn death by 24%. Already 1.2 million babies have had chlorhexidine applied to their umbilical cord stump, leading to an estimated more than 7500 lives saved in Nepal alone. Scaling up is already occurring in Nigeria and Madagascar, and in other countries.
Brokering
Brokering is the process of investment due diligence and of matching innovations to investors. Brokers need a "line of sight to the entire community," including looking "backward" to curation and "forward" to investment, to effectively link innovators and investors. Communication of the curation effort is important to the marketing of the investment opportunity, conveying messages of the product's benefits and, critically, that it is "doable," given a sound investment thesis. Lessons can be learnt here from other impact investment organisations, such as the Global Health Investment Fund. There is no successful systematic evaluation of experience of offering social investments to investors. As Judith Rodin of the Rockefeller Foundation has pointed out, trillions of dollars in private capital are sitting on the sidelines. 5 Investors require trustworthy channels and an effective and neutral deal sourcing process through which to make investments that have an impact. An impact investment manager is needed to broker such opportunities.
The week of the UN General Assembly, and the annual Every Woman Every Child innovation sector session, are opportunities to celebrate private sector commitments in the form of brokered deals. Examples of brokered deals announced at the assembly include the Odon device (2013) and inhaled oxytocin (2014).
Health ministries have an important role in selecting innovations on the basis of need. The Innovation Working Group can help by creating "a global platform that thinks locally." This platform would provide user feedback from frontline staff and bring other benefits to countries in terms of procurement and distribution. The ultimate goal is to create a culture of innovation in health ministries. As a neutral platform, the innovation group can take the lead on brokering and the development of brokering models, including using the annual UN General Assembly as a brokering platform and to celebrate successful deals. This is one important way for the EWEC innovation marketplace to add value.
Investment
Investment is the process of decision making for public and private funding of innovations of more than $1m. We need ways to access new pools of capital, such as private sector investors, and to mobilise countries' domestic resources. Investors include multinational companies, impact investors, venture philanthropists, "angels," venture capital funds, civil society organisations, foundations, and governments. The innovation marketplace is not itself an investment fund but provides channels that increase opportunities to invest in innovation. Investment can also be enhanced by online platforms such as the Canadian government's "Convergence" platform, which will help create partnerships for new blended finance investment vehicles.
Innovation in women's, children's, and adolescents' health, and in particular its shared global governance through the Grand Challenges initiatives (http://grandchallenges.org), has great potential as a domestic resource mobilisation strategy to help countries reach the sustainable development goals. 6 Countries support their own innovators because this leads to social and economic development and jobs. Country plans under the UN global financing facility-a recently launched mechanism that pools resources to fund women's, children's, and adolescents' health programmes in low and middle income countries-will provide a means of financing innovations. Nothing drives innovation like market demand. Scaling up and adoption of innovative service delivery approaches and new technologies by countries is associated with an annual decline of about 2% per year in the under 5 mortality rate. 7 Imagine a scenario whereby a health minister can survey the national gaps in care, match these gaps to innovations in the EWEC marketplace, and finance the scaling up of these innovations through procurement, by using domestic resources or the UN global financing facility. Ultimately, countries are the biggest investors in innovation as it is scaled up, and health ministries institutionalise these innovations. Such a system optimises country leadership and the lifesaving and life improving power of innovation for women's and children's health.
Civil society organisations are another source of finance and are well positioned to adopt and scale up innovations. The same foundations and development agencies that helped create the pipeline at proof of concept stage will also help finance the most promising innovations, serving to further reduce risk for subsequent private and public investors.
Although beyond the scope of the innovation marketplace, a country's regulatory environment influences the adoption of innovations. International technical agencies such as WHO have a valuable role in making recommendations in support of health interventions, including innovations. More generally, mechanisms that focus on creating enabling environments for national health systems to absorb innovations, including the lessons learnt from scaling innovations in other countries, would be useful.
Conclusion
In 2010, the challenge for Every Woman Every Child was to create a pipeline of innovations. In 2015, a pipeline of over 1000 innovations in women's, children's, and adolescents' health has been created, and the challenge now is to scale them up. A key strategy of the Innovation Working Group will be to link existing activities and gaps in care and to create a global marketplace for the innovations, where they meet investors so that they can be scaled up sustainably and achieve widespread impact. The innovation model developed for women's, children's, and adolescents' health may also be useful to pave the way from innovation to impact for other sustainable development goals in the post-2015 era. | 2017-09-16T01:32:09.886Z | 2015-09-14T00:00:00.000 | {
"year": 2015,
"sha1": "e4585e1d76cdb4e7561f8531ed20b4f8e6a2f60c",
"oa_license": "CCBYNC",
"oa_url": "https://www.bmj.com/content/bmj/351/bmj.h4151.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "e4585e1d76cdb4e7561f8531ed20b4f8e6a2f60c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Business",
"Medicine"
]
} |
53579904 | pes2o/s2orc | v3-fos-license | Effects of atomic interactions on Quantum Accelerator Modes
We consider the influence of the inclusion of interatomic interactions on the delta-kicked accelerator model. Our analysis concerns in particular quantum accelerator modes, namely quantum ballistic transport near quantal resonances. The atomic interaction is modelled by a Gross-Pitaevskii cubic nonlinearity, and we address both attractive (focusing) and repulsive (defocusing) cases. The most remarkable effect is enhancement or damping of the accelerator modes, depending on the sign of the nonlinear parameter. We provide arguments showing that the effect persists beyond mean-field description, and lies within the experimentally accessible parameter range.
Quantum Accelerator Modes (QAMs) are a manifestation of a novel type of quantum ballistic transport (in momentum), that has been recently observed in cold atom optics [1]. In these experiments, ensembles of about 10 7 cold alkali atoms are cooled in a magnetic-optical trap to a temperature of a few microkelvin. After releasing the cloud, the atoms are subjected to the joint action of the gravity acceleration and a pulsed potential periodic in space, generated by a standing electromagnetic wave, far-detuned from any atomic transitions. The external optical potential is switched on periodically in time and the period is much longer than the duration of each pulse. For values of the pulse period near to a resonant integer multiple of half of a characteristic time T B (the Talbot time [2]), typical of the kind of atoms used, a considerable fraction of the atoms undergo a constant acceleration with respect to the main cloud, which falls freely under gravity and spreads diffusively.
The non-interacting model is a variant of the well-known quantum kicked rotor (KR) [3], in which the effects of a static force, produced by the earth gravitational field, are taken into account. The linear potential term breaks invariance of the KR hamiltonian under space translations. Such an invariance may be recovered by moving to a temporal gauge, where momentum is measured w.r.t. the free fall: this transformation gets rid of the linear term, and the new hamiltonian is written in dimensionless units in terms of the momentum and position operators p̂ and x̂, the strength k and the temporal period τ of the external kicking potential, and the gravity acceleration g. The relationship between the rescaled parameters and the physical ones, denoted by primes, is k = k′/ℏ; here η denotes the momentum gain over one period, G is twice the angular wavenumber of the standing wave of the driving potential, and M is the mass of the atom.
Symmetry recovery makes it possible to decompose the wave packet into a bundle of independent rotors (whose space coordinate is topologically an angle): this Bloch-Wannier fibration plays an important role in the theory of QAMs [4]. QAMs appear when the time gap between kicks approaches a principal quantum resonance, i.e. τ = 2πl + ε, with l integer and |ε| small. The key theoretical step is that in this case the quantum propagator may be viewed as the quantization of a classical map, with |ε| playing the role of an effective Planck's constant [4]: QAMs correspond to stable periodic orbits of this pseudo-classical area-preserving map. We refer the reader to the original papers for a full account of the theory and only mention a few remarkable points: stable periodic orbits are labelled by their action winding number w = j/q, which determines the acceleration of the QAM with respect to the centre-of-mass distribution, Eq. (2). The modes are sensitive to the quasimomentum (the Bloch index induced by spatial periodicity), being enhanced at specific, predictable values [4]; the size of the elliptic island around the pseudoclassical stable orbit also plays an important role (if the size is small compared to |ε| the mode is not significant [4]). In this letter we consider the role of atomic interactions in such a system; namely, evolution is determined by a nonlinear Schrödinger equation with a cubic term u|ψ|², where u is the rescaled nonlinear parameter whose sign describes an attractive (negative) or repulsive (positive) atomic interaction. We will come back to its connection with physical units at the end of the paper. The condensate wave function is normalized to unity. Not only does the dynamics acquire in this way a qualitatively new form, but, due to the nonlinear term, the Bloch decomposition into independent rotors breaks down. The main scope of this letter is to numerically scrutinize to what extent QAMs persist in the modified system and to explore how nonlinearity modifies their features. At the end we briefly comment on some stability issues, by showing that a more refined description, including loss of thermalized particles, does not destroy the scenario obtained from the mean-field description. Our analysis is restricted to QAMs corresponding to fixed points of period q = 1 of the pseudoclassical map; the numerical analysis of the nonlinear evolution has been performed using standard time-splitting spectral methods [5]. There are several physical parameters characterizing the system: g, τ, k and u. Here we mainly address the role of the nonlinearity u: we fix k = 1.4, l = 1, ε = −1, τη ≃ 0.4173, and choose as the initial state a symmetric coherent state centered on the stable fixed point of the pseudoclassical map (x_0 ≃ 0.3027, p_0 = 0), whose corresponding winding number is zero.
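The time-splitting spectral integration mentioned above can be illustrated with a minimal split-step Fourier sketch for a one-dimensional wave function subject to periodic delta kicks and a cubic mean-field term. This is only an illustration of the generic numerical technique, not the authors' code: the grid size, the time step, the Gaussian initial state and the treatment of each kick as an instantaneous phase exp(−ik cos x) are assumptions, and the gravity/quasimomentum bookkeeping of the full model is omitted.

```python
import numpy as np

# Minimal split-step Fourier sketch: i dpsi/dt = -(1/2) d^2 psi/dx^2 + u|psi|^2 psi
# between kicks, plus an instantaneous phase kick exp(-i k cos x) once per period tau.
# Grid, time step and initial state are illustrative assumptions.

L = 2 * np.pi                      # spatial period of the kicking potential
N = 1024                           # grid points
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # momentum (angular wavenumber) grid

k_kick = 1.4                       # kick strength quoted in the text
tau = 2 * np.pi - 1.0              # kick period, tau = 2*pi*l + eps with l = 1, eps = -1
u = 0.5                            # nonlinear parameter (<0 attractive, >0 repulsive)
steps = 500                        # sub-steps between consecutive kicks (assumed)
dt = tau / steps
n_kicks = 10

# Gaussian initial state centred near the stable fixed point x0 ~ 0.3027 (width assumed)
psi = np.exp(-((x - 0.3027) ** 2) / 0.2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

half_kinetic = np.exp(-0.5j * p ** 2 * (dt / 2))   # half-step kinetic propagator

for _ in range(n_kicks):
    for _ in range(steps):                          # Strang splitting between kicks
        psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
        psi *= np.exp(-1j * u * np.abs(psi) ** 2 * dt)   # mean-field (cubic) phase
        psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
    psi *= np.exp(-1j * k_kick * np.cos(x))         # instantaneous delta kick

print("norm after evolution:", np.sum(np.abs(psi) ** 2) * dx)
```

A convenient property of this splitting is that the norm is conserved up to round-off error, which provides a simple sanity check once the nonlinear term is switched on.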
A quite remarkable feature appears when we compare results for opposite signs of the nonlinearity (keeping the strength |u| fixed), see fig. (1). As in the linear system, the wave packet splits into two well-separated components: the accelerator mode (whose acceleration is still compatible with (2)) and the remaining part, which moves under two competing contributions, the free fall in the gravitational field and the recoil against the accelerating part. Note that for the present choice of the parameters the former contribution is negligible compared to the latter.
We note some features that are common to what we observed for other choices of parameter values: the distribution around the accelerator mode is more peaked and narrower in the presence of attractive nonlinearity; the opposite happens in the case of a repulsive interaction. This can also be appreciated from a Husimi representation of the modes (see fig. (2)). While for repulsive interactions the spreading of the distribution, together with the damping of the peak, seems to depend monotonically on the nonlinearity strength, the attractive case exhibits more complicated features (see fig. (3)). Enhancement of the accelerator mode is only observed for small nonlinearities, while a striking feature appears at larger values of |u|, namely the accelerator mode is suppressed (see fig. (4a)). The intuitive explanation of this result is that a strong focusing nonlinearity opposes the separation of the wave packet into two parts; indeed, in the case of exact resonance (namely τ = 2π), the mode is absent, so the whole wave falls freely without splitting, and the maximum height of the wave, plotted vs u as in fig. (3a), is then found to increase monotonically to the left towards a saturation value. While the behavior shown in fig. (3) has been observed for a variety of other parameter choices, we mention that more complex, strongly fluctuating behaviour was sometimes observed at large focusing nonlinearities. In all such cases a poor correspondence between the quantum and the pseudoclassical dynamics was also observed, already in the linear case.
We remark that the mode damping is sensitive to the choice of the initial state, as shown in fig. (4). While a Gaussian initial wave packet leads to the QAM suppression mentioned above, we may tailor a QAM-enhancing initial condition as follows: we take the quasimomentum β_0 that in the linear case dominates the mode (here β_0 = π/τ − η/2 ≃ 0.5551 [4]) and we drop from the initial Gaussian all components with |β − β_0| > 0.15. As quasimomentum is the fractional part of momentum, this leads to the comb-like state of fig. (4b). Even though quasimomentum is not conserved due to the nonlinearity, the QAM is strongly enhanced with respect to the linear case and the recoiling part is almost cancelled. Another way of looking at the nonlinear evolution with techniques that are proper to the linear setting is to consider the distribution function f(β) over quasimomenta. This distribution is stationary under linear evolution, its shape being determined by the choice of the initial state. We consider the evolution of a Gaussian wave packet (for which the linear f is essentially a constant - the horizontal red line of fig. (5)), and probe the effect of nonlinearities of both signs. Typical results are as in fig. (5): the effect of attractive (repulsive) nonlinearity is to enhance (lower) the distribution around a value β̄ ≃ 0.4. No deviation occurs for quasimomentum β_0 (marked by vertical lines), whose wave function, according to fig. (4b), closely follows the linear pseudo-classical island. Again, the β̄ peak of the focusing case is suppressed for large focusing nonlinearities.
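For concreteness, one natural way to write the quasimomentum distribution used here, consistent with quasimomentum being the fractional part of momentum, is sketched below; the notation f(β, t) and the normalization convention are assumptions rather than the paper's exact definition.

```latex
% Sketch of the quasimomentum distribution (assumed form): sum the momentum density
% over all integer momentum components sharing the same fractional part beta.
\[
  f(\beta,t) \;=\; \sum_{n\in\mathbb{Z}} \bigl|\tilde{\psi}(n+\beta,\,t)\bigr|^{2},
  \qquad \beta\in[0,1),
\]
% where \tilde{\psi}(p,t) denotes the momentum-space wave function. In the linear case
% each beta-fiber evolves independently, so f(beta,t) is stationary, as stated in the text.
```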
To make sure that our findings may be experimentally significant we discuss some stability issues. The first concerns the decay properties of the QAMs. It is known that linear modes decay due to quantum tunnelling out of the pseudoclassical islands [6]: we checked that, on the available time scale, the nonlinear decay behaves in a similar way. In fig. (6a) the probability inside the classical island is shown as a function of time for the initial state of fig. (4b); it has been calculated by integrating the Husimi distribution of each β-rotor fiber over the island area and summing the contributions of the different rotors. However, in the condensate regime there is another possible mechanism that might completely modify the former picture, namely depletion of the condensate due to the proliferation of noncondensed, thermal particles. A standard technique to estimate the growth of the number of thermal particles is provided by the formalism of Castin and Dum [7], which has been employed in similar contexts in [8]. To the lowest order in the perturbation expansion and in the limit of zero temperature T → 0, the number of non-condensed particles is given by the sum over modes of the norms ⟨v_k(t)|v_k(t)⟩, where v_k(t) is one of the mode functions of the system. The mode functions (u_k(t), v_k(t)) are pairs of functions that represent the time-dependent coefficients of the decomposition, in terms of annihilation and creation operators, of the equation of motion for the field operator describing the thermal excitations above the condensate. They describe the spatial dependence of these excitations and propagate according to modified Bogoliubov equations. Our findings (see fig. (6b)) are consistent with a polynomial growth of the number of noncondensed particles, namely in our parameter region (and within the time scale we typically consider) no exponential instability takes place. This is consistent with recent experimental work [9], where a ⁸⁷Rb atom condensate has been used to explore QAMs. In [9], a condensate of 50000 Rb atoms with repulsive interactions is realized. In the case of a "cigar-shaped" trap, the relationship between the number of atoms in the condensate N and the effective 1-d nonlinear coupling constant u is, in our units, N = u a_⊥²/(2a_0) [10], where a_0 is the 3-dimensional scattering length and a_⊥ ≫ a_0 is the radial extension of the wave function. Using the parameter values of the experiment [9], one finds N ≃ 10^5 · u, so N ∼ 50000 corresponds to u ∼ 0.5. Therefore our range of parameters includes the experimentally accessible one.
We have investigated the effects of atomic interactions, in the form of a cubic nonlinearity, on the problem of quantum accelerator modes: in particular we have characterized the consequences of both attractive and repulsive interactions, and we have provided evidence that the modes are not strongly unstable when reasonable parameters are chosen. | 2007-04-11T09:39:45.000Z | 2007-04-11T00:00:00.000 | {
"year": 2007,
"sha1": "066d2d7b2195fc5c0d28cb04967b1ca49ee2f412",
"oa_license": "CCBYNCND",
"oa_url": "https://irinsubria.uninsubria.it/bitstream/11383/1668630/1/pra-laura07.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "066d2d7b2195fc5c0d28cb04967b1ca49ee2f412",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
233387976 | pes2o/s2orc | v3-fos-license | Entangled Two-Photon Absorption Spectroscopy with Varying Pump Wavelength
In virtual-state spectroscopy, information about the energy-level structure of an arbitrary sample is retrieved by Fourier transforming sets of measured two-photon absorption probabilities of entangled photon pairs where the degree of entanglement and the delay time between the photons have been varied. This works well for simple systems but quickly becomes rather difficult when many intermediate states are involved. We propose and discuss an extension of entangled two-photon absorption spectroscopy that solves this problem by means of repeated measurements at different pump wavelengths. Specifically, we demonstrate that our extension works well for a variety of realistic experimental setups.
Introduction
Today, there exists a great variety of spectroscopic techniques, each with their own set of advantages and disadvantages, for a myriad of applications ranging from medicine [10] and material science [11] to biology [24] etc. Some of the more sophisticated protocols that have emerged are related to two-photon spectroscopic techniques, where two timed photon pulses interact with the sample in short succession. Specifically, entangled two-photon absorption spectroscopy (eTPA spectroscopy) [1, 2, 6, 7, 9, 12-15, 17-19, 22, 23, 25, 26] represents a technique that utilizes the quantum nature of light to devise a powerful spectroscopic tool. For instance, it has been applied to propose novel experimental schemes that might be used for the determination of the electronic level structure of single molecules [7] and complex light-harvesting systems [5,20], and has become a useful addition to the spectroscopic toolbox. In fact, eTPA spectroscopy as originally proposed by Saleh et al. in 1998 [17] relies on tuning and integrating over the entanglement time T e (a parameter of the second-order quantum correlation of the photon pair, see section 2) in order to separate those features of the spectrum that allow direct access to the eigenenergies of the material from spurious background signals. However, this method can quickly become quite involved as it requires multiple experiments with two-photon states that bear different temporal correlations. Consequently, novel ways of extracting information from eTPA signals have to be considered [9].
In this work, we develop a variant of this technique by exploiting the eTPA signal's dependence on the wavelength of the pump light. More specifically, our proposed scheme extracts information about the electronic level structure of the samples under study by correlating measurements at two or more different pump wavelengths. Our setup could be realized using standard and widely used entangled-photon sources, thus opening up a novel avenue towards nonlinear quantum spectroscopy.
Our work is organized as follows. In section 2, we describe the model setup, the basic workings of ordinary eTPA spectroscopy, and elucidate the problem of many intermediate states. We introduce our extension of eTPA spectroscopy to multiple pump wavelengths in section 3 and provide a detailed discussion of its applicability in realistic settings. Finally, we summarize our findings and conclude in section 4.
The Model
Our model setup consists of a source of entangled photon pairs with tunable delay for two-photon absorption spectroscopy, a multi-level material system and a second-order perturbative analysis of the eTPA signals.
We consider an entangled-photon spectroscopy setup as schematically depicted in Fig. 1. The light source we employ is a two-photon state created by collinear Type-II spontaneous parametric down-conversion (SPDC) with a continuous-wave pump; it is described by a spectral decomposition in which â†_ωs and â†_ωi are the creation operators for the signal (s) and idler (i) photons, respectively. The joint spectral function of the photons defines what is commonly referred to as the twin state [4]. Here, l denotes the path length within the birefringent nonlinear crystal, ω_p is the angular frequency of the (monochromatic) light used to pump the SPDC source, ω_s and ω_i are the angular frequencies of the signal and idler down-converted photons, respectively, and τ is the external delay introduced into the path of the signal photon. Furthermore, the entanglement time T_e is determined by the crystal length l and the inverse group velocities N_s = 1/v_{g,s} and N_i = 1/v_{g,i} of the signal and idler photons. Note that, in the analysis below, we will assume that the photons are degenerate, with central wave-packet frequencies ω⁰_s = ω⁰_i = ω_0 = ω_p/2.

Figure 1: Schematic of an eTPA spectroscopic setup: A collinear type-II SPDC source is pumped with monochromatic light of angular frequency ω_p and produces two entangled photons with frequencies ω_s and ω_i and a common central angular frequency ω_0 = ω_p/2. A tunable delay τ is introduced into the path of one of the photons. Subsequently, both photons interact with a material system whose electronic level structure is schematically depicted in Fig. 2.

The sample material model is a multi-level system with non-degenerate energy eigenstates |j⟩ with respective energies ℏε_j (see Fig. 2); its Hamiltonian is diagonal in these eigenstates. Two of these states fulfill the two-photon resonance condition ε_f − ε_i = ω_p = 2ω_0, and we consider them as the initial state |i⟩ and the final state |f⟩. The final state is assumed to lie within a band of closely spaced levels. It is important to note that in any realization of our setup |f⟩ is defined by our choice of ω_p. The remaining N states are intermediate states that contribute as pathways to the two-photon absorption signal, i.e. the eTPA probability. These intermediate states are virtual states in the sense that they are energy eigenstates |j⟩ of the unperturbed system whose detuning from the center frequency ω_0 of the entangled photons is larger than two times the Rabi frequency [21]. Using second-order perturbation theory, we can calculate the two-photon transition probabilities P_fi from the initial to the final state upon interaction with the twin field state. Within the dipole and rotating-wave approximations, the perturbation, i.e. the interaction Hamiltonian V̂(t), is expressed in the interaction picture in terms of the dipole operator μ̂(t). Through a Fourier transform in conjunction with suitable coordinate transformations, the time-ordered time integral that arises in second-order perturbation theory can be evaluated [9]; the result involves a delta-like function of width 4/πt. In these expressions, we denote the energy mismatch between the center frequency ω_0 of the entangled photons and the intermediate states by Δ_j = ε_j − ε_i − ω_0, and the transition matrix elements are A_j = μ_fj μ_ji / Δ_j, where μ_kl = ⟨k|μ̂|l⟩ are the corresponding transition dipole moments. The delta-like function ensures energy conservation for times long compared to the inverse of the mismatch between the pump angular frequency and the energy of the total transition. Expanding the absorption cross section s(T_e, τ) then shows that, for N intermediate states ε_j, the Fourier transform F(P_fi) with respect to τ, i.e. the eTPA spectrum, exhibits peaks at zero angular frequency as well as at 2(N + 1)N angular frequencies determined by the energies ε_j, ε_k of the intermediate states, Eqs. (11)-(13).
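For orientation, the three families of peak positions referred to below as Eqs. (11)-(13) can be sketched from the ω_0-dependences discussed in Section 3.1; the explicit forms written here are a hedged reconstruction consistent with Δ_j = ε_j − ε_i − ω_0, not a verbatim quotation of the paper's equations.

```latex
% Sketch of the three peak-position families (assumed forms, consistent with the stated
% dependences on omega_0): the first family shifts with -omega_0, the second is
% independent of omega_0, and the third shifts with 2*omega_0.
\[
  \pm\Delta_j = \pm(\epsilon_j-\epsilon_i-\omega_0),\qquad
  \pm(\Delta_j-\Delta_k) = \pm(\epsilon_j-\epsilon_k),\qquad
  \pm(\Delta_j+\Delta_k) = \pm(\epsilon_j+\epsilon_k-2\epsilon_i-2\omega_0).
\]
% Counting all combinations of j,k = 1,...,N in these three sets reproduces the quoted
% total of 2(N+1)N peak frequencies in addition to the peak at zero.
```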
In order to illustrate how the different frequency peaks appear in the Fourier transform of the eTPA signal, we display an example spectrum in Fig. 3(a). We observe that, even though the energy mismatch Δ_1 of the lower intermediate state is much larger than the energy mismatch Δ_2 of the higher intermediate state, the corresponding peaks of the spectrum (marked with triangles) do not differ proportionally in size. This suggests that, as a general rule, the heights of the peaks are not a reliable way of making sense of the spectrum and do not allow one to deduce the underlying energy structure of the sample [6,17].
From Fig. 3(b) we infer that it quickly becomes difficult to interpret the spectrum when the number of intermediate states grows. This results from the fact that the number of spectral peaks grows quadratically with the number of intermediate levels. Clearly, this severely limits the usefulness of eTPA spectroscopy in its present form.
Extracting energies of intermediate-state levels
Our goal is to extract the energies of the intermediate states ε_j from the eTPA spectrum. An easy way to achieve this would be to identify, among the spectral peaks, those at the frequencies Δ_j of Eq. (11), as these only depend on one of the eigenenergies ε_j, whose value is readily extracted by adding ω_0. This can easily be carried out with simple spectra such as that of Fig. 3(a).

Figure 4: The three distinct slopes clearly identify each peak as a member of one of the three distinct sets of Eqs. (11)-(13).
However, in the general situation there are 2(N + 1)N peaks, and for the aforementioned technique of 'educated guessing' we would have to select N members from this set and check whether they align with the actual spectrum. Clearly, this scheme quickly becomes rather cumbersome to execute, see Fig. 3(b). Moreover, for systems with many intermediate states another detrimental effect sets in and fundamentally obstructs the extraction of relevant information from a spectrum. Specifically, as the number of peaks increases, it becomes more and more likely to encounter overlapping peaks with low amplitudes, or very shallow signals that get lost in the background noise, so that the approach of 'educated guessing' becomes less and less likely to succeed. The immediate response to this challenge would be to reduce the noise floor and to increase the spectral resolution, but there clearly are limits to what can reasonably be done. In what follows, we therefore address this problem by extending eTPA spectroscopy by means of repeated measurements at different pump wavelengths.
Dependence of s(T e , τ ) on the pump wavelength
Fortunately, the three sets of frequencies in Eqs. (11)-(13) are set apart from one another by their different dependence on ω_0, as demonstrated in Fig. 4. Most importantly, the locations of the +Δ_j signals in Eq. (11) shift with −ω_0. This implies that by measuring at different pump wavelengths λ_p, i.e. different ω_0, we will be able to uniquely identify the intermediate-state energies of the sample, as we can distinguish them from the peaks of Eq. (12), which do not depend on ω_0, and from the peaks of Eq. (13), which change with 2ω_0.
In Fig. 5 we display the eTPA spectrum for the same two-intermediate-state sample as in Fig. 3(a), considering two different central frequencies of the pump. Note that in these plots the peaks corresponding to the frequencies +Δ_j of the two spectra are separated by a distance ±(ω_0^(1) − ω_0^(2)). Now, we simply run a signal-processing routine to identify the set of peaks of both spectra and find pairs, one element from each spectrum, that are separated by ±(ω_0^(1) − ω_0^(2)). By adding the respective ω_0 to these peaks we can thus find the intermediate-state energies ε_j = Δ_j + ω_0. This process can easily be automated.
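The peak-pairing step described above is simple enough to sketch in a few lines; the routine below is an illustrative outline only (the use of scipy.signal.find_peaks, the prominence threshold and the pairing tolerance are assumptions, not the authors' implementation, and ε_i is taken as the energy origin, as in the text).

```python
import numpy as np
from scipy.signal import find_peaks

def intermediate_energies(freqs, spec1, spec2, w0_1, w0_2, tol=None, rel_prominence=0.05):
    """Pair eTPA-spectrum peaks measured at two central frequencies w0_1 and w0_2.

    Peaks of the +Delta_j family shift by (w0_1 - w0_2) between the two spectra;
    adding the respective w0 then yields eps_j = Delta_j + w0.
    """
    freqs = np.asarray(freqs)
    tol = 2 * (freqs[1] - freqs[0]) if tol is None else tol
    shift = w0_1 - w0_2

    peaks1 = freqs[find_peaks(spec1, prominence=rel_prominence * np.max(spec1))[0]]
    peaks2 = freqs[find_peaks(spec2, prominence=rel_prominence * np.max(spec2))[0]]

    energies = []
    for f1 in peaks1:
        # A +Delta_j peak at f1 in spectrum 1 should reappear at f1 + shift in spectrum 2.
        if np.any(np.abs(peaks2 - (f1 + shift)) < tol):
            energies.append(f1 + w0_1)        # eps_j = Delta_j + w0_1
    return np.array(energies)
```

Provided the pump-frequency difference exceeds the pairing tolerance, peaks from the ω_0-independent family and from the 2ω_0 family generically fail this test and are discarded automatically, leaving only the +Δ_j candidates.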
An important advantage of this technique is that we can make further measurements at additional pump wavelengths should two measurements be insufficient to deduce the ε_j from the spectrum. This is preferable to simply increasing the resolution in the delay time τ and decreasing statistical errors through repeated measurements of the same system, because features of the spectrum that are obscured by overlapping peaks or low peak amplitudes at one pump frequency tend not to be obscured at another pump frequency. This is due to the fact that peak positions and amplitudes also change with the pump frequency.
Discrete Fourier transform and experimental accessibility of our technique
While the basic scheme laid out here is rather simple, a number of potential problems lie in the choice of parameters for the experiment. In an actual experiment, the values of τ are discrete. Assuming a free-space delay line with a mirror setup on a translation stage, their spacing Δ_τ is determined by the smallest path delay we can introduce. Here we use a value of Δ_τ = 0.3 × 10^−15 s, which, using a mirror, translates to a step size Δ_L = cΔ_τ/2 = 45 nm, attainable with modern translation stages [27].
As we are using a discrete Fourier transform, our angular frequency resolution ω_res is set by the sampling step in time, i.e. the smallest path delay Δ_τ, and by the number of points we can measure, τ_N = (τ_max − τ_min)/Δ_τ, as expressed by Eq. (14). The corresponding lower bound on ω_res follows from the fact that the range of τ we can access is, in turn, limited by the bandwidth Δ_ω of our entangled photons, as it defines their entanglement time T_e. Here, we assume an SPDC type-II source with a bandwidth of Δ_ω = 7.4 meV [3] and the corresponding entanglement time. The two photons have to overlap in space-time to contribute to two-photon absorption, which restricts the accessible range of τ to the order of T_e. Furthermore, peaks that are supposed to be measurable with a simple setup need to lie roughly within the bandwidth Δ_ω of our photons. This is a serious constraint, as at the same time our angular frequency resolution ω_res becomes poorer for large bandwidths and, consequently, small entanglement times T_e [see Eq. (14)]. In other words, ideally we would want a large Δ_ω and a small ω_res, which by (14) are mutually exclusive. In Fig. 6, we illustrate this effect for two choices of Δ_ω.

Figure 6: eTPA spectra for ten random sets of two intermediate states each, for two different choices of Δ_ω (T_e) at constant Δ_L = 45 nm and pump wavelength λ_p = 405 nm. We observe that a small bandwidth leads to strong attenuation of peaks at frequencies far from the resonance (bottom), whereas a large bandwidth results in poor resolution in ω (top).
This problem could be addressed by increasing the photon flux to offset the limited bandwidth and increase the visibility of otherwise very weak peaks. However, when choosing a bright source, we must take care not to exceed intensities Φ beyond which the quantum processes cease to dominate the absorption rate [4]. Specifically, the absorption cross section R has two contributions: δ_r, the classical, i.e. probabilistic, absorption cross section, and σ_e ∝ P_fi, the quantum-mechanical cross section; it is the latter which we are trying to measure. Their actual values depend on the experiment and have been analysed in detail in Refs. [8,16,17]. Finally, it is worth remarking that the angular frequency range of the eTPA spectrum is determined by our sampling rate in time and does not tend to be a limiting factor on our choice of parameters.
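As a small numerical illustration of this resolution/range trade-off, the generic discrete-Fourier-transform relations can be evaluated for the delay step quoted above; the total scanned delay range used here is an assumed example value, and ω_res = 2π/(τ_N Δ_τ) and ω_range ≈ π/Δ_τ are the standard DFT/Nyquist expressions rather than the paper's Eq. (14).

```python
import numpy as np

hbar_eV_s = 6.582e-16        # hbar in eV*s, to express angular frequencies as energies

delta_tau = 0.3e-15          # smallest delay step in seconds, as quoted in the text
tau_span = 180e-15           # total scanned delay range (s); an assumed example value
n_points = int(tau_span / delta_tau)

omega_res = 2 * np.pi / (n_points * delta_tau)   # DFT angular-frequency resolution
omega_max = np.pi / delta_tau                    # one-sided (Nyquist) angular-frequency range

print(f"delay points          : {n_points}")
print(f"frequency resolution  : {hbar_eV_s * omega_res * 1e3:.1f} meV")
print(f"one-sided range       : {hbar_eV_s * omega_max:.2f} eV")
```

With these assumed numbers the resolution comes out coarser than the 7.4 meV photon bandwidth quoted above, which is one way of seeing why bandwidth (and hence entanglement time) and frequency resolution have to be balanced against each other.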
Conclusion
We have demonstrated that the pump frequency ω_p of a type-II SPDC source represents an additional resource for eTPA spectroscopy. Specifically, we have shown that varying the pump wavelength provides a robust way to interpret spectroscopic data that may otherwise be very difficult to make sense of. In particular, for samples with complex energy spectra, when many intermediate states contribute to the two-photon absorption, our approach can make eTPA spectroscopy feasible. Furthermore, our analysis of the limitations in the choice of parameters has revealed that there is ample room for balanced choices regarding the frequency resolution as well as the frequency range; how these are weighted depends on the concrete problem at hand. Finally, the trade-off between resolution and range can, to some extent, be relaxed by reducing the step size Δ_L using more sophisticated delay lines.
Acknowledgments
We would like to thank Armando Perez-Leija for many fruitful discussions on the topic, as well as Sven Ramelow for his valuable insights on the experimental side. | 2021-04-26T01:15:50.441Z | 2021-04-23T00:00:00.000 | {
"year": 2021,
"sha1": "06a471b01edcd7b8d0de74d7a410e57ac5485f36",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2104.11664",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "06a471b01edcd7b8d0de74d7a410e57ac5485f36",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
254360942 | pes2o/s2orc | v3-fos-license | Towards the Search for Potential Biomarkers in Osteosarcoma: State-of-the-Art and Translational Expectations
Osteosarcoma represents a rare cause of cancer in the general population, accounting for <1% of malignant neoplasms globally. Nonetheless, it represents the main cause of malignant bone neoplasm in children, adolescents and young adults under 20 years of age. It also presents another peak of incidence in people over 50 years of age and is associated with rheumatic diseases. Numerous environmental risk factors, such as bone diseases, genetics and a history of previous neoplasms, have been widely described in the literature, which allows monitoring a certain group of patients. Diagnosis requires numerous imaging tests that make it possible to stratify both the local involvement of the disease and its distant spread, which ominously determines the prognosis. Thanks to various clinical trials, the usefulness of different chemotherapy regimens, radiotherapy and surgical techniques with radical intent has now been demonstrated; these represent improvements in both prognosis and therapeutic approaches. Osteosarcoma patients should be evaluated in reference centres by multidisciplinary committees with extensive experience in proper management. Although numerous genetic and rheumatological diseases and risk factors have been described, the use of serological, genetic or other biomarkers has been limited in clinical practice compared to other neoplasms. This limits both the initial follow-up of these patients and screening in populations at risk. In addition, we cannot forget that the diagnosis is mainly based on the direct biopsy of the lesion and imaging tests, which illustrates the need to study new diagnostic alternatives. Therefore, the purpose of this study is to review the natural history of the disease and describe the main biomarkers, explaining their clinical uses, prognosis and limitations.
Introduction
Among neoplasms, sarcomas represent a heterogeneous group of malignant neoplasms of mesenchymal origin that comprises a wide variety of histological subgroups; each malignancy can manifest in any anatomical location and carries a complexity of diagnosis and prognosis. Overall, more than 80% of sarcomas correspond to soft tissues (mainly liposarcomas, leiomyosarcomas and undifferentiated sarcomas), while 20% correspond to bone, with osteosarcoma being the most frequent primary malignant tumour, followed by Ewing's sarcoma and chondrosarcoma, among others [1]. We must mention that there are more than 100 histological subtypes in both soft tissue sarcomas and bone sarcomas, requiring a complex diagnosis. As a whole, sarcomas are rare neoplasms accounting for less than 1% of malignant neoplasms, which are diagnosed at a rate of approximately 3.4 cases per million inhabitants worldwide [2]. In Spain, for example, in 2021, 141 deaths from malignant tumours of bone and articular cartilage were recorded out of a total of 47,222 deaths from cancer [3]. Despite being a rare neoplasm, it presents two bimodal peaks, being the most frequent malignant neoplasm of bone in people under 20 years of age, in which group 50% of patients are diagnosed with osteosarcoma, and presenting another increase in incidence in people over 65 years of age [4]. Therefore, despite being a rare neoplasm, approximately half of the people affected are young, resulting in a high loss of years of potential life in the population. Various risk factors have been described, among which we can highlight exposure to radiotherapy and chemotherapy (Tables 1 and 2). Regarding this point, osteosarcoma represents the leading cause of solid secondary malignancy in patients who received radiotherapy for a neoplasm in their youth. The time interval to appearance can be up to 20 years, and this aetiology should be suspected in bone tumours in patients with a history of radiation therapy [5]. On the other hand, the chemotherapy that is most often associated with osteosarcoma involves alkylating agents, such as nitrogen mustards or platinum derivatives among the most frequent [6]. We must also point out that, in young people, osteosarcoma is associated with genetic diseases that can account for up to 30% of cases, there having been described a wide variety of genetic conditions that predispose to it [7,8]. Among the hereditary diseases that represent the vast majority of these patients, alterations in Rb1 stand out, which are inherited in a dominant way, and in addition to the characteristic ocular tumour, there is a predisposition to other solid neoplasms where sarcomas represent up to 60% of cases themselves. Similarly, Li-Fraumeni syndrome, which is associated with inherited mutations in p53, is associated with an increased risk of developing osteosarcomas [9]. It is important that the diagnostic criteria for the Li-Fraumeni syndrome include osteosarcoma and other soft tissue sarcomas as the main tumours in this genetic disease [10]. There are other less common hereditary conditions, such as Rothmund-Thomson syndrome, which is associated with various dermatological and ophthalmological alterations and a risk of approximately 30% of presenting with osteosarcoma [11]. Other less common syndromes, such as Bloom syndrome and Werner syndrome, have also been associated with an increased risk of osteosarcoma in children and adults [12]. We have previously noted that there is another peak in incidence in older people. 
This increased incidence can be associated with various bone diseases, most notably Paget's disease, which is characterised by an alteration in the bone turnover process and affects up to 1% of the population over 55 years of age in Spain. The probability of developing osteosarcoma in these patients is approximately 1%, and progression to invasive disease usually occurs in bone affected by Paget's disease, where it is also common for several areas of the bone to be affected at the same time. This leads to more aggressive tumours that are difficult to treat and therefore have a worse prognosis [13]. Various genetic alterations have been associated with the invasive progression of Paget's disease, relating to alterations in chromosome 18 or aberrant variants in chromosome 5 [14].
From a pathological point of view, the 2020 WHO classification allows the histological differentiation of different grades of osteosarcomas, such as low-grade, periosteal, high-grade, unspecified osteosarcoma with different variants, and secondary osteosarcoma, each with its own frequency and a different prognosis [15]. Within non-specified osteosarcoma, conventional osteosarcoma represents more than 90% and usually affects the metaphysis of long bones in the intramedullary region. Other tumours, such as low-grade sarcoma, which accounts for up to 2% of osteosarcomas, and parosteal tumours, have a relatively good prognosis, with cure rates of up to 90% with surgical resection [16]. As we have previously indicated, osteosarcomas attributable to past radiotherapy or Paget's disease are grouped in the classification of secondary osteosarcomas. There are other rarer variants with a worse prognosis, such as multifocal sarcoma, which can affect various bones synchronously, and craniofacial osteosarcoma [17]. Patients in both situations have an ominous prognosis and are usually candidates for palliative chemotherapy, with a fatal outcome and very short survival in most cases. From an anatomical point of view, osteosarcoma is usually located in the metaphysis of long bones in children (mainly the distal femur and proximal tibia) and in the lower limb in adults [18]. The vast majority of patients usually present with pain lasting weeks, constitutional syndrome with asthenia or weight loss, and a tumour in the knee region. Given the bone instability, there is a high probability of pathological fracture, which can be one of the main causes of visits to the emergency room, so multidisciplinary management with traumatology is recommended to avoid this situation [19]. Regarding the diagnosis, plain radiography is usually the first test to be performed, and alterations in the trabecular bone pattern, pathological fractures, periosteal reaction and ossification of adjacent soft tissue that give a rising-sun image can be observed. Radiologically, the ideal imaging test for assessing bone breakdown as well as soft tissue involvement and local infiltration is magnetic resonance imaging. In addition, this test allows both biopsy planning and the planning of a possible surgical intervention [20]. The definitive diagnosis is made by guided biopsy, allowing the correct identification of the histological variety, which is important given the prognostic differences between pathological varieties. Both directed needle biopsy and open biopsy should be carefully planned, and the field to be biopsied should be properly delimited, since there is the possibility of tumour spread along the insertion path of the needle within the tumour mass [21]. Upon diagnosis, up to 20% of patients have metastatic disease, primarily to the lungs (~80% of metastases in these patients), followed by other bones. For this reason, these patients should be evaluated with chest radiography or CT of the chest, abdomen and pelvis for the evaluation of disseminated systemic disease [22]. Another useful imaging test is PET-CT, which can help locate lung or bone metastases. For bone dissemination, bone scanning may be an alternative for localisation [23]. In any case, metabolic dissemination according to the AJCC criteria already implies a stage IV malignancy, where survival is limited and recurrences are early and aggressive in most cases. The long-term prognosis is determined by the spread of the disease.
For example, the 5-year survival of localised disease is 77%, while the presence of disseminated disease reduces the 5-year survival to 26% [24]. The prognosis of patients in recent years has improved thanks to the application of different chemotherapy regimens associated with surgery, but given the complexity of managing this disease, patients should be referred to expert centres where a multidisciplinary approach can be implemented [25]. There is currently no standard chemotherapy regimen available, and there is insufficient evidence to compare the survival benefits of preoperative and postoperative chemotherapy. In cases of localised disease, according to the results of the EURAMOS-1 clinical trial, which included 2260 patients, the regimen of choice is based on methotrexate, doxorubicin, and cisplatin, although there is currently no established standard treatment for this type of tumour [26]. On the other hand, the surgical approach depends on the degree of involvement, location, and locoregional invasion and often leads to the total amputation of a limb in those possible cases. In the case of metastatic dissemination rather than resection candidacy, the previously described regimen of methotrexate, doxorubicin and cisplatin may be appropriate, but there are no data to establish therapeutic lines [27]. Therefore, although osteosarcoma is a rare tumour, there are still major questions regarding its management, which applies to young people half of the time and involves cases of great prognostic uncertainty. Table 1. Main biomarkers and translational applications explored in osteosarcoma.
Marker (type): Translational application [Ref.]
Lactate dehydrogenase (serological): Elevated serological levels are associated with a worse prognosis. [28]
Alkaline phosphatase (serological): Elevated serological levels are associated with a worse prognosis. [29]
TIM-3 (serological): Elevated serological levels are associated with a worse prognosis and allow differentiation between benign bone lesions and osteosarcoma. [30]
WNT6 (serological): Elevated serological levels are associated with a worse prognosis and allow differentiation between benign bone lesions and osteosarcoma (AUC 0.854). [31]
SAA and CXCL4 (serological): Elevated serological levels are associated with a worse prognosis. [32]
P53 (genetic): Mutations in P53 are associated with more aggressive tumours and Li-Fraumeni syndrome. [33]
Rb1 (genetic): Mutations in Rb1 are associated with more aggressive tumours. [34]
PTEN (genetic): Positive expression is associated with a better prognosis. [40]
miR-16 upregulation: Less histological invasion and greater response to cisplatin. [41]
Serological Markers
The role of biomarkers in different tumours is based on their ability to detect, both serologically and histologically, different molecules that have an impact on the diagnosis, prognosis and follow-up of patients with different oncological diseases. For peripherally detected tumour markers such as CA 19-9, CA 125 and CA 15.3, the higher the level of serological elevation, the greater the tumour burden, the greater the probability of relapse and the worse the prognosis [50,51]. Currently, in osteosarcoma, the diagnosis and evaluation of disseminated disease are based on radiological tests and biopsy. Practically the only complementary serological test is the detection of elevated levels of alkaline phosphatase and lactate dehydrogenase, which are only elevated in half of the cases and are a consequence of bone turnover, without clear repercussions at a diagnostic, prognostic or follow-up level [52]. Therefore, although in most tumours there are biomarkers that support the diagnosis and have demonstrated their usefulness in the management and follow-up of patients, in osteosarcoma follow-up is currently based on imaging tests, without a clear recommendation on the timing of surveillance. The detection of tumour antigens as a form of peripheral biomarker is a diagnostic and follow-up standard in numerous neoplasms. In normal clinical practice in osteosarcoma, it is not possible to detect these molecules, although different authors have evaluated the usefulness of various serological markers. Currently, the most commonly used and most controversial are alkaline phosphatase and lactate dehydrogenase. Given the disparate results of different studies in recent years, different authors have evaluated their usefulness in the prognosis of osteosarcoma. In reference to lactate dehydrogenase, one of the most relevant studies comes from Fu et al.: in a meta-analysis of 18 studies that included 2543 patients with osteosarcoma, it was observed that high levels of LDH in peripheral blood were accompanied by a worse prognosis [28]. Regarding alkaline phosphatase, the meta-analysis by Hao et al. included 12 studies and provided evidence that high levels of alkaline phosphatase were associated with a worse prognosis and worse average survival in patients with osteosarcoma [29]. These results are in line with Sahran et al., where a study of 163 patients showed that high levels of LDH and ALP were related to a worse prognosis, although, after multivariate analysis, high LDH levels were the most clearly related to prognosis [53]. Other serological markers, such as TIM-3 (T-cell immunoglobulin domain and mucin domain-3), were studied by several authors. For example, Ge et al. evaluated the diagnostic and prognostic utility of TIM-3 in 120 patients with osteosarcoma, compared with 120 control subjects and 120 patients with benign bone tumours. Their results not only demonstrated the usefulness of TIM-3 in differentiating osteosarcoma patients from healthy controls and from patients with benign bone lesions but also showed that elevated levels of TIM-3 were associated with poorer median survival and poorer prognosis [30]. On the other hand, Kai et al. described the utility of Wingless-Type MMTV Integration Site Family 6 (WNT6) for diagnosis and follow-up in 88 patients with osteosarcoma compared with 32 patients with Ewing's sarcoma and 20 patients with osteomyelitis.
Their results demonstrate that the detection of peripheral WNT6 mRNA yields an ROC curve with an area under the curve (AUC) of 0.854 for differentiating osteosarcoma from the other entities, with a sensitivity of 88.4% and a specificity of 77.8%. In addition, elevated levels of peripheral WNT6 are associated with poorer median survival and an increased presence of metastases [31]. Flores evaluated the usefulness of Serum Amyloid A (SAA) and Chemokine Ligand 4 (CXCL4) in 233 patients, where a serological elevation of SAA and low levels of CXCL4 were associated with poorer median survival [32]. We must emphasise that the rarity of this neoplasm limits the possibility of carrying out studies to evaluate tumour antigen detection in peripheral blood, although possible biomarkers have been identified that can be used both in diagnosis and in allowing the better stratification of those patients with a higher probability of metastatic progression.
Genetic Markers
Osteosarcomas, like most sarcomas, are characterised by a wide variety of genetic alterations and a highly complex karyotype. The differentiation of genetic alterations based on the age of presentation is characteristic of osteosarcoma, since genetic diseases play an important role in the paediatric setting. Likewise, numerous driver mutations have been described both in paediatric osteosarcoma, which is the form most related to genetic diseases, and in adult osteosarcoma, where a great variety of genes are involved; up to 30% of osteosarcomas may be due to genetic causes.
Among them, we must highlight the deletions of the 3q, 13q, 17p and 18q regions that are mainly related to alterations in the Rb and p53 genes [54]. On the one hand, the Li-Fraumeni disease represents the most frequent cause of genetic disease and is accompanied by mutations in p53 that generate osteosarcoma in up to 12% of carriers of this genetic disease [55,56]. In this case, Chen et al. evaluated the clinical usefulness of alterations in p53, which also represent the most frequent driver mutation of osteosarcoma, being present in up to 90% of cases in a meta-analysis that included 210 patients with a mean age of 26 years and showed worse median survival in patients with p53 mutations [33]. On the other hand, in children, we have retinoblastoma syndrome, with mutations in the Rb1 gene, in which up to 7% of carrier patients are predisposed to develop osteosarcoma [57]. Ren et al. carried out a systematic review that included 12 studies with a total of 491 patients; alterations in Rb1 were associated with higher mortality, a higher risk of metastatic disease and worse response to chemotherapy treatment in patients with osteosarcoma [34]. Another of the genetic diseases significantly associated with osteosarcoma involves alterations in RECQL4, which are mainly associated with Rothmund-Thomson syndrome type II; up to 30% of patients with this condition may present with osteosarcomas [58]. All these associations of osteosarcoma with genetic diseases are important to know because they allow us to know populations with risk diseases that can be subjected to closer surveillance. There are currently no screening programs for osteosarcoma in populations at risk, but given that osteosarcoma can often be confused with benign bone diseases or bone fractures, the correct and early identification of this entity can allow early stages to be detected and provide better long-term prognosis [59]. Among other driver markers, we can find NOTCH1, which has been studied by several authors, including Zhang et al., who showed from immunohistochemistry evaluation in 68 patients that high levels of the marker in osteosarcomas were related to a greater presence of metastasis [35]. We must also highlight the importance of the C-fos gene; authors such as Wang et al. evaluated 54 osteosarcoma cell lines and determined that high levels of Fos were accompanied by lesions with greater histological aggressiveness and invasion [36]. An association between osteosarcoma and HER2 expression levels has also been observed for many years. Although many authors have analysed the usefulness of HER2 as a prognostic factor, Grolick et al. observed in 149 paediatric patients with osteosarcoma that its usefulness as a prognostic factor is not so clear, limiting its usefulness in osteosarcoma [37]. Another one of the most commonly activated oncogenes is c-Myc. The study by Feng et al. allowed us to observe that the activation of c-Myc was accompanied by more invasive lesions in 70 patients with osteosarcoma and that it was associated with a worse prognosis [38]. Various authors have shown that alterations in MyC expression can be found in up to 50% of cases. There are other less frequent alterations, such as FGFR1, which, as demonstrated by Amary et al. in 288 patients, occurs in up to 18.5% of patients and is associated with a worse response to chemotherapy and, therefore, a worse prognosis [39]. On the other hand, there are good prognostic factors, such as PTEN, which was highlighted by Zhou et al. 
in a review of 13 articles that included 580 patients with osteosarcoma; a positive expression of PTEN was associated with a better prognosis, including a lower incidence of metastasis and larger differentiated tumours, which were, therefore, less aggressive [40]. We have previously noted that there is a difference between osteosarcoma in adults and in children, where the most frequent alterations in both cases are alterations in the expression of the p53 and Rb genes. There are currently no clear guidelines for genotyping these tumours and assessing, based on the expression of different genes, the probability of a more aggressive disease developing, as well as whether to proceed with more aggressive chemotherapy regimens.
MicroRNA
MicroRNAs are small noncoding RNA molecules of approximately 20 nucleotides that regulate posttranscriptional genes that are related to processes of cell differentiation, proliferation and apoptosis by promoting or suppressing gene expression after transcription. A microRNA molecule regulates the posttranscription of up to 200 different genes, and studying it allows us to understand the underlying pathophysiology of the metastatic process [60]. In relation to osteosarcoma, the implications of microRNAs are multiple and range from maintaining proliferation, promoting metastatic invasion and immunoresistance mechanisms, among others, to overregulating or underregulating oncosuppressive genes or oncogenes that can also be measured in peripheral blood or directly in histological samples through different laboratory techniques; the usefulness of microRNAs lie in them being able to be used not only in diagnosis but also as prognostic factors [61]. In this regard, it has been observed from the histological analysis of osteosarcoma lesions in a study of 40 patients that the overexpression of miR-16 is accompanied by a lower capacity for histological invasion and a greater response to cisplatin [41]. The same happens with other microRNAs such as miR-31, miR-100 or miR-221-3p, miR-29b-1-5p, miR-125b, miR-27, miR-148a, miR-181a-5p, miR-181c-5p, and miR-195, among the many described [62]. In relation to prognostic utility, given the great variety of microRNAs described, we should highlight the systematic review and meta-analysis by Cheng et al., wherein 55 articles were evaluated based on the prognostic utility of different microRNAs. In it, it is evident that an overexpression of miR-21, miR-214, miR-29, miR-9 and miR-148a at the same time as an under-regulation of miR-382, miR-26a, miR-126, miR-195 and miR-124 was associated with worse prognosis and worse mean survival [42]. From a diagnostic point of view, many authors have tried to demonstrate the usefulness of different miRNAs in diagnosis. For example, Allen-Rhoades et al. analysed 30 control patients and 40 patients with osteosarcoma; miR-205-5p had an AUC of 0.70, miR-214 had an AUC of 0.8, miR-335-5p had an AUC of 0.78 and miR-574-3p had an AUC of 0.88 for the diagnosis of this entity; and low plasma levels of miR-214 were accompanied by better median survival [43]. On the other hand, Lian et al. compared the levels of four miRNAs measured in peripheral blood in patients with osteosarcoma and 90 control patients, and the combination of miR-195-5p, miR-199a-3p, miR-320a, and miR-374a-5p had an AUC of 0.96 in differentiating patients with osteosarcoma versus healthy controls [44]. Wang et al. determined that the under-regulation of miRNA 152 allows differentiation with an AUC of 0.956, a sensitivity of 92.5% and a specificity of 96.2% in differentiating patients with osteosarcoma from periostitis patients and healthy controls in in a group of 80 patients with osteosarcoma, 20 with periostitis and 20 healthy controls [45]. In this regard, authors such as Cao et al. have evaluated the usefulness of miR-326, also measured serologically in 60 patients with osteosarcoma versus 20 healthy controls, and obtained ROC curves with an AUC of 0.817; they also observed that patients with decreased levels of miR-326 tended to have a worse prognosis and a higher likelihood of metastatic disease [46]. Given the great variety of microRNAs and their possible diagnostic uses, we should highlight the systematic review by Gally et al. 
They carried out a systemic review of up to 60 microRNAs in 35 different studies and, given the numerous different studies with various results, were unable to obtain the stratification of a subgroup of microRNAs to be used in diagnosis [63]. This highlights the complexity of having the necessary material; given that the intention is to obtain the peripheral levels of microRNA, complex laboratory techniques are often required that may not be available in many hospitals. Therefore, we can observe that although a great variety of miRNAs are available and their diagnostic and prognostic utility are useful, it is complex to evaluate a set of miRNAs that are appropriate for this entity.
Circulating Tumour Cells
The concept of circulating tumour cells (CTCs) is based on the existence of epithelial cells in the blood circulatory system derived after a process of angioinvasion and, therefore, metastatic dissemination; these cells are not normally seen in patients without cancer [64]. CTCs are typically found per 10 million peripheral blood leukocytes and are often associated with underlying metastatic disease [65]. It should be noted that the importance of circulating tumour cells has already been described in prostate, breast and colon cancer, where their presence is associated with a worse prognosis and a higher rates of recurrence after chemotherapy or surgery [66]. Multiple methods have been studied for the detection of circulating tumour cells. The preferred method approved by the FDA which is the gold standard is based on the detection of epithelial protein EpCAM and cytokeratins 8, 18 and 19 using the Cellsearch method, which is approved for metastatic breast cancer, prostate adenocarcinoma and colorectal cancer [67]. Other methods for the detection of CTCs, such as the positive immunoselection of EpCAM, negative immunoselection of leukocytes, filtration, immunomagnetic, electrophoresis or flow cytometry, have also shown utility but are not currently approved by the FDA and are based on complex techniques that require very well-trained personnel, which are inaccessible for daily clinical practice [68]. Various authors have shown the prognostic utility of CTCs in osteosarcoma. For example, Wu et al. showed that 93.75% of CTCs were detected in 32 patients with osteosarcoma compared to 10 controls where they were negative; they also showed that patients who maintained high levels of CTCs after surgery and chemotherapy had worse average survival, higher recurrence rates and more metastatic disease [47]. In this regard, Minghui et al. observed that the detection of CTCs in a group of 30 patients with osteosarcomas was related to metastatic disease and worse prognosis [48]. On the other hand, Han et al. evaluated the usefulness of cisplatin nanodeletion in in vivo models of mice with osteosarcoma in a preclinical model, also demonstrating the chemosensitivity of CTCs from 16 patients to cisplatin in these samples [49]. In reference to liquid biopsy by CTC, the main limitations of this technique relate to sample collection and processing techniques, given that CTCs can become fragile and cannot be processed correctly, which can generate false negatives, and entails various diagnostic-therapeutic implications. On the other hand, the mesenchymal origin of sarcomas limits the use of cytokeratins in their detection, requiring the detection of other markers for their correct identification [69]. Authors such as Fasanya et al. have demonstrated by flow cytometry that ganglioside markers 2 and 3, in addition to vimentin, are possible candidates for the detection of osteosarcoma cells in different harvesting techniques with greater superiority compared to EpCAM, which is the classic epithelial cell marker for the detection of circulating tumour cells. In addition, its high price and technical complexity should be noted, as it often requires a support laboratory that not all hospitals can afford [70]. All of this can affect the performance of the diagnosis, decreasing both the sensitivity and specificity in their detection. On the other hand, the low incidence of this disease only generates small groups of patients for evaluating its clinical utility in large clinical trials. 
Even so, different authors have shown its usefulness in demonstrating the presence of metastatic disease, which is one of the main factors limiting survival in osteosarcoma.
Conclusions
Osteosarcoma is a rare cause of neoplasia in developed countries, affecting half of patients with malignant bone neoplasms. Regardless of its frequency, it is a diagnostic challenge requiring a multidisciplinary approach and, in many instances, treatment that cannot limit metastatic disease, presenting these patients with an ominous short-term prognosis. In addition to the disease being aggressive, it should also be noted that chemotherapy treatment in advanced stages does not allow obtaining an adequate response rate and that there is currently no universally accepted chemotherapy regimen available. Although various molecular markers have been described in different neoplasms in recent years, this has not occurred in osteosarcoma (Figures 1 and 2), and practically no targeted therapy has been shown to be clinically useful in these patients. Likewise, the diagnosis of this disease is limited to the use of imaging tests and biopsy, there being very little clinical relevance in the diagnosis and monitoring of different biomarkers. In Table 1, the main applications of the explored biomarkers are summarised Therefore, future objectives in the management of osteosarcoma in its varied histological forms are based on improving early detection and describing new molecular markers in relation to its prognosis and diagnosis that allow the better stratification of those patients with disseminated disease. therapy has been shown to be clinically useful in these patients. Likewise, the diagnosis of this disease is limited to the use of imaging tests and biopsy, there being very little clinical relevance in the diagnosis and monitoring of different biomarkers. In Table 1, the main applications of the explored biomarkers are summarised Therefore, future objectives in the management of osteosarcoma in its varied histological forms are based on improving early detection and describing new molecular markers in relation to its prognosis and diagnosis that allow the better stratification of those patients with disseminated disease. | 2022-12-07T19:23:50.633Z | 2022-11-29T00:00:00.000 | {
"year": 2022,
"sha1": "9101227e020fa1ce4f179c9bb062f6b695c4a231",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/23/14939/pdf?version=1669713890",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ccebada8bcadcc84a7a6ce5e3bd66dc547205b4b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234831942 | pes2o/s2orc | v3-fos-license | Numerical study of drill string uncertainty in acoustic information transmission
Acoustic data transmission along the drill string is one of the effective methods for overcoming the bottleneck in downhole information transmission speed, but its channel characteristics are mainly affected by structural changes in the string. During drilling, the downhole BHA changes constantly according to well conditions, and the shape of the drill string changes unpredictably under wear and mechanical loading, which leads to uncertainty in the acoustic channel. Through numerical simulation we can obtain the channel changes of drill strings of different shapes and analyze the main factors that affect information transmission. Size inconsistency among multiple drill pipes degrades the channel characteristics to a certain extent, but an aperiodic drilling tool structure in the channel causes a significant change in the transmission characteristics.
Introduction
At present, high-speed information transmission while drilling is one of the bottlenecks of intelligent drilling technology [1][2]. The mud pulse measurement while drilling (MWD) technology, which has been widely used commercially, and the electromagnetic wave measurement while drilling (EMWD) technology have great limitations; in particular, their data transmission rates are difficult to reconcile with the demand for transmitting large amounts of data while drilling [3]. High-speed transmission of data while drilling has become the key technology for improving downhole drilling. Compared with the above technologies, acoustic information transmission in the drill string has advantages in reliability and applicability. Theoretically, the acoustic transmission speed in the drill string is 1-2 orders of magnitude higher than that of mud pulse or electromagnetic wave data transmission technology.
In 1948, the Sun Oil Company of the United States began to study the technology of sound wave transmission in drill strings [4]. Drumheller began studying sound propagation in the string in 1983 and obtained the sound propagation characteristics of the drill string system [5][6][7][8]. By 2011, XACT's related products had been used in more than 400 wells, and a maximum well depth of 4000 meters can be reached by using two transponders [9][10][11][12][13]. In 1991, Liu Qingyou analyzed the mechanical model of the drill string based on a vibration model [14]. At present, the channel characteristics of periodic drill strings and the environmental attenuation of acoustic signals have been studied thoroughly in China, and the influence of different drill string structures on the channel has also been studied [15][16][17][18][19][20].
During drilling, the wear state of the drill string is constantly changing, and a complex BHA may also be used in directional drilling to control the well trajectory [31][32][33]. The inconsistency of dimensions between unconventional and conventional drill strings leads to non-periodicity and unpredictability of the drill string acoustic channel.
Influence of inconsistent length loss of drill string joint on acoustic transmission characteristics
In field operations, old and new drill pipes are used together, and the length and other dimensions of repaired old pipes change. At the same time, the inner and outer diameters of the drill string also change as it is washed and worn by drilling fluid and the borehole wall. The random variation of the size parameters of each drill pipe in the channel leads to uncertainty of the acoustic transmission channel while drilling.
During drilling, the rotary torque and WOB required by the bit to break rock are transmitted through the drill string. The joint thread of the drill string fails after a certain period of service and must be repaired by machining new threads, which reduces the length of the drill string joint. Referring to the drill string classification standard, and assuming that the length reduction of the threaded joints used in the acoustic transmission-while-drilling channel is less than 10% but varies randomly, the acoustic transmission characteristics were analyzed. It can be seen from the figure that random variation of the joint length within this tolerance superimposes a small-amplitude, high-frequency component on the acoustic transmission waveform of the drill string. In the channel characteristics, the passband bandwidth above 2500 Hz decreases slightly, while the passband characteristics below 2500 Hz change only slightly.
Influence of wear change of drill string joint on acoustic transmission characteristics
During rotary drilling and when lifting and lowering the drilling tools, the diameter of the drill string joint is much larger than that of the pipe body, so the joint contacts the borehole wall. Friction between the joint and the borehole wall rock reduces the outer diameter of the joint, while corrosion and erosion by the mud increase its inner diameter.
According to the drill string grading standard, the outer-diameter wear of an excellent drill string is not more than 3% and its remaining wall thickness is more than 80%; the outer-diameter wear of a Grade II drill string is not more than 4% and its remaining wall thickness is more than 70%; otherwise the pipe must be scrapped. According to this standard, the outer diameter of the drill string joint varies from 152.4 mm to 137.7 mm, the inner diameter varies from 82.6 mm to 86.82 mm, and the cross-sectional area varies from 129 cm² to 89.72 cm². The simulation model is used to calculate the resulting variation of sound wave transmission in the drill string.
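As a quick consistency check (an illustrative script only; the diameters are the values quoted above), the quoted cross-sectional areas can be recomputed from the inner and outer diameters:

import math

def annular_area_cm2(od_mm, id_mm):
    # Cross-sectional (wall) area of a pipe joint, in cm^2, from diameters in mm.
    return math.pi / 4.0 * (od_mm ** 2 - id_mm ** 2) / 100.0

print(annular_area_cm2(152.4, 82.6))    # ~128.8 cm^2, i.e. the ~129 cm^2 quoted for a new joint
print(annular_area_cm2(137.7, 86.82))   # ~89.7 cm^2, matching the 89.72 cm^2 quoted for a worn joint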
Joint wear and the change of inner diameter alter the cross-sectional area by up to 30%; the consistency of this change in the actual drilling process should be better. The decrease of cross-sectional area caused by joint wear reduces the reflection of sound waves where the cross-section changes, which can play a powerful role in improving the transmission channel.
Influence of composite wear loss on acoustic transmission characteristics
In the drilling process, all kinds of wear loss and random changes of drill string length appear at the same time. Assuming that the joint length loss is less than 10%, the joint cross-sectional area loss is less than 30%, the pipe-body length loss is less than 1% (8.7 mm), and the pipe-body cross-sectional area loss is less than 18.5%, the channel characteristics were calculated. Figure 3 shows a comparison in the frequency range 0-1400 Hz: the upper panel shows the influence of composite loss on the acoustic transmission characteristics, and the lower panel shows the acoustic transmission characteristics of an ideal drill string. The influence of drill string wear on the acoustic transmission characteristics in the actual drilling process can therefore be summarized as follows. If the length and cross-sectional area of the drill string joint remain within the drill string classification standard, the impact on the transmission characteristics is small, and if the consistency of the loss can be controlled, the transmission characteristics can be optimized to a certain extent. The random length loss of the drill string has a great influence on the acoustic transmission characteristics: a random length variation of more than 10% can make the available passband disappear. The consistency of drill string length directly determines the feasibility of the acoustic transmission channel and the number and bandwidth of available passbands; the length consistency of the pipe body should be kept within 5%. A change of 18.5% in the cross-sectional area of the drill string, within the standard wear range of the drill string classification, reduces the passband bandwidth by 20%. A qualitative channel model illustrating this passband structure is sketched below.
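The passband/stopband behaviour discussed above can be reproduced qualitatively with a one-dimensional transfer-matrix (transmission-line) model of the alternating pipe-body/joint structure. The sketch below is not the authors' simulation model: the steel properties, segment lengths and pipe-body area are assumed nominal values, and only the joint area (129 cm²) is taken from the text. Randomizing the segment lengths or areas inside the loop reproduces the kind of channel uncertainty studied here.

import numpy as np

RHO, C = 7850.0, 5130.0          # assumed steel density (kg/m^3) and longitudinal sound speed (m/s)

def segment_matrix(f, length, area):
    # 2x2 force-velocity transfer matrix of a uniform rod segment at frequency f (Hz).
    k = 2.0 * np.pi * f / C
    Z = RHO * C * area           # mechanical impedance of the segment
    return np.array([[np.cos(k * length), 1j * Z * np.sin(k * length)],
                     [1j * np.sin(k * length) / Z, np.cos(k * length)]])

def transmission(f, n_pipes=30, l_body=9.0, a_body=34e-4, l_joint=0.5, a_joint=129e-4):
    # Chain the matrices of alternating pipe bodies and joints, then terminate the
    # string in the pipe-body impedance at both ends.
    M = np.eye(2, dtype=complex)
    for _ in range(n_pipes):
        M = M @ segment_matrix(f, l_body, a_body) @ segment_matrix(f, l_joint, a_joint)
    A, B, Cc, D = M.ravel()
    Z0 = RHO * C * a_body
    return abs(2.0 / (A + B / Z0 + Cc * Z0 + D))

freqs = np.linspace(10.0, 4000.0, 2000)
response = np.array([transmission(f) for f in freqs])   # comb-like passbands and stopbands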
Conclusion
The transmission of acoustic carrier information in the drill string is greatly affected by drill string dimensions. In drilling engineering operations, however, wear and repair of the drill string introduce uncertainty in these dimensions. The research shows that the better the dimensional consistency of the drill string, the better the channel characteristics. When loss of the drill string is unavoidable, the consistency parameters of the different drill string dimensions must be determined according to the number of drill pipes or the drilling depth to ensure the feasibility of the drill string channel. If the drill string is not optimized, the channel deteriorates until no usable channel remains. This requirement is stricter than conventional drilling practice, but it is necessary for channel transmission. | 2021-05-21T16:57:47.149Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "8bc9c53cf8744d2f74e350f92ada35c44472c01f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/734/1/012024",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e3504035f9ba6910c16da71b81489a6179bf9d94",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
263656051 | pes2o/s2orc | v3-fos-license | Enhancement of cytotoxic effects with ALA-PDT on treatment of radioresistant cancer cells
Radiation therapy is a less invasive local treatment than surgery and is selected as a primary treatment for solid tumors. However, when cancer cells acquire radiotherapy tolerance, the cytotoxicity of radiotherapy is attenuated. Photodynamic therapy (PDT) is a non-invasive cancer therapy that combines photosensitizers with laser irradiation at an appropriate wavelength. PDT is carried out for recurrent esophageal cancer patients after radiation chemotherapy and is an effective treatment for radiation-resistant tumors. However, it is not clear why PDT is effective against radioresistant cancers. In this study, we attempted to clarify this mechanism using X-ray-resistant cancer cells. X-ray-resistant cells produce high amounts of mitochondria-derived ROS, which enhanced nuclear translocation of NF-κB, resulting in increased NO production. Moreover, the expression of PEPT1, which imports 5-aminolevulinic acid, the precursor of the photosensitizer, was upregulated in X-ray-resistant cancer cells. This was accompanied by an increase in intracellular 5-aminolevulinic acid-derived porphyrin accumulation, resulting in enhanced PDT-induced cytotoxicity. Therefore, effective accumulation of photosensitizers induced by ROS and NO may make PDT effective after radiation therapy, and PDT could be a promising treatment for radioresistant cancer cells.
The number of patients with cancer is increasing as societies around the world age. Commonly used cancer treatments are surgery, radiation therapy, chemotherapy, and immunotherapy. Radiation therapy is a less invasive local treatment than surgery owing to the sensitivity of tumors to ionizing radiation; thus, it is selected as a primary treatment for solid tumors. Moreover, the radiosensitivity of tumors is higher than that of normal tissue. (1) Accordingly, radiation therapy can induce cancer-specific cell death without relatively severe side effects.
Radiation can induce cellular DNA damage directly and indirectly through double-strand or single-strand DNA breaks. (2) In normal cells, cellular injury caused by radiation can be repaired more effectively than in cancer cells. (3) However, some cancer cells acquire radioresistance and can attenuate the damage derived from radiation. (4) Radiotherapy tolerance is one of the most severe problems in cancer therapy and directly influences the subsequent prognosis. (5) Photodynamic therapy (PDT) is a cancer therapy that utilizes a combination of photosensitizers and laser light. (6) Just after exposure to light of an optimal wavelength, photosensitizers are excited, and reactive oxygen species (ROS) are immediately produced through an energy transfer reaction. (7) ROS are potent cytotoxins, and cancer cell death is induced. (8) The phenomenon of cancer-specific uptake of porphyrins was reported in the early 20th century, and PDT became one of the established cancer treatments in the 1970s. (9) In 1992, Kennedy et al. (10) reported that 5-aminolevulinic acid (5-ALA) can be utilized in PDT as a precursor of protoporphyrin IX (PpIX) in the heme pathway. PpIX accumulation levels in cancer cells are higher than in normal cells because of metabolic abnormalities and the expression of several transporters. Therefore, cancer-specific cytotoxicity can be achieved with PDT.
PDT is carried out for recurrent esophageal cancer patients after radiation chemotherapy. (11) The therapeutic outcomes indicate that PDT is an effective treatment for radiation-resistant tumors. However, the mechanism behind the effective therapeutic outcomes of PDT in radioresistant cancer cells is still unclear. In general, radioresistant tumors are reported to produce high amounts of ROS. (12) Thus, the relationship between intracellular ROS production and the porphyrin metabolic pathway may influence intracellular PpIX accumulation and the cytotoxic effects of PDT. In this study, we evaluated whether ALA-PDT is an effective treatment for radioresistant tumors.
X-ray resistant strain. X-ray-resistant RGK1 strains (RGK-XRR) were established in our laboratory. X-irradiation was carried out using an X-irradiation system MBR-1505R (Hitachi Power Solutions Co. Ltd., Hitachi, Japan) for 23 days. To maintain the properties of the X-ray-resistant strains, further X-ray irradiation was performed using another device, MBR-1520R (Hitachi Power Solutions), for another 7 days. Irradiation was performed under the following conditions: tube voltage of 120 kV, tube current of 3.8 mA, and a 0.5 mm Al filter. Irradiation of 2 Gy/day was performed for a total of 30 days, during which the cells were subcultured 3-5 times. Cells surviving this irradiation were designated as the X-ray-resistant strain.
Mitochondrial reactive oxygen species (mitoROS) assay.
To evaluate mitochondrial ROS production, the MitoSOX™ Red superoxide indicator (Thermo Fisher Scientific Inc., Waltham, MA) was used. Cells were seeded at a density of 1 × 10^4 cells/well in 96-well plates (black plates with clear bottoms) and cultured overnight. The medium was replaced with HBSS containing 5 µM MitoSOX and incubated at 37°C for 30 min. Fluorescence intensity was measured with a Varioskan microplate reader (Thermo Fisher Scientific Inc.); fluorescence was excited at 396 nm and emission was measured through a 610 nm filter.
Measurement of intracellular NO production. RGK-WT and RGK-XRR were cultured overnight at a density of 5 × 10^4 cells/well in 6-well plates. After aspirating the supernatant, cells were incubated in FluoroBrite™ DMEM (Thermo Fisher Scientific Inc.) with 10 μM diaminorhodamine-4M acetoxymethyl ester (DAR-4M AM) (Goryo Chemical, Hokkaido, Japan) for 15 min. After incubation, the medium was replaced with FluoroBrite™ DMEM. Fluorescence images were obtained and the intensity was measured using an IX83 fluorescence microscope (Olympus Optical Co. Ltd., Tokyo, Japan); fluorescence was excited at 535-555 nm, and emission was observed through a 570-625 nm filter.
Immunohistochemistry of NF-κB. RGK-WT and RGK-XRR were cultured in 8-well chamber slides. After incubation, cells were fixed with PBS containing 4% paraformaldehyde for 15 min. After aspirating the supernatant, cells were washed three times with PBS, permeabilized with 0.5% Triton™ X-100 in PBS for 15 min, and then incubated for 60 min in blocking reagent. A rabbit anti-NF-κB antibody (Genetex Inc., Irvine, CA) (1:1,000) in Can Get Signal Immunoreaction Enhancer Solution 1 (TOYOBO CO., LTD., Osaka, Japan) was added as the primary antibody and the cells were incubated for 1 h. After incubation, cells were washed three times with PBS, and Goat anti-Rabbit IgG (H+L) Cross-Adsorbed Secondary Antibody, Alexa Fluor™ 405 (Thermo Fisher Scientific Inc.) (1:1,000) in Can Get Signal Immunoreaction Enhancer Solution 2 (TOYOBO CO., LTD.) was added, followed by incubation for 1 h. After this treatment, cells were washed three times with PBS and observed under an all-in-one fluorescence microscope (BZ-X710; Keyence Corp., Osaka, Japan).
Measurement of PEPT1 and ABCG2 expression by Western blotting.
Protein expression of PEPT1 and ABCG2 in RGK-WT and RGK-XRR was analyzed by Western blotting. The cells were washed three times with PBS and lysed with RIPA buffer (FUJIFILM Wako Pure Chemical Corporation, Osaka, Japan) on ice to obtain total cell lysates, which were then heated at 70°C for 10 min. For SDS-polyacrylamide gel electrophoresis, the cell lysates were loaded into the wells of an ePAGEL® E-T12.5L gel (ATTO Corporation). Gels were electrophoresed at 250 V for 20 min and proteins were transferred to polyvinylidene fluoride (PVDF) membranes (Clear Blot P+ Membrane; ATTO Corporation, Tokyo, Japan). PVDF membranes were blocked with the PVDF blocking reagent Can Get Signal® (TOYOBO CO., LTD.) for 60 min. Anti-PEPT1 antibody (Abcam plc, Cambridge, UK) and anti-ABCG2 antibody (Cell Signaling Technology Japan K.K., Tokyo, Japan) were diluted 1:1,000 in Can Get Signal Immunoreaction Enhancer Solution 1 (TOYOBO CO., LTD.) and allowed to react with the membrane at 4°C overnight. After aspiration of the primary antibody solution, the membrane was washed three times for 5 min with PBS containing 0.1% Tween 20 (Sigma-Aldrich Co.) (PBS-T). The secondary HRP-linked anti-rabbit IgG antibody (Cell Signaling Technology Japan K.K.) (1:1,000) was added in Can Get Signal® Immunoreaction Enhancer Solution 2 (TOYOBO CO., LTD.), to which the membrane was exposed for 60 min. After the reaction, the membrane was washed three times with PBS-T, reacted with Luminata Forte Western HRP Substrate (Millipore Co., Billerica, MA), and the luminescence was captured and measured on an ImageQuant LAS4000 (GE Healthcare Japan, Tokyo, Japan). β-Actin, detected with an anti-β-actin antibody (Cell Signaling Technology Japan K.K.), was used as a sample loading control.
Measurement of intracellular PpIX fluorescence. RGK-WT and RGK-XRR were seeded at 2 × 10^4 cells/well in 24-well cell culture plates and cultured overnight. The medium was replaced with medium containing 1 mM 5-ALA hydrochloride (FUJIFILM Wako Pure Chemical Corporation) and the cells were incubated in the dark for 6 h. After removing the medium and washing three times with PBS, the cells were lysed in 100 μl/well of RIPA buffer and transferred to a 96-well cell culture plate. The fluorescence intensity of the cell lysate was measured with a Synergy H1 microplate reader (BioTek Instruments Inc., Winooski, VT), with excitation at 415 nm and emission measured at 625 nm.
PDT and cell viability. RGK-WT and RGK-XRR were seeded in 6-well plates at 5 × 10^5 cells/well and cultured for 2 days. The cells were incubated in medium supplemented with 1 mM 5-ALA hydrochloride for 6 h, after which the medium was removed and the cells were washed three times with 1 ml of FluoroBrite™ DMEM (Thermo Fisher Scientific Inc.). Cells were irradiated with laser light (635 nm, 0.5 J/cm²) for PDT via a 3ch LD Light Source & Modulation System (YAMAKI CO., LTD., Tokyo, Japan). After 1 day of incubation, the cells were detached with 0.05% trypsin/EDTA, collected, stained with Trypan Blue [Invitrogen™ Trypan Blue Stain (0.4%)], and counted using a Countess C10281 automated cell counter (Thermo Fisher Scientific Inc.).
Statistical analysis. Microsoft Excel (Microsoft Corporation, Redmond, WA) or SPSS (IBM Corporation, Armonk, NY) was used for statistical processing; Student's t test was used for comparisons between two groups, and Tukey's post-hoc test was used for comparisons among more than two data sets. A P value of less than 0.05 was considered statistically significant. All data are presented as mean ± SD. Furthermore, the expression of ABCG2 was significantly upregulated in the X-ray-resistant strain compared with RGK1, indicating that both the 5-ALA uptake protein and the PpIX excretion protein are upregulated in the X-ray-resistant strain.
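For reference, the same two procedures can be reproduced with standard scientific-Python tools; the arrays below are synthetic placeholders, not the measured data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wt = rng.normal(100.0, 10.0, 12)     # placeholder fluorescence values, RGK-WT
xrr = rng.normal(130.0, 10.0, 12)    # placeholder fluorescence values, RGK-XRR
third = rng.normal(115.0, 10.0, 12)  # placeholder third group

t_stat, p_two_groups = stats.ttest_ind(wt, xrr)   # Student's t test between two groups
tukey = stats.tukey_hsd(wt, xrr, third)           # Tukey's post-hoc test for >2 groups (SciPy >= 1.8)
print(p_two_groups, tukey.pvalue)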
Intracellular PpIX accumulation. PpIX metabolized and accumulated in the X-ray-resistant strain and in RGK-WT was quantified after 5-ALA administration by measuring PpIX fluorescence. PpIX fluorescence was significantly higher in RGK-XRR than in RGK-WT 6 h after 5-ALA administration, indicating that 5-ALA uptake in RGK-XRR resulted in increased metabolism and accumulation of PpIX.
Effective cytotoxicity of ALA-PDT in RGK-XRR. ALA-PDT was performed on RGK-WT and RGK-XRR. Cell viability was 71.9% for RGK-XRR with PDT and 85.8% for RGK-XRR without PDT, and 91.9% for RGK-WT with PDT and 100% for RGK-WT without PDT. A significant decrease in cell viability was observed between RGK-XRR with PDT and RGK-WT with PDT, whereas no significant difference was observed between RGK-WT with and without PDT. In other words, ALA-PDT was more effective against RGK-XRR than against RGK-WT.
Discussion
In this study, we used RGK-WT and RGK-XRR generated from RGK to search for differences in the effects of ALA-PDT between these cancer cells. (17) It is reported that X-ray irradiation induces elevated intracellular ROS production, and we demonstrated that mitochondria-derived ROS generation increased in the RGK-XRR established by continual X-ray irradiation, as shown in Fig. 1. In addition, intracellular production of NO is enhanced through the expression of NO synthase (NOS), and inducible NO synthase (iNOS) is activated via ROS production. In fact, intracellular NO production was increased in RGK-XRR compared with RGK-WT cells, as shown in Fig. 2. Thus, the increase of NO generation in the present study was likely induced by iNOS activation through ROS signal transduction.
Intracellular mitoROS and NO production were evaluated in both cell lines and were higher in RGK-XRR than in RGK-WT. MitoROS activate NF-κB. (15) Translocation of NF-κB into the nucleus induces iNOS expression and NO production. (16) According to the immunohistochemistry results, NF-κB nuclear localization was more enhanced in RGK-XRR than in RGK-WT (Fig. 3). From these results, mitoROS production was increased and accelerated NF-κB nuclear localization in RGK-XRR, and intracellular NO production was then increased by upregulation of iNOS expression.
We hypothesized that the cytotoxicity of PDT in RGK-XRR would be enhanced compared with the parental RGK-WT because mitoROS and NO production increased in RGK-XRR. We previously reported that mitoROS can enhance the expression of heme carrier protein 1 (HCP1) and PEPT1. (18,19) HCP1 is a carrier protein for heme and porphyrins, and PEPT1 is a carrier protein for ALA and amino acids. We also reported that the expression of iNOS, which is downstream of mitoROS, induced an increase in NO production, which then enhanced HCP1 expression. (20) From these phenomena, we expected PEPT1 expression to be upregulated as well by the enhancement of NO production; in fact, PEPT1 expression was enhanced through the increase of NO production. We previously reported that the expression of ATP-binding cassette sub-family G member 2 (ABCG2), which exports porphyrins and some anti-cancer drugs from cells, was decreased by upregulation of ROS production. (21) We therefore expected ABCG2 expression in the X-ray-resistant cells to be downregulated as well, because intracellular ROS production increased in these cells. However, ABCG2 expression in RGK-XRR was higher than in RGK-WT. Thus, several signaling pathways in RGK-XRR might be activated, leading to the upregulation of ABCG2 expression. Some X-ray-resistant cells acquire resistance to anti-cancer drugs such as doxorubicin, which is exported from cells through ABCG2. (22) Indeed, ABCG2 expression in RGK-XRR was upregulated.
We hypothesized that the upregulation of PEPT1 expression increased the uptake of ALA and enhanced the cytotoxicity of PDT. Compared with RGK-WT, the amount of intracellular PpIX derived from ALA was increased in RGK-XRR 6 h after ALA addition, and accordingly the effect of PDT was also enhanced. Although ABCG2 expression was upregulated in RGK-XRR, PpIX accumulation in RGK-XRR still increased and the cytotoxicity of PDT was enhanced. We consider that intracellular ferrochelatase was inactivated by the increase in NO production. NO enhances the effects of ALA-PDT by decreasing the levels of mitochondrial iron-containing enzymes. (23) In other words, when NO increases, the activity of ferrochelatase decreases; thus, PpIX accumulation increases, which may be advantageous for the therapeutic effect of ALA-PDT.
At the Department of Gastroenterology of the University of Tsukuba Hospital, PDT has been performed for the treatment of recurrent esophageal cancer after chemoradiotherapy (CRT). Post-CRT recurrent esophageal cancer is thought to consist of cancer cells that are resistant to radiation and chemotherapy. The results of this study indicate that PDT is an optimal salvage treatment for cancer recurrence after radiotherapy. The advantage of this treatment is that it offers an effective curative option for patients who cannot undergo other invasive treatments. Photosensitivity and delirium have been reported as side effects of PDT; however, the benefit may be higher for patients with radio- and/or chemo-resistant cancer.
In conclusion, intracellular ROS and NO production in the X-ray-resistant cells was higher than in RGK-WT. This indicates that upregulation of PEPT1 expression increases PpIX accumulation derived from ALA and subsequently enhances the cytotoxicity of PDT. Moreover, inactivation of ferrochelatase also contributes to the increased accumulation of PpIX.
Fig. 1. Intracellular fluorescence intensity of MitoSOX. MitoROS was detected by staining with the MitoSOX™ Red superoxide indicator and the fluorescence intensity was measured. The fluorescence intensity of MitoSOX was higher in RGK-XRR than in RGK-WT. Data are expressed as mean ± SD (n = 12). Statistical significance was tested with Student's t test. ***p<0.005.
Fig. 2. NO production in RGK-XRR and RGK-WT. (A) Fluorescence microscopy used to assess cellular uptake of DAR-4M AM. (B) The fluorescence intensities were analyzed; the sample size is the number of cells in the analyzed images. Scale bar, 50 µm. Data are expressed as mean ± SD (n = 10). Statistical significance was tested with Student's t test. **p<0.01. | 2023-10-05T15:34:19.185Z | 2023-10-03T00:00:00.000 | {
"year": 2023,
"sha1": "132b72655b8b8df984db1177826b0127d43eca0f",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/jcbn/advpub/0/advpub_23-79/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "befe6f81434b874852dbe6bbeb6093e5e3339804",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
24241555 | pes2o/s2orc | v3-fos-license | A Novel Application of Zero-Current-Switching Quasiresonant Buck Converter for Battery Chargers
The main purpose of this paper is to develop a novel application of a resonant switch converter for battery chargers. A zero-current-switching (ZCS) converter with a quasiresonant converter (QRC) was used as the main structure. The proposed ZCS dc-dc battery charger has a straightforward structure, low cost, easy control, and high efficiency. The operating principles and design procedure of the proposed charger are thoroughly analyzed. The optimal values of the resonant components are computed by applying the characteristic curve and electric functions derived from the circuit configuration. Experiments were conducted using lead-acid batteries. The optimal parameters of the resonant components were determined using the load characteristic curve diagrams. These values enable the battery charger to turn on and off at zero current, resulting in a reduction of switching losses. The results of the experiments show that, compared with the traditional pulse-width-modulation (PWM) converter for a battery charger, the buck converter with a zero-current-switching quasiresonant converter can lower the temperature of the active power switch.
Introduction
Batteries are extensively utilized in many applications, including renewable energy generation systems, electric vehicles, uninterruptible power supplies, laptop computers, personal digital assistants, cell phones, and digital cameras. Since these appliances continuously consume electric energy, they need battery charging circuits. Efficient charging shortens the charging time and extends the battery service life, while harmless charging prolongs the battery cycle life and achieves a low battery operating cost. Moreover, the charging time and lifetime of the battery depend strongly on the properties of the charger circuit, so the development of battery chargers is important for these devices. A good charging method can enhance battery efficiency, prolong battery life, and improve charge speed. Several charging circuits have been proposed to overcome the disadvantages of the traditional battery charger.
Figure 1: The switching loss of a traditional PWM power transistor.
The linear power supply is the simplest. A 60-Hz transformer is required to deliver the output within the desired voltage range. However, the linear power supply operates at the line frequency, which makes it large in both size and weight. Besides, the system conversion efficiency is low because the transistor operates in the active region. Hence, when higher power is required, the use of an overweight and oversized line-frequency transformer makes this approach impractical. The high-frequency operation of the conventional converter topologies depends on a considerable reduction in switching losses to minimize size and weight. Many soft-switching techniques have been proposed in recent years to solve these problems. Neti R. M. Rao developed the traditional pulse-width-modulation (PWM) power converter in 1970. PWM is used to control the turn-on time of the power transistors to achieve voltage step-up and step-down. The switching loss of traditional PWM converters is shown in Figure 1, where V_C(t) is the voltage across the collector and emitter of the transistor and i_C(t) is the collector current of the transistor.
The advantages and drawbacks of this modulation style are addressed as follows.
Advantages
1 A high switching frequency can reduce the volume of magnetic elements and capacitors.
2 Power transistors are operated in the saturation region and the cut-off region, which makes the power loss of the power transistors nearly zero.
Drawbacks
1 The power switch still sustains voltage and current during the switching transitions, resulting in switching losses.
2 Fast switching can result in serious current spikes (di/dt), voltage spikes (dv/dt), and electromagnetic interference (EMI).
The control switches in all PWM dc-dc converter topologies operate in a switch mode, in which they turn the whole load current on and off during each switching cycle. This switch-mode operation subjects the control switches to high switching stress and high switching power losses. To maximize the performance of switch-mode power electronic conversion systems, the switching frequency of the power semiconductor devices needs to be increased, but this results in increased switching losses and electromagnetic interference. To eradicate these problems, soft switching and various charger topologies more suitable for battery energy storage systems have been presented and investigated. Zero-voltage-switching (ZVS) and zero-current-switching (ZCS) techniques are two conventionally employed soft-switching methods. These techniques lead to either zero voltage or zero current during the switching transition, significantly decreasing the switching losses and increasing the reliability of battery chargers. The ZVS technique eliminates capacitive turn-on losses and decreases the turn-off switching losses by slowing down the voltage rise, thereby lowering the overlap between the switch voltage and the switch current. However, a large external resonant capacitor is needed to lower the turn-off switching loss effectively for ZVS. Conversely, ZCS eliminates the voltage and current overlap by forcing the switch current to zero before the switch voltage rises, making it more effective than ZVS in reducing switching losses, particularly for slow switching power devices. For high-efficiency power conversion, the ZCS topologies are most frequently adopted. This paper adopts a zero-current-switching (ZCS) converter with a quasiresonant converter (QRC) as the main structure to charge a lead-acid battery. The resonant behaviour of the ZCS converter with QRC is used to reduce the switching loss of the switch. Traditional PWM power converters have non-ideal power losses during the switching process. In the proposed structure, a capacitor in parallel with the switch is adopted, and the inductor and capacitor resonate to shape the current into a sine wave. This reduces the overlap area of the voltage and current waveforms, decreasing switching loss. The switching loss of a resonant power converter is shown in Figure 2.
In the attempt to overcome the drawbacks of traditional PWM converters, many efforts have been made to find a less expensive charger topology for batteries that offers a competitive price in the consumer market. This paper presents a relatively simple topology for a battery charger with a ZCS quasiresonant buck converter, which is the most economical circuit topology commonly used for driving low-power energy storage systems. In the proposed approach, a resonant tank is interposed between the input dc source and the battery. With the added resonant tank, the battery charger can achieve low switching loss with only one additional active power switch and simple control circuitry.
Power converters can be divided into the following types [1-8].
1 Resonant Converter (RC): this converter uses both the half-bridge circuit and the full-bridge circuit as basic structures. It is implemented as a series resonant converter or a parallel resonant converter.
2 Quasiresonant Converter (QRC): this converter adopts both the half-wave circuit and the full-wave circuit as basic structures. It is implemented as a zero-current-switching (ZCS) converter or a zero-voltage-switching (ZVS) converter.
3 Multiresonant Converter (MRC): this converter adopts both a ZCS circuit and a partially resonant ZVS circuit in the half-bridge DC-DC converter as basic structures.
The advantages and drawbacks of resonant power converters are given below. Advantage: the power transistor carries no voltage or current during the switching process, which effectively reduces switching loss and restrains EMI. Let V_C(t) and i_C(t) turn on and cut off at the same time, as shown in Figure 3, and assume that their transition time is Δt. The initial turn-on time is adopted from
where V_s is the voltage across the collector and emitter of the transistor during the turn-off period, and I_s is the collector current during the turn-on period. The switching loss can be written as
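The closed-form expression itself did not survive extraction here. Purely as an illustrative stand-in (not necessarily the expression derived in this paper), the energy lost in one hard-switched transition can be estimated by integrating the voltage-current overlap; for linear ramps over the interval Δt this evaluates to V_s·I_s·Δt/6 per transition:

import numpy as np

def overlap_loss(v_s, i_s, dt, n=100000):
    # Energy (J) dissipated during one linear-ramp switching transition.
    t = np.linspace(0.0, dt, n)
    v = v_s * (1.0 - t / dt)        # voltage falling from V_s to zero
    i = i_s * (t / dt)              # current rising from zero to I_s
    return float(np.sum(v * i) * dt / n)

print(overlap_loss(24.0, 0.4, 1e-6))   # ~1.6e-6 J, i.e. V_s * I_s * dt / 6 for these example values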
The Investigation of a Lead-Acid Secondary Battery
Batteries have become an increasingly important energy source. As shown in Figure 4, lead-acid batteries produce both lead sulfate and water during the discharge period. At the positive electrode, lead dioxide, the active material, reacts with the sulfuric acid in the electrolyte during discharge; this reaction produces lead sulfate, which sinks and piles up at the electrode. At the negative electrode, lead, the active material, reacts with the sulfuric acid in the electrolyte, again producing lead sulfate that sinks and piles up at the electrode. In the electrolyte, sulfuric acid is consumed by the reactions with the active materials at both electrodes, which reduces the electrolyte concentration. A large amount of lead sulfate accumulates at both the positive and negative electrodes, which increases the internal resistance of the battery and decreases the voltage of the lead-acid battery.
As shown in Figure 5, the lead-acid battery is recharged when it has been discharged to a certain level; the interior reaction of the battery is then the charging reaction. At both the positive and negative electrodes, the charging electrical energy from the external supply converts the lead sulfate produced during discharge back into the active materials: lead dioxide is deposited at the positive electrode, while lead is restored at the negative electrode and sulfuric acid is returned to the electrolyte. These reactions increase the electrolyte concentration and raise the voltage. In a lead-acid battery, the electrochemical reactions of both charging and discharging are reversible. This is the so-called "Double Sulfate Theory", which can be expressed as Pb + PbO2 + 2H2SO4 ⇌ 2PbSO4 + 2H2O.
The water produced in the lead-acid battery during discharge is re-electrolyzed during the charging reaction: oxygen is produced at the positive electrode and hydrogen is produced at the negative electrode at the same time. This prevents water loss from the electrolyte in a sealed lead-acid battery. The oxygen produced at the positive electrode during charging reacts with lead, the active material of the negative electrode, to form lead monoxide, which in turn reacts with the sulfuric acid to form lead sulfate. In this way the oxygen produced at the positive electrode during charging is absorbed by the negative electrode and does not leave the battery, preventing water loss from the electrolyte.
In order to charge a battery properly, four charge modes should be designed and implemented in sequence: trickle charge, bulk charge, overcharge, and float charge. At the beginning of the charge process, the trickle charge mode is adopted, in which a very low constant current is applied to the battery to raise its voltage above the deep-discharge threshold. The mode is then switched to bulk charge, in which a constant current is applied to the battery in order to quickly replenish its charge. When the battery voltage exceeds the overcharge limit, the charger enters overcharge mode; in this mode a constant voltage, typically set between 2.45 V/cell and 2.65 V/cell, is applied to the battery. Float charge is also a constant-voltage mode, applied after the charge process is complete to maintain the capacity of the battery against self-discharge.
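These four modes amount to a simple state machine driven by the per-cell voltage and the charge current. The sketch below is only illustrative: apart from the 2.45 V/cell figure quoted above, the thresholds are assumed placeholder values, not recommendations for any particular battery.

def next_mode(mode, v_cell, i_charge):
    # Return the next charge mode given per-cell voltage (V) and charge current (A).
    V_DEEP_DISCHARGE = 1.8   # assumed threshold below which only trickle charging is allowed
    V_OVERCHARGE = 2.45      # lower end of the 2.45-2.65 V/cell constant-voltage range quoted above
    I_FLOAT = 0.05           # assumed taper current at which overcharge is considered complete
    if mode == "trickle" and v_cell >= V_DEEP_DISCHARGE:
        return "bulk"        # battery recovered; apply full constant current
    if mode == "bulk" and v_cell >= V_OVERCHARGE:
        return "overcharge"  # switch to constant-voltage charging
    if mode == "overcharge" and i_charge <= I_FLOAT:
        return "float"       # hold a lower constant voltage against self-discharge
    return mode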
ZCS-QRC Buck Converter for a Battery Charger
A variety of driving circuits have been employed for the ZCS quasiresonant buck converter. Conventionally, the trigger signal is associated with a proper duty cycle to drive the active power switch with the required charging current. The major elements of the ZCS quasiresonant buck converter for the battery charger are available in a single integrated circuit, which contains an error amplifier, a sawtooth waveform generator, and a comparator for PWM. The ZCS-QRC switch is turned on and off when the current is zero. The current produced by the resonance of L_r and C_r passes through the switch. Because the filtering inductance is very large, i_o is assumed to be a constant I_o. The circuit structure is shown in Figure 6. In addition, the steady-state waveforms are shown in Figure 7.
The following assumptions are made:
1 All semiconductor elements are ideal. This means that the switches have no time delay during the switching period.
2 There is no forward voltage drop across the diode D_m during the turn-on period, and there is no leakage current during the turn-off period.
3 The inductor and capacitor of the tank circuit have no equivalent series resistance (ESR).
4 The filtering inductor L_f ≫ L_r and the filtering capacitor C_f ≫ C_r. The cut-off frequency of the low-pass filter formed by the load and the filtering components is much lower than the resonant angular frequency ω_o = 1/√(L_r C_r) of the resonant circuit composed of the resonant inductor L_r and the resonant capacitor C_r. Compared with the resonant circuit, the filtering circuit composed of L_f and C_f together with the load can therefore be regarded as a constant current source I_o.
5 The unregulated line voltage V_in does not vary significantly during one switching period T_s of the resonant circuit; V_in is regarded as a constant.
The operation of the complete circuit is divided into four modes.
Mode 1 [linear stage t_0 ≤ t ≤ t_1]
Before turning on the switch, the output current I_o passes through the diode D_m; thus the voltage across C_r is v_Cr = V_in, and the initial conditions are i_Lr = 0 and v_Cr = V_in. The current through the switch is zero at t = t_0, so the switch Q is turned on at t = t_0 with ZCS, while the diode D_m is still conducting. The inductor current i_Lr(t) increases linearly. While i_Lr is less than I_o, the freewheeling diode D_m remains on and v_Cr is maintained at V_in, as shown in Figure 8, and the circuit equation is represented below. The switch Q is turned off automatically due to the forward current direction, as shown in Figure 9, and the circuit equations are given by formulas (3.3) and (3.4). Substituting formula (3.4) into formula (3.3) shows that L_r and C_r form a resonant path. From (3.4), it is necessary to have Z_o I_o < V_in to guarantee ZCS at this moment. Mode 2 is finished at t = t_2, when the capacitor voltage reaches its peak value v_Cr,pk = -V_in, and the period of Mode 2 is calculated from this condition. The pulse trigger of switch Q is then removed, and Mode 3 is entered at t = t_2.
Mode 3 [recovery stage t_2 ≤ t ≤ t_3]
The equivalent circuit of Mode 3 is shown in Figure 10. I_o passes through C_r, so v_Cr increases linearly during this stage. The circuit equation is presented as
3.7
v_Cr(t_3) can be calculated by substituting t = t_3 into formula (3.7); therefore
3.8
Due to v_Cr(t_3) = V_in, the following equation can be obtained from formula (3.8). Thus, the period of Mode 3 can be represented as
3.10
Mode 3 is finished at t = t_3; D_m is turned on at this moment, and the circuit enters Mode 4.
Mode 4 [freewheeling stage t_3 ≤ t ≤ t_4]
At this stage, the switch Q is still held in the off state. Diode D_m is turned on and carries the I_o loop, as shown in Figure 11. At t = t_4, the switch Q is triggered again and the next cycle begins. If we can control the duration of the freewheeling stage, we can regulate the output voltage. The circuit equation is described by
3.11
In the ideal condition, the capacitor and inductor consume no average power, no energy is wasted in the switch element, transistor, or diode, and neither the capacitor nor the inductor has a parasitic resistance. The energy supplied by the power source is therefore equal to the energy absorbed by the load in one cycle. The circuit equation is shown as
3.12
After rearrangement, the average value of the output voltage V_o can be derived by
3.13
According to the above condition, formula (3.13) can be rewritten accordingly. The average voltage across the filtering inductor is zero in steady state, so the average voltage of v_Cr is exactly equal to the output voltage V_o. We can therefore regulate the output voltage V_o by controlling the duration of Mode 4, i.e., by changing the switching frequency. From the waveforms of Figure 7, we can obtain the characteristics of the device.
Quasiresonant Buck Converter
Compared with a traditional PWM converter, the switching loss of a ZCS quasiresonant buck converter is low, so this paper adopts a ZCS quasiresonant buck converter as the charger. As shown in Figure 6, a resonant capacitor C_r and a resonant inductor L_r are added to the traditional PWM buck converter circuit to reduce the switching loss of switch Q. According to the results of the operating-stage analysis, we can design the resonant elements, i.e., the resonant capacitor C_r and the resonant inductor L_r. From the energy balance of the ZCS quasiresonant buck converter charger in Figure 6, neither the capacitor nor the inductor consumes average energy in the ideal condition, and no energy is consumed in the switch element, transistor, or diode; thus, the energy supplied by the power source is equal to the energy absorbed by the load. The supply energy of the power source and the absorbed energy of the load are expressed by the corresponding energy-balance equations. If we neglect the power consumption of the converter, we can define the normalized load resistance r = R_o/Z_o and the output voltage ratio X = V_o/V_in. With α = ω_o(t_2 − t_1), we obtain formula (4.3) after simplification. After rearranging formula (4.3) appropriately, we can plot the load characteristic curves shown in Figure 12 [10].
MATLAB simulation software was used to plot the curves. Because the ZCS quasiresonant converter must satisfy Z_o I_o < V_in, this paper adopts the curve for which the ratio of output to input is closest to 1, and f_s/f_r is chosen to be 0.75. The switching frequency is 22.72 kHz and the resonant frequency is 30 kHz. With Z_o I_o < V_in, I_o is designed to be 0.4 A and V_in is 24 V. After calculation, we obtain C_r > 88.49 nF and L_r < 318 µH, and we adopt C_r = 0.1 µF and L_r = 300 µH.
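These bounds follow directly from the ZCS condition Z_o·I_o < V_in, with Z_o = sqrt(L_r/C_r) and the resonant frequency f_r = 1/(2π·sqrt(L_r·C_r)); the short script below reproduces the quoted limits.

import math

V_in, I_o, f_r = 24.0, 0.4, 30e3        # input voltage (V), output current (A), resonant frequency (Hz)

Z_o_max = V_in / I_o                    # 60 ohm: the characteristic impedance must stay below this
LC = 1.0 / (2.0 * math.pi * f_r) ** 2   # L_r * C_r is fixed by the chosen resonant frequency

L_r_max = Z_o_max * math.sqrt(LC)       # ~318 uH, as quoted
C_r_min = math.sqrt(LC) / Z_o_max       # ~88.4 nF, as quoted
print(L_r_max * 1e6, C_r_min * 1e9)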
Experiment Results and Discussion
This study includes circuit simulations using PSpice software and a practical implementation of the developed charger, and the simulated and practical results are compared. The battery charger characteristics of the resonant switching converter were investigated. A comparison of the temperature curves of the ZCS converter charger and a traditional PWM switching charger is shown in Figure 17, measured under the same test conditions (same output and input voltages, lead-acid battery, filtering inductor, and filtering capacitor). The charger temperature of the ZCS converter stabilized at 32 °C after a certain period. This verifies that ZCS can reduce the power loss of the power switch.
Conclusion
This paper has developed a novel application of a zero-current-switching buck dc-dc converter for a battery charger. The circuit structure is simpler and much cheaper than other control mechanisms requiring large numbers of components. The experimental results show that the charger switch is turned on and off at zero current. Resonant switching removes the power loss that traditional hard switching produces by turning the switch on and off at non-zero current, and lowers the switch temperature, reducing the power loss of the power switch. From the measurements, the power switch transistor temperature of the ZCS converter charger stabilized at 31 °C after a certain period. Compared with a traditional hard-switching charger, the temperature of the power switch transistor of the proposed charger was much lower.
Figure 2: The switching loss of resonant power transistor.
Figure 3: The dimensions of hard switching.
Figure 8: The equivalent circuit of Mode 1.
Figure 9: The equivalent circuit of Mode 2.
Figure 10: The equivalent circuit of Mode 3.
Figure 11: The equivalent circuit of Mode 4.
Figure 17: A comparison of power switch temperature curves.
Batteries convert other forms of energy into electrical energy. Batteries can be divided into physical energy and chemical energy types. Physical-type batteries convert solar energy and thermal energy into electrical energy; chemical energy batteries, which use the oxidation-reduction reactions of electrochemistry and are currently very popular, convert the chemical energy of active materials into electrical energy. All batteries store energy produced by chemical electrolysis: if extra energy is supplied to the battery, the battery stores the energy through the reverse reaction, and the battery releases energy by way of electrolysis. Lead-acid batteries are traditional energy-storage devices. They have a large electromotive force (EMF) and a wide operating temperature range. Their advantages are a simple structure, mature technology, low price, and excellent cycle life. For these reasons, lead-acid batteries are still important today, and this paper uses a lead-acid battery as the load for the charging test; a lead-acid secondary battery made by Man-Shiung Corporation was chosen. When a lead-acid battery is connected to a load, the interior reaction of the lead-acid battery begins. L_r and C_r resonate at this stage. The peak value of i_Lr is V_in/Z_o + I_o, and v_Cr is equal to zero at t = t_1. The negative peak value of v_Cr occurs when i_Lr is equal to I_o at t = t_1, and i_Lr decreases to zero at t = t_2.
This mode is finished when i_Lr(t) is equal to I_o at t = t_1. The period of Mode 1 is determined by this condition; D_m is turned off, and the circuit then enters Mode 2. Mode 2 [resonant stage t_1 ≤ t ≤ t_2] | 2017-07-31T03:06:36.261Z | 2011-09-04T00:00:00.000 | {
"year": 2011,
"sha1": "2ae3f1f8c03278fabbd5926f00196048ed268697",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2011/481208.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2ae3f1f8c03278fabbd5926f00196048ed268697",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
54864250 | pes2o/s2orc | v3-fos-license | Evaluating wind speed probability distribution models with a novel goodness of fit metric: a Trinidad and Tobago case study
Abstract: Wind energy has been explored as a viable alternative to
fossil fuels in many small island developing states such as those in the Caribbean for a long time. Central to evaluating the feasibility of any wind energy project is choosing the most appropriate wind speed model. This is a function of the metric used to assess the goodness of fit of the statistical models tested. This paper compares a number of common metrics and then proposes an alternative to the application-blind statistical tools commonly used. Wind speeds at two locations are considered: Crown Point, Tobago, and Piarco, Trinidad. Hourly wind speeds over a 15-year period have been analyzed for both sites. The available data are modelled using the Birnbaum–Saunders, Exponential, Gamma, Generalized Extreme Value, Generalized Pareto, Nakagami, Normal, Rayleigh and Weibull probability distributions. The distributions were compared graphically and their parameters were estimated using maximum likelihood estimation. Goodness of fit was assessed using normalised mean square error, Chi-squared, Kolmogorov–Smirnov, R-squared, Akaike information criterion and Bayesian information criterion tests, and the distributions were ranked. The distribution ranking varied widely depending on the test used, highlighting the need for a more contextualized goodness of fit metric. With this in mind, the concept of application-specific information criteria (ASIC) for testing goodness of fit is introduced. This allows distributions to be ranked by secondary features which are a function of both the primary data and the application space.
Introduction
Electricity costs in most Caribbean Small Island Developing States (SIDS) are amongst the highest in the world [1], with the majority of electrical energy being produced from imported fossil fuels [2]. As a result, wind energy is increasingly being explored as an alternative source of energy [3], with feasibility studies showing great potential for various islands [4][5][6][7]. Conversely, the Caribbean countries are amongst the most wind-storm-prone regions in the world, suffering 26 storm impacts in the last 4 years alone [8][9][10]. These storms have an acute impact on the economies of these small states [11]. Additionally, wind speeds can even impact the region's flora and fauna [12].
Given the importance of wind to Caribbean SIDS, it is necessary that the characteristics of the wind be studied closely. Energy studies, storm risk studies and aviation considerations, among others, require the wind to be modelled as accurately and comprehensively as possible. Many studies have aimed at characterizing or comparing wind speed distributions at different locations [13][14][15][16][17][18][19][20][21]. Several emphasize seasonal variations, while diurnal variations have also been examined [5]. However, these have either been located outside the Caribbean or have been limited in their exploration of candidate distributions. Furthermore, there is no consensus on which goodness of fit criterion is most suitable for evaluating the appropriateness of a distribution for a particular application [22].
The paper examines the applicability of probability distributions commonly used to model wind speeds to data available from two locations in Trinidad and Tobago. The relative performance of these distributions is compared using goodness of fit tests. Additionally, the concept of application-specific information criteria (ASIC) is introduced as an improved method for distribution ranking in the case of wind energy studies. Section 2, "Description of Data," describes the data used for this study, investigates its basic statistical properties and outlines the pre-processing required before use of the data. Section 3, "Methodology," describes the candidate distributions, the goodness-of-fit criteria used and the method for parameter estimation. Section 4, "Results and Discussion," displays the fit of the candidate distributions to the data graphically; several goodness-of-fit tests are used to rank the performance of the distributions, and the expected wind energy output from a turbine is estimated and compared to the energy output calculated using the actual wind data. Finally, concluding remarks are given in Sect. 5.
General
The locations given in Fig. 1 provide a useful opportunity for comparison as they are both greeted by the same north-easterly trade wind system [23], but are located at sites with differing geography. Crown Point is on a sheltered coast while Piarco is located inland in an open plain. Piarco also receives some degree of sheltering from mountains to the north.
The dataset consists of the mean hourly wind speeds at Crown Point, Tobago and Piarco, Trinidad (locations indicated in Table 1) for the years 2000-2015, provided by the meteorological offices at the airports at both locations; it does not include wind direction or peak gust speed. The speeds were recorded in knots, rounded to the nearest knot, and are given at intervals of 1 h for each hour of the 24-h day for every day of the month. It should be noted that there are data points missing from both datasets. For Piarco, Trinidad, approximately 19 days of data from October 21 to November 9, 2009 are missing, and data points are also missing for a number of hours on other days; the total number of measurements is 133,083 out of a maximum possible 133,656 (0.4% missing data). For Crown Point, the data for the months of July to September, 2001, and August, 2011, are missing, and some days are missing data for an hour or a few hours; the total number of measurements is 123,429 out of a maximum possible 133,656 (7.7% missing data).
Basic statistics
Before any pre-processing, the data were described by the statistics given in Tables 2 and 3. Data pre-processing included inspection of the data for possibly erroneous values. As seen in Table 2, some recorded maxima correspond to wind speeds of hurricane strength on the Saffir-Simpson scale [25]; however, a review of archived daily newspapers for the following day makes no mention of any such event [28]. Again, no corroborating records were found to confirm these wind speeds [14], and as such they were deemed erroneous. All noted erroneous measurements were replaced with null values. Tables 4 and 5 reflect the wind statistics after pre-processing.
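A typical implementation of this screening step is sketched below; the 64-knot ceiling is only an illustrative cut-off chosen for this example and is not the criterion applied in the study.

import pandas as pd

def clean_wind_speeds(df, col="wind_speed_kt", max_plausible_kt=64.0):
    # Replace implausibly high hourly wind speeds with nulls (illustrative threshold only).
    cleaned = df.copy()
    erroneous = cleaned[col] > max_plausible_kt
    cleaned.loc[erroneous, col] = pd.NA
    return cleaned, int(erroneous.sum())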
Distribution fitting
Typically, wind data is modelled as a Weibull distribution, especially when the aim is to characterise the annual resource [5,15,[29][30][31][32], however, a number of other candidate distributions have been catalogued [33]. For other applications such as statistical analysis of extreme wind speeds, the Weibull (or reverse Weibull) has also been recommended [34] while other distributions such as the generalised extreme value distribution [17] and the generalised pareto [19] have been explored. Agustin [20] encouraged using mixed distributions while confirming the applicability of Weibull. Sarkar et al. [35] identified the weakness of the Weibull distribution as its failure to describe the upper tail.
The Rayleigh distribution has also been used as a probability model for wind speed [31], although some applications have found Weibull to be more accurate [32,36]. Recent studies found autoregressive models [37] and maximum entropy distributions [38] to be better suited to wind speed applications than Weibull or Rayleigh. Alavi et al. [39] found that the Nakagami distribution performed well when compared to other distributions frequently used to model wind speed. Additionally, the Normal and Exponential Distributions were identified as potential candidates via visual inspection of the histogram shape. The Birnbaum-Saunders and Gamma distributions performed well when goodness of fit was assessed using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) criteria, and were thus included in the comparative analysis (Figs. 2, 3).
Review of probability distribution functions
The equations defining the probability density functions (PDFs) for various candidate distributions of interest are given below. While by no means exhaustive, the distributions represent those commonly used in the literature.
Birnbaum-Saunders
where μ is the location parameter, α > 0 is the shape parameter and β > 0 is the scale parameter.
Generalized extreme value
where μ ∈ ℝ is the location parameter, σ > 0 is the scale parameter and ξ ∈ ℝ is the shape parameter [44].
Generalized Pareto
where μ ∈ ℝ is the location parameter, σ > 0 is the scale parameter and ξ ∈ ℝ is the shape parameter [45].
Nakagami
where m is the shape parameter and Ω is the spread parameter, for x > 0 [46].
Normal
where μ is the mean and σ is the standard deviation [45].
Rayleigh
where σ > 0 is the scale parameter [47].
Weibull
where k > 0 is the shape parameter and c > 0 is the scale parameter [48].
Parameter estimation
Several techniques exist for parameter estimation (e.g., [22]). In this work, the parameters for these various distributions were estimated using the maximum likelihood method, which selects as its estimate the parameter value that maximizes the probability of the observed data [49]. This method is popularly used since the resulting estimators are generally asymptotically unbiased and consistent. They also offer the advantage of simplicity in implementation. While this method can be limited by the need to determine closed-form estimator solutions, for the distributions of interest in this work these can be readily obtained [22].
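For illustration only (this is not part of the original analysis), the maximum likelihood fits described above can be reproduced with standard statistical software. The sketch below uses Python's scipy.stats and assumes the hourly wind speeds have been loaded from a hypothetical file into an array named speeds.

```python
import numpy as np
from scipy import stats

# Hypothetical file of hourly wind speeds in m/s; replace with the real dataset.
speeds = np.loadtxt("piarco_hourly_wind_speeds.txt")
speeds = speeds[speeds > 0]  # several of the fitters require strictly positive values

# Candidate distributions named in the text and their scipy.stats equivalents.
candidates = {
    "Weibull": stats.weibull_min,
    "Rayleigh": stats.rayleigh,
    "Normal": stats.norm,
    "Exponential": stats.expon,
    "Gamma": stats.gamma,
    "Nakagami": stats.nakagami,
    "Birnbaum-Saunders": stats.fatiguelife,  # scipy's name for the Birnbaum-Saunders law
    "Generalized extreme value": stats.genextreme,
    "Generalized Pareto": stats.genpareto,
}

# Maximum likelihood estimation of the parameters of each candidate.
fitted_params = {name: dist.fit(speeds) for name, dist in candidates.items()}
for name, params in fitted_params.items():
    print(name, np.round(params, 3))
```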
Goodness of fit
After plotting the distributions using the estimated parameters, the goodness of fit of the distributions to the data profile was assessed using the following metrics: the normalised mean square error (NMSE), the Chi-squared statistic, the two-sample Kolmogorov-Smirnov test, the coefficient of determination (R²), the Akaike information criterion (AIC) and the Bayesian information criterion (BIC).
Fig. 5 Wind distribution at Piarco
The normalised mean square error (NMSE)
The NMSE was calculated from the modelled values y_n and the reference data f_n.
The Chi-squared statistic
For testing the goodness of fit, the Chi-squared test was used. The Chi-squared statistic (χ²) is calculated as χ² = Σ (O_i − E_i)²/E_i, summed over the N bins, where O_i are the observed counts and E_i are the expected counts [49]. O_i was obtained from the sample datasets estimated using the fitted pdf of each distribution, and E_i was derived from the frequency histogram based on the measured data. N was determined by the number of bins used in the frequency histogram. A smaller Chi-squared statistic indicates a better fit.
The two-sample Kolmogorov-Smirnov test
The two-sample Kolmogorov-Smirnov test statistic was calculated as D = max_x |F_1(x) − F_2(x)|, where F_1(x) is the proportion of x_1 values less than or equal to x, and F_2(x) is the proportion of x_2 values less than or equal to x. The smaller the test statistic, the better the fit [52].
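By way of illustration (again, not the authors' code), the two-sample comparison can be carried out by drawing a synthetic sample from a fitted candidate and applying scipy's two-sample Kolmogorov-Smirnov test; the file name and the choice of the Weibull candidate are assumptions.

```python
import numpy as np
from scipy import stats

speeds = np.loadtxt("piarco_hourly_wind_speeds.txt")  # hypothetical data file

# Fit one candidate (here Weibull) and draw a synthetic sample of the same size.
params = stats.weibull_min.fit(speeds)
synthetic = stats.weibull_min.rvs(*params, size=speeds.size, random_state=0)

# Two-sample KS statistic: the largest distance between the two empirical CDFs.
ks_stat, p_value = stats.ks_2samp(speeds, synthetic)
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.3g}")
```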
Co-efficient of determination, R 2
The R² statistic was calculated as R² = 1 − SS_res/SS_tot, where SS_res = Σ (y_i − f_i)² and SS_tot = Σ (y_i − ȳ)², in which y_i represents the dataset, f_i represents the modelled values and ȳ is the mean of the dataset. R² varies between −Inf (bad fit) and 1 (perfect fit) [53].
Akaike information criterion
The AIC statistic was calculated as AIC = −2 log L(θ̂) + 2k, where log L(θ̂) denotes the value of the maximized log-likelihood objective function for a model with k parameters. A smaller AIC statistic value indicates a better fit [54].
Bayesian information criterion
The BIC statistic was calculated as BIC = −2 log L(θ̂) + k log N, where log L(θ̂) denotes the value of the maximized log-likelihood objective function for a model with k parameters fit to N data points. A smaller BIC statistic value indicates a better fit [54] (Figs. 4, 5, 6, 7).
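As a minimal sketch of how the information criteria follow from a fit (file name and candidate choice again assumed), the maximized log-likelihood, AIC and BIC can be computed as follows.

```python
import numpy as np
from scipy import stats

speeds = np.loadtxt("piarco_hourly_wind_speeds.txt")  # hypothetical data file
speeds = speeds[speeds > 0]  # zero (calm) records lie outside the Weibull support
n = speeds.size

# Fix the location at zero so only the shape and scale are estimated.
shape, loc, scale = stats.weibull_min.fit(speeds, floc=0)
k = 2  # number of estimated parameters (shape and scale)

log_lik = np.sum(stats.weibull_min.logpdf(speeds, shape, loc=loc, scale=scale))
aic = -2.0 * log_lik + 2.0 * k
bic = -2.0 * log_lik + k * np.log(n)
print(f"log-likelihood = {log_lik:.1f}, AIC = {aic:.1f}, BIC = {bic:.1f}")
```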
Results and discussion
The estimated parameters for each distribution are shown in the Appendix. The performance of these distributions was compared using the goodness-of-fit metrics described in Sect. 3.4 (Tables 6, 7, 8, 9).
As is evident, rankings varied depending on the goodness-of-fit metric used. Although in some other studies goodness-of-fit metrics corroborated each other [32,38,39], similar variability was observed in [55]. Figures 8, 9, 10, 11, 12, 13, 14, 15, 16 and 17 show the details for each goodness-of-fit metric. The variability is particularly evident in Figs. 8 and 9, which show rankings by the NMSE and Chi-squared metrics, where the Birnbaum-Saunders distribution was particularly poorly fitted, as compared to Figs. 11 and 12, in which it is comparable when evaluated using the R², and the AIC and BIC criteria, respectively.
The variability in rankings raises the question of suitability of any given metric to the application. Consequently, some method of determining which goodness-of-fit criterion is best suited to the application has to be found or a new application-specific information criterion (ASIC) has to be formulated.
Application-specific information criterion
Wind models are used to calculate the expected energy generated by wind turbines. In this case, expected energy output over a particular time would be an important consideration in design and investment decisions. The ability of the distribution to accurately estimate this value is crucial.
Consider a wind turbine modelled as a 3 MW unit using a piecewise linear model with a cut-in speed (cis) of 3.5 ms−1, rated speed (rs) of 14 ms−1 and cut-out speed (cos) of 25 ms−1, as shown in Fig. 18.
The expected energy output of the turbine over a given period of time T is calculated according to Eq. 19: E = T ∫ P(v) f(v) dv, where P(v) is the turbine power versus speed characteristic (Fig. 18) and f(v) is the distribution function used to model the data.
For this work, the proposed ASIC is defined as a normalized weighted error function (in this case normalized error in expected energy is used), with the weightings defined by the turbine characteristic.
Here, the error is computed between the energy predicted using the estimated distribution function and the energy derived from the measured data. Using this approach, sections of the distribution which contribute more to the application are more heavily weighted than those that do not. In this case, the fit of the distributions below wind speeds of 3.5 ms−1 or above 25 ms−1 is not as important, since the wind turbine does not output any power under those conditions. Using the chosen ASIC, the fit of the data over the range of power-producing speeds of the turbine is assessed. This marks a departure from the philosophy behind other goodness-of-fit tests, which weight all sections of a distribution equally or weight them based on probability, and do not consider any external information in the determination of goodness of fit.
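The following sketch illustrates the idea numerically. The piecewise-linear turbine model uses the cut-in, rated and cut-out speeds quoted above, while the exact ASIC expression shown (the signed, normalized error in expected energy between the fitted distribution and the measured speeds) is an assumed formulation for illustration rather than a definitive reproduction of the criterion used in this work.

```python
import numpy as np
from scipy import stats

CUT_IN, RATED_V, CUT_OUT, RATED_KW = 3.5, 14.0, 25.0, 3000.0  # turbine of Fig. 18

def power_kw(v):
    """Piecewise-linear power curve: zero below cut-in, linear up to rated, flat to cut-out."""
    v = np.asarray(v, dtype=float)
    p = np.where((v >= CUT_IN) & (v < RATED_V),
                 RATED_KW * (v - CUT_IN) / (RATED_V - CUT_IN), 0.0)
    return np.where((v >= RATED_V) & (v <= CUT_OUT), RATED_KW, p)

speeds = np.loadtxt("piarco_hourly_wind_speeds.txt")  # hypothetical hourly speeds in m/s
hours = speeds.size                                   # one record per hour

# "Actual" energy obtained directly from the measured speeds (empirical form of Eq. 19).
energy_actual_gwh = power_kw(speeds).sum() / 1e6      # kW x 1 h per record -> kWh -> GWh

# Energy predicted by a fitted candidate distribution (Eq. 19 with the model pdf).
params = stats.weibull_min.fit(speeds[speeds > 0], floc=0)
v_grid = np.linspace(0.0, 30.0, 3001)
pdf = stats.weibull_min.pdf(v_grid, *params)
energy_model_gwh = hours * np.trapz(power_kw(v_grid) * pdf, v_grid) / 1e6

# Assumed ASIC form: normalized error in expected energy (sign kept to show over/under-estimation).
asic = (energy_model_gwh - energy_actual_gwh) / energy_actual_gwh
print(f"actual {energy_actual_gwh:.1f} GWh, model {energy_model_gwh:.1f} GWh, ASIC {asic:+.3f}")
```

Computed this way, a positive value indicates over-estimation and a negative value under-estimation of the expected energy, which matches the way the results are reported in Tables 10 and 11.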
The actual energy output for Piarco was calculated as approximately 112 GWh, while the value for Crown Point was 155 GWh. Tables 10 and 11 show the percentage difference in energy predicted by the models as compared to the energy derived directly from the wind data. As evident, the results did not match any ranking derived from the conventional goodness-of-fit metrics.
Among the traditional goodness-of-fit tests, the Chi-squared and Kolmogorov-Smirnov tests produced similar results to the ASIC in that they placed similar candidate distributions within the top four ranked distributions, albeit in a different order. This indicates that they may be better suited as goodness-of-fit tests for the purpose of wind energy studies than the other traditional goodness-of-fit metrics utilised in this paper. Given that the application space is known, however, using an ASIC would still be preferable, since rankings are made according to a parameter (energy in this case) which is meaningful to users of the data.
Finally, it is also noteworthy that the Weibull distribution, which is traditionally used in wind modelling in the Caribbean, performed poorly for both datasets using all the metrics investigated. This is likely due to the large amount of low to zero wind speed measurements. Castellanos [37] has also noted that the Weibull distribution performs poorly when the data contains a large proportion of low wind speeds (Figs. 19, 20; Table 12).
Conclusions
The Weibull distribution was found to perform relatively poorly as a wind probability model for both sites. The Rayleigh distribution performed consistently better than the Weibull but was still ill suited as a model for the data.
The inconsistency in results for the goodness of fit led to the conceptualization of application-specific information criteria (ASIC) as a more meaningful approach for the assessment of goodness of fit in cases where secondary, application-specific features must be calculated from the primary data.
For the application in question, the normalized error in expected energy is used as a goodness-of-fit metric to rank candidate distributions. The advantage of this technique is that the distributions can be examined in terms of overestimation or under-estimation of expected energy as well as the magnitude of deviation while using a metric that is meaningful in the context of the intended application space.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2018-12-05T07:38:04.660Z | 2018-05-17T00:00:00.000 | {
"year": 2018,
"sha1": "6975694115b63f8ea58801f2edc5781a0e40bb0a",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40095-018-0271-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "fc22e85a56c8832a3716c1ac8778680f6dbf7adc",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
214470741 | pes2o/s2orc | v3-fos-license | Respiratory Health and Urinary Trace Metals among Artisanal Stone-Crushers: A Cross-Sectional Study in Lubumbashi, DR Congo
Background: Thousands of artisanal workers are exposed to mineral dusts from various origins in the African Copperbelt. We determined the prevalence of respiratory symptoms, pulmonary function, and urinary metals among artisanal stone-crushers in Lubumbashi. Methods: We conducted a cross-sectional study of 48 male artisanal stone-crushers and 50 male taxi-drivers using a standardized questionnaire and spirometry. Concentrations of trace metals were measured by Inductively Coupled - Plasma Mass Spectrometry (ICP-MS) in urine spot samples. Results: Urinary Co, Ni, As, and Se were higher in stone-crushers than in control participants. Wheezing was more prevalent (p = 0.021) among stone-crushers (23%) than among taxi-drivers (6%). In multiple logistic regression analysis, the job of a stone-crusher was associated to wheezing (adjusted Odds Ratio 4.45, 95% Confidence Interval 1.09–18.24). Stone-crushers had higher values (% predicted) than taxi-drivers for Forced Vital Capacity (105.4 ± 15.9 vs. 92.2 ± 17.8, p = 0.048), Forced Expiratory Volume in 1 Second (104.4 ± 13.7 vs. 88.0 ± 19.6, p = 0.052), and Maximum Expiratory Flow at 25% of the Forced Vital Capacity (79.0.1 ± 20.7 vs. 55.7 ± 30.1, p = 0.078). Conclusion: Stone-crushers were more heavily exposed to mineral dust and various trace elements than taxi-drivers, and they had a fourfold increased risk of reporting wheezing, but they did not have evidence of more respiratory impairment than taxi-drivers.
Introduction
Several studies have documented high to very high levels of mineral dust in worksites (and also the surrounding environment) where stones or rocks are crushed or milled using various types of mechanical crushers to produce aggregates for use in the construction of roads and buildings [1][2][3][4][5][6]. In these publications, the main emphasis was on the health risks associated with high exposures to free crystalline silica, i.e., mainly quartz, the content of which depends on the nature of the materials that are being crushed. Chronic inhalation of free crystalline silica may lead, usually after several years of exposure, to silicosis, chronic obstructive pulmonary disease (COPD), and lung cancer, and contribute to pulmonary tuberculosis and autoimmune diseases, such as systemic sclerosis [7]. Moreover, chronic inhalation of other poorly soluble low-toxicity particles ("biopersistent granular dust") is also associated with the development of COPD [8].
Lubumbashi is the second largest city of the Democratic Republic of Congo (DRC) and the capital of the Haut-Katanga Province, which is situated in the African Copperbelt, an area of intense past and current mining and processing of copper and cobalt ores. Artisanal mining has also become widespread in the past 20 years, with tens of thousands of young people involved. These activities have led to widespread environmental pollution by various trace metals [9]. Biomonitoring studies have documented that people living close to mining activities have high internal exposure to cobalt and other trace metals [10,11]. Besides copper and cobalt mining and smelting, many other industrial and artisanal activities take place in the area. One of these artisanal activities consists of crushing stones to produce gravel for use in the construction of buildings and roads. In Lubumbashi, hundreds of poor people are engaged in artisanal stone crushing using hand tools (see photographs in Figure 1). The degree of exposure to toxic metals and the respiratory impact of this dusty work have not been studied in this population.
Figure 1. Left panel: An artisanal stone-crusher and a transporter of stones working in precarious conditions. Right panel: General view of the site where stones are manually extracted and then transported on shoulders towards the place where they are kept before being transformed into gravels that will be sold to the purchasers for building roads, bridges, houses.
We, therefore, performed a cross-sectional study using urinary biomonitoring to assess metal exposure and spirometry to assess pulmonary function among artisanal stone crushers, taking drivers of collective taxis as controls without occupational exposure to mineral dust.
Methods
This cross-sectional study took place in Lubumbashi in October 2014 (dry season). Potential participants were recruited by convenience sampling at their place of work over a period of 6 days, during which 48 men working as artisanal stone-crushers and 50 men working as drivers of collective minibus taxis were included. All participants found at their workplace were invited to participate, and those giving their oral consent replied to a respiratory questionnaire, performed spirometry, and provided a spot sample of urine, all procedures being done at their worksite. The study protocol, including the oral consent procedure, was approved by the medical ethics committee of the University of Lubumbashi.
We used a combination of questionnaires, as in a study of workers performed in Algeria [12], namely the International Union Against Tuberculosis and Respiratory Diseases (IUATLD) Bronchial Symptoms Questionnaire [13], to obtain information about respiratory symptoms in the past 12 months, and the questionnaire on allergic rhinitis [14], with some additional questions related to the local context. The questions were administered face-to-face in Swahili (own translation) by the same interviewers for stone-crushers and taxi-drivers.
We performed spirometry in 68 participants (34 stone-crushers and 34 taxi-drivers) using the portable EasyOne ® Air device (ndd Medical Technologies, Zurich, Switzerland). In accordance with ATS/ERS (American Thoracic Society/European Respiratory Society) guidelines [15], a minimum of 3 and a maximum of 8 satisfactory forced expiration maneuvers were performed, in the sitting position and without a nose clip, and the highest values for the Forced Vital Capacity (FVC), Forced Expiratory Volume in 1 Second (FEV1), Peak Expiratory Flow (PEF), Maximal Mid-Expiratory Flow (MEF25-75), Maximal Expiratory Flow at 50% (MEF50), and 25% (MEF25) of the FVC obtained from the best curves, were retained. However, because the curves were not displayed on a computer screen during the forced expiration maneuvers, an experienced lung function expert (Geert Celis, Pulmonary Function Laboratory, UZ Leuven) later independently checked the quality of the printed spirometry curves and scored them as follows: Score 0: Unacceptable; score 1: FEV1 (and PEF) probably reliable, but FVC not acceptable; score 2: FVC probably reliable, but FEV1 not acceptable; score 3: Both FEV1 and FVC acceptable. Only spirometries with a score of 3 were used for assessing FEV1/FVC, MEF25-75, MEF50, and MEF25. Height was measured using a measuring rod. Percent predicted values for FEV1 and FVC were obtained for subjects of African descent, as provided by the EasyOne software.
We obtained a spot sample of urine from 75 participants (41 stone-crushers and 34 taxi-drivers), who were instructed to void urine, after hand-washing and without contaminating the sample by their hands, into a 40 mL polystyrene vial with screw cap (Plastic-Gosselin, Hazebrouck, France). Urine was transferred the same day into cryovials, which were kept frozen and later shipped in cool-boxes to Belgium by commercial flights. The concentrations of 24 elements [Lithium (Li), Beryllium (Be), Aluminium (Al), Vanadium (V), Chromium (Cr), Manganese (Mn), Cobalt (Co), Nickel (Ni), Copper (Cu), Zinc (Zn), Arsenic (As), Selenium (Se), Molybdenum (Mo), Cadmium (Cd), Indium (In), Tin (Sn), Antimony (Sb), Tellurium (Te), Barium (Ba), Platinum (Pt), Thallium (Tl), Lead (Pb), Bismuth (Bi) and Uranium (U)] were analyzed in 100 µL urine by Inductively Coupled Plasma-Mass Spectrometry (ICP-MS), using an Agilent 7500ce instrument (Agilent Technologies, Santa Clara, CA, USA), in the internationally accredited Laboratory of the Louvain Center for Toxicology and Applied Pharmacology (Université catholique de Louvain, Belgium) using validated methods, as previously described [16]. In brief, urine specimens were diluted quantitatively (1 + 9) with a HNO3 1%, HCl 0.5% solution containing Sc, Ge, Rh, and Ir as internal standards. Sb, Al, Cd, Pb, Mo, Te, Sn, and U were analyzed using no-gas mode, while helium mode was selected to quantify As, Cu, Co, Cr, Mn, Ni, Se, V, and Zn. Using this method, the laboratory obtained successful results in external quality assessment schemes organized by the Institute for Occupational, Environmental and Social Medicine of the University of Erlangen, Germany (G-EQUAS program) and by the Institut National de Santé Publique, Quebec.
Metal concentrations were corrected for dilution by the concentration of creatinine, as measured by using a Beckman Synchron LX 20 analyzer (Beckman Coulter GmbH, Krefeld, Germany).
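For illustration only, the creatinine correction described above amounts to dividing each metal concentration (in µg/L) by the urinary creatinine concentration (in g/L); a minimal sketch with hypothetical example values is given below.

```python
# Illustrative creatinine correction of urinary metal concentrations (values are hypothetical).
metals_ug_per_l = {"Co": 12.4, "Ni": 8.1, "As": 30.2, "Se": 55.0}  # raw ICP-MS results, ug/L
creatinine_g_per_l = 2.46                                          # urinary creatinine, g/L

# Dividing ug/L by g/L of creatinine yields ug/g creatinine.
corrected = {metal: conc / creatinine_g_per_l for metal, conc in metals_ug_per_l.items()}

for metal, value in corrected.items():
    print(f"{metal}: {value:.1f} ug/g creatinine")
```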
For the statistical analysis, independent variables were age, height, educational level (below vs. above 12 years of study), tobacco smoking (no smoking vs. current smoker or any regular smoker at home), alcohol consumption (none vs. current drinker), living close to mining, i.e., within visible distance from home (yes vs. no), type of drinking water (always drinking water from own well vs. no), the use of personal protective mask, gloves, and glasses (yes vs. no).
The outcome variables were positive/negative replies to 4 questions on respiratory symptoms (wheezing, cough, phlegm, shortness of breath) currently or during the last 12 months, and on 5 oculo-nasal symptoms (itchy nose, itchy eyes, runny nose, sneezing, stuffy nose).
Descriptive statistics consisted of frequency and percentages for categorical variables and means with standard deviation (SD) and range for continuous variables, or geometric means with their 95% confidence interval (CI) for trace metal concentrations. To compare groups, the Fisher exact test, t-test, and/or Mann-Whitney rank-sum tests were used. Unadjusted (uOR) and adjusted odds ratios (aOR) and associated 95% confidence intervals (CIs) were calculated to summarize the strength of association between baseline characteristics and symptoms (respiratory and nasal) among groups. A stepwise logistic regression was performed, adjusting for variables considered clinically or epidemiologically relevant, and the variables with a p-value less than 0.2 in the bivariate analysis. The threshold level for significance was set at p < 0.05.
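A minimal sketch of how such adjusted odds ratios can be obtained (an illustration with hypothetical file and column names, not the software or code used in this study) is shown below using Python's statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant table with a binary outcome and covariates:
# wheezing (0/1), stone_crusher (0/1), age (years), smoker (0/1), near_mine (0/1).
df = pd.read_csv("participants.csv")

model = smf.logit("wheezing ~ stone_crusher + age + smoker + near_mine", data=df).fit()

# Exponentiated coefficients give adjusted odds ratios with 95% confidence intervals.
summary = pd.concat([np.exp(model.params).rename("aOR"),
                     np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
                    axis=1)
print(summary)
```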
Graphpad 6 (GraphPad Software Inc., La Jolla, CA, USA, 2015) was used to perform descriptive statistics and bivariate comparisons, and JMP Pro 14.2.0 (SAS Institute Inc., Cary, NC, USA, 2019) was used for multivariate analyses.
Study Population
The characteristics of the 98 participants are presented in Table 1. The stone-crushers were, on average, 3 years younger than the taxi-drivers (27 ± 5 vs. 30 ± 8 years, respectively), they were much less educated than the taxi-drivers (2% vs. 52% with more than 12 years education, respectively), and they were also more likely to drink exclusively water from their own wells (50% vs. 26%, respectively). The prevalence of tobacco smoking (13%) and the reported number of cigarettes smoked daily by smokers (median 5, range 1-25) were low and similar in both groups. Very few stone-crushers reported wearing protective equipment (face-mask by three subjects, gloves by two, goggles by none). All consenting subjects replied to the questionnaire, but urine sampling and spirometry could not be performed in all participants for logistic reasons, i.e., not because of refusals.
Urinary Biomonitoring
The urinary concentrations of trace metals/metalloids (expressed as µg/g creatinine) are presented in Table 2. For comparison, this table also shows data measured in the same laboratory using the same methods and derived from two other studies [10,16]. One study, by Hoet et al. [16], provided general reference values based on 1022 healthy adult male and female persons, all living in Belgium and having no occupational exposure to metals. The other study, by Banza et al. [10] was done among residents living in several locations in the same region as the current study, and we considered the data obtained in 179 male and female children and adults who were living at a distance of less than 3 km from a mine or metal-processing plant, to be suitable local reference values. Creat = creatinine, GM = geometric mean (CI = 95% confidence interval), *: If more than 25 percent of values were below the limit of detection, -: if the value was not available or not shown. Data of Hoet et al. [16] are those reported in their Table 2 for 1022 male and female adults; data of Banza et al. [10] are those reported in their Table 2 for 179 residents (male and female adults and children) living close to mining.
In general, most urines proved to be highly concentrated since the creatinine concentration averaged 2.46 g/L (SD 1.04) for the 75 participants, with no sample having a creatinine concentration below 0.7 g/L and 20 samples (10 in each group) having a concentration exceeding 3 g/L. However, the concentrations of creatinine did not differ between stone crushers and taxi-drivers.
The concentrations of five elements (Be, V, In, Pt, Bi) were below detection limits in most subjects and are, therefore, not reported. Of the 19 other measured elements, four (Co, Ni, As, and Se) exhibited significantly higher concentrations in urine from stone-crushers than in the urine of taxi-drivers, the highest contrasts being observed for Ni and As, for which the concentrations were about two-fold higher among stone-crushers than among taxi-drivers. For two elements (Sn and Sb), urinary concentrations were higher in taxi-drivers than in stone-crushers.
Because it has been recommended to exclude too concentrated samples [17], we repeated the analyses after the exclusion of samples with creatinine concentrations above 3 g/L. This did not substantially modify the results (not shown).
Symptoms
The proportion of participants without reported respiratory symptoms tended (p = 0.07) to be lower among the stone-crushers (35%) than among the taxi-drivers (54%) (Table 3). However, only wheezing was significantly more prevalent among stone-crushers (23%) than among taxi-drivers (6%, p = 0.021) (Table 3). In a multivariate analysis involving the entire population (Table 4), tobacco smoking and residential proximity to mining did not affect the prevalence of reported symptoms after adjustment for the various relevant variables. However, wheezing (aOR 1.17, 95% CI 1.02-1.35), cough (aOR 1.14, 95% CI 1.01-1.29), and shortness of breath (aOR 1.28, 95% CI 1.06-1.55) were more likely with increasing age (values of aOR are for a one-year increase in age). The excess of reported wheezing among stone-crushers was significant (aOR 4.45, 95% CI 1.09-18.24) after adjustment for age, tobacco smoking, and proximity to mines.
Spirometry
Of the 68 participants who performed spirometry, 39 did not provide reliable spirometric results based on the quality check of the spirograms (score 0), and only 21 participants provided fully acceptable curves (score 3); three additional subjects had acceptable FEV1 (score 1) and five additional subjects had acceptable FVC (score 2). The age of the 21 participants with fully acceptable spirometry (29 ± 5 y) differed significantly (p = 0.02) from that of the 47 participants who failed to produce satisfactory spirometries (26 ± 6 y), but it did not differ (p = 0.16) from that of the 30 participants who did not perform spirometry (31 ± 1 y). Although none of the 21 participants with acceptable spirometries reported wheezing, subjects who reported nasal or other respiratory symptoms were not more likely to have failed spirometry (not shown).
Overall, the stone-crushers had better pulmonary function than the taxi-drivers, this being significant for FVC and nearly significant for FEV1 and MEF25 (Table 5). Among the 21 participants with fully acceptable spirometry, those who were free of respiratory symptoms (n = 11, 4 taxi-drivers and 7 stone-crushers) did not have better pulmonary function than those who reported at least one respiratory symptom (n = 10, 7 taxi-drivers and 3 stone-crushers), except for FVC, which was higher among the symptom-free participants (105.1 ± 4.8 vs. 91.3 ± 5.6, p = 0.038).
Discussion
In this cross-sectional design, we studied trace metal exposure and respiratory health of a group of male artisanal workers involved in stone-crushing in Lubumbashi, a city in the African Copperbelt well-known for its mining-related environmental pollution [18]. Compared with a control group of taxi-drivers, the stone-crushers exhibited higher urinary levels of several trace elements [Co, Ni, As and Se], and they were more likely to report wheezing. Nevertheless, the stone-crushers tended to have better pulmonary function values than the taxi-drivers.
In both groups of participants, trace metals in urine were substantially higher than reference values from industrially developed countries [16], as well as values obtained in Kinshasa, the capital city of the DR Congo, situated at more than 1000 km distance from the mining region [19]. This is consistent with a previous study [10], in which we also found high urinary levels of cobalt and other metals among the general population living in Lubumbashi as compared to international standards. However, the urinary concentrations of As and Se were higher in the stone-crushers than those found in the residents living close to mines [10], this being possibly due to their exposure to dust produced from the crushed stones. Unfortunately, we do not have information on the elemental composition of the dust produced by crushing the stones.
Another possible explanation for the high urinary As in the stone-crushers may be the presence of high concentrations of As in their drinking water since more than 50% of the stone-crushers reported drinking only well water, and well water can be highly polluted by metal(oid)s in Lubumbashi [9]. However, we found no difference in urinary As between those reporting drinking water exclusively from their own wells and those who did not (not shown). The sources of As and other trace elements among the stone crushers, therefore, requires further investigation. Recently, we have also found high metal concentrations in surface dust obtained from households in Lubumbashi [20].
On the other hand, the metal concentrations found in the present study were generally considerably lower, especially for Co and U, than those observed in artisanal miners of cobalt in Kolwezi [11] and elsewhere in the region [21], except for Pb and Mn that were relatively high in Lubumbashi. We have no explanation for the finding that our stone-crushers and taxi-drivers had higher urinary Mn concentrations than the inhabitants and diggers of Kolwezi. However, it is known that Mn can accompany copper extracted in Lubumbashi. Similarly, high values of Pb, the origin of which is unclear, have been previously found in pregnant women in Lubumbashi [22].
Regarding respiratory symptoms, logistic regression analysis revealed that wheezing, cough, and shortness of breath were positively associated with age, even in our relatively young population. However, the only significant difference found between stone-crushers and taxi-drivers concerned wheezing, which was more frequently reported by stone-crushers. The difference was significant both in bivariate analysis and in logistic regression analysis, which revealed an adjusted OR of more than 4 for wheezing among the stone-crushers compared to taxi-drivers. Within the limits of a questionnaire-based cross-sectional survey, we attribute this excess risk of wheezing to their exposure to high levels of dust. Further studies are needed to determine whether this reported wheezing corresponds to asthma.
We did not find an effect of smoking on respiratory symptoms or pulmonary function in our study. We speculate that this can be explained by the low prevalence and low daily consumption of cigarettes in our (relatively young) study population.
Against our expectation, pulmonary function was or tended to be better among stone-crushers than among taxi-drivers, at least as assessed among the minority of participants in whom satisfactory spirometries could be obtained. The problem of obtaining good measurements of pulmonary function is not often acknowledged in published surveys done in low-income countries, where precarious field conditions, including unavailable electric power supply, render spirometry testing much less reliable than when spirometry is done in well-equipped pulmonary function laboratories. Although the ndd EasyOne portable spirometer has been recommended for epidemiological studies (e.g., in the BOLD studies) [23], its use "in the field" without a link to a computer allowing a visual control of the maneuver during the performance of the test, renders quality control almost impossible by the operator. This explains why a later check of the printed spirometry tracings led to excluding the tests of more than half of our participants, which is in accordance with a study done in Ghana among gold miners and farmers [24]. Nevertheless, if we assume that the available satisfactory measurements of pulmonary function reflect the status of the entire group of participants reliably, it remains that we did not find negative effects of dusty work on pulmonary function in the stone-crushers, who proved to have even better average values than the taxi-drivers. The absence of detectable effects on pulmonary function could be explained by the young age of our participants, as well as by the cross-sectional nature of our study. Indeed, it is generally accepted that the adverse effects of mineral dust exposure on pulmonary function correlate poorly with respiratory symptoms and take many years to become manifest [25,26].
To our knowledge, studies of respiratory health in relation to stone crushing have been exclusively done in quarries using mechanical crushers, and we found no published surveys of workers who only used hand tools to break and crush stones, as was the case in the present study. Only a few longitudinal studies have investigated workers employed in stone quarries. In the USA, a large longitudinal study of granite workers in Vermont showed excessive exposure-related declines in FEV1 and FVC, especially among subjects who failed spirometry or were lost to follow-up [25,26]. In Sweden, Malmberg et al. [27] found that 45 granite crushers (age range 35-77 years) had experienced slightly faster declines in FEV1 after 12 years of work than 45 matched controls. Most published studies of stone crushing workers have been cross-sectional. In Spain, a cross-sectional study of 440 active granite workers by Rego et al. [28] revealed silicosis in 17.5% of subjects, but functional alterations were also found regardless of silicosis (synergistically with smoking). In Singapore, Ng and Chan [29] similarly concluded from a cross-sectional study of 320 workers and ex-workers from two granite quarries that dust exposure was associated with a loss of pulmonary function mainly in the presence of silicosis, but also without silicosis. In Pakistan, Leghari et al. [6] described high prevalences of respiratory symptoms among stone-crushing workers. In India, studies among quartz stone grinders by Tiwari et al. [30][31][32][33] and among sandstone crushers by Singh et al. [34,35] and Rajavel et al. [36] identified silicosis, silico-tuberculosis, and respiratory functional impairment in a high proportion of exposed workers, even at a young age. In Nigeria, Nwibo et al. [37] studied 403 male and female stone quarrying workers [mean age 30 years (SD 9 years)] by means of a questionnaire, spirometry, and chest radiography, and they concluded (without having a control group) that stone quarrying may increase the risk of respiratory symptoms and impaired lung function. In addition, in Nigeria, Isara et al. [38] observed a higher prevalence of various symptoms (mainly chest tightness and cough) and lower levels of FEV1 and FVC among 76 quarry workers [mean 36 years (SD 11years] than among 37 controls. In Libya, Draid et al. [39] found significantly lower spirometric parameters (FEV1, FVC, FEV1/FVC and PEF) among 83 "silica quarry workers" compared to 85 controls. In Ghana, Ahadzi et al. [40] showed, among 524 workers from 30 stone quarries, that self-reported symptoms (eye irritation, breathing difficulties, cough) were inversely related to distance to the main dust source (i.e., crushers) and to the usage of personal protective equipment (worn by around 10% of workers only). We speculate that the levels of dust exposure in our artisanal stone-crushers were lower than those produced when machines are used for crushing stones, but this needs to be evaluated by appropriate environmental measurements.
Moreover, the exposure to dust and the high physical demands associated with stone-crushing may have led to the "healthy worker effect," whereby people with respiratory impairment tend to quit their job early. Of note, in a study of 272 stone-crushing workers and 123 control agricultural workers in West-Bengal, India, Chattopadhyay et al. [41] found that, contrary to expectation, pulmonary function parameters tended to be better among the exposed group, although there was a higher prevalence of restrictive impairment among the exposed group. As indicated above, in the longitudinal study of pulmonary function among Vermont granite workers, there was evidence of a potent healthy worker effect [26].
On the other hand, we cannot exclude an effect of higher exposure to traffic-related air pollution in the taxi-drivers. These issues can only be investigated by well-powered longitudinal observations. The strengths of our study include its originality as a first field study of respiratory health among artisanal stone-crushers, with characterization of their metal exposure by urinary biomonitoring. Nevertheless, we also acknowledge several limitations. A first limitation is the cross-sectional design of our survey and the convenience sampling of our population, thus giving rise to uncontrolled selection biases, among which the healthy worker effect is probably an important drawback. We also only included a relatively small group of adult male workers, even though women and children also work on the stone crushing sites. A second limitation is that we do not have information on the levels and composition (including free-silica content) of the dust inhaled by the workers. A further limitation concerns the logistic and technical difficulties of obtaining good quality spirometry in precarious field conditions.
Conclusions
This study shows that artisanal stone-crushers and taxi-drivers are highly exposed to trace metals in this highly polluted area (Lubumbashi). Wheezing was more prevalent among stone-crushers than among controls and, although no evidence for functional impairment was detected in this preliminary study, this excess wheezing may be indicative of a higher risk of long-term respiratory impairment.
In view of the limitations of our cross-sectional study, which involved small groups of relatively young participants, the lack of a detectable impact on spirometry in the group of stone crushers should not be interpreted as suggesting the absence of risk of respiratory impairment for workers engaged in such artisanal activities. Legislation enforcement and advocacy are warranted to protect the workers. Funding: Training grants to TKK and PMO received from VLIR-UOS (Vlaamse Interuniversitaire Raad-Universitaire Ontwikkelingssamenwerking) and ARES (Académie de Recherche et de l'Enseignement Supérieur), Belgium; costs of metal analysis supported by IDEWE Occupational Services, Belgium. | 2019-11-28T12:32:07.224Z | 2019-09-28T00:00:00.000 | {
"year": 2020,
"sha1": "f4fe60fe819e81e7722adbb1e1ee5af247b542b8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/17/24/9384/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cd18a7c6eb878d2d47a5a7621f15cc6c2e736ccd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2341814 | pes2o/s2orc | v3-fos-license | Selected gene profiles of stressed NSC-34 cells and rat spinal cord following peripheral nerve reconstruction and minocycline treatment
The present study was conducted to investigate the effects of minocycline on the expression of selected transcriptional and translational profiles in the rat spinal cord following sciatic nerve transection and microsurgical reconstruction (SNR). The mRNA and protein expression levels of B cell lymphoma-2 (Bcl-2), Bcl-2-associated X protein (Bax), caspase-3, major histocompatibility complex I (MHC I), tumor necrosis factor-α (TNF-α), activating transcription factor 3 (ATF3), vascular endothelial growth factor (VEGF), matrix metalloproteinase 9 (MMP9), and growth-associated protein 43 (GAP-43) were monitored in the rat lumbar spinal cord following microsurgical reconstruction of the sciatic nerves and minocycline treatment. The present study used semi-quantitative reverse transcription-polymerase chain reaction (RT-PCR) and immunohistochemistry. As PCR analysis of spinal cord tissue captures the expression patterns of all cell types, including glia, the motorneuron-like NSC-34 cell line was used to investigate expression level changes in motorneurons. As stressors, oxygen glucose deprivation (OGD) and lipopolysaccharide (LPS) treatment were performed. SNR did not induce significant degeneration of ventral horn motorneurons, whereas microglia activation and synaptic terminal retraction were detectable. All genes were constitutively expressed at the mRNA and protein levels in untreated spinal cord and control cells. SNR significantly increased the mRNA expression levels of all genes, albeit only temporarily. In all genes except MMP9 and GAP-43, the induction was seen ipsilaterally and contralaterally. The effects of minocycline were moderate. The expression levels of MMP9, TNF-α, MHC I, VEGF, and GAP-43 were reduced, whereas those of Bax and Bcl-2 were unaffected. OGD, but not LPS, was toxic for NSC-34 cells. No changes in the expression levels of Bax, caspase-3, MHC I or ATF3 were observed. These results indicated that motorneurons were not preferentially or solely responsible for SNR-mediated upregulation of these genes. MMP9, TNF-α, VEGF and Bcl-2 were stress-activated. These results suggest a substantial participation of motorneurons in the gene expression changes observed in vivo. Minocycline was also shown to have inhibitory effects. The nuclear factor-κB signalling pathway may be a possible target of minocycline.
Introduction
During embryonic and early postnatal development, the axotomy of motorneurons or removal of their target results in significant motorneuron cell loss. In adults, however, axotomy can result in either complete motorneuron survival or motorneuron death. For patients, this implies a partial or even complete loss of function of the muscular targets of the lost motorneurons (1).
It is well-established that injuries of the spinal cord (2,3) as well as the peripheral nerves (4)(5)(6) lead to changes in gene and protein expression levels in motorneurons and glial cells, which may result in neuronal apoptosis. This is the basis for therapeutic strategies that aim to enhance axonal regeneration and functional recovery following peripheral nerve injury, including pharmacological treatments (7).
In this physiological context, minocycline has been widely used, but the advantages and disadvantages of this treatment appear equal in number. Minocycline is a semi-synthetic second generation tetracycline with broad spectrum anti-microbial activity (8). The primary applications of minocycline include treatment of pneumonia, rheumatoid arthritis, acne and infections of the skin, the genital, and urinary systems (9). There are also promising preclinical studies for the treatment of stroke (10,11), Alzheimer's disease (12), Huntington's disease (13), Parkinson's disease (14), amyotrophic lateral sclerosis (15), multiple sclerosis (15,16) and traumatic brain injury (17). Clinical trials with minocycline for the treatment of spinal cord injury have been underway since the early 2000s (18). The predominant effect of minocycline is associated with its ability to modulate microglia and immune cell activation and to reduce apoptosis (19). There have however been reports of conflicting results, with a number of previous studies demonstrating that minocycline worsened spinal cord and brain injuries (20)(21)(22)(23).
Our previous investigations demonstrated that minocycline impairs motorneuron survival in organotypic rat spinal cord cultures (24) and inhibited the regeneration of peripheral nerves (25). The present study was undertaken to examine the effects of minocycline on the expression of selected transcriptional and translational profiles in the rat spinal cord following sciatic nerve transection and microsurgical coaptation. In addition to the spinal cord in vivo, the present study conducted in vitro experiments using NSC-34 motorneuron-like cells. NSC-34 is a hybrid cell line produced by the fusion of neuroblastoma with mouse motorneuron-enriched primary spinal cord cells (26). These cells share numerous morphological and physiological characteristics with mature primary motorneurons, and thus are an accepted model for studying the pathophysiology of motorneurons (26). Stress was induced by oxygen glucose deprivation (OGD) or lipopolysaccharide (LPS) treatment. The mRNA and protein expression levels of the following compounds were examined: i) B cell lymphoma 2 (Bcl-2)-associated X protein (Bax), which has been demonstrated to be upregulated in the spinal motorneurons of newborn rats following sciatic nerve injury (27) and in adult cats following partial dorsal root ganglion ectomy (28); ii) caspase-3, which is activated in adult spinal motorneurons during injury-induced apoptosis (29); iii) Bcl-2, which has been reported to be activated in the adult spinal motorneurons of rats in the first three weeks following sciatic nerve injury (30); iv) major histocompatibility complex of class I (MHC I), which is upregulated in the spinal motorneurons of neonatal rats following sciatic nerve injury (31); v) tumor necrosis factor (TNF-α), released from astrocytes and microglia around motorneurons in rat spinal cord in the first two weeks following sciatic nerve crush (32); vi) activating transcription factor (ATF3), which is a marker for regenerative response following nerve root injury (33), and its expression in neurons is closely associated with their survival and the regeneration of their axons following axotomy (34); vii) vascular endothelial growth factor (VEGF), which has been demonstrated to be upregulated in the spinal motorneurons of adult rats in response to neurotomy (35); viii) matrix metalloproteinase 9 (MMP9), immediately upregulated in adult mice spinal motorneurons following nerve injury (36); and ix) growth-associated protein 43 (GAP-43), which is expressed at high levels during development (37) and stressed by nerve injury adult motorneurons (38).
Materials and methods
Ethical approval. The present study was conducted in accordance with the European Commission regulations and those of the National Act on the Use of Experimental Animals of Germany, and adhered to the guidelines of the Committee for Research and Ethical Issues of the International Association for the Study of Pain.
Animal model
Animals
A total of 51 female Wistar rats (10 weeks old, 200-230 g, strain-matched, inbred) were obtained from Harlan-Winkelmann GmbH (Borchen, Germany). The rats were housed under controlled laboratory conditions with a 12-h light/dark cycle (lights on at 6 am) at 20±2˚C with an air humidity of 55-60%. The animals were provided with ad libitum access to commercial rat pellets (Altromin 1324™; Altromin Spezialfutter GmbH & Co. KG, Lage, Germany) and tap water. Following intervention the rats were housed in pairs in Makrolon IIL cages (Bioscape GmbH, Castrop-Rauxel, Germany). Every effort was made to minimize the amount of suffering and the number of animals used in the experiments.
A total of 46 rats were injured and divided into four phosphate-buffered saline (PBS; Sigma-Aldrich Chemie GmbH, Munich, Germany) and four minocycline treatment groups with survival times of 3, 5, 7 and 14 days post-intervention (DPI), with five animals/group for semi-quantitative reverse transcription-polymerase chain reaction (RT-PCR). An additional three animals from the 7-day PBS-treated and from the 7-day minocycline-treated groups were used for immunohistochemical analysis. For semi-quantitative RT-PCR the spinal cords of five untreated animals were also prepared.
Minocycline treatment. Minocycline hydrochloride (Sigma-Aldrich, St. Louis, MO, USA) was administered once daily for ≥7 consecutive days by intraperitoneal injection at a dosage of 50 mg/kg body weight (~10 times the usual human dose), starting at 30 min following nerve reconstruction. The drug was dissolved in saline (pH 7.2, freshly prepared daily) at 37˚C. A dosage of >20 mg/kg was selected to induce the maximal anti-hyperalgesic effect, as lower doses are unable to affect gene expression in a sufficiently stable manner (39). Control rats were injected with PBS (pH 7.2) using an identical treatment regime.
Surgical protocol. The surgical procedure protocol for nerve reconstruction was the same for all groups, and consisted of exposing the right sciatic nerve through a dorsal incision under general anesthesia (60 mg/kg pentobarbital, intraperitoneal; Sigma-Aldrich) and aseptic conditions using an SV8 operating microscope (Zeiss GmbH, Jena, Germany). The nerve was transected at the proximal origin of the gracilis muscle and immediately microsurgically coaptated with respect to intraneuronal topography using epineural sutures (Ethilon 11x0; Johnson & Johnson, New Brunswick, NJ, USA) followed by closure of the dorsal incision.
Semi-quantitative RT-PCR. Following the respective survival times (3, 5, 7 and 14 days), the animals were sacrificed by an excess of anesthesia (pentobarbital) via intraperitoneal injection. L3-L6 sections of the spinal cord, divided into ipsilateral and contralateral sites, were harvested and homogenized in peqGOLD TriFast total RNA isolation reagent (cat. no. 30-2030; PeqLab Biotechnologie GmbH, Erlangen, Germany) using an Ultra-Turrax Homogenizer (IKA® Werke GmbH & Co. KG, Staufen im Breisgau, Germany). Total RNA was prepared according to the manufacturer's instructions. Potentially contaminating DNA was removed by treating 5 µg total cell RNA with Turbo DNA-free (Ambion; Thermo Fisher Scientific, Inc., Waltham, MA, USA). RNA (4 µl; 2 µg input RNA) was reverse transcribed using a RevertAid™ H Minus First Strand cDNA Synthesis kit primed with Oligo(dT)18 primers (cat. no. K1631; Thermo Fisher Scientific, Inc.; primers listed in Table I). cDNA (1 µl) was then amplified by PCR using Taq DNA polymerase (PeqLab Biotechnologie GmbH), as previously described (40). One-tenth of each reaction product was electrophoresed on a 1% agarose gel (Serva Electrophoresis GmbH, Heidelberg, Germany) (excluding TNF-α, which required a 2% agarose gel). The PCR product bands were quantified by densitometric analysis using a GeneGenius bio-imaging system (Syngene, Cambridge, UK) and the ratio of their expression levels to those of the GAPDH reference gene was calculated. Each experiment was repeated in triplicate.
Statistical analysis of all groups was conducted using a non-parametric Kruskal-Wallis test. Dunn's multiple comparison test was used as a post-hoc test. For statistical analysis of the groups within one survival time, analysis of variance with Tukey's post-hoc test was performed. Graph Pad Prism 4 software (GraphPad Software Inc., La Jolla, CA, USA) was used to conduct the statistical analyses. P<0.05 was considered to indicate a statistically significant result.
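By way of illustration (the original analyses were performed in GraphPad Prism), an equivalent Kruskal-Wallis test with Dunn's post-hoc comparisons can be reproduced in Python; the file name and column layout below are assumptions, and Dunn's test is provided by the third-party scikit-posthocs package.

```python
import pandas as pd
from scipy import stats
import scikit_posthocs as sp  # third-party package providing Dunn's post-hoc test

# Hypothetical long-format table: one GAPDH-normalized band-intensity ratio per animal,
# with a "group" label such as "3DPI_PBS_ipsi" and the corresponding "ratio" value.
df = pd.read_csv("mmp9_gapdh_ratios.csv")

samples = [g["ratio"].to_numpy() for _, g in df.groupby("group")]
h_stat, p_value = stats.kruskal(*samples)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

# Dunn's multiple-comparison test between all pairs of groups (Bonferroni-adjusted p-values).
dunn_table = sp.posthoc_dunn(df, val_col="ratio", group_col="group", p_adjust="bonferroni")
print(dunn_table)
```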
Stress induction. OGD was induced following 4 days in vitro (DIV). Briefly, the medium was removed and replaced with normal medium under normal conditions or OGD medium (glucose-free DMEM supplemented with 10% FCS and 0.2% Ciprobay) under OGD conditions. OGD conditions were reached by exposing the cultures to an atmosphere composed of 5% CO2 and 1% O2, using nitrogen gas to displace ambient air in a C200 incubator (Labotect Technik-Göttingen GmbH, Rosdorf, Germany) at 37˚C for 6 h. For reoxygenation, the incubator atmosphere was re-established to 5% CO2 and 21% O2 and 4.5 mg/ml glucose was added.
Addition of LPS (Escherichia coli; Sigma-Aldrich) was also performed at 4 DIV. As for OGD, the medium was replaced with normal medium, in this case supplemented with 2 mg/ml LPS for 24 h.
Minocycline treatment. Minocycline hydrochloride (molecular weight, 493.9 g/mol; Sigma-Aldrich) was dissolved in sterile PBS to obtain a stock solution of 5 mg/ml (pH 6.5). From this stock solution, 1 µl/well, 20 µl/dish and 50 µl/flask were added to the respective groups (control, OGD, LPS) following medium replacement in order to start the stress induction (final minocycline concentration 100 µM, final pH 7.0).
Assessment of cell proliferation/survival by MTT, bromodeoxyuridine (BrdU) and vital staining. The specific turnover of MTT (6 mg/ml; Sigma-Aldrich) to formazan by viable cells was analyzed 24 h following OGD induction using photometry. Briefly, 8 µl MTT (6 mg/ml) was added to each well and incubated for 3 h prior to complete removal of the medium. A total of 100 µl dimethyl sulfoxide (DMSO; Merck Millipore, Darmstadt, Germany) was then added to each well, and extinction coefficients in each well were determined using an Infinite® M200 (Tecan GmbH, Crailsheim, Germany) and calculated by subtracting the reference absorbance at 690 nm from the absorbance at 570 nm. The absorbance of the empty wells filled only with DMSO was subtracted. Subsequently, the mean values of the respective treatment groups were calculated and related to the normal medium control. Each experiment was performed with 12 repeats/treatment group. As the MTT assay is a general assay for cell viability and proliferation, the mitotic indices were additionally determined using BrdU (Roche Diagnostics GmbH, Mannheim, Germany), which was added at the same time as stress induction, and 24 h prior to fixation with 4% PFA, as previously described (41). Fixed cell cultures were washed with PBS, incubated with 2 N HCl at 37˚C for 1 h, washed repeatedly with borate buffer (pH 8.5) and PBS, and finally incubated at 7˚C for 24 h with monoclonal rat anti-BrdU antibody (1:100; cat. no. OBT0030; AbD Serotec; Bio-Rad Laboratories, Inc., Hercules, CA, USA) combined with mouse monoclonal anti-pan-NF (cat. no. 837802; Biolegend). Subsequently, the washed cultures were incubated for 3 h with secondary antibodies goat anti-rat Alexa 546 (1:500; cat. no. A11081; Thermo Fisher Scientific) and anti-mouse Alexa 488 (1:500; Invitrogen; Thermo Fisher Scientific, Inc.) prior to examination using an AxioImager M1 fluorescence microscope with a 20x objective lens. Each treatment group consisted of three culture dishes in which the BrdU-positive NSC-34 cells in three different fields of view were counted. The three values/dish were combined, and the percentage of BrdU-positive cells relative to the total number of NSC-34 cells was calculated.
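The MTT read-out processing described above reduces to a few arithmetic steps; the following minimal sketch uses hypothetical absorbance values purely to illustrate the calculation.

```python
import numpy as np

# Hypothetical plate-reader absorbances for a few wells per group (illustrative values only).
a570 = {"control": np.array([0.82, 0.79, 0.85]), "OGD": np.array([0.41, 0.44, 0.39])}
a690 = {"control": np.array([0.06, 0.05, 0.06]), "OGD": np.array([0.05, 0.06, 0.05])}
blank = 0.09 - 0.05  # DMSO-only well: absorbance at 570 nm minus reference at 690 nm

# Per well: subtract the 690 nm reference and the blank, then average per treatment group.
group_means = {g: float(np.mean(a570[g] - a690[g] - blank)) for g in a570}

# Express each group relative to the normal-medium control.
control_mean = group_means["control"]
for group, value in group_means.items():
    print(f"{group}: {100.0 * value / control_mean:.1f}% of control")
```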
Cell viability. Cell viability was assessed by double-labeling with fluorescein diacetate and propidium iodide (PI) (42). The assay is based on the ability of living cells to hydrolyze fluorescein diacetate (10 µg/ml PBS, 5 min; Sigma-Aldrich) using intracellular esterases, resulting in a green/yellow-colored fluorescence. Dead cells were labeled with PI (5 µg/ml PBS, 5 min; Sigma-Aldrich), which interacts with DNA to produce a red fluorescence of cell nuclei. The analysis procedure was the same as described above for the BrdU assay.
For all assays, the respective mean values were analyzed using a non-parametric Kruskal-Wallis test and Dunn's multiple comparison test as post-hoc tests using Graph Pad Prism 4 software. P≤0.05 was considered to indicate a statistically significant result. All experiments were independently performed in triplicate.
Semi-quantitative RT-PCR. Cells were harvested 24 h prior to OGD induction or LPS treatment. The mRNA expression levels were determined as described above for the spinal cord tissue sections. Five flasks were prepared for each treatment group (control, control + minocycline, OGD, OGD + minocycline, LPS and LPS + minocycline). The experiment was repeated in duplicate. Statistical analysis was performed with a non-parametric Kruskal-Wallis test and Dunn's multiple comparison test as post-hoc test using Graph Pad Prism 4 software. P<0.05 was considered to indicate a statistically significant result.
Immunohistochemistry. At 5 DIV (24 h after OGD induction), the cell cultures were fixed for 30 min in 4% buffered PFA, and unspecific binding sites were blocked with 10% bovine serum albumin (BSA; Sigma-Aldrich)/0.3% Triton X-100 in PBS for 1 h. Subsequently, the cultures were incubated with the aforementioned primary antibodies anti-Bax and anti-β-III-tubulin (1:1,000) at 7˚C overnight, followed by a wash with PBS, then incubation with the secondary antibodies goat anti-mouse Alexa 488 (1:500) and donkey anti-rabbit Cy3 (1:250) at room temperature for 3 h. All antibodies were diluted in 10% BSA/0.3% Triton in PBS. The specificity of the immunoreaction was controlled by the application of buffer instead of primary antibodies. Cell cultures were examined using a fluorescence microscope (AxioImager M1; Plan-Neofluar objective; 20/0.5). For each treatment group (control, control + minocycline, OGD, OGD + minocycline, LPS, LPS + minocycline) and staining type, two dishes were examined (108 dishes in total). The experiment was performed in duplicate.
Animal model
Surgical outcome/macroscopic assessment. The surgical procedure was well tolerated and the wounds healed well. None of the animals died as a result of the treatments. In the first two post-operative weeks, clinical signs of hyperalgesia or discomfort were observed. Compared with injured PBS-treated rats, the minocycline-treated animals demonstrated diminished peripheral nerve regeneration, as indicated by significantly lower axon counts in the distal stump. Functional outcome (response of animals to thermal stimuli and muscle weight ratio of the gastrocnemius muscle) has been described previously (25).
Microscopic assessment. Immunohistochemical assessment was performed at 5 DPI as at this DPI the PCR revealed the most marked alterations (see below). At 5 DPI, the population of SMI311-expressing motorneurons of the contralateral and ipsilateral ventral horns (VH) was equal in number, form and staining intensity (Fig. 1). Astroglia-specific GFAP (Fig. 1A) and microglia-specific IBA1 (Fig. 1B) were expressed in the contralateral VH. In the ipsilateral side, a marked induction of both markers was evident ( Fig. 1A and B). Microglia activation could also be demonstrated by cell morphology. The cells were altered from their ramified form, and became thicker and retracted their branches. Glia activation indicated ongoing neurodegenerative processes at the nerve fiber level, which was not yet evident from SMI311 staining. With regards to GFAP, treatment with minocycline was ineffective (Fig. 1A). However, microglia activity was decreased by minocycline, and this effect was more marked in the ipsilateral VH (Fig. 1B).
Semi-quantitative RT-PCR. In the spinal cord of untreated animals, all experimental genes were constitutively expressed. GAP-43 possessed the highest expression levels, which were increased at ≥7 DPI. The expression levels of all other genes were significantly increased by sciatic nerve injury at 3 DPI. This increase in expression levels was evident >5 DPI. In the case of MMP9 a marked increase in expression levels was observed at 3 and 5 DPI. At 7 DPI, the expression levels of caspase-3, Bcl-2, ATF3, TNF-α and VEGF returned to levels similar to those of the control, and the expression levels of VEGF were already significantly reduced at 5 DPI. The expression levels of Bax, MHC I and MMP9 remained high at ≥14 DPI. With the exception of MMP9, no significant differences were observed between ipsilateral and contralateral effects following nerve injury. At 5 DPI, MMP9 was expressed at significantly higher levels on the ipsilateral side (Fig. 2).
Only the expression levels of certain genes were affected by minocycline. Treatment with minocycline reduced the ipsilateral expression levels of MHC I at 3 DPI. The expression levels of TNF-α were reduced ipsilaterally at 3 and 5 DPI. At 5 DPI, the contralateral expression levels of TNF-α were also diminished. Minocycline reduced the ipsilateral expression levels of MMP9 at 3 DPI and the contralateral expression levels at 5 DPI. Treatment with minocycline decreased the ipsilateral and contralateral expression levels of VEGF at 3 DPI. Furthermore, the nerve injury-induced expression of GAP-43 was significantly suppressed (Fig. 2).
The majority of immunofluorescence signals were activated by unilateral nerve injury in the ipsilateral VH. In the case of Bax, in addition to marked cytoplasmic staining of motorneurons, marked nuclear fluorescence was visible (Fig. 3A). Such injury/hypoxia-induced translocation of Bax to the nucleus has previously been described for neonatal neurons of the spinal cord (27) and brain (43). The enhanced motorneuronal signals of caspase-3 and Bcl-2 (Fig. 3A) were located in the cytoplasm, and Bcl-2 was also markedly expressed in the contralateral VH motorneurons (Fig. 3A). Intense TNF-α staining was observed in the motorneuronal cytoplasm with compaction/concentration around and inside the nucleus (Fig. 3A). The markedly intense immunosignals of MHC I, MMP9 (Fig. 3A) and VEGF (Fig. 3B) were evenly distributed in the cytoplasm of the motorneurons. In addition, the expression of the ATF3 transcription factor was predominantly upregulated in the cytoplasm (Fig. 3B). A similar pattern with the majority of neurons exhibiting marked cytoplasmic staining and only a minority also exhibiting nuclear translocation was reported by Seijffers et al (44). GAP-43 immunostaining demonstrated a reduction in synaptic contacts (Fig. 3A).
The effect of minocycline was marginal. MHC I appeared to be upregulated by minocycline in the contralateral and ipsilateral VH motorneurons (Fig. 3A). Conversely, the injury-induced upregulation of VEGF was reversed by minocycline (Fig. 3B).
Cell culture model
Assessment of cell survival and proliferation. MTT, bromodeoxyuridine (BrdU) and vital staining were used to assess cellular survival and proliferation (Figs. 4 and 5). The MTT assay is based on the specific turnover of MTT to formazan, requiring viable cells. An increased extinction coefficient indicates an enhanced MTT turnover rate and thus a greater number of viable cells. The MTT assay demonstrated a significant (P<0.05) OGD-induced reduction of the metabolic activity of NSC-34 cells, whereas LPS had no effect on metabolic activity levels (Fig. 5A). Independent of treatment, minocycline did not alter the treatment group-specific MTT turnover (Fig. 5A).
In contrast to post-mitotic motorneurons, NSC-34 cells are able to proliferate as they are neuroblastoma-spinal cord hybrids. The basic mitotic index, determined by BrdU incorporation in the control group, was 48±5% (Figs. 4B and 5B). OGD significantly reduced the proliferation of NSC-34 cells (P<0.01); however, LPS was ineffective (Figs. 4B and 5B). Minocycline had no effect on NSC-34 cell proliferation, in either the control or the LPS group. In addition, minocycline was not able to reverse the inhibitory effect of OGD (Figs. 4B and 5B).
Vital staining of control cultures revealed only scattered dead (PI-positive) cells, which were not affected by minocycline (Figs. 4A and 5C). OGD induced significant neurotoxicity (P<0.001), while minocycline was marginally able to reverse this neurotoxicity (P<0.1; Fig. 4A and 5C). LPS alone or in combination with minocycline had no effect on NSC-34 cell viability ( Fig. 4A and 5C).
Semi-quantitative RT-PCR. In untreated control NSC-34 cell cultures, all experimental genes were constitutively expressed, with low expression levels observed for ATF3, caspase-3 and VEGF. OGD was able to significantly increase the expression levels of Bcl-2 (P<0.05), TNF-α (P<0.01) and MMP9 (P<0.05). TNF-α and MMP9 were also significantly upregulated by LPS (P<0.05). Similarly to the results observed following the analysis of the tissue samples, the OGD stress-induced expression was significantly (P<0.05) suppressed by minocycline. Furthermore, minocycline also significantly reduced VEGF expression (P<0.05; Fig. 6).
Immunohistochemistry. The mRNA expression profile of NSC-34 cells was also investigated by fluorescence immunohistochemical evaluation of protein expression (Fig. 7). In untreated control cell cultures (Fig. 7A and B), all proteins were expressed endogenously. This result was expected as primary cell cultures are usually characterized by a low and constant cell death rate. The stressors OGD (Fig. 7A and B) and LPS (Fig. 7A and B) induced the activation of all proteins.
Minocycline had no effect on control cultures (Fig. 7A and B), but was able to reduce the stress-induced upregulation of TNF-α and MMP9 expression (Fig. 7A). Combined with LPS, minocycline was able to inhibit VEGF expression (Fig. 7B). In contrast to the in vivo experiments, the expression of Bax was predominantly located in the cytoplasm (Fig. 7A). Only in stressed and minocycline-treated cells was nuclear expression visible (Fig. 7A). The expression of the transcription factor ATF3 was upregulated following stress and located in the nucleus (Fig. 7B), and this expression was inhibited by minocycline (Fig. 7B). Furthermore, the expression pattern of GAP-43 was different from that observed in vivo. Under all experimental conditions, fluorescence signals were present in the cytoplasm of NSC-34 cells, although not in the fibers (Fig. 7A).
Discussion
Transection of peripheral nerves induces a complex cascade of reactions, including retrograde processes targeting the axotomized spinal motorneurons. In neonatal rats, axotomized motorneurons often die (45,46). However, in adult animals, degeneration of spinal motorneurons following peripheral nerve axotomy rarely occurs (47,48). Only severe ventral root avulsion has more severe effects and induces a significant loss of axotomized motorneurons in the respective spinal cord segments (49). In addition, less severely injured adult ipsilateral
In the rat model of sciatic nerve reconstruction in the present study, a post-traumatic pattern in the VH was also observed. Motorneurons did not exhibit visible signs of neurodegeneration. Moreover, the sciatic nerve was reconstructed immediately following transection, which is neuroprotective (52). However, microglia activation and synaptic terminal retraction were detectable, and mRNA expression levels correlated with microglial activation. In the untreated spinal cord, all experimental genes were constitutively expressed at mRNA and protein levels. GAP-43, as a crucial component of the axon and presynaptic terminals, exhibited, as expected, the highest expression levels. The genes and proteins involved in inflammation (MHC I, MMP9, TNF-α), apoptosis (Bax, Bcl-2, caspase-3), or stress response (ATF3, VEGF) were expressed at basic levels. Sciatic nerve transection and reconstruction significantly increased the expression levels of these genes, although only in the first week. Similar time courses have been described previously, including those for Bax (53), ATF3 (54) and MHC I (55). The coincident but temporary upregulation of pro-apoptotic genes and proteins (Bax and caspase-3), anti-apoptotic genes and proteins (Bcl-2), and markers of inflammation or stress demonstrated the presence of a self-defensive response to injury, and a conflict between injury-induced neurodegenerative signaling cascades and neuroprotective mechanisms during the acute phase following injury. In certain cases, this results in the survival and recovery of stressed motorneurons, as was observed in the present model of sciatic nerve reconstruction. In severe traumatic injuries, such as nerve avulsion, significant motorneuronal death occurred, accompanied by mitochondrial accumulation of Bax, cytochrome c redistribution and activation of caspase-3 (29). Schwartz et al (56) termed this process a detrimental cost-benefit ratio; inflammation, being primarily a positive self-response eliminating or neutralizing injurious stimuli and restoring tissue integrity, exceeds the threshold of tolerability and contributes to neuropathology. In this regard, it appeared that the immune response to nerve injury in neonatal rats is reversed (57), thus offering one possible explanation for the aforementioned enhanced neuro-vulnerability in young animals. Only MMP9 and GAP-43 demonstrated significantly increased ipsilateral induction compared with the contralateral side. All other genes showed no significant differences when the ipsilateral and contralateral sides were compared. The absence of ipsilateral vs. contralateral differences is in contrast to the findings of Tang et al (58), which demonstrated that unilateral root-avulsion resulted in significant alterations to microRNA expression only in the ipsilateral spinal cord. However, the present results are in agreement with Rotshenker and Tal (59), who revealed that sprouting and synapse formation is enhanced by contralateral axotomy. Furthermore, there is evidence for transneuronal correspondence between ipsilateral and contralateral motorneurons. Transneuronal labeling of the L4 and L5 VH neurons following pseudorabies virus injection in the rat medial gastrocnemius muscle has been previously described (60). Neuropeptides, including peptide histidine isoleucine (61) and calcitonin gene-related peptide (CGRP) (62), were also induced bilaterally in rat spinal motor neurons following unilateral sciatic nerve transection. CGRP has been proposed to be involved in pain transmission and inflammation (63), as well as in repair mechanisms for neural regeneration following brachial plexus (2) or sciatic nerve (5) injury, in which its anti-apoptotic properties (64) were essential. These results are concordant with the hypothesis that unilateral sciatic nerve injury is able to induce bilateral stress and self-defense.
Figure 5. Respective quantitative analysis of (A) MTT, (B) BrdU and (C) PI assays. Data are presented as the mean ± standard deviation; n=9 cultures/group; for each culture, a mean of three vision fields/dish was used for calculations. * P<0.05; ** P<0.01; *** P<0.001. Con, control; OGD, oxygen glucose deprivation; LPS, lipopolysaccharide; PI, propidium iodide; BrdU, bromodeoxyuridine; Ø mino, not treated with minocycline; + mino, treated with minocycline.
Synapse stripping is a regular result of peripheral axotomy, in which the extent of synaptic terminal retraction depends on the distance between motorneuron and lesion [it is lessened when the lesion side is further from the cell soma (65)], and on the severity of the lesion. This process occurs when neuronal cell death is not obvious (66). This remodeling has been suggested to be an adaptive mechanism of self-defense underlying enhanced neuronal viability (67,68). Although the microglia activation demonstrated in the present study may be associated with synaptic stripping, a previous study suggested that the activation of glia is not correlated with the degree of synaptic stripping (65).
Numerous studies have demonstrated minocycline-induced neuroprotection (69-71). For axotomized motorneurons, it influenced both the ipsilateral and the contralateral side (72). In the present investigation, the effects of minocycline were relatively low, but did induce marginal inhibition, with a reduction in the expression levels of MMP9, TNF-α, MHC I, VEGF and GAP-43. The inhibitory effects of minocycline have previously been described for MMP9 (73), MHC I (74), VEGF (75,76), TNF-α (77,78), and GAP-43 (79). One target of minocycline appears to be the transcription factor NF-κB. Minocycline has been demonstrated to inhibit the activation of NF-κB (80), as well as its translocation into the cell nucleus (81). NF-κB, however, induces the expression of MMP9 (82), MHC I (83) and VEGF (84). Furthermore, the activation of NF-κB culminates in the release of TNF-α (85), which is itself a potent activator of NF-κB (86); thus, an escalation of the minocycline effects can be assumed.
A late induction of motorneuronal GAP-43 expression following sciatic nerve injury has been previously demonstrated (87). GAP-43 is widely used as a marker for the growth/regeneration state of motorneurons, including synapse reconstruction, referred to as the 'cell body response' (88). The expression of GAP-43 requires acetylated p53 (89). However, minocycline is able to downregulate the expression of p53 (90) and to inhibit acetylation (91), which results in the minocycline-induced downregulation of GAP-43 expression demonstrated by the results of the present study.
Minocycline is able to downregulate or upregulate Bax and Bcl-2, respectively, thereby resulting in an anti-apoptotic ratio (76,92-94). However, the present study did not demonstrate any significant minocycline effects on Bax or Bcl-2. These non-concordant results may be due to the heterogeneity of the models and minocycline treatment regimes. Matsukawa et al (95) demonstrated that minocycline attenuates experimentally-induced ischemic cell death by upregulating Bcl-2 expression at low doses. However, high minocycline doses exacerbated ischemic injury and reduced the number of Bcl-2-expressing neurons. Furthermore, minocycline targeted neurons alone, not astrocytes (95). The present results of the PCR analysis on the spinal cord tissue samples reflect the expression pattern of all spinal cord cell types, including glial cells. For this reason, complementary experiments were conducted using the NSC-34 motorneuron-like cell line.
OGD, but not LPS, was highly toxic for NSC-34 cells and minocycline reduced the OGD-induced cell death rate, although these results were not significant (P>0.05). Notably, stress-induced changes in apoptosis-associated Bax and caspase-3 expression were not observed. These results were concordant with an in vivo study that demonstrated that dying lumbar motorneurons did not always exhibit apoptotic morphology (96). There is also evidence that NSC-34 cells expressed the apoptotic markers only under specific conditions (97). NSC-34 cell death could be induced by various apoptotic agents, and when intracellular protein inclusions containing mutant SOD1 existed, dispersed SOD1 prevented NSC-34 cells from apoptotic cell death (98). This may explain the observed apoptotic death of NSC-34 cells following H 2 O 2 -induced oxidative stress (99), as oxidative stress induced SOD1 aggregation (100). The absence of Bax, caspase-3, MHC I and ATF3 activation in NSC-34 cells suggested that motorneurons were not preferentially or even solely responsible for the nerve injury-mediated upregulation of these genes. PCR analysis demonstrated the expression pattern of neurons and glial cells, and the expression and stress-mediated regulation of these genes has been well-described: Astroglial Bax and caspase-3 (101); MHC I (102); ATF3 (103); microglial Bax and caspase-3 (104); MHC I (105); ATF3 (34); oligodendroglial Bax and caspase-3 (106); MHC I (107); and ATF3 (108). The absence of GAP-43 activation in NSC-34 cells may be a result of the in vitro absence of retrograde signals, which in vivo originate from the distal nerve stump and the disconnected nerve targets to initiate and support axonal regeneration (109).
TNF-α and VEGF induced the expression of MMP9; however, only OGD induced the expression of Bcl-2, paralleling the activating potency of sciatic nerve reconstruction. These results suggested that the expression level changes observed in vivo may be induced by motorneurons. However, the involvement of glial cells cannot be excluded. The glial expression of the four genes has previously been described: MMP9 (110,111); TNF-α (112); VEGF (113,114); and Bcl-2 (92,115).
In NSC-34 cells, minocycline exhibits inhibitory effects, for which the above-mentioned NF-κB signaling pathway is a plausible mechanism, given that NF-κB is expressed by NSC-34 cells and is activated and translocated into the nucleus as a result of cell stress (116,117).
The present study demonstrated a massive but temporary SNR-mediated upregulation of all studied genes in L3-L6 sections of the spinal cord that was moderately affected by minocycline. The results observed within NSC-34 cells indicate that motorneurons are not significantly or solely responsible for these SNR-mediated changes in gene expression. To further clarify the cell-specific gene profiles, a more complex model of organotypic cell cultures may be a helpful alternative. This model could mimic tissue architecture of the spinal cord, which would allow an understanding of cellular etiology of these processes. | 2016-05-18T13:21:30.973Z | 2016-03-02T00:00:00.000 | {
"year": 2016,
"sha1": "4212ad362b9994fbe331865015b794dd991fe9e6",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/etm.2016.3130/download",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4212ad362b9994fbe331865015b794dd991fe9e6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
248085075 | pes2o/s2orc | v3-fos-license | Human vs Objective Evaluation of Colourisation Performance
Automatic colourisation of grey-scale images is the process of creating a full-colour image from the grey-scale prior. It is an ill-posed problem, as there are many plausible colourisations for a given grey-scale prior. The current SOTA in auto-colourisation involves image-to-image type Deep Convolutional Neural Networks with Generative Adversarial Networks showing the greatest promise. The end goal of colourisation is to produce full colour images that appear plausible to the human viewer, but human assessment is costly and time consuming. This work assesses how well commonly used objective measures correlate with human opinion. We also attempt to determine what facets of colourisation have the most significant effect on human opinion. For each of 20 images from the BSD dataset, we create 65 recolourisations made up of local and global changes. Opinion scores are then crowd sourced using the Amazon Mechanical Turk and together with the images this forms an extensible dataset called the Human Evaluated Colourisation Dataset (HECD). While we find statistically significant correlations between human-opinion scores and a small number of objective measures, the strength of the correlations is low. There is also evidence that human observers are most intolerant to an incorrect hue of naturally occurring objects.
Introduction
The goal of automatic colourisation is to convert grey-scale images to colour images. Colourisation is an ill-posed problem as many plausible colourisations can result from the same grey-scale image. Predicting the exact colour of the original scene is impossible without further prior information from historical sources. A recent trend is to take any natural colour image dataset, convert it to a luminance-chrominance colour space, use the luminance channel as the grey-scale image and use the two chrominance channels as the ground-truth that a deep neural network must predict. This admits only a single ground-truth for each grey-scale image despite other plausible colourisations existing. Objective assessment of a colourisation model's predictions generally relies on distance measures from the ground-truth image. In this paper we wish to answer the following questions.
• Do commonly used objective measures of colourisation performance correlate with mean human opinion scores?
• Is the ground-truth the perfect colourisation for its grey-scale prior?
• Will correction of white-balance of ground-truth images lead to higher opinion score?
• Are there any image statistics that all plausible colourisations might have in common?
Our key contributions are:
• An extensible Human Evaluated Colourisation dataset of recolourisations with matching human-opinion scores to benchmark future objective measures of colourisation.
• An assessment of the correlation between the human-opinion score of colourisation performance and the objective measures used in the colourisation literature.
• Analysis and insight into aspects of colourisation that affect the human-opinion score.
• An interactive tool to allow other researchers to analyse the HECD dataset and its results: https://github.com/seanmullery/HECD
2 How is colourisation performance measured in the literature?
Most colourisation techniques rely on some form of human-visual inspection to determine efficacy, or for comparison to other techniques. Human-visual inspection can include qualitative analysis [Irony et al., 2005, Welsh et al., 2002, Yatziv and Sapiro, 2006, Zhang et al., 2016, Su et al., 2020, Li et al., 2019, Amelie Royer and Lampert, 2017, Górriz et al., 2019, Lee et al., 2020, Cao et al., 2017, Yoo et al., 2019], naturalness scoring [Iizuka et al., 2016, Zhao et al., 2018], user preference between two options [Li et al., 2019], a Visual Turing Test (VTT) judged by humans [Zhang et al., 2016, Cao et al., 2017, Guadarrama et al., 2017], which of two colourisations best matches a reference image's colour [Li et al., 2019], or which, from many images, appears closest to a ground-truth [Yoo et al., 2019]. Many attempt an objective measure based on absolute pixel value errors, such as RMSE (Root Mean Squared Error) or L2 pixel distance [Zhang et al., 2016, Deshpande et al., 2015, Deshpande et al., 2017], MAE (Mean Absolute Error) or L1 pixel distance [Górriz et al., 2019], and PSNR (Peak Signal to Noise Ratio) [Zhang et al., 2017, Su et al., 2020, Górriz et al., 2019, Zhao et al., 2018, Cheng et al., 2015, Kim et al., 2021, Özbulak, 2019]. [Lee et al., 2020] develop a patch-based version of PSNR called SC-PSNR (Semantically Corresponding PSNR), as they wish to compare colour to a semantically similar patch from a reference image. SSIM (Structural Similarity Index Measure) [Wang et al., 2004] is used by [Su et al., 2020, Özbulak, 2019, Zhao et al., 2020], and its multi-scale version MS-SSIM [Wang et al., 2003] is used by [Wu et al., 2019]. [Kim et al., 2021] developed an objective measure called CDR (Cluster Discrepancy Ratio) based on SLIC (Simple Linear Iterative Clustering) superpixels [Achanta et al., 2012]. CDR is formulated by looking at the discrepancy between super-pixel assignment for ground-truth versus colourisation. Similarly, [Zhao et al., 2018] use mean IoU of segmentation results on the PASCAL VOC2012 dataset [Everingham et al., 2012]. [Wu et al., 2021] use a no-reference measure called the colourfulness score [Hasler and Süsstrunk, 2003], which incorporates the means and standard deviations of the a* and b* channels of CIEL*a*b* in a parametric model to compute a measure of how colourful the image is. The parameters were learned from data based on psychophysical experiments. [Górriz et al., 2019] and [Guadarrama et al., 2017] compare histograms in the a* and b* channels of CIEL*a*b* over a distribution of images. Some methods [Zhang et al., 2016, Larsson et al., 2016, Vitoria et al., 2020] utilise the concept that colour will assist in classifying objects. Therefore a neural network designed to classify objects using colour images will show a deterioration in performance if inferred with a poorly colourised image. The difference can then be used as a proxy measure for colourisation performance. [Górriz et al., 2019] compare L1 distance between convolutional features in the VGG19 model [Simonyan and Zisserman, 2015] for ground-truth and colourised samples. Similarly, [Lee et al., 2020] and [Wu et al., 2021] use Fréchet Inception Distance [Heusel et al., 2017], which requires comparing the inception score for colourisations versus ground-truth for 50K samples.
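As an illustration of the colourfulness measure mentioned above, the sketch below follows the widely cited opponent-colour formulation of [Hasler and Süsstrunk, 2003]; the rg/yb channels and the 0.3 weighting are taken from that common formulation and may differ from implementations computed directly on a*b* statistics.

```python
import numpy as np

def colourfulness(rgb):
    """Hasler and Suesstrunk style colourfulness score for an RGB image in [0, 255]."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    rg = r - g                      # red-green opponent channel
    yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
    sigma = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mu = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return sigma + 0.3 * mu

# Colourfulness difference between a recolourisation and its ground-truth
# (both hypothetical H x W x 3 uint8 arrays):
# diff = abs(colourfulness(recoloured) - colourfulness(ground_truth))
```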
[Zhang et al., 2018] developed a perception measure based on the features of deep neural networks called the Learned Perceptual Image Patch Similarity (LPIPS) metric, and this has also been used for the measure of colourisation in [Su et al., 2020, Yoo et al., 2019, Kim et al., 2021]. The work of [Anwar et al., 2020] is the only work we have found which attempts a dataset that is specifically designed for colourisation. Their dataset is designed with the idea of excluding synthetic objects and natural objects, such as flowers, that may have a wide distribution of plausible colours. Instead, they include only natural objects that would be considered to have a narrow distribution of plausible colours, such as specific types of fruit and vegetables. The images contain only a single object type, against a white background. There are 20 categories and 723 images in all. They then use PSNR, SSIM, PCQI (Patch-based contrast quality index), and UIQM (Underwater Image Quality Metric) to test SOTA algorithms on their dataset. As is explained in section 3, our dataset design is considerably different and includes human evaluation data for each image.
The Human Evaluated Colourisation Dataset
The Human Evaluated Colourisation Dataset is based on 20 images from the Berkeley Segmentation Dataset [Martin et al., 2001]. From each of these 20, 65 images are created that differ in colour from the original. While we attempt to make changes that will be interpretable later, our primary objective is to have many different colour versions for human evaluation to allow appropriate comparison to objective measures. In total, the 65 × 20 = 1300 recolourised images plus the 20 original images result in 1320 images in the set. The BSD set was chosen as it has a variety of natural images and multiple human segmentations of each. The segmentations, in many cases, segment colour sections, allowing the alteration of the colour of specific sections without modification of the rest of the image. The original image will be referred to as the ground-truth from here on. The following changes are made to the ground-truth to create the HECD dataset. The first recolour modification is to auto-white-balance correct the 20 ground-truth images in Photoshop [Adobe, 2021], creating 20 new images. The L*-channel of each corrected image is then replaced with that of the ground-truth to ensure that Photoshop changes only the a*b* channels. While the a*b* channels are close to perceptually uniform, they are not very intuitive, so a reformulation of these channels to hue and chroma channels is used via the equations of [Fairchild, 2013].
$$c = \sqrt{(a^{*})^{2} + (b^{*})^{2}} \quad (1)$$
$$h = \tan^{-1}\left(\frac{b^{*}}{a^{*}}\right) \quad (2)$$
where h is hue, and c is chroma. We now proceed to make the following global changes to the 40 images (20 ground-truth + 20 WB corrected). The changes below are arbitrary as there is no prior work to guide sample spacing or types of parameters:
• Alter intensity value of chroma by ±2σ, ±1σ of the chroma of the image (4 × 40 = 160 images).
• Alter contrast of chroma by 1/4, 1/2, 2, 4 (4 × 40 = 160 images).
• Shift (offset registration) the a*b* channels spatially relative to the L*-channel by 0.01, 0.02, 0.03, 0.04 of the width and height of the image (4 × 40 = 160 images). The edges that had no donor pixels just retain their original value.
• Collect some SOTA colourisation algorithms' predictions of colour given the L*-channels of the 20 ground-truth images. The choice of which SOTA methods to include was based on availability of implementation and ability to accept the BSD image sizes without modification: [Zhang et al., 2016], [Zhang et al., 2017] (using straight-forward inference with no user guidance), and [Iizuka et al., 2016]. We also replace these L*-channels with the ground-truth L*-channel in case any of the SOTA algorithms alter the L*-channel as part of their processing pipeline (6 × 20 = 120 images).
In addition to the global changes, we also introduce some local changes. For each of the 40 images, we choose either a single segment or multiple segments that are of the same colour and make the following modifications to just the chosen segment(s).
• For the segment we alter the intensity of the chroma by ±2σ, ±1σ of the chroma of the image (4 × 40 = 160 images).
• Hue is not a magnitude space; you cannot have an absence of hue or more/less hue, and all hues are equally important. Therefore, statistics like mean and standard deviation are not meaningful in the hue channel. We alter the hue of the segment in a logarithmic fashion so that we can get better resolution in results closely surrounding the reference hue but still cover the full space of hue values without the cost of sampling all 256 hue values. Future extensions could more tightly sample the whole space. With the hue from Equation 2 forming a circular space ∈ [0, 255], we make the following alterations from the reference hue: ±2, ±4, ±8, ±16, ±32, ±64, and 128 (±128 results in the same change) (13 × 40 = 520 images). A sketch of this segment-wise hue manipulation is given after this list.
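The sketch below illustrates the hue/chroma reformulation of Equations 1 and 2 and a segment-wise circular hue shift; it assumes OpenCV's 8-bit CIEL*a*b* encoding (a* and b* offset by 128), a hue scale mapped onto [0, 255], and hypothetical image and mask file names.

```python
import numpy as np
import cv2

def ab_to_chroma_hue(a, b):
    """Chroma (Eq. 1) and hue angle (Eq. 2), with hue wrapped onto a circular [0, 255] scale."""
    a = a.astype(np.float64) - 128.0      # undo OpenCV's offset of the a* channel
    b = b.astype(np.float64) - 128.0      # undo OpenCV's offset of the b* channel
    chroma = np.sqrt(a ** 2 + b ** 2)
    hue = np.arctan2(b, a)                # radians in (-pi, pi]
    hue_255 = (hue % (2 * np.pi)) / (2 * np.pi) * 256.0
    return chroma, hue_255

def chroma_hue_to_ab(chroma, hue_255):
    hue = hue_255 / 256.0 * 2.0 * np.pi
    a = chroma * np.cos(hue) + 128.0
    b = chroma * np.sin(hue) + 128.0
    return a, b

# Example: shift the hue of one colour segment by +32/256 of the circular hue scale
bgr = cv2.imread("bsd_image.jpg")                                  # hypothetical ground-truth image
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)
mask = cv2.imread("segment_mask.png", cv2.IMREAD_GRAYSCALE) > 0    # hypothetical BSD segment mask

chroma, hue = ab_to_chroma_hue(a, b)
hue[mask] = (hue[mask] + 32) % 256                                 # circular offset; L* untouched
a_new, b_new = chroma_hue_to_ab(chroma, hue)
lab_mod = cv2.merge([L,
                     np.clip(np.round(a_new), 0, 255).astype(np.uint8),
                     np.clip(np.round(b_new), 0, 255).astype(np.uint8)])
recoloured = cv2.cvtColor(lab_mod, cv2.COLOR_LAB2BGR)
```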
While this is a small dataset by current standards, it has been designed with extensibility in mind. The arbitrary modifications above were chosen to return the most information for the available resources. More ground-truth images and more recolour modifications along with tighter sampling between modification types, could be added in the future by collecting data in a manner consistent with that given in section 4.
4 Collecting the data.
The Amazon Mechanical Turk was used to assess human opinion on colourisation. Ethics approval was obtained in accordance with the University's Research Ethics Committee guidelines. Each assessment consisted of three images appearing on the screen simultaneously: the L*-channel (in the middle) and a colourisation on each side. One of the colourisations is the ground-truth colour image, and the other is one of the modifications described in section 3, see Figure 1. In this manner, all scores have a control in common. The observer is not informed that one image is the ground-truth and the positions vary in a pseudo-random manner so that there is an equal likelihood that the ground-truth could be on the left or the right. The user is asked to score the two colour images on naturalness (how much the colour looks like it would appear in real life). The scores are from 1-5 on an ordinal scale. Each observer rates 20 pairs.
Figure 1: Each survey question displays three images. In the middle is the L*-channel, which is common to the three images. On either side are the ground-truth and a recolourisation. The participants are not informed that one image is the ground-truth and it could appear on either right or left with equal probability. The participant must respond to both before continuing.
They see each of the 20 ground-truth images in the dataset and a recolourised version. For any set of 20, the type of recolourisation is pseudo-random, so the user does not become accustomed to the type of colour change. As there are 65 recolour versions, 65 surveys of 20 comparisons are created for 1300 responses in total. While each survey is pseudo-random internally, the actual survey is identical for each observer that responds to it. We do not allow a participant to respond to a unique survey twice. In general, we allow participants to complete only one survey. A small number completed two different surveys (19 participants). This is not a problem, but if participants were allowed to do many surveys in a short period, it could lead to non-naivete, with the participant learning that specific colour versions appear in all surveys, i.e. they may learn to recognise the ground-truth and be biased towards awarding it the higher of the two scores. In all, there were 1281 participants. Twenty participants completed each survey. Twenty-nine incomplete surveys were not used but also not counted in the total 1300 complete surveys. In surveys with more than one response for a pair of images (respondent used the back button in the browser), the final result was used on the assumption that this is what the respondent intended. There were 25 surveys where the user gave the same value for all answers (straight-lining) and 15 where the respondent gave the same number for the two images under consideration in all 20 comparisons; these were removed from the data, leaving 1260 complete surveys.
Processing the raw numbers
As the ground-truth image was used as the control, it is the difference between the score for the ground-truth and the recoloured image in which we are interested. However, differences between individual participants that may bias the results still need to be accounted for. One participant may score all pairs lower than another participant, with all else equal. As the ordinal values for scoring and the gaps between them are subjective, two participants who perceive the same difference between two images may still give a larger/smaller difference in scores compared to each other. Differences in viewing equipment/environment may also have systematic effects between two respondents. For this reason, it is necessary to consider the trend for the participant as a whole over the 20 image pairs to which they respond. The method of [Sheikh et al., 2006] can then be used to calculate the difference for each pair.
$$d_{ij} = r_{ij} - r_{i,\mathrm{ref}(j)}$$
where r_ij is the raw score for the i-th participant and j-th image, and r_i,ref(j) denotes the raw quality score assigned by the i-th participant to the reference image corresponding to the j-th recolourised image. The raw difference scores d_ij for the i-th participant and j-th image are converted into Z-scores:
$$z_{ij} = \frac{d_{ij} - \bar{d}_{i}}{\sigma_{i}}$$
where d̄_i is the mean of the raw difference scores over all of the images ranked by participant i, and σ_i is the standard deviation of the differences for participant i. z_ij then represents a score for an image j by participant i. In most cases, in this document, the score for an image j is given as the mean over all the participants that responded to it. Because the ground-truth is used as the control and the processing is based on the statistics of the participants, the ground-truth images are all considered of equal quality. When their z-scores are calculated and averaged, they all come to the same value. As there were more recolourisations that scored lower than the ground-truth than those scoring higher, the average score for the ground-truth will have a positive non-zero value.
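A minimal sketch of this processing, assuming matched arrays of raw recolourisation and reference scores; the sign of the difference score follows the definition above, and the use of the sample standard deviation (ddof=1) is an assumption.

```python
import numpy as np

def participant_z_scores(r_recolour, r_ref):
    """
    r_recolour: (n_participants, n_images) raw 1-5 scores for the recolourised images
    r_ref:      the raw scores the same participants gave the ground-truth shown alongside
    Returns the per-participant z-scores z_ij of the difference scores d_ij.
    """
    d = r_recolour - r_ref                          # d_ij = r_ij - r_i,ref(j)
    d_mean = d.mean(axis=1, keepdims=True)          # mean difference per participant
    d_std = d.std(axis=1, ddof=1, keepdims=True)    # std of differences per participant
    return (d - d_mean) / d_std                     # z_ij

# The score reported for image j is then the mean of column j over all participants who rated it:
# mean_opinion = participant_z_scores(r_recolour, r_ref).mean(axis=0)
```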
6 Results
6.1 How do objective measures correlate with human opinion?
As outlined in section 2, colourisation researchers have attempted to use many different types of objective measures to assess the quality of colourisations. We test if the human scores correlate with the commonly used objective measures. As ordinal data is used, two rank-order correlation measures are applied to the results, namely Spearman-r [Spearman, 1904], see Table 1, and Kendall-tau [Kendall, 1938], see Table 2. The shaded values in the tables represent values where the p-value of the rank-order correlation was less than 0.05, indicating statistical significance. We test against SSIM [Wang et al., 2004], MS-SSIM [Wang et al., 2003], MSE, RMSE/L2, MAE/L1, Colourfulness and Colourfulness Difference [Hasler and Süsstrunk, 2003], PSNR, CDR [Kim et al., 2021] and LPIPS [Zhang et al., 2018] for both VGG [Simonyan and Zisserman, 2015] and Alexnet [Krizhevsky et al., 2012]. We cannot test FID [Heusel et al., 2017] or SC-PSNR [Lee et al., 2020] [Hasler and Süsstrunk, 2003]. CDR is developed from the details in [Kim et al., 2021] and relies on SKImage's SLIC library. We also test in three different colour spaces where the method is not specific to a particular colour space, namely a*b*, hc (see Equations 1 and 2) and RGB. a*b* and hc do not include the L*-channel in the comparison as L* is common in all pairings. RGB incorporates the L*-channel but in a different formulation. We can see from Tables 1 and 2 that MS-SSIM, when used with either a*b* or RGB, has the strongest correlation with human judgement. Standard SSIM with a*b* is the only other that has a statistically significant correlation for all images. hc seems to be a poor space in which to use any of the objective measures despite most of the changes in our dataset being made in this formulation, see section 3. Even for the top performer, MS-SSIM with a*b*, the correlation for the complete set with Spearman is 0.567 and Kendall is 0.389. In general, values above 0.7 are considered "Strong Correlation", so none of the objective measures meets that threshold, though a small number do reach this for an individual image. In short, the objective measures employed in the literature do not work well for colourisation. There is scope here for a more targeted objective measure, and our HECD dataset is publicly available to help in this search.
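As an illustration, the sketch below computes one of the objective measures (SSIM restricted to the a*b* channels) and its rank-order correlation with mean opinion scores; it assumes a recent scikit-image version (channel_axis argument), 8-bit CIEL*a*b* arrays, and hypothetical score arrays.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau
from skimage.metrics import structural_similarity

def ssim_ab(gt_lab, recolour_lab):
    """SSIM computed on the a*b* channels only (the L*-channel is common to both images)."""
    return structural_similarity(gt_lab[..., 1:], recolour_lab[..., 1:],
                                 channel_axis=-1, data_range=255)

# objective = np.array([ssim_ab(gt, rc) for gt, rc in image_pairs])  # hypothetical image pairs
# mos = np.array([...])                                              # matching mean opinion z-scores
# rho, p_rho = spearmanr(objective, mos)
# tau, p_tau = kendalltau(objective, mos)
# print(f"Spearman r = {rho:.3f} (p = {p_rho:.3g}); Kendall tau = {tau:.3f} (p = {p_tau:.3g})")
```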
Is the ground-truth the perfect colourisation for its grey-scale prior?
The method currently employed in most deep-learning colourisation systems is to take any natural image dataset, convert the images to CIEL*a*b*, then use the L*-channel as the prior (input) and predict the a*b* colour channels, with the colour channels from the dataset as the ground-truth. Figure 2 shows that human observers do not rate the ground-truth as higher than all other colour versions that were created in the dataset. Approximately 36% of the area is above the mean ground-truth score (to the right of the blue-dashed line). This shows that many more plausible colourisations of a scene exist than the ground-truth, but will, in current training regimes, be penalised for being different from the ground-truth. The 20 ground-truth images in our dataset are quite good as they come from the BSD dataset. The images are not necessarily natural or high-quality in many commonly used large image datasets; They may be in black and white, duo-tone, or stylised. In classification models, these unnatural or poor quality images are a feature rather than a bug as the desire is to train models to recognise objects even in poor quality images. It therefore makes sense for poor quality images to have the same label as high-quality images if they contain the same object. For generative tasks, such as colourisation, when the task requires a model to generate high-quality colourisations, then poor colourisations in the dataset should not have an equal label to good ones. However, the lack of a reliable no-reference measure for the quality of a colourisation leaves little choice but to treat all images in a dataset as equal-maximum colourisation quality. The only alternative is to assess and sort the large training datasets by resource-intensive human visual inspection.
Does white-balance correction of the ground-truth image lead to higher opinion score?
Photoshop's [Adobe, 2021] white-balance auto-correction was used to produce a white-balance corrected version of each ground-truth image. Using only the direct comparisons between the ground-truth images and WB corrected images resulted in a score of 0.376 for the white-balanced images and a mean of 0.364 for the ground-truth images. This difference is minimal and has a statistical significance of p=0.058, using the Mann-Whitney u-test [Mann and Whitney, 1947]. While the traditional threshold of p < 0.05 is arbitrary, the mean difference has not reached that threshold, and with such a slight difference in the value of the mean, white-balance correcting the images in a dataset before training will have only a minimal effect unless there is reason to believe the images in the dataset have particularly bad colour casts.
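A sketch of this comparison using SciPy's Mann-Whitney U test; the score arrays below are simulated placeholders built from the reported means, not the actual survey data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
z_wb = rng.normal(0.376, 0.4, 400)   # hypothetical opinion z-scores, white-balance corrected images
z_gt = rng.normal(0.364, 0.4, 400)   # hypothetical opinion z-scores, ground-truth images

u_stat, p_value = mannwhitneyu(z_wb, z_gt, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
```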
How do SOTA colourisation algorithms fare?
Six state-of-the-art colourisation algorithms' outputs were included in the HECD dataset. The choice of algorithms was made primarily on the availability of implementation and the ability to accept the exact image dimensions used in BSD images. The results in Figure 3 and Table 3 show that the two commercial products, DeOldify (from MyHeritage.com [Antic, 2021]) and Photoshop [Adobe, 2021], edge ahead of all the others, which are considerably less recent than the commercial products. DeOldify came top in the surveys; the difference to Photoshop was compared using the Mann-Whitney u-test [Mann and Whitney, 1947]. The mean score for the ground-truth images was still higher than the mean for any of the SOTA methods (Table 3). Many of the human-evaluation methods outlined in section 2 found that their method could fool a human evaluator or obtain a higher score from a human evaluator on some occasions. We also find this to be the case, as evidenced by the area under the curves in Figure 3 to the right of the dashed line. This area represents the proportion of samples from each model that achieved a higher score than the ground-truth when it and the ground-truth appeared together for comparison scoring.
Figure 2: The distribution of responses after the processing in section 5 for all reference images individually and all together. The blue-dashed line shows the value for the ground-truth images. All ground-truth images are assumed equal as we grade on difference scores from the ground-truth. Any area under the curve to the right of the blue line represents scores where the participant gave the recoloured version a higher score than they gave the ground-truth. We can see that all references have a large area to the right of the ground-truth score. This shows that the ground-truth is far from the most plausible colourisation, as judged by human evaluation.
Figure 3: The blue dashed line shows the average ground-truth score, when compared against SOTA colourisations, of 0.397. This is higher than the average of the SOTA colourisation methods, but we can see that all methods achieve some part of the distribution of their scores which is higher than the average ground-truth score. The blue dots represent the mean score of individual images. These can be explored further with our interactive tool.
Image Modification Statistics
Figure 4 shows the effect of making a single modification type to a reference image. These are all relative changes that should be understood in terms of the reference image. The L*-channel is held fixed for all these modifications. Units, such as standard deviation, refer to the statistics of the reference image and so will represent a different absolute
value in each case. Figures 4 (a) and 4 (b) show that when the statistics of the chroma of a reference image change, this will, in general, cause a deterioration in mean opinion score, with the caveat that the participants seemed to prefer slightly higher chroma than the reference. Figure 4 (c) is the equivalent change to 4 (a) but for only a colour segment of the image, with results that are similar but less pronounced, as the rest of the pixels in the image retain the reference statistics. Figure 4 (d) shows the effect of spatially shifting the colour channels relative to the L*-channel, causing deterioration with an increase in spatial misalignment. However, it should be noted that slight misalignment leads to a relatively small drop in opinion score, particularly when we consider that all pixels are misaligned. We can extrapolate that local colour bleeding across boundaries in colourisations by a small number of pixels will have a relatively small impact on opinion score. Indeed chroma subsampling, widely used in image and video encoding, utilises the Human Visual System's lower acuity in chroma. Figure 4 (e) shows the effect of changing the hue of a segment. When the data is split into its two reference image categories, namely the original ground-truth and the white-balance corrected image derived from the ground-truth, the responses of these are broadly similar. However, when the data is separated into hue changes to colour segments representing natural objects and those representing synthetic objects, a clear difference between the two groups emerges. Examples of natural objects are skin tones and foliage. Examples of synthetic objects are painted surfaces and textiles. Figure 4 (e) shows that both categories see a deterioration in opinion score with medium to large changes in hue for a segment. However, this deterioration is relatively small for synthetic segments compared to the large change for natural objects. While synthetic objects can theoretically take on any hue, there is still a drop in opinion score with large changes in hue for a colour segment. This may be because the L*-channel prior and the surrounding colours (which did not change from the reference) constrain the most plausible hues to a small band of hue values close to the ground-truth. For colour segments of natural objects, the response is quite different. Small changes in hue to a natural segment may increase the mean opinion score. This may be because the small correction looks more plausible, but it could also be the inherent noise in opinion scores, particularly due to the more dense sampling close to the reference hue. However, the trend is that medium to large changes in natural segment hue see a large deterioration in the mean opinion score. By directly comparing Figure 4 (e) with Figures 4 (a) and 4 (d), we can see that changing the hue of a natural segment by 64/256 of the full-scale has an effect on the opinion score equivalent to misaligning the colour channels with L* by 0.03 of the dimensions of the image, and a greater effect than globally changing all of the chroma values by two standard deviations of the chroma in the reference image. This tells us that not all pixels are created equal in colourisation performance.
Figure 4: In the figure are various subsets of the data relating to specific modifications, outlined in section 3, and the effect of those on mean opinion score. All graphs have the same y-scale (mean opinion z-score) so that comparisons of different types of change can be made at a glance.
Sub-figures (a) and (b) look at relative global changes to the statistics of the chroma of the reference image. Sub-figure (c) is the equivalent of (a) but for changes to only a colour segment of an image, leaving all other pixels the same as the reference. (d) shows the effect of spatially shifting (misaligning) the colour channels relative to the L*-channel. (e) looks at the effect of changing the relative hue of a colour segment while leaving all other pixels unchanged from the reference. We look at two data slices for global changes: whether the recolour version was derived from the original ground-truth or from the white-balance corrected image.
For segment modifications (c) and (e), we also look at those slices as well as slicing on whether the modified segment represented a natural or synthetic object.
Conclusion.
We have shown that the widely-used objective measures utilised in the colourisation literature do not correlate well with human opinion. MS-SSIM shows the highest correlation in our findings but is still too low to make it an appropriate gauge of colourisation quality. The hue of natural objects stands out as significant to the human opinion of the naturalness of an image. Observers seem tolerant of minor differences in hue to natural objects, but medium to large changes are heavily penalised. The observers are relatively tolerant of all changes to the hue of synthetic objects. There is a general trend towards a preference for more saturated (higher average chroma) images. Small increases to the chroma of the ground-truth images led to higher opinion scores, but increases beyond that led to a deterioration in opinion score, as did any decrease in the chroma from the ground-truth. The trends were similar when changes were only made to the chroma of small colour segments; The effects were smaller because only some pixels were affected by the change. However, the effect is not necessarily proportional to the number of pixels, as the observer may be guided by the discrepancy in chroma to the surrounding regions. Both increasing and decreasing the global contrast of chroma caused a deterioration in opinion scores. The observers registered a slight change in opinion for small global registration discrepancies between the colour channels and the L*-channel. Increasing deregistration led to a significant deterioration in opinion scores. This suggests some tolerance to small amounts of colour bleeding but intolerance to more significant amounts. We can assume some cross over with the hue of natural objects here; If de-registration problems change the hue of a natural object, we will again see a significant deterioration in opinion score. Finally, caution should be exercised in simply treating all colour images in a data set as perfect colourisations. The results show that many versions in our limited set of arbitrary modifications scored higher than the ground-truth. Auto-white-balance correction of ground-truth images brought only a minor improvement on average, though it may bring a more significant improvement if the white-balance is poor in the ground-truth images. | 2022-04-12T01:16:20.239Z | 2022-04-11T00:00:00.000 | {
"year": 2022,
"sha1": "e49a01c44166e1da7b4fb3b4beb1fc7cdf13562e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e49a01c44166e1da7b4fb3b4beb1fc7cdf13562e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
209789013 | pes2o/s2orc | v3-fos-license | Waste to Carbon: Biocoal from Elephant Dung as New Cooking Fuel
The paper presents, for the first time, the results of fuel characteristics of biochars from torrefaction (a.k.a., roasting or low-temperature pyrolysis) of elephant dung (manure). Elephant dung could be processed and valorized by torrefaction to produce fuel with improved qualities for cooking. The work aimed to examine the possibility of using torrefaction to (1) valorize elephant waste and to (2) determine the impact of technological parameters (temperature and duration of the torrefaction process) on the waste conversion rate and fuel properties of resulting biochar (biocoal). In addition, the influence of temperature on the kinetics of the torrefaction and its energy consumption was examined. The lab-scale experiment was based on the production of biocoals at six temperatures (200–300 °C; 20 °C interval) and three process durations of the torrefaction (20, 40, 60 min). The generated biocoals were characterized in terms of moisture content, organic matter, ash, and higher heating values. In addition, thermogravimetric and differential scanning calorimetry analyses were also used for process kinetics assessment. The results show that torrefaction is a feasible method for elephant dung valorization and it could be used as fuel. The process temperature ranging from 200 to 260 °C did not affect the key fuel properties (high heating value, HHV, HHVdaf, regardless of the process duration), i.e., important practical information for proposed low-tech applications. However, the higher heating values of the biocoal decreased above 260 °C. Further research is needed regarding the torrefaction of elephant dung focused on scaling up, techno-economic analyses, and the possibility of improving access to reliable energy sources in rural areas.
Introduction
It is estimated that there are around 450,000 elephants today, of which 400,000 are in Africa and 50,000 in Asia. In Africa, these mammals live in 34 countries (Angola, Benin, Botswana, Burkina Faso, Cameroon, Central African Republic, Chad, Congo, Ivory Coast, Equatorial Guinea, Eritrea, Ethiopia, Gabon, Ghana, Guinea-Bissau, Kenya, Liberia, Malawi, Mali, Mozambique, Namibia, Niger, Nigeria, Rwanda, Senegal, Sierra Leone, Somalia, South Africa, Sudan, Tanzania, Togo, Uganda, Zambia, Zimbabwe), and on the Asian continent they can be found in 15 countries (India, Nepal, Bhutan and Bangladesh, China, Burma, Thailand, Cambodia, Laos, Vietnam, Malaysia, Andaman Islands, Sri Lanka, Sumatra, Borneo) [1]. The daily amount of dung produced by one elephant is 100-150 kg. The weight of elephant excrement depends on the amount of consumed water [2][3][4]. Thus, taking into consideration the conservative estimate of the minimum dung weight (100 kg), the daily and annual dung production on a global scale is 45,000 Mg and more than 16 million Mg, respectively, i.e., a large amount of biowaste that could be valorized [2][3][4].
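A short back-of-the-envelope check of these figures, using the conservative assumptions stated above:

```python
elephants = 450_000          # rough global population estimate
dung_kg_per_day = 100        # conservative lower bound of daily dung per elephant (kg)

daily_mg = elephants * dung_kg_per_day / 1_000    # kg -> Mg (tonnes)
annual_mg = daily_mg * 365

print(f"Daily production:  {daily_mg:,.0f} Mg")   # 45,000 Mg per day
print(f"Annual production: {annual_mg:,.0f} Mg")  # ~16.4 million Mg per year
```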
From an ecological point of view, untreated animal waste, as well as its handling, air-drying and combustion without prior treatment, can be problematic due to health and environmental concerns, such as an elevated risk of contamination with pathogens, contamination of drinking water sources, and gaseous emissions of odor, hydrogen sulfide, ammonia, and other toxic gases [5,6]. In addition, the loss of nutrients from dung associated with current practices can also represent economic losses due to its lower value as a fertilizer [5].
We propose a solution to these problems with the introduction of the torrefaction process to manage and valorize the elephant dung. The resulting biocoal can be used as a fuel with a useful high heating value (HHV). Research with slow pyrolysis and hydrothermal carbonization of other types of livestock manure resulted in HHVs ranging from 15.8 to 18.4 MJ/kg [7]. Qambrani et al. [8] showed that biocoal from animal manure contains more N compared to biochar from plant residues. Although the pore structure is more organized in biochar from plant sources, the fertilizer quality and heavy metal adsorbability were found to be excellent in manure biochars. On the other hand, some raw waste types (such as poultry manure or sewage sludge) can contain a large amount of copper and zinc, which limits their use as a fertilizer. The proposed concept to valorize elephant manure can provide new technologies for using the torrefaction process in rural areas, which can be used to obtain better quality fuel and fertilizer.
To date, several methods to valorize elephant dung have been proposed. Vermicomposting is a biological process in which the organic fraction of dung is decomposed by microorganisms and earthworms under controlled environmental conditions to a level at which it can be applied on arable land. This method can be ecologically and economically profitable [9]. Vermicomposting of animal dung from the zoo was investigated at pilot scale by a team of scientists in Mexico [6]. Elephant dung was also used for research by scientists in Thailand for the production of biogas in co-fermentation with water hyacinth and fermentation on a laboratory scale. In the case of co-fermentation, the calorific value of biogas was 15.05 MJ·m⁻³ [10,11].
Biohydrogen production through anaerobic mixed cultures of microorganisms found in elephant dungs was also researched in laboratory conditions.It is based on simultaneous saccharification and fermentation of cellulose.The bacteria break down the cellulose to glucose, and then non-cellulolytic bacteria from the formed glucose produces hydrogen [12,13].The microorganism's culture from elephant dung stimulated the production of H 2 from cellulose.It was assumed that cellulolytic bacteria in the dung originate from the plant diet of the elephant.Animal manure, including elephant dung, was also the subject of research conducted in Thailand on cellulolytic bacteria for the direct production of butanol from cellulose, which could be an alternative to fuel obtained from petroleum [14].
The knowledge about practical considerations for the valorization of elephant dung and the progression from lab to full-scale (e.g., costs of construction and operation ) is limited.There are also questions about the storage and distribution of finished products (e.g., fuel briquettes for cooking), which could be prohibitively expensive for long-range transport.Life-cycle analyses could be useful to assess the critical transport range [15].It is equally important to consider managing the residues (e.g., raw dung and sludge), which may require specialized collection, storage, treatment, and disposal.It has not been described yet how existing or developing technologies (anaerobic digestion, biohydrogen production) could be used for waste management, especially in rural regions in which elephant dung is Energies 2019, 12, 4344 3 of 32 available in large quantities.Thus, there is a need to find local-scale solutions suited for these regions, which should be safe, inexpensive, simple to build, use and maintain, dependable, and not generating another waste stream to manage.
We propose an alternative solution for elephant dung management via torrefaction (Figure 1). Torrefaction (a.k.a. 'roasting' or low-temperature pyrolysis) is a thermochemical process occurring at 200–300 °C in the absence of an oxidant. Jia et al. [16] described the possibility of using co-gasification of woody biomass and animal manure as a useful technology to utilize organic waste, which could be practical in the case of elephant dung as well. The elephant dung fuel produced may be an attractive source of rural fuel. For example, in India alone, 6.3% of all households use so-called 'dung cake' to produce the energy needed for cooking [17]. Assuming 1.34 billion people in India in 2018 [18] and that one household comprises 10 people, as many as ~8.4 million households use dung cake for energy production. Although the torrefaction process requires some energy, it is also the most promising technology for organic waste treatment in terms of greenhouse gas mitigation potential [19]. The produced biocoal, especially when pelletized, poses a lower environmental risk during transport, storage, and combustion, in addition to lowering the risks of sanitary and aquatic pollution [20,21]. Therefore, torrefaction could be one of the sustainable technologies for elephant dung utilization.
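The household estimate above can be reproduced from the cited figures; the short sketch below only restates that arithmetic (the household size of 10 people is the assumption stated in the text):

```python
# Rough check of the 'dung cake' household estimate cited above.
population_india = 1.34e9      # 2018 population estimate [18]
people_per_household = 10      # assumption stated in the text
share_using_dung_cake = 0.063  # 6.3% of households [17]

households = population_india / people_per_household
dung_cake_households = households * share_using_dung_cake
print(f"{dung_cake_households / 1e6:.1f} million households")  # ~8.4 million
```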
To date, no work has been carried out on the torrefaction of elephant dung as a method for the production of fuel. Local-scale torrefaction can address challenges with dung management, through its valorization, while improving the socio-economic situation of rural households. Therefore, the research carried out was aimed at determining:
• Whether torrefaction can be used as a method of preliminary valorization of elephant dung;
• Whether the duration of the torrefaction process at a given temperature affects the dung conversion rate (e.g., mass loss, energy densification, and improved fuel properties);
• How much energy is needed for the torrefaction of elephant dung.
Feedstock
The study used Asian elephant dung from the Zoological Garden located in Wrocław, Poland. The 5 kg sample was dried at 105 °C for 24 h in a laboratory dryer, followed by milling to a grain size of ≤0.425 mm with a laboratory knife mill (TESTCHEM, model LMN-100, Pszów, Poland) to make the sample homogeneous. Samples were frozen at −15 °C for further testing.
Biocoal Production Method via Torrefaction
A scheme of the experiment is shown in Figure 2. The biocoal production process was carried out in triplicate according to the methodology presented by [22], at six temperatures from 200 to 300 °C (20 °C intervals) and for 20, 40, and 60 min at each temperature, followed by a cooling phase. The biocoals were generated using a muffle furnace (Snol, model 8.1/1100, Utena, Lithuania). CO₂ inert gas was provided to the furnace to ensure non-oxidative conditions. The elephant dung samples were heated from 20 °C to the set point at 50 °C·min⁻¹. The cooling times were 38 min, 33 min, 29 min, 23 min, and 13.5 min from torrefaction set points of 300 °C, 280 °C, 260 °C, 240 °C, and 220 °C to 200 °C, respectively. After the CO₂ supply was cut off, the biocoals were removed from the furnace when the interior temperature was <200 °C. The mass of each sample was determined before and after the cooling process in order to calculate the mass loss. Dung samples of 10 ± 0.5 g (dry mass, d.m.) were used to produce biocoal.
Thermogravimetric Analysis (TGA) of Elephant Dung
Thermogravimetric analyses (TGA) were first performed under isothermal conditions to determine the kinetic parameters (k, the reaction rate constant, and E_a, the activation energy) of the torrefaction of elephant dung. Reaction rate constants were determined for 200, 220, 240, 260, 280, and 300 °C in accordance with the methodology and reactor set-up presented elsewhere [22]. First, the empty furnace was pre-heated to the set point. Then, 3 g of dry dung was placed in a steel crucible and kept in the furnace for 1 h. Mass loss was measured using a balance coupled to the steel crucible at 10 s intervals with 0.01 g accuracy. The methodology for calculating the kinetic parameters (reaction rate constant and activation energy) is presented in Section 2.6.2.
TGA analyses were also completed under non-isothermal conditions to obtain more comprehensive data on the thermal degradation of elephant dung. These analyses were performed at rising temperature (from 20 °C to 850 °C) at a heating rate of 650 °C·h⁻¹ (10.83 °C·min⁻¹). The sample was held for 2 min after reaching the set point. The study of kinetic parameters and thermal degradation was performed using a stand-mounted tubular furnace (Czylok, RST 40×200/100, Jastrzębie-Zdrój, Poland).
Differential Scanning Calorimetry (DSC) of Raw Elephant Dung
Differential scanning calorimetry (DSC) analysis was carried out using a differential scanning calorimeter (TA Instruments, DSC Q2500, New Castle, DE, USA). Approximately 6 mg of the tested material was weighed into an aluminum hermetic crucible. Each sample (n = 1) was then placed in the analyzer and heated from 10 °C to 300 °C at a heating rate of 10 °C·min⁻¹. N₂ inert gas was supplied at a flow rate of 3 dm³·h⁻¹. The analysis provided information on endothermic and exothermic changes during torrefaction.
Mass Yield, Energy Densification Ratio, and Energy Yield
The mass yield, energy densification ratio, and energy yield of each variant were determined based on Equations (1)–(3), respectively [27]:

MY = (m_b / m_a) × 100% (1)

where: MY – mass yield, %; m_a – mass of dry elephant dung before torrefaction, g; m_b – mass of dry biocoal after torrefaction, g.

EDr = HHV_b / HHV_a (2)

where: EDr – energy densification ratio, –; HHV_b – high heating value of biocoal, MJ·kg⁻¹; HHV_a – high heating value of raw elephant dung, MJ·kg⁻¹.

EY = MY × EDr (3)

where: EY – energy yield, %; MY – mass yield, %; EDr – energy densification ratio, –.

The ash-free value of the HHV was determined based on [28]:

HHV_daf = HHV × M_f / (M_f − M_ash) (4)

where: HHV_daf – high heating value on a dry and ash-free basis, MJ·kg⁻¹; HHV – high heating value, MJ·kg⁻¹; M_f – dry mass of fuel, kg; M_ash – mass of ash in fuel, kg.
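A minimal Python sketch of Equations (1)–(4) is given below; the function names and the example numbers are illustrative placeholders chosen to be close to the magnitudes reported for the 200 °C biocoals, not measured data.

```python
# Sketch of Equations (1)-(4); variable names mirror the text.
def mass_yield(m_a: float, m_b: float) -> float:
    """MY (%) = mass of dry biocoal / mass of dry dung * 100."""
    return m_b / m_a * 100.0

def energy_densification_ratio(hhv_b: float, hhv_a: float) -> float:
    """EDr (-) = HHV of biocoal / HHV of raw dung (same units)."""
    return hhv_b / hhv_a

def energy_yield(my: float, edr: float) -> float:
    """EY (%) = MY * EDr."""
    return my * edr

def hhv_daf(hhv: float, m_fuel: float, m_ash: float) -> float:
    """HHV on a dry, ash-free basis (MJ/kg)."""
    return hhv * m_fuel / (m_fuel - m_ash)

# Illustrative values only:
my = mass_yield(m_a=10.0, m_b=9.2)              # ~92 %
edr = energy_densification_ratio(12.6, 11.41)   # ~1.1
print(my, edr, energy_yield(my, edr))           # EY slightly above 100 %
```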
Calculation of Kinetics Parameters (Reaction Rate and Activation Energy)
The data obtained from the isothermal TGA analysis were used to determine the reaction rate constant (k) for each temperature, based on a first-order model [22]:

m_s / m_o = exp(−k·t) (5)

where: m_s – mass after time t, g; m_o – initial mass, g; k – reaction rate constant, s⁻¹; t – time, s.
Nonlinear estimation of k in Equation (5) for each temperature was performed with the Statistica 13.3 software (StatSoft, Inc., TIBCO Software Inc., Palo Alto, CA, USA). An Arrhenius plot (ln(k) vs. 1/T) was created from the k values for the individual temperatures [29], and a trend line was fitted:

ln(k) = a·(1/T) + b (6)

The activation energy (E_a) was then determined [22] as:

E_a = −a·R (7)

where: E_a – activation energy, J·mol⁻¹; a – the coefficient from Equation (6), K; R – gas constant, J·mol⁻¹·K⁻¹.
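The same workflow can be reproduced with open-source tools; the sketch below assumes SciPy/NumPy instead of Statistica, and the mass-loss array and the dictionary of fitted k values are placeholders standing in for the logged balance readings.

```python
# Hypothetical sketch: fit k per temperature from isothermal mass loss (Eq. 5),
# then estimate Ea from the Arrhenius plot (Eqs. 6-7).
import numpy as np
from scipy.optimize import curve_fit

R_GAS = 8.314  # J/(mol K)

def first_order(t, k):
    return np.exp(-k * t)          # m_s / m_o

def fit_k(time_s, mass_g):
    rel_mass = mass_g / mass_g[0]  # normalise to the initial mass
    (k,), _ = curve_fit(first_order, time_s, rel_mass, p0=[1e-4])
    return k

# One fitted k per torrefaction temperature (placeholder values, K -> 1/s):
k_by_T = {473.15: 1.2e-5, 493.15: 1.8e-5, 513.15: 2.9e-5,
          533.15: 4.5e-5, 553.15: 7.0e-5, 573.15: 1.1e-4}

inv_T = np.array([1.0 / T for T in k_by_T])
ln_k = np.log(list(k_by_T.values()))
a, b = np.polyfit(inv_T, ln_k, 1)   # ln k = a*(1/T) + b
Ea = -a * R_GAS                      # J/mol
print(f"Ea ≈ {Ea / 1000:.1f} kJ/mol")
```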
Calculation of Energy Demand for Torrefaction of Elephant Dung
The results from the DSC and TGA analyses were used to calculate the actual energy demand for processing dry elephant dung (heating the dung from 20 °C to 300 °C) in accordance with the methodology presented in a previous paper [30]. Omitting the TGA analysis would overestimate the energy needed to process the material, because the amount of material decreases during torrefaction as a result of devolatilization. As an example, the calculation for 1 g of raw elephant dung torrefied at 300 °C was considered. The total amount of energy needed to process raw elephant dung was calculated by adding the energy needed to evaporate the water contained in the raw dung to the result of the model for dry elephant dung. The energy needed to heat and evaporate the water was calculated with Equation (8) [31], where: Q – the total amount of heat needed to heat and evaporate the water, J; m – the mass of water in the sample, g; ΔT – the temperature difference between ambient temperature (20 °C) and the evaporation temperature of water, K.

Polynomial models of the influence of torrefaction temperature and time on the torrefaction process and on the fuel parameters of the biocoals were developed. These models were based on the measured data from the torrefaction process and the biocoal properties for each temperature and time, using a modeling approach similar to that described in our previous work [32]. Equations describing MY, EDr, EY, organic matter content, combustible parts, ash, HHV, and HHV_daf of biocoal were developed. Regression analysis used a second-degree polynomial in temperature (T) and time (t) with an intercept (a_1) and six regression coefficients (a_2–a_7) (Equation (9)). The confidence interval of the parameter estimates (a_1–a_7) was 95%. All parameters with p-values < 0.05 were assumed to be statistically significant. The results of the analysis are presented in the form of equations, together with the correlation coefficients (R) and determination coefficients (R²). The results of the DSC analysis were also subjected to polynomial regression in order to determine a useful model of the specific heat (SH) of elephant dung for 200–300 °C; polynomial regression was used because the torrefaction process has a non-linear character. The result is presented as an equation describing the change of the specific heat of elephant dung as a function of temperature, in the general form of Equation (10), with nine regression coefficients used to provide a higher level of agreement between the model and the raw data, where: SH – specific heat of elephant dung as a function of temperature, J·(kg·K)⁻¹. Nonlinear regression and evaluation of intercepts and regression coefficients (p < 0.05) were completed with Statistica software (13.3, StatSoft, Palo Alto, CA, USA).
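An illustrative calculation of the water term of Equation (8) and of the total processing energy is sketched below. The specific heat and latent heat of water are textbook values introduced here (they are not quoted in the text); the moisture content, the 484.81 kJ·kg⁻¹ model result, and the final ~1760 kJ·kg⁻¹ total are the values reported later in this paper, summed in the same way as in the text.

```python
# Illustrative energy-demand calculation around Eq. (8).
c_water = 4.19e3    # J/(kg K), specific heat of liquid water (textbook value)
L_vap = 2.257e6     # J/kg, latent heat of vaporisation near 100 degC (textbook value)
moisture = 0.4919   # moisture fraction of raw elephant dung (reported)

m_water = moisture                                       # kg water per kg raw dung
Q_water = m_water * (c_water * (100 - 20) + L_vap)       # J, heating + evaporation
Q_dry = 484.81e3                                         # J, model result for dry dung, 20-300 degC

print(f"Water heating + evaporation: {Q_water / 1e3:.0f} kJ/kg")   # ~1275 kJ/kg
print(f"Total (summed as in the text): {(Q_water + Q_dry) / 1e3:.0f} kJ/kg")  # ~1760 kJ/kg
```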
Statistical Analysis
An analysis of variance (ANOVA) was performed to evaluate differences between mean values, with the application of the post-hoc Tukey test at the p < 0.05 significance level. Statistica software (13.3, StatSoft, Palo Alto, CA, USA) was used for the statistical data evaluation.
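For readers without access to Statistica, an equivalent one-way ANOVA with a Tukey HSD post-hoc test can be run with statsmodels; the data frame below contains invented HHV replicates purely to make the sketch runnable.

```python
# Hypothetical ANOVA / Tukey HSD workflow (statsmodels assumed available).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "temp": ["200"] * 3 + ["260"] * 3 + ["300"] * 3,          # torrefaction temperature, degC
    "hhv":  [12.4, 12.6, 12.5, 12.9, 13.1, 13.0, 6.4, 6.6, 6.5],  # invented replicates
})

model = ols("hhv ~ C(temp)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                          # one-way ANOVA table
print(pairwise_tukeyhsd(df["hhv"], df["temp"], alpha=0.05))     # post-hoc Tukey test
```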
Result of the Torrefaction Process
The mass yields (MY) of the elephant dung biocoals (Figure 3) showed a downward trend with increasing process temperature. The highest mass yield values, above 90%, were obtained for biocoal generated at 200 °C. The lowest MY was for 300 °C, where the mass yield decreased to 66%. All regression coefficients were statistically significant (p < 0.05) in the MY model (R² = 0.75) (Table 1). Detailed MY data are shown in Table A2.
The energy yield (EY) of the biochar from elephant dung (Figure 4) also decreased with increasing temperature and did not change with time. The biocoals produced at 200 °C resulted in an EY of more than 105% compared to the raw material. However, the EY dropped below 68% for torrefaction at 300 °C. All regression coefficients were statistically significant (p < 0.05) for the EY model (R² = 0.85) (Table 2).

The energy densification ratio (EDr) of the biocoals generated from elephant dung (Figure 5) decreased with increasing temperature and did not change much with time. Biocoals produced at 200 °C had the highest EDr of ~1.1, while biocoals generated at 300 °C had the lowest EDr (~0.9). All regression coefficients were statistically significant (p < 0.05) for the EDr model (R² = 0.83) (Table 3).
Result of Proximate Analysis of Raw and Torrefied Elephant Dung
The content of organic matter (OM) decreased as the temperature and the retention time increased. The lowest OM value was 28.26%, obtained for torrefaction at 280 °C and 60 min and for torrefaction at 300 °C at 20 min and 40 min (Figure 6, Table A1). Analysis of variance showed that statistically significant differences occur between the results obtained at 260 °C, 280 °C, and 300 °C (p < 0.05) (Figure A1, Table A3). All regression coefficients were statistically significant (p < 0.05) for the OM model (R² = 0.83) (Table 4).

The ash content was inversely proportional to the OM content and increased to over 71%, in comparison to 50.81% for raw dung (Table A1), in biocoal produced at 280 and 300 °C for 60 min (Figure 7). Analysis of variance showed statistically significant differences between the results for 260 °C, 280 °C, and 300 °C (p < 0.05) (Figure A2, Table A4). All regression coefficients were statistically significant (p < 0.05) for the ash content model (R² = 0.83) (Table 5).

The content of combustible parts (CP) decreased with time and with the rise of the process temperature. Raw elephant dung had a CP of 48.9% (Table A1). During torrefaction, the CP decreased to 28.6% at 60 min and 300 °C (Table A1, Figure 8). The analysis of variance showed numerous statistically significant differences, the majority of which occurred between 260 °C, 280 °C, and 300 °C (Table A5, Figure A3). All regression coefficients were statistically significant (p < 0.05) for the CP model (R² = 0.67) (Table 6).

A decrease in the HHV of the biocoals produced from the elephant dung was observed with increasing temperature and time (Figure 9, Table A1, Figure A4). The highest HHV was obtained for the biocoal generated at 200 °C and 60 min. A similar trend was reported by Li et al. [33], who explained this phenomenon by the effect of specific biocoal properties (pH; C, H, N, S, O content; specific surface area) and also noted the possibility of predicting the biocoal yield of a group of feedstocks with similar physiochemical properties.
The average HHV was 13 MJ·kg⁻¹, which was higher than the HHV of raw elephant dung (by 1.59 MJ·kg⁻¹) and higher than the lowest HHV, for the biocoal obtained at 300 °C and 60 min (by 6.51 MJ·kg⁻¹). The HHV is affected by the high ash content of the biocoals and the raw material. Thus, it was decided to also estimate the HHV on an ash-free basis (HHV_daf). The highest average HHV_daf was obtained for the biocoal generated at 280 °C for 60 min (27.20 MJ·kg⁻¹) (Figure 10, Table A1). Regression coefficients for HHV and HHV_daf were statistically significant (p < 0.05); the proposed model worked well for HHV but was less representative for HHV_daf (R² of 0.74 and 0.21, respectively) (Tables 7 and 8). Analysis of variance of the average HHV values showed statistically significant differences between the results for 280 °C and 300 °C at 40 and 60 min (p < 0.05) (Figure A4, Table A6). This result has practical implications for the collection and initial processing of elephant dung: mineral ash content and impurities should be minimized in order to maximize the HHV.
Result of the Thermogravimetric Analysis (TGA) of Elephant Dung
Table 9 summarizes kinetics parameters based on the TGA analyses and the mass loss data.
Table 9. The values of reaction rate constants and activation energy for elephant dung torrefaction.

The obtained values of k were analyzed by ANOVA, which showed statistically significant differences (p < 0.05) between the biocoal produced at 300 °C and those obtained at 200 °C, 220 °C, 240 °C, and 260 °C. There were no statistical differences between the k values for 280 and 300 °C, nor among the k values in the 200–260 °C range. Kim et al. indicated that different optimal temperatures should be selected for different types of manure to maximize the energetic retention efficiency [34]. The energy yield of hydrochar (48.0–71.9%) is higher than that of pyrolysis char (31.5–52.4%), implying that the carbonization process, rather than the reaction temperature, is also a key factor affecting the energy yield of manure [35]. The TGA analysis showed the most substantial mass decrease in the first repetition, to 54% of the initial sample mass, while in the second and third repetitions the mass decreased to 64% and 62%, respectively. The loss of mass began at a temperature of ~300 °C and started to stabilize after exceeding ~600 °C (Figure 11).

New knowledge on the substrates of elephant dung was gained from the TGA analyses. There was a characteristic peak starting at ~330 °C with a maximum at ~500 °C, most likely related to the decomposition of cellulose and lignin (undigested by the elephant) from the consumed biomass. The decomposition of cellulose and lignin takes place at 305–375 °C and 250–500 °C, respectively [36]. No degradation of hemicellulose was observed based on the DTG (derivative thermogravimetry) analysis. The decomposition of hemicellulose takes place at 225–325 °C [36]. However, the apparent lack of mass change in this temperature range (Figure 11) does not necessarily indicate a lack of hemicellulose content. It is also likely that particular decompositions were superimposed [36] or could not be detected due to the limited precision of the thermogravimetric analyzer used.

Differential Scanning Calorimetry (DSC) of Elephant Dung
DSC analysis showed that two endoenergetic transformations occurred during heating (Figure 12). At the beginning of the experiment, energy was supplied to the sample to raise the temperature of the system. The first transformation began at 37 °C; here, energy was delivered to heat the sample and to initiate its transformation, which reached its maximum at 80 °C and ended at 146 °C. The total energy demand for this first transformation was 66.17 J·g⁻¹. After the first transformation ended, only the energy needed for heating the sample was supplied to the system (146–158 °C). The second transformation began at 158 °C, reached its maximum at 216 °C, and ended at 252 °C, requiring only 9.76 J·g⁻¹. After the second transformation, the energy required for heating decreased significantly, and above 252 °C an exothermic reaction occurred.
The total energy demand for the whole process, including heating and transformations of dry elephant dung, was 485.37 kJ·kg⁻¹ for the 20 to 300 °C range. The process energy demand estimated by the torrefaction model [30] decreased to 484.81 kJ·kg⁻¹, due to the mass loss during the process. In addition, the heating and evaporation of the water contained in raw elephant dung (moisture content 49.19%) results in an additional 1275.49 kJ·kg⁻¹ of energy demand (Equation (8)). Thus, the total energy demand for processing raw elephant dung (heating, moisture evaporation, and torrefaction) is 1760.30 kJ·kg⁻¹.
The Impact of Technological Parameters on the Efficiency of the Process

A related torrefaction study carried out on cow manure showed that the MY of torrefaction decreased with increasing process temperature [37], similar to the finding of this research. The torrefied elephant dung (200–300 °C at 40 min) had an MY of 100–68%, whereas it was 90–55% for cow manure at the same process conditions [37]. The differences in MY could be explained by a greater decomposition of biodegradable substrates at lower temperatures. Also, elephant dung had higher moisture and OM content compared with the cow manure. In addition, it has been reported that the specific surface area (SSA) can change as a result of morphological changes due to thermal condensation, which could be exploited for different materials [38]. The energy yield of torrefaction of cow manure decreased from around 92% at 200 °C to approximately 57% at 300 °C, whereas for elephant dung the corresponding values were about 110% and 60%. The EDr ratio for cow manure showed the same downward trend as for elephant dung [37]. It was also noticed that different degradation processes occur in the studied range of 200–300 °C: lignocellulose degradation occurs at approximately 120 °C, hemicellulose degradation at 200–260 °C, cellulose degradation at 240–350 °C, and lignin degradation at 280–350 °C [39]. Because only narrow temperature ranges were observed, this could explain the lack of a clear decreasing or increasing trend in the obtained moisture and MY.
Proximate Analyses of Elephant Dung and Biocoals
The average moisture content of the elephant dung was 49.19%. The moisture content of dung depends on the amount of water consumed by the animal. For example, pig manure can have a moisture content of ~35–82%, whereas that of cow manure is ~66–97% [40–42]. In the case of poultry manure, the moisture content ranges from ~5 to 40% [40]. The OM content of the studied elephant dung was 48.09% (d.m.). For comparison, the OM contents for Indian elephant and rhinoceros dung were 52% and 56%, respectively [43], and for cattle manure the OM content was ~74% [44]. These OM values are much lower than those reported in related torrefaction studies for pruned biomass of Paulownia (90%) [45] or brewery spent grain (96%) [46].
The HHV of the torrefied dung was not much higher than that of the raw sample (Table A1). For biocoal, the highest HHV was 13 MJ·kg⁻¹ (260 °C, 60 min), and a further increase in temperature and time caused a decrease in its value. A similarly low increase of HHV relative to the raw material was reported for cow dung by Pahla et al. [37], where the HHV increased from 16.78 to 18.64 MJ·kg⁻¹ (at 300 °C). The small increase of HHV in dung biocoal is directly affected by the low amount of fixed carbon (high ash content). During torrefaction, fixed carbon is enhanced by the thermal degradation of hemicellulose and part of the cellulose and lignin [49]. The decomposition of these constituents releases compounds with low energy content, leaving organic compounds with higher energy content [50]. Cow manure, similarly to elephant dung, did not show a high HHV enhancement, likely because it had less OM and more ash. Pulka et al. [28] tested sewage sludge torrefaction and encountered the same issue: the highest HHV was obtained for biocoal generated at 260 °C and 60 min, and a further temperature increase decreased the HHV. Therefore, it may be assumed that at temperatures > 260 °C and times > 60 min, some organic components of elephant dung and sewage sludge start to decompose and release volatiles with higher energy content.
No relationship was observed between the moisture content of the biocoals from elephant dung and the process temperature and time. This is likely because dry material was used for the torrefaction process. Small differences in the moisture content of the biocoals can result from the time that elapsed between their generation and the moisture content determination. Stored biocoals can adsorb moisture (e.g., from the air), making biomass-derived fuels less advantageous compared with coal [50].
There was a sharp drop in the OM and a simultaneous increase in ash content for torrefaction above 260 °C. This also caused a decrease of the HHV and an increase of the HHV_daf, especially in the biocoals produced at 260 °C and 300 °C. A practical implication is that a torrefaction process conducted at temperatures from 200 °C to 260 °C (regardless of time) will have only a small impact on the decrease of the HHV of the biocoals.

Furthermore, torrefaction at 200 °C for 20 min (the lowest temperature and the shortest time) can be recommended to maximize the HHV and minimize the cost of the torrefaction process. In addition, the lack of significant differences (p < 0.05) in the 200–260 °C range allows torrefaction of elephant dung to be used as a low-tech technology, i.e., one that can be controlled without an accurate measurement system, which is especially important for rural areas. Moreover, during torrefaction of a more substantial amount of dung, it would be challenging to heat all the processed material evenly and then cool it down quickly. However, based on the apparent lack of a temperature-time effect in this range, the risk of generating substandard biocoals appears to be relatively low.
The highest HHV_daf value (27.2 MJ·kg⁻¹) was observed for 280 °C and 60 min (Table A1). This value is theoretical, and it is worth considering ways of reducing the ash content of elephant dung, because it may have a high energetic potential after processing. Considering ash-free elephant dung after torrefaction, it is possible to obtain a better solid fuel than commercially available pellets. For example, pellets made from pine sawdust, wheat straw, corn settlements, and agricultural residues have HHVs of 19.5, 17.5, 18.8, and 18.1 MJ·kg⁻¹ (HHV_daf of 19.6, 19.0, 19.0, and 19.8 MJ·kg⁻¹), respectively (Table A8) [51]. These values are still relatively low compared to the 27.2 MJ·kg⁻¹ of ash-free biocoal from elephant dung.
The ash in elephant dung is derived from two primary sources: (1) ash introduced during collecting, transporting, storing, and processing, and (2) biogenic ash inside the plant tissue consumed by the elephant. The sum of these sources is referred to as the ash content. Biogenic ash can be removed from biomass using air separation. For woody pine forest residue, air separation costs ~2.23 $·Mg⁻¹ of biomass to reduce 40% of the total biogenic ash to <7% of the total biomass [52]. Ash can also be removed from biomass cells via chemical pre-processing that solubilizes it; here, knowledge of the exact morphology and chemical state of the ash is needed to determine the most effective removal methods [52]. From a practical point of view, elephant dung should be collected with as few soil impurities as possible. During subsequent transportation, drying, etc., the dung should not be exposed to dust. If prevention is not enough, air separation could be considered due to its relatively low operational cost. Nevertheless, dung morphology is an important factor for air separation. Dung is much more brittle and lighter than wood; because of this, chipped particles of dung could be lighter than the mineral impurities, causing a different distribution of ash among particular fractions than in the case of wood. Although some chemical pre-processing technologies achieve a high level of ash removal (over 90% removal of alkaline earth and alkali metals) [52], their technological infrastructure and cost would be difficult to adopt in underserved areas.

Another important aspect concerns the supply chain, which may influence the quality of the biocoal and the efficiency of the process. The collection of elephant dung has a dispersed character with a random accumulation rate in any specific location, especially where elephants live in natural habitats. The dung is usually collected directly from the ground, which may increase the ash content. However, when dung is exposed to climatic conditions (especially wind and sun), the overall effect might be beneficial drying, which brings benefits for transportation and torrefaction efficiency. Pre-dried material is more suitable for collection and transportation (less water to be transported) and is less prone to decay. In the case of breeding elephants or using them as work animals (as practiced in South-East Asia), the accumulation of dung in one specific area is more likely, and natural drying may not be sufficient. Therefore, one solution could be pre-drying in a dedicated dryer that uses a warm air stream for water removal; solar energy could be used as the heat source. Such a solution could address several practical problems: (i) long-range transport of untreated, wet dung to processing sites is energy inefficient, as a significant portion of the transportation cost is spent on transporting water [5]; (ii) long-term storage of raw biomass can be problematic and impractical because the piled biomass can decompose over time, resulting in a decrease of the useful HHV [7].
Thermogravimetric Analysis of Raw Material and Kinetic Parameters of Torrefaction
The TGA analyses of elephant dung reported here are the first of their kind in the literature. A comparison of the kinetic parameters with the literature is therefore confounded by the variety of determination methods used for other materials. For this reason, we discuss the kinetics of a subset of the most common and related substrates. Considering that the elephant diet consists mostly of grasses, activation energies for some grass plants are available: the activation energy of wheat straw and sorghum determined for the 250–450 °C range was 176 kJ·mol⁻¹ and kJ·mol⁻¹, respectively [53]. For comparison, lignocellulosic materials (e.g., woody biomass) have an E_a of 103–165 kJ·mol⁻¹ [54,55]. The values presented in those papers were obtained for non-isothermal conditions and the pyrolysis temperature range.

It should also be noted that the greatest E_a was determined for MSC, which had the highest OM content, and much smaller values were obtained during the torrefaction of elephant dung and SS, where the OM contents were lower by ~20%. An opposite trend was observed for the k value, which was highest during the torrefaction of SS, followed by MSC and elephant dung. This may indicate that the OM content is one of the critical drivers of a waste's kinetic properties, such as E_a and possibly k.
Differential Scanning Calorimetry of Raw Material
DSC analysis showed that two endothermic reactions (37–146 °C and 158–252 °C) and one exothermic reaction (252–300 °C) occur during the torrefaction process (Figure 12). The first transformation observed on the DSC plot may be attributed to water evaporation. Interestingly, the elephant dung was dried at 105 °C before the DSC test; the presence of water in a previously dried sample could thus be due to hygroscopicity (the sample absorbed some water from the atmosphere before the test; biocoals are known to be affected by this phenomenon) [58]. The first transformation ended above the drying temperature (105 °C), so it is probably associated with the evaporation of bound water. The nature of the second endothermic transformation is unknown; to our knowledge, there are no DSC data on elephant dung for comparison. This transformation may be related to the degradation of residual hemicellulose. Degradation of hemicellulose takes place at a lower temperature range (225–325 °C) than the degradation of cellulose (305–375 °C) [36]. After the second endothermic transformation ended, the heat flow started to decrease, which is related to an exothermic reaction (253–300 °C). This exothermic reaction corresponds to the mass loss observed at the beginning of the process on the TG/DTG plot (Figure 11). Interestingly, neither of the endothermic reactions was apparent on the TG/DTG plot (Figure 11). This might be a result of insufficient precision of the laboratory balance used, or of transformations that were not related to mass loss. In general, endothermic reactions are related to depolymerization and volatilization processes, whereas exothermic transformations are due to the charring process [59]. Regardless, the DSC plot shows that elephant dung torrefaction is an (overall) endothermic process and requires energy delivery. Some energy cost savings might be realized by using the torrefied elephant dung as a fuel for the torrefaction process itself (Figure 1).

The high ash content of 50.81% (Table A1) is not without significance. It makes the TGA and DSC measurements less accurate, because smaller mass losses of organic compounds were measured. In the case of DSC, the endothermic reactions found at <200 °C could also be associated with water evaporation from ash components such as chlorine and potassium [60]. The growth of the mineral fraction lowers the activation energy of the pyrolysis reaction and accelerates exothermic thermochemical conversion reactions [61].
Conclusions
Initial valorization of elephant dung by torrefaction is proposed as a possible low-tech fuel production route in rural areas with an abundant supply. The proposed valorization could be used in households for cooking and heating. These studies have expanded the knowledge on the possibilities of torrefaction of elephant dung and provided practical knowledge about the fuel properties of torrefied elephant dung, such as the high heating value, combustible parts, ash content, and organic matter content. Based on the results, models of the torrefaction of elephant dung with evaluation of kinetic parameters have been proposed. The following conclusions arise from this research:
• Torrefaction improves the higher heating value of elephant dung. The torrefied elephant dung has an HHV of 13 MJ·kg⁻¹ compared to an HHV of 11.41 MJ·kg⁻¹ for unprocessed dung.
• Minimal process controls appear to be needed; thus, scaling torrefaction up to larger batches of dung seems feasible, but due to the lack of data these options need further tests at a technical scale. Biocoals of similar quality are obtained in the 200 °C to 260 °C range regardless of the duration of the process (20 to 60 min).
• The recommended torrefaction temperature for elephant dung is 200 °C, due to the lack of significant improvements in fuel properties with increasing process temperature.
• The total energy needed to heat the dry elephant dung from 20 °C to 300 °C was approximately 485 kJ·kg⁻¹ (obtained under laboratory conditions) and 484.81 kJ·kg⁻¹ (obtained from calculations) once the mass loss during the process is factored in. The total energy demand for processing raw dung (heating, moisture evaporation, and torrefaction) was 1760.30 kJ·kg⁻¹.
This research has shown that there is potential for using elephant dung as a substrate for torrefaction and for its valorization into an improved fuel. The next step should be to identify the technological parameters for torrefaction of elephant dung at a larger scale, which is important for investment analysis and technology design, particularly in rural areas.
Figure 2. Scheme of experiments: biocoal production via torrefaction of elephant dung to determine the process kinetics with thermogravimetric analyses (TGA) and differential scanning calorimetry (DSC).
Figure 3. The influence of temperature and time on the mass yield of biocoal from elephant dung.
Figure 4. The influence of temperature and time on the energy yield of biocoal from elephant dung.
Figure 5. The influence of temperature and time on the energy densification ratio of biocoal from elephant dung.
Figure 6. The influence of temperature and time on the organic matter content of biocoal from elephant dung.
Figure 7. The influence of temperature and time on the ash content of biocoal from elephant dung.
Figure 8. The influence of temperature and time on the combustible parts of biocoal from elephant dung.
Figure 9. The influence of temperature and time on the high heating value (HHV) of biocoal from elephant dung.
Figure 10. The influence of temperature and time on the HHV_daf of biocoal from elephant dung.
Figure A1. Presentation of differences in individual groups (of torrefaction time) for organic matter content in biocoals from elephant dung.
Figure A2. Presentation of differences in individual groups (of torrefaction time) for ash content in biocoals from elephant dung.
Figure A3. Presentation of differences in individual groups (of torrefaction time) for combustible parts in biocoals from elephant dung.
Figure A4. Presentation of differences in individual groups (of torrefaction time) for the high heating value of biocoals from elephant dung.
Table 3. Statistical evaluation of the energy densification ratio of biocoal from elephant dung.
Table 4. Statistical evaluation of the organic matter content of biocoal from elephant dung. R² = 0.83, R = 0.91; T* ranged from 200 °C to 300 °C, t* ranged from 20 min to 60 min; * more information in Section 2.2.
Table 6. Statistical evaluation of the combustible parts of biocoal from elephant dung.
a, b – letters indicate a lack of statistically significant differences between k values (p < 0.05).
Proximate Analysis of Raw and Torrefied Elephant Dung
The physical and chemical properties of the raw material and the produced biocoals were determined. The following tests were made in three replicates using standard methods:
Table 1. Statistical evaluation of the mass yield of biocoal from elephant dung.
Table 2. Statistical evaluation of the energy yield of biocoal from elephant dung.
Table 5. Statistical evaluation of the ash content of biocoal from elephant dung.
Table 7. Statistical evaluation of the high heating value of biocoal from elephant dung.
Table 8. Statistical evaluation of the high heating value on a dry ash-free basis of biocoal from elephant dung. R² = 0.74, R = 0.86; T* ranged from 200 °C to 300 °C, t* ranged from 20 min to 60 min; * more information in Section 2.2.
Table A1. Summary of the proximate analysis of the tested elephant dung and the biocoals resulting from its torrefaction.

Author Contributions: Conceptualization, P.S., M.H. and K.Ś.; methodology, M.H.; software, P.S., S.K., and K.Ś.; validation, P.S., K.Ś., and M.H.; formal analysis, M.H.; investigation, M.H. and S.K.; resources, M.H. and A.B.; data curation, P.S., K.Ś., A.B. and J.A.K.; writing-original draft preparation, P.S., M.H. and K.Ś.; writing-review and editing, P.S., K.Ś., S.S.-D., J.A.K.; visualization, P.S. and K.Ś.; supervision, A.B., J.A.K., and P.M.; project administration, P.S.; funding acquisition, P.S., A.B., and J.A.K. The research was funded by the Polish Ministry of Science and Higher Education (2015–2019), the Diamond Grant program #0077/DIA/2015/14. "The PROM Programme – International scholarship exchange of Ph.D. candidates and academic staff" is co-financed by the European Social Fund under the Knowledge Education Development Operational Programme PPI/PRO/2018/1/00004/U/001. The authors would like to thank the Fulbright Foundation for funding the project titled "Research on pollutants emission from Carbonized Refuse Derived Fuel into the environment", completed at Iowa State University. In addition, this project was partially supported by the Iowa Agriculture and Home Economics Experiment Station, Ames, Iowa, Project no. IOW05556 (Future Challenges in Animal Production Systems: Seeking Solutions through Focused Facilitation), sponsored by Hatch Act and State of Iowa funds. The authors declare no conflict of interest.
Table A2. Values of mass yield, energy yield, and energy densification ratio for biocoals.
Table A6.
Analysis of variance for high heating value (HHV). | 2019-11-22T01:15:14.869Z | 2019-11-14T00:00:00.000 | {
"year": 2019,
"sha1": "2304fe7ec09fd1868233743d082e98cccf27b235",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/12/22/4344/pdf?version=1573956791",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "fa4b8bba47e17b353e16cf122e8096eb36342a96",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
252738354 | pes2o/s2orc | v3-fos-license | Condition-dependence resolves the paradox of missing plasticity costs
Phenotypic plasticity plays a key role in adaptation to changing environments. However, plasticity is neither perfect nor ubiquitous, implying that fitness costs must limit the evolution of phenotypic plasticity in nature. The measurement of such costs of plasticity has proved elusive; decades of experiments show that fitness costs of plasticity are often weak or nonexistent. Here, we show that this paradox can be at least partially explained by condition-dependence. We develop two models differing in their assumptions about how condition-dependence arises; both models show that variation in condition can readily mask costs of plasticity even when such costs are substantial. This can be shown simply in a model where costly plasticity itself evolves condition-dependence. Yet similar effects emerge from an alternative model where trait expression is condition-dependent. In this more complex model, average condition in each environment and genetic covariance in condition across environments both determine when costs of plasticity can be revealed. Analogous to the paradox of missing trade-offs between life history traits, our models show that variation in condition masks costs of plasticity even when costs exist, and suggests this conclusion may be robust to the details of how condition affects trait expression. Our models demonstrate that condition dependence can also account for the often-observed pattern of elevated plasticity costs inferred in stressful environments, the maintenance of genetic variance in plasticity, and provides insight into experimental and biological scenarios ideal for revealing a cost of phenotypic plasticity.
Introduction
Phenotypic plasticity occurs when the same genotype produces different phenotypes in response to different local or developmental environments. Plasticity, when adaptive, allows organisms to track an environment-dependent optimum within a single generation, permitting expression of adaptive phenotypes in a new environment and preventing maladaptation in temporally- or spatially-variable environments (Charmantier et al. 2008). Genetic constraints are an insufficient explanation for limited plasticity. Empirically, this is because standing genetic variance is often observed for reaction norms (Scheiner 1993), and theoretically, because unless genetic constraints are complete they simply slow the rate of plasticity evolution (Via & Lande 1985; Van Tienderen 1991). This implies some fitness cost of plasticity must exist: there must be a fitness penalty for the ability to alter development or behavior in response to the environment, that balances with the benefits of tracking an environmentally-dependent trait optimum.

A fitness cost of plasticity is expected to manifest as a reduction in fitness of a plastic genotype compared to a non-plastic genotype that otherwise expresses the same trait value in the focal environment. We also examine whether this factor, condition-dependence, may account for the observation that assessed costs are relatively common when the test environment is stressful (Van Buskirk & Steiner 2009; Snell-Rood & Ehlman 2021). Finally, we use these insights to suggest experimental designs that will maximize the detection of any existing costs of plasticity.
Estimating Costs under Condition-independence
We begin by describing a causal model for fitness, and how such a causal model of fitness is related to a statistical regression model based on trait data to infer costs of plasticity. We focus on the case of two environments. We assume individual fitness in a focal environment is a function of trait expression in that environment (natural selection) as well as the cost of having a plastic genotype,

W_1 = ω_1 z_1 + C b (1)

where W_1 is individual fitness in environment 1 (the focal environment), ω_1 is natural selection in the focal environment, z_1 is the trait value expressed in the focal environment, b is the fixed value for plasticity of the genotype, and C is the cost of such plasticity. We refer to b as "genotype plasticity" to be clear that it is a property of a genotype. Throughout, we refer to plasticity measured on actual traits as "phenotype plasticity" to distinguish it from this true, fixed property of the genotype. Equation 1 represents an assumed causal model for fitness effects of plasticity; noteworthy is that we have assumed fitness costs of plasticity are a fixed property of a genotype, depending only on its genotype plasticity b (expected to be the same for a given genotype across all environments) and the cost parameter C (a population parameter fixed across environments). Thus, we assume costs are fixed within a genotype and are paid across all environments, which is perhaps the simplest form of a cost of plasticity. We make no assumptions about selection on z in other environments. DeWitt et al. (1998) identified five non-exclusive mechanisms which may generate a cost of plasticity. These include maintenance costs (the cost of maintaining sensory and regulatory mechanisms), production costs, information acquisition costs, developmental instability, and genetic costs. Of these categories, maintenance, information acquisition, developmental instability, and genetic costs are all expected to often be a fixed property of the genotype, and so are consistent with the assumptions of Equation 1. We discuss environment-dependent production costs later.

Importantly, genotype plasticity b cannot be measured with trait data from only one environment, and so is instead typically inferred from trait data from the same genetic backgrounds expressed in two or more environments in a GxE design. In this approach, b is assumed to be proportional to phenotype plasticity, that is, to trait expression in another environment, z_2. The cost of plasticity is thus assumed to be related to the covariance between fitness in one environment and trait expression in another, cov(W_1, z_2). Thus, with trait data from the same set of genotypes in two environments and fitness measured in one of the environments, the following regression has been proposed (Van Tienderen 1991; DeWitt 1998; Scheiner & Berrigan 1998) to infer costs of plasticity,

W_1 ~ β_0 + β_1 z_1 + β_2 z_2 (2)

where the estimate of β_2 is interpreted as an estimate of the cost of plasticity in equation 1, C. Note that a conceptually equivalent but more complex model could instead be fit. Figure 1A shows an example of how a cost of plasticity is expected to manifest: a reduction of fitness of a plastic genotype relative to a non-plastic genotype in a GxE experiment.
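As an illustration of how equation 2 is applied in practice, the minimal Python sketch below simulates genotypes under the condition-independent causal model of equation 1 and fits the regression; all parameter values are arbitrary and statsmodels/NumPy are assumed to be available. Under these assumptions the coefficient on z_2 recovers the true cost, which is the baseline against which the condition-dependent models below should be compared.

```python
# Minimal sketch of the cost-of-plasticity regression (Eq. 2) on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
a = rng.normal(0.0, 1.0, n)            # genotype intercepts
b = rng.normal(0.0, 1.0, n)            # genotype plasticity
eps1, eps2 = 0.0, 1.0                  # environmental values in env 1 and env 2
z1 = a + b * eps1                      # trait in the focal environment
z2 = a + b * eps2                      # trait in the other environment
omega1, C = 0.5, -0.3                  # selection in env 1 and a true cost of plasticity
W1 = omega1 * z1 + C * b + rng.normal(0.0, 0.2, n)   # fitness as in Eq. 1

X = sm.add_constant(np.column_stack([z1, z2]))
fit = sm.OLS(W1, X).fit()
print(fit.params)   # coefficient on z2 is ~ -0.3, i.e., the cost is recovered here
```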
Our goal is to understand when and why the regression model in Equation 2 may fail to adequately describe the causal fitness effects assumed in Equation 1, and to do so we need to develop more explicit descriptions of trait expression in each environment. We first describe trait expression in two environments independent of condition, z_1 = a_i + b_i ε_1, where b_i is genotype plasticity or the reaction norm for genotype i, and ε_1 the environmental […]. […] where R is condition: the total pool of resources an individual has available to allocate to phenotypes and fitness components (Figure 2A). In this model, we must also modify our fitness function to reflect the fact that condition R will itself affect fitness via other paths besides plasticity ([…]), where β_R is the strength of selection on condition, independent of its effects on plasticity. This parameter reflects the summed effects of all the condition-dependent traits that affect fitness. In this fitness model, condition affects fitness both directly, and indirectly via effects on plasticity. We assume in Model I, for simplicity, that condition is a property of a genotype that is constant across environments. This general model of condition-dependent plasticity is illustrated in Figure 2A. […] shows that the fitness effects of plasticity, Cov(w_1, z_2), are fundamentally influenced by variance in condition (see Figure 1B). This covariance will be negative, thus implying a net cost of plasticity in the focal environment, only when […] (Equation 7), where C is in absolute terms. Although the second term on the right hand side of Equation 7 can be controlled for in a multiple regression that includes z_1, even in this case the cost of plasticity must be greater than total selection on condition itself (β_R) in order for a negative cost to be inferred, an unlikely situation. When β_R is greater than the cost of plasticity, a positive fitness effect of plasticity will be inferred despite the existence of a cost, even when controlling for z_1, and the magnitude of this positive fitness effect will be proportional to the variance in condition, as illustrated in Figure 1B. In the appendix, we show that this model of costly plasticity can alternatively be framed as a specific case of the classic tradeoff model of van Noordwijk & de Jong (1986). This simple model shows that when plasticity is itself condition dependent, inferring a cost of plasticity in a GxE experiment will be difficult or impossible unless variation in condition or resource acquisition can be controlled. This model of condition-dependence can thus readily explain the variable and weak costs of plasticity that have typically been inferred in previous experiments. However, it cannot immediately explain the finding that costs are typically inferred to be greater in stressful or poor quality environments (except to the degree that such manipulations affect within-environment variance in condition), and it represents only one way that condition may impact variance in trait expression across multiple environments. In our second model we assume that fitness is caused solely by the trait and by plasticity, as in Equation 1, with no independent causal path for condition. Thus, this causal model of fitness is the same as that assumed in typical analyses of the cost of plasticity.
We now assume trait expression is the result of both plastic resource allocation and the total pool of resources an individual has available to allocate, or condition. Thus we have assumed the trait is costly, in terms of resources, to express. We can modify our description of trait expression accordingly, z_1 = A_1 R_{i,1} = (a_i + b_i ε_1) R_{i,1}, where A_1 is the pattern of resource allocation in environment 1 containing both plastic and fixed components, and R_{i,1} is the condition of individual i in environment 1, with a corresponding term for environment 2. In this model, condition affects fitness only indirectly via its effects on the expression of z. Variation in condition may be genetic or non-genetic. Importantly, in this model two separate components contribute to the phenotypic plasticity (that is, z_2): 1) plastic changes in resource allocation A across the environments determined by b (which we have assumed is costly, as stated in Equation 1), which can be described as the ability to match allocation strategy to the environment, and 2) plasticity arising from variation in resource acquisition R, or condition (which we have assumed is cost-free), which is simply the amount of resources an individual has available to allocate to traits. As we show below, if a substantial amount of variation in plasticity is determined by this second component, variation in condition, then costs will be difficult to infer even if they exist. This general model of trait expression is shown in Figure 2B, and can be seen as an […]. […] In Model I, we assume that all variation in plasticity is costly, and variation in plasticity is condition-dependent. In Model II, we assume condition-dependent trait expression in that trait expression is a function of condition and allocation, where phenotype plasticity arising from differences in condition across environments carries no cost, while plasticity arising from differential resource allocation across environments does carry a cost. In Model I, condition was assumed to have independent effects on fitness; in Model II condition only affects fitness via z. As before, we can expand […]. For simplicity, assuming no covariance between a or b and R, and where ℋ = […]. Although this expression is complex, it illustrates that fixed genotypic costs of plasticity have to be very high to manifest a negative Cov(w_1, z_2), particularly when Cov(R_1, R_2) is positive. We can see this in Figure 3A, noting the very high magnitude of C needed to generate a negative Cov(w_1, z_2). Because this magnitude can be interpreted relative to the strength of selection, this suggests that biologically-realistic values of a cost to plasticity will not generate negative relationships between trait expression in one environment and fitness in another, unless covariance in resource acquisition is almost perfectly negative across the two environments or there is no phenotypic selection (i.e. all fitness variance arising from costs of b). Importantly, Model II shows that when trait expression is condition dependent across two environments, the observed degree of phenotypic plasticity in the trait itself will be in part condition dependent, and variance in phenotypic plasticity will be in part determined by (co)variance in condition across the two environments.
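For readability, the Model II trait-expression model can be written out for both environments. This is again a hedged reconstruction from the symbol definitions in the surrounding prose (allocation A, condition R, reaction-norm intercept a_i, genotype plasticity b_i, environmental value ε); the multiplicative allocation-times-acquisition form is inferred from the text, not copied verbatim from the original equations.

```latex
% Hedged reconstruction of the Model II trait-expression equations in the two
% environments, assuming the allocation-times-acquisition form described above.
\begin{align*}
  z_{1} &= A_{1}\, R_{i,1} = (a_i + b_i \varepsilon_1)\, R_{i,1}\\
  z_{2} &= A_{2}\, R_{i,2} = (a_i + b_i \varepsilon_2)\, R_{i,2}
\end{align*}
% Phenotype plasticity (the difference between z_2 and z_1) therefore mixes two
% sources of variation: costly plastic allocation (via b_i) and cost-free
% variation in acquired resources, i.e. condition (R).
```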
If a measure of individual condition, R, is available, or a measure of b controlling for condition, then an appropriate multiple regression could be fit to obtain an estimate of the cost parameter C. However, such measures are rarely available and not typically used as control variables in previous GxE studies aimed at inferring costs of plasticity. For example, in a multiple regression of the form w_1 ~ β_0 + β_1 z_1 + β_2 z_2 (DeWitt 1998), any variance in condition R will affect both the measure of plasticity (z_2) and fitness in the focal environment (w_1), resulting in a biased estimate of the true cost of plasticity such that β̂_2 ≠ C. This effect of variance in condition on cost inference in a multiple regression is illustrated in Figure 3D-F; (co)variance in condition masks inference of costs in a multiple regression, regardless of the nature of environmental quality (Figure 3D, E). Whenever variance in condition exists, this variance generates variance in the phenotype plasticity that is independent from costs associated with genotype plasticity, and so this variance masks costs (Figure 3E; see also Figure 1B). In the extreme, when covariance in condition across environments is strong, costs of genotype plasticity are high, and the strength of natural selection on the trait in the focal environment is weak, it is possible for (co)variance in condition to lead to the inference of positive fitness effects of plasticity even when the expected evolutionary response of genotype plasticity is negative (Figure 3F). This extreme scenario suggests it is possible to infer, from a multiple regression analysis of trait data in a GxE design, direct selection for increased phenotypic plasticity when the evolutionary response of genotype plasticity is in fact negative. Although it is unclear how frequently such conditions occur, in general these effects (Figure 3D-F) illustrate that costs of plasticity incurred within the path from genotype to trait expression will be readily masked by condition, a point also made in simpler visual terms in Figure 1. Although we have assumed a cost of b at the level of resource allocation, assuming instead that a cost is incurred through differential resource acquisition would simply lead to a reversal of the expected effects of acquisition and allocation on Cov(w_1, z_2). Equation 5 also illustrates that average resource acquisition in each environment influences the ability of costs to be inferred. In particular, costs of plasticity are most readily inferred when the quality of the focal environment (R̄_1) is low relative to environment 2 (that is, when R̄_1 < R̄_2), and the interaction between average resource acquisition in the two environments determines whether allocation costs generate Cov(w_1, z_2) (Figure 3). These effects can be illustrated more simply in Figure 4, which shows how changes in mean condition in environment 1 affect fitness, while changes in mean condition in environment 2 affect the inference of plasticity, and so average condition across the two environments interacts to influence Cov(w_1, z_2).
Environment-dependent costs
We have explored the case where costs of plasticity are a fixed property of a genotype, paid in all environmental contexts, although inference of these costs may depend on the environmental conditions (as shown above). Alternatively, costs of plasticity may only be incurred when a plastic trait is actually expressed. For example, a "production cost" […], where the added term defines a fitness cost of plasticity that only manifests in certain environmental contexts E (the same environment in which fitness is measured). This cost could in principle be inferred if one had estimates for w, R, z, and b across multiple environments for a set of genotypes. However, the challenges produced by condition dependence, shown above for the much simpler case of fixed costs, will only be greater in this more complex case. As an example, consider the case of Figure 4B, C. In these cases, where costs are in fact fixed, an analysis of phenotype data across the different environments would suggest costs to differ across these environmental contexts. Note also that Model I could be modified so that C_1 ≠ C_2, which would lead to similar complexities of costs that vary across environments. […] We have shown that costs of plasticity may be especially hard to infer when the cost is paid at some point along the path from genotype to phenotype, rather than at the final level of phenotype expression itself. We believe that most previous ideas surrounding the causes of potential costs of plasticity fit this assumption, that costs are paid on one of several components of the path from genotype to phenotype. For example, DeWitt et al. (1998), who lay out 5 non-exclusive categories of costs of plasticity, propose that these costs likely arise somewhere on the path from detection of environment → information processing → regulatory mechanism → production machinery → trait expression. Our model shows that even when costs are high at any one point in the path, variance at another point that is cost-free will mask the cost when measured at the level of trait expression.
If plasticity is adaptive, an open question is how genetic variance in plasticity is maintained (which appears to often be the case; Scheiner 1993) in the face of persistent natural selection. Our model shows that if trait expression in multiple environments is condition-dependent, then expression of plasticity will itself be condition dependent and a portion of the standing variance in phenotypic plasticity will reflect standing (co)variance in condition across environments. Because we define condition as the total pool of resources an individual has available to allocate, we expect condition to be a large mutational target, and thus capture novel mutational input across the genome (Rowe & Houle 1996). Thus, our model suggests genic capture may provide a mechanism maintaining variance in adaptive plasticity.
We have attempted to provide some closure for the unsolved problem of costs of plasticity. In many ways, the study of phenotypic plasticity has moved on from the uncertainty surrounding costs that reached a zenith over a decade ago (Auld et al. 2010). Yet the issue remains, and although the field has moved on, two segregating viewpoints linger: do costs exist, but are hard to measure? Or are costs simply so weak as to be unimportant? We show, by recasting the problem of plasticity as one of differential resource acquisition and allocation, why costs may indeed be prevalent and important but difficult to reveal. Acknowledgements: We thank Erik Svensson and his lab group for discussion and feedback. Figure 2. […] environments. In A, Model I, the total pool of resources available, R, which we call condition, is correlated with both fitness components and the degree of costly plasticity that is expressed. We note that this model is agnostic to the exact developmental causality of condition-dependence, and assumes only the existence of a relationship between these components. Variance in condition can mask costs of plasticity, because in this case individuals that have high plasticity (and thus pay a high cost) will nonetheless have high fitness despite the cost, because condition also positively affects other fitness components. Panel B, Model II, represents a model of condition-dependent trait expression in two environments, where a condition-independent cost of plasticity may be found in any difference in resource allocation across environments (A1 vs A2). Trait expression (z) is determined by condition (R) and allocation (A) in environments 1 and 2. Differences in resource acquisition and/or allocation across environments lead to differential trait expression across environments, i.e., phenotypic plasticity. In this model, we have assumed costs of plasticity arise when genotypes differ in resource allocation strategy (A) across environments. In this model, (co)variance in condition can mask costs of plasticity by generating variance in trait expression across the environments that is independent of variation in the costs paid. Note that for simplicity, we have not expanded these path diagrams to directly compare the complete path to fitness; rather to illustrate the differing roles of condition. | 2022-10-07T13:26:03.296Z | 2022-10-03T00:00:00.000 | {
"year": 2022,
"sha1": "bebdc385fcce603f5f9cbca2766533e07e72ff62",
"oa_license": "CCBYNC",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2022/10/03/2022.09.30.510277.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "bebdc385fcce603f5f9cbca2766533e07e72ff62",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
148736095 | pes2o/s2orc | v3-fos-license | Spirituality , dual career family worker , demographic factors , and organizational commitment : evidence from religious affairs in Indonesia
The purpose of this study is to specify whether spirituality, age, and tenure have an effect on organizational commitment and to determine whether the moderating variable, i.e. dual career family worker, moderates the effect of spirituality, age, and tenure on organizational commitment. The samples of the study were 90 staff members and lecturers of three educational institutions under the Ministry of Religious Affairs of the Republic of Indonesia. Indonesian Journal of Islam and Muslim Societies Vol. 7, no. 2 (2017), pp. 277-304, doi: 10.18326/ijims.v7i2.277-304
Introduction
Establishing a code of ethics and work culture is one of the steps taken by the Ministry of Religious Affairs in improving governance at the ministry. There are three values that should be considered in order to implement the work culture: integrity, professionalism, and working to achieve a common goal and togetherness in work. Integrity is the persistence of attitudes in maintaining ethical principles and professionalism, keeping loyalty in the implementation of the task, and having a responsibility based on honesty. Values of integrity include the issues of ethics and spirituality, promoting exemplary value, and honesty. Therefore integrity is the most fundamental component and it will affect overall individual and group behavior in carrying out any duties and responsibility entrusted to them. 1 Indonesian Government Regulation No. 46 of 2011 regarding the Assessment of Civil Servants' Job Performance, Article 12 Paragraph (1), states that the ratings of behavior as referred to in Article 4 letter b include several aspects such as service orientation, integrity, commitment, discipline, cooperation, and leadership. In the explanation of the law, Article 12 Paragraph (1) letter c states that "commitment" is civil servants' willingness and ability to align their attitudes and actions to achieve organizational objectives by focusing on the service rather than self-interest, a person, and/or class.
The term commitment in the regulation is, in the management literature, 2 well known as organizational commitment. Organizational commitment is the psychological contract between employee and organization that makes it less likely for employees to voluntarily leave the organization. 3 The statement above is relevant to the statement from the Minister of Religious Affairs. According to the Minister of Religious Affairs, the Five Work Culture values in the Ministry of Religious Affairs include integrity, professionalism, innovation, responsibility, and exemplary conduct. 4 These values are needed in improving the existing bureaucracy in Indonesia, such as changing the unintegrated culture, overlapping legislation, redundant and inefficient organizational structure, slow and inefficient business processes, governance, incompetent and unprofessional human resources, corruption, collusion and nepotism, up to the issue of unresponsive and unaccountable services to the public. 5 The work culture in the Ministry of Religious Affairs is in line with the values of spirituality. The spiritual values refer to honesty, integrity, good quality work, responsibility, caring for colleagues and subordinates, as well as being socially responsible to the environment and community. 6 Some of the major companies in the world and in Indonesia have participated in spirituality training. Approximately 67,000 employees of Pacific Bell in California participated in New Age-style spiritual training. That action was followed by Procter & Gamble, Ford Motor Company, AT&T, General Motors, and IBM. 7 Some companies in Indonesia such as Garuda Indonesia, Krakatau Steel, Pertamina, Pusri, and TASPEN have sent their executives to join spiritual training. 8 The spiritual training valued and respected employees' emotions, so that they can determine their own destiny. 9 The results of previous research show that work spirituality is an important factor for improving business performance, 10 and for avoiding moral stress, personality dissociation, or loss of personal integrity. 11 Spirituality is the innate part of a human that needs to connect with something larger than ourselves. It means something beyond us, the ego or sense of oneself. It is defined as having a vertical and a horizontal component. The vertical component includes something sacred, divine, or eternal, while the horizontal component includes service to fellow human beings. 12
Literature review and hypothesis development
Research on spirituality associated with organizational commitment, which studied 361 people across 154 organizations, shows that when someone has a spirituality-of-work experience, affective commitment to the organization will be attached; in this case, the experience acts as a sense of loyalty. 13 Self-leadership made employees have a spiritual experience in the organization. 14 Spirituality predicts affective commitment among employees. 15 Malik and Naeem (2010) found that spiritual dimensions such as self-determination, organization mode, transaction mode, self-control, a small group mode, transformational mode, and self-enrichment are related to organizational commitment. 16 In contrast to the above studies, spirituality does not always affect organizational commitment. 17 A study was also conducted on 163 full-time nontraditional students who attend business school in the Southeast and working professionals at the managerial level. The results show that workplace spirituality, workplace spirituality by age, and workplace spirituality by gender do not affect organizational commitment. 18 The research shows inconsistency in the results. The inconsistent results above lead to the assumption that there is a moderating variable. The researchers propose that a dual career family worker is the moderating variable in the influence of spirituality on organizational commitment. The novelty of this research lies in the organizational commitment topic with the dual-career family-worker as a moderating variable in the influence of spirituality on organizational commitment. To the knowledge of the researchers, there is no research on the topic.
The basis for selecting the dual-career family-worker as a moderating variable is the transformation of the workplace, which leads to an increase in dual-career families. Social trends changed, such as the increasing number of women entering the workforce and an economy that requires a double income to support the average living standard. 19 In the late 1950s, there was increasing attention to families where both husband and wife become partners. 20 According to the Census Bureau in 1997, 17% of all families in the US had household income derived from the father (husband). Married couples with children where both the husband and wife work accounted for 50.9% of the labor force in 2003. 21 In addition to the above reasons, research on marital status associated with organizational commitment shows that marital status is associated with lower continuance commitment. 22 When organizational commitment was compared between dual and single income families, it was found that single income families had a higher organizational commitment than dual income families. 23 In contrast, a study shows that dual wage-earner families had a higher organizational commitment than single wage-earner families. 24 Organizational commitment consists of three components, namely affective, continuance, and normative commitment. 25 Affective commitment refers to employees' emotion and identification with the organization. Individuals with high levels of affective commitment will stay working in an organization because they want to. Continuance commitment refers to awareness of the costs that arise due to leaving an organization, so people who have a high level of continuance commitment will remain in the organization because they need to. Normative commitment reflects the sense of responsibility to stay in the organization. Individuals who have a high normative commitment will stay in an organization because they feel obligated to stay (relating to loyalty). Personal variables such as age, years of employment, education level, marital status, having children who are still at home, and the type of job changes are strongly associated with organizational commitment. 26 This research uses Organizational Support Theory, or Perceived Organizational Support (POS). The theory assumes that, in order to meet socio-emotional needs and to assess the organization's readiness to reward effort, employees form beliefs about how the organization values their contribution and cares about them. 27 Organizational support has positive effects on organizational commitment. 28 Moreover, company concern for the employee will be able to increase their performance and contribution. 29 The studies on spirituality associated with organizational commitment show that when a person has a spirituality of work, affective commitment to the organization will be attached, 30 as found in a study of 361 individuals in 154 organizations. Workplace spirituality is the sense of team community, alignment with organizational values, sense of contribution to society, enjoyment at work, and opportunities to enrich inner life. 31 Organizational commitment comprises affective, normative, and continuance commitment. 32 The presence of workplace spirituality will support organizational commitment and both individual and organizational performance. When the organization satisfies the needs of members spiritually, they will feel safe psychologically and feel valued as human beings. 33
The results of Rego and Cunha's 34 research are supported by Wainaina, Iravo, and Waititu, 35 Sorizeni, Kamalipur, Qhalandarzehi, and Jamshidzehi, 36 and Torkamani, Naami, Sheykhshabani, and Beshlide. 37 The results of these studies are different from Marschke 38 and Wainaina, 39 who investigate the effect of workplace spirituality on organizational commitment in 27 private universities and 22 public universities in Kenya. The sample used in the study was 282 employees at the universities. The results show that there is a positive effect of workplace spirituality on organizational commitment.
Workplace spirituality has an effect on organizational commitment. A study conducted on 67 employees who work at the Agricultural Jihad Organization in Iran shows that workplace spirituality affects organizational commitment. 40 The influence of spiritual leadership on organizational commitment, productivity, and performance, with spiritual well-being and learning organization as mediating variables, was studied among 400 workers at a gas plant in Iran. 41 The result is in line with that of Vanderberghe, 42 who examines the influence of spiritual leadership on employee commitment.
A study conducted on 121 branch managers, area managers, and regional managers in private and government banking in Pakistan, examining spiritual leadership and organizational commitment, shows that spirituality predicts affective commitment among employees. 43 The research is supported by Malik and Naeem, 44 who study Higher Education Institution members in Pakistan from three public institutions and five private institutions. Based on the results above and the theory of Perceived Organizational Support, the research hypothesis is: H1a: Spirituality positively influences organizational commitment.
Research on dual career couples and organizational commitment was conducted on 70 working couples in India, the US, Canada, and Australia in sectors such as IT, telecommunications, healthcare, counseling, education, and manufacturing. The results show that couples with children under two years old have less time with their children because of the increasing pressure of work. This leads to a high rate of absenteeism, turnover, and low organizational commitment. 45 Kaur and Kumar's 46 research result is in line with Balmforth and Gardner 47 in New Zealand and Nart and Batur 48 in Turkey. They state that demographic characteristics (age, tenure, education, total years of service) only have a small effect on organizational commitment, while job characteristics have a strong effect on organizational commitment. […] A study 70 conducted on Nong Lam University lecturers in Ho Chi Minh City, Vietnam, about the correlation between age and organizational commitment shows that there is a weak negative correlation between age and normative commitment. 71 In contrast, a study that examines the relationship between demographic factors (age, duration of service, and level of education) and organizational commitment shows that age and education level do not correlate with organizational commitment. The study was conducted in a knitwear organization in Lahore and Faisalabad, Pakistan, and used a sample of 415 employees. 72 A study held in Pakistan investigates the influence of demographic factors such as gender, age, qualifications, and marital status on organizational commitment among University of Khyber Pakhtunkhwa employees. 73 Research results show that demographic factors affect organizational commitment. The researchers also find that younger employees are less committed to the organization.
An investigation of age, job satisfaction, and organizational commitment of teachers in Turkey was conducted on 173 respondents. The results show that age differences among teachers moderate the relationship between organizational commitment and job satisfaction. The effect of this variable is non-linear. 74 The influence of age, gender, job level, marital status, and tenure on organizational commitment in the industrial field in Odisha, India, studied through 240 employees, indicates that age has a positive influence on affective and normative commitment. Marital status influences affective, normative, and continuance commitment. Tenure has a positive effect on affective commitment, job level influences affective and continuance commitment, and gender influences affective and normative commitment. 75 The results of the study are consistent with Salami, 76 who states that the age of the employee is a determining factor of organizational commitment.
The influence of demographic factors such as age, marital status, experience, qualifications, and gender on organizational commitment in Ghana was studied using a sample of 206 employees of a commercial bank in Ghana, and the results show that demographic factors generally influence organizational commitment. Age positively influences organizational commitment. The researchers also find that gender has little influence on organizational commitment, while the dominant demographic factor is experience. 77 H2a: Age positively influences organizational commitment.
The results of studies on the effects of age on organizational commitment conducted by Kaur and Kumar, 78 Balmforth and Gardner, 79 and Nart and Batur 80 show that dual career family worker negatively affects organizational commitment. This evidence reinforces research 81 which finds that there are differences in organizational commitment between single income and dual-income families, as well as changes in social trends in which a greater percentage of women work. 82 Thus, the hypothesis of this study is: H2b: Dual-Career Family Worker moderates the influence of age on organizational commitment.
The result of research conducted in Serbia 83 shows that demographic factors such as tenure give only a small effect on organizational commitment. The result is supported by Viet (2015) in Vietnam, who states that the correlation between years of work and continuance commitment is weak.
The results were slightly different from research conducted in Ghana, 84 which finds that experience measured using tenure has a positive effect on organizational commitment. The results of their study are consistent with Iqbal, 85 who finds that the duration of service measured using tenure has a positive relationship with organizational commitment. This result is supported by Jena 86 in India, who states that tenure has a positive effect on employees' affective commitment. The results of the study are consistent with Salami's 87 research in Nigeria, which finds that tenure is a determining factor of organizational commitment. Salami 88 conducted a study on 320 employees at five service companies and five manufacturing companies in Oyo State, Nigeria. H3a: Tenure positively influences organizational commitment.
Research conducted by Kossek and Ozeki 89 and Namasivayam and Zhao 90 in India finds that dual-career family worker negatively affects organizational commitment. This finding is relevant to the research conducted in India, the US, Canada, and Australia. 91 The finding is also reinforced by research which finds that there are differences in organizational commitment between single income and dual-income families, as well as changes in social trends in which a greater percentage of women work. 92 Thus, the hypothesis of this study is: H3b: Dual-Career Family Worker moderates the influence of tenure on organizational commitment.
Based on that explanation, the conceptual framework of this study is:
Figure 1. The Conceptual Framework
According to Figure 1 (The Conceptual Framework), the independent variables in this research are spirituality, age, and tenure; the dependent variable is organizational commitment; and the moderating variable is dual career family worker.
Research method
The population of this study is employees who work in the Ministry of Religious Affairs of the Republic of Indonesia, especially in IAIN Surakarta (246 employees), IAIN Salatiga (166 employees), and MTsN 1 Surakarta
(employees). This study uses employees who work at the Ministry of Religious Affairs because it is in line with the phenomenon underlying this research, namely the statement of Saifuddin, the Minister of Religious Affairs, about the Five Work Culture values developed in the Ministry of Religious Affairs. IAIN Surakarta, IAIN Salatiga, and MTsN 1 Surakarta are selected as the objects of the study because there is no research about organizational commitment in those institutions. This becomes the second differentiator of this study compared to other studies, in addition to the use of the dual career family worker variable as a moderating variable. The total population in this study is 440 employees. The sampling technique used in this study is purposive sampling with the following criteria: a) Civil Servants in the Ministry of Religious Affairs; b) Having a husband/wife who is a civil servant or an employee in the Ministry of Religious Affairs or another institution.
The questionnaire was distributed to 120 respondents and 90 questionnaires were returned. The data collection method was personally administered questionnaires. The questionnaire is divided into three parts, namely: a) the first part is about organizational commitment, including affective, continuance, and normative commitment, which refers to Allen and Meyer; 93 b) the second part is about spirituality, which refers to the Spirituality Transcendence Scale; 94 c) the third part is about Dual-Career Family Worker, which refers to the Revised Dyadic Adjustment Scale (RDAS). 95 Validity and reliability tests are used to test the instrument of the research. The hypothesis testing in this study is performed using Moderated Regression Analysis (MRA) with the assistance of SPSS version 20.00. The research model is as follows: […] All of the 33 questions in the questionnaire are valid at the 0.01 level. The organizational commitment, spirituality, and dual career family worker variables meet the minimum required value of Cronbach's Alpha > 0.6, so all the questions are reliable. Based on the tests for normality, multicollinearity, and heteroscedasticity, it can be concluded that the variables used in this study have passed the classical assumption tests.
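The authors ran the Moderated Regression Analysis in SPSS 20. The snippet below is only an illustrative equivalent in Python/statsmodels showing how a moderated regression with the same variables and interaction terms could be specified; the variable names (OC, S, AGE, TENURE, DCFW) and the data file are hypothetical placeholders, not part of the original study materials.

```python
# Illustrative moderated regression (MRA) sketch; not the authors' SPSS workflow.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file with one row per respondent and the study's variables.
df = pd.read_csv("survey_responses.csv")  # columns: OC, S, AGE, TENURE, DCFW

# Interaction terms (S:DCFW, AGE:DCFW, TENURE:DCFW) correspond to the moderation
# hypotheses H1b, H2b, and H3b; the main effects correspond to H1a, H2a, and H3a.
model = smf.ols(
    "OC ~ S + AGE + TENURE + DCFW + S:DCFW + AGE:DCFW + TENURE:DCFW",
    data=df,
).fit()

print(model.summary())  # coefficients and significance values analogous to Table 1
```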
Hypothesis testing and discussion
The results of the hypothesis testing can be seen in Table 1. The equation model based on the MRA is as follows: OC = 2.692 + 0.244S + 0.194AGE + 0.401TENURE + 0.050DCFW + 0.051SDCFW - 0.018AGEDCFW + 0.037TENUREDCFW + e. Table 1 (Moderated Regression Analysis Output) shows that spirituality has a positive effect on organizational commitment (coefficient 0.244 with sig. 0.000 < 0.05), so H1a is supported. The result is not in line with Sanders and Joseph E. 96 and Marschke's 97 research, which finds that there is no significant causal relationship between spirituality and commitment. This result supports the research conducted by Rego and Cunha, 98 Usman and Danish, 99 Wainaina et al., 100 Sorizeni et al., 101 and Torkamani et al. 102 This result reflects that employees get adequate social and emotional needs met, so they feel valued and appreciated, and thus their organizational commitment is higher.
Dual Career Family Worker moderates the influence of spirituality on organizational commitment (coefficient 0.051 with sig. 0.000 < 0.05), so H1b is accepted. This can be caused by a greater burden of work and family for Dual-Career Family Workers, so that if there is a conflict in their roles, it will weaken the influence of spirituality on organizational commitment. The effect of Dual Career Family Worker on organizational commitment is not proven; this is not in line with the research conducted by Kaur and Kumar, 103 Balmforth and Gardner, 104 Nart and Batur, 105 Kossek and Ozeki, 106 Alloy and Flynn, 107 Joiner, 108 and Namasivayam and Zhao. 109 Because Dual-Career Family Worker has no effect on organizational commitment, in this research this variable is a pure moderator.
Table 1 (Moderated Regression Analysis Output) shows that age does not positively affect organizational commitment (coefficient 0.194 with sig. 0.328 > 0.05), so H2a is not supported. The result is not in line with Konya et al., 110 Viet, 111 Khan et al., 112 Jena, 113 Yucel and Bektas, 114 and Ossei et al. 115 This study supports the finding of research conducted by Iqbal. It might be because older employees have more seniority in the organization, and many parties outside the organization increasingly use their services because of their reputation. Another reason is the employees' health condition due to age.
Dual Career Family Worker does not moderate the influence of age on organizational commitment (coefficient -0.018 with sig. 0.367 > 0.05), so H2b is not supported. It might be caused by a greater burden of work and family for Dual-Career Family Workers, so that if there is a conflict in their roles, it will weaken the influence of age on organizational commitment. The effect of Dual Career Family Worker on organizational commitment is not proven; this is not in line with the research conducted by Kaur and Kumar, 116 Balmforth and Gardner, 117 Nart and Batur, 118 Kossek and Ozeki, 119 Alloy and Flynn, 120 Joiner, 121 and Namasivayam and Zhao. 122 Table 1 (Moderated Regression Analysis Output) shows that tenure has a positive effect on organizational commitment (coefficient 0.401 with sig. 0.044 < 0.05), so H3a is supported. This study supports the research conducted by Ossei et al., 123 Jena, 124 Iqbal, 125 and Salami. 126 It might be because the longer workers stay with the organization, the more time they have to evaluate and develop their relationship with the organization. Moreover, the longer they work in the organization, the stronger the emotional attachment, so that tenure influences organizational commitment.
Dual Career Family Worker moderates the influence of tenure on organizational commitment (coefficient 0.037 with sig. 0.073 < 0.1), so H3b is supported. This can occur when there is no greater burden of work and family for Dual-Career Family Workers: if there is no conflict in their roles, the influence of tenure on organizational commitment becomes stronger. The greater the perceived responsibility of a dual career family worker and the longer the employee works, the more the employee is attached emotionally to the organization, so that commitment to the organization increases.
The effect of Dual Career Family Worker on organizational commitment is not proven; this is not in line with the research conducted by Kaur and Kumar, 127 Balmforth and Gardner, 128 Nart and Batur, 129 Kossek and Ozeki, 130 Alloy and Flynn, 131 Joiner, 132 and Namasivayam and Zhao. 133 Because Dual-Career Family Worker has no effect on organizational commitment, Dual-Career Family Worker in this research is a pure moderator.
Conclusion
Based on the above discussion, our conclusions are as follows. Firstly, spirituality influences organizational commitment positively. The implication of this result, especially for the leaders of the Ministry of Religious Affairs institutions IAIN Salatiga, IAIN Surakarta, and MTsN 1 Surakarta, is to maintain and improve care and concern for the social-emotional needs of their subordinates (intrinsic and extrinsic rewards) so that they feel valued and appreciated. Secondly, Dual Career Family Worker moderates the influence of spirituality on organizational commitment. The implication of this result is that the leaders of IAIN Salatiga, IAIN Surakarta, and MTsN 1 Surakarta have to give attention to the needs of dual career family workers, mainly regarding the fulfillment of work and family balance. Thirdly, tenure positively influences organizational commitment. The implication of this result, especially for the leadership of the Ministry of Religious Affairs in IAIN Salatiga, IAIN Surakarta, and MTsN 1 Surakarta, is that they should maintain and improve care and concern for the social-emotional needs of their subordinates (intrinsic and extrinsic rewards) so that they feel valued and appreciated based on the tenure of the worker. Finally, Dual Career Family Worker moderates the influence of tenure on organizational commitment. The implication of this result is that the leaders of IAIN Salatiga, IAIN Surakarta, and MTsN 1 Surakarta need to give attention to the needs of dual career family workers, mainly related to the fulfillment of the balance between work and family based on the tenure of the worker.
Suggestions for future research are as follows: firstly, future studies should examine different objects, different theories in organizational commitment, and different variables (independent and moderation) from
Table 1. Moderated Regression Analysis Output | 2018-12-12T08:58:48.986Z | 2017-12-01T00:00:00.000 | {
"year": 2017,
"sha1": "3b7d305da36ab6f14f2afd6340fb8e9fb1f91b92",
"oa_license": "CCBYSA",
"oa_url": "https://ijims.iainsalatiga.ac.id/index.php/ijims/article/download/1294/820",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3b7d305da36ab6f14f2afd6340fb8e9fb1f91b92",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Sociology"
]
} |
248512347 | pes2o/s2orc | v3-fos-license | Access to Essential Personal Safety, Availability of Personal Protective Equipment and Perception of Healthcare Workers During the COVID-19 in Public Hospital in West Shoa
Introduction The pandemic of coronavirus disease-2019 has fundamentally changed the physician–patient relationship due to health care workers’ being at high risk of getting COVID-19 infection from their patients. Therefore, healthcare workers are a priority to be protected and prevent transmission within a healthcare setting. Objective To assess the actual and perceived personal safety of healthcare workers practicing in public hospitals. Methods and Materials A descriptive cross-sectional study design was done among 361 health professionals in West Shoa. A simple random sampling technique was used to select representative respondents. Data was collected by a pretested, self-administered questionnaire. The collected data was entered into EPI-Info and exported to STATA for analysis. Descriptive statistics were used to present the data. Results A total of 361 healthcare workers responded to the question with a 97% response rate. The median age of the study participants was 29. Of the total participants, access to personal protective equipment was: hand sanitizer 322 (89.2%), disposable gloves 285 (78.9%), face mask 280 (77.6%), KN95 face mask 163 (45.2%) and facial protective shields 112 (31%). One hundred sixty-nine (46.8%) of the study participants reported that their hospital has personal safety policies and procedures. One hundred sixty-one (44%) reported that they perceived no support, while only 35 (9.7%) participants reported that they perceived full support from their hospital. Furthermore, the participants perceived that their local concerned bodies took fewer necessary measurements to defend physical integrity in the workplace (mean 2.86 SD = 3.34). Conclusion There are many healthcare workers who have limited access to the majority of essential PPE. The majority of study participants perceived limited support from their health facilities, hospitals and local concerned bodies. Therefore, hospitals and local public health authorities should increase access to PPE to protect healthcare workers.
Introduction
Coronavirus Disease 2019 (COVID-19) is a disease caused by the SARS-CoV-2 virus. Infected people can spread SARS-CoV-2 to other people through respiratory droplets produced when an infected person coughs or sneezes. A person may also get COVID-19 by touching a surface that has SARS-CoV-2 on it, followed by touching their own nose, mouth, and eyes. 1,2 Personal protective equipment (PPE) is an essential component of safeguarding against COVID-19 cross-infection. Therefore, PPE is a current hot topic, probably the most talked about and sensitive subject during the current COVID-19 pandemic for healthcare workers working with COVID-19 infected patients. The shortage of PPE and inappropriate use of the equipment are the main problems related to PPE in the healthcare setting. 3 At the beginning, COVID-19 infected many healthcare workers, posing a big challenge for epidemic control. For instance, as of early March there were over 2600 infected, with 13 dying, in Italy as of March 20, 2020, and the infected number increased to 3300, with at least 22 deaths, in China. 4,5 Among the 19 staff members positively diagnosed with COVID, 88.3% developed psychological stress or emotional changes during their isolation period in Wuhan. 6 Although worldwide millions of people stay at home to minimize the transmission of severe COVID-19, healthcare workers go to clinics and hospitals, putting themselves at high risk of COVID-19. 4 As the COVID-19 pandemic accelerates worldwide, access to PPE for healthcare workers is a key concern. Medical staff are prioritized in many countries, but PPE shortages have been described in the most affected facilities. Some medical staff are waiting for equipment while already seeing patients who may be infected with COVID-19, or are supplied with equipment that might not meet requirements. In addition, they are anxious about spreading the infection to their family, which puts their personal safety at risk. 4 One of the precautions to be applied by healthcare workers when caring for patients with COVID-19 is the use of appropriate PPE. The WHO recommends implementing safety protocols for healthcare workers. However, basic PPE is not always available in many health facilities dealing with COVID-19 patients. Many health facilities around the globe do not have access to an appropriate number of human resources and diagnostic and/or therapeutic protocols to care for patients suffering from COVID-19. 7 Healthcare workers are dedicated to working in close contact with infected people, often with inadequate PPE, during an outbreak of COVID-19. The majority of reported COVID-19 cases among healthcare workers might be due to poor safety procedures, inadequate access to PPE, inadequate diagnostic protocols, and inappropriate use of PPE. Therefore, this study was aimed at evaluating the reality and perception of access to personal safety among healthcare workers during the COVID-19 pandemic in public hospitals in West Shoa. As far as the researchers' knowledge is concerned, there are limited studies on access to personal safety during the COVID-19 pandemic in the study area. Hence, the study findings might help to highlight whether there is a need for essential PPE to care for suspected and/or confirmed cases of COVID-19 in the study area to protect healthcare workers in the workplace.
Materials and Methods
The study was conducted in three hospitals (Ambo Referral Hospital, Ambo General Hospital, and Guder Hospital), which were selected purposefully from the eight public hospitals in the West Shoa Zone, from June 5 to August 30, 2020. A descriptive cross-sectional study design was done on 100 randomly selected participants using the lottery method, with the study participant identification card as a sampling frame. All healthcare workers who were working and available during the data collection period were included in the study. The sample size was determined by the single population proportion formula using the following assumptions: the proportion (P) of previous study participants who had access to disposable gowns was 67.3% (from a study in Latin America); the maximum acceptable error/margin of error (d) was 5%; and the level of significance was 0.05 (z = 1.96). The minimum sample size, after adding a 10% non-response rate, is 372. Proportional allocation was then used to select the number of healthcare workers from each hospital.
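As a check on the reported figure, a minimal sketch of the single-population-proportion sample-size calculation with the stated inputs (P = 0.673, d = 0.05, z = 1.96, plus 10% for non-response) is shown below; it reproduces the 372 reported by the authors.

```python
# Single population proportion sample-size calculation (minimal sketch).
import math

z = 1.96     # critical value for a 95% confidence level (alpha = 0.05)
p = 0.673    # proportion with access to disposable gowns from the prior study
d = 0.05     # margin of error

n = (z ** 2 * p * (1 - p)) / d ** 2        # about 338.2
n_with_nonresponse = math.ceil(n * 1.10)   # add 10% for non-response -> 372

print(round(n), n_with_nonresponse)
```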
Data Collection Tool and Techniques
Structured, paper-based, self-administered questionnaires were created to collect data. The questionnaire contained three parts. The first part was used to gather socio-demographic information of the study participants and included age, sex, occupation, and monthly income; the second section was developed to evaluate access to PPE, personal safety policies and procedures, COVID-19 diagnostic and management processes, and institutional support with human resources in case healthcare workers get sick. The third part has two items used to evaluate respondents' perceptions about their institutions' taking all the required measurements to protect physical integrity in the workplace (10-point Likert scale; 0 = no support, 10 = full support) and participants' perceptions regarding their local public health authorities' taking all the necessary measurements to protect physical integrity in the workplace (10-point Likert scale; 0 = no support, 10 = full support).
The pilot study of the questionnaire was done on 5% of the sample size in Bako hospital, and training was given for data collection facilitators. Daily supervision was done to check the completeness of the questionnaire. Before data analysis, the data were cleaned, edited, and checked.
Data Processing and Analysis
The data were checked, coded, and entered into Epi-Info version 7.2.2.6 and exported to STATA version 14 software for analysis. Descriptive statistics, frequencies, and percentages were used to present the data.
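Since the analysis is purely descriptive, the frequency-and-percentage summaries could equivalently be produced outside STATA. The short sketch below uses pandas and hypothetical column names rather than the STATA commands actually used by the authors; it only illustrates the kind of tabulation behind the reported results.

```python
# Descriptive summary sketch (frequencies and percentages); illustrative only.
import pandas as pd

df = pd.read_csv("ppe_survey.csv")  # hypothetical cleaned export from Epi-Info

for item in ["hand_sanitizer", "disposable_gloves", "face_mask", "kn95", "face_shield"]:
    counts = df[item].value_counts()
    percents = df[item].value_counts(normalize=True).mul(100).round(1)
    print(item)
    print(pd.DataFrame({"n": counts, "%": percents}))
```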
Ethical Consideration
The Institutional Review Ethics Committee of the College of Health Sciences, College of Medicine and Health Science, Ambo University approved this research; oral consent was deemed sufficient to conduct this research, and ethical clearance was obtained. All the information was kept confidential and the study was done as per the ethical guidelines of the Declaration of Helsinki. Only oral informed consent was taken from each participant because the research did not pose any risk to the study participants, and since the study participants are health professionals, they are expected to be aware that the research would not harm them but rather benefit them by increasing the accessibility of PPE. Oral consent was taken from all study participants after explaining the objectives of the study to them. All information obtained from the respondents was kept confidential using codes. The data were not provided to a third party. The study respondents were informed that they have the full right to refuse to take part in this research and that they also have the full right to withdraw at any time they wish.
Results
A total of 361 health professionals responded to the question, with a 97% response rate. The median age of the study participants is 29, with an inter-quartile range of 6. Of the total respondents, male health workers account for 218 (60.4%) and 207 (57.3%) of them (Table 1).
Training on Personal Protective Equipment (PPE) and the Ability to Use It Correctly
Although it is expected that all healthcare workers get training on PPE, only 236 (65.4%) of study participants reported that they got training on how to put on and remove PPE. The respondents were asked whether they believed they could correctly don and doff based on the information received during training, and more than 80% of them reported knowing how to do it for disposable gloves, face masks, and NK95 face masks (Table 2).
Access to Essential PPE
Study participants responded that they had access to hand sanitizer 322 (89.2%), disposable gloves 285 (78.9%), face masks 280 (77.6%), KN95 face masks 163 (45.2%) and facial protective shields 112 (31%) (Figure 1). One hundred sixty-nine (46.8%) of the study participants reported that their hospital has personal safety policies and procedures. Furthermore, the vast majority of them (310, 85.8%) reported having access to personal safety policies and procedures in their workplace. More than two-thirds (247, 68.4%) of the study participants reported that they do not have access to COVID-19 diagnostic and management systems.
Perception About Access to PPE
Regarding perception about access to adequate PPE necessary for daily professional activity, 55 (15.2%), 183 (50.7%), and 123 (34%) respondents reported it as rarely/never, sometimes, and always, respectively (Figure 2). The analysis of perceptions across gender, education, marital status, and professional status toward access to adequate PPE for daily activities was done and the results are shown in Table 3. Regarding the perception of whether they had received adequate information on using PPE to protect themselves from contracting COVID-19, only 126 (35%) participants reported that they always received adequate information (Figure 3).
Perception About Support from Their Own Hospital and Local Public Health Authorities
Participants in the study were asked about their perceptions of the hospital where they work, including whether it provides extra human resources to health professionals in the event they become ill, and reported a mean of 2.68 (SD = 3.35). One hundred sixty-one (44%) reported that they perceived no support, while only 35 (9.7%) participants reported that they perceived full support.

Moreover, study participants' perception of whether their own hospital was taking all the needed measurements to protect physical integrity in the workplace had a mean of 2.96 (SD = 3.40). Of them, about 149 (41.3%) reported that they perceived no support, and about 37 (10.2%) perceived full support. Furthermore, the participants were asked about their perceptions of their nearby governmental health authorities taking all the needed measurements to protect physical integrity in the workplace and reported a mean of 2.86 (SD = 3.34). Of them, about 151 (41.8%) reported that they perceived no support and about 33 (9.1%) perceived full support.
Finally, participants were asked about their perception of the risk of contracting COVID-19 within the next 30 days, and they reported that the mean risk of contracting COVID-19 is 62.2, with a standard deviation of 34.6.
Discussion
Ensuring a constant supply of PPE is important to protect healthcare workers from COVID-19. Our study aimed to assess the reality and perception of personal safety among health professionals in West Shoa, Ethiopia, during the current COVID-19 pandemic. Many studies show that adequate training, proper use, and uninterrupted access to adequate PPE reduce the risk of infection when treating cases of COVID-19. 8,9 Our results show that about 236 (65.4%) workers received training on COVID-19 infection prevention and control. This finding is in line with studies conducted in Ethiopia. 10 This may be due to the fact that both studies were conducted in the same country, which uses similar guidelines and policies to train its workforce. Inadequate PPE may put HCPs at risk of contracting the virus and infecting other healthcare workers and their families. This problem did not only exist in Ethiopia; it was also reported in China 11 and other countries. This study's findings showed that the majority of the healthcare workers had access to hand sanitizer, surgical face masks, and disposable gloves. However, there are healthcare workers who do not have access to KN95 face masks and facial protective shields as per the WHO recommendation during the COVID-19 pandemic. This finding is similar to a study done in Latin America that reported access to PPE such as hand sanitizer (95%), disposable gloves (91.1%), disposable gowns (67.3%), disposable surgical masks (83.9%), N95 masks (56.1%), and facial protective shields (32.6%). In line with the study conducted in Latin America, our study findings showed that the majority of study participants had access to personal safety procedures and policies and had access to COVID-19 diagnostic and management processes. 7 When asked whether they had received adequate information regarding the use of PPE to protect themselves from contracting COVID-19, only 34.9% of the healthcare workers reported that they had always received adequate information. The remaining respondents never, rarely, or only sometimes received such information. This finding is similar to a study conducted in April; 9,12 the main reasons for this difference may be frequent changes in national and international guidelines about the use of PPE during the course of a pandemic and a lack of clarity in information. Moreover, this study's findings showed that the majority of respondents perceived there to be inadequate help from their hospital and nearby governmental health authorities regarding their wellbeing. The finding is similar to a study done in Latin America that reported healthcare workers in that country had inadequate help from healthcare authorities during the COVID-19 pandemic. 7 Finally, this study had some limitations. First, the study focused on the more general population of healthcare workers rather than those who might have direct contact with COVID-19 patients. Secondly, the results of this study are based on a self-reported questionnaire using a cross-sectional design that might not represent the true situation. Lastly, the study was not designed for hypothesis testing, since it is difficult to generate outcome variables, but mainly to generate descriptive information.
Conclusion and Recommendation
Many healthcare workers had limited access to essential PPE such as the KN95 face mask and facial protective shield, although access to other essential PPE appears good; there is therefore a need to increase access to these particular items. Personal safety policies should be implemented and adequate human resources allocated in the workplace. Training on PPE should be given to all healthcare workers to increase their personal safety and the integrity of the workplace.
The majority of study participants perceived inadequate support from their hospital and nearby governmental health authorities. Therefore, hospitals and nearby local governmental health authorities should follow all required protocols to protect physical integrity in the workplace by providing full support to healthcare workers. | 2022-05-05T05:13:49.174Z | 2022-04-29T00:00:00.000 | {
"year": 2022,
"sha1": "4c149bf5603a01b246faae18c5458dcd875b2bbd",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4c149bf5603a01b246faae18c5458dcd875b2bbd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
271263252 | pes2o/s2orc | v3-fos-license | Oxidative stress and the multifaceted roles of ATM in maintaining cellular redox homeostasis
The ataxia telangiectasia mutated (ATM) protein kinase is best known as a master regulator of the DNA damage response. However, accumulating evidence has unveiled an equally vital function for ATM in sensing oxidative stress and orchestrating cellular antioxidant defenses to maintain redox homeostasis. ATM can be activated through a non-canonical pathway involving intermolecular disulfide crosslinking of the kinase dimers, distinct from its canonical activation by DNA double-strand breaks. Structural studies have elucidated the conformational changes that allow ATM to switch into an active redox-sensing state upon oxidation. Notably, loss of ATM function results in elevated reactive oxygen species (ROS) levels, altered antioxidant profiles, and mitochondrial dysfunction across multiple cell types and tissues. This oxidative stress arising from ATM deficiency has been implicated as a central driver of the neurodegenerative phenotypes in ataxia-telangiectasia (A-T) patients, potentially through mechanisms involving oxidative DNA damage, PARP hyperactivation, and widespread protein aggregation. Moreover, defective ATM oxidation sensing disrupts transcriptional programs and RNA metabolism, with detrimental impacts on neuronal homeostasis. Significantly, antioxidant therapy can ameliorate cellular and organismal abnormalities in various ATM-deficient models. This review synthesizes recent advances illuminating the multifaceted roles of ATM in preserving redox balance and mitigating oxidative insults, providing a unifying paradigm for understanding the complex pathogenesis of A-T disease.
Introduction
Oxidative stress, defined as a disturbance in the equilibrium between oxidant production and antioxidant defense mechanisms, represents a fundamental mechanism of cellular injury [1,2]. Reactive oxygen species (ROS), the primary oxidants, can indiscriminately react with and modify cellular macromolecules like DNA, proteins, and lipids, thereby disrupting their structural integrity and biological functions [1,3,4]. If unresolved, excessive ROS accumulation can trigger programmed cell death pathways, including apoptosis initiated by mitochondrial membrane permeabilization, as well as necrosis resulting from the disruption of plasma membrane ion gradients and eventual rupture following lipid peroxidation [1,5,6].
In response to oxidative stress, cells mount an adaptive response characterized by the transcriptional upregulation and/or posttranslational activation of various antioxidant proteins [13]. Consequently, the expression levels and activities of these enzymes often serve as reliable biomarkers indicative of oxidative insult across diverse pathophysiological contexts. However, when the oxidative burden overwhelms the cellular antioxidant capacity, deleterious oxidative modifications to biomolecules may ensue, compromising organelle function and ultimately triggering cell death cascades [1,13]. Thus, maintaining an optimal redox equilibrium is crucial for cellular survival.
Distinct pathways of ATM activation by DNA damage and oxidation
The ataxia telangiectasia mutated (ATM) protein kinase, a master regulator of the DNA damage response (DDR), can be activated through distinct mechanisms upon exposure to DNA double-strand breaks (DSBs) or oxidative stress (Fig. 1) [14][15][16][17]. The canonical form of ATM activation is dependent on both DSBs and the MRE11-RAD50-NBS1 (MRN) complex, which recruits the ATM homodimer, induces monomerization of the kinase and promotes interaction of the ATM monomers with DNA ends and with its substrates [18][19][20]. This process is facilitated by MRN and involves ATM trans-autophosphorylation.
In contrast, oxidative stress triggers the formation of an active dimeric ATM species covalently linked via intermolecular disulfide bonds, a process independent of the MRN complex, DNA ends, or ATM autophosphorylation [21,22]. Several disulfide bonds were mapped in this dimer form, including at Cys2991 (C2991) in the PIKK regulatory domain (PRD), which regulates ATM activation by oxidative stress; mutations in this site (e.g., C2991L) cause defects in hydrogen peroxide (H2O2)-mediated ATM activation in vitro and in human cells [23].
Separation-of-function ATM mutants have provided crucial insights into these divergent activation pathways. The oxidation-deficient C2991L mutant remains competent for MRN/DNA-dependent activation yet cannot be activated by oxidants like H2O2. Conversely, the MRN/DNA-binding deficient R2579A/R2580A mutant selectively abrogates activation in response to DSBs while retaining oxidation-induced activation capabilities [22,23].
Remarkably, oxidation-activated ATM exhibits distinct substrate preferences compared to its DSB-activated counterpart. Global phosphoproteomic analyses revealed that while both activation modes converge on some common substrates, oxidation-induced ATM selectively impacts a subset of phosphorylation events largely regulated by the CK2 kinase [17,22,23]. Furthermore, unlike DSB-activated ATM, the oxidized form fails to induce phosphorylation of canonical DSB markers such as γH2AX and KAP1.
These findings uncover an oxidation-sensing role for the ATM kinase, whereby it engages a specific subset of downstream effectors distinct from its canonical DDR functions upon oxidative insults. This redox-regulated ATM signaling axis likely constitutes an important cellular stress response pathway. Elucidating the molecular underpinnings and physiological implications of this oxidation-induced ATM activation may reveal novel strategies for therapeutic intervention in oxidative stress-related pathologies, including the neurodegenerative disorder ataxia-telangiectasia.
ATM as a sensor of oxidative stress and regulator of ROS levels
Mounting evidence indicates that the ATM protein itself functions as a critical sensor and regulator of oxidative stress within cells (Fig. 2) [17,22,24]. Numerous studies have reported significant accumulation of ROS, particularly hydrogen peroxide (H2O2), in ATM-deficient models, underscoring ATM's pivotal role in maintaining redox homeostasis [17]. A novel mode of ATM activation occurs through the formation of intermolecular disulfide bonds between the monomers of the ATM dimer in response to oxidants like H2O2 [22,23]. This oxidation-induced activation is dependent on the Cys2991 residue in the PIKK regulatory domain, as mutation of this cysteine (e.g., C2991L) abrogates ATM activation by H2O2 in vitro and in human cells [22,23].
In cerebellar and cerebral tissues of ATM knockout mice, altered levels of antioxidants like glutathione (GSH), cysteine, and the redox protein thioredoxin suggest compensatory responses to elevated oxidative stress [11]. Furthermore, decreased catalase activity coupled with increased superoxide dismutase (SOD) activity in these brain regions implies higher H2O2 levels due to impaired scavenging and enhanced generation from superoxide [11]. Similar findings of reduced catalase activity and increased SOD levels have been observed in ATM-deficient lymphoblasts [25], fibroblasts [26,27], and astrocytes [28]. Direct measurements consistently reveal significantly higher intracellular ROS levels, especially H2O2, in various ATM-deficient cell types, including hematopoietic stem cells [29], cerebellar neurons, and specific brain regions like the cerebellum, basal ganglia, hippocampal CA1, and substantia nigra [30]. Importantly, transient ATM depletion or expression of the oxidation-resistant ATM mutant (C2991L) in normal cells can also lead to increased ROS accumulation [31], highlighting ATM's crucial role in regulating redox homeostasis. Mechanistically, ATM promotes the activity and expression of antioxidant enzymes like glucose-6-phosphate dehydrogenase (G6PD) in mitochondria [24]. During oxidative stress, ATM also induces the transcription factor NRF1, which upregulates genes involved in mitochondrial functions [32]. Moreover, A-T patients exhibit slightly lower plasma antioxidant levels compared to healthy controls [33,34], potentially rendering their cells more susceptible to oxidative damage.
Furthermore, ATM regulates the Nrf2-ARE (Nuclear factor erythroid 2-related factor 2 - Antioxidant Response Element) pathway, a crucial defense mechanism against oxidative stress [35]. The Nrf2-ARE pathway is protective in neurodegenerative conditions by reducing oxidative stress and neuroinflammation, making it a potential therapeutic target [35,36]. Nrf2 induces expression of cytoprotective and detoxifying genes, and is essential for the induction of many detoxification enzymes via the ARE enhancer sequence [35]. Nrf2 is regulated by Keap1, which binds and suppresses it. Nrf2 can be activated directly by ROS oxidizing Keap1, or indirectly via ATM blocking Nrf2 degradation through the tumor suppressor BRCA1 by increasing its stability [37][38][39]. Upregulated Nrf2 induces antioxidant genes like HO-1, NQO1, GPx-1 and CAT to bolster the cellular antioxidant defense system [40]. Downstream of oxidative activation, ATM orchestrates a signaling cascade involving the kinase CHK2 and subsequent induction of the denitrosylase GSNOR and autophagy regulator Beclin1 to promote the clearance of damaged mitochondria via mitophagy [41,42]. Furthermore, agents that specifically induce mitochondrial ROS production, such as menadione, are sufficient to engage the oxidation-dependent activation of ATM [24].
These findings firmly establish ATM as a key sensor and regulator of oxidative stress, with its deficiency leading to ROS accumulation that likely drives various pathological features of A-T. Modulating this redox imbalance through antioxidant therapies could therefore represent a promising strategy for managing this disorder.
Mitochondrial aberrations and ATM's role in redox homeostasis
Accumulating evidence positions ATM as a crucial sensor and regulator of mitochondrial homeostasis and cellular redox balance. Cells deficient in ATM exhibit transcriptional changes indicative of mitochondrial dysfunction, including upregulated expression of mitochondrial DNA repair and ROS-scavenging genes [43,44]. Functionally, these cells display impaired mitochondrial respiration, reduced membrane potential, defective mitophagy, and increased mitochondrial mass, phenotypes that are not rescued by oxidation-deficient ATM mutants, directly linking ATM's redox functions to mitochondrial quality control (Fig. 3) [43][44][45].
A key mechanism by which ATM preserves mitochondrial integrity appears to be through the regulation of mitochondrial DNA (mtDNA) homeostasis. ATM inhibition or depletion decreases the expression of RR (ribonucleotide reductase) subunits, resulting in lower mtDNA levels [46], suggesting a role for ATM in maintaining mtDNA content. Furthermore, ATM has been shown to phosphorylate and activate the transcription factor NRF-1 in response to oxidative stress, promoting its nuclear localization and transcriptional activation of mitochondrial biogenesis genes; expression of a phosphomimetic NRF-1 mutant rescued mitochondrial dysfunction in ATM-deficient neurons [32].
Fig. 3. Interaction Between ATM and Antioxidant Defense Systems. The ATM kinase interacts with and regulates various antioxidant defense systems to maintain cellular redox homeostasis, acting as a central hub that stabilizes the transcription factor NRF2 through inhibition of KEAP1 by BRCA1, leading to the expression of antioxidant enzymes like NQO1, HO-1, GPx, and CAT, while also enhancing mitochondrial function and biogenesis to increase mitochondrial antioxidant capacity via upregulation of NRF1 and G6PD, as well as directly interacting with and activating antioxidant proteins such as peroxiredoxins and thioredoxins through post-translational modifications to boost their enzymatic activity.
In addition to preserving mitochondrial biogenesis and function, ATM plays a pivotal role in sensing and mitigating mitochondrial oxidative stress. In ATM-deficient cells, mitochondrial dysfunction coincides with elevated mitochondrial ROS levels and aberrant mitophagy, phenotypes that can be rescued by partial depletion of the autophagy regulator Beclin-1 to restore mitophagy [42,44]. Interestingly, ATM localizes to mitochondria and can be activated upon mitochondrial damage, even in the absence of nuclear DNA lesions. Furthermore, activation of ATM by mitochondrial hydrogen peroxide promotes its dimerization and upregulates the expression of glucose-6-phosphate dehydrogenase (G6PD) and the pentose phosphate pathway, thereby increasing NADPH and cellular antioxidant capacity [24]. These findings suggest that ATM senses mitochondrial ROS signals and engages transcriptional programs to restore redox homeostasis.
Recent findings have further elucidated a molecular pathway by which ATM and its downstream effector CHK2 initiate autophagy in response to oxidative stress [47]. Upon exposure to ROS, ATM becomes phosphorylated at S1981, leading to the subsequent phosphorylation of CHK2 at T68. Activated CHK2 then binds and phosphorylates the E3 ubiquitin ligase TRIM32 at S55, enabling TRIM32 to catalyze K63-linked ubiquitination of the autophagy protein ATG7 at K45. This post-translational modification of ATG7 is crucial for initiating the autophagy process [47] and for mitophagy, since ATG7 is one of the factors regulating mitochondrial clearance [48].
Moreover, ATM restrains mitochondrial ROS production by regulating the expression of the ROS-producing enzyme NOX4, which is abnormally upregulated in A-T cells and contributes to elevated oxidative DNA damage and replicative defects [49]. Collectively, these studies highlight the multifaceted roles of ATM in preserving mitochondrial function, maintaining redox balance, and mitigating oxidative stress, processes that are critically dysregulated in the absence of ATM and likely contribute to the neurodegenerative pathology observed in ataxia-telangiectasia.
In the conventional DNA damage response pathway, it was proposed that the Mre11-Rad50-Nbs1 (MRN) complex detects DSBs and directly activates ATM by promoting dissociation of the inactive dimer into monomers, thereby relieving the inhibition imposed by the PRD helices [60][61][62].
In contrast, oxidative activation of ATM by hydrogen peroxide (H2O2) follows a unique mechanism involving formation of an intermolecular disulfide bond between the Cys2991 residues of the two monomers [23,31]. This disulfide crosslinking stabilizes the ATM dimer but in a dramatically different rotated conformation compared to the inactive state [63]. Accompanying this rotation is the displacement of the inhibitory PRD helices from the substrate binding cleft [63].
Moreover, the kinase N-lobe twists relative to the C-lobe into a catalytically competent active conformation akin to activated mTOR [63]. Interestingly, two distinct conformations were observed for human ATM: a closed symmetrical dimer and an open asymmetrical dimer, with the latter suggesting conformational changes that could facilitate substrate binding [50].
The cryo-EM structure of H2O2-activated ATM bound to a p53 peptide substrate revealed the molecular basis of this redox activation mechanism [63]. A key aspect is that in the basal dimer state, the PRD loop harboring Cys2991 is not ideally positioned for disulfide formation across the dimer interface [63]. However, the conformational changes triggered by oxidation allow the disulfide to form, stabilizing the activated rotated dimeric state [63]. Several known regulatory sites on ATM, including Ser1981, Ser2996, Cys2991, and Lys3016 (the acetylation site), are located in disordered loops in close proximity to the PRD [50,60,64]. Post-translational modifications at these sites after oxidative stress may disrupt the interactions between the PRD and the active site, thereby promoting ATM activation.
Therefore, while both activation pathways involve alleviating the inhibition imposed by the PRD element, oxidative stress promotes disulfide-crosslinking and conformational changes to rotate the dimer into an active state [63]. This contrasts with the MRN/DNA damage pathway proposed to drive dimer dissociation into monomers for activation [60][61][62]. This oxidation-specific mechanism enables ATM to function as a critical cellular redox sensor [17,23,31].
Oxidative stress drives neurodegeneration in A-T
Mitochondria are the major source of cellular ROS, generated as byproducts of oxidative phosphorylation and the electron transport chain [1,65]. Neurons are particularly susceptible to mitochondrial oxidative stress due to their high energy demands and reliance on mitochondrial respiration [1,66]. Impaired mitochondrial function and elevated ROS levels have been implicated in various neurodegenerative disorders, including Parkinson's, Alzheimer's, ALS, and Huntington's disease [67].
Loss of functional ATM kinase has been directly linked to elevated oxidative damage, proposed as an underlying mechanism driving the neuronal degeneration and ataxic phenotypes in A-T patients [66,68,69]. Studies in ATM-deficient mouse models revealed significantly increased oxidative damage to proteins and elevated oxidative stress markers specifically in the brain and cerebellum, but not other organs [70]. This oxidative insult preferentially impacts neural cells, as ATM-null astrocytes and neural stem cells exhibited impaired growth, premature senescence, and earlier death in culture, phenotypes rescued by antioxidant treatment or inhibition of ROS-induced signaling pathways like ERK1/2 and p38 MAPK [71,72].
Accumulation of intracellular ROS in ATM-deficient cells also impairs self-renewal and longevity of hematopoietic stem cells (HSCs) by upregulating p16INK4a, leading to Rb inactivation and p38 MAPK activation. Notably, treatment with antioxidants or p38 inhibition extended the lifespan of ATM-null HSCs [29]. Collectively, these findings implicate elevated oxidative stress as a common pathogenic factor driving the developmental defects and neurodegeneration observed upon ATM loss, with a particular impact on the cerebellum and neuronal compartments.
At the molecular level, cells lacking functional ATM exhibit significantly elevated levels of oxidative DNA lesions like 8-hydroxy-2′-deoxyguanosine (8-OHdG) as well as an increased burden of single-strand breaks (SSBs) compared to normal cells [34,49,73]. The formation of these SSBs is dependent on ROS, as treatment with the antioxidant N-acetylcysteine (NAC) can prevent their accumulation in ATM-depleted human cell lines [73,74]. A downstream consequence of unresolved oxidative DNA damage appears to be widespread protein aggregation. Loss of ATM's oxidation-sensing capability promotes the formation of detergent-resistant protein aggregates in a ROS-dependent manner across multiple human cell types, including brain-derived glioma and neuroblastoma cells [31,74]. Mass spectrometry analysis revealed over 1100 polypeptides significantly enriched in these insoluble aggregates isolated from A-T patient cerebellum samples compared to healthy controls [74].
The accumulation of protein aggregates correlates with signs of poly(ADP-ribose) polymerase (PARP) hyperactivation, as indicated by elevated poly(ADP-ribose) (PAR) levels detected by immunohistochemistry in A-T granule cells [74]. This suggests a model where ROS accumulating in ATM-deficient neurons triggers a pathological cycle of oxidative DNA damage, PARP enzyme hyperactivation due to unrepaired DNA lesions, and ultimately the widespread aggregation of proteins into an insoluble state [31,74]. The cerebellar enrichment of these protein deposits aligns with the cerebellum being the primary region affected by the neurodegenerative process in A-T.
In summary, oxidative stress emerges as a key instigating factor driving the molecular pathogenesis of neurodegeneration in A-T. The ROS burden arising from ATM dysfunction initiates a cascade involving DNA damage, PARP hyperactivation, and finally irreversible protein aggregation, which ultimately disrupts proteostasis and neuronal viability in the cerebellum. The ATM kinase therefore plays a critical role in maintaining mitochondrial redox homeostasis and quality control processes by acting as a mitochondrial ROS sensor to engage antioxidant responses, mitochondrial biogenesis, and clearance programs. Disruption of these protective mechanisms in A-T likely fuels excessive mitochondrial ROS production and oxidative damage that preferentially impacts high energy-demanding neurons, providing a unifying paradigm for the neurodegenerative phenotypes of this disorder.
Oxidative stress and transcriptional dysregulation during neurodegeneration in A-T disease
Cells derived from A-T patients and ATM-null mouse models exhibit significantly elevated levels of ROS, altered redox homeostasis, and heightened antioxidant responses compared to controls [11,12,28,30,33,34,75-77], suggesting that loss of ATM function compromises the cellular ability to mitigate oxidative insults. Importantly, ATM itself can be activated through a non-canonical pathway independent of double-strand DNA breaks. This alternative mode of activation involves the formation of intermolecular disulfide bonds between the two monomers of the ATM dimer in response to oxidative stress [23,78]. The source of ROS capable of eliciting this oxidation-dependent ATM activation has been traced to dysfunctional mitochondria [31].
Through genome-wide mapping, it has been revealed that loss of ATM activity increases R-loop levels preferentially at promoter regions and GC-rich sequences, coinciding with poly(ADP-ribose) (PAR) accumulation [74,83]. These oxidative genomic lesions correlated with reduced expression of highly transcribed, GC-rich genes in A-T patient cerebellum, many implicated in cerebellar ataxias [83]. Mechanistically, it is proposed that in the absence of ATM's redox functions, elevated ROS triggers transcriptional stress manifested as persistent R-loops. The resulting ssDNA breaks hyper-activate PARP1/2, depleting cellular NAD+ pools and promoting aberrant PAR signaling that disrupts transcription [74]. Over time, this oxidative damage accumulates preferentially at highly expressed, GC-rich loci like those encoding calcium signaling proteins (e.g., ITPR1, CA8), progressively disrupting transcriptional programs vital for cerebellar neuron function and survival [83][84][85][86][87][88].
In addition to its roles in the DNA damage response, ATM also regulates transcriptional processes that may contribute to neurodegeneration when disrupted (Fig. 4). Exposure to ionizing radiation has been shown to induce alternative splicing of pre-mRNAs in an ATM-dependent manner [89][90][91]. Similar R-loop accumulation has been observed in ATM-deficient systems like mouse testes where it correlates with elevated DNA damage and apoptosis [92]. In plant models, ATM regulates alternative splicing of mitochondrial transcripts like nad2 in response to genotoxic stress [93], indicating an evolutionarily conserved role in coupling mitochondrial function to gene expression programs.
These findings highlight a previously unappreciated role for ATM in safeguarding transcriptome integrity via oxidation sensing and R-loop resolution. Oxidative stress-induced transcriptional dysregulation, preferentially impacting highly transcribed cerebellar genes encoding essential proteins like calcium signaling factors, likely represents a central pathogenic mechanism underlying the region-specific neurodegeneration in A-T. Therapeutic strategies to ameliorate oxidative damage and restore redox balance may offer novel interventions for this devastating disease.
ATM deficiency and the role of antioxidants/reducing agents
Accumulating evidence strongly implicates oxidative stress as a key contributor to the clinical manifestations of A-T, a disorder caused by deficiency in the ATM protein kinase. Treatment with various antioxidants and reducing agents has been shown to ameliorate multiple phenotypic abnormalities associated with ATM deficiency in cellular and animal models (Fig. 5) [17].
Administration of the catalytic antioxidant EUK-189, which has superoxide dismutase and catalase activities, corrected neurobehavioral deficits, normalized brain fatty acid levels, and extended lifespan in ATM knockout mice [94]. Similarly, antioxidant treatment with isoindoline nitroxide rescued impaired Purkinje neuron survival and dendritic differentiation in ATM-null models, underscoring oxidative stress in cerebellar degeneration [95].
NAC, a thiol-containing compound, exhibits multifaceted antioxidant and cytoprotective properties attributed to three main mechanisms [96]. Firstly, the free thiol group in NAC confers disulfide reductant capacity, allowing it to reduce extracellular and intracellular disulfide bonds, which is beneficial in conditions associated with oxidative stress and protein misfolding [97]. Secondly, the sulfhydryl group enables NAC to directly scavenge and neutralize various oxidants, such as hydrogen peroxide, hypochlorous acid, and highly reactive hydroxyl radicals, counteracting oxidative stress and mitigating the deleterious effects of reactive oxygen species [98]. Thirdly, NAC serves as a precursor for the synthesis of the endogenous antioxidant glutathione (GSH), boosting intracellular GSH levels by providing cysteine, the rate-limiting substrate for GSH biosynthesis, thereby enhancing the cellular redox balance and protection against oxidative damage [96].
In ATM-null mice, NAC increased lifespan, reduced ROS, restored mitochondrial membrane potential, and delayed lymphoma development [45]. Additionally, NAC prevented T cell apoptosis [99], premature senescence, and defective T cell development in ATM-deficient cells/mice [29]. Remarkably, NAC relieved widespread protein aggregation, including of CK2β, observed in ATM-deficient lymphoblastoid cells and cells expressing an oxidation-resistant ATM variant [74], suggesting ROS drive this aggregation.
Studies have also revealed that ROS scavenging by NAC can reduce detrimental phenotypes. NAC reduced accumulation of topoisomerase 1-DNA covalent complexes (TOP1cc) in ATM-deficient astrocytes [100], eliminated ROS-dependent single-strand DNA breaks in ATM-depleted human cells [74], and normalized metabolic dysregulation related to the TCA cycle in cells expressing an oxidation-sensing ATM mutant [24]. Moreover, NAC supplementation conferred lifespan extension in ATM-deficient mice and nematode models [101], potentially by alleviating PARP1 hyperactivation driven by unrepaired oxidative lesions.
Collectively, these findings provide compelling evidence that oxidative stress is a central driver of ATM deficiency pathologies. Antioxidant or reducing agent-based therapies thus hold significant therapeutic potential for ameliorating diverse aspects of the complex A-T phenotype.
Concluding remark
While the link between ATM dysfunction and increased oxidative stress leading to the Ataxia-Telangiectasia phenotype has been established, the precise mechanisms by which ATM maintains redox homeostasis remain incompletely understood (Fig. 6). Several potential mechanisms have been proposed:
Transcriptional Regulation: ATM may modulate redox balance by regulating the transcription of genes involved in antioxidant responses and mitochondrial function. The transcription factors NRF-1 and NRF-2, which are influenced by ATM, play crucial roles in regulating mitochondrial ROS levels and antioxidant gene expression, respectively [102,103]. Additionally, the histone protein H2AX, a target of ATM, is implicated in maintaining mitochondrial homeostasis and ROS regulation [102,103].
RNA Splicing: ATM's involvement in pre-mRNA processing [104] suggests that RNA splicing regulation could contribute to its role in redox homeostasis. Supporting this, cells expressing an oxidation-defective ATM mutant (C2991L) accumulate protein aggregates enriched in factors related to DNA metabolism and gene expression, indicating potential sequestration of proteins involved in ROS homeostasis and mitochondrial function [31].
Protein Aggregation: Aberrant protein aggregation, a common feature in neurodegenerative diseases like Parkinson's and Alzheimer's [105], has been observed in ATM-deficient cells. Notably, this aggregation is dependent on elevated oxidative stress, as treatment with the antioxidant NAC rescues the specific aggregation pattern [31]. Disrupted proteostasis due to protein aggregation may contribute to the A-T pathology.
Despite these insights, several fundamental questions remain unanswered regarding the mechanisms underlying A-T neurodegeneration:
1. What is the role of ROS, given the involvement of the ROS-activated ATM pathway in neurodegeneration [17,23], and how does ROS contribute to elevated DNA damage in A-T cells/tissues, potentially via mitochondrial dysregulation [31,32,43,44]? This mitochondrial link requires more direct investigation in future studies.
2. What is the link of R-loops, which are overabundant in ATM-deficient neurons/cells [74,83,106], to cytoplasmic RNA/DNA hybrid accumulation and innate immunity activation, as R-loops can be excised to the cytoplasm under certain conditions [107]?
3. Why are highly expressed genes, particularly those involved in Purkinje cell function and with high GC content, preferentially downregulated in A-T patient cerebellum tissues? A recent article shows different patterns of Purkinje cell-specific transcripts between A-T and control cerebellum [83]. The potential role of DNA sequence context in transcriptional dysregulation could be further investigated.
4. What potential therapeutic strategies could be developed based on the findings related to transcriptional stress, RNA-DNA hybrids, and DNA damage in ATM-deficient cells? Antioxidant treatment or expression of RNA-DNA helicases can reduce the levels of single-stranded breaks in neuron-like cells [83].
5. What is the role of single-strand DNA breaks observed in ATM-deficient cells, which are linked to cerebellar dysfunction in other DNA repair syndromes [82,108]?
6. What are the precise roles of PARP hyperactivation, NAD+ depletion, and protein aggregation in driving neurotoxicity and Purkinje cell loss in the A-T cerebellum? PARylation levels are higher in cerebellum tissue from humans with A-T compared with control subjects [74].
7. What are the implications of dysregulated calcium signaling, with components of the inositol phosphate pathway like ITPR1 and CA8 being downregulated in A-T cerebellum and mouse models [74,83-88,106,109], and is this a cause or a consequence of neurodegeneration, potentially related to impaired ER-mitochondrial crosstalk [110]?
Elucidating the interplay between these mechanisms and ATM's functions in maintaining redox homeostasis is crucial for developing treatments to slow the progressive neurodegeneration and ataxia in A-T patients.
Fig. 1. ATM's Dual Activation Mechanisms. The ATM kinase can be activated through two distinct pathways, illustrated side-by-side. (Top) Canonical DNA double-strand break (DSB)-induced activation pathway: In response to DNA damage, the MRN complex senses and binds to DSBs, recruiting and activating the ATM kinase. This causes ATM monomerization from an inactive dimer and initiates phosphorylation of canonical substrates involved in DNA repair and cell cycle control. (Bottom) Oxidation-induced activation pathway: Oxidative stress, particularly increased hydrogen peroxide (H2O2) levels, leads to the formation of intermolecular disulfide bonds between the monomers of the ATM dimer. This oxidation-dependent activation is mediated by the C2991 residue and results in the phosphorylation of distinct substrates involved in antioxidant responses, mitochondrial homeostasis, and protein aggregation.
Fig. 2. Interaction Between ATM and Antioxidant Defense Systems. ATM shows multifaceted roles in maintaining mitochondrial homeostasis through four key mechanisms: regulating mitochondrial DNA (mtDNA) levels by upregulating ribonucleotide reductase (RNR), promoting mitochondrial biogenesis via phosphorylation and nuclear translocation of NRF-1 to induce mitochondrial genes, enhancing cellular antioxidant capacity by inducing glucose-6-phosphate dehydrogenase (G6PD) expression and NADPH production, and orchestrating mitophagy of damaged mitochondria through a signaling cascade involving CHK2, GSNOR, and Beclin-1-mediated autophagosome formation.
Fig. 4. Transcriptional Dysregulation in A-T Disease. Transcriptional dysregulation in neurons affected by Ataxia-Telangiectasia (A-T) disease, contrasted with the normal transcriptional processes in healthy neurons. (Left) In a healthy neuron (blue), orderly transcription occurs at a GC-rich promoter, with an active RNA polymerase synthesizing RNA; GC-rich genes such as ITPR1 and CA8 show normal expression levels. Additionally, functional ATM is shown activating splicing factors, enabling proper RNA splicing. (Right) In an A-T neuron (orange), transcription is disrupted at a GC-rich promoter, with R-loop formation. RNA polymerase II is blocked and unable to transcribe, leading to reduced expression of ITPR1 and CA8. In the absence of functional ATM, splicing factors remain inactive, leading to impaired splicing. Furthermore, accumulation of PAR chains is observed, suggesting PARP hyperactivation due to unresolved DNA damage. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
Fig. 5. Therapeutic Interventions in A-T. This figure illustrates the diverse beneficial effects of antioxidant therapies in various models of ataxia-telangiectasia (A-T). It depicts the antioxidant NAC reducing ROS levels, restoring mitochondrial function, preventing protein aggregation, and alleviating the accumulation of topoisomerase 1-DNA covalent complexes (TOP1cc) in astrocytes. It also indicates that the antioxidant EUK-189 can normalize brain fatty acid levels and extend lifespan in treated mice. Together, these illustrations highlight the potential of antioxidants like NAC and EUK-189 to ameliorate various pathological processes underlying A-T through their diverse mechanisms of action.
Fig. 6. Consequences of ATM Dysfunction in Redox Homeostasis. Dysfunction of the ATM kinase, which is crucial for maintaining redox homeostasis, leads to a cascade of detrimental cellular events including accumulation of reactive oxygen species (ROS) due to reduced antioxidant responses, oxidative damage to DNA, lipids, and proteins, impaired repair of oxidative DNA lesions resulting in genomic instability, disruption of cell cycle checkpoints and p53 activation causing uncontrolled cell cycle progression and apoptosis, compromised mitochondrial biogenesis and function with reduced ATP production and increased mitochondrial ROS generation exacerbating oxidative stress, ultimately culminating in neuronal damage and degeneration manifesting as the neurodegenerative symptoms characteristic of Ataxia-Telangiectasia (A-T). | 2024-07-18T15:08:37.441Z | 2024-07-16T00:00:00.000 | {
"year": 2024,
"sha1": "7f7f730e9a46a60975edade7922e9b49d2a3736e",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "0ce4d26fee63490a687ad7d530d17b796ac43f2d",
"s2fieldsofstudy": [
"Medicine",
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
73457676 | pes2o/s2orc | v3-fos-license | Predictors and Clinical Impact of Delayed Stent Thrombosis after Thrombectomy for Acute Stroke with Tandem Lesions
BACKGROUND AND PURPOSE: There are very few published data on the patency of carotid stents implanted during thrombectomies for tandem lesions in the anterior circulation. We aimed to communicate our experience of stenting in the acute setting with systematic follow-up of stent patency and discuss predictors and clinical repercussions of delayed stent thrombosis. MATERIALS AND METHODS: We performed a retrospective study of stroke thrombectomies in a single center between January 2009 and April 2018. Patient files were reviewed to extract patient characteristics, procedural details, imaging studies, and clinical information. Predictors of delayed stent thrombosis and clinical outcome at discharge were analyzed using univariate and multivariate analyses. RESULTS: We identified 81 patients treated for tandem lesions: 63 (77.7%) atheromas, 17 (20.9%) dissections, and 1 (1.2%) carotid web. TICI 2b–3 recanalization was achieved in 70 (86.4%) cases. Thirty-five patients (43.2%) were independent (mRS score ≤ 2) at discharge. Among 73 patients with intracranial recanalization and patent stents at the end of the procedure, delayed stent thrombosis was observed in 14 (19.1%). Among 59 patients with patent stents, 44 had further imaging controls (median, 105 days; range, 2–2407 days) and 1 (1.6%) had 50% in-stent stenosis with no retreatment. Stent occlusion rates were 11/39 (28.2%) for periprocedural aspirin treatment versus 3/34 (8.8%) for aspirin and clopidogrel (P = .04). Delayed stent thrombosis was independently associated with higher admission NIHSS scores (OR, 1.1; 95% CI, 1.01–1.28), diabetes (OR, 6.07; 95% CI, 1.2–30.6), and the presence of in-stent thrombus on the final angiographic run (OR, 6.2; 95% CI, 1.4–27.97). Delayed stent thrombosis (OR, 19.78; 95% CI, 2.78–296.83), higher admission NIHSS scores (OR, 1.27, 95% CI, 1.12–1.51), and symptomatic hemorrhagic transformation (OR, 23.65; 95% CI, 1.85–3478.94) were independent predictors of unfavorable clinical outcome at discharge. CONCLUSIONS: We observed a non-negligible rate of delayed stent thrombosis with significant negative impact on clinical outcome. Future studies should systematically measure and report stent patency rates.
In around 15% of endovascular procedures for anterior circulation stroke, 1 there is a tight stenosis or occlusion of the cervical carotid artery in addition to the intracranial arterial occlusion. The optimal endovascular management of tandem intra- and extracranial lesions remains subject to debate. The landmark thrombectomy trials either included relatively small numbers of tandem lesions 2-4 or completely excluded them. 5,6 Available data mostly consist of retrospective case series published in recent years. 7 Regardless of technical variations, most groups communicate high recanalization rates with a favorable safety profile for stenting of the extracranial carotid artery. 7 However, there are very few data available regarding patency rates for the implanted carotid stents and the impact of stent thrombosis on clinical outcome.
Our aim was to communicate our single-center experience in endovascular management of consecutive cases of tandem lesions with systematic follow-up of stent patency and to discuss predictors and clinical repercussions of stent thrombosis.
MATERIALS AND METHODS
We conducted a retrospective analysis of our prospective data base of acute stroke endovascular procedures between January 1, 2009, and April 1, 2018, using the following inclusion criterion: association of extracranial internal carotid artery occlusion or stenosis of ≥70% using the NASCET criteria and an intracranial arterial occlusion in the anterior circulation. Endovascular treatments for complications of surgical carotid endarterectomy were excluded. Images stored on the PACS and radiology reports were reviewed to extract technical details of the endovascular procedure, as well as postprocedural imaging. Patient files were reviewed to extract patient demographics, comorbidities, complications, clinical status at discharge, and clinical follow-up information. The study was approved by the Strasbourg University Hospital's ethics review board. Due to the retrospective nature of the study, the board waived the need for signed informed consent.
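For reference, the NASCET degree of stenosis used in the ≥70% inclusion criterion is a standard angiographic definition (restated here as a reading aid; it is not an additional detail from this study's methods):

$$\text{Stenosis (\%)} = \left(1 - \frac{D_{\text{residual}}}{D_{\text{distal ICA}}}\right) \times 100$$

where $D_{\text{residual}}$ is the minimal residual luminal diameter at the stenosis and $D_{\text{distal ICA}}$ is the diameter of the normal internal carotid artery distal to the bulb. For example, a residual lumen of 1.5 mm with a 5 mm distal ICA gives (1 - 1.5/5) × 100 = 70%, which would just meet the threshold.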
Patient Selection and Preprocedural Imaging
Patients with acute stroke were selected for endovascular procedures using MR imaging, except in case of extreme agitation or absolute contraindications. Patients with favorable profiles for recanalization were selected using clinicoradiologic mismatch (discrepancy between the severity of neurologic deficits and the size of acute ischemic lesion on the diffusion sequence) as well as estimation of leptomeningeal collateral status using FLAIR vascular hyperintensities. 8,9 We did not use a specific collateral scoring system; vascular hyperintensities were evaluated visually and considered indicative of the presence of ischemic penumbra. Patients with acute infarction in more than two-thirds of the middle cerebral artery territory were generally not considered for treatment. Wake-up strokes and patients with unclear time of onset were considered for treatment if last seen well <12 hours before evaluation, using the same imaging-selection criteria.
Endovascular Procedure
All procedures were performed with the patient under general anesthesia. The strategy did not change during the study period and consisted of an antegrade approach in most cases: stent placement and angioplasty of the proximal occlusion first before addressing the intracranial occlusion. Briefly, a 9F balloon-guide catheter was placed in the distal common carotid artery, and the proximal occlusion was explored with a microcatheter and a 0.014-inch guidewire. If the occlusion could not be crossed using the microcatheter, the system was replaced with a long 4F or 5F vertebral catheter and a 0.035-inch guidewire. After crossing the occlusion, we performed a distal angiographic run to assess the distal cervical ICA. Subsequently, a long 0.014-inch guidewire was advanced into the ICA, and using an exchange maneuver, we placed a carotid stent (usually Wallstent; Boston Scientific, Natick, Massachusetts) covering the lesion and extending to the common carotid artery. The guiding catheter was then advanced inside the stent, and postdilation of the stenosis was performed if needed by means of a 6 × 20 mm monorail angioplasty balloon under proximal flow arrest using the balloon of the guiding catheter. The angioplasty balloon was then deflated and removed; the stagnating column of blood was aspirated using a 50-mL syringe before deflation of the balloon-guide catheter. Subsequently, the distal occlusion was treated using a stent retriever, aspiration, or a combination of both methods.
Depending on operator preferences, a minority of cases (mostly carotid dissections) were performed using a retrograde approach. A distal-access catheter or large-bore aspiration catheter was advanced across the proximal lesion, and the distal occlusion was treated by aspiration or a combination of a stent retriever and distal aspiration. Subsequently, the proximal occlusion was treated with the method previously described, using a long 0.014-inch guidewire advanced through the distal-access catheter.
The antiplatelet and procedural anticoagulation regimen varied across the study period. In the early experience, before carotid stent placement, we administered loading doses of clopidogrel, 300 mg (nasogastric tube), and aspirin, 250 mg (IV); and between 2500 and 4000 U of heparin (IV). Due to an increased rate of hemorrhagic complications, since October 2011, heparin administration was discontinued and the regimen was reduced to IV aspirin (250 mg) with or without a loading dose of clopidogrel (300 mg), depending on operator preferences and case-by-case discussion (estimation of hemorrhagic-transformation risk depending on the size of the acute ischemic lesion and concomitant treatment with IV thrombolysis). If the stent was patent after 24 hours and in the absence of sizeable hemorrhagic transformation, clopidogrel, 75 mg/day, was continued for 3 months in addition to life-long aspirin, 75 mg/day. None of the cases were treated with glycoprotein IIb/IIIa inhibitors.
Postprocedural Imaging and Clinical Follow-Up
All patients underwent cerebral CT 24 hours postprocedure. Hemorrhagic transformation was evaluated using the European Cooperative Acute Stroke Study criteria. 10 In addition, for patients with carotid stents, cervical and transcranial Doppler sonography was performed at 24 hours and before discharge to check for stent patency. If a sonographic examination was not feasible at 24 hours, CT angiography of the carotids was performed along with the 24-hour CT examination.
In addition, whenever possible, patients were recalled for additional clinical and carotid sonography examinations between 3 months and 1 year after the initial event.
Evaluation of Delayed Stent Thrombosis
Delayed stent thrombosis was assessed in the subgroup of patients who underwent carotid stent placement and in whom the procedure resulted in partial or complete recanalization of the cervical and intracranial vasculature. The selection procedure is detailed in Fig 1. Delayed stent thrombosis was defined as carotid stent occlusion diagnosed on follow-up imaging. Predictors of stent thrombosis were investigated using univariate and multivariate analyses.
Clinical Impact of Delayed Stent Thrombosis
To assess whether delayed stent thrombosis had any repercussions on clinical outcome, we investigated predictors of unfavorable clinical outcome at discharge (mRS score > 2) within the same subgroup of patients using univariate and multivariate analyses.
Statistical Analysis
Continuous variables were presented as median and range and compared using the Mann-Whitney U test after assessment of the normality of the distribution. Categoric variables were presented as numbers and percentages and compared using the χ2 test. To assess independent predictors of stent thrombosis and clinical outcome at discharge, we implemented baseline characteristics associated with a P < .10 in univariate analyses into backward-stepwise multivariable binary logistic regression models using a removal criterion of P > .10. A logistic regression model using the Firth bias reduction method was fitted to handle separation in our data for the clinical outcome at discharge. Results are presented as odds ratios with their 95% confidence intervals. Statistical data were analyzed using GraphPad Prism, Version 6.0 (GraphPad Software, San Diego, California) and SPSS software, Version 20.0 (IBM, Armonk, New York). The significance level was established at P < .05.
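As an illustration of this analysis pipeline, the sketch below shows how the univariate comparisons and the backward-stepwise logistic regression could be reproduced in Python with scipy and statsmodels. The 2 × 2 table is reconstructed from the stent occlusion rates reported in this study (11/39 vs 3/34); all other input data, variable names, and the simple stepwise helper are illustrative assumptions, and the Firth bias-reduction variant used for the discharge-outcome model is omitted because it requires a dedicated implementation.

```python
# Hedged sketch of the statistical workflow described above (not the authors' code).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Categoric comparison (chi-square test): delayed stent occlusion by antiplatelet regimen.
# Counts reconstructed from the reported rates: 11/39 (aspirin) vs 3/34 (aspirin + clopidogrel).
table = np.array([[11, 39 - 11],
                  [3, 34 - 3]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table, correction=False)
print(f"chi-square P = {p_chi2:.3f}")  # approximately reproduces the reported P = .04

# Continuous comparison (Mann-Whitney U test), e.g. admission NIHSS by stent status.
nihss_occluded = [18, 20, 15, 22, 17]   # placeholder values, not study data
nihss_patent = [12, 10, 14, 9, 16, 11]  # placeholder values, not study data
u_stat, p_mwu = stats.mannwhitneyu(nihss_occluded, nihss_patent, alternative="two-sided")

# Backward-stepwise binary logistic regression: candidates with univariate P < .10 enter,
# and the least significant predictor is dropped while its P value exceeds .10
# (this toy helper always keeps at least one predictor).
def backward_stepwise_logit(y, X, p_remove=0.10):
    X = sm.add_constant(X)
    while True:
        fit = sm.Logit(y, X).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        if pvals.max() <= p_remove or len(pvals) == 1:
            return fit
        X = X.drop(columns=[pvals.idxmax()])

# Synthetic stand-in data with the study's subgroup size (n = 73); real covariates would
# come from the chart review (NIHSS, diabetes, in-stent thrombus on the final run, ...).
rng = np.random.default_rng(0)
X = pd.DataFrame({"nihss": rng.normal(14, 5, 73),
                  "diabetes": rng.integers(0, 2, 73),
                  "instent_thrombus": rng.integers(0, 2, 73)})
y = pd.Series(rng.integers(0, 2, 73), name="delayed_thrombosis")

fit = backward_stepwise_logit(y, X)
summary = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
summary.columns = ["OR", "95% CI low", "95% CI high"]
print(summary)
```

In this study, separation in the discharge-outcome model (no symptomatic hemorrhagic transformation among patients with good outcome) was handled with Firth's penalized likelihood, which the plain maximum-likelihood fit sketched above would not handle.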
Patient Characteristics
We identified 81 patients treated for tandem lesions (77.7% carotid atheromas, 20.9% dissections, and 1 case [1.2%] of a carotid web). Patient demographics and baseline characteristics are detailed in On-line Table 1. The median age was 63 years, and the median admission NIHSS score was 14. Initial imaging consisted of MR imaging in nearly all patients (80/81). Intravenous alteplase was administered in 49.3% of cases. The median time from symptom onset to femoral puncture was 255 minutes. Of note, 19.7% of cases were wake-up strokes or with unclear time of onset (in these cases, time when last seen well was used instead of symptom onset).
Thrombectomy Procedure and Outcome
Technical details of the thrombectomy procedure as well as clinical and imaging outcomes are detailed in On-line Table 2. A carotid stent was implanted in 77 (95%) patients, of which 42/77 (54.5%) received periprocedural aspirin (250 mg IV) and 35/77 (45.4%) received aspirin and clopidogrel (300 mg via a nasogastric tube). Most patients (83.9%) were treated using an antegrade approach. The median procedural time was 80 minutes; intracranial circulation TICI 2b-3 recanalization was achieved in 86.4% of cases. Symptomatic hemorrhagic transformation occurred in 6.1% of cases. Eight patients (9.8%) died during the initial hospitalization. Good clinical outcome (mRS ≤ 2) was observed in 43.2% of patients at discharge. Follow-up was available in 60/81 patients (including deceased patients); after a median interval of 10 months (range, 1-78), 61.6% of patients had mRS ≤ 2.
Delayed Stent Thrombosis
A subgroup of 73 patients had patent carotid stents and partial/complete intracranial recanalization achieved at the end of the thrombectomy procedure (see Fig 1 for subgroup selection). Cervical imaging at 24 hours consisted of Doppler sonography for 64/73 patients (87.6%) and CT angiography for 9/73 patients (12.3%).
Delayed stent thrombosis was observed in 14/73 (19.1%). In most cases (13/14), thrombosis occurred in the first 24 hours; in 1 patient, the stent thrombosed 5 days after the procedure despite double-antiplatelet therapy with aspirin and clopidogrel. Testing of clopidogrel resistance was not performed.
Initially, none of the 14 cases of stent thrombosis were associated with intracranial re-embolization, and imaging demonstrated collateral flow to the MCA via the anterior and/or posterior communicating arteries. However, in 5/14 cases (35.7%), transcranial Doppler detected lower flow velocities in the MCA compared with the contralateral side, suggestive of insufficient collateralization. Subsequently, in 1 additional patient (1/14, 7.1%), the MCA reoccluded at 5 days and remained occluded on further follow-up.
On clinical examination, only 3/14 (21.5%) patients presented with a clear aggravation of neurologic deficits that could be attributed to stent occlusion. They all had reduced MCA flow velocities on transcranial Doppler compared with the contralateral side.
Among the 59 patients with patent stents, further imaging follow-up was available for 44 patients (median, 105 days; range, 2-2407 days). One patient (1.6%) had 50% in-stent stenosis; there were no retreatments. Among the 14 patients with occluded stents, further stent patency follow-up was available for 11 cases (median, 124 days; range, 5-371 days). The stents remained occluded in all cases.
Administration of intravenous thrombolysis before thrombectomy was not associated with a significantly reduced rate of stent thrombosis in univariate or multivariate analyses.
Impact of Stent Thrombosis on Clinical Outcome
Within the same subgroup of 73 patients, 34 (46.5%) had good clinical outcome at discharge. Among patients with delayed stent occlusion, only 1 (7.1%) was independent at discharge, compared with 33 (55.9%) cases with patent stents (P = .001). The distribution of mRS scores for both groups is detailed in Fig 2. Univariate analysis of predictors for clinical outcome is presented in the On-line Tables.
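As a reading aid for the per-point odds ratios reported here (an illustrative back-of-the-envelope calculation, not an additional result from the study's model), the odds ratio associated with a k-point difference in admission NIHSS follows from the logistic model by exponentiating k times the regression coefficient:

$$\mathrm{OR}_{k\ \text{points}} = e^{k\beta} = \left(\mathrm{OR}_{1\ \text{point}}\right)^{k}, \qquad 1.27^{4} \approx 2.6$$

So, taking the reported OR of 1.27 per NIHSS point for unfavorable outcome at discharge, a presentation 4 points higher corresponds to roughly 2.6-fold higher odds of an unfavorable outcome, all other covariates being equal.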
DISCUSSION
Our study of endovascular treatment for 81 consecutive patients with tandem lesions provides the largest single-center series reported in the literature. By performing systematic imaging follow-up of stent patency, we observed a non-negligible rate of delayed stent thrombosis with a significant impact on clinical outcome.
Numerous retrospective case series of endovascular management for tandem lesions have been published in recent years. The data are synthesized in 2 recently published meta-analyses. 7,11 Most articles reported high recanalization rates with different technical variations of the procedure and identified predictors of successful recanalization and/or good clinical outcome. Surprisingly, there was very little information on the outcome of implanted carotid stents.
In many publications, [12][13][14][15][16][17][18][19][20][21] there is no mention of postprocedural stent patency. Other groups communicate partial data: Sadeh-Gonik et al 11 studied 43 patients; they reported 1 delayed stent thrombosis of 8 cases with available imaging follow-up (12.5%). Lockau et al 22 performed imaging controls in 28 of 37 patients; there was delayed stent thrombosis in 6 cases (16.2%) and significant stenosis in another 2 (5.4%). Steglich-Arnholm et al 23 controlled stent patency for 3 months in 43 of a total 47 patients; 4 (9%) had occluded stents. Heck et al 24 controlled stent patency in 18 of 23 cases and found 1 (5.5%) delayed stent thrombosis; in 13 patients with follow-up sonography ranging from 90 days to 24 months, there were no subsequent events. In a series of 24 cases, Cohen et al 25 reported 4 readmissions for new cerebrovascular (n = 2) or cardiovascular events (n = 2); the stents were patent in all 4 patients. Stent thrombosis rates in these articles are lower than the ones observed in our series, but their data concern only a proportion of the total number of patients. We have shown, in our series, that in most cases (11/14, 78.5%), stent thrombosis was not associated with overt aggravation of neurologic deficits; clinical examination alone therefore seems to be insufficient for detection of stent thrombosis. In the absence of systematic imaging controls of stent patency in the reported series, their real stent thrombosis rates remain unknown.
We identified a single article 26 reporting 24-hour imaging follow-up of stent patency for all 77 patients, with only 1 (1.2%) thrombosed stent. The long-term (30 days or later) in-stent restenosis rate was 2/27 (7.4%) in patients with available follow-up imaging. Of note, patients in this series received either eptifibatide or double antiplatelet therapy with clopidogrel, 600 mg, and aspirin, 325 mg, in addition to systemic heparinization. Hemorrhagic transformation occurred in 10.4% of cases.
Fig 2. Distribution of mRS scores at discharge in patients with patent-versus-occluded carotid stents. Among patients with delayed stent occlusion, only 1 (7.1%) was independent (mRS ≤ 2) at discharge, compared with 33 (55.9%) patients with patent stents (P = .001).
Table: Multivariable regression analysis of predictors for delayed stent thrombosis and clinical outcome at discharge (a)
(a) Candidate predictors for delayed stent thrombosis were the following: antiplatelet treatment (aspirin vs aspirin and clopidogrel), admission NIHSS, diabetes, diffusion ASPECTS of <7, visualization of in-stent thrombus on final angiographic run, presence of cervical thrombus distal to the proximal lesion, and time from onset to recanalization. Candidate predictors for clinical outcome at discharge were the following: delayed stent thrombosis, admission NIHSS, location of distal occlusion (M2 versus ICA/M1), presence of cervical thrombus distal to the proximal lesion, diffusion ASPECTS of <7, symptomatic hemorrhagic transformation, and time from onset to recanalization.
(b) Because none of the patients with good clinical outcome had symptomatic hemorrhagic transformation, a logistic regression model using the Firth bias reduction method was fitted to handle separation in our data for the clinical outcome at discharge.
Several articles discussed intraprocedural stent thrombosis.
Yoon et al 29 observed 1 case (2.2%) of acute stent thrombosis in a series of 47 patients. Lockau et al 22 had 3/37 (8.1%) acute stent occlusions during the procedure: One was recanalized by aspiration and balloon angioplasty. In the 2 other cases, recanalization attempts remained unsuccessful, but there was sufficient crossflow from the contralateral site. In our series, 3 cases (3/73, 4.1%) of intraprocedural stent thrombosis were treated successfully with aspiration using a large-bore 6F intracranial aspiration catheter or a guiding catheter. Several conclusions can be drawn from the available literature. First, because the data are clearly insufficient, there is a clear need for systematic follow-up of stent patency in all future case series or prospective studies. This will provide more robust evidence, which can be used to refine the technical details of the endovascular procedure and periprocedural medication, to reduce stent thrombosis rates. In addition, we have shown that delayed stent thrombosis is an independent predictor of unfavorable clinical outcome. Incorporating stent patency data in future studies could improve understanding of clinical outcomes.
Second, the reported stent thrombosis rates were highly variable. There are several causative factors: variability of the procedural antiplatelet protocol (ranging from rectal aspirin to intravenous glycoprotein IIb/IIIa inhibitors), differences in the implanted stents (varying percentages of metallic surface, mesh size, closed- or open-cell design, stent length), subnominal or nominal diameter dilation, and use of overlapping stents.
Third, there seems to be a link between the occurrence of intraprocedural thrombosis and subsequent patency. Intuitively, the underlying pathophysiologic process is the same and is initiated as soon as the stent is implanted. Not surprisingly, we found that visualization of in-stent thrombus on the final angiographic run was an independent predictor of delayed thrombosis. This is concordant with the observation of Steglich-Arnholm et al, 23 in which all patients with occluded stents at follow-up had also experienced partial or complete stent thrombosis during thrombectomy.
Fourth, the risk of thrombosis seems to be highest in the first 24 hours. In our series, almost all (13/14) stent occlusions were diagnosed at 24 hours. Similar results have been reported, 24 but the number of studied cases is clearly insufficient to draw a firm conclusion. Given the negative impact of stent thrombosis on clinical outcome, it would seem reasonable to perform more frequent controls of stent patency during the first 24 hours, especially in cases with additional risk factors for stent thrombosis.
Intervention for Occluded Carotid Stent
Once the diagnosis of delayed stent thrombosis has been made, the decision to attempt recanalization can be problematic. In our experience, stent occlusion was not associated with distal re-embolization in the intracranial branches. Initial CT angiography or transcranial sonography demonstrated collateral flow in the MCA through the anterior and/or posterior communicating arteries. In addition, only 3/14 (21.5%) patients had clear aggravation of neurologic deficits that could be attributed to stent occlusion. The main procedural risk is distal intracranial embolization during carotid recanalization attempts.
To avoid this clinical dilemma and in light of the clear association between stent thrombosis and unfavorable clinical outcome observed in our series, it seems justified to make every possible effort to prevent delayed stent thrombosis. This involves administering dual-antiplatelet treatment whenever possible, angiographic surveillance of the stent for 5-10 minutes at the end of the thrombectomy, and specific treatment of in-stent thrombus (either by thromboaspiration or administration of glycoprotein IIb/IIIa inhibitors).
Illustrative case. A patient in his sixties with a history of type 2 diabetes, severe chronic obstructive pulmonary disease, and siderosis was found by his wife hemiplegic and aphasic on wake-up. Initial examination showed depressed consciousness (Glasgow Coma Scale score, 7) and signs of respiratory failure for which orotracheal intubation was necessary. Emergency MR imaging showed a relatively small acute ischemic lesion on diffusion imaging (A) not visible on FLAIR imaging, as well as occlusion of the left internal carotid and middle cerebral (arrowhead) arteries (B). Given the important clinicoradiologic mismatch, we proceeded to thrombectomy (C). There was calcified atheroma at the origin of the ICA with floating thrombus. An IV bolus of aspirin, 250 mg, was administered, and a 9 × 50 mm Wallstent was deployed and postdilated with a 6 × 20 mm balloon. Then the MCA occlusion was treated by 2 passes of a stent retriever with TICI 3 recanalization. The final cervical angiographic run showed excellent patency of the carotid stent with images suggestive of plaque protrusion but no in-stent thrombus. However, the next day, Doppler sonography demonstrated stent occlusion. The patient remained intubated, with signs of right hemiplegia. Because repeat MR imaging (D) showed flow across the anterior communicating artery and patency of the left MCA, recanalization was not attempted. The patient remains dependent, with an NIHSS score of 9 and an mRS score of 4 at 3-month follow-up.
Predictors of Stent Thrombosis
There was a relatively high rate of delayed stent thrombosis in this study. We believe this is because more than half of the patients with stents (42/77, 54.5%) received a single antiplatelet agent (aspirin) during the first 24 hours.
In addition, most patients in this series were treated using long 50-mm Wallstent stents. In comparison with open-cell designs, the mesh size is smaller and the percentage of metal coverage is higher; these features offer better plaque impaction but are also more thrombogenic in an acute setting.
Delayed stent occlusion was more frequent in patients with diabetes. The association between diabetes and higher rates of stent restenosis and occlusion has been extensively documented in the cardiology literature. 30 Moreover, diabetes can be associated with accelerated platelet turnover, which leads to reduced efficacy of aspirin treatment. 31 The circulating quantity of new, uninhibited platelets rises more rapidly; thus, platelet aggregation returns to normal more quickly after aspirin administration. To counter this phenomenon, we can speculate that patients with diabetes may need a second dose of aspirin in the first 24 hours; however, further research is needed to balance efficacy against the added risk of hemorrhagic transformation.
Patients with high NIHSS scores on initial presentation were also more likely to experience delayed stent thrombosis in this series. We can hypothesize that a larger volume of hypoperfused brain leads to decreased carotid outflow and thus promotes stent thrombosis, analogous with peripheral vascular interventions.
Limitations
This study has several limitations. Patients were identified retrospectively in a single center, and most of the procedures were performed using an antegrade strategy and a single type of stent. Because we included patients during a period of >9 years, endovascular approaches and periprocedural anticoagulant/antiplatelet regimens were heterogeneous. In addition, none of the patients in this cohort received glycoprotein IIb/IIIa inhibitors; consequently, we cannot provide information on stent patency rates for this subgroup.
CONCLUSIONS
By performing systematic follow-up of stent patency in a consecutive series of thrombectomies for anterior circulation tandem lesions, we observed a non-negligible rate of delayed stent thrombosis in cases with patent stents at the end of the procedure. Stent thrombosis was independently associated with unfavorable clinical outcome at discharge. Stent patency seems to be an important end point that needs to be systematically measured and reported in future studies of tandem lesions. | 2019-03-08T14:17:34.927Z | 2019-02-14T00:00:00.000 | {
"year": 2019,
"sha1": "d818d3943a536be28298d9af673320b0c54e0984",
"oa_license": "CCBY",
"oa_url": "http://www.ajnr.org/content/ajnr/40/3/533.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "c1c259dbbb3d0fbcce09f5d69ad32164b9ebf961",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245671461 | pes2o/s2orc | v3-fos-license | Pityriasis Rosea-like eruptions following COVID-19 mRNA-1273 vaccination: A case report and literature review
Pityriasis rosea (PR) is a self-limited disease with exanthematous papulosquamous rashes, mostly associated with reactivation of human herpesvirus (HHV)-6 or HHV-7. PR-like eruptions, which may be accompanied by peripheral eosinophilia, interface dermatitis, and eosinophils on histopathology, can result from medications or vaccinations. PR-like eruptions have previously been noted following vaccination against influenza and other diseases. During the current pandemic, acute COVID-19 infection has been related to PR or PR-like eruptions in several cases, whereas PR-like eruptions associated with COVID-19 vaccines have only rarely been reported. Herein, we report a case of cutaneous PR-like eruption following COVID-19 mRNA-1273 vaccination.
Introduction
Pityriasis rosea (PR) is a self-limited exanthematous papulosquamous disease, usually associated with reactivation of either human herpesvirus (HHV)-6 or HHV-7, while PR-like eruptions are reactions to vaccinations or medications. 1 PR or PR-like rashes have been identified after vaccination against H1N1 influenza, human papillomavirus, smallpox, poliomyelitis, hepatitis B, diphtheria, pneumococcal infections, and tuberculosis. 1-3 During this pandemic, COVID-19 has been associated with PR or PR-like eruptions following acute infection. 4,5 PR-like eruptions were also reported to be associated with different COVID-19 vaccines. 6,7 The exact pathogenetic mechanism that leads to PR or PR-like eruptions after infection or vaccination is still unclear. Herein, we report a case of cutaneous PR-like eruptions following COVID-19 mRNA-1273 (Moderna) vaccination.
Case report
A 40-year-old male patient developed a skin rash seven days after the first dose of the COVID-19 mRNA-1273 vaccine.
The severely itchy skin lesions initially developed over the lower abdomen but later spread to the neck, trunk, and all four limbs. There was no fever, myalgia, or other associated systemic symptom. The patient had had urticaria several months earlier but denied any history of recent infections, drug exposure, contact with COVID-19 patients, or similar skin rashes in his personal or family history. The present rash was diagnosed as urticaria at another clinic and treated with oral antihistamines. Because it showed no obvious improvement, a hypersensitivity reaction to COVID-19 mRNA-1273 vaccination was suspected. The patient was given systemic prednisolone 15 mg/day, but the skin lesions persisted.
Ten days after the onset of the rash, he came to our dermatology outpatient clinic for further evaluation. Cutaneous examination revealed multiple oval erythematous papules and plaques of various sizes, with central darkening and collarette scales, over the neck, trunk, back, and four limbs (Fig. 1). No herald patch and no oral, oropharyngeal, or genital lesions were noted. The plaques were distributed roughly along the cleavage lines in a Christmas tree pattern.
Histopathological examination demonstrated scattered foci of angulated parakeratosis with slight acanthosis and mild spongiosis in the epidermis, as well as mixed lymphohistiocytic and eosinophilic infiltrates surrounding the perivascular spaces in the superficial dermis (Fig. 2). No basal hydropic degeneration was noted. These findings were compatible with PR or a PR-like drug hypersensitivity reaction. However, the patient declined further laboratory investigations. He was treated with oral prednisolone 30 mg/day for five days, which was then tapered gradually over the following two weeks. There was no recurrence of the skin lesions after two months of follow-up.
Discussion
PR or PR-like rashes have been described after vaccination against H1N1 influenza, human papillomavirus, diphtheria, poliomyelitis, smallpox, pneumococcal infections, hepatitis B, and tuberculosis. 1 According to the criteria proposed by Drago et al., typical PR rather than a PR-like eruption is considered if there are non-itching discrete exanthematous lesions with a herald patch, no eosinophilia in the peripheral blood count, and no eosinophils in the histopathological findings. 8 Our patient likely had a PR-like eruption, given the absence of a herald patch, the severe itching, and the eosinophils in the histopathological findings. Because he declined further laboratory investigations, the blood eosinophil count as well as the antibody titers of HHV-6 and HHV-7 were unavailable.
Adya et al. reported a patient whose cutaneous histopathologic examination revealed epidermal spongiosis, perivascular lymphocytic infiltrate in the papillary dermis, and extravasated red blood cells in the papillary and reticular dermis. 10 Another study by Akdas et al. demonstrated focal parakeratosis in mounds with exocytosis of lymphocytes, spongiosis in the epidermis, and extravasated red blood cells in the dermis. 11 Cyrenne et al. described the lesional biopsy result with parakeratosis, interface changes, and scattered dyskeratotic keratinocytes. 12 The pathological findings in our patient showed parakeratosis with slight acanthosis and mild spongiosis in the epidermis. Mixed lymphohistiocytic and eosinophilic infiltrates surrounding the perivascular spaces in the superficial dermis were also noted. Although these findings were compatible with PR-like eruptions, the eosinophilic infiltrates surrounding the perivascular spaces in the superficial dermis differed from those in the previous literature. Only one report, by Marcantonio-Santa Cruz et al., showed a superficial perivascular infiltrate with scattered eosinophils in the cutaneous biopsy of the patient. 13 Our patient may have developed a delayed-type hypersensitivity reaction, similar to a drug-induced PR-like eruption.
It has been reported previously that PR is a manifestation of COVID-19 infection. 4,19 The exact pathogenetic mechanism that leads to PR or PR-like eruptions after viral infection is still unclear. The SARS-CoV-2 virus spike protein was found on endothelial cells and lymphocytes of PR-like skin lesions, indicating a direct association between SARS-CoV-2 infection and PR. 5 It is also possible that SARS-CoV-2 infection may distract the cell-mediated control of HHV-6 or HHV-7, resulting in the reactivation of herpes viruses and PR manifestation. 20,21 Several mechanisms may explain the development of PR or PR-like eruptions after COVID-19 vaccination. Firstly, it has been suspected that a cell-mediated immune response may develop against molecular structural mimicry of a specific viral epitope after vaccination. 2,3 Secondly, the COVID-19 vaccine may trigger PR by reactivation of HHV-6 or HHV-7; Català et al. hypothesized that a strong specific immune response against SARS-CoV-2 or the S protein from vaccines may distract the cell-mediated control of another latent virus. 6 Whether driven by vaccines, infections, drugs, or other factors, immune-related herpes virus reactivation may be involved in the pathogenesis of PR or PR-like eruptions; however, serological evidence of HHV-6/7 reactivation following COVID-19 vaccination has not been found in the literature. Thirdly, vaccines may trigger a delayed-type systemic hypersensitivity response, similar to medication-induced PR-like eruptions.
In conclusion, we report a case of cutaneous PR-like eruptions with eosinophils in the histopathological findings following COVID-19 mRNA-1273 vaccine injection. Further studies on direct tissue and serological examination for evidence of HHV-6 and HHV-7 reactivation are mandatory to confirm the causative link between PR-like eruptions and COVID-19 vaccination.
Patient consent
The patient in this manuscript has given oral informed consent to the publication of his case details.
Declaration of competing interest
None. | 2022-01-05T14:09:29.587Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "6f6a3d4a4fa9da9f175f613531b05c5c0e499848",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jfma.2021.12.028",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a2e1374ee8b5e8c97978654d0e517f94c1d18024",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258313685 | pes2o/s2orc | v3-fos-license | Role and mechanism of FOXG1-related epigenetic modifications in cisplatin-induced hair cell damage
Cisplatin is widely used in clinical tumor chemotherapy but has severe ototoxic side effects, including tinnitus and hearing damage. This study aimed to determine the molecular mechanism underlying cisplatin-induced ototoxicity. In this study, we used CBA/CaJ mice to establish an ototoxicity model of cisplatin-induced hair cell loss, and our results showed that cisplatin treatment could reduce FOXG1 expression and autophagy levels. Additionally, H3K9me2 levels increased in cochlear hair cells after cisplatin administration. Reduced FOXG1 expression caused decreased microRNA (miRNA) expression and autophagy levels, leading to reactive oxygen species (ROS) accumulation and cochlear hair cell death. Inhibiting miRNA expression decreased the autophagy levels of OC-1 cells and significantly increased cellular ROS levels and the apoptosis ratio in vitro. In vitro, overexpression of FOXG1 and its target miRNAs could rescue the cisplatin-induced decrease in autophagy, thereby reducing apoptosis. BIX01294 is an inhibitor of G9a, the enzyme in charge of H3K9me2, and can reduce hair cell damage and rescue the hearing loss caused by cisplatin in vivo. This study demonstrates that FOXG1-related epigenetics plays a role in cisplatin-induced ototoxicity through the autophagy pathway, providing new ideas and intervention targets for treating ototoxicity.
Introduction
Cisplatin is widely used to treat tumors but frequently causes ototoxicity, including tinnitus and hearing loss (Lanvers-Kaminsky et al., 2017). Cisplatin-related ototoxicity is cumulative with dose and time (Keilty et al., 2021) and may be related to various factors, such as DNA damage, oxidative stress, and cellular inflammatory factors . Cisplatin is widely used in clinical practice despite the risk of ototoxicity (Kros and Steyger, 2019). While numerous studies have been conducted, the mechanism underlying cisplatin-induced ototoxicity remains elusive.
Forkhead box G1 (FOXG1) plays an important role in the development of hair cells (HCs) and supporting cells and in the innervation of cochlear and vestibular neurons (Pauley et al., 2006; Zhang et al., 2020). However, the exact role of FOXG1 in cisplatin-induced ototoxic injury remains unclear. In this study, we use a cisplatin-induced HC damage model to determine the underlying mechanism of FOXG1 in ototoxicity.
Autophagy is an essential intracellular process that transports cytoplasmic substances to lysosomes for degradation (Klionsky et al., 2021). Previous studies have shown that FOXG1 plays an important role in the process of hearing degradation by regulating autophagy (He et al., 2021). BIX01294 is an inhibitor of G9a, the enzyme responsible for histone H3 lysine 9 dimethylation (H3K9me2), and can induce autophagy in various cell types, including neuroblastoma (Ke et al., 2014), glioma stem-like (Ciechomska et al., 2016), oral squamous cell carcinoma (Ren et al., 2015), breast and colon cancer (Kim et al., 2013), and osteosarcoma (Fan et al., 2015) cells. Many microRNAs (miRNAs) regulate the autophagy pathway and influence body processes (Mao et al., 2018; Yuan et al., 2018; Chen et al., 2019; Xie et al., 2019; Khodakarimi et al., 2021). In addition, BIX01294 can reduce HC loss in organ of Corti explants under cisplatin treatment (Yu et al., 2013). The roles of FOXG1, autophagy, and H3K9me2 in mammalian hair cells and their interrelationships in cisplatin ototoxicity require further exploration.
H3K9me2 modifications and miRNA activities are epigenetic processes. Epigenetics is the regulation of gene expression programs in conjunction with DNA templates, including DNA modification, histone modification, and noncoding RNA regulation (Han and He, 2016). H3K9me2 levels increase after neomycin or cisplatin is applied to the cochlea; however, this increase disappears after prolonged neomycin action, indicating that changes in H3K9me2 levels after HC injury are dynamic (Yu et al., 2013). Epigenetic modifications play an important role in the development, protection, and regeneration of the inner ear (Layman and Zuo, 2014).
In the present study, we analyzed the roles and mechanisms of FOXG1 and epigenetics in cisplatin-induced hair cell loss in a CBA/CaJ mouse model. Our data show that FOXG1 regulates autophagy levels under cisplatin-induced HC injury and that FOXG1 overexpression activates autophagy after cisplatin treatment. We also evaluated H3K9me2 levels in OC-1 cells and the cochlea after cisplatin treatment and found that H3K9me2 affects autophagy through FOXG1, affecting the ability of autophagy-related miRNAs to regulate autophagy. Inhibition of H3K9me2 helps reduce the hearing and hair cell loss induced by cisplatin in vivo. This study demonstrates the important roles of FOXG1 and epigenetics in cisplatin-induced ototoxicity through the autophagy pathway, providing a new target for investigating cisplatin-associated ototoxicity.
Construction of a cisplatin-induced ototoxicity animal model
Furosemide transiently decreases the red blood cell count in the stria vascularis on the cochlear lateral wall, allowing cisplatin to easily pass through the blood-ear barrier into the cochlea (Li et al., 2011). We administered furosemide at 200 mg/kg (intraperitoneal) and cisplatin at different concentrations (0.5, 1, 1.5, and 2 mg/kg, subcutaneous) daily for three consecutive days to CBA/CaJ mice to create the model, followed by 3 days of post-treatment recovery. Mouse hearing was evaluated based on the auditory brainstem response (ABR). The threshold of click ABR in the treatment groups (mice treated with different cisplatin concentrations) was higher than that in the control group to different degrees ( Figure 1A, p < 0.05). In the treatment groups, the tone burst ABR showed varying degrees of loss at 8, 16, 24, 32, and 40 kHz, and the tone burst ABR loss increased as cisplatin concentrations increased ( Figure 1B, p < 0.05). After modeling, the cochlea was dissected, and cochlear HC loss was assessed via immunofluorescence staining with phalloidin and DAPI. Outer HCs loss in the cochlea worsened with increasing cisplatin concentrations (Figures 1C, D, p < 0.05, n = 3). The loss of outer HCs began from the base turn and spread toward the apex of the sensory epithelium of Corti as the cisplatin concentration increased.
OC-1 cell viability decreases with increasing cisplatin concentrations and treatment times
HEI-OC1 is one of the most commonly used mouse auditory cell lines suitable for exploring ototoxic drug models (Kalinec et al., 2016). We treated OC-1 cells with cisplatin at different concentrations and times to construct the cisplatin-induced OC-1 cell cytotoxicity model. First, we treated OC-1 cells with 5, 10, 30, 50, and 100 µM cisplatin for 24 h and detected their viability with CCK-8. Viable OC-1 cell numbers gradually decreased with increasing cisplatin concentrations. Approximately 50% of the OC-1 cells were viable 24 h after 30 µM cisplatin treatment (Figure 2A, p <0.001, n = 6).
Next, we treated OC-1 cells with 5, 10, 30, 50, and 100 µM cisplatin for 24 h and labeled apoptotic cells with annexin V and dead cells with propidium iodide (PI) to assess the apoptotic and dead cell ratio by flow cytometry. The apoptotic and dead cell ratios of OC-1 cells gradually increased as the cisplatin concentration increased (Figures 2D-F, p < 0.01, n = 3). Additionally, OC-1 cells were treated with low (5 µM) or high (30 µM) cisplatin concentrations for 12, 24, 48, and 72 h before flow cytometry. The apoptotic and dead cell ratios of OC-1 cells gradually increased as the cisplatin treatment time increased (Figures 2G-K, p < 0.05, n = 3).
The accumulation of mitochondrial superoxide in cells can induce DNA damage and ultimately lead to cell damage (Srinivas et al., 2019). We detected the level of oxidative stress in OC-1 cells by Mito-SOX flow cytometry. After treatment with 5, 10, 30, 50, and 100 µM cisplatin for 24 h, the flow cytometry results showed that ROS levels in OC-1 cells gradually increased as the cisplatin concentration increased (Figures 3A, B, p < 0.01, n = 3). Next, OC-1 cells were treated with low (5 µM) or high (30 µM) cisplatin concentrations for 12, 24, 48, and 72 h before Mito-SOX flow cytometry.
FOXG1 expression and autophagy levels are altered by cisplatin treatment
FOXG1 is a nuclear transcription factor that participates in morphogenesis, cell fate determination, and proliferation and is required for mammalian inner ear morphogenesis. FOXG1 is related to the survival of HCs; however, the specific downstream pathways and mechanisms are unclear. FOXG1 is related to mitochondrial function and metabolism, as is autophagy (He et al., 2020). Autophagy is an essential intracellular process that transports cytoplasmic substances to lysosomes for degradation (Klionsky et al., 2021). It plays a crucial role in adaptive responses to starvation and other forms of stress (Jiang and Mizushima, 2014). Autophagy is involved in multiple signaling pathways and contributes to HC development and protection. With autophagy pathway activation, autophagosomes can envelop the damaged mitochondria and fuse with lysosomes to form autolysosomes, degrading the damaged mitochondria and promoting HC survival (He et al., 2017).
To investigate changes in FOXG1 and autophagy levels after cisplatin treatment in a mouse model, we administered furosemide and different cisplatin concentrations. We dissected the cochlea 3 days post-cisplatin treatment and extracted proteins for western blotting. FOXG1 and LC3B levels in the cochlea increased relative to the control after treatment with low cisplatin concentrations but decreased with high cisplatin concentrations (Figures 4A-C, p < 0.05, n = 3). Immunofluorescence staining showed similar LC3B levels in the cochlea after cisplatin treatment to those observed by western blotting. LC3B fluorescence intensity increased after treatment with 0.5 mg/kg cisplatin but decreased with 1.5 mg/kg cisplatin ( Figure 4D, p < 0.05, n = 3).
We next treated in OC-1 cells with 5 and 30 µM cisplatin for 24 h and performed transmission electron microscopy (TEM) to confirm the changes in autophagy after cisplatin treatment.
The 5 µM cisplatin group demonstrated a significantly higher number of autophagic vacuoles and autolysosomes than the control group. In contrast, the 30 µM cisplatin treatment group showed a significantly lower number of autophagic vacuoles and autolysosomes than the control group (Figures 4E-G, p < 0.05, n = 3).
Next, we treated OC-1 cells with 5, 10, 30, 50, and 100 µM cisplatin for 24 h and detected changes in FOXG1 and LC3B levels. Western blotting showed that FOXG1 levels increased with 5 µM cisplatin relative to the control but decreased when the cisplatin concentrations exceeded 30 µM. Similarly, LC3B levels increased with 5 and 10 µM cisplatin treatment before gradually decreasing when the cisplatin concentrations exceeded 30 µM. These findings indicate that autophagy levels initially increase and then decrease as cisplatin concentration increases (Figures 4H-J, p < 0.05, n = 3).
Then, we treated OC-1 cells with 5 µM cisplatin for 12, 24, 48, and 72 h and detected changes in FOXG1 and LC3B levels. Western blotting showed that FOXG1 and LC3B levels were increased relative to the control in the treatment group at 48 h; however, no significant differences were observed between the treatment and control groups at 72 h (Figures 4K-M, p < 0.05, n = 3). Finally, we repeated this experiment with 30 µM cisplatin. Western blotting showed that FOXG1 and LC3B levels gradually decreased after cisplatin treatment (Figures 4N-P, p < 0.05, n = 3). These results suggest that FOXG1 plays an important protective role against cisplatin-induced ototoxic damage in OC-1 cells. However, FOXG1 and autophagy levels are significantly reduced after the cell damage exceeds its repair capacity.
We found that cisplatin treatment could affect FOXG1 expression and autophagy pathway. Our previous study showed that FOXG1 could regulate the autophagy pathway in presbycusis (He et al., 2021). Here, low cisplatin doses activated FOXG1 expression and the autophagy pathway. As the cisplatin concentration gradually increased, FOXG1 and LC3B expression levels decreased. Therefore, we speculate that low concentrations of cisplatin activate the cells' self-defense mechanism, increasing FOXG1 expression and activating the autophagy pathway to eliminate ROS in OC-1 cells. With high concentrations of cisplatin, cells gradually lose their self-defense ability, significantly reducing FOXG1 expression and autophagy levels.
H3K9me2 changes in HCs after cisplatin treatment
Epigenetic modifications have recently been found to contribute to inner ear development and HC regeneration (Taiber et al., 2022). Histone methylation and demethylation are implicated in transcriptional regulation, genome integrity, and epigenetics (Klose and Zhang, 2007). H3K9 methylation is critical for early embryogenesis and is involved in the transcriptional repression of developmental genes (Tachibana et al., 2002).
We treated the mouse model with different cisplatin concentrations. At 3 days post-cisplatin treatment, we detected changes in H3K9me2 levels in the cochlea, which were decreased relative to the control at low cisplatin concentrations (0.5 and 1 mg/kg) but increased at high cisplatin concentrations (1.5 and 2 mg/kg; Figures 5A, B, p < 0.05, n = 3). Immunofluorescence staining showed similar H3K9me2 levels in the cochlea after cisplatin treatment to those observed by western blotting. H3K9me2 fluorescence intensity was decreased with 0.5 mg/kg cisplatin but increased with 1.5 mg/kg cisplatin (Figure 5C, p < 0.05, n = 3). We then treated OC-1 cells with 5, 10, 30, 50, and 100 µM cisplatin for 24 h and detected H3K9me2 levels. H3K9me2 levels decreased with 5 µM cisplatin relative to the control but increased when cisplatin concentrations exceeded 10 µM (Figures 5D, E, p < 0.05, n = 3). H3K9me2 levels decreased relative to the control after low-concentration cisplatin treatment in vivo and in vitro. When the concentration of cisplatin increased, the level of H3K9me2 increased, and the expression of FOXG1 and the autophagy pathway were inhibited.
BIX01294 is an inhibitor of euchromatic histone methyltransferase G9a that can transiently and reversibly inhibit H3K9me2 activity by competing with G9a for substrates (Kubicek et al., 2007;Kondengaden et al., 2016;Milite et al., 2019). H3K9me2 inhibition by BIX01294 can induce autophagy in various cell types, including glioblastoma cells (Ciechomska et al., 2016). Previous studies have shown that BIX01294 can reduce HC loss in organ of Corti explant under cisplatin treatment (Yu et al., 2013).
Viable OC-1 cell numbers gradually decreased as the BIX01294 concentration increased (Supplementary Figure 1A, p < 0.05, n = 6). We performed flow cytometry on the BIX01294-treated OC-1 cells to detect changes in the ratios of apoptotic and dead cells. The apoptotic and dead cell ratios of OC-1 cells increased significantly as BIX01294 concentrations increased (Supplementary Figures 1B-D, p < 0.05, n = 3). We also performed Mito-SOX flow cytometry to detect mitochondrial ROS in the OC-1 cells treated with BIX01294. ROS levels in OC-1 cells increased significantly as BIX01294 concentrations increased ( Supplementary Figures 2A, B, p < 0.05, n = 3).
FOXG1 knockdown decreased the levels of miR-34a, miR-96, miR-182, and miR-183 and inhibited the autophagy pathway. Our results showed that the level of autophagy significantly decreased when the expression of the above miRNAs was inhibited. The expression levels of these miRNAs were inhibited after high-dose cisplatin treatment, and this inhibition could be recovered by BIX01294 treatment or overexpressing FOXG1. BIX01294 could not activate the autophagy pathway when these miRNAs were inhibited. However, the overexpression of the above miRNAs could restore autophagy under cisplatin treatment. Our results showed that miR-34a, miR-96, miR-182, and miR-183 were related to the activation of the autophagy pathway, and FOXG1 autophagy regulation is miRNA-dependent.
MiRNA levels are associated with apoptosis ratios and ROS levels in OC-1 cells
We performed flow cytometry on OC-1 cells with inhibited miR-34a, miR-96, miR-182, and miR-183 expression to detect changes in the ratios of apoptotic and dead cells and found that these ratios were significantly increased (Figures 8A-C, p < 0.05, n = 3). We also performed Mito-SOX flow cytometry to detect mitochondrial ROS levels in these cells and found that they were significantly increased ( Figures 8D, E, p < 0.05, n = 3). The apoptosis and death ratios and the ROS levels of the OC-1 cells, significantly increased as the miRNA levels decreased, suggesting that these miRNAs play an important protective role in OC-1 cell survival.
BIX01294 can protect against cisplatin-induced ototoxicity in vivo
To investigate the role of H3K9me2 in hearing during cisplatin-induced injury, we administered BIX01294 via intraperitoneal injection before and during cisplatin administration in vivo. The groups were treated as follows: 2 mg/kg cisplatin, 2 mg/kg cisplatin + 20 mg/kg BIX01294, and 2 mg/kg cisplatin + 40 mg/kg BIX01294. We administered BIX01294 via intraperitoneal injection to CBA/CaJ mice on the day before the initiation of cisplatin injections and half an hour before each furosemide injection. The ABR results showed that intraperitoneal injection of 40 mg/kg BIX01294 could rescue cisplatin-induced hearing loss, while the hearing changes in the 20 mg/kg BIX01294 group were not significant compared with the cisplatin-only group (Figures 9A, B). The protective effect of BIX01294 on cisplatin-induced hearing loss was more obvious at low frequencies. BIX01294 rescued the ABR threshold shift at 16 kHz (Figure 9C). We sacrificed the mice after the ABR test, and the cochlea was dissected after fixation and decalcification. We used Myosin 7a and phalloidin to label HCs and quantify HC loss. We observed that the loss of outer HCs in mice in the cisplatin + 40 mg/kg BIX01294 group was significantly reduced compared with that in the cisplatin-only group, especially in the apical and middle turns (Figures 9D, E, p < 0.05). These results suggest that BIX01294 reduced the ototoxicity caused by cisplatin and protected hearing in CBA/CaJ mice.
Discussion
Cisplatin is clinically used to treat tumors but has ototoxic side effects. Studying the mechanism of cisplatin-induced ototoxicity is crucial in hearing research. In this study, we conducted related mechanistic experiments since FOXG1-related epigenetics appeared to play a role in cisplatin-induced HC damage.
Auditory system studies have revealed hundreds of miRNAs that are differentially expressed during mammalian inner ear development and aging (Weston et al., 2006;Rudnicki et al., 2014). miRNAs participate in proliferation, apoptosis, and transcription factor regulation, thus playing important roles in organ development and maturation (Harfe, 2005), including sensory organs and systems (e.g., the inner ear and auditory system) (Conte et al., 2013). Transcription factors control miRNA expression at the transcriptional level (Nenna et al., 2022). In an animal acute LPS-induced hearing loss model, the histone deacetylase 2 Hdac2/transcription factor Sp1/miR-204-5p/apoptosis suppressor gene Bcl-2 regulatory axis mediated apoptosis in the cochlea (Xie et al., 2021). FOXG1 is a nuclear transcription factor that participates in morphogenesis and cell fate determination and proliferation and is required for mammalian inner ear morphogenesis . In this study, we knocked down FOXG1 expression in OC-1 cells and observed decreased autophagy levels and altered levels of autophagyrelated miRNAs, including miR-34, miR-96, miR-182, and miR-183. Autophagy levels also decreased when these miRNAs were inhibited. We demonstrated that reducing FOXG1 expression decreases the autophagy level by reducing miR-34a, miR-96, miR-182, and miR-183 expression levels, leading to cisplatininduced ototoxicity.
H3K9me2 modification is one of the most abundant and dynamic histone modifications, and its levels are highly variable in disease development and pathogenesis (Bhaumik et al., 2007). Studies have shown that BIX01294 can reduce the resistance of tumors to cisplatin by inhibiting H3K9me2 and can increase tumor chemosensitivity by enhancing autophagy (Fu et al., 2023). Herein, we found that cisplatin-induced injury increased H3K9me2 levels, and H3K9me2 inhibition increased FOXG1 expression and autophagy levels in OC-1 cells.
This study demonstrated that inhibition of H3K9me2 by BIX01294 in vivo can reduce the damage to inner ear hair cells caused by cisplatin, indicating that epigenetic regulation can reduce the ototoxicity of cisplatin in vivo. Therefore, whether overexpression of FOXG1 and miRNAs in vivo can also reduce the ototoxicity of cisplatin will become a new research goal in ototoxicity prevention and treatment. Many inner ear gene therapy methods exist for cochlear hair cells and supporting cells, such as synthetic adeno-associated virus approaches (Zhu et al., 2019). Because of the low transduction rate of adeno-associated virus in the cochlea, researchers have designed AAV-inner ear (AAV-ie) for gene delivery in the mouse cochlea and achieved a good therapeutic effect (Tan et al., 2019). RNase readily degrades miRNA in the plasma; thus, researchers have used exosomes produced by lentiviral overexpression of miR-21 as a carrier to deliver miR-21 to the inner ear, preventing hearing loss from ischemia-reperfusion (Hao et al., 2022). Injecting miR-375 agomir can alleviate nasal mucosa inflammation in allergic rhinitis mice (Wang et al., 2018). However, the ability to overexpress FOXG1 and miRNAs efficiently in the inner ear remains limited. Preventing and treating hair cell damage and hearing loss caused by cisplatin through gene therapy is a new focus of inner ear research.
We used the cisplatin ototoxic OC-1 cell line and the CBA/CaJ mouse model to determine the role and mechanism of FOXG1 in cisplatin-induced ototoxic HC degeneration. We found that cisplatin decreased FOXG1 expression and autophagy levels, and that H3K9me2 played a role in cisplatin-induced ototoxicity. Reduced FOXG1 expression resulted in a series of miRNA changes that reduced autophagy activity and led to ROS accumulation and subsequent cochlear HC death. Following miRNA inhibition, autophagy levels decreased, but ROS levels and the apoptosis ratio increased, leading to HC death (Figure 10). Through epigenetic regulation, we found that combining G9a inhibition with cisplatin has the potential to reduce hearing damage and sensory hair cell loss. This protocol might represent an improvement for patients by limiting chemotherapy-induced hearing loss. Our study has identified a potential target for future auditory HC protection against cisplatin injury.
Materials and methods
Animals
Six-week-old male SPF-grade CBA/CaJ mice (RRID: IMSR_JAX:000654) were obtained from SPF (Beijing) Biotechnology Co. They were kept for 1 week after purchase .
In vivo drug treatment
The in vivo experiments used male SPF-grade CBA/CaJ mice. The cisplatin group was given furosemide and cisplatin to create the animal model. First, furosemide at 200 mg/kg was injected intraperitoneally. Next, half an hour later, cisplatin at 0.5, 1, 1.5, or 2 mg/kg was given subcutaneously. Then, 1 h later, 0.5 ml isotonic sodium chloride solution was given intraperitoneally. The control group was injected with isotonic sodium chloride solution. All injections were performed for three consecutive days. In the cisplatin + BIX01294 group, BIX01294 at 20 mg/kg or 40 mg/kg was injected intraperitoneally. BIX01294 was injected on the day before the start of cisplatin injection and half an hour before each furosemide injection.
ABR
The ABR was measured before and 3 days after treatment. After anesthesia induction, the mice were placed in a soundproof room and kept warm with a warm water bag. Electrodes were inserted
into the ear to be tested, the contralateral ear, and subcutaneously in the middle of the head. A TDT device measured the click ABR and tone burst ABR at 8, 16, 24, 32, and 40 kHz. Each frequency was measured from 90 dB and lowered by 10 dB each time until there was no response to determine the threshold for each frequency.
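The descending-level threshold search just described (start at 90 dB and step down in 10 dB increments until the response disappears) can be written as a short loop. The sketch below is only schematic: the function has_response stands in for the actual judgement of whether reproducible ABR waves are present and is not part of any real TDT API; the levels and frequencies are the ones stated above.

```python
# Schematic sketch of the descending-level ABR threshold search.
# `has_response(freq_khz, level_db)` is a placeholder for the waveform judgement.

def abr_threshold(freq_khz, has_response, start_db=90, step_db=10, floor_db=0):
    """Return the lowest stimulus level (dB) that still evokes a response."""
    level = start_db
    last_positive = None
    while level >= floor_db and has_response(freq_khz, level):
        last_positive = level      # remember the lowest level with a response
        level -= step_db           # step down and test again
    return last_positive           # None means no response even at start_db

# Example: thresholds for one animal at the tested frequencies.
# thresholds = {f: abr_threshold(f, has_response) for f in (8, 16, 24, 32, 40)}
```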
Cell culture
The OC-1 cells were cultured in a 37°C incubator with 5% CO2 in a complete culture medium of high-glucose DMEM (Hyclone) supplemented with 10% fetal bovine serum (Gibco) and 50 units/ml penicillin. Cultured cells were passaged when they reached 80%−90% confluence. Cells were digested with 0.25% trypsin, and the digestion was terminated with the complete medium. The cells were collected in a 5 ml EP tube, centrifuged at 1,500 rpm for 5 min at room temperature, and the supernatant was discarded. Thereafter, 2 ml of the complete culture medium was added to resuspend the cells, and an appropriate amount was inoculated into a 10 cm Petri dish.
CCK-8
The cells were seeded in 96-well plates at the appropriate density and treated with the appropriate drugs 24 h after inoculation. Each group comprised six sub-wells. After treatment, the drugs were removed, and 100 µl DMEM containing 10% CCK-8 reagent was added to each well. Each well's absorbance at 450 nm was measured after 30 min and 1 h in the 37°C incubator.
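As a point of reference, viability in a CCK-8 assay is conventionally expressed as the background-corrected absorbance of treated wells relative to untreated controls. The short sketch below assumes this standard normalization and uses invented OD450 values; it is illustrative only and does not reproduce the authors' data or analysis software.

```python
import numpy as np

# Hypothetical OD450 readings, one value per replicate well (n = 6 per group).
od_blank   = np.array([0.08, 0.09, 0.08, 0.07, 0.09, 0.08])   # medium + CCK-8 only
od_control = np.array([1.21, 1.18, 1.25, 1.19, 1.22, 1.20])   # untreated cells
od_treated = np.array([0.65, 0.62, 0.68, 0.60, 0.66, 0.63])   # e.g. 30 uM cisplatin

blank = od_blank.mean()
# Background-corrected viability relative to the untreated control, in percent.
viability = (od_treated - blank) / (od_control.mean() - blank) * 100.0
print(f"viability: {viability.mean():.1f} ± {viability.std(ddof=1):.1f} %")
```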
Transfection
siRNA-Foxg1 was designed and synthesized by Tsingke Biotechnology Co. to knock down Foxg1 expression in OC-1 cells, and a Foxg1-overexpressing plasmid was designed and synthesized by Shanghai GenePharma Co. to upregulate Foxg1 expression. The miRNA inhibitors and miRNA mimics were designed and synthesized by Guangzhou RiboBio Co. to inhibit or increase target miRNA expression. The OC-1 cells were passaged, seeded in six-well plates for 24 h, cultured to 50%−60% confluence, and transfected in Opti-MEM using Lipofectamine 3000 reagent. At 6-8 h after transfection, the Opti-MEM was replaced with the complete culture medium.
RT-PCR
Total RNA was extracted from cells using TRIzol reagent. A miRNA RT-PCR reagent (Guangzhou RiboBio Co.) and a reverse transcription kit (Takara) were used to perform miRNA reverse transcription and RT-PCR.
Protein extraction
Cells were digested using 0.25% trypsin, which was stopped using a complete culture medium. Briefly, cells were collected in a 1.5 ml EP tube, centrifuged at 1,500 rpm for 5 min at room temperature, and the supernatant was discarded. Next, cells were resuspended in a RIPA buffer containing phosphatase inhibitors, protease inhibitors, and PMSF and left to lyse on ice for 20 min. Then, a 5× loading buffer was added to the lysed mixture, which was boiled at 95°C for 15 min and stored at −20°C.
The cochlea was dissected, removed, and soaked in phosphate-buffered saline (PBS). Next, a pre-chilled RIPA buffer containing phosphatase inhibitors, protease inhibitors, and PMSF was added. Then, the cochlea was crushed and sonicated at 20% energy for 5 s before centrifugation at 12,000 rpm and 4°C for 10 min in a low-temperature high-speed centrifuge. Finally, the supernatant was aspirated, and a 5× loading buffer was added to the mixture, which was boiled at 95°C for 15 min and stored at −20°C.
Western blotting
SDS-PAGE gel electrophoresis was performed to separate the proteins. The proteins on the gel were transferred to PVDF membranes, which were blocked with 5% nonfat milk in TBST for 1 h at room temperature on a shaker. The membrane was placed in the primary antibody, incubated overnight using a refrigerator shaker, washed three times with TBST for 5 min each, and then incubated with a 1:5,000 secondary antibody for 1 h at room temperature. Washing with TBST was performed thrice for 5 min each, and exposure to film was achieved using an ECL solution in a dark room. The films were developed, fixed, and air dried. After scanning the films, we analyzed the immunoblot bands using Image J. The primary antibodies used were anti-FOXG1 antibody (Abcam, ab18259), anti-LC3B antibody (Sigma-Aldrich, L7543), anti-G9a antibody (Abcam, ab185050), and anti-H3K9me2 antibody (Abcam, ab176882).
Flow cytometry
Mito-SOX Red (Thermo Fisher Scientific) was used to analyze mitochondrial ROS production. After trypsinization, the OC-1 cells were collected via centrifugation and washed with PBS. The cell pellets were then resuspended in a solution containing Mito-SOX Red for 15 min at 37°C in the dark and analyzed via flow cytometry (FACSCalibur; BD Biosciences).
FITC/annexin V (BD Biosciences) was used to analyze apoptosis and PI to differentiate between live and dead cells. The OC-1 cells were trypsinized and collected via centrifugation at 1,000 rpm for 5 min, washed with PBS, resuspended in binding buffer, and aliquoted at 1 × 10⁵ cells (100 µl) into a 5 ml flow tube. FITC/annexin V and PI were added to the tube, and the mixture was vortexed gently, incubated at room temperature for 15 min in the dark, and analyzed via flow cytometry within 1 h.
Immunofluorescence staining
The samples were incubated in 4% paraformaldehyde (Sigma-Aldrich) for 1 h and then blocked with 0.5% Triton X-100 (blocking medium) for 1 h. The primary antibodies were then added at a 1:400-1:1,000 dilution and incubated overnight at 4°C. The samples were washed thrice with PBST, incubated with fluorescent secondary antibodies for 1 h at room temperature in the dark, rewashed thrice with PBST, and reincubated with rhodamine phalloidin and DAPI for 30 min in the dark. After sealing the slides with clear nail polish, we imaged them using a confocal microscope.
Data analysis
All data were presented as means ± SDs. All experiments were repeated at least thrice. Statistical analysis was performed using Microsoft Excel and GraphPad Prism 8. Statistical significance was determined using a two-tailed unpaired t-test when comparing two groups and using one-way ANOVA and Dunnett's multiple comparison test when comparing more than two groups. p-values of <0.05 were considered statistically significant.
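A minimal sketch of how the tests named above could be run in Python with SciPy is shown below. The group values are invented placeholders, and scipy.stats.dunnett is only available in recent SciPy releases (≥ 1.11); the authors used Microsoft Excel and GraphPad Prism 8, so this is an illustration of the workflow, not their analysis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical replicate measurements (e.g. relative LC3B level, n = 3 per group).
control = rng.normal(1.00, 0.05, 3)
cis_5   = rng.normal(1.40, 0.08, 3)
cis_30  = rng.normal(0.60, 0.07, 3)

# Two groups: two-tailed unpaired t-test.
t, p = stats.ttest_ind(control, cis_5)
print(f"t-test control vs 5 uM: p = {p:.3f}")

# More than two groups: one-way ANOVA followed by Dunnett's comparison to control.
f, p_anova = stats.f_oneway(control, cis_5, cis_30)
dunnett = stats.dunnett(cis_5, cis_30, control=control)
print(f"ANOVA p = {p_anova:.3f}; Dunnett p-values = {dunnett.pvalue}")
```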
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author/s. The raw data from the figures presented in the study are publicly available. This data can be found here: https://www.jianguoyun.com/p/DSZBQ_EQmd-ECxjv094EIAA.
Ethics statement
The animal study was reviewed and approved by the Committee on Animal Research of Tongji Medical College, Huazhong University of Science and Technology. | 2023-04-26T13:11:37.141Z | 2023-04-26T00:00:00.000 | {
"year": 2023,
"sha1": "0710495f5d55e9cb9fee90a33700ae1931cd991d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "0710495f5d55e9cb9fee90a33700ae1931cd991d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258227217 | pes2o/s2orc | v3-fos-license | Primary retropharyngeal leiomyosarcoma in a young cat
Case summary An Oriental Shorthair cat, aged 1 year and 6 months, developed progressive stridor and a palpable right ventral cervical mass. Fine-needle aspiration of the mass was inconclusive, while thoracic radiography and CT showed no evidence of metastasis. There was initial improvement in stridor with oral doxycycline and prednisolone treatment, but it recurred 4 weeks later and excisional biopsy was performed. Histopathology with immunohistochemistry diagnosed leiomyosarcoma with incomplete surgical margins. Adjunctive radiation therapy was declined. Repeated physical examination and CT 7 months postoperatively documented no evidence of mass recurrence. Relevance and novel information This is the first reported case of retropharyngeal leiomyosarcoma in a young cat with no evidence of local reoccurrence 7 months following an excisional biopsy.
Introduction
Leiomyosarcomas are uncommon, malignant tumours arising from smooth muscle cells 1 and have been reported in the duodenum, 2 iliocaecocolon, 3 large intestine, 1 urinary bladder, 4 stomach, 5 oesophagus, 6 spleen, 1 uterus, 7 pancreas, 8 liver, 9 kidney, 10 vulva, 11 eye, 12 cutaneous smooth muscle, 13,14 dermal interphalangeal region 15 and heart. 16 They are frequently non-encapsulated and invasive tumours. 1 Histological features vary from densely packed, relatively homogeneous spindle cells with the appearance of smooth muscle to more pleomorphic ovoid or round cells. 1 Surgery is the treatment of choice, and the prognosis depends on the anatomical location of the tumour and the presence of metastasis. [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16] To the authors' knowledge, this report describes a novel diagnosis of retropharyngeal leiomyosarcoma in a young cat with a good outcome following marginal surgical excision. This tumour should be considered as a differential diagnosis for a retropharyngeal mass in the cat.
Case description
A neutered male Oriental Shorthair cat, aged 1 year and 6 months, was seen by the primary care practice (PCP) with stridor that had progressively worsened over the course of 3 weeks. Clinical examination was unremarkable. Haematology and serum biochemistry were within normal limits. The patient was sedated and examination of the oral cavity revealed a large retropharyngeal mass ( Figure 1). A fine-needle aspirate of the mass was obtained and thoracic radiographs showed a caudodorsal bronchial lung pattern (Figure 2), prompting bronchoalveolar lavage (BAL) to be performed under general anaesthesia.
The BAL results were consistent with mixed eosinophilic inflammation with suspected allergic disease. Fine-needle aspiration (FNA) results were inconclusive.
CT of the head and thorax were performed by the PCP and interpreted by a diagnostic imaging specialist. CT revealed a discrete right retropharyngeal/tonsillar mass with no local infiltration into the surrounding tissues, ipsilateral medial retropharyngeal lymphadenopathy and unspecific pulmonary changes (Figure 3).
The cat was referred to Wear Referrals, Stockton-on-Tees, UK, and an approximately 1 × 2 cm firm soft tissue mass was palpated on the right lateral neck immediately ventral to the right tympanic bulla and cranioventral to the wing of the atlas, displacing the larynx to the left. The owner reported that the clinical signs were stable. The cat had moderate stridor but no other abnormalities on clinical examination. Owing to the cat's young age, lack of conclusive evidence of neoplasia and the possibility of an inflammatory diagnosis, the cat was prescribed doxycycline (10 mg/kg PO q24h [Ronaxan; Merial]) and prednisolone (1 mg/kg PO q24h [Prednicare; Animalcare]) for 4 weeks. The cat was re-examined 4 weeks later and the stridor had improved, although the mass was unchanged in size. Medications were continued for a further 5 weeks. On re-examination, the stridor had recurred and excisional biopsy was recommended.
The patient was positioned in dorsal recumbency, with the neck extended and supported by a sandbag. A ventral midline cervical incision was created, and a combination of blunt and sharp dissection through subcutaneous tissue and sphincter coli muscle was performed to identify the mass ventral to the right tympanic bulla. The mass was marginally dissected from the local neurovascular structures (branches from the lingual vein, the hypoglossal nerve), muscles (digastricus, styloglossus and hypoglossus) and other adjacent anatomy (the lateral surface of the larynx and ventral surface of the right tympanic bulla) using a combination of delicate, blunt and sharp dissection and bipolar electrosurgery. Lavage and routine closure were performed. The mass was submitted for histopathology.
Histology described an infiltrative but partially encapsulated malignant tumour of mesenchymal origin. While gross cytoreduction had been achieved to maximise diagnostic value and therapeutic benefit for the cat, histologically complete margins were not achieved. Adjunctive radiation therapy of the primary tumour location was discussed, but the owners declined this option.
The cat was re-examined 7 months following surgery. The owner reported no recurrence of stridor and the cat was normal on examination. Repeat contrast-enhanced CT 16-slice helical scans (Somaton Emotion; Siemens) of the head and thorax was performed. CT revealed no evidence of primary mass recurrence (Figure 8), static right retropharyngeal mild lymphadenopathy and a poorly defined, mildly contrast-enhancing soft tissue opacity within the cranial mediastinum (possible mildly enlarged mediastinal lymph node). Further investigation was declined.
Discussion
This report describes the first case of retropharyngeal leiomyosarcoma in a young cat and should be considered as a differential diagnosis for a mass in this location.
The main presenting clinical sign was stridor, likely due to the partial obstruction of the upper respiratory tract and displacement of the larynx, and the mass could be readily identified by palpation and direct visualisation.
As far as we are aware, the youngest cats diagnosed with leiomyosarcoma were one cat aged 3 years and 9 months, which was diagnosed with oesophageal angioleiomyosarcoma, 6 followed by a 4-year-old cat with documented primary bladder leiomyosarcoma. 4 While most of the reported cases feature middle-aged to older cats, 1,3,11,15 the cat in this report was 1 year and 6 months old, which represents an uncommon, early presentation for this type of tumour.
The initial lack of a definitive diagnosis by FNA, the young age of the cat and the findings of eosinophilic pulmonary disease influenced the initial decision to provide a therapeutic trial with oral antibiotics and steroids. The initial improvement in clinical signs may be due to reduced peritumoral inflammation. The failure of the mass to respond to treatment then prompted the decision for a more invasive procedure. Excisional biopsy was elected over Tru-cut or incisional biopsy as the clinical signs of stridor were likely due to the mass effect on the trachea and therefore a planned marginal excision would provide therapeutic benefit.
Immunohistochemistry was necessary because the histopathological features of the mass were similar to rhabdomyosarcoma and fibrosarcoma. The location of the mass, particularly in a young cat, also meant that other differential diagnoses would be more likely than leiomyosarcoma. Leiomyosarcomas that develop in the haired skin and subcutaneous tissues are thought to arise from smooth muscle associated with the vasculature or arrector pili muscles. 1 The oesophagus of the cat appeared radiologically normal and would only be expected to contain smooth muscle in the intrathoracic portion. It is possible that this tumour may have arisen from vascular smooth muscle in this area. An additional differential diagnosis considered in this case was laryngeal tumour 17 or cyst. 18 Oral examination and CT ruled out a laryngeal origin of this mass.
Postoperatively, the cat was prescribed prednisolone to decrease postoperative inflammation and respiratory tract obstruction following the extensive dissection in the area, and because eosinophilic lung disease had previously been noted on BAL (with an initial improvement in clinical signs when treated with prednisolone and oral antibiotics). It is unclear whether the treatment with prednisolone had any effect on the outcome, although corticosteroids are not generally considered to have antineoplastic action on mesenchymal tumours. Adjunctive radiation therapy has not been extensively reported for incompletely excised feline leiomyosarcomas, although it would be considered a beneficial therapy for residual microscopic disease. However, radiotherapy was declined by the owners. Long-term follow-up data for cats treated for leiomyosarcoma are largely lacking. The reported prognosis following surgical removal of the tumour varies from 1 month 7 to 48 months. 4 Numerous case reports documented the short-term prognosis, with no tumour recurrence at 5 months, 8 6 months 6,12,15 and 10 months 5 postoperatively, respectively. At the time of writing, the cat had shown no signs of tumour recurrence; the presence of mild unilateral retropharyngeal lymphadenopathy and cranial mediastinal lymphadenopathy could indicate metastatic disease, although leiomyosarcomas do not typically metastasise via the lymphatic route. Other causes of the imaging findings (eg, reactive hyperplasia or normal anatomical variance) were considered more likely. Continued surveillance of the cat and, ultimately, postmortem examination are required for complete information.
Conclusions
Leiomyosarcoma should be a differential diagnosis for cats presenting with a retropharyngeal mass. A good medium-term outcome can be achieved with marginal excision. | 2023-04-20T15:04:00.988Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "7fe2ab043875695b5644d7faa9ff9bdbb4d14054",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1177/20551169231164612",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8ee2bdde3de65f0c192e5f9a6d3f443bbfcfeaa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235282344 | pes2o/s2orc | v3-fos-license | Tobacco leaf redrying System Control based on Predictive Auto-Coupling PID
Temperature and moisture content in the threshing and re-drying process can be treated as the same type of control object, characterized by a large time delay and strong coupling. Under the present control strategy, the controlled variables (temperature and moisture content) of the tobacco leaves exhibit large fluctuations because of uncertain disturbances, so the quality and reliability of production are not good enough to retain clients. In order to improve the quality of export tobacco leaves, this paper uses a Predictive Auto-Coupling PID (PAC-PID) control algorithm to design controllers for tobacco temperature and moisture content. The simulation results show that the proposed method can smoothly track the desired signal with fast speed and high accuracy, and the control effect is better than that of existing methods. When the control method is applied to actual production, the stabilization time of the system is less than 16 s, and at the same time the moisture-content deviation of export tobacco leaves can be controlled within 1.5%. This means that the method can meet actual production requirements.
INTRODUCTION
Threshing and re-drying is a key link in cigarette production. Its main task is to adjust the moisture content of the tobacco leaves, purify the tobacco, remove impurities, and kill pests and germs by controlling the temperature and humidity of the leaves, so as to promote the natural aging of the tobacco [1][2][3]. The stability of temperature and humidity during the re-drying process directly affects the threshing and re-drying quality indices; therefore, fluctuations in the quality of export tobacco can be reduced by effectively stabilizing the moisture and temperature of the outgoing tobacco. However, in the threshing and re-drying production process the tobacco processing machinery involves complicated action sequences, many influencing factors, long reaction times and a strong dependence on operator experience, so it is hard to build accurate mathematical models. In the existing control scheme, the PID parameter settings of the tobacco processing machines depend entirely on the experience of field operators, and so does the control effect. This may lead to unstable indices of the controlled variables and insufficient accuracy, and may even result in unqualified products and returns. In order to improve production efficiency, the control of tobacco production equipment has become a research focus. In recent years, control algorithms that integrate traditional PID with intelligent algorithms have shown strong adaptability and problem-solving ability in the temperature and humidity control of tobacco leaves.
Papers [4][5][6][7][8][9][10][11][12] established control models and designed controllers based on the physical and chemical characteristics of tobacco temperature and humidity in re-drying machines. Papers [4][5][6][7] set up a first-order-plus-time-delay object based on the characteristics of the temperature control object and designed a predictive PI control law for it. Simulation results show that the control effect can basically meet production needs; compared with traditional PID control, the accuracy is improved, but there is still considerable room for improvement in response speed. For the tobacco leaf humidity object, papers [4][5][6][7][8] established a combined integral control model and used an anti-delay quasi-PI control algorithm to design the controller; the results show that the controller responds sensitively, but its disturbance rejection is not good enough. Differently from [4][5][6][7][8], researchers [9][10][11][12] set up a second-order oscillation model for the tobacco leaf humidity object and used a PI+Smith controller; simulation results show that the controller's speed and disturbance rejection can meet the requirements of industrial production, but too many parameters need to be adjusted. For producers with different tobacco leaf recipes, tuning the controller parameters is undoubtedly a big problem. In addition to the above research, papers [13][14][15][16][17] proposed fuzzy control algorithms for threshing and re-drying production. The experimental results show a relatively sound control effect, with response speed and disturbance rejection meeting production requirements. The disadvantage is that the fuzzy rules of a fuzzy controller are formulated from the experience of on-site production personnel and relevant experts, which introduces subjective human factors, and it is hard to ensure that the same set of fuzzy rules can be applied across the entire production line.
Predictive Auto-Coupling PID (PAC-PID) is a control algorithm formed by combining Auto-Coupling PID (AC-PID) with a PI + Smith predictor. It inherits the Smith predictor's advance compensation for systems with time delay, and at the same time retains the advantages of AC-PID: few parameters to tune, global robust stability and good anti-disturbance robustness. It therefore has excellent tracking performance and disturbance recovery performance for long-time-delay, strongly coupled objects.
To solve the control problem of long-time-delay, strongly coupled objects, this paper designs PAC-PID controllers for the temperature and moisture-content objects based on the PAC-PID control principle, so that the effect of various external disturbances on tobacco leaf quality is effectively suppressed. It is very hard to establish a strict mechanistic model because of the complexity of the re-drying process.
TEMPERATURE AND HUMIDITY CONTROL OBJECT
Threshing and re-drying is a long-time-delay, strongly coupled, nonlinear and multi-disturbance process. A variation of factors in any region of one section will affect the parameters and exit indices of subsequent sections; therefore, the establishment and design of the process model and the control object are particularly important. The establishment of the model and control object is the key to implementing an advanced control algorithm, and the accuracy of the model is closely related to the quality of control, especially for the temperature and humidity control objects. In this paper, the literature [4][5][6][7][8][9][10][11][12] is comprehensively summarized to determine the temperature and humidity objects of tobacco leaves.
For most industrial processes, temperature control is a process with a long time delay, and threshing and re-drying is no exception. Based on the research results of [4][5][6][7], the mechanism of the temperature control object conforms to a first-order-plus-time-delay model, whose transfer function is
G_1(s) = K' e^{-τ's} / (T's + 1)   (1)
where K' is the self-tuning proportionality coefficient of the control object, τ' is the time-delay coefficient of the control object, and T' is the time constant of the control object. Taking the actual situation into consideration, the temperature object parameters are K' = 1, τ' = 60 s and T' = 250 s. According to the moisture mechanism analysis of tobacco leaves [6][7][8][9][10], the transfer function of the tobacco leaf moisture-content control object takes a second-order oscillation form, given in (2).
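To make the temperature object concrete, the short sketch below simulates the first-order-plus-time-delay model (1) with the stated values K' = 1, τ' = 60 s and T' = 250 s using a forward-Euler discretisation; the 1 s sampling period matches the simulation setup used later, while the function name and the unit-step input are illustrative choices, not part of the original paper.

```python
import numpy as np

def simulate_fopdt(u, K=1.0, tau=60.0, T=250.0, dt=1.0, y0=0.0):
    """Forward-Euler simulation of the first-order-plus-time-delay model (1):
    T * dy/dt + y(t) = K * u(t - tau), with dt the sampling period in seconds."""
    delay_steps = int(round(tau / dt))
    y = np.zeros(len(u))
    y[0] = y0
    for k in range(len(u) - 1):
        u_delayed = u[k - delay_steps] if k >= delay_steps else 0.0  # dead time
        y[k + 1] = y[k] + dt * (-y[k] + K * u_delayed) / T
    return y

if __name__ == "__main__":
    u = np.ones(2000)            # unit step applied at t = 0, simulated for 2000 s
    y = simulate_fopdt(u)
    # y stays ~0 during the 60 s dead time, reaches ~63% of the step one time
    # constant (250 s) later, and approaches 1 at the end of the horizon
    print(y[60], y[310], y[-1])
```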
Control system analysis
Because the temperature and humidity objects of tobacco leaves in re-drying production are strongly coupled and have long time delays, the PAC-PID algorithm is used in this paper to design the controllers for them. The design steps are as follows. From the transfer functions (1) and (2) of the control objects, the differential equations of the first-order and second-order objects are obtained as (3) and (4); the correspondence between the model parameters of the system is shown in Table 1.
In these differential equations, d is the external bounded disturbance, u is the input of the system, y is the output of the system and, together with its derivatives, represents the internal system state, and b̂_0 is a rough estimate of the system model parameter b_0. The total disturbance of the first-order system is defined in (5) and that of the second-order system in (6); since the total disturbance is bounded, (3) and (4) can be rewritten as (7) and (8). Both (7) and (8) are uncertain systems with long time delays. According to the characteristics of a system with a long time delay, when t < τ the control object has no valid output, so y(t) = 0 and the tracking error e(t) = y_d(t) − y(t) = y_d(t) between the desired trajectory y_d and the output is at its maximum; when t ≥ τ, the controlled object begins to transition to an effective output and the tracking error becomes e(t) = y_d(t) − y(t).
Smith predictive compensation controller
In order to solve the problem of the large early tracking error, a Smith predictive controller is used to compensate the system output. The first-order Smith predictive controller is given by (9) and the second-order Smith predictive controller by (10), in which the quantities with hats are the estimated values of the corresponding model parameters. The differential equations of the Smith estimator corresponding to (9)-(10) are (11) and (12), where the first term is the compensation output of the Smith prediction model under the action of the input u without time delay, and the second is the compensated output with time delay. After combining (7), (8), (11) and (12), the prediction-compensation link of the system is defined by (13). The predicted compensation output (13) can be divided into three cases. Case 1: when t < τ, the predictive compensation output of the system is completely determined by the delay-free prediction model of the Smith controller (11)-(12); the resulting first-order and second-order predictive compensation systems are given in (14) and (15). Case 2: let t_s be the adjusting time of the system; when τ ≤ t < τ + t_s, combining with (13) shows that the predictive compensation output is still determined by the delay-free prediction model of the Smith controller (11)-(12); the first-order and second-order forms are given in (16) and (17). Case 3: when t ≥ τ + t_s, it follows from (13) that the predictive compensation output of the system is completely determined by the internal state of the object with time delay, the compensation output of the Smith controller (11)-(12) cancels out completely, and the system has fully transitioned to the steady state.
The resulting steady-state internal dynamics of the first-order and second-order objects are denoted (18) and (19). It can be seen from (18) and (19) that, after entering the steady state, the time-delay system has become a dynamic system without time delay.
Based on the above three cases, during the whole control period of the time-delay system the predictive compensation output is determined either by the output of the delay-free Smith controller or by the internal state of the object with time delay. The internal state of system (16)-(17) is similar to that of system (18)-(19), but the disturbance range of (18)-(19) is larger than that of (16)-(17), so system (16)-(17) is just a special case of system (18)-(19). Therefore, the internal dynamic system (18)-(19) can be taken as the controlled object for controller design.
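Because the original equations (9)-(19) did not survive extraction, the sketch below only illustrates the standard Smith-predictor structure that the text describes: a delay-free nominal model provides the fast feedback signal, while a delayed copy of the same model is compared against the measurement so that model mismatch and disturbances are still corrected. The PI control law and all names here are illustrative assumptions rather than the paper's PAC-PID law.

```python
class SmithPredictorPI:
    """PI controller wrapped with a standard Smith predictor for a
    first-order-plus-time-delay plant (nominal parameters K, T, tau).

    Two copies of the nominal model are run: a delay-free one used for fast
    feedback, and a delayed one whose mismatch with the measurement corrects
    for disturbances and modelling error. The PI gains are placeholders, not
    the paper's PAC-PID tuning."""

    def __init__(self, kp, ki, K=1.0, tau=60.0, T=250.0, dt=1.0):
        self.kp, self.ki = kp, ki
        self.K, self.T, self.dt = K, T, dt
        self.int_err = 0.0
        self.y_model = 0.0                              # delay-free model state
        self.buffer = [0.0] * int(round(tau / dt))      # delayed model outputs

    def step(self, setpoint, y_measured):
        y_model_delayed = self.buffer[0]
        # feedback = delay-free prediction + (measurement - delayed prediction)
        err = setpoint - (self.y_model + (y_measured - y_model_delayed))
        self.int_err += err * self.dt
        u = self.kp * err + self.ki * self.int_err
        # advance the delay-free nominal model and the delay buffer
        self.y_model += self.dt * (-self.y_model + self.K * u) / self.T
        self.buffer = self.buffer[1:] + [self.y_model]
        return u
```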
Predictive Auto Coupling PID methods
Let the anticipation (prediction) error be defined with respect to the predictive compensation output y_c of the control object. The tracking error is defined in (20), the error integral in (21) and the error differential in (22). From (20)-(22), the error dynamic system of the first-order object is obtained as (23), and the tuning rules for the PAC-PI parameters are given in (24)-(25); the error dynamic system of the second-order object is (26), and the tuning rules for the PAC-PID parameters are given in (27). In these parameter-setting principles, the speed (velocity) factor z_c > 0 of PAC-PID combines the physical elements with different attributes, such as the proportional, integral and differential terms, into one collaborative control signal.
According to the tracking-error systems and the tuning rules (23)-(27), the PAC-PID control law is defined as (28) for the first-order system and (29) for the second-order system. The PAC-PID control principle diagram for the temperature and humidity control objects of threshing and re-drying is shown in Figure 1. In a closed-loop control system that contains the PAC-PID controller, the parameters to be tuned include the parameters of the estimator model and the speed factor z_c of PAC-PID. Theoretically, z_c should be tuned larger in order to improve the response speed of the control system, but if it is tuned too large the system may overshoot and oscillate because of the integral-saturation problem at the initial stage of the response. We found that better results can be obtained by tuning z_c on the basis of the time constant of the control object, the initial estimate of the time delay, and an online estimate of the time delay [18]; the resulting tuning rules are given in (30) for the first-order system and (31) for the second-order system. In the first-order case, when events occur that increase the time constant, z_c should be taken at its minimum; otherwise, take the opposite case.
Here T̂ is an estimate of the system time constant. In the second-order system, likewise, when events occur that increase the time constant, z_c should be taken at its minimum, and otherwise the opposite; t_s is the transition time of the system into the steady state. Besides that, both the initial estimate of the time delay and the online running estimate of the time delay need to be determined.
State observation will introduce a new delay, but its effect can be ignored, because it only adds one sampling period of delay on top of the existing time delay. In practical applications, the gain parameters of the PAC-PID controller are first determined according to the parameter tuning rules and then fine-tuned according to specific indicators to obtain good control results.
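The exact gain formulas (24)-(29) were not recoverable, so the following sketch only illustrates the auto-coupling idea stated in the text: every gain is tied to a single speed factor, so that, apart from the model estimates, only one parameter needs tuning. The particular coefficients follow the common auto-coupling PID convention found in the literature and are assumptions, not the paper's exact rules; likewise, treating the value 3.2 quoted for the temperature experiment as this speed factor is an assumption.

```python
def pac_pid_gains(zc, order=2):
    """Illustrative auto-coupling gain rule: all gains are powers of one speed
    factor zc, so only zc (plus the model estimates) needs tuning. The specific
    coefficients follow the usual auto-coupling PID convention and are assumptions
    here, since the paper's equations (24)-(29) were not recoverable."""
    if order == 1:   # PAC-PI for the first-order (temperature) object
        return {"kp": 2.0 * zc, "ki": zc ** 2}
    if order == 2:   # PAC-PID for the second-order (humidity) object
        return {"kp": 3.0 * zc ** 2, "ki": zc ** 3, "kd": 3.0 * zc}
    raise ValueError("only first- and second-order objects are considered")

# The temperature experiment quotes a tuned value of 3.2; treating it as the
# speed factor (an assumption) gives:
print(pac_pid_gains(3.2, order=1))
```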
SIMULATION EXPERIMENT
In order to verify the validity of PAC-PID for the control of the tobacco leaf temperature and humidity objects, the tobacco leaf temperature object (1) and the tobacco leaf humidity object (2) are taken as examples for simulation comparison tests.
Temperature control object
In the experiment, the desired trajectory is taken as the unit step signal, the sampling frequency is 1 and the step length is h = 1. During the simulation, a unit step disturbance is added at T = 1500 s to simulate a change of operating conditions. The proposed control results are compared with the control results of the PI-Smith algorithm in paper [12] and the predictive PI algorithm in papers [4][5][6][7].
The tuning parameter of the PAC-PID controller is preliminarily selected as 3.2 according to rule (30). The PI-Smith control parameters are tuned according to the rules in the literature [12], while the predictive PI parameters are tuned according to the rules in [4][5][6][7]. It can be clearly observed from the comparison experiment in Figure 2 that, in the control of the tobacco temperature object, the PAC-PID algorithm tracks faster and more accurately than PI-Smith and predictive PI; in the disturbance-rejection phase, the dynamic and steady-state performance of the PAC-PID algorithm is significantly better than that of the other two controllers.
Humidity control object
The simulation experiment for tobacco humidity control adopts (2) as the control object model, and the compared methods are PAC-PID, predictive PI and PI-Smith. The parameters of the PAC-PID controller are tuned according to rule (31), and the parameters of the predictive PI controller and the PI+Smith controller are set according to the rules of papers [4][5][6][7] and [12], respectively; the remaining parameters are adjusted adaptively according to the control object (2). A unit step disturbance is added at T = 750 s to simulate a change of operating conditions. It can be observed from the comparison test in Figure 3 that, compared with the predictive PI and PI-Smith controllers on the humidity object, PAC-PID has better speed and disturbance rejection, and its overall control effect is also better.
Fig. 3 Comparison of control effects of the humidity object: (a) comparison of step tracking results; (b) comparison of disturbance immunity
Field production control effect
The PAC-PID controller was applied to line A of Changzhou Re-drying Factory for experimental verification. The actual production effect can be seen from Figure 4. The deviation between the actual value and the set value of the moisture content after stabilization was 0.17, i.e. a deviation rate of 1.3%, whereas the national standard for the deviation rate is 3.0% [19]. This proves that the PAC-PID controller provides excellent control accuracy for the tobacco humidity object. From the moisture-content curve of the export tobacco, the moisture content rises to a peak of 13.1 within 18 s, giving a moisture-content overshoot of 2.3%; under the action of the PAC-PID controller the stable value of 12.82 is then reached rapidly after another 24 s, so the system adjustment time is 42 s. At T = 150 s a disturbance causes the moisture content to fluctuate from 12.4 to 13.0, but under the controller the moisture content of the tobacco leaves is adjusted back to the stable value after only 16 s. This proves that PAC-PID also performs excellently in moisture-content control and disturbance resistance in the actual production process.
CONCLUSION
The temperature and humidity of tobacco leaves in the threshing and re-drying process are control objects with large time delay and strong coupling. Although many researchers have proposed excellent control algorithms for this subject, re-drying tobacco leaves is always a high-cost production process, and a small carelessness can cause huge losses, while optimizing production control can also improve re-drying production efficiency. In this paper, the temperature and humidity control objects of tobacco are characterized from their mechanisms, and a predictive auto-coupling PID control strategy (PAC-PID) for large-time-delay objects is applied to them. The simulation comparisons show that the effect of the PAC-PID method is better than that of the traditional methods and fulfils the requirements of current threshing and re-drying production. Finally, the control method is applied to actual production, and the production results show that the control effect is excellent.
"year": 2021,
"sha1": "9bcce89375f2beab953523de4b0ce9e0505d2d2a",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1906/1/012034/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "9bcce89375f2beab953523de4b0ce9e0505d2d2a",
"s2fieldsofstudy": [
"Engineering",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Physics"
]
} |
209366827 | pes2o/s2orc | v3-fos-license | Estimation-Action-Reflection: Towards Deep Interaction Between Conversational and Recommender Systems
Recommender systems are embracing conversational technologies to obtain user preferences dynamically, and to overcome inherent limitations of their static models. A successful Conversational Recommender System (CRS) requires proper handling of interactions between conversation and recommendation. We argue that three fundamental problems need to be solved: 1) what questions to ask regarding item attributes, 2) when to recommend items, and 3) how to adapt to the users' online feedback. To the best of our knowledge, there lacks a unified framework that addresses these problems. In this work, we fill this missing interaction framework gap by proposing a new CRS framework named Estimation-Action-Reflection, or EAR, which consists of three stages to better converse with users. (1) Estimation, which builds predictive models to estimate user preference on both items and item attributes; (2) Action, which learns a dialogue policy to determine whether to ask attributes or recommend items, based on Estimation stage and conversation history; and (3) Reflection, which updates the recommender model when a user rejects the recommendations made by the Action stage. We present two conversation scenarios on binary and enumerated questions, and conduct extensive experiments on two datasets from Yelp and LastFM, for each scenario, respectively. Our experiments demonstrate significant improvements over the state-of-the-art method CRM [32], corresponding to fewer conversation turns and a higher level of recommendation hits.
INTRODUCTION
Recommender systems are emerging as an important means of facilitating users' information seeking [6,17,20,30]. However, much of such prior work in the area solely leverages the offline historical data to build the recommender model (henceforth, the static recommender system). This offline focus causes the recommender to suffer from an inherent limitation in the optimization of offline performance, which may not necessarily match online user behavior. User preference can be diverse and often drift with time; and as such, it is difficult to know the exact intent of a user when he uses a service even when the training data is sufficient.
The rapid development of conversational techniques [19,22,23,26,35] brings an unprecedented opportunity that allows a recommender system to dynamically obtain user preferences through conversations with users. This possibility is envisioned as the conversational recommender system (CRS), for which the community has started to expend effort in exploring its various settings. [40] built a conversational search engine by focusing on document representation. [23] developed a dialogue system to suggest movies for cold start users, contributing to language understanding and generation for the purpose of recommendation, but does not consider modeling users' interaction histories (e.g., clicks, ratings). In contrast, [9] does consider user click history in recommending, but their CRS only handles single-round recommendation. That is, their model considers a scenario in which the CRS session terminates after making a single recommendation, regardless of whether the recommendation is satisfactory or not. While a significant advance, we feel this scenario is unrealistic in actual deployments.
In particular, we believe CRS models should inherently adopt a multi-round setting: a CRS converses with a user to recommend items based on his click history (if any). At each round, the CRS is allowed to choose two types of actions -either explicitly asking whether a user likes a certain item attribute or recommending a list of items. In a session, the CRS may alternate between these actions multiple times, with the goal of finding desirable items while minimizing the number of interactions. This multi-round setting is more challenging than the single-round setting, as the CRS needs to strategically plan its actions. The key in performing such planning, from our perspective, lies in the interaction between the conversational component (CC; responsible for interacting with the user) and the recommender component (RC; responsible for estimating user preference -e.g., generating the recommendation list). We summarize three fundamental problems toward the deep interaction between CC and RC as follows: • What attributes to ask? A CRS needs to choose which attribute to ask the user about. For example, in music recommendation, it may ask "Would you like to listen to classical music?", expecting a binary yes/no response 1 . If the answer is "yes", it can focus on items containing the attribute, benefiting the RC by reducing uncertainty in item ranking. However, if the answer is "no", the CRS expends a conversation turn with less gain to the RC. To achieve the goal of hitting the right items in fewer turns, the CC must consider whether the user will like the asked attribute. This is exactly the job of the RC which scrutinizes the user's historical behavior. • When to recommend items? With sufficient certainty, the CC should push the recommendations generated by the RC. A good timing to push recommendations should be when 1) the candidate space is small enough; when 2) asking additional questions is determined to be less useful or helpful, from the perspective of either information gain or user patience; and when 3) the RC is confident that the top recommendations will be accepted by the user. Determining the appropriate timing should take both the conversation history of the CC and the preference estimation of the RC into account. • How to adapt to users' online feedback? After each turn, the user gives feedback; i.e., "yes"/"no" to a queried attribute, or an "accept"/"reject" the recommended items. (1) For "yes" on the attribute, both user profile and item candidates need to be updated to generate better recommendations; this requires the offline RC training to take such updates into account. (2) For "no', the CC needs to adjust its strategy accordingly. (3) If the recommended items are rejected, the RC model needs to be updated to incorporate such a negative signal. Although adjustments seem only to impact either the RC or the CC, we show that such actions impact both. Towards the deep interaction between CC and RC, we propose a new solution named Estimation-Action-Reflection (EAR), which consists of three stages. Note that the stages do not necessarily align with each of the above problems. (a) Estimation, which builds predictive models offline to estimate user preference on items and item attributes. Specifically, we train a factorization machine [29] (FM) using user profiles and item attributes as input features. 
Our Estimation stage builds in two novel advances: 1) the joint optimization of FM on the two tasks of item prediction and attribute prediction, and 2) the adaptive training of conversation data with online user feedback on attributes. (b) Action, which learns the conversational strategy that determines whether to ask or recommend, and what attribute to ask. We train a policy network with reinforcement learning, optimizing the reward of shorter turns and successful recommendations based on the FM's estimation of user preferred items and attributes, and the dialogue history. (c) Reflection, which adapts the CRS with user's online feedback. Specifically, when a user rejects the recommended items, we construct new training triplets by treating the items as negative instances and update the FM in an online manner. In summary, the main contributions of this work are as follows: • We comprehensively consider a multi-round CRS scenario that is more realistic than previous work, highlighting the importance of researching into the interactions between the RC and CC to build an effective CRS. • We propose a three-stage solution, EAR, integrating and revising several RC and CC techniques to construct a solution that works well for the conversational recommendation. • We build two CRS datasets by simulating user conversations to make the task suitable for offline academic research. We show our method outperforms several state-of-the-art CRS methods and provide insight on the task.
1 Note that it is possible to compose questions eliciting an enumerated response; i.e., "Which music genre would you consider? I have pop, funk ...". However, this is a design choice depending on the domain requirements. In describing our method, we consider the basic single-attribute case. However in experiments, we also justify the effectiveness of EAR in asking such enumerated questions on Yelp. For the purpose of exposition, we have chosen to avoid open questions that do not constrain user response for now. Even interpreting user responses to such questions is considered a challenging task [5].
MULTI-ROUND CONVERSATIONAL RECOMMENDATION SCENARIO
Following [9], we denote one trial of recommendation as a round. This paper considers conversational recommendation as an inherently multi-round scenario, where a CRS interacts with the user by asking attributes and recommending items multiple times until the task succeeds or the user leaves. To distinguish the two, we term the setting single-round where the CRS only makes recommendations once, ending the session regardless of the outcome, as in [9,32]. We now introduce the notation used to formalize our setting. Let u ∈ U denote a user u from the user set U and v ∈ V denote an item v from the item set V. Each item v is associated with a set of attributes P v which describe its properties, such as music genre "classical" or "jazz" for songs in LastFM, or tags such as "nightlife", "serving burgers", or "serving wines" for businesses in Yelp. We denote the set of all attributes as P and use p to denote a specific attribute. Following [32,40], a CRS session is started with u's specification of a preferred attribute p 0 , then the CRS filters out candidate items that contain the preferred attribute p 0 . Then in each turn t (t = 1, 2, ...,T ; T denotes the last turn of the session), the CRS needs to choose an action: recommend or ask: • If the action is recommend, we denote the recommended item list V t ⊂ V and the action as a r ec . Then the user examines whether V t contains his desired item. If the feedback is positive, this session succeeds and can be terminated. Otherwise, we mark V t as rejected and move to the next round. • If the action is ask (where the asked attribute is denoted as p t ∈ P and the action as a ask (p t )), the user states whether he prefers items that contain the attribute p t or not. If the feedback is positive, we add p t into P u to denote the preferred attributes the user in the current session. Otherwise, we mark p t as rejected; regardless of rejection or not, we move to the next turn. This whole process naturally forms a interaction loop (Figure 1) where the CRS may ask zero to many questions before making recommendations. A session terminates if a user accepts the recommendations or leaves due to his impatience. We set the main goal of the CRS as making desired recommendations within as few rounds as possible.
PROPOSED METHODS
EAR consists of a recommendation and conversation component (RC and CC) which interact intensively in the three-stage conversational process. The system starts working at the estimation stage where the RC ranks candidate items and item attributes for the user, so as to support the action decision of the CC. After the estimation stage, the system moves to the action stage where the CC decides whether to choose an attribute to ask, or make a recommendation according to the ranked candidates and attributes, and the dialogue history. If the user likes the attribute asked by the CC, the CC feeds this attribute back to the RC to make a new estimation again; otherwise, the system stays at the action stage: it updates the dialogue history and chooses another action. Once a recommendation is rejected by a user, the CC sends the rejected items back to the RC, triggering the reflection stage where the RC adjusts its estimations. After that, the system enters the estimation stage again.
Estimation
As discussed before, the multi-round conversational scenario brings in new challenges to the traditional RC. Specifically, the CC interacts with a user u and accumulates evidence on his preferred attributes, denoted as P u = {p 1 , p 2 , .., p n } 2 . Importantly, different from traditional recommendation methods [17,30], the RC here needs to make full use of P u, aiming to accurately predict both the user's preferred items and preferred attributes. These two goals exert positive influence on EAR, where the first directly contributes to the success rate of recommendation, and the second guides the CC to choose better attributes to ask users so as to shorten the conversation. In the following, we first introduce the basic form of the recommendation method, followed by details on how we adapt our proposed method to achieve both goals simultaneously.
3.1.1 Basic Recommendation Method. We choose the factorization machine (FM) [29] as our predictive model due to its success and wide usage in recommendation tasks. However, FM considers all pairwise interactions between input features, which is costly and may introduce undesired interactions that negatively affect our two goals. Thus, we only keep the interactions that are useful to our task and remove the others. Given user u, his preferred attributes in the conversation P u , and the target item v, we predict how likely u will like v in the conversation session as
ŷ(u, v | P u) = u^T v + Σ_{p i ∈ P u} v^T p i ,
where u and v denote the embedding for user u and item v, respectively, and p i denotes the embedding for attribute p i ∈ P u . Bias terms are omitted for clarity. The first term u^T v models the general interest of the user on the target item, a common term in the FM model [17]. The second term v^T p i models the affinity between the target item and user preferred attributes. We have also tried to include v's attributes P v into FM, but found it brings no benefits. One possible reason is that the item embedding v may have already encoded its attribute information. Thus we also omit it.
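A minimal numpy sketch of the attribute-aware FM score above; the embedding size of 64 matches the experimental setting, while the random toy embeddings and the function name are purely illustrative.

```python
import numpy as np

def fm_score(u_emb, v_emb, attr_embs):
    """Attribute-aware FM score: u^T v + sum over confirmed attributes of v^T p_i."""
    score = float(u_emb @ v_emb)
    for p_emb in attr_embs:
        score += float(v_emb @ p_emb)
    return score

# Toy usage with 64-dimensional embeddings (the embedding size used in the paper).
rng = np.random.default_rng(0)
u, v = rng.normal(size=64), rng.normal(size=64)
confirmed = [rng.normal(size=64) for _ in range(2)]   # two confirmed attributes
print(fm_score(u, v, confirmed))
```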
To train the FM, we optimize the pairwise Bayesian Personalized Ranking (BPR) [30] objective. Specifically, given a user u, it assumes the interacted items (e.g., visited restaurants, listened music) should be assigned higher scores than those not interacted with. The loss function of traditional BPR is
L_1 = Σ_{(u,v,v') ∈ D 1} −ln σ(ŷ(u, v | P u) − ŷ(u, v' | P u)) + λ Θ ||Θ||^2 ,
where D 1 := {(u, v, v') | v' ∈ V − u} is the set of pairwise instances for BPR training, v is the interacted item of the conversation session (i.e., the ground truth item of the session), V − u := V\V + u denotes the set of non-interacted items of user u and V + u denotes the items interacted by u, σ is the sigmoid function, and λ Θ is the regularization parameter to prevent overfitting.
3.1.2 Attribute-aware BPR for Item Prediction. However, in our scenario, the emphasis of CRS is to rank the items that contain the user preferred attributes well. For example, if u specifies "Mexican restaurant" as his preferred attribute, a good CRS needs to rank his preferred restaurants among all available Mexican restaurants. To capture this, we propose to sample two types of negative examples: V − u , the same negative samples as in the traditional BPR setting, i.e., all non-interacted items of u; and V − cand := V cand \ V + u , where V cand denotes the current candidate items satisfying the partially known preference P u in the conversation and the observed items V + u are excluded. The two types of pairwise training instances are accordingly D 1 := {(u, v, v') | v' ∈ V − u} and D 2 := {(u, v, v') | v' ∈ V − cand}, where v is the ground truth item of the session. We then train the FM model by optimizing both D 1 and D 2 with the pairwise BPR loss, where the first loss learns u's general preference, and the second loss learns u's specific preference given the current candidates. It is worth noting that adding the second loss for training is critical for the model to rank well on the current candidates. This is very important for CRS since the candidate items dynamically change with user feedback along the conversation. However, the state-of-the-art method CRM [32] does not account for this factor, being insufficient in considering the interaction between the CC and RC.
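The following sketch illustrates how the two kinds of BPR training pairs could be drawn; the helper names, the number of negatives per positive and the uniform sampling are assumptions for illustration, not the paper's exact sampling procedure.

```python
import numpy as np

def attribute_aware_pairs(pos_item, all_items, interacted, candidates,
                          n_neg=4, rng=None):
    """Draw the two kinds of BPR pairs for one ground-truth item: D1 negatives
    come from all non-interacted items, D2 negatives from the current candidate
    set (items matching the confirmed attributes), excluding interacted items."""
    rng = rng or np.random.default_rng()
    d1_pool = [v for v in all_items if v not in interacted]
    d2_pool = [v for v in candidates if v not in interacted]
    d1 = [(pos_item, rng.choice(d1_pool)) for _ in range(n_neg)]
    d2 = [(pos_item, rng.choice(d2_pool)) for _ in range(n_neg) if d2_pool]
    return d1, d2

def bpr_pair_loss(pos_score, neg_score):
    """Single-pair BPR term: -ln sigma(score(pos) - score(neg))."""
    return -np.log(1.0 / (1.0 + np.exp(-(pos_score - neg_score))))
```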
Attribute Preference Prediction.
We formulate the task of the second goal of accurate attribute prediction separately. This prediction of attribute preference is mainly used in the CC to support the action on which attribute to ask (c.f. Sec 3.2). As such, we take u's preferred attributes in the current session into account and score a candidate attribute p as
ĝ(p | u, P u) = u^T p + Σ_{p i ∈ P u} p^T p i ,   (6)
which estimates u's preference on attribute p, given u's current preferred attributes P u . To train the model, we also employ the BPR loss, and assume that the attributes of the ground truth item v (of the session) should be ranked higher than other attributes:
L attr = Σ_{(u,p,p') ∈ D 3} −ln σ(ĝ(p | u, P u) − ĝ(p' | u, P u)) + λ Θ ||Θ||^2 ,
where the pairwise training data D 3 is defined as D 3 := {(u, p, p') | p ∈ P v ∧ p' ∈ P \ P v}, where P v denotes item v's attributes.
Multi-task Training.
We perform joint training on the two tasks of item prediction and attribute prediction, which has the potential of mutual benefits since their parameters are shared. The multi-task training objective is L = L item + L attr . Specifically, we first train the model with L item . After it converges, we continue optimizing the model using L attr . We iterate the two steps until convergence under both losses. Empirically, 2-3 iterations are sufficient for convergence.
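A minimal sketch of the alternating optimisation described above; the two fitting callables are assumed stand-ins for full training passes over D 1/D 2 and D 3.

```python
def multitask_train(fm, fit_item, fit_attr, rounds=3):
    """Alternating optimisation of the two shared-parameter objectives: each round
    first fits the item-prediction loss (on D1/D2), then the attribute-prediction
    loss (on D3); 2-3 rounds suffice for convergence according to the text.
    `fit_item` and `fit_attr` are assumed callables that train `fm` in place."""
    for _ in range(rounds):
        fit_item(fm)   # optimise L_item until it converges
        fit_attr(fm)   # then optimise L_attr
    return fm
```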
Action
After the estimation stage, the action stage finds the best strategy for when to recommend. We adopt reinforcement learning (RL) to tackle this multi-round decision making problem, aiming to accomplish successful recommendation in shorter number of turns. It is worth noting that since our focus is on conversational recommendation strategy, as opposed to fluent dialogue (the language part), we use templates as wrappers to handle user utterances and system response generation. That is to say, this work serves as an upper bound study of real applications as we do not include the errors for language understanding and generation.
The state vector s fed to the policy network is the concatenation of four component vectors, s = s ent ⊕ s pre ⊕ s his ⊕ s len (Equation 10); a small construction sketch is given after this list. Each of the vector components captures an assumption on asking which attribute could be most useful, or whether now is a good time to push a recommendation. They are defined as follows: • s ent : This vector encodes the entropy information of each attribute among the attributes of the current candidate items V cand . The intuition is that asking attributes with large entropy helps to reduce the candidate space, and thus benefits finding desired items in fewer turns. Its size is the attribute space size |P |, where the i-th dimension denotes the entropy of the attribute p i . • s pre : This vector encodes u's preference on each attribute. It is also of size |P |, where each dimension is evaluated by Equation (6) on the corresponding attribute. The intuition is that an attribute with high predicted preference is likely to receive positive feedback, which also helps to reduce the candidate space. • s his : This vector encodes the conversation history. Its size is the maximum number of turns T , where each dimension t encodes the user feedback at turn t. Specifically, we use -1 to represent recommendation failure, 0 to represent asking an attribute that u disprefers, and 1 to represent successfully asking about an attribute that u desires. This state is useful to determine when to recommend items. For example, if the system has asked about a number of attributes for which u approves, it may be a good time to recommend. • s len : This vector encodes the length of the current candidate list.
The intuition is that if the candidate list is short enough, EAR should turn to recommending to avoid wasting more turns. We divide the length |V cand | into ten categorical (binary) features to facilitate the RL training. It is worth noting that besides s his , the other three vectors are all derived from the RC component. We claim that this is a key difference from existing conversational systems [9,23,26,32,40]; i.e., the CC needs to take information from the RC to decide the dialogue action. In contrast to EAR, the recent conversational recommendation method CRM [32] makes decisions based only on the belief tracker that records the preferred attributes of the user, which makes it less informative. As such, CRM is less effective especially when the number of attributes is large (their experiments only deal with 5 attributes, which is insufficient for real-world applications).
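A sketch of how the four state components could be assembled; the per-attribute entropy and the feedback encoding follow the description above, whereas the way |V cand| is bucketed into ten binary features is not specified in the text, so the log-scale bucketing here is an assumption.

```python
import numpy as np

def build_state(cand_items, item_attrs, attr_scores, history, n_attrs, max_turn=15):
    """Assemble s = s_ent (+) s_pre (+) s_his (+) s_len for the policy network.
    `item_attrs` maps an item to its attribute ids, `attr_scores` are the RC's
    attribute-preference scores (Equation (6)), and `history` holds the per-turn
    feedback codes (-1 rejected recommendation, 0 disliked attribute, 1 liked)."""
    n = max(len(cand_items), 1)
    # s_ent: binary entropy of each attribute over the current candidate items
    s_ent = np.zeros(n_attrs)
    for a in range(n_attrs):
        p = sum(1 for v in cand_items if a in item_attrs[v]) / n
        if 0.0 < p < 1.0:
            s_ent[a] = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    # s_pre: estimated preference on each attribute
    s_pre = np.asarray(attr_scores, dtype=float)
    # s_his: per-turn feedback padded to the maximum number of turns
    s_his = np.zeros(max_turn)
    s_his[:len(history)] = history
    # s_len: candidate-list length as ten binary features (bucketing is an assumption)
    s_len = np.zeros(10)
    s_len[min(int(np.log2(len(cand_items) + 1)), 9)] = 1.0
    return np.concatenate([s_ent, s_pre, s_his, s_len])
```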
Policy Network and Rewards.
The conversation action is chosen by a policy network in our CC. In order to demonstrate the efficacy of our designed state vector, we purposely choose a simple policy network -a two-layer multi-layer perceptron, which can be optimized with the standard policy gradient method. It contains two fully-connected layers and maps the state vector s into the action space. The output layer is normalized to be a probability distribution over all actions by so f tmax. In terms of the action space, we follow the previous method [32], which includes all attributes P and a dedicated action for recommendation. To be specific, we define the action space as A = {a r ec ∪ {a ask (p)|p ∈ P}}, which is of size |P | + 1. After the CC takes an action at each turn, it will receive an immediate reward from the user (or user simulator). This will guide the CC to learn the optimal policy that optimizes long-term reward. In EAR, we design four kinds of rewards, namely: (1) r suc , a strongly positive reward when the recommendation is successful, (2) r ask , a positive reward when the user gives positive feedback on the asked attribute, (3) r quit , a strongly negative reward if the user quits the conversation, (4) r pr ev , a slightly negative reward on every turn to discourage overly lengthy conversations. The intermediate reward r t at turn t is the sum of the above four rewards, r t = r suc + r ask + r quit + r pr ev .
We denote the policy network as π (a t | s t ), which returns the probability of taking action a t given the state s t . Here a t ∈ A and s t denote the action to take and the state vector of the t-th turn, respectively. To optimize the policy network, we use the standard policy gradient method [33], formulated as follows:
θ ← θ + α ∇ θ log π θ (a t | s t ) R t ,
where θ denotes the parameters of the policy network, α denotes the learning rate of the policy network, and R t is the total reward accumulated from turn t to the final turn T :
R t = Σ_{k=t}^{T} γ^{k−t} r k ,
where γ is a discount factor which discounts future rewards over the immediate reward.
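A compact PyTorch sketch of the policy network and the REINFORCE update described above; the architecture details (two fully-connected layers with a hidden size of 64) and the helper names are approximations of the described setup rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    """Simple policy: two fully-connected layers mapping the state vector to a
    softmax distribution over the |P|+1 actions (ask each attribute, or recommend)."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.fc1 = nn.Linear(state_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_actions)

    def forward(self, state):
        return torch.softmax(self.fc2(torch.relu(self.fc1(state))), dim=-1)

def reinforce_update(optimizer, log_probs, rewards, gamma=0.7):
    """REINFORCE step over one finished session: R_t is the discounted return
    from turn t to the end, and the loss is -sum_t log pi(a_t|s_t) * R_t."""
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.insert(0, running)
    returns = torch.tensor(returns, dtype=torch.float32)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# During a turn, an action would be sampled roughly as:
#   probs = policy(state); dist = torch.distributions.Categorical(probs)
#   action = dist.sample(); log_probs.append(dist.log_prob(action))
```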
Reflection
This stage also implements the interaction between the CC and RC. It is triggered when the CC pushes the recommended items V t to the user but gets rejected, so as to update the RC model for better recommendations in future turns. In the traditional static recommender system training scenario [17,30], one issue is the absence of true negative samples, since users do not explicitly indicate what they dislike. In our conversational case, the rejection feedback is an explicit signal on user dislikes which is highly valuable to utilize; moreover, it indicates that the offline learned FM model improperly assigns high scores to the rejected items. To leverage this source of feedback, we treat the rejected items V t as negative samples, constructing more training examples to refresh the FM model. Following the offline training process, we also optimize the BPR loss
L refl = Σ_{(u,v,v') ∈ D 4} −ln σ(ŷ(u, v | P u) − ŷ(u, v' | P u)) + λ Θ ||Θ||^2 , with D 4 := {(u, v, v') | v ∈ V + u ∧ v' ∈ V t}.
Note that this stage is performed in an online fashion, where we do not have access to the ground truth positive item. Thus, we treat the historically interacted items V + u as the positive items to pair with the rejected items. We put all examples in D 4 into a batch and perform batch gradient descent. Empirically, it takes 3-5 epochs to converge, sufficiently efficient for online use.
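A sketch of the reflection-stage online update; `fm_model.score` is an assumed interface returning the attribute-aware FM score as a differentiable tensor, and the triple construction mirrors the D 4 definition above.

```python
import torch

def reflection_update(fm_model, optimizer, user, rejected_items, interacted_items,
                      n_epochs=4):
    """Online update after a rejected recommendation: pair the user's historically
    interacted items (used as positives, since the true target is unknown online)
    with the rejected items as negatives, and run a few epochs of BPR updates
    (3-5 epochs suffice according to the text)."""
    d4 = [(pos, neg) for pos in interacted_items for neg in rejected_items]
    if not d4:
        return
    for _ in range(n_epochs):
        loss = 0.0
        for pos, neg in d4:
            diff = fm_model.score(user, pos) - fm_model.score(user, neg)
            loss = loss - torch.log(torch.sigmoid(diff))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```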
Note that although it sounds reasonable to also update the policy network of the CC (since the rejection feedback implies that it is not an appropriate timing to push a recommendation), we currently do not perform this update due to the high difficulty of updating an RL agent online, and leave it for future work.
EXPERIMENTS
EAR 3 is built based on the guiding ideology of interaction between the CC and RC. To validate this ideology, we first evaluate the whole system to examine the overall effect brought by the interaction. Then, we perform ablation studies to investigate the effect of the interaction on each individual component. Specifically, we organize the evaluation around four research questions (RQ1-RQ4), addressed in the following subsections.
3 Datasets, source code and demos are at our project homepage: https://ear-convrec.github.io
Datasets.
We conduct experiments on two datasets: Yelp 4 for business recommendation and LastFM 5 for music artist recommendation. First, we follow the common setting of recommendation evaluation [17,30] that reduces the data sparsity by pruning the users that have less than 10 reviews. We split the user-item interactions in the ratio of 7:2:1 for training, validation and testing. Table 1 summarizes the statistics of the datasets. For the item attributes, we preprocess the original attributes of the datasets by merging synonyms and eliminating low frequency attributes, resulting in 590 attributes in Yelp and 33 attributes in LastFM. In real applications, asking about attributes in a large attribute space (e.g., on Yelp dataset) causes overly lengthy conversation. We therefore consider both the binary question setting (on LastFM) and enumerated question (on Yelp). To enable the enumerated question setting, we build a two-level taxonomy on the attributes of the Yelp data. For example, the parent attribute of {"wine", "beer", "whiskey"} is "alcohol". We create 29 such parent attributes on the top of the 590 attributes, such as "nightlife", "event planning & services", "dessert types" etc. In the enumerated question setting, the system choose one parent attribute to ask. This is to say, we change the size of the output space of the policy network to be 29 + 1 = 30. At the same time, it also displays all its child attributes and ask the user to choose from them (the user can reply with multiple child attributes). Note that choosing what kinds of questions to ask is an engineering design choice by participants, here we evaluate our model on both settings.
User Simulator For Multi-round Scenario.
Because conversational recommendation is a dynamic process, we follow [32,40] to create a user simulator to enable CRS training and evaluation. We simulate a conversation session for each observed interaction between users and items. Specifically, given an observed user-item interaction (u, v), we treat v as the ground truth item to seek and its attributes P v as the oracle set of attributes preferred by the user in this session. At the beginning, we randomly choose an attribute from the oracle set as the user's initialization of the session. Then the session runs in the loop of the "model acts - simulator responds" process introduced in Section 2. We set the maximum turn T of a session to 15 and standardize the recommendation list length |V t | as 10.
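A sketch of one simulated session under the protocol above; the `crs.next_action()` / `crs.observe()` interface and the returned dictionary are assumptions used to keep the example self-contained.

```python
import random

def run_session(crs, target_item, item_attrs, max_turn=15, top_k=10):
    """Simulate one session for an observed (user, item) interaction: the target
    item's attributes are the oracle preference set, a random oracle attribute
    starts the session, and the session ends on a hit or after `max_turn` turns."""
    oracle_attrs = set(item_attrs[target_item])
    crs.observe(init_attr=random.choice(sorted(oracle_attrs)))
    for turn in range(1, max_turn + 1):
        action = crs.next_action()
        if action.kind == "recommend":
            if target_item in action.items[:top_k]:
                return {"success": True, "turns": turn}
            crs.observe(rejected_items=action.items[:top_k])
        else:  # ask about a single attribute
            crs.observe(attr=action.attr, liked=action.attr in oracle_attrs)
    return {"success": False, "turns": max_turn}
```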
Training Details.
Following CRM [32], the training process is divided into offline and online stages. The offline training is to build the RC (i.e., FM) and initialize the policy network (PN) by letting them optimize performance with the offline dialogue history. Due to the scarcity of the conversational recommendation dialogue history, we follow CRM [32] to simulate dialogue history by building a rule-based CRS to interact with the simulator introduced in Section 4.1.2. Specifically, the strategy for determining which attribute to ask about is to choose the attribute with the maximum entropy. Each turn, the system chooses the recommendation action with probability 10/max(|V |, 10) where V is the current candidate set. The intuition is that the confidence of recommendation grows when the candidate size is smaller. We train the RC to give the groundtruth item and oracle attributes higher ranks given the attribute confirmed by users in dialogue histories, while training the policy to mimic the rule-based strategy on the history. Afterwards, we conduct online training, optimizing the PN by letting EAR interact with the user simulator through reinforcement learning.
We tuned all hyper-parameters on the validation set, and empirically set them as followed: The embedding size of FM is set as 64. We employ the multi-task training mechanism to optimize FM as described in Section 3.1.4, using SGD with a regularization strength of 0.001. The learning rate for the first task (item prediction) and second task (attribute prediction) is set to 0.01 and 0.001, respectively. The size of the two hidden layers in the PN is set as 64. When the pre-trained model is initialized, we use the REINFORCE algorithm to train the PN. The four rewards are set as: r suc =1, r ask =0.1, r quit =-0.3, and r pr ev =-0.1, and the learning rate α is set as 0.001.
The discount factor γ is set to be 0.7.
Baselines.
As our multi-round conversational recommendation scenario is new, there are few suitable baselines. We compare our overall performance with the following three: • Max Entropy. This method follows the rule we used to generate the conversation history in Section 4.1.2. Each turn it asks the attribute that has the maximum entropy among the candidate items. It is claimed in [12] that maximum entropy is the best strategy when language understanding is precise. It's worth noting that, in enumerated question setting, the entropy of an attribute is calculated as the sum of its child attributes in the taxonomy (similar approach for attribute preference calculation). • Abs Greedy [10]. This method recommends items in every turn without asking any question. Once the recommendation is rejected, it updates the model by treating the rejected items as negative examples. According to [10], this method achieves equivalent or better performance than popular bandit algorithms like Upper Confidence Bounds [1] and Thompson Sampling [4]. • CRM [32]. This is a state-of-the-art CRS. Similar to EAR, it integrates a CC and RC by feeding the belief tracker results to FM for item prediction, without considering much interactions between them. It is originally designed for single-round recommendation. To achieve a fair comparison, we adapt it to the multi-round setting by following the same offline and online training of EAR. It is worth noting that although there are other recent conversational recommendation methods [10,23,26,40], they are ill-suited for comparison due to their different task settings. For example, [40] focuses on document representation which is unnecessary in our case. It also lacks the conversation policy component to decide when to make what action. [23] focuses more on language understanding and generation. We summarize the settings of these methods in Table 6 and discuss differences in Section 5.
Evaluation Metrics.
We use the success rate (SR@t) [32] to measure the ratio of successful conversations, i.e., recommending the ground truth item by turn t. We also report the average turns (AT) needed to end the session. Larger SR denotes better recommendation and smaller AT denotes more efficient conversation. When studying the RC model in offline training, we use the AUC score, which is a surrogate of the BPR objective [30]. We conduct a one-sample paired t-test to judge statistical significance.
Performance Comparison (RQ1)
Table 2 shows the scores of the final success rate and the average turns. As can be clearly seen, our EAR model significantly outperforms other methods. This validates our hypothesis that considering extensive interactions between the CC and RC is an effective strategy to build a conversational recommender system. We also make the following observations:
Comparing with Abs Greedy, the three attribute-based methods (EAR, Max Entropy and CRM) have nearly zero success rate at the beginning of a conversation (t < 2). This is because these methods tend to ask questions at the very beginning. As the conversation goes, Abs Greedy (which only recommends items) gradually falls behind the attribute-based methods, demonstrating the efficacy of asking attributes in the conversational recommendation scenario. Note that Abs Greedy has much weaker performance on Yelp compared to LastFM. The key reason is the setting of Yelp is to ask enumerated question, and user's response with multiple finer-grained attributes sharply shrinks the candidate items.
CRM generally underperforms our EAR method. One of the key reasons is that its state vector cannot help the CC to learn a sophisticated strategy for asking and recommending, especially in a much larger action space, i.e., a larger number of attributes (nearly 30 in our experiments versus 5 in theirs [32]). This result suggests that, in a more complex multi-round scenario, the CC needs to make comprehensive use of information from both the CC (e.g., dialogue histories) and the RC (e.g., statistics like attribute preference estimation) when formulating a recommendation strategy.
Interestingly, Figure 2 indicates that in Yelp, EAR's gain over CRM enlarges in Turns 1-3, shrinks in Turns 4-6 and widens again afterwards. However, in LastFM it has a steadily increasing gain. This interesting phenomenon reveals that our EAR system can learn different strategies in different settings. In the Yelp dataset, the CRS asks enumerated questions where the user can choose finer-grained attributes, resulting a sharp reduction in the candidate space. The strategy that the EAR system learns is more aggressive: it attempts to ask attributes that can sharply shrink the candidate space and make decisive recommendation at the beginning turns when it feels confident. If this aggressive strategy fails, it changes to a more patient strategy to ask more questions without recommendations, causing less success in the medial turns (e.g., Turns 5-7). However, this strategy pays off in the long term, making recommendation more successful in the latter half of conversations (e.g., after Turn 7). At the same time, CRM is only able to follow the strategy of trying to ask more attributes at the beginning and making recommendations later. In the LastFM dataset, the setting is limited to binary attributes, leading to less efficiency in reducing candidate space. Both EAR and CRM adapt and ask more questions at the outset before making recommendations. However, as EAR incorporates better CC and RC to model better interaction, it significantly outperforms CRM.
Effectiveness of Estimation Designs (RQ2)
There are two key designs in the estimation stage that trains the recommendation model FM offline: the attribute-aware BPR that samples negatives with attribute matching considered, and the multi-task training that jointly optimizes item prediction and attribute prediction tasks. Table 3 shows offline AUC scores on the two tasks of three methods: FM, FM with attribute-aware BPR (FM+A), and FM+A with multi-task training (FM+A+MT).
As can be seen, the attribute-aware BPR significantly boosts the performance of item ranking, being highly beneficial for ranking the ground truth item high. Interestingly, it harms the performance of attribute prediction; e.g., on LastFM, FM+A has a much lower AUC score (0.629) than FM (0.727). The reason might be that the attribute-aware BPR loss guides the model to specifically fit item ranking in the candidate list. Without an even optimization enforced for the attribute prediction task, the latter may suffer from poor performance. This implies the necessity of explicitly optimizing the attribute prediction task. As expected, the best performance is achieved when the multi-task training is added on top of the attribute-aware BPR (i.e., FM+A+MT).
Ablation Studies on State Vector (RQ3)
What information helps in decision making? Let us examine the effects of the four forms of information included in the EAR state vector s (Equation 10), by ablating each information type from the feature vector (Table 4).
Comparing the performance drop of each ablation, we uncover differences that corroborate the intrinsic difference between the two conversational settings. The most important factor depends on the question type: s ent for LastFM (binary questions) and s len for Yelp (enumerated questions). The entropy information (s ent ) is crucial for LastFM, which is in line with the claim in [12] that maximum entropy is the best strategy when language understanding is precise. If we ablate s ent on LastFM, although it reaches 0.051 in SR@5, later SR greatly suffers, due to the system's over-aggressiveness in recommending items before obtaining sufficient relevant attribute evidence. As for the enumerated question setting (Yelp), the candidate list length (s len ) is most important, because the candidate item list shrinks more sharply and s len is helpful when deciding when to recommend.
Apart from entropy and candidate list length, the remaining two factors (attribute preference and conversation history) both contribute positively. Their impact is sensitive to datasets and metrics. For example, the attribute preference (s pre ) strongly affects SR@5 and SR@10 on Yelp, but does not show significant impacts for SR@15. This inconsistency provides evidence for the intrinsic difficulty of decision making in the conversational recommendation scenario, which however has yet to be extensively studied.
Investigation on Reflection (RQ4)
To understand the impact of the online update in the reflection stage, we start with an ablation study. Table 5 shows the variant of EAR that removes the online update. We find that the trends are not consistent across the two datasets: the updating strategy helps a lot on LastFM but has a very minor effect on the Yelp dataset. Questioning this interesting phenomenon, we examine the individual items on Yelp. We find that the updating does not always help ranking, especially when the offline model already ranks the ground truth item high (but not in the top 10). In this case, doing updates is highly likely to pull down the ranking position of the ground truth item. To gain statistical evidence for this observation, we term such updates bad updates, and show the percentage of bad updates with respect to the offline model's AUC on the users. As seen from Figure 3, there is a clear positive correlation between bad updates and AUC score. For example, ∼3.5% of the bad updates come from users with an offline AUC of 0.9.
This explains why the online update works well for LastFM but not for Yelp: our recommendation model performs better on Yelp than on LastFM (0.870 vs. 0.742 in AUC, as shown in Table 3). Items on Yelp are therefore more likely to have a high offline AUC, which makes bad updates more frequent. More such observations and analyses would help the community better understand the efficacy of online updates. Although bandit algorithms have been devoted to exploring this question [11,14,21,24,37], the issue remains largely unaddressed in the context of conversational recommender systems.
RELATED WORK
The offline static recommendation task is formulated as estimating the affinity score between a user and an item [17]. This is usually achieved by learning user preferences from historical user-item interactions such as clicks and purchases. Representative methods are Matrix Factorization (MF) [20] and the Factorization Machine (FM) [29]; Neural FM [16] and DeepFM [15] improve FM's representation ability with deep neural networks. The works in [3,13,18] utilize users' implicit feedback, commonly optimizing the BPR loss [30], while [7,8] exploit users' reviews and image information. However, such static recommendation methods suffer from the intrinsic limitation of not being able to capture users' dynamic preferences.
This intrinsic limitation motivates online recommendation, whose goal is to adapt recommendation results to the user's online actions [25]. Many works model it as a multi-armed bandit problem [34,36,37], strategically showing items to users to collect useful feedback, and [39] makes a preliminary effort to extend the bandit framework to querying attributes. Despite this remarkable progress, bandit-based solutions are still insufficient: 1) such methods focus on the exploration-exploitation trade-off in cold-start settings, whereas in warm-start scenarios capturing the user's dynamic preference is critical because preference drift is common; and 2) the mathematical formulation of the multi-armed bandit problem restricts these methods to recommending one item at a time, which limits their application, since we usually need to recommend a list of items.
Conversational recommender systems offer a new way to capture dynamic feedback, as they enable a system to interact with users in natural language. They also pose challenges to researchers, however, leading to a variety of settings and problem formulations [2, 9, 10, 23, 26-28, 31, 32, 38-40]. Table 6 summarizes the key aspects of these works. Generally, prior work considers conversational recommendation only under simplified settings. For example, [10,38] only allow the CRS to recommend items, without asking the user about their preferred attributes. The Q&R work [9] proposes to jointly optimize the attribute- and item-prediction tasks, but restricts the whole conversation to two turns: one for asking and one for recommending. CRM [32] extends the conversation to multiple turns but still follows the single-round setting. MMN [40] focuses on document representation, aiming to learn a better matching function between attributes and product descriptions in a conversational setting; unfortunately, it does not build a dialogue policy to decide when to ask or when to recommend. In contrast, real applications are more complex: the CRS needs to strategically ask about attributes and make recommendations over multiple rounds, achieving successful recommendations in the fewest turns. Among recent work, only [23] considers this multi-round scenario, but it focuses on language understanding and generation without explicitly modeling the conversational strategy.
CONCLUSION AND FUTURE WORK
In this work, we redefine the conversational recommendation task where the RC and CC closely support each other so as to achieve the goal of accurate recommendation in fewer turns. We decompose the task into three key problems, namely, what to ask, when to recommend, and how to adapt with user feedback. We then propose EAR -a new three-stage solution accounting for the three problems in a unified framework. For each stage, we design our method to carefully account for the interactions between RC and CC. Through extensive experiments on two datasets, we justify the effectiveness of EAR, providing additional insights into the conversational strategy and online updates.
Our work represents the first step towards exploring how the CC and RC can collaborate closely to provide quality recommendation service in this multi-round scenario. Naturally, there are thus a few loose ends for further investigation, especially with respect to incorporating user feedback. In the future, we will consider | 2019-12-15T03:40:32.585Z | 2020-01-20T00:00:00.000 | {
"year": 2020,
"sha1": "1abe8ff4bd56c93b3c6c560783e1949d104546e9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2002.09102",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5f8c94945cb5a7e46eeac0f6671353bd6584dafe",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
264478688 | pes2o/s2orc | v3-fos-license | Research on Traditional Philosophical Thoughts in Chinese Classical Music Creation
Chinese traditional culture is rooted in the soil of the Chinese nation, created by its ancestors and refined from generation to generation. Its background has a long history and profound cultural connotations. Chinese classical music carries the wisdom of this traditional culture and is the spiritual fruit of the integration of Chinese philosophical consciousness and artistic imagery. Chinese classical music culture is not an unchanging text fixed in a single historical context; it remains open to the inheritors of traditional culture. Traditional music culture is an inseparable part of the traditional cultural system: it embodies the beauty of traditional Chinese music and is an artistic crystallization that reflects the level of classical Chinese musical art. In the development and inheritance of Chinese classical music, its affinity with philosophical hermeneutics appears at multiple levels, reflected in the contextuality, intersubjectivity, and fusion of horizons of the inheritance process. Within education in traditional Chinese culture, Chinese classical music has value for cultivating students' temperament, making them inheritors of the emotional experience of Chinese culture. This article studies the traditional philosophical ideas in the creation of Chinese classical music.
Introduction
The strength of a country and the rejuvenation of a nation cannot be separated from the inheritance and promotion of traditional culture.The Chinese nation has created a brilliant traditional culture.As an artistic form of expression, music is rooted in the cultural soil of different regions, reflecting the local customs, cultural characteristics, spiritual pursuits, and values.Its impact on people also varies [1].With the continuous development of the social economy, the cultural field is gradually moving towards globalization.Under the continuous impact of various popular cultures around the world, Chinese classical culture is gradually being eliminated by the market in the current market.And Chinese classical music is the core of the sound of China, which gathers the essence of music of all ages in China, is the profound embodiment of the Chinese nation's feelings and ideals, and is the main artistic expression form of Chinese ancient philosophy.Based on the current new media market environment, Chinese classical culture is also actively exploring new forms of development [2].China is an ancient civilization with a history of thousands of years, and its cultural heritage is also extremely profound, especially during the Spring and Autumn Period and the Warring States period, when a hundred schools of thought were competing and ideological and cultural forms were colourful.Music, as a unique form of artistic expression, can serve as a regulatory valve for governing the hearts of people throughout China's thousands of years of history and culture, and can play an important social value [3].Since the pre-Qin period, the representative figure of Taoism, Zhuangzi, has proposed the idea of "heaven and earth coexist with me, and all things are the same as me".Compared to the Confucian color of human relations, Taoism values nature more and advocates harmonious coexistence with nature [4].The emergence of this philosophical thought has also had an extremely profound impact on traditional Chinese music and art.The music ideology of Taoism mainly comes from some literati who are opposed to Confucianism.Unlike the Confucian concept of etiquette and music, this ideology is less utilized by the ruling class and more influenced by traditional Chinese literati culture.Among them, there is also a more important layer of content, which is to express people's natural emotions through music.The so-called natural emotions mean breaking free from various constraints.Therefore, in order to reflect the aesthetic taste of unity between humans and nature in music, one must immerse oneself in the embrace of nature [5].In today's music aesthetic activities, these concepts of Zhuangzi also endow people with a different charm in understanding Chinese classical music.Learning Chinese classical music can promote the inheritance and development of traditional Chinese music culture, and help children and college students establish national self-esteem and confidence.When exposed to Western music, it is important for students' physical and mental health not to feel inferior.Chinese traditional culture contains many valuable musical and cultural elements, from classical music to ethnic music, from folk songs to ethnic instruments, which are valuable materials that can broaden students' musical cognition and help them establish diverse musical cognition [6].
Cultivate Temperament and Reconcile the Crowd
The Five Elements Theory is a very important concept in Chinese philosophy, which is fully expressed in Chinese classical music in an intuitive way, interacts with the theory of traditional Chinese medicine, and plays an active role in people's physical and mental recuperation and emotional cultivation.Choosing music appreciation based on the characteristics of the five tone modes of classical Chinese music and the relationship between the five elements and five organs can play a balanced and proportionate role in human health, promote the balanced development of the five organs, and thus regulate emotions.Chinese people have always liked to regard mountains and rivers as their emotional sustenance and destination for life.They freely seek warmth, harmony, and tranquility in the embrace of nature.Music, especially instrumental music, is not like poetry, which cannot entertain the mind through words, nor does it express many objective things such as natural scenery through color, lines, and other means, such as modeling, sculpture, and painting, to entertain the eyes.The mission of music, the sound art, is to influence people's emotions, entertain people and purify their minds [7].
Music not only has a positive effect on personal physical and mental health and cultivation, but also plays a great role in governing the country, harmonizing the world, and promoting social order and civilization in society, the country, and even the world.For example, in the piano piece "Fisherman's Song" (as shown in Figure 1), the music in this work less portrays the rhythm of water and the swaying of ships, but more creates a free and unrestrained emotion that is far from worldly troubles through some gentle and elegant melodies, thus triggering people to have a richer feeling.The reason why classical music in ancient China has been passed down to this day and has been enduring is precisely because these works themselves carry the essence of truth, goodness, and beauty, making people emotional and moved by them during the appreciation process.
Beneficial to Shaping the Physical and Mental Health of College Students
Chinese classical music is rooted in the land of China, carrying local cultural genes and inheriting traditional Chinese medicine theory.It integrates the five tones and five organs into the Five Elements theory, playing a positive role in regulating people's physical and mental health.Traditional culture is rooted in national history and culture, with a solid cultural heritage and historical logic.Music education belongs to specialized subject types and has a mature and stable teaching system.After students successfully establish a traditional cultural cognitive system through education infiltration, their personal cultural cultivation can be unprecedentedly improved, and their comprehensive quality level will inevitably be upgraded.Music education is a highly professional type of education with a mature and complete information dissemination and reception system.It can help students understand and accept certain targeted information through standardized and mature educational techniques.College students come from all corners of China, gathering together to pursue their personal ideals and pursue further education.At an age when their values are not yet mature, some of them are ambitious or at a loss, requiring education and guidance from schools and society in various aspects, as well as continuous accumulation and shaping of time and experience [8].
Extracting content that can be highly integrated with the music education system in traditional culture can effectively achieve the targeted goal of making students accept and understand traditional culture.Regularly exposing college students to Chinese classical music, through the action of the five tones on the five internal organs, accompanied by the adjustment of rhythm density and melody, to regulate corresponding emotions, dispel restlessness and anxiety, and eliminate the harm of negative emotions to the body.Appropriately appreciating Chinese classical music for college students can not only assist in treating and interrupting negative emotions that continue to harm themselves, but also cultivate an open-minded and nourishing mind after developing good appreciation habits, thereby cultivating temperament and transforming temperament, and becoming a inheritor with emotional experiences of Chinese culture [9].
Chinese Classical Music Contains
Unique Chinese Intellectual Wisdom
The Philosophical Universe View of "Unity of Heaven and Man"
The universe view of Zhuangzi believes that the "Dao" that follows the law of inaction and is infinite and free is beautiful.It believes that beauty lies in freedom and the unity of freedom and objective laws, which is its understanding of the essence of beauty.In Zhuangzi's view, "Dao" is nature, an objective law, and a combination and unity of truth, goodness, and beauty.Chinese musicians have always pursued an artistic conception of "unity of things and scenery" in their creative process.On the one hand, they immerse themselves in the embrace of nature, and on the other hand, they also embrace nature in their own hearts.In nature, there are landscapes, customs, and rural landscapes, and various arts draw inexhaustible materials from the natural world around us.The philosophical idea of "harmony between heaven and man" is also the highest value pursuit in the development process of Chinese classical music.It embodies a unique artistic expression in which humans integrate their thoughts with all things in the universe when contemplating and comprehending natural environments and social phenomena, and are able to experience the joy of it.Generally, the scenery is harmoniously unified with the expressed artistic conception, allowing the viewer to achieve the beauty of subjective and objective balance, and feel the beauty of the author's expression of profound philosophical consciousness through artistic techniques [10].
Chinese classical music, as a form of artistic expression, carries the wisdom of excellent traditional Chinese culture, blending philosophical consciousness with artistic images, and achieving the goal of artistic transformation.College students are an advanced generation of young people who master technological and cultural knowledge.Implementing Chinese classical music education for them is a powerful way to actively promote excellent traditional Chinese culture.In terms of specific musical works, influenced by Zhuangzi's concept of "natural music", traditional Chinese music has achieved perfect integration of emotions and scenes.When composers draw inspiration from natural scenery descriptions, they are always associated with the environment in which specific characters and related characters live.On the one hand, it is the space for character activities, and on the other hand, it can display the character's personality and thoughts.
Cultivate the Ideology of Harmonious Coexistence, Mutual Benefit and Win-Win Situation among College Students
Chinese classical music has undergone great changes in the Chinese nation, exerting its own energy in different eras, nourishing the Chinese land, infiltrating the hearts of generations of Chinese children, and working together with all excellent Chinese civilizations to make the Chinese family, which carries 56 ethnic groups, live together in a happy, orderly, and inclusive manner.The philosophical concept of "harmony between heaven and man" embodies Chinese wisdom and is an important manifestation of our spiritual wealth.We not only implement this spirit in the development concept and strategy of harmonious coexistence between humans and nature with "ecological civilization" as the core, guiding people's daily life and work; At the same time, this philosophical worldview reflects China's broad mindedness in its foreign exchanges -the proposition and practice of building a community with a shared future for mankind that is "peace, development, and win-win cooperation", which has become another important contribution of China's wisdom to the world today.Through continuous practice in appreciating Chinese classical music art, we enhance our ability to comprehend, integrate the spirit of traditional culture, and experience the aesthetic height of artistic conception.By using our own practical actions, we will carry forward the light of wisdom that Chinese traditional culture has contributed to humanity, integrate the philosophical worldview integrated into our bloodline with the construction practice of building a community with a shared future for mankind, and consciously play a role in the peaceful and sustainable development of humanity.This will enable the light of human wisdom to continue to be passed down and developed, shining brightly in various parts of the world.
Conclusion
Chinese classical music is a classic work that has gone through time, eliminated the dross, left its essence and shone in the long history of Chinese music.The contextual, inter subjective, generative and other characteristics of the inheritance paradigm of Chinese classical music highlight the implication of philosophical hermeneutics theory.A full understanding of traditional Chinese culture can re-examine the current situation of music teaching, think about new ways out, improve teacher-student relationships, and create an equal and harmonious teaching environment.Fully understanding Chinese classical music can promote harmonious coexistence between humans, humans and nature, and humans and society.Classical music greatly promotes traditional Chinese philosophical thought, enhancing its influence and expanding its scope.Scientific guidance on the creation and dissemination of classical music can better spread Chinese culture and carry forward the profound connotations of Chinese thought.Integrating traditional cultural elements into music education can not only utilize the characteristics of the education system to effectively inherit traditional culture, but also make music education more "localized", thereby effectively establishing students' local cultural confidence.Zhuangzi's music aesthetics have a significant position and profound influence in both the history of ancient music aesthetics and the aesthetics of today's music.College students are the mainstream group of future society and young people who master scientific knowledge.Through the practice and promotion of excellent virtues by generations of young students, the Chinese nation will inevitably form a positive and virtuous force, which is not only a noble collective moral realm worth the unremitting pursuit of the Chinese nation, but also a prerequisite and unstoppable spiritual force for the Chinese nation to stride forward on the path of becoming a strong country. | 2023-10-26T15:13:50.160Z | 2023-09-03T00:00:00.000 | {
"year": 2023,
"sha1": "6fc463ad709577b82d6f1a048e5a57ff482c3f4a",
"oa_license": "CCBY",
"oa_url": "https://drpress.org/ojs/index.php/hiaad/article/download/11561/11258",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ce3a23844a2ae63f86ce13dd05b0a4b850526edf",
"s2fieldsofstudy": [
"Art",
"History",
"Philosophy"
],
"extfieldsofstudy": []
} |
2157677 | pes2o/s2orc | v3-fos-license | Direct DPSK modulation of chirp-managed laser as cost-effective downstream transmitter for symmetrical 10-Gbit / s WDM PONs
This paper proposes the use of chirp-managed lasers (CML) as cost-effective downstream (DS) transmitters for next generation access networks. As the laser bandwidth is as high as 10 GHz, the CML could be directly modulated at 10 Gbit/s for downstream transmission in future wavelength division multiplexing passive optical networks (WDM PON). The laser adiabatic chirp, which is the main drawback limiting the transmission performance of directly modulated lasers, is now utilized to generate phase-shift keying (PSK) modulation format by direct modulation. At the user premise, the wavelength reuse technique based on reflective colorless upstream transmitter is applied. The optical network unit (ONU) reflects and orthogonally remodulates the received light with upstream data. A full-duplex transmission with symmetrical 10-Gbit/s bandwidth is demonstrated. Bit-error-rate measurement showed that optical power budgets of 29 dB at BER of 10 or of 36 dB at BER of 10 could be obtained with direct phase-shift-keying modulation of CML which proves that the proposed solution is a viable candidate for future WDM-PONs. ©2012 Optical Society of America OCIS codes: (140.5960) Semiconductor lasers; (140.3518) Lasers, frequency modulated; (060.0060) Fiber optics and optical communications; (060.2330) Fiber optics communications; (060.4510) Optical communications; (060.2630) Frequency modulation. References and links 1. FTTH Council Europe, “Reshuffling Europe’s Fibre to the Home leadership,” FTTH Conference 2012, Munich, 15 February 2012. 2. F. Ponzini, F. Cavaliere, G. Berrettini, M. Presi, E. Ciaramella, N. Calabretta, and A. Bogoni, “Evolution Scenario Toward WDM-PON,” J. Opt. Commun. Netw. 1(4), C25–C34 (2009). 3. N. Genay, P. Chanclou, T. Duong, N. Brochier, and E. Pincemin, “Bidirectional WDM/TDM-PON access networks integrating downstream 10 Gbit/s DPSK and upstream 2.5 Gbit/s OOK on the same wavelength,” in Proc. European Conference on Optical Communications (ECOC'06) (Cannes, France, 2006), Th.3.6.6. 4. J. Prat, V. Polo, C. Bock, C. Arellano, and J. Olmos, “Full-duplex single fiber transmission using FSK downstream and IM remote upstream modulations for fiber-to-the-home,” IEEE Photon. Technol. Lett. 17(3), 702–704 (2005). 5. R. Maher, L. Barry, and P. Anandarajah, “Cost efficient directly modulated DPSK downstream transmitter and colourless upstream remodulation for full-duplex WDM-PONs,” in Proc. Optical Fiber Communications (OFC'10) (San Diego CA, 2010), JThA29. 6. D. Mahgerefteh, Y. Matsui, X. Zheng, and K. McCallion, “Chirp Managed Laser and Applications,” IEEE J. Sel. Top. Quantum Electron. 16(5), 1126–1139 (2010). 7. W. Jia, J. Xu, Z. Liu, K.-H. Tse, and C.-K. Chan, “Generation and Transmission of 10-Gb/s RZ-DPSK Signals Using a Directly Modulated Chirp-Managed Laser,” IEEE Photon. Technol. Lett. 23(3), 173–175 (2011). 8. J. Franklin, L. Kil, D. Mooney, D. Mahgerefteh, X. Zheng, Y. Matsui, K. McCallion, F. Fan, and P. Tayebati, “Generation of RZ-DPSK using a Chirp-Managed Laser (CML),” in Proc. Optical Fiber Communication Conference (OFC'08) (San Diego, California, 2008), JWA67. 9. C.-K. Chan, W. Jia, and Z. Liu, “Advanced modulation format generation using high-speed directly modulated lasers for optical metro/access systems,” in Proc. Communications and Photonics Conference and Exhibition (ACP'11), 8309, 83090X. #177121 $15.00 USD Received 1 Oct 2012; revised 22 Nov 2012; accepted 23 Nov 2012; published 3 Dec 2012 (C) 2012 OSA 10 December 2012 / Vol. 20, No. 26 / OPTICS EXPRESS B470 10. Q. T. 
Le, K. Zogal, T. von Lerber, C. Gierl, A. Emsia, D. Briggmann, and F. Kueppers, “Direct DPSK Modulation of Chirp Managed Lasers for Symmetrical 10-Gbit/s WDM-PONs,” in Proc. European Conference on Optical Communications (ECOC'12) (Amsterdam, Netherland, 2012), P6.14. 11. R. A. Saunders, J. P. King, and I. Hardcastle, “Wideband chirp measurement technique for high bit rate sources,” Electron. Lett. 30(16), 1336–1338 (1994). 12. G. W. Lu, N. Deng, C.-K. Chan, and L.-K. Chen, “Use of downstream inverse-RZ signal for upstream data remodulation in a WDM passive optical network,” in Proc. Optical Fiber Communications (OFC'05) (Anaheim, CA, 2005), OFI8. 13. W. Lee, M. Y. Park, S. H. Cho, J. Lee, C. Kim, G. Jeong, and B. W. Kim, “Bidirectional WDM-PON based on gain-saturated reflective semiconductor optical amplifiers,” IEEE Photon. Technol. Lett. 17(11), 2460–2462 (2005). 14. C. Kazmierski, “Remote amplified modulators: Key components for 10 Gb/s WDM PON,” in Proc. European Conference on Optical Communications (ECOC'10) (Torino, Italy, 2010), Mo.1.F.1. 15. P. Chanclou, F. Bourgart, B. Landousies, S. Gosselin, B. Charbonnier, N. Genay, A. Pizzinat, F. Saliou, B. Le Guyader, B. Capelle, Q. T. Le, F. Raharimanitra, A. Gharba, L. Neto, J. Guillory, Q. Deniel, and S. Deniel, “Technical options for NGPON2 beyond 10G PON,” in Proc. European Conference on Optical Communications (ECOC'11) (Geneva, Switzerland, 2011), We.9.C.3. 16. A. Tervonen, M. Mattila, W. Weiershausen, T. von Lerber, E. Parsons, H. Chaouch, A. Marculescu, J. Leuthold, and F. Kueppers, “Dual output SOA based amplifier for PON extenders,” in Proc. European Conference on Optical Communications (ECOC'10) (Torino, Italy, 2010), P6.18. 17. Q. T. Le, F. Saliou, R. Xia, P. Chanclou, T. von Lerber, A. Tervonen, M. Mattila, W. Weiershausen, S. Honkanen, and F. Kueppers, “TDM/DWDM PON extender for 10 Gbit/s downstream transmission,” in Proc. European Conference on Optical Communications (ECOC'11) (Geneva, Switzerland, 2011), Th.12.C.2.
Introduction
Fiber to the Home (FTTH) or Building (FTTB) are access network methods that deliver the highest possible speed of Internet connection by using optical fiber that runs directly into the home, building or office.Deployment of FTTH/B access networks has already started in many countries.In Europe, the annual progress of optical access network deployment is 41%, with nearly 28 million homes passed at the end of 2011 [1].With the continuous increase in bandwidth demand generated by consumer and business applications (high-definition TV, cloud computing, online gaming, videoconferencing, etc.), and the required high-speed mobile backhaul for Long Term Evolution (LTE) networks, the need for a new, higher capacity access architecture becomes clear.
Wavelength-division-multiplexed passive optical network (WDM-PON) is an efficient choice for future fiber access networks, as it can provide point-to-point connectivity to multiple remote locations sharing the major part of the fiber plant. In spite of the numerous advantages associated with WDM-PON, the high cost attributed to wavelength-specific transmitters at the optical line termination (OLT) and within each optical network unit (ONU) has reduced the competitiveness of this technology [2]. Several network architectures have been proposed to achieve full-duplex transmission over a single fiber for implementation in WDM-PONs. Among them, phase [3] or frequency shift keyed [4] (PSK or FSK) modulation formats were proposed for the downstream (DS) transmission, which provide an almost continuous-wave signal for colorless upstream (US) re-modulation in the ONU. However, most of the solutions studied so far were limited by the use of high-cost and power-budget-consuming external modulators to generate differential PSK (DPSK) signals. Some directly modulated DPSK solutions have been proposed using high-bandwidth three-level driving signals [5].
Recently, the high performance transmission of directly modulated chirp-managed lasers (CML) have been demonstrated [6].Generation of return-to-zero DPSK (RZ DPSK) signal by direct modulation of CML has also been shown.However, the use of external modulator as pulse carver [7], or high-bandwidth three-level driving signals are necessary [8].In Ref [9], a summary of advanced modulation formats that could be generated by direct modulation of CML was presented.This reference has also proposed a WDM-PON configuration using 10-Gbit/s inverse RZ (IRZ) duobinary modulation for the downstream and remodulation at 2.5 Gbit/s for the upstream.However, with the use of high-extinction-ratio IRZ duobinary signal for the downstream, high crosstalk is present when remodulation technique is used for the upstream.Therefore, in this reference, the upstream data rate was only 2.5 Gbit/s and low pass filter was used to suppress the residual modulation at 10 Gbit/s.
Here, in this paper, we investigate the CML as a cost-effective DPSK downstream transmitter in a symmetrical 10-Gbit/s WDM-PON configuration [10].The CML-based transmitter configuration is similar to the IRZ duobinary transmitter proposed in Ref [9].However, instead of using the optical spectrum reshaper (OSR) integrated in the CML to increase the intensity extinction ratio, the OSR is now red-shifted in order to equalize the intensity fluctuations.As a consequence, pure DPSK signal with an intensity almost constant could be generated.This technique, to the best of our knowledge, has never been demonstrated before.Thanks to the low intensity fluctuations, the downstream DPSK signal can be re-modulated at the ONU for on-off keying (OOK) upstream transmission.A fullduplex transmission with symmetrical 10-Gbit/s bandwidth is demonstrated.
Principle of operation
Figure 1 shows the schematic of the CML 10-Gbit/s DPSK transmitter. A CML consists of a semiconductor laser (DFB, distributed feedback laser) followed by an optical filter (OSR, optical spectrum reshaper). The driving electrical signal is encoded in inverse return-to-zero (IRZ) format, via a commercial logic NAND gate, before being sent to the CML. Figure 2 illustrates the operation principle through the driving current/output intensity, frequency and phase characteristics of the output signal. Under direct modulation, a corresponding frequency shift is generated due to the adiabatic chirp of the laser. In this application, the bias current is set far above threshold, so the transient chirp can be neglected. As the optical phase is the time integral of the instantaneous frequency, a phase shift Δφ = 2π ∫₀ᵀ Δf(t) dt, where Δf(t) is the optical frequency deviation, is generated during a pulse of duration T. If the pulse shape is chosen correctly, a DPSK signal (a "1" is coded by a constant phase and a "0" by a phase shift of π) can be directly generated. In this case, the driving voltage is adjusted to induce an adiabatic chirp of Δf = 1/T. The phase shift generated by the inverse pulses is thus set by the time integral of the induced chirp over the pulse.
As a consequence, in order to obtain a phase shift of π with a 50%-duty-cycle IRZ signal, a maximum frequency shift of about 10 GHz is required. The resulting phase modulation is intrinsically differentially encoded, eliminating the need for a differential encoder. In order to achieve a pure DPSK signal in which the information is carried only by the optical phase, the optical spectrum reshaper integrated at the output of the laser is red-shifted to equalize the output intensity. To assess the frequency-modulation efficiency of the laser, its chirp characteristic was investigated. A commercial CML module (AZNA DM200-01) was used in this experiment; the input impedance and threshold current of the laser module were 50 Ohms and 25 mA, respectively. A wide-bandwidth time-resolved chirp measurement technique was used [11]. Figure 3 shows the laser adiabatic chirp versus driving voltage. The laser was biased at 80 mA, about three times above threshold, in order to generate small residual intensity fluctuations and proper chirp (no transient chirp was observed). The resulting adiabatic chirp is almost linearly proportional to the driving voltage. To achieve a frequency shift of 10 GHz, a peak-to-peak driving voltage of 2 V is required, corresponding to a frequency-modulation efficiency of 0.24 GHz/mA.
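As a quick numerical check, assuming a rectangular inverse pulse so that the integral reduces to Δφ = 2πΔf·τ, the snippet below evaluates the phase accumulated by a 10 GHz chirp over the 50 ps half-bit of a 10-Gbit/s, 50%-duty-cycle IRZ signal; the rectangular-pulse simplification and the variable names are assumptions made here for illustration.

```python
import math

bit_rate = 10e9            # 10 Gbit/s downstream
T = 1.0 / bit_rate         # bit period: 100 ps
tau = 0.5 * T              # 50%-duty-cycle inverse pulse: 50 ps
delta_f = 10e9             # adiabatic chirp (maximum frequency shift) in Hz

delta_phi = 2 * math.pi * delta_f * tau
print(delta_phi / math.pi)  # -> 1.0, i.e. a phase shift of exactly pi
```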
Network architecture
Figure 4 illustrates the schematic diagram of the considered WDM PON. Each downstream transmitter consists of a 10-Gbit/s DPSK directly modulated CML. Two identical arrayed waveguide gratings are used at the OLT and the remote node to combine and separate the downstream wavelength channels that carry signals from the OLT to the ONUs, as well as the upstream wavelength channels that carry signals from the ONUs to the OLT. The main advantage of the downstream DPSK format is that the laser power is preserved over the half-bit duration in which the signal phase is stable. As a consequence, the demodulated signal at the receiver is not degraded by residual intensity fluctuations. In addition, this slot of stable phase and stable power can be used for symmetrical-rate colorless upstream transmission based on a remodulation technique. Each ONU is therefore assigned one wavelength for both downstream and upstream. The upstream transmitters consist of reflective semiconductor optical amplifiers (RSOA) or reflective electro-absorption modulators (REAM). The colorless remodulation scheme at the ONU used in this experiment is shown in Fig. 5. The signal was split using a 3-dB coupler, with one arm fed directly into a downstream receiver (DS Rx) comprising a delay interferometer (DLI) and a single-ended receiver (APD, avalanche photodiode). The second arm of the 3-dB coupler in the ONU was fed into the colorless upstream remodulation transmitter (US Tx), which consists of an optical delay line (ODL), a semiconductor optical amplifier (SOA) and an electro-absorption modulator (EAM). The line coding for the upstream signal was a 50% RZ format at exactly the same bit rate as the downstream signal. If the RZ modulation is interleaved by half a bit with respect to the incoming IRZ pattern, the RZ modulation is performed over a stable, high-power slot [12]. This eliminates the need to erase the downstream data from the received optical carrier, so the constraint on the downstream signal power required to saturate the SOA [13] in the ONU can be relaxed. The synchronization between upstream and downstream is in principle not an issue, as the downstream clock is recovered at the downstream receiver. The synchronization delay is fixed for each ONU and corresponds to the travelling time between OLT and ONU modulo the bit duration. The adjustment could be performed either in the optical or the electrical domain; in this work it was done manually with the optical delay line (Fig. 6). In addition, as the DPSK modulation corresponding to the downstream data still remains in the upstream signal, some adjacent upstream RZ pulses carry a phase shift of π. The inter-symbol interference due to chromatic dispersion can thus be partly reduced thanks to destructive interference. In order to reduce the cost and complexity of the ONU, the circulator, SOA and EAM would ideally be replaced by a reflective EAM-SOA [14].
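To illustrate the synchronization requirement described above, the short sketch below computes the residual delay that the optical delay line must compensate for an assumed OLT-ONU fiber length; the fiber length and group index are example values, not measurements from the paper.

```python
bit_rate = 10e9                    # 10 Gbit/s, so the bit period is 100 ps
T_bit = 1.0 / bit_rate

c = 3.0e8                          # vacuum speed of light, m/s
n_group = 1.468                    # assumed group index of standard single-mode fiber
fiber_length = 10e3                # assumed 10 km OLT-ONU distance

travel_time = fiber_length * n_group / c
residual_delay = travel_time % T_bit   # the part the ODL has to trim out
print(f"travel time = {travel_time*1e6:.3f} us, residual = {residual_delay*1e12:.1f} ps")
```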
Experimental results
The experimental setup is shown in Fig. 7; only one channel was considered in this work. For simplicity, different fibers were used for upstream and downstream transmission, and two attenuators (ATT) were used to emulate the upstream and downstream optical budgets. The downstream data was a 10-Gbit/s PRBS of length 2³¹−1 generated by a pulse pattern generator. The required frequency shift of 10 GHz was obtained by applying a peak-to-peak driving voltage of 2 V. The laser was biased at 80 mA and the output power was 4 dBm. The central wavelength of the signal was 1536.88 nm. The integrated OSR is a Fabry-Pérot filter with a 3-dB bandwidth of 0.06 nm. Figure 9 shows the measured bit error ratio (BER) performance of the DPSK signal generated by direct modulation of the CML and by an external dual-drive Mach-Zehnder modulator (MZM). The back-to-back receiver power sensitivities at a BER of 10⁻⁹ for the CML and the MZM are −26 dBm and −27 dBm, respectively. The 1-dB penalty is mainly due to the residual intensity noise at the output of the directly modulated laser.
Figure 10 shows the BER of the 10-Gbit/s DPSK downstream signal in the back-to-back (B2B) case and after 10 km and 25 km of transmission in standard single-mode fiber (SMF), respectively. After propagating over 25 km, the power penalty compared to the B2B scenario was only 0.5 dB, which proves once again the high tolerance of the directly modulated CML to chromatic dispersion. Error-free (BER < 10⁻⁹) transmission was achieved for the considered scenarios at a receiver power of −25 dBm, which corresponds to a downstream optical budget of 29 dB. If we consider a BER of 10⁻³, which is the limit of the forward error correction code (FEC), an optical budget of 36 dB was obtained. This optical budget covers the losses of the WDM MUX/DEMUX (2 × 4 dB), the downstream/upstream separators (4 dB), and the DLI (5 dB). An extra budget of 19 dB could be used for fiber transmission and integration with formerly deployed PON systems, as a smooth migration of infrastructure is one of the major requirements from operators for WDM PONs [15]. This extra optical budget could also be used to employ power splitters to share the time-division-multiplexed 10-Gbit/s bandwidth among a number of users. For the upstream, an RZ encoder was used with a PRBS of length 2³¹−1. The SOA has a saturated power of 10 dBm at 1550 nm with a 200-mA injection current; its optical input power was set at −10 dBm. The EAM was modulated with a 3-V peak-to-peak driving signal which provided an output extinction ratio of 9 dB; its total loss was 15 dB.
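The budget bookkeeping in this paragraph can be verified with a few lines of arithmetic. The snippet below simply reproduces the numbers quoted above (4 dBm launch power, −25 dBm receiver power at BER 10⁻⁹, the 36 dB FEC-limit budget, and the listed component losses); the −32 dBm FEC-limit sensitivity is derived from those figures rather than quoted in the paper.

```python
launch_power_dbm = 4.0            # CML output power
rx_sens_error_free_dbm = -25.0    # receiver power for error-free (BER < 1e-9) operation
budget_fec_db = 36.0              # quoted budget at the FEC limit (BER = 1e-3)

budget_error_free_db = launch_power_dbm - rx_sens_error_free_dbm   # 29 dB
rx_sens_fec_dbm = launch_power_dbm - budget_fec_db                  # implied -32 dBm

component_losses_db = 2 * 4.0 + 4.0 + 5.0   # MUX/DEMUX (2x4 dB), US/DS separators, DLI
extra_margin_db = budget_fec_db - component_losses_db               # 19 dB, as stated
print(budget_error_free_db, rx_sens_fec_dbm, component_losses_db, extra_margin_db)
```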
Figure 11 shows the BER analysis of the 10-Gbit/s RZ upstream signal. The square-marked curve corresponds to the upstream back-to-back case without downlink DPSK modulation; the rhombus-marked curve is for the case with downlink DPSK modulation and with the remodulation delay correctly adjusted. A power penalty of only 1 dB is observed due to the downlink residual intensity modulation. After transmission over 10 km (circles) and 25 km (triangles), the power penalties relative to the B2B case are 2 dB and 2.5 dB, respectively. Error-free transmission was achieved at an input power of −24 dBm, which however corresponds to an upstream optical budget of only 19 dB; with the use of FEC, the achieved upstream budget is 25 dB. Better performance could be obtained by the use of a reflective EAM-SOA with higher effective gain and output power.
Conclusion
We have investigated for the first time the performance of the chirp-managed laser as a cost-effective downstream transmitter for symmetrical 10-Gbit/s WDM-PONs. By using an inverse RZ driving signal, an optical DPSK signal was intrinsically obtained at the laser output. The need for a high-bandwidth driving signal, a differential encoder, or a high-cost and power-budget-consuming external modulator was thereby eliminated. The integrated optical filter equalized the intensity levels, so the residual intensity fluctuations were reduced. Excellent system performance was demonstrated for the phase-encoded downstream signal. In the back-to-back measurement, a power penalty of only 1 dB compared to the Mach-Zehnder-modulator-based DPSK signal was obtained, and after propagation over 10 km and 25 km of single-mode fiber the power penalty did not exceed 0.5 dB. The achieved downstream optical budget, 29 dB at a BER of 10⁻⁹ or 36 dB at a BER of 10⁻³, proves that the proposed solution could be a strong candidate for future WDM PONs. This power budget could be drastically increased by the use of a DPSK-compatible PON extender called a saturated collision amplifier [16,17] for high-capacity TDM/WDM PON systems. At the ONU, a reflective colorless upstream transmitter scheme based on an EAM-SOA combination was used. Symmetrical-rate transmission was obtained by synchronized remodulation of the high-power slot of the downstream signal with the RZ modulation format. This eliminates the process of erasing the downstream data from the received optical carrier, so the constraint on the downstream signal power required to saturate the SOA in the ONU could be relaxed. A power penalty of 2.5 dB was obtained for 10-Gbit/s upstream data transmission after 25 km. To further reduce the complexity of the upstream transmitter, an integrated reflective EAM-SOA combination could be employed.
Figure 8(a) shows the eye pattern of the CML output during two bit durations; the IRZ intensity modulation is equalized by the integrated OSR, resulting in an almost constant intensity for upstream remodulation. The parts with the lowest residual intensity fluctuation are highlighted; synchronous RZ remodulation can be applied there. The demodulated DPSK signals at the constructive port (b) and at the destructive port (c) are also shown, with similar performance. Single-ended detection was used in this work. | 2018-04-03T03:18:47.645Z | 2012-12-10T00:00:00.000 | {
"year": 2012,
"sha1": "3943d8d2e1f8cb14596ac169168e9449ea94e15f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.20.00b470",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8d31571fb78efb56597c5aa487a562be9bde79d4",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
118899416 | pes2o/s2orc | v3-fos-license | Macroscopic quantum effects in nanomechanical systems
We investigate quantum effects in the mechanical properties of elastic beams on the nanoscale. Transverse quantum and thermal fluctuations and the nonlinear excitation energies are calculated for beams compressed in longitudinal direction. Near the Euler instability, the system is described by a one dimensional Ginzburg-Landau model where the order parameter is the amplitude of the buckling mode. We show that in single wall carbon nanotubes with lengths of order or smaller than 100 nm zero point fluctuations are accessible and discuss the possibility of observing macroscopic quantum coherence in nanobeams near the critical strain.
Introduction. -The progress in miniaturization of electromechanical devices towards the nanometer scale (NEMS) is beginning to reach the limit, where quantum effects play an important role [1,2,3]. For example, in nanoscale beams phonons may propagate ballistically, leading to a quantized thermal conductance [4]. Moreover a sizeable contribution to the forces between plates and beams which are separated by less than one micron is the Casimir force between neutral objects due to the modification of the electromagnetic vacuum [5,6]. The combination of electrical and mechanical properties may be studied via quantized transverse deflection due to charge quantization of charged, suspended beams in an electric field [7]. Similarly the standard Coulomb-blockade in small metallic islands or in semiconducting quantum dots may be used to mechanically transfer single electrons with a nanomechanical oscillator [8,9]. Regarding possible applications of nanomechanical sensors, Si-based resonators in the radio-frequency regime were recently fabricated and manipulated [10]. In the present work we focus on quantum effects in mechanical resonators on the nanometer scale, in particular in single wall carbon nanotubes (SWNT). Due to their small masses and remarkable elastic properties down to nanometer scale, carbon nanotubes are ideally suited to study effects like phonon quantization [11], the generation of non-classical states of mechanical motion [12] or macroscopic quantum tunnelling out of a metastable configuration [13]. On the classical level, both the thermal Brownian motion of single nanotubes clamped on one side [14] and the discrete eigenmodes of charged multiwall nanotubes excited by an ac-voltage [15] have been detected experimentally. More recently, the thermal vibrations of doubly clamped SWNT's down to lengths of around 0.5µm have been observed with a scanning electron microscope [16]. In all of these cases it turns out that the measured transverse vibrations of nanotubes agree reasonably well with the predictions of an elastic continuum model. Its applicability even on the nm scale is also supported by molecular dynamics simulations which show that SWNT's down to lengths of around 10nm are well described by an effective elastic continuum, responding in a reversible manner up to large deformations [17]. In the following, we will therefore use the standard theory of an elastic continuum [18] for carbon nanotubes which are clamped between two fixed end points. We calculate both thermal and quantum fluctuations of the nanotube under longitudinal compression, including properly the nonlinearity in the bending energy. It is shown that in SWNT's with a length below 100nm the crossover from thermal to quantum zero point fluctuations is reached at accessible temperatures of around 30mK. We also discuss the possibility to realize coherent superpositions of macroscopically distinct states by observing the avoided level crossing near the degenerate situation above the critical force of the well known Euler-buckling instability.
The Model. -Our model system is a freely suspended SWNT of length L and diameter D which is fixed at both ends, allowing only transverse vibrations. In addition we consider a mechanical force F which acts on the beam in longitudinal direction (F > 0 for compression). In a classical description the beam is then completely described by the transverse deflection φ(s) parametrized by the arclength s ∈ [0, L]. We assume the beam to be incompressible in longitudinal direction and only keep a single transverse degree of freedom for simplicity (see below). For arbitrary strong deflections φ(s) the nonlinear Lagrangian of the system is then [19] Here σ = m/L is the mass density, while the bending rigidity µ = EI is the product of the elasticity modulus E and the moment of inertia I = πD 3 d/8, with d an effective wall thickness. For small deformations |φ ′ (s)| ≪ 1 the Lagrangian is quadratic, leading to the standard linear equation of motion σφ + µφ ′′′′ + F φ ′′ = 0 (2) for the transverse vibrations of an elastic beam under compression. The corresponding eigenmodes φ n and eigenfrequencies ω n depend on the boundary conditions. We assume that the experimental realization [16] is well described by clamped ends at both sides, φ(0) = φ(L) = 0 and φ ′ (0) = φ ′ (L) = 0. The exact φ n 's are then given by a superposition of trigonometric and hyperbolic functions, and the ω n 's by solving a transcendental equation [18]. In the following, for some of the analytic expressions, we will use boundary conditions without bending moments at the ends of the beam, φ ′′ (0) = φ ′′ (L) = 0. This leaves the essential physics unchanged and permits one to write down simple expressions for the eigenfunctions in the normal mode expansion and its eigenfrequencies Clearly the modes soften with increasing compression F , up to a critical force F c = µ(π/L) 2 where the fundamental frequency ω n=1 (F ) vanishes. Then the system reaches a bifurcation point, the well known Euler instability, beyond which φ 1 ∼ sin (πs/L) becomes the new stable solution of the static problem. For clamped boundary conditions with finite bending moments at s = 0, L the critical force is four times larger, and the shape of the stable solution in the static problem for F > F c has the form sin 2 (πs/L). Near criticality, the frequencies of higher modes n = 2, 3, . . . remain finite. The dynamics at low frequencies is thus determined by the fundamental mode alone. The nonlinear field theory eq.( 1) may be quantized in the standard manner by requiring canonical commutation relations [φ(s, t),Π(s ′ , t)] = ihδ(s − s ′ ) between the field φ and its canonically conjugate momentum Π = σφ at equal times. In the linear regime, the problem is reduced to an infinite number of harmonic oscillators. Introducing oscillator lengths l 2 k =h/(m k ω k ) with k = nπ/L and n = 1, 2, . . . , the amplitudes are expressed by the standard creation and annihilation operators a † k and a k . The effective masses m k arising in l k turn out to be m eff ≃ 3/8 of the beam mass for the fundamental mode but are generally mode dependent for clamped boundary conditions. Thermal vibrations. -In the linearized theory, the mean square displacement of the beam is trivially calculated from the normal mode expansion eq.( 3). Assuming a thermal occupation of the discrete phonon modes one obtains a maximum value at the center of the beam, which for unclamped boundary conditions reads Here the scale is set by the oscillator length . 
The parameter δ = (F c − F )/F c determines the dimensionless distance from the critical compression force. At temperatures larger than T 0 one obtains the usual equipartition theorem result, where σ 2 ∼ T /ω 2 0 increases linearly with T as observed on sufficiently long nanotubes [14,16]. For low temperatures the mean square displacement remains finite due to zero point fluctuations. As shown in Fig. 1 the crossover to this regime occurs at T ⋆ ≃ 0.4T 0 in the absence of an external force δ = 1, giving accessible temperatures of around 30mK for typical SWNT's (see Table I). Unfortunately, the associated transverse deflection amplitude l 0 is only of order 10 −2 nm and thus beyond accessibility of standard displacement detection techniques. There are a number of ways, however, to measure such tiny amplitudes, for instance by capacitively coupling the beam to the gate of a single electron transistor [20] or using its electrostatic interaction with a free-standing quantum dot [21]. Of course the fluctuations are strongly enhanced near the critical force F c , where only the fundamental mode n = 1 contributes and thus σ 2 (T = 0) increases like l 2 0 / √ δ. In this case, the problem of measuring the thermal to quantum crossover is somehow inverted, since now the decreasing crossover temperature T ⋆ (δ) ∼ T 0 √ δ poses a limiting factor for an observation. Moreover the divergence in eq.(6) for δ → 0 marks the breakdown of the linearized theory. To explore the quantitative enhancement of the fluctuations and the change of the relevant length and energy scales near criticality, one has to include the nonlinear terms in the bending energy eq.( 1).
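The thermal-to-quantum crossover discussed here can be illustrated for a single harmonic mode, whose mean-square amplitude scales as (n_B(ω,T) + 1/2), with n_B the Bose occupation. The snippet below is only a single-mode illustration (the full result in the paper sums over all beam modes), and the 1.5 GHz fundamental frequency is an assumed order-of-magnitude value for a 0.1 µm tube rather than a number quoted in the text; T0 is defined inside the snippet as ħω0/kB.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K

def msd_factor(f_hz, T_kelvin):
    """Mean-square displacement of one mode relative to its T=0 (zero-point) value:
    coth(hbar*w / (2 kB T)) = 2 n_B + 1."""
    w = 2 * np.pi * f_hz
    x = hbar * w / (2 * kB * T_kelvin)
    return 1.0 / np.tanh(x)

f0 = 1.5e9                       # assumed fundamental frequency (order GHz for L ~ 0.1 um)
T0 = hbar * 2 * np.pi * f0 / kB  # temperature scale hbar*w0 / kB (~70 mK here)
for T in [0.1 * T0, 0.4 * T0, 1.0 * T0, 3.0 * T0]:
    print(f"T = {T*1e3:.1f} mK  ->  sigma^2 / sigma^2(T=0) = {msd_factor(f0, T):.2f}")
```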
Buckling instability. -In order to describe the behaviour near the buckling instability, we insert the Fourier expansion eq.( 3) into the corresponding nonlinear Hamiltonian. Keeping only the fundamental mode, since all higher modes have no influence near criticality due to their nonvanishing frequencies, the interacting field theory is reduced to a one particle problem in terms of the coordinate A 1 , and its canonically conjugate momentum P ≡ −ih∂/∂A 1 . The force term generates a negative contribution to the quartic term in A 1 , driving the system unstable. The nonlinearity in the curvature term, however, over-compensates this and guarantees stability even for fixed length of the beam [19]. For clamped boundary conditions one can use the approximate shape sin 2 (πs/L) which becomes exact near δ → 0. One ends up with a quantum mechanical one particle Hamiltonian with an anharmonic oscillator potential and an anharmonic coefficient b 4 = (π/L) 4 F c L. A similar effective description of quantum effects near the buckling instability has been derived in [22]. The nonlinearity there, however, arises from longitudinal stretching while we keep the length of the nanotube fixed. It is now convenient to define a dimensionless coordinate y by A 1 =ly, wherel = l 0 (2π 2 ) −1/6 (L/l 0 ) 1/3 is the characteristic magnitude of the deflection, where the quartic term due to the nonlinear bending energy is of the same order than the kinetic energy. The Hamiltonian is thus transformed to a dimensionless form withω =h/(m effl 2 ) as the characteristic frequency scale near the critical compression force F c . It differs from the fundamental frequency ω 0 of the classical transverse vibrations by a factorω which also determines the sizeδ of the critical regime. For SWNTs with length L = 0.1µm, δ 1/2 is of order or smaller than 10 −2 (see Table I). The potential energy in eq.( 8) exhibits the standard Landau bifurcation from a single to a double well as the external force is increased through its critical value F c . Indeed our zero dimensional quantum problem is equivalent to a one dimensional classical Ginzburg Landau theory [23]. Let us consider first the mean square displacement at the center of the beam. In the harmonic approximation this diverges as F approaches the critical value from below. For F much larger than F c it is simply determined by the stable minimum of the effective Landau energy at y min = ± |δ|/δ 1/2 giving σ 2 =l 2 |δ|/δ. As shown in Fig. 2, the exact result smoothly interpolates between those two limits giving a finite value σ(F c ) = 0.68l at F c , which is of order 0.1nm for typical SWNT's (see Table I). A similar behaviour is found for the lowest excitation frequency of the beam. In the harmonic approximation it vanishes like ω 1 (F ) =ω · δ/δ 1/2 . Above the critical force the lowest excitation is the small oscillation in one of the degenerate minima of the anharmonic oscillator. This is true, however, only in a classical description. Quantum mechanically, the lowest excitation is the exponentially small tunnelsplitting ∆ which lifts the degeneracy between the two states localized in the left or right well of the effective potential. Again, the exact numerical result for ω = (E 1 − E 0 )/h starts to deviate from the harmonic expression at around δ ≃ 5δ and approaches a finite excitation frequency ω = 1.1ω at F c (see Fig. 3). For δ < −3δ it vanishes exponentially in good agreement with the WKB result eq.( 10) for the tunnelsplitting ∆. 
It is remarkable that the excitation frequency precisely at F c , which is of order 2π · 0.01GHz for the parameters of Table I, is no longer related to the characteristic frequency ω 0 of the classical problem but scales likeω = ω 0 (4hω 0 /F c L) 1/3 , remaining finite only through a genuine quantum effect. Unfortunately, the smallness of the sizeδ ≈ 10 −4 of the critical regime requires fine tuning the compression force F very close (δ ≃δ) to its critical value in order to see deviations from classical behaviour near the buckling instability.
Macroscopic Quantum Coherence (MQC). -In the regime beyond the critical buckling force, the state of lowest energy corresponds to a stationary finite deflection amplitude y_min = ±(|δ|/δ̃)^(1/2), which is lower in energy by ħω̃(δ/δ̃)²/4 than the configuration with no deflection.
The direction in which the buckling occurs is arbitrary, however, in a realistic setup like that in ref. [16] the boundary conditions in the buckled state are likely to break the perfect rotation symmetry assumed in [24]. In this situation, the transverse deflection is described by a single degree of freedom with only two degenerate states. Quantum mechanically, these states are split into a narrow doublet with energy separationh∆ due to tunnelling. Sufficiently far above Table I -Characteristic parameters for quantum effects in SWNT's of length L = 0.1µm and diameter D = 1.4nm. We assume a Young's modulus E = 1TPa and an effective wall thickness d = 5 · 10 −2 nm. These parameters are consistent with recent measurements [16]. the critical force its magnitude may be determined from a WKB-calculation in the anharmonic oscillator potential, giving with A = 3.8 and B = 0.94 [25]. Note that the validity of the WKB approximation is limited to δ/δ < −2, above which the zero point energy of a harmonic approximation in one of the energy minima reaches the barrier height. In practice the perfectly degenerate situation is hardly achieved, introducing some bias energy ε which singles out a preferred ground state in which the beam is bent either to the left or right. We have thus an effective two state system with HamiltonianĤ and eigenenergies separated by (h∆) 2 + ε 2 , (σ x andσ z are the Pauli spin matrices). In the absence of any asymmetry ε its eigenstates are coherent superpositions ∼ |L ± |R of theσ z eigenstates |L and |R , in which the nanotube is bent towards the left or right. The two states |L or |R are clearly macroscopically distinct [26]. Similar to experiments on flux qubits in SQUID rings [27,28], the existence of linear superpositions of these states may indirectly be verified by observing the avoided level crossing at ε = 0. Such a macroscopic quantum coherence experiment with SWNT's requires that the small level splitting ∆ can be detected against noise and damping in a mechanical resonance experiment and moreover, that the asymmetry ε can be tuned through zero from any accidental nonzero value by external means. As shown in [15], spectroscopy of the transverse vibrations of nanotubes is in principle possible by applying dc-plus ac-voltages on a charged nanotube. Moreover with a capacitive coupling the bias energy ε may be changed via an appropriate electrostatic gate potential. Due to the still large mass involved, the tunnel splitting is rather small (around ∆ = 2π · 1MHz for δ/δ = −3), and thus coherent superpositions with an accessible value of ∆ require nanotubes close to the buckling instability. Regarding the influence of damping effects, it is known [29] that the dynamics of a two-level system subject to an ohmic dissipation mechanism is determined by the size of the parameter α = ηq 2 0 /(2πh). Here q 0 = 2y minl is the distance between the two minima and η the phenomenological damping parameter which appears in the equation of motion Coherence in the two level system is present only for α < 1 2 at T = 0 and k B T <h∆/α for finite temperature and very small α. This requires that the quality factor of the SWNT in the uncompressed case (which is related to η by Q = m eff ω 0 /η) obeys Q > 4|δ|/(πδ 3/2 ). For the above value ofδ, this leads to Q > 220 in the relevant regime |δ| ≃δ. This condition does not seem too stringent for SWNTs, note that a quality factor of Q = 500 has been reached for Si-based resonators in the GHz regime [20].
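The avoided level crossing described above follows directly from diagonalizing the two-level Hamiltonian with tunnel splitting Δ and bias ε. The sketch below, using an illustrative Δ/2π = 1 MHz taken from the value quoted in the text, simply shows that the gap between the two eigenenergies never closes as ε is swept through zero; the units and overall sign convention are assumptions.

```python
import numpy as np

h = 6.62607015e-34               # Planck constant, J s
hbar = h / (2 * np.pi)
Delta = 2 * np.pi * 1e6          # tunnel splitting as an angular frequency, ~2*pi*1 MHz

sigma_x = np.array([[0, 1], [1, 0]], dtype=float)
sigma_z = np.array([[1, 0], [0, -1]], dtype=float)

def levels(eps):
    # H = (hbar*Delta/2) sigma_x + (eps/2) sigma_z;
    # eigenvalues are +/- sqrt((hbar*Delta)^2 + eps^2) / 2, so the gap never closes.
    H = 0.5 * hbar * Delta * sigma_x + 0.5 * eps * sigma_z
    return np.linalg.eigvalsh(H)

for eps in np.linspace(-3, 3, 7) * hbar * Delta:
    lo, hi = levels(eps)
    print(f"eps/(hbar*Delta) = {eps/(hbar*Delta):+.1f}  gap = {(hi - lo)/(hbar*Delta):.3f} * hbar*Delta")
```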
Conclusions. - We have discussed quantum effects in the mechanical properties of single wall carbon nanotubes, in particular zero-point fluctuations in the transverse vibrations and the possibility of seeing the analog of MQC in nanobeams below the Euler buckling instability. While thermal vibrations of clamped SWNTs down to lengths L = 0.5 µm have indeed been observed very recently [16], it remains a considerable challenge to measure the tiny zero-point vibrations of order 0.1 nm predicted for SWNTs of length L = 0.1 µm near the buckling instability. With the sensitivity attained very recently with Si-based resonators [20], however, reaching this goal in the near future seems quite realistic. As regards the possibility of seeing the analogue of MQC near or below the Euler buckling instability, this requires tuning these systems rather closely below the instability point and performing spectroscopy with both dc- and ac-driving. Provided the methods used for multiwall nanotubes with lengths of several µm [15] can be scaled down to clamped single wall nanotubes, quantum mechanics in its literal meaning would finally be of relevance in truly mechanical devices. | 2019-04-14T02:03:12.211Z | 2003-08-11T00:00:00.000 | {
"year": 2003,
"sha1": "7ba9ac19caa1249368badce15f45e26801dd76e3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0308205",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a34bcfcfc08778a0733b86d779bbd31760720579",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
14204689 | pes2o/s2orc | v3-fos-license | “I Just Can’t Do It Anymore” Patterns of Physical Activity and Cardiac Rehabilitation in African Americans with Heart Failure: A Mixed Method Study
Physical activity and cardiac rehabilitation (CR) are components of heart failure (HF) self-care. The aims of this study were to describe patterns of physical activity in African Americans (n = 30) with HF and to explore experience in CR. This was a mixed method, concurrent nested, predominantly qualitative study. Qualitative data were collected via interviews exploring typical physical activity, and CR experience. It was augmented by quantitative data measuring HF severity, self-care, functional capacity and depressive symptoms. Mean age was 60 ± 15 years; 65% were New York Heart Association (NYHA) class III HF. Forty-three percent reported that they did less than 30 min of exercise in the past week; 23% were told “nothing” about exercise by their provider, and 53% were told to do “minimal exercise”. A measure of functional capacity indicated the ability to do moderate activity. Two related themes stemmed from the narratives describing current physical activity: “given up” and “still trying”. Six participants recalled referral to CR with one person participating. There was high concordance between qualitative and quantitative data, and evidence that depression may play a role in low levels of physical activity. Findings highlight the need for strategies to increase adherence to current physical activity guidelines in this older minority population with HF.
Introduction
Heart Failure (HF) affects over 5.7 million adults in the United States, with Black men and women having the highest prevalence [1]. African Americans have the highest risk of developing HF and the highest proportion of HF that is not preceded by myocardial infarction [1]. This risk differential reflects disparities in the prevalence of hypertension and diabetes, as well as the effects of disparate socioeconomic status on access to medical care. Less than 25% of African Americans with HF receive treatment according to recent guidelines, and they have a higher fatality rate than Whites [2]. Additionally, in 2012, there were 870,000 new cases of HF in adults >55 years old [1].
Exercise is recommended as one of many self-care behaviors for those with HF, but adherence rates are low [3,4]. In a study of 139 patients with HF, over half stated they engaged in no regular physical activity [5]. In the United States, very few non-Hispanic Black adults (17.3%) meet physical activity guidelines for aerobic and muscle-strengthening activity [6], which is unfortunate given that insufficient physical activity accounts for almost 12% of the risk of myocardial infarction, even after accounting for other cardiovascular risk factors [7]. Racial disparities in cardiac rehabilitation (CR) participation have been previously reported. In a review of Medicare beneficiaries after myocardial infarction or coronary artery bypass graft surgery, Whites (19.6%) were more likely than non-Whites (7.8%) to participate in a CR program [8]. Additionally, patients with low socioeconomic status report greater barriers to CR, including lack of referral, and lower enrollment and participation rates than those of higher socioeconomic status [9]. Therefore, the aims of this mixed methods study were to describe the patterns of physical activity in a small sample of low-income African Americans with heart failure and to explore the pattern of referral and participation in CR.
Engagement in physical activity, including cardiac rehabilitation, is influenced by multiple factors. An integrative review identified lack of referral, comorbidities, transportation, and limited knowledge as barriers to CR participation. Racial disparities were also found in the referral process, with minority women less likely to receive a referral [10].
However, most of the research describing physical activity and exercise practices in patients with HF has been limited to Caucasian populations [11,12], or race/ethnicity has not been reported. There is a need to know more about the populations seen in daily clinical practice. This is challenging since it includes groups not typically studied in clinical trials: women, the elderly, and minority groups [13]. More recently, a large clinical trial of exercise in HF with over 2000 subjects included 40% racial and ethnic minority adults [14].
It has previously been reported that engagement in self-care in an ethnic minority population was behavior-specific, with adequate adherence to medication regimens but poor adherence to other self-care behaviors [15]. In that sample, subjects described poor adherence to symptom monitoring, which is essential for engaging in a program of physical activity. Cultural beliefs including the meaning of HF and its inevitability as a diagnosis, along with social norms, seemed to influence engagement in self-care in the ethnic minority population. Therefore, the aims of this study were to analyze qualitative and quantitative data collected in a mixed methods study to describe the physical activity patterns of African American patients with HF and examine patterns of CR referral and participation.
Theoretical Framework
The Situation-Specific Theory of Heart Failure Self-Care [16] guided this study. The Self-care of HF model comprises two components: self-care maintenance, composed of symptom monitoring and treatment adherence, and self-care management, in which a patient recognizes a change in health, decides to take action, and evaluates the effectiveness of the treatment. Integral to this theory is confidence, which is thought to moderate and/or mediate the effect of self-care on health outcomes [16]. Routine physical activity can be incorporated into this model as one of the recommended self-care behaviors that are part of a patient's daily self-care.
Methods
This secondary analysis was part of a larger study examining the sociocultural influences on HF self-care in a sample of African Americans [15]. This was a mixed method, concurrent nested study with the quantitative data embedded in a predominantly qualitative study. Given the exploratory aims of this study, the priority was qualitative data [17], which were collected via semi-structured interviews exploring self-care practices, typical physical activity, and CR referral and participation. The quantitative data were used to describe the sample and augment the qualitative data. Physical activity and functional status data were collected with valid and reliable instruments. The study received the appropriate Institutional Review Board (IRB) approvals from the New York University School of Medicine and the Health and Hospital Consortium in 2009.
Sample and Setting
This was a convenience sample of patients attending an urban HF clinic in a large municipal hospital that provides outpatient care (including medications) at little or no cost. This clinic serves a low socio-economic population. A research assistant, who was not involved in clinical care, regularly attended the heart failure clinic and distributed IRB-approved flyers to potential participants in the waiting area. Patients who self-identified as African American were invited to participate in the study. Other inclusion criteria included: a confirmed HF diagnosis based on echocardiography or clinical evidence for at least three months; relatively stable New York Heart Association class III or IV; and age over 18 years. The diagnosis of HF could include reduced or preserved ejection fraction, as well as ischemic or non-ischemic etiology. Those unable or unwilling to provide informed consent, those with a history of prior neurological events that could cause dementia, and those unable to perform tests or participate in an interview were excluded.
Data Collection and Analysis
The research assistant collected both qualitative and quantitative data in one session in the HF clinic. After obtaining informed consent, data collection sessions started with the administration of quantitative instruments. Then, the research assistant, trained in qualitative interviewing, conducted the qualitative interview. Each data collection session lasted approximately one hour. Participants were given a small non-coercive incentive to compensate for their time.
Quantitative Data
Valid and reliable instruments were used to collect socio-demographic data, physical activity information, self-care, physical functioning, and depression. Information regarding recent physical activity was obtained from the question "During the past week (even if it was not a typical week), how much total time did you spend on exercise (including strengthening exercises, walking, swimming, gardening, active housework or other types of aerobic exercise)?" The responses ranged from "none" to "more than three hours per week". Although exercise is often referred to as a division of physical activity that is more structured [18], in this study the terms are used interchangeably. Participants were also asked "What were you told about exercise by your health care provider?" with the following possible responses: "not to exercise"; "minimal exercise only"; "use a home or out-of-hospital program"; "attend cardiac rehabilitation"; or "nothing".
Self-care, including exercise, was measured by the Self-care of Heart Failure Index (SCHFI) version 6.2. The SCHFI has three scales measuring self-care maintenance, management, and confidence, with a score of ≥70 indicating adequate self-care [19]. An exercise question ("How routinely do you exercise for 30 min?") is embedded in the self-care maintenance scale. The Cronbach's alpha values for the subscales were mixed (SCHFI maintenance = 0.665; SCHFI management = 0.500; SCHFI confidence = 0.827). The alpha for the SCHFI management scale was lower than desired and may reflect some of the inconsistencies seen in this population.
The Duke Activity Status Index (DASI) [20] is a 12-item questionnaire measuring functional capacity with a possible score ranging from 0 to 58.2; in this study it had an alpha level of 0.77. The Patient Health Questionnaire (PHQ-9) [21] was used to measure depression (score of ≥10 indicating depressive symptoms). The alpha level for this study was 0.787.
New York Heart Association (NYHA) classification was measured using a standardized survey [22] and used to describe the sample. This classification ranges from NYHA class I (no limitation of physical activity) to class IV (symptoms of HF present at rest) [23]. A single question on the sociodemographic questionnaire assessed quality of life ("Overall, how would you rate your quality of life?" with response options "poor", "satisfactory", "good", and "very good"). The quantitative data were analyzed using SPSS (version 18). Descriptive statistics of the sample and correlations among the PHQ-9, DASI, and self-care maintenance scores were computed.
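For readers who want to see how the internal-consistency figures reported above are obtained, a minimal sketch of Cronbach's alpha is given below. The respondent-by-item matrix is simulated and purely illustrative; this is not the study's analysis code (the study used SPSS).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses from 30 participants to a 12-item scale; items are made
# to share a common latent trait so the example yields a meaningful alpha.
rng = np.random.default_rng(0)
latent = rng.normal(size=(30, 1))
scores = (latent + rng.normal(scale=0.8, size=(30, 12)) > 0).astype(float)
print(round(cronbach_alpha(scores), 3))
```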
Qualitative Data
Narratives about physical activity or exercise as a component of HF self-care were elicited using a semi-structured interview guide. Each interview was tape recorded and transcribed verbatim. Each qualitative interview began with open-ended questions (e.g., "Tell me about your heart failure."). To gain insight into physical activity, participants were asked about typical daily activities ("Tell me about a typical day for you"), followed by more specific questions about exercise. Finally, they were asked about experience with cardiac rehabilitation referral ("What have you been told about cardiac rehabilitation?") and about their experience if they indicated they had attended. The qualitative data were analyzed with thematic content analysis [17], using Atlas.ti (V6). This method involves identifying codes and themes within each case and then looking for commonalities that may transcend cases. Two researchers, who were not involved in clinical care, conducted each stage of this analysis. Victoria Vaughan Dickson is an expert in qualitative data analysis and Margaret McCarthy received training in qualitative analysis.
In this study, this entailed a preliminary line-by-line review of the transcriptions that yielded clusters of data labeled into brief headings of physical activity (for example "past physical activity" "current physical activity"). Themes derived from these data revealed patterns of physical activity and cardiac referral experiences. Finally, emerging themes within-cases were compared across cases to identify commonalities. Methodological rigor was maintained through an audit trail and periodic peer debriefing with experts in HF and minority population research that supported the credibility of the study [24].
Data Integration
In the final step of analysis, the data were integrated through assessment of concordance or agreement between quantitative data (how much exercise was completed in past week) and qualitative descriptions of typical daily activity. The percent of agreement between the two sources was calculated. Given the degree of depressive symptoms evidenced in the PHQ-9, the qualitative data were then reviewed for evidence of depressive symptoms affecting physical activity. An informational matrix [25] was developed to compare and contrast the emergent qualitative themes and the quantitative evidence of recent physical activity across the cases.
Sample Characteristics
This was a sample of 30 participants who self-identified as African American but were born in many different places, including the U.S., the Caribbean, and Africa. The mean age was 60 ± 15 years, 60% were men, and the mean BMI was in the overweight category (29.3 kg/m²). The majority (65%) had NYHA class III HF, with a mean of 10.9 ± 4.7 years of education. Most (60%) were single, divorced, or widowed, and the majority (83%) had some type of government insurance (Table 1). Note: NYHA = New York Heart Association; SD = standard deviation; kg/m² = kilogram per meter squared.
Quantitative Results
Almost half of the sample (43%) reported that they did "none", or "less than 30 min of exercise in the past week". When asked what they were told about exercise by their health care provider, almost one in four (23%) were told "nothing" about exercise, and over half (53%) were told to do "minimal exercise only".
The mean DASI score was 16.8, which translates into approximately five metabolic equivalents (METS). This corresponds to an ability to perform moderate activity, such as walking at a leisurely pace. The DASI scores ranged from the lowest possible score of 0 (2.7 METS) to the highest of 58.2 (10 METS). The mean PHQ-9 score was 7.6 ± 5.3, but 40% of the participants had scores of 10 or greater indicating depressive symptoms. The PHQ-9 score was significantly correlated with the DASI score. Those with higher levels of depression had lower levels of functioning on the DASI (r = −0.318; p = 0.024).
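The DASI-to-METS translation and the reported inverse association can be illustrated with a short sketch. The conversion below uses the commonly cited Hlatky regression (peak VO2 ≈ 0.43 × DASI + 9.6 ml/kg/min, with 1 MET = 3.5 ml O2/kg/min); assuming this is the exact conversion used here is an inference from the reported values, and the PHQ-9/DASI arrays are hypothetical, not the study data.

```python
import numpy as np
from scipy import stats

def dasi_to_mets(dasi_score: float) -> float:
    """Convert a DASI score to METS via estimated peak VO2 (ml/kg/min)."""
    vo2_peak = 0.43 * dasi_score + 9.6   # Hlatky-style regression (assumed here)
    return vo2_peak / 3.5                # 1 MET = 3.5 ml O2/kg/min

print(round(dasi_to_mets(16.8), 1))   # mean score in this sample -> ~4.8 METS
print(round(dasi_to_mets(0.0), 1))    # lowest possible score     -> ~2.7 METS
print(round(dasi_to_mets(58.2), 1))   # highest possible score    -> ~9.9 (reported as 10 METS)

# Hypothetical PHQ-9 and DASI values illustrating the reported inverse
# association; the study's participant-level data are not reproduced here.
phq9 = np.array([2, 5, 11, 14, 7, 3, 9, 16, 6, 12])
dasi = np.array([40, 33, 15, 8, 25, 45, 20, 5, 30, 12])
r, p = stats.pearsonr(phq9, dasi)
print(round(r, 3), round(p, 3))
```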
The mean score for each subscale of the SCHFI was less than adequate (maintenance = 60 ± 18; management = 51 ± 18; and confidence = 62 ± 18). Fewer than 25% achieved an adequate score of ≥70 on any scale. The participants' quality of life tended to be rather low, with only 11% describing their quality of life as very good, and 21% stating it was poor.
Qualitative Results
The narratives about exercise and physical activity revealed insight into the patterns of physical activity and the impact on daily life among this ethnic minority population. Specifically, reflections of past physical activity uncovered a theme of intrinsic benefits that included enjoyment of physical activity. Two related themes stemmed from the narratives describing current physical activity: "given up" and "still trying".
Past Physical Activity
Individuals in this study spoke about the activities that they used to enjoy but can no longer do ("…I was really athletic at one time … I miss … playing basketball…"). Importantly, they discussed how HF symptoms interfered with their daily physical activity and the consequential impact on their quality of life. A female with NYHA class III HF recounted her active lifestyle before HF symptoms restricted both physical activity and socialization: "I used to walk a lot, I used to be happy go lucky going places and now … it's like I feel like I'm tied down. I can't really do the things I like to do because I end up getting sick again … I used to go out dancing".
Current Physical Activity
The participants spoke about what they were able to do now that they had HF, usually as measured against previous physical activity. A 70-year-old male recalled, "I can't do things anymore. I have to take it easy. I cannot run, play baseball like I used to, with my grandkids." One 65-year-old male said simply, "I am not exercising the way I was".
Fear emerged as a factor that influenced one's willingness to engage in physical activity now. For example, fear that physical activity would precipitate symptoms affected the current daily activities of one 70-year-old female with Class III HF. "…I'm afraid if I go out, I can't make it back … I can't even carry three pounds of nothing, a carton of milk is too much. That's very hard".
"Given up"
Generally, the narratives in this sample revealed significantly limited physical activity levels that impacted all aspects of daily life and as a result many described having "Given up". For example, a 63-year-old male with Class III HF described a typical day as "…Sometimes I don't feel good and I just stay in bed all the time. I just get up, eat something and go back to sleep … all day … sometimes for three days".
Another 72-year-old woman with Class III HF talked about barely being able to let her home health aide in the door in the morning. "…now I don't even bother to get up. Sometimes I can't get up … and let my girl in. Sometimes I go back to bed 'cause that's all I have any energy for".
Individuals described "giving up" despite the desire to remain active. "I don't really do nothing now. I just do things for myself, just for me … I want to go do things all the time. I never lost interest. But I just can't do it no more".
"Still Trying"
However, despite symptoms of HF, some remained optimistic. One 70-year-old man with NYHA class III HF expressed his belief that if he could just get back to the gym he would be all right. "If I go back to the gym … I think I will be right where I was before".
A 60-year-old female with NYHA class III HF recognized the importance of exercise, but adjusted her activity level given HF symptoms. "I still go to the beach but very little swimming 'cause I find when I swim I get tired quick so I have to still be careful. So I have to cut down on a lotta activities … but I still try…"
Cardiac Rehabilitation Referral and Experience
When asked about cardiac rehabilitation, six participants (20%) stated they were referred to cardiac rehabilitation. No participants recalled asking for information about exercise. Only one person completed a program. He described attending the exercise program twice weekly, and its positive impact on his health "I think it really helped me with the diabetes". There were three reasons that individuals cited for not attending CR: lack of knowledge, no insurance coverage, and inability to complete the exercise. Twenty-four participants reported they had never been referred; many knew nothing about CR. A typical response was, "No. They never told me nothing about that".
Data Integration
Qualitative descriptions of a typical day were triangulated with quantitative responses regarding how much exercise was completed in the past week (categorical responses ranging from "none" to "greater than 3 h"). There was 82% concordance, or agreement, between qualitative descriptions and the quantitative data. Some that were not concordant appeared to overestimate the amount of exercise done in the past week, while qualitative descriptions of their typical day revealed very little activity at all. Reasons for decreased physical activity centered on a low level of physical functioning and conditioning, which is consistent with the majority of our sample having NYHA Class III HF. However, there was also compelling evidence that depression may play a role in low levels of physical activity.
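Percent agreement of this kind can be computed as a simple proportion of matching category labels; the sketch below uses hypothetical labels rather than the study's actual classifications.

```python
# Agreement between qualitative activity categories and quantitative
# exercise-time categories (hypothetical labels for illustration only).
qualitative = ["low", "low", "moderate", "low", "moderate", "low"]
quantitative = ["low", "low", "moderate", "moderate", "moderate", "low"]

agreement = sum(q == s for q, s in zip(qualitative, quantitative)) / len(qualitative)
print(f"{agreement:.0%}")  # prints 83% for these example labels
```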
A 57-year-old woman with evidence of depression (PHQ-9 = 11) and a low DASI score (1.75) indicating low physical function, reported less than 30 min of exercise in the past week but in the past had enjoyed dancing. She described trying to dance recently at a family party but had to sit down after less than five minutes "I used to go out dancing. But now I can't".
Although the mean DASI score reflected an ability to do moderate activity, the qualitative descriptions of their day reflected much less; and revealed that depression might be a factor. For example, one man revealed how HF symptoms and depression have affected him, "after you get chest pains and everything, you, you don't feel like doing nothing, you just get depressed".
Discussion
This mixed method study examined the patterns of physical activity and cardiac rehabilitation referral in a sample of older African American adults with HF, and revealed insights into influences on their daily physical activity. Specifically, low levels of physical activity and elevated levels of depressive symptoms were found in this population. Our study provides important information about why physical activity may be so poor, including the prevalence of symptoms that interfere with both routine and planned physical activity, as well as a lack of information about how to engage in regular physical activity or exercise.
Unfortunately, in general, the self-care practices in this sample were sub-optimal, which parallels their lack of physical activity as a part of self-care and is not a new finding in low-income or ethnic minority populations [15,26,27]. The principal signs of HF (dyspnea and fatigue) [4], which result in exercise intolerance, were seen in this sample. Participants reported symptoms of extreme fatigue and shortness of breath, which interfered with day-to-day physical activity. Although regular exercise can improve these primary symptoms of HF [28], the participants in our study exercised very little according to both the quantitative data and qualitative accounts. Our study provides important insight into this paradox. Despite current clinical guidelines, our sample reported that they received minimal explicit instruction on how to incorporate physical activity into their daily self-care routine. The factors that contribute to this finding are likely complex and cannot be discerned from the current study design. One of the goals of Healthy People 2020 is to increase the proportion of medical office visits that include counseling or education about exercise with patients diagnosed with heart disease, diabetes or hyperlipidemia [6]. This gap is particularly critical for ethnic minority populations who have a higher prevalence of HF, lower levels of physical activity, and may benefit from direct and explicit counseling during an office visit. Additionally, higher levels of physical activity have been associated with better cognitive function in older adults with HF [29]. This may be a much-needed additional benefit in this population of adults with HF. The results of a systematic review of cognitive impairment in HF reveal that adults with HF have increased odds (OR = 1.62; 95% CI: 1.48-1.79; p < 0.0001) of cognitive impairment [30].
In addition, few individuals in our sample reported they had received a referral to cardiac rehabilitation. According to the American Heart Association [3] a tailored exercise program is viewed as a safe, adjunctive component of treatment for HF patients. In addition to providing a place for monitored exercise, cardiac rehabilitation programs can be a source of self-care counseling, with a focus on education and skill development as well as an opportunity for frequent symptom assessment [31]. In 2014, the Centers for Medicare and Medicaid Services added cardiac rehabilitation services to beneficiaries with stable chronic HF [32]. However, at the time of this study, there was little opportunity for our study population to attend cardiac rehabilitation and most cited lack of insurance coverage. This finding is not new and highlights a health disparity for ethnic minority patients with HF. Racial disparities in the referral process to cardiac rehabilitation have been noted in the past. In a study of almost 2000 cardiac patients eligible for cardiac rehabilitation, Whites were more likely to be referred than Blacks (OR = 1.81; 95% CI: 1.22-2.68) even after controlling for age, education, socioeconomic status, and insurance [33]. The barriers to cardiac rehabilitation identified in our ethnic minority population sample, particularly lack of referral, were very similar to those in other populations. Our study adds to the literature by explicating the lack of exercise counseling, CR referral, and minimal awareness of the rehabilitation program in this population. It is important to note that many of these participants had non-ischemic HF that may have precluded providers from referring to CR given the lack of insurance coverage.
Our study reinforced that targeted education about the benefits of CR is needed, particularly in patients with low levels of education and low socioeconomic status, who have had low adherence rates to CR in the past [34]. Education about medication adherence, maintenance of a healthy body weight, and management of coexisting conditions (hypertension, diabetes) included in CR [29] may also benefit this population. In fact, the higher prevalence of HF in African Americans has been attributed to modifiable risk factors such as high blood pressure, high blood sugar and smoking. Obesity and physical inactivity are additional risk factors that may also be modified [35]. However, lack of a safe environment may inhibit engaging in physical activity in the surrounding neighborhood [35], and this systems-level barrier is harder for providers to address. One solution may be to collaborate with local churches to promote a physical activity intervention, as the church is a known source of trust and would be an ideal partner between African Americans and the health care community [36]. A similar approach has been used in a comprehensive health counseling study set in a community health center. The aims of that study were to increase physical activity and improve the dietary habits of African American women at risk for heart disease [37].
Sociocultural influences have previously been reported elsewhere as an influence on overall self-care in this sample [15] and may also help to explain lower levels of physical activity in this population. Cogbill [38] explored whether sociocultural attitudes were associated with self-reported physical activity in African American adults age 45-75 (n = 446). Results indicate that individuals with strong concerns for family and community were more likely to report meeting recommended levels of physical activity. Our study adds to the current literature by exploring how this sample has adapted to living with the symptoms and limitations of HF. Some were still trying to do as much as possible, while others have given up trying to do what had been possible before they had HF. Participants spoke often of all that was lost: Playing ball with grandchildren; dancing at a party; swimming at a beach; shopping at a mall; or even going to church. Previous clinical trials indicate regular exercise is safe and can improve functional capacity in patients with HF [14]. The results of our study suggest a tailored approach that incorporates past physical activity preferences may be beneficial in promoting physical activity in this population.
In addition, physical activity in participants in our sample was associated with depressive symptoms. In accord with our finding of a high prevalence of depressive symptoms detected by the PHQ-9, a recent study using the Multiple Affective Adjective Checklist (MAACL) found that in patients diagnosed with HF, 63% had mean scores that exceeded the level for depression [39]. In subjects from the HF-ACTION (Heart Failure: A Controlled Trial Investigating Outcomes in Exercise Training) study, which was the largest randomized trial of exercise in HF, 28% had clinically significant levels of depressive symptoms [40].
Our results suggest that depression as a common comorbid condition plays a potent role in physical activity levels. Specifically, individuals with low exercise levels and PHQ-9 scores consistent with depressive symptoms described how mood influences ability to be physically active. Additionally, functional status was negatively correlated with depression; those with higher depression scores on PHQ-9 had lower functional status on the DASI. In a larger sample (n = 256; 18% minority) of heart failure patients, a change in depressive symptoms was the strongest predictor of one-year health-related quality of life, after controlling for functional status, demographics and other clinical factors [41]. In the HF-ACTION study, those subjects with a higher level of depression had a greater risk of HF hospitalization and HF death. But importantly, subjects randomized to the exercise program had significantly lower levels of depression as compared to the usual care group by three months, which persisted through the first year [40]. Given the triad of depression, low functional status, and poor health-related quality of life in patients with HF, identifying culturally appropriate physical activity interventions may provide improvements across these areas.
The patients in the current study had evidence of low functional capacity, but many were trying to stay active in their own way. Nevertheless, this low level of function may not preclude them from benefitting from regular physical activity. In the HF-ACTION study, subjects who identified as Black had lower baseline functional capacity compared to Whites as measured by the six-minute walk and cardiopulmonary exercise test [42]. Although they experienced higher HF hospitalization, there was no evidence Black subjects exhibited a differential response to the exercise training. The reason for this outcome is unclear, but still supports the use of routine exercise as therapy for all patients with HF.
Finally, our study was unique in its focus on physical activity as a component of self-care as guided by the situation specific model of self-care. Accordingly, self-care requires knowledge about and skill in the particular self-care behavior as well as compatibility with one's values [16]. Our study revealed that the lack of information about engaging in physical activity or regular exercise was a barrier to participation. Sociocultural factors, including their desire to remain active with family and friends, also likely contributed to how individuals engaged in physical activity, both before and after their HF diagnosis. Culturally sensitive interventions that increase knowledge, and help individuals develop the necessary skills to safely engage in exercise, are needed.
Limitations and Strengths
There were several limitations of this study, including the small sample size limited to African American participants, without other minority individuals. Although this sample size was appropriate for the qualitative aims, the quantitative analysis was limited. For example, the relationship between functional status, physical activity, and depression in an ethnic minority population needs to be fully explored in a larger sample. This study was cross-sectional and no causal links could be established between depression and physical activity. Additionally, the influence of family members' or caregivers' social support on levels of participants' physical activity was not explored in this study. It has been found that with high levels of social support, patients with HF were more likely to exercise on a regular basis [43].
Health literacy is an important potential confounder that was not formally assessed in this study. The low education level of this sample was learned only during data collection. This may have affected their understanding of health information like physical activity instructions provided to them in the past. In addition, there may have been an element of social desirability in completing the instruments that may help explain lack of concordance in some cases [44]. Despite these limitations, our findings provide important new insight into the barriers that impact physical activity behavior in African Americans with HF.
Conclusions
Our findings highlight the need for the development of strategies to increase adherence to guideline recommendations for exercise in this population. This may be one avenue to reducing the disparate outcomes seen in African Americans with HF. Given the high prevalence of depression in this sample, additional work is required to better understand the interaction between depression, physical activity and HF. Additionally, understanding the influence of culture in minority patients with HF is essential in developing physical activity interventions. The perception of safety in engaging in physical activity for this vulnerable population may need further exploration. Providers may need to repeatedly endorse the benefits of physical activity for their patients. Finally, given the known benefits of CR on both functional status and depression, progress needs to be made in making this program not only affordable but accessible for all individuals with HF. | 2016-03-14T22:51:50.573Z | 2015-10-15T00:00:00.000 | {
"year": 2015,
"sha1": "9b6ed100b77030a25a6a4c64265e8507a4cd0990",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9032/3/4/973/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9b6ed100b77030a25a6a4c64265e8507a4cd0990",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
116624400 | pes2o/s2orc | v3-fos-license | Supporting the Changing Research Practices of Civil and Environmental Engineering Scholars January 16, 2019
In fall 2017, the University of Colorado Boulder (CU Boulder) Libraries joined with ten other university libraries to conduct an Ithaka S+R study investigating the research practices and needs of civil and environmental engineering faculty. Ithaka S+R is a not-for-profit organization that provides research and strategic guidance for colleges, universities, libraries, museums, scholarly societies and other institutions that support higher education. This study was part of their ongoing research support services program examining how the practices of scholars vary by discipline.
Introduction
In fall 2017, the University of Colorado Boulder (CU Boulder) Libraries joined with ten other university libraries to conduct an Ithaka S+R study investigating the research practices and needs of civil and environmental engineering faculty. Ithaka S+R is a not-for-profit organization that provides research and strategic guidance for colleges, universities, libraries, museums, scholarly societies and other institutions that support higher education. This study was part of their ongoing research support services program examining how the practices of scholars vary by discipline.
Methodology
Librarians conducted eight interviews with civil and environmental engineers at the University of Colorado Boulder. Of the eight engineers, seven were faculty members from the Department of Civil, Environmental, and Architectural Engineering and one was faculty from the Department of Mechanical Engineering. Interviews were held and recorded in the engineers' offices on campus with a semi-structured interview guide provided by Ithaka S+R. Relying on the interview guide enabled the overall tone of the interviews to be more conversational, allowing the opportunity for interviewers to ask follow-up questions and to explore topics further as needed.
Interviews were transcribed and anonymized to remove names and other identifying information. Qualitative coding of transcriptions was performed using the analysis software application Dedoose. The team of librarians developed consensus regarding the codes used with at least two librarians coding any single transcript. Any remaining discrepancies were resolved by the group as a whole. Coding of the transcripts allowed the librarians to view common themes, around which the findings of this report are based.
The interviews covered a range of topics that allowed scholars to both speak to their own experiences and to address the broader context for their discipline. We found that the interviews provided valuable information on the research practices of civil and environmental engineers. This insight into the work habits and information concerns of scholars is particularly useful to librarians as we continue to develop and refine our services and collections to meet their needs.
Diversity of Research Practices within Civil and Environmental Engineering
One of the themes observed in the data was the diversity of the fields of civil and environmental engineering. Scholars are collaborating with professionals from a variety of fields and types of organizations. In addition, the kind of work they are doing is varied (not what might be thought of as traditional civil or environmental engineering), and interdisciplinary. Finally, researchers are using varied research methods including modeling, experimentation, fieldwork, and social sciences methods.
Multidisciplinarity and Collaborations
Scholars described collaborations with a large variety of individuals and organizations, both within and outside of the University. Graduate students were mentioned as collaborators in every case, and they were generally the first collaborators described, indicating their importance to scholars and the close proximity in which the researchers work with the students. Scholars also discussed collaborating with other university researchers, both within the organization and at other universities, and with other engineering scholars, including civil or environmental engineers, electrical engineers, mechanical engineers, chemical engineers, and computer scientists. There were also collaborations with researchers from other fields, mostly in the sciences (chemistry, biology, medical sciences), but with some social science researchers as well (sociology, geography, behavioral sciences, legal studies). Scholars mentioned collaborations with other university researchers in Colorado, but also described national and international university collaborations. Outside of academia, scholars discussed partners in industry (such as the oil and gas industry), and non-profit or governmental organizations including local community groups, public health organizations, the National Renewable Energy Lab, the National Academies of Sciences, Engineering, and Medicine, the US Geological Survey, the US Environmental Protection Agency (EPA), and the World Health Organization. These collaborations also provide insight into scholars' funding sources. Collaboration is easier with colleagues at the same institution, but with tools like video conferencing and shared document applications, collaborating across distance becomes more manageable. In terms of barriers to collaboration, time was identified as one of the main obstacles. Scholars specifically mentioned difficulty in finding time to identify collaborators, learn new tools for collaboration, or set up larger collaborative projects. Also identified were the differing expectations of collaborators from diverse organizations (such as federal or state government agencies) or from different disciplines (such as the medical sciences) in terms of research goals and publication practices. Researchers from other organizations also had differential access to information sources like journal articles, and this was an impediment to collaborative practices.
Interdisciplinarity and Fit in Civil and Environmental Engineering
Many interviewees discussed not fitting into a traditional civil engineer or environmental engineer mold, which also highlights the interdisciplinary nature of civil and environmental engineering. One scholar spoke to environmental engineering traditionally having a stronger emphasis on water issues, and working in the area of air quality placed the scholar outside the norm in terms of research topic and research methods. In addition to interviewees feeling their topics or methodologies do not fit with traditional civil or engineering research, one interviewee discussed the disconnect between practicing civil engineers and the work done by engineering scholars: "Academia is so far ahead of practice that it's hard to make an impact, so that's a big challenge where sometimes people get down on themselves about academia and then they don't feel like they're making a difference." These experiences highlight the wide landscape seen in the fields of civil and environmental engineering, across research topics and approaches, and as scholars communicate between practice and research.
How well a scholar's work 'fits' into the field also shapes their publishing practices. If their work does not fit into traditional civil or environmental engineering, then the work might not fit into traditional publishing venues: "In my field in particular conferences are actually somewhat prestigious and it seems like in civil engineering they're not and so I made a reference about "oh what if I get a conference paper in should I put it on my [annual faculty report]" and then another faculty member said "oh God no nobody cares about conferences" and I said "well this conference is a 20% acceptance rate that's actually lower than a lot of the journals that I submit to". So I feel like there is the stigma [of] I have a lot of conference publications but a lot of people in the department don't care about this." In putting their work into the most appropriate venues, they may run into issues with tenure and promotion expectations.
Methodological Diversity
Civil and environmental engineers use a wide variety of methods, including fieldwork, lab experimentation, modeling, and social science methods like interviews and surveys. Modeling was described most often, potentially indicating its importance or popularity as a research method. There were also several terms used to describe modeling methods, including computational mechanics, multiscale modeling, numerical methods, simulation models, and computer modeling. Each scholar may employ several different methods, for example, "most of my research is measurement based. So, we'll deploy instruments and we'll gather measurements... and then laboratory work is for calibration and instrument storage and manipulation." The scholars described using multiple methods because they were supervising a variety of graduate students, and each student's work might use a different methodology. Finally, most interviewees described literature review as one of the methods they use over the course of their research, but one scholar's methods are based on analyzing historical documents.
Data Practices and Data Management
Scholars in civil and environmental engineering utilize many types of data, from a variety of sources, which are stored in many locations. However, data management practices are not consistent in civil and environmental engineering, and this causes some anxiety for the researchers. One interviewee stated, "I do have concerns that I may not be quite living up to the expectations of the funding agencies as far as data management."
Data Types, Sources, and Sharing
Scholars are collecting their own data and reusing data collected by others. They set up instruments to collect environmental monitoring data, generate data for analysis through modeling, or create data in the lab. Interviewees also described using data from others, including the EPA, public health departments, infrastructure maintenance departments, the campus facilities department, and other researchers around the country. Requesting and using data produced by others came with a number of frustrations, including waiting long amounts of time or not hearing back. One scholar described their frustration with reusing data: "Well, I guess a lot of times the data isn't always clean -there's some missing values and some of the measurements are wrong, it's just stuff that people have been passing around and they might've modified it, so it's hard to tell if it's the original data set or if somebody gave you a different version." While reusing and sharing data seemed to be commonly practiced by civil and environmental engineers, some interviewees encountered instances where "sometimes people were a little guarded with their data." The timing of when data is shared matters, as "people in my field don't like sharing their data before it's out." Interviewees described how they did not want to share their data with others until they had published, in addition to encountering that practice in others, indicating a routine practice in the field. Interviewees were open to sharing the data they produced, or their models, with other researchers after they had published.
Data Storage
A surprising number of scholars stated their data was stored on their students' computers. One stated, "my students keep it on their computer and then at some level it gets aggregated and sent to me, but I don't actually have myself most of it at this point." Other data storage locations include their own computers or external hard drives, having files in their email accounts, cloud storage applications, and one researcher described using a large data storage facility at the university. Many interviewees described using multiple options so that they would have file backups, but several expressed a lack of organization and not knowing where all their data was.
Data Management
If interviewees were following data management practices or plans, most described having someone else, such as a colleague or student, doing it. When one interviewee was asked about keeping any of their work on GitHub, or any other similar resource, they answered: "I've been asking my students to do this. We aren't using it quite well yet, but we are moving that way." Another described leaving the data management to a collaborator on a grant. Overall, data management was a challenge to scholars: "I have challenges with data management in general. I promise to do things on my grants that I don't have the skills or time to actually do very well." When asked if they had any plans for managing data beyond current use, preserving their data, or making them publicly accessible, one scholar answered, "I don't have any plans to do that, because I don't really know how," while another said they wanted to do "something other than what I'm doing." In addition, scholars spoke to the need for things like data format standardization and a shared disciplinary repository for data, similar to what DNA researchers use, so others working in similar areas could re-use the data in new ways.
Finding Information
The multidisciplinary tendencies of civil and environmental engineering render many library databases insufficient for gathering relevant and comprehensive information. While these are still valued for their precision and features for enhancing a search strategy, the actual pool of searched information must be broad and incorporate materials such as grey literature and technical reports. Google Scholar is the preferred tool for picking up the range of materials necessary for this kind of research, although one scholar reported searching Google before switching to Google Scholar to broaden the initial search even further. Databases like Web of Science, Engineering Village, or IEEE Xplore provide a deep dive into the academic literature, but lack the coverage for successful searching beyond that.
However, even Google and Google Scholar have significant limitations when it comes to finding information produced by governments and various non-governmental organizations. This puts scholars in the difficult situation of needing to know what exists and where it exists before searching for it. One noted, "Just to find it is very challenging. A lot of time. So it would be great to have better access to reports that governments put out, maybe even ones in other languages. Reports that NGOs put out, reports that say maybe UNICEF or USAID or organizations like that are putting out, because those have a lot of important literature that don't get published, but you know, as researchers we could evaluate the robustness of that or see if it's up to peer review quality and can utilize that data in the work that we're doing too, but oftentimes that gets lost and I feel that that's a big challenge." Some scholars voiced frustrations with the abilities of graduate students in their labs to conduct adequate searches for information. Lack of familiarity with database features and proper search string composition was an understandable concern and one with which librarians are familiar. Even Google Scholar with its broader search capabilities is subject to students' shortcomings when navigating the information landscape. "If they cannot find [it] on a Google search on the first page they think it does not exist," one scholar explained. Nevertheless, a lack of awareness of the library's ability to assist with comprehensive searches was also evident. Few scholars provided search training for students in their labs, with most of it occurring on an ad hoc basis.
Sources Used
Every scholar interviewed indicated regular use of peer-reviewed academic literature, but also mentioned routinely seeking out other types of scientific publications. Grey literature of all types and from a wide variety of sources is both heavily used and highly valued. The actual use can be plainly informative or have a more practical approach if scholars need to assess the quality of the research contained therein or use the document's references to find literature from other organizations.
Conferences were viewed very differently depending on a given scholar's niche within the field. "People are still somewhat siloed in their conferences," remarked one scholar for whom conferences did not regularly come into play, but recognized their varied reception and use throughout the discipline. For some, conferences are a way of keeping up with research and discovering who the major players are. Others saw conference presentations as tantamount to a publication in an academic journal, but struggled with poor indexing and outdated dissemination practices that prevented access if they were not at the conference themselves.
While current information is valued for innovation and helping scholars stay up to date on research trends, older information is also sought because of the insight it can provide into past engineering practices, especially as scholars find themselves dealing with structures and materials developed decades before. The library is viewed as particularly useful to scholars needing access to this kind of information.
Preprints are used by scholars whose fields are developing extremely quickly as a way to discover and disseminate research with which traditional modes of scholarly communication, like peer-reviewed academic journals, are not able to keep pace. Waiting a year for an accepted journal article to be published and available is not efficient or effective for these scholars or the research community they wish to reach. Still, some researchers expressed wariness over sharing their research before it is officially published. Preprints also posed a problem with multiple versions of the same information being available.
Information Management
Managing information through software such as EndNote or Zotero is frequently delegated to graduate students. Several scholars were quick to point out that their students knew far more about these systems than they did. Some scholars use citation management software themselves, but struggle with finding time to create efficient workflows for saving and indexing documents. Others rely on a folder on their own computer for saving PDFs related to a particular project, although storage space becomes an issue for long-term retention. Files might only be saved for the duration of a project in these cases and if the citation is needed at a later date, the scholar can turn to the references in their own paper. Somewhat surprisingly, better search capabilities were a factor for deciding to save something at all, but this is related to the time required to learn new software that could be viewed as peripheral to the actual research, "There's more and more stuff that's just available with a search. It's harder to justify keeping it yourself...if I had more time I can imagine I would have a better system."
Research Communication Practices
Open Access Considerations
The nuances of a changing publication landscape can be a messy subject for scholars to untangle with regard to open access. Most scholars interviewed reacted favorably to the idea of open access, understanding the value of making their research freely available to a wide audience. Nevertheless, cost is an overwhelmingly prohibitive factor given that some open access journals require thousands of dollars to make an article open access. One scholar put it well, "I like that it's accessible to everybody and that they don't have to pay for it, but I'm torn because it means I pay for it." Grant money, particularly for federally funded research, can help with this cost, but publication funds are not a standard practice in the field.
For those scholars able to publish in open access journals the experience seems to have been positive and one they would seek out again. "Once I published in a journal that was open access, and it's been by far my most cited paper. I would love to have the money to pay for the open access option for my journals." Others have had some luck with participating in open access through preprints or even making research available on their own websites. Reviewing for open access journals is also an option with credits towards publication costs sometimes given to reviewers as compensation for their time.
If cost presents the greatest restriction for publishing in an open access journal, legal ramifications are its counterpart in making research available through an institutional repository. Scholars are familiar with the University of Colorado's open access policy and with the increased ability to share their research that the repository allows, but they are also mindful of the copyright agreement they sign each time they publish an article. "I haven't done it because I was worried about it being illegal," and "I'd be happy to do it, I'm just ignorant on what I'm allowed to do," were statements typical of the sentiments expressed during the interviews when scholars addressed questions about repositories.
There is also some suggestion that the idea of what an online institutional repository is and what function it serves remains a bit unclear, especially to newer faculty who may not yet be familiar with this type of library service. One scholar seemed to think this was a type of subscription database. Others viewed institutional repositories as akin to ResearchGate or Academia.edu. No scholars reported using CU's institutional repository as a means of sharing their research.
Workplace Considerations
Most scholars have a few journals in which they preferred to publish. These often align very closely with the specific focus area of the scholar's research. Scholars have a strong sense of the journal's audiences, acceptance rates, and turn-around times for publication. One scholar with a non-traditional background mentioned wanting to publish in civil engineering journals, which they had not done previously, but still feeling like an outsider in terms of knowing exactly where to publish.
Metrics also factor heavily into publishing decisions, particularly for pre-tenure faculty and those seeking promotion. Acceptance rates were frequently mentioned as an indicator of prestige, as well as impact factor, which serves both as a means of distinguishing what "good" journals are and the potential for recognition in the form of citations. A pre-tenure faculty member lamented, "I want my citations to go up because there's people telling me 'you know your H index has to be at least 10 to get tenure' or something like that and mine's only at seven so I'm like really insecure about it." Tenure and promotion make publication considerations a tricky puzzle to solve and scholars must balance impactful publishing with frequent publishing. One scholar mentioned these pressures made them less selective on some decisions, "Now that I'm thinking about tenure as the end goal, I am changing my opinion of how often I'm publishing and where I'm publishing. Much more open-minded now, is the right word." The pressures of academia could also very well be an impediment to open access when it comes to choosing where to publish, "At the national labs, where they may not have the same pressures that we do here, to publish, or to be ahead, or to be competitive in that sense, get the next grant, or whatever. There's more willingness to just be open." Post-tenure faculty have the luxury of making the audiences of their publications, even nonacademic ones, the primary factor in these decisions. "If we're really trying to reach other climate researchers, we might publish in Climate Change. If we're trying to reach people who are talking about what's the economic impact on things, we might publish in Development Economics." Outside of academia, government organizations at the state and federal levels were frequently mentioned as desirable audiences. There is an earnest and deeply professional desire among civil and environmental engineering scholars to reach practicing engineers who are actually "doing" the work and to see one's own work have an impact that resonates outside of bibliometrics.
Recommendations
Data Management Support
Recommendations regarding data practices are to provide educational opportunities for data management practices, and to address benefits and options for scholars in civil and environmental engineering. Taking time to learn about data management and related services will help scholars reduce the stress they feel with regard to their own practices. Experts are available at the University of Colorado Boulder Libraries to assist scholars in learning about and using these services. Because time was identified as a limiting factor, offering tools and services that can help them competently practice data management in an efficient manner is important. Offering guidance and best practices about data storage options would help research teams keep their data in safe locations that are accessible to the entire team, and provide a solution for long-term storage. Scholars also mentioned the need for repositories and metadata provision to help preserve and make accessible data sets.
Publication and Promotion of Research
Scholars made several requests related to publishing practice. While they are overall aware of many different publishing venues and which are the best fit for their work, questions remain with regard to open access and predatory publishing. To address this need for greater clarity on publishing options and pitfalls, the Libraries could better market to civil and environmental engineers their information sessions, consultation services, and written materials that address these publishing issues. The Libraries currently have personnel well versed in publication through open access journals and institutional repositories who help on a case-by-case basis with author agreements, publication funds, and journal evaluation, services of which many scholars are unaware. This expert assistance could also help to calm scholars' fears they are doing something "illegal" when making publications available through the institutional repository, or that they are publishing in a less reputable journal. Additionally, scholars requested more support for publicity for themselves, their students, and the work they are doing with their teams so they can more effectively promote their work. The Libraries frequently feature research from other departments in events and exhibits and should seek opportunities for this kind of collaboration with civil and environmental engineering faculty.
Education for Graduate Students
While civil and environmental engineering scholars were overall extremely competent at finding information related to their areas of research, the disconnect between their abilities and those of the graduate student workers in their labs suggests a potential for librarian assistance. Additional graduate student skill gaps mentioned were in writing, properly citing sources, and statistical analysis. Librarians offer graduate students assistance with literature review, citation management, and data management, and more effective promotion of those services to graduate students is important. Librarians can be called on to help as questions arise, or can offer semi-embedded assistance, where they would be more formally integrated with developing the lab's workflows related to information seeking and management. A new possible service model is for librarians to partner with writing center and statistical analysis center campus services to provide a graduate student research orientation. This would provide graduate students with the opportunity to learn about a number of services available on campus, and develop skills to help them throughout the research cycle.
Conclusion
This study has offered valuable insight on the information needs, research practices, and concerns of civil and environmental engineers at the University of Colorado Boulder. While the study takes into account the broader landscape of the civil and environmental engineering field, viewing the findings through the lens of librarianship offers actionable directions for greater support and better services for these scholars. University Libraries at the University of Colorado Boulder have positioned themselves to offer support not just through traditional liaison librarianship, but also through services centered on data management, open publication and dissemination of research, and dedication to student success. Knowing the particular ways in which these services may be adapted for civil and environmental engineering provides a foundation for robust and nuanced services that speak directly to the scholars who need them.
Relationship between Community or Home Gardening and Health of the Elderly: A Web-Based Cross-Sectional Survey in Japan
There have been many reports indicating the relationship between gardening and health or healthy lifestyles among adults in developed countries all over the world. However, Japanese evidence is lacking. The aim of this study was to clarify the relationship between community or home gardening and health status or a healthy lifestyle using a web-based survey with Japanese elderly living in the community. A survey was conducted to gather data from 500 gardeners and 500 nongardeners aged 60 to 69. As a result, significant relationships were shown between community gardening and exercise habits, physical activity, eating vegetables, and connections with neighbors. Moreover, the significant relationships between home gardening and the following items were indicated: Subjective happiness, exercise habits, physical activity, sitting time, eating breakfast, eating vegetables, eating balanced meals, and connections with neighbors. No item demonstrated a significant relationship with gardening frequency. A significant relationship was demonstrated between gardening duration and health problems affecting everyday life. Further significant relationships were shown between gardening with others and subjective happiness, having a reason for living. In conclusion, promising positive relationships between community or home gardening and health or healthy lifestyles were indicated.
Japan's aging rate is one of the highest globally [9]; the fact that gardening can help in improving the health of the elderly in Japan is an important finding from a global perspective. However, in Japan, there have only been two studies on gardening and health or healthy lifestyles in the general community [10,11]. One study was conducted in Tokyo, the capital of Japan and one of the most urbanized areas in the world [10]. In this study, the relationship between community gardening and health was analyzed [10]. Positive relationships between community gardening and mental health, subjective health, social cohesiveness, and vegetable intake frequency were reported [10]. Another study was conducted in a city in Gunma Prefecture, which is a suburban setting [11]. This study analyzed the relationship between home or community gardening and health or healthy lifestyles [11]. However, the study was limited by a small sample size [11]. Moreover, some of the results of these two studies were inconsistent [10,11]. For example, in the suburban area, there was no significant association between fruit and vegetable intake frequency or subjective health and community gardening [11]. Therefore, it is necessary to accumulate further evidence in Japan about these topics.
The aim of this study was to clarify the relationship between community or home gardening and health status or a healthy lifestyle using a web-based survey for Japanese elderly living in the community. After comparing findings with those from previous studies, the relationship between gardening and health in Japan is discussed.
Study Design and Participants
This was a web-based cross-sectional study in Japan. The survey was conducted by Cross Marketing Inc. [12] on 6 December 2012, as a part of the Committee on Evidence on Farm Work and Health by the Japanese Ministry of Agriculture, Forestry and Fisheries [13]. Cross Marketing Inc. is a survey company with approximately 4.2 million people registered, the largest in Japan, and it is also possible to specify the characteristics of the surveyed population [12]. Here the survey was conducted to gather data on 500 gardeners and 500 nongardeners aged 60-69 [13]. In addition, professional farmers were excluded from the participants [13]. Moreover, respondents included were from all 47 prefectures (administrative subdivision) in Japan [13]. To prevent only those who were concerned about agricultural work and health from responding to the survey, the terms "farm work" and "health" were omitted from the research title, which was entitled "Questionnaire Survey on Daily Life" [13].
There were no ethical problems in conducting this study. All participants provided informed consent for inclusion before they participated in the study. The survey was conducted anonymously, and all responses were optional [13]. No issue arose in conducting this research that contradicted the Declaration of Helsinki. The survey data were published on the website of the Ministry of Agriculture, Forestry, and Fisheries [13]. The data were used after obtaining permission from the person in charge at the Ministry of Agriculture, Forestry, and Fisheries. The disclosed data are anonymous and fall outside the scope of the ethical guidelines concerning medical research on human subjects in Japan [14]. Therefore, review and approval from the Takasaki University of Health and Welfare IRB office was not requested.
Outcomes: Health Status and Healthy Lifestyle
To assess health status, items were used as follows: Subjective symptoms, periodic visit with illness or injury to clinic, health problems affecting everyday life, subjective happiness, feeling a reason for living, psychological distress, and BMI.
Subjective symptoms, periodic visits with illness or injury to clinic, and health problems affecting everyday life were indicated as "present (yes)" or "none". These three items were used in Comprehensive Survey of Living Conditions in Japan [15].
Subjective happiness was rated on an 11-point scale (unhappy = 0 to happy = 10). This item has been used in previous research studies [16,17]. The reliability of measuring happiness levels using a single item has been clarified by a previous study [18]. On the basis of previous reports [16,17], we divided the participants into two groups using the median happiness of the Japanese, that is, scores of 6 or less versus 7 or more.
The participants were asked "Do you feel a reason to live (fun and pleasure)?" and rated as "none, little, moderate, a lot". This was the item used by an investigation of the Cabinet Office in Japan [19]. The answers were divided as "little or none" and "moderate or a lot".
Psychological distress was rated using four of the six items of the K6 [20], Japanese version [21] (nervous, restless or fidgety, so sad that nothing could cheer you up, and everything was an effort; questions about hopelessness and worthlessness were not asked). The total score ranged from 0 to 16. Moreover, Cronbach's alpha for the four items was 0.850, indicating no problem with internal reliability. In previous studies using the K6, less than 12 points was regarded as the cutoff point [22,23]. Since only four items were used here, participants were classified into two groups according to whether their score was less than 9 points.
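To make the scoring just described concrete, the following is a minimal sketch, not taken from the paper, of how a four-item sum score and its Cronbach's alpha could be computed; the simulated responses and variable names are purely illustrative.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

# Simulated responses to four items scored 0-4, driven by a common "distress" factor
# so that the items are positively correlated (as real K6 items would be).
rng = np.random.default_rng(0)
latent = rng.normal(1.5, 1.0, size=(500, 1))
k6_items = np.clip(np.rint(latent + rng.normal(0.0, 0.8, size=(500, 4))), 0, 4).astype(int)

k6_score = k6_items.sum(axis=1)      # total score, range 0-16
low_distress = k6_score < 9          # dichotomization at the 9-point cutoff

print(f"alpha = {cronbach_alpha(k6_items):.3f}, low-distress group n = {low_distress.sum()}")
```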
For BMI, participants were asked their height and weight, from which BMI was calculated. On the basis of the BMI standard values for people aged 60 years or older according to the Dietary Reference Intakes for the Japanese (2015) [24], they were divided into three groups: Underweight (<20), normal weight (20-24.9), and overweight or obese (≥25).
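A small illustrative sketch of this calculation and grouping (not from the paper; the cutoffs follow the values quoted above):

```python
def bmi_category(height_cm: float, weight_kg: float) -> str:
    """BMI from self-reported height and weight, grouped with the cutoffs used above."""
    bmi = weight_kg / (height_cm / 100.0) ** 2
    if bmi < 20.0:
        return "underweight"
    if bmi < 25.0:
        return "normal weight"
    return "overweight or obese"

print(bmi_category(165, 60))  # BMI of about 22.0 -> "normal weight"
```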
To assess a healthy lifestyle, the following items were used: Exercise habits, physical activity, sitting time, walking speed, eating breakfast, eating vegetables, eating balanced meals, rest due to sleep, connections with neighbors, and having friends.
Exercise habits referred to exercising continuously for more than 30 min, for more than 2 days a week, for more than 1 year. Physical activity referred to whether walking or physical activity equal to or more than daily life activity was carried out for more than 1 h a day. These are items from the standard questionnaire used for specific health check-up in Japan [25].
Sitting time referred to "How long you spend sitting per day while you are awake normally?", and answers were obtained for "less than 3 hours", "3 to 6 hours", and "6 hours or more". It has been reported that if sitting time is extensive, the risk of mortality due to total cause cardiovascular disease is high [26,27].
The walking speed referred to "If the speed of walking was faster than that of the same generation of the same gender?", and answers were obtained with "yes" or "no". This item is from a specific health check-up questionnaire used in Japan [25].
Breakfast intake referred to "Do you eat breakfast usually?" and answers included the following options: "I eat it every day", "I do not eat it 2-3 days a week", "I do not eat it 4-5 days a week", and "almost do not eat it". This item was used in the National Health and Nutrition Survey of the Ministry of Health, Labor, and Welfare [28]. On the basis of national recommendations [29], it was divided into "eat every day" and "not eating every day".
Vegetable consumption was assessed as follows: "Usually, do you eat enough vegetables?" and answered "enough", "moderate", "not enough", and "shortage". It was divided into "enough or moderate" and "not enough or shortage." Increased vegetable intake reduces the risk of total mortality and mortality due to cardiovascular disease [30,31].
Eating balanced meals was assessed as follows: "How many days do you eat meals consisting of grain, fish and meat, and vegetable dishes, two or more times/day?" and the answers were "almost every day", "4 to 5 days/week", "2 to 3 days/week", and "almost none". This was the item used in the investigation of the Cabinet Office in Japan [32]. On the basis of national recommendations [33], it was divided into "eat every day" and "not eating every day".
Rest due to sleep was assessed as follows: "Over the past month, are you well rested by sleeping?" and answered "enough", "moderate", "not enough", and "shortage". It was divided into "enough or moderate" and "not enough or shortage". This item was used in the Comprehensive Survey of Living Conditions in Japan [15].
Connections with neighbors were assessed by asking, "Do you think that the connection between you and your neighbors is strong?" and answered "strong", "somewhat strong", "somewhat weak", "weak", and "unknown". It was divided into "strong" and "weak or unknown". Having friends was assessed by asking "How many close friends do you have?", and answers available were "a lot", "moderate", "little", and "none". It was divided into "a lot or moderate" and "little or none". Both of these items were used in the investigation of the Cabinet Office in Japan [19,34].
Gardening Style and Status
The participants were asked questions regarding their gardening styles. "Do you do farm work? Please answer at least twice a month and not a few times in a year", and the participants answered as "Working at a farm is my job", "I do farm work in community garden as a hobby or leisure activity", "I do farm work in my home garden, which has an area of 15 square meters, as a hobby or leisure activity", and "I do not do farm work". Those who answered "Working at a farm is my job" were not included into the data set. The responses were treated as three groups in the analyses: Community gardener, home gardener, and nongardener.
The frequency of gardening was also assessed by asking "How many days have you been gardening each week in the past month?" Answers were in the range of 0 to 7 (days/week). The duration of gardening on each day was also requested, and the duration per week was calculated by combining the frequency and the gardening time per day (hours/week). Moreover, to ascertain whether gardening took place alone or with others, the questionnaire asked: "Who do you garden with?" Responses included "almost alone" or "often with friends or family".
Analysis
First, the relationship between community or home gardening and health status and healthy lifestyles (N = 1000) was analyzed. One-way ANOVA for age, t-tests for gardening frequency and duration, and χ² tests for all other items were conducted. If there was a significant difference, a post hoc test using Bonferroni correction was conducted. Then, unadjusted and adjusted odds ratios using gardening styles as independent variables were calculated (Ref. nongardener). Multinomial logistic regression models were used for BMI and sitting time, and binary logistic regression models were used for all other items. Sex, age, family structure, and employment status were used as covariables in adjusted models. Additionally, among gardeners, the relationship between gardening status and health status or healthy lifestyles (N = 500) was analyzed. Gardening status was used as an independent variable, and adjusted odds ratios were calculated. Gardening style was added to the adjusted model and analyzed similarly to the procedure described above, according to outcomes.
All statistical tests described here were two-sided and performed using IBM SPSS Statistics version 23 (IBM, Armonk, NY, USA). Differences of p < 0.05 were accepted as significant.
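The analyses were run in SPSS; as a rough illustration only, the sketch below shows how one of the adjusted binary logistic models (gardening style as the independent variable with nongardeners as the reference, adjusted for sex, age, family structure, and employment) could be set up in Python. The variable names and the synthetic data are assumptions made for the example, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey data set (names and values are illustrative only).
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "gardening_style": rng.choice(["nongardener", "home", "community"], n, p=[0.5, 0.35, 0.15]),
    "sex": rng.choice(["male", "female"], n),
    "age": rng.integers(60, 70, n),
    "family_structure": rng.choice(["alone", "with_others"], n),
    "employment": rng.choice(["working", "not_working"], n),
})
# Binary outcome loosely tied to gardening so the example yields a non-trivial odds ratio.
p_exercise = 0.35 + 0.15 * (df["gardening_style"] != "nongardener")
df["exercise_habit"] = rng.binomial(1, p_exercise)

# Adjusted binary logistic model with nongardeners as the reference category.
model = smf.logit(
    "exercise_habit ~ C(gardening_style, Treatment(reference='nongardener'))"
    " + C(sex) + age + C(family_structure) + C(employment)",
    data=df,
).fit(disp=False)

# Odds ratios with 95% confidence intervals (exponentiated coefficients).
or_ci = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_ci.columns = ["OR", "2.5%", "97.5%"]
print(or_ci.round(2))
```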
Sample Characteristics
The distribution of the responses of the 1000 participants according to gardening style is presented in Table 1. Basic characteristics differed significantly in sex, age, family structure, and employment status. In post hoc tests, compared with the nongardener group, the home gardener group included larger proportions of women and of people living alone, and the community gardener group included a larger proportion of workers. Home gardeners and community gardeners were older than nongardeners. Therefore, it was reasonable to adjust for these variables in the adjusted models. For gardening status, frequency, duration, and gardening with others were similar between home gardeners and community gardeners. For the outcomes, there were significant differences in subjective happiness, reasons for living, exercise habits, physical activity, sitting time, eating breakfast, eating vegetables, eating balanced meals, connections with neighbors, and having friends. In post hoc tests, compared with the nongardener group, larger proportions of the home gardener group felt happiness and a reason for living, had exercise and physical activity habits, sat for short times, ate breakfast and a balanced meal every day, consumed enough vegetables, and had strong connections with neighbors; and larger proportions of the community gardener group had exercise and physical activity habits, ate enough vegetables, had stronger connections with neighbors, and had relatively many friends.
Gardening and Health Status or Healthy Lifestyle
The results of the analysis revealing the relationship between community or home gardening and health status or healthy lifestyle are shown in Table 2. Significant relationships were shown in the adjusted model between community gardening and exercise habits, physical activity, eating vegetables, and connections with neighbors. Moreover, significant relationships between home gardening and the following items were shown: Subjective happiness, exercise habits, physical activity, sitting time, eating breakfast, eating vegetables, eating balanced meals, and connections with neighbors. All of these showed that gardening was positively related to health and a healthy lifestyle. The significant relationships between community or home gardening and feeling a reason for living, or having friends, were shown in unadjusted models. However, the results of adjusted models also revealed these trends, but they were not significant. Similarly, significant relationships between community gardening and sitting time, walking speed, and eating balanced meals were shown in unadjusted models, but these trends were not significant in adjusted models.
Gardening Status and Health Status or Healthy Lifestyle
Results analyzing the relationship between gardening status and health status or healthy lifestyle are shown in Table 3. There were no items with a significant relationship to gardening frequency. A significant relationship was shown between gardening duration and health problems affecting everyday life. Significant relationships were shown between gardening with others and subjective happiness, feeling a reason for living, and sitting time. Subjective happiness and feeling a reason for living were positively correlated with health and with gardening with others. On the other hand, sitting time was negatively correlated with health. That is, people who garden with others tend to spend a longer time sitting.
Discussion
This study clarified the relationship between community or home gardening and health status or a healthy lifestyle for Japanese elderly living in the community. The results reveal the positive relationship between community gardening and exercise habits, physical activity, eating vegetables, and connections with neighbors; and between home gardening and subjective happiness, exercise habits, physical activity, sitting time, eating breakfast, eating vegetables, eating balanced meals, and connections with neighbors. Overall, gardening was positively associated with health and healthy lifestyles among the elderly in Japan. In other words, it was suggested that gardening contributes to promoting health for the elderly in Japan. Moreover, positive relationships between the time spent gardening and health problems affecting everyday life, between gardening with others and subjective happiness, and feeling a reason for living were shown. However, there was a negative relationship between gardening with others and sitting time. The presence or absence of such differences because of gardening status provides useful suggestions for implementing health promotion by gardening.
The relationships between exercise habits, physical activity, eating vegetables, and connections with neighbors were common to both community gardening and home gardening. This study was the first to confirm the relationship between gardening and exercise habits. Those who are gardening are active, and it seems that they are exercising on a daily basis. In addition, these findings on the relationship between gardening and physical activity are not consistent with the previous studies conducted in Japan [10,11]. There was no difference in an urban study [10]. On the other hand, a study in a suburban area indicated that gardeners tend to be very physically active [11]. When considering results over time, gardeners tend to be more physically active as a whole. Even in a study in the Netherlands, it was reported that allotment gardeners were much more physically active than their neighbors over the summer [35]. Generally, in Japan, many people choose walking as transportation in urban areas and tend to be quite active. On the other hand, in suburban areas, many more people use cars and are less physically active. There are perhaps many people with high levels of physical activity level in urban areas, where gardening does not contribute much to physical activity. However, in suburban areas, gardening provides a good opportunity for physical activity. Moreover, positive associations between gardening and vegetable intake have also been confirmed in previous studies in Japan [10,11] and the United States [36][37][38][39][40]. Therefore, the relationship between gardening and a large vegetable intake is definite. Furthermore, the association between gardening and social cohesion has been confirmed in prior Japanese research [10,11]. Similarly, the relationship between gardening and social involvement or perceived collective efficacy have also been reported in the United States [41]. By increasing opportunities to go out, contacts with the local residents become more frequent, and connections become stronger.
The relationships between subjective happiness, sitting time, eating breakfast, and eating balanced meals were different depending on whether community gardening or home gardening was involved. Sitting time odds ratios were similar for the community gardener and home gardener. Thus, there was a possible β error due to a small sample of community gardeners. Because of the similar trends observed, differences were not associated with gardening style. In a previous study in Japan, a similar sitting time for all kinds of gardeners was also reported [11]. In addition, the relationship between subjective happiness, eating breakfast, and eating balanced meals tended to be different between home gardeners and community gardeners. For balanced meals, studies in the United States have identified positive associations with community gardening [7]. As a hypothesis, it could be inferred that there were conflicts with economic status, because the relationship between these items and economic status has been reported in Japan [17,[42][43][44][45]. People who have a home garden may be economically better off. In this survey, economic status was not assessed, so it will be necessary to examine correlations with economic status in the future.
It was suggested that a mutual relationship exists between gardening duration and health problems affecting everyday life, such that, for those without a health problem, long-term gardening is possible, and gardening keeps them healthy. There were no other items related to the frequency or duration of gardening. This result is consistent with a previous study [10], which found that irrespective of frequency and duration, gardening is related to the health and healthy lifestyles recognized in this study. A new implication is that people working alongside others feel a reason for living and more happiness. Gardening with others is a part of social participation and may increase the gardener's happiness and pleasure [40,46]. Therefore, it would be good to do gardening with others. For example, gardening with family members, or having opportunities to gather in harvest festivals at community gardens to promote gardening with other people, may lead to a feeling of happiness and reason for living. However, there are not only positive outcomes to shared gardening. According to the results of this study, those who do gardening with others also spend a longer time sitting.
Limitations
These were self-reported data and have a probability of recall bias [47]. The gardeners who believe that gardening is good for health may systematically overestimate their own health status and healthy lifestyle, as health benefits related to gardening are widely known [1][2][3][4][5][6][7][8]. Additionally, this research was an Internet survey, and the possibility of sampling bias cannot be denied. Further, participants were only aged 60 to 69 years, and it is not known if the same trend exists in other age groups. Even those targeting adults in Japan in previous research were about 60 years old on average [10,11], and a question to consider in the future is whether gardening will be as effective for younger adults in Japan. Finally, this was a cross-sectional study, and further longitudinal studies are required to clarify the causal relationships.
Conclusions
This study examined the relationships between community or home gardening and health or healthy lifestyles for Japanese people aged 60-69 living in the community. The results revealed promising relationships between health and gardening. Moreover, many reports have indicated a positive relationship between gardening and health or healthy lifestyles in adults in developed countries all over the world [1][2][3][4][5][6][7][8]. According to the above, gardening contributes positively to the health of Japanese people aged 60 years and older living in the community. It is hoped that the promotion of health through gardening will be practiced more often in Japan.
From fundamental physics to tests with compact objects in metric-affine theories of gravity
This work provides a short but comprehensible overview of some relevant aspects of metric-affine theories of gravity in relation to the physics and astrophysics of compact objects. We shall highlight the pertinence of this approach to supersede General Relativity on its strong-field regime, as well as its advantages and some of its difficulties. Moreover, we shall reflect on the present and future opportunities to testing its predictions with relativistic and non-relativistic stars, black holes, and other exotic horizonless compact objects.
I. INTRODUCTION
A. The strong-field era of gravitational physics
Einstein's General Theory of Relativity (GR) is alive and well. Already one hundred years after the expeditions to Principe Island (Eddington and Cottingham) and Sobral (Crommelin and Davidson) [1] to test the Sun's deflection of light, we have accumulated plenty of evidence from solar system experiments, post-Newtonian tests, gravitational lensing, tests on the equivalence principle, frame-dragging effects, etc., on the reliability of this theory to describe gravitational phenomena [2]. Moreover, the theory has been built into the cosmological concordance ΛCDM model, which has successfully met all observations at small and large scales [3]. In addition, we have witnessed the beginning of a new era in the astrophysics of compact objects following the discovery of gravitational waves, consistently interpreted as the coalescence of two compact objects: black hole-black hole [4], and black hole-neutron star [5]. Tests of the Kerr black hole hypothesis itself have been performed using the radiation emitted from the accretion disks surrounding black holes [6], as well as by the measurement of the shadow of the supermassive central object of the M87 galaxy [7]. In all these observations GR has fully met if not surpassed our expectations. This has been possible thanks to more than half a century of powerful theoretical developments, to the huge improvement in the capabilities of numerical relativity, and to the support received from large international collaborations. This has triggered the beginning of a new era where the possibility of testing the strong-field regime of the gravitational interaction is at hand. But will new physics be found? What could we expect?
B. The need to go beyond GR
If GR is a successful theory, why the need to go beyond it? First of all, the ΛCDM model requires the introduction of extra matter fields (inflation, dark matter and dark energy) with unusual properties which, despite intense and varied observational searches [8], have not been directly detected at any terrestrial experiment yet. Moreover, the tension in the value of the Hubble constant given by the discrepancies between direct local measurements and the model-dependent inference from CMB data continues to puzzle cosmologists [9]. On a more fundamental level, the well known incompatibility between GR and quantum mechanics has been a powerful drive for decades to search for a hypothetical quantum theory of gravity superseding GR [10], though with relatively little success. Moreover, GR is prone to the existence of space-time singularities, which unavoidably arise in the innermost region of black holes and in the early Universe [11]. From the astrophysics of compact objects, we have the challenge of generating neutron stars above two solar masses with realistic equations of state to meet observations [12], a problem that could worsen with time as even heavier neutron stars are detected. Other issues of interest are the recent suggestion about the existence of "Super-Chandrasekhar" white dwarfs with masses in the range $2-2.8\,M_\odot$ [13], which would defy the standard picture of stellar evolution, and the potential existence of new (exotic) compact objects with different properties than the Kerr solution, whose existence could be revealed thanks to gravitational waves [14].
Within this context, "modified gravity" becomes a buzzword for many proposals to extend GR following different prescriptions. The corresponding literature is exceedingly large [15][16][17][18][19], and a bunch of predictions have been developed within astrophysical and cosmological scenarios. Nowadays, many such proposals (mostly those motivated by cosmological considerations) are heavily constrained by the observation $c_{\rm GW} = c$ (up to a $\sim 10^{-15}$ precision) by the LIGO/VIRGO Collaboration, as discussed in Ref. [20]. In most such extensions of GR, the metric tensor is regarded as the only player in town (metric approach), while the connection is violently imposed [21] to be given by the Christoffel symbols of the metric (that is, the Levi-Civita connection). This ad hoc constraint on the nature of the connection has been inherited from traditional/educational reasons in the way GR is usually seen and taught, and has been consequently propagated through most of the literature in the field. However, there are alternatives to this paradigm.
C. The role of the affine connection
It has indeed been known by mathematicians for a very long time that, in general, any affine connection $\Gamma \equiv \Gamma^\lambda_{\mu\nu}$ can be decomposed into its curvature (associated to a Levi-Civita connection), torsion (associated to its antisymmetric part) and non-metricity (associated to the failure of the connection to yield $\nabla^\Gamma_\alpha g_{\beta\gamma} = 0$) pieces [22][23][24]. We are used to thinking of GR as the (Riemannian) theory of gravity where torsion and non-metricity are set to zero, while we build the lowest-order action on scalar objects made up of the curvature piece (that is, the Einstein-Hilbert action). However, in what has been recently popularized as the geometrical trinity of gravity [25], we know now that there exist three equivalent (modulo some technicalities regarding boundary terms) formulations of GR. The first alternative formulation to the standard curvature-based one is the teleparallel equivalent of GR [26], in which we switch off curvature and non-metricity, but keep torsion. On the other hand, in the symmetric (or coincident) teleparallel GR [27] we switch off curvature and torsion, but keep non-metricity. The corresponding theories succeed in yielding the same observational predictions as those of (curvature-based) GR when the lowest-order scalar objects are considered in the action. This result puts forward the richness encoded in the affine connection, which therefore could play a more fundamental role in the gravitational dynamics than thought in the past. Moreover, this observation offers a new landscape of possibilities for extending GR depending on the assumptions made upon these three pieces of the affine connection. In this work we shall focus on a particular formulation of modified gravity which, for the sake of this paper, shall be dubbed as metric-affine theories of gravity, and discuss the open opportunities to test the predictions of these theories within the astrophysics of compact objects.
II. METRIC-AFFINE FORMULATION OF THEORIES OF GRAVITY
By metric-affine gravity we mean those theories where metric and connection are regarded as independent degrees of freedom [28]. Current research has identified a promising family of such theories to be theoretically and observationally viable, dubbed as Ricci-based gravities (RBG), and given by the action
$$S = \frac{1}{2\kappa^2}\int d^4x\,\sqrt{-g}\,\mathcal{L}_G\big(g_{\mu\nu}, R_{\mu\nu}(\Gamma)\big) + S_m(g_{\mu\nu},\psi_m), \qquad (1)$$
where $\kappa^2$ is Newton's constant in suitable units, and $g$ is the determinant of the space-time metric $g_{\mu\nu}$. In order for this action to be a scalar, the dependence on the geometrical objects in the gravitational Lagrangian $\mathcal{L}_G$ must appear in terms of powers of traces of the object $M^\mu{}_\nu \equiv g^{\mu\alpha}R_{\alpha\nu}$, where $R_{\mu\nu}(\Gamma)$ is the symmetric part of the Ricci tensor (which is a priori independent of any metric). In this version of metric-affine gravities, torsion can be safely set to zero since in the matter sector $S_m$ we are assuming that the matter fields $\psi_m$ couple to the metric, but not to the connection [29] (which would be relevant, for instance, should one consider fermions in the matter sector), in order to comply with the equivalence principle. The requirement of a symmetric Ricci tensor is justified on the grounds that the consideration of an antisymmetric piece would make the theory run into trouble with ghost-like instabilities [30]. This family of actions is wide enough to cover many interesting cases in the literature such as GR itself, $f(R)$ theories, quadratic gravity, Born-Infeld inspired theories of gravity, and so on. It is worth pointing out that, should the action be formulated à la metric, that is, by imposing the metric-connection compatibility condition, $\nabla^\Gamma_\mu(\sqrt{-g}\,g^{\alpha\beta}) = 0$, then one would immediately run into trouble with higher-order field equations, ghost-like instabilities, incompatibility with solar system experiments, etc., though some restrictions on $\mathcal{L}_G$ may alleviate some of these problems.
Let us consider the metric-affine formulation of this family of theories. When $g_{\mu\nu}$ and $\Gamma^\alpha{}_{\beta\gamma}$ are independent, the equations of motion obtained from the variation of the action (1) with respect to both of them can be cast under the Einstein-like form [31,32]
$$G^\mu{}_\nu(q) = \frac{\kappa^2}{|\hat{\Omega}|^{1/2}}\left[T^\mu{}_\nu - \delta^\mu{}_\nu\left(\mathcal{L}_G + \frac{T}{2}\right)\right], \qquad (2)$$
where $T$ is the trace of the stress-energy tensor $T_{\mu\nu} = -\frac{2}{\sqrt{-g}}\frac{\delta S_m}{\delta g^{\mu\nu}}$, while $G^\mu{}_\nu(q)$ is the Einstein tensor of a new metric $q_{\mu\nu}$ satisfying $\nabla^\Gamma_\mu(\sqrt{-q}\,q^{\alpha\beta}) = 0$ (so that the independent connection can be obtained as the Christoffel symbols of $q_{\mu\nu}$), which is related to the space-time metric as
$$q_{\mu\nu} = g_{\mu\alpha}\,\Omega^\alpha{}_\nu. \qquad (3)$$
The deformation matrix $\Omega^\alpha{}_\nu$ depends on the particular $\mathcal{L}_G$ chosen, but it can always be written on-shell as a function of the stress-energy tensor, $T^\mu{}_\nu$. For instance, in the $f(R)$ case the relation above becomes conformal: $q_{\mu\nu} = f_R(T)\,g_{\mu\nu}$, while for other RBGs this relation will be a full algebraic transformation involving the mixture of all components of the deformation matrix with the space-time metric. Note that in GR, $\mathcal{L}_G = R$, one has $G_{\mu\nu}(q) = \kappa^2 T_{\mu\nu}$ (perhaps supplemented with a $\Lambda$ term) and $q_{\mu\nu} = g_{\mu\nu}$ (modulo a trivial re-scaling), and thus the metric-affine formulation of GR yields exactly the same dynamics and predictions as the standard one.
The RBG family (or, at least, most of its members, such as those modifying GR only in the ultraviolet limit) enjoys a number of distinctive and physically appealing features:
• Second-order field equations.
• Vacuum solutions are those of GR.
• $c_{\rm GW} = c$ and two tensorial polarizations.
The above properties ensure the consistency of (most) RBGs with solar system experiments and with gravitational wave observations so far. The Einstein-frame representation (2) clearly shows that these theories, in their dynamics for $q_{\mu\nu}$, can be interpreted as GR with new matter couplings engendered via both the deformation matrix $\Omega^\alpha{}_\beta(T^\mu{}_\nu)$ and the gravitational Lagrangian $\mathcal{L}_G(T^\mu{}_\nu)$. This observation shall be of great relevance later. Moreover, due to this, the trademark of RBGs is that the new physical effects will be fed by the energy density of the matter fields, and not just by the integration over sources. This has important consequences both for fundamental physics and for the expectations regarding the properties of compact objects, and provides a fertile playground to explore new physics beyond GR.
III. THEORETICAL PHYSICS OF BLACK HOLES
The Einstein-like representation of the field equations (2) has allowed suitable methods to be introduced for finding explicit solutions in different contexts, which has triggered quick progress in the understanding of black holes and other compact objects within these theories. Extensions of these theories and methods to other cases, such as the role of the Riemann tensor or the presence of torsion, have also begun to be unravelled. In this section, we shall highlight some theoretical findings regarding black hole physics within metric-affine/RBG theories, which shall pave the path to make contact with the astrophysics of compact objects.
A. Spherically symmetric black holes
Spherically symmetric black holes have been the most frequent playground to test the predictions of these theories for compact objects, thanks to the possibility of finding solutions in analytic form corresponding to different matter sources, which simplifies their analysis. The simplest models in this regard are $f(R)$ theories, since the relation between the two metrics becomes conformal. However, in this case, the trace of the corresponding field equations yields $R f_R - 2f = \kappa^2 T$, telling us that $R = R(T)$. This implies that the dynamics encoded in the new contributions to the field equations (2) can only be excited in the presence of matter-energy sources with a non-vanishing trace. This result prevents using Maxwell electrodynamics to find the counterpart of the Reissner-Nordström solution of GR, and forces one to use non-linear electrodynamics instead [34].
For more general RBGs, however, the full stress-energy tensor will appear in the dynamics of the theory. In such cases, the main difficulty to be sorted out is to resolve Eq. (3) to find the relation between curvature and stress-energy tensor, which requires a case-by-case analysis. For instance, in the case of quadratic gravity, $\mathcal{L}_G = R + aR^2 + bR_{\mu\nu}R^{\mu\nu}$, with $a$ and $b$ some parameters, this relation can be explicitly found for spherically symmetric solutions after some algebra [35]. A particularly interesting theory in this regard is the so-called Eddington-inspired Born-Infeld gravity (EiBI) [36], which is given by the action
$$S_{\rm EiBI} = \frac{1}{\kappa^2\epsilon}\int d^4x\,\left[\sqrt{-\det\big(g_{\mu\nu} + \epsilon R_{\mu\nu}(\Gamma)\big)} - \lambda\sqrt{-g}\right] + S_m(g_{\mu\nu},\psi_m), \qquad (4)$$
where $\epsilon$ is a parameter with dimensions of length squared. Moreover, the theory features an effective cosmological constant given by $\Lambda = \frac{\lambda-1}{\kappa^2\epsilon}$. In this case, the expression for the deformation matrix is remarkably simple,
$$|\hat{\Omega}|^{1/2}\,(\Omega^{-1})^\mu{}_\nu = \lambda\,\delta^\mu{}_\nu - \epsilon\kappa^2\,T^\mu{}_\nu, \qquad (5)$$
and it can be explicitly solved for a given $T^\mu{}_\nu$ via an ansatz for the deformation matrix mimicking its algebraic structure (plus a diagonal term if not present). For some matter sources such as electromagnetic fields [37] and, more generally, some types of anisotropic fluids [38], this strategy allows one to find exact black hole solutions out of the RBG field equations (2) following pretty much the same procedure for their resolution as in the GR case.
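As a minimal numerical sketch (not from the paper), assuming the EiBI deformation-matrix relation quoted above, for a diagonal stress-energy tensor the matrix can be obtained in closed form as $\Omega = \sqrt{\det D}\,D^{-1}$ with $D = \lambda\,\mathbb{1} - \epsilon\kappa^2 T$; the snippet below checks this for a Maxwell-like electric field with purely illustrative parameter values.

```python
import numpy as np

def eibi_deformation(T: np.ndarray, lam: float, eps_k2: float) -> np.ndarray:
    """Solve |Omega|^{1/2} * Omega^{-1} = lam*I - eps*kappa^2*T for the deformation matrix.

    For this algebraic relation the closed-form solution is Omega = sqrt(det(D)) * D^{-1},
    with D = lam*I - eps*kappa^2*T.
    """
    D = lam * np.eye(4) - eps_k2 * T
    return np.sqrt(np.linalg.det(D)) * np.linalg.inv(D)

# Maxwell-like electric field: T^mu_nu = diag(-rho, -rho, rho, rho) with an illustrative rho.
rho, lam, eps_k2 = 0.1, 1.0, 0.05
T = np.diag([-rho, -rho, rho, rho])
Omega = eibi_deformation(T, lam, eps_k2)

# Consistency check against the defining algebraic relation.
lhs = np.sqrt(np.linalg.det(Omega)) * np.linalg.inv(Omega)
assert np.allclose(lhs, lam * np.eye(4) - eps_k2 * T)
print(np.round(np.diag(Omega), 6))
```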
A key aspect in this analysis is to set two line elements for the $g_{\mu\nu}$ and $q_{\mu\nu}$ geometries respecting the symmetries of the problem (spherical symmetry in this case), while working out explicitly the relation (3) between the metric functions in both frames. This procedure is quite efficient, and allows one to circumvent any need to solve the (highly complicated) structure of the RBG field equations should they be written directly in terms of the $g_{\mu\nu}$ geometry.
The final conclusion of this analysis is that, for all RBGs studied so far with the matter fields above, the line element of any static, spherically symmetric solution can be conveniently cast under the form
$$ds^2 = -\frac{A(x)}{\Omega_1(x)}\,dt^2 + \frac{1}{A(x)\,\Omega_2(x)}\,dx^2 + r^2(x)\,d\Omega^2, \qquad (6)$$
where $d\Omega^2 = d\theta^2 + \sin^2\theta\,d\varphi^2$ is the volume element on the unit two-sphere, while the functions $\Omega_1(x)$, $\Omega_2(x)$ characterize the particular combination of RBG + matter field description, and typically contain the mass and charge of the solution as well as additional parameters coming from the RBG Lagrangian density. Moreover, if the RBG theory modifies GR in the strong-field regime (as in the case of EiBI gravity (4)), then the corresponding solutions will boil down to the Reissner-Nordström one of GR in their weak-field limit. The function $A(x)$ in (6) encodes the modified description of horizons. Due to the fact that high-enough local energy densities are typically attained only in the innermost region of black holes, the effects of RBGs manifest also there, while presumably leaving only very tiny imprints on the region outside the event horizon (the case with scalar fields is an exception to this general rule [39], and shall be discussed in Sec. IV D). Despite this, the general structure of horizons may undergo large modifications, finding, in addition to the standard two, single (degenerate), or no horizons of the Reissner-Nordström solution of GR, configurations with a single (non-degenerate) horizon (thus bearing a closer resemblance to the Schwarzschild solution instead), or solutions where the metric functions are finite at the center. This structure of horizons mimics the one found in certain models of non-linear electrodynamics in the context of GR [40].
As for the radial function $r^2(x)$, for matter-energy sources whose stress-energy tensor can be expressed in the diagonal, anisotropic-fluid form of Eq. (7), it is given in closed form in terms of the deformation-matrix functions, Eq. (8) [41]. It is worth stressing that this function does not need to be monotonic. When this happens, the radial function is capable of yielding a bounce at some $x = x_c$ ($r = r_c$), which can be interpreted in some cases as a signal of a wormhole structure, which typically allows for the extensibility of geodesics beyond $r = r_c$ [42], as discussed in the next section. Though wormholes unavoidably violate standard energy conditions within the context of GR, this is not necessarily so within RBGs thanks to the extra gravitational corrections, which can be understood as yielding an effective stress-energy tensor. Let us also note that it is possible to introduce new coordinates to rewrite the line element (6) in the more canonical form $ds^2 = -\tilde{A}(y)\,dt^2 + \tilde{A}^{-1}(y)\,dy^2 + r^2(y)\,d\Omega^2$ (so that the contributions of $\Omega_1$, $\Omega_2$ would be hidden within $\tilde{A}$ and the radial coordinate $y$), though this change usually spoils any explicit simple representation of the radial function $r^2(y)$.
B. Regular black holes
The theorems on singularities developed by Penrose and Hawking, among others [43][44][45][46], tell us that GR is prone to the existence of incomplete causal geodesic curves in the manifold. As null geodesic curves are associated to the paths of light rays and time-like geodesics to the free-falling of physical observers, the existence of any such curve would imply the breakdown of the predictability of GR. This unavoidably happens, for instance, deep inside black holes and in the Big Bang singularity. Therefore, geodesic completeness nicely captures the intuitive idea that in a physically reasonable space-time observers or information should not suddenly cease to exist or emerge from nowhere [47], and has become the main criterion in the literature to classify regular/singular space-times.
The gravitational community has engaged for decades in the search for black hole solutions overcoming such singularities, yielding a fruitful field of research dubbed as regular black holes. To build such solutions one has to remove any of the hypotheses of the singularity theorems, which in one of their canonical formulations read [11,48]:
• A future trapped surface is developed.
• Fulfilment of the null congruence condition (equivalent to the fulfilment of the null energy condition via the Einstein equations).
• The space-time contains a non-compact Cauchy hypersurface (global hyperbolicity).
These three conditions guarantee the existence of a focusing point preventing the continuation of the worldline of every observer. Unsurprisingly, the literature on this field has truly blossomed [49], with quite a fair collection of such regular black hole solutions removing any of these hypotheses. In this quest, most attempts have focused on finding black holes whose curvature scalars are everywhere regular, rooted on the fact that, though the singularity theorems say nothing about the behaviour of curvature scalars, almost every geodesically incomplete solution ever found in GR also has divergent curvature scalars, with a few exceptions [50].
Figure 1. Two different mechanisms for the extensibility of the affine parameter at $x = r_c$ (or $z \equiv r/r_c = 1$ in dimensionless variables) for null radial geodesics. Left figure: via a bounce in the radial function (using quadratic gravity coupled to Maxwell electrodynamics), extracted from Ref. [54]. Right figure: the central region lies on the future (or past) boundary of the manifold, requiring an infinite affine time to reach it (using $f(R) = R + \alpha R^2$ coupled to Born-Infeld electrodynamics), extracted from Ref. [55]. The straight lines $\lambda = \pm x$ in both cases correspond to the incomplete null radial geodesics of the Reissner-Nordström solution of GR.
Metric-affine gravities have their share of regular black holes [51]. As with many other regular black holes in modified theories of gravity, the fact that the field equations are different from the Einstein equations allows the second of the conditions above to relate in a different way the focusing of geodesics and the fulfilment of the energy conditions. In other words, there is the possibility for some effective stress-energy tensor sourcing the new set of generalized Einstein equations to violate the focusing condition, but such that the physical stress-energy tensor (the one derived from the matter action (1)) satisfies the energy conditions. Regardless of these considerations, for a stress-energy tensor of the form (7), one can write the geodesic equation for the line element (6) as
$$\frac{1}{\Omega_1\Omega_2}\left(\frac{dx}{d\lambda}\right)^2 = E^2 - \frac{A(x)}{\Omega_1}\left(\frac{L^2}{r^2(x)} - k\right), \qquad (9)$$
where $\lambda$ is the affine parameter (the proper time for a time-like observer), $k = 0, -1$ for null and time-like geodesics, and $E$, $L$ are the energy and angular momentum per unit mass, respectively. From a conceptual point of view one can envisage two basic mechanisms for Eq. (9) to yield complete geodesics: i) either some bounce arises in $r(x)$ near the region where the point-like singularity should be, $x = x_c$, allowing geodesics to defocus and continue their path to another region of space-time, or ii) the central region is displaced in such a way that every (null and time-like) geodesic takes an infinite time to reach it (see Refs. [52,53] for an extended discussion on these two mechanisms and their interpretation and consequences). In Fig. 1 we depict explicit implementations of both mechanisms within RBGs coupled to electromagnetic fields, where null radial geodesics (which are incomplete in the Reissner-Nordström geometry of GR) turn out to be complete. Similarly, one can verify that in both cases every other null and time-like geodesic sees an effective potential such that in case i) those geodesics able to overcome the potential barrier can also cross the bouncing region $x = x_c$ ($r = r_c$) and expand to another region of space-time, while in case ii) they require an infinite amount of energy to overcome it and get to $x \to -\infty$. This guarantees the null and time-like geodesic completeness in these two classes of space-times.
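As a toy illustration of mechanism i), the following sketch (not from the paper) integrates the affine parameter of a radial null geodesic, $d\lambda = dx/(E\sqrt{\Omega_1\Omega_2})$, for an assumed bouncing radial function and an assumed form of $\Omega_{1,2}$; these functional choices are purely illustrative and are not an exact RBG solution.

```python
import numpy as np
from scipy.integrate import quad

# Toy (illustrative) choices: bouncing radial function r^2(x) = x^2 + r_c^2 and
# Omega_1 = Omega_2 = 1 + r_c^4 / r^4, with energy E = 1.
r_c = 1.0

def omega(x: float) -> float:
    r2 = x**2 + r_c**2
    return 1.0 + r_c**4 / r2**2

def affine_parameter(x: float) -> float:
    """lambda(x) for a radial null geodesic (k = 0, L = 0): d(lambda) = dx / sqrt(Omega_1*Omega_2)."""
    value, _ = quad(lambda s: 1.0 / omega(s), 0.0, x)  # here sqrt(Omega_1*Omega_2) = omega(x)
    return value

for x in (-50.0, -5.0, 0.0, 5.0, 50.0):
    print(f"x = {x:6.1f}   lambda = {affine_parameter(x):8.3f}")
# lambda(x) is finite and monotonic through the bounce at x = 0 and grows linearly for large |x|,
# so the affine parameter covers the whole real line: the radial null geodesic is complete.
```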
In the latter case one does not need to worry about pathologies in the behaviour of curvature scalars there, since no geodesic will be able to interact with such regions, not even observers with arbitrary (but bounded) acceleration [56]. In the former case one would be tempted to associate this defocusing phenomenon with the curvature of space-time being regularized somehow. This is not necessarily true in metric-affine gravities. Indeed, there is by now evidence of a breakdown in the correlation between the (in)completeness of geodesics and the existence of divergences in (some) curvature scalars. Table I of Ref. [57] (which corresponds to quadratic $f(R)$ gravity coupled to an anisotropic fluid satisfying standard energy conditions) clearly shows this: curvature divergences do not prevent the extensibility of geodesics, nor is the existence of an incomplete geodesic triggered by infinite curvature. At most, the only correlation found there is that the presence of infinite energy densities does imply incompleteness of geodesics. This last statement could be related to the specific role played by the local energy densities in triggering the new dynamics encoded in RBGs.
What, then, is the impact of divergent curvature? One might indeed worry that, no matter what the behaviour of geodesics might be, a physical (extended) observer passing through a divergent-curvature region would surely undergo some utterly disruptive process. From the criteria widely employed in the literature [58] this will be so when the divergence is strong enough for the volume element of an extended observer to shrink to zero, as it happens, for instance, in the standard Schwarzschild solution of GR. This issue is still a matter of controversy, with some cases running away from simple interpretations [59]. To overcome such difficulties, another idea recently brought forward in the literature to look for possible pathologies is to study the interactions which bind together any extended body. For the latter to hold together, such interactions must of course be strictly causal. This way, in geodesically complete space-times with divergent curvature scalars one would need to study the propagation of light rays from one part of the body to another to determine whether it would be unavoidably destroyed or not, finding that this is not necessarily the case, and that observers could actually survive the trip across such regions [60].
C. Dynamical scenarios
The replacement of the point-like singularities of (spherically symmetric) black holes by extended structures allowing for the bounce of geodesics raises new conceptual problems. Indeed, where the GR manifold is simply connected, in these new geometries we have structures with non-trivial topologies, so we have to face the always problematic issue of topology change [61]. One might think that these are highly idealized scenarios, where the condition of staticity allows one to play tricks to obtain a desired result. It is therefore of relevance to study whether such geometries, generated dynamically, that is, via gravitational collapse [62], can lead to the desired result of the generation of finite-size structures in the central region of black holes. This has been investigated using simplified dynamical scenarios where either vacuum or a pre-existing black hole is sourced by a flux of particles carrying mass and charge with large enough intensity (Vaidya-type solutions), finding that this is indeed the case [63]. Such fluxes open up an evolving finite-size structure which relaxes into the static throat once the flux is over, and allows for the completeness of geodesics.
One can look for further insights into this phenomenon by borrowing an analogy with well-studied laboratory systems. Indeed, in the solid-state physics of crystalline structures (which have a regular pattern arranging their microscopic constituents) the existence of different types of defects is very well known; rather than inducing any pathology, such defects are essential in the generation of macroscopic (collective) properties of the material, such as viscosity, plasticity, etc. [64]. What is perhaps less known is that such materials admit (actually require) a geometry of metric-affine type for their proper description in the continuum (macroscopic) regime, with deep implications for the interpretation of space-time singularities and their relation with specific geometries [65].
In this section we have illustrated with the case of spherically symmetric black holes how metric-affine theories of gravity offer an interesting playground to test modifications of GR in its strong-field regime. Next we shall study some phenomenological aspects of interest attached to the astrophysics of compact objects within these theories. For a broad overview of the state-of-the-art of this field regarding compact stars see the recent review [66].
A. Relativistic stars
Thanks to the quick technological progress achieved in the last few decades, the field of compact stars has seen a great leap in our understanding of the span covered by neutron star masses and radii [67]. The main conclusion is that neutron star radii typically lie between ∼ 10 − 14 km, and that they can be as massive as 2M⊙ [12] and possibly even more. These improvements in our capabilities to measure the properties of these objects have sparked a renewed interest in testing the predictions of both GR and modified gravity against the phenomenology of neutron stars. A fundamental and long-standing difficulty in tackling this challenge is the fact that at the densities reached at the neutron star's center (up to 5 − 10 times the nuclear saturation density), the equation of state (EoS) relating energy density and pressure, P = P(ρ), is unknown. Therefore, obtaining any such EoS requires extrapolations from nuclear physics knowledge using QCD, effective models, etc. This information is needed in order to feed the Tolman-Oppenheimer-Volkoff (TOV) equations [68,69], describing spherically symmetric stars in hydrostatic equilibrium. As hundreds of EoS exist on the market following different prescriptions (see e.g. Ref. [70] for details), such predictions are highly degenerate. Introducing metric-affine gravities into the game further complicates things, since every such theory typically carries an additional parameter, and it becomes a hard challenge to extract clear and clean observational discriminators against GR predictions.
For spherically symmetric stars the main relevant outcome of numerical simulations aimed at solving the TOV equations based on a given EoS, once an RBG theory is selected, is the mass-radius relation, since it can be directly confronted with observations by tracking enough neutron stars [71][72][73][74][75]. In particular, the compatibility of the maximum allowed mass with the 2M⊙ threshold becomes a direct test of the viability of any such scenario (combined with a given EoS). For rotating (slowly, rapidly and differentially) stars, the moment of inertia is another sensitive quantity which can be measured, though research in this context is quite scarce [76], as opposed to the metric formalism. In the latter case, the deviations triggered by modified gravity are larger in the moment of inertia than in the mass-radius relations, thus suggesting that this feature offers a better opportunity for testing the predictions of metric-affine theories as well.
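To make the role of the TOV system concrete, the following Python sketch integrates the standard GR TOV equations for a simple Γ = 2 polytrope and reads off a mass and radius for a few central pressures; in an RBG one would replace these right-hand sides with the modified equations of the chosen theory. The EoS parameters, central pressures, stopping tolerance, and function names are illustrative assumptions, not values taken from the references.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geometrized units G = c = 1; a Gamma = 2 polytrope P = K * eps**Gamma as a stand-in EoS
K, GAMMA = 100.0, 2.0

def eos_eps(P):
    """Invert the polytropic EoS for the energy density."""
    return (np.maximum(P, 0.0) / K) ** (1.0 / GAMMA)

def tov_rhs(r, y):
    """Standard GR TOV equations for y = [m(r), P(r)]."""
    m, P = y
    eps = eos_eps(P)
    dm_dr = 4.0 * np.pi * r**2 * eps
    dP_dr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    return [dm_dr, dP_dr]

def solve_star(P_central, r_max=100.0):
    """Integrate outward from a tiny radius until the pressure (nearly) vanishes."""
    r0 = 1e-6
    y0 = [(4.0 / 3.0) * np.pi * r0**3 * eos_eps(P_central), P_central]
    surface = lambda r, y: y[1] - 1e-6 * P_central   # event: pressure has essentially hit zero
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, (r0, r_max), y0, events=surface, rtol=1e-8, atol=1e-12)
    radius = sol.t_events[0][0]
    mass = sol.y_events[0][0][0]
    return radius, mass

for Pc in (1e-4, 3e-4, 1e-3):
    R, M = solve_star(Pc)
    print(f"P_c = {Pc:.0e}: R = {R:.2f}, M = {M:.3f} (geometrized units)")
```

Sweeping the central pressure over a wide range and collecting the resulting (R, M) pairs is what produces the mass-radius curves discussed above.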
Though the TOV equations for many metric-affine gravities are known, specific predictions require a case-by-case analysis, which is met with some technical difficulties for specific models. In particular, special care is required to handle the matching to an external (vacuum) solution, since discontinuities in density profiles may be ill-defined in metric-affine gravity. The so-called surface singularities [77] arise precisely from an attempt to employ matched GR solutions on the metric-affine side, which induces the emergence of local divergences in curvature scalars when certain polytropic EoS of physical interest are employed. Therefore, simple GR models seem not to have a counterpart on the metric-affine side (the same happens in metric f(R) gravity [78]), requiring both an upgrade of the junction conditions at the stellar surface [79] and a consideration of dynamical aspects (atmospheres, thermodynamics, radiation fluxes, etc.) to correctly model stellar surfaces in metric-affine gravity.
B. Non-relativistic stars
For non-relativistic stars, P ≪ ρ, the TOV equations can be reduced to their Newtonian counterparts. The relevance of this non-relativistic limit is that white, brown, and red dwarfs can be well modelled in this regime by polytropic EoS, namely

P = K ρ^{1+1/n} ,    (10)

where K is the polytropic constant and n the polytropic index. The corresponding modified (Poisson) equation will typically arise as a number of terms correcting the Lane-Emden equation of GR [80],

(1/ξ²) d/dξ ( ξ² dθ/dξ ) = −θ^n ,

where ξ and θ are the radial coordinate and the density in suitably rescaled variables, respectively. In both GR and its modified metric-affine version, the zeros of the function θ(ξ) allow one to find the star's masses and radii. The effect of the new RBG corrections is to yield a strengthening/weakening of the gravitational interaction inside astrophysical bodies, with a large impact on many of the properties of such stars. Indeed, since non-relativistic stars depend more weakly on unknown non-gravitational elements than their relativistic counterparts, they offer a cleaner scenario for putting specific predictions of RBGs to experimental test. Let us illustrate this with some examples. Brown dwarfs encompass a large family of objects with different chemical properties and evolutions [81], spanning the range of masses between Jupiter-like planets (low-mass brown dwarfs) and substellar objects lying at the bottom of the main sequence (high-mass brown dwarfs). It is precisely at these two limits where ideal scenarios for testing the predictions of metric-affine gravities are found. For high-mass brown dwarfs, which can be modelled with n = 3/2 in Eq. (10) [82], GR yields an analytic estimate of M_MMSM ∼ 0.09M⊙ for the minimum mass required for a star to burn hydrogen sufficiently stably to compensate photospheric losses. The same computations can be done within the context of RBGs, for instance, in quadratic (Starobinsky) f(R) gravity, f(R) = R + βR². Indeed, an explicit formula for M_MMSM can be obtained in this case depending on α ≡ κ²c²βρ_c [83], where we see again the dependence of the new dynamics of metric-affine gravities on the local energy densities, this time via the star's central density ρ_c. The power of this scenario is clearly seen from the fact that the branch α > 0 leads to a strengthening of the gravitational interaction allowing for larger minimum masses and, indeed, for α ≳ 0.010 this M_MMSM limit becomes comparable to M = (0.0930 ± 0.0008)M⊙, corresponding to the M-dwarf star G1 866C [84], which is the lowest main-sequence star mass ever observed. Therefore, significantly higher values than this one would presumably be in conflict with observations. On the branch α < 0 this effect is reversed and compatibility with current observations is guaranteed.
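Since the Lane-Emden problem is what ultimately fixes the masses and radii in this non-relativistic limit, a minimal numerical sketch may be useful. The snippet below integrates the uncorrected (GR/Newtonian) equation for n = 3/2 and locates the first zero of θ(ξ); any RBG correction terms would have to be added to the right-hand side, and the starting point, tolerances, and function names here are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden_rhs(xi, y, n):
    """Uncorrected Lane-Emden system, y = [theta, dtheta/dxi]."""
    theta, dtheta = y
    source = max(theta, 0.0) ** n      # clip so theta**n stays real just past the surface
    return [dtheta, -source - (2.0 / xi) * dtheta]

def first_zero(n, xi_max=20.0):
    """Start slightly off-centre with the series expansion and stop where theta = 0."""
    xi0 = 1e-6
    y0 = [1.0 - xi0**2 / 6.0, -xi0 / 3.0]
    surface = lambda xi, y, n: y[0]
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(lane_emden_rhs, (xi0, xi_max), y0, args=(n,),
                    events=surface, rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0]

print(f"n = 3/2 polytrope: theta(xi) first vanishes at xi_1 = {first_zero(1.5):.4f}")
# The stellar mass and radius follow from xi_1 and theta'(xi_1) once K and rho_c are fixed.
```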
Regarding low-mass brown dwarfs, there is another feature suitable for observational purposes: the minimum mass required for deuterium burning, which marks the lower mass limit of a brown dwarf (described by n = 1 in Eq. (10)). In GR this value is given by M_MMDB ∼ 0.011 − 0.016M⊙ [85] but, as happens with the minimum hydrogen-burning mass limit, it depends significantly on the assumptions about the internal composition or the metallicity. This way, one can track the predictions of RBGs for this limit too, since it can be confronted with observations. For instance, as shown in Ref. [86] in the case of EiBI gravity, the combination of both limiting masses M_MMSM and M_MMDB via statistical analysis allows one to constrain the EiBI parameter as −1.59 × 10² ≤ ǫ ≤ 1.16 × 10² m⁵ kg⁻¹ s⁻². Finally, the radius plateau (the constancy of the star's radius with the mass) of low-mass brown dwarfs could also be another test of the predictions of these theories [87].
White dwarfs, arising from fuel-exhausted main-sequence stars in which gravitational collapse is halted by electron degeneracy pressure, also offer suitable scenarios to test the predictions of RBGs. For instance, tests of the Chandrasekhar 1.44M⊙ limit can be performed [88], while the suggestion in the literature of the existence of super-Chandrasekhar stars, with masses up to 2.8M⊙ [13], has not been addressed within RBGs yet.
C. Rotating black holes and the mapping method
Real black holes in the Universe do rotate. While a tiny amount of charge is expected to be retained by such objects, it can be completely disregarded in astrophysically sensible scenarios, reducing the Kerr-Newman family to the simpler Kerr one. That the Kerr solution of GR can be reliably used to describe black holes has been confirmed by several means, including the continuum-fitting method and X-ray reflection spectroscopy [89]. Moreover, the detection of the gravitational wave signal GW150914 [4] by the LIGO/VIRGO Collaboration, consistently interpreted as the output of the merger of two black holes, quickly followed by the discovery of a merger of two neutron stars, GW170817, together with its optical counterpart GRB170817A [5], and the imaging in 2019 by the Event Horizon Telescope Collaboration of the shadow of the central object of the M87 galaxy [7], have further strengthened the reliability of this solution.
As we have seen in the past section, exact analytical, spherically symmetric black holes can be generated in RBGs out of (non-linear) electromagnetic fields and (anisotropic) fluids with some ease, but finding axially symmetric (rotating) black holes represents a daunting challenge for any modified theory of gravity. In this sense, the difficulty of extracting exact solutions could spoil the opportunities available to test new physics beyond GR, for instance, in the ringdown tail of gravitational waves from binary mergers [90]. To work out such scenarios within RBGs one faces a fundamental difficulty: from the structure of the field equations (2) and the fundamental relation (3) one must note that the deformation matrix Ω^α_β is, in general, a nonlinear function of T^µ_ν, which itself depends on g_µν, while on the left-hand side of the field equations (2) it is q_µν that appears instead. There are certain configurations with high symmetry (cosmology, spherically symmetric black holes, etc.) in which the dependence on g_µν can be fully removed in favour of the matter sources, allowing one to find explicit solutions using this procedure. However, dynamical scenarios with less symmetry are plagued by technical difficulties. Moreover, the application of numerical methods to RBGs would be strongly model-dependent and computationally expensive because of the need to invert the relation between the two metrics q_µν and g_µν at each step. Furthermore, such methods are tightly attached to the structure of Einstein's field equations, largely preventing any prospects of efficiently using their full power beyond GR.
To overcome this difficulty, an important technical development has recently been introduced and implemented, dubbed the mapping method [91]. It works by first introducing an Einstein frame, G_µν(q) = κ² T̃_µν(q), where comparison with Eq. (2) yields the relation (12) between the stress-energy tensors of the two frames. The new stress-energy tensor T̃^µ_ν(q) can be derived from some new matter Lagrangian L̃_m(q_µν, ψ_m). This establishes a correspondence between RBGs + L_m(g_µν, ψ_m) and GR + L̃_m(q_µν, ψ_m), which also holds true at the level of specific solutions when supplemented with the matter field equations and the fundamental relation (3). To describe how this process works, let us consider the case of matter-energy sources described by anisotropic fluids, which includes a number of interesting scenarios. First, we need to write the corresponding stress-energy tensors in the RBG and GR frames, where (ρ, p_r, p_⊥) are the energy density, radial pressure, and tangential pressure of the fluid in the RBG frame, respectively, while (ρ_q, p^q_r, p^q_⊥) are their counterparts in the GR frame. Plugging these expressions into Eq. (12), one finds the mapping equations (15), (16) and (17) for this case. These mapping equations set the following cooking recipe to produce new solutions on the RBG side out of known solutions on the GR side:

• Select a particular RBG coupled to some matter source described by (ρ, p_r, p_⊥) and compute Ω^µ_ν and |Ω| using the fact that the latter are a function of the former.
• Use the mapping equations (15), (16) and (17) to find (ρ_q, p^q_r, p^q_⊥) and reconstruct the matter Lagrangian on the GR side.
• Use any known solution of GR coupled to that matter source, given by q_µν, to generate the one in RBG, g_µν, via the fundamental relation (3).
Let us illustrate the usefulness of this program for black hole physics using two explicit examples. The first one (for purely electric fields) maps GR coupled to Born-Infeld electrodynamics into EiBI gravity coupled to Maxwell electrodynamics, as expressed by the correspondence (18) [32,92], where X = −(1/2) F_µν F^µν is the electromagnetic field invariant. Observe how the square-root structure of Born-Infeld is transferred from the matter side to the gravity side via this correspondence. Since the corresponding black hole solutions on the GR side are known in exact form (and have been thoroughly characterized when ǫ < 0 [40]), those on the right-hand side of this correspondence can be worked out right away from the mapping equations (15), (16) and (17), without any need to directly solve the corresponding field equations. This has been discussed in detail in Ref. [92] for the case of electrostatic fields starting from the Reissner-Nordström solution of GR, showing how the hard-won solutions of Ref. [37] can be much more easily re-obtained using this procedure. This also explains why some features of the solutions obtained on the RBG side in Ref. [37] closely resemble those of the GR side.
The second example of this mapping is that GR coupled to Maxwell electrodynamics maps (surprisingly!) into EiBI gravity coupled to Born-Infeld electrodynamics, as expressed by the correspondence (19). For electrostatic fields the solution on the left-hand side of this mapping is the Reissner-Nordström one, allowing one to find via the mapping the solutions (for ǫ < 0) derived in Ref. [93] by direct calculation.
Transferring the results of either of these two mappings to the axially symmetric scenario would allow one to find rotating black holes on the RBG side. In the second example (19), since the solution on the left-hand side of the mapping is the Kerr-Newman one of GR, one would be able to obtain its counterpart on the EiBI + BI side. The ǫ-corrections induced by this combination on the RBG side should induce qualitative and quantitative changes as compared to the GR solution in terms of a modified description of horizons, ergospheres and the photosphere, which would allow tests of alternatives to the Kerr hypothesis via accretion disks or different patterns in the generation of gravitational waves or in black hole shadows, all of which would presumably be degenerate with GR predictions in M, J, and ǫ. How to disentangle this degeneracy between the predictions of modified gravity and those of GR is still an open problem in the community. On the other hand, in the first example of the mapping above (18), one could use the rotating black hole of GR coupled to Born-Infeld electrodynamics found in Ref. [94] to find the counterpart of the Kerr-Newman solution in the context of EiBI gravity, which would be regarded as a more sensible scenario from an astrophysical point of view.
The fact that the new gravitational dynamics triggered by the matter fields in metric-affine theories becomes significant only in the presence of high energy densities, which allows most RBGs to naturally pass weak-field limit tests, also narrows the search for clear and clean observational discriminators at the typical scales of the event horizon and larger. Open opportunities can however be found in two aspects of the astrophysics of these objects. First, the modifications to the location of the photosphere would slightly change the propagation of light rays around such objects, which could be revealed via (strong) gravitational lensing, as discussed in Refs. [96,97] and, more generally, could also be seen via black hole shadows [98]. Second, though the propagation of gravitational waves in vacuum within RBGs is the same as in GR [99,100], their generation within binary mergers would not be, suggesting the search for tiny imprints of the new dynamics within the quasi-normal mode spectrum of these solutions [101]. This strategy can be further reinforced by the fact that the mapping may also allow numerical computations to be implemented directly, thanks to its mimicking of the structure of the Einstein field equations, once the correspondence between theories is identified.
D. Exotic horizonless compact objects
Are there any other (horizonless) compact objects besides canonical stars? The answer to this question is positive, and indeed a zoo of such objects can be found in the literature: gravastars, boson stars, fuzzballs, hairy solutions, scalar clouds, gravitational solitons, etc. Many of these objects are ultra-compact, in the sense of being close to the Buchdahl limit on compactness, namely C ≲ 4/9, therefore closely resembling a black hole (which has C = 1/2), which hampers their detection via purely optical means. However, the replacement of the would-be event horizon of a black hole by a hard surface makes a fundamental difference regarding the gravitational wave radiation from binary mergers. Indeed, in one such event, besides the usual burst of gravitational waves, there will be additional modes trapped between the photosphere and the object's surface, producing a periodic release of secondary gravitational waves with decreasing amplitude, the so-called echoes [102].
Models with scalar fields indeed offer suitable avenues for the construction of such exotic compact configurations. However, despite its apparent simplicity, the resolution of the RBG field equations for scalar fields turns out to be much harder than in the electromagnetic case due to the loss of some symmetries in the stress-energy tensor. Luckily, we now have the mapping method at our disposal. To take the simplest example of this scenario, let us consider a free, real scalar field. The mapping equations (15), (16) and (17) can be conveniently applied in this case by taking into account that GR coupled to this matter source has an exact solution, studied in some detail by Wyman in Ref. [103]. Therefore, if we consider quadratic f(R) gravity as the target theory on the RBG side, one finds an explicit correspondence involving a constant α and the kinetic term Z ≡ g^µν ∂_µφ ∂_νφ of the scalar Lagrangian density (recall that we are taking V(φ) = 0). Alternatively, if we use EiBI gravity, an analogous mapping is obtained. From all the mappings performed so far one can see that the nonlinear structure defining either the RBG or the matter sector (in either the GR or RBG frame) is somehow transferred from one frame to the other and/or from the gravity to the matter sector (or vice versa). Therefore, starting from Wyman's seed solution one can generate exact solutions in the f(R)/EiBI gravity setting by direct algebraic transformations, as shown in Ref. [39]. The neatness of this approach is in sharp contrast with the long and cumbersome direct derivation performed in Ref. [104].
As opposed to the case of electromagnetic fields, where the new gravitational dynamics is typically excited in the innermost region of the solutions, which is where the energy density reaches its highest values, thus leaving only tiny imprints at astrophysical scales, for scalar fields the energy density grows significantly already near the Schwarzschild radius, thus triggering a number of new properties at astrophysically relevant scales [39]. Such properties include the presence of asymmetric wormholes, and the emergence of a kind of surface extremely close to the location of the would-be Schwarzschild horizon. As with some of their GR cousins, these exotic objects bear such a close resemblance to black holes that they are hard to detect. Perhaps the best opportunity available here is also related to small differences in the generation of gravitational waves, and to the presence of echoes.
V. CONCLUSIONS
The physical implications of the affine connection have been mostly overlooked in the literature until very recent times, with most of the community taking it to be given by the Christoffel symbols of the metric. When its independent character is restored, it can be eliminated in favour of new non-linear couplings among the matter fields (at least in the case of minimal coupling [105]). The corresponding metric-affine theory, when it is of RBG type, yields second-order, ghost-free, Einstein-like equations compatible with all (for most members of the family) weak-field limit and gravitational wave observations. The resemblance of the field equations formulated this way to the standard Einstein equations allows one to follow similar procedures in solving them, which has made it possible to uncover a large number of theoretical results, particularly regarding exact black hole solutions.
It turns out that the new gravitational contributions engendered by the matter fields in RBGs yield a number of appealing features for such solutions, which are of relevance, in particular, for the issue of singularity avoidance inside black holes. Indeed, the research carried out so far has shown that the existence of regular black holes is a resilient feature of RBGs, including f(R) gravity, quadratic gravity, and EiBI gravity, when coupled to electromagnetic fields or to different types of fluids satisfying standard energy conditions. The singularity avoidance is implemented via two different mechanisms: either through a bounce in the radial function, or by the displacement of the would-be central singularity to the future (or past) infinity of the manifold. In the former case, geodesics can naturally cross the wormhole throat, while in the latter they take an infinite time to reach the center of the solution. Moreover, since such avoidance turns out to be independent of whether the canonical scalar invariants are divergent or not, this also raises questions on the physical meaning of such scalars for characterizing space-time singularities within metric-affine gravities (see Ref. [33] for a recent discussion on this point). Further extensions of RBGs, including for instance the addition to the action of scalars constructed with other contractions of the Riemann tensor, require further technical progress beyond the state of the art.
When moving to realistic scenarios of interest within astrophysics, things become more involved and new strategies have to be developed. This is needed in view of the good prospects offered by these theories for testing the possible existence of new gravitational physics beyond GR within the astrophysics of compact objects. We have highlighted some of these opportunities regarding relativistic and non-relativistic stars, black holes, and other horizonless compact objects, together with some of the contributions on RBGs analyzed in the literature. Given the fact that the new gravitational dynamics in metric-affine gravities is strongly dependent on the local stress-energy densities, imprints of interest at astrophysical scales which can act as clear discriminators with respect to GR predictions are hard to find. Nonetheless, we have hinted at a few specific predictions of these theories for these objects that offer good prospects within the context of multimessenger astronomy. A combination of such predictions for every RBG and for different kinds of compact objects could allow one to determine the viability of any such theory to account for different observations, thus helping to alleviate the degeneracy problem present in any modification of GR.
We have also discussed a new powerful tool to circumvent the highly non-linear character of the RBG field equations, which largely prevents the finding of solutions of astrophysical interest in dynamical scenarios. This tool, dubbed the mapping method, consists in casting the RBG field equations in purely Einsteinian form coupled to a new (non-canonical, in general) matter Lagrangian, in such a way that once the solution in GR for such a setting is found, the solution on the RBG side can be obtained from it via purely algebraic transformations. The power of this method is apparent, in the sense that one can use the full machinery of analytical solutions and numerical methods developed within GR to find new solutions on the RBG side. We have illustrated how this mapping works by discussing the way solutions found via direct resolution of the RBG field equations coupled to electromagnetic fields can be re-obtained using it. Moreover, we have discussed how new compact solutions sourced by scalar fields can be obtained. This way of finding new solutions is absurdly simpler than the usual procedure of finding them by brute force from the field equations, which greatly shortens computation times and reduces the chances of mistakes. The mapping moreover allows one to tackle scenarios previously inaccessible to analytic treatment, and may also be useful for the sake of numerical simulations.
To conclude, the prospects for extracting phenomenology of interest for the physics of compact objects within metric-affine theories of gravity are exceedingly hopeful, and we cannot but be optimistic that the field will continue to blossom in the near future. | 2020-04-03T19:08:30.436Z | 2020-04-02T00:00:00.000 | {
"year": 2020,
"sha1": "0b80b6d4a1bc3e5d754faeb11bf1559161d0e3a9",
"oa_license": null,
"oa_url": "https://eprints.ucm.es/id/eprint/62607/1/Rubiera,%20D%2009%20Preprint.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "59b3e6a7832563bcc8fe1ea0d4608b58d29b659b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
251162760 | pes2o/s2orc | v3-fos-license | Antibacterial and anti-trichomunas characteristics of local landraces of Lawsonia inermis L.
Background Henna (Lawsonia inermis), with anti-bacterial properties, has been widely used in traditional medicine, especially Persian medicine. Henna oil is suggested for diseases of infectious origin, such as cervical ulcers. Group B Streptococcus agalactiae, Pseudomonas aeruginosa, and Trichomonas vaginalis are involved in infections of women, especially cervicitis. Henna grows in dry and tropical regions. The most important landraces of henna are cultivated in Kerman, Sistan and Baluchestan, Hormozgan, and Bushehr provinces in Iran. Proper use of antimicrobial agents, use of new antimicrobial strategies, and alternative methods, such as herbal methods, may help reduce drug resistance in the future. This study's objective was to investigate the anti-Trichomonas vaginalis activity of three different henna landraces and their antimicrobial effects against group B Streptococcus agalactiae and Pseudomonas aeruginosa. Methods Total phenol content was measured by the Folin–Ciocalteu method. The antibacterial effect of the henna landraces against P. aeruginosa and S. agalactiae was assayed by the well diffusion method, and minimal inhibitory concentration assessments were done using the broth micro-dilution technique. The anti-Trichomonas effect of the henna landraces was assayed by the hemocytometry method. Results The total phenol content of Shahdad, Rudbar-e-Jonub, and Qaleh Ganj was 206.51, 201.96, and 254.85 μg/ml, respectively. Shahdad, Rudbar-e-Jonub, and Qaleh Ganj had MICs against GBS of 15, 15, and 4 μg/ml. The growth inhibition diameter of the most effective henna (Shahdad landrace) at a concentration of 20 μg/ml against P. aeruginosa was 2.46 ± 0.15 cm, and in the MIC method at a concentration of 5 μg/ml of the Shahdad landrace, P. aeruginosa did not grow. The IC50 of Shahdad henna after 24 h, 48 h, and 72 h was 7.54, 4.83, and 20.54 μg/ml, respectively. The IC50 of the Rudbar-e-Jonub extract was 5.76, 3.79, and 5.77 μg/ml on the different days. The IC50 of the Qaleh Ganj extract was 6.09, 4.08, and 5.74 μg/ml on the different days. Conclusions The amount of total phenol in Qaleh Ganj was higher than in the other varieties. In the well diffusion method, Qaleh Ganj was more effective against group B Streptococcus (a Gram-positive bacterium) than the other two landraces, and the Shahdad landrace was more effective against P. aeruginosa (a Gram-negative bacterium) than the others. In the MIC method, the same result was obtained as in the well diffusion method, but at a lower concentration.
Background Classically, herbs have been used for their antibacterial effects, which derive from their bioactive principles [1]. Medicinal plants, like Lawsonia inermis, have been employed as antimicrobial agents to prevent the growth of multi-drug resistant bacteria [2]. Antimicrobial resistance-associated infectious diseases, such as hospital-acquired Gram-negative bacterial infections, and the resulting mortality and morbidity, have been increasing at an alarming rate. Many antibiotics have become ineffective in treating and controlling multidrug-resistant bacterial pathogens worldwide [3]. Lawsonia inermis L. (Lythraceae), known as henna, is found in tropical and subtropical areas and has long been used worldwide [4]. Iran is one of the habitats of this plant. The most important landraces of henna are cultivated in Kerman, Sistan and Baluchestan, Hormozgan, and Bushehr provinces in Iran [5]. Climatic conditions are among the most critical variables of the natural environment. Regression analysis of the climatic factors and henna yield showed that these variables explained 93% of the variation in henna yield. All climatic characteristics of the cultivated areas except altitude, growth-period temperature, and annual temperature have a positive relationship with henna yield. About 87% of the variation in henna yield can be explained by two factors, relative humidity and rainfall during the growth period; these two parameters have the greatest effect on henna yield in the cultivated areas. Soil nitrogen and phosphorus have also been reported to play a significant role in henna yield. Rudbar-e-Jonub, Shahdad, and Qaleh Ganj are in Kerman province in the southeast of Iran, located at latitude 30.29 and longitude 57.06, with an area of 180,726 km². The average rainfall in Kerman is 138 mm. Rainfall deficiency and a high evaporation rate make Kerman a dry area. Kerman's climate varies between regions owing to high evaporation and particular local conditions, and these different climatic conditions can affect the secondary metabolites of plants [5].
S. agalactiae, also called group B Streptococcus (GBS), is a Gram-positive coccus [12]. GBS, as a harmless commensal bacterium, is part of the human microbiota and colonizes the genitourinary and gastrointestinal tracts of 30% of healthy adults (asymptomatic carriers). However, GBS causes severe invasive infections, particularly in the elderly, infants, and those with compromised immune systems [13]. S. agalactiae is a major neonatal pathogen [14]. Group B Streptococcus causes neonatal sepsis as well as meningitis in most countries [15]. P. aeruginosa, a common facultatively aerobic Gram-negative bacterium, causes disease in animals, plants, and humans [16], and has become a health concern, in particular in immunocompromised and critically ill patients. The drug-resistant strains cause high mortality [17]. It is the most common colonizer of medical devices, such as catheters. P. aeruginosa is transmitted by contaminated devices that are not appropriately cleaned or by the hands of healthcare staff [18].
Trichomonas vaginalis, a flagellated protozoan parasite of the human genital tract, is the cause of a treatable sexually transmitted disease worldwide. Genital tract infections in females can cause several symptoms, such as cervicitis and vaginitis. Recently, T. vaginalis infection has been associated with several serious conditions, like cervical cancer, prostate cancer, adverse pregnancy outcomes, and a higher likelihood of HIV infection; attempts have been made to treat and diagnose patients harboring T. vaginalis [19].
So far, no detailed study has been done on the antimicrobial and anti-parasitic activity of henna across its landraces. We investigated the phytochemical attributes and antimicrobial effects of the extracts from the Shahdad, Rudbar-e-Jonub, and Qaleh Ganj landraces, as local henna landraces.
The plants' collection and extraction process
The local landraces of L. inermis, namely Shahdad, Rudbar-e-Jonub, and Qaleh Ganj, were gathered and authenticated by the Medicinal and Industrial Research Institute, Ardakan. The voucher numbers are SSU 0062, SSU 0061, and SSU 0066, respectively. The dried leaves were sieved to prepare henna powder. We prepared the hydroalcoholic extract through the maceration method. L. inermis leaves were ground into a fine powder, passed through the sieve, and macerated separately at 10 g of ground plant material in 70% (v/v) ethanol (80 ml) for 72 h. Extraction was performed at room temperature while shaking using a magnetic stirrer. The solution was then purified using a Buchner funnel and concentrated. The concentrated extract was stored away from light and heat [20].
Total phenol content
The Folin–Ciocalteu method was used to determine the total phenolic content of the extracts and oil. Gallic acid (GA) was used as a standard, and total phenol was expressed as mg/g of gallic acid equivalents (GAE). GA at 10, 20, 40, 60, 80, 100, and 200 μg/ml concentrations was prepared, mixed with 0.5 ml of 10-fold diluted Folin–Ciocalteu reagent and, after 3–8 min, with 0.4 ml of 7.5% sodium carbonate. The tubes were covered with Parafilm and kept for 30 min at room temperature before reading the absorbance at 760 nm spectrophotometrically. All assessments were done in triplicate. Total phenolic content was calculated as mg of GA per gram using the equation obtained from the standard GA calibration curve [21].
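The gallic acid calibration described above amounts to a simple linear fit of absorbance against concentration, followed by inversion of that line for each sample. A minimal sketch of that conversion is given below; the absorbance values are hypothetical placeholders, since the raw readings are not reported in the text.

```python
import numpy as np

# Gallic acid standards (ug/ml) from the protocol and hypothetical 760-nm absorbances
ga_conc = np.array([10, 20, 40, 60, 80, 100, 200], dtype=float)
ga_abs = np.array([0.05, 0.09, 0.18, 0.27, 0.35, 0.44, 0.88])   # placeholder readings

# Least-squares calibration line: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(ga_conc, ga_abs, 1)

def gae_from_absorbance(sample_abs, dilution_factor=1.0):
    """Convert a sample absorbance into gallic acid equivalents (ug/ml)."""
    return (sample_abs - intercept) / slope * dilution_factor

# Example: mean of triplicate readings for one extract
print(round(gae_from_absorbance(np.mean([0.41, 0.42, 0.40])), 1))
```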
Preparation of the bacterial strains
The commercial strains of P. aeruginosa ATCC 27853 and group B Streptococcus PTCC 1864 were obtained from the Laboratory of Industrial Microbiology, Shahid Sadoughi University of Medical Sciences, Yazd, Iran.
Stock bacterial cultures were kept for 2 hours at room temperature. All strains were streaked on nutrient and blood agar plates, followed by incubation at 37 °C for 24 hours. The inoculums were prepared by emulsifying at least three colonies from the plates in sterile 0.9% NaCl (w/v) until reaching 10⁸ CFU/ml (0.5 McFarland scale). The sterile conditions of the procedures were ensured using laminar-flow hood equipment.
Preparation of trichomonas vaginalis
T. vaginalis strains were isolated from the vaginal discharge of females with Trichomonas vaginitis referred to the healthcare centers of Yazd, Iran, transferred to TYI-S-33 culture medium, and stored in the University's Parasitology Research Laboratory until use. T. vaginalis cells were collected from the logarithmic growth phase and the number of cells was estimated using a hemocytometer slide. Then, 1 × 10⁵ cells/ml were used for assessing the anti-T. vaginalis effects of the L. inermis landraces.
Antibacterial effects
The standard bacteria (P. aeruginosa and S. agalactiae) were passaged on a blood agar medium and incubated at 37 °C for 24 hours. For the agar well diffusion method, a Mueller Hinton agar plate was covered with bacterial suspension, and wells of 6-mm diameter were created with a sterile Pasteur pipette in each plate. Then, 50 μL of various concentrations of the plant extracts were transferred to the respective wells in the plate media. Ciprofloxacin was employed as a positive control and sterile distilled water was applied as a negative control. The zones of inhibition were measured with a ruler and recorded (in mm). All tests were performed in triplicate.
Determination of minimal inhibitory concentration (MIC)
MIC assessments were done using the broth micro-dilution technique and were performed in Mueller Hinton (MH) broth, according to the National Committee for Clinical Laboratory Standards (NCCLS 1999b) [22]. After preparing serial dilutions of the plant extracts (8, 10, 20, 30, and 40 mg/ml) in MH broth, each dilution (50 μL) was dispensed into the wells, followed by inoculation with 25 μL of the bacterial suspension (0.5 McFarland) and 25 μl of MH broth and mixed completely. Negative (growth) and positive (sterility) controls were applied for all experiments. Bacterial growth was controlled by replacing the extract with the same volume of 10% ethanol to eliminate any antibacterial activity of the solvent. MH broth medium was used for preparing sterility controls. The final volume in the wells was 100 μl. After covering the plates with a sterile plate sealer, they were incubated at 37 °C for 24 hours. Following incubation, the MIC was regarded as the lowest sample concentration with no color change (clear), showing complete inhibition of bacterial growth.
Minimal bactericidal concentration (MBC)
For determination of the MBC, 10 μl broth aliquots were obtained from each well with an extract concentration higher than the MIC and subcultured onto agar plates; after incubation, the MBC was regarded as the lowest extract concentration showing no visible bacterial growth.
In vitro anti-Trichomonal assay
For evaluating the anti-Trichomonal effects of the henna extracts, extract concentrations of 0.013–26.6 μg/ml were dispersed in phosphate-buffered saline (PBS) and mixed in microtubes. PBS and metronidazole (50 μg/ml) were applied as negative and positive controls, respectively. Afterward, 100 μl of medium with about 10⁵ live T. vaginalis organisms was transferred to each tube, the tubes were incubated at 37 °C, and the count of live parasites in each tube was determined 24, 48, and 72 hours after incubation. For all samples and each time point, after shaking the tube, the live cells were counted with a hemocytometer slide. Actively moving parasites, as well as parasites with a moving flagellum, were regarded as alive. All experiments were done in triplicate. The count of live parasites was compared with the negative and positive controls. The growth inhibitory percentage (GI%) was calculated and reported using the formula GI% = [(a − b)/a] × 100, where a is the mean number of live parasites in the negative control tube and b is the mean number of live parasites in the test tube [23].
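The GI% values and the IC50 estimates reported in the Results can, in principle, be obtained by applying the formula above to the live-parasite counts and fitting a dose–response curve. The sketch below illustrates one such workflow with a two-parameter log-logistic (Hill) model; the concentrations and counts used here are hypothetical, and the paper does not state which fitting procedure was actually employed.

```python
import numpy as np
from scipy.optimize import curve_fit

def gi_percent(live_control, live_test):
    """Growth inhibition (%) as defined in the text: (a - b) / a * 100."""
    return (live_control - live_test) / live_control * 100.0

def hill(conc, ic50, slope):
    """Two-parameter log-logistic (Hill) dose-response running from 0 to 100% inhibition."""
    return 100.0 / (1.0 + (ic50 / conc) ** slope)

# Hypothetical extract concentrations (ug/ml) and mean live-parasite counts at one time point
conc = np.array([0.013, 0.13, 1.33, 6.65, 13.3, 26.6])
live_test = np.array([9.6e4, 8.9e4, 6.6e4, 4.1e4, 2.3e4, 1.2e4])
live_control = 1.0e5

gi = gi_percent(live_control, live_test)
(ic50, hill_slope), _ = curve_fit(hill, conc, gi, p0=[5.0, 1.0],
                                  bounds=([1e-6, 0.1], [1e3, 10.0]))
print(f"GI% per concentration: {np.round(gi, 1)}")
print(f"Estimated IC50 ~ {ic50:.2f} ug/ml (Hill slope {hill_slope:.2f})")
```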
Statistical analysis
The results were computerized and analyzed by SPSS 25. One-way ANOVA and Tukey's test were used to analyze the results.
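As an illustration of the analysis pipeline described here (one-way ANOVA followed by Tukey's test), the following sketch reproduces the same steps in Python with hypothetical triplicate total-phenol readings whose means mirror the values reported below. The original analysis was performed in SPSS 25, so this is only an equivalent open-source workflow, not the authors' actual script.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate total-phenol readings (ug/ml); means mirror the reported values
shahdad = np.array([206.4, 206.5, 206.6])
rudbar = np.array([201.9, 202.0, 202.0])
qaleh_ganj = np.array([254.8, 254.9, 254.9])

# One-way ANOVA across the three landraces
f_stat, p_value = f_oneway(shahdad, rudbar, qaleh_ganj)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_value:.3g}")

# Tukey's HSD post-hoc test for pairwise differences
values = np.concatenate([shahdad, rudbar, qaleh_ganj])
groups = ["Shahdad"] * 3 + ["Rudbar-e-Jonub"] * 3 + ["Qaleh Ganj"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```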
Total phenol
Total phenolic content of the henna landraces was calculated from the standard curve of gallic acid. Table 1 shows that the total phenol content of Shahdad, Rudbar-e-Jonub, and Qaleh Ganj was 206.51 ± 0.07, 201.96 ± 0.09, and 254.85 ± 0.01 μg/ml, respectively, and the differences were significant (P < 0.05).
Antimicrobial effect
The growth inhibition of the henna landraces against GBS and P. aeruginosa is shown in Table 1. There was no growth inhibition at a concentration of 4 mg/ml for the Shahdad and Rudbar-e-Jonub samples and no growth inhibition at a concentration of 3 mg/ml for the Qaleh Ganj samples.
The results of comparing the GI% of T. vaginalis can be seen in Fig. 3.
Discussions
We found that different landraces of L. inermis have different effects against GBS and P. aeruginosa. The MIC of Shahdad, Rudbar-e-Jonub, and Qaleh Ganj against P. aeruginosa was 5, 15, and 7.5 μg/ml, respectively (Table 2). The Rudbar-e-Jonub landrace showed an antimicrobial effect against GBS at 4 μg/ml but had a smaller GI diameter than the other landraces. The Qaleh Ganj landrace also showed an antimicrobial effect against GBS at 4 μg/ml, but with a larger GI diameter than the other landraces. Also, the Qaleh Ganj landrace had a larger GI against GBS than against P. aeruginosa (Tables 1 and 2). P. aeruginosa is a non-fastidious microorganism with no need for special cultivation conditions. This bacterium is a common cause of infection among non-fermenting Gram-negative bacteria, mainly affecting immunocompromised patients. The increased prevalence of resistance leads to prolonged therapy and high mortality [24]. Habbal et al. reported that different landraces of henna had higher antimicrobial effects against P. aeruginosa than against other microorganisms [25].
In this study, the Shahdad and Qaleh Ganj landraces had a greater antibacterial effect against Gram-negative bacteria than against Gram-positive bacteria. This effect may be related to lawsone. Lawsone (2-hydroxy-1,4-naphthoquinone) is a naphthoquinone found in henna. Gram-negative bacteria are generally more resistant to antimicrobial agents than Gram-positive bacteria because of their cell walls. Gram-positive bacteria have a porous and thick cell wall with inter-connected peptidoglycan layers surrounding a cytoplasmic membrane, whereas Gram-negative ones possess a thinner peptidoglycan layer, an outer membrane, and a cytoplasmic membrane. Gram-positive bacteria have a porous layer of peptidoglycan as well as a single lipid bilayer, whereas Gram-negative ones possess two lipid bilayers that sandwich the peptidoglycan layer, together with an outer layer of lipopolysaccharide, leading to a low level of permeability for lipophilic small molecules [28].
The hydroxyl (−OH) group of phenolic compounds possibly causes bacterial inhibition, and the double bonds (their position and number) can contribute to the antimicrobial effect. Two carbonyl groups in an aromatic ring, as part of the naphthoquinone structure, could explain the antimicrobial effects. This hypothesis is supported by the oxygen-reduction activity of the quinone structure in 2-hydroxy-1,4-naphthoquinone, with the production of reactive oxygen species (ROS) and damage to macromolecules, like proteins, DNA, and lipids [29]. Chemical compounds target the infection-inducing bacterial cells; ROS production is an important process in apoptosis.
The mechanism of such antibacterial agents is elevated ROS production and, consequently, apoptotic cell death.
Regarding the ability to produce ROS, naphthoquinone analogues are very cytotoxic for the infected cells and are capable of restricting cellular enzymes involved in cell growth and apoptosis [30]. Paiva et al. in 2003 reported that plumbagin, a naphthoquinone, had antimicrobial effects [32]. Under biotic/abiotic stress conditions, the composition of plants and their extracts changes, causing them to have different effects on the same microorganisms [33]. Because of the climate diversity in Kerman province, the total phenol and lawsone content of different varieties of henna differ; thus, they have different effects on microorganisms. In this study, the Qaleh Ganj extract had a greater GI diameter against GBS, and the Shahdad extract showed a greater GI diameter against P. aeruginosa.
According to the results of the anti-Trichomonas activity assays, all evaluated extracts were effective in preventing the growth of T. vaginalis trophozoites in a dose-dependent manner after 24, 48, and 72 h of incubation. Moreover, the Rudbar-e-Jonub henna was significantly more effective, as shown by its lower IC50 values for T. vaginalis trophozoites after 24 h (P < 0.05) (Fig. 2).
The maximum GI% of the Shahdad, Rudbar-e-Jonub, and Qaleh Ganj landraces was 81, 83, and 80% after 48 h, respectively (Fig. 2). The GI% of the Rudbar-e-Jonub extract was significantly different from that of the two other landraces after 24, 48, and 72 h, but the total phenol in Rudbar-e-Jonub was significantly lower than in the others (P < 0.05). Therefore, other constituents of henna have a role in its anti-Trichomonas effect.
Multidrug resistance has been one of the major causes of human mortality in the past few years. Bacteria, parasites, and fungi have numerous resistance mechanisms against the current antibiotics, causing severe effects on patients' health. In addition, the use of synthetic chemicals to control microorganisms is still limited due to their environmental and carcinogenic effects and acute toxicity. Hence, there is an urgent demand in the scientific community for new antibiotics to deal with multidrug resistance. Therapeutic agents from herbs have long been regarded as a potential natural source for treating infectious diseases [34,35]. Some studies have reported anti-Trichomonas effects of essential oils such as that of Atalantia sessiflora [35,36]. Motazedian et al. and Serakta et al. reported the anti-leishmanial effect of henna extract [37,38]. The ethyl extract (33 mg/L) and petroleum ether extract (27 mg/L) of henna have an anti-falciparum effect [39].
In this study, the extract was not dissolved further; otherwise, it might have shown a higher GI. The IC50 of the Shahdad extract after 72 h was higher than that of the others. More studies are needed to identify the constituents responsible for this difference.
Limitations
The most important limitation of this study was the impossibility of preparing a more concentrated extract solution. | 2022-07-30T13:30:26.756Z | 2022-07-30T00:00:00.000 | {
"year": 2022,
"sha1": "163ce91988b5e8ad88c444cd48a4343734fc4d95",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "163ce91988b5e8ad88c444cd48a4343734fc4d95",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
239770872 | pes2o/s2orc | v3-fos-license | Fragmentation of microspheres after bronchial artery injection: a case report and review of the literature
Background Massive hemoptysis due to aspergilloma is a rare but life-threatening complication. Bronchial artery embolization is recommended as a definitive treatment for massive hemoptysis. Polyvinyl alcohol is widely used in bronchial artery embolization. A very small number of studies have reported disrupted polyvinyl alcohol, which may cause ectopic embolism. Case presentation This case highlights an unusual phenomenon in which polyvinyl alcohol fragments appeared on pathological examination in a 61-year-old man, ethnic Han, with massive hemoptysis caused by aspergilloma, for whom bronchial artery embolization failed. Lobectomy was carried out successfully. Hematoxylin and eosin staining provides clear images of the polyvinyl alcohol fragments, while alpha-smooth muscle cell actin and cluster of differentiation-34 immunohistochemistry revealed their localization in the bronchioles. Conclusion Thus far, only two cases of polyvinyl alcohol fragments in the lung have been reported, and the mechanism has not been elucidated. These two cases revealed no contraindication to the use of polyvinyl alcohol. However, in some cases of off-target embolization causing fatal complications, such as stroke, paraplegia, and myocardial infarction, polyvinyl alcohol fragmentation needs to be taken into consideration.
Introduction
Aspergillomas are mass-like fungus balls typically composed of Aspergillus fumigatus, most of which are secondary to structurally abnormal lungs, especially those with preexisting cavities. Their main clinical feature is recurrent hemoptysis of varying amounts [1]. Massive hemoptysis due to aspergilloma is a rare but deadly complication, with an estimated mortality as high as 38% [2]. Bronchial artery embolization (BAE) is recommended as a temporary measure before surgery, or as a definitive treatment for massive hemoptysis [2]. Polyvinyl alcohol (PVA) is widely used in BAE for its permanent embolization effect. A very small number of studies have reported disrupted PVA, which may cause ectopic embolism [3].
This case provides unique and clear images of PVA fragments in a lung specimen, which probably provides a new explanation for ectopic embolism.
Case presentation
Written consent for this case report and the accompanying images was obtained from our institutional review board and the patient. A 61-year-old man, ethnic Han, presented with massive hemoptysis of nearly 500 mL of fresh blood. He did not complain of any ongoing respiratory symptoms. His past medical history included pulmonary tuberculosis (TB) treated with 6 months of standard antituberculosis therapy (2HRZE/4HR; 2HRZE: isoniazid 300 mg once daily, rifampin 450 mg once daily, pyrazinamide 750 mg twice daily, and ethambutol 750 mg once daily for 2 months; 4HR: isoniazid 300 mg once daily plus rifampin 450 mg once daily for 4 months), and outpatient follow-up showed resolution of his TB. The patient was a farmer, and his social, environmental, family, and psychosocial history was unremarkable. He did not smoke or consume alcohol. The patient had a respiratory rate of 30 breaths/minute and oxygen saturation of 92% on ambient air. Chest physical examination revealed mild respiratory distress, with decreased breath sounds over the top of the right chest. The rest of the physical examination was unremarkable. A chest computerized tomography (CT) scan showed bilateral apical post-tuberculosis lung fibrosis and a right apical 2.5 × 2 cm thick-walled cavity with a solid intracavitary mass bearing the air crescent sign, while the enhanced CT scan indicated remarkable enhancement around the lesion and no obvious fistula (Fig. 1a). BAE was carried out, and digital subtraction angiography demonstrated that the right bronchial arteries were abnormal, with tortuosity, hypertrophy, and extravasation of contrast material into the right bronchus (Fig. 1b). Because of the tortuosity of the bronchial artery, the microcatheter could not be reliably and stably advanced; thus, steel platinum coils were not an option. One gram of PVA microspheres (Hegui, China) with a diameter of 700–900 µm was chosen to embolize the culprit bronchial artery. However, the embolic agents appeared quickly in the right upper-lobe bronchus after slow and gentle injection into the bronchial artery. Rapid deterioration during the procedure, including ongoing hemoptysis, tachycardia, and hypotension, necessitated surgical resection of the right upper lobe (Fig. 1c). Pathological examination demonstrated not only septate hyphae in the resected cavity with a chronic inflammatory reaction (Fig. 2a) but also basophilic-appearing PVA fragments in the lung (Fig. 2b, c). After several days in the intensive care unit, the patient, with no symptoms of hemoptysis or ectopic embolism, was transferred to a normal ward and discharged 2 weeks later. This patient received 1600 mL of suspended erythrocytes and 1600 mL of plasma transfusion during hospitalization. One month later, outpatient follow-up showed good recovery except for mild right chest pain. Figure 3 is a timeline showing the important dates for the patient in hospital and on outpatient follow-up.
Discussion
PVA has been widely used in BAE for the treatment of massive hemoptysis because of its permanent embolization effect and its relatively easy delivery (no need for a microcatheter) compared with gelatin sponge particles and stainless steel platinum coils. In this case, PVA was chosen in the hope of occluding the fistulas. With the accumulation of PVA with contrast material, the shape of the right main bronchus became clear; therefore, surgical resection was carried out. Pathologic examination demonstrated PVA fragments. Alpha-smooth muscle cell actin (Alpha-SMA) and CD34 immunohistochemistry was carried out to localize the fragments. To the best of our knowledge, few cases have reported images of disrupted PVA in the human lung [3,4].
A systematic search of MEDLINE and EMBASE was conducted from inception to 25 July 2021, using the search terms "PVA AND bronchial artery embolism," "massive hemoptysis AND bronchial artery embolism," "massive hemoptysis AND PVA AND bronchial artery embolism," and "massive hemoptysis AND aspergillomas AND PVA AND bronchial artery embolism." Only two reports were found (Table 1). Robbins and colleagues reported that microsphere fragments appeared in the lung vessels, while Bonnefoy et al. also captured images of particles in the lungs [4,5].

Fig. 1 a Chest computerized tomography scan showing bilateral apical post-tuberculosis lung fibrosis and a right apical 2.5 × 2 cm thick-walled cavity with a solid intracavity mass bearing the air crescent sign. b Right bronchial artery angiography showing tortuosity, hypertrophy, and extravasation of contrast material into the right upper-lobe bronchus (red arrow). c Gross pathologic specimen after surgical resection of the right lobe
As can be seen in Fig. 2d, e, the fragments were not in the vessels but in the bronchioles. This could be explained by the fistula between the bronchial artery and the bronchioles. Of interest to us was the size of the PVA fragments, which were scattered around the bronchioles with different diameters. Some of the fragments were just as large as the red blood cells (Fig. 2b). This phenomenon raises the question of whether the PVA fragmentation occurred during specimen preparation or in the human body, or whether it was associated with TB or Aspergillus. Regarding the first possibility, our slice thickness was 5 µm, and PVA exceeding this thickness would be expected to break during histological preparation. We are more curious about the latter possibilities. As Fig. 2b, c shows, the contour of the PVA fragments is not clear and lacks a cutting edge, while many small fragments of different sizes are scattered in the vessel. The mechanism of PVA fragmentation in vivo is not clear, though it may be related to the mechanical force of injection. However, we used a 5F Cobra angiography catheter (Terumo, Japan) with a 1.65 mm inner diameter, which is much larger than the largest PVA size of 900 µm, so this probability is very low. We could exclude the possibility that PVA fragmentation was associated with TB or Aspergillus according to the pathological features. The PVA fragmentation appeared in the bronchioles, and this is cause for great concern, because embolic agents smaller than 50 µm might pass through the physiological arteriovenous shunt to the systemic arteries, resulting in ectopic embolisms [6]. In fact, there are several reported cases of off-target embolization causing stroke, though we were not convinced by some of the proposed mechanisms. In these cases, there was no collateral circulation, no visible shunt, and none of the known mechanisms proposed by Knight [7], but the strokes happened after BAE [8,9]. The authors hypothesized that the microspheres probably passed through an unvisualized right-to-left shunt from the right pulmonary arteries to the right pulmonary veins, or created a thrombus during the procedure that dropped into the vertebral artery and caused an embolic stroke [8,9]. Moreover, we also observed a stroke during a procedure of drug-eluting bead bronchial arterial chemoembolization in a lung cancer patient. Although we proposed that mechanical forces disrupted unvisualized anastomoses, which opened errant emboli passages through the pulmonary vein and allowed off-target embolization of the intracranial arteries [10], we could not rule out the possibility of PVA fragmentation.

Fig. 2 a Septate hyphae in a resected cavity with a chronic inflammatory reaction. Hematoxylin and eosin (H&E) staining, original magnification ×400. b Basophilic-appearing PVA fragment (thick arrow) and red blood cells in the lung (thin arrow). H&E staining, original magnification ×400. c Basophilic-appearing PVA fragment. H&E staining, original magnification ×200. d Alpha-SMA immunohistochemistry showing that the fragments were not in the vessel but in the bronchioles, original magnification ×200. e CD34 immunohistochemistry showing that the fragments were not in the vessel but in the bronchioles, original magnification ×200

Fig. 3 Timeline demonstrating important dates for the patient in hospital and on outpatient follow-up
Conclusions
This case raises a concern about the safety of PVA when applied in humans. Thus far, only two cases have reported the fragmentation of PVA, and the mechanism has not been elucidated. We could not draw any conclusion based on these two cases. Moreover, tens of thousands of cases are treated successfully with PVA every year, so this does not contradict the use of PVA. However, in some cases of off-target embolization causing fatal complications, such as stroke, paraplegia, and myocardial infarction, PVA fragmentation needs to be taken into consideration. | 2021-10-26T13:38:51.785Z | 2021-10-26T00:00:00.000 | {
"year": 2021,
"sha1": "6d6ef8e0815fdea069d2f3c82d8b9b85b6bf1bad",
"oa_license": "CCBY",
"oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/s13256-021-03099-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d6ef8e0815fdea069d2f3c82d8b9b85b6bf1bad",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119638049 | pes2o/s2orc | v3-fos-license | A comparison method for heavy-tailed random variables
We investigate a way of comparing and classifying tails of random variables. Our approach extends the notion of classical indices, such as the exponential and moment indices, which are widely used for measuring the heaviness of tail functions. A non-parametric risk measure applicable to all heavy-tailed random variables is obtained as a concave function that represents the decay speed of the tail function. Many key properties of the distribution of a random variable are encoded in this function, which enables a new way to estimate tails. The latter half of the paper is devoted to numerous examples illustrating the properties of the results developed in the first half.
Introduction
Suppose (Ω, F, P) is a probability space on which all subsequent random variables are defined. For a random variable X, with distribution function F_X(x) = P(X ≤ x) and tail function F̄_X = 1 − F_X, we define the hazard function by R_X = − log F̄_X. All random variables are assumed essentially unbounded from above, that is, P(X > a) > 0 for all a > 0. The identities defining the classical exponential and moment indices,

(1) lim inf_{x→∞} R_X(x)/x = sup{s ≥ 0 : E(e^{sX}) < ∞} and
(2) lim inf_{x→∞} R_X(x)/log x = sup{s ≥ 0 : E((X^+)^s) < ∞},

where x^+ = max(0, x), are valid as can be seen from e.g. [1] and [2]. We will mainly study non-negative random variables. However, most of the properties can be transferred to the unrestricted case simply by considering the variable X^+ instead of X, since these two variables have the same right tail. To study left tails, one can replace X by −X.
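As a purely illustrative aside (not part of the original text), the hazard function and the two classical indices can be approximated from simulated data. The ratios printed below are finite-sample proxies for the lim inf quantities in (1) and (2), assuming only numpy is available; the sample sizes and evaluation points are arbitrary.

import numpy as np

def empirical_hazard(sample, x):
    # R(x) = -log P(X > x), estimated from the empirical tail function.
    tail = np.mean(sample > x)
    return -np.log(tail) if tail > 0 else np.inf

rng = np.random.default_rng(0)
light = rng.exponential(1.0, size=2_000_000)   # light-tailed: exponential index 1
heavy = rng.weibull(0.5, size=2_000_000)       # heavy-tailed: exponential index 0, moment index infinite

for x in (4.0, 7.0, 10.0):
    print(x,
          empirical_hazard(light, x) / x,   # stays near 1 for the exponential sample
          empirical_hazard(heavy, x) / x)   # decreases toward 0 for the Weibull sample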
The main problem and proposed solution
The difficulty with the indices defined in formulas (1) and (2) is that neither of them can be used to compare the tails of random variables X and Y if their indices share the same value 0 or ∞. In this situation the scale h does not correctly represent the scale of the hazard functions R_X and R_Y. This raises two questions: Q1. Given two general random variables, how can their tails be compared?
Q2. How could one measure the heaviness of a general heavy-tailed random variable?
It seems that questions 1 and 2 have not been studied extensively in the past. However, these kinds of questions have recently attracted attention among practitioners of risk management. In [6] one can find an applied approach to the tail comparison problem, together with related discussion. We will provide a completely different solution that is applicable to a wider class of probability distributions.
To answer question 1, we propose a direct comparison between the associated hazard functions via the quantity

(3) lim inf_{x→∞} R_X(x)/R_Y(x).

If the quantity in formula (3) equals a ∈ (0, ∞), we may deduce that for any small ε > 0 there exists a number x_ε such that for all x > x_ε the inequality R_X(x) > (a − ε) R_Y(x) holds. This enables comparison between X and Y even when the indices (1) and (2) fail to characterise the proper decay speed. In addition to the direct comparison of type (3), it will be shown that any risk function of a heavy-tailed random variable can, in a sense, be replaced by a suitable concave scale function that adequately represents its asymptotic scale. This answers question 2: heaviness is measured by the asymptotic growth speed of this deterministic function. Using a concave function is beneficial because it has, in many cases, a much simpler representation than the original hazard function.
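To make the idea behind (3) concrete — this is our own numerical illustration, using closed-form hazard functions rather than anything taken from the paper — consider a Weibull-type tail and a log-normal-type tail. Both have exponential index 0 and moment index ∞, so (1) and (2) cannot separate them, but the ratio of hazard functions can:

import numpy as np

# Hazard functions R = -log(tail) of two heavy-tailed laws; parameters chosen only for illustration.
R_weibull = lambda x, lam=1.0, alpha=0.5: lam * x**alpha
R_lognormal_type = lambda x, lam=0.5, gamma=2.0: lam * np.log(x)**gamma

x = np.logspace(1, 8, 8)                      # evaluation points from 10 to 10^8
print(R_lognormal_type(x) / R_weibull(x))     # the ratio decreases toward 0,
                                              # so the log-normal-type tail is the heavier of the two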
Structure of the paper
The rest of the paper is arranged as follows. In chapter 2, the necessary background information is given together with preliminary results. In chapter 3, we develop the main properties that are used in the applications of chapter 4. Lastly, technical constructions omitted in chapter 3 are given in appendix A.
Motivation
A random variable with a positive exponential index is called light-tailed. For such variables one can always deduce that the speed at which the tail function F̄ decreases is at least exponential, that is,

(4) F̄(x) ≤ e^{−ax}

for some a > 0 and all x large enough. If a random variable is not light-tailed, it is called heavy-tailed. In the case of a positive and finite moment index, a polynomial bound F̄(x) ≤ x^{−b} for some b > 0 and all large enough x may be obtained, whereas an inequality of the type (4) is not possible. We aim to provide a bound suitable for all heavy-tailed random variables in the form

(5) F̄(x) ≤ e^{−h(x)}

for all x large enough, where the function h is an accurate representation of the true decay speed of the tail F̄. In example 2 we will see how this can be achieved in the case of Weibull or log-normal type distributions. In order to find a suitable function h we introduce the following definition.
Definition 1. Suppose X is a random variable and h is a scale function. Then

I_h(X) := lim inf_{x→∞} R_X(x)/h(x)

is called the h-order of X. If I_h(X) = 1, the function h is called a natural scale (function) of X. Hereafter, h_X denotes a natural scale of X.
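As a quick worked instance of Definition 1 (our own illustration, anticipating the Weibull case of Example 2 below): for a Weibull tail the hazard function can be read off directly, and the obvious choice of h has h-order one,

\[
\bar F(x) = e^{-\lambda x^{\alpha}}
\;\Longrightarrow\;
R(x) = -\log \bar F(x) = \lambda x^{\alpha},
\qquad
I_h(X) = \liminf_{x\to\infty}\frac{R(x)}{h(x)} = 1
\quad\text{for } h(x) = \lambda x^{\alpha},
\]

so h(x) = λx^α is a natural scale of a Weibull random variable.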
Remark 1. In definition 1, the concept of natural scale does not uniquely define any function h. Instead, there are many different choices. A trivial candidate is always h = R_X = − log F̄_X. This, however, may turn out to be a cumbersome choice.
Suppose h_X is a natural scale of a random variable X. Then for every ε > 0 an inequality similar to (5) holds:

(6) F̄(x) ≤ e^{−(1−ε) h_X(x)}

for all x large enough. We will see in Theorem 3 that, for a heavy-tailed random variable, a function h_X can always be chosen so that, in addition to properties I and II, III. h_X is essentially the best choice in (6).
Properties I and II can make the function h_X smoother than the original risk function R itself. However, h_X conveys useful information about the asymptotic behaviour of the tail F̄. This becomes apparent when studying expectations. We will see that there exist numbers a, b ∈ (0, ∞), where a < b, such that

(7) E(e^{a h_X(X)}) < ∞ and
(8) E(e^{b h_X(X)}) = ∞.

This means that the function h_X regularises the random variable X so that the expectation (7) is finite, but sparingly enough for the expectation (8) to be divergent.
The interpretation of this is that the deterministic function h_X captures the scale of the random variable X and thus measures how risky the variable is. Precise information about the possibility of very large realisations is of crucial importance in many fields. For example, in insurance and finance large losses are possible, say, in catastrophe insurance or in derivatives trading. Theorems on general moment properties of heavy-tailed random variables can be found in chapter 2 of [4]. We conclude the chapter by recalling one of these theorems.
Theorem 1 (Theorem 2.9 of [4]). Let X ≥ 0 be a heavy-tailed random variable. Suppose g is a real valued function for which g(x) → ∞, as x → ∞.
From now on, we will omit the lower index indicating the random variable from the hazard and tail functions whenever the variable in question is clear from the context. In addition, sequences will be denoted in short as (x_n) := (x_n)_{n=1}^∞, where := denotes equality by definition.
Existence of a suitable natural scale
One of the main results of the paper is Theorem 3, where the existence of a desired function h satisfying requirements I-III of section 2 is shown. Its proof requires the next theorem, which translates a relation similar to (1) or (2) into a more general environment.
Theorem 2. Suppose X is a random variable. Assume further that h is increasing and continuous and that h(x) → ∞ as x → ∞. Then

(9) lim inf_{x→∞} R_X(x)/h(x) = sup{s ≥ 0 : E(e^{s h(X)}) < ∞}.

Proof. We divide the proof into two parts.
1. Suppose first that the function h is strictly increasing. Applying (1) to the random variable h(X) yields

lim inf_{x→∞} R_{h(X)}(x)/x = sup{s ≥ 0 : E(e^{s h(X)}) < ∞}.

Since h is invertible, R_{h(X)}(x) = −log P(h(X) > x) = R_X(h^{−1}(x)), and the change of variable x = h(y) gives

lim inf_{y→∞} R_X(y)/h(y) = sup{s ≥ 0 : E(e^{s h(X)}) < ∞}.

This ends the proof of part 1.
2. Suppose then that the function h is increasing, but not necessarily strictly increasing. Let η > 0. We may choose a strictly increasing continuous function h η such that for all x ≥ 0 holds. See appendix A.1 below for the actual construction of function h η . By part 1 the result (9) holds for the function h η . Using (10) it is easy to see that and sup{s ≥ 0 : E(e sh η (X) ) < ∞} = sup{s ≥ 0 : E(e sh(X) ) < ∞}, which ends the proof.
We are now in a position to show that a natural scale, defined in definition 1, can always be chosen in the following way.
Theorem 3. Suppose X is a heavy-tailed random variable. Then there exists a concave function h with h(0) = 0 such that

(11) lim inf_{x→∞} R_X(x)/h(x) = 1.

Equivalently, there exist numbers a, b ∈ (0, ∞) and a concave function h* with h*(0) = 0 such that

(12) E(e^{a h*(X)}) < ∞ and
(13) E(e^{b h*(X)}) = ∞.

Proof. The equivalence of the above assertions is immediate. If (11) holds, we may choose h* = h, a = 1/2 and b = 3/2 in (12) and (13); the result is implied by Theorem 2. For the other direction, Theorem 2 tells us that

α := lim inf_{x→∞} R_X(x)/h*(x) = sup{s ≥ 0 : E(e^{s h*(X)}) < ∞} ∈ [a, b] ⊂ (0, ∞).

Setting h = αh* gives the required function. We will thus concentrate on proving formula (11). To this end, let g be a nonnegative continuous function for which g(x) = o(R(x)) holds, as x → ∞. For an explicit construction of such a function see appendix A.2 below. Now, there exists a function ĥ satisfying conditions 1-3 of Theorem 1.
Note that the function ĥ cannot be bounded from above. If ĥ were bounded by a positive constant M, we would get E(e^{ĥ(X)+g(X)}) ≤ e^M E(e^{g(X)}).
Finally, denoting gives the desired function.
Remark 2.
If h is a concave function with h(0) = 0, the subadditivity requirement h(a + b) ≤ h(a) + h(b) holds for all a, b > 0. In addition, it is easy to check that for any a ∈ (0, 1) the relation lim sup_{x→∞} h(ax)/h(x) < ∞ holds. This means, in particular, that the function h belongs to the dominatedly varying class D. See appendix A.3 below for details. Moreover, if h is also a scale function, it satisfies the asymptotic relation h(x) → ∞, as x → ∞. This implies that h is continuous and strictly increasing.
Theorem 3 gives a way to classify random variables purely by the thickness of their tails. This thinking is different from many other classifications of heavy-tailed random variables where an analytic property, not explicitly related to the tail decay speed, is required.
Remark 3. Theorem 3 shows how to find a natural scale for a heavy-tailed random variable. Namely, if a concave function satisfying (12) and (13) is found, it is a natural scale up to a positive multiplicative constant. A good initial guess for finding a suitable function h is R = − log F̄ itself, or a suitable dominating component of R. In example 7 we will illustrate properties of this choice.
Properties of natural scales
The properties of the indices (1) and (2) are different. For example, if X and Y are positive and independent, the equality I(XY) = min(I(X), I(Y)) is always valid, whereas the analogous rule for the exponential index does not hold in general. We provide conditions ensuring that the h-order of the sum and product of independent variables is the minimum of the associated h-orders. These properties allow one to make simple and fast estimates even if the exact computation is not feasible. The next theorem gives sufficient conditions for simple computational rules to hold. The aim is to establish results that can be tested with natural scales of random variables.
Theorem 4. Suppose X and Y are positive, independent and essentially unbounded random variables. Assume that h is a continuous function and h(x) → ∞, as x → ∞.
Then, the following implications hold: Proof. We use the representation (9) of Theorem 2 together with the facts E(e^{s h(X+Y)}) ≤ E(e^{s h(X)}) E(e^{s h(Y)}) and E(e^{s h(XY)}) ≤ E(e^{s h(X)}) E(e^{s h(Y)}) obtained from formulas (17) and (18). The purpose of Theorem 4 is to give conditions that enable simple calculations. This is why the random variables are assumed independent. However, the following result confirms that in certain cases we may infer scales even without independence.
Theorem 5. Suppose X and Y are positive heavy-tailed random variables. Let h X and h Y be concave natural scales of X and Y with h Y (0) = 0 (obtained e.g. from Theorem 3). Assume further that Then there exists c ∈ (0, ∞) such that is a natural scale of X +Y .
Proof. Because of (21) it is clear, using Theorem 2, that E(e^{s h_Y(X)}) < ∞ for all s > 0. Now, since X and Y are assumed positive,
On the other hand, because h Y is by remark 2 subadditive, we get .
The last theorem allows one to estimate tails of transformations of IID (independent and identically distributed) variables using the tail of a single variable. This estimate is useful in the study of products, see examples 3, 4 and 6 below. Before this result, we need a lemma that expands a central property of indices (1) and (2). Lemma 1. Suppose X and Y are random variables and h is a scale function. Then where the last equality follows from e.g. [ where Z = X or Z = Y . This proves the claim.
Theorem 6. Let n ≥ 2 be fixed and suppose X, X_1, X_2, . . . , X_n are positive heavy-tailed IID variables with continuous common distribution function F. Assume further that g : R^n → R is a function with the following properties: 1. Each component g_i, i ∈ {1, 2, . . . , n}, of the function g is an increasing function and g_i(x) → ∞, as x → ∞.
In conclusion, we deduce that the function ĥ differs from the natural scale only by a positive constant factor. Therefore, there exists c ∈ [1, n] such that (23) holds. Now, using the definition of natural scale, we obtain an expression from which the claim (24) directly follows.
Applications and examples
Suppose we are given two sequences of random variables (A_i) and (B_i). Define S_n = B_1 + · · · + B_n and, with the convention that the empty product equals one,

(27) Y_n = Σ_{i=1}^{n} A_1 · · · A_{i−1} B_i.

The sum S_n is an ordinary random sum; (27), on the other hand, can be viewed as a randomly discounted random cash flow. In the following examples we will see how the h-orders of the variables S_n and Y_n can be studied using results of the previous chapter. We will begin with an example that clarifies why the lower limit is used in the definition of h-orders.
Example 1 (Justification of limes inferior in definition 1). Consider continuous
for all x ≥ 0. Assume further that the functions h_1 and h_2 are strictly increasing and that h_2(x) → ∞, as x → ∞. It is now possible to construct a random variable X whose risk function R oscillates between h_1 and h_2, as illustrated in Figure 1.
Figure 1: Illustration of the construction. The hazard function R is drawn using a dashed line.
It is worthwhile to notice that the behaviour of (29) defines the integrability properties of the random variable X, that is, the function h_2 is solely responsible for the integrability of X. In Figure 1 a situation where h_1 is concave and h_2 is convex is shown. This illustration depicts the fact that a natural scale can be easier to handle than the original function R = − log F̄ itself.
Even if the function R is smooth, there may be a better choice of natural scale. This phenomenon can be seen in the following example in the case of log-normal type tails.
Example 2 (Two difficult cases in classical theory). Two distributions that escape the scope of the indices (1) and (2) are the Weibull distribution and log-normal type distributions. The Weibull distribution is concentrated on [0, ∞) and its tail function has the form F̄(x) = e^{−λx^α} with λ > 0 and α ∈ (0, 1). This is the distribution of X^{1/α} when X is exponentially distributed with parameter λ. We say that a random variable is of log-normal type if its tail satisfies F̄(x) ∼ c x^β e^{−λ(log x)^γ}, as x → ∞. Here β ∈ R, λ > 0, γ > 1 and c is a positive norming constant.
Let B denote a random variable having a Weibull distribution or a log-normal type distribution. It is easily seen that the exponential index of B equals 0 and that the moment index I(B) = ∞. Hence, the classical indices reveal little information about the distribution. However, since B is heavy-tailed, remark 4 ensures that there is a natural scale of the distribution of B that satisfies condition (17). Suppose (B, B_1, . . . , B_n) are IID variables. Now, the tail of the sum S_n is bounded by the tail of a single variable. Here, a natural scale can be chosen to be h(x) = λx^α for the Weibull distribution and h(x) = λ(log x)^γ for the log-normal type distribution, respectively.
Next, we calculate up to a constant factor a scale function for a product of two Weibull distributed random variables.
Example 3 (Product of two independent Weibull variables). Suppose X and Y are IID Weibull distributed random variables with tail F̄(x) = e^{−λx^α}, where λ > 0 and α ∈ (0, 1). By selecting g(x_1, x_2) = x_1 x_2 in Theorem 6 we obtain a natural scale for XY of the form c h_X(g_d^{−1}(x)) = c λ x^{α/2}, where c > 0 is a constant and g_d^{−1}(x) = √x. Since XY has a natural scale of the form d x^{α/2}, where d > 0, we see that the calculation rule for IID variables X and Y implied by condition (18) of Theorem 4 cannot be valid for h = h_X. In fact, the scale function of XY grows at a significantly slower speed than the scale function of X, which is why the functions h_{XY} and h_X differ by more than a multiplicative constant factor.
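A quick Monte Carlo sanity check of this example (our own illustration, not from the paper; λ = 1 and an arbitrary sample size) compares the empirical hazard of the product with the claimed scale x^{α/2}:

import numpy as np

rng = np.random.default_rng(2)
alpha = 0.5
X = rng.weibull(alpha, size=2_000_000)   # tail exp(-x**alpha), i.e. lambda = 1
Y = rng.weibull(alpha, size=2_000_000)
prod = X * Y

for x in (5.0, 20.0, 80.0):
    R_emp = -np.log(np.mean(prod > x))        # empirical hazard of XY at x
    print(x, R_emp, R_emp / x**(alpha / 2))   # the last column stays roughly stable,
                                              # consistent with a natural scale d * x**(alpha/2)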
The previous example dealt with the product of two random variables. The following example shows that Theorem 6 can be used to obtain simple and general bounds in situations where the structure of the model is based on a product of IID variables. We study the utility of an economic agent under random IID endowments of commodities using the celebrated Cobb-Douglas model. Different ways of introducing randomness into the Cobb-Douglas model, with deeper discussion, can be found in e.g. [8].
Example 4 (Tail asymptotics in an economic model based on product structure). Suppose g(x_1, x_2, . . . , x_n) = x_1^{a_1} x_2^{a_2} · · · x_n^{a_n}, where a_1 + a_2 + . . . + a_n = 1 and a_i ≥ 0 for all i ∈ {1, 2, . . . , n}. Let X_1, X_2, . . . , X_n be positive IID variables with common continuous distribution function F. Now, g_d(x) = g_d^{−1}(x) = x and, for a given ε > 0, application of Theorem 6 yields

(30) P(g(X_1, X_2, . . . , X_n) > x) ≤ P(X_1 > x)^{1−ε}

for all x large enough. The function g can be interpreted as a utility function of an economic agent in the Cobb-Douglas model. Formula (30) shows that the tail of the utility in a random IID allocation of goods is dominated by the tail of a single variable.
Next, we move on to the study of the process (Y_n). The following example extends previously known results to the case of even heavier tails.
Example 5 (On the asymptotics of the tail F̄_{Y_n}). Suppose (A_i) and (B_i) are independent sequences of positive random variables. In many different fields, such as insurance and queuing theory, randomly weighted random sums of the type (27) appear constantly; see e.g. [9] for background. In this context we set Y_0 = 0.
A variable of interest is Ȳ_n. In Theorem 4.1 of [11] the moment index of the random variable Ȳ_n is solved. Using the theorems from the previous chapter it is possible to extend this result beyond the scope of polynomial decay. Namely, if the scale function h satisfies properties (17) and (18), a deduction similar to that of [11] can be generalised. We recall (e.g. from [11]) that the process (Ȳ_n) admits a recursive representation for n ∈ N, where =_d signifies equality in distribution. Here U_n is independent of the vector (A_{n+1}, B_{n+1}). From this it is clear that the asymptotic relation (31) holds. The interpretation of this formula is that the risk with a more slowly increasing hazard function determines the asymptotic behaviour of the process (Ȳ_n). Heuristically, the condition (18) is satisfied when the hazard function grows more slowly than the logarithmic function, which is not the case with Weibull distributed random variables. This is, however, the case with log-normal type distributions. The Weibull distribution fails to satisfy (18), and example 2 shows that property (31) cannot be valid for this distribution.
Example 6 (Special case of the discounted sum Y_n: B = 1). In example 5 conditions (17) and (18) were taken as assumptions. However, the scale of the random variable Y_n can be deduced from Theorems 5 and 6 alone. Consider the discounted sum Y_n, where n ≥ 3. We make the simplifying assumption B_i = 1 for all i ∈ {1, 2, . . . , n}. In addition, the sequence (A_i) is assumed to be an IID sequence of positive variables with a common continuous distribution function. Using Theorem 5 we see that the scale of Y_n is determined by the heaviest of the summands, which is A_1 · · · A_{n−1}. Now Theorem 6 shows that a natural scale for Y_n is, up to a positive constant, the function x → h_A(x^{1/(n−1)}). Using Lemma 1 we see that this scale is also a natural scale for Ȳ_n.
The final two examples are of a more theoretical nature. The following example shows how a deterministic transformation of a random variable can be used to alter the moment index. For a stochastic way to change the moment index, the reader is advised to see [7].
Example 7 (The scale h = − log R). For any random variable X with continuous distribution function, by Theorem 2, holds. This way it is possible to give a deterministic transformation, depending on X itself, with which X may be transformed to a random variable with any positive moment index. If the required moment index is α > 0, the transformation can be used.
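One transformation consistent with this example — written out here as our own worked step, using only the standard fact that R(X) = −log F̄(X) is standard exponentially distributed whenever F is continuous — is x ↦ e^{R(x)/α}:

\[
P\bigl(e^{R(X)/\alpha} > x\bigr)
= P\bigl(R(X) > \alpha \log x\bigr)
= e^{-\alpha \log x}
= x^{-\alpha},
\qquad x \ge 1,
\]

so the transformed variable e^{R(X)/α} has a Pareto-type tail and hence moment index exactly α.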
The last example gives a sufficient condition for the moment determinacy of a general non-negative heavy-tailed random variable. For background on moment problems, the reader is advised to see [5]. The condition is purely a tail condition, that is, modifications of the distribution on a finite interval of [0, ∞) do not change the result.
Example 8 (Moment determinacy via decay speed of scale function). Suppose X ≥ 0 is a heavy-tailed random variable and let h be its natural scale. We recall that a random variable X ≥ 0 is determined by its moments if Using Hardy's condition it is possible to give a limit test for moment determinacy involving the concept of natural scale. This test has two benefits compared to other tests, such as the Carleman condition or finiteness of the Krein integral (see [5] for these tests).
1. The decision is made using the asymptotic properties of the tail function: small realisations of X are irrelevant.
2. No assumption of absolute continuity with respect to the Lebesgue measure is needed.
Suppose
Then the distribution of X is determined by its moments. The implication is easily verified by observing that (32), together with the definition of natural scale and positivity, yields, by the connection of Theorem 2, the existence of c > 0 such that E(e^{c√X}) < ∞; Theorem 1 of [10] then confirms that X is determined by its moments.
Remark 5.
In example 8 X is assumed heavy-tailed. This is not a limitation, because any light-tailed random variable automatically satisfies the condition E(e^{cX}) < ∞ for some c > 0 and is therefore immediately determined by its moments.
A Omitted technical details
A.1 Construction of the function h_η of Theorem 2
In Theorem 2 it was claimed that a function satisfying (10) exists. To construct the function h_η, recall that the function h itself is assumed increasing and continuous. Therefore, we may construct a sequence (y_k), where y_k is defined to be the unique solution of the equation h(x) = kη. Now, we set h_η(y_k) = kη for all k ∈ N and define families of functions A_k in the following way: f ∈ A_k if and only if
A.2 Construction of the function g of Theorem 3
Suppose we are given a hazard function R of a heavy-tailed random variable. We must now find a continuous function g, depending on R, such that g(x) = o(R(x)), as x → ∞.
Since R is right continuous, we may define a sequence (x_n) by x_0 := 0 and x_n := min{x : R(x) ≥ n}, n ∈ N.
We note that x_n → ∞, as n → ∞. Define g(x_0) = g(0) = 0 and g(x_n) := √n − 1 for all n ∈ N. Between consecutive points of the sequence (x_n), the values of g are given by linear interpolation, for all n ∈ {0, 1, 2, . . .}. The function g is continuous and, by construction, g(x) = o(R(x)), as x → ∞.
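The recipe above is easy to prototype numerically; the sketch below is our own, works on a finite grid, and reads the knot values as sqrt(n) − 1. It checks that g(x)/R(x) → 0 for a Weibull-type hazard:

import numpy as np

def build_g(R, x_max, n_grid=200_000):
    # Knots: x_0 = 0 and x_n = min{x : R(x) >= n}; knot values g(x_n) = sqrt(n) - 1;
    # between knots, g is defined by linear interpolation.
    grid = np.linspace(0.0, x_max, n_grid)
    R_vals = R(grid)
    n_max = int(np.floor(R_vals[-1]))
    knots = [0.0] + [grid[np.argmax(R_vals >= n)] for n in range(1, n_max + 1)]
    values = [0.0] + [np.sqrt(n) - 1.0 for n in range(1, n_max + 1)]
    return lambda x: np.interp(x, knots, values)

R = lambda x: np.sqrt(x)                 # hazard of the Weibull-type tail exp(-x**0.5)
g = build_g(R, x_max=10_000.0)
for x in (10.0, 100.0, 1_000.0, 10_000.0):
    print(x, g(x) / R(x))                # the ratio decreases toward 0, i.e. g(x) = o(R(x))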
A.3 Details for remark 2
Suppose we are given a concave function h as in remark 2. By concavity, for any y ∈ (0, 1) and x > 0, | 2013-10-04T08:40:03.000Z | 2013-10-04T00:00:00.000 | {
"year": 2013,
"sha1": "972fd6f62d8a6433cfd8e0c8e5698ecc1604e71e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "972fd6f62d8a6433cfd8e0c8e5698ecc1604e71e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
257914961 | pes2o/s2orc | v3-fos-license | The impact of road traffic context on secondary task engagement while driving
Introduction Driver distraction has been recognized for a long time as a significant road safety issue. It has been consistently reported that drivers spend considerable time engaged in activities that are secondary to the driving task. The temporary diversion of attention from safety-critical driving tasks has often been associated with various adverse driving outcomes, from minor driving errors to serious motor vehicle crashes. This study explores the role of the driving context on a driver’s decision to engage in secondary activities non-critical to the driving task. Method The study utilises the Naturalistic Engagement in Secondary Tasks (NEST) dataset, a complementary dataset derived from the SHRP2 naturalistic dataset, the most extensive naturalistic study to date. An initial exploratory analysis is conducted to identify patterns of secondary task engagements in relation to context variables. Maximum likelihood Chi-square tests were applied to test for differences in engagement between types of driver distraction for the selected contextual variables. Pearson residual graphs were employed as a supplementary method to visually depict the residuals that constitute the chi-square statistic.Lastly, a two-step cluster analysis was conducted to identify common execution scenarios among secondary tasks. Results The exploratory analysis revealed interesting behavioral trends among drivers, with higher engagement rates in right curves compared to left curves, while driving uphill compared to driving downhill, in low-density traffic scenarios compared to high-density traffic scenarios, and during afternoon periods compared to morning periods. Significant differences in engagement were found among secondary tasks in relation to locality, speed, and roadway design. The clustering analysis showed no significant associations between driving scenarios of similar characteristics and the type of secondary activity executed. Discussion Overall, the findings confirm that the road traffic environment can influence how car drivers engage in distracted driving behavior.
Introduction
Driver distraction involves a secondary task engagement while driving.Driver distraction has long been recognized as a major concern for road traffic safety.Regan et al. (2011) explain that driver distraction occurs when attention is diverted from safety-critical driving activities towards a competing activity.This temporary diversion of attention caused by the execution of competing activities at critical times has been recognized as a contributor to unwanted driving outcomes from minor errors to motor vehicle crashes (Dingus et al., 2016;Oviedo-Trespalacios et al., 2018b).Research has shown that drivers spend a considerable amount of time engaged in secondary activities while driving.A naturalistic study by Stutts et al. (2005) reported that drivers spent around 30% of the total motion time executing a distracting activity.Similarly, a more recent observational study (Young et al., 2019) reported that on average drivers engaged in nine secondary tasks per trip and spent 44.4% of the total driving time engaged in at least one secondary task.Over the years, the adoption of in-vehicle and portable technologies has added to an already long list of potential sources of distraction.Although order of prevalence may vary among studies, the most commonly distractions are usually conversing with passengers, eating and drinking, smoking, and manipulation of in-vehicle and portable electronic devices.
Driving behavior and decision-making can be modelled as a multi-level process.Michon (1985) proposed a hierarchical behavioral model for the driving task that explains action taking within three levels of resolution.The first two levels (operational and tactical) comprise factors prior to engagement and are key to identify the determinants of driving actions including any tasks considered as distractors.At the top level (strategic), the driver plans the journey including trip goals, route to take, time, and even defines which distracting activities are deemed acceptable to perform if the opportunity arises.At the tactical level, the driver negotiates the execution of a particular action depending on the overall demand of the ongoing driving task, the prevailing driving context circumstances and the expected demands of the new task.Lastly, operational level decisions are made post-engagement.Although these decisions are not determinants of the behavior, they come as a result of the actions taken.For example, reducing the speed to diminish overall workload and accommodate for the demands of new tasks being introduced.Between both pre-engagement decision making levels, the strategic level has received considerably more attention regarding the determinants behind secondary task engagement while driving compared to the tactical level.Psychosocial theories focused on planned behavior have been applied to explain drivers' actions from an intentional or strategic point of view.On the other hand, research on tactical level decision making to engage in secondary tasks has been more limited.Considering several studies have provided evidence of discrepancies between an individual's reported intentions and actual future behavior (Preece et al., 2018), analyzing the decisionmaking process at a tactical level is key to determine when and where engagement is more likely to take place.
In literature some evidence can be found that suggests drivers avoid engaging in secondary activities in high demanding driving scenarios (Liang et al., 2015;Kidd et al., 2016;Oviedo-Trespalacios et al., 2018a,b, 2019) while favoring those scenarios that they perceive to be less demanding.For instance, secondary task engagement has been reported to be more prevalent among drivers at standstill (i.e., stopped at controlled intersections) when compared to drivers in motion (Funkhouser and Sayer, 2012, Metz et al., 2014, Huisingh et al., 2015).Conditions that drivers seem to avoid include sharp curves, bad weather conditions, school areas, and high speeds.However, evidence to the contrary has also been presented.For instance, some research efforts, based on naturalistic data, have been unable to find associations between secondary task engagement and the characteristics of the driving environment including road surface conditions, time of drive, etc. (Stutts et al., 2001;Klauer et al., 2006).Furthermore, most research on the prevalence of secondary task while driving has concentrated on mobile phone use, with only a few studies examining the prevalence of other secondary tasks (Kidd et al., 2016).
This study further investigates the relationship between tactical components of the driving task and the decision to engage in driver distraction related activities using naturalistic data.Particular attention will be given to identifying what categories of secondary tasks share similar contextual characteristics for execution.It is hypothesized that secondary tasks with a similar level of complexity and resource demands will be executed in comparable driving environments.
Naturalistic engagement in secondary tasks dataset
The data used for this study was retrieved from the Naturalistic Engagement in Secondary Tasks (NEST) dataset.The NEST dataset is derived from the naturalistic driving data gathered by the Second Strategic Highway Research Program (SHRP2), a large-scale naturalistic study covering over 3,500 participants and six U.S. states over a three-year collection period.Data from the SHRP2 program was collected from instrumented vehicles equipped with a data acquisition system (DAS) with multiple channels for video and sensor data.Recorded data included information on variables such as vehicle speed, acceleration, lane position and location, as well as forward and rear camera views, and recordings of the drivers' face and hands.
Publicly available SHRP2 data regarding secondary task engagement and its relation to crashes and near-crashes has a short time span of 6 s surrounding critical events.The main advantage of the NEST dataset over the SHRP2 data is that it allows for the study of engagement in secondary tasks for an extended period of time.The NEST dataset consists of close to a thousand excel files with both timeseries and summary data related to secondary task engagement.Specifically, the NEST dataset was developed to provide extended detailed information on multiple factors related to secondary task engagement for time periods surrounding distraction-related safetycritical events (crash or near-crash) and baseline epochs with no safety-critical events associated.To select the trips to code from the SHRP2 dataset, all crash and near crash events preceded by engagement in a secondary task as a potential contributing factor were identified.A total of 236 safety-critical events were recognized and for each driver experiencing a safety-critical event, four baseline epochs were coded, for a total of 944 baseline epochs in the dataset.
In this study, only baseline epochs are of interest. Baselines are defined as 20 s epochs in which the driver was not involved in a crash or near crash. Contextual variables are coded in baseline epochs at different levels of resolution. Both summary and time-series data are used to describe contextual characteristics during the period of time to be analyzed.
Time-series data are coded using frame-by-frame analysis and are recorded for every millisecond in time.Summary data describes the baseline epochs at an event level.The reduced summary data describes the 20s period in two levels of resolution.Variables such as weather are coded once for the entire 20s period while variables such as traffic density are coded at the end of every 10 s of the event (i.e., two times per event) as seen in Figure 1.Table 1 lists the contextual variables of interest along with their respective levels of resolution.It is important to highlight that only variables that were possible to accurately extract from the dataset were included.
Further data reduction
Note that it is possible to identify one, multiple or none of the secondary tasks during a baseline epoch.In addition, baseline epochs may not contain the point of initiation of a secondary task.
To obtain the final dataset, additional filtering and data reduction are performed; the process is as follows: 1. Baseline events containing at least one secondary task are retained.
This filter eliminated 208 baseline events with no secondary tasks associated, for a remaining total of 736 baseline events. 2. The remaining baseline events are filtered to include only those in which the engagement point of the secondary task is recorded within the 20 s epoch and in which there is no simultaneous occurrence of more than one secondary task. 3. For analysis purposes, similar secondary tasks are grouped together into new categories as shown in Table 2. 4. As summary data are recorded in 20 s or 10 s blocks, it is necessary to match the summary data to a time-series level of resolution at the point of secondary task initiation. The logic is as follows: all secondary tasks starting between 5 and 14 s are allocated the summary data recorded at the end of the first 10 s block, while all secondary tasks starting from the 15 s mark are allocated the summary data recorded at the end of the second 10 s block. All secondary tasks starting within the first 5 s of the 20 s window are excluded, as the summary data collection points are distant from the occurrence of the secondary task (Figure 2); a short illustrative sketch of this allocation rule is given after this list.
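For illustration only — this is a hypothetical helper written by us, not code from the NEST/SHRP2 reduction — the allocation rule in step 4 can be expressed as:

def summary_block_for_task(start_time_s):
    # Map the onset time (seconds within the 20-s baseline epoch) of a secondary task
    # to the 10-s summary block whose data it receives, following the rule above.
    if start_time_s < 5.0:
        return None   # onset in the first 5 s: excluded (too far from a summary collection point)
    if start_time_s < 15.0:
        return 1      # allocated the summary data recorded at the end of the first 10-s block
    return 2          # allocated the summary data recorded at the end of the second 10-s block

print([summary_block_for_task(t) for t in (3.0, 9.0, 17.0)])   # -> [None, 1, 2]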
Statistical analysis
The statistical analysis is divided into two phases. First, a descriptive statistical analysis is performed to describe patterns of engagement of the selected secondary tasks under different contextual variables (Table 3). A maximum likelihood Chi-square test was applied to test for differences in engagement between types of driver distraction for the selected contextual variables. A p-value of <0.05 was considered statistically significant. Pearson residual graphs were used to visualize the cells contributing the most to the Chi-square score. The residuals quantify the difference between the observed data proportions and the expected data proportions under the assumption that there is no relationship between the row and column variables.
Table 1 (excerpt): Speed — vehicle speed calculated from change in GPS position or indicated by the speedometer; time-series resolution.
Next, a two-step cluster analysis procedure was used to analyze the influence of contextual factors on secondary task engagement while driving. The procedure is an exploratory tool that groups cases (objects to be clustered) based on homogeneous responses to several variables (attributes). The objective of the clustering analysis is to determine whether any of the secondary tasks share similar contextual characteristics for execution. The analysis is based on the idea that tasks executed at the same time compete for a shared pool of multiple resources, as suggested by several attentional resource theories. As a result, the extent to which the driving task and any secondary tasks performed simultaneously are able to allocate the available resources will determine whether the driver deems their execution as having non-significant cross-task interference. Based on this, driving scenarios with comparable demands would allow similar levels of free resources to be allocated to other tasks, and therefore secondary tasks of similar characteristics should be accommodated in driving contexts with comparable demands. The analysis is conducted in IBM SPSS Statistics (version 27) and, as indicated by its name, it consists of two major steps.
Step 1.In the first step, a sequential clustering approach is used to create many subclusters.The process consists of constructing a Cluster Features (CF) tree of the cases.After the initial case is placed at the root of the tree, then successive decisions on whether the next case joins an already formed cluster or a new cluster are made based on a similarity measure.If all attributes are continuous, cases are grouped in the subcluster using the smallest Euclidean distance.To handle continuous and categorical variables, the log-likelihood distance measure, a probability-based distance, is used.Cases are grouped in the cluster with the highest likelihood measure.To implement this measure, continuous variables are assumed to have a normal distribution while categorical variables are assumed to have a multinomial distribution.Additionally, all variables are assumed to be independent.However, the two-step clustering procedure has proven to be robust to violations of independence and distributional assumptions.
Step 2. In the second step, an agglomerative hierarchical clustering algorithm is used to merge the subclusters stepwise into the desired number of clusters. The process starts by selecting a starting cluster for each of the sub-clusters formed in Step 1. Clusters are compared, and the pair of clusters that yields the smallest distance is merged. The measure of the distance between two clusters corresponds to the decrease in log-likelihood when the two clusters are merged. The merging process is repeated recursively until the final number of clusters is reached. The final number of clusters can be a previously fixed number, or it can be determined automatically by choosing between two possible criteria: either the Schwarz Bayesian Criterion (BIC) or the Akaike Information Criterion (AIC) can be chosen as the clustering criterion. The optimal value is found by comparing the values of the chosen clustering criterion across different clustering solutions.
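The SPSS TwoStep procedure itself is not available in open-source form, but the criterion-based choice of the number of clusters can be mimicked. The sketch below is a rough stand-in (not the same algorithm), with invented toy attributes, assuming pandas and scikit-learn: it one-hot encodes a categorical attribute, standardises a continuous one, and keeps the cluster count with the smallest BIC.

import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "speed": rng.normal(70, 20, size=300),                                   # continuous attribute
    "design": rng.choice(["divided", "non-divided", "one-way"], size=300),   # categorical attribute
})

X = pd.get_dummies(df, columns=["design"]).astype(float)        # one-hot encode the categorical variable
X["speed"] = (X["speed"] - X["speed"].mean()) / X["speed"].std()

bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X) for k in range(1, 7)}
print(min(bic, key=bic.get), bic)   # the number of clusters with the smallest BIC is retained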
For validation purposes, maximum likelihood Chi-square tests were carried out after cluster formation.The tests assessed whether significant differences were present for contextual variables betweenclusters.If differences were not significant, the cluster analysis was repeated maintaining the contextual variables found to be of relevance in cluster partitioning (Table 4).
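Both the descriptive phase and this validation step rely on the likelihood-ratio (G) test. As a rough sketch with made-up counts (assuming numpy and scipy are available), the G statistic and the Pearson residuals described above can be computed as follows:

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = secondary-task types, columns = roadway design categories.
observed = np.array([[40, 25, 3],
                     [35, 30, 2],
                     [12, 24, 1]])

# lambda_="log-likelihood" requests the likelihood-ratio (G) statistic rather than Pearson's X^2.
g_stat, p_value, dof, expected = chi2_contingency(observed, lambda_="log-likelihood")

# Pearson residuals: (observed - expected) / sqrt(expected); cells with large absolute
# residuals contribute most to the association.
residuals = (observed - expected) / np.sqrt(expected)
print(round(g_stat, 2), round(p_value, 4), dof)
print(residuals.round(2))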
Descriptive analysis
General descriptive statistics of the data analyzed in the present study are presented in Table 3. Detailed analyses are presented in the following subsections.
Locality
Occurrences of secondary task engagement were more prevalent in Business/Industrial localities defined as areas where any type of business or industrial structure is present.Overall, around 35.8% of all secondary tasks started in Business/Industrial localities.Other common localities for secondary task engagement were moderate residential areas (multiple houses or apartment buildings are present) with close to 17.8% of occurrences and the Interstate/bypass/divided highway with no traffic signals category with around 22.9% of occurrences.
The association between locality and type of secondary task was significant according to the maximum likelihood Chi-square test (G = 55.891,p < 0.05).For all secondary tasks, except grooming, engagement was more frequent in Business/Industrial localities.Passenger interactions and dancing returned the highest rates of occurrences in Business/Industrial localities with 42 and 50% of their total occurrences executed in this category, respectively.Grooming was more common in moderate residential areas.
Engagement rates for mobile phone use and internal device use were higher for the Interstate/bypass/divided highway with no traffic signals category when compared to the engagement rates of the remaining secondary activities within this category.Pearson residuals (Figure 3) visually confirmed some of the findings including a higher engagement than expected in grooming tasks in moderate residential localities.Other associations that were noticeable were a higher engagement in passenger interaction in highways with traffic signals and a lower engagement for tasks such as dancing, grooming and passenger interactions tasks in highways with no traffic signals.
Roadway alignment
All secondary activities were more commonly executed in straight segments as expected given that other road configurations are not as common during the driving task.Engagement rates in right curves were close to doubled when compared to engagement rates in left curves for mobile phone use (both texting and holding) and tripled for internal device use.Dancing and grooming were reported to have similar engagement rates for both right and left curves.Only passenger interactions showed a higher engagement rate in left curves when compared to right curves which accounted for a third of the total occurrences in curved segments.
Level of service
The level of service variable was used to describe the density of the traffic during the driving task.Six different traffic density levels, from A to F, were defined based on the number of vehicles, and the ability of the driver to select the driving speed.For all secondary tasks, engagement decreased as traffic density increased.Levels A and B were shown to have the higher rates of engagement with 61.6 and 28.8% of total secondary task occurrences, respectively.Level A is defined as free flow conditions with drivers unaffected by the presence of others and unrestricted manoeuvrability and ability to select desired speeds.Level B comprises stable flow conditions where influence from other users starts to be noticeable, while desired speeds are relatively unaffected, manoeuvrability within the traffic stream is slightly diminished when compared to free flow conditions.Engagement in the remaining levels of service was shown to decrease as traffic density levels increased.
Roadgrade
As expected, given predominant road configurations, all secondary activities were more commonly executed in road segments with a level grade (83%).Higher rates of engagement were found while drivers circulated in grade up roads when compared to grade down roads for all secondary activities considered.
Intersections entered
The variable contains the number of controlled intersections entered during the recorded event. When comparing secondary tasks, the highest rate of engagement at intersections was reported for grooming tasks, while the lowest rate of engagement was reported for dancing.
Ambient lighting
All secondary activities were more commonly executed under daylight conditions, with 72.3% of total occurrences. Engagement under dark ambient lighting, both with lighted and non-lighted roads, accounted for close to 23% of total occurrences. For secondary tasks such as texting, passenger interactions and dancing, engagement rates under darkness conditions with road lights more than doubled engagement rates under darkness conditions on non-lighted roads. Holding a mobile phone, internal device use and grooming recorded similar engagement rates regardless of the presence of road lights. Dusk and dawn reported very low rates of engagement; however, in general, engagement rates during dusk were higher.
Figure 3. Pearson residuals for locality and secondary task type.
Time bins
Engagement in secondary tasks while driving was higher during the afternoon periods compared to morning periods.The highest peaks of engagement in secondary task occurred during the three time bins comprising the 12-9 PM time window with an even distribution.About 16.2% of secondary task occurrences started between 6 and 9 PM, where holding a mobile phone and internal device use reported the highest number of occurrences for their category during the day.Texting, dancing and grooming reached their peak during the 12-3 PM time period.Mobile phone use rates were higher between 6 and 9 PM when compared to the remaining activities.No secondary task occurrences were recorded between 9 PM and midnight for all activities in consideration.
Number of lanes
All secondary activities were more commonly executed while circulating on two-lane roads, with about 42% of the total number of occurrences. In general, engagement was most prevalent on 2- to 4-lane roads. Engagement on three-lane roads constituted around 19.2% of occurrences, while engagement on four-lane roads accounted for 15.12% of total occurrences. For internal device use and holding a mobile phone, the rates of engagement were close to double on four-lane roads when compared to three-lane roads. Texting and dancing displayed similar rates of engagement on both 3-lane and 4-lane roads, while passenger interactions and grooming displayed lower rates of engagement on four-lane roads when compared to three-lane roads. Interestingly, the grooming engagement rate on 5-lane roads was considerably higher compared to all other activities.
Roadway design
Engagement in secondary tasks while driving was similar for divided (median strip or barrier) and non-divided roads, with 48.3 and 44.2% of total occurrences, respectively. The association between roadway design and type of secondary task was significant according to the maximum likelihood Chi-square test (G = 28.51, p < 0.02). When analyzing by secondary task type, mobile phone use rates on divided roads were considerably higher when compared to engagement rates on non-divided roads, as visually corroborated by the Pearson residuals graph in Figure 4. Engagement rates were also higher for passenger interactions on divided roads when compared to non-divided roads; however, the difference was smaller. All other secondary tasks were more prevalent on non-divided roads than on divided roads. Dancing and grooming engagement rates on non-divided roads were close to double the engagement rates on divided roads. Internal device use was also higher on non-divided roads, but the difference was considerably smaller. One-way traffic roads registered low engagement rates for all secondary activities considered.
Speed
The mean speed of engagement for most secondary tasks ranged from 60 to 70 km/h.The association between speed and type of secondary task was significant according to the maximum likelihood Chi-square test (G = 33.877,p < 0.03).
Engagement in texting occurred at higher rates between 45 and 110 km/h with an increasing trend between that range.Rates of engagement were more evenly distributed for mobile phone holding with higher engagement rates between 0 and 45 km/h when compared to texting.Both mobile phone activities showed higher proportion of engagement above 110 km/h when compared to other activities (Figure 5).For internal device use, two peaks of use are noticeable, one between 45 and 70 km/h and another between 90 and 110 km/h with a gap of lower occurrences in between.Occurrences in both peaks are noticeably higher compared to other secondary tasks in the same speed ranges.Internal device use between 20-45 km/h and 65-90 km/h was also noticeably lower when compared to other secondary tasks in the same speed bins.Most occurrences of passenger interactions took place between 20 and 110 km/h with a fairly even distribution with a slightly decreasing trend, engagement rates below 20 km/h and above 119 km/h were markedly lower in comparison.
Engagement in grooming tasks increased steadily between 0 and 90 km/h with the lowest rates of engagement for speeds above 90 km/h.Dancing tasks seem to favor lower speeds with most occurrences below 90 km/h.Engagement rates between 20-45 km/h were higher when compared to other secondary tasks.
Cluster analysis
Two-step cluster analysis was carried out in 271 occurrences of secondary task engagement while driving to identify common scenarios for execution.The final number of clusters was determined in accordance with the Schwarz's Bayesian criterion (BIC).Two context variables were eliminated during the validation phase, alignment and road grade (Table 4).
Two distinct clusters were identified: Cluster 1 comprised 55.7% of secondary task occurrences, while Cluster 2 comprised 44.3%. The silhouette measure, which contrasts the average distance to elements within the same cluster with the average distance to elements of the nearest neighbouring cluster, yielded a value of 0.3 (fair), while the ratio of sizes from largest to smallest cluster was 1.26. Predictor importance is displayed in Figure 6.
Roadway design was the most important predictor for cluster classification.In Cluster 1, the majority of secondary task occurrences (79.5%) took place on non-divided roads.On the other hand, secondary task occurrences in Cluster 2 were more predominant in divided roads (95.8%).The second most relevant predictor was locality.In Cluster 1, the business/industrial category was the most common locality for engagement (39.07%) and a higher engagement was evidenced in residential areas, both open and moderate, when compared to Cluster 2. For Cluster 2, most engagement occurrences took place in highways with no traffic signals (47.5%), business/ industrial localities (32.5%), and highways with traffic signals (18.3%), in that order (Figures 7, 8).
Speed was ranked third in terms of importance.Cluster 1 was characterized by lower speeds with a mean speed of 54.21 while the mean speed for Cluster 2 was higher at 83.35.
The fourth predictor in importance was the number of lanes.Occurrences in Cluster 1 were markedly higher in 2 lane roads, comprising 54.3% of the total occurrences.The remaining bulk of occurrences was mostly allocated in 3-and 5-lane roads, with around 13.2 and 12.6% of total instances, respectively.Oppositely, Cluster 2 was characterized by higher peaks of engagement between 2-and 4-lanes roads, each configuration accounting for around a quarter of the total occurrences (Figures 9,10).
Level of service occupied the fifth position in importance.While secondary tasks occurrences were more common in A and B level of service conditions for both clusters, the distribution was not the same.Cluster 1 contained a markedly higher (78.1%) number of occurrences under level of service A conditions compared to a merely 19.9% of occurrences under B level of service conditions.For cluster 2, the number of occurrences under A and B level of service conditions was fairly similar, with 40.83 and 40%, for levels A and B, respectively.In addition, occurrences for level of service C were markedly higher in Cluster 2 when compared to Cluster 1. Occurrences for levels of service D-F were low for both clusters, however, Cluster 1 contained less occurrences of engagement compared to Cluster 2 (Figure 11).
The influence of intersections was ranked as the sixth predictor in importance.Occurrences of engagement in secondary activities at intersections were less common in instances contained in Cluster 2 (4.2%) when compared to instances in Cluster 1 (25.2%; Figure 12).Time bin and ambient lighting were positioned seventh and eighth in importance among predictors.In Cluster 1, the lowest rates of engagement occurred between 3 AM and 12 PM, with a rising trend.From 12 PM to 3 AM, engagement rates were the highest with a mostly uniform distribution with slight peaks in the 12-3 PM and the 6-9 PM time windows.In Cluster 2, Lower rates of engagement were evidenced between 12 and 6 AM, for the rest of the day (6 AM to 9 PM), engagement rates were higher but mostly uniformly distributed with a slight peak between (3 and 6 PM).As a result, Cluster 1 contained more instances of engagement during darkness when compared to Cluster 2 (Figures 13, 14).The distribution of secondary tasks clusters is shown in Table 5.Both mobile phone secondary tasks, texting and holding, were more prominent in Cluster 2 when compared to Cluster 1, however, the difference was not large.The remaining activities were more prominent in Cluster 1, while internal device and passenger interaction were more evenly distributed among the two clusters, dancing and grooming were noticeably more prominent in Cluster 1. None of these differences were deemed significant when conducting maximum likelihood chi-square tests.
Discussion
This study investigated the relationships between road traffic context of the drivers' decisions to engage in distracted driving.An initial descriptive analysis found several driving behavioral patterns in relation to contextual variables and secondary task engagement.For instance, higher engagement in secondary tasks was reported during right curves when compared to left curves.A possible explanation for this might be that drivers are under higher attentional demands while driving in left curves due to oncoming traffic and blind spots.Although the rates varied depending on the secondary tasks, interestingly, secondary tasks that require manual interactions such as mobile phone activities and internal device use were more prominently executed in right curves when compared to left curves.On the contrary, passenger conversations were the only task in which engagement was markedly higher in left curves.It is possible that the perceived higher demands of left curves might compel the occupants of the vehicle to engage in conversations to provide input regarding the ongoing driving manoeuvre or the driving context.
Engagement in secondary tasks was also higher while driving uphill compared to driving downhill, which was consistent for all secondary tasks.The most evident explanation for this is that maneuverability requirements and the rapid increase of speed while going downhill discourages engagement in secondary tasks.A naturalistic study by Deng et al. (2019) found greater physical load is required for foot-operated control on downhill segments compared to uphill segments.The study found that pedal force can be regarded as an index that effectively describes the driver's psychological workload.As such, the continuous speed increase under the action of the force component along the slope direction while driving downhill results in a higher workload in which drivers use more pedal force to avoid loss of directional control.
Another finding that applied for all secondary tasks was a clear trend of decreasing engagement as traffic density increases.Previous research has shown that traffic density increases workload as drivers need to monitor more closely elements of the dynamic traffic conditions such as speed variations, headway distances, traffic flow conditions, changes in lateral position and presence of lane changes (de Waard et al., 2008;Teh et al., 2014).Therefore, it is expected that less resources are available to be allocated for secondary task engagement.Similar results were obtained in a naturalistic study by Gershon et al. (2017) who found that engagement in secondary activities among teens was more prevalent in free flow conditions compared to flow conditions with restrictions.Analysis conducted using data from the Australian Naturalistic Driving Study (ANDS) also showed that secondary task engagement while driving decreased as traffic density increased (Young et al., 2019).
Engagement in secondary tasks while driving was higher during the afternoon periods compared to morning periods. The most common hours for engagement were between 12 and 9 PM. Although studies tend to differ in the grouping of time bins, a previous observational study by Kidd et al. (2016) also reported higher rates of engagement in the afternoon (11 AM-1 PM) and the evening (4:30-7 PM). In addition, several research efforts focusing solely on mobile phone use while driving have also indicated a prevalence of engagement during the afternoon periods (e.g., Xiong et al., 2014; Sullman et al., 2015). A potential explanation for this is that drivers are using the mobile phone when commuting, to make the experience more enjoyable or useful (Jachimowicz et al., 2021). When the workday is over, individuals may use their mobile phones while commuting in an attempt to replenish the resources lost during the workday, for example by chatting with family or friends or detaching from work by looking at posts or pictures (Ohly and Latour, 2014). Previous research with workers in Italy also demonstrated that work experiences influence phone use during driving commutes (Costantini et al., 2022). To determine whether there was an association between contextual variables and the types of secondary tasks, maximum likelihood Chi-square tests were carried out. Associations of roadway design, locality, and speed with types of secondary task were found to be significant. In relation to roadway design, mobile phone use on divided roads was considerably higher when compared to non-divided roads. This is in line with previous findings by Sharda et al. (2019) using the SHRP2 dataset, who theorized that the prevalence of engagement on divided roads was the result of the sense of safety given by the existence of barriers shielding the driver from oncoming vehicles in their path. In contrast, grooming and dancing were noticeably higher on non-divided roads when compared to divided roads. Both tasks were also more common in residential areas in comparison with other secondary tasks. A potential explanation is that drivers in residential areas may be near home, which results in poor risk calibration. Familiarity with the road and its activities results in automaticity and inattention, which generally explains why it is more likely that drivers will have a crash closer to home (Charlton and Starkey, 2013). In this case, that familiarity could be linked with a perception of more spared capacity and therefore a higher likelihood of engagement.
Regarding localities, for all secondary tasks (except grooming), engagement was more frequent in business/industrial localities. Grooming was more common in moderately residential areas, which might be logical as drivers may try to complete grooming tasks after leaving their residences when pressed for time. Similarly, internal device use in moderately residential areas was also shown to be higher compared to most secondary tasks, which could be explained by drivers setting internal components of the vehicle at the beginning of the trip. On highways with no traffic signals, rates of mobile phone and device use were markedly higher in comparison to tasks such as grooming and dancing. In contrast, highways with traffic signals seemed to favor tasks such as passenger interactions and dancing.
Mobile phone use tasks exhibited similar engagement patterns to the remaining secondary tasks for speeds above 45 km/h. For speeds below 45 km/h, the two tasks exhibited contrasting behavior, with lower engagement rates for texting. These results are partially in line with findings from a systematic literature review on mobile phone distraction while driving conducted by Cuentas-Hernandez et al. (2023). While lower engagement at the highest speeds was a common finding, the results for low speeds were dissimilar. Most of the studies reviewed by Cuentas-Hernandez et al. (2023) that considered visual-manual mobile phone tasks reported higher engagement rates at low speeds or while the vehicle was stopped. On the contrary, the distribution of instances of engagement for texting and holding activities in the NEST dataset showed the lowest rates of engagement for speeds below 25 km/h. Passenger interactions, dancing and grooming also shared similar speed preferences for engagement when compared to the remaining tasks. Internal device use displayed the most dissimilar engagement preferences when compared with the remaining tasks. Engagement between 20-45 km/h and 65-90 km/h was markedly lower, while engagement between 45-70 km/h and 90-110 km/h was markedly higher when compared to all other tasks.
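The maximum likelihood (likelihood-ratio) Chi-square tests of association mentioned above can be illustrated with a minimal sketch; the contingency table below uses made-up placeholder counts, not the study's data, and the task and roadway categories are only examples.

# Hypothetical illustration (placeholder counts, not the NEST data): a maximum
# likelihood (likelihood-ratio, G-test) chi-square test of association between
# secondary task type and roadway design.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [120, 60],   # mobile phone use: divided vs non-divided roads
    [30, 55],    # grooming
    [25, 50],    # dancing
    [80, 70],    # passenger interaction
])

g_stat, p_value, dof, expected = chi2_contingency(observed, lambda_="log-likelihood")
print(f"G = {g_stat:.2f}, dof = {dof}, p = {p_value:.4f}")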
A cluster analysis was conducted to determine which secondary tasks shared similar driving scenarios for execution. Several attentional resource theories suggest that when executing tasks simultaneously, the driver allocates free resources from a shared pool among the tasks. Therefore, driving scenarios sharing similar levels of complexity should allow a similar amount of free resources to be allocated to secondary tasks. It was expected that secondary tasks sharing similar resource demand characteristics would be accommodated in driving scenarios with comparable levels of complexity. The less complex the driving scenario, the greater the driver's ability to accommodate high-demand secondary tasks (Onate-Vega et al., 2020).
The two-step cluster analysis yielded two clusters using eight contextual variables. There were several distinctive characteristics between the two clusters. Driving scenarios in Cluster 2 included some contextual characteristics that have been commonly associated with high-demand scenarios. For instance, occurrences in Cluster 2 took place at higher speeds and were more common in denser traffic scenarios when compared to Cluster 1. In addition, execution was not as common at intersections, which are often preferred when executing complex secondary tasks due to the momentary reduction of driving demands while at a standstill. While Cluster 1 occurrences were more prominent in business/industrial locations and residential areas, Cluster 2 occurrences were rarely executed in residential locations. Most instances of engagement in Cluster 2 took place on highways and in business/industrial locations.
Another important difference is that occurrences in Cluster 1 were less common between 6 AM and 12 PM and more common between 12 and 3 AM when compared to Cluster 2. Therefore, Cluster 1 contains more instances of secondary tasks performed under dark ambient light conditions. It is possible that the lower-demand driving scenarios contained within Cluster 1 allowed for execution under dark ambient light conditions, while the higher-demand scenarios contained in Cluster 2 discouraged execution while driving in dark ambient light conditions.
When considering the distribution of secondary tasks among clusters, no significant differences were found between the two clusters. Mobile phone secondary tasks were slightly more prominent in Cluster 2 when compared to Cluster 1. For texting, this suggests that motivation may have impaired drivers' self-regulation processes, as the expected lower engagement in texting in high-complexity scenarios was not observed. Additionally, as suggested by Oviedo-Trespalacios et al. (2019), texting may be considered by drivers a shorter and less intrusive task, which may lower their perceptions of risk.
The remaining activities were more prominent in Cluster 1, but again the differences were not found to be significant. These results seem to confirm that scenario-related variables alone only explain a small part of distraction while driving and that greater consideration should be given to task-related and personal factors. Drivers do indeed use the road traffic environment to assess engagement opportunities, but other systemic factors need to be addressed. This research confirms the conclusion by Oviedo-Trespalacios et al. (2018a,b) that self-regulation of mobile phone use depends on the context, the individual and the secondary task at hand.
Limitations
Limitations associated with the use of the NEST dataset were identified during the data reduction process. For baseline epochs, summary and frame-by-frame data are only available for a 20 s time frame, which did not necessarily include the point of initiation of the secondary task. Therefore, only secondary tasks that were initiated during the 20 s time window were included for further analysis. Tasks with a shorter execution period were favored for inclusion, while tasks with longer execution periods, such as phone calls, were less likely to register an initiation time during the 20 s time frame. As a result, some secondary activities of interest could not be included due to the low quantity of events retrieved. In addition, instances of missing data were encountered in the dataset, mostly impacting time series/frame-by-frame data variables. Finally, contextual variables such as weather and road surface condition were also excluded from the analysis, as the number of events retrieved during adverse weather conditions (rain, snow, fog) was relatively low.
Conclusion
This study investigated the relationship between contextual components of the driving task and the decision to engage in driver distraction activities. The analysis was carried out using the NEST dataset derived from the Second Strategic Highway Research Program (SHRP2), the largest naturalistic driving study in the United States to date. Several engagement behavioral patterns were identified. For instance, higher engagement in secondary tasks was reported when driving uphill compared to driving downhill, and during afternoons compared to morning periods. In addition, engagement in secondary tasks consistently decreased as traffic density increased. Drivers demonstrated a preference for engaging in distractions while driving along right curves compared to left curves.
Significant associations between context and the type of secondary task were found for three variables: roadway design, locality, and speed. In addition, a clustering analysis was conducted to identify secondary tasks that share similar contextual characteristics for execution. The two-step cluster analysis yielded two distinctive clusters, with one cluster encompassing scenarios associated with higher driving demands compared to the other. No significant differences were found for secondary tasks when considering their distribution among the two clusters. Results suggest that scenario-related variables alone only explain a small part of distraction while driving and that more significant consideration should be given to task-related and personal factors.
FIGURE 2 Time series data approximation.
FIGURE 4 Pearson residuals for roadway design and secondary task type.
FIGURE 5 Pearson residuals for speed and secondary task type.
FIGURE 6 Predictor importance for two-step clustering.
FIGURE 7 Roadway distribution in clusters.
FIGURE 8 Locality distribution in clusters.
FIGURE 9 Speed distribution in clusters.
FIGURE 10 Number of lanes distribution in clusters.
FIGURE 11 Level of service (LOS) distribution in clusters.
FIGURE 12 Intersections entered distribution in clusters.
FIGURE 13 Time bins distribution in clusters.
FIGURE 14 Lighting distribution in clusters.
TABLE 2 Secondary task categorization.
TABLE 3 Frequency of engagement in secondary tasks.
TABLE 4 Context variables between-cluster differences.
TABLE 5
Distribution of secondary tasks in clusters. | 2023-04-04T13:14:39.632Z | 2023-04-03T00:00:00.000 | {
"year": 2023,
"sha1": "4e4eecc79b19a94830ec0bc84adbeabe39d13b6a",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1139373/pdf",
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "6178e2eef717fdc8819464a75cf6052563387509",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
10006339 | pes2o/s2orc | v3-fos-license | Canadian Thoracic Society COPD Guidelines : Summary of highlights for family doctors
Denis E O'Donnell MD*, Paul Hernandez MD, Shawn Aaron MD, Jean Bourbeau MD, Darcy Marciniuk MD, Rick Hodder MD, Meyer Balter MD, Gordon Ford MD, Andre Gervais MD, Roger Goldstein MD, Francois Maltais MD, Jeremy Road MD, Valoree McKay§, Jennifer Schenkel§; *Chair, CTS COPD Guideline Development Committee; Chair, CTS Implementation/Dissemination Committee; Editorial Committee; §Canadian Lung Association Administrative Staff
Assessing Disability in COPD
• A combination of a long-acting anticholinergic and an LABA is recommended in addition to an as-needed short-acting beta2-agonist for immediate symptom relief.
• In patients with severe symptoms despite the use of both a long-acting anticholinergic and an LABA, a long-acting oral theophylline may be tried. Monitoring of theophylline blood levels for adverse effects and for drug interactions is necessary.
• Long-term maintenance treatment with oral corticosteroids has no proven benefit in COPD and is associated with a high risk of serious adverse effects.
• Unlike asthma, inhaled corticosteroids (ICSs) should not be used as a first-line medication in COPD. However, ICSs should be considered in patients with moderate to severe COPD who experience three or more acute exacerbations per year, especially if these exacerbations require treatment with oral steroids.
• Patients who remain breathless despite optimal bronchodilator therapy may benefit from the addition of a combination of ICS/LABA, but this should be considered on an individual basis.
Key Message #13:
All COPD patients should be encouraged to maintain an active lifestyle. COPD patients with activity-related shortness of breath tend to reduce activity levels to avoid precipitating respiratory discomfort. Deconditioning due to inactivity contributes to generalized skeletal muscle dysfunction so that even minor activity provokes limb fatigue. Clinically stable COPD patients who remain breathless and limited in their activity despite optimal bronchodilators should be referred to an exercise training program. Formal pulmonary rehabilitation programs that include supervised exercise training and patient education have been shown to consistently improve breathlessness, exercise endurance and quality of life, and may reduce emergency visits and hospitalizations in patients with COPD.
Key Message #14: Acute exacerbations of COPD (AECOPD) are the most frequent cause of medical visits, hospitalizations and death among COPD patients. AECOPD is defined as a sustained worsening of dyspnea, cough or sputum production leading to an increase in the use of maintenance medications and/or supplementation with additional medications. AECOPD is further classified as either purulent or nonpurulent. Antibiotics should only be considered in patients with purulent AECOPD.
History, physical examination and chest x-rays are recommended for patients with AECOPD. Sputum Gram stain and culture should be considered for patients with very poor lung function, those with frequent exacerbations or those who have been on antibiotics in the previous three months. Spirometry should be completed in patients suspected of having COPD only after recovery and when they are stable.
• Combination therapy with short-acting beta2-agonists and anticholinergic bronchodilators should be used to treat dyspnea in AECOPD. Patients already on an oral methylxanthine may continue this therapy during AECOPD, but there is no role for the new initiation of therapy.
• Oral or intravenous steroids should be administered for 14 days in most moderate to severe COPD patients with AECOPD; however, shorter treatment periods of between seven and 14 days may also be effective. Doses equivalent to 25 to 50 mg of prednisone per day are recommended.
TABLE 1 Canadian Thoracic Society chronic obstructive pulmonary disease (COPD) classification by symptoms/disability (COPD stage vs symptoms)*
*Postbronchodilator forced expiratory volume in 1 s (FEV1)/forced vital capacity (FVC) less than 0.7 and FEV1 less than 80% predicted are both required for the diagnosis of COPD to be established; †In the presence of non-COPD conditions that may cause shortness of breath (eg, cardiac dysfunction, anemia, muscle weakness, metabolic disorders), symptoms may not appropriately reflect COPD disease severity. Classification of COPD severity should be undertaken with care in patients with comorbid diseases or other possible contributors to shortness of breath. MRC Medical Research Council
FIGURE 1 Medical Research Council dyspnea scale. COPD Chronic obstructive pulmonary disease. Data from reference 1
FIGURE 2 A Escalating management paradigm for chronic obstructive pulmonary disease (COPD) based on increasing symptoms and disability (panel labels: early diagnosis [spirometry] + prevention, education/self-management, short- and long-acting bronchodilators, inhaled steroids, rehabilitation, O2, Sx, Rx AECOPD, follow-up, end-of-life care). B Current management deficiencies include lack of screening spirometry, education and rehabilitation, overuse of inhaled corticosteroids (Inh. steroids) in early disease and lack of structured end-of-life care. FEV1 Forced expiratory volume in 1 s; O2 Oxygen therapy; Rx AECOPD Treatment of acute exacerbations of COPD; Sx Lung volume reduction surgery or lung transplantation
Pharmacotherapy in COPD
• For patients with activity-related breathlessness and minimal disability, initial treatment should be with a short-acting beta2-agonist as needed (or a regular anticholinergic or combination anticholinergic/beta2-agonist). The choice of first-line therapy is based on clinical response and tolerance of adverse effects.
• If symptoms persist despite this, a long-acting bronchodilator such as an anticholinergic (tiotropium 18 µg qd) or a long-acting beta2-agonist (LABA) (formoterol 12 µg bid or salmeterol 50 µg bid) should be added. Continue a short-acting bronchodilator as needed for immediate symptom relief.
• For patients with moderate to severe persistent symptoms, a combination of a long-acting anticholinergic and an LABA is recommended in addition to an as-needed short-acting beta2-agonist for immediate symptom relief.
Key Message #15: COPD is a progressive, disabling condition that ultimately ends in respiratory failure and death. Physicians have a responsibility to provide support to COPD patients and their caregivers at the end of life.
• Antibiotics are beneficial in severe AECOPD, which are episodes associated with increased dyspnea and increased sputum purulence or volume. Patients with simple AECOPD (Table 2) have no risk factors for treatment failure; relatively inexpensive antibiotics can target likely pathogens. Patients with complicated AECOPD have risk factors for treatment failure and/or infection with more virulent or resistant organisms. If a patient requires repeated antibiotic therapy within three months, a different class of antibiotics should be used to minimize the risk of resistance.
• In severe AECOPD complicated by acute respiratory failure not responsive to initial bronchodilator therapy, ventilatory support may be indicated and beneficial. Consultation with a COPD specialist is recommended in this setting.
Please refer to the Canadian Respiratory Journal, Volume 10, Supplement A, for the complete document of the CTS COPD Guidelines.
TABLE 2 Antibiotic treatment recommendations for purulent acute exacerbations of chronic obstructive pulmonary disease (COPD)
REFERENCE 1. Fletcher CM, Elmes PC, Wood CH. The significance of respiratory symptoms and diagnosis of chronic bronchitis in a working population. Br Med J 1959;1:257-66. | 2017-10-11T05:37:46.002Z | 2003-01-01T00:00:00.000 | {
"year": 2003,
"sha1": "d14dc8676df8afb884b3be322ca5bb48b3bb7a22",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/crj/2003/861521.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "11b3ee6177652d2a4c053bf19e99edf0ddfbec16",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244073535 | pes2o/s2orc | v3-fos-license | Combined CFA-AFM Analytical approach for precipitation reaction regarding crystal growth building single and multiple monolayers based on surface area calculation with image surface roughness analysis (unseen surface)
Mebeverine hydrochloride (0.03 mM) in its pure form was reacted with sodium nitroprusside (0.07 mM) to form an off-white precipitate. A constant feed was used to collect an amount sufficient by weight for the AFM study. Continuous flow injection analysis was conducted, as the aim of this study is to combine FIA with AFM in order to elucidate the surface morphology. Various parameters of AFM image surface roughness analysis were discussed in relation to the kind of precipitate formed: skewness, kurtosis, peak-peak, ten-point height, fractal dimension, wavelength, core roughness depth, and reduced valley depth, together with the four main parameter groups, namely amplitude, hybrid, functional and spatial. Since no previous study of this kind had been conducted, all the usual imaging modes (contact, non-contact and intermittent contact) were defined; a non-contact mode was used in this study. A detailed, hypothetical account (based on the data obtained) is given of how crystal growth builds up the first monolayer, how much concentration each grain carries, and how many grains lie on the first monolayer. A number of sample surface area calculations are presented, together with a demonstration of the hypothetically formed multiple monolayers, especially at high reactant concentrations. The main aim of this project was the coupling of AFM with FIA, which is regarded as a new approach and may be useful knowledge for other researchers.
Introduction to AFM:
The imaging (Fig. no. 1), measuring and manipulating of surfaces at the atomic scale is the aim of AFM and of this study. The main idea lies in the tip-sample interaction, which determines how the probe interacts with the sample. If the probe experiences a repulsive force, the probe is in contact mode; otherwise, as the probe moves further away from the surface, attractive forces dominate and the probe is in non-contact mode. The primary imaging modes available in AFM are (Fig. no. 2-5):
1- Contact mode, when the probe-surface separation is less than 0.5 nm
2- Intermittent contact, which occurs in the range of 0.5 to 2 nm
3- Non-contact mode, when the probe-surface separation ranges from 0.1 to 10 nm
These two techniques have never been used in combination to explain the topography of the formed precipitate, which is measured as a turbid cloud causing absorption, divergence or attenuation of the incident photons released from snow-white LEDs over a long distance; the signal is accumulated to cause a remarkable response (anything that serves as an indication), with the design attempting to avoid noise (electrical disturbances). The resultant peak (i.e., a mountain with a pointed summit) lies within a given range, i.e., the extent or limits between which the variable is possible. The instrument in use, FIA, has many uses for the determination of different species [1-8].
The combination of this technique with atomic force microscopy (AFM) is a new trend aimed at achieving a new approach for understanding the kind of precipitate formed and its properties. To the authors' knowledge, and based on the cited research articles, no previous record of such studies is available. The title of this chapter was therefore chosen on this basis (ISDS-AFM: Irradiation of Solo-Dual-System-AFM) [9].
As noted above, the imaging, measuring and manipulating of surfaces at the atomic scale is the aim of AFM and of this study; the tip-sample interaction determines how the probe interacts with the sample. If the probe experiences a repulsive force, the probe is in contact mode; otherwise, as the probe moves further away from the surface, attractive forces dominate and the probe is in non-contact mode. Three primary imaging modes are available in AFM:
1- Contact mode, where the probe-surface separation is less than 0.5 nm
2- Intermittent contact, which occurs in the range of 0.5 to 2 nm
3- Non-contact mode, where the probe-surface separation ranges from 0.1 to 10 nm
Details of this study include the size of the spotted imaging area, and the authors discuss four main items:
a) Amplitude parameters
b) Hybrid parameters
c) Functional parameters
d) Spatial parameters
These take into account the number of particles, the size represented by the diameter of a range of particulates (assuming spheres), counting, skewness, kurtosis, roughness, etc. In continuous flow analysis (Fig. 6), a known sample segment is injected via an injection valve, loading a known, precise volume measured by a sample loop through the available six ports, whose function is to facilitate repeatable, precise measurement of volumes: ports 1-4 are joined together by the sample loop (a pre-chosen volume, most probably of the analyte), port 2 is the inlet, port 3 the outlet, port 5 the excess (more than required) and port 6 the sample injection port.
Chemicals
All chemicals used were of analytical-reagent grade, and distilled water was used to prepare the solutions. A stock solution of Mebeverine HCl (10 mmol L-1, C25H36ClNO5) was prepared by dissolving 0.466 g in 100 ml. A stock solution (50 mmol L-1) of sodium nitroprusside (SNP), C5FeN6Na2O (261.918 g/mol, BDH), was prepared by dissolving 3.2739 g in 250 ml of distilled water.
2.2 Apparatus
The flow system consists of four parts, as shown in figure 6: a four-channel peristaltic pump (Ismatec type ISM796, Switzerland), a rotary six-port injection valve (Teflon, chemically inert; IDEX Corporation, USA), an electronic measuring unit and a readout system [9].
2.3 Methodology
A two-line manifold system (Fig. 6) was used. The sample is injected into a carrier stream line, the solution of which is propelled by a peristaltic pump of known flow rate (ml/min) to the manifold system; figure 6 shows the simplified diagram used in this study. When the reaction product reaches the measurement point, the incident irradiation is weakened by the formed reaction product. The obtained response is recorded via an x-t potentiometric recorder or any available readout system. Various different designs have been patented [10-15]. In this study many variables were dealt with, leading to a clear conclusion, and all results were discussed in a combined manner.
Sampling: collection of the precipitate formed as the reaction product of Mebeverine hydrochloride with sodium nitroprusside (Fig. 6). The sampling is carried out in two ways (Fig. 6), which should be followed:
First: using an arbitrary concentration of the reactants (mentioned above) until a sufficient amount of precipitate is formed on the filter paper placed in the funnel for separation of the precipitate from the solutions (carrier stream and reagent stream). This collection gives a constant feed of homogeneous precipitate.
Second: using the same set-up, but collecting the precipitate during the build-up of the scatter-plotted calibration graph. This can be called random sampling because it collects low (repeated for at least n = 3 measurements), medium and high concentrations. It therefore contains variable structural formations but, in the end, the same precipitate (only a few milligrams are needed, which is quite enough for the study).
In both cases the precipitate formed on the funnel is washed free of excess reagent and the other mother-liquor chemicals used. The filter paper is left overnight, gently covered to prevent any dust, until it is dried; the sample is then ready for the atomic force microscopy contour scan. The reactants of the intended reaction were not analysed by AFM because they come from a different population and cannot be compared with the precipitate: their fineness (i.e., grain size) will be completely different from that of the collected precipitate, owing to the policies of the manufacturing companies.
12 shows the image surface roughness analysis . The value of Sku (surface Curtosis) of 1.8 (Table 3) which approaches : Leptokurtic (approach the value of 2 . As there are three kinds of kurtosis mainly (+) leptokurtic , (0) mesokutric (normal) and (-) platykutric. Also, it shows that there is little or no outlier crystal such as occluded or adsorbed within the structure of granules. Indication of great symmetry is quite evident. Kurtosis values can take -3 up to +3.
It also indicates of high purity of formed crystals. Even flow injection analysis is an on-line automated measurement method of analyte in this case a drug ( Mebeverine hydrochloride). High purity is a necessity in drug analysis (to know what is measured and determined). Since the precipitated granules are mostly homogeneous which indicate the success of flow injection analysis in conducting the precipitation in high standard of analysis condition i.e., avoiding of interferences where ever they comes from as interfering material. The Sy(peak-peak) and Sx (ten point height ) have the same value which also mean a symmetrical distribution between inter point distance at the x-axis which has the same y-axis height peak-peak also indicate the wavelength here it is 47.7nm which is the spacing between local peaks and valley with the consideration of their relative amplitude and individual frequencies ( λ λ =2R λ /Δ λ) (λ λ : varies according to the crystal type structure ) , also ten point height gave 47.7nm for λλ which meant 5 repeatation of peak-valley that ensure regular formation of crystals in the growth process. Within spatial parameters high density noticed due to the close peaks. Fractional dimension of 2.59 is quite normal (which is a measure of how complicated a self-similar figure is in general it measures how many points lie in a given set .Also it captures the notion of how large a set is. If for example Fractial dimension = log3 N /log 2 N =Nlog3/Nlog2 = 0.47712 / 0.30103= 1.585 Figure 12: The imager surface roughness analysis shows a high density of granules . The probe was not able to see a higher depth of more than 45nm .Indicating that a close and rapid build up of granules even at constant steady flowing reactant reagents (Analyte plus reagents) .This fast grow of crystal indicate a colloidal form of precipitate . 10 In functional parameter the sk(core Roughness Depth =41.5 nm , while the reduced Valley depth = 1.21 nm .The difference is 40.29 nm which is very deep that the speed of precipitation is quite high that traps the water at early stages of precipitation leaving the valley trapped water to be near the surface of scanned area . As the total scanned depth is just little above 46.62 nm .Therefore the probe was not able to go beyond 45 nm . In Sa ( Core Fluid Retention index of 1.49 compered to Svi (Valley Fluid Retention index = 0.0692 .The ratio is 21.53 i.e., the tendency is 22 time reserving power for water i.e., suggesting a gel precipitate . The roughness of 11.9 is the surface morphology of various variable peaks height giving rise to rough surface. ( cast iron class A = 0.0048 inches (121920nm) while for copper 0.000059 inches (1498.6nm ) ) . Dryness of the precipitate will make the surface more rough , while a moist sample will fill up more Valleys . leaving a reflecting surface . The-154702.5357 nm 2 represent the extra area remained after the formation of first ground monolayer . Therefore : Remained surface area of spheres beyond the formation of first ground monolayer is equal to : 154702.5357 nm 2 / 16119.88865 nm 2 = 9.59699778 10 grains 10 grains (extra to the first ground mono layer i.e., start formation of second ground mono layer with this amount of grains ) . What remained in the first ground mono layer
Example of high density of granules
Total number of grains minus the extra excess grains: 382 - 10 = 372 grains. The first ground monolayer will therefore have 372 granules and the second ground monolayer will have 10 granules. One mole of molecules contains 6.022 x 10^23 molecules. There is a possibility that less than ( ) will be considered here.
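As a purely illustrative check (not part of the original analysis), the area bookkeeping used above and in the calculation further below can be reproduced numerically; the only inputs are the per-grain surface area (16119.88865 nm2), the scanned area (6312500 nm2) and the 382 imaged grains quoted in the text.

# Illustrative arithmetic check of the grain/monolayer bookkeeping quoted in the text.
# The three inputs below are taken from the values stated above; they are not new data.
scanned_area_nm2 = 6_312_500.0      # area scanned by the AFM probe
grain_area_nm2 = 16_119.88865       # surface area of a single (assumed spherical) grain
imaged_grains = 382                 # grains counted in the AFM image

capacity = scanned_area_nm2 / grain_area_nm2                      # ~391.6, about 392 grains fit
free_area_nm2 = scanned_area_nm2 - imaged_grains * grain_area_nm2 # ~154702.5 nm2 of unoccupied area
extra_grains = free_area_nm2 / grain_area_nm2                     # ~9.6, about 10 additional grains

print(round(capacity), round(free_area_nm2, 1), round(extra_grains))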
- When supersaturation is low and a spiral has developed, the growth rate should be proportional to the square of the supersaturation; when it is high, the growth rate is directly proportional to the supersaturation. The monolayer forms as islands that grow rapidly to the boundaries. Van Weimarn was concerned with the variation in the size and number of precipitate particles with respect to the concentration of the precipitating reagents. He postulated that a maximum would occur in the curve of particle size as a function of reactant concentration and that the average particle size would increase with time. The Van Weimarn relation is (Q - S)/S, equivalently Q/S - 1, where Q is the total salt concentration and S the molar solubility (which increases with increasing temperature): (1) a large value means a small particle size with a large number of particles; (2) a small value means large precipitate particles in low numbers. Of course, the reaction temperature will increase the solubility, and the pH value will also differ at different temperatures. Reagent concentration and sample loop volume play an important role in obtaining supersaturation at the outlet junction (cf. Fig. no. 6), even though this is not recommended. Teflon tubes are used because of their hydrophobic nature (which aids water movement). It is possible to use the dynamic range (analytical range), the working range (i.e., the calibration range) or even the linear range (linear dynamic range), but all depend on the analyst.
- When subtracting the area of the surface scanned by the AFM from the total surface area of the granules, it is possible that the result of the subtraction is a positive value, which means that the formed granules occupy a first ground monolayer and there is still space for more granules to build up (due to the free space within which the granule molecules can move); this might happen only at low reactant concentration, i.e., at the lower part of the scatter-point plot (at a glance of the calibration graph). If, on the other hand, the value of the subtraction is negative, this indicates that there is no room in the first ground monolayer, causing a new second ground monolayer to build up. This might happen around the average x and y, i.e., the centroid of the scatter plot of x (analyte concentration) versus obtained response (y), which is expected to coincide with the average amount of formed precipitate (formed granules); at this point it depends on the availability of excess extra granules to cover the second ground monolayer (or even part of its surface area). With +154702.5357 nm2 at this stage, there is enough space for the first ground monolayer to accommodate more granules. This extra available space can facilitate the presence of more granules; concentration-wise, it should help in building up more granules. This is affected by the flow rate, the sample (analyte) concentration, the reagent concentration and the sample size (temperature will also play a large part in determining the form of the contour presented).
B- The area scanned by the probe is equal to 6312500 nm2. The number of granules it can accommodate is obtained by dividing the total scanned area by the surface area of a single sphere (16119.88865 nm2): 6312500 nm2 / 16119.88865 nm2 = 391.597, i.e., about 392 granules. This means an extra number of 10 granules; the first ground monolayer has space for 10 more granules beyond what was seen by the probe (382 granules).
Therefore 392 x 51 nm2 = 19992 nm2. The surface area scanned by the probe = 6312500 nm2, and 6312500 nm2 - 19992 nm2 = 6292508 nm2. This 6292508 nm2 is the free area not occupied by the granules at the reactant concentrations and parameters mentioned earlier (this will help in the geometrical formation of the alleged crystal structure), i.e., it supports this build-up, as shown in Fig. … This concludes our detailed calculation. The area could be circular, square or oblong; for both the square and the oblong, the length of each side can be calculated.
( a x b )= area (nm 2 ) for oblong or (a x a )= area (nm 2 ) for square . | 2021-11-13T20:07:36.359Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "3eab79728e015ac507ca22ad1266e379adbd6c54",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/2063/1/012031",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3eab79728e015ac507ca22ad1266e379adbd6c54",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
219965878 | pes2o/s2orc | v3-fos-license | Emergent cooperation through mutual information maximization
With artificial intelligence systems becoming ubiquitous in our society, their designers will soon have to start to consider their social dimension, as many of these systems will have to interact among themselves to work efficiently. With this in mind, we propose a decentralized deep reinforcement learning algorithm for the design of cooperative multi-agent systems. The algorithm is based on the hypothesis that highly correlated actions are a feature of cooperative systems, and hence, we propose the insertion of an auxiliary objective of maximization of the mutual information between the actions of agents into the learning problem. Our system is applied to a social dilemma, a problem whose optimal solution requires that agents cooperate to maximize a macroscopic performance function despite the divergent individual objectives of each agent. By comparing the performance of the proposed system to a system without the auxiliary objective, we conclude that the maximization of mutual information among agents promotes the emergence of cooperation in social dilemmas.
Introduction
Artificial intelligence (AI) systems are nowadays ubiquitous in our society, as several AIbased technologies have gone mainstream and are now an essential part of the workings of our phones, social media, search engines, online stores, streaming services, and many other aspects of our day to day lives. This trend is likely to continue, and to become even more pervasive with the advent of technologies like self-driving cars, that will put AI systems straight into our physical reality [1].
As more and more of these artificial agents populate our world, we will soon have to start to consider their social dimension, since they will face social dilemmas similar to the ones we humans encounter, and which, if not properly handled, would act to the detriment of their benefit to us. For instance, a set of self-driving cars selfishly trying to cross an intersection as fast as possible to minimize their traveling times, regardless of others, would result in a prisoner's dilemma-like problem in which traffic congestion and the probability of accidents increase [2]. In this scenario, we would like instead that our agents coordinate with each other to improve the traveling times of the system as a whole. Such problems, where multiple agents, with possibly conflicting individual objectives, seek to jointly maximize a macroscopic performance function, are termed Cooperative Multi-Agent Systems (CMAS) [3] and are the focus of this work.
In this paper we propose an algorithm for the design of CMAS using deep reinforcement learning (DRL), a combination of reinforcement learning, an area of machine learning where an agent learns by interacting with a dynamic environment [4], and deep learning, a set of techniques based on neural networks which excels at dealing with high-dimensional raw data, such as images and speech, and which is responsible for most of the recent milestones achieved in AI research [5]. The application of DRL to CMAS has been attracting increasing research interest in recent years, but although many algorithms have been proposed [6], most of them resort to centralized learning to achieve cooperation, a strategy that is not feasible in many practical problems of an inherently distributed nature [3].
To tackle the problem of decentralized learning, we design the individual learning process of the agents such that cooperation is an emergent property of the system, rather than a hard-wired feature. We argue that correlation between the actions of agents is a key ingredient of cooperative systems, as it would measure coordination, and based on this, we propose a DRL algorithm that seeks to maximize a differentiable estimate of the mutual information (MI), a nonlinear correlation index, between the actions of the agents. We hypothesize that by promoting the maximization of MI as part of the learning problem of each agent, coordination, and possibly cooperation, could emerge in the system.
The maximization of MI in agent-centric problems has been previously treated in the literature on empowerment [7], where the MI between the agent and the environment is proposed as a universal measure of control. Empowerment has been applied in single-agent DRL algorithms; for instance, in [8] and [9], estimates of the MI are used as an intrinsic reward to perform empowerment-based reasoning. Our work is also closely related to the one in [10], where an estimate of the point-wise MI between the actions of agents is proposed as an intrinsic reward to model social influence and foster cooperation, but their approach is poorly scalable to large systems, since its estimation of the MI requires a model of the whole population. We dispense with the need for such a model by considering just the actions of other agents in the vicinity of the learner, and encoding them as a continuous variable whose dimension does not depend on the number of agents. Therefore, our algorithm allows for large populations and even populations whose size changes in time. Also, because our MI estimator is differentiable and we optimize it directly using a gradient ascent algorithm, it is reasonable to think that, with a good-quality estimator, this approach would provide a better learning signal than the use of an intrinsic reward. This paper is organized as follows. We start by proposing a quantitative definition of cooperation based on the correlation between the actions of agents in section 2. Next, in section 3, we define the learning problem of an agent that intuitively could maximize such a quantity, and in section 4 the design of a DRL agent to approximately solve it. In section 5 we describe the commons game, a social dilemma of renewable resource consumption, to which we apply our algorithm according to the experimental setup detailed in section 6. The obtained results are shown in section 7, and their implications discussed in section 8. Finally, we present our conclusions in section 9.
An index of cooperation in multi-agent systems
Several definitions of cooperation have been proposed in the literature from the perspective of diverse scientific fields, such as evolutionary biology [11], game theory [12] and information theory [13]. These definitions share several ideas, such as the macroscopic nature of cooperation, being a feature of a set of entities rather than of individuals, the existence of a common objective across the set of entities, and the idea that it is the relationships among the elements of the set that result in an improvement towards the objective. Here, in line with these ideas, we define cooperation in the context of multi-agent systems as an attribute of a coordinated set of actions in a multi-agent system that causes the improvement of the system performance, and define a very simple scalar index to quantify it. The coordination of the actions means that these are not independent, and can be quantified with a correlation index. Let J be a system-level performance index, and let ρ be a non-negative correlation index between the actions of the agents; we then define a cooperation index, ψ, in terms of ρ and J. The index ψ will be high for highly correlated actions that result in a high performance, and it will be zero for independent actions even if they result in high performance.
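Since the closed form of ψ is not reproduced here, the following sketch should be read only as one plausible instantiation consistent with the stated properties; the product form ψ = ρ·J, the use of the absolute Pearson correlation for ρ and the toy action streams are all illustrative assumptions, not the paper's definition.

# Hypothetical sketch: one way to instantiate a cooperation index with the stated
# properties (high for correlated, high-performing joint actions; zero for
# independent actions). The product psi = rho * J is an assumption for illustration.
import numpy as np

def cooperation_index(actions_a, actions_b, performance_J):
    # rho: a non-negative correlation index between two agents' action sequences;
    # independent action streams give zero correlation in expectation.
    rho = abs(np.corrcoef(actions_a, actions_b)[0, 1])
    return rho * performance_J

a = np.array([1, 0, 1, 1, 0, 1])   # toy action stream of agent 1
b = np.array([1, 0, 1, 0, 0, 1])   # toy action stream of agent 2
print(cooperation_index(a, b, performance_J=10.0))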
Problem setup
The interaction of an agent with its environment in reinforcement learning is formalized as a Markov Decision Process (MDP) [4]. An MDP is defined by the tuple (S, A, R, T, γ), where S is the set of all possible states and A is the set of all possible actions. The transition function T : S × A × S → [0, 1] defines the probability of a transition from the state s ∈ S to the state s' ∈ S given an action a ∈ A. The reward function R : S × A × S → R defines the immediate reward r ∈ R that an agent would receive given that it executes action a in state s and transitions to state s'. Finally, γ ∈ [0, 1] is the discount factor that balances the trade-off between short-term and long-term rewards.
Solving an MDP consists in finding a mapping from states to actions, termed a policy, π : S → A. The optimal policy, π*, is defined as π* = argmax_π V^π(s) for all s ∈ S, with V being the state value function, defined as the expected long-term payoff of being in an initial state, s_0, if actions are chosen according to the policy, π: V^π(s_0) = E[Σ_{t≥0} γ^t r_t | π, s_0]. In a multi-agent system the MDP turns into a Markov Game (MG), where the transition and reward functions depend on the joint action of all the agents [6]. That is, A is redefined as the set of all possible joint actions, A = A^(1) × ... × A^(n), with A^(i) being the set of all possible individual actions of the i-th agent, for i = 1, ..., n, in a system of n agents. From a single-agent perspective this renders the process non-stationary, since the value function, and thus the optimal policy, depend on the policies of all the other agents in the system, which are also changing in time as they learn. Most of the approaches proposed in the literature to deal with non-stationarity in systems of multiple learners resort to centralized strategies, where global information is used by a single learner to learn value and/or policy functions for the whole system [6].
Here, our focus is on problems that must be solved in a distributed manner, and therefore centralized learning is not feasible. We consider each agent as an independent learner that at each time step receives a local observation, o ∈ O, where O is the set of all possible observations, and a local reward, r. Using only this local information, it has to learn a policy that maximizes the global long-term payoff of the system by coordinating with others.
Inspired by the definition of cooperation given in section 2, we hypothesize that by promoting the maximization of correlation between the actions of agents, along with the maximization of individual value functions, we can guide the learning process towards the desired regions of the search space that define highly rewarding and highly correlated policies, which, following the definition, would likely result in cooperative behaviors and improved global performance.
Let the policy π_θ be a neural network parameterized by θ; the learning problem of the i-th agent is then formulated as the joint maximization, with respect to θ, of its value function and of ρ, where ρ is a non-negative correlation index between the actions of the agent, a, and the joint action of the other agents in the system, a^(−i), determined by their joint policy, π^(−i).
Agent architecture
The overall architecture of the agent designed to approximately solve the optimization problem described in equation 4 is illustrated in figure 1 and it is inspired by the modular design proposed in [14], where the agent is composed of a feature extraction component that is trained offline, and a decision making component that is trained online, as the agent interacts with the environment.
The agent is composed of three functional modules. The sensors receive the observations from the environment, which are assumed to be high dimensional and unstructured, and extract relevant information from them to produce estimates of the state of the environment and of the actions of other nearby agents. The social critic receives the estimate of the others' actions and estimates its MI with the actions of the agent. Just as in actor-critic algorithms the critic component guides the learning dynamics of the policy towards high rewards [15], the gradient of the mutual information estimated by the social critic is used during learning to guide the agent towards more coordinated behavior with its peers. Finally, the controller implements the policy of the agent. It receives as input the states estimated by the sensors, produces actions as output, and updates the policy using the observed rewards and the signal produced by the social critic. Each module is composed of multiple neural networks that are trained using a pipeline of several stages of machine learning. Below we describe them in detail.
Sensors
At each time step the agent receives a high-dimensional observation, o, typically a 2D image that is part of a video sequence. We use a neural network, E_x, to learn a compressed representation of each observed input frame. E_x is implemented as the encoder component of an undercomplete autoencoder [16, pp. 500-501] that receives o as input and produces as output a code x. The training process then consists of minimizing a reconstruction loss L_Ex, where L_Ex penalizes D_x(x) for being dissimilar from o, and D_x is the decoder component of the autoencoder. A second encoder, E_y, is used to extract the information related to other agents from the code x into another code, y. This information is later used by the social critic to estimate the actions of other agents. E_y is also the encoder component of an undercomplete autoencoder, trained to minimize a reconstruction loss L_Ey, where o^(−i) is the input containing only the information related to other agents in the original observation. L_Ey penalizes D_y(x) for being dissimilar from o^(−i), and D_y is the decoder component of the autoencoder.
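The shape of such an undercomplete autoencoder can be sketched as follows; the layer sizes are placeholders (the actual architectures are listed in the paper's Table 1, which is not reproduced here), and a mean-squared-error reconstruction loss is used here as a simplification of the per-pixel classification training described in the experimental setup.

# Hypothetical sketch of the sensor autoencoder E_x / D_x (placeholder sizes).
import torch
import torch.nn as nn

obs_dim, code_dim = 9 * 9, 16   # assumed flattened-observation and code sizes

encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, code_dim))
decoder = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU(), nn.Linear(64, obs_dim))
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def train_step(o):
    x = encoder(o)                               # compressed code x = E_x(o)
    recon = decoder(x)                           # reconstruction D_x(x)
    loss = nn.functional.mse_loss(recon, o)      # penalizes dissimilarity from o
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()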
While E_x and E_y compress what the agent sees at each time step, we also want to compress what it sees over time. This is necessary since the agent must deal with a partially observable system and hence requires memory to estimate the state of the environment and be able to take optimal decisions [17]. We use a recurrent neural network (RNN) [16, pp. 367-415], M, that serves as a memory for the agent by storing in its state information about past observations. At each time step, it receives as input the current compressed observation, x_t, and its current state, h_t, and outputs its new state, h_{t+1}. The estimated state of the system, ŝ_t, is then defined as the concatenation of the present compressed observation and the memory state. With the intention of reducing the complexity of the learning problem, and considering computational costs, we use as memory an Echo State Network [18], a kind of RNN whose weights are fixed after initialization.
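A minimal sketch of such a fixed-weight memory and of the state concatenation is given below; the reservoir size, the tanh update form, and the choice of concatenating the updated memory state are illustrative assumptions rather than the paper's exact implementation.

# Hypothetical sketch of an echo state network memory M and the estimated state.
import numpy as np

rng = np.random.default_rng(0)
code_dim, mem_dim = 16, 64                      # assumed sizes

W_in = rng.normal(scale=0.1, size=(mem_dim, code_dim))
W_h = rng.normal(size=(mem_dim, mem_dim))
W_h *= 0.9 / np.abs(np.linalg.eigvals(W_h)).max()   # spectral radius below unity

def memory_step(x_t, h_t):
    h_next = np.tanh(W_in @ x_t + W_h @ h_t)    # fixed-weight echo state update
    s_hat = np.concatenate([x_t, h_next])       # estimated state: code plus memory state
    return h_next, s_hat

h = np.zeros(mem_dim)
x = rng.normal(size=code_dim)
h, s_hat = memory_step(x, h)
print(s_hat.shape)                              # (code_dim + mem_dim,)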
Social critic
The social critic uses the estimator proposed in [19] to approximate the MI between the action of the agent in the current time step, a t , and the joint action of other agents in its vicinity, a (−i) t . However, it does not do so directly, because, with practical considerations in mind, we use proxies for both variables.
To be able to maximize the MI between agents with a gradient-based optimizer, we want our representation of the MI to be differentiable with respect to the parameters of the policy. In this work we consider a discrete action set; therefore, we assume a stochastic policy that defines a probability mass function over the action space conditional on the state, π_θ : S → [0, 1]^|A|, and estimate the MI between the vector of probabilities, p, and the actions of other agents. In problems with a continuous action set, the actions could be used directly with the estimator.
We would also like to make our approach scalable to populations of any size, possibly variable in time, but we are limited by the dimension of a^(−i). As the size of the population grows, so does the complexity of the learning problem for the estimator, because it deals with a higher-dimensional input. The dimension of the input also should not change in time, as would happen with a variable population size. To work around this, we make the assumption that the actions of the other agents in the current time step can be approximately inferred from the change between the current and the next time step of the code y, given that it encodes an approximate state of the nearby agents, and we then use y_{t+1} as a proxy for a^(−i)_t. The estimator of the MI between p_t and y_{t+1} (equation 8) is then defined in terms of expectations, under the joint and marginal distributions, of a statistics network passed through the softplus function, where I(p_t, y_{t+1}) is the MI between p_t and y_{t+1}, P_{p_t y_{t+1}} denotes the joint distribution, P_{p_t} and P_{y_{t+1}} are the marginal distributions, ζ is the softplus function, and F_ω is a neural network parameterized by ω.
The expectations in equation 8 are, in practice, estimated as averages over samples of the distributions. The samples of the joint distribution are observed by the agent during its interaction with the environment, and P_{p_t} is simply the policy of the agent, but P_{y_{t+1}} is unknown and needs to be estimated. To do so, we use a neural network, Y, to predict y_{t+1} given the action of the agent, a_t, and the estimated state of the system, ŝ_t. Y is trained to minimize a prediction loss L_Y, where L_Y penalizes Y(ŝ_t, a_t) for being dissimilar from y_{t+1}. The samples from P_{y_{t+1}} are then estimated by averaging out a_t from Y.
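The general shape of such a neural MI estimator can be sketched as follows. The sketch assumes the softplus-based (Jensen-Shannon-style) lower bound that matches the ζ terms mentioned above, uses placeholder layer sizes, and approximates the marginal of y_{t+1} by shuffling the batch instead of using the prediction network Y; all of these are simplifying assumptions rather than the paper's exact estimator.

# Hypothetical sketch of the statistics network F_omega and a softplus-based
# mutual-information lower bound estimated from paired (joint) and shuffled
# (marginal) samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

p_dim, y_dim = 8, 16   # assumed dimensions of p_t and y_{t+1}

F_omega = nn.Sequential(nn.Linear(p_dim + y_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def mi_lower_bound(p, y):
    # p: (batch, p_dim) action-probability vectors; y: (batch, y_dim) codes of others
    joint = F_omega(torch.cat([p, y], dim=1))            # samples from the joint
    y_shuffled = y[torch.randperm(y.shape[0])]           # break pairing -> marginal samples
    marginal = F_omega(torch.cat([p, y_shuffled], dim=1))
    # softplus-based bound: E_joint[-softplus(-F)] - E_marginal[softplus(F)]
    return (-F.softplus(-joint)).mean() - F.softplus(marginal).mean()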
Controller
The controller implements the policy of the agent, π_θ, and an estimate of the value function, V̂. The policy is trained using the Proximal Policy Optimization (PPO) algorithm [20], with the standard clipped surrogate loss of [20] as the policy loss, where T is a set of state-action-reward tuples observed by the agent while interacting with the environment, π_θ_old is the policy before an update of the policy parameters, Â is an estimate of the advantage function calculated over a trajectory of length l, clip is the clipping function, and ε ∈ (0, 1) is a parameter that controls the size of the updates to the policy network. We use a neural network architecture that shares parameters between the policy and the estimate of the value function, so the loss function to be minimized combines both objectives, together with an additional entropy maximization term to encourage exploration, as suggested in [20], where H(π_θ) is the entropy of the policy, and c_π, c_V and c_H are constant coefficients.
Finally, we include in the objective function the MI estimated by the social critic to encourage coordination with other agents, so the learning problem of the controller is formulated as the minimization of the combined loss above minus the MI estimate weighted by a coefficient c_I, where T_y is the set of encoded observations regarding other agents corresponding to each observed estimated state, ŝ_t ∈ T.
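The combined controller objective can be sketched as below. It assumes the standard clipped PPO surrogate of [20] plus squared-error value loss, entropy bonus and MI term weighted by the coefficients named in the text; the exact forms, signs and default coefficient values are illustrative assumptions, not the paper's equations.

# Hypothetical sketch of the controller loss combining PPO, value, entropy and MI terms.
import torch

def controller_loss(new_logp, old_logp, advantage, value_pred, value_target,
                    entropy, mi_estimate, eps=0.2, c_pi=1.0, c_v=0.5, c_h=0.01, c_i=1.0):
    ratio = torch.exp(new_logp - old_logp)                       # pi_theta / pi_theta_old
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    policy_loss = -torch.min(ratio * advantage, clipped * advantage).mean()
    value_loss = (value_pred - value_target).pow(2).mean()
    # minimize policy and value losses; maximize entropy and mutual information
    return c_pi * policy_loss + c_v * value_loss - c_h * entropy - c_i * mi_estimate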
Training algorithm
The pseudocode of the training algorithm for the multi-agent system is described in algorithm 1. First, the weights of all the neural networks that compose each agent are randomly initialized. The memory, M , should be initialized such that the spectral radius of its hidden to hidden weight matrix is less than unity [18]. Next, a dataset of observations is obtained by uniformly sampling the set of possible observations, O. This dataset is used to train the encoders E x and E y by applying a gradient descent algorithm to minimize equations 5 and 6, respectively. Once the sensors are trained, the interaction of the agents with the environment begins.
For a maximum of t_max time steps (which could be infinity), each agent follows an iterative training procedure. It begins by acquiring experiences through interaction with the environment and with other agents. For a finite number of time steps, l, each agent receives an observation, o_t, executes an action, a_t, according to its current policy, π_θ, and receives a reward, r_{t+1}. The observed sequence of observations is encoded by the sensors to produce a sequence of estimated states. The sequence of estimated states, actions and rewards, {(ŝ_0, a_0, r_1), ..., (ŝ_{l−1}, a_{l−1}, r_l)}, is then used to train the social critic and the controller. Y, F_ω, and π_θ together with V̂ are trained every n_Y, n_F and n_C time steps, respectively, using a gradient descent algorithm. This difference in training frequencies was deemed necessary since these functions make use of each other, and it was observed that if they all change frequently at the same rate, the learning process does not converge. We suggest using n_Y ≤ n_F < n_C, such that the policies of the agents change more slowly than the function Y can adapt to them, so that Y is able to give a good estimate to F_ω. Similarly, π_θ should change more slowly than F_ω, to allow F_ω to converge to a good estimate of the MI that guides the learning process of the policy.
Algorithm 1
1: Initialize neural network parameters
2: Obtain dataset to train E_x and E_y by uniformly sampling O
3: Train E_x by minimizing equation 5
4: Train E_y by minimizing equation 6
5: t_total = 0
6: while t_total ≤ t_max do
7:   for each agent do in parallel
8:     Interact with the environment for l time steps to obtain trajectories of observations, actions, and rewards
9:     if t_total mod n_Y = 0 then
10:      Train Y
11:    end if
12:    if t_total mod n_F = 0 then
13:      Train F_ω according to equation 8
14:    end if
15:    if t_total mod n_C = 0 then
16:      Train π_θ and V̂ according to equation 16
17:    end if
18:    t_total = t_total + l
19:  end for
20: end while
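The staggered update frequencies (n_Y ≤ n_F < n_C) used in the loop above can be expressed as a simple modulo-based schedule; the concrete values below are placeholders, not the ones used in the experiments.

# Hypothetical sketch of the staggered training schedule (placeholder values).
l = 1000                        # time steps collected per iteration
n_Y, n_F, n_C = 1000, 2000, 4000
t_total, t_max = 0, 10_000

while t_total <= t_max:
    updates = []
    if t_total % n_Y == 0:
        updates.append("Y")             # prediction network
    if t_total % n_F == 0:
        updates.append("F_omega")       # MI statistics network
    if t_total % n_C == 0:
        updates.append("policy/value")  # controller
    print(t_total, updates)
    t_total += l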
The commons game
We applied our algorithm to a sequential social dilemma (SSD), an MG with |S| > 1 where an agent can get a higher reward by engaging in non-cooperative behavior, but the total payoff per agent is higher if all agents cooperate [21]. The chosen SSD is the commons game (CG) described in [22] and illustrated in figure 2. In the CG a set of agents (red tiles) have to collect apples (green tiles), which are a limited renewable resource. The apple regrowth rate depends on the spatial configuration of the uncollected apples: more nearby apples implies a higher regrowth rate. If all apples in a local area are collected, then none ever grow back. Agents can also take an offensive action by shooting others with a beam (yellow tiles), which temporarily removes them from the game. This reduces the load on the resource by diminishing the effective population size, and enables the aggressive agents to selfishly exploit the resource without depleting it. Cooperation in the CG is achieved when agents coordinate among themselves to harvest apples in a sustainable way, such that the resource is not depleted and every agent in the system gets roughly the same amount.
Our CG implementation uses the map depicted in figure 2 with n = 10 agents and the following features:
Figure 2: A frame of the commons game (own implementation). Agents (red) harvest apples (green). An agent in the southeast of the field shoots its beam (yellow) pointing west. Its field of vision is the area contained within the white square.
• Agents have an agent-centered field of vision of radius 4, such that o ∈ R^{9×9} is an image of the surroundings of the agent. Each agent appears blue in its own field of view, and red in the field of view of other agents.
• There are eight possible actions: stay still, go up, go down, go left, go right, turn left, turn right, and shoot beam.
• The beam extends within the vision field in the direction the agent is looking and has a width of 1 square. Any agent that is in the path of the beam is removed for 25 time steps.
• For every collected apple the agent receives a reward of r = 1.
• At any given time step, a collected apple has a probability p_r of respawning, which depends on the number of apples in a vicinity of radius 2, n_a.
• The game finishes if all the apples in the field are harvested.
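The respawn rule can be sketched as below; the probability values are purely illustrative, since the table relating n_a to p_r is not reproduced in the text above. The only property taken from the description is that a fully depleted neighborhood (n_a = 0) never regrows.

import numpy as np

# Illustrative respawn probabilities keyed by the number of apples within
# radius 2 (n_a); the actual values used in the experiments are assumptions here.
RESPAWN_PROB = {0: 0.0, 1: 0.005, 2: 0.01, 3: 0.05, 4: 0.05, 5: 0.1}
DEFAULT_PROB = 0.25  # assumed value for n_a > 5

def maybe_respawn(n_a, rng):
    """Return True if a harvested cell regrows an apple this time step."""
    p = RESPAWN_PROB.get(n_a, DEFAULT_PROB)
    return rng.random() < p

# Example: a harvested cell with three neighboring apples.
rng = np.random.default_rng(0)
print(maybe_respawn(3, rng))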
Experimental setup
The performance of a system trained with our algorithm, hereafter called the correlation maximizing system (CMS), is compared with a baseline system trained with the standard PPO algorithm, that is, by setting c_I = 0 in equation 16. Thirty independent experiments were conducted for each system, each consisting of 10 million time steps of interaction with the environment. The architecture of the neural networks composing an agent is described in table 1. These were defined so that most of the model complexity would reside in the sensors and the social critic, mimicking what was proposed in [14]. No further experimentation was done to look for optimal architectures. All the neural networks are trained using the Adam optimizer [23] with the parameters presented in table 2.
The dataset to train E_x was obtained by executing a population of 10 agents with random uniform policies in the environment for 1.28 × 10^6 time steps and storing the frames seen by each agent. This dataset was divided into a training set of 1 × 10^7 samples and a validation set of 2.8 × 10^6 samples. The dataset to train E_y was obtained simply by masking the pixels of the agent itself, its sight, the apples and the walls in the dataset of E_x, leaving just the pixels corresponding to other agents. The decoders were trained as classifiers for each pixel, hence the Softmax activation in the output layer, where classes correspond to the possible values that a pixel can take in the CG: agent (blue), other agents (red), apple (green), beam (yellow), agent sight (dark gray), and wall (light gray). This allowed the decoders to make perfect reconstructions. The sensors are only trained once, offline, and are later used by all the agents in the system. The dataset for the social critic and controller consists of l = 1000 time steps of interaction with the environment, as described in Algorithm 1.
Performance indices
To evaluate each system we use the macroscopic indices for the CG proposed in [22], designed to characterize the strategies of the whole population of agents. We also define the cooperation index proposed in section 2 for the specific case of the CG. Let G^(i) be the total payoff obtained by agent i over a trajectory of l time steps, and let 1 be the indicator function; the five performance indices are then defined as follows.
Utilities
Is the average over agents of the obtained payoff, U = (1/n) Σ_i G^(i).
Equity
Measures the dispersion of the distribution of payoff across agents.
Table 1: Architecture of the neural networks that compose the agent. The numbers in the layer column indicate the order from input to output. Two layers with the same number process the same input.
Peace
Measures how rarely agents are hit by the time-out beam, where o_t^(i) is the observation of the i-th agent, and o_to is the observation that an agent receives when it is hit by the time-out beam.
Sustainability
Is the cumulative sum of apples during the trajectory, where s_t^(i) is the i-th pixel of the state at time step t.
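The population-level indices described above can be computed from logged trajectories roughly as in the sketch below; utility and sustainability follow the verbal definitions given here, while the equity and peace formulations (a Gini-style dispersion measure and a count of non-tagged agent steps) are assumptions, since the exact equations are not reproduced in the extracted text.

import numpy as np

def utility(payoffs):
    """U: average total payoff per agent; payoffs has shape (n_agents,)."""
    return float(np.mean(payoffs))

def equity(payoffs):
    """E: payoff dispersion; a Gini-style formulation is assumed here."""
    g = np.asarray(payoffs, dtype=float)
    n = g.size
    pairwise = np.abs(g[:, None] - g[None, :]).sum()
    return 1.0 - pairwise / (2.0 * n * g.sum() + 1e-12)

def sustainability(apples_per_step):
    """S: cumulative sum of apples over the trajectory."""
    return float(np.sum(apples_per_step))

def peace(tagged_agent_steps, n_agents, length):
    """P: fraction of agent time steps not spent removed by the beam (assumed form)."""
    return 1.0 - tagged_agent_steps / float(n_agents * length)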
Cooperation
The cooperation index in equation 1 is defined for the CG by taking the system-level performance index, J, to be the utility, U, and the correlation index, ρ, to be the ratio between the estimated average mutual information (EAMI) and the average entropy of the policies in the system, with π_θ^(i) being the policy of the i-th agent and I being the estimated MI (equation 8) for the trajectory of the i-th agent. The normalization by the average entropy is made in order to eliminate the dependence of the correlation index on the uncertainty in the system.
Results
Figure 3 shows the reconstruction error obtained by the encoders of the sensors on the validation set across epochs. It can be seen that around the fourth epoch for E_x, and the sixth epoch for E_y, the reconstruction error converges to zero. At this point we consider that the encoders have learned to successfully represent any observation of the environment, and therefore just one experiment was carried out for their training. Figure 4 depicts the temporal evolution of the performance indices across the 30 independent runs. These are calculated at the end of each iteration of the interaction loop in Algorithm 1. It can be seen that the CMS surpasses the baseline system on all the indices by the end of training. Table 3 complements these results in terms of the initial and final values of the mean and variance of the performance indices. We also follow the dynamics of information, as these can provide valuable insight about the system. Figure 5 illustrates the temporal evolution of the average entropy and the EAMI across the 30 independent runs.
For the first 5 × 10^5 time steps the performance indices exhibit dynamics akin to the ones described in [22]. Initially, agents go through a learning phase during which they learn to harvest with increasing speed. This phase is characterized by the increase of the utility index and the descent of the sustainability index. The increase in the peace index indicates that agents learn not to use their beams, as the resource is still under-exploited and there is no need to compete for it. The entropy of the policies decreases as these converge towards over-exploitative strategies. The EAMI, and hence the cooperation index, remain fairly low for both systems, since agents find no need to coordinate their actions and the policy gradient outweighs the MI maximization term. At the end of this first phase, the speed with which agents harvest apples surpasses the regeneration rate of the resource and the utilities begin to decrease, reaching their minimum at t = 9.81 × 10^5 and t = 1.23 × 10^6 for the baseline and the CMS, respectively. Past this point, the environment turns competitive, and agents begin to use their beams against each other, resulting in the descent of the peace and equity indices and the rise of the sustainability index. The average entropy and EAMI also begin to increase, since the over-exploitative policies are no longer a good strategy and each agent faces an uncertain scenario in which it should take into account the presence of other agents. The EAMI grows much faster for the CMS than for the baseline: fitting a line to the EAMI in the interval between t = 1.5 × 10^6 and t = 2.5 × 10^6 gives a slope of 4.0258 × 10^−9 for the baseline system and 7.0947 × 10^−8 for the CMS. As a result of the increase in EAMI, the cooperation index also increases.
Both systems evolve similarly up to t = 2.72 × 10^6, although more slowly in the case of the CMS. From this point on, there are significant variations. While in the baseline system the peace index has a downward trend for the rest of training, ending with a mean value of P = 0.5085, in the CMS it rises, reaching a final mean value of P = 0.8157. The equity index in the baseline system converges to a value close to the initial one, E = 0.7513, whereas in the CMS it increases up to a final mean value of E = 0.9554, notably with a much lower variance across experiments. In both systems the utility, sustainability and cooperation indices have a growing trend, but for the CMS the final values, U = 293.4253, S = 9.8541 × 10^4 and ψ = 14.507, are higher than for the baseline system, U = 211.8189, S = 8.4526 × 10^4 and ψ = 2.9994.
The information dynamics of both systems follow similar trends. The average entropy decreases and seems to be converging by the end of training. The EAMI initially increases, reaching its maximum value around t = 3.073 × 10^6, and then decreases, tending to convergence. The descent of the EAMI follows the descent of the entropy in the system, since the minimum entropy of a set of random variables is an upper bound for their MI. Given that for the CMS the EAMI grows more than one order of magnitude faster, its maximum value, Ī = 0.1043, is also much higher than in the baseline, Ī = 0.01, which results in a large improvement in the cooperation index.
Discussion
The common initial dynamics of the CMS and the baseline system, and their later divergence, could suggest that the search space of the CG has a region characterized by competitive, low-correlated policies that, without the inclusion of the MI maximization term, is a local optimum surrounding the region of cooperative policies known to be the global optimum in SSDs [21]. This could be a characteristic of the optimization landscape of SSDs, which could explain the difficulties of traditional single-agent deep reinforcement learning algorithms in finding optimal policies for such problems. The MI maximization term seems to modify the optimization landscape so that this region is no longer a local optimum and a gradient-based optimizer can find better solutions. The slower convergence of the CMS with respect to the baseline system could be explained by considering that the maximization of MI also encourages the maximization of entropy, and hence exploration.
The evidence provided in this work shows that the inclusion of a term maximizing the MI between the actions of the agents in their objective functions results in a system with improved performance in the CG according to the utility, equity, peace and sustainability indices. High values of these indices characterize the behavior of a cooperative system in the CG [22], suggesting, in agreement with previous work [10] and as captured by the proposed cooperation index, that high MI between agents is a characteristic of cooperative systems and that its maximization is a causal factor in the emergence of cooperation in the CG, and possibly in social dilemmas in general. Extrapolating this idea to the many real-world examples of social dilemmas that plague our society [24][25][26], we could speculate that coordination and cooperation could emerge in such problems by implementing policies that promote the exchange of information between the parties involved.
Conclusions
In this work we have proposed an index of cooperation in multi-agent systems, defined as the product between the correlation of the actions of the agents and the global payoff of the system, and, based on this index, a deep reinforcement learning algorithm for the training of cooperative neural multi-agent systems. In addition to the estimation of the value and policy functions typically used to solve reinforcement learning problems, we also estimated the mutual information between the actions of the agents and, to promote coordination between them, introduced a term for its maximization in the learning problem. The proposed algorithm has the advantage of being decentralized, both in learning and execution, end-to-end differentiable, and scalable to populations of any size.
We applied the algorithm to the commons game, a problem that requires cooperation but in which traditional deep reinforcement learning algorithms struggle to find optimal solutions. The performance of our algorithm was compared according to multiple indices with the performance of a baseline system that does not maximize mutual information. The results showed that the system with maximization of mutual information consistently surpasses the baseline system on all indices. Based on this, we conclude that the maximization of mutual information between agents encourages the emergence of cooperation in the commons game.
The proposed algorithm makes several assumptions that should be tested. Although, in principle, it could deal with populations whose composition varies in time, this could affect convergence, as the Y function would have to adapt to the changes in the joint policy produced by the arrival and/or departure of agents. Our work also assumes that agents are homogeneous, since there is no way to tell them apart solely by observations. Verifying the robustness of the algorithm to variable populations and heterogeneity of agents is left as future work. One major limitation can also be the disentangling of information related to agents from information concerning the environment, which is required to train E_y. In the commons game this data is easily produced, but this will not hold for many other problems. Methods for unsupervised entity construction [27] could help in this matter. Finally, it is also worth highlighting that further experimentation should be done to test the robustness of the algorithm to the selection of its hyperparameters, such as the architecture of the neural networks and the parameters of the training algorithm. | 2020-06-23T01:00:38.822Z | 2020-06-21T00:00:00.000 | {
"year": 2020,
"sha1": "e6e9d927d496d35a4c2b8bc17c1e4b8c8f413254",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e6e9d927d496d35a4c2b8bc17c1e4b8c8f413254",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
267002829 | pes2o/s2orc | v3-fos-license | Management of Labor and Anesthesia in a Patient With a History of Spontaneous Intracranial Hypotension: A Case Report With Literature Review
Spontaneous intracranial hypotension (SIH) is a rare disorder characterized by continuous or intermittent cerebrospinal fluid (CSF) leakage from the CSF cavity, which causes symptoms such as headache or neck pain upon standing. However, no well-established measures concerning the type of delivery and anesthesia for pregnant women with a history of SIH have been reported. A woman had developed SIH 9 years earlier from lifting luggage into an overhead bin with stretching movements, which required continuous saline epidural infusion for recovery. Upon the patient’s pregnancy at the age of 35 years, although an elective cesarean section (CS) under general anesthesia was planned to avoid SIH recurrence, the patient had an emergency CS at 36 weeks. Since there is no prescribed method of delivery and anesthetic management for patients with a history of SIH, it is important to plan and adapt a treatment strategy based on the patient’s wishes and the institution’s protocols. As a sidenote, we reviewed the available literature regarding the type of delivery and anesthesia for pregnant women with a history of SIH.
Introduction
Spontaneous intracranial hypotension (SIH) (International Classification of Headache Disorders, 3rd Edition) is characterized by continuous or intermittent cerebrospinal fluid (CSF) leakage from the CSF cavity resulting in symptoms such as headache or neck pain on standing, dizziness, and tinnitus [1][2][3]. Although these symptoms are similar to those of low CSF pressure, SIH often lacks any obvious signs of trauma or other causes, and the CSF pressure might remain within the normal range. Few papers have reported the outcomes in pregnant patients with a history of SIH. This report describes a case of delivery in a woman with a history of SIH 9 years earlier from lifting luggage into an overhead bin with stretching movements, the recovery from which required continuous epidural saline infusion. Moreover, we reviewed the available literature regarding the anesthetic management of pregnant patients with a history of SIH. Written informed consent was obtained from the patient for the publication of this case report.
Case Presentation
The patient was a 35-year-old woman with a height of 157 cm. Nine years earlier, at the age of 26 years, the patient developed a headache that worsened on standing and improved upon lying down after stretching to lift heavy luggage onto a shelf while working as a flight cabin attendant. Her symptoms did not improve, and she gradually became bedridden. She was admitted to our hospital after 8 days for examination and treatment. Upon admission, physical examination revealed no signs of meningeal irritation or neurological deficits; further, migraine-suggestive prodromal symptoms were absent. She did not completely recover after receiving nonsteroidal anti-inflammatory drugs.
At the initial examination, the patient's CSF pressure was normal (75 mmH2O {> 60 mmH2O}). However, contrast-enhanced computed tomography (CECT) of the spinal cord revealed CSF leakage from the cervical spine to T11; the patient was diagnosed with SIH (Figure 1). Since the patient's symptoms had not improved with supplemental fluids and bed rest, she underwent continuous epidural saline infusion at the L1-2 level, which was selected to avoid accidental puncture of the dura mater, 3 weeks after admission. Although her symptoms mildly improved, the prolonged sitting and standing time and activities of daily living (ADL) did not improve. Her symptoms and ADL gradually improved upon switching the catheter to the T2/3 level 6 weeks after admission. The catheter was removed 2 months after admission, and the patient was discharged from the hospital. She entered nursing school and started working as a nurse, with the final medical examination being conducted 3 years after discharge. Her attending physician advised her to avoid excessive stress on the lumbar region and stretching movements to prevent the recurrence of SIH.
At the age of 35 years, the patient became pregnant with her first child. At a gestation period of 33 weeks and 1 day, she participated in an anesthesia consultation regarding the delivery and anesthesia methods. After discussions with her obstetrician, dura mater protection was prioritized, and delivery via cesarean section (CS) under general anesthesia was planned. At a gestation period of 36 weeks and 1 day, the patient presented with elevated blood pressure and increased urinary protein levels and was admitted to the hospital with severe preeclampsia. At a gestation period of 36 weeks and 3 days, she underwent an emergency CS under general anesthesia for severe preeclampsia. Anesthesia was rapidly induced using 130 mg propofol, 50 μg fentanyl, and 70 mg rocuronium. Anesthesia was maintained using 1.5% sevoflurane in oxygen until delivery, with propofol and remifentanil administered after delivery. Fentanyl was administered to avoid excessive blood pressure elevation and bucking. Postoperatively, we provided patient-controlled analgesia with fentanyl and regular acetaminophen administration. The newborn weighed 2497 g; had Apgar scores of 7 and 9 at 1 and 5 minutes, respectively, and an umbilical artery pH of 7.228; and did not require other treatments such as bag and mask ventilation. The patient has reported no signs of recurrence of headaches since the delivery.
Discussion
According to the International Classification of Headache Disorders, the diagnostic criteria for SIH are based on the symptoms and on CSF leakage as confirmed using MRI or CECT [3]. SIH management primarily involves rest and the administration of fluids, caffeine, and analgesics. When symptoms do not improve, an epidural blood patch is performed [1]. Moreover, in intractable cases, fibrin glue patches [4], dextran and steroid injections [5], and continuous epidural saline infusions [6] to the site of CSF leakage have been described as effective treatment modalities.
Three options were available for the anesthetic management of this case: vaginal delivery, CS under regional anesthesia, and CS under general anesthesia. Generally speaking, vaginal delivery is the most common mode of delivery, followed by CS under regional anesthesia and CS under general anesthesia. Our patient nevertheless underwent a CS under general anesthesia, because both the Valsalva maneuver (straining during vaginal delivery) and CS under regional anesthesia carry a risk of SIH recurrence.
The Valsalva maneuver may cause CSF leakage in patients with a weak dura mater [2]. Although vaginal delivery is not contraindicated in pregnant women with a history of SIH [7], SIH can be caused by the Valsalva maneuver even without epidural or spinal anesthesia or dural puncture during pregnancy [8].
Only a few reports have described the outcomes in pregnant patients with a history of SIH. A literature search of the PubMed, Cochrane Library, and Embase databases from 1980 to 2022 using the search terms "spontaneous intracranial hypotension," "cerebrospinal fluid," and "pregnancy" yielded 13 reports of SIH development during pregnancy and delivery. Table 1 summarizes the patient characteristics and the delivery and anesthesia methods in these patients [5,[9][10][11][12][13][14][15]. These reports indicated successful delivery following treatment for SIH without its recurrence; however, the number of reported cases is small (n = 12), and the anesthesia method was only described in two cases. CS under regional anesthesia could possibly damage the dura mater. The mechanism underlying the improvement of SIH symptoms by saline infusion into the epidural space may include the prevention of CSF leakage through the application of continuous pressure at the site of the leakage and maintenance of the CSF pressure and volume through pressure on the dura mater. However, catheter placement with a continuous infusion into the epidural space and frequent epidural blocks can cause inflammation and adhesions in the epidural space [16]. Further, the epidural space in pregnant women is narrowed by the development of the venous plexus and edema of the connective tissue [17]. However, we could not determine whether the epidural space had become adherent or narrowed in our patient. The patient was considered to have a relatively high risk of dural puncture with the epidural needle. Although the rate of dural puncture using Tuohy needles during epidural anesthesia is usually only 0.8%, the rate of headache following dural puncture could be as high as 81% [18]. Although spinal anesthesia is considered the standard anesthesia protocol for CS, post-dural-puncture headache (PDPH) has been reported in 0.8% of cases even with the use of a 25-G pencil-point needle at our hospital [19].
Our patient considered it most important to protect the dura mater, as she strongly wished to avoid any risk of SIH recurrence. She had developed SIH at the age of 26 years and presented with chronic symptoms, which impeded her social life. Since the onset of SIH had been triggered by a stretching action involving lifting and extension, she had been asked to minimize stretching movements in her daily life [20]. Accordingly, there were concerns regarding the recurrence of SIH due to the Valsalva maneuver during vaginal delivery, or due to dural damage or PDPH attributed to regional anesthesia.
Three factors were considered when deciding the type of delivery and anesthesia. One was that we had the capability to treat neonates immediately after delivery by CS under regional anesthesia (not all hospitals have neonatologists standing by to handle them with special measures). Another was that there was a consensus among all of our medical staff members that the patient's wishes should be respected above all. The third was that all the departments involved in this case considered the patient's requests, given her background, reasonable from the medical perspective as well.
In our hospital, general anesthesia is performed in patients who cannot undergo regional anesthesia during CS or in cases of emergency CS.Additionally, the neonatologists' backup will facilitate the use of general anesthesia.
Accordingly, we discussed the various options for the delivery (natural vaginal delivery or CS) and the anesthesia method (regional or general anesthesia) among the departments of anesthesiology, obstetrics, and neonatology and concluded that CS under general anesthesia would be the best option. Therefore, taking the patient's wishes into account, we decided to perform CS under general anesthesia.
The baby had an Apgar score of 7 at 1 minute owing to the anesthetic effect; however, the baby quickly recovered and was discharged without prolongation of hospital stay or complications. The patient was satisfied after the operation and did not present any anesthesia-related or obstetric complications, or SIH recurrence. No well-established measures concerning the type of delivery and anesthesia for pregnant women with a history of SIH have been reported. Therefore, treatment needs to be adapted to the patient's wishes and institutional protocols, with cooperation and communication among the departments of anesthesiology, obstetrics, and neonatology. Further reports are warranted to provide extended details regarding anesthesia methods for pregnancy and delivery in patients with a history of SIH.
Conclusions
Of the three delivery and anesthesia options for this case (vaginal delivery, CS under regional anesthesia, and CS under general anesthesia), the pregnant woman with a history of SIH underwent the least common one, CS under general anesthesia. This choice was made to eliminate the risk of SIH recurrence (both vaginal delivery and CS under regional anesthesia may cause recurrence of SIH), out of respect for the patient's wish not to risk damage to the dura mater, and because this wish was considered reasonable and feasible given the level of medical support available at the institution. Since there are no well-established measures regarding the type of delivery and anesthesia for pregnant women with a history of SIH, it is necessary to discuss and decide on the type of delivery and anesthesia together with the patient, within institutional protocols, and with cooperation and communication among the departments of anesthesiology, obstetrics, and neonatology throughout the perinatal period.
FIGURE 1: Contrast-enhanced computed tomography of the spinal cord shows extravasation of contrast into the epidural space
FIGURE 2: Magnetic resonance imaging myelography (sagittal T2) shows fluid accumulation in the epidural space around T1-7
TABLE 1: Literature review of SIH cases during pregnancy
Among them, McGrath et al. [11] described two patients who developed SIH during pregnancy and received epidural blood patches. Both patients underwent vaginal delivery without SIH recurrence [11]. Ferrante et al. described five patients who developed SIH during pregnancy and delivered successfully following treatment; among them, two delivered vaginally while three underwent CS [14]. | 2024-01-17T16:09:56.227Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "3a8fc91792021d7355013949fefd926de0affb70",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/204634/20240114-1279-wld6kw.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5ae367a29b629862af9bfd10bf2c663f9c3d9925",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
129110875 | pes2o/s2orc | v3-fos-license | The Effects of Environment and Family Factors on Pre-Service Science Teachers’ Attitudes Towards Educational Technologies (The Case of Muğla University-Turkey)
Introduction
In our world, where information and technology are rapidly developing, it is an undeniable fact that technology has a great impact on education. The main goal of education is to equip individuals with the required knowledge and show them how to use it. For this, the traditional methods in use seem to be inadequate; hence, there is a need to make use of educational technologies (Uzunboylu, 1995;Yenice, 2003).
Every type of tool and equipment that helps reduce the interaction between the student and the subject to be learned to a level where the student can understand it falls within the scope of educational technologies. In the classroom, a wide range of materials can be used, ranging from the teacher, chalk and blackboard to educational videos and virtual environments (Akpınar, 2004;Hannafin & Peck, 1988). It is of great importance to make use of more educational tools in the classroom to help students understand better. In today's classrooms, visual and auditory materials come to the fore. For these visual and auditory materials to be used effectively, the specific features of the tools should be known. These features may sometimes seem very simple, but they can be very important for the effective use of a tool and, hence, for the quality of the lesson (Küçükahmet, 1999). Binbaşıoğlu (1994) reported the advantages of materials-based teaching as follows: they help maintain the continuity of teaching and enhance motivation, they help teach correctly, they bring variety, reality and concreteness to the teaching and learning environment, and they are emotionally enriching. On the other hand, they lessen the use of language, they can be really expensive, they may be time-consuming, teachers may not be qualified enough to use such tools, and they may lead to the deterioration of thinking skills (Rüzgar, 2005). Alper and Gülbahar (2009, pp. 124-125) performed a meta-analysis of studies carried out on educational technologies between 2003 and 2007 and found that most of the studies focus on "the effects of multimedia-enhanced computers" and "integration of technology and internet education". Research looking at the application of various dimensions of educational technology in teaching has revealed that educational technology applications have multi-dimensional positive impacts on student achievement. In this respect, various learning materials (games, analogies, sample events, experiments and models) (Aktamış et al., 2002), teaching through models (Şahin et al., 2001), and computer-assisted materials (Akdeniz and Yiğit, 2001;Kibos, 2002;Yumuşak and Aycan, 2002) have been found to improve students' achievement. Akpınar et al. (2005) investigated students' opinions about the use of technology in the elementary science course and teachers' frequency of use of technological tools and equipment in science courses. They found significant differences between private and state schools; depending on the type of school, they found significant differences in students' opinions and frequency of use. Can (2010) carried out a study with 184 pre-service teachers from the department of elementary education to determine their attitudes towards the effects of using two teaching materials, the overhead projector and the projector, on learning. The author found that the pre-service teachers generally have positive attitudes because they think that the use of these materials brings variety and change to the teaching environment, eliminates monotony from the class, and provides colorful, lively and smooth learning and teaching. Frantom et al. (2002) carried out a study to investigate children's attitudes towards technology and obtained a two-factor scale with interest/ability and alternative characteristics as its sub-dimensions.
When the elementary and secondary school students' scores on these two sub-dimensions were compared, significant differences were found between them; the attitudes also varied depending on gender. Dalton and Hannafin (1986) evaluated the effects of video, computer-assisted teaching and interactive video applications on learning performance and attitude, and found that the participants considered only computer-assisted teaching effective, with no need for interactive videos. On the other hand, when interactive-video teaching was compared to computer-assisted teaching and video, it was found that it could significantly affect the attitudes of students of low ability (cited in Yavuz & Coşkun, 2008). Tanguma et al. (2002) investigated technology utilization models within the context of a course. They found that the teachers use package programs in their subject area, carry out impressive applications with tools such as scanners, digital recorders and voice recording machines, and make use of technology and the internet in their lessons. Woodrow (1992) reported that there is a correlation between attitudes towards technology and computer experience. Chou (1997) stated that computer experience affects teachers' attitudes towards computers. According to Ropp (1999), there is a significant relationship between computer access and attitudes towards computers and using a computer for an hour a week.
When education is considered as a unity, it is not possible to achieve its objectives by only focusing on information given at school and excluding students' families and the environment where they have been brought up from the process. The training of an individual is not limited to the places of formal education.
One of the two most important factors determining human behavior, the environment (the other being heredity), can be defined as the physical, biological, social, economic and cultural settings in which individuals maintain their relationships and carry out their mutual interactions throughout their lives. Environment means everything affecting an individual, and the individual himself/herself is an integral part of this environment. As a social creature, man engages in various interactions within the social environment where he/she lives, as a complementary part of it. The environment covers all systems, whether physical, chemical or biological (Yiğit & Bayrakdar, 2006). Throughout their lives, humans gain information, skills, attitudes and values as a result of their interactions with the environment, and these experiences form the basis of education (Ertürk, 1993).
According to Herman (1998), any person is born with genetically inherited characteristics which make up 30% of his/her personality, and the remaining 70% of the personality is shaped by environmental conditions such as the things provided by parents, information gained from formal and informal education, things learned from peer circles, and the culture in which he/she is brought up. The ecological environment and the family environment where the individual was born and brought up have great impacts on the formation of the individual's characteristics. Environmental conditions such as place of residence, housing facilities, transportation, education, health, recreational activities, public utilities, etc., and family conditions such as the socio-economic structure of the family, its education level, income level, relationships with neighbors, inter-family relations, and the family members' success in performing their role functions have the potential to affect an individual's personal characteristics and skills in communicating with the environment (Kut & Koşar, 1989: 19;cited in Deniz, 2003). Within the context of the environment where people are brought up, important variations may be observed among people based on whether they were brought up in a rural or urban area, their socio-economic level and the opportunities they have to make use of educational facilities. The education process, which starts in the family and continues at school and through various tools of mass media, may vary significantly depending on whether an individual comes from a rural or urban area and on the education level of the family.
Besides the high number of students in Turkey who are not able to attend school continuously without any interruption, many students who can attend a formal education institution regularly do not have developed technological tools available in their personal environment. In this respect, it can be argued that schools exhibit a heterogeneous rather than a homogeneous structure; hence, students may encounter inequalities stemming from the environments in which they have been brought up. Moreover, students coming from similar types of families concentrate in similar types of schools, and this leads to increasing differentiation and inequality among schools. This is not a problem specific to schools in underdeveloped or developing countries; in developed countries inequalities can be observed depending on local, regional, ethnic, racial, linguistic and gender variables (Berne, 1994;Kozol, 1991;Spring, 1998, pp. 48-49, as cited in Aksoy, 2003). These result in differences in students' chances of encountering the educational technologies that have an important place in the education process.
In rural areas, due to parents' low level of education, the opportunity of drawing on educational facilities and materials is restricted and this may prevent individuals from developing positive attitudes. However, in urban areas where socio-economic level is high, usually the education level of parents is high. Hence, the children of these parents have more opportunities such as the availability of computers, internet, newspapers, magazines, scientific journals, videos, CDs, mobile phones, familiarity with satellite receivers and all the other technological tools and this has positive influence on children's attitudes towards technological tools and their use of frequency of these tools.
In the literature, there are various studies looking at the effects of the environment where the individual was brought up or of the education level of the parents. Akpınar (2003) investigated how teachers who graduated from universities located in different regions use internet resources inside and outside the class and found a significant difference favoring teachers who graduated from universities located in a metropolis (İstanbul, Ankara, İzmir, Bursa, Adana, Gaziantep) or in a coastal city. Teachers who graduated from universities located in East, South East and Central Anatolia were found to make less use of the internet. Erol and Gezer (2006) found that parents' education level and the environment where they live do not have any significant influence on pre-service classroom teachers' perceptions of the environment and environmental problems. Devecioğlu and Sarıkaya (2006) conducted a descriptive study to determine the profiles of students of a school of sports in light of some socio-economic variables, including parents' educational status.
Determination of the attitudes of pre-service science teachers who make up the core of education towards educational technologies can make important contributions to the efficiency and quality of education in general. Among the studies dealing with educational technologies, the number of studies looking at the effects of the environment where preservice teachers were brought up and their parents' educational status is few and this increases the importance of the present study. Moreover, the present study is thought to have important contributions by drawing the attention to family and environment factors which are important dimensions of education process and providing guidance to researchers, educators and practitioners working in the relevant fields. As stated by Thomas Gordon "The first and most effective teachers of children are their parents", education starts in the family, hence, it is assumed that the environment where pre-service science teachers have been brought up and their parents' educational status can have influences on their attitudes towards educational technologies.
Purpose of the study
The present study aims to determine the effects of the environments where pre-service science teachers have been brought up and the educational level of their parents on their attitudes towards educational technologies. For this purpose, answers to the following questions were sought: -What is the level of pre-service science teachers' attitudes towards educational technologies? -Do the pre-service science teachers' attitudes towards educational technologies significantly vary depending on the environment where the pre-service teachers were brought up? -Do the pre-service science teachers' attitudes towards educational technologies significantly vary depending on their parents' educational level?
Method
The sampling of the study which employed survey method consists of 101 first-year students attending science teacher education department of the education faculty at Mugla University in 2009-2010 academic year.
Data collection
As data collection tools, a personal information form developed by the researcher and the 43-item Scale of Attitudes towards Educational Technologies developed by Pala (2006) were used to elicit the participants' attitudes towards educational technologies. The students were given detailed information about the attitude scale, and then the scale was administered to those who were willing to participate in the study. Completion of the scale took about 15-20 minutes. The data obtained from the scale were entered into the computer and appropriate statistical analyses were conducted. The reliability of the scale was tested with the Cronbach alpha coefficient using the SPSS 14 program package and found to be 0.78; this value shows that the scale is reliable enough to be administered. In order to establish the validity of the scale, expert opinions were sought on whether the items in the scale measure the intended attitudes. The scale includes five options: "Strongly agree", "Agree", "Undecided", "Disagree" and "Strongly disagree". Scoring was performed from 5 to 1 for positive statements and from 1 to 5 for negative statements. The lowest possible score on the scale is 43 and the highest is 215. A score in the range 43-77 corresponds to "Strongly disagree", 78-111 to "Disagree", 112-145 to "Undecided", 146-179 to "Agree" and 180-215 to "Strongly agree".
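The scoring and categorization rules described above can be written down as a short sketch; the function names and the numeric encoding of the responses (1 = "Strongly disagree" ... 5 = "Strongly agree") are assumptions made for illustration.

def score_item(response, positive=True):
    """Score one 5-point item: 5..1 for positive statements, 1..5 for negative ones."""
    if response not in (1, 2, 3, 4, 5):
        raise ValueError("response must be between 1 and 5")
    return response if positive else 6 - response

def attitude_category(total):
    """Map the 43-item total score (43-215) to the attitude categories used above."""
    if 43 <= total <= 77:
        return "Strongly disagree"
    if 78 <= total <= 111:
        return "Disagree"
    if 112 <= total <= 145:
        return "Undecided"
    if 146 <= total <= 179:
        return "Agree"
    if 180 <= total <= 215:
        return "Strongly agree"
    raise ValueError("total outside the 43-215 range")

# Example: the mean score of 169.66 reported below falls in the "Agree" category.
print(attitude_category(170))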
Data analysis
The data obtained through the scale were analyzed with the SPSS program package. An independent-samples t-test was used to test whether there is a significant difference in the students' attitudes based on the environment where they were brought up, and one-way ANOVA was used to test whether there is a significant difference in attitudes based on the parents' educational level.
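An equivalent analysis outside SPSS could look like the following sketch; the group arrays are hypothetical placeholders, not the study data.

from scipy import stats

# Hypothetical attitude scores grouped by the environment variable (rural vs. urban).
scores_rural = [165, 172, 158, 170, 169]
scores_urban = [171, 168, 175, 166, 173]
t_stat, p_ttest = stats.ttest_ind(scores_rural, scores_urban)

# Hypothetical scores grouped by parents' educational level (one-way ANOVA).
grp_primary, grp_secondary, grp_high, grp_university = (
    [160, 168, 171], [165, 170, 172], [167, 169, 174], [171, 173, 168])
f_stat, p_anova = stats.f_oneway(grp_primary, grp_secondary, grp_high, grp_university)

print(t_stat, p_ttest, f_stat, p_anova)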
Findings concerning the first sub-problem
The first sub-problem of the study is "What are the pre-service science teachers' attitudes towards educational technologies?" The findings concerning this sub-problem reveal that the mean score for the pre-service science teachers' attitudes towards educational technologies is 169.66, standard deviation is 14.19; the lowest score taken from the attitude scale is 125 and the highest score is 207. According to these scores, the pre-service science teachers' general attitude is in the category of "Agree". This finding shows that the students in general have positive attitudes towards educational technologies. There are similar findings reported in the literature. Gunter, Gunter & Wiens (1998) found that pre-service teachers have more positive attitudes towards working on computer and learning through computer and technology in general. Yılmaz (2005), in his thesis study, investigated the effects of technology on students' achievement and attitude and found positive impacts on achievement and attitude. In another study, Sevindik (2006) found positive effects of using smart classes in higher education on students' academic achievement and attitudes. Yavuz and Coşkun (2008) found that pre-service elementary school teachers have positive attitudes towards and opinions about the use of technological tools and equipments.
Findings concerning the second sub-problem
The second sub-problem of the study is "Do the pre-service science teachers' attitudes towards educational technologies vary significantly depending on the environment where they have been brought up?" A t-test was conducted to test whether there is a statistically significant difference among the pre-service science teachers' attitudes towards technology based on the environment where they have been brought up, and the results are presented in Table 1 and Table 2.
Table 2. T-test results for the pre-service science teachers' attitude scale scores in relation to the environment where they have been brought up
Environment
According to the t-test results presented in Table 2, there is no significant difference among the attitudes based on the environment where they have been brought up [t (99) = .22, p > .05]. This finding indicates that there is no significant relationship between the environment where the pre-service science teachers have been brought up and their attitudes towards educational technologies. This finding concurs with the findings of Can (2010); Erol and Gezer (2006).
Findings concerning the third sub-problem
The third sub-problem of the study is "Is there a significant relationship between the pre-service science teachers' attitudes towards educational technologies and their parents' educational status?" First, the distribution of the pre-service science teachers according to their parents' educational status is given in Table 3; the ANOVA test was then carried out to determine whether there is a significant relationship between the pre-service science teachers' attitudes and their parents' educational status, and the findings are presented in Table 4, Table 5, Table 6 and Table 7.
Table 4. Arithmetic means and standard deviations for the pre-service science teachers' mothers' educational status
Educational status
In Table 4, it is seen that there are differences among the arithmetic means. ANOVA test was carried out to determine whether these differences are statistically significant and the results of the test are presented in Table 5.
Source of the variance | Sum of squares | df | Mean of squares | F | p
Between-groups | .14 | 3 | .05 | .42 | .74
Within-groups | 10.75 | 97 | .11 | |
Total | 10.89 | 100 | | |
Table 5. ANOVA results for the pre-service science teachers' attitude scale scores in relation to their mothers' educational status
The results in Table 5 show that there is no significant difference based on the mothers' educational status among the pre-service science teachers' attitudes towards educational technologies [F(3-97) = .42, p > .05]. That is, there is no relationship between the pre-service science teachers' attitudes towards educational technologies and their mothers' educational status. This finding is supported by the findings reported by Erol and Gezer (2006).
Table 6. Arithmetic means and standard deviations concerning the pre-service science teachers' fathers' educational status
Variance analysis was conducted to see whether the differences among the arithmetic means in Table 6 are significant, and the results are presented in Table 7. The results in Table 7 show that there is no significant difference based on the fathers' educational status among the pre-service science teachers' attitudes towards educational technologies [F(2-98) = .07, p > .05]. That is, there is no relationship between the pre-service science teachers' attitudes towards educational technologies and their fathers' educational status. This finding is in compliance with the findings of Erol and Gezer (2006).
Results
In today's world where information and technology are rapidly changing and developing, it is great importance for students to gain information access and problem solving skills. Therefore, integration of educational technologies into the field of education has an important role in enhancing academic achievement.
A conception of education not drawing on technological opportunities cannot meet the needs and expectations of individuals and societies of the today's world. Today, it is a must for each individual to be equipped with skills of having access to information, organizing this information, evaluating and using it and communication (Toprakçı, 2005; cited in Taşçı et al., 2010). As a result of widespread use of technological tools and devices in the field of education, a need to determine students' opinions about and tendencies and attitudes towards these tools has emerged (Akpınar, Aktamış ve Ergin, 2005;Frantom et al., 2002;Becker & Maunsaiyat, 2002;Tsai at al., 2001;McCoy, et. al., 2001;Gunter at al., 1998). In addition to this, it is assumed that the environment where students have been brought up and their parents' educational status may have some impacts on students' attitudes. In this respect, the present study investigates first-year pre-service science teachers' attitudes towards educational technologies and the effects of environment where they have been brought up and their parents' educational status on their attitudes.
In the present study, it was found that the pre-service science teachers' general attitude towards educational technologies falls in the "Agree" category, that is, they hold a positive attitude. Moreover, it was found that the attitudes towards educational technologies do not vary significantly depending on the environment where the pre-service teachers have been brought up. This result may indicate that whether the pre-service science teachers were brought up in rural or urban areas does not have any significant influence on their attitudes towards educational technologies. The pre-service science teachers may have been encouraged to make more use of the internet through project work or other homework given in their former education, and in this way they may have developed more positive attitudes towards educational technologies. Another finding of the present study is that there is no significant relationship between the parents' educational status and the pre-service science teachers' attitudes towards educational technologies.
But before making some generalizations in light of the findings of the present study, the limitations of the study should be mentioned. First, the present study is limited to its study group and data collection tools used in the present study. Therefore, further research may look at students from different departments, different faculties or different universities.
Utilization of educational technologies in the field of education can enrich education and enhance students' motivation, in this way; students are promoted to develop positive attitudes towards educational technologies. Positive attitudes developed by pre-service science teachers towards educational technologies may help them to make more efficient use of such technologies in their teaching. | 2019-04-24T13:02:12.251Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "698238d783ef17a008a3911931dff7ad42d75b7e",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/39102",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "f2e628cfae8028c67da734abc806d07ee13f1502",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
5913291 | pes2o/s2orc | v3-fos-license | Validation of Ankle Strength Measurements by Means of a Hand-Held Dynamometer in Adult Healthy Subjects
The uniaxial Hand-Held Dynamometer (HHD) is a low-cost device widely adopted in clinical practice to measure muscle force. HHD measurements depend on the operator's ability and on joint movements. The aim of this work is to validate the use of a commercial HHD in both dorsiflexion and plantarflexion ankle strength measurements, quantifying the effects of HHD misplacements and unwanted foot movements on the measurements. We used an optoelectronic system and a multicomponent load cell to quantify the sources of error in the manual assessment of ankle strength due to both the operator's ability to hold the HHD still and the transversal components of the exerted force that are usually neglected in clinical routine. Results showed that foot movements and angular misplacements of the HHD on the sagittal and horizontal planes were relevant sources of inaccuracy in the strength assessment. Moreover, ankle dorsiflexion and plantarflexion force measurements presented an inaccuracy of less than 2% and higher than 10%, respectively. In conclusion, the manual use of a uniaxial HHD is not recommended for the assessment of ankle plantarflexion strength; on the contrary, it can be allowed for ankle dorsiflexion strength measurements, provided that the operator pays close attention to the HHD positioning.
Introduction
Measurement of the maximum force that a subject can exert during a volitional contraction is a basic clinical procedure often conducted in clinical and rehabilitation frameworks. It is also referred to as strength assessment [1]. Specifically, this technique enables an easy indirect estimation of joint moment, providing basic information about the health of tendons and ligaments and about joint stability [2]. Furthermore, strength evaluation enables the diagnosis of weakness as a consequence of muscular diseases and allows the quantitative assessment of functional recovery in rehabilitation programs [3][4][5][6][7][8].
As of today, a widespread and commercially available method to measure muscle strength involves the use of the isokinetic dynamometer [9][10][11][12]. This methodology showed a high interrater and intrarater reliability and reproducibility in the measurement of joint forces and torques, on subjects of a wide age range, on both lower and upper limb [5,9,13,14]. However, the isokinetic dynamometer is inherently expensive, cumbersome, and not portable and requires a long patient preparation time.
In clinical environments, simpler and faster methods are often preferred to reduce both the patient's discomfort and the examination time. Thus, the most widely adopted methodology to assess strength involves the Hand-Held Dynamometer (HHD), a low-cost, portable, and easy-to-use device. It consists of a small and portable single-axis dynamometer that can be held in hand by a clinician and applied on defined anatomical landmarks while the patient is asked to exert a force against it [9,14].
Despite its advantages, reports on HHD reproducibility and repeatability have been controversial [15][16][17][18]. The principal causes of the low reliability of HHD-based methods have been identified as poor operator training and incorrect patient positioning [7]. In fact, the HHD-based method relies on the operator's strength and training to counteract the force exerted by the patient while avoiding misplacements [19].
HHD strength measurements can be performed according to two methods [19]: (i) the "make test," in which the examiner holds the dynamometer stable while the subject exerts a maximal force against it, and (ii) the "break test," in which the examiner overcomes the maximum force exerted by the subject, producing a small limb movement in the direction opposite to the patient's force. Both methods were proved reliable and repeatable only if the examiner had enough force to counteract the force exerted by the patient [19]. Other studies provided similar results, showing that strength measurements performed through HHD are operator-dependent and that the "break test" requires a larger force exerted by the examiner [20,21]. The influence of the operator was tested by Kim et al. [9] by comparing three setups: (i) with the HHD fixed to the distal tibia by a Velcro strap; (ii) with the HHD held by the operator; and, finally, (iii) with an isokinetic dynamometer, assumed as a reference. They found that both the fixed and non-fixed methods showed good interrater reliability, with the higher reliability reached by the fixed method. The HHD held by the operator is nonetheless the assessment method widely adopted by clinicians, as it does not require a complex experimental setup [22].
Though strength can be assessed for all human muscles, particular clinical relevance is conferred to the strength of the lower limb muscles, due to the important role they play in daily living tasks (walking, chair rise, climbing, etc.), which may be compromised by neuromotor pathologies and aging [13]. Among the lower limb joints, the ankle deserves special attention, as dorsi/plantarflexion and inversion/eversion are key movements for balance and general functional ability [23], playing an important role in human gait. In fact, it was observed that ankle kinetics are often affected by neuromotor pathologies and may improve after therapies [24][25][26][27].
Several studies have been conducted to assess the validity of HHD measurements of ankle strength. The ankle strength of healthy subjects was measured by means of an HHD and then compared to an electromechanical dynamometer, that is, a fixed dynamometer that allowed the evaluation of isometric force [18]. Results showed that the HHD measurements were poorly correlated with the fixed dynamometer, and statistical differences were found between the two datasets. The researchers attributed these results to the examiner's low strength and inability to position and hold the HHD steady, and concluded that HHD strength measurements of the plantar flexors should not be considered valid [18]. However, these results were in disagreement with those obtained by Spink et al. [23], who found high reliability of ankle strength measurements by means of an HHD in both older and younger participants and concluded that the HHD is a valid methodology for the evaluation of ankle strength. Hébert et al. [17] found that, among all the lower limb joints, ankle plantarflexion and ankle dorsiflexion presented the lowest reliability. Therefore, they recommended further studies in this direction, especially regarding strength evaluation in children with neuromotor disabilities.
From the previously cited studies, the operator's inability to hold the HHD in the correct position emerged as the main issue in HHD strength measurements at the ankle joint. In all the reported studies only a reliability analysis was conducted and, to the best of the authors' knowledge, no studies have been performed to identify and quantify the sources of inaccuracy that occur in the assessment of ankle strength by means of an HHD. Therefore, detailed studies on the quality of clinical measurements are strongly encouraged [28], with the purpose of establishing the reliability, reproducibility, and validity of such measurements [29].
The aim of this study was to validate the manual use of a commercial HHD (a uniaxial load cell) for plantarflexion and dorsiflexion ankle strength measurements, quantifying the effects of HHD misplacements and unwanted foot movements on the measurements. A validation protocol involving a motion capture system and a multiaxial load cell was exploited to measure the actual forces and moments exerted by the subject, the HHD position, and the undesired motion of the patient's foot. The present work took advantage of a measurement protocol previously validated and already applied to the analysis of knee strength measurements [22,30].
Materials and Methods
2.1. Subjects. Thirty healthy adult subjects (18 M, 12 F; age: 26.2 ± 2.1 years, height: 173.6 ± 7.2 cm, and weight: 68.1 ± 8.7 kg) were enrolled in the study. Participants had never suffered from any neurological or orthopaedic disorders and had never undergone surgery to the lower limb joints. All the subjects were right-handed, even though this was not an inclusion criterion. Measurements were conducted at the MARLab (Movement Analysis and Robotic Laboratory of the Children's Hospital Bambino Gesù).
2.2. Study Approval. This study complied with the principles of the Declaration of Helsinki, and it was approved by the Ethical Committee of the Children's Hospital Bambino Gesù in Rome.
2.3. Experimental Setup.
Strength measurements were conducted by means of a six-component load cell, the Gamma F/T Sensor (ATI Industrial Automation, USA). The cell was equipped with a force-transferring aluminium layer and a foam layer on top, designed to increase patient comfort (Figure 1). The range of measurement of the load cell was 400 N on the principal axis (z-axis), 130 N on the transversal axes, and 10 Nm for the moment on each axis. Resolution was 1/20 N for the force and 1/800 Nm for the moment. The load cell weighed 0.255 kg, with a diameter of 75.4 mm and a height of 33.3 mm. In this study, we used the above-described load cell as a Hand-Held Dynamometer, named HHD in the following.
Motion and displacements were recorded by means of an 8-camera Vicon MX optoelectronic system (Oxford Metrics, UK), named OS in the following. The sampling frequency was set at 200 Hz. We used the Vicon Nexus 1.7 software (Oxford Metrics, UK) to reconstruct the markers' trajectories. System calibration was performed before each acquisition session, according to the manufacturer's instructions. The overall RMS error of marker reconstruction in three-dimensional space was ∼1 mm in a calibrated volume of about 3 m × 1 m × 2 m. The output signals of the HHD were collected through the analog input ports of the OS, ensuring synchronization between the devices.
2.4. Motion Capture Protocol.
For this study, a protocol previously designed for the knee [30] was adapted to the ankle joint.
Four markers were placed on the HHD (Figures 1(a) and 1(b)). Rigid sticks were used to prevent the markers from being covered by the operator's hand. The central marker was placed at the midpoint of the patient-interface area of the HHD; this marker was needed to locate the center of the contact surface with respect to the other markers. Fourteen markers were placed on the subject's lower limbs (Figure 2). Landmarks were identified as follows: lateral and medial femoral epicondyles (4 markers), lateral and medial malleoli (4 markers), lateral shanks (2 markers), head of the first metatarsal (2 markers), and head of the fifth metatarsal (2 markers).
A static trial was recorded before the measuring session to identify the reference systems (Figure 1) and to measure the offset signals of the HHD. In the static trial, the HHD was placed on the floor with no load applied on it and the subject stood still in an upright position. During the measuring session, the central marker was removed and its position was reconstructed by using a localization procedure based on the three fixed markers [31]. We included the left leg in the protocol design to allow the processing of strength trials in left-handed subjects.
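The localization procedure itself is described in [31]; purely as an illustration of the general idea (not necessarily the exact algorithm used in that reference), a reconstruction of the removed marker can be sketched in Python as follows, where all function and variable names are ours:

```python
import numpy as np

def marker_frame(p1, p2, p3):
    """Build an orthonormal frame (origin + rotation) from three rigid-body markers."""
    x = (p2 - p1) / np.linalg.norm(p2 - p1)      # first in-plane axis
    n = np.cross(p2 - p1, p3 - p1)               # normal of the marker plane
    z = n / np.linalg.norm(n)
    y = np.cross(z, x)                           # completes a right-handed frame
    R = np.column_stack((x, y, z))               # columns are the frame axes
    return p1, R

def reconstruct_virtual_marker(static_fixed, static_virtual, dynamic_fixed):
    """Re-project a removed marker using three fixed markers on the same rigid body.

    static_fixed:   (3, 3) array, rows = the three fixed markers in the static trial
    static_virtual: (3,) position of the (later removed) marker in the static trial
    dynamic_fixed:  (3, 3) array, rows = the same three markers in one dynamic frame
    """
    o_s, R_s = marker_frame(*static_fixed)
    local = R_s.T @ (static_virtual - o_s)       # constant coordinates in the local frame
    o_d, R_d = marker_frame(*dynamic_fixed)
    return o_d + R_d @ local                     # position in the dynamic frame
```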
2.5. Strength Protocol.
The strength of the ankle dorsiflexor and plantarflexor muscles was measured by applying a validated clinical protocol [32], consisting of a "make test" [19,21]. In both the ankle plantarflexion and dorsiflexion movements, the subject was lying on the bed with the ankle in a neutral position (Figure 2). The HHD was placed under the foot sole, on the metatarsal region, for plantarflexion testing and on the upper metatarsal region for dorsiflexion testing. The subjects were instructed to push against the HHD, exerting their maximum force. Strength was measured by a trained clinician (male, height 170 cm, weight 73 kg) with long-term experience in strength assessment. The operator stood at the bottom of the bed, holding the HHD with both hands in order to keep it in place while counteracting the patient's force to keep the foot still for about five seconds. The participants were instructed to avoid explosive contractions and to gradually increase the force from zero to the maximum achievable value [33].
Trials were repeated five times for both plantarflexion and dorsiflexion with a resting time of about 30 s between trials to avoid fatigue effects in both subject and operator.
2.6. Data Analysis. Before the identification of the local reference frames, we defined the knee and ankle centers as the midpoints between the two markers on the epicondyles and on the malleoli, respectively.
The LRS for the HHD (namely, LRS_HHD) is shown in Figure 1(b) and was defined as follows:
(i) vmkr1: virtual marker defined as the projection of HHD4 onto the plane defined by HHD1, HHD2, and HHD3;
(ii) x_HHD: unit vector from vmkr1 to HHD1;
(iii) z_HHD: unit vector perpendicular to the plane defined by HHD1, HHD2, and HHD3, pointing outwards;
(iv) y_HHD: defined as the cross product between the z_HHD- and x_HHD-axes;
(v) Origin: virtual marker on the line between vmkr1 and HHD4, with an offset from HHD4 equal to the thickness of the force-coupling layers.
The LRS_HHD was designed in such a way that its x-, y-, and z-axes were directed as the respective internal axes of the load cell. The LRS for the foot (namely, LRS_FT) was defined as follows:
(i) z_FT: unit vector from the ankle center to the knee center;
(ii) x_FT: unit vector perpendicular to the plane defined by the knee center, the ankle center, and the midpoint between the markers on the first and fifth metatarsals, pointing in the lateral direction;
(iii) y_FT: defined as the cross product between the z_FT- and x_FT-axes;
(iv) Origin: ankle center.
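For illustration only, the construction of the two local reference systems can be sketched as follows; the axis labels follow the ones used above (the labels in the original protocol may differ), and the handling of marker ordering and left/right sign conventions is omitted:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def lrs_hhd(hhd1, hhd2, hhd3, hhd4, layer_thickness):
    """Local reference system of the HHD from its four markers (labels as used above)."""
    n = unit(np.cross(hhd2 - hhd1, hhd3 - hhd1))   # normal of the HHD1-HHD2-HHD3 plane
    # projection of HHD4 onto that plane gives the virtual marker vmkr1
    vmkr1 = hhd4 - np.dot(hhd4 - hhd1, n) * n
    x = unit(hhd1 - vmkr1)                          # from vmkr1 towards HHD1
    z = n                                           # outward-pointing principal axis (sign depends on marker ordering)
    y = np.cross(z, x)
    origin = hhd4 + unit(vmkr1 - hhd4) * layer_thickness
    return origin, np.column_stack((x, y, z))

def lrs_foot(knee_c, ankle_c, mt1, mt5):
    """Local reference system of the foot from the joint centers and metatarsal markers."""
    z = unit(knee_c - ankle_c)                      # longitudinal axis, ankle -> knee
    mid_mt = 0.5 * (mt1 + mt5)
    x = unit(np.cross(mid_mt - ankle_c, z))         # perpendicular to the segment plane (laterality not handled here)
    y = np.cross(z, x)
    return ankle_c, np.column_stack((x, y, z))
```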
The designed setup allowed the estimation of the following kinematic parameters (Figure 3):
(i) The Range of Motion (RoM) of the ankle dorsi/plantarflexion angle, defined as the difference between the maximum and the minimum of the angle measured throughout the trial. The ankle angle was computed on the basis of a three-point procedure between the knee center, the ankle center, and the midpoint between the markers on the first and fifth metatarsals. As the ankle should ideally remain still during the strength measurement, the RoM was assumed as a quality indicator of the strength measurement: a lower RoM indicates a higher quality of the performed measurement.
(ii) The angles between the HHD z-axis and the transverse and sagittal planes of the foot, namely α1 and α2. α1 and α2 were evaluated at the instant when the maximum force from the HHD was recorded. Their deviations from the ideal value (90°), that is, Δα1 and Δα2, indicate wrong positioning of the HHD during the strength measurement; in the ideal case, Δα1 = Δα2 = 0°.
The kinematic parameters were computed for both the ankle plantarflexion and dorsiflexion trials and then averaged over the five repetitions of each subject. To assess the repeatability of the measurements, we also computed the Coefficient of Variation (CV) for each parameter, defined as the percentage ratio between the standard deviation (SD) and the mean over the five repetitions of each subject.
The kinetic analysis was conducted in terms of the forces and moments acting on the ankle joint. Forces and moments were expressed in the LRS of the foot (FT F and FT M):
FT F = FT R_HHD · HHD F,
FT M = FT R_HHD · HHD M + FT o_HHD × (FT R_HHD · HHD F),
where HHD F and HHD M are the outputs of the HHD, FT R_HHD is the rotation matrix between LRS_HHD and LRS_FT, and FT o_HHD is the origin of LRS_HHD expressed in LRS_FT.
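A minimal sketch of these computations (the original analysis was implemented in Matlab; the Python function names below are ours) is:

```python
import numpy as np

def express_wrench_in_foot(F_hhd, M_hhd, R_ft_hhd, o_ft_hhd):
    """Express the HHD force/moment outputs in the foot reference system (one time sample)."""
    F_ft = R_ft_hhd @ F_hhd
    M_ft = R_ft_hhd @ M_hhd + np.cross(o_ft_hhd, F_ft)   # transfer the moment to the ankle origin
    return F_ft, M_ft

def ankle_angle(knee_c, ankle_c, mid_mt):
    """Three-point ankle angle (degrees) between the shank and foot segments."""
    u = (knee_c - ankle_c) / np.linalg.norm(knee_c - ankle_c)
    v = (mid_mt - ankle_c) / np.linalg.norm(mid_mt - ankle_c)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

def range_of_motion(angles):
    """RoM as the maximum minus the minimum of the angle over the trial."""
    return np.max(angles) - np.min(angles)

def coefficient_of_variation(values):
    """CV (%) over the five repetitions of one subject."""
    return 100.0 * np.std(values, ddof=1) / np.mean(values)
```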
From FT F and FT M, we defined:
(i) F_max, the maximum value of the component of FT F along the principal (pressing) direction, which represents the measure of strength;
(ii) F_t, the transverse component of the force exerted by the subject, which represents the intensity of the lateral forces that cannot be captured by a single-component load cell;
(iii) M_max, the maximum value of the dorsi/plantarflexion component of FT M, which represents the ankle dorsi/plantarflexion moment when the strength measurement is performed;
(iv) M_t, the transverse component of the ankle moment.
All these parameters were averaged across the five repetitions for each subject. As for the kinematic parameters, we computed the Coefficient of Variation (CV) for all the kinetic parameters to assess the repeatability of the procedure.
In order to simulate the strength measurements that are usually gathered in clinical routine by using a uniaxial HHD, we simulated its output by recalculating the above-reported parameters considering only the force measured along the principal axis of the HHD and setting the other force components and the moments to zero. The maximum value of this force was assumed as the nominal strength measurement (F_nom), that is, the only one that can be measured in clinical routine (see (4)). The respective nominal ankle moment (M_nom) was estimated by multiplying F_nom by the lever arm (d) between the center of the HHD and the ankle joint; d was measured with a tape measure, as done in clinical routine (see (5)).
The differences between the nominal F_nom and M_nom and the respective reference values obtained using the proposed validation procedure (F_max and M_max) were quantified in terms of the Root Mean Square Error (RMSE). RMSE_F and RMSE_M allowed the quantification of the accuracy of the uniaxial HHD in the estimation of the ankle strength and moment measurements, respectively.
Finally, we also calculated a quality index, I_Q, to provide an overall quantification of the quality of the strength measurements [30]. Specifically, I_Q (see (7)) takes into account both the angular displacement of the HHD and the transverse component of the moment. The higher the value of I_Q, the higher the quality of the strength measurement; its ideal value is 100%.
The identification of the local reference systems (LRS) for the body segments and the HHD and the estimation of the kinematic and kinetic parameters were implemented by means of Matlab (MathWorks, USA).
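Again purely as an illustration (the exact expressions of equations (4)-(7) are given in the original paper and in [30]; the names below are ours), the nominal quantities and the RMSE comparison could be computed as:

```python
import numpy as np

def nominal_strength_and_moment(f_axial, lever_arm_d):
    """Clinical-routine estimates from a uniaxial HHD reading.

    f_axial: time series of the force along the HHD principal axis (N)
    lever_arm_d: tape-measured distance from the HHD center to the ankle joint (m)
    """
    f_nom = np.max(f_axial)           # nominal strength, in the spirit of eq. (4)
    m_nom = f_nom * lever_arm_d       # nominal ankle moment, in the spirit of eq. (5)
    return f_nom, m_nom

def rmse(nominal, reference):
    """Root-mean-square error between nominal and reference values across trials/subjects."""
    nominal = np.asarray(nominal, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.sqrt(np.mean((nominal - reference) ** 2))
```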
2.7. Statistics.
The repeatability of the measured parameters was assessed by computing the CVs, while the RMSE parameters allowed the quantification of the inaccuracy occurring when the lateral components of force and moment are neglected, that is, when a commercial uniaxial HHD is used. All data were tested for normality by means of the Shapiro-Wilk test. Since the data proved to be normally distributed, the t-test was used to assess differences between means. Tests were assumed significant if p was lower than 0.05. Moreover, in order to analyze the influence of an unwanted displacement of the HHD on the accuracy of the HHD measurements and on the quality of the strength measurements, the Pearson product-moment correlation coefficient was computed to study the correlation between the kinematic and kinetic parameters. A strong correlation was assumed if |r| was higher than 0.7.
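As a sketch of this statistical pipeline (illustrative only; the original software environment is not specified beyond Matlab), in Python with SciPy:

```python
import numpy as np
from scipy import stats

def compare_directions(plantarflexion, dorsiflexion, alpha=0.05):
    """Normality check and paired t-test between the two movement directions."""
    pf = np.asarray(plantarflexion, dtype=float)
    df = np.asarray(dorsiflexion, dtype=float)

    normal = (stats.shapiro(pf).pvalue > alpha) and (stats.shapiro(df).pvalue > alpha)
    t_stat, p_value = stats.ttest_rel(pf, df)      # same subjects measured in both conditions

    return {"normal": normal, "t": t_stat, "p": p_value, "significant": p_value < alpha}

def kinematic_kinetic_correlation(kinematic, kinetic, r_strong=0.7):
    """Pearson correlation between a kinematic and a kinetic parameter across subjects."""
    r, p = stats.pearsonr(kinematic, kinetic)
    return {"r": r, "p": p, "strong": abs(r) > r_strong}
```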
Results and Discussion
The means and standard deviations of both the kinematic and kinetic parameters, together with the p values, are reported in Table 1. The observed RoMs were not equal to 0°, indicating that the ankle was moving during the measurements. Therefore, the operator was not able to keep the HHD and the foot completely still, and an undesired motion of the foot occurred during the trials. This finding is in line with the results of Kim et al. [9], who demonstrated a decreased measurement validity when the dynamometer is not fixed but held in hand by the operator. Moreover, the observed RoM was slightly higher for the plantarflexion trials (p = 0.06), where a higher exerted force was registered, implying more difficulty in keeping the foot still when a high level of force occurs. The angular displacements Δα1 and Δα2 were higher in the plantarflexion trials than in the dorsiflexion ones (Table 1). This could be due to the higher force exerted in the plantarflexion trials, which reduced the operator's ability to keep the HHD in place during the measurement. On the contrary, the operator was able to keep the HHD still during the dorsiflexion trials, since the angular displacements were low. Consequently, the angular misplacements of the HHD in the sagittal and horizontal planes are relevant sources of inaccuracy mainly in the plantarflexion strength assessment. Comparing the kinetic parameters between the two directions, the plantarflexion trials showed larger differences between the actual and the measured forces and moments than the dorsiflexion ones. The lateral components of force and moment, F_t and M_t, were both higher for plantarflexion than for dorsiflexion; this could be due to a wrong angular positioning of the HHD in both planes, as observed by means of the Δα1 and Δα2 values, which were higher in plantarflexion (Table 1). The kinematic and kinetic analyses therefore suggest a higher validity of the ankle dorsiflexion trials than of the plantarflexion ones.
As regards the accuracy of the ankle strength measurements, we observed that F_nom and M_nom were higher than F_max and M_max for both directions, while the transverse components F_t and M_t were not negligible. F_nom and M_nom represent the force and moment commonly measured by means of a clinical HHD, whereas F_max and M_max are the actual values. In case of misplacement, the force and moment measured by a commercial HHD therefore differ significantly from the force and moment effectively exerted at the joint. Our findings imply that a wrong positioning of the HHD increases the lateral components of the force, reducing the force on the main axis. As regards the analysis of the RMSE, we found very low values of RMSE in dorsiflexion (<5%), while they were higher for plantarflexion (<15%), confirming both that the ankle strength assessments were more accurate when low force values occurred and that the analysis of plantarflexor strength may be more difficult for clinicians to perform. These findings were confirmed by the higher lateral components of the force exerted by the ankle in the plantarflexion movement.
As regards the repeatability of the ankle strength measurements, the CVs were computed to quantify the variability within the same subject. The highest CV values were observed for CV_α1 and CV_α2 during the dorsiflexion trials (∼50%). This result indicates a poor repeatability in terms of HHD positioning even when the operator was able to keep the HHD still, demonstrating that the strength measurements are likely influenced by the strength of the examiner, in accordance with the findings of other studies [17,18]. The average values of CV_F were less than 10% and the average values of CV_M were less than 20% for both plantarflexion and dorsiflexion, indicating a good intrasubject repeatability of the force measurement. The repeatability of the moments was lower than that of the forces. This finding is likely due to the wrong positioning of the HHD, since an increase in variability could arise from a wrong estimation of the lever arm, that is, the distance between the HHD position and the ankle center. No statistical differences were observed between plantarflexion and dorsiflexion.
Finally, we computed a synthetic index, I_Q, representing the overall quality of the measurement (Table 1). It was conceived to account for both the angular misplacements of the HHD and the undesired lateral components of the moment. Its average value was lower for plantarflexion than for dorsiflexion, in accordance with the other parameters that identified the most relevant inaccuracies in the ankle plantarflexion trials. This finding agrees with other works that reported poor repeatability and reliability of ankle strength measurements, especially for plantarflexion trials [17,18]. From a comparison of the I_Q values with those evaluated for the knee strength measurements [30], it emerges that the plantarflexion analysis is the most complex strength measurement to perform, implying low accuracy in the force and moment measurements and a low ability of the operator to keep the HHD still. On the contrary, the quality of the ankle dorsiflexion strength measurements is comparable with that of the knee flexion and extension ones.
Correlation analyses between the kinematic and kinetic parameters were performed to analyze the influence of an unwanted displacement of the HHD on the accuracy of the HHD measurements (Table 2). A strong correlation was found only between the RoM and RMSE_M, indicating that the intensity of the undesired movement of the foot affected the measured moment. The accuracy of the HHD in the moment measurements was therefore not strongly related to a wrong orientation of the load cell but depended mainly on the unwanted movement of the foot during the experimental trial.
In conclusion, ankle strength assessment by means of a commercial uniaxial HHD can be considered consistent for dorsiflexion trials, as the angular misplacements, the lateral force and moment components, and the RMSE values measured in this study were relatively low. The differences between F_nom and F_max were low and the average quality index was relatively high. Thus, the estimated inaccuracy can be considered acceptable for the clinical use of uniaxial HHDs, although it is always recommended to pay attention to the HHD positioning. Conversely, the plantarflexion trials involved higher exerted forces and implied a lower value of the quality index, with correspondingly higher RMSE values and higher intensities of the lateral components of force and moment. The inherent validity of HHD measurements of plantarflexion strength is consequently low.
Study Limitations.
The main limitations of this work are that only one operator performed the experimental trials and that we analyzed only healthy adult subjects. Since the aim of the study was not to quantify the ability of operators in performing ankle strength measurements, but rather to analyze the effects of unwanted HHD displacements on the strength measurements, we decided to use only one operator in order to avoid possible confounding effects. Moreover, we decided to analyze only healthy adult subjects since they were assumed to represent the worst-case scenario: in children and in adults with pathologies affecting the generation of muscle force, the exerted forces are lower than those generated by healthy adults and, therefore, lower measurement inaccuracies related to the displacements of the HHD should be observed. Further studies may address both the analysis of interoperator reproducibility, by comparing the analyzed parameters gathered by operators with different levels of ability, and the validation of HHD strength measurements in pediatric and patient populations.
Conclusions
This work validated the use of a commercial HHD in both dorsiflexion and plantarflexion ankle strength measurements, quantifying the effects of HHD misplacements and unwanted foot movements on measurements performed by an expert and trained clinician. The foot movements and the angular misplacements of the HHD in the sagittal and horizontal planes were identified as relevant sources of inaccuracy of the strength assessment. The dorsiflexion trials can be considered more reliable than the plantarflexion ones, which showed higher errors and lower values of the quality index. In conclusion, commercial uniaxial HHDs are not recommended for the assessment of ankle plantarflexion strength, and they should be used carefully in the estimation of ankle dorsiflexion strength. Clinical protocols should be revised in order to ensure proper limb fixation and to reduce the effects of both foot motion and HHD positioning errors on the strength measurements.
Figure 1: (a) The six-component HHD equipped with motion capture markers. (b) Schematics of the HHD, equipped with the contact part, markers (in red), and the representation of the local reference system.
Figure 2: Graphical representation of the subject lying on the bed in the measurement position, wearing the marker protocol used for the trials. The white cube underneath the right foot represents the HHD position.
Figure 3: Parameters computed for the ankle strength assessment, lateral view.
Table 1: Mean (SD) values of the parameters measured for ankle plantarflexion and dorsiflexion. The p values are reported in the last column. * indicates a significant difference (p < 0.05). | 2018-04-03T03:30:45.872Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "4948f554a2ad84d78ba6e5a90790bd6a0fa6111d",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/js/2017/5426031.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4948f554a2ad84d78ba6e5a90790bd6a0fa6111d",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
89753813 | pes2o/s2orc | v3-fos-license | Ethmia iranella Zerny, 1940 – a Spanish enigma (Lepidoptera, Ethmiidae)
Attention is called to the presence of Ethmia iranella in Spain and its occurrence in Italy is recorded for the first time. Ethmia iranella was originally described as an Iranian subspecies of E. bipunctella (Fabricius, 1775); it was subsequently recognized as a valid species (Sattler 1967: 93) that is more widespread, although apparently local. In addition to its type locality (Iran) E. iranella is known from Turkmenistan, Transcaucasia, Syria, and Turkey, as well as from several European countries (Greece, Romania, Hungary, France, Spain) (Sattler 1967: 93, Zagulajev 1981: 639, Neumann 2000: 69, Leraut 2011: 143). It is here also reported from the extreme south of Italy (Apulia, Lecce, Veglie, Torrelupomonaco, 7.vii.1961 (Hartig), and Taranto, Lido Silvana, 23.viii.1968 (Hartig)), the first record for Italy. The presence of Ethmia iranella in Spain is verified by only three specimens (one male, two females), all collected by myself some 50 years ago. One male from Valencia was collected on 10-11.vi.1960 at night around the lights of a kiosk at the campsite El Saler (39°20’50”N 0°18’57”W). One female from Granada was collected on 28.v.1957 in the morning at rest on a tree trunk in an irrigated poplar grove opposite a now long defunct campsite on the outskirts of the town near the start of the road to the Veleta (Carretera de la Sierra). One female (Sattler 1967: pl. 4, fig. 34) from the Puerto de la Mora (37°15’54”N 3°28’05”W), north-east of Granada, was attracted to the light of a Petromax paraffin lantern on 12.vii.1962 along a local track just off the Puerto. Spain is one of Europe's lepidopterologically most visited and best collected countries and it still puzzles me that I should have collected those specimens myself on three different trips, in three widely separate localities – yet nobody else before or since should have found this species anywhere on the Iberian Peninsula. E. iranella is rather conspicuous but can be confused with Ethmia bipunctella (Fabricius, 1775). Indeed, I must confess that I did not recognize the true identity of those Spanish specimens until about 1963 when I commenced work on the Ethmiinae for the Microlepidoptera Palaearctica project. In fact, initially I took them for E. bipunctella, although I had noticed that superficially they appeared a little different from typical German bipunctella. The latter species is also present in Spain (I have examined specimens from the provinces of Catalonia, Segovia and Huelva) and is recorded from all provinces of Portugal (Corley 2015: 81).
Figure 1. Ethmia iranella Zerny, ♂. Italy, Apulia, Lecce, Veglie, Torrelupomonaco, 7.vii.1961 (Hartig). NHMUK010862886. Top, dorsal view; bottom, ventral view. Red arrows mark black spot on vertex and ventral dark abdominal patches respectively. (Phot. David Lees).
I can only imagine that lepidopterists do not usually collect voucher specimens of such a well-known species as 'E. bipunctella' and thus fail to notice E. iranella. Therefore any Iberian collection containing specimens of E. bipunctella should be thoroughly searched for possible overlooked E. iranella. The latter is easily distinguished externally from bipunctella in the black spot on the vertex (absent in bipunctella) and the large black dots on the abdominal sternites (uniformly orange in bipunctella). The black spots on the vertex and the black abdominal sternites are also shared by E.
treitschkeella (Staudinger, 1879) and E. mariannae Karsholt & Kun, 2003, both closely related to E. iranella. The host-plants of all three species are still unknown but are likely to be Boraginaceae. The Spanish specimens are kept in Zoologische Staatssammlung, Munich, Germany, the Italian specimens in Natural History Museum, London, UK; additional specimens from Italy are in coll. Hartig, Museo Regionale di Scienze Naturali, Torino, Italy.
"year": 2018,
"sha1": "f8a96c2b51af7a393df89b2ecc7aa65a37ca7dbf",
"oa_license": "CCBY",
"oa_url": "https://nl.pensoft.net/article/24971/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "498ff9bbfbc4e59b914d7ec78633162e009a74e0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
271436849 | pes2o/s2orc | v3-fos-license | Iron-coated Komodo dragon teeth and the complex dental enamel of carnivorous reptiles
Komodo dragons (Varanus komodoensis) are the largest extant predatory lizards and their ziphodont (serrated, curved and blade-shaped) teeth make them valuable analogues for studying tooth structure, function and comparing with extinct ziphodont taxa, such as theropod dinosaurs. Like other ziphodont reptiles, V. komodoensis teeth possess only a thin coating of enamel that is nevertheless able to cope with the demands of their puncture–pull feeding. Using advanced chemical and structural imaging, we reveal that V. komodoensis teeth possess a unique adaptation for maintaining their cutting edges: orange, iron-enriched coatings on their tooth serrations and tips. Comparisons with other extant varanids and crocodylians revealed that iron sequestration is probably widespread in reptile enamels but it is most striking in V. komodoensis and closely related ziphodont species, suggesting a crucial role in supporting serrated teeth. Unfortunately, fossilization confounds our ability to consistently detect similar iron coatings in fossil teeth, including those of ziphodont dinosaurs. However, unlike V. komodoensis, some theropods possessed specialized enamel along their tooth serrations, resembling the wavy enamel found in herbivorous hadrosaurid dinosaurs. These discoveries illustrate unexpected and disparate specializations for maintaining ziphodont teeth in predatory reptiles.
Supplementary Figure 1. Pigmented cutting edges and tooth tips in museum specimens of Varanus komodoensis. a Functional tooth (AMNH 74606). b Functional tooth (AMNH 37913). c Replacement teeth, from below the gumline and unworn (AMNH 37912). d Functional tooth (AMNH 37909). e Polished thick section of a functional tooth; the pigmented region is still embedded in a thin layer of resin (J94036-4). f Polished thick section along the mesial serrations of a functional tooth; the clear enamel is exposed on the surface of the polished block and the pigmented regions are still embedded in resin (J94036-2). Asterisks indicate orange pigmented regions. Abbreviations: AMNH American Museum of Natural History (New York, New York, USA), de dentine, en enamel. Scale bars in a-d are 1 mm, e-f are 0.1 mm.
Supplementary Figure 2. Additional synchrotron-based X-Ray MicroFluorescence (S-µXRF) and Scanning Electron Energy-Dispersive x-ray Spectroscopy (SEM-EDS)
elemental maps for Varanus komodoensis teeth.a S-µXRF map (0.5 µm resolution) of iron (red), calcium (green), and zinc (blue) in a horizontal thick section taken through an unerupted tooth crown (MoLS X-263).Map shows iron and zinc sequestration along the outer enamel of a distal serration.b S-µXRF map (0.5 µm resolution) of iron (red), calcium (green), and zinc (blue) in a horizontal thick section taken through the same tooth crown.Map shows iron and zinc sequestration along the outer enamel of another distal serration of MoLS X-263 (same as main text Fig. 2g).c S-µXRF map (0.5 µm resolution) of iron (red), calcium (green), and zinc (blue) in the same horizontal thick section taken through MoLS X-263 as in b.Map shows iron and zinc sequestration along the outer enamel of a mesial serration of MoLS X-263 (same as main text Fig. 2i).d S-µXRF map (0.5 µm resolution) of iron (red), calcium (green), and zinc (blue) in a longitudinal thick section taken parallel to the serrations.Map shows iron and zinc sequestration along the outer enamel of mesial serrations of X-263.e S-µXRF map (0.5 µm resolution) of iron (red), calcium (green), and zinc (blue) in a longitudinal thick section through an erupted, functional tooth crown (J94036-1).Map shows iron and zinc sequestration along the outer enamel of a distal serration.g S-µXRF map of a distal serration of J94036-5 (0.5 µm resolution).h S-µXRF map through the enamel and dentine off-serration (0.5 µm resolution).Note the lack of prominent iron signal.i Scanning Electron Microscope image of mesial serrations of an erupted, functional tooth (J94036-2).j SEM-Energy Dispersive Spectroscopic image of calcium, k iron, and l oxygen along the mesial serrations of J94036-2.Abbreviations: de dentine, edj enamel-dentine junction, en enamel, zn zinc-enriched region of enamel.Asterisks refer to iron-coated regions.Supplementary Figure 3. Elemental maps derived from Laser Ablation Time-of-Flight Inductively-Coupled Mass Spectrometry (LA-TOF-ICP-MS) of Varanus komodoensis tooth serrations.Maps were first normalized to the calcium counts to account for artefacts that arose from differential ablation of enamel vs dentine (see Methods and Supplementary Fig.
Laser Ablation Inductively-Coupled Plasma Mass Spectrometry (LA-ICP-MS) data processing
Elemental maps generated from the LA-ICP-MS experiments were sensitive to the mechanical properties of each tissue. For example, for a given transect of ablation along the extant Alligator tooth sample, the laser removed more dentine (which is softer) from the tooth than enamel (which is significantly harder). Consequently, element counts were underrepresented in the enamel relative to the dentine. This bias was especially evident in raw elemental maps for calcium, where the results initially indicated higher calcium counts in the dentine than in the enamel, which was opposite to all XRF and SEM-EDS data. To account for this artifact, we applied a correction factor to the LA-ICP-MS count data for iron, calcium, and zinc.
To implement a correction to the raw element counts, we first estimated the amount of enamel lost during the ablation process and compared it with the amount of dentine (Supplementary Fig. 8d, f). We calculated the depths of ablation along the enamel and dentine by generating profile lines from z-stacked microscope images of the tooth surface using the Keyence digital microscope's 3-D imaging function. We calculated an average step height for dentine in Alligator tooth 2 ROI1 of -5.43 µm, whereas we could not reliably detect a step height through the enamel. We therefore had to rescale the elemental maps for iron, calcium, and zinc for this tooth, given that we ablated approximately 5.4 times more dentine than enamel. We did this by manually masking every pixel in the dentine in ImageJ and then dividing this region in each elemental map by the difference in step height (5.433). This resulted in re-scaled maps where the counts between enamel and dentine were comparable with elemental data obtained from the other techniques.
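As an illustrative sketch only (the published workflow used manual masking in ImageJ; the array-based formulation and names below are our own), the correction described above and the display rescaling described in the next paragraph could be scripted as:

```python
import numpy as np

STEP_HEIGHT_RATIO = 5.433   # ratio of ablated dentine depth to enamel depth (from profile lines)

def correct_counts(raw_map, dentine_mask):
    """Divide dentine pixels by the step-height ratio so enamel and dentine counts are comparable.

    raw_map: 2-D array of element counts from LA-ICP-MS
    dentine_mask: boolean 2-D array, True where the pixel belongs to dentine
    """
    corrected = raw_map.astype(float).copy()
    corrected[dentine_mask] /= STEP_HEIGHT_RATIO
    return corrected

def display_scale(corrected_map, log=False, clip_percentile=95):
    """Rescale a corrected map for display: log scale (e.g. iron) or linear, clipped at the 95th percentile."""
    m = corrected_map.astype(float)
    if log:
        return np.log10(np.clip(m, 1, None))   # background pixels clipped to 1 before the log
    upper = np.percentile(m, clip_percentile)
    return np.clip(m, 0, upper)
```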
To more accurately depict the relative differences in elemental concentrations, each map needed to be processed differently. For the iron map, the counts for iron along the surface of the enamel were approximately three orders of magnitude higher than the counts in the rest of the enamel and dentine. We therefore rescaled this map to a log scale (from background to ~3000 counts in the outer enamel). However, zinc and calcium showed much smaller differences along the outer and inner enamel, as well as the dentine. We presented the zinc map from 0 (background) to the 95th percentile to eliminate outliers and produce a more realistic distribution of zinc through the tooth. Calcium also showed much smaller count differences between enamel and dentine and is therefore also presented on a linear scale from 0 to the 95th percentile to eliminate outliers. The corrected maps here and in the main text are therefore relative representations of the concentrations of iron, calcium, and zinc, and do not include count scale bars for this reason. We also could not detect any difference in step height between ablated enamel and ablated dentine in the fossil tyrannosaurid teeth and therefore present the LA-ICP-MS elemental maps for these teeth without any correction (Supplementary Figs. 11, 13). Supplementary Figure 8.
Step-height correction of elemental maps for Alligator mississippiensis tooth.a Raw iron map from LA-ICP-MS.b raw calcium map.Note the higher counts of calcium in the dentine (lower region) compared with the enamel.This is in direct opposition to all other calcium maps generated from XRF and EDS analyses.c Raw zinc map.Note higher zinc counts in the dentine compared with the enamel.d Profile line drawn through the ablated region of enamel in the tooth sample using the profile function through a z-stacked image of the sample using the Keyence VHX digital microscope.No consistent step height could be detected, suggesting negligible ablation of the enamel surface.e Profile line drawn through the dentine.We measured a step height of 5.43µm between ablated and unablated regions of the dentine.This was used as a correction factor for the derived elemental maps.h corrected map for iron, i calcium, and j zinc.Note reversal of counts for zinc and calcium between enamel and dentine in the corrected maps.Abbreviations: de dentine, en enamel, re resin.Distal view of "Tooth 2".l Lingual view of "Tooth 2" showing lack of any obvious pigmentation along the cutting edges under plain lighting.Orange colouration is most obvious in polished thick sections.m, n Raw LA-ICP-MS maps for iron along the enamel and dentine of "Tooth 2" showing the presence of an iron-enriched outer enamel layer in the same positions as the orange-coloured region in j. o Region of interest along the outer layers of enamel in j where more detailed nanomechanical testing was undertaken to directly compare the mechanical properties of pigmented and non-pigmented enamel.
Step size (microns) 8 and text therein for explanation).The calcium map is therefore not shown.a White light image of ablated region of distal serrations of J94036-1 (dashed lines).b Map of magnesium, showing higher counts in the dentine.c Iron map showing its restriction to the outer layer of enamel.d Map of iron and magnesium.e Zinc map, showing its restriction to the outer enamel layer.f Map of zinc and magnesium.Asterisks indicate position of pigmented enamel.Abbreviations: de dentine, en enamel.Supplementary Figure 4. Serration and tooth tip colouration in museum specimens of Varanus.All images taken in lingual or labial views.Asterisks indicate orange pigmentation.All scale bars except the one in d are 1mm.Abbreviations: AMNH American Museum of Natural History, SAMA South Australian Museum.Supplementary Figure 6.Comparisons of tooth crown colouration, iron and zinc sequestration along cutting edges in extant crocodylian teeth.a White light (WL) and b Laser Stimulated Fluorescence images of two teeth from a Varanus komodoensis (Zoological Society of London) showing differential fluorescence of the serrations (asterisks) for comparisons with crocodylian samples.c Anterior tooth of Tomistoma schlegelii in distal view.d LSF image of distal carina, showing different fluorescence patterns between carina (asterisks) and the rest of the tooth crown, similar to V. komodoensis.e Posterior tooth of T. schlegelii under white light (WL) and f Laser Stimulated Fluorescence (LSF) showing differential fluorescence of carina (asterisks).g Mesial view of a posterior tooth crown of Osteolaemus tetraspis.h Closeup of tooth tip showing pigmented cutting edges.i Longitudinal section taken through mesial and distal cutting edges of tooth in g. j Synchrotron X-Ray Microfluorescence (S-µXRF) map of iron (red), calcium (green), and zinc (blue) taken from tip of tooth section in h.Iron and zinc are restricted to the outermost enamel layers along the tooth tip and cutting edges.k Mesial view of a tooth crown of Crocodylus porosus.l Closeup of tooth tip in j, showing a lack of obvious pigmentation along the carina under white light.m Longitudinal section taken through the mesial and distal carinae for S-µXRF analysis.n S-µXRF map of iron (red), calcium (green), and zinc (blue), showing iron and zinc sequestration along the tip and cutting edge enamel in the same tooth as in l.Abbreviations: ca carina, de dentine, en enamel Supplementary Figure 7. Comparisons of elemental compositions of extant and fossil crocodylian teeth.a Posterior tooth of Osteolaemus tetraspis before (left) and after sectioning along mesiodistal axis (right).b S-µXRF map of iron (red) and calcium (green) along the tooth tip.Iron is located only in the outermost enamel layers.c Posterior tooth of a fossil crocodylian from Dinosaur Provincial Park (UALVP 60546) with similar morphology to O. tetraspis before (left) and after sectioning along mesiodistal axis (right).d S-µXRF map of iron (red) and calcium (green) showing the abundance of iron within the dentine and enamel.e Anterior tooth of Crocodylus porosus before (left) and after sectioning along the mesiodistal axis (right).f S-µXRF map of iron (red) and calcium (green) along the tooth tip and g along the carina.h Anterior tooth of a fossil crocodylian from Dinosaur Provincial Park (UALVP 60550) with similar morphology to C. 
porosus before (left) and after sectioning along mesiodistal axis (right).i S-µXRF map of iron (red) and calcium (green) along the tooth tip showing the abundance of iron within the dentine and enamel.j S-µXRF map of iron (red) and calcium (green) along a carina showing the abundance of iron within the dentine and enamel.Abbreviations: de dentine, en enamel.Asterisks indicate positions of iron-enriched enamel.
Supplementary Figure 9.
Comparisons of Iron X-ray AbsorptionNear Edge Structure (Fe-XANES) spectra for the iron layers in extant beaver, crocodile, and Komodo dragon.a Comparisons of XANES spectra for magnetite, haematite, and ferrihydrite standards with the spectra derived from the iron coatings in a V. komodoensis tooth.The V. komodoensis spectra most closely resembled that of ferrihydrite.b Closeup of iron coatings in a polished thick section of a V. komodoensis tooth (J94036-4).c Closeup of iron layer within the outer enamel of a A. mississippiensis tooth ("Tooth 2").d Closeup of iron layer within the outer enamel of a Castor canadensis tooth (UALVP 56017-3).e Comparisons of Fe-XANES spectra of the iron layers in V. komodoensis, C. porosus, and C. canadensis.Though consistent with ferrihydrite, the iron layers in V. komodoensis differ from those of the iron layers in the other two species.Abbreviations: de dentine, en enamel.Asterisks indicate positions of pigmented enamel layers.Supplementary Figure 10.Laser-Stimulated Fluorescence (LSF) imaging of cutting edges in selection of fossil theropod teeth from the NHMUK collections.Note that none of the samples show differential colouration along the cutting edges (worn edges appear darker due to the exposure of underlying dentine).a Distal serrations of a tooth of the tyrannosaurid Albertosaurus showing no differences in fluorescence pattern between serrations and the rest the crown.b Distal serrations of another Albertosaurus tooth showing similar fluorescence patterns between serrations and rest of crown.Blue regions are areas covered in adhesives.c Distal serrations of a tooth of Tyrannosaurus rex showing no differential fluorescence patterns along the crown.Blue colour is the result of the fluorescence of adhesives.d Distal serrations of a partial crown of the megalosaurid Megalosaurus bucklandii.Serrations show no differential fluorescence compared with the remainder of the crown.e Distal serrations along a tooth of the theropod "Megalosaurus" insignis showing no difference between the serrations and remainder of the crown.f Distal serrations of a tooth of "Megalosaurus" dunkeri showing no difference between serrations and remainder of crown.g Distal carina of a spinosaurid tooth showing no difference in fluorescence between cutting edge and remainder of crown.Dark patches along the carina result from breakage of the enamel and exposure of the underlying dentine.Supplementary Figure 11.LA-ICP-MS elemental maps for two tyrannosaurid teeth.a Longitudinal section through distal serrations of UALVP 60555 with elemental map for barium, b Calcium, c Iron, d Zinc, e Magnesium, f Yttrium, g Strontium.h White light image of polished thick section through a distal serration of UALVP 60554.i elemental map of calcium, j Magnesium, k Yttrium, l Barium, m Iron, n Strontium, o Zinc.None of these elemental distributions match those seen from elemental analyses of extant Varanus komodoensis or crocodylian teeth.Supplementary Figure 12.Additional synchrotron-based X-Ray MicroFluorescence (S-µXRF) elemental maps for two tyrannosaurid teeth.a Distal view of a tyrannosaurid premaxillary tooth (UALVP 60553) used for S-µXRF analyses.b Overview image of horizontal section taken through UALVP 60553, showing position of S-µXRF elemental maps in c-f.c S-µXRF map of horizontal section through a premaxillary tooth serration, showing distribution of iron, d Calcium, e Zinc, and f Composite of all three elements.Note the lack of iron and zinc sequestration along the serration enamel 
towards the bottom left of the image.Instead, iron counts are highest in the dentine and along cracks in the tooth, suggesting iron concentrations are primarily driven by fossilization artifacts.g Closeup of a longitudinal section through a mesial serration of another tyrannosaurid tooth (UALVP 53472).h Lower magnification image showing position of mapped serration.i S-µXRF elemental map for iron, j Calcium, k Zinc, l Tungsten, and m Barium.Abbreviations: UALVP University of Alberta Laboratory of Vertebrate Paleontology.Supplementary Figure 13.LA-ICP-MS elemental maps for a dromaeosaurid dinosaur tooth (UALVP 61165).a Longitudinal section through distal serrations of UALVP 61165 with elemental map for barium, b Calcium, c Iron, d Magnesium, e Strontium, f Yttrium, g Zinc.h Transverse section through a distal serration in UALVP 61165 showing the elemental map for barium, i Calcium, j Iron, k Magnesium, l Strontium, m Yttrium, n Zinc.None of these distributions match those of extant Varanus komodoensis or crocodylian teeth.Abbreviations: UALVP University of Alberta Laboratory of Vertebrate Paleontology.Supplementary Figure 14.Representative XRF spectra for extant reptile and tyrannosaurid teeth examined in this study.a XRF spectra taken from the iron-enriched region, enamel, and dentine of a tooth of Varanus komodoensis (Beamline ID-21, European Synchrotron Radiation Facility, Grenoble, France).b XRF spectra from iron-enriched region, enamel, and dentine of a posterior tooth of Osteolaemus tetraspsis (Beamline BM-28, European Synchrotron Radiation Facility, Grenoble, France).c XRF spectra from ironenriched region, enamel, and dentine of an anterior tooth of Crocodylus porosus (Beamline BM-28, European Synchrotron Radiation Facility, Grenoble, France).d XRF spectra from analogous positions of iron-enriched regions in extant reptiles, enamel, and dentine taken along the serration of a tyrannosaurid tooth (UALVP 53472) (Beamline B-16, Diamond Light Source, Oxfordshire, UK).Note the differences in intensities of iron, calcium, and zinc signals between the three extant reptile teeth and that of the tyrannosaurid.Red boxes in inset elemental map images correspond to regions where "iron hotspot" spectra were taken in each tooth.Grey boxes indicate positions where "dentine" spectra were taken.Black boxes indicate positions where "enamel" spectra were taken.Supplementary Figure 15.Scanning Electron Microscope (SEM) imaging of enamel microstructure across tyrannosaurid tooth crowns.a Lateral view of the partial tyrannosaurid tooth UALVP 60556.b Low-magnification SEM image of horizontal section taken through a distal serration of UALVP 60556.c High-magnification SEM image of the wavy enamel along the serration of UALVP 60556.d High-magnification SEM image of the columnar enamel found elsewhere on the same tooth crown.e Labiolingual view of partial tyrannosaurid tooth (UALVP 60554).f Low-magnification SEM image of a horizontal section through one of the distal serrations of UALVP 60554.g High-magnification SEM image of the wavy enamel found along the distal serration of UALVP 60554.h High-magnification SEM image of columnar enamel along the rest of the same tooth crown.i Distal view of a tyrannosaurid premaxillary tooth (UALVP 60553).j Low-magnification SEM image of a horizontal section taken through UALVP 60553, showing one of the distal serrations.k High-Closeup of distal serrations.h Wholeview of horizontal section taken through Dromaeosaurus sp.tooth for SEM.i SEM image of serration enamel showing 
mostly parallel crystallite enamel, with slight divergences of crystallites along mid-axis of serration.j SEM image of off-serration, parallel crystallite enamel.k Complete Dromaeosauridae indet.tooth crown sectioned for SEM.l Closeup of distal serrations of dromaeosaurid tooth.m Wholeview of longitudinal section used for SEM.n SEM image of serration enamel, showing microunit and possible wavy enamel (crystallite bundles are not parallel to neighbouring bundles).o SEM image of offserration enamel, showing simpler, parallel crystallites.Abbreviations: en enamel, de dentine, re resin.Supplementary Figure 17.Histological comparisons of wavy enamel along tyrannosaurid serrations and hadrosaurid teeth.a Lingual view of tyrannosaurid tooth UALVP 60556.b Polished thick section through worn mesial serrations of a tyrannosaurid tooth (UALVP 60555).Black arrowheads indicate directions of wear, based on surface striations prior to sectioning (see Suppl.Fig. 15f, g).c Longitudinal thin section of serrations in a tyrannosaurid tooth (UALVP 60398) under cross-polarized light.d Higher-magnification image of a serration in c, showing wavy enamel effect under cross-polarized light.Arrowheads indicate presumed direction of wear.e Isolated hadrosaurid tooth (ROM 58630), showing position of horizontal section in g-i.f Longitudinal section through three functional maxillary teeth (image flipped for comparisons) of a hadrosaurid dental battery (ROM 696) showing complexity of the grinding surface in a hadrosaurid dinosaur.g Horizontal thin section through an isolated hadrosaurid tooth (UALVP 55127), showing general histological features and position of higher-magnification images in subsequent panels.h Higher magnification image of g under cross-polarized light, showing similar wavy enamel optical effect as that seen in the tyrannosaurid serrations.i Higher magnification image of wavy enamel under cross-polarized light.Abbreviations: ce cementum, de dentine, en enamel, idf interdental fold.Supplementary Figure 18.Schematic representation of the machine learning based pipeline used to cluster orientation data by similarity to facilitate parameter extraction and 2D fitting of the 002-diffraction peak(s).a 1D orientation data, vertically offset for clarity, demonstrates greater variability compared with conventional diffraction data.Data is grouped through the application of b principal components analysis and c k-means clustering.Fitting parameters, including peak position and width, are extracted from the orientation data d and are subsequently used for e 2D pseudo-Voigt fitting of diffraction images truncated about the 002 peak(s) to obtain preferred orientation and c axis parameters for constituent crystallite populations.Supplementary Figure 19.Synchrotron-based X-Ray Micro-diffraction (S-µXRD) maps of two serrations in longitudinal section (UALVP 53472).The crystallographic c axis lattice and texture parameters of the three constituent crystallite populations within the tooth enamel are shown arranged by columns for population one (a and d), two (b and e) and three (c and f).Lines within each pixel indicate the average preferred apatite crystal orientation and hotter colours correspond to more highly textured regions (lower full-width half maxima).Orientation direction and FWHM in d, e, and f were used to calculate average values illustrated in main text Figure 4p.Supplementary Figure 20.Synchrotron-based X-Ray Micro-diffraction (S-µXRD) map of a horizontal section through a serration and the surrounding 
enamel (UALVP 60554) with statistical comparisons of Full Width Half Maxima (FWHM) of enamel on-and offserration.Heat map (left) is derived from the same region as in main text Fig. 4q.Two-tailed t-tests were conducted to compare enamel and dentine regions on either side of the serration.Laterally symmetrical regions (e.g., a and b) were grouped together as single samples and compared with other regions.Mean Full Width Half Maxima (FWHM) values for grouped regions and associated standard deviations are summarised in the table (right).Statistical comparisons between each region were all statistically significant (p<0.001),indicating that each region contained apatite crystallite populations that significantly differed in terms of their Full Width Half Maxima (FWHM), which is a measure of the degree of variation around the principal orientations (small lines in each pixel) derived from S-µXRD analyses.See Extended Data 5 for full statistical analysis outputs and raw data for each grouping.Abbreviations: FWHM Full Width Half Maximum.Heat map colours indicate magnitude of FWHM, with hotter colours indicating lower FWHM values and therefore more highly ordered crystallites around a preferred (principal) orientation.Note that the lowest FWHM values are concentrated towards the serration tip (top right corner of map), corresponding to the wavy enamel identified under SEM.Supplementary Figure 21.Nanoindentation analysis of Varanus komodoensis J94036-2.a Enamel and dentine indentation hardness plotted against the indentation elastic modulus.b Coaxial light image of the polished longitudinal thick section of J94036-2 used for the nanoindentation tests.c Region of interest imaged in the nanoindenter prior to the experiment.d Heat map of Hardness (GPa) in the region indented in c. e Heat map of Reduced Elastic Modulus (GPa) in the region indented in c. f Region of interest imaged in the nanoindenter prior to the experiment.g Heat map of Hardness (GPa) in the region indented in f. h Heat map of Reduced Elastic Modulus (GPa) in the region indented in f.Abbreviations: de dentine, en enamel, re resin.See Methods for experimental parameters for nanoindentation tests.
Supplementary Figure 23.
Nanoindentation analysis of a tyrannosaurid tooth (UALVP 60555).a Enamel and dentine hardness plotted against the reduced elastic modulus.Note the lack of separation between dentine and enamel measurements and the higher magnitude of both metrics compared with data from extant reptile teeth.b Wholeview image of mesial serrations in polished thick section under coaxial light, showing region of interest in nanoindentation experiments.c Wholeview image of distal serrations under coaxial light, showing region of interest for nanoindentation experiments.d Region of interest along a mesial serration imaged in the nanoindenter prior to the experiment.e Heat map of hardness measured from first region of interest along the middle of the mesial serration.Note the similarity between the enamel and underlying dentine.f Heat map of reduced elastic modulus in same region.g Second region of interest along mesial serration imaged in the nanoindenter prior to the experiment.h Heat map of hardness measured from first region of interest along the middle of the mesial serration.Note the similarity between the enamel and underlying dentine.i Heat map of reduced elastic modulus in same region.j Third region of interest along distal serration imaged in the nanoindenter prior to the experiment.k Heat map of hardness of enamel and dentine measured in third region of interest.l Heat map of reduced elastic modulus in same region.m Fourth region of interest along a distal serration imaged in the nanoindenter prior to the experiment.n Heat map of hardness of enamel and dentine measured in the fourth region of interest.o Heat map of the reduced elastic modulus in the same region.Abbreviations: de dentine, en enamel, re resin.Supplementary Figure 24.Comparisons of indentation hardness and reduced elastic moduli of extant Varanus komodoensis.Alligator mississippiensis, and tyrannosaurid teeth.a Combined plot of nanomechanical properties of tyrannosaurid (red), Varanus komodoensis (black), and Alligator mississippiensis enamel and dentine.Tyrannosaurid tooth indents yielded higher hardness and elastic moduli compared with equivalent regions in the two extant reptiles.b Two hardness heat maps (Supplementary Fig.18k, n) superimposed on polished thick section of the tyrannosaurid tooth Note the more subtle differences in hardness between the dentine and enamel in the fossil tooth, due to chemical and structural alterations to the two tissues.c Comparisons with a relative hardness map of Varanus komodoensis (Supplementary Fig.17g).Note the stark contrast between the enamel and dentine.These comparisons demonstrate the impact of fossilization on the mechanical properties of tyrannosaurid enamel and dentine.tyrannosaurid tooth (UALVP 60553) under reflected light, highlighting many (post-mortem) cracks (arrows) through the columnar enamel of the crown, and the lack of these cracks within the wavy enamel of the serrations.Abbreviations: de dentine, en enamel.Unless otherwise indicated, arrows indicate worn surfaces of teeth.Supplementary Figure 26.Scanning Electron Microscope (SEM) imaging of acid-etched serrations in Varanus komodoensis, showing acid-resistance of the outer iron-rich coating.a Mesial serrations of a V. 
komodoensis tooth under SEM following a 30-second immersion in 1M HCl. The resistance of the outer iron-rich coating (asterisks) led to the formation of an overhang (arrow), created by the dissolution of the underlying enamel (J94036-4). b Similar feature after 30 seconds of etching in 1M HCl. The underlying enamel has nearly completely dissolved away, leaving an unsupported outer shell of the iron-rich material (asterisks), which collapsed under its own weight (arrow). c Serration in J94036-2 showing nearly complete dissolution of enamel after 30 seconds of 1M HCl etching. d Higher magnification image showing dissolution of enamel and preservation of the outer iron-rich coating as an unsupported shell over the dentine (arrow). Abbreviations: de dentine, en enamel, re resin.
Supplementary Table 1.
Survey of tooth pigmentation in reptiles.
Table 4. Raw data and t-tests comparing enamel hardness and elastic modulus along unpigmented and pigmented regions via nanoindentation in an Alligator tooth.
See Supplementary Figure 22 for locations of indents on tooth specimen. | 2024-07-26T15:14:57.825Z | 2024-07-24T00:00:00.000 | {
"year": 2024,
"sha1": "2e7e5ea6affaa5e4cac162710b0a445fd71993ab",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/s41559-024-02477-7",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "bc11972bf998c6d4b486ef3d3e93081fc11d749c",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253652403 | pes2o/s2orc | v3-fos-license | The battle against perioperative glycaemic control: Hard to win?
In spite of so many advancements in the medical sciences, precise control of hyperglycaemia has always been a tough challenge for anaesthesiologists and intensivists. The battle to achieve euglycaemia in the intensive care unit (ICU) and operation theatres with newer oral and injectable drugs has witnessed varied results. Perioperative hyperglycaemia, especially a plasma glucose level above 180 mg/dl, is an independent marker of poor clinical and surgical outcomes in both the known diabetic and non-diabetic population. [1] The unwanted outcomes include delayed wound healing, an enhanced rate of wound infection, postoperative pulmonary complications, prolonged hospital stay and higher postoperative mortality. Hyperglycaemia (plasma glucose levels above 140 mg/dl) is common, with an occurrence of 20-40% in patients undergoing general surgery and the highest incidence of 80-90% in the cardiac surgical population. [2]
THE PERIOPERATIVE GLYCAEMIC CHALLENGE
The molecular mechanisms and pathophysiology of perioperative hyperglycaemia are now fairly well understood. Preoperative anxiety, the stress of surgery, the sympathetic response to intraoperative pain, hypoxia, hypercarbia, blood loss and low arterial pressure lead to a decrease in insulin secretion and in the peripheral utilisation of glucose. This produces an increase in gluconeogenesis and glycogenolysis, thus leading to perioperative hyperglycaemia, with a resultant increased production of pro-inflammatory cytokines and osmotic diuresis that can produce fluid and electrolyte imbalance. All this can lead to the early development of ketoacidosis, immune deregulation, insulin resistance and an inflammatory state which can play havoc with patient recovery after surgery. [3] A study has shown that marked insulin resistance can develop in surgical patients during upper abdominal surgery even when the endocrine response is minimal. [4] Lengthy preoperative fasting and inability to take oral feeds postoperatively add to the difficulties in the management of perioperative blood glucose.
Perioperative stress reduction and allaying of preoperative anxiety have been an all-time favourite of researchers. [5][6][7] The comparison of the effect of different anaesthesia techniques on the perioperative glycaemic level has also been a frequently researched topic. Studies have shown that inhalational agents such as sevoflurane and isoflurane increase the blood glucose levels by affecting the neuroendocrine surgical response to stress, and there is not much difference between the two. [8] Also, propofol produces a smaller rise in blood glucose levels than inhalational agents like sevoflurane. [9] Perioperative glycaemic derangements thus influence postoperative complications and mortality, and it is necessary to optimise them. However, are we always able to achieve 100% success in this endeavour?
PREOPERATIVE GLUCOSE CONTROL
Our own national guidelines on preoperative investigations mention that preoperative blood glucose estimation is not recommended in American Society of Anesthesiologists (ASA) grade I and II patients without a known history of diabetes undergoing any type of surgery. [10] However, several authors say that screening for diabetes is recommended in every patient being planned for surgery. [11] The preoperative assessment of diabetes control is done using glycosylated haemoglobin (HbA1c), and higher HbA1c values are closely related to morbidity, cardiac injury, postsurgical infection and mortality. The optimum level of HbA1c and the cut-off point for the postponement/ cancellation of elective surgery are still not clear. Nevertheless, preoperative HbA1c levels in the range of 5%-9% are considered optimal, and elective surgery has to be delayed if HbA1c level is ≥9% and random plasma glucose level is >216 mg/dl. [12] When to order an endocrine consult is another question. As suggested in Chinese guidelines, endocrine consultation is recommended for patients with preoperative acute/ chronic complications of diabetes mellitus and for high-risk patients.
PERIOPERATIVE GLYCAEMIC TARGETS
In most elective and emergency perioperative patients, maintaining the blood glucose in the range of 140-180 mg/dl with the aim of preventing both hypoglycaemia and severe hyperglycaemia is a reasonable goal. The recommendation by the American Diabetes Association (ADA) is to keep perioperative glucose in the range of 80-180 mg/dl and 140-180 mg/dl for most critically ill and non-critically ill patients, respectively. [13] The target intraoperative blood glucose level is generally 108-180 mg/dl for long- and medium-length surgeries, with a slightly higher target range for cardiac and neurosurgery. For minimally invasive surgery, which is most commonly done nowadays, the target blood glucose levels are 90-129 mg/dl. Postoperative blood glucose levels have been found to be higher on the first postoperative day than at any other perioperative time-point, after which they usually decrease. [14] The postoperative blood glucose target is up to 216 mg/dl for all kinds of surgery. However, for those in the postoperative ICU or on mechanical ventilation, the target blood glucose level is 126-144 mg/dl, with a higher range up to 216 mg/dl in those with cardiovascular/cerebrovascular disease. [12]
CONTEMPORARY RESEARCH
In a randomised controlled study being published in this issue of the Indian Journal of Anaesthesia (IJA), the authors have compared the effect of sevoflurane and desflurane on hourly intraoperative blood glucose levels in non-diabetic patients undergoing intracranial surgery. The study concludes that sevoflurane causes a gradual increase in intraoperative glucose, whereas desflurane produces an initial rise followed by a decline in glucose level. [15] These changes, though statistically significant, remained clinically insignificant. Propofol, remifentanil and regional anaesthesia techniques including spinal anaesthesia and thoracic epidural have been reported to improve intraoperative glucose homeostasis in both diabetics and non-diabetics. [16,17] Glucovigilance is required in non-diabetic persons undergoing anaesthesia as well. In a randomised double-blind controlled trial involving 150 non-diabetic patients being published in this issue of the IJA, ASA physical status I or II subjects were administered a single dose of IV dexamethasone 0.15 mg/kg following intubation. It was found that even a single dose of dexamethasone in non-diabetic adults causes statistically significant, higher, earlier and prolonged postoperative hyperglycaemia up to 72 hours. [18] Nevertheless, in studies related to perioperative blood glucose, one has to keep in mind that several factors such as temperature, altitude, oxygen concentration in the blood, plasma uric acid and triglyceride levels, patient haematocrit and medications such as acetaminophen and vasopressors can affect the capillary blood glucose (CBG) readings. Getting a capillary blood sample can be difficult in the presence of perioperative hypotension and hypothermia. Hence, all these factors need to be standardised in the study subjects and considered while designing the studies. [19,20]
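For readers who audit perioperative glucose records, the target ranges quoted above can be encoded in a few lines. The sketch below is purely illustrative and is not a clinical decision tool; the numeric ranges are taken from the text, while the category labels and function name are assumptions made for this example.

```python
# Illustrative only: perioperative glucose targets (mg/dl) as quoted in the text.
# Category names and the helper function are assumptions, not a published tool.
TARGETS_MG_DL = {
    "critically_ill_ada": (80, 180),        # ADA range quoted for most critically ill patients
    "non_critically_ill_ada": (140, 180),   # ADA range quoted for most non-critically ill patients
    "intraop_long_medium": (108, 180),
    "minimally_invasive": (90, 129),
    "postop_icu_ventilated": (126, 144),
}

def within_target(glucose_mg_dl: float, category: str) -> bool:
    """Return True if a capillary/plasma glucose reading falls inside the quoted range."""
    low, high = TARGETS_MG_DL[category]
    return low <= glucose_mg_dl <= high

# Example: a reading of 150 mg/dl in a non-critically ill patient is on target.
print(within_target(150, "non_critically_ill_ada"))  # True
```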
ENSURING PERIOPERATIVE EUGLYCAEMIA: THE ANAESTHESIOLOGIST'S DOMAIN
A modification of the insulin and oral hypoglycaemic agent regimens for the perioperative care of diabetic patients undergoing elective surgery is commonly done. [21] A basal dose of insulin is recommended at all times in persons with Type 1 diabetes, unless the patient is hypoglycaemic. The different types of insulin preparations, strengths, delivery devices, their route, infusion rate and regimens add to the confusion over which one to choose. The frequency of measurement of the CBG is another matter of debate. Usually, CBG measurements are taken hourly at the beginning of the infusion, every 1-2 hours intraoperatively, every 2-4 hours postoperatively and more frequently in case of unstable or fluctuating glycaemia. [3] Nevertheless, each anaesthesiology unit should have its own insulin stewardship protocol, similar to those used for antibiotic stewardship. [22] The guidelines regarding older hypoglycaemic drugs and their anaesthetic concerns are already well-established. Almost all oral hypoglycaemic agents except metformin are omitted on the day of surgery. However, all of us need to get ourselves familiarised with the newer oral hypoglycaemics. The newer oral hypoglycaemic drugs can be categorised under sodium-glucose cotransporter-2 inhibitors (gliflozins), glucagon-like peptide 1 receptor agonists (GLP-1RAs) and glimins. There are multiple drugs under the umbrella of gliflozins. The mechanism of action is sodium-glucose cotransporter-2 inhibition. These are a new class of oral antihyperglycaemic agents that promote glycosuria by blocking renal glucose reabsorption. The most used agents in the world are canagliflozin, dapagliflozin, and empagliflozin. Their distinct mode of action is unrelated to beta-cell function. The key feature of these drugs is that they have additional extra-glycaemic cardiovascular benefits including weight loss, reduction of blood pressure and a reduction in cardiovascular mortality. The gliflozins must be discontinued two to three days prior to surgery. They cause problems which are likely to influence the anaesthetic management such as coincident volume contraction, a higher incidence of euglycaemic ketosis, hyperkalaemia and postoperative fluid imbalance. Glycaemic control should be achieved with perioperative insulins after discontinuation. [23] In a letter to the editor being published in this issue of the IJA, the authors have reported the occurrence of euglycaemic diabetic ketoacidosis (EuDKA) in a 42-year-old female posted for bariatric surgery and on empagliflozin. The drug was omitted on the day of surgery, but EuDKA occurred towards the end of surgery and was corrected within 72 hours. [24] Another class of drugs is the GLP-1RAs. These drugs come as both oral and parenteral preparations. Injectable dulaglutide, lixisenatide and liraglutide as well as oral semaglutide are available in India. Nausea, vomiting and diarrhoea are the most often reported gastrointestinal adverse effects with GLP-1RAs. Reports indicate that these side effects are more common in the early weeks of initiation. The changes in milieu like alkalosis and electrolyte disturbances should be kept in mind when anaesthetising such patients. There are reports of better cardiovascular outcomes in patients receiving these drugs. The key factor is that the drug administration may be only once a week, and the withdrawal of the drug seems impractical. Hence, it is ideal to continue and watch for any side effects and act accordingly. [25]
Imeglimin is a novel oral agent under investigation for the management of type 2 diabetes. Stimulation of muscle glucose uptake and reversal of pancreatic beta cell dysfunction are the possible described mechanisms of glucose control with imeglimin. [26] Imeglimin may also have the potential to address essential diabetes-related complications such as cardiac dysfunction and nephropathy. Till now, no deleterious effects have been detected in terms of cardiovascular safety, and current clinical data also demonstrate a lack of QT prolongation. There are no real data regarding its use and perioperative outcomes so far. [27] This is a topic for future research.
THE WAY FORWARD
Nonetheless, the area of perianaesthesia glucose control opens up newer vistas of research. Universal and up-to-date Indian guidelines for pre-, peri- and postoperative glycaemic management are the need of the hour. It is known that ethnic differences in insulin resistance and glucose metabolism exist, and hence, prospective studies related to perioperative glucose in the Indian population need to be encouraged. Currently, though we have an ever-increasing choice of drugs and strategies to tackle hyperglycaemia, the question that lingers foremost in our minds is 'Have we been successful in mastering perioperative glucose control?'
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2022-11-19T16:47:27.401Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "05a8422cb59d9879b3ff6da9ffdb6c9e933d9408",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ija.ija_923_22",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3edde3458e5aa02a5353bba251a4c768b6fb80ce",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
252464470 | pes2o/s2orc | v3-fos-license | The role of inflammation, oxidation and Cystatin-C in the pathophysiology of polycystic ovary syndrome
Objective: The relationship between Cystatin-C levels and inflammatory, oxidant, and antioxidant markers in polycystic ovary syndrome (PCOS) was investigated. Materials and Methods: A total of 96 participants were included in the study as PCOS (n=58) and control (n=38) groups. Tumor necrosis factor-alpha (TNF-α), interleukin-1 beta (IL-1B), interleukin 6 (IL-6), malondialdehyde (MDA), superoxide dismutase (SOD), and Cystatin-C were evaluated by the ELISA method. Relationships with the metabolic and endocrine parameters seen in PCOS were examined. Univariate and multivariate logistic regression analyses were performed to identify risk factors that may affect the PCOS group. Bivariate correlations were investigated by Spearman's correlation analysis. Results: While Cystatin-C, TNF-α, IL-1B, IL-6, and MDA were found to be higher in patients with PCOS compared with the control group, SOD was found to be lower than in the control group (p<0.05). In the correlation analysis, increased Cystatin-C levels were found to be associated with high IL-6 (r=0.214, p=0.037) and low SOD levels (r=-0.280, p=0.006). Conclusion: In our study, it was found that the increase in Cystatin-C levels was associated with an increase in IL-6 and a decrease in SOD. These results may bring up different treatment options to reduce cardiovascular risk in the treatment of PCOS.
Introduction
Polycystic ovary syndrome (PCOS) is a disease with clinical or laboratory findings of hyperandrogenism, polycystic ovary appearance, and menstrual irregularity. It is often observed in women of reproductive age (1). The international prevalence rate ranges from 5 to 21% (2). Although the etiology is not clearly known, disruption of oxidant mechanisms and increased inflammatory mediators are thought to be the cause (1,3,4). Studies have shown that, with increasing adipose tissue, inflammatory mediators such as tumor necrosis factor alpha (TNF-α), interleukin-1 (IL-1) and interleukin-6 (IL-6) and malondialdehyde (MDA) levels increase, while superoxide dismutase (SOD) decreases (5,6). As a result, deterioration of the oxidant-antioxidant balance and an increase in inflammatory markers are observed due to increased adipose tissue and hyperandrogenemia. This situation promotes conditions that increase cardiovascular risk, such as insulin resistance, obesity, dyslipidemia, and type 2 diabetes mellitus (4). Cystatin-C is an extracellular cysteine protease inhibitor. It is a low molecular weight cationic protein (7). It is a strong predictor not only of renal failure but also of all-cause mortality, such as cardiovascular disease and diabetes mellitus (8). It has also been significantly associated with asymptomatic coronary artery disease in patients with metabolic syndrome with normal renal function (9). Because of these data, Cystatin-C was examined in studies due to the increased cardiovascular risk in polycystic ovarian disease, and this marker was found to be statistically significantly higher in the PCOS group than in the healthy group (10). In previous studies, either inflammation and oxidative-antioxidative markers or markers such as Cystatin-C were studied. In one such study, it was stated that high Cystatin-C levels in patients with PCOS were important in identifying patients at cardiovascular risk (11). However, it is unclear whether the increase in Cystatin-C is due to increased inflammation or to the deterioration of oxidant-antioxidant mechanisms. This study investigated the relationship between Cystatin-C elevation and inflammatory and oxidant-antioxidant mediators. If it is associated with these mechanisms, targeted therapy may come to the fore in terms of cardiovascular protection.
Study Design and Participants
Patients over the age of 18 who applied to Yozgat Bozok University Medical Faculty Hospital between 01.01.2022 and 01.04.2022 were included in the study. The Yozgat Bozok University Local Ethical Committee approved the present study (2017-KAEK-189_2021.12.29_02) and informed consent was obtained from all participants. The diagnosis of PCOS was made according to the Rotterdam criteria. These criteria were clinical and/or biochemical hyperandrogenemia, presence of oligomenorrhea (interval between two menstrual periods of more than 35 days) or amenorrhea (no vaginal bleeding for at least six months), and ultrasonographic polycystic ovary appearance (≥12 follicles measuring 2-9 mm in diameter, or ovarian volume >10 mL in at least one ovary) (12). The presence of acne and/or hirsutism and/or alopecia was evaluated as a clinical sign of hyperandrogenemia. The Ferriman-Gallwey score was used for hirsutism: nine different parts of the body (upper lip, chin, chest, upper back, lower back, upper abdomen, lower abdomen, arm, and thigh) were scored between 1 and 4, and a total score of 8 and above was considered hirsutism (13). Findings of hyperandrogenemia and menstrual patterns were recorded in the database at the first diagnosis. The demographic and laboratory data of the patients were recorded retrospectively from the hospital database. Demographic features included waist-to-hip ratio (WHR), body mass index (BMI), gravidity, parity, and abortion. The parameters evaluated in the study were examined from the blood samples collected for diagnostic purposes before the treatment was initiated. Exclusion criteria were the presence of chronic systemic disease, infectious and inflammatory diseases, hormone replacement therapy, use of oral contraceptives or drugs for insulin resistance, age under 18 years, presence of a psychiatric disorder and drug use for it, history of bariatric surgery, and thyroid dysfunction. Secondary causes of clinical and/or biochemical hirsutism and oligomenorrhea, such as congenital adrenal hyperplasia, androgen-secreting tumors, Cushing's syndrome, hyperprolactinemia, thyroid dysfunction, and adrenal disorders were excluded.
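The diagnostic logic described above can be summarised schematically. The sketch below encodes the Ferriman-Gallwey threshold (total score of 8 or more) quoted in the text and the conventional Rotterdam requirement of at least two of the three features; the two-of-three rule is taken from the Rotterdam consensus rather than from this paper, and all names and input formats are assumptions made for the illustration.

```python
# Schematic illustration only; not the authors' screening code.

def ferriman_gallwey_hirsutism(site_scores) -> bool:
    """Nine body sites, each scored as described above; a total >= 8 suggests hirsutism."""
    return sum(site_scores) >= 8

def meets_rotterdam(hyperandrogenism: bool,
                    oligo_or_amenorrhea: bool,
                    polycystic_ovaries_on_us: bool) -> bool:
    """Conventional Rotterdam rule: at least two of the three features are present."""
    return sum([hyperandrogenism, oligo_or_amenorrhea, polycystic_ovaries_on_us]) >= 2

# Hypothetical patient: hyperandrogenism plus oligomenorrhea, normal ultrasound.
print(meets_rotterdam(True, True, False))  # True
```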
Anthropometric Measurements
Weight was measured with a digital scale, with minimal clothing and no shoes. Height was measured while standing without shoes. BMI was obtained by dividing weight in kilograms (kg) by the square of height in metres (kg/m²), and BMI categories were determined according to the World Health Organization's criteria. WHR was obtained by dividing the waist circumference, measured at the thinnest point between the rib and the iliac crest, by the hip circumference measured at the widest part of the hips.
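The two anthropometric indices reduce to simple arithmetic. The following minimal sketch reproduces the calculations described above; the example values are illustrative and are not study data.

```python
# Minimal sketch of the anthropometric calculations described above.
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def whr(waist_cm: float, hip_cm: float) -> float:
    """Waist-to-hip ratio: waist circumference divided by hip circumference."""
    return waist_cm / hip_cm

print(round(bmi(70.0, 1.65), 1))   # 25.7 kg/m^2
print(round(whr(80.0, 100.0), 2))  # 0.80
```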
Ultrasonography Assessment
Gynecological ultrasound was performed on the second or third day of menstruation with a 7.5 MHz transvaginal transducer or a 5 MHz transabdominal transducer. Antral follicles were measured in three dimensions, and those with an average diameter of 2-9 mm were counted.
Biochemical Measurements
All blood samples used in the study were taken between 08.00 and 09.00 in the morning, in the early follicular phase on the second or third day of the menstrual cycle. Pituitary, adrenal and gonadal axis hormones were checked in all patients because of amenorrhea and hirsutism complaints. Liver and kidney function tests, hemogram, serum lipid levels, fasting plasma glucose, and fasting insulin levels were measured. Serum follicle-stimulating hormone, luteinizing hormone (LH), prolactin, insulin, and thyroid-stimulating hormone (TSH) levels were determined by chemiluminescent immunometric assays using a Cobas 6000 analyzer (Roche, Switzerland). Fasting glucose, total cholesterol, high-density lipoprotein cholesterol and triglyceride (TG) levels were measured spectrophotometrically using an enzymatic colorimetric assay (Roche Integrated system, Mannheim, Germany). Low-density lipoprotein (LDL) cholesterol was calculated using the Friedewald formula. Insulin resistance was calculated using the homeostatic model assessment for the insulin resistance index (HOMA-IR). The HOMA-IR formula is fasting plasma glucose (mg/dL) x fasting serum insulin (mU/mL)/405 (14). Blood samples were collected from each patient after a 12-hour fasting period for TNF-α, interleukin-1 beta (IL-1β), and IL-6. Whole blood samples were centrifuged for 10 min at 4000 rpm, and the supernatants were kept at -80 °C until the assays were performed by an investigator who was blind to each patient's status. Commercial enzyme-linked immunosorbent assay (ELISA) kits were used for measuring Cystatin-C, TNF-α, IL-1β, and IL-6 (Bioassay Technologies, China) levels using appropriate wavelengths on a microplate reader (BioTek Instruments, EL x 800 TM, USA) following the assay instructions. Concentrations were calculated from the standard curves. Serum MDA level was determined according to Göçmen et al. (15) Total SOD activity was examined using the SOD Activity Assay kit (Rel Assay Diagnostics kit; Mega Tıp, Gaziantep, Turkey), according to the manufacturer's instructions.
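The two formula-based derivations mentioned above (HOMA-IR and the Friedewald LDL estimate) are shown below as a small sketch; the HOMA-IR expression follows the text, the Friedewald formula (LDL = TC - HDL - TG/5 in mg/dL, not valid when TG exceeds about 400 mg/dL) is standard, and the numeric inputs are illustrative only.

```python
# Sketch of the formula-based derivations mentioned above; units follow the text.
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_mu_ml: float) -> float:
    """HOMA-IR = fasting glucose (mg/dL) x fasting insulin (mU/mL) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_mu_ml / 405.0

def friedewald_ldl(total_chol: float, hdl: float, triglycerides: float) -> float:
    """Friedewald estimate (mg/dL): LDL = TC - HDL - TG/5 (not valid for TG > ~400 mg/dL)."""
    return total_chol - hdl - triglycerides / 5.0

print(round(homa_ir(95, 12), 2))      # 2.81
print(friedewald_ldl(200, 50, 150))   # 120.0
```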
Statistical Analysis
The statistical package program SPSS 20 (IBM Corp. released 2011. IBM SPSS Statistics for Windows, version 20.0, Armonk, NY: IBM Corp.) was used to evaluate the data. Data were expressed as mean ± standard deviation and in percentages. Continuous variables were investigated using analytical methods (Kolmogorov-Smirnov/Shapiro-Wilk test) to determine whether they were normally distributed. The Mann-Whitney U test was used for the non-parametric numerical data, while the Student's t-test was adopted for the parametric numerical data.
Relationships between categorical variables were analyzed by the chi-square test. Bivariate correlations were investigated by Spearman's correlation analysis. Univariate and multivariate logistic regression analyses were performed to identify risk factors that may affect the PCOS group. P<0.05 was accepted as statistically significant.
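An equivalent analysis pipeline can be reproduced with open-source tools. The sketch below uses SciPy and statsmodels in place of SPSS; the input file name, column names and model specification are assumptions made for illustration and do not reproduce the authors' actual dataset.

```python
# Rough open-source equivalent of the workflow described above (illustrative only).
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("pcos_cohort.csv")                 # hypothetical input file
pcos = df.loc[df["group"] == "PCOS", "cystatin_c"]
ctrl = df.loc[df["group"] == "control", "cystatin_c"]

# Normality check, then parametric or non-parametric two-group comparison.
_, p_pcos = stats.shapiro(pcos)
_, p_ctrl = stats.shapiro(ctrl)
if p_pcos > 0.05 and p_ctrl > 0.05:
    comparison = stats.ttest_ind(pcos, ctrl)
else:
    comparison = stats.mannwhitneyu(pcos, ctrl)

# Spearman correlation (Cystatin-C vs. IL-6) and a simple logistic regression.
rho, p_rho = stats.spearmanr(df["cystatin_c"], df["il6"])
logit = sm.Logit(df["group"].eq("PCOS").astype(int),
                 sm.add_constant(df[["cystatin_c", "il6", "tnf_alpha"]])).fit()

print(comparison, rho, p_rho)
print(logit.summary())
```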
Results
A total of 96 patients were included in the study, 60.4% of whom were PCOS (n=58) and 39.6% were from the control group (n=38). When the demographic data of both groups were analyzed, gravida and parity were found to be significantly lower in the PCOS group (p<0.05) ( Table 1). When the laboratory data of the patients were evaluated, it was observed that the TSH level was statistically significantly lower in the PCOS group (p<0.05). There was no significant difference between the two groups in fasting glucose, fasting insulin, and cholesterol levels, which are cardiovascular risk markers, but Cystatin-C level was found to be high in the PCOS group (p<0.05) ( Table 2). When the inflammatory, oxidant, and antioxidant markers of both groups were compared, it was seen that IL-1β, IL-6, TNF-α, and MDA were statistically significantly higher and SOD was low in patients with PCOS (p<0.05) ( Table 2). In the multivariate regression analysis, TNF-α [odds ratio (OR)=1.2, 95% confidence interval (CI)=1.1-1.3], IL-1β (OR=1.1, 95% CI=1.1-1.3), IL-6 (OR=3.9, 95% CI=1.1-13.5) and Cystatin-C (OR=11.7, 95% CI=2.8-98.1) levels were found to be independently high in the PCOS group (Table 3). When the relationship between Cystatin-C elevation and these markers was evaluated (in the bivariate correlation), it was observed that the increase in Cystatin-C was associated with an increase in IL-6 levels (r=0.214, p=0.037) and a decrease in SOD levels (r=-0.280, p=0.006) ( Table 4).
Discussion
This study showed that IL-1β, IL-6, TNF-α, and MDA were significantly higher and SOD was low in patients with PCOS. In addition, Cystatin-C, which is a risk factor for cardiovascular diseases, was found to be high in the PCOS group. When the relationship between the elevation of Cystatin-C and inflammatory, oxidant, and antioxidant mediators was evaluated, it was observed that there was a significant correlation with the increase in IL-6 and the decrease in SOD. Studies have shown that Cystatin-C is a good predictor of cardiovascular events (16,17). It has been reported that it may be an indicator of future cardiovascular risk in women with PCOS (11). Çınar et al. (18) stated that increased Cystatin-C levels in patients with PCOS are an early indicator of negative clinical outcomes. Statistically significant negative outcomes in the PCOS group in that study were BMI, WHR, FS, triglyceride, LDL, total cholesterol, estradiol, dehydroepiandrosterone sulphate, free testosterone, LH, and high-sensitivity C-reactive protein. Gozashti et al. (10) also found high Cystatin-C levels in patients with PCOS. In our study, there was no significant difference between the two groups in terms of fasting glucose, fasting insulin and lipid levels, which are cardiovascular risk factors, while Cystatin-C was found to be high regardless of these risk factors. TNF-α, IL-1β, and IL-6 are markers of inflammation. Studies have shown that TNF-α is higher in women with PCOS than in the healthy population (19). TNF-α has been particularly associated with insulin resistance and hyperandrogenemia and is higher in follicular fluid than in serum (20)(21)(22). It has also been stated that high TNF-α levels in patients with PCOS cause the development of type 2 diabetes mellitus (type 2 DM), infertility, atherosclerosis and some cancers (23).
Other cytokines known to increase in PCOS are IL-1β and IL-6. It is thought that IL-1β gene activation, which plays a key role in the inflammatory response, may affect steroidogenesis in granulosa cells (24) . It has also been reported that increased IL-1β causes follicular atresia and inhibits oocyte maturation (25) . In the study of Alkhuriji et al., (26) it was observed that IL-1β levels were high in patients with PCOS with obesity. The increase in IL-1β in these patients is thought to be due to anovulation (27) . It has been shown that IL-6 levels, one of the inflammatory cytokines, are increased especially in patients with PCOS with insulin resistance (28) . IL-6 is thought to have proinflammatory properties that cause insulin resistance (29) . Additionally, it has been observed that insulin resistance and obesity stimulate TNF-α and IL-6 gene expression in adipose tissue in patients with PCOS (30) . Although there was no statistical difference in BMI and WHR between the PCOS and control groups in our study, TNF-α, IL-1β, and IL-6 were found to be statistically significantly higher in the PCOS group. These mediators were independently elevated in patients PCOS when performed in a multivariate analysis. This indicates that inflammation plays an important role in the pathophysiology of PCOS. As it is known, proinflammatory mediators increase the risk of cardiovascular diseases (31) . MDA is an indicator of intracellular and cell membrane damage, and lipid peroxidation (32) . SOD is one of the major antioxidant enzymes that neutralizes free oxygen radicals (33) . They are mediators that show oxidative stress in patients with PCOS (5) . It is stated that insulin resistance, obesity, dyslipidemia, and hyperandrogenism seen in patients with PCOS increase MDA levels and decrease SOD levels (34) . Increased MDA levels are an indicator of lipid oxidation and this is a risk factor for cardiovascular diseases (5) . Studies on SOD levels are conflicting. While studies have shown that it decreases in patients with PCOS, there are also studies indicating that SOD levels increase in response to increased oxidant levels in the circulation (5,35) . Polat and Şimşek (36) reported in their study that Turkish women with PCOS had mutations in the SOD-1 and SOD-2 genes and did not have sufficient antioxidant capacity. In our study, MDA was found to be high and SOD to be low in patients with PCOS. In our study, there was no difference between the two groups in terms of BMI, WHR, fasting glucose, fasting insulin, and lipid levels, while a significant difference was found between Cystatin-C, inflammatory, oxidant, and antioxidant markers. This shows that inflammation and oxidant-antioxidant pathway are affected independently by obesity, metabolic syndrome, and insulin resistance. Additionally, although routine cardiovascular risk factors seem to be normal, high Cystatin-C levels made us think that these mediators may be related. In the correlation analysis performed for this purpose, the increase in Cystatin-C was correlated with the increase in IL-6 and the decrease in the SOD level. Gozashti et al. (10) , in their study, no relationship was found between elevated Cystatin-C and inflammatory mechanisms in patients with PCOS. There is no other study in the literature examining this relationship. Clarification of this relationship is also important in terms of treatment.
Polat and Şimşek (36) who detected SOD-1 and SOD-2 gene mutations in patients with PCOS, suggested adding antioxidant supplementation to the treatment due to decreased antioxidant capacity. When the relationship between IL-6 and SOD and Cystatin-C is evaluated, it may be necessary to add antioxidant supplements and anti-inflammatory agents to the treatment for cardiovascular protection. However, more studies are needed to include them in routine treatment.
Study Limitations
Our study has some limitations. A limitation is that the patient sample is too small and the PCOS cannot be divided into subgroups. Not looking for oxidant-antioxidant markers other than MDA and SOD may be another limitation. In addition, we did not apply antioxidant supplements and anti-inflammatory treatments to these patients. Therefore, we do not have posttreatment results. However, even if BMI, WHR, fasting glucose, fasting insulin, and lipid values are not different from the control group, it is important to show the elevation of Cystatin-C in these patients and to correlate this elevation with IL-6 and SOD.
Conclusion
Our study showed that Cystatin-C levels were high in patients with PCOS, even though there was no difference between the control group and the PCOS groups in terms of other cardiovascular risk factors. It is also the only study showing the relationship between increased Cystatin-C levels and IL-6 and SOD. This result may be effective in the treatment plan of the patients. However, our results should be confirmed with studies conducted with more patients. | 2022-09-24T06:18:27.643Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "1e009cefe3c101029b2cbc10686c52a33f8beb7e",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.4274/tjod.galenos.2022.29498",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "58ef745525eecd0247a143b75080fd1cf7c30a8d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15983001 | pes2o/s2orc | v3-fos-license | Theoretical Study on the Allosteric Regulation of an Oligomeric Protease from Pyrococcus horikoshii by Cl− Ion
The thermophilic intracellular protease (PH1704) from Pyrococcus horikoshii that functions as an oligomer (hexamer or higher forms) has proteolytic activity and remarkable stability. PH1704 is classified as a member of the C56 family of peptidases. This study is the first to observe that the use of Cl− as an allosteric inhibitor causes appreciable changes in the catalytic activity of the protease. Theoretical methods were used for further study. Quantum mechanical calculations indicated the binding mode of Cl− with Arg113. A molecular dynamics simulation explained how Cl− stabilized distinct contact species and how it controls the enzyme activity. The new structural insights obtained from this study are expected to stimulate further biochemical studies on the structures and mechanisms of allosteric proteases. It is clear that the discovery of new allosteric sites of the C56 family of peptidases may generate opportunities for pharmaceutical development and increases our understanding of the basic biological processes of this peptidase family.
PH1704 shares 90% sequence identity with PfpI, the most well characterized protein in this family, which is an intracellular cysteine peptidase characterized by its stability and the speculated proteolytic activity of the thermophilic archaebacterium. The 3D structure of PH1704 shows an α/β-sandwich fold, containing a similar domain characterized by a sharp turn between the β-strand and the α-helix. The fold resembles that of class I glutamine amidotransferase (GATase) [10], which is characterized by a conserved Cys-His-Glu active site. PH1704 forms a hexameric structure, and the active sites are formed at the interfaces between three pairs of monomers. The shared active site between subunits A and C of PH1704 performs proteolytic cleavage through a Cys-His-Glu catalytic triad: Cys100 and His101 residue on the A subunit, and Glu474 on the neighboring C subunit (seen in Figure 1a). Allosteric regulation of the function of a protein occurs when a small activator or inhibitor molecule binds away from the oligomeric protein's normal active site [11]. In recent years, several oligomeric proteases have been found to possess allosteric sites, and binding of small molecules to these sites could result in the modulation of enzyme activities [12][13][14]. For example, a novel allosteric site in protein tyrosine phosphatase 1B (PTP1B) was discovered to bind small molecules allowing an opportunity to avoid troubles associated with inhibitors of the catalytic site. Allosteric inhibition in PTP1B is a promising, new strategy for treatment of obesity and type II diabetes [15,16]. Another example of the importance of allosteric sites has been discovered with caspases, which are mediators of apoptosis and the inflammatory response [15,17]. Caspases are an important class of drug targets for stroke, ischemia, and cancer [17], but it has been difficult to find drug-like caspase inhibitors because of a strong preference for an acidic side chain and an electrophilic functionality to bind at the active site [17]. However, the binding of various ligands at the allosteric site prevents peptide binding at the active site [17]. From the above discussion, it is clear that the discovery of new allosteric sites may generate opportunities for pharmaceutical development and increases the understanding of basic biological processes.
In our previous study, PH1704 was observed to be an allosteric enzyme through experimental methods [18,19]. Anion allosteric regulation showed that Cl − functions as an allosteric inhibitor [18,19], but we still do not know how Cl − stabilizes distinct contact species and controls the enzyme activity. In this study, quantum mechanical calculations were employed to determine the binding mode of Cl − . A molecular dynamics (MD) method was used to explore the allosteric regulation. Our findings revealed that at least two processes are involved in functionally coupling the allosteric site and the active center of PH1704, that is: (i) Cl − binding, a process that entails masking the conformational stabilization of the subunit contact, is not beneficial to enzyme activity; (ii) stabilization of the active conformation of the common H bond between His101 and Glu474, may be caused by the unoccupied site at the two contacts of Cl − . This H bond enhances the rate of formation of the active conformer. Therefore, further experimental and theoretical studies for PH1704 are necessary.
Quantum Mechanical Calculation to Determine the Cl − Binding Mode
Motif Scan [20] was used to search for the Cl− binding site. A possible amidation site (referring to the amidation reaction that acylates the NH group of an amino acid with oxygen, chloride, and sulfur atoms) was found at Arg113 and Lys116. To explore the Cl− binding mode, we examined the crystal structure of PH1704 carefully. Two SO4²− ions were bound at the AC contacts near Arg113, forming a salt bridge with Arg113 and a hydrogen bond with Asn129 through water.
Rosetta design can be used to redesign an existing protein to increase binding affinity. We used Rosetta design to obtain the best theoretical mutant, R113T. As for the experimental data, the negative controlling effect disappeared for the R113T mutant with substrate R-AMC, and the apparent kcat/Km value was 4.4-fold higher than that of PH1704 for substrate R-AMC, indicating that it became a standard Michaelis-Menten enzyme [18,19].
Based on the above data, Arg113 may be involved in the allosteric regulation. However, the exact binding mode of the allosteric inhibitor Cl− is still unknown. Sheng et al. were the first to observe structural stabilization at the active site caused by the binding of an anionic allosteric activator [21]. Structural and biochemical data showed that mutations of some residues at this site influenced the binding of SO4²− and affected the enzymatic activity. In the present study, we find another anionic allosteric inhibitor, namely Cl−. The crystal structures of L-arginine·2HCl·H2O have been determined [22], but some basic questions related to PH1704 remain unanswered. For example, one of these questions is: "should Asn129 also form a hydrogen bond with Cl− through water?" Detailed understanding of the Cl− binding mode at the atomistic level is necessary for further successful rational design of PH1704.
Figure: The calculated Arg·2HCl·H2O·Asn supermolecular system displayed in Gaussian View. The coordinates of Arg113, Asn129, and Wat744 were taken from the Protein Data Bank [23], and the two Cl− and the hydrogen atoms were manually added in Gaussian View.
Determining the state of the water molecule in the structure is important. In this system, a proton-donor water molecule formed hydrogen bonds with Cl− (68) and Asn129. Figure 3 shows the numbering of Arg113 and the binding mode of SO4²−. As shown in Figure 3, an intermediate region of coexistence of the strongest van der Waals and the weakest hydrogen bonds is found. For the N(11)-H(43)···O(32) and N(14)-H(47)···O(34) contacts, these regions are located at 1.83 Å and 1.74 Å, respectively. Subtle differences were seen between models A and B: the two Cl− formed two hydrogen bonds with the head and tail of Arg113, whereas SO4²− formed only one hydrogen bond with the guanidine group of Arg113, so it can be concluded that Arg113 and Asn129 coordinate the allosteric inhibitor, Cl−. Hence Arg113 is involved in allosteric regulation, which is consistent with the experimental data. Asn129 may also take part in the allosteric action, as it forms a hydrogen bond with Cl− through water.
(Figure 3 caption: The coordinates of Arg113, Asn129, Wat744, and one SO4²− were taken from the Protein Data Bank [23].)
Protein-Substrate Complex Preparation
The docking scores between L-arginyl-7-amido-4-methylcoumarin (R-AMC) and PH1704 with AutoDock Vina, AutoDock 4.2 and Dock 6.6 are listed in Table 1. The docking score from AutoDock Vina was lower than with the other software, so the docked complex from AutoDock Vina was chosen for further study. The substrate, R-AMC (Figure 4a), was docked to PH1704 at the AC contacts. As seen from Figure 4b, R-AMC is located in the AC contacts away from the allosteric site (Arg113 and Asn129). There were 12 residues (Arg475, Glu410, Gly70, Lys43, Arg471, Cys100, Glu12, Arg71, His101, His44, Tyr120, and Glu474) around R-AMC (Figure 4c), and the results were consistent with those obtained by Du and Zhan [1,2] (Figure 4d); the reaction can therefore occur readily, so the PH1704-R-AMC complex can be used for further study.
(Figure 4 caption, panels c and d: (c) The important residues in R-AMC binding calculated by the Discovery Studio 3.5 client; green represents van der Waals contacts with R-AMC, and purple represents electrostatic contacts with R-AMC. (d) The surface around R-AMC generated by the Discovery Studio 3.5 client; R-AMC is in the active pocket and near the active triad (C100, H101 and E474).)
Molecular Dynamics Simulations to Study Allosteric Regulation by Cl −
Chloride anions have been reported to function as organic guests for surpermolecular systems [24]. However, the present study is the first to observe that Cl − produces appreciable changes in the catalytic activity site of the protease when used as an allosteric inhibitor.
An important goal for studying the allosteric enzyme PH1704 is to regulate the activity of catalysis through the addition of small molecules that change the enzyme structure of the catalyst and in turn control catalytic reaction rates and product distributions. We aim to determine the regulation induced by the binding of Cl − on the function, structure, and flexibility of PH1704.
To obtain a deeper understanding of the structural and dynamic basis for the allosteric effects in PH1704, 10 ns explicit-solvent molecular dynamics simulations were used. In this study, two separate simulations were conducted for each of the aforementioned cases, the PH1704-R-AMC complex in the Cl−-binding state and the R113T-R-AMC mutant. Extensive analysis of conformational change and motion was conducted through the computation of the root-mean-square deviation (RMSD), order parameters, dynamical cross-correlation maps, and essential dynamics. Figure 5 shows the RMSD of Cα atoms with respect to their initial positions. Sharp increases in the R113T-R-AMC mutant were observed in the plot during the first 4,000 ps. The average RMSD was below 2.5 Å over the entire simulation for this complex. However, the simulation trajectory of the PH1704-R-AMC complex in the Cl−-binding state appeared to be poorly equilibrated, with an average RMSD value of 4.0 Å over the last 11,000 ps, indicating that the structure of the R113T-R-AMC mutant was the more stable during data collection. Figure 6 shows the B-factors for Cα atoms calculated from the MD simulation of the PH1704-R-AMC complex. As shown in this figure, the simulated B-factors often (although not always) peak for the same residues. Moreover, residues around or right at the turns or loops are more flexible on the basis of these peaks, which is consistent with the fact that these sites lack a stable hydrogen bond network. The secondary structures were well maintained. In all the systems, the sheets S10 (residues 115-117) and S11 (residues 133-135) in the A and C contacts were lost and formed a loop (Figure 7). The most flexible regions corresponded to the AC contact (residues 115-135) in the R113T mutant. The large mobility of these domains is consistent with the need for this region to undergo conformational changes in oligomeric association of the AC subunit. The above result indicates that Cl− binding keeps the main region of the two contacts rigid. Nevertheless, the flexibility of the two contacts was beneficial for the enzyme activity.
Figure 7. (a) Conformational change at the two contacts in PH1704; (b) in the R113T mutant, S10 and S11 became a loop.
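The trajectory analyses mentioned here (Cα RMSD, per-residue B-factors and, later, hydrogen-bond occupancies) can be reproduced with standard Amber tooling. The pytraj sketch below is illustrative only: the file names, atom masks and the choice of pytraj itself are assumptions, and the published analysis may have used different programs and settings.

```python
# Minimal pytraj (AmberTools) sketch of the analyses described above; illustrative only.
import pytraj as pt

# Hypothetical trajectory and topology file names.
traj = pt.load("prod.nc", top="complex.prmtop")

ca_rmsd = pt.rmsd(traj, mask="@CA", ref=0)        # Ca RMSD relative to the first frame
bfac = pt.bfactors(traj, mask="@CA", byres=True)  # simulation-derived per-residue B-factors

# Hydrogen-bond occupancy around the His101-Glu474 catalytic-base contact (residue mask assumed).
hbonds = pt.search_hbonds(traj, mask=":101,474")

print(ca_rmsd[:5])
print(bfac[:5])
print(hbonds.donor_acceptor[:5])
```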
To investigate this flexibility, the phi and psi dihedral angles were plotted against MD time as shown in Figure 8a. The variations of the phi and psi dihedral angles stayed within 60 degrees for Lys116 in the WT PH1704. However, for the R113T mutant, the phi and psi dihedral angles reached 160 degrees (Figure 8b). As a result, the difference between Lys116 in the WT PH1704, which is included in S10, and Lys116 in the R113T mutant, which is not included in the sheet, can cause changes in the stability of the AC contact.
As shown in Figure 9a, the variations of the phi and psi dihedral angles stayed within 60 degrees for Tyr134 in the WT PH1704. However, the phi and psi dihedral angles for the R113T mutant reached 180 degrees (Figure 9b). The results above suggest that the R113T mutant is more flexible at the AC contacts, which is beneficial to the reaction. Hydrogen bonds (salt bridges) were present between His101 and Glu474, Ser108 and Asp525, and Arg477 and Asp126 (listed in Table 2). For the R113T mutant, S10 and S11 became a loop (Figure 6). S10 and S11 were near the H7 helix (residues 121-127), which was close to the AC contacts. This conformational difference appeared to be correlated with the binding of the Cl− ion at the allosteric site and had an important role in the regulation of enzymatic activity. The hydrogen bond (salt bridge) has an important function in this system [25]. Therefore, the hydrogen bonds in the three mutants were monitored. Table 3 summarizes the occupancies of these noncovalent interactions at the AC contact over the last 11 ns. The hydrogen bonds between the AC contacts in the WT enzyme-substrate complex simulation involved Asp525 and Ser108, and His101 and Glu474. Ser108-Asp525 in the R113T mutant formed a hydrogen bond with an occupancy of 97.35%, while His101 formed one hydrogen bond with Glu474 with an occupancy of 90.25% in the R113T mutant. Therefore, the hydrogen bonds in the mutant were stable, but the occupancy of the Ser108-Asp525 hydrogen bond decreased from 97.35% to 49.43% in the WT type. The occupancy of the His101-Glu474 hydrogen bond in the WT type (42.71%) also decreased. The occupancy of the salt bridge between Arg477 and Asp126 (32.12% in the WT type) increased in the R113T mutant. In a word, in the R113T enzyme-substrate complex all the hydrogen bonds and salt bridges were stable, whereas in the WT enzyme-substrate complex these interactions fluctuated during the MD. When the number of ion pairs between the AC contacts increases, the dimer may become more compact, thereby benefiting the catalytic-base hydrogen bond (His101 and Glu474). These results may partially explain why this mutation resulted in the enhancement of the catalytic efficiency of PH1704. The pKa values were calculated for all ionizable residues of PH1704 using the program H++ 3.0 [26]. The calculations were performed for the WT type and the R113T mutant. The catalytic residue Cys100 was believed to participate in proton shuttling with His101. His101 showed pKa values that significantly shifted from the standard values: the pKa of His101 shifted from 7.0 units in the WT type to 11.16 units in the R113T mutant. Based on the crystallographic data, His101 is involved in interactions with the catalytic residue Cys100 (Figure 1a). His101 forms a hydrogen bond with Glu474 to function as a catalytic base. In the active site, His101 in the WT type has a lower pKa than in the R113T mutant. The catalytic efficiency of PH1704 is related to the presence of the acid/base catalyst (His101-Glu474) of PH1704, and therefore to the pKa value of His101. The higher the pKa value of His101, the stronger the proton transfer between His101 and Cys100, and the stronger the resulting catalytic reaction becomes.
In summary, when Arg113 is mutated to Thr, Cl− is not located in the AC contacts and the two β sheets (S10 and S11) become loops. This flexible domain may increase the formation of the hydrogen bonds between the AC contacts, including His101-Glu474 and His501-Glu74, which function as catalytic bases. Figure 10 shows the cooperativity, which is a schematic representation of the Monod-Wyman-Changeux model of allosteric transitions [27]. A symmetric, multimeric protein can exist in one of two different conformational states, namely, active and inactive conformations. Each subunit has a binding site for an allosteric inhibitor and an active or binding site. PH1704 has three binding sites for 12 Cl− and three pairs of catalytic triads. When Cl− is not located in the enzyme, the two contacts become flexible, thereby helping to increase the stability of the enzyme-substrate complex. Thus, the activity of PH1704 is directly under allosteric control via the bound Cl− (allosteric inhibitor).
Quantum Mechanical Calculation Method
All calculations were performed with the Dmol 3 module in Material Studio program package [28]. In this system, C, N, and O atoms were fixed. Based on the optimized structure, the single point energy of all species was obtained by GGA/BLYP/DNP with the basis set 6-31+G (d) with the Gaussian 03 package [29][30][31]. Two quantum chemical models, ranging from 69 atoms to 72 atoms, were used. The following groups were included: (1) Model A composed of Arg113, Asn129, Wat744, and two Cl − ; (2) Model B composed of Arg113, Asn129, Wat744, and one SO 4 2− . The coordinates of Arg113, Asn129, Wat744, and one SO 4 2− were taken from Protein Data Bank [23], and two Cland hydrogen atoms were manually added by Gaussian View [32].
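For readers unfamiliar with this type of protocol, a single-point input at the quoted Gaussian level of theory can be generated as shown below. This is only a sketch: the route line mirrors the BLYP/6-31+G(d) level stated above, but the charge, multiplicity and coordinate list are placeholders, and the actual inputs used in the study are not reproduced here.

```python
# Illustrative generation of a Gaussian 03-style single-point input for one cluster model.
def gaussian_sp_input(title: str, charge: int, multiplicity: int, xyz_lines) -> str:
    route = "#P BLYP/6-31+G(d) SP"          # level of theory quoted in the text
    coords = "\n".join(xyz_lines)
    return f"{route}\n\n{title}\n\n{charge} {multiplicity}\n{coords}\n\n"

model_a = gaussian_sp_input(
    title="Model A: Arg113 + Asn129 + Wat744 + 2 Cl-",
    charge=-1,                               # assumed net charge (Arg+ with two Cl-)
    multiplicity=1,
    xyz_lines=["Cl   0.000   0.000   0.000"],  # real coordinates would come from the PDB model
)
print(model_a)
```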
Protein-Substrate Complex Preparation
The 3D structure of PH1704 was downloaded from the Protein Data Bank (PDB id: 1G2I). Water molecules and other heteroatoms were removed, and the program PDB2PQR 1.8 [33] was used to assign position-optimized hydrogen atoms, utilizing the additional H++ 3.0 [26] algorithm at pH 7.5 to predict protonation states. The 3D structure of the substrate (R-AMC) was built with the InsightII/Builder program and was further optimized using the 6-31G(d) basis set with the Gaussian 03 package [29][30][31]. The R113T mutant was made with PyMOL. The six subunits of PH1704, containing 966 residues, were used for the further docking and MD study. The substrate R-AMC was docked at the AC contacts, which contained the catalytic triad (Cys100, His101 and Glu474) and the allosteric site (Arg113 and Asn129) in the A subunit. Two Cl− and one water were manually added with Gaussian View [32] at the allosteric site according to the quantum mechanical calculation results.
AutoDock Vina, AutoDock 4.2 and Dock 6.6 were used to perform docking [34][35][36]. The grid size for AutoDock Vina and AutoDock 4.2 docking was 56 Å × 56 Å × 56 Å. The created clusters were enclosed in a box, and force-field scoring grids were generated by DOCK 6.6 [36]. The maximum number of orientations of the ligand was limited to 5000, and only the 20 lowest-scoring solutions were saved and evaluated.
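An AutoDock Vina run consistent with the 56 Å × 56 Å × 56 Å grid quoted above can be driven by a small configuration file. The sketch below is hedged: the box size and number of saved modes follow the text, but the box centre, file names and exhaustiveness are assumptions, since those values are not reported here.

```python
# Writes a hypothetical AutoDock Vina configuration file (illustrative values only).
vina_config = """\
receptor = ph1704_hexamer.pdbqt
ligand   = r_amc.pdbqt
center_x = 0.0
center_y = 0.0
center_z = 0.0
size_x = 56
size_y = 56
size_z = 56
exhaustiveness = 8
num_modes = 20
"""

with open("vina_ac_contact.conf", "w") as fh:
    fh.write(vina_config)
```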
Molecular Dynamics Simulations
Amber 10.0 [37] was used for the MD simulations. For the ligand, R-AMC, GAFF force field [38] parameters and RESP partial charges [39] were assigned using the Antechamber program implemented in Amber 10.0. Several sets of MD simulations were carried out on the protein-ligand complex and the mutant structures using the Amber 10.0 simulation package and the Parm06 force field [37], respectively. The program LEaP was used to neutralize the complexes. The SHAKE algorithm [40] was used to constrain the bonds involving hydrogen atoms. The complexes were solvated in an octahedral box of water, with the shortest distance between any protein atom and the edge of the box being approximately 10 Å. The particle mesh Ewald (PME) method [41] was employed to calculate long-range electrostatic interactions. The complexes were then minimized for 1,000 steps with the steepest descent method using the PMEMD module of Amber 10.0. The systems were equilibrated at 353 K for 50 ps, and the production trajectory was recorded after this 50 ps equilibration. The two complex systems were simulated for 15 ns. The time step used in all calculations was 2.0 fs. Coordinates were saved every 1 ps for subsequent analysis.
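A production-run input consistent with the settings quoted above (SHAKE on bonds involving hydrogen, PME, 353 K, a 2 fs time step, 15 ns of sampling and coordinates saved every 1 ps) might look like the sketch below. Values not stated in the text, such as the direct-space cutoff and the thermostat, are assumptions made for illustration.

```python
# Writes an illustrative Amber production mdin; unspecified settings are assumed.
production_mdin = """\
Production MD at 353 K (illustrative)
&cntrl
  imin=0, irest=1, ntx=5,
  ntb=1, cut=10.0,              ! periodic box; 10 A cutoff assumed
  ntc=2, ntf=2,                 ! SHAKE on bonds involving hydrogen
  ntt=3, gamma_ln=2.0,          ! Langevin thermostat assumed
  temp0=353.0,
  dt=0.002, nstlim=7500000,     ! 2 fs step x 7,500,000 steps = 15 ns
  ntpr=500, ntwx=500,           ! coordinates saved every 1 ps
&end
"""

with open("prod.mdin", "w") as fh:
    fh.write(production_mdin)
```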
Conclusions
The intracellular protease from P. horikoshii (PH1704) is the first allosteric enzyme that has negative cooperativity with chloride ion. Quantum mechanical calculation identified the binding mode of Cl − with Arg113 and Asn129. Arg113 may be involved in the allosteric mechanism because it forms a salt bridge with two Cl − . The molecular dynamics simulation was used to investigate the allosteric mechanism of PH1704. Our findings indicated that at least two components are involved in functionally coupling the allosteric site and the active center of PH1704, namely: (i) Cl − binding process that masks the conformational stabilization of the subunit contact, is not beneficial to enzyme activity; (ii) stabilization of the active conformation of the H bond between His101 and Glu474, may be caused by the unoccupied site at the two contacts of Cl − . Thus, further experimental and theoretical studies are necessary. | 2016-03-14T22:51:50.573Z | 2014-02-01T00:00:00.000 | {
"year": 2014,
"sha1": "1c9d94504449781982ff8d2fa5d5b2be160611d2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/19/2/1828/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c9d94504449781982ff8d2fa5d5b2be160611d2",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
220412488 | pes2o/s2orc | v3-fos-license | Beyond the HLA polymorphism: a complex pattern of genetic susceptibility to pemphigus
Abstract Pemphigus is a group of autoimmune bullous skin diseases that result in significant morbidity. As for other multifactorial autoimmune disorders, environmental factors may trigger the disease in genetically susceptible individuals. The goals of this review are to summarize the state of knowledge about the genetic variation that may affect the susceptibility and pathogenesis of pemphigus vulgaris and pemphigus foliaceus – both the endemic and the sporadic forms –, to compare and discuss the possible meaning of the associations reported, and to propose recommendations for new research initiatives. Understanding how genetic variants translate into pathogenic mechanisms and phenotypes remains a mystery for most of the polymorphisms that contribute to disease susceptibility. However, genetic studies provide a strong foundation for further developments in this field by generating testable hypotheses. Currently, results still have limited influence on disease prevention and prognosis, drug development, and clinical practice, although the perspectives for future applications for the benefit of patients are encouraging. Recommendations for the continued advancement of our understanding as to the impact of genetic variation on pemphigus include these partially overlapping goals: (1) Querying the functional effect of genetic variants on the regulation of gene expression through their impact on the nucleotide sequence of cis regulatory DNA elements such as promoters and enhancers, the splicing of RNA, the structure of regulatory RNAs and proteins, binding of these regulatory molecules to regulatory DNA elements, and alteration of epigenetic marks; (2) identifying key cell types and cell states that are implicated in pemphigus pathogenesis and explore their functional genomes; (3) integrating structural and functional genomics data; (4) performing disease-progression longitudinal studies to disclose the causal relationships between genetic and epigenetic variation and intermediate disease phenotypes; (5) understanding the influence of genetic and epigenetic variation in the response to treatment and the severity of the disease; (6) exploring gene-gene and genotype-environment interactions; (7) developing improved pemphigus-prone and non-prone animal models that are appropriate for research about the mechanisms that link genotypes to pemphigus. Achieving these goals will demand larger samples of patients and controls and multisite collaborations.
Introduction
Pemphigus is a group of autoimmune skin diseases of unclear etiology, characterized by epidermal blisters and erosions in the stratified squamous epithelium affecting the skin and/or mucous membranes. The main forms are pemphigus vulgaris (PV) and pemphigus foliaceus (PF). Pemphigus patients produce immunoglobulin G (IgG) antibodies targeting proteins at the cell surface of keratinocytes. The autoantigens are part of the desmosomes, the molecular complexes specialized for cell-to-cell adhesion by anchoring intermediate filaments. Keratinocytes within pemphigus lesions lose cell-cell adhesion due to damage of desmosomes, a process named acantholysis. While PV can affect either the mucous membranes alone or the mucous membranes and the skin, in PF lesions develop only in the skin. The primary autoantigens are desmoglein 1 (DSG1) in PF and desmoglein 3 (DSG3) in PV, but PV patients may also develop anti-DSG1 autoantibodies. Detection of anti-desmoglein antibodies in patients with pemphigus is a hallmark and a diagnostic criterion. Additional autoantigens have been identified in PV patients (Kalantari-Dehaghi et al., 2013;Sajda et al., 2016); however, the significance of the non-desmoglein targets is unknown.
Diagnosis is based on clinical, histological, and immunochemical criteria. If untreated, pemphigus has a poor prognosis, and mortality is high, especially for PV. Treatment is primarily by systemic corticosteroids, and adjuvant broadscale immunosuppression, whose side effects can be severe. Other adjuvant therapies for patients with high levels of circulating autoantibodies are high-dose intravenous immunoglobulin (IVIg) and plasmapheresis or extracorporeal immunoadsorption with protein A. A promising option is the depletion of B lymphocytes with rituximab, a monoclonal antibody targeting CD20+ B cells, particularly in the treatment of patients who develop serious side effects or do not respond to conventional therapy. Some emerging therapies that have shown positive outcomes in other autoimmune diseases are being investigated (Ruocco et al., 2013;Kasperkiewicz et al., 2017;Hans-Filho et al., 2018;Yanovsky et al., 2019).
Pemphigus frequency varies according to geographic area and ethnic groups (Alpsoy et al., 2015). Both PF and PV are rare, but, in most of the world, PV is more frequent, corresponding to 65% to 85% of the pemphigus cases. The mean incidence of PV is usually higher in women, but the female:male ratio varies among populations from 0.45 to 5. The incidence reported for the different regions of Europe ranged between 0.5 and 2.4 per million per year. In Southern and Eastern Europe, the frequency of the disease is higher than in North and Central Europe; in Turkey, the yearly incidence was reported as 2.4 per million. The incidence in Asia varied between 1.6 and 16.1 per million. In North America, the incidence was reported as 32 per million in people of Jewish origin and 4.2 per million for people of non-Jewish ancestry. In Africa, the yearly incidence of pemphigus was reported as 2.9 per million in Mali and 6.7 and 8.6 per million in Tunisia. Most of these figures refer to pemphigus in general, or to only PV.
As for PF, it is generally even rarer than PV, but PF reaches high frequency in regions of endemicity in South America and Tunisia. The highest incidence of PF occurs in central-western Brazil, where the disease is known as fogo selvagem (FS, meaning wild fire in Portuguese). The incidence varies among regions and over time from 9 to 83 cases per million inhabitants per year, and the female:male ratio is approximately 1.5. Endemic foci have also been reported for Colombia, Venezuela, Peru, Bolivia, Argentina and Paraguay (Chiossi and Roselino, 2001). The highest prevalence has been observed in the Xavante and Terena Amerindians (1.4% and more than 3%, respectively; Aoki et al., 2004). In central and southern Tunisia, the yearly incidence of pemphigus was estimated at 6.7 cases per million per year, of which 61% were PF, particularly among women living in rural areas, with a female:male ratio of 4.1 (Bastuji-Garin et al., 1995). However, in the north of the country, the incidence was 8.6 cases per million and 61% were PV patients, with a female:male ratio of 2 (Zaraa et al., 2011).
The mechanism resulting in the breakdown of the immunological tolerance remains unknown. However, it seems settled that the onset and the course of pemphigus depend on environmental factors triggering the disease in individuals with a predisposing genetic background (Figure 1). Although essential, the complex genetic background does not suffice for disease outbreak; exposure to ill-defined precipitating environmental factors is required. These also may differ between subjects and are related to their lifestyle. Many factors have been associated with the onset or the course of pemphigus. Certain drugs may interfere with the keratinocyte membrane biochemistry and/or with the immune balance (respectively, biochemical and immunologic acantholysis). Viral infections, primarily the herpetic ones, may trigger the outbreak of pemphigus or complicate its clinical course. The precipitating effect of the viral attack may result from overactivated inflammatory and immune responses. Rare, but well-documented events that may trigger the disease in susceptible individuals are physical agents (ultraviolet or ionizing radiation, thermal or electrical burns, surgery and cosmetic procedures), contact allergens (e.g., organophosphate pesticides), dietary factors (e.g., garlic, leek, onion, black pepper, red chili pepper, red wine, tea), and emotional stress (Ruocco et al., 2013). Epidemiological features of FS in Brazil indicate continued exposure to certain hematophagous insect bites as a possible precipitating factor of the disease (Lombardi et al., 1992;Aoki et al., 2004;Qian et al., 2016).
Herein I provide a comprehensive overview of the genetic risk factors and discuss insights into pemphigus pathogenesis that this knowledge is revealing. Both pemphigus foliaceus and pemphigus vulgaris are addressed. Associations are discussed considering the function of the gene product, the allele or haplotype frequencies in populations, the strength and statistical significance of the association, sample size, and statistical power.
Pemphigus genetics
Evidence for a genetic basis
Although no systematic study of pemphigus recurrence in families has been published, several reports underscore the influence of a polygenic genetic background in susceptibility and pathogenesis.
Only two cases (0.24%) of PV were seen among 830 first-degree relatives of PV patients, and none among relatives of 890 controls, while the prevalence of any autoimmune disease (AD) was 7.4% in relatives of the patients compared to 2.3% in relatives of the controls (Firooz et al., 1994). In a survey of 171 PV patients, the prevalence of AD among first-, second- and third-degree relatives of patients was 50.6%, 34.3% and 15.1%, respectively (Gupta et al., 2011), in agreement with partially shared genetic and environmental factors between ADs.
In one Terena Amerindian population in Brazil with a prevalence of FS close to 3%, over half of the patients (16 of 29) had at least one relative (parent, sibling, aunt/uncle, cousin) with the disease (Hans-Filho et al., 1999). Unfortunately, the prevalence among matched relatives of nondiseased individuals in that population was not reported.
The wide range of incidence among populations around the globe is often interpreted as evidence for a genetic basis of multifactorial diseases such as pemphigus. However, unlike for monogenic diseases, variable exposure to triggering non-genetic factors is a likely cause of this heterogeneity. These alternative hypotheses should be better explored in future studies.
Nonetheless, the associations with multiple genetic variants thus far described support the hypothesis that the pemphigus forms are complex multifactorial diseases. Susceptibility is clearly polygenic, meaning that specific genotypes at multiple loci are involved. As is the case for other complex disorders, the involved polygenes overlap only partially between patients, and none of the variants conferring genetic susceptibility is essential or sufficient for disease manifestation.
Variants of numerous genes have been analyzed, especially in FS, almost always in case-control association studies. Most candidates were genes whose products are involved in immune responses, in line with the autoimmune and autoinflammatory features of pemphigus. Data for PV and PF exist for populations of Europe, South and North America, East Asia, the Middle East, and North Africa and are described below and summarized in Table 1, and Tables S1 and S2.
HLA and other major histocompatibility complex (MHC) genes
Classical HLA genes
As for most ADs, the associations with the classical HLA class II genes were also the first to be described for pemphigus, and their variants are the strongest determinants of disease risk.
Since the mid-1970s, numerous studies addressed a possible effect of HLA alleles on PV and PF pathogenesis. Pioneering studies used low-resolution serology to genotype HLA class I antigens only. Soon after the development of medium- to high-resolution typing at the DNA level in the 1980s, it was thought that the "true" associations with individual alleles would be discovered, facilitating understanding of the mechanisms governing susceptibility/resistance to pemphigus and other HLA-associated diseases. However, since then, the enhanced insight into protein structure vs. function revealed additional layers of complexity.
Figure 1 - Environmental factors trigger pemphigus in genetically predisposed individuals carrying susceptibility genotypes. According to this hypothetical mechanism, insect saliva, virus, or other environmental factor triggers mast cell degranulation, which increases the permeability of blood vessels, causing edema. Langerhans cells and keratinocytes react to the noxious environmental stimulus producing pro-inflammatory cytokines and delivering other stress signals. Skin-resident innate immune cells and fibroblasts may contribute to the local inflammatory response. Inflammation leads to the recruitment of neutrophils and monocytes. The first encountered antigen derived from the environmental triggering factor is yet unknown. Activated antigen-presenting cells (APC) such as dermal dendritic cells process proteins derived from the environmental agent, migrate to skin-draining lymph nodes and present antigenic peptides bound to HLA class II molecules to T cells. In the secondary lymphoid organ, the T helper cells activate B cells primed by the same antigen. This model predicts that T cells specific for an environmental peptide bound to a susceptibility HLA class II molecule cross-react with a self-peptide bound to the same type of HLA molecule, such that peptides derived from a non-self protein mimic peptides of the self-protein desmoglein when bound to the relevant HLA protein. Similarly, the disease may be triggered in the intestine and perhaps other tissues by an environmental or microbiota-derived antigen.
The mechanisms by which some HLA alleles may impact the development of ADs are not precisely known, but it is reasonable to suppose that they are related to structural and functional aspects of peptide binding and interaction with the cognate receptors. Like most proteins, the HLA molecules also have pleiotropic effects, as illustrated by the functions of HLA class I molecules as ligands for both the T lymphocyte receptors and NK cell receptors. Very different polymorphic motifs of the HLA and their receptor molecules impact these two interactions. Also, the expression levels may differ between alleles of the same HLA locus, with functional consequences. Therefore, the genetic associations with diseases should be further investigated, considering this functional complexity. On the other hand, there are some characteristics of HLA that hinder a clear conclusion about the causal susceptibility and protective HLA class II alleles in pemphigus as well as other diseases: (a) linkage disequilibrium (LD) between the analyzed genes, as will be discussed below; (b) additive or epistatic functional interactions between HLA-DRB1, DQA1 and DQB1, meaning that the haplotype rather than the individual alleles corresponds to the disease-relevant genetic unit. The same rationale can be extended to the HLA genotype (union of the two HLA haplotypes of any individual) and other not investigated MHC genes; (c) the multiple HLA-DQ molecules in double heterozygous individuals: the HLA-DQ heterodimer can be formed by the alpha and beta chains encoded in cis, i.e., by HLA-DQA1 and -DQB1 genes of the same haplotype, or in trans, by the corresponding genes of the paternal plus the maternal haplotype.
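To make point (c) above concrete, the short Python sketch below enumerates the DQ heterodimers that a DQA1/DQB1 double heterozygote can assemble in cis and in trans; the allele names are used purely for illustration and the snippet is not part of any cited analysis.

```python
# Minimal sketch: enumerating the HLA-DQ alpha/beta heterodimers that a double
# heterozygote can assemble in cis and in trans. Allele names are illustrative only.
from itertools import product

# One hypothetical genotype: paternal and maternal haplotypes, each carrying
# one HLA-DQA1 and one HLA-DQB1 allele.
haplotypes = [
    {"DQA1": "DQA1*03:01", "DQB1": "DQB1*03:02"},  # paternal haplotype
    {"DQA1": "DQA1*05:01", "DQB1": "DQB1*02:01"},  # maternal haplotype
]

# Every alpha chain can pair with every beta chain, so a DQA1/DQB1 double
# heterozygote can express up to four distinct DQ molecules.
for alpha_hap, beta_hap in product(haplotypes, haplotypes):
    phase = "cis" if alpha_hap is beta_hap else "trans"
    print(f"{alpha_hap['DQA1']} + {beta_hap['DQB1']}  ({phase})")
```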
In most studies, only the HLA-DRB1 and HLA-DQB1 class II genes were analyzed, but several also analyzed HLA-DQA1; a few searched for associations with the classical class I (Ia) genes HLA-A, HLA-B or HLA-C. Even fewer examined the non-classical HLA class I genes (Ib) and non-HLA MHC genes.
Regarding the class Ia genes, associations differ greatly between populations, while this is less evident for the class II genes. Probably the majority of the class Ia associations result from LD with HLA-DRB1 and HLA-DQB1 alleles, but a direct effect cannot be ruled out. Indeed, associations between PF and the HLA Ia ligands of KIR have been detected (Augusto et al., 2012) and will be presented in this review together with the KIR associations.
While PF and PV share several susceptibility and protective HLA variants, significant differences point to partially dissimilar etiology and pathogenesis of these diseases. Conversely, differences among populations for the same pemphigus form are mostly due to the different frequencies of the alleles and haplotypes, but some may have a biological basis, possibly related to environmental triggering factors and the immune response to specific peptides. The observed associations are described below. A summary of the associations between pemphigus and HLA class II alleles and haplotypes is presented in Tables S1 and S2.
Much work has been dedicated to the search for associations of pemphigus vulgaris with HLA alleles and haplotypes of the classical class II genes. Ashkenazi Jewish patients with PV presented significantly higher frequencies of HLA-DR4-HLA-DQw8(3) haplotypes than the matched control group (Ahmed et al., 1990). The results of case-control association studies of European, Western and Southern Asian, and North and South American populations of European and Western Asian ancestry showed that the risk haplotype is DRB1*04:02-DQA1*03:01-DQB1*03:02 (Table S1). Apart from the data presented in the PV studies, this conclusion is based on the frequency of the individual alleles and this haplotype in the populations studied, available at the Allele Frequencies Net Database (AFND; Gonzalez-Galarza et al., 2018). Allele DRB1*04:06 in the Japanese and Chinese is a PV susceptibility allele as well (Yamashina et al., 1998;Zhang et al., 2019). In the DRB1*04 haplotypes, the association with DQA1*03:01-DQB1*03:02 seems secondary to the association with DRB1*04, as a result of high linkage disequilibrium (LD): the alleles DRB1*04:01, 04:03 and 04:05 (and others) also occur in DQA1*03:01-DQB1*03:02 haplotypes, but were not associated with increased susceptibility to PV (e.g., Carcassi et al., 1999;Haase et al., 2015). In fact, DRB1*04:01 may be a protective allele in the German population.
Several of the studies do not report protective alleles and haplotypes or mention only the allele groups. The available data indicate that all major HLA-DRB1 allele groups apart from DRB1*04, 08 and 14 present lower frequency among PV patients compared to controls: DRB1*03, 07, 11, 13 and 15 in six to eleven of the populations, DRB1*01 in three, DRB1*16 in two, and DRB1*09 in one (Table S1). This difference between associations of PV with DRB1 alleles is mostly due to low allele frequency in the populations studied. For example, the frequency of DRB1*09 is 0 to 2% in the populations, with the exception of Chinese and Japanese (12% -16%). For HLA-DQB1, alleles DQB1*02, *06 and *03:01 are markers of decreased risk (Table S1). The LD pattern indicates that HLA-DQB1 rather than HLA-DRB1 alleles may be the relevant protective factors: worldwide, DQB1*02 occurs almost exclusively together with DRB1*03:01 or 07:01, whose frequency is decreased among patients. However, DRB1*07:01 may also occur with DQB1*03:03 or 05:01 (Gonzalez-Galarza et al., 2018), which are not associated with PV. Similarly, alleles of group DQB1*06 occur in haplotypes bearing HLA-DRB1 alleles that belong to the DRB1*13 and 15 groups. Moreover, the structural differences between DRB1*03:01 and 07:01, and between the DRB1*13 and 15 groups, are great, resulting in distinct peptide-binding properties. For these reasons, DQB1*02 and DQB1*06 most likely are the protective alleles in the corresponding haplotypes. By contrast, allele DQB1*03:01, also associated with low PV risk, occurs in both protective (DRB1*11) and susceptibility (DRB1*08:04) haplotypes and cannot have a direct effect on PV.
When HLA-DRB1 alleles were grouped in three categories, susceptibility (SU), protective (PR), and neutral (NE), an additive effect of SU was observed: the risk for SU/SU genotypes was about twice the risk for SU/NE genotypes; the PR/PR and PR/NE genotypes were equally highly protective; conversely, the PR/SU and NE/NE genotypes exhibited a neutral phenotype. A dominant effect of protective HLA alleles has been reported for other autoimmune diseases and may result from the action of autoantigen-specific regulatory T lymphocytes (Treg cells) (Ooi et al., 2017) Brochado et al., 2016) and DRB1*15-DQB1*06:02 (Moraes et al., 1991;Pavoni et al., 2003). It is difficult to identify the primary association or to detect additive or epistatic gene-gene interactions because HLA-DQ and HLA-DRB1 alleles present high LD. However, the most constant markers of high risk are DQB1*05 and 03:02, while DQB1*02, 03:01 and 06 mark most low-risk haplotypes (Table S2).
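As a minimal illustration of the case-control arithmetic behind such genotype-category comparisons, the Python sketch below computes an odds ratio for each genotype class against a reference class; all counts are hypothetical and are not data from the studies cited above (SciPy is assumed to be available for Fisher's exact test).

```python
# Minimal sketch of the case-control odds-ratio comparison behind the
# SU/PR/NE genotype grouping. All counts below are hypothetical, purely to
# illustrate the arithmetic; they are not data from any cited study.
from scipy.stats import fisher_exact

# genotype category -> (count in patients, count in controls); NE/NE is the reference
counts = {
    "SU/SU": (40, 10),
    "SU/NE": (50, 30),
    "PR/NE": (10, 40),
    "NE/NE": (30, 30),   # reference category
}

ref_cases, ref_controls = counts["NE/NE"]
for genotype, (cases, controls) in counts.items():
    if genotype == "NE/NE":
        continue
    # 2x2 table: rows = genotype vs reference, columns = patients vs controls
    table = [[cases, controls], [ref_cases, ref_controls]]
    odds_ratio, p_value = fisher_exact(table)
    print(f"{genotype} vs NE/NE: OR = {odds_ratio:.2f}, p = {p_value:.3g}")
```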
In Amerindian populations, a different picture emerges. In both the Xavante and the Terena populations, the FS susceptibility HLA-DRB1 allele is 04:04, while DRB1*08:02 is associated with relative protection from the disease. An additional association with DQB1*03:02 was seen in the Terena, which agrees with its high LD with DRB1*04:04 (Cerna et al., 1993;Moraes et al., 1997).
Most differences between Brazilians and Tunisians for PF-associated HLA class II alleles and haplotypes may be due to differing allele/haplotype frequencies, sample sizes, and statistical power. However, some may have a biological cause. The allele group HLA-DRB1*03 is not associated with FS, for which the two DRB1*03-bearing haplotypes have opposing effects on susceptibility (Table S2). So, allele DRB1*03:01, associated with lower FS risk in Brazil has been suggested to increase susceptibility to PF in Tunisia. The DQA1-DQB1 haplotypes associated with DRB1*03 alleles are the same in Brazil and Tunisia (Gonzalez-Galarza et al., 2018) and therefore cannot be the cause of the observed difference. Haplotypes that are common also in Tunisia but seemingly have no effect on PF susceptibility in that population are DRB1*07:01-DQA1*02:01-DQB1*02:01 or 03:01 and DRB1*01-DQA1*01:01-DQB1*05:01, respectively associated with low and high risk of FS in Brazil. The discrepancy between the Tunisian and Brazilian populations may at least partially stem from different environmental triggering factors, or from differential LD with other not analyzed relevant MHC genes.
For sporadic PF in China, associations with HLA class II alleles and haplotypes have also been reported (Sun et al., 2019;Zhang et al., 2019) (Table S2). Most differences between sporadic PF in these populations and endemic PF in Brazil and Tunisia probably result from the distinct allele frequencies and statistical power.
Other MHC genes
The MHC is a region of about 4 Mb at the cytogenetic location 6p21.33 that contains numerous genes distributed over three regions: class I, class III and class II. The MHC gene set includes classical and non-classical HLA class I and II genes and many additional genes whose products perform immune-related and unrelated functions.
Figure legend (HLA-DRB1 genotype model): Pavoni et al. (2003) observed that SU/SU and SU/NE were susceptibility genotypes (a), while the PR/PR and PR/NE genotypes were highly protective (b); conversely, the PR/SU genotype resulted in a neutral phenotype, similar to that of NE/NE. According to the model shown, the partial inhibition of the autoaggressive response by conventional effector T cells in individuals with a SU/PR genotype is provided by the anti-inflammatory response of regulatory T cells (model based on Ooi et al., 2017). APC: antigen-presenting cell, TCR: T cell receptor, T conv: conventional effector T cell, T reg: regulatory T cell.
HLA-E is expressed on the surface of virtually every normal cell and plays a dual immunoregulatory role in innate and adaptive immune responses. It may present pathogenderived sequences, which elicit specific T lymphocyte responses, but the best-known function of HLA-E is the modulation of NK cell responses. HLA-E binds peptides derived from the signal sequence of HLA Ia molecules, mediating inhibitory signals via the CD94-NKG2A receptor, or activating signals via the NKG2C receptor when the HLA-G leader peptide is bound to HLA-E. So, it indirectly signals HLA class I expression, protecting healthy cells against lysis by NK cells, or allowing lysis of infected cells by NK cells when HLA Ia expression is abnormally low or absent, and HLA-G is upregulated (Lauterbach et al., 2015). Thus, HLA-E mediates self/non-self discrimination by NK cells, and this balance may be disturbed in pemphigus and other ADs.
In a case-control study of North American subjects, the frequency of homozygous HLA-E 01:03/01:03 individuals was significantly increased among PV patients. The data indicate that this association did not result from LD with PV-associated HLA-DRB1 and -DQB1 alleles (Bhanusali et al., 2013). The E*01:03 allele may increase susceptibility to other ADs as well. In a case-control study of rheumatoid arthritis (RA) in Poland, females (but not males) with the E*01:03 allele were at higher risk, and 01:01/01:01 homozygotes were at lower disease risk. Also, patients bearing the 01:01/01:01 genotype achieved a significantly better outcome of anti-TNF treatment than patients with the E*01:03 allele (Iwaszko et al., 2015).
The HLA-G molecule plays an immunoregulatory, tolerogenic role and interacts with several immune cells, through the CD8, LILRB1 and LILRB2, and KIR2DL4 receptors. HLA-G presents four membrane-bound and three soluble isoforms, restricted tissue expression, and limited nucleotide variability in the coding region, but high variability in the promoter and 3' UTR, which may influence HLA-G levels (Donadi et al., 2011).
A significant increase of the HLA-G 14-bp deletion allele was observed in Jewish PV patients (Gazit et al., 2004). This indel polymorphism rs371194629 is in exon eight that specifies the 3' UTR region of the mRNA and has been implicated in posttranscriptional gene regulation and alternative splicing. In general, the 14-bp deletion allele has been associated with higher production of HLA-G, an effect that might be due to other polymorphisms in LD with rs371194629 (Donadi et al., 2011). The same and other HLA-G polymorphisms were associated with various diseases, including autoimmune disorders (Donadi et al., 2011).
Altered expression of HLA-G and an imbalance of its isoforms were observed in epidermal cells of PV patients, suggesting that HLA-G may act to diminish the deleterious effects of disease-promoting T lymphocytes or contribute to the homeostatic balance of the skin at the end of inflammation (Yari et al., 2008).
· Heat shock proteins of the HSP70 family One of the first recognized functions of heat shock proteins (HSPs) is to chaperone other proteins, and most of them are upregulated during stressful conditions. Moreover, extracellular HSPs participate in the induction of cellular immune responses since they are involved in the antigen processing and presentation (de Jong et al., 2014). HSP70s are one of the most abundant sources of HLA class II ligands. Natural autoantibodies to HSP70s are common, and epitopes of HSP70s are recognized by Treg cells. However, exacerbated effector responses to HSP70s are associated with ADs. These findings demonstrate a complex relationship between autoimmunity and AD: natural autoimmunity to HSP70 is associated with health, whereas altered autoimmunity to HSP70 is related to disease. In this way, HSP70 could be essential autoantigens in balancing the healthy immune system (de Jong et al., 2014).
Three HSP70 family genes - HSPA1L, HSPA1A, and HSPA1B (often called HSP70-HOM, HSP70-1 and HSP70-2, respectively) - are located in the MHC class III region. In a case-control and family study of PF in Tunisia with three tagging SNPs, increased frequencies of HSPA1L rs2227956 C>T (Thr493Met) allele T, HSPA1A rs1043618 G>C (a 5' UTR SNP) genotype C/C, and HSPA1B rs1061581 G>A (a synonymous variant) genotype G/G were observed among the patients in comparison to the control group. However, the significant LD between the HSP70 SNPs and the HLA class II alleles, together with the results of the multivariate regression analysis, could argue against a direct role of the HSP70 polymorphisms in susceptibility to PF (Toumi et al., 2015).
· Conflicting results for association between pemphigus and the transporter associated with antigen processing (TAP) genes.
TAP is a heterodimeric membrane molecule of the endoplasmic reticulum (ER) required for the transportation of peptides generated by the proteasome from the cytosol to the ER lumen, where they are loaded onto the HLA class I molecules. The TAP1 and TAP2 genes are located between the HLA-DQ and HLA-DP genes in the MHC. In a Japanese sample, the allele, haplotype and amino acid residue frequencies at each dimorphic site did not differ between PV and PF patients and controls, nor between patients grouped according to anti-DSG autoantibody profiles (Niizeki et al., 2004). However, in Israeli Jews, significant differences between PV patients and controls were detected in TAP2 polymorphic amino acid residue frequencies (Slomov et al., 2005).
Variants of the MHC2TA (CIITA) gene indicate that the quantitative variation of MHC class II molecules also influences susceptibility
The MHC2TA (also known as CIITA or C2TA) molecule is the master regulator of constitutive and IFNg-induced expression of HLA class II genes in antigen-presenting cells. Mutations in the MHC2TA gene (cytogenetic location 16p13.13) are responsible for the bare lymphocyte syndrome (BLS), type II, complementation group A (OMIM #209920), a severe immunodeficiency in which patients fail to produce HLA class II molecules. Like several other immunodeficiencies, BLS also is often associated with autoimmune disorders. Patients have decreased numbers of Treg cells and fail to counterselect autoreactive mature naive B lymphocytes, suggesting that peripheral B cell tolerance also depends on HLA class II - T cell receptor (TCR) interactions (Hervé et al., 2007). Less detrimental variants of MHC2TA may have an impact on susceptibility to multifactorial diseases, notably HLA-associated diseases.
In a case-control study of FS, two SNPs were selected for association analysis. While the missense rs4774 (Gly500Ala) SNP in the NACHT domain was not associated with FS, the G allele of rs3087456 in the promoter region was significantly associated with increased susceptibility in both the homozygous G/G and the heterozygous G/A states (Piovezan and Petzl-Erler, 2013). Additionally, a strong additive interaction between MHC2TA and HLA-DRB1 genotypes in FS disease susceptibility was observed: The odds ratio for individuals having two susceptibility HLA-DRB1 alleles was 14.1 in the presence of the susceptibility MHC2TA rs3087456 G allele, but much lower (2.2) in the presence of the protective MHC2TA A/A genotype. Based on these results, the hypothesis that genetically controlled levels of MHC2TA result in differential expression of the susceptibility and protective HLA class II molecules was raised. Thus, the quantitative variation of HLA molecules, in addition to their structural variation resulting from polymorphism of the coding regions, influences the risk of an individual developing pemphigus (Piovezan and Petzl-Erler, 2013). The same polymorphism also was associated with increased susceptibility to multiple sclerosis (MS), RA, and myocardial infarction (Swanberg et al., 2005).
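A minimal sketch of how such a gene-gene interaction can be examined by stratification is given below; the 2x2 counts are hypothetical (chosen only so that the resulting odds ratios are of roughly the reported magnitude) and do not reproduce the published analysis.

```python
# Minimal sketch of a stratified odds-ratio check for a gene-gene interaction,
# in the spirit of the MHC2TA x HLA-DRB1 analysis described above. The counts
# are hypothetical and only illustrate the calculation.

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio for carrying two susceptibility HLA-DRB1 alleles vs. not."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Stratum -> (cases SU/SU, controls SU/SU, cases other, controls other)
strata = {
    "MHC2TA G carriers": (60, 8, 40, 75),
    "MHC2TA A/A":        (12, 10, 50, 90),
}

for stratum, (a, b, c, d) in strata.items():
    print(f"{stratum}: OR for two susceptibility DRB1 alleles = {odds_ratio(a, b, c, d):.1f}")
# A much larger OR in one stratum than in the other, as reported in the study,
# is what suggests an interaction between the two loci.
```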
MHC2TA has four promoters that control its expression in different cell types. The rs3087456 polymorphism can affect promoter III functionality that is responsible for the constitutive expression of MHC2TA in B lymphocytes, which are crucial for pemphigus autoimmunity. When leukocytes were stimulated ex vivo with IFNg, lower expression of both the mRNA and the protein was seen for genotype G/G in comparison to genotypes A/A and A/G (Swanberg et al., 2005).
Do polymorphisms of the desmoglein 1 and desmoglein 3 genes have an impact on pemphigus pathogenesis?
Four DSG genes are closely linked in chromosome 18, at the cytogenetic position 18q12.1. DSG1 and DSG3 encode the major autoantigens in PF and PV, respectively. Both genes are polymorphic. Several rare pathogenic variants of DSG1 result in autosomal dominant monogenic diseases palmoplantar keratoderma I (OMIM #148700), and congenital erythroderma with palmoplantar keratoderma, hypotrichosis, and hyper-IgE (OMIM #615508). Conversely, numerous benign SNPs that could play a role in susceptibility to polygenic disease occur in DSG1 and DSG3.
The question of whether genetic variants of DSG1 could play a role in PF was addressed in studies of French, Tunisian and Brazilian populations. Two polymorphic markers were analyzed. A haplotype comprising five missense variants in LD resulting from SNPs rs8091003, rs8091117, rs16961689, rs61730306, rs34302455 and corresponding to the extracellular domains EC4 and EC5 was not associated with PF, indicating that the structure of this portion of the molecule does not impact PF susceptibility (Martel et al., 2001). However, allele C of the synonymous rs12967407 SNP at exon 7 (809T>C) was significantly more frequent in French and Tunisian PF patients than in the respective controls, especially in the homozygous C/C state (Martel et al., 2001;Ayed et al., 2002). Additionally, interaction between DSG1 and HLA variants in PF susceptibility was observed by Martel et al. (2002). In FS, the frequency of genotype C/C also was increased in the patient sample, but the association was not significant (p = 0.079; Petzl-Erler and Malheiros, 2005). In that context, the unusually extended and strong LD between rs12967407 and more than 100 polymorphisms, including SNPs in regulatory regions (Ensembl, Cunningham et al., 2019), is relevant and should be explored in future studies.
Significant associations between DSG3 variants and PV have been reported. Two related haplotypes were associated with PV in the British and Indian populations (Capon et al., 2006). In a follow-up study of the British sample, additional variants were examined, and the authors concluded that the association signal detected was due to other, regulatory SNPs rather than the previously examined coding SNPs.
The leukocyte receptor complex (LRC) on chromosome 19q13.42 comprises many genes for immunoglobulin-like cell surface receptors (Barrow and Trowsdale, 2008). It includes genes for the killer immunoglobulin-like receptors (KIRs) and the leukocyte Ig-like receptors (LILRs). The principal known ligands for both KIRs and LILRs are HLA class I molecules. Other Ig-family genes in the LRC are LAIR1 and LAIR2 (leukocyte-associated Ig-like receptors-1 and -2), natural cytotoxicity triggering receptor 1 (NCR1, also named NKp46 or LY94), receptor for the Fc fragment of IgA (FCAR or CD89), and platelet glycoprotein VI (GP6), whose ligands are as diverse as immunoglobulins, viral hemagglutinins, and collagens. The LRC also contains NLR family members (NLRP or NALP, NLR family, pyrin domain-containing) that localize inside the cell and contribute to the activation of proinflammatory caspases via their participation in multiprotein complexes called inflammasomes. The impact of the LRC on complex disease susceptibility has been poorly explored, despite its evident importance in inflammation and immunity.
A genome-wide expression profiling with approximately 55,000 probes revealed that several genes in 19q13 were differentially expressed in CD4+ lymphocytes when comparing FS patients and controls, as well as between different FS clinical forms (Malheiros et al., 2014). Motivated by this result, the whole 1.5 Mb LRC was recently screened in a case-control study using genotype data of 527 tag SNPs, of which three were associated with differential susceptibility to FS. The intergenic SNP rs465169 is in a region that regulates several immune-related genes, including VSTM1, LAIR1, LILRA3-6, LILRB2, NLRP12, and LENG8. Increased risk was associated with its minor A allele. The LENG8 rs35336528 and FCAR rs1865097 SNPs and four haplotypes with SNPs within KIR3DL2/3, LAIR2, and LILRB1 were also associated with FS.
The killer cell immunoglobulin-like receptors (KIR) and their HLA ligands modulate susceptibility to FS
Natural killer (NK) cells belong to the family of innate lymphoid cells and are major players in innate immune responses, and they also modulate adaptive immune responses. Various reports suggested a correlation of NK cell number and functional alterations with PV and other autoimmune conditions (Takahashi et al., 2007;Gianchecchi et al., 2018). NK cells express numerous receptors, including KIR, which are also expressed in a subpopulation of cytolytic T lymphocytes.
There are inhibitory and activating KIR. The ligands for most activating KIR are unknown, but most inhibitory KIR bind HLA class I molecules. Cells with abnormally low classical HLA class I expression may escape recognition by cytotoxic CD8 T lymphocytes, but this renders these cells sensitive to NK-mediated killing. Hence, the cytotoxic response of NK cells occurs when activating signals predominate over inhibitory signals delivered by KIR-HLA interactions (Kulkarni et al., 2008).
The genomic KIR region in the LRC is multigenic, but the number of KIR genes (gene content) varies widely, from 4 to 20 among KIR haplotypes. Each of these KIR genes presents multiple alleles. This normal variation does influence complex diseases and reproduction.
For FS, a protective association with activating KIR genes was observed in a study of KIR gene content polymorphism (Augusto et al., 2012). The presence of more than three activating genes apparently lowers the risk of FS significantly, and the strongest protective effect was found for higher activating/inhibitory KIR ratios. Furthermore, the presence of both the activating KIR3DS1 gene and its HLA-Bw4 ligand was protective. This contrasts with other ADs, where activating KIR genes have been commonly reported to increase the risk. On the contrary, for infectious diseases reduced susceptibility is associated with activating KIR (Kulkarni et al., 2008). The authors hypothesized that this unusual association for a disease with autoimmune features might be related to the environmental trigger of FS. Possibly a viral or a salivary protein inoculated by a hematophagous insect initiates the pathogenic process. Thus, a more effective immune response against the initial triggering factor, with the participation of activating KIR, may prevent the early events that initiate the pathogenic process (Augusto et al., 2012).
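A minimal sketch of the gene-content summary used in such analyses (counting the activating and inhibitory KIR genes present in a genotype and taking their ratio) is shown below; the genotype is illustrative only.

```python
# Minimal sketch of a KIR gene-content summary: counting activating and
# inhibitory KIR genes present in an individual's genotype and taking their
# ratio. The genotype below is illustrative only.

ACTIVATING = {"KIR2DS1", "KIR2DS2", "KIR2DS3", "KIR2DS4", "KIR2DS5", "KIR3DS1"}
INHIBITORY = {"KIR2DL1", "KIR2DL2", "KIR2DL3", "KIR2DL5", "KIR3DL1", "KIR3DL2", "KIR3DL3"}

# Hypothetical gene content of one individual
genotype = {"KIR3DL1", "KIR3DL2", "KIR3DL3", "KIR2DL1", "KIR2DL3",
            "KIR2DS1", "KIR2DS4", "KIR3DS1"}

n_act = len(genotype & ACTIVATING)
n_inh = len(genotype & INHIBITORY)
print(f"activating = {n_act}, inhibitory = {n_inh}, ratio = {n_act / n_inh:.2f}")
```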
In a subsequent investigation of KIR3DL2 alleles in FS, increased susceptibility was associated with allele KIR3DL2*001 in an allele-dose and ligand-dependent manner: the risk was increased almost fourfold for KIR3DL2*001/001 homozygotes, and for the presence of KIR3DL2*001 together with at least one copy of the KIR3DL2 ligands HLA-A3 or HLA-A11. Moreover, a lower percentage of KIR3DL2-positive NK cells and lower expression of KIR3DL2 at the cell surface was seen for variant T (376Met) of SNP rs3745902 (1190C>T, Thr376Met). Amino acid 376 is in the cytoplasmic tail of the receptor and 376Met lowers the risk of FS. Because KIR3DL2 is an inhibitory receptor, lower susceptibility to FS may be due to decreased inhibitory signals within NK cells. These results are in line with the gene content analysis discussed above (Augusto et al., 2012).
LAIR1 and LAIR2 gene variants are involved in gene expression and susceptibility to pemphigus foliaceus
The leukocyte-associated immunoglobulin-like receptor 1 (LAIR-1, or CD305) is a collagen-binding inhibitory receptor necessary for the regulation of immune responses, expressed on most peripheral blood mononuclear cells (PBMC). The complement component C1q and collagen XVII are among the ligands of LAIR-1. LAIR-1 ligand engagement and crosslinking suppresses the function and/or differentiation of NK cells, T and B lymphocytes, dendritic cells and its precursors, and monocytes. The principal source of its secreted homolog LAIR-2 (or CD306) are T CD4+ lymphocytes. LAIR-2 functions as a natural competitor of LAIR-1 by binding the same ligands, thus restraining the inhibitory potential of LAIR-1 (Meyaard, 2008). Altered protein levels of LAIR-1 and LAIR-2 have been associated with autoimmune and inflammatory disorders, such as systemic lupus erythematosus (SLE), RA and autoimmune thyroid diseases (ATD) (see Camargo et al., 2016).
In a study of genome-wide mRNA levels in FS, both the LAIR1 and the C1QA (that codes for the C1q ligand) mRNA levels were increased in CD4+ T lymphocytes of patients with disseminated (generalized) FS in comparison to unaffected controls (Malheiros et al., 2014).
Two of six analyzed LAIR1 tag SNPs (rs56802430 allele G and rs11084332 allele C) were respectively associated with increased and decreased susceptibility to FS, and one of eight LAIR2 tag SNPs (rs2287828 allele T) was associated with increased susceptibility in a case-control analysis for FS (Camargo et al., 2016). Furthermore, a 4- to 5-fold increased susceptibility was seen for a haplotype of four LAIR2 SNPs that are not in LD with each other (r² ≤ 0.08; rs2042287, rs2287828, rs2277974, and rs114834145; haplotype G-T-C-A). Alleles of four of the LAIR1 SNPs mark increased mRNA expression: rs3826753 G, rs74463408 C, rs3745444 T, rs56802430 G; however, no link between LAIR1 expression and the disease was observed, leading to the conclusion that the effect of LAIR1 polymorphisms on FS susceptibility is not a consequence of variable gene expression. Conversely, the same LAIR2 G-T-C-A haplotype is associated with both FS and 4.5-fold higher LAIR2 mRNA levels. The authors suggested that higher levels of the LAIR-2 protein are detrimental in FS by antagonizing LAIR-1 function and exacerbating immune responses (Camargo et al., 2016). Noteworthy, most LAIR1 and LAIR2 SNPs associated with FS, or in high LD with them, are in regions that present pre- or post-transcriptional regulatory features, such as chromatin modifications, regulatory RNA binding, or RNA splicing (Camargo et al., 2016).
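Because pairwise LD statistics (D, D' and r²) are invoked repeatedly in this section, a minimal sketch of their computation from haplotype and allele frequencies is shown below; the frequencies are hypothetical and the code is only meant to make the definitions explicit.

```python
# Minimal sketch of the pairwise linkage-disequilibrium statistics (D, D', r^2)
# referred to throughout this section. Frequencies below are hypothetical.

def ld_stats(p_ab, p_a, p_b):
    """D, D' and r^2 for two biallelic SNPs.

    p_ab: frequency of the haplotype carrying allele A at SNP1 and allele B at SNP2
    p_a, p_b: allele frequencies of A and B
    """
    d = p_ab - p_a * p_b                                  # D = P(AB) - P(A)P(B)
    if d > 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max if d_max else 0.0
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))     # r^2 = D^2 / (pA qA pB qB)
    return d, d_prime, r2

# Example: two SNPs whose alleles are only weakly correlated
d, d_prime, r2 = ld_stats(p_ab=0.10, p_a=0.30, p_b=0.25)
print(f"D = {d:.3f}, D' = {d_prime:.2f}, r^2 = {r2:.3f}")
```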
A regulatory 3' UTR polymorphism of KLRG1 influences susceptibility to pemphigus foliaceus
The killer cell lectin-like receptor subfamily G member 1 protein (KLRG1, alternatively MAFA, MAFAL or CLEC15A) is an inhibitory receptor expressed mainly on the surface of NK cells, and of CD4+ and CD8+ αβ T lymphocytes with an effector or effector-memory phenotype. In addition to the inhibitory KIRs that regulate NK cell function via binding of HLA class I molecules on target cells (see above), NK cells also have inhibitory receptors specific for non-HLA ligands. KLRG1 monitors the expression of E-, N- and R-cadherins on target cells, mediating missing-self recognition by binding to a highly conserved site on these classical cadherins.
The KLRG1 gene is in the NK cell complex (NKC) in the chromosomal region 12p13. In an analysis of candidate SNPs chosen because of their putative ability to disrupt or create microRNA binding sites, increased FS susceptibility was seen for the A/G genotype of KLRG1 rs1805672 compared with the A/A genotype. The KLRG1 rs1805672 G allele disrupts a miR-584-5p binding site in the 3' UTR of KLRG1; accordingly, KLRG1 mRNA levels were significantly higher in PBMC of G-positive individuals in comparison to individuals with genotype A/A. Functional analyses indicated that allele G directly interferes with miR-584-5p binding, allowing for KLRG1 mRNA (and possibly protein) accumulation, which in turn may contribute to the pathogenesis of FS (Cipolla et al., 2016). Interestingly, autoantibodies against the KLRG1 ligand E-cadherin (CDH1) were detected in sera of about half of PF patients and healthy subjects of an endemic area in Brazil, but not in healthy individuals from the USA (Flores et al., 2012). It remains to be tested whether a relationship between increased KLRG1 levels, KLRG1-CDH1 binding, and anti-CDH1 autoantibodies exists in pemphigus.
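The reasoning behind the rs1805672 interpretation can be illustrated with a simple seed-match check, sketched below; the miRNA seed and 3' UTR sequences are placeholders, not the real miR-584-5p seed or KLRG1 sequences.

```python
# Minimal sketch of a seed-match check like the one underlying the KLRG1
# rs1805672 / miR-584-5p interpretation. The sequences below are placeholders,
# NOT the real miR-584-5p seed or the real KLRG1 3' UTR.

def revcomp(seq):
    """Reverse complement of an RNA sequence."""
    return seq.translate(str.maketrans("ACGU", "UGCA"))[::-1]

def has_seed_match(utr_rna, mirna_seed):
    """True if the UTR contains a perfect target site for the given seed."""
    target_site = revcomp(mirna_seed)
    return target_site in utr_rna

mirna_seed = "AGCGAU"                       # hypothetical 6-mer seed
utr_allele_a = "CCAUUAUCGCUUGGA"            # hypothetical UTR carrying the A allele
utr_allele_g = "CCAUUGUCGCUUGGA"            # same UTR with the variant base

for name, utr in [("allele A", utr_allele_a), ("allele G", utr_allele_g)]:
    print(name, "seed match:", has_seed_match(utr, mirna_seed))
# In this toy example the variant base removes the perfect seed match, which is
# the kind of in silico prediction that motivates the functional follow-up.
```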
Genetic variants of some cytokines and cytokine receptors have an impact on pemphigus susceptibility
Cytokines are involved in immune responses and the regulation of numerous other biological processes. Many cytokine genes are polymorphic, and that diversity may have a functional impact, reflecting on susceptibility to complex diseases.
Interleukin 6 (IL-6) is an inducer of the acute phase response and acts on immune and non-immune cells. It is involved in monocyte and lymphocyte differentiation and is required for the generation of Th17 lymphocytes. Also, IL-6 plays an essential role in the terminal differentiation of B lymphocytes into immunoglobulin-secreting cells.
A significant association with the IL6 rs1800795 -174G>C polymorphism was found for FS, indicating that the C/C genotype has a protective effect, while G in the homozygous or heterozygous state is associated with increased susceptibility (Pereira et al., 2004). The rs1800795 SNP is within the gene promoter. The C allele is associated with lower plasma levels and lower in vitro expression in comparison to the G allele (Fishman et al., 1998;Rivera-Chavez et al., 2003). Increased levels of IL-6 have been correlated with inflammatory and AD susceptibility, activity or more severe clinical symptoms (Mihara et al., 2012).
The rs2243250 SNP (also known as -590C>T or -589C>T) in the promoter of the IL4 gene was investigated for PF susceptibility in Brazil and Tunisia (Pereira et al., 2004;Toumi et al., 2013). In both studies, the T/T genotype was increased in the patient samples in comparison to the control samples. Interestingly, the T/T genotype expresses higher mean serum levels of IL-4 compared to the C/T and C/C genotypes (Toumi et al., 2013). Higher IL-4 levels might be contributing to the polarization of autoreactive Th lymphocytes towards the Th2 pathway, inducing proliferation of autoreactive B lymphocytes and facilitating immunoglobulin class switching to IgG4 that is pathogenic in pemphigus. The IL4R gene (also known as IL4RA, in 16p12.1) encodes the IL-4Ra chain of the heterodimeric receptors for IL-4 and IL-13. Toumi et al. (2013) observed a positive association of PF in Tunisia with the T;A-C-A combination for rs2243250 (of IL4), and rs4787948-rs3024622-rs3024530 (of IL4R), raising the hypothesis that genetic variation of IL-4 and IL-4Ra interact and play a central role in the regulation of pathogenic IgG4 antibody production or the clinical course of the disease. In contrast, polymorphisms of the IL13 gene and IL13RA2 (that encodes one IL-13 receptor chain; located in Xq24) were not associated with the disease (Toumi et al., 2013).
Pemphigus has been considered a Th2 disease. However, support for the involvement of the IL23/Th17 pathway in the pathogenesis of pemphigus has been found. In Tunisian PF, a higher frequency of circulating Th17 cells was observed in patients' blood compared to controls. Eleven tag SNPs in the IL23/Th17 axis genes IL23R (interleukin 23 receptor), IL17A (interleukin 17A), IL17F (interleukin 17F), IL17RA (interleukin 17 receptor A), RORC (RORgt), TNF (tumor necrosis factor) and STAT3 (signal transducer and activator of transcription 3) were selected. The IL23R rs11209026 G/G genotype, the IL17A rs3748067 C/C genotype, the IL17F rs763780 C allele, and the TNF -308G>A rs1800629 A allele (in both the A/A and A/G genotypes) were associated with increased susceptibility (Ben Jmaa et al., 2018).
The favored hypothesis about mechanisms underlying the associations of pemphigus with cytokine polymorphisms is that individuals with different genotypes for regulatory polymorphisms express different cytokine levels that may impact pathogenesis. The disease-associated IL6, IL4, IL4R and TNF SNPs cited above are eQTL (expression quantitative trait loci, which influence the transcription level of one or more genes) according to the GTEx Portal (Carithers et al., 2015). Altered levels of cytokines were observed in the sera and lesional skin of pemphigus patients and possibly play a role in pathogenesis and disease severity (Ameglio et al., 1999;Zeoti et al., 2000;Timóteo et al., 2017a,b;Ben Jmaa et al., 2018). Moreover, the altered cytokine levels that occur in ADs are among the causes of the wide variation in responsiveness to glucocorticoid therapy. Augmented production of inflammatory cytokines may downregulate glucocorticoid receptor expression, resulting in a diminished or absent response to treatment (Yang et al., 2012). Decreased glucocorticoid sensitivity associated with higher levels of IL-6 and TNFa was seen in vitro for PBMC from pemphigus patients (Chriguer et al., 2012). Genetic variants and the expression level of the BAFF cytokine have also been investigated in pemphigus and will be discussed in the following topic.
The B lymphocyte co-stimulators CD40, CD40LG, BAFF, and CD19
A role for CD40, CD40LG and BAFF polymorphisms in pemphigus finds support in studies of protein or mRNA levels in PF and PV, and in the effects of these molecules and their genetic variants in homeostasis, in inflammation, and ADs.
CD40 (TNFRSF5) is a co-stimulatory molecule at the surface of a variety of cells like B lymphocytes, macrophages, and dendritic cells. In the skin, Langerhans cells and keratinocytes constitutively express CD40. Its ligand CD40LG (also known as CD40L TNFSF5, CD154, TRAP) is expressed at the surface of activated but not resting CD4+ T lymphocytes, and other hematopoietic and nonhematopoietic cells.
CD40/CD40LG interaction induces intracellular signals and expression of surface and secreted molecules required for antibody- and cell-mediated adaptive immune responses. The CD40/CD40LG interactions also are essential for peripheral B lymphocyte tolerance. Lack of functional CD40LG or CD40 results in the monogenic immunodeficiency syndromes called hyper IgM syndrome type 1 (X-linked, OMIM #308230) and type 3 (autosomal recessive, OMIM #606843), respectively. Patients present normal or elevated serum IgM levels associated with markedly decreased IgG, IgA, and IgE, and reduced Treg frequency, as well as impaired immunoglobulin somatic hypermutation, class switch recombination, and repertoire selection. The patients are susceptible to recurrent or opportunistic infections and autoimmune manifestations.
The CD40LG gene is located at Xq26.3. The risk of FS is increased by homozygosity (in women) or hemizygosity (in men) for the major T allele of SNP rs3092945 (-726T>C). No association was seen for rs56074249, a 3' UTR (CA) short tandem repeat (STR, or microsatellite) (Malheiros and Petzl-Erler, 2009).
The CD40LG rs3092945 SNP has not been considered as a marker in other studies of ADs, because it is absent or very rare all over the world, except for sub-Saharan African and some admixed populations of South and North America. However, associations with other polymorphisms of the CD40LG gene, or close to it, were seen for ADs such as celiac disease, ulcerative colitis, and Crohn's disease (Li et al., 2015).
The CD40 gene is at the cytogenetic location 20q13.12. The 5' UTR polymorphism -1C>T (rs1883832) was analyzed in FS. This SNP resides in the Kozak sequence that includes the translation initiation codon (AUG) and the surrounding nucleotides and is important for ribosome binding to the mRNA. The rs1883832 T allele was significantly associated with decreased susceptibility to FS, consistent with a dominant or additive protective effect. Accordingly, the C/C genotype was associated with increased susceptibility to FS (Malheiros and Petzl-Erler, 2009).
Involvement of CD40/CD40LG levels was observed in the pathogenesis of pemphigus. Upregulation of both the receptor and the ligand has been reported in lesional skin and the serum of patients with active PV and PF. Numerous CD40LG+ cells and CD40LG mRNA copies were seen in lesional specimens compared to controls, and immunostaining for CD40 was intense both in the dermis and in keratinocytes. Additionally, patients' sera contained high levels of sCD40LG that is mainly secreted by activated T lymphocytes (Caproni et al., 2007).
In FS patients, there is an increased number of dendritic cells in lesional skin, and this correlates with serum autoantibody titers (Chiossi et al., 2004). It has been shown that the CD40 rs1883832 C allele increases the translational efficiency of nascent mRNA, resulting in 15% to 32% more CD40 protein than that seen for the T allele (Jacobson et al., 2005). Altogether, these findings support the hypothesis that higher levels of CD40 in individuals with the rs1883832 C allele may contribute to the pathogenesis of FS.
Variable susceptibility to ATD, SLE and RA also is associated with CD40 polymorphisms, especially in Europeans and specifically with the intronic SNP rs4810485, which is a proxy for rs1883832 (r² = 1 in all analyzed non-African populations; 1000 Genomes via LDlink) (Lee et al., 2015a,b).
The BAFF (TNFSF13B) gene maps to 17p13.1. The B cell activating factor (BAFF, also known as BLYS, TNFSF13B, TALL1, THANK) is predominantly produced by myeloid cells, but regulated expression by many different hematopoietic and non-hematopoietic cell types has been described (Vincent et al., 2013). BAFF is initially expressed as a membrane-bound trimer, which is proteolytically cleaved and released in a soluble form. Among its multiple effects, BAFF is a critical regulator of B lymphocyte differentiation, maturation, and survival. It is also involved in the immunoglobulin switch from IgM to IgG, IgE, and IgA. The homologous proliferation-inducing ligand (APRIL, TNFSF13A or TALL2) also has multiple effects in B lymphocyte biology; however, a possible impact of its genetic variants in pemphigus has not yet been published.
For FS, a weak protective association was found with the T allele of the rs9514828 SNP (-871C>T, upstream of the BAFF gene transcription initiation site) (Malheiros and Petzl-Erler, 2009). This SNP is in the binding site of transcription factor MZF1 and may change its binding affinity, resulting in altered levels of BAFF (Kawasaki et al., 2002). MZF1 was reported to be preferentially expressed in differentiating myeloid cells (Hromas et al., 1991). In a genomewide mRNA expression profile in FS, BAFF expression was significantly increased in CD4+ T lymphocytes of patients with active disease and decreased in patients under immunosuppressive treatment, both compared to healthy individuals, and also overexpressed in lesional skin compared to non-lesional skin of the same patients (Malheiros et al., 2014). However, the 3' UTR SNPs rs4145212, rs116898958, and rs185198828 that may alter the binding sites of microRNAs were not associated with FS (Cipolla et al., 2016).
Remarkably, regarding susceptibility to FS, gene-gene interactions may occur between BAFF and both CD40 and CD40LG. So, the protective effects of CD40LG rs3092945 C and CD40 rs1883832 T alleles only manifest in BAFF rs9514828 T-positive individuals, and vice versa (Malheiros and Petzl-Erler, 2009). This is not unexpected given the functional interactions between CD40, CD40LG, and BAFF in health and disease, and the effect of the genetic variants on protein levels. Notwithstanding, additional studies are needed to validate the associations and to understand their causes.
Functional effects of BAFF genetic variation have also been reported for SLE and MS in Sardinia (Steri et al., 2017). Circulating BAFF levels are often elevated in patients with SLE and correlate with clinical disease activity. Elevated levels of BAFF were reported in the serum of RA and Sjögren's syndrome as well, but not PV (Asashima et al., 2006).
The B lymphocyte antigen CD19 is expressed by early pre-B cells from the time of heavy chain rearrangement until plasma cell differentiation, and by follicular dendritic cells. Antigen-induced B cell receptor signaling is modulated by a multimolecular complex on the membrane of B lymphocytes of which CD19 functions as the principal component. CD19 is required for optimal antibody responses and selection against inherent autoreactivity. The autosomal recessive common variable immunodeficiency 3 (CVID3, OMIM #613493) is caused by lack of functional CD19. Among other alterations of B lymphocyte immunity, selection against the autoreactive properties of immunoglobulins is defective in patients (van Zelm et al., 2014).
Recently, a unique CD19hi B lymphocyte population exhibiting activation and memory-like properties was detected in the periphery of pemphigus patients. Genes involved in B lymphocyte activation and differentiation were up-regulated in these B cells. A tight correlation between peripheral CD19hi B cells and total IgG/IgM levels was seen. These cells might contain B lymphocyte precursors for terminal differentiation and contribute to IgG/IgM production in ADs (Liu et al., 2017).
These observations motivated the search for a possible association of FS with CD19 variants. Two polymorphisms of the CD19 gene (mapped to 16p11.2), intron SNP IVS14 -30C>T and an STR at the 3' UTR were used as markers.
They had been previously associated with susceptibility to SLE in the Japanese population (Kuroki et al., 2002). For pemphigus, no significant differences between the patient and control samples were seen, suggesting that these polymorphisms do not play a crucial role in the inter-individual variation of susceptibility (Malheiros and Petzl-Erler, 2009). Given these observations and the scarcity of studies, it would be premature to conclude that genetic variation of CD19 is irrelevant for pemphigus.
Common genetic variants of the molecules involved in T lymphocyte activation and tolerance may influence susceptibility to pemphigus.
Adequate T lymphocyte response to antigen requires specific interaction of the peptide-HLA complex with the T cell receptor, as well as co-stimulatory and co-inhibitory signals that regulate activation, proliferation, and termination of the T cell response. The balance between positive and negative signals determines the outcome; hence, disruption of that balance may result in disease. These co-regulatory signals are provided by membrane-bound receptor-ligand pairs of which the most prominent are CD28/CTLA4:CD80/CD86, ICOS:ICOSL, and PD-1 (or PDCD1):PD-L1(CD274)/PD-L2, which are members of the immunoglobulin superfamily.
CD28-CD80/CD86 is the classical T lymphocytes costimulatory pathway. CTLA4 (or CD152) is an inhibitory receptor that can outcompete CD28, binding to CD80 and CD86 with higher affinity than CD28 and limiting T cell responses (Goronzy and Weyand, 2008). ICOS is an important co-stimulatory receptor, especially for Th2 effector cells. While CD28:CD80/CD86 interactions are critical for the initiation of an effective immune response, ICOS:ICOSL is required at later stages and predominates over CD28 for secondary immune responses (Coyle et al., 2000). ICOS is critical for humoral immune responses. The co-inhibitory molecules PD-L1 and PD-L2 interact with the PD-1 receptor to suppress responses by T lymphocytes (Keir et al., 2007).
Haploinsufficiency of, or impaired ligand binding to, CTLA4 result in a rare autosomal dominant immune dysregulation syndrome with incomplete penetrance named autoimmune lymphoproliferative syndrome type V (ALPS5 or CHAI; OMIM #616100). Common variable immunodeficiency 1 (CVID1; OMIM #607594) is an autosomal recessive disease due to mutations in ICOS.
For FS in Brazil, nineteen polymorphic markers were analyzed. For region 2q33.2, seven SNPs and three STRs were selected, ranging from the promoter region of the CD28 gene to the intergenic region between CTLA4 and ICOS. A protective effect of allele T of CTLA4 rs5742909 (-318C>T) was detected, while for CTLA4 rs733618 (-1722T>C) the C allele was associated with increased susceptibility. Another CTLA4 SNP, rs139105990, in a putative microRNA binding site was not associated with FS (Cipolla et al., 2016). For region 3q13.33, seven polymorphisms in the CD80 promoter and one missense SNP of the CD86 gene were analyzed. Significant associations were found for CTLA4 and CD86 SNPs and for the STR (Dalla-Costa et al., 2010).
The rs5742909 T allele marks higher promoter activity (Wang et al., 2002) and increased expression of CTLA4 (Ligers et al., 2001), which could lower the risk of ADs.
The rs733618 risk allele C might lead to altered alternative splicing and decreased expression and function of membrane-bound CTLA4, resulting in impaired inhibition of T lymphocyte activation, which might contribute to the development of AD, as suggested for myasthenia gravis (MG) (Wang et al., 2008).
In the sample of predominantly African ancestry, lower risk of the disease was associated with allele A of the CD86 rs1129055 (1057G>A, Ala304Thr) SNP, particularly in homozygosity. This allele may alter the intracellular signal transduction pathways controlled by the CD86 molecule on antigen presenting cells; however, the lack of association in the Euro-Brazilian sample argues against a direct effect of the rs1129055 polymorphism on susceptibility.
Analysis of a small sample of Polish PV (n = 40) and PF (n = 14) patients showed no statistically significant differences between the patients and the controls for the CTLA4 missense polymorphism rs231775 (+49A>G; Thr17Ala). For ICOS, carriers of the rs10932029 (IVS1+173T>C) allele C were more frequent in both patient samples than in the control sample (Narbutt et al., 2010).
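The association results summarized above come from allelic case-control comparisons, in which allele counts in patients and controls are contrasted and effect sizes are reported as odds ratios. The sketch below is a minimal illustration of this type of analysis; all counts are hypothetical and are not taken from any of the cited studies.

```python
# Minimal sketch of an allelic case-control association test.
# All counts are hypothetical and serve only to illustrate the calculation.
import math
from scipy.stats import chi2_contingency

# Allele counts: rows = cases / controls, columns = risk allele / other allele
cases = {"risk": 120, "other": 280}      # 400 chromosomes from 200 patients
controls = {"risk": 90, "other": 310}    # 400 chromosomes from 200 controls

table = [[cases["risk"], cases["other"]],
         [controls["risk"], controls["other"]]]

# Chi-square test of independence on the 2x2 allele-count table
chi2, p_value, dof, expected = chi2_contingency(table)

# Allelic odds ratio and 95% confidence interval (Woolf method)
odds_ratio = (cases["risk"] * controls["other"]) / (cases["other"] * controls["risk"])
se_log_or = math.sqrt(sum(1.0 / n for n in (cases["risk"], cases["other"],
                                            controls["risk"], controls["other"])))
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_value:.3g}")
```

An odds ratio above 1 with a confidence interval excluding 1 would point to a risk allele (as reported for CTLA4 rs733618 C), whereas an odds ratio below 1 would point to a protective allele (as reported for CTLA4 rs5742909 T).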
Genetics sheds light on the controversial relevance of the complement system in pemphigus
A primary message given by associations between complex diseases and (inherited) genetic polymorphisms is that the (mostly still unknown) mechanisms linking the genotype to disease susceptibility are causal in (rather than a consequence of) the pathogenic process.
The complement system (CS) consists of a large number of soluble and membrane-bound proteins and represents one of the major effector mechanisms of innate immunity against pathogens, and for removal of cellular debris and immune complexes. The role of complement in pemphigus has been a controversial issue, mainly because pathogenic antidesmoglein autoantibodies mostly belong to the IgG4 subclass that does not initiate the classical complement pathway, and because acantholysis in pemphigus does not require complement in vitro. However, the alternative and the lectin complement pathways are not initiated by antigen/antibody complexes. Moreover, the CS is emerging as a global regulator of immune responses and tissue homeostasis, beyond its well-known role in innate immunity (Hajishengallis et al., 2017, and others). Please refer to the recent article by Bumiller-Bini et al. (2018) for an appraisal of the debate about the role of complement in pemphigus.
Motivated by prior observations of altered expression of CD59 -the most essential MAC inhibitor -in several ADs, the hypothesis that SNPs in noncoding regions may regulate CD59 expression levels and participate in autoimmune pathogenic processes was tested in a recently published work (Salviano-Silva et al., 2017). Six intronic and 3' UTR polymorphisms that might affect alternative splicing of the primary RNA transcript or regulation of mRNA stability in the cytoplasm were analyzed in a case-control FS association study, and for a possible effect on transcript levels. Specific alleles and haplotypes influenced disease susceptibility as well as mRNA expression levels, especially in women. The risk haplotype G-G-C-C-A-A also marked higher mRNA expression. The authors concluded that higher CD59 transcriptional levels might increase susceptibility to FS (especially in women), possibly due to the role of CD59 in T lymphocyte signal transduction (Salviano-Silva et al., 2017). Association with rs1047581 was replicated in a subsequent study (Bumiller-Bini et al., 2018, see below).
The complement receptor 1 (CR1, or CD35) plays a major role in inhibiting the complement system, removing immune complexes, and activating B cells. The gene contains several functional polymorphisms that have been associated with different multifactorial diseases (please refer to Oliveira et al., 2019). In a study of FS, 11 CR1 SNPs were analyzed. Among these were the SNPs that define the Knops blood group system (York and McCoy antigens on erythrocytes). The haplotypes CR1*3B2B (York) and CR1*3A2A (with p.1208Arg) were associated with protection, and the CR1*1 haplotype (McCoy) with increased susceptibility. Furthermore, heterozygote rs12034383 A/G individuals presented higher CR1 mRNA levels than G/G homozygotes. The lowest soluble CR1 (sCR1) levels occurred in patients with active, more severe (generalized) disease, but treatment and remission resulted in an increase in median sCR1 levels. Thus, genetic variants of CR1 seem to modulate susceptibility to the disease, and higher sCR1 levels may have an anti-inflammatory effect in patients with FS (Oliveira et al., 2019).
A region in chromosome 9q33.2 that includes the complement component C5 gene and the TNF receptor-associated factor 1 gene (TRAF1) had been identified as a susceptibility and severity factor for diverse diseases, such as RA and SLE (Kurreeman et al., 2010). The SNP rs10818488 is an eQTL for different genes and could have a functional impact on C5 synthesis. Even so, this regulatory intergenic SNP was not associated with PF and PV in the Tunisian population (Mejri et al., 2009). In line with this result, C5 polymorphisms were also not associated with FS (Bumiller-Bini et al., 2018).
In a recent comprehensive study, 992 SNPs distributed within 44 CS genes were analyzed in a case-control study of FS (Bumiller-Bini et al., 2018). Evidence for association was seen with variants of 10 genes that encode most of the complement proteins previously detected in the skin or presenting altered serum levels in patients (Table 1): C3 (complement component 3), C5AR1 (complement component 5a receptor 1, the primary receptor for C5a anaphylatoxin), C8A (complement component 8, alpha subunit, a component of the membrane attack complex MAC), C9 (complement component 9 of the MAC), CD59 (MAC inhibitor), CFH (complement factor H, the major regulator of the alternative pathway), CR2 (complement receptor 2), ITGAM (integrin alpha-M or CR3, the alpha chain of a receptor for the iC3b fragment of C3), ITGAX (integrin alpha-X, or CR4, the alpha chain of a receptor for the iC3b fragment of C3), and MASP1 (mannan-binding lectin serine protease 1, an essential protein in the lectin pathway of complement) (Bumiller-Bini et al., 2018).
Epigenetic alterations of DNA and histones
In recent years, the involvement of epigenetic alterations in inflammatory and autoimmune diseases has been recognized and attracted much interest (Nielsen and Tost, 2013;Picascia et al., 2015;Zhang and Lu, 2018). However, the molecular mechanisms underpinning these epigenetic changes in diseases are still poorly understood. Most studies on the effect of epigenetic mechanisms on complex diseases have been restricted to evaluation of the DNA methylation pattern.
For pemphigus, there are no published studies of variants in genes that act on epigenetic mechanisms, except for a recent study of FS (Spadoni et al., 2020). A total of 566 polymorphisms in 63 genes that code for lysine methyltransferases (KMT), demethylases (KDM), DNA methyltransferases (DNMT) and ten-eleven translocation demethylases (TET) were considered in a case-control association study. Eleven SNPs in four genes were associated with FS: three SNPs in the histone lysine demethylase 4C gene KDM4C, and SNPs in the histone lysine methyltransferase genes SETD7/KMT7 (1 SNP), MECOM/KMT8E (5 SNPs), and PRDM16/KMT8F (2 SNPs). The results of the study indicate that dysregulated histone (de)methylation plays a major role in pemphigus pathogenesis.
Associations with variants of genes involved in regulated cell death pathways yield insight into the poorly understood cell death mechanism in pemphigus
Twelve regulated cell death (RCD) routes have been recognized: intrinsic apoptosis, extrinsic apoptosis, mitochondrial permeability transition (MPT)-driven necrosis, necroptosis, ferroptosis, pyroptosis, parthanatos, entotic, NETotic, lysosome-dependent, autophagy-dependent and immunogenic pathways (Galluzzi et al., 2018). To date, only apoptosis has been considered in pemphigus, with controversial results about its role in the loss of cell adhesion and cell death. Some authors stated that cell death occurs by apoptosis (Gniadecki et al., 1998; Rodrigues et al., 2009), while others argued that there is no clear evidence of the occurrence of such an event in pemphigus (Schmidt et al., 2009; Janse et al., 2014; Sokol et al., 2015).
Frequencies of 1,167 SNPs from genes encoding products of all the 12 well-established cell death cascades were compared between FS patients and healthy control individuals (Bumiller-Bini et al., 2019). Ten gene variants belonging to six cell death pathways differed significantly between these two population samples: necroptosis (TNF and TRAF2), apoptosis (TNF, CD36 and PAK2), pyroptosis (PRKN), immunogenic cell death (CD47, SIRPA and EIF2AK3), parthanatos (HK1) and necrosis (RAPGEF3). The genetic association profile for TNF, TRAF2, CD36, and PAK2 variants marks decreased susceptibility to FS, together with higher TNF and TRAF2 levels and lower CD36 levels. This profile may favor cell survival and inflammation instead of apoptosis/necroptosis. Conversely, higher susceptibility is marked by variants of CD47 and SIRPA of the immunogenic cell death pathway, proposed to lead to increased internalization of cell debris and antigen presentation, which may increase autoantibody production in FS.
Receptors for the Fc portion of immunoglobulin G
Low-affinity Fcγ receptors bind the Fc portion of polymeric IgG in antigen-antibody immune complexes. They are cell-surface receptors expressed by different immune cells and mediate inflammatory responses. IgG binding can either activate or inhibit downstream cellular responses depending on the presence of ITAM or ITIM motifs in the intracellular portion of the engaged Fcγ receptor. Dysregulation of Fcγ receptors is critical in diverse inflammatory diseases and ADs (see Recke et al., 2015).
Five closely linked paralogous genes at the cytogenetic location 1q23.3, FCGR2A, FCGR2B, FCGR2C, FCGR3A, and FCGR3B, encode the low-affinity receptors FcγRIIa, FcγRIIb, FcγRIIc, FcγRIIIa, and FcγRIIIb, respectively. Copy number variation occurs by deletion or duplication of FCGR3A and FCGR2C together, or FCGR3B and FCGR2C together, but not FCGR2A or FCGR2B. Common single-nucleotide variation within and between the paralogs adds another layer of complexity to the FCGR region. Recke et al. (2015) estimated the effect of the patient FCGR genotype on the risk of developing PV/PF or BP (bullous pemphigoid) in a case-control study. The risk of PV/PF was decreased in the presence of allele C of the promoter polymorphism rs3219018 (-386G>C), which affects the binding of transcription factors and the expression level of FcγRIIb (FCGR2B), and increased in the presence of an FCGR2C rs183547105 ORF allele. Because the inhibitory FcγRIIb is involved in peripheral tolerance of B lymphocytes, which may be counterbalanced by functional FcγRIIc expression, the authors proposed that these polymorphisms alter the risk of PV/PF due to a shift of the threshold for activation and proliferation of autoreactive B lymphocytes (Recke et al., 2015).
Does the variation of the forkhead box P3 gene FOXP3 have any impact on pemphigus foliaceus?
The FOXP3 gene is located at Xp11.23 and mutations in its coding region cause IPEX, the monogenic X-linked immune dysregulation, polyendocrinopathy, and enteropathy syndrome (OMIM #304790). Susceptibility to some multifactorial ADs has been associated with FOXP3 polymorphisms (Oda et al., 2013). FOXP3 is a candidate for diseases with an immune background because it codes for a transcription factor of prime importance for the regulation of immune responses by T lymphocytes and in the development of CD4+ CD25(IL2RA)+ Treg cells, which are critical for suppression of autoimmune or otherwise inappropriate immune responses. DSG3-induced Treg cells that inhibited autoreactive Th clones were preferentially isolated from the peripheral blood of healthy individuals who carried the PV-associated HLA class II alleles HLA-DRB1*04:02 and DQB1*05:03, and only from a minority of patients with PV. These results strongly suggest that these Treg cells may be involved in the maintenance of self-tolerance against DSG3 (Veldman et al., 2004).
Thus far, a possible influence of FOXP3 variants on susceptibility to PF has been analyzed only in Tunisia. In a sample of women, the intronic SNPs rs3761547 allele G, rs3761548 A, rs3761549 C, and rs2294021 C were associated with increased susceptibility to endemic PF. For sporadic PF, a weak association was seen only with rs3761549 C in the individual analysis, but higher susceptibility to both endemic and sporadic PF was associated with haplotype G-A-15-C-C, where 15 stands for the allele of a (GT)n STR in the promoter region (Ben Jmaa et al., 2017). The genomic region of chromosome X that includes the FOXP3 gene bears many protein-coding and noncoding RNA genes whose SNPs present very high LD (r² ≥ 0.8) (1000 Genomes via LDlink). The four SNPs analyzed in that study mark three different LD blocks. It would be relevant to verify whether the association can be validated in other populations and for the various forms of pemphigus.
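For readers unfamiliar with the r² measure used above to delimit LD blocks and tag SNPs, the sketch below shows how pairwise LD is computed from haplotype and allele frequencies. The frequencies are hypothetical and serve only to illustrate the arithmetic; they are not taken from the cited studies.

```python
# Minimal sketch of the pairwise linkage disequilibrium measure r^2
# (r^2 >= 0.8 is the tagging threshold mentioned above).
# The frequencies below are hypothetical and only illustrate the calculation.

def ld_r2(p_ab: float, p_a: float, p_b: float) -> float:
    """r^2 between two biallelic loci.
    p_ab : frequency of the haplotype carrying allele A at locus 1 and allele B at locus 2
    p_a  : frequency of allele A at locus 1
    p_b  : frequency of allele B at locus 2
    """
    d = p_ab - p_a * p_b                        # disequilibrium coefficient D
    denom = p_a * (1 - p_a) * p_b * (1 - p_b)
    return d * d / denom

# Example: two strongly correlated SNPs, as expected within a single LD block
print(round(ld_r2(p_ab=0.29, p_a=0.30, p_b=0.32), 2))   # ~0.82, i.e. above the 0.8 cutoff
```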
The ST18 gene
ST18 encodes a transcription factor (zinc finger protein 387; ZNF387) and regulates apoptosis and inflammation, two processes relevant to pemphigus pathogenesis. ST18 expression was upregulated in the skin of PV patients. In functional assays, ST18 overexpression in the presence of PV serum or PV IgG increased the secretion of TNFα, IL-1α, and IL-6, cytokines reported to be increased in the lesioned skin of PV patients. Allele A of the ST18 polymorphism rs2304365 was associated with PV in Jews and Egyptians, but not in Germans (Sarig et al., 2012). Subsequently, high LD between SNPs rs2304365 and rs17315309 was detected in Jews, and the authors concluded that the functional SNP rs17315309 allele G, which drives ST18 upregulation, is possibly the causal polymorphism (Vodo et al., 2016). The risk allele rs2304365 A was associated with severe pemphigus and a higher age of disease onset in the Iranian population (Etesami et al., 2018). In the Chinese population, no association with rs2304365 was seen for PV and PF (Yue et al., 2014). The apparently conflicting results between the association studies in the Jewish and Chinese populations can be explained by the absence of the risk allele rs17315309 G in Eastern Asian populations (Ensembl, Cunningham et al., 2019). However, the rs17315309 G allele is common, and LD with rs2304365 is high in Europeans (LDlink), such that the lack of association with PV in the German population remains unexplained.
Non-coding RNAs: The new players in complex phenotypes
A non-coding RNA (ncRNA) is a functional RNA molecule that is transcribed from DNA but not translated into a polypeptide. NcRNAs are involved in a wide range of biological processes, including gene transcription, posttranscriptional modifications, signal transduction, besides chromatin remodeling and other epigenetic mechanisms. Their deregulation or nucleotide sequence variation may contribute to disease.
For FS in Brazil, 2,080 SNPs located in long ncRNA (lncRNA) genes were evaluated in a case-control association study. Six of these polymorphisms possibly have an impact on susceptibility. The variant rs7144332 T in the lncRNA AL110292.1 showed the most significant association with FS susceptibility. Results for five other lncRNA genes were suggestive of association: rs6942557 C in LINC01176 and rs17774133 T in LINC01119 were associated with increased risk; rs6095016 A in lnc-PREX1-7:1, rs7195536 G in AC009121.1, and rs1542604 T in AC133785.1 were associated with decreased disease risk (Lobo-Alves et al., 2019b). The functions of these lncRNAs remain elusive, but a functional impact of the SNPs on lncRNA structure, expression level, or interaction with microRNAs could be suggested.
Lack of association with some remarkable candidates
The lymphoid phosphatase LYP (also known as PTPN22 - protein tyrosine phosphatase, nonreceptor-type, 22) regulates signal transduction in immune cells. Among its many effects, LYP is a potent negative regulator of T lymphocyte activation (Hill et al., 2002). Genetic variation of PTPN22 (located at 1p13.2) is among the most influential genetic risk factors for ADs outside the MHC (Burn et al., 2011). Increased risk has been associated with variants of PTPN22, notably with allele T of the missense rs2476601 (1858C>T) single nucleotide polymorphism (SNP) that results in the Arg620Trp amino acid substitution in the first proline-rich motif of the LYP protein (Zheng et al., 2012). The hypothesis that the rs2476601 T (620Trp) variant could be a shared risk variant for autoimmune and immune-mediated diseases has been raised (Smyth et al., 2004). Notwithstanding, for PF and PV, there is a lack of association with rs2476601. The first study of PF and PV in the Tunisian population (Mejri et al., 2007) provided evidence for no effect of the PTPN22 variant on susceptibility; however, interpretation of the results was hampered because the frequency of allele T was low, impacting the statistical power of the analysis. Later on, in a North American population, the frequency of allele T was higher, at 7.8% in both the PV patient and control samples, and again no association was seen (Sachdev et al., 2011). For FS, four SNPs (including rs2476601) in the genomic region 1p13.2, which together tag 28 SNPs on a segment of approximately 312,000 bp encompassing the PTPN22, RSBN1, and AL137856.1 genes and the 5' portion of the AP4B1-AS1 gene, were used as markers. No significant association was found. Allele rs2476601 T was observed at a frequency of about 14% in both the patient and control samples (Lobo-Alves et al., 2019a). Thus, variants in structural and regulatory sites of PTPN22 and its flanking regions are not susceptibility factors for pemphigus, and it seems settled that the genetic variation of LYP has no impact on pemphigus disease susceptibility.
Most ADs not associated with the rs2476601 allele T (LYP 620Trp) manifest in the skin, the gastrointestinal tract, or in immune-privileged sites, leading to the suggestion that the influence of that variant on susceptibility to ADs depends on the affected tissue, and that the PTPN22 polymorphism is not a shared susceptibility factor for antibody-driven ADs (Zheng et al., 2012). This opposition should be explored to deepen the understanding of the mechanisms that differentiate these two groups of ADs.
The CD1D gene encodes an MHC class I-like glycoprotein (CD1d) whose primary function is presenting glycolipid antigens to natural killer T (NKT) cells. The CD1D mRNA is overexpressed in CD4+ T lymphocytes of FS patients when compared with healthy individuals (Malheiros et al., 2014). The CD1D 3' UTR SNPs rs16839951 and rs422236 were not associated with the disease (Cipolla et al., 2016). These results indicate that at least the analyzed genetic variants of CD1d do not contribute to FS susceptibility and that the previously observed expression difference between the samples of FS patients and unaffected individuals is probably a consequence of the pathogenic process. Also, the KLRD1 rs2537752 and NKG7 rs3009 SNPs were not associated with FS (Cipolla et al., 2016).
Cellular levels of various proapoptotic molecules, including Bax (BCL2-associated X protein) and p53 (tumor protein p53), are increased in pemphigus (Wang et al., 2004). Because apoptosis dysregulation may play a role in pemphigus, genetic variants of the proteins involved in the apoptotic process may participate in the interindividual variation of susceptibility to the disease. Nonetheless, no effect on FS was observed for the BAX gene upstream regulatory region SNP rs4645878 (-248G>A), nor for the missense TP53 rs1042522 (12139C>G, Pro72Arg) SNP (Köhler and Petzl-Erler, 2006).
Lack of association with PF in Tunisia was also observed for variants of the transcription factors RORC (RORγt) and STAT3, which stabilize and maintain Th17 cell function (Ben Jmaa et al., 2018).
Concluding remarks
Despite the successful identification of several genes and regulatory elements involved in pemphigus pathogenesis, knowledge in that field is still fragmentary, especially for PV. The consensus is that the HLA class II genes have the greatest effect in all forms of pemphigus and all populations. For several of the other genes, replication and validation studies are still lacking. Besides, many real associations certainly were overlooked, simply because the genes were not considered as candidates in published studies. It seemed that genome-wide association studies (GWAS) would permit pinpointing unsuspected non-coding and coding genetic elements. However, the pemphigus diseases are mostly rare, and unless very large patient (and control) samples are investigated, only highly significant associations with large effect sizes will be identified. This occurs because very stringent p-values are needed to avoid false positives in GWAS. Consequently, many associations that may be relevant for the disease are missed in such studies. In fact, three recent GWAS of pemphigus in the Han Chinese population confirmed the great effect of HLA, but could not replicate weaker associations (Gao et al., 2018; Sun et al., 2019; Zhang et al., 2019). Moreover, in both the GWAS and the hypothesis-driven candidate gene association studies, the effect of certain variants may remain undetected if specific epistatic interactions between two or more variants are missed because informative tag SNPs of the relevant variant(s) of the additional locus (or loci) are not available or were ignored. Improved analytical approaches for assessing evidence for associations between the disease and clusters of variants, rather than just one or a few SNPs, may prove more informative in this regard.
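To make the multiple-testing argument concrete, the back-of-the-envelope calculation below shows why genome-wide significance thresholds are so stringent. The assumed number of effectively independent tests is the conventional rule of thumb of roughly one million common variants; it is not a value taken from the cited pemphigus GWAS.

```python
# Illustration of why GWAS require very stringent per-SNP p-values.
# The number of tests is a generic assumption, not from any cited study.

n_tests = 1_000_000          # assumed number of effectively independent SNPs
alpha_family = 0.05          # desired family-wise error rate

bonferroni_threshold = alpha_family / n_tests
print(f"Per-SNP significance threshold: {bonferroni_threshold:.1e}")   # 5.0e-08

# Expected number of false positives if every SNP were tested at the nominal
# 0.05 level under the global null hypothesis:
print(f"Expected false positives at p < 0.05: {0.05 * n_tests:,.0f}")  # 50,000
```

Under such a threshold, only variants with large effect sizes or very large samples reach genome-wide significance, which is why modest but real associations are easily missed.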
Since the genotype is established before disease onset, it is the genotype that influences the disease, and not the reverse. Thus, information on pemphigus genetics could support approaches to personalized medicine in the future. Nonetheless, observing an association with an SNP does not necessarily imply that the SNP is causal, because of linkage disequilibrium with one to many additional variants. This is a well-known phenomenon that emphasizes the need for fine-mapping and annotating the variants in the genomic region flanking the associated SNP in the particular study population to select the most likely causal SNP(s) for subsequent functional analyses. A critical next step will be to identify the effects of the putative risk variant(s) to understand the disease mechanisms. To achieve this, genetic engineering approaches, such as CRISPR technology-based genome editing, as well as novel techniques to detect DNA-DNA, DNA-RNA, RNA-RNA, and DNA- or RNA-protein interactions, combined with information from expression quantitative trait loci (eQTL) studies, will provide insight into the functional impact of non-coding variants on altered cellular phenotypes.
Epigenetic modifications such as DNA methylation and histone modification, whose dysregulation can also be implicated in tolerance breakdown and pathogenic autoimmunity, add another layer of complexity. Various epigenetic modifications are sensitive to external stimuli and may bridge the gap between the genome and the environment. Therefore, besides mapping epigenetic modifications in health and disease, a better appraisal of the relevant environmental factors is needed. Additionally, key cell types and cell states that may be implicated in pemphigus pathogenesis should be defined. Functional genomic annotations from these cell types and states can then be used to determine candidate genes and regulatory sequences, and the causal variants. Together with longitudinal studies, these approaches may produce crucial insights into how pemphigus develops. The growing understanding of the genetics and epigenetics of autoimmune disease may facilitate early diagnosis, refine the disease phenotypes, and improve therapeutic intervention.
"year": 2020,
"sha1": "008b19be7f1d37d1be3b2a4454af1d625d40bb99",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/gmb/v43n3/1415-4757-GMB-43-3-e20190369.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a5ed7425f68a489b4ad776c0e457bf4370ac3ac3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Slip of Alkanes Confined between Surfactant Monolayers Adsorbed on Solid Surfaces
The slip and friction behavior of n-hexadecane, confined between organic friction modifier surfactant films adsorbed on hematite surfaces, has been studied using nonequilibrium molecular dynamics simulations. The influence of the surfactant type and coverage, as well as the applied shear rate and pressure, has been investigated. A measurable slip length is only observed for surfactant films with a high surface coverage, which provide smooth interfaces between well-defined surfactant and hexadecane layers. Slip commences above a critical shear rate, beyond which the slip length first increases with increasing shear rate and then asymptotes toward a constant value. The maximum slip length increases significantly with increasing pressure. Systems and conditions which show a larger slip length typically give a lower friction coefficient. Generally, the friction coefficient increases linearly with logarithmic shear rate; however, it shows a much stronger shear rate dependency at low pressure than at high pressure. Relating slip and friction, slip only occurs above a critical shear stress, after which the slip length first increases linearly with increasing shear stress and then asymptotes. This behavior is well-described using previously proposed slip models. This study provides a more detailed understanding of the slip of alkanes on surfactant monolayers. It also suggests that high coverage surfactant films can significantly reduce friction by promoting slip, even when the surfaces are well-separated by a lubricant. ■ INTRODUCTION The dynamics of flow in nanofluidic devices and nanoelectromechanical systems (NEMS) can be accurately described only through a detailed understanding of the flow at fluid−solid interfaces. In this context, recent experimental and molecular simulation studies have shown that the common assumption in continuum hydrodynamics that fluid adjacent to a sliding surface moves at the same velocity as the surface (no-slip boundary condition) often breaks down at the nanoscale. A large proportion of such studies have investigated the slip of alkanes on surfactant monolayers adsorbed on smooth solid surfaces. Many of these studies report a slip length, defined as an extrapolated distance relative to the fluid−solid interface where the tangential velocity component vanishes. However, there is significant variation in both the magnitude and the shear rate dependence of the slip lengths from these experiments. For example, total internal reflection-fluorescence recovery after photobleaching (TIR-FRAP) experiments suggested a shear rate-independent (measured between 10 and 10 s−1) slip length of approximately 400 nm for n-hexadecane on smooth sapphire surfaces coated with octadecyltrichlorosilane (OTS) selfassembled monolayers (SAMs). In surface forces apparatus (SFA) experiments using tetradecane confined between smooth mica surfaces coated with OTS SAMs, slip was only detected once a critical shear rate was exceeded, above which a linear increase in the slip length with log(shear rate) from 0 to 1.4 μm was observed (between 10 and 10 s−1). More recently, colloidal probe atomic force microscopy (AFM) has been used to measure the slip length of several n-alkanes (from heptane to hexadecane) on OTS SAM-coated, ultrasmooth silicon wafers. They suggested much smaller, shear rate-independent (between 10 and 10 s−1) slip lengths of 10 to 30 nm, with longer nalkanesthat have higher viscosities giving more slip. 
In addition to surfaces coated with surfactant SAMs, slip of alkanes has also been observed on surfactant films formed from solution. For example, TIR-FRAP experiments also showed that the addition of a surfactant [stearic acid (SA)] to hexadecane confined between smooth sapphire surfaces led to an increase in the slip length from 150 to 350 nm. Moreover, adding hexadecylamine to n-alkanes confined between smooth mica surfaces in SFA experiments switched the boundary conditions from no-slip to partial slip. In these SFA experiments, the maximum slip length again increased with the increasing n-alkane chain length from octane to tetradecane (1−15 nm). The large variation between slip length measurements from different experiments on similar systems suggests that different mechanisms may be acting to promote slip. Experiments which suggested smaller (<50 nm) slip lengths for alkanes on adsorbed surfactant monolayers are more consistent with the slip mechanism devised by Tolstoi and reviewed by Blake. This mechanism suggests that slip is due to enhanced liquid mobility at the surface and is essentially an extension of Eyring's activated flow model for bulk liquid viscosity. Conversely, the much larger slip lengths measured in some experiments have subsequently been explained through the formation of gas/vapour films or nanobubbles at the interface, as initially proposed by de Gennes. In addition to experiments, nonequilibrium molecular dynamics (NEMD) simulations have been used extensively to study slip of liquids confined between solid surfaces. For high-slip systems, such as water flowing on graphene surfaces or through carbon nanotubes, determination of the slip length with NEMD has been shown to be less accurate than with equilibrium molecular dynamics methods. However, for the partial-slip systems of interest in this study, NEMD simulations have been used extensively to further understand slip phenomena as well as to estimate slip lengths for a range of confined alkane systems. Most NEMD studies have investigated the slip behavior in highly confined (<10 molecular layers) alkane films between atomically smooth solid surfaces. The slip lengths from these NEMD simulations depend on the system and conditions but are typically <10 nm. In these studies, the slip length generally increased with alkane viscosity, slab stiffness, sliding velocity, and pressure. NEMD simulations of highly confined systems have also shown that the slip length decreases with increasing film thickness and is drastically reduced when the confining surfaces contain atomic-scale roughness. SFA and TIR-FRAP experiments have also shown significant reductions in the slip length on rougher surfaces.
The slip length of water confined between atomically smooth surfaces coated by alkylsilane SAMs and adsorbed methanol films has also been measured with NEMD simulations. Several NEMD simulation studies have probed the structure, flow, and friction behavior of alkanes confined between surfactant monolayers; however, prior to this current study, slip lengths in such systems had not been successfully quantified. NEMD simulations have also been central in developing more accurate models to describe liquid slip between solid surfaces. From their NEMD results, Thompson and Troian developed an equation for slip where, above a critical shear rate, the slip length is a power law function of the shear rate. Conversely, Lichter et al. suggested the variable-density Frenkel−Kontorova (vdFK) model, which predicts a plateau of the slip length at a high shear rate (rather than divergence). The model developed by Spikes and Granick to describe data from SFA experiments predicts the same behavior, but here the slip length asymptotes at high shear stress rather than high strain rate. Wang and Zhao extended Eyring's molecular kinetic theory (MKT) to describe the slip behavior. By assigning appropriate values of the tuneable parameters, the extended MKT model is able to describe the important features of the slip models due to Thompson and Troian, Lichter et al. (vdFK), and Spikes and Granick. In addition to accelerating flow in nanofluidic devices and NEMS, it has been proposed that slip could be exploited to reduce friction in macroscopic tribological systems. Specifically, slip has been used to rationalize results from macroscopic tribology experiments which showed that the addition of an organic friction modifier (OFM) to n-hexadecane could significantly reduce friction in the hydrodynamic lubrication regime when the surfaces are well-separated. OFMs are amphiphilic surfactant molecules that contain nonpolar aliphatic tailgroups attached to polar headgroups. They are based solely on C, H, O, and N atoms, are not environmentally harmful, and do not poison exhaust after-treatment devices. The most widely studied OFM headgroups are carboxylic acids; however, amines, amides, and glycerides are more industrially relevant. Commercial OFMs generally contain unbranched aliphatic tailgroups containing 12−20 carbon atoms because of their effective friction reduction, high base oil solubility, and high availability from natural fats and oils. OFMs adsorb to metal, ceramic, or carbon-based surfaces through their polar headgroups, and strong, cumulative van der Waals forces between proximal nonpolar tailgroups lead to the formation of incompressible monolayers. These monolayers are known to significantly reduce friction and wear in the boundary lubrication regime (where the load is primarily supported by surface asperities) by preventing direct contact between solid surfaces. However, it is less clear if OFMs also reduce friction in the hydrodynamic lubrication regime (where the load is supported by the lubricant). When OFM monolayers form on surfaces inside a tribological contact, the planes of methyl groups at the end of the vertically adsorbed OFM molecules create nonwetting, oleophobic surfaces over which lubricants could slip. In this study, large-scale NEMD simulations will be used to investigate the friction and the slip behavior of hexadecane, confined between OFM films, under hydrodynamic lubrication conditions.
Several different OFM types [SA, stearamide (SAm), and glycerol mono-stearate (GMS)] and coverages (1.44−4.32 molecules nm−2) will be explored to understand their influence on the friction and the slip behavior. The dependence of the slip length and the friction coefficient on the applied shear rate (10 to 10 s−1) and pressure (0.1−1.0 GPa) will also be investigated. This study provides unique insights into the slip of alkanes on surfactant monolayers, as well as the relationship between slip and friction. These results also provide support for previously proposed mechanisms and models for slip in such systems. A deeper understanding of the slip phenomenon is expected to be valuable not only for improving the molecular design of lubricant additives but also for understanding and controlling flow and friction in nanofluidic devices and NEMS. ■ METHODOLOGY System Setup. A representative example of the systems simulated in this study is shown in Figure 1a. It consists of a lubricant (n-hexadecane) layer confined between two OFM monolayers which are adsorbed on atomically smooth hematite slabs. The hexadecane layer is sufficiently thick (>15 molecular diameters) such that no molecular structuring was evident in the middle of the film, and thus any confinement-induced viscosity increase was expected to be negligible. All systems were constructed using the Materials and Processes Simulations (MAPS) platform from Scienomics SARL. Classical MD simulations were performed using the large-scale atomic/molecular massively parallel simulator (LAMMPS) code. Figure 1b shows the three different OFM types considered in these simulations: SA, SAm, and GMS. SA was selected to allow comparisons to previous experiments and NEMD simulations, whereas SAm and GMS are more commercially relevant. The hexadecane lubricant was chosen because of its well-defined properties and common use in both experimental and modeling tribology studies. In all of the MD simulations, (100) slabs of α-iron(III) oxide (hematite) with dimensions (xyz) of approximately 55 × 55 × 12 Å were used as the substrates. Periodic boundary conditions were applied in the x and y directions. The surface was cleaved such that the Fe/O ratio remained at 2:3 to ensure that surfaces with no overall charge were produced. Three different OFM surface coverages were considered. The coverage, Γ, is defined as the number of OFM molecules in a given surface area (nm−2). The limiting headgroup area for carboxylic acids, amides, and glycerides (assuming monodentate binding) is around 0.22 nm², which corresponds to a maximum theoretical coverage of 4.55 nm−2 for the OFMs. In these simulations, a high surface coverage (Γ = 4.32 nm−2) is simulated by adsorbing 132 OFM molecules on each ≈30 nm² slab to form a close-packed monolayer. Two other surface coverages are considered: a medium coverage (Γ = 2.88 nm−2), approximately 2/3 of the maximum coverage; and a low coverage (Γ = 1.44 nm−2), around 1/3 of the maximum coverage. In situ AFM experiments and depletion isotherms have shown that OFMs with saturated tailgroups (e.g., SA, SAm, and GMS) can form high coverage monolayers on solid surfaces. Multilayer OFM films have also been observed in recent in situ AFM experiments, although their role in lubrication remains unclear and they are not considered in this current study.
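As a quick consistency check of the coverage values quoted above, the sketch below reproduces the approximate numbers, assuming the stated 55 × 55 Å slab face and the 0.22 nm² limiting headgroup area. The small difference from the quoted 4.32 nm−2 presumably reflects rounding of the slab dimensions.

```python
# Consistency check of the quoted surface coverages, assuming the stated
# 55 x 55 Angstrom slab face. Values are rounded as in the text.

slab_x_nm = 5.5
slab_y_nm = 5.5
area_nm2 = slab_x_nm * slab_y_nm            # ~30 nm^2 per slab face

n_ofm_high = 132                            # molecules per slab at high coverage
coverage_high = n_ofm_high / area_nm2       # ~4.36 nm^-2 (quoted as 4.32 nm^-2)

limiting_headgroup_area_nm2 = 0.22          # monodentate carboxylate/amide/glyceride
max_theoretical_coverage = 1.0 / limiting_headgroup_area_nm2   # ~4.55 nm^-2

print(f"Slab face area      : {area_nm2:.2f} nm^2")
print(f"High coverage       : {coverage_high:.2f} nm^-2")
print(f"Theoretical maximum : {max_theoretical_coverage:.2f} nm^-2")
```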
The OFM molecules were oriented perpendicular to, and initially 3 Å above, the interior surfaces of the two hematite slabs (Figure 1a). This produced OFM films similar to those formed by Langmuir−Blodgett experiments. Between 450 (high coverage) and 650 (low coverage) hexadecane molecules were then randomly distributed in the region between the OFM films. This resulted in a similar total number of atoms (approximately 50 000 including the surface atoms) and film thickness for all of the coverages studied. Parameters from the long hydrocarbon-optimized potentials for liquid simulations all-atom force field (L-OPLS-AA) were used to represent the carbon and hydrogen atoms in the alkyl chains (both OFMs and hexadecane), and standard OPLSAA parameters were used for the headgroup atoms (see ref 33 for full details). Accurate density and viscosity prediction of bulk hexadecane using L-OPLS-AA under ambient and high pressure conditions has been confirmed in a previous publication. Lennard−Jones interactions were cut off at 12 Å, and “unlike” interactions were evaluated using the geometric mean mixing rules, as prescribed in the OPLS force field. Electrostatic interactions were evaluated using a slab implementation of the particle−particle, particle−mesh (PPPM) algorithm with a relative force accuracy of 10−5. Surface−hexadecane and surface−OFM interactions were represented by the Lennard−Jones and Coulomb potentials; the hematite surface parameters selected were developed by Berro et al. for alkane adsorption. The hematite slab atoms were “frozen” in the corundum crystal structure to facilitate more accurate slip length analysis. Previous NEMD simulations have shown that the use of rigid slabs can lead to an unphysical velocity-slip behavior when they are in direct contact with a nonwetting fluid; however, here, the flexible OFM films form the interface with the lubricant and are sufficient to prevent such a behavior (see Results and Discussion). The MD equations of motion were integrated using the velocity Verlet algorithm with an integration time step of 1.0 fs. Fast-moving bonds involving hydrogen atoms were constrained with the SHAKE algorithm. The Nose−́Hoover thermostat, with a time relaxation constant of 0.1 ps, was used to maintain the target temperature, T = 300 K. The pressure (P = 0.1−1.0 GPa) was controlled by applying a constant normal force to the outermost layer of atoms in the upper slab, keeping the z coordinates of the outermost layer of atoms in the lower slab fixed (Figure 1a). Simulation Procedure. First, the system was energyminimized before a density similar to that of liquid hexadecane (0.75 g cm−3) was achieved in the fluid between the slabs by moving the top slab down at 10 m s−1. The system was then pressurized (P = 0.1−1.0 GPa), thermostatted in directions perpendicular to the compression (x and y), and allowed to equilibrate at 300 K. Initially, the slab separation varied in a damped harmonic manner, so sliding was not applied until a constant average slab separation was obtained and the average hydrostatic pressure within the hexadecane film was close to its target value (within 1%). These compression simulations were generally around 2 ns in duration. After compressive oscillation became negligible, a velocity, vs, was added in the x direction to the outermost layer of atoms in the top slab (Figure 1a), and sliding simulations were conducted for between 2 and 20 ns (depending on vs). 
Lower sliding velocities required longer simulations for the block-averaged xvelocity profile and the friction coefficient to reach a steady state. The values of vs applied were between 1 and 200 m s −1. Accurate determination of the friction coefficient and particularly the slip length becomes challenging at even lower sliding velocities, while shear heating becomes more difficult to control above the selected range. A combination of high sliding velocity (or shear rate) and pressure are particularly relevant for components which primarily operate in the elastohydrodynamic lubrication (EHL) regime. A consequence of the rigid slabs is that the thermostat cannot be applied directly to the slab atoms, as is common in confined NEMD simulations. Applying the thermostat directly to fluid molecules confined between rigid surfaces has been shown artificially influence their behavior under sliding conditions and also lead to the erroneous slip behavior. Therefore, during the sliding simulations, any heat generated was dissipated using a thermostat acting only on the OFM layers (Figure 1a), applied perpendicular to the sliding direction (y and z). Using this approach, there was a negligible increase in the lubricant film temperature under shear, even at the highest sliding velocity considered. Moreover, the variation in the slip with sliding velocity is consistent with NEMD simulations of alkanes Figure 1. Simulation details: setup for compression and sliding simulations (a). Example shown for SA at 4.32 nm−2 after compression and before sliding. Fe atoms are shown in pink, O in red, terminal C in yellow, and the other all C in cyan. Headgroup H is shown in white, while H atoms in the alkyl groups are omitted for clarity. Periodic boundary conditions (orange dotted line) are applied in the x and y directions. Snapshot rendered using Visual Molecular Dynamics (VMD). Chemical structures of OFMs simulated in this study (b): SA, SAm, and GMS. Langmuir Article DOI: 10.1021/acs.langmuir.8b00189 Langmuir 2018, 34, 3864−3873 3866 confined between thermostatted, flexible metal slabs (see Results and Discussion). ■ RESULTS AND DISCUSSION Slip Length Analysis. The slip length analysis will be presented first, followed by the shear rate and pressure dependence of slip and friction, and finally the interdependence of slip and friction. Figure 2a shows a representative NEMD system snapshot (SAm, Γ = 4.32 nm−2). Note that, consistent with previous NEMD simulations, the tilted OFMmonolayers move at the same velocity as the slabs to which they are adsorbed. This is the case for all of the simulations, and thus the OFM velocity profiles are omitted for clarity. Comparing the red (assumed no-slip) and blue (measured) hexadecane velocity profiles shows that the confined lubricant layer is only partially sheared, and slip occurs at both of the OFM−hexadecane interfaces, not at the hematite−OFM interfaces (see also Figures 3−6). Figure 2b shows the definition of the slip length (purple arrows) from theNEMD simulations. The solid red line in Figure 2b shows the velocity profile that would be obtained within the hexadecane layer, assuming no-slip boundary conditions. In this case, the OFM layers move at the same velocity as the slabs to which they are adsorbed, the hexadecane layer is completely sheared, and there is a net zero velocity difference between the OFM and hexadecane layers at their interface. 
The solid blue line in Figure 2b shows an example velocity profile with a nonzero net velocity difference between the OFM and hexadecane layers, indicating partial slip at the interface. Note that this represents apparent slip rather than true slip because the slip plane is located above an adsorbed layer rather than directly at the solid−liquid interface. The slip length (shown in purple) can be calculated by extrapolating the measured (slip) velocity profile to the point at which it intersects the applied slab velocity (in this case 0 and vs) and measuring the distance from the OFM-hexadecane interface (see Figure 2b). Figures 3−6 show mass density profiles in the z-direction for the hexadecane (orange) and OFM (green) molecules. The measured hexadecane x-velocity profile in the z-direction (blue) and the assumed no-slip velocity profile (red) for a fully sheared hexadecane layer are also shown. Note that these figures are rotated 90° relative to the schematics in Figure 2. The sharp, intense OFM mass density peaks on the far leftand right-hand sides of Figures 3−6 indicate the adsorption of the headgroups on the slabs, while the less intense peaks which extend further from the surface are due to the tailgroups. Slip occurs when Figure 2. (a) Example system snapshot showing slip between OFM (SAm, Γ = 4.32 nm−2) films and the confined n-hexadecane layer. Fe atoms are shown in pink, O in red, N in blue, terminal C in yellow, and the other all C in cyan. HeadgroupH is shown in white, while H atoms in the alkyl groups are omitted for clarity. Snapshot rendered using VMD. The horizontal red dotted line shows the OFM−hexadecane interface. The solid red line shows a linear velocity profile when a no-slip boundary condition at the OFM−hexadecane interface is assumed. The solid blue line represents the measured (slip) velocity profile. (b) Schematic showing the definition of the slip length (purple double-headed arrows) and slip velocity (orange double-headed arrows) in OFM-lubricated systems. Figure 3. Effect of OFM surface coverage on the mass density profiles for the OFMs (green) and hexadecane (orange) and the measured velocity profile for hexadecane (blue). Representative example shown for SA;Γ = 1.44 (a), 2.88 (b), and 4.32 nm−2 (c); P = 0.1 GPa; vs = 100m s−1. The solid red line shows a linear velocity profile when a no-slip boundary condition at the OFM−hexadecane interface is assumed. The purple double-headed arrow between red and blue vertical dotted lines shows the calculated slip length, which is only detectable in (c). Langmuir Article DOI: 10.1021/acs.langmuir.8b00189 Langmuir 2018, 34, 3864−3873 3867 there is a net nonzero velocity difference between the hexadecane layer and the OFM layers at their interface. The interface where slip occurs (red vertical dotted lines) can be clearly identified by the intersection of the hexadecane and OFM mass density profiles. The center of the mass density and velocity profiles has been shifted to zero for clarity. This does not affect the measured slip length because it is an average between the top and bottom slabs. Figure 3 shows the effect of OFM surface coverage on the OFM and hexadecane mass density profiles as well as the hexadecane velocity profile. Comparing Figure 3a−c, the OFM films become substantially thicker and more strongly layered with increasing coverage. Both sum frequency generation spectroscopy and radial distribution functions from NEMD simulations have also shown that the OFMs form solid-like films at high coverage. 
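To make the geometric definition of the slip length concrete, the following minimal sketch extrapolates a binned, partially sheared velocity profile to the applied wall velocities and averages the result over both interfaces. The profile, film thickness, and bin positions are synthetic placeholders rather than output from the simulations described here, and the procedure is only one straightforward way to implement the definition illustrated in Figure 2b.

```python
# Minimal sketch: extract a slip length from a binned x-velocity profile by
# extrapolating the linear fit to the applied wall velocities (Figure 2b).
# The profile below is synthetic, not data from the simulations in the text.
import numpy as np

v_s = 100.0              # sliding velocity of the top slab (m/s); bottom slab at 0
z_interface_bot = 0.0    # position of the lower OFM-hexadecane interface (nm)
z_interface_top = 10.0   # position of the upper OFM-hexadecane interface (nm)

# Synthetic, partially sheared hexadecane velocity profile (bin centers, velocities)
z = np.linspace(0.5, 9.5, 19)
v = 10.0 + (80.0 / 9.0) * (z - 0.5)   # runs from 10 to 90 m/s instead of 0 to 100 m/s

# Linear fit of the central part of the profile: v(z) = a*z + b
a, b = np.polyfit(z, v, 1)

# Positions where the extrapolated fit reaches the wall velocities
z_v0 = (0.0 - b) / a
z_vs = (v_s - b) / a

slip_bottom = z_interface_bot - z_v0           # distance below the lower interface
slip_top = z_vs - z_interface_top              # distance above the upper interface
slip_length = 0.5 * (slip_bottom + slip_top)   # average over both interfaces

print(f"slip length ~ {slip_length:.2f} nm")
```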
There is much less overlap of the OFM and hexadecane mass density profiles at high coverage (Figure 3c) than at medium (Figure 3b) and low coverage (Figure 3a), indicating reduced interdigitation. Consequentially, under all of the conditions studied, the velocity profiles only show a measurable slip length at high coverage (4.32 nm−2). In Figure 3a,b, the assumed no-slip velocity profile (red) overlaps with the measured velocity profile (blue). Conversely, in Figure 3c, there is clear separation between the measured profile and the no-slip profile, allowing the slip length (0.9 nm) to be calculated. This observation agrees with experimental results which have shown much larger slip lengths for high coverage OFM films compared to low coverage films, as well as those which have shown a transition from no-slip to partial slip after high coverage films were allowed to form on the surface. Figure 4 shows the effect of sliding velocity (or shear rate) on the mass density and velocity profiles. As with previous NEMD simulations, the mass density profiles are insensitive to sliding velocity in the range tested. The velocity profiles show an increase in the slip length with increasing sliding velocity; from 0.3 nm at 10 m s−1 (Figure 4a) to 0.6 nm at 20 m s−1 (Figure 4b), to 0.8 nm at 50 m s−1 (Figure 4c), and to 0.9 nm at 100 m s−1 (Figure 3c). This observation is consistent with several previous experiments and NEMD simulations which also showed a general increase in the slip length with increasing sliding velocity. Figure 5 shows the effect of the OFM type on the mass density and velocity profiles at low pressure (0.1 GPa). The OFM mass density profiles for SA (Figure 5a) and SAm (Figure 5b) are very similar but the GMS profile (Figure 5c) shows that its films are Figure 4. Effect of sliding velocity on the mass density profiles for the OFMs (green) and hexadecane (orange) and the measured velocity profile for hexadecane (blue). Representative example shown for SA; vs = 10 (a), 20 (b), and 50m s−1 (c); Γ = 4.32 nm−2; P = 0.1 GPa. The solid red line shows a linear velocity profile when a no-slip boundary condition at the OFM−hexadecane interface is assumed. A purple double-headed arrow between red and blue vertical dotted lines shows the calculated slip length. Figure 5. Effect of the OFM type on the mass density profiles for the OFMs (green) and hexadecane (orange) and the measured velocity profile for hexadecane (blue) at low pressure (P = 0.1 GPa). Representative example shown for SA (a), SAm (b), and GMS (c); Γ = 4.32 nm−2; vs = 100 m s −1. The solid red line shows a linear velocity profile when a no-slip boundary condition at the OFM−hexadecane interface is assumed. The purple double-headed arrow between red and blue vertical dotted lines shows the calculated slip length. Langmuir Article DOI: 10.1021/acs.langmuir.8b00189 Langmuir 2018, 34, 3864−3873 3868 slightly thicker owing to the larger headgroup size. There is much less overlap of the hexadecane andOFMmass density profiles for SAm (Figure 5b) and GMS (Figure 5c) compared to SA (Figure 5a), indicating reduced interdigitation. At 0.1 GPa and 100m s−1, this leads to larger slip lengths for high coverage SAm (1.1 nm) and GMS (1.3 nm) films compared to SA (0.9 nm). Figure 6 shows the effect of the OFM type on the mass density and velocity profiles at high pressure (1.0 GPa). Similar to at low pressure, SAm (Figure 6b) and GMS (Figure 6c) films are less interdigitated than SA (Figure 6a) films, leading to larger slip lengths. 
Comparing Figures 5 and 6 shows the effect of pressure on the structure and flow behavior. For all of the OFMs, the mass density profiles show that there is a reduction in film thickness ≈10% moving from low pressure (Figure 5) to high pressure (Figure 6). The mass density profiles also indicate stronger layering both in the OFM and particularly the hexadecane films at high pressure relative to low pressure. The increased “first layer density” observed here for the hexadecane layer at higher pressure has been correlated with larger slip lengths in previous NEMD simulations. Comparing Figures 5a and 6a suggests that for SA, interdigitation between the hexadecane and OFM layers is reduced at higher pressure, owing to the denser OFM films. For all of the OFMs at 100 m s−1, the slip length increases significantly, moving from low to high pressure; 0.9 nm (Figure 5a) to 2.6 nm (Figure 6a) for SA, 1.1 nm (Figure 5b) to 4.3 nm (Figure 6b) for SAm, and 1.3 nm (Figure 5c) to 4.8 nm (Figure 6c) for GMS. The increase in the slip length with the increasing pressure is consistent with previous NEMD simulations and macroscopic tribology experiments which showed the same trend. Shear Rate and Pressure Dependence of Slip and Friction. Figure 7a shows the change in the slip length with log(shear rate) for high coverage (Γ = 4.32 nm−2) SA, SAm, and GMS films at low (0.1 GPa) and high (1.0 GPa) pressure. Note that here, the applied shear rate is calculated using the no-slip velocity profile and the hexadecane layer thickness rather than the overall film thickness (see Figure 2b). No slip occurs below a critical shear rate, above which the slip length increases linearly with log(shear rate) and then asymptotes toward a constant value. The critical shear rate decreases from ≈10 s−1 at low pressure to ≈10 s−1 at high pressure. The critical shear rates from these simulations are relatively high with respect to those observed in previous experiments for similar systems as well as the shear rates experienced in real components. The slip length asymptotes to approximately 1 nm at low pressure and between 2 and 5 nm at high pressure, depending on the OFM type. SAm Figure 6. Effect of the OFM type on the mass density profiles for the OFMs (green) and hexadecane (orange) and the measured velocity profile for hexadecane (blue) at high pressure (P = 1.0 GPa). Representative example shown for SA (a), SAm (b), GMS (c); Γ = 4.32 nm−2; vs = 100 m s −1. The solid red line shows a linear velocity profile when a no-slip boundary condition at the OFM−hexadecane interface is assumed. The purple double-headed arrow between red and blue vertical dotted lines shows the calculated slip length. Figure 7. Variation in the slip length (a) and the friction coefficient (b) with log10(shear rate) for SA (blue), SAm (red), GMS (green); 0.1 GPa (filled symbols) and 1.0 GPa (open symbols); Γ = 4.32 nm−2. Error bars in (b) show the standard deviation between block-averaged friction coefficient values. Langmuir Article DOI: 10.1021/acs.langmuir.8b00189 Langmuir 2018, 34, 3864−3873 3869 and GMS give consistently larger slip lengths than SA, particularly at high pressure, because of reduced OFM− hexadecane interdigitation (see Figure 5). The magnitudes of these slip lengths are lower than those measured for alkanes on adsorbed surfactant monolayers in AFM and SFA experiments (10−30 nm), performed under milder pressure and shear rate conditions. 
NEMD simulations of water slip on alkyl monolayers also underpredicted the slip length by a similar degree compared to the experiment. The exact reason for this discrepancy remains unclear, but it could be because the current simulations use atomically smooth surfaces and are representative of the slip mechanism proposed by Tolstoi and reviewed by Blake. Alternative slip mechanisms, such as the formation of multilayer OFM films or nanobubbles, could be the route of the larger slip lengths and lower critical shear rates observed in some SFA and TIR-FRAP experiments of similar systems. The asymptotic behavior of the slip length at high shear rates is consistent with results from previous NEMD simulations and AFM experiments which have studied alkane slip. Such a behavior can be rationalized through a transition from “defect slip” to “global slip” and captured using both the vdFK and extended MKT slip models. An increase in the slip length with increasing pressure has also been noted in both previous NEMD simulations and tribology experiments. This observation suggests that as the pressure is increased, the viscous friction between individual hexadecane molecules increases more than friction at the OFM−hexadecane interface. Both AFM and SFA experiments have shown that longer chain alkanes with a larger viscosity give larger slip lengths on surfactant monolayers. In future NEMD studies, it would be interesting to investigate the slip of larger alkanes with viscosities closer to real lubricants on surfactant monolayers. The friction behavior of the OFM films and its relation to slip was also investigated. The kinetic friction coefficient, μ, was obtained using the extended Amontons−Coulomb law under the high load approximation: FL/FN = F0/FN + μ ≃ μ. Here, FL and FN are defined as the block-averaged lateral force (shear stress) and the normal force acting on each hematite slab in response to the fluid during sliding, respectively, and F0 is the loadindependent Derjaguin offset, representing adhesive surface forces. Previous NEMD simulations of the friction between OFM films separated by a lubricant layer have confirmed the validity of this approximation. In agreement with experimental results, the friction coefficient is greater at low (1.44 nm−2) and medium (2.88 nm−2) coverage, where no slip was observed (Figure 3a,b), compared at high coverage (4.32 nm−2), where a slip length could be measured (Figure 3c). The differences in friction between the high coverage case and the low and medium coverage cases are broadly similar to those observed in previous NEMD simulations of OFM films. Figure 7b shows the change in the friction coefficient with log(shear rate) for high coverage (Γ = 4.32 nm−2) SA, SAm, and GMS films at low (0.1 GPa) and high (1.0 GPa) pressure. The friction coefficient generally increases linearly with log(shear rate), which is consistent with the stress-augmented thermal activation theory and macroscopic tribology experiments. The friction coefficient shows a much stronger shear rate dependence at low pressure than at high pressure. At low shear rates (<10 s−1), the friction coefficient is larger at high pressure than at low pressure, whereas at higher shear rates, the reverse is true. The slip length is always greater at high pressure, suggesting that friction may be more effectively reduced in the global slip regime, which only occurs at higher shear rates. 
A decrease in both the magnitude of the friction coefficient and the slope of its increase with sliding velocity at higher pressure has also been observed in tribology experiments of SA dissolved in hexadecane under hydrodynamic lubrication conditions. NEMD simulations of pure lubricant molecules under EHL conditions, where slip occurs within the film itself rather than at the solid−fluid interface, also showed a similar friction behavior. The slip length (Figure 7a) and the friction coefficient (Figure 7b) both generally increase with increasing shear rate; however, at a given shear rate, systems with a larger slip length generally give a lower friction coefficient. This supports the postulate that OFM films can promote slip and thus reduce friction in the hydrodynamic regime. At high pressure, the friction coefficient drops slightly upon the transition from defect slip to global slip. These results also suggest that friction reduction by OFMs in the hydrodynamic regime will be greatest when the slip length is maximized through a combination of high pressure and high shear rate, as are typical in the EHL regime. For all of the conditions studied, SA gives a significantly higher friction coefficient than SAm and GMS. This observation is in agreement with previous NEMD simulations and friction experiments under boundary lubrication conditions. This behavior can be most clearly explained by comparing the mass density profile in Figure 5a with those in Figure 5b,c; this shows that compared to SA films, SAm and GMS films are more solidlike and less interdigitated with the hexadecane layer. The reduced interdigitation leads to significantly larger slip lengths for SAm and GMS compared to SA (Figure 7a) and ultimately lower friction (Figure 7b). This finding is expected to be useful in designing new OFMs to control friction and flow in a range of applications. Interdependence of Slip and Friction. It is difficult to establish quantitative relationships between slip and friction from these NEMD simulations because shear thinning of the hexadecane film will also reduce friction, particularly under the higher pressures and shear rates studied. However, it was possible to study the change in the slip length and slip velocity with shear stress and compare the trends with previously proposed models for slip. Figure 8a shows the relationship between the shear stress and the slip length for high coverage (4.32 nm−2) OFM films at low pressure (0.1 GPa). Slip occurs only above a critical shear stress, after which the slip length increases linearly with shear stress and then asymptotes, which is in agreement with the slip model proposed by Spikes and Granick. The critical shear stress increases with increasing hexadecane−OFM interdigitation from GMS, to SAm, to SA (Figure 5) and is <0.01 GPa for all of the OFMs at low pressure. The slip length values gathered from Figures 3−6 can also be used to calculate slip velocities (orange arrows in Figure 2).
Figure 8. The effect of shear stress on the slip length (a) and slip velocity (b) for high coverage (Γ = 4.32 nm−2) SA (blue), SAm (red), and GMS (green) films at low pressure (0.1 GPa). Note the logarithmic x-axis in (b). Dotted lines in (a) are guides for the eye, dotted lines in (b) use the extended MKT slip model developed by Wang and Zhao (eq 1). Fitting parameters are given in Table 1. Arrows on the x-axis show estimates of the critical shear stress.
Figure 8b shows the effect of shear stress on the slip velocity for high coverage (4.32 nm−2) OFM films at low pressure (0.1 GPa). The slip velocity−shear stress behavior is well-described using Eyring’s MKT, extended to include the critical shear stress and energy dissipation at the interface by Wang and Zhao. In this model (Eq 1), the slip velocity is given by
■ INTRODUCTION
The dynamics of flow in nanofluidic devices 1 and nanoelectromechanical systems (NEMS) 2 can be accurately described only through a detailed understanding of the flow at fluid−solid interfaces. 3 In this context, recent experimental and molecular simulation studies have shown that the common assumption in continuum hydrodynamics that fluid adjacent to a sliding surface moves at the same velocity as the surface (no-slip boundary condition) often breaks down at the nanoscale. 3 A large proportion of such studies have investigated the slip of alkanes on surfactant monolayers adsorbed on smooth solid surfaces. Many of these studies report a slip length, defined as an extrapolated distance relative to the fluid−solid interface where the tangential velocity component vanishes. 3 However, there is significant variation in both the magnitude and the shear rate dependence of the slip lengths from these experiments. 3 For example, total internal reflection-fluorescence recovery after photobleaching (TIR-FRAP) experiments suggested a shear rate-independent (measured between 10 2 and 10 3 s −1 ) slip length of approximately 400 nm for n-hexadecane on smooth sapphire surfaces coated with octadecyltrichlorosilane (OTS) selfassembled monolayers (SAMs). 4 In surface forces apparatus (SFA) experiments using tetradecane confined between smooth mica surfaces coated with OTS SAMs, slip was only detected once a critical shear rate was exceeded, above which a linear increase in the slip length with log(shear rate) from 0 to 1.4 μm was observed (between 10 2 and 10 5 s −1 ). 5 More recently, colloidal probe atomic force microscopy (AFM) has been used to measure the slip length of several n-alkanes (from heptane to hexadecane) on OTS SAM-coated, ultrasmooth silicon wafers. 6 They suggested much smaller, shear rate-independent (between 10 2 and 10 3 s −1 ) slip lengths of 10 to 30 nm, with longer nalkanesthat have higher viscosities giving more slip. 6 In addition to surfaces coated with surfactant SAMs, slip of alkanes has also been observed on surfactant films formed from solution. For example, TIR-FRAP experiments also showed that the addition of a surfactant [stearic acid (SA)] to hexadecane confined between smooth sapphire surfaces led to an increase in the slip length from 150 to 350 nm. 4 Moreover, adding hexadecylamine to n-alkanes confined between smooth mica surfaces in SFA experiments switched the boundary conditions from no-slip to partial slip. 7 In these SFA experiments, the maximum slip length again increased with the increasing n-alkane chain length from octane to tetradecane (1−15 nm). 7 The large variation between slip length measurements from different experiments on similar systems suggests that different mechanisms may be acting to promote slip. 3,6 Experiments which suggested smaller (<50 nm) slip lengths for alkanes on adsorbed surfactant monolayers 6,7 are more consistent with the slip mechanism devised by Tolstoi and reviewed by Blake. 8 This mechanism suggests that slip is due to enhanced liquid mobility at the surface and is essentially an extension of Eyring's activated flow model for bulk liquid viscosity. 9 Conversely, the much larger slip lengths measured in some experiments 4,5 have subsequently been explained through the formation of gas/vapour films or nanobubbles at the interface, 3,6 as initially proposed by de Gennes. 
10 In addition to experiments, nonequilibrium molecular dynamics (NEMD) simulations have been used extensively to study slip of liquids confined between solid surfaces. For high-slip systems, such as water flowing on graphene surfaces 11 or through carbon nanotubes, 12 accurate determination of the slip length with NEMD has been shown to be less accurate than equilibrium molecular dynamics methods. However, for the partial-slip systems of interest in this study, NEMD simulations have been used extensively to further understand slip phenomena 13−16 as well as to estimate slip lengths for a range of confined alkane systems. 17−21 Most NEMD studies have investigated the slip behavior in highly confined (<10 molecular layers) alkane films between atomically smooth solid surfaces. The slip lengths from these NEMD simulations depend on the system and conditions but are typically <10 nm. In these studies, the slip length generally increased with alkane viscosity, slab stiffness, sliding velocity, and pressure. 17−21 NEMD simulations of highly confined systems have also shown that the slip length decreases with increasing film thickness 20,22 and is drastically reduced when the confining surfaces contain atomic-scale roughness. 23,24 SFA and TIR-FRAP experiments have also shown significant reductions in the slip length on rougher surfaces. 25,26 The slip length of water confined between atomically smooth surfaces coated by alkylsilane SAMs 27,28 and adsorbed methanol films 29,30 has also been measured with NEMD simulations. Several NEMD simulation studies 31−33 have probed the structure, flow, and friction behavior of alkanes confined between surfactant monolayers; however, prior to this current study, slip lengths in such systems had not been successfully quantified.
NEMD simulations have also been central in developing more accurate models to describe liquid slip between solid surfaces. 34 From thier NEMD results, Thompson and Troian 14 developed an equation for slip where, above a critical shear rate, the slip length is a power law function of the shear rate. Conversely, Lichter et al. 35 suggested the variable-density Frenkel− Kontorova (vdFK) model, which predicts a plateau of the slip length at a high shear rate (rather than divergence). The model developed by Spikes and Granick to describe data from SFA experiments predicts the samebehavior, but here the slip length asymptotes at high shear stress rather than high strain rate. 36 Wang and Zhao 37 extended Eyring's molecular kinetic theory (MKT) 9 to describe the slip behavior. By assigning appropriate values of the tuneable parameters, the extended MKT model 37 is able to describe the important features from the slip models due to Thompson and Troian, 14 Lichter et al. (vdFK), 35 and Spikes and Granick. 36 In addition to accelerating flow in nanofluidic devices and NEMS, it has been proposed that slip could be exploited to reduce friction in macroscopic tribological systems. 38 Specifically, slip has been used to rationalize results from macroscopic tribology experiments which showed that the addition of an organic friction modifier (OFM) to n-hexadecane could significantly reduce friction in the hydrodynamic lubrication regime when the surfaces are well-separated. 38 OFMs are amphiphilic surfactant molecules that contain nonpolar aliphatic tailgroups attached to polar headgroups. They are based solely on C, H, O, and N atoms are not environmentally harmful and do not poison exhaust after-treatment devices. The most widely studied OFM headgroups are carboxylic acids; however, amines, amides, and glycerides are more industrially relevant. Commer-cial OFMs generally contain unbranched aliphatic tailgroups containing 12−20 carbon atoms because of their effective friction reduction, high base oil solubility, and high availability from natural fats and oils. 39 OFMs adsorb to metal, ceramic, or carbonbased surfaces through their polar headgroups, and strong, cumulative van der Waals forces between proximal nonpolar tailgroups lead to the formation of incompressible monolayers. 39 These monolayers are known to significantly reduce friction and wear in the boundary lubrication regime (where the load is primarily supported by surface asperities) by preventing direct contact between solid surfaces. 39 However, it is less clear if OFMs also reduce friction in the hydrodynamic lubrication regime (where the load is supported by the lubricant). When OFM monolayers form on surfaces inside a tribological contact, 39 the planes of methyl groups at the end of the vertically adsorbed OFM molecules create nonwetting, oleophobic surfaces 40 over which lubricants could slip. 38,39 In this study, large-scale NEMD simulations will be used to investigate the friction and the slip behavior of hexadecane, confined between OFM films, under hydrodynamic lubrication conditions. Several different OFM types [SA, stearamide (SAm), and glycerol mono-stearate (GMS)] and coverages (1.44−4.32 molecules nm −2 ) will be explored to understand their influence on the friction and the slip behavior. The dependence of the slip length and the friction coefficient on the applied shear rate (10 8 to 10 10 s −1 ) and pressure (0.1−1.0 GPa) will also be investigated. 
This study provides unique insights into the slip of alkanes on surfactant monolayers, as well as the relationship between slip and friction. These results also provide support for previously proposed mechanisms and models for slip in such systems. A deeper understanding of the slip phenomenon is expected to be valuable not only for improving the molecular design of lubricant additives but also for understanding and controlling flow and friction in nanofluidic devices 1 and NEMS. 2 ■ METHODOLOGY System Setup. A representative example of the systems simulated in this study is shown in Figure 1a. It consists of a lubricant (n-hexadecane) layer confined between two OFM monolayers which are adsorbed on atomically smooth hematite slabs. The hexadecane layer is sufficiently thick (>15 molecular diameters) such that no molecular structuring was evident in the middle of the film, 32 and thus any confinement-induced viscosity increase was expected to be negligible. 41 All systems were constructed using the Materials and Processes Simulations (MAPS) platform from Scienomics SARL. Classical MD simulations were performed using the large-scale atomic/ molecular massively parallel simulator (LAMMPS) code. 42 Figure 1b shows the three different OFM types considered in these simulations; SA, SAm, and GMS. SA was selected to allow comparisons to previous experiments and NEMD simulations, whereas SAm and GMS are more commercially relevant. 39 The hexadecane lubricant was chosen because of its well-defined properties and common use in both experimental 38,43 and modeling 33 tribology studies.
In all of the MD simulations, (100) slabs of α-iron(III) oxide 45 (hematite) with dimensions (xyz) of approximately 55 × 55 × 12 Å were used as the substrates. Periodic boundary conditions were applied in the x and y directions. 46 The surface was cleaved such that the Fe/O ratio remained at 2:3 to ensure that surfaces with no overall charge were produced. 33 Three different OFM surface coverages were considered. The coverage, Γ, is defined as the number of OFM molecules in a 50 although their role in lubrication remains unclear and they are not considered in this current study. The OFM molecules were oriented perpendicular to, and initially 3 Å above, the interior surfaces of the two hematite slabs (Figure 1a). This produced OFM films similar to those formed by Langmuir−Blodgett experiments. 47,51 Between 450 (high coverage) and 650 (low coverage) hexadecane molecules were then randomly distributed in the region between the OFM films. This resulted in a similar total number of atoms (approximately 50 000 including the surface atoms) and film thickness for all of the coverages studied. 32 Parameters from the long hydrocarbon-optimized potentials for liquid simulations all-atom force field (L-OPLS-AA) 52 were used to represent the carbon and hydrogen atoms in the alkyl chains (both OFMs and hexadecane), and standard OPLS-AA 53,54 parameters were used for the headgroup atoms (see ref 33 for full details). Accurate density and viscosity prediction of bulk hexadecane using L-OPLS-AA under ambient and high pressure conditions has been confirmed in a previous publication. 55 Lennard−Jones interactions were cut off at 12 Å, and "unlike" interactions were evaluated using the geometric mean mixing rules, as prescribed in the OPLS force field. 53 Electrostatic interactions were evaluated using a slab implementation of the particle−particle, particle−mesh (PPPM) algorithm 56 with a relative force accuracy of 10 −5 .
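For orientation, the coverages quoted in this work can be related to the number of adsorbed molecules through the slab area. The short sketch below does this conversion for the 55 × 55 Å surfaces described above; the function name and the example molecule count are illustrative assumptions, not part of the original simulation workflow.

```python
def coverage_per_nm2(n_ofm_per_surface, lx_angstrom=55.0, ly_angstrom=55.0):
    """Surface coverage Gamma (molecules nm^-2) for one slab of area lx * ly."""
    area_nm2 = (lx_angstrom / 10.0) * (ly_angstrom / 10.0)  # 1 nm = 10 Angstrom
    return n_ofm_per_surface / area_nm2

# Roughly 131 molecules on a 5.5 x 5.5 nm surface corresponds to the
# high-coverage case (about 4.3 nm^-2) quoted in the text.
print(round(coverage_per_nm2(131), 2))
```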
Surface−hexadecane and surface−OFM interactions were represented by the Lennard−Jones and Coulomb potentials; the hematite surface parameters selected were developed by Berro et al. 57 for alkane adsorption. The hematite slab atoms were "frozen" in the corundum crystal structure 45 to facilitate more accurate slip length analysis. Previous NEMD simulations have shown that the use of rigid slabs can lead to an unphysical velocity-slip behavior when they are in direct contact with a nonwetting fluid; 18 however, here, the flexible OFM films form the interface with the lubricant and are sufficient to prevent such a behavior (see Results and Discussion).
The MD equations of motion were integrated using the velocity Verlet algorithm with an integration time step of 1.0 fs. Fast-moving bonds involving hydrogen atoms were constrained with the SHAKE algorithm. 58 The Nose−Hoover thermostat, 59,60 with a time relaxation constant of 0.1 ps, was used to maintain the target temperature, T = 300 K. The pressure (P = 0.1−1.0 GPa) was controlled by applying a constant normal force to the outermost layer of atoms in the upper slab, keeping the z coordinates of the outermost layer of atoms in the lower slab fixed (Figure 1a). 34,61 Simulation Procedure. First, the system was energyminimized before a density similar to that of liquid hexadecane (0.75 g cm −3 ) was achieved in the fluid between the slabs by moving the top slab down at 10 m s −1 . The system was then pressurized (P = 0.1−1.0 GPa), thermostatted in directions perpendicular to the compression (x and y), and allowed to equilibrate at 300 K. Initially, the slab separation varied in a damped harmonic manner, so sliding was not applied until a constant average slab separation was obtained and the average hydrostatic pressure within the hexadecane film was close to its target value (within 1%). 32 These compression simulations were generally around 2 ns in duration.
After compressive oscillation became negligible, a velocity, v s , was added in the x direction to the outermost layer of atoms in the top slab (Figure 1a), and sliding simulations were conducted for between 2 and 20 ns (depending on v s ). Lower sliding velocities required longer simulations for the block-averaged xvelocity profile and the friction coefficient to reach a steady state. The values of v s applied were between 1 and 200 m s −1 . Accurate determination of the friction coefficient and particularly the slip length becomes challenging at even lower sliding velocities, while shear heating becomes more difficult to control above the selected range. 34 A combination of high sliding velocity (or shear rate) and pressure are particularly relevant for components which primarily operate in the elastohydrodynamic lubrication (EHL) regime. 62 A consequence of the rigid slabs is that the thermostat cannot be applied directly to the slab atoms, as is common in confined NEMD simulations. 33 Applying the thermostat directly to fluid molecules confined between rigid surfaces has been shown artificially influence their behavior under sliding conditions 63 and also lead to the erroneous slip behavior. 18 Therefore, during the sliding simulations, any heat generated was dissipated using a thermostat acting only on the OFM layers (Figure 1a), applied perpendicular to the sliding direction (y and z). Using this approach, there was a negligible increase in the lubricant film temperature under shear, even at the highest sliding velocity considered. Moreover, the variation in the slip with sliding velocity is consistent with NEMD simulations of alkanes
■ RESULTS AND DISCUSSION
Slip Length Analysis. The slip length analysis will be presented first, followed by the shear rate and pressure dependence of slip and friction, and finally the interdependence of slip and friction. Figure 2a shows a representative NEMD system snapshot (SAm, Γ = 4.32 nm −2 ). Note that, consistent with previous NEMD simulations, 33 the tilted OFM monolayers move at the same velocity as the slabs to which they are adsorbed. This is the case for all of the simulations, and thus the OFM velocity profiles are omitted for clarity. Comparing the red (assumed no-slip) and blue (measured) hexadecane velocity profiles shows that the confined lubricant layer is only partially sheared, 21 and slip occurs at both of the OFM−hexadecane interfaces, not at the hematite−OFM interfaces (see also . 38,39 Figure 2b shows the definition of the slip length (purple arrows) from the NEMD simulations. The solid red line in Figure 2b shows the velocity profile that would be obtained within the hexadecane layer, assuming no-slip boundary conditions. In this case, the OFM layers move at the same velocity as the slabs to which they are adsorbed, the hexadecane layer is completely sheared, and there is a net zero velocity difference between the OFM and hexadecane layers at their interface. The solid blue line in Figure 2b shows an example velocity profile with a nonzero net velocity difference between the OFM and hexadecane layers, indicating partial slip at the interface. Note that this represents apparent slip rather than true slip because the slip plane is located above an adsorbed layer rather than directly at the solid−liquid interface. 3 The slip length (shown in purple) can be calculated by extrapolating the measured (slip) velocity profile to the point at which it intersects the applied slab velocity (in this case 0 and v s ) and measuring the distance from the OFM-hexadecane interface (see Figure 2b). 4,11 Figures 3−6 show mass density profiles in the z-direction for the hexadecane (orange) and OFM (green) molecules. The measured hexadecane x-velocity profile in the z-direction (blue) and the assumed no-slip velocity profile (red) for a fully sheared hexadecane layer are also shown. Note that these figures are rotated 90°relative to the schematics in Figure 2. The sharp, intense OFM mass density peaks on the far left-and right-hand sides of Figures 3−6 indicate the adsorption of the headgroups on the slabs, while the less intense peaks which extend further from the surface are due to the tailgroups. 33 Slip occurs when there is a net nonzero velocity difference between the hexadecane layer and the OFM layers at their interface. 3 The interface where slip occurs (red vertical dotted lines) can be clearly identified by the intersection of the hexadecane and OFM mass density profiles. The center of the mass density and velocity profiles has been shifted to zero for clarity. This does not affect the measured slip length because it is an average between the top and bottom slabs. Figure 3 shows the effect of OFM surface coverage on the OFM and hexadecane mass density profiles as well as the hexadecane velocity profile. Comparing Figure 3a−c, the OFM films become substantially thicker and more strongly layered with increasing coverage. Both sum frequency generation spectroscopy 64 and radial distribution functions from NEMD simulations 33 have also shown that the OFMs form solid-like films at high coverage. 
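The extrapolation procedure described above can be written down compactly. Below is a minimal Python sketch of how a slip length could be extracted from a binned, time-averaged x-velocity profile once the OFM−hexadecane interface positions (e.g., from the intersection of the mass density profiles) are known. The array names, the use of a least-squares line fit, and the averaging over the two interfaces are illustrative assumptions rather than the authors' analysis code.

```python
import numpy as np

def slip_length(z, vx, z_int_bottom, z_int_top, v_bottom=0.0, v_top=100.0):
    """Estimate the average slip length from a binned x-velocity profile.

    z            : bin centres across the film (nm)
    vx           : time-averaged x-velocity in each bin (m/s)
    z_int_*      : positions of the OFM-hexadecane interfaces (nm)
    v_bottom/top : imposed velocities of the bottom and top slabs (m/s)
    """
    # Fit a straight line v(z) = a*z + b through the sheared hexadecane region only.
    mask = (z > z_int_bottom) & (z < z_int_top)
    a, b = np.polyfit(z[mask], vx[mask], 1)

    # Extrapolate the measured profile to where it reaches the slab velocities.
    z_at_v_bottom = (v_bottom - b) / a
    z_at_v_top = (v_top - b) / a

    # Slip length = distance between the extrapolated point and the
    # OFM-hexadecane interface, averaged over the two interfaces.
    b_bottom = z_int_bottom - z_at_v_bottom
    b_top = z_at_v_top - z_int_top
    return 0.5 * (b_bottom + b_top)
```

If the measured profile coincides with the assumed no-slip profile, the extrapolated points fall on the interfaces and the returned slip length is zero, which matches the low- and medium-coverage cases discussed below.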
There is much less overlap of the OFM and hexadecane mass density profiles at high coverage (Figure 3c) than at medium (Figure 3b) and low coverage (Figure 3a), indicating reduced interdigitation. 33 Consequentially, under all of the conditions studied, the velocity profiles only show a measurable slip length at high coverage (4.32 nm −2 ). In Figure 3a,b, the assumed no-slip velocity profile (red) overlaps with the measured velocity profile (blue). Conversely, in Figure 3c, there is clear separation between the measured profile and the no-slip profile, allowing the slip length (0.9 nm) to be calculated. This observation agrees with experimental results which have shown much larger slip lengths for high coverage OFM films compared to low coverage films, 25 as well as those which have shown a transition from no-slip to partial slip after high coverage films were allowed to form on the surface. 4,7 Figure 4 shows the effect of sliding velocity (or shear rate) on the mass density and velocity profiles. As with previous NEMD simulations, 32,33 the mass density profiles are insensitive to sliding velocity in the range tested. The velocity profiles show an increase in the slip length with increasing sliding velocity; from 0.3 nm at 10 m s −1 (Figure 4a) to 0.6 nm at 20 m s −1 (Figure 4b), to 0.8 nm at 50 m s −1 (Figure 4c), and to 0.9 nm at 100 m s −1 (Figure 3c). This observation is consistent with several previous experiments 5, 7 and NEMD simulations which also showed a general increase in the slip length with increasing sliding velocity. 17−21 Figure 5 shows the effect of the OFM type on the mass density and velocity profiles at low pressure (0.1 GPa). The OFM mass density profiles for SA (Figure 5a) and SAm (Figure 5b) are very similar, but the GMS profile (Figure 5c) shows that its films are slightly thicker owing to the larger headgroup size. There is much less overlap of the hexadecane and OFM mass density profiles for SAm (Figure 5b) and GMS (Figure 5c) compared to SA (Figure 5a), indicating reduced interdigitation. At 0.1 GPa and 100 m s −1 , this leads to larger slip lengths for high coverage SAm (1.1 nm) and GMS (1.3 nm) films compared to SA (0.9 nm). Figure 6 shows the effect of the OFM type on the mass density and velocity profiles at high pressure (1.0 GPa). Similar to at low pressure, SAm (Figure 6b) and GMS (Figure 6c) films are less interdigitated than SA (Figure 6a) films, leading to larger slip lengths. Comparing Figures 5 and 6 shows the effect of pressure on the structure and flow behavior. For all of the OFMs, the mass density profiles show that there is a reduction in film thickness of ≈10% moving from low pressure (Figure 5) to high pressure (Figure 6). The mass density profiles also indicate stronger layering both in the OFM and particularly the hexadecane films at high pressure relative to low pressure. The increased "first layer density" observed here for the hexadecane layer at higher pressure has been correlated with larger slip lengths in previous NEMD simulations. 19 Comparing Figures 5a and 6a suggests that for SA, interdigitation between the hexadecane and OFM layers is reduced at higher pressure, owing to the denser OFM films. For all of the OFMs at 100 m s −1 , the slip length increases significantly, moving from low to high pressure; 0.9 nm (Figure 5a) to 2.6 nm (Figure 6a) for SA, 1.1 nm (Figure 5b) to 4.3 nm (Figure 6b) for SAm, and 1.3 nm (Figure 5c) to 4.8 nm (Figure 6c) for GMS. The increase in the slip length with increasing pressure is consistent with previous NEMD simulations 20 and macroscopic tribology experiments which showed the same trend. 65 Shear Rate and Pressure Dependence of Slip and Friction. Figure 7a shows the change in the slip length with log(shear rate) for high coverage (Γ = 4.32 nm −2 ) SA, SAm, and GMS films at low (0.1 GPa) and high (1.0 GPa) pressure.
Note that here, the applied shear rate is calculated using the no-slip velocity profile and the hexadecane layer thickness rather than the overall film thickness (see Figure 2b). No slip occurs below a critical shear rate, above which the slip length increases linearly with log(shear rate) and then asymptotes toward a constant value. The critical shear rate decreases from ≈10 9 s −1 at low pressure to ≈10 8 s −1 at high pressure. The critical shear rates from these simulations are relatively high with respect to those observed in previous experiments for similar systems 3 as well as the shear rates experienced in real components. 66 The slip length asymptotes to approximately 1 nm at low pressure and between 2 and 5 nm at high pressure, depending on the OFM type. SAm and GMS give consistently larger slip lengths than SA, particularly at high pressure, because of reduced OFM− hexadecane interdigitation (see Figure 5). The magnitudes of these slip lengths are lower than those measured for alkanes on adsorbed surfactant monolayers in AFM and SFA experiments (10−30 nm), performed under milder pressure and shear rate conditions. 6,7 NEMD simulations of water slip on alkyl monolayers also underpredicted the slip length by a similar degree compared to the experiment. 27,28 The exact reason for this discrepancy remains unclear, but it could be because the current simulations use atomically smooth surfaces and are representative of the slip mechanism proposed by Tolstoi and reviewed by Blake. 8 Alternative slip mechanisms, such as the formation of multilayer OFM films 50 or nanobubbles, 10 could be the route of the larger slip lengths and lower critical shear rates observed in some SFA and TIR-FRAP experiments of similar systems. 4,5 The asymptotic behavior of the slip length at high shear rates is consistent with results from previous NEMD simulations 17,18 and AFM experiments 6 which have studied alkane slip. Such a behavior can be rationalized through a transition from "defect slip" to "global slip" 17 and captured using both the vdFK 35 and extended MKT 37 slip models. An increase in the slip length with increasing pressure has also been noted in both previous NEMD simulations 20 and tribology experiments. 65 This observation suggests that as the pressure is increased, the viscous friction between individual hexadecane molecules increases more than friction at the OFM−hexadecane interface. 3 Both AFM 6 and SFA 7 experiments have shown that longer chain alkanes with a larger viscosity give larger slip lengths on surfactant monolayers. In future NEMD studies, it would be interesting to investigate the slip of larger alkanes with viscosities closer to real lubricants on surfactant monolayers. The friction behavior of the OFM films and its relation to slip was also investigated. The kinetic friction coefficient, μ, was obtained using the extended Amontons−Coulomb law under the high load approximation: F L /F N = F 0 /F N + μ ≃ μ. Here, F L and F N are defined as the block-averaged lateral force (shear stress) and the normal force acting on each hematite slab in response to the fluid during sliding, respectively, and F 0 is the loadindependent Derjaguin offset, representing adhesive surface forces. Previous NEMD simulations of the friction between OFM films separated by a lubricant layer have confirmed the validity of this approximation. 
32,33 In agreement with experimental results, 4 the friction coefficient is greater at low (1.44 nm −2 ) and medium (2.88 nm −2 ) coverage, where no slip was observed (Figure 3a,b), compared to at high coverage (4.32 nm −2 ), where a slip length could be measured (Figure 3c). The differences in friction between the high coverage case and the low and medium coverage cases are broadly similar to those observed in previous NEMD simulations of OFM films. 33 Figure 7b shows the change in the friction coefficient with log(shear rate) for high coverage (Γ = 4.32 nm −2 ) SA, SAm, and GMS films at low (0.1 GPa) and high (1.0 GPa) pressure. The friction coefficient generally increases linearly with log(shear rate), which is consistent with the stress-augmented thermal activation theory 67 and macroscopic tribology experiments. 43,51 The friction coefficient shows a much stronger shear rate dependence at low pressure than at high pressure. At low shear rates (<10 9 s −1 ), the friction coefficient is larger at high pressure than at low pressure, whereas at higher shear rates, the reverse is true. The slip length is always greater at high pressure, suggesting that friction may be more effectively reduced in the global slip regime, which only occurs at higher shear rates. 17,35 A decrease in both the magnitude of the friction coefficient and the slope of its increase with sliding velocity at higher pressure has also been observed in tribology experiments of SA dissolved in hexadecane under hydrodynamic lubrication conditions. 38 NEMD simulations of pure lubricant molecules under EHL conditions, where slip occurs within the film itself rather than at the solid−fluid interface, also showed a similar friction behavior. 62 The slip length (Figure 7a) and the friction coefficient (Figure 7b) both generally increase with increasing shear rate; however, at a given shear rate, systems with a larger slip length generally give a lower friction coefficient. This supports the postulate that OFM films can promote slip 38 and thus reduce friction in the hydrodynamic regime. 38,65 At high pressure, the friction coefficient drops slightly upon the transition from defect slip to global slip. 17,35 These results also suggest that friction reduction by OFMs in the hydrodynamic regime will be greatest when the slip length is maximized through a combination of high pressure and high shear rate, as are typical in the EHL regime. 62 For all of the conditions studied, SA gives a significantly higher friction coefficient than SAm and GMS. This observation is in agreement with previous NEMD simulations and friction experiments under boundary lubrication conditions. 43 This behavior can be most clearly explained by comparing the mass density profile in Figure 5a with those in Figure 5b,c; this shows that compared to SA films, SAm and GMS films are more solidlike and less interdigitated with the hexadecane layer. 33 The reduced interdigitation leads to significantly larger slip lengths for SAm and GMS compared to SA (Figure 7a) and ultimately lower friction (Figure 7b). This finding is expected to be useful in designing new OFMs to control friction and flow in a range of applications.
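As a concrete illustration of the high-load approximation described above, the sketch below computes a block-averaged kinetic friction coefficient from lateral and normal force time series recorded on one slab. The block count, array names, and neglect of the Derjaguin offset are assumptions for illustration, mirroring how the error bars in Figure 7b are described (standard deviation between block-averaged values).

```python
import numpy as np

def friction_coefficient(f_lateral, f_normal, n_blocks=10):
    """Kinetic friction coefficient via the extended Amontons-Coulomb law under the
    high-load approximation, mu ~ F_L / F_N (load-independent offset neglected)."""
    fl_blocks = np.array_split(np.asarray(f_lateral), n_blocks)
    fn_blocks = np.array_split(np.asarray(f_normal), n_blocks)
    mu_blocks = np.array([fl.mean() / fn.mean() for fl, fn in zip(fl_blocks, fn_blocks)])
    # Mean friction coefficient and the block-to-block standard deviation (error bar).
    return mu_blocks.mean(), mu_blocks.std()
```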
Interdependence of Slip and Friction. It is difficult to establish quantitative relationships between slip and friction from these NEMD simulations because shear thinning of the hexadecane film will also reduce friction, particularly under the higher pressures and shear rates studied. 68 However, it was possible to study the change in the slip length and slip velocity with shear stress and compare the trends with previously proposed models for slip. Figure 8a shows the relationship between the shear stress and the slip length for high coverage (4.32 nm −2 ) OFM films at low pressure (0.1 GPa). Slip occurs only above a critical shear stress, after which the slip length increases linearly with shear stress and then asymptotes, which is in agreement with the slip model proposed by Spikes and Granick. 36 The critical shear stress increases with increasing hexadecane−OFM interdigitation from GMS, to SAm, to SA ( Figure 5) and is <0.01 GPa for all of the OFMs at low pressure.
The slip length values gathered from Figures 3−6 can also be used to calculate slip velocities (orange arrows in Figure 2). 11 Figure 8b shows the effect of shear stress on the slip velocity for high coverage (4.32 nm −2 ) OFM films at low pressure (0.1 GPa). The slip velocity−shear stress behavior is well-described using Eyring's MKT, extended to include the critical shear stress and energy dissipation at the interface by Wang and Zhao. 37 In this model (Eq 1), the slip velocity is given by where τ is the shear stress, τ 0 is a characteristic shear stress, f d is a dissipation factor, and v 0 is a characteristic velocity. 37 In this study, the product of f d and v 0 was used in the fitting for simplicity. To fit the dotted lines in Figure 8, both τ 0 and v 0 f d increase with increasing interdigitation, from GMS to SAm and to SA (Table 1). This suggests that the potential energy barriers for slip to occur are larger when there is more interdigitation, which is similar to the effect of stronger solid−fluid interactions seen in previous NEMD studies. 37 Intuitively, the critical shear stress values obtained for these partial-slip systems (3.0−7.2 MPa) are much larger than observed in NEMD simulations of high-slip (water inside carbon nanotubes) systems (0.003−0.2 MPa). 37 An important remaining question following these NEMD simulations is the resilience of slip with respect to surface roughness. The presence of nanoscale roughness has been shown to significantly reduce the slip length of alkanes confined by solid surfaces in NEMD simulations 23,24 and experiments. 25,26 However, recent NEMD simulations showed that high coverage OFM films can give smooth, low friction interfaces even on surfaces with realistic nanoscale roughness. 69 Thus, it is expected that unlike for NEMD simulations of alkanes confined between bare surfaces with nanoscale roughness, such surfaces coated with high coverage OFM films should still show slip. Although this is beyond the scope of this current study, it is certainly an interesting area to explore and it will be pursued in future NEMD investigations.
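To make the fitting procedure above more concrete, the following sketch fits a sinh-type slip law of the kind described for eq 1 to slip velocity versus shear stress data using scipy. The exact functional form of eq 1 is not reproduced in this record, so the expression below, zero slip velocity below a critical shear stress and v0·fd·sinh((τ − τc)/τ0) above it, with v0·fd fitted as a single parameter as the text suggests, should be read as one plausible reading of the description; the data arrays are made-up placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def slip_velocity(tau, tau_c, tau0, v0fd):
    """Assumed sinh-type slip law: zero below the critical shear stress tau_c,
    v0*fd * sinh((tau - tau_c)/tau0) above it. Illustrative form only."""
    tau = np.asarray(tau)
    return np.where(tau > tau_c, v0fd * np.sinh((tau - tau_c) / tau0), 0.0)

# Shear stress (GPa) and slip velocity (m/s): placeholder values, not simulation results.
tau_data = np.array([0.002, 0.005, 0.008, 0.012, 0.016, 0.020])
v_data = np.array([0.0, 0.5, 2.0, 6.0, 15.0, 35.0])

popt, _ = curve_fit(slip_velocity, tau_data, v_data, p0=[0.004, 0.005, 1.0])
tau_c_fit, tau0_fit, v0fd_fit = popt
```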
■ SUMMARY AND CONCLUSIONS
In this study, the slip and friction behavior of n-hexadecane, confined between OFM surfactant films adsorbed on hematite surfaces, has been studied using NEMD simulations. The influence of the OFM type (SA, SAm, and GMS) and coverage (1.44−4.32 nm −2 ), as well as the applied shear rate (10 8 to 10 10 s −1 ) and pressure (0.1−1.0 GPa), have been investigated.
The slip length is found to be highly sensitive to the OFM type and coverage as well as the applied shear rate and pressure. A measurable slip length is only observed for OFM films with a high surface coverage, which give a smooth interface between welldefined OFM and hexadecane layers. At low and medium coverage, the hexadecane and OFM layers are significantly interdigitated, which prevents slip, resulting in higher friction. At high coverage, slip only occurs above a critical shear rate, which depends on the applied pressure as well as the OFM type. Above the critical shear rate, the slip length increases with increasing shear rate and subsequently asymptotes toward a constant value. The maximum slip length increases significantly with increasing pressure from ≈1 nm at 0.1 GPa to 2−5 nm at 1.0 GPa. As has been noted for other systems, there seems to be a lower propensity for slip in these NEMD simulations, which show relatively high critical shear rates and low slip lengths, compared to previous experiments. This suggests that different slip mechanisms could be acting in these experiments.
For a given nonequilibrium state point, systems which show a larger slip length typically give a lower friction coefficient. Generally, the friction coefficient increases linearly with the logarithmic shear rate, in accordance with the stress-augmented thermal activation theory. However, the friction coefficient shows a much stronger shear rate dependence at low pressure (0.1 GPa), where only modest slip lengths are measured, than at high pressure (1.0 GPa), where the slip lengths are much larger. At low shear rates, the friction coefficient is higher at high pressure than at low pressure, while at high shear rates, the reverse is true. GMS and SAm films are less interdigitated than SA films, leading to significantly larger slip lengths and lower friction coefficients. This finding is expected to be useful in designing new surfactants to control hydrodynamic friction and flow in a range of applications.
The friction and slip behavior are interrelated and, in agreement with the slip model due to Spikes and Granick, slip occurs only above a critical shear stress, after which the slip length increases linearly with shear stress and then asymptotes toward a constant value. At low pressure, the slip velocity−shear stress behavior is well-described using the sinh relationship in Eyring's MKT model for slip, extended to include a critical shear stress and energy dissipation at the interface by Wang and Zhao. 37 This study has provided a more detailed understanding of how alkane slip occurs on surfactant monolayers adsorbed on solid surfaces. Indeed, to our knowledge, it is the first to successfully measure the slip length in such systems using NEMD. These simulations have also provided compelling evidence that OFMs can significantly reduce friction even when the surfaces are wellseparated, as in the hydrodynamic lubrication regime. The simulations also suggest that friction reduction by OFMs will be greatest when the slip length is maximized through a combination of high pressure and high shear rate, as is typical in the EHL regime. Future NEMD studies will probe the sensitivity of hydrodynamic friction and slip to lubricant viscosity and surface roughness. | 2018-04-03T00:11:14.124Z | 2018-03-14T00:00:00.000 | {
"year": 2018,
"sha1": "c70cdbe8018e1bf1ff29e7dae8a9c4a51573a6e4",
"oa_license": "CCBY",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.langmuir.8b00189",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "be87188bf37b6730a1f60ac6c918d8923e1c5af7",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
202712875 | pes2o/s2orc | v3-fos-license | Controllable Length Control Neural Encoder-Decoder via Reinforcement Learning
Controlling output length in neural language generation is valuable in many scenarios, especially for the tasks that have length constraints. A model with stronger length control capacity can produce sentences with more specific length, however, it usually sacrifices semantic accuracy of the generated sentences. Here, we denote a concept of Controllable Length Control (CLC) for the trade-off between length control capacity and semantic accuracy of the language generation model. More specifically, CLC is to alter length control capacity of the model so as to generate sentence with corresponding quality. This is meaningful in real applications when length control capacity and outputs quality are requested with different priorities, or to overcome unstability of length control during model training. In this paper, we propose two reinforcement learning (RL) methods to adjust the trade-off between length control capacity and semantic accuracy of length control models. Results show that our RL methods improve scores across a wide range of target lengths and achieve the goal of CLC. Additionally, two models LenMC and LenLInit modified on previous length-control models are proposed to obtain better performance in summarization task while still maintain the ability to control length.
Introduction
Neural encoder-decoder was firstly adopted for machine translation (Sutskever, Vinyals, and Le 2014), and fastly diffused to other domains like image caption (Vinyals et al. 2015) and text summarization (Rush, Chopra, and Weston 2015). In this paper, we focus on text summarization which aims to generate condensed summaries while retains overall points of source articles. Previous advanced work (Rush, Chopra, and Weston 2015;Nallapati et al. 2016) make remarkable progress and sequence to sequence (seq2seq) framework has become the mainstream in summarization task. An issue in original neural encoder-decoder is that it cannot generate the sequence with specified length, i.e., lack of length control (LC) capacity. Sentences with constrained length are required in many scenarios. For example, the headlines and news usually have length limit, or articles and messages in different devices have different length demands. Generate the sentences with various lengths also improve the diversity of outputs. However, the study of length control is scarce, and most research of neural encoder-decoder aim to improve the evaluation score.
To control the output length, Kikuchi et al. (2016) first proposed two learning-based models for neural encoderdecoder named LenInit and LenEmb. We observe that when two models have same or similar structures, the evaluation score of one model with more precise length control is usually lower than another with weaker length control. In other words, worse LC capacity results in better output quality. For instance, LenEmb can generate the sequence with more accurate length but evaluation scores are lower than LenInit. In most situations when sentence length is in an adequate range, i.e. the length constraint is satisfied, people prefer to focus on semantic accuracy of the produced sentence, at this case, LenInit seems to be a more appropriate choice than LenEmb. Therefore, it makes sense to research the control of trade-off between LC capacity and sentence quality, which we called controllable length control (CLC).
To track this trade-off, we set our sight into using Reinforcement Learning (RL) (Sutton and Barto 2018). Commonly, RL in neural language generation is used to overcome two issues: the exposure bias (Ranzato et al. 2015) and inconsistency between training objective and evaluation metrics. Recently, great efforts have been devoted to solve the above two problems (Ranzato et al. 2015;Rennie et al. 2017;Paulus, Xiong, and Socher 2018) In addition, RL can actually bring two benefits in allusion to the LC neural language generation. Firstly, most datasets provide only one reference summary in each sentence pair, so we can only learn fixed-length summary for each source document under maximum likelihood (ML) training. But for RL, we could appoint various lengths as input to sample sentences for training, consequently, promote the model to become more robust to generate sentences given different desired length. Secondly, the length information could be easily incorporated into reward design in RL to induce the model to have different LC capacity, in this way, CLC could be achieved.
Normally, RL for sequence generation is operated on MLtrained models, however, we find that directly applying RL algorithm on pre-trained models will dramatically degrade LC capacity. In this paper, we design two RL methods for LC neural text generation: MTS-RL, and SCD-RL. By adjusting the rewards in RL according to outputs score and length, our MTS-RL and SCD-RL can improve the summarization performance as well as control the LC capacity. Furthermore, we can make some modifications on previous models to improve the score by leveraging the trade-off. An intuitive approach is that we could add a "regulator" between length input and decoder to suppress or enhance the transmission of the length information. Under the guidance of this idea, two models named LenLInit and LenMC are proposed. These two LC models significantly improve the evaluation score at low cost of its ability to control the length in both ML and RL. The major contributions of our paper are four-fold: • To the best of our knowledge, this is the first work applying reinforcement learning on length-control neural abstractive summarization, and we present the concept of CLC. • Two RL methods are developed to successfully control the LC capacity, and improve the scores significantly. Meanwhile, we find that RL for LC text generation alleviate the limitation of inadequacy and unbalance of Ground-Truth reference in different lengths. • Two models named LenLInit and LenMC are proposed based on previous neural LC models (Kikuchi et al. 2016). • Extensive experiments are conducted to verify that proposed models with devised RL algorithms cover a wide range of LC ability and smoothly achieve CLC on Gigaword summarization Dataset.
Related Work
Abstractive Text Summarization There are increasing heuristic work based on the encoder-decoder framework (Rush, Chopra, and Weston 2015;Nallapati et al. 2016). DRGD designed by Li et al. (2017) is a seq2seq oriented model equipped with deep recurrent generative decoder. See, Liu, and Manning (2017) proposed a hybrid pointergenerator network that uses pointer to copy words from articles while produce the words by generator. Cao et al. (2018) used OpenIE and dependency parser to extract fact descriptions from the source text, then adopted a dual attention model to force the faithfulness of outputs. Yang et al. (2019) explored a human-like reading strategy for abstract summarization and leveraged it by training model with multi-task learning system.
Length Control neural Encoder-Decoder Kikuchi et al. (2016) first proposed two learning-based neural encoderdecoder models to control sequence length named LenInit and LenEmb. LenEmb mixes the inputs of decoder with remaining length embedded into each time step, while LenInit initializes the memory cell state of LSTM decoder with whole length information. Before that, sentence length is controlled by ignoring "EOS" at certain time or truncating output sentence. Fan, Grangier, and Auli (2018) treated the length of ground truth summaries in different ranges as independent properties and identify it as a discrete mark in an embedding unit. Liu, Luo, and Zhu (2018) presented a convolutional neural network (CNN) encoder-decoder, the inputs and length information are proceeded by CNN before entering the decoder unit. Generally, length control model in neural encoder-decoder can be divided into two types: Whole Length Infusing (WLI) model and Remaining Length Infusing (RLI) model. WLI model is to inform the decoder with entire length of target sentence and RLI model is to tell the remaining length of the sentence in each time step. LenInit (Kikuchi et al. 2016), Fan (Fan, Grangier, and Auli 2018) and LCCNN (Liu, Luo, and Zhu 2018) all belong to WLI models, while LenEmb (Kikuchi et al. 2016) is a typical RLI model. Ordinarily, RLI models have better length control capacity but lead to poor sentence quality compare with WLI models. We follow Kikuchi et al. (2016) to define the length of a sentence in character level, which is more challenge than Liu, Luo, and Zhu (2018) in word level.
Reinforcement learning in NLG There are several successful attempts to integrate encoder-decoder and RL for neural language generation. Ranzato et al. (2015) applied RL algorithm to directly optimize the non-differential evaluation metric, which highly raise score. Rennie et al. (2017) modified RL algorithm by replacing the critic model with inference results to produce rewards, this simple modification makes significant improvements in image caption task. Yu et al. (2017) rewarded the Monte-Carlo sampled sentences with adversarial trained discriminator. Paulus, Xiong, and Socher (2018) employed intra-temporal attention, and combined supervised word prediction with RL to generate more readable summaries. Liu et al. (2018) designed an adversarial process for abstractive text summarization. Chen and Bansal (2018) firstly selected the salient sentences and rewrote the summary, in which non-differential computation is connected via policy gradient. However, above mentioned work did not involve and explore length control in RL.
Methodology
Problem Definition
The dataset D for text summarization contains pairs of an input source sequence $x = \{x_1, x_2, \ldots, x_N\}$ and a corresponding ground-truth summary $y^* = \{y^*_1, y^*_2, \ldots, y^*_M\}$, where N and M are the lengths of the input article and the reference, respectively. The target of summarization is to seek a transform from x to y using a θ-parameterized policy $p_\theta$; this can be formalized as maximizing the conditional probability in Eq. (1), $p_\theta(y^* \mid x) = \prod_{t=1}^{M} p_\theta(y^*_t \mid y^*_{1:t-1}, x)$, where $y^*_{1:t-1}$ denotes the ground-truth tokens preceding position t.
Encoder-Decoder Attention Model
Encoder-decoder with an attention mechanism (Bahdanau, Cho, and Bengio 2014) is selected as the basic framework in this work. The RNN encoder sequentially takes each word embedding of the input sentence. Then the final hidden state of the encoder, which contains the whole information of the source sentence, is fed into the decoder as the initial state. We select a bi-directional Long Short-Term Memory (BiLSTM; Hochreiter and Schmidhuber 1997) as the encoder to read the source sequence. Here we denote $\overrightarrow{h}^e_t$ as the hidden state of the BiLSTM encoder in the forward direction at time step t and $\overleftarrow{h}^e_t$ for the backward direction; $\overrightarrow{m}^e_t$ and $\overleftarrow{m}^e_t$ are the corresponding memory cell states of the BiLSTM encoder: $(\overrightarrow{h}^e_t, \overrightarrow{m}^e_t) = \mathrm{LSTM}_{\rightarrow}(e_w(x_t), \overrightarrow{h}^e_{t-1}, \overrightarrow{m}^e_{t-1})$ and $(\overleftarrow{h}^e_t, \overleftarrow{m}^e_t) = \mathrm{LSTM}_{\leftarrow}(e_w(x_t), \overleftarrow{h}^e_{t+1}, \overleftarrow{m}^e_{t+1})$. Outputs of the encoder at time t are concatenated as $\bar{h}^e_t = [\overrightarrow{h}^e_t \,\|\, \overleftarrow{h}^e_t]$, depicting the vector for attention, where $[\cdot \| \cdot]$ denotes concatenation.
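A minimal PyTorch sketch of such a bidirectional encoder is given below. The class name, hyper-parameters, and use of batch-first tensors are illustrative assumptions and not the authors' implementation.

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, src_ids):
        # src_ids: (batch, N) word indices of the source article
        emb = self.embed(src_ids)                # (batch, N, emb_dim)
        outputs, (h_n, c_n) = self.lstm(emb)     # outputs: (batch, N, 2*hidden_dim)
        # outputs[:, t] is the concatenation [h_forward_t || h_backward_t]
        # used as the attention memory for the decoder.
        return outputs, (h_n, c_n)
```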
The decoder unrolls the output summary from the initial hidden state by predicting one word at a time. Neglecting length control, the initial state of the decoder is set to the final encoder state, and the hidden state $h^d_t$ is calculated by $h^d_t = \mathrm{LSTM}(e_w(y_{t-1}), h^d_{t-1})$. A context vector $c_t$ is used to measure which parts of the source words the decoder pays attention to at time t: $\alpha_{t,i} = \mathrm{softmax}_i\big(\mathrm{score}(h^d_t, \bar{h}^e_i)\big)$ and $c_t = \sum_{i=1}^{N} \alpha_{t,i}\, \bar{h}^e_i$. Then we can concatenate $c_t$ with the hidden state $h^d_t$ to predict the next word: $p(y_t \mid y_{1:t-1}, x) = \mathrm{softmax}\big(W_o [h^d_t \,\|\, c_t] + b_o\big)$.
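One common realization of the attention step described above is sketched below with a simple dot-product score; the paper cites Bahdanau-style attention, so the scoring function here is an illustrative simplification, and it assumes the encoder output dimension matches the decoder hidden dimension.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnDecoderStep(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.LSTMCell(emb_dim, hidden_dim)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, prev_word, state, enc_outputs):
        # prev_word: (batch,), state: (h, m) each (batch, hidden),
        # enc_outputs: (batch, N, hidden) attention memory from the encoder
        h, m = self.cell(self.embed(prev_word), state)
        scores = torch.bmm(enc_outputs, h.unsqueeze(2)).squeeze(2)   # (batch, N)
        alpha = F.softmax(scores, dim=1)                             # attention weights
        c_t = torch.bmm(alpha.unsqueeze(1), enc_outputs).squeeze(1)  # context vector
        logits = self.out(torch.cat([h, c_t], dim=1))                # next-word scores
        return logits, (h, m)
```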
Length Control Models
To control the length of the output, we need to put the desired length information into the decoder; hence, the training objective in supervised ML with "teacher forcing" (Williams and Zipser 1989) becomes $L_{ml}(\theta) = -\sum_{t=1}^{M} \log p_\theta(y^*_t \mid y^*_{1:t-1}, x, l_t)$. Here, $l_t$ denotes the length information the decoder perceives at time t. As introduced before, LC models are classified into two groups. For the RLI model, the remaining length is updated at each time step by $l_{t+1} = l_t - \mathrm{len}(y^*_t)$, while $l_1$ is set to $\mathrm{len}(y^*)$. In the WLI model, the decoder is only aware of the whole length of the sentence, so we set all $l_t$ to $\mathrm{len}(y^*)$.
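The difference between the two length schedules can be made explicit with a few lines of code. The helper below follows the character-level length definition used in the paper (spaces ignored for simplicity); the function name and token-based bookkeeping are illustrative assumptions.

```python
def length_inputs(reference_tokens, mode="RLI"):
    """Return the length signal l_t seen by the decoder at each step."""
    total = sum(len(tok) for tok in reference_tokens)   # character-level target length
    if mode == "WLI":                                    # whole-length infusing: constant signal
        return [total] * len(reference_tokens)
    signals, remaining = [], total                       # remaining-length infusing
    for tok in reference_tokens:
        signals.append(remaining)
        remaining -= len(tok)                            # l_{t+1} = l_t - len(y_t)
    return signals
```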
In this section, we will introduce four models: LenInit, LenEmb, LenLInit and LenMC. The first two models were proposed by Kikuchi et al. (2016). We make modifications on them and propose the remaining two. LenInit This WLI model uses the memory cell to control the output length by rewriting the initial state $m^d_0$ as $m^d_0 = l_1 \cdot b_l$, where $l_1$ is regarded as the entire desired length of the output sentence, and $b_l \in \mathbb{R}^D$ is a learnable vector. LenLInit This model can be viewed as a variant of LenInit. In order to produce higher scores by leveraging the LC capacity, we simply add a linear transformation $W_l$ of the length information; the model is thus named Length Linear Initialization (LenLInit). Unlike LenInit, $b_l$ is replaced by $\hat{b}$, a Gaussian-sampled non-trainable vector, and the initial memory cell state of the decoder is $m^d_0 = W_l(l_1 \cdot \hat{b})$. LenEmb For this RLI model, an embedding matrix $W_{le} \in \mathbb{R}^{D \times L}$ transforms $l_t$ into a vector $e_l(l_t) \in \mathbb{R}^D$, where L is the number of possible length types; then $e_l(l_t)$ is concatenated with the word embedding vector $e_w(y^*_{t-1})$ as additional input for the LSTM decoder: $h^d_t = \mathrm{LSTM}([e_w(y^*_{t-1}) \,\|\, e_l(l_t)], h^d_{t-1})$. LenMC Unlike LenEmb, where the length information $e_l(l_t)$ is concatenated as additional input, we infuse $l_t$ into the memory cell at each time step in the same way as LenLInit, and name this RLI model LenMC.
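The sketch below contrasts the whole-length initialization used by LenInit/LenLInit with the per-step length embedding used by LenEmb/LenMC, following the descriptions above. The exact parameter shapes, the precise form of the LenLInit transformation, and the clamping of the length index are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LengthInfusion(nn.Module):
    def __init__(self, hidden_dim=512, max_len=120, variant="LenInit"):
        super().__init__()
        self.variant = variant
        self.b_l = nn.Parameter(torch.zeros(hidden_dim))          # LenInit: learnable vector
        self.register_buffer("b_hat", torch.randn(hidden_dim))    # LenLInit: fixed Gaussian vector
        self.W_l = nn.Linear(hidden_dim, hidden_dim, bias=False)  # LenLInit "regulator" (assumed form)
        self.len_embed = nn.Embedding(max_len + 1, hidden_dim)    # LenEmb: length embedding table
        self.max_len = max_len

    def initial_cell(self, target_len):
        # target_len: (batch,) desired character length l_1 (as a float tensor)
        if self.variant == "LenInit":
            return target_len.unsqueeze(1) * self.b_l              # m_0 = l_1 * b_l
        return self.W_l(target_len.unsqueeze(1) * self.b_hat)      # assumed LenLInit form

    def step_input(self, word_emb, remaining_len):
        # LenEmb-style: concatenate a length embedding with the word embedding at every step.
        idx = remaining_len.clamp(0, self.max_len).long()
        return torch.cat([word_emb, self.len_embed(idx)], dim=-1)
```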
Length Control Reinforcement Learning
Models trained by maximum likelihood estimation with "teacher forcing" suffer from the problem of "exposure bias" (Ranzato et al. 2015). Moreover, the training process minimizes the cross-entropy loss, while at test time the results are evaluated with language metrics. One way to resolve these conflicts is to learn a policy that directly maximizes the evaluation metric instead of the maximum-likelihood loss, for which RL is a natural choice. From the perspective of RL for sequence generation, our LC models can be viewed as an agent, the parameters of the network form a policy p_θ, and making a prediction at each step can be treated as an action. After generating a complete sentence, the agent receives a reward computed by the evaluation metrics. During training, the decoder can produce two types of output: y = {y_1, y_2, ...} with greedy search, and y^s = {y^s_1, y^s_2, ...} in which y^s_t is sampled from the probability distribution p_θ(y^s_t | y^s_{1:t−1}, x) at time t. We assign a random number l^s_1 within an appropriate range as the target summary length for each article and feed it into the LC model to sample a sentence y^s; the reward r(y^s) is then evaluated between the ground truth summary y* and the sampled sentence y^s. We apply self-critical sequence training (SCST) (Rennie et al. 2017) as our RL backbone, and the training objective of SCST becomes

L_rl(θ) = (r(y) − r(y^s)) Σ_t log p_θ(y^s_t | y^s_{1:t−1}, x, l^s_t)    (12)

This reveals that the goal of policy-gradient RL in sequence generation is equivalent to increasing the probability of generating high-scoring sentences. We encounter two additional problems in LC summarization. The first is that LC models are designed to generate summaries of different lengths, but existing datasets only provide one or a few ground-truth references for each article; worse still, the numbers of references with different lengths are terribly unbalanced (see Figure 2). In consequence, models trained under ML on such a dataset tend to perform well only at particular lengths. By sampling sequences with a randomly assigned length l^s_1 during reinforcement training, sentences with uniformly distributed lengths serve as additional summaries to be judged by the RL system, which alleviates the above-mentioned issue.
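In code form, the sign convention of Eq. (12) can be illustrated as follows (sampling, greedy decoding and reward computation are assumed to happen elsewhere; this is only a sketch):

```python
import torch

def scst_loss(logp_sampled, reward_sampled, reward_greedy):
    """Illustrative SCST-style loss for one example.

    logp_sampled: 1-D tensor of log p(y^s_t | y^s_1:t-1, x, l^s_t) over the sampled sentence.
    reward_sampled / reward_greedy: scalar ROUGE-based rewards r(y^s) and r(y).
    """
    advantage = reward_sampled - reward_greedy
    # Minimising this loss increases the probability of samples that beat the greedy baseline,
    # which matches Eq. (12) up to the sign of the coefficient.
    return -advantage * logp_sampled.sum()
```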
The second problem is that directly applying SCST to LC models seriously diminishes the LC capacity: some of the sampled sentences deviate in length, and enlarging the generation probability of these sentences corrupts the LC capacity, which in turn forces the model to generate more length-deviating sentences, reinforcing a vicious cycle that leads to a collapse of the LC capacity. To save the model from this length control collapse in RL, an intuitive idea is to adjust the reward according to the output length, especially for sentences with high scores but mismatched length. In consequence, we propose two training approaches for length control RL: Manually Threshold Select (MTS) and Self-Critical Dropout (SCD). Both training algorithms can regulate the model by tuning a hyper-parameter, trading better LC capacity against sentence quality and vice versa, i.e., accomplishing the CLC.
Manually Threshold Select As a starting point, semantic accuracy is still the most critical indicator that needs to be guaranteed. For a sentence with a low score, its generation probability should be reduced during training even if it has the expected length. Considering sentences with high scores, the reward of those that have the expected length should naturally be retained; thus, we only need to deal with the remaining sentences of unqualified length.
Suppose the desired length for the sampled sentence is l^s_1, and the length of the output sequence is len(y^s). The length prediction error d_e is the absolute difference of the two lengths: d_e = |l^s_1 − len(y^s)|. We manually choose an error threshold d_th and eliminate the reward of a sentence when d_e exceeds d_th:

r(y^s) ← r(y)    if r(y^s) > r(y) and d_e > d_th
r(y^s) ← r(y^s)  otherwise

The LC capacity can be adjusted by setting different d_th: a larger d_th yields better evaluation scores, while a smaller d_th gives better length control.
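The reward rule above can be written compactly as follows (an illustrative helper, not the original code):

```python
def mts_reward(r_sampled, r_greedy, d_e, d_th):
    """Neutralise the advantage of high-scoring samples whose length error exceeds d_th."""
    if r_sampled > r_greedy and d_e > d_th:
        return r_greedy   # reward falls back to the greedy baseline, so the sample is not reinforced
    return r_sampled
```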
Self-Critical Dropout Two drawbacks occur with MTS-RL. Firstly, sentences exceeding the limit are completely ignored even when they reach high evaluation scores and d_e is only slightly larger than d_th. Secondly, d_th can only take discrete values, which makes it hard to control models that already have precise length control, such as LenMC. Inspired by SCST (Rennie et al. 2017), which approximates the baseline from the current training model, we propose the Self-Critical Dropout RL approach. In each iteration, a batch of sampled outputs B = {y^{s,1}, y^{s,2}, ..., y^{s,|B|}} is obtained, where y^{s,i} is the i-th sampled sentence with desired length l^{s,i}_1. The mean length error d̄_e is approximated by averaging d_e over the batch. We take d̄_e as the threshold; unlike the previous method, which restrains the rewards of all sentences with d_e larger than d_th, we keep their rewards with a probability p_select. At the same time, rewards should be more likely to be preserved when d_e gets closer to d̄_e:

p_select = exp(−λ(d_e − d̄_e))    (15)

λ reflects the degree of the length constraint on the output sequence and therefore controls the LC capacity. A larger λ forces the model to generate sentences with more accurate lengths, while a smaller λ imposes weaker control of the length and can thus improve the performance.
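A sketch of SCD over one batch is given below; the fallback for rewards that are not kept is assumed to be the greedy baseline, analogous to MTS, since the text does not spell it out:

```python
import math
import random

def scd_rewards(samples, greedy_rewards, lam):
    """samples: list of dicts with 'reward' (r(y^s)) and 'len_err' (d_e = |l^s_1 - len(y^s)|)."""
    d_bar = sum(s["len_err"] for s in samples) / len(samples)       # batch mean of d_e (Eq. 14)
    adjusted = []
    for s, r_greedy in zip(samples, greedy_rewards):
        if s["len_err"] <= d_bar:
            adjusted.append(s["reward"])                             # within the adaptive threshold
        else:
            p_select = math.exp(-lam * (s["len_err"] - d_bar))       # Eq. (15)
            keep = random.random() < p_select
            adjusted.append(s["reward"] if keep else r_greedy)       # assumed MTS-style fallback
    return adjusted
```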
Experiments
The experiments are divided into two parts. We first conduct basic experiments with ML training to observe the gap in accuracy between the LC models and other summarization baselines; the trained models also serve as the initial state for RL. We then compare the LC models under different RL methods; we pay more attention to this part and perform extensive experiments to demonstrate the effectiveness of controllable length control with the designed RL.
Experiment Setting
Gigaword Dataset The Gigaword dataset is selected for our experiments. The corpus consists of pairs of collected news articles and corresponding headlines (Napoles, Gormley, and Van Durme 2012). We use the standard train/valid/test data splits following Rush, Chopra, and Weston (2015), which are pruned to improve data quality. The processed dataset contains nearly 3.8 million sentences for training, each with one summary. In the ML experiments, to compare with other summarization models under a unified standard, we conduct the experiment on the entire dataset. Results are reported on the standard Gigaword test set, which contains 1951 instances and which we name "test-1951". For the RL experiments, we shrink the training set by sampling 600K pairs from it; the validation/test sets are rebuilt following Song, Zhao, and Liu (2018), with two non-overlapping sets sampled from the standard validation set: "valid-10K" and "test-4k", used for model selection and result evaluation, respectively. Notice that the scores on "test-4k" are much higher than those on "test-1951"; this is because in the standard test set, words in the summary sentences do not frequently occur in the source texts, which makes word prediction during decoding more difficult. We build a dictionary containing the 50,000 words with the highest frequency; all other words are replaced by the "unk" tag. Evaluation Metric Following other summarization work, we evaluate the quality of the generated sentences by the F-1 scores of ROUGE-1 (R-1), ROUGE-2 (R-2) and ROUGE-L (R-L) (Lin 2004).
To measure the LC capacity, Liu, Luo, and Zhu (2018) use the variance of the summary lengths len(y) against the target length l_1. In this paper, we use the square root of this variance (svar), i.e., the root of the mean squared difference between len(y) and l_1.

Implementation details Dimensions of the hidden states for our BiLSTM encoder and one-layer LSTM decoder are both fixed to 512. The size of the vectors b_l and b̃ incorporating the length input is 512, and the number of possible lengths L in LenEmb is 150. We first train our models in supervised ML using Adam (Kingma and Ba 2014) as the optimizer and anneal the learning rate by a factor of 0.5 every four epochs. We also apply gradient clipping (Pascanu, Mikolov, and Bengio 2013) with a range of [-10, 10], and the batch size is set to 64. We then run the RL algorithms on the previously trained LC models with an initial learning rate of 0.00001; the reward r(y^s) in RL is set as the sum of the R-1, R-2 and R-L scores. During RL, the desired length l^s_1 used to sample the sentences is uniformly distributed in the interval [20, 70]. We evaluate the model on the validation set every 2000 iterations and select the model according to its cumulative score of R-1, R-2 and R-L.

Table 3: Performance of length control RL on "test-4k" (ML results also included for comparison). Clearly highest scores (0.4 larger than the second best) are in bold font; scores in italic font are significantly worse (2 lower than the best score).
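Returning to the svar metric defined at the beginning of this subsection, a minimal implementation could look as follows (an illustrative sketch):

```python
import math

def svar(generated_lengths, target_lengths):
    """Square root of the variance of the generated lengths around the requested lengths."""
    sq_err = [(g - t) ** 2 for g, t in zip(generated_lengths, target_lengths)]
    return math.sqrt(sum(sq_err) / len(sq_err))
```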
Note that in our experiments, spaces are not counted towards the sentence length, which is slightly different from Kikuchi et al. (2016).
Experiment Results Analysis
Length Control in ML Although the evaluation score is not the sole objective of this research, it is of interest how exactly the score is affected by the LC capacity. The results of the four LC models under ML are presented in Table 2, with ROUGE scores collected for a desired length of 45. To put the accuracy level of our LC models into context, we list several existing summarization baselines including ABS, ABS+ (Rush, Chopra, and Weston 2015), RAS-LSTM, RAS-Elman and Luong-NMT (Chopra, Auli, and Rush 2016). After individually comparing the two WLI models and the two RLI models, we find that the two proposed models, LenLInit and LenMC, slightly degrade the LC capacity while clearly improving the scores.
In Table 1, we provide a representative example of the summaries generated by the LC models; the results demonstrate that these models are able to output well-formed sentences of various lengths. It is also observed that LenLInit and LenMC perform better on short-sentence summaries in this case. Table 3 displays an overall comparison of all models under RL. We evaluate our models with sentence lengths of 25, 45 and 65, which represent short, medium and long sentences, respectively. Results may vary between training runs since RL is usually unstable, so we repeat training multiple times for each model and report the averaged results.
RL for Length Control
We first present the results of the four LC models under ML. After that, we apply raw self-critical sequence training (SCST) on this basis, without any constraints on the output length. We find that WLI models tend to lose control of the length sharply while increasing accuracy significantly, whereas RLI methods still keep good LC ability. This is mainly because the lengths of the sampled sentences for RLI models are consistent with the input length in most cases; consequently, the training process is stable.
To further investigate the impact that RL has on the LC models, we evaluate the models on all expected lengths within the range [20, 70]. These results are reported in Figure 3, where the x-axis represents the length, and the ROUGE score and svar on the y-axes measure output quality and LC capacity, respectively. For convenience, we take the average of the R-1, R-2 and R-L values as the ROUGE score. Evidently, RL improves the scores over the whole range of lengths but relaxes the LC capacity. The gain in scores is significant on both short and long sentences for WLI models, as well as on short sentences for RLI models, which signifies that RL alleviates the problem caused by the unbalanced distribution of lengths in the training corpus. In particular, LenLInit achieves the highest score among the four models but has poor LC on long sentences. It is worth noting that LenMC with SCST achieves an even higher score than LenInit on short summaries while still preserving excellent LC ability. Since SCST has a negligible effect on LenEmb, we exclude LenEmb from further comparison under length-control RL. In Table 3, the length-control RL results follow the SCST part. We conduct MTS-RL experiments on three LC models; for the WLI models LenLInit and LenInit, accuracy and svar both rise as the selected d_th increases, which means the hyper-parameter d_th in MTS-RL can be used to adjust the LC capacity. However, for the RLI model LenMC, the results show no obvious distinction in scores when different d_th are used. Hence, we adopt the SCD-RL training algorithm for LenMC; the results show that our SCD-RL algorithm can control the LC capacity for the RLI model just as MTS-RL does for the WLI models, and SCD-RL can also manage the LC capacity for WLI models. Overall, the two RL training algorithms prevent the model from length-control collapse and make this capacity controllable via their own hyper-parameters. To make a comprehensive comparison considering all factors, we build a scatter map (see Figure 4) displaying the performance of the models under different training strategies. The x-axis is svar, measuring the LC capacity. To evaluate the scores integrating different lengths, we take the average of the R-1, R-2 and R-L scores at lengths 25, 45 and 65 as the value on the y-axis. From Figure 4, we can draw some intuitive interpretations: (i) SCST as length-control RL for WLI models is extremely unstable. (ii) For models with similar average ROUGE scores, LenMC has strictly better LC capability than LenInit. (iii) Statistically, LenLInit achieves a higher score than LenInit when their svar values are relatively close. (iv) The models with the designed RL algorithms sufficiently cover a wide range of LC capacities with accuracy in a reasonable scope.
Conclusion and Future Work
In this paper, we proposed LenLInit and LenMC, inspired by former work; our modified models improve length-control summarization performance on the Gigaword dataset. Two developed RL algorithms were successfully applied to the length-control models to significantly improve the scores on short, medium and long sentences alike, and to allow users to choose a model with the expected length-control capacity. Given the scarcity of research in this field, further work needs to be pursued. We plan to perform experiments on other tasks such as image captioning and dialogue systems to further verify our RL algorithms. It is also valuable to investigate the mathematical relationship between length-control capacity and evaluation scores, which can be beneficial for model selection. Furthermore, the controllable ability can be extended to other domains such as sentiment or style. | 2019-09-17T08:57:07.000Z | 2019-09-17T00:00:00.000 | {
"year": 2019,
"sha1": "a7da804b0a9379ceaed05b8378f1ca2ca8e6db85",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a7da804b0a9379ceaed05b8378f1ca2ca8e6db85",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
14454019 | pes2o/s2orc | v3-fos-license | Gender Differences in the Application of Spanish Criteria for Initiation of Enzyme Replacement Therapy for Fabry Disease in the Fabry Outcome Survey
Both male/female patients with Fabry disease (FD) may receive enzyme replacement therapy (ERT). Previously published analyses of the Fabry Outcome Survey (FOS; Shire-sponsored) database suggested gender differences in timing of ERT initiation. We assessed alignment of criteria for ERT initiation in the Spanish adult population included in FOS with recommendations of a Spanish national consensus. This retrospective analysis examined baseline clinical data of 88 adults (49 females) enrolled in the FOS database up to August 2014. Thirty-five (39.8%) patients were not receiving ERT: five (12.8%) males and 30 (61.2%) females. Baseline disease severity on the FOS-derived Mainz Severity Score Index was lower in untreated males (median (interquartile range), 0.0 (0.0–1.0)) than treated males (TM; 15.0 (7.5–26.5)), and was similar in untreated and treated females. The percentage of untreated females with at least one criterion for treatment initiation was 76.7% versus 100.0% of treated females (p = 0.0340) and 97.1% (p = 0.0210) of TM. In discordance with Spanish consensus recommendations, a substantial number of females with evidence of FD who might benefit from ERT have not yet initiated treatment. These results suggest unequal gender perceptions with respect to ERT initiation in Spain.
Introduction
Fabry disease (FD) is a rare inherited X-linked metabolic disease secondary to reduction/absence of lysosomal α-galactosidase A activity. As a result, a progressive accumulation of globotriaosylceramide (Gb 3 ) and related glycosphingolipids within lysosomes is believed to produce cellular changes that progressively affect multiple organ systems, determining a natural disease evolution ranging from an asymptomatic status in the first years of life to different clinical presentations with increasing age. FD in adults has a wide variety of phenotypes, from the "classical" severe form in males to a seemingly asymptomatic course in some females. Owing to the X-linked nature of the disease and the potential for skewed X-chromosome inactivation, females can have normal α-galactosidase A activity in plasma/leukocytes with variable signs and symptoms of FD [1,2]. Most heterozygous females develop symptoms with vital organ involvement, usually later than males [1]. FD manifestations may include neuropathic pain, gastrointestinal disturbances, angiokeratomas, hypohidrosis, kidney dysfunction, cardiac valve disease, cardiomyopathy and stroke, resulting in a reduction of health-related quality of life (HRQoL) and an increased risk of premature mortality [3].
Treatment of FD in male/female (pediatric and adult) patients with enzyme replacement therapy (ERT) has been shown to stabilize progressive multiorgan decline and improve clinical outcomes [1,[4][5][6]. ERT reduces plasma and urine Gb 3 and lyso-Gb 3 levels, ameliorates early clinical symptoms such as pain and gastrointestinal symptoms and improves heart rate variability and HRQoL [7][8][9][10]. At the organ level, ERT reduces left ventricular mass (LVM) and ventricular wall thickness, and slows the progression, or stabilizes, mild to moderate nephropathy as assessed by estimated glomerular filtration rate (eGFR) [11][12][13][14][15]. Indeed, the pattern of mortality has changed since the introduction of ERT, from a higher percentage of deaths by renal failure in males and cerebrovascular disease in females, to cardiac disease in both genders [16]. Although few studies have been published specifically describing ERT effects in female patients with FD [4,[17][18][19], a direct comparison of agalsidase alfa ERT effectiveness between male and female patients using data from the Fabry Outcome Survey (FOS) showed "that women are as likely to respond to ERT as men" [20].
Several expert panel-derived guidelines for initiation of ERT have been proposed on the basis of published evidence for efficacy and local health care system variations [21][22][23]. In Spain, a 2005 national consensus document set criteria for initiation of ERT in patients with FD independent of patient gender [24]; an update of this became available in 2011 [25].
FOS, sponsored by Shire, is a global international multicenter registry of patients with a confirmed diagnosis of FD who are receiving, or are candidates for, ERT with agalsidase alfa. Previously published analyses of Spanish patients included in the FOS database suggested gender differences at the time of ERT initiation [26,27]. The aim of our research was to assess the extent to which criteria for ERT initiation in the Spanish adult population included in FOS align with recommendations of a national consensus document.
years in females.
A total of 53 (60.2%) patients were receiving ERT, including 87.2% of males and 38.8% of females. The groups studied comprised 34 treated males (TM; 38.6% of the total sample; 87.2% of all males), five untreated males (UM; 5.7% of the total sample; 12.8% of all males), 19 treated females (TF; 21.6% of the total sample; 38.8% of all females) and 30 untreated females (UF; 34.1% of the total sample; 61.2% of all females). Baseline clinical characteristics are shown in Table 1.
Males and females receiving ERT at baseline began treatment at a median (IQR) age of 41.4 (33.2-50.0) years; age at treatment initiation was independent of gender, even when age at symptom onset was different. Median (IQR) age of symptom onset was lower in TM (14.0 (10.0-25.0) years) than in TF (30.5 (16.0-41.0) years; p = 0.027). The median (IQR) age at treatment initiation in TF, 47.7 (35.8-52.8) years, was not significantly different than the median (IQR) age at data extraction in UF, 45.7 (35.4-57.5) years (p = 0.910). FOS-Mainz Severity Score Index (MSSI) scores of treated male patients were higher than those of untreated male patients, indicating greater disease severity in treated versus untreated male patients. The median baseline FOS-MSSI score was not significantly different between treated and untreated female patients. HRQoL, as measured by EuroQol 5-Dimensions (EQ-5D), was assessed for only 3 of 30 (10%) UF, 6 of 19 (32%) TF and 11 of 34 (32%) TM.
The percentage of patients with proteinuria (recorded as "signs and symptoms" in the FOS database; Table 1) was much lower in UF (16.7%) compared with TM (61.8%; p < 0.0001), and was approximately half that seen in TF (36.8%; p = 0.1730). Analytical values for proteinuria (Table 1) were present in 25.0% of UF, versus 70.6% (p = 0.0250) of TM and 54.5% (p = 0.2140) of TF. The percentage of patients with eGFR <90 mL/min/1.73 m 2 was 30.0% in UF, 36.8% in TF, 0% in UM and 52.9% in TM (Table 1). Baseline microalbuminuria with a renal biopsy suggestive of FD was seen in 1 UF, 1 TF, 2 TM and in no UM. Left ventricular hypertrophy (LVH) recorded as "signs and symptoms" in the FOS database was present in 36.8% of TF and 23.3% of UF (p = 0.3460). UF did not substantially differ from treated patients in disease characteristics such as pain or other markers of renal and cardiac involvement. Neuropathic pain was present in 30.0% of UF, with a distribution of median (IQR) Brief Pain Inventory (BPI) scores for worst (8.0 (8.0-8.0)), least (2.5 (0.0-5.0)) or average (3.5 (0.0-7.0)) pain during the previous 24 h, or for pain intensity at the visit (7.5 (7.0-8.0)) that did not notably differ from those in the other groups of patients. Median (IQR) EQ-5D index score in UF (0.8 (0.7-1.0)) was similar to that in TF (0.7 (0.3-0.7)). A graphical comparison of the proportion of patients by organs affected in TM, TF and UF is displayed in Figure 1. As a result, the percentage of UF fulfilling at least one criterion of the Spanish guidelines for treatment initiation was 76.7%, which differed from the percentage of both TF (100%; p = 0.0340) and TM (97.1%; p = 0.0210; Table 2). The presence of other criteria was as follows: 30.0% of UF met pain criteria (versus 47.4% of TF (p = 0.2420) and 41.2% of TM (p = 0.4370)) and 23.3% of UF met cardiac criteria (versus 52.6% of TF (p = 0.0630) and 55.9% of TM (p = 0.0110)). Further, 43.3% of UF met renal criteria (versus 57.9% of TF (p = 0.3870) and 82.4% of TM (p = 0.0020)). Distribution of patients according to the criteria is shown in Table 2.
Discussion
The study of rare diseases is intrinsically impaired by their low prevalence, making a complete picture of clinical characteristics and management difficult to ascertain. Therefore, global multicenter registries are an essential tool for the study of rare diseases; the FOS registry is one of the largest disease-specific registries of patients with FD. Although inter-center variations in data collection procedures are common, they are addressed by a common protocol that defines the standard data to be collected and unifies them into a single database for analysis. The noninterventional nature of this research method permits the observation of clinical practice variability in management of FD that may lead to detection of unmet needs and disparities or divergences from current recommendations. This paper describes a set of Spanish adult patients at inclusion in FOS according to treatment status and gender. One of the objectives was to present clinical characteristics of this population and to compare the treatment status with the recommendations from a national consensus panel to explore deficiencies in ERT access.
The relatively short period since ERT introduction in 2001, the low prevalence of FD and its slow progression hamper understanding of long-term and patient-related outcomes and mortality. Consequently, international guidelines for ERT initiation are mostly based on expert panel recommendations. As a result, guidelines vary from one country to another, particularly in management of heterozygous females and children [21]. Spanish guidelines existing at the time when the patients entered FOS did include the definitions and criteria from international guidelines, but did not differentiate patients by age or gender regarding when it should be advisable to start ERT [24]. However, the presence of one major criterion in females is consistent with the international consensus document regarding indications for ERT in females when progression of organ involvement is detected [23]. Spanish guidelines were updated in 2011 [25], maintaining the main criteria to start ERT, but refining some definitions and indications for treatment initiation in light of international consensus. In our opinion, the fact that the main criteria to start ERT were the same in the 2011 update as those proposed in 2005, but were more thoroughly defined and less vague in the update, may have increased the number of patients deemed as qualifying for ERT, but without any relevant effect on the number of patients not receiving ERT, especially females; this is one of the most striking findings of our work.
The gender differences observed in access to ERT according to existing guidelines confirm that a proportion of female patients with FD fulfill criteria for ERT initiation, but are not receiving treatment. This finding has previously been suggested in published analyses of the Spanish patients in FOS [26,27], all patients in FOS [20] and other studies from different registries [28][29][30][31][32]. Disease rarity, misconceptions about their carrier status and gender have been proposed as the main drivers for the differential access to ERT for females with FD [33]. Evolving knowledge of the natural history of FD and its management with ERT has changed the consideration of heterozygous females from obligate carriers, mostly asymptomatic or with mild disease, to patients with important clinical features and time-dependent disease progression without ERT [30,34,35]. Evidence from the literature suggests that females are referred less often for diagnostic interventions and treated less aggressively than males [33]. Furthermore, disparities in treatment between genders have been consistently identified for heart [36] and kidney diseases [31,37], among others [32].
The fact that there were no differences in the number of affected organs between UF and TF or between UF and TM suggests the multisystem nature of the disease in both genders in our patients. There was no significant difference between UF and TF in FOS-MSSI scores, and 76.7% of UF fulfilled at least one criterion for ERT initiation. Considered independently, 60.0% of UF fulfilled one renal criterion, most frequently renal impairment (eGFR < 90 mL/min/1.73 m 2 in 30.0%). This is similar to values reported for UF in other registries (e.g., 55% in Ortiz et al.) [28], and similar or slightly lower than those reported in the literature for combined TF and UF (e.g., 58% in Wang et al. [30], 62.5% in Wilcox et al. [38]). Proteinuria (recorded as a sign/symptom) was observed in 17% of UF, similar to the 16% reported for the whole group of Spanish females included in FOS in 2009 [26].
A recent review noted that progressive nephropathy is prominent in FD and although males are more profoundly affected than females, the authors concluded that both males and females should initiate ERT if they have evidence of renal involvement [39]. More than half of UF met the criteria to start ERT based on cardiac involvement. These included LVM index as the major driver, atrioventricular block and LVH. The observed LVH frequency does differ from some reports for all females in previous publications [30,34]; however, prevalence of LVH is age dependent and is higher in older populations. Recently, Hopkin et al. presented data from the Fabry Registry showing that delayed ERT, as well as having experienced a previous clinical event (cardiac, renal or cerebrovascular) before ERT start, are risk factors for an unfavorable evolution and the appearance of new clinical events under ERT, in male and female patients alike [40]. In accordance with this, the absence of timely ERT initiation in female as well as in male patients showing cardiac or renal involvement of FD may jeopardize their clinical evolution. This clearly applies to patients with classic FD; however, atypical milder, later-onset phenotypes have been associated with variant mutations, including cardiac and cerebrovascular variants [41][42][43][44].
Recently, Lenders et al. published the findings of a multicenter German study with 224 genetically confirmed adult female patients with FD [45], investigating their current ERT status at the time of their last visit to analyze whether patients were treated in accordance with current European FD guidelines (class I and IIA/B recommendations) [23]. It is noteworthy that these recommendations are quite similar to the 2011 update of the Spanish recommendations [25]. Lenders et al. found in their cohort that one-third of German females without ERT fulfilled indications for starting it [45]. Unlike the population in the German cohort where TF were older than UF, in our patients we did not see any differences in age between UF at data collection and TF at ERT start. In addition, in our patients as well as in the German cohort, both TF and UF showed a significant number of organs affected by the disease as an expression of multisystemic involvement. Moreover, the main organ manifestations seen in German TF were cardiac and, to a lesser extent, renal [45]; similarly, in our Spanish TF patients, cardiac involvement was somewhat more frequently seen than renal involvement. It thus appears that there are some differences in the management of female compared with male patients with FD in clinical practice in Germany as well as in Spain.
Two limitations of our research should be mentioned. First, the numbers of patients overall, and especially in the various subgroups, are quite small, with missing data for some parameters. The data were collected from an observational registry that was not specifically designed to assess gender differences in ERT initiation. Together with the large number of evaluated outcomes, these factors might have reduced the power and robustness of the statistical tests. Selection bias is a recognized limitation of registry studies and not all patients in our sample had complete data, a reflection of real-world clinical practice.
The second limitation of our study could be the use of ERT initiation criteria according to an updated version of the 2005 recommendations. This update might have increased the sensitivity of cardiac and, to a lesser extent, renal criteria for detecting treatment candidates and, consequently, may have modified the classification for a certain number of patients. It should be noted that these data were collected more than three years after the last recommendations update; therefore, physicians should have had enough time to adopt the latest criteria. In our opinion, failure to do so reflects nonadherence to recommendations regardless of the proposed criteria, rather than the effect of other factors. According to our current knowledge, some female patients with mild renal involvement (e.g., with microalbuminuria and a slight decrease in eGFR to 80-90 mL/min/1.73 m 2 ) could have slow clinical progression; in such patients, without objective signs of other organ damage (cardiac, central nervous system, pain or gastrointestinal symptoms), a personalized approach is warranted and ERT could be delayed with careful and continuous follow-up. Additionally, some of the assessments were not available for all patients (e.g., laboratory results for proteinuria, some echocardiographic assessments and EQ-5D or BPI score).
Nonetheless, from our point of view, the current study is important because it addresses differences based on gender regarding ERT initiation in Spain, somewhat similar to that observed in another European country. As with many diseases, clinicians in the real-world setting derive their best practices from both clinical practice guidelines and their own clinical experience and impressions. There is an ongoing need in the medical community for greater and more widespread knowledge regarding FD and other rare diseases to illuminate our current understanding that heterozygous females with FD may have substantial disease effects. In turn, this can only enhance our efforts to offer the best standard of care for both men and women with FD. Disease registry analyses, such as the current one, can offer valuable insights into real-world disease management.
Design
This was a retrospective analysis of baseline clinical data of adult patients with FD who were managed in Spanish centers and enrolled in the FOS database up to August 2014.
Study Population
Characteristics of the FOS registry that started data collection in 2001 have been described elsewhere [46,47]. Briefly, the FOS registry collects standardized information from patients who are managed at participating centers and provide signed informed consent. FOS has been approved by the ethics committee/institutional review board of all participating centers and all procedures were in accordance with the Declaration of Helsinki of 1975, revised in 2013. Information obtained during routine clinical follow-up includes baseline and clinical laboratory data plus additional information on patient-reported outcomes collected through questionnaires (e.g., pain and HRQoL) [46]. Additionally, disease severity is assessed through the FOS-MSSI, an adaptation of the MSSI to a binary format data input [48,49].
Study Measures
The 2005 Spanish national consensus document stated that ERT should be initiated for FD immediately upon presentation of any one of the following signs or symptoms (major criteria) [24]: severe neuropathic pain, nephropathy (proteinuria >300 mg/24 h in adults or >5 mg/kg/24 h in children; eGFR < 80 mL/min/1.73 m 2 ; renal biopsy), cardiac disease (LVH, ischemic heart disease or arrhythmias) or cerebrovascular disease (clinical or neuroradiological signs). Additionally, ERT initiation may be considered when at least two of the following FD symptoms are present (minor criteria): hypoacusia or vertigo interfering with HRQoL, gastrointestinal manifestations, asthenia, episodic fever, osteoarticular disease, growth delay, microalbuminuria or mild acroparesthesia. In the 2011 update, the former criteria were refined regarding main organ involvement, and it was emphasized that such criteria should be the same for every patient regardless of gender [25]. Specifically, the eGFR criterion was raised to 90 mL/min/1.73 m 2 , more specific electrocardiographic and echocardiographic criteria were considered for diagnosis of cardiac involvement and microalbuminuria was "upgraded" from a minor to a major criterion, but with a renal biopsy with FD findings [25]. eGFR was calculated with serum creatinine values adjusted (if necessary) by the analytical method used at each center to achieve uniformity [50]. LVH was considered when LVM index was ≥51 g/m 2.7 in males or ≥48 g/m 2.7 in females. LVM index was determined by standard M-echocardiography at each participating center and adjusted for height using the Devereux formula [51]. HRQoL was assessed through the EuroQol Group's measure of health status (EQ-5D) [52], using a descriptive system of five categorical dimensions, a visual analog scale ranging from 0 (death) to 100 (full health) and a derived tariff based on population weights from 0-1 with the same extreme anchors [53]. The MSSI consists of four sections covering various signs and symptoms of the disease (general, neurological, cardiovascular and renal), weighted in accordance with their contribution to morbidity. Hence, a global score was obtained to enable patient classification according to disease severity.
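Purely as an illustration of the criteria logic described above (simplified thresholds, hypothetical field names, and in no way a clinical decision tool), the classification of a patient as fulfilling at least one treatment-initiation criterion could be encoded as follows:

```python
def meets_ert_criteria(p):
    """p: dict of findings for one adult patient; illustrative encoding of the consensus criteria."""
    major = [
        p.get("severe_neuropathic_pain", False),
        p.get("proteinuria_mg_24h", 0) > 300,
        p.get("egfr_ml_min_173m2", 999) < 90,                 # 2011 update (2005 used < 80)
        p.get("microalbuminuria_with_fd_biopsy", False),       # upgraded to a major criterion in 2011
        p.get("lvh", False) or p.get("ischemic_heart_disease", False) or p.get("arrhythmia", False),
        p.get("cerebrovascular_disease", False),
    ]
    minor = [
        p.get("hypoacusia_or_vertigo", False),
        p.get("gastrointestinal_manifestations", False),
        p.get("asthenia", False),
        p.get("episodic_fever", False),
        p.get("osteoarticular_disease", False),
        p.get("mild_acroparesthesia", False),
    ]
    # One major criterion, or at least two minor criteria, qualifies the patient.
    return any(major) or sum(minor) >= 2
```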
Statistical Analyses
Descriptive and analytical analyses were performed for the overall sample and for subgroups created according to gender and ERT status: TM, TF, UM and UF. Categorical variables were described by their frequency and percentage. Continuous variables were described by median and IQR (Q1-Q3). Comparisons between two independent samples were made with the Wilcoxon rank-sum test for continuous variables and Fisher's exact test for categorical variables.
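For illustration, the two tests could be run with SciPy as follows (made-up example data, not the study data):

```python
from scipy import stats

# Continuous variable, two independent groups: Wilcoxon rank-sum test.
egfr_treated = [85.0, 92.3, 78.1, 101.4]
egfr_untreated = [95.2, 88.7, 104.0, 110.5]
stat, p_continuous = stats.ranksums(egfr_treated, egfr_untreated)

# Categorical variable: Fisher's exact test on a 2x2 contingency table
# (e.g., presence of proteinuria in treated vs. untreated patients).
table = [[7, 12], [5, 25]]
odds_ratio, p_categorical = stats.fisher_exact(table)
```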
Conclusions
These results suggest gender differences in initiation of ERT in Spain. With reference to recommendations of a Spanish consensus on FD, a substantial number of females with evidence of FD may benefit from ERT but have not yet initiated treatment. The ERT initiation delay in female patients who fulfill the criteria for ERT initiation results in these patients missing the full benefits of treatment and might put them at risk of FD complications with associated morbidity and mortality. | 2017-05-24T06:31:03.514Z | 2016-11-24T00:00:00.000 | {
"year": 2016,
"sha1": "606ab606b7e91b44fd367af8d280e101ede599b9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/17/12/1965/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "606ab606b7e91b44fd367af8d280e101ede599b9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
243865306 | pes2o/s2orc | v3-fos-license | Extend, don’t rebuild: Phrasing conditional graph modification as autoregressive sequence labelling
Deriving and modifying graphs from natural language text has become a versatile basis technology for information extraction with applications in many subfields, such as semantic parsing or knowledge graph construction. A recent work used this technique for modifying scene graphs (He et al. 2020), by first encoding the original graph and then generating the modified one based on this encoding. In this work, we show that we can considerably increase performance on this problem by phrasing it as graph extension instead of graph generation. We propose the first model for the resulting graph extension problem based on autoregressive sequence labelling. On three scene graph modification data sets, this formulation leads to improvements in accuracy over the state-of-the-art between 13 and 24 percentage points. Furthermore, we introduce a novel data set from the biomedical domain which has much larger linguistic variability and more complex graphs than the scene graph modification data sets. For this data set, the state-of-the art fails to generalize, while our model can produce meaningful predictions.
Introduction
Generating or modifying graphs based on natural language texts is a versatile technique that has applications in different subfields of Natural Language Processing (NLP) such as dependency parsing (Manning and Schütze, 2001) or semantic parsing (Oepen et al., 2019, 2020). However, while these tasks can all be viewed as instantiations of conditional graph generation, they have been traditionally addressed as distinct tasks with different data sets, models and evaluation settings. Contrary to that, we are interested in studying the general task of generating or modifying graphs based on textual input. Specifically, we focus on the recently introduced task of conditional graph modification, in which a model is given a graph which it should modify according to natural language instructions (He et al., 2020). Their proposed method first embeds both the graph and the instructions with a joint encoder into an embedding h and then rebuilds the graph using a separate generative model for graphs (You et al., 2018b) conditioned on h. While this approach achieves state-of-the-art results for the Scene Graph Modification (SGM) data sets, we identified two shortcomings of this approach: (i) the model has to newly generate also the parts of the input graph that actually should be left unmodified and (ii) the model uses a separate graph encoder in the generative decoder model, which does not share knowledge with the encoder.
Figure 1: Rephrasing conditional graph modification as autoregressive sequence labelling. The substitution of the 'blue' node is replaced by the extension with the two nodes 'DEL' and 'ADD'. This graph extension problem is phrased as three autoregressive steps of sequence labelling. The original nodes have brown background while the extension nodes are drawn in blue.

We propose an alternative formulation of this problem in which we model the modification as a graph extension instead of graph generation. To this end, we introduce the two special node labels ADD and DEL which allow us to model node insertions, deletions and edge modifications in the graph extension setting. We develop a model for this novel graph extension problem that autoregressively solves a sequence labelling task for each node that is added to the graph. This formulation addresses both shortcomings of the model of He et al. (2020). First, it precisely extends the graph without the need for rebuilding the unmodified parts. Second, it models the graph as text, which allows us to encode the input text, the original graph and the extension with the same encoder and enables the straightforward integration of pretrained language models such as BERT (Devlin et al., 2019). Our proposed model outperforms the state-of-the-art for the three data sets published by He et al. (2020) by a large margin, with improvements between 13 and 26 percentage points (pp). To test the limits of our approach, we furthermore present a new, more challenging graph modification task in which biomedical event graphs have to be modified based on scientific texts. To this end, we transform data from an existing biomedical event extraction task (Ohta et al., 2013) to a graph modification data set. Compared to the SGM data sets, the resulting data set displays much larger linguistic variation in the instruction texts and more complex graph structures. Our experiments show that the state-of-the-art fails to generalize on this data set, while our model is able to produce meaningful predictions. We analyze our model via a detailed ablation study and analyze the errors with respect to the input complexity, which allows us to precisely explain the improvements over the state-of-the-art and suggest routes for even better future models.
To encourage further research on the challenging task of graph modification, we implement the models and data sets in a modular fashion and make the code available under an open-source license 1 .
Related Work
Generating graphs from natural language texts is a central problem in many subfields of NLP.
Many classical problems in NLP such as dependency parsing (Manning and Schütze, 2001) or relation classification (Vu et al., 2016) are text-to-graph problems, but with highly restricted graph structures (e.g. nodes can be only words or named entities respectively). The methods developed for these tasks are typically tailored to these structures and cannot be used for other types of graphs. In contrast, our proposed method can handle arbitrary directed acyclic graphs (DAGs).
There is also significant interest in developing methods that jointly embed graphs and text to exploit graph-based information in NLP, especially for Question Answering based on Knowledge Bases (Lin et al., 2019; Yasunaga et al., 2021). However, these usually treat the graph as a static source of information and cannot be used for generating or extending graphs.
1 https://github.com/leonweber/extend
A task closer to our work in this regard is Cross-Framework Meaning Representation Parsing (MRP; Oepen et al., 2019, 2020). In this task, systems are required to parse text into a general graph-based format in which nodes are not necessarily anchored in the text. The major difference between MRP and our graph modification setting is that in MRP the models always generate the full graph from scratch, while our method modifies an already provided graph. There is strong interest in graph generation also outside the NLP community, e.g. for modelling arbitrary distributions of graphs (You et al., 2018b; Liao et al., 2019) or for generating novel protein structures (Jin et al., 2018; You et al., 2018a). However, these methods neither do graph generation conditioned on textual input nor support the modification of partial graphs.
To the best of our knowledge, the only model that was explicitly developed for modifying arbitrary graphs based on natural language instructions is the model by He et al. (2020). It uses a transformer (Vaswani et al., 2017) to jointly embed text and graph, modelling the graph structure by restricting attention only to neighbouring nodes and by adding edge label embeddings onto the node embeddings. Based on this joint embedding, the modified graph is generated by a separate decoder based on the GraphRNN architecture (You et al., 2018b). While this architecture allows to model graph modification as graph generation, it also requires the model to generate the unmodified parts of the graph again which leaves more room for errors, whereas we only extend the graph leaving the unmodified parts untouched. Furthermore, this formulation uses two separate graph encoders; the transformer for encoding the original graph and the GraphRNN for encoding the partially generated graph. In contrast, our proposed method uses the same encoder for encoding the given graph and its extensions, allowing for more parameter sharing and thus for potentially better graph representations. Our simpler architecture makes the integration of BERT-style pretrained language models (Devlin et al., 2019;Gu et al., 2020) straightforward, however this only partly explains the observed gains over He et al. (2020) (see Section 5.3).
Methods
The task of graph modification is to modify a graph G into a graph G' according to natural language instructions t. To this end, let G = (N, E) be a DAG consisting of nodes N and edges E ⊆ N × N, along with node labels l_n ∈ L_n ∪ {NONE} and edge labels l_e ∈ L_e ∪ {NONE}. Let G' = (N', E') be defined analogously. We develop a model to estimate p(G' | t, G). Here, we first present our approach for the problem of graph extension, i.e., the case N ⊆ N' and E ⊆ E'.
Then, we show how general graph modification can be reduced to this case. To this end, we identify three ways in which a graph can be modified: a node can be added, a node can be removed or the edges of an existing node can be modified. We show how we can model all three cases by introducing the two special node labels ADD and DEL which can be used to model node insertions and deletions in a graph extension setting. Both, ADD and DEL identify the argument that should be added or deleted via a special theme edge.
Explanation by Example
A visual explanation of our method can be found in Figure 1 and we will use it as a running example in the text. In this graph, we have the three nodes boy, shirt and blue and the edges (shirt, on, boy) and (blue, color-of, shirt). We want to change the color of the shirt from blue to red. For this, we extend G by a DEL node with its edge (DEL, theme, blue) and a red node with its edge (red, color-of, shirt). In our autoregressive sequence labelling framework, this extension of the original graph by two nodes is modelled as three successive calls to a sequence labelling model that receives as input the input text and a linearized form of the current graph. In the first call to the sequence labelling model, the input consists of the text and a linearization of the original graph. As output, the model produces the first extension node with label DEL and one edge to the node with the label blue (DEL, theme, blue). This is achieved by labelling the CLS token as DEL and the linearized representation of the blue node with theme. In the next step, the model receives as input the text and a linearized form of the now partially extended graph and predicts the next extension by labelling CLS with ADD to predict the node label. Additionally, the model predicts the two edges (ADD, theme, red) and (red, color-of, shirt). The predicted edge (ADD, theme, red) reflects the addition of a new node with the label red and is modelled by labeling the word red in the text with self. Note that this also produces an anchoring of the red node to the labeled span in the text. The edge (red, color-of, shirt) is produced by labeling the linearized form of the shirt node with color-of. After receiving the text and the further extended graph as input, the model signals the end of the extension process by labeling CLS with NONE.
Graph Extension as Autoregressive Sequence Labelling
We formulate the graph extension problem autoregressively. That is, we extend the provided graph one node at a time, together with the corresponding edges starting at that node. Let N+ = N' \ N be the extension nodes in our graph, and π = (n+_1, n+_2, ..., n+_{|N+|}) an ordering thereon. Note that the size |N+| of the extension is known during training, while at test time we abort the generative process after a fixed number of steps, which we treat as a hyperparameter, or when the model predicts a node with label NONE. We write G_i = G[N ∪ {n+_j}_{j≤i}] for the subgraph induced by the union of all nodes N from the original graph with the additional nodes up to n+_i in π. Let

p(G', π | t, G) = ∏_i p(n+_i, {e_ij}_j | t, G_{i−1})

be the joint probability of G' and the ordering π, where {e_ij}_j are all edges from n+_i to all available nodes n_j. As multiple orderings can lead to the same graph, the total probability of a graph G' is

p(G' | t, G) = ∑_π p(G', π | t, G),

where the sum is over all possible orderings. In practice, marginalising over all possible orderings is infeasible. Therefore, we impose two conditions on the ordering to reduce their number, often even making it unique. First, π must be a topological ordering, which exists as G' is a DAG. Second, among the topologically sorted orderings, we impose an additional order on node labels. In our running example, there are two possible topological orderings of the extended graph, because either extension node could be added first. However, we impose that DEL nodes always come before ADD nodes, making π unique in this case.
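The resulting test-time procedure can be summarised by the following high-level sketch; predict_next and linearize stand in for the BERT-based labelling model and the graph linearization introduced below, and the graph interface is an assumption:

```python
def extend_graph(predict_next, linearize, text, graph, max_steps=10):
    """Greedy autoregressive extension loop (illustrative sketch, not the released code).

    predict_next(text, graph_str) -> (node_label, [(target_node, edge_label), ...])
    """
    for _ in range(max_steps):
        node_label, edges = predict_next(text, linearize(graph))
        if node_label == "NONE":           # the model signals that no further extension is needed
            break
        new_node = graph.add_node(node_label)
        for target, edge_label in edges:   # all new edges start at the newly added node
            graph.add_edge(new_node, target, edge_label)
    return graph
```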
We estimate p(n+_i, {e_ij}_j | t, G_{i−1}) by formulating it as a sequence labelling task over a combination of the provided text t and a textual representation of G_{i−1}. We solve the resulting sequence labelling task with BERT. For this, we first linearize G_i into a textual representation t_{G_i}. We treat the exact form of the linearization as a hyperparameter, with the only constraint that every node n ∈ G_i is represented by a unique span denoted span(n). A possible linearization for the original graph in our running example would be boy | shirt | blue | shirt on boy | blue color-of shirt.
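A minimal sketch of one such linearization, matching the example format above (the exact format is a hyperparameter, and this is only one possible choice):

```python
def linearize(node_labels, edges):
    """node_labels: list of node labels; edges: list of (source, edge_label, target) triples."""
    node_part = " | ".join(node_labels)
    edge_part = " | ".join(f"{u} {lbl} {v}" for u, lbl, v in edges)
    return f"{node_part} | {edge_part}" if edges else node_part


# Reproduces the running example: "boy | shirt | blue | shirt on boy | blue color-of shirt"
print(linearize(["boy", "shirt", "blue"],
                [("shirt", "on", "boy"), ("blue", "color-of", "shirt")]))
```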
We use the linearization to jointly predict the label of the added node n+_i and its edges {e_ij}_j. To generate the prediction, we concatenate the instruction text t and the linearized graph t_{G_{i−1}} and feed this sequence into BERT. The model predicts the label of n+_i using the embedding of BERT's [CLS] token, h_[CLS]. Then, the model predicts the labels of the outgoing edges of n+_i to all possible target nodes j. This is achieved by marking j's span in the linearized graph with the corresponding edge label (including NONE) in an IOB-tagging scheme.
That is, for producing a sequence labelling that represents the addition of the DEL node with its edge (DEL, theme, blue), we would mark the [CLS] token as DEL and the token blue as B-theme, with all other tokens being labelled as O.
We then estimate the joint probability of n+_i's label and the labels of the edges e_ij from n+_i to all possible target nodes j, by conditioning the edge probabilities on the node label. To this end, we first predict the node label from the embedding of the [CLS] token, p(n+_i | t, G_{i−1}) = softmax(W_N h_[CLS]). We then predict the single edge probabilities conditioned on the node label from the token embeddings h_k of j's span, with k ranging over each token, using edge classification layers that are specific to each node label. For modelling the joint probability, we assume independence between the edges, i.e., p(n+_i, {e_ij}_j | t, G_{i−1}) = p(n+_i | t, G_{i−1}) ∏_j p(e_ij | n+_i, t, G_{i−1}), where W_N is the node-classification layer.
For training, we use the negative log-likelihood −log p(n+_i, {e_ij}_j | t, G_{i−1}) as the loss, together with teacher forcing (Williams and Zipser, 1989). For prediction, we employ greedy search, choosing arg max p(n+_i, {e_ij}_j | t, G_{i−1}) at each step. In some applications, such as semantic parsing, it can be necessary to anchor some nodes to the text, i.e. assign a specific span in t to a node (Oepen et al., 2020). For instance, this can be used to encode that a certain semantic concept represented by a node is expressed in a specific span in the text. Our proposed autoregressive sequence labelling framework provides natural support for such a node anchoring, by including edges to spans in t. This is modelled by labelling the anchor spans in t with the desired edge type. Refer to Figure 1 for an example in which the ADD node, added in the second pane, has an edge to the span red in t. This edge triggers the creation of one additional node with label red which encodes the anchoring information.
Modelling Graph Modification as Graph Extension
We now formulate the graph modification problem as described in He et al. (2020) as a graph extension task. In contrast to the formulation as graph generation, this framework does not require the model to reproduce the unmodified parts of the graph. We produce an extended graph G' from the original graph G, which contains information on the modifications to apply. From this graph, an application-independent postprocessing can trivially calculate the modified graph G_m. We distinguish three different ways in which G can be modified: (1) an existing node n is deleted, (2) a node n is added and (3) the edges of an existing node n are changed.
For case (1), we add a DEL node with a single theme edge to n. In the post processing, we remove all nodes (and connected edges) that have edges from DEL nodes.
For case (2), we introduce an ADD node that adds n and all its outgoing edges. To determine the label of the added node, we extend G by another node representing the label of n. Modelling the label of added nodes in this way instead of predicting it directly allows us to optionally use anchor nodes to determine the labels of added nodes. This can drastically reduce the size of the output space if the number of node labels is very large. For instance, in our running example, we want to add a node with the label red. The proposed model can achieve this in two ways: the first option is depicted in Figure 1, where it predicts the ADD node and marks the span "red" in t as the special theme edge. This is then interpreted as a prediction of the ADD node and that of an anchor node with label red. The second option is to first generate a node with label red and, in a later generation step, the ADD node with a theme edge to the red node. Both lead to the same graph (third pane in Figure 1), with the exception of the anchor edge, which would only be present under the first option. In the post processing, we change the label of n from ADD to the label of the additional label node and then remove the label node.
For case (3), we model modified edges of n as a sequence of deletions and additions by deleting n and then adding it back with the modified edges using the operations described in cases (1) and (2).
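The following is a rough sketch of how such an application-independent post-processing could look; the networkx representation and the attribute names ("label", "role", "theme") are illustrative assumptions, not the authors' implementation.

```python
import networkx as nx

def postprocess(g: nx.DiGraph) -> nx.DiGraph:
    """Apply DEL/ADD extension nodes to obtain the modified graph G_m."""
    g = g.copy()
    # case (1): remove every node targeted by a DEL node, then the DEL node itself
    for d in [n for n, lab in g.nodes(data="label") if lab == "DEL"]:
        if not g.has_node(d):
            continue
        for target in list(g.successors(d)):
            if g.has_node(target):
                g.remove_node(target)
        g.remove_node(d)
    # case (2): give each ADD node the label carried by its attached label/anchor node
    for a in [n for n, lab in g.nodes(data="label") if lab == "ADD"]:
        label_nodes = [t for t in g.successors(a) if g.edges[a, t].get("role") == "theme"]
        if label_nodes:
            g.nodes[a]["label"] = g.nodes[label_nodes[0]]["label"]
            g.remove_node(label_nodes[0])
    return g
```

Case (3) needs no extra handling here, since it is expressed as a deletion followed by an addition.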
Experimental Setup
We evaluate our model on three data sets for SGM and a novel Biomedical Event Graph Completion data set.
Scene Graph Modification
SGM is a task defined by He et al. (2020). The model is given a scene graph and modification instructions in natural language and has to produce a new version of the graph that was modified according to the instructions. He et al. (2020) published three data sets: MSCoco, GCC and CrowdSourced. The first two were created synthetically from publicly available data sets (Lin et al., 2014;Sharma et al., 2018), while the instructions of the third were generated via crowdsourcing. Data set statistics can be found in Table 1. We rephrase the graph modification task as a graph extension problem as described in Section 3.3. We found that in these data sets the labels of almost all extension nodes appear in the modification prompts verbatim. Accordingly, we introduce additional anchoring nodes by exact string matching of the node label with the textual instructions. While this means that our model now has to add twice as many nodes (one anchor and one ADD node per additional node), it allows us to reduce the output space for the node label from 14,873 / 26,827 / 5,747 to two (ADD and DEL) for the training sets of MSCoco, GCC and CrowdSourced respectively.
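As an illustration of this anchoring step (not the authors' implementation), a simple exact-match search over the instruction tokens might look as follows; tokenization details are assumed.

```python
def find_anchor_span(instruction_tokens, node_label):
    """Return the [start, end) span where the node label appears verbatim, else None."""
    label_tokens = node_label.split()
    for i in range(len(instruction_tokens) - len(label_tokens) + 1):
        if instruction_tokens[i:i + len(label_tokens)] == label_tokens:
            return i, i + len(label_tokens)
    return None  # no verbatim match: fall back to predicting the label directly
```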
The edges in all three SGM data sets are undirected, but for our proposed framework we require the graph to be directed. Thus, we transform the undirected graphs to DAGs by defining the directions of edges between extension nodes and original nodes {n+, n} with n+ ∈ N+, n ∈ N to go from n+ to n. The rest of the directions are assigned arbitrarily.
For SGM, we linearize the graphs by writing out all nodes as each comes with a unique natural language label such as 'shirt' or 'blue'. Additionally, we write out all (directed) edges (u, v) in the form <u> <edge-label> <v>. See Figure 1 for a detailed example.
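A minimal sketch of this linearization, assuming a node-id-to-label mapping and a list of directed, labelled edges; the exact separators and the example edge label are assumptions.

```python
def linearize_scene_graph(node_labels, edges):
    """node_labels: {node_id: label}; edges: [(u, v, edge_label), ...] with u, v node ids."""
    parts = list(node_labels.values())                                   # write out all nodes
    parts += [f"{node_labels[u]} {lab} {node_labels[v]}" for u, v, lab in edges]
    return " ".join(parts)

# e.g. linearize_scene_graph({0: "shirt", 1: "blue"}, [(0, 1, "attribute")])
# -> "shirt blue shirt attribute blue"
```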
Biomedical Event Graph Completion
Biomedical Event Extraction is an information extraction task in which events that model biomedical processes have to be extracted from text (Ohta et al., 2013). These events are defined by their trigger (a typed span in the text) and their arguments, which can be other events or named entities and have a type called role. Together, all events and named entities in a given text form a directed graph which we call Text Event Graph (TEG). The nodes of the TEG are comprised of all named entities and all events in the text with their provided labels. The edges in the TEG always originate from event nodes and can have other event nodes or entity nodes as targets. An example TEG can be found in Figure 2.
We transform a Biomedical Event Extraction data set to a graph modification data set by randomly deleting event nodes and asking the model to recover them. Specifically, we use the BioNLP 2013 Pathway Curation (PC13) dataset (Ohta et al., 2013), split it into sentences and then randomly delete between zero and three event nodes, with the constraint that no more than 75% of the events can be deleted. We treat event hedging (negation and speculation) as special event nodes with one edge to the modified events. Furthermore, we delete all triggers (which correspond to anchor nodes), so that we can also evaluate the method of (He et al., 2020) which does not have support for anchor nodes. Statistics of the resulting data set can be found in Table 1. Notably, the PC13 data set differs considerably from the three SGM data sets in important respects. First, the task presented by the data set is a pure graph extension task as it is only necessary to add nodes. Second, both original and modified graphs are much larger than the scene graphs both in terms of nodes and of edges. Third, the PC13 data set is much smaller in terms of examples. Fourth, the PC13 data set possesses a much larger linguistic variability which is reflected in a larger variability in node labels, because all named entities of the text appear as nodes in the graph. This leads to a very high number of node labels and words in the text which appear in the dev/test set but not in the training set (35-67% vs 8-11% in the CrowdSourced data set). Overall, we consider this data set as considerably more challenging than the SGM data, which is reflected in much lower performance (see Section 5) in our experiments. Importantly, to correctly modify the graph, the models frequently have to generate more than one node with multiple edges, making it harder to achieve a prediction that is correct on the graph level than for the SGM data sets, where the modifications are limited to a change of exactly one node.
For graph linearization, we first write out all entity nodes using their associated text attributes. We append a linearization of each event e that consists of its label together with all edges e, n in the form <edge-label> ( <n> ). If n is an event itself, we use its linearization, which is possible because the graph is free of cycles. An example linearization can be found in Appendix B.
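The recursive linearization described here could be sketched as follows (data layout assumed); it terminates because the event graph is acyclic, and entities (which have no outgoing edges) reduce to their text.

```python
def linearize(graph, node):
    """graph[node] = {"label": str, "edges": [(edge_label, target_node), ...]}."""
    args = "".join(
        f" {edge_label} ( {linearize(graph, target)} )"
        for edge_label, target in graph[node].get("edges", [])
    )
    return graph[node]["label"] + args

# yields strings such as "regulation cause ( stat1 ) theme ( pathway participant ( ifn-gamma ) )"
```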
Evaluation Metrics & Baselines
We follow He et al. (2020) and report the metrics graph accuracy, node F1 and edge F1. For graph accuracy, we define a predicted graph to be correct if it is isomorphic to the ground truth under the constraint that the labels of nodes and edges match. In the SGM data sets, the labels of the nodes are typically unique in a given graph and thus can be used to define precision and recall for nodes and edges in their standard formulation.
For PC13, there are usually multiple nodes with the same label, which makes it necessary to use an alternative definition for whether an extension node n+ is present in some reference graph G_r. We define that n+ ∈ G_r if the subgraph induced by n+ and its descendants is isomorphic to a subgraph in G_r. Because each n+ corresponds to an event and the event is fully specified by this descendant subgraph, this formulation corresponds to the standard evaluation protocol in the BioNLP shared task series (Ohta et al., 2013), with the exception of anchor nodes, which we disregard to allow for a fair comparison to He et al. (2020). We compare our model on all data sets with the best configuration reported by He et al. (2020), which is their cross-attention model that jointly embeds text and graph with a transformer. As an additional baseline, we use the CopySource baseline, which simply predicts the unmodified source graph.
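One way the label-constrained graph-accuracy check could be realized is via networkx's graph matcher; the attribute name "label" is an assumption.

```python
import networkx as nx
from networkx.algorithms.isomorphism import (
    DiGraphMatcher, categorical_node_match, categorical_edge_match,
)

def graph_accuracy(pred: nx.DiGraph, gold: nx.DiGraph) -> bool:
    """Correct iff the graphs are isomorphic with matching node and edge labels."""
    matcher = DiGraphMatcher(
        pred, gold,
        node_match=categorical_node_match("label", default=None),
        edge_match=categorical_edge_match("label", default=None),
    )
    return matcher.is_isomorphic()
```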
Training Details
For the SGM data sets, we use the bert-base-uncased (Devlin et al., 2019) model of HuggingFace transformers (Wolf et al., 2019) as our pretrained transformer. We optimize our models with Adam (Kingma and Ba, 2015) using a batch size of 16 and a learning rate of 3e-5 for 100 epochs on CrowdSourced and for 20 epochs on the two other data sets.
For PC13, we use the HuggingFace transformers' version of BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext (Gu et al., 2020), a version of BERT trained on biomedical texts, as the transformer and train for 100 epochs, using a batch size of 16 and a learning rate of 3e-5.
For all datasets, we abort the graph extension process after 10 generated nodes.
Comparison with State of the Art for Scene Graph Modification
Results for the SGM data sets can be found in Table 2. On all data sets, our proposed method outperforms the model of He et al. (2020) by 13 to 26 pp accuracy. The improvement is especially pronounced on the CrowdSourced data set. We attribute this stronger improvement to two characteristics of the data set that make it benefit from using pretrained language models. First, compared to the two other data sets, CrowdSourced is the only non-synthetic one, which leads to a larger linguistic variability. Second, it is much smaller. For both characteristics, a pretrained language model such as BERT is an ideal solution, as it alleviates the need for large training data and was exposed to a lot of linguistic variation during pretraining. We verified this hypothesis by enriching the graph and text embeddings of the model of He et al. (2020) with fine-tuned BERT embeddings (see Appendix A for details), which improved accuracy by 10 pp on CrowdSourced but diminished results on the two other SGM data sets. To quantify the advantage that a graph extension formulation has over graph generation, we analyzed how many errors the model of He et al. (2020) made because it incorrectly reconstructed the subgraph that should be left unmodified. As the weights of the models reported in He et al. (2020) are not publicly available, we retrained a model using the authors' implementation and the reported choice of hyperparameters on the CrowdSourced data set. For the 1000 development examples, the resulting model produced 403 incorrect graphs. Almost 50% of these errors (181) were due to incorrect reconstructions of the original graph, whereas the proportion is only 16% for our model. This confirms our hypothesis that reformulating graph modification as graph extension instead of graph generation helps to avoid a large proportion of errors in reconstructing the parts of the graph that should be left unmodified.
Performance in BioNLP Event Graph Completion
Results for the PC13 data set can be found in Table 3. Our proposed method achieves a graph accuracy of 47.12% and improves upon the CopySource baseline by over 2pp. However, both in terms of Node F1 and Edge F1, CopySource performs better than our method, which indicates that when our model wrongly extends a graph it does this frequently by introducing more than one wrong node or edge. The model of He et al. (2020) fails to produce meaningful predictions, achieving only 1.09% accuracy. We attribute this to the high rates of tokens and node labels that appear in the test set but not in the training set (48% of the tokens and 67% of the node labels). Because this model attempts to reproduce the whole graph and because it treats each node label as a class, it has no appropriate mechanism for predicting modifications of graphs that have a large number of unknown node labels. Additionally, we expect the model to struggle with a large number of unknown tokens in the instruction, because it does not use pretrained embeddings. This hypothesis is supported by the fact that the model achieves 71.67% accuracy on the PC13 train set as opposed to the 1.09% on the test set. Note that this failure to generalize cannot be explained purely by the absence of a pretrained model component, because our proposed model still performs much better when the pretrained component is ablated (see Section 5.3).
Table 2: Comparison with state-of-the-art on the scene graph modification test sets. Baseline results are taken from He et al. (2020) including missing values for CopySource and results for Modified Graph RNN (You et al., 2018b), Graph Transformer (Cai and Lam, 2020) and DCGCN (Guo et al., 2019). Results marked with a '*' denote results obtained by us.
Explaining the Performance Gains via Ablations
We performed an ablation study on the development sets of CrowdSourced and PC13 to identify the source of performance gains achieved by our model compared to He et al. (2020). Results can be found in Table 4. The ablation of the pretrained language model BERT, in which we used the same architecture as in our original model but initialized all parameters from scratch, led to a decrease in accuracy of 4.9pp on CrowdSourced and 6.8pp on PC13. Note that for PC13 the results without BERT are worse than the results of the CopySource baseline, which confirms our hypothesis that for this data set a pretrained language model is required to generalize well.
We also investigated how important the type of graph linearization is. To this end, we tested a variation of the linearization for each of the two data sets: For PC13, we changed the proposed linearization that contains all information about the event graph to text that is formulated closer to natural language, which has the downside that argument edges to other events may not be uniquely represented, but might be easier for the language model to analyze. For instance, we would change regulation cause ( stat1 ) theme ( pathway participant ( ifn -gamma ) ) to regulation of pathway containing ifn-gamma by stat1. This led to a decrease in accuracy of roughly 2pp, indicating that uniqueness and full information in the graph linearization might be more important than natural-sounding language. For CrowdSourced, the linearization is already natural sounding and unique. We evaluated a linearization without any edge representations, retaining only a list of the contained nodes, to test whether our proposed model makes use of information relating to the graph topology. This led to a pronounced drop in accuracy of roughly 26pp, which verifies that our proposed model makes use of the edge information and that a full representation of the graph is required for strong performance on this data set.
Furthermore, we checked whether the conditioning of the edges on the node label is beneficial, as it comes at the price of increasing the number of parameters in the output layer by a factor of |L_n|. For this, we evaluate a variant of our model in which we treat the prediction of node label and edge labels as independent. For PC13, this leads to a decrease of over 3pp in accuracy, which we expected, because the allowed edge labels differ strongly depending on the node label. On CrowdSourced, the ablation of this dependency actually improved accuracy by roughly 0.5pp. We hypothesize that this is because there are only two node labels which can be predicted in this data set (ADD and DEL) and thus there are no strong dependency relations between node and edge labels. This leads to redundant parameters in the output layer which have to be learned from the same amount of training data.
Finally, we suspected that a large factor of the improvements over the He et al. (2020) model is the introduction of anchors, which essentially transforms the generative task of predicting the node label into a discriminative sequence labeling task. To test this, we perform an ablation in which we remove all anchors and instead extend the graph with a node that has the appropriate label. As this drastically increases the number of node labels, conditioning the edge labels on the node labels leads to out-of-memory exceptions on a single Nvidia RTX 3090 with 24 GB of RAM. Thus, we use the unconditional probabilities mentioned in the ablation of the conditional probability. We found that the ablation of anchors indeed led to a notable drop of almost 4pp in accuracy but that other factors such as the pretrained language model and the graph linearization had a much larger effect.
Error analysis
We analyzed the performance for predicting missing nodes on the PC13 development set with respect to various characteristics of the input data. Note, that here, precision, recall and F1 are calculated with respect only to extension nodes, while node F1 is calculated with respect to all nodes.
First, we investigated the effect of error accumulation. To test this, we analyzed how the precision of a node behaves as a function of the step at which it was predicted (the index in π, see 3.2). Surprisingly, we did not find a clear trend with precision being roughly 51%, 33% and 50%, for the first, second and third step, respectively. However, there was a strong trend for decreasing recall with the number of missing event nodes in the graph with 54%, 34% and 17% for one, two and three missing nodes. This might be because of the small amount of training examples with multiple missing nodes (see Table 1) or due to error accumulation.
Additionally, we found a moderate negative correlation between the number of nodes in the input graph G and precision (Pearson's r = −0.47) and a stronger negative correlation between the number of nodes and recall (Pearson's r = −0.71). This indicates that one route to further improve our proposed model might be to strengthen its ability to reason about complex graphs.
We conjectured that ADD nodes would be much harder to predict than DEL nodes, because DEL nodes have exactly one edge with only one possible edge label (theme), whereas ADD nodes can have arbitrarily many edges. Indeed, we found that our proposed model achieved an F1 score of over 97% for DEL nodes, as opposed to 76% for ADD nodes on the development portion of the CrowdSourced data set, confirming our hypothesis.
Conclusion
We have developed a novel formulation of the conditional graph modification problem as conditional graph extension. This allows us to only generate the modified parts of the graph as opposed to rebuilding the full graph. Additionally, our model uses only one encoder for both graph and text, allowing for maximum parameter sharing, and can make use of pretrained language models such as BERT. On three SGM data sets and on a newly introduced biomedical event graph completion data set, the proposed model outperforms the state-of-the-art. Our error analysis highlights that performance degrades for larger input graphs. Thus, we plan to evaluate whether usage of more sophisticated Graph Neural Networks would improve results for these cases. We are also interested in applying our conditional graph modification framework to other tasks such as graph-based semantic parsing and knowledge graph completion, as this might yield a unified framework for many standard NLP tasks.
A Integrating BERT into He et al. (2020)
We evaluate a modified version of the model of He et al. (2020) in which we integrate a finetuned BERT component. To explain this modification, we use the notation of He et al. (2020) in which y ranges over the text tokens and x over the nodes of the unmodified graph. For this, we use Flair (Akbik et al., 2019) together with the bert-base-uncased model to calculate embeddings for tokens $h_y$ and nodes $h_x$. To represent nodes, we use the node label as input to BERT and if there are multiple subword tokens per token or node label, we use the embedding of the first. Then, we fuse the original token embeddings $m_y \in \mathbb{R}^d$ and node embeddings $m_x \in \mathbb{R}^d$ with two newly introduced single-layer Multilayer Perceptrons:
$$\tilde{m}_x = W_x^{(2)}\,\sigma\!\left(W_x^{(1)}\,[m_x; h_x]\right), \qquad \tilde{m}_y = W_y^{(2)}\,\sigma\!\left(W_y^{(1)}\,[m_y; h_y]\right),$$
where $W_x^{(1)} \in \mathbb{R}^{(d+768) \times d}$, $W_x^{(2)} \in \mathbb{R}^{d \times d}$, $W_y^{(1)} \in \mathbb{R}^{(d+768) \times d}$, $W_y^{(2)} \in \mathbb{R}^{d \times d}$ and BERT are the additional trainable parameters and $[\cdot;\cdot]$ denotes concatenation. The resulting modified token embeddings $\tilde{m}_y$ and node embeddings $\tilde{m}_x$ are then used in place of the original ones, leaving the rest of the implementation unchanged. | 2021-11-10T14:19:55.469Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "47f3f165f05d9cd2eca29d1559cb65824641db6b",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2021.emnlp-main.93.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "19706277db4d4dfa92950552a19eafff077c9724",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
256060011 | pes2o/s2orc | v3-fos-license | FDG-PET hypermetabolism is associated with higher tau-PET in mild cognitive impairment at low amyloid-PET levels
FDG-PET hypermetabolism can be observed in mild cognitive impairment (MCI), but the link to primary pathologies of Alzheimer’s diseases (AD) including amyloid and tau is unclear. Using voxel-based regression, we assessed local interactions between amyloid- and tau-PET on spatially matched FDG-PET in 72 MCI patients. Control groups included cerebrospinal fluid biomarker characterized cognitively normal (CN, n = 70) and AD dementia subjects (n = 95). In MCI, significant amyloid-PET by tau-PET interactions were found in frontal, lateral temporal, and posterior parietal regions, where higher local tau-PET was associated with higher spatially corresponding FDG-PET at low levels of local amyloid-PET. FDG-PET in brain regions with a significant local amyloid- by tau-PET interaction was higher compared to that in CN and AD dementia and associated with lower episodic memory. Higher tau-PET in the presence of low amyloid-PET is associated with abnormally increased glucose metabolism that is accompanied by episodic memory impairment.
Introduction
In Alzheimer's disease (AD), alterations in glucose metabolism as assessed by [ 18 F]fluorodeoxyglucose positron emission tomography (FDG-PET) are a common pathological hallmark [1]. Specifically, FDG-PET hypometabolism within temporoparietal regions is commonly observed in AD dementia and earlier AD stages, including in amyloid-positive mild cognitive impairment (MCI; i.e., prodromal AD) [2] and cognitively normal (CN) elderly at genetic risk of AD [3]. However, FDG-PET metabolism shows complex changes during the course of AD, where not only reductions but also increases in FDG-PET metabolism have been reported across CN amyloid-positive subjects [4] and subjects at genetic risk of AD [5,6] and MCI [7]. Thus, clinical staging of cognitive symptoms does not correspond to FDG-PET alterations in a straightforward manner.
Studies using amyloid-and tau-PET imaging suggest that these pathologies are important predictors of regional FDG-PET alterations. For amyloid-PET, elevated global levels of amyloid-PET have been associated with reduced FDG-PET in both AD dementia [8] and MCI [9]. However, increased FDG-PET has also been observed in association with elevated amyloid-PET [4]. Furthermore, there is a poor regional match between amyloid-PET and FDG-PET in typical [10] and atypical AD [11] suggesting that amyloid-PET alone cannot fully account for FDG-PET alterations. Results from tau-PET studies suggest that tau pathology may be an important modulating factor of FDG-PET [12][13][14]. Results from recent studies in elderly asymptomatic CN revealed an interaction between amyloid-and tau-PET, where higher tau-PET was associated with higher FDG-PET at low levels of amyloid-PET, but with lower levels of FDG-PET at high levels of amyloid-PET [15,16]. These results provide an intriguing model of the dynamic bidirectional changes in relationship to beta-amyloid (Aβ) and tau pathology. The focus on biomarkers of Aβ and tau pathology rather than the clinical diagnosis of AD allows to investigate the effect of different mixtures of both pathologies on FDG-PET changes and cognitive impairment. This is important because even in the absence of abnormal levels of Aβ, abnormal tau-PET levels can be observed in higher cortical brain areas in a substantial number of elderly subjects, where higher tau-PET was associated with cognitive impairment [17]. However, the association of higher tau-PET with FDG-PET alterations at varying levels of Aβ in symptomatic elderly subjects is unclear. In order to address this research gap, we examined both the main and interaction effects of [ 18 F]AV45 amyloid-PET and [ 18 F]AV1451 tau-PET on FDG-PET in subjects with amnestic MCI. Furthermore, we tested whether the observed higher levels of FDG-PET represent abnormally increased FDG-PET, i.e., FDG-PET hypermetabolism, and whether such increases in FDG-PET are beneficial or detrimental for cognition.
Participants
All subjects were recruited within the Alzheimer's Disease Neuroimaging Initiative (ADNI phase III; http://adni.loni.usc.edu/) [18]. Inclusion criteria for the current study beyond those of ADNI were a diagnosis of MCI at the PET acquisition visit (Mini-Mental State Examination (MMSE) > 24, Clinical Dementia Rating (CDR) = 0.5, objective memory loss on the education-adjusted Wechsler Memory Scale II, preserved activities of daily living) and the availability of [18F]AV1451 tau-PET, [18F]AV45 amyloid-PET, and [18F]FDG-PET up to 6 months apart. From the total sample of 74 MCI subjects fulfilling the inclusion criteria, two subjects failed preprocessing and were excluded, yielding a final sample of 72 MCI subjects. Apolipoprotein E (APOE) genotyping was available as well.
In addition to the MCI group with all three PET modalities, a group of 70 cerebrospinal fluid (CSF) Aβ- and p-tau181-negative CN subjects (MMSE > 24, CDR = 0) and 95 AD dementia subjects (MMSE < 26, CDR > 0.5, fulfillment of NINCDS/ADRDA criteria for probable AD) [19] were also included to assess group-level differences in regional FDG measures. These subjects were recruited in ADNI phase II and were selected for the current study based on the availability of FDG-PET and CSF biomarkers of Aβ and tau. CN subjects were asymptomatic and Aβ and phosphorylated tau (p-tau) negative based on a quantitative CSF threshold (Elecsys CSF immunoassay; Aβ1-42 > 976.6 pg/ml, p-tau181 < 21.8 pg/ml [20]). AD dementia subjects were diagnosed based on ADNI diagnostic criteria and were CSF biomarker positive (Elecsys CSF immunoassay; Aβ1-42 < 976.6 pg/ml, p-tau181 > 21.8 pg/ml [20]).
MRI and PET acquisition
All MRI data were obtained on 3-T scanner systems at each ADNI site according to standardized protocol. Tau-PET data were acquired for 30-min dynamic emission scan, six 5-min frames, 75-105 min post-injection of 10.0 mCi of [ 18 F]AV1451. Amyloid-PET data were acquired for 20-min dynamic emission scan, four 5-min frames, 50-70 min post-injection of 10.0 mCi of [ 18 F]AV45. FDG-PET data were acquired for 30-min dynamic emission scan, six 5-min frames, 30-60 min postinjection of 5.0 mCi of [ 18 F]FDG. PET data underwent extensive quality control protocols and standardized image preprocessing correction steps to produce uniform data across the ADNI centers. These steps included frame co-registration, averaging across the dynamic range, and standardization with respect to the orientation, voxel size, and intensity [21]. Detailed information on the imaging protocols and standardized image preprocessing steps for MRI and PET can be found at http://adni.loni.usc.edu/methods.
MRI and PET preprocessing
T1 MRI images acquired in closest temporal proximity to the tau-PET scan were preprocessed using the same SPM12-based (Wellcome Trust Centre for Neuroimaging, University College London) pipeline as described previously [18]. Briefly, for each subject, the T1 MRI image was segmented into gray matter (GM), white matter (WM), and CSF maps. Next, non-linear highdimensional spatial normalization parameters were estimated, and a group-specific template was created using SPM's DARTEL toolbox. The group-specific template was linearly registered to the MNI template in order to estimate the affine transformation parameters.
For each subject, tau-PET, amyloid-PET, and FDG-PET images were coregistered to the participant's T1 MRI image in native space. For the voxel-based analyses, all PET images were subsequently spatially warped to MNI space using the DARTEL flow fields and affine transformation parameters estimated based on the MRI spatial registration described above. For all PET modalities, standardized uptake value ratio (SUVR) images were computed using the inferior cerebellar gray for tau-PET, the whole cerebellum for amyloid-PET, or the pons for FDG-PET as reference regions. A GM mask was created by warping the group-average GM map from the DARTEL template to MNI space and binarizing the image to only include voxels that had at least 30% GM probability. We further excluded subcortical structures (basal ganglia, thalamus, cerebellum, and brain stem) from the mask because they were either used as reference region or in order to avoid inclusion of regions that show off-target [ 18 F]AV1451 binding likely unrelated to tau [22]. All PET images were GM masked and smoothed using an 8-mm Gaussian smoothing kernel.
Creation of z-transformed deviation images (z-maps)
To assess differences in tau deposition, we computed voxel-wise mean and standard deviation of SUVR values for CN. The CN group was recruited in ADNI phase III and consisted of 27 amyloid-negative CN subjects with [ 18 F]AV1451 tau-PET. z-score deviation maps were created for each of the MCI subjects, by subtracting from each voxel the voxel-wise mean and dividing by the standard deviation of CN group SUVR.
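A minimal numpy sketch of this z-map computation, assuming the SUVR images have been flattened to per-subject voxel vectors:

```python
import numpy as np

def z_maps(mci_suvr, cn_suvr):
    """mci_suvr: (n_mci, n_voxels); cn_suvr: (n_cn, n_voxels) amyloid-negative CN reference."""
    mu = cn_suvr.mean(axis=0)            # voxel-wise CN mean
    sd = cn_suvr.std(axis=0, ddof=1)     # voxel-wise CN standard deviation
    return (mci_suvr - mu) / sd          # one z-score deviation map per MCI subject
```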
Assessment of amyloid status
Amyloid status was computed using a pre-established protocol [23]. Specifically, T1 MRI images were segmented and parcellated into cortical regions with Freesurfer (v5.3; surfer.nmr.mgh.harvard.edu/), which was used to extract mean amyloid-PET uptake from GM regions (frontal, lateral temporal, lateral parietal, and anterior/posterior cingulate) relative to the whole cerebellum. Participants were classified as amyloid-positive or amyloid-negative based on established cut-points (global amyloid-PET SUVR ≥ 1.11) [23].
Cognitive assessment
To estimate memory performance, we used ADNI-MEM, an episodic memory composite score based on a broad battery of neuropsychological memory tests [24]. The ADNI-MEM score includes the Rey Auditory Verbal Learning Test, the Alzheimer's Disease Assessment Scale, the Wechsler Logical Memory I and II, and the word recall of the MMSE.
Statistical analysis
Demographics were compared between diagnostic groups using t tests for continuous variables and chisquared tests for categorical variables.
We conducted voxel-based linear regression analyses to test the main effects as well as the local interactions of amyloid- by tau-PET on FDG-PET. All analyses were controlled for age, gender, education, study site, and, in case of testing the interaction effect, the main effects of amyloid- and tau-PET. All PET measures were included as continuous variables and obtained in spatially corresponding voxels across all three PET modalities, thus assessing the local relationship between the variables. These calculations were done via the software package VoxelStats, a MATLAB (Mathworks Inc., Natick, MA, USA)-based package for multimodal voxel-wise brain image analysis [25]. The customized GM mask (see above) was used to constrain the analysis to cortical GM. The voxel-based statistical parametric maps were corrected for multiple comparisons, where the statistical significance was defined using a random field theory-based [26] threshold of p < 0.05 with a cluster-forming threshold of p < 0.001. In order to examine the nature of the amyloid- by tau-PET interaction, significant voxel clusters of the interactions were identified and labeled according to the largest overlap with the automated anatomical labeling regions. For all three PET modalities, we extracted the mean voxel values within each cluster showing significant amyloid- by tau-PET interactions on FDG-PET resulting from the voxel-wise analyses. We plotted the interactions to ensure that results were not driven by extreme values. The robustness of the interaction effect for each cluster was tested by rerunning the regression model after removing influential cases defined by Cook's distance D [27]. Observations with large influence (the threshold for considering an observation as influential was defined as 4/number of observations) and observations exceeding 3 standard deviations from the mean were excluded in order to test whether the regression coefficient remained significant. Clusters were considered significant and stable when meeting an alpha threshold of 0.05 after removing influential cases.
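A hedged sketch of the cluster-level robustness check (not the voxel-wise VoxelStats analysis itself), assuming a per-subject data frame with the listed covariates; column names are placeholders and the additional 3-SD exclusion is omitted for brevity.

```python
import statsmodels.formula.api as smf

def robust_interaction(df):
    """Fit FDG ~ amyloid * tau + covariates, then refit after dropping influential cases."""
    formula = "fdg ~ amyloid * tau + age + gender + education + site"
    fit = smf.ols(formula, data=df).fit()
    cooks_d = fit.get_influence().cooks_distance[0]
    keep = cooks_d < 4.0 / len(df)            # influence threshold: 4 / number of observations
    refit = smf.ols(formula, data=df[keep]).fit()
    return refit.params["amyloid:tau"], refit.pvalues["amyloid:tau"]
```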
In addition, post hoc interaction analyses on the mean cluster values were conducted controlling additionally for APOE genotype status (APOE ε4 allele carriers vs non-carriers).
Group-level differences in regional FDG measures were assessed by a one-way ANCOVA (controlling for age, gender, education, and study site) with post hoc t test between each pair to assess the difference between MCI subgroups and control groups.
In order to test whether FDG-PET cluster values were associated with memory performance, we conducted for each cluster a linear regression analysis including ADNI-MEM scores as the dependent variable and the FDG-PET cluster values as the predictor, controlling for age, gender, education, and study site.
All statistical analyses were performed using R-statistical software (http://www.R-project.org). Associations were considered significant when meeting an alpha threshold of 0.05.
Sample characteristics
Demographic characteristics and group differences are presented in Table 1. Figure 1 shows the tau-PET distribution within amyloid-negative CN subjects. Tau-PET levels predominantly in the temporal lobe were higher in MCI compared to those in amyloid-negative CN (Fig. 1b).
Voxel-wise amyloid-and tau-PET main effects on FDG-PET metabolism
First, we tested the main effects of amyloid-and tau-PET on FDG-PET in MCI. As shown in Fig. 2 (for statistics, see supplementary Table 1), higher amyloid-PET was associated with higher FDG-PET in small clusters located in the right superior frontal, right occipital, left cuneus, and right temporal pole. On the other hand, higher tau-PET was associated with higher FDG-PET in multiple regions within the bilateral parietal lobe, left insular, and cingulate cortices. Negative associations were primarily observed within the left middle frontal and left temporoparietal regions.
When stratified by amyloid status (global amyloid-PET SUVR ≥ 1.11), the associations between higher tau-PET and higher FDG-PET metabolism are evident only within the amyloid-negative subgroup, while the opposite association was primarily observed in the amyloid-positive subgroup (Fig. 2, Table 1).
Voxel-wise amyloid-by tau-PET interactions on FDG-PET metabolism
Since we found that the associations between tau-PET and FDG-PET are dependent on Aβ levels, we further tested the local amyloid-by tau-PET interaction on FDG-PET in MCI. Linear regression analysis of the interaction of amyloid-PET by tau-PET (included as continuous variables) showed significant effects in multiple brain regions. In order to examine whether any outliers may drive these interactions, we extracted the mean voxel values in each cluster and examined the undue influence of any observations based on Cook's distance. Those clusters that survived the quality check are displayed in Fig. 3a (for statistics, see Table 2).
All amyloid-PET by tau-PET interactions were of the same direction, i.e., higher tau-PET was associated with higher FDG-PET at low levels of amyloid-PET but not at high levels of amyloid-PET (Fig. 3b). These clusters were predominantly located within the left middle temporal gyrus, right inferior temporal gyrus, right lingual gyrus, left precuneus, bilateral inferior parietal gyrus, left superior frontal gyrus, and right middle frontal gyrus.
To determine whether these effects were driven by differences in APOE status, we tested whether APOE status had influenced the results. When controlling all above listed models for APOE, the observed interactions remained significant (p < 0.05) in all clusters.
Tau-related hypermetabolism in amyloid-negative MCI subjects
In order to examine whether the observed tau-related increase in FDG-PET cluster values in the MCI subjects with low amyloid represented abnormal FDG-PET hypermetabolism, we compared the FDG-PET cluster values in the MCI subgroups to the FDG-PET in amyloid-negative CN (n = 70) and subjects with full-blown AD dementia (n = 95). Note that these two reference groups including CN and AD dementia were characterized by CSF biomarker profile of Aβ1-42 and p-tau181 rather than amyloid- and tau-PET given that those imaging modalities were not available in a sufficiently large number of CN and AD dementia subjects. MCI subjects were divided by high and low tau-PET (median split) and by amyloid status (global amyloid-PET SUVR ≥ 1.11), resulting in four subgroups (high vs low tau/positive vs negative amyloid). FDG-PET levels for all MCI subgroups along with the control groups are plotted in Fig. 4. ANCOVA showed significant (p < 0.05) group differences in FDG-PET for all clusters except for one cluster within the left superior frontal gyrus (p = 0.067). Post hoc analyses confirmed that the tau-related increase in FDG-PET in the high-tau/amyloid-negative MCI subgroup was significantly higher compared to the CN group in clusters located within the right middle frontal, left middle temporal, and right lingual gyri. The same group also had significantly higher FDG-PET levels compared to AD dementia cases within the same clusters, confirming that the FDG-PET levels will eventually decrease with clinical AD progression.
Hypermetabolism in the right middle frontal cortex is associated with lower memory performance
Next, we addressed the question whether tau-related FDG-PET hypermetabolism in MCI is associated with memory performance. Since FDG-PET hypermetabolism was observed at lower levels of amyloid-PET (see above), we chose to test FDG-PET cluster values as predictors of memory performance in amyloid-negative MCI subjects in each cluster. We found a significant association in the right middle frontal cluster (p = 0.013; Fig. 5). The association was negative, meaning higher FDG-PET metabolism in the middle frontal gyrus cluster of FDG-PET hypermetabolism was associated with a lower ADNI-MEM score. This result suggests that right frontal FDG-PET hypermetabolism is associated with worse memory performance. Control analysis in the amyloid-positive MCI subjects did not show significant associations between FDG-PET and cognition for any of the clusters.
Fig. 3 Regional interactions between amyloid- and tau-PET on FDG-PET metabolism in MCI. a Projection of significant clusters resulting from the voxel-wise analysis. b Scatterplots are based on mean SUVR values extracted from voxel-wise analyses for each of the significant clusters (arranged by anatomical adjacency). For all statistical analyses, amyloid-PET was used as a continuous measure; for illustrational purposes, however, amyloid levels were binarized into high and low levels (median split). Scatterplots are presented after removal of outliers (i.e., defined as influential observations by Cook's distance and 3 standard deviations from the mean); for regression plots including the outliers, see supplementary Fig. 1
Discussion
Our first major finding showed that higher tau-PET was associated with higher glucose metabolism in subjects with lower levels of amyloid-PET, but not higher levels of amyloid-PET. These effects were predominantly found within the middle temporal gyrus, posterior parietal, and frontal cortex and were independent of APOE genotype. Our second major finding was that the taurelated increases in FDG-PET represented hypermetabolism since the FDG-PET level exceeded that of CN and AD dementia subjects. Our third major finding was that the tau-related FDG-PET hypermetabolism in MCI subjects with low amyloid was associated with lower memory performance.
Our findings advance the current understanding of FDG-PET changes in MCI, providing an explanatory model of FDG-PET hypermetabolism that has been observed in multiple studies in asymptomatic and symptomatic elderly subjects (for a review, see [28]). In line with our results, a recent study in MCI reported increased FDG-PET metabolism at low levels of amyloid-PET but not high levels of amyloid-PET [7]. FDG-PET metabolism was positively associated with Aβ in MCI, but inversely associated with Aβ in AD dementia [29]. We show that tau-PET plays an important role in FDG-PET hypermetabolism in MCI subjects at low Aβ levels, suggesting the interaction of tau and amyloid pathology in non-demented subjects to be key for the increase in FDG-PET. Compared to the interaction approach, our analysis of tau-PET stratified by negative vs positive amyloid-PET showed a more widespread association of higher tau-PET and FDG-PET. Higher tau-PET was preferentially associated with higher FDG-PET in Aβnegative MCI subjects, but with lower FDG-PET in Aβpositive subjects, consistent with the results of our interaction analyses. The spatially more restricted interaction effect is probably due to lower statistical power to test an interaction effect compared to testing a main effect.
Our results are consistent with recent findings in CN, where higher tau-PET was associated with higher FDG-PET in participants with low levels of amyloid-PET [15,16]. We expand significantly above those previous results by showing that the interaction extends to MCI, where the tau-related increase in FDG-PET represents hypermetabolism above normal levels and is associated with lower memory performance. These findings on FDG-PET show parallels to fMRI detected hyperactivation as a function of tau and amyloid pathology. Both resting-state and task-evoked hyperactivity, especially in the medial temporal lobe [30], but also other brain regions [31] has been observed in early-phase autosomal dominant AD [32] and MCI [30,31,33]. fMRI-assessed hyperactivation in the medial temporal lobe was associated with faster cognitive decline in MCI [33], consistent with our findings of FDG-PET hypermetabolism to be associated with lower cognitive performance in MCI. Furthermore, fMRI-assessed hyperactivation was associated with higher tau-PET in CN [34,35]. An interaction of tau-PET by amyloid-PET on resting-state fMRIassessed network connectivity in CN was observed, such that after a phase of hyperconnectivity, there was a decline in network connectivity when both tau-PET and amyloid-PET were high [36]. These results are reminiscent of the interaction effect of tau-PET by amyloid-PET on FDG-PET observed in the current study. Together, these studies suggest a synergistic interaction of tau and amyloid pathology on brain activity assessed across different modalities.
In the current study, we took a biomarker-centered approach using amyloid-and tau-PET to predict changes in FDG-PET in MCI. A subset of the MCI patients showed no abnormal Aβ levels. Higher tau-PET levels in the absence of abnormal Aβ levels may be due to primary age-related tauopathy (PART) [37]. PART is characterized by elevated tau pathologies confined to Braak-stage regions I-IV at absent or low levels of amyloid plaques and has been proposed to be an etiological entity that is qualitatively different from AD [37,38]. Although it is still debated whether PART is part of the AD continuum [39], it is generally accepted that abnormal Aβ levels are a defining feature of AD. Thus, not all MCI participants were within the AD continuum. Nevertheless, based on biomarker-driven rather than diagnostic characterization, our study showed that the interaction between both types of AD pathologies is predictive of FDG-PET alterations. The mechanism by which pathologic tau or amyloid is associated with an increase in glucose metabolism remains an open question. In vitro electrophysiological analysis showed that secreted extracellular tau fragments obtained post-mortem from the brain of an individual with AD cause neuronal hyperactivity in human neurons [40]. Moreover, transgenic mice studies showed that reducing tau protein levels in the brain is associated with reduced susceptibility to neuronal hyperexcitability and seizures [41], suggesting that tau modulates neuronal hyperactivity of neuronal networks [42]. The disruption Fig. 4 FDG-PET levels in MCI subgroups compared to CN and AD control groups. Mean FDG-PET levels for each cluster (arranged by anatomical adjacency) compared to CN and AD dementia subjects. MCI subjects were stratified by high and low tau PET (median split) and amyloid PET (global amyloid-PET SUVR ≥ 1.11). Significant differences between groups are indicated by *p < 0.05, **p < 0.01, and ***p < 0.001; one-way ANCOVA with post hoc t test between each pair of GABAergic neuronal network has been suggested as a possible mechanism of tau-associated disturbance of hippocampal neuron excitability [43]. The differential role of tau and amyloid in driving hypermetabolism is somewhat unclear. In transgenic mice expressing amyloid, higher amyloid was linked to higher neural excitability [44]. A recent study in transgenic mouse models of tau and amyloid suggests that amyloid is driving neuronal hyperactivity, but increased levels of tau lead to reduced neuronal activity [45]. However, these results are in conflict with previous results of the amyloid-independent association of tau-related susceptibility to hyperexcitability discussed above [41]. One possibility to reconcile the findings is that tau enhances amyloid-related neuronal hyperactivity at lower levels of amyloid, but reduces neuronal function at higher levels of amyloid. This stance would be in agreement with results from previous studies in humans reporting tau-PET but not amyloid-PET to be linked to fMRI-assessed hyperactivation [35] or FDG-PET hypermetabolism [15,16]. Furthermore, we observed FDG-PET hypermetabolism in the group of amyloid-negative/high-tau but not amyloid-positive/ low-tau suggesting that higher levels of tau in the presence of lower levels of amyloid are decisive for FDG-PET hypermetabolism. As a third alternative, neuronal hyperexcitability may drive initial tau release, propagation, and spread [46,47]. 
Future preclinical and intervention studies targeting amyloid or tau pathology will be instrumental in disentangling the causative relationship between primary AD pathologies and FDG-PET hypermetabolism.
Another major finding of our study was the association between FDG-PET hypermetabolism and lower memory performance suggesting that FDG-PET hypermetabolism may reflect pathologically altered FDG-PET levels that are detrimental rather than of compensatory nature. In previous studies including cognitively impaired elderly subjects, increased FDG-PET in the hippocampal formation was associated with poorer cognitive performance [48]. Moreover, reducing hippocampal hyperactivity by drug intervention improves cognition in MCI [49], where the same drug reduced taurelated neuronal hyperexcitability in a transgenic mouse model of AD [50]. Alternatively, higher neural activity may enhance tau spreading which in turn may lead to cognitive decline [46,47]. To test such a potentially mutually reinforcing chain of events would require longitudinal studies. With the caution that the current study does not allow for a causative interpretation, our findings suggest that local FDG-PET hypermetabolism in the presence of tau has no beneficial effect on cognition. We further caution that the MCI syndrome may have been also caused by other pathologies than amyloid and tau pathologies, especially in the MCI subjects with low amyloid. Alternative pathologies that have been linked to AD-like symptoms include cerebrovascular disease, aggregation of the transactive response DNA binding protein 43 kDa (TDP-43), and alpha-synuclein [51][52][53][54].
Several caveats need to be considered when interpreting the results of the current study. First, the current study is cross-sectional in nature. A longitudinal study will be informative to test the predictive value of tauand amyloid-PET for the subsequent changes in FDG-PET and cognition. Second, the presence of the APOE ε4 allele has been previously shown to be associated with glucose hypermetabolism [6] and thus may provide a confounding variable. However, a post hoc analysis showed that the observed interaction remained significant even when controlling for APOE genotype, suggesting that any association between APOE and tau pathology did not explain the current results. Third, although FDG-PET is commonly interpreted to reflect neural activity, it is possible that FDG-PET also reflects glial activity. For example, microglia activation is increased in relation to tau and amyloid pathology and can be associated with FDG-PET hypermetabolism as suggested by findings in mice [55]. However, our results on FDG-PET show parallels with the findings on restingstate and task-evoked fMRI BOLD signal which is less likely to reflect glia activity, discounting the possibility of glia activation as a major source of PET. Fourth, we did not apply partial volume correction to FDG-PET. We did so deliberately in order to avoid that FDG-PET hypermetabolism may occur due to the correction procedure. Here, we observed increased FDG-PET despite not correcting, supporting the view that a true increase in FDG-PET can be observed as a function of tau and amyloid pathology.
Conclusions
We found that FDG-PET hypermetabolism occurs as a function of increased tau-PET in the presence of low amyloid-PET, and is associated with worse cognitive performance. Our results have implications for clinical trials, where FDG-PET is often used as an outcome parameter [56]. Given the non-linear changes of FDG-PET as a function of tau and amyloid pathology, a beneficial drug effect on FDG-PET may not always translate into a reduction in the decline of FDG-PET, but could also be a reduction of the detrimental increase in FDG-PET. Clearly, our results call for a more sophisticated model of FDG-PET changes in the course of AD, taking both amyloid-and tau-PET into account.
Additional file 1: Figure S1. Regional interactions between amyloidand tau-PET on FDG-PET metabolism in MCI. Table S1. Areas showing significant voxel-wise effect of amyloid-PET and tau-PET on FDG-PET in MCI.
Acknowledgements
Data used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in the analysis or writing of this report. A complete list of ADNI investigators can be found at https://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf.
Authors' contributions
AR conducted the analyses and wrote the manuscript, NF and JN provided critical review of the manuscript, and ME designed the study, interpreted the results, and wrote the manuscript. The authors read and approved the final manuscript.
Funding
The work was supported by the LMUexcellent Initiative (to ME) and DFG (German Research Foundation, INST 409/193-1 FUGG). ADNI data collection and sharing for this project was funded by the ADNI (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through contributions from the following: AbbVie, Alzheimer's Association;
Availability of data and materials
All neuroimaging and neuropsychology data that were used in this study are available online at the ADNI data repository (adni.loni.usc.edu).
Ethics approval and consent to participate
Ethical approval was obtained by the ADNI investigators, and all study participants provided written informed consent.
Consent for publication
Not applicable. | 2023-01-22T14:34:54.452Z | 2020-10-19T00:00:00.000 | {
"year": 2020,
"sha1": "db16f9c3da8083626b55a8cc5d667b8d4ca178bc",
"oa_license": "CCBY",
"oa_url": "https://alzres.biomedcentral.com/track/pdf/10.1186/s13195-020-00702-6",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "db16f9c3da8083626b55a8cc5d667b8d4ca178bc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
113876492 | pes2o/s2orc | v3-fos-license | A split signal polynomial as a model of an impulse noise filter for speech signal recovery
The synthesis of the non-linear non-recursive digital filter of impulse noise on the basis of the splitting method in time domain is described. The filter recovers speech signals, distorted by impulse noise. The filter model is constructed as the splitting polynomial of an odd degree. The splitter is the time delay line, comprising the equal number of previous and subsequent samples with respect to the current time moment. The polynomial parameters result from solving an approximation problem in the mean-square norm. It is shown that the filter with the splitting model provides more precise speech signal recovery than the median and Volterra filters.
Introduction
The problem of impulse noise filtration is often solved in electrical and radio engineering [1][2][3]. Impulse noise emerges during the switching of various electrical and electronic devices, in case of mechanical damage to the surface of information storage devices, in operating internal combustion engines, under the influence of various atmospheric phenomena, etc. Various methods of impulse noise cancelling are applied to improve the signal recovery quality and to achieve high signal recognizability [1][2][3][4].
The classic method of impulse noise suppression is median filtration. However, the median filter has a well-known drawback: it distorts signal intervals not affected by impulse noise. The median filter is also not optimal, because it does not use information on the statistical properties of the signal and noise [1][2][3][4]. As a result, the development of impulse noise filtration methods ensuring a high quality of signal recovery is an urgent task. This paper presents the synthesis of impulse noise filters in the time domain on the basis of the splitting method for impulse noise suppression in speech signals. This method has the following significant advantages [4,5]: • the statistical properties of signals and interference are taken into account automatically in the process of filter synthesis (its training); • in comparison with a mathematical apparatus such as the functional Volterra series [6][7][8][9][10] applied to filter modeling, the splitting method builds a simpler polynomial filter model adapted to the assigned class of input signals; • as distinct from the Volterra series, the split signal polynomial is free from the convergence problem, which makes it possible to synthesize substantially nonlinear devices; • the split signal polynomial comprises linearly entering parameters, so the parameters of the filter model are defined as a globally optimal solution of the approximation problem in the uniform and mean-square norms [4].
A splitting method for non-linear non-recursive digital filter modeling
Digital impulse noise filters are synthesized within the framework of the "black box" principle by the split signal theory. According to this theory, the non-linear filter operator $F_s$ is described by the composition of two operators: the splitter operator and the operator of the nonlinear memoryless transformer [4,5].
Splitter operator $F_p$ transforms scalar signal $x(n,a)$ into the vector split signal $x_p(n,a) = [x_{p1}(n,a),\, x_{p2}(n,a),\, \ldots,\, x_{pm}(n,a)]^{\mathrm T} = F_p\, x(n,a)$. The split signals must satisfy the splitting conditions: their phase portraits do not cross and do not touch each other, and they are not self-intersecting. Linear, non-linear, stationary and non-stationary signal splitters can be designed [4].
Operator P of the nonlinear memoryless transformer converts vector signal $x_p(n,a)$ into scalar signal $y(n,a)$. This operator is usually described by a multidimensional polynomial, although there are other forms of the operator model [4].
The polynomial filter model (1) for all $n \in I_n$, $a \in G_a$ satisfies the following condition:
$$\left| y_o(n,a) - y(n,a) \right| \le \varepsilon_o,$$
where $\varepsilon_o$ is the assigned error of the approximation of the ideal non-linear filter operator $F_s$, and $y_o(n,a)$ is the output signal of the ideal non-linear filter. The structure of the non-linear non-recursive digital filter (NNDF) model, described by equation (1), is shown in Figure 1. If number m of splitting channels in polynomial (1) is large, the task of operator $F_s$ approximation has a high dimension. As a result, the solution of this task causes ill-conditioning and large computational cost.
Let us consider the synthesis of an NNDF restoring the speech signal from its mixture with impulse noise, on the basis of the splitting method.
Impulse noise cancellation in speech signals by non-linear filtration in time domain
The speech signal, used for training the NNDF model, had the time length of 35 seconds (280 000 samples at an 8 kHz sampling rate). It comprised various phrases of four speakers (two men and two women). These phrases differed in loudness.
The speech signal, applied for estimating the NNDF model quality, had the time length of 20 seconds (Q = 160 000 samples at an 8 kHz sampling rate). It differed from the training speech signal. The impulse noise was formed as a random process with a uniform distribution; when the generated random value exceeded the assigned threshold, the impulse interference appeared at the n-th time moment. Thus, the probability of the impulse interference appearance at the current n-th moment is α, and the absence probability is (1 − α). There was an additional restriction: the distance between adjacent interference samples is no less than five samples of the speech signal.
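The following small generator is consistent with this description; the impulse amplitude range and the exact triggering rule are assumptions, since the original values are not given here.

```python
import numpy as np

def add_impulse_noise(x, alpha, amp=1.0, min_gap=5, seed=0):
    """Impulse at sample n with probability alpha; adjacent impulses >= min_gap samples apart."""
    rng = np.random.default_rng(seed)
    y = x.copy()
    last = -min_gap
    for n in range(len(x)):
        if n - last >= min_gap and rng.random() < alpha:
            y[n] += rng.uniform(-amp, amp)   # assumed symmetric uniform amplitude
            last = n
    return y
```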
The filter quality was evaluated by means of the root-mean-square error written as
$$\varepsilon = \sqrt{\frac{1}{Q}\sum_{n=1}^{Q}\left[ y_o(n,a) - y(n,a) \right]^2},$$
where $y_o(n,a)$ is the desirable filter output signal (undistorted speech signal) with Q samples, and $y(n,a)$ is the output signal of non-linear filter model (1).
The splitter was built in the form of a time delay line, each unit of which delays the signal by one sample; the delay line length was varied. Causal polynomial filter models, which take into account signal samples at the n-th, (n−1)-th, (n−2)-th, etc. time moments, as well as non-causal polynomial models, which operate with samples at previous and subsequent time moments relative to the current n-th moment, were synthesized. The dependencies of error ε on variable ξ are depicted in Figure 2, where ξ is the number of subsequent samples relative to the n-th moment for a splitter with memory length m equal to five. The dependencies are represented for various degrees of the polynomial model. It was found that increasing the splitter memory length m does not decrease the approximation error, so it is recommended to build the NNDF splitter with the least memory length (the results represented below were obtained for m = 5). The analysis of the curves in Figure 2 shows the following:
- the way of signal splitting affects the filtration precision, namely, error ε is minimal under the condition of an equal number (ξ = 2) of previous and subsequent samples with respect to the n-th time moment;
- the members of even degree in the split signal polynomial do not influence the root-mean-square error ε, so they can be eliminated from filter model (1).
As seen from Figure 3, the NNDF synthesized by the splitting method provides more accurate signal recovery than the median and Volterra filtrations. The signal recovery accuracy increases with the degree of the NNDF polynomial model.
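To make the training procedure concrete, the sketch below fits a simplified model of this kind by ordinary least squares: a symmetric delay line with m = 5 taps (two previous and two subsequent samples relative to the current one, i.e., ξ = 2) and only odd-degree terms, in line with the findings above. The plain monomial feature construction is an assumption made for illustration; the actual split signal polynomial of [4, 5] is more general.

```python
import numpy as np

def build_features(x, degree=3, prev=2, post=2):
    """Build odd-degree monomial features from a symmetric delay line.

    For every time moment n the feature vector contains the samples
    x[n-prev], ..., x[n], ..., x[n+post] raised to the odd powers
    1, 3, ..., degree.  Odd powers only and an equal number of previous
    and subsequent samples follow the findings reported above.
    """
    taps = prev + post + 1
    n_rows = len(x) - taps + 1
    windows = np.lib.stride_tricks.sliding_window_view(x, taps)   # (n_rows, taps)
    feats = [np.ones((n_rows, 1))]
    for p in range(1, degree + 1, 2):                             # odd powers only
        feats.append(windows ** p)
    return np.hstack(feats)

def train_filter(noisy, clean, degree=3, prev=2, post=2):
    """Least-squares fit of the linearly entering parameters (the "training")."""
    X = build_features(noisy, degree, prev, post)
    y = clean[prev:len(clean) - post]          # align targets with the window centre
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def apply_filter(noisy, coeffs, degree=3, prev=2, post=2):
    X = build_features(noisy, degree, prev, post)
    return X @ coeffs
```

Because the model is linear in its parameters, training reduces to a linear least-squares problem, which is what makes the fitted parameters globally optimal in the mean-square sense.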
Conclusion
The proposed method of impulse noise filter synthesis is based on the splitting theory [4,5] and the principle of "supervised learning" [4][5][6][7][8][9][10]. Filter model parameters are determined by solving the nonlinear operator approximation problem; eventually, the input-output mapping of the filter is created. The statistical properties of signals and noises are taken into account automatically in the process of "learning" the filter model. Since the filter model in the form of the split signal polynomial is linear with respect to the entering parameters, the parameters resulting from the solution of the approximation problem are globally optimal. An essential feature of a filter synthesized by the splitting method is its invariance to the input signal as long as the statistical characteristics of the signal and noise are preserved. Thus, a filter synthesized on test signals works acceptably on other signals with similar statistical characteristics. Increasing the degree of the splitting polynomial makes it possible to achieve high filtration accuracy.
The synthesis of NNDFs for cancelling impulse noise in speech signals revealed the following:
• the NNDF model is a multi-dimensional polynomial of an odd degree;
• the splitter should be built as a time delay line with a length equal to 5;
• the least error is achieved if the split signals contain an equal number of previous and subsequent samples with respect to the current time moment;
• the NNDF synthesized on the basis of the splitting method provides the least root-mean-square error in comparison with the median and Volterra filters.
Acknowledgments
This work was supported by Saint Petersburg Electrotechnical University "LETI" according to the base part of the state scientific work from the Russian Education Ministry. | 2019-04-15T13:10:59.471Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "07bd33a1e6d82807ffb4d72961a08916fd7901e9",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/803/1/012156",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a04367eb1d0c386ca2d9ea66521c3da90b3c96e3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
202562139 | pes2o/s2orc | v3-fos-license | Correlation between urinary fractionated metanephrines in 24-hour and spot urine samples for evaluating the therapeutic effect of metyrosine: a subanalysis of a multicenter, open-label phase I/II study.
We recently conducted an open-label phase I/II study to evaluate the efficacy and safety of preoperative and chronic treatment with metyrosine (an inhibitor of catecholamine synthesis) in pheochromocytoma/paraganglioma (PPGL) in Japan. We compared creatinine-corrected metanephrine fractions in spot urine and 24-hour urine samples (the current standard for the screening and diagnosis of PPGLs) from 16 patients to assess the therapeutic effect of metyrosine. Percent changes from baseline in urinary metanephrine (uMN) or normetanephrine (uNMN) were compared between spot and 24-hour urine samples. Mean percent changes in uMN or uNMN in spot and 24-hour urine were -26.36% and -29.27%, respectively. The difference in the percent change from baseline between uMN or uNMN in spot and 24-hour urine was small (-2.90%). The correlation coefficient was 0.87 for percent changes from baseline between uMN or uNMN measured in spot and 24-hour urine. The area under the receiver operator characteristic (ROC) curve of uMN or uNMN measured in spot urine vs. 24-hour urine (reference standard) to assess the efficacy of metyrosine treatment was 0.93. Correlations and ROCs between 24-hour urinary vanillylmandelic acid, adrenaline, and noradrenaline and 24-hour uMN or uNMN were similar to those between spot uMN or uNMN and 24-hour uMN or uNMN. No large difference was observed between spot and 24-hour urine for the assessment of metyrosine treatment by quantifying uMN or uNMN in Japanese patients with PPGLs. These results suggest that spot urine samples may be useful in assessing the therapeutic effect of metyrosine.
Current clinical practice guidelines [7] recommend the measurement of free MN and NMN in plasma or urinary metanephrine fractions (uMN and uNMN) for the diagnosis of neuroendocrine tumors. In Japan, the measurement of free MN and NMN in plasma was recently approved by regulatory authorities, and its use in clinical practice is expected. Thus, the measurement of uMN and uNMN in 24-hour urine samples is the current standard for the screening and diagnosis of neuroendocrine tumors in Japan. Quantification of uMN and uNMN in 24-hour urine samples may also be useful for evaluating the efficacy of pheochromocytoma treatment. However, 24-hour urine sample collection cannot be performed easily or conveniently for several reasons. For example, this sampling method is subject to storage issues leading to the loss of specimens, or improper storage of the urine, which can affect the integrity of the sample and accuracy of the measurements [10,11]. For this reason, patients tend to require hospitalization for 24-hour urine sample collection. Moreover, such sample collection may lead to unnecessary exposure to hospital-acquired infections and is best avoided if possible. As long-term monitoring is often needed for patients with PPGLs, more convenient quantitative assessments are required, not only for diagnosis, but also to evaluate biochemical treatment responses and long-term monitoring for the management of PPGLs.
Several studies have evaluated the correlation between MN in spot urine samples and 24-hour urine samples. A previous study concluded that levels in single-voided specimens were closely correlated to those in 24-hour specimens [12]. Another study reported that total MN measurements in random 1-hour and 24-hour urine samples were useful for diagnosing pheochromocytomas [13]. Another study showed that total MN measurements in urine samples could be used to diagnose benign PPGLs with a sensitivity of 74% [14]. Two other studies suggested that spot urine MN and NMN assays could be sensitive and specific screening and diagnostic tools for pheochromocytoma [15] and for managing incidentaloma [16].
α-Methyl-paratyrosine (metyrosine) is a tyrosine hydroxylase inhibitor that inhibits catecholamine synthesis and is used for the management of PPGLs in patients where other treatments have been ineffective [3]. Two retrospective analyses [17,18], in which patients with PPGLs were prepared preoperatively with metyrosine and phenoxybenzamine, concluded that the combination of metyrosine and α-blockade resulted in better blood pressure control, less blood loss, less use of antihypertensive medication or pressors during surgery, and the need for less intraoperative fluid replacement.
We recently conducted an open-label, multicenter phase I/II study to evaluate the efficacy and safety of preoperative and chronic treatment with metyrosine in PPGLs in Japan [19]. In our study, we determined the treatment efficacy of metyrosine by assessing whether uMN or uNMN in 24-hour urine samples decreased by more than 50% from baseline. Here, we describe a subanalysis in which we aimed to compare metanephrine fractions in spot urine with metanephrine fractions in 24-hour urine samples (as a reference standard) to assess the therapeutic effect of metyrosine. Additionally, 24-hour urinary catecholamine fractions (urinary adrenaline [uA] and urinary noradrenaline [uNA]), and urinary VMA (uVMA), which were used for biochemical diagnosis, were compared with metanephrine fractions.
Study design
The study design has been described in detail elsewhere [19]. The main study was a prospective, multicenter, open-label phase I/II study (Japic CTI-152999) conducted in Japan [19]. The present study was a retrospective subanalysis of data collected during the main study [19]. The institutional review boards of all participating centers approved the study and informed consent was obtained from all patients.
Patients
Patients were ≥12 years of age; had inoperable tumors that required chronic medication therapy; were surgical candidates requiring preoperative treatment; had a diagnosis of PPGLs; had baseline uMN and uNMN levels ≥3 times the upper limit of normal; were treated with α-blockers; and presented symptoms of excess catecholamines.
Patients who were newly treated or temporarily treated with a drug or who consumed foods that could affect urinary catecholamines and their metabolites; with impaired intestinal absorption; with severe or uncontrollable complications; with an estimated glomerular filtration rate (eGFR) <30 mL/min; and with a left ventricular ejection fraction <40% were excluded from the main study [19].
Study interventions and measures
Metyrosine treatment and dose adjustments were described in detail previously [19]. After the dose was established, two consecutive 24-hour urine samples were collected to measure uMN, uNMN, uVMA, uA, and uNA. Levels of uMN, uNMN, and creatinine were also measured in spot urine samples, and the creatinine-corrected values (µg/g creatinine) of uMN and uNMN were calculated.
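The creatinine correction itself is a simple ratio; a minimal sketch is shown below. The unit convention (analyte in µg/L, creatinine in g/L) and the example numbers are assumptions for illustration, since the measurement units are not restated in this subsection.

```python
def creatinine_corrected(analyte_ug_per_L, creatinine_g_per_L):
    """Return the creatinine-corrected analyte value in µg/g creatinine.

    Dividing the spot-urine analyte concentration by the creatinine
    concentration normalises for urine dilution.
    """
    return analyte_ug_per_L / creatinine_g_per_L

# Example (hypothetical numbers): uMN of 350 µg/L with urinary creatinine of 1.2 g/L
print(creatinine_corrected(350.0, 1.2))  # ≈ 291.7 µg/g creatinine
```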
Patients were hospitalized while the urine examinations were completed. For 24-hour urine samples, patients were instructed to accurately record the start/end time of urine sample collection and the total volume of urine collected. The 24-hour urine samples were collected at the following time points: 3 days during the observation period, Days 6-8, Days 28-29, Days 56-57, Days 84-86, 3 days after fixed administration at the increased or decreased dose, and every 12 weeks (continuous dosing) after Day 84. In cases of adrenalectomy, 24-hour urine samples were collected as recommended (2 days before surgery, the day of surgery, and Days 5-7 after surgery). For 24-hour urine sample collection, an acid UriMeasure Tablet (Kanto Chemical Co., Inc., Tokyo, Japan) was added to prevent decomposition of urinary catecholamines and metabolites, and the samples were stored at room temperature.
Spot urine samples were collected from the first-void urine of each pooled urine sample. If this was not possible, a spot urine sample was collected from the second-void or subsequent urine. Then, samples containing 2 mL of pooled urine and 3 mL of spot urine were obtained for the measurement of metanephrine fractions. Collected samples were stored at ≤15℃ until retrieval. Urinary concentrations were determined by high-performance liquid chromatography with electrochemical detection (CoulArray, Thermo Fisher Scientific Inc., Waltham, MA, USA) for uMN, uNMN, and uVMA. Urinary concentrations were determined by high-performance liquid chromatography with fluorescence detection (HLC-725CA II, Tosoh Corporation, Tokyo, Japan) for uA and uNA.
For uMN or uNMN, whichever of the two parameters had the higher ratio of baseline to the upper limit of the reference value was chosen for evaluation in each patient (shown as uMN or uNMN). Similarly, for uA or uNA, whichever of the two parameters had the higher ratio of baseline to the upper limit of the reference value was chosen for evaluation in each patient (shown as uA or uNA).
Study assessments
In this study, we evaluated the achievement of 50% reduction in uMN or uNMN from baseline at each time point at which both spot and 24-hour urine levels were measured, along with the achievement of 50% reduction in 24-hour uVMA and uA or uNA. Percent changes from baseline in uMN or uNMN, uA or uNA, and uVMA determined from 24-hour urine samples, and uMN or uNMN determined from spot urine samples were evaluated. uMN or uNMN in a spot urine sample were compared with uMN or uNMN in a 24-hour urine sample to assess the treatment effect of metyrosine. Additionally, the 24-hour uMN or uNMN were compared with 24-hour uA or uNA and 24-hour uMN or uNMN were compared with 24-hour uVMA.
Statistical analysis
The sample size for this main study was at least 10 patients because the population of patients eligible for the main study was thought to be very small. This analysis was based on the data of the full analysis set, which was defined as the population of patients included in the safety analysis set who were evaluated for efficacy at least once.
The Pearson product-moment correlation coefficient was calculated for metanephrine fraction values determined from spot and 24-hour urine samples. Similarly, the Pearson product-moment correlation coefficient of change from baseline was calculated for uMN or uNMN values determined from spot and 24-hour urine samples. Summary statistics were calculated for the difference in percent change from baseline for uMN or uNMN values determined from spot and 24-hour urine samples.
The reference standards for this subanalysis were the results of 50% reduction in uMN or uNMN from baseline in 24-hour urine samples. Using these results, we calculated the receiver operating characteristic (ROC) curves of spot uMN or uNMN, 24-hour uA or uNA, and 24-hour uVMA. Similarly, we calculated the area under the curve (AUC). We examined whether the factors eGFR, age, sex, and body weight of each patient had an effect on uMN or uNMN values determined from spot and 24-hour urine samples. The differences in uMN or uNMN values determined from spot and 24-hour urine samples in relation to these factors were then plotted.
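As an illustration of these calculations, the sketch below computes percent changes from baseline, the Pearson correlation between the spot and 24-hour series, and the ROC AUC of the spot-urine percent change against the 24-hour 50%-reduction criterion. The variable names and toy numbers are hypothetical, and the study itself used SAS rather than the libraries shown here.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

def pct_change(value, baseline):
    return 100.0 * (value - baseline) / baseline

# Hypothetical paired measurements (per patient-visit), not study data.
spot_base, spot_now = np.array([400., 600., 900.]), np.array([180., 500., 420.])
full_base, full_now = np.array([420., 650., 880.]), np.array([200., 520., 400.])

spot_chg = pct_change(spot_now, spot_base)
full_chg = pct_change(full_now, full_base)

r, _ = stats.pearsonr(spot_chg, full_chg)           # correlation between methods

# Reference standard: >=50% reduction in the 24-hour sample.
responder = (full_chg <= -50).astype(int)
auc = roc_auc_score(responder, -spot_chg)           # larger reduction -> higher score
print(r, auc)
```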
A statistical significance level was not established as no formal statistics were performed on account of the small sample size. All statistical analyses were performed using SAS Ver.9.3 (SAS Institute Inc., Cary, NC, USA).
Results
Spot and 24-hour urine samples were collected from 16 patients at 11 sites. Details of the baseline patient characteristics have been described previously [19]. Briefly, the sample comprised 11 men and five women, aged 12 to 86 years, with a mean blood pressure of 126.4/71.1 mmHg, and renal function ranging from normal (n = 5, eGFR ≥90 mL/min), mildly reduced (n = 6, 60 mL/min ≤ eGFR < 90 mL/min), to moderately reduced (n = 5, 30 mL/min ≤ eGFR < 60 mL/min). Nine patients were diagnosed with pheochromocytoma and seven with paraganglioma. Eight patients had metastatic PPGL.
The Pearson product-moment correlation coefficient of uMN or uNMN values determined from creatinine-corrected spot and 24-hour urine samples was 0.94 (Supplementary Fig. 1). Table 1 shows the percent changes from baseline in uMN or uNMN in spot and 24-hour urine samples. Mean percent changes in uMN or uNMN in spot and 24-hour urine samples were -26.36% and -29.27%, respectively. The difference in the percent change from baseline in uMN or uNMN, as calculated by subtracting the percent change obtained for spot urine from that obtained for 24-hour urine, was small (-2.90%).
The Pearson product-moment correlation coefficient of the percent changes from baseline of uMN or uNMN values determined from spot and 24-hour urine samples was 0.87 (Fig. 1A). The area under the ROC curve of uMN or uNMN values measured in spot urine and 24-hour urine, using 24-hour urine as the reference standard, to assess the efficacy of metyrosine treatment was large (0.93) (Fig. 1B).
Figure legend: Either uMN or uNMN, whichever had a higher ratio at baseline according to the upper limit of the reference value, was used for the efficacy assessment. CORR, correlation; uMN, urinary metanephrine; uNMN, urinary normetanephrine; uA, urinary adrenaline; uNA, urinary noradrenaline; uVMA, urinary vanillylmandelic acid; AUC, area under the curve.
Discussion
In this subanalysis of a phase I/II study, we compared metanephrine fractions in spot urine with metanephrine fractions in 24-hour urine samples to assess the therapeutic effect of metyrosine, unlike previous reports [12][13][14][15][16] that focused on the diagnostic capability for PPGLs. Additionally, we evaluated the correlation between the percent changes from baseline in uMN or uNMN in spot and 24-hour urine samples, and the correlation between the percent changes from baseline in uA or uNA, and uVMA vs. uMN or uNMN in 24-hour urine samples for assessing the therapeutic effect of metyrosine. We based our analysis on a 50% reduction in uMN or uNMN from baseline at each time point at which both spot and 24-hour urine levels were measured. As there is no reference standard for the evaluation of the effects of metyrosine, this general convention was used; thus, the present results should be interpreted with care.
We found that there was a small mean difference (-2.90%) in the percent change from baseline between uMN or uNMN in spot and 24-hour urine samples. The correlation coefficient between the two collection methods for the assessment of uMN or uNMN was 0.87. Additionally, the correlation coefficient was 0.77 for uMN or uNMN vs. uA or uNA and 0.84 for uMN or uNMN vs. uVMA in the 24-hour urine samples. These results are in line with the findings of previous studies [15,16], suggesting that spot urine MN or NMN assays are sensitive and specific screening and diagnostic tools for PPGLs and managing incidentaloma. Furthermore, using 24-hour urine as the reference standard, the area under the ROC curve of spot vs. 24-hour urine assessing the efficacy of metyrosine treatment was large (0.93). The areas under the ROC curve of uA or uNA (0.91) and of uVMA (0.88) to uMN or uNMN were also large. This suggests that the measurement of uMN or uNMN in spot urine samples could potentially be as useful in assessing the efficacy of metyrosine as uA or uNA and uVMA in 24-hour urine, but further studies are necessary. We consider that these findings are very relevant given the lack of studies reporting on the potential use of these parameters for the assessment of the therapeutic effect of metyrosine.
Table note: The difference (%) was calculated as (measured value in spot urine − measured value in 24-hour urine) / (measured value in 24-hour urine) × 100. Normal, eGFR ≥ 90 mL/min; mild renal impairment, 60 mL/min ≤ eGFR < 90 mL/min; moderate renal impairment, 30 mL/min ≤ eGFR < 60 mL/min. uMN, urinary metanephrine; uNMN, urinary normetanephrine; eGFR, estimated glomerular filtration rate. Mean ± standard deviation is shown for each patient. N = 175.
Dopamine is the precursor of NA, A, MN, NMN, and VMA. Urinary dopamine was increased in some patients in our phase I/II study [19], whereas other catecholamine derivatives (i.e., uNA, uA, uMN, uNMN, and uVMA) were not. This finding is consistent with the findings reported by Kuchel et al. [20]. They reported that treating malignant pheochromocytoma with metyrosine led to initial increases in the serum levels of DOPA, DOPA sulfate, and dopamine sulfate, and that progressive increases in urinary dopamine were observed during metyrosine treatment. Notably, this effect, together with increases in dopamine metabolites (e.g., dihydroxyphenylethanol and plasma dopamine sulfate), occurred without causing changes in serum dopamine levels [20]. Thus, we consider that both serum and urinary dopamine levels should be carefully assessed in patients treated with metyrosine because urinary dopamine increases in the absence of serum dopamine increases could lead to false evaluations of urinary dopamine.
Currently, there is no consensus or standard guideline stating which biochemical tests should be used to confirm and locate or exclude a suspected PPGL [21]. However, it is recommended that screening for PPGLs be performed by testing for plasma-free MNs and/or urinary fractionated MNs and catecholamines [22,23].
Although several studies have concluded that fractionated uMN and uNMN provide superior diagnostic sensitivity over uA and uNA, uVMA, or total MNs [24][25][26][27], other studies have concluded that plasma-free MNs have higher sensitivity and specificity [28,29] for excluding or confirming pheochromocytoma. The present findings are not proof that spot urine samples can replace 24-hour urine sample collection, but they support previous findings indicating that spot urine samples may have a similar diagnostic capability as 24-hour urine samples. Additionally, spot urine samples may be useful in assessing the therapeutic effect of metyrosine.
It has been reported that measurements of total MNs and catecholamines in 24-hour urinary samples yield fewer false-positive results [30]. Several factors, such as issues with the reliability of the timed urine sample collection, difficulties in sample storage, and inpatient sampling make 24-hour urine sample collection inconvenient and costly [10,11,21]. In contrast, spot urine sampling has several advantages, such as relative ease of collection, noninvasiveness, the wide availability of the test, ease of implementation, and cost-effectiveness [21,31]. Taken together, the convenience of spot urine sampling, the existing evidence showing that the spot urine sample has similar diagnostic yields to those of the 24-hour urine sample [31,32], and the development of new, simplified methods for the quantification of catecholamines and MNs in spot urine [33,34] will likely lead to a less frequent use of 24-hour urine testing for diagnostics and treatment assessments in routine clinical practice.
Within-subject variability in uMN or uNMN values determined from 24-hour urine samples and spot urine samples did not seem to be influenced by eGFR, age, sex, or body weight. However, women and patients with lower body weight, rather than men and overweight patients, seemed to have higher values of uMN or uNMN in spot urine than in 24-hour urine. Because the uMN and uNMN values of spot urine were creatinine-corrected, these values may have been higher in patients with less muscle mass and therefore less urinary creatinine excretion.
As mentioned above, there was no major difference between spot and 24-hour urine samples in percent changes from baseline in uMN or uNMN levels. Nevertheless, the measurement values for spot urine can be expected to vary by sampling time, which is not the case in 24-hour urine samples. This may affect the reliability of measurements using spot urine when assessing the efficacy of metyrosine. In this study, a large difference in percent changes from baseline was observed between spot and 24-hour urine samples in some cases. Thus, a possible variance in measured values by sampling time must be considered in the clinical setting. When the efficacy of metyrosine is evaluated using spot urine samples in clinical practice, it is appropriate to scrutinize measurement results by considering multiple measurement points and measurement changes over time.
This subanalysis had some limitations, such as its retrospective nature and small sample size, which limited the number of urine samples collected, and the fact that plasma concentrations of MN, NMN, and other metabolites were not quantified. The absence of formal statistical testing also limited the extent of conclusions that can be drawn from current findings.
Based on the present findings, we conclude that differences were small between the spot and 24-hour urine samples for the assessment of metyrosine treatment based on the quantification of uMN and uNMN in Japanese patients with PPGL. | 2019-09-13T13:07:24.093Z | 2019-09-10T00:00:00.000 | {
"year": 2019,
"sha1": "b89a796328ef475bde8653a6898c9fc821bb4365",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/endocrj/66/12/66_EJ19-0125/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "92b93e6e4c30152d7b60cb5f49c72d23c392fa61",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
110792859 | pes2o/s2orc | v3-fos-license | Influence of Emitted Electrons on the Method for Direct Measurement of Condensate Resistance
Electrical resistance of a vacuum-deposited condensate has a non-linear relation to the condensate film thickness. Therefore, there exists a need for experiments to study the applicability of the non-invasive method for condensate resistance measurement and to identify its parameters by measuring condensate resistance directly. Using two-point measurement probes, this study analyses the influence of electrons emitted in the process of thermoelectronic emission from the evaporator on the method for direct measurement of condensate resistance. DOI: http://dx.doi.org/10.5755/j01.eee.20.2.6379
I. INTRODUCTION
Mathematical models [1], [2] of non-invasive methods for the measurement of vacuum-deposited condensates describe the measurement method itself quite precisely. This measurement method utilises a flow of electrons emitted from the hot evaporator. The electron flow highly depends on the evaporator temperature, its design, and the spatial configuration of grounded metal elements in the vacuum chamber. Magnetic fields in the vacuum chamber, as well as their direction and strength, also have a significant impact on the non-invasive measurement method. The main source of magnetic field in the chamber is the heating current, which flows in the evaporator as in a closed circuit and usually reaches from 50 A to 300 A. If the evaporator is heated by alternating current, there are generally no stable electron flows in the vacuum evaporation chamber, and their strength and direction fluctuate periodically. Experiments utilizing alternating current controlled by a thyristor converter revealed only a short time period during which the probes measured stable electric charge flows [3]. Therefore, further experiments utilized a DC stabilized voltage for evaporator heating. This allowed stable electric charge flows to be obtained over the whole time span.
The other disadvantage of the non-invasive method for resistance measurement is the fact that the electrical resistance of the growing condensate film has a non-linear relation to the condensate film thickness [4]. Therefore, there exists a need for experiments to study the applicability of the non-invasive method for condensate resistance measurement and to identify its parameters by measuring condensate resistance directly. This can be implemented using conventional two- or four-point [5] measurement probes, on which metal condensation takes place. During the process of evaporation, both the hot evaporator and the material being evaporated emit electrons, the flow of which reaches the condensation surface. This particular electron flow is used in the non-invasive method for condensate resistance measurement [2]. However, electrons above the surface of two- or four-point measurement probes can also influence the accuracy of the direct condensate resistance measurement [6].
By using two-point measurement probes, this study analyses the influence of electrons emitted in the process of evaporator electronic emission on the method for direct measurement of condensate resistance.
II. MATHEMATICAL MODEL OF CONDENSATE RESISTANCE MEASUREMENT USING DC STABILIZER
The previously developed mathematical model for the measurement of depositing condensate resistance [1] was complemented. This model additionally includes the stabilized probe current i_s (Fig. 2).
The condensate resistance measurement probe (Fig. 1) consists of an insulating material substrate 1 with width b having metallized areas 2 and 3. The distance between the areas equals L. Molecules of the material being evaporated hit the probe plane XY in the direction of the Z-axis. Ionized atoms and electrons emitted in the process of thermoelectronic emission reach this area too. As the distance between the probe plane XY and the evaporator is long, the density of the electric charge and material atom flow forming the condensate 4 between contact areas 2 and 3 is uniform. The mathematical model [1] assumes that the formation rate of condensate 4 between contact areas A and B is uniform (Fig. 1); the condensate resistance is denoted as r_p. As during early formation stages the condensate is deposited in islands, the value h denotes the equivalent condensate thickness. For development purposes, the condensate strip 4 is divided into sub-strips with length dx (Fig. 2), width b, and thickness h. The equivalent resistance of the electric charge flow i_E is denoted as r_E. To simplify the model, r_E is assumed to be constant.
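As a purely illustrative complement to this geometry, the sketch below sums the resistances of the sub-strips of a uniform condensate strip of width b, length L, and equivalent thickness h. The resistivity and dimensions are assumed values, not parameters from the paper.

```python
import numpy as np

def strip_resistance(rho, L, b, h, n_substrips=1000):
    """Resistance of a uniform condensate strip between contact areas A and B.

    The strip is divided into n_substrips elements of length dx = L / n_substrips,
    each contributing rho * dx / (b * h); for a uniform film the sum reduces to
    the familiar rho * L / (b * h).
    """
    dx = L / n_substrips
    return np.sum(np.full(n_substrips, rho * dx / (b * h)))

# Illustrative numbers only (not from the paper): a 10 nm thick, 1 mm wide,
# 5 mm long aluminium-like film with rho = 2.8e-8 Ohm*m.
print(strip_resistance(rho=2.8e-8, L=5e-3, b=1e-3, h=10e-9))  # ≈ 14 Ohm
```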
As the evaporator is powered from a direct current source, the electron flow of the thermoelectronic emission is exposed to constant electromagnetic and electrostatic fields. These fields deflect the electron flow away from the substrate aligned perpendicularly to the evaporator. This results in reduced measured voltages on the measurement probes. As a consequence, this method is only applicable for the analysis of materials with very high evaporation temperatures. However, a solution was found to expand the method's application limits.
It utilizes an additional electron source (a tungsten filament) placed closer to the measurement probes, at 2/3 of the total distance to the evaporator. This distance was selected to ensure the minimum influence of the electromagnetic field on the electron source and, at the same time, so that the source is not too close to the probes, to prevent the temperature of the film being formed from rising above the allowable limit. The solution enabled measuring the resistance of films condensed from conductive materials with low evaporation temperatures. It also contributed to stabilizing the values of the thermal EMF source and r_E without any noticeable impact on the film growth.
During the model development, the thermoelectromotive force E of this additional electron source has also been included.
The potential u_x of a sub-strip with width dx is related to the distance x to the contact area A by equation (1). The general solution of (1) is given by equation (2). An expression for the current i_x in the substrate has also been obtained (3).
Coefficients C1 and C2 depend on the initial conditions on contact areas A and B, i.e., on how the probe is connected in the measurement circuit. This study analyses the circuit in which probe area A is grounded and the potential u_B of area B, caused by the voltage drop on the resistance r_m, is measured (Fig. 3). As area A is grounded, the initial conditions are obtained from (3). In the case of the stabilized current i_s, a node equation (6) can be written for area B. By inserting (6) into (3) and factoring common members, expression (7) is obtained. To find coefficients C1 and C2, a system of equations is composed from (5) and (7). Considering the expressions of C1 and C2 [1], the relation of the condensate potential u_x to the distance x is expressed by (9). Figure 4 shows how the trend of the potential u_B (10) relates to the condensate resistance r_p for different EMF values of the additional thermoelectronic emission source.
The results presented show (Fig. 4) that the additional thermoelectronic emission source contributes to higher values of the voltage u_B. Changes of the EMF of the additional thermoelectronic emission source cause shifts of the voltage u_B extremum. Reduction of this EMF results in shifting the extremum of u_B towards the range of higher substrate resistances r_p. The results in Fig. 5 show that measurement of the substrate resistance r_p without the additional thermoelectronic emission source (corresponding to the state of a closed shutter) can only be started after the current stabilizer leaves saturation and starts stabilizing the current. With the fixed value of the stabilized current i_s = 2 µA and with the shutter open (thermoelectronic emission source voltage of 6 V), the substrate voltage u_B relates to the stabilized current value. The modelling results allow concluding that the additional thermoelectronic emission source will impact the measurement results; therefore, galvanic isolation of the resistance measurement and additional thermoelectronic emission source circuits is necessary.
To check the modelling results, the growing condensate resistance was measured using both the direct and the non-invasive methods. The modelling results also allow concluding that direct measurement of the substrate resistance will only be possible once the current stabilizer leaves saturation.
III. DIRECT METHOD OF CONDENSATE RESISTANCE MEASUREMENT
The circuit diagram of the experiment is presented in Fig. 6. The experiment utilizes three identical substrates P1, P2, and P3 located one beside the other. After setting the evaporation parameters (the evaporator is powered from a 15 V adjustable power source), the shutter is opened and the vapour flow reaches the substrates. The resistance of the substrate P1 is measured directly, using two-point measurement probes. The contact areas of the substrate P1 are connected to the stabilized current source (Fig. 6) which, for the purpose of the experiments, had its voltage set to 8 V and provided a substrate current i_s = 2 µA. In the process of the experiment, the voltage drop Up1 on the substrate P1 film was measured during the course of its formation. As can be seen from the above modelling results, this measurement method requires galvanic isolation of the substrate P1 from the common system ground; otherwise, the electron flow from the additional thermoelectronic emission source (the additional electron source is powered by a voltage of 6 V) impacts the measurement results, and this prevents determining the substrate resistance correctly. An AD210JN galvanically isolated operational amplifier was used for this purpose. The amplifier also contributed to reducing the noise impact due to the long measurement leads.
Substrates P2 and P3 were intended for resistance measurement using the electron emission current. The substrate P3 has two contact areas; one of them is connected to the system's ground and the other to the data collection board (DCB) through an INA128 operational amplifier installed within the vacuum chamber beside the substrate. The operational amplifier reduces the noise impact due to the connecting leads. The contact area of the substrate P2 is also connected to the DCB through the operational amplifier. The substrate P2 has no second contact area. In the process of the experiment, the substrate voltages Up2 and Up3 were measured respectively. The results of the experiment are presented in Fig. 7.
As can be seen, resistance measurement by the direct method can only be performed within a certain time period. This section, within which the measurement is possible, is denoted as H in Fig. 7. The substrate resistance can then be calculated from the measured voltage drop and the stabilized probe current. For comparison purposes, Fig. 7 also shows the voltages Up2 and Up3 of the non-invasive measurement probes with one and two contacts. As can be seen, the signal measured by the non-invasive method starts changing from the beginning of condensation, i.e., much earlier, before interval H is reached.
IV. CONCLUSIONS
By properly selecting the design of the emission source and the value of the stabilized current, direct condensate resistance measurement is possible within the section of substrate voltage decrease. In the case of the non-invasive measurement method which utilizes a probe with two contacts (Fig. 7), the trend contains a pronounced extremum that can be useful in control for obtaining the desired layer thickness of the material being evaporated. It was proven that the electron flow from the thermoelectronic emission source has an impact on the direct resistance measurement signal, and therefore it is necessary to galvanically isolate the circuits of the resistance measurement and the additional thermoelectronic emission source.
By replacing the distance x in (9) with the distance L between areas A and B, the relation of the contact area B potential u_B to β and, at the same time, to the resistance r_p is obtained (10). For development purposes, it is assumed that the condensate formation rate is constant, i.e., the condensate thickness has a linear relation to time, h = f·t. For this reason, the modelling results are presented as a relation to the substrate resistance r_p, which is inversely proportional to the thickness. Figure 5 presents the voltage u_B (10) modelling results obtained using a stabilized current source with a maximum voltage of 8 V and a stabilized substrate current i_s = 2 µA; the modelling was performed with an additional thermoelectronic emission source voltage of 6 V and a measuring device resistance r_m = 10 MΩ.
Fig. 2. Schematic circuit diagram of the condensate resistance measurement model, where contact area A is grounded, and contact area B is connected to the ground through the resistance r_m. Additionally, the stabilized current i_s flows through the probe.
Fig. 4. Relation of the potential u_B to the condensate resistance r_p according to different values of the EMF in the additional thermoelectronic emission source. Stabilized current i_s = 0.
Fig. 5. Relation of the potential u_B to the condensate resistance r_p. Stabilized current i_s = 2 µA.
Fig. 7. Voltage drop on each of the substrates during the course of film formation. | 2019-04-13T13:03:28.173Z | 2014-05-02T00:00:00.000 | {
"year": 2014,
"sha1": "d6a3fff946a90cb44247a1ba27bab3d571c8ed09",
"oa_license": "CCBY",
"oa_url": "https://eejournal.ktu.lt/index.php/elt/article/download/6379/3338",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d6a3fff946a90cb44247a1ba27bab3d571c8ed09",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
257643338 | pes2o/s2orc | v3-fos-license | Simulations of Policy Responses During the COVID-19 Crisis in Argentina: Effects on Socioeconomic Indicators
This paper simulates the effects of policy responses to the COVID-19 pandemic on household income and employment in Argentina, using household survey data and administrative data on employment and wages by economic sectors. The paper also includes a gender and age group analysis. The results indicate that during the COVID-19 crisis, household income decreased. This welfare loss was nonlinear along the income distribution, with the lowest income earners suffering the most due to relatively higher informality at the bottom of the income distribution. The policy responses seem to have ameliorated the average drop in household income by around one-third, and prevented major increases in poverty and inequality.
Introduction and background
As in most countries, the COVID-19 pandemic has caused a severe crisis in Argentina. The preventive and mandatory social isolation policy (ASPO in Spanish), established by the national government on March 20, 2020, had the primary objective of immobilizing the population to prevent further transmission and to gain time to prepare the health system for the care of patients. The ASPO had been extended for more than 200 days, being one of the longest isolation periods in the world and greatly affecting the country's economic activity. According to the National Institute of Statistics and Census (INDEC) of Argentina, economic activity fell by 25 percent in April and by 20 percent in May (compared to the same months of 2019). While many productive and essential services continued normally (e.g., food production, health services), other less essential services were significantly reduced (e.g., transportation, construction, domestic services). Those that required a physical presence in the workplace (e.g., manufacturing, construction) were also restricted, and others were directly suspended (e.g., tourism, recreation). In addition, some activities were adapted and carried out remotely (e.g., many professional services, education). This reduction in economic activity has put the job stability of many people at risk, affecting their income levels and deteriorating social indicators.
In this context, the Argentine government has implemented a series of economic policy responses to face the crisis. To guarantee access to food and to sustain the income of less well-off sectors, it established the Emergency Family Income (IFE in Spanish). This was an exceptional monthly payment of $10,000 pesos (around US$120, about 60 percent of the minimum wage) during April, June, and August 2020 to unemployed people, informal workers, and low-income self-employed workers, among other beneficiaries. Our simulations suggest that the policy responses offset approximately a third of the average drop in household income (-4.1 percent vs -6.0 percent) and prevented major increases in poverty and inequality. A key aspect of the satisfactory policy response was that public assistance was targeted at informal workers and at less well-off households with children. This policy response's large offsetting effect is in line with previous findings such as Lustig et al. (2020). 5 Given the crisis' magnitude and the public policy responses, it is very important to determine the damages of the crisis and how effective these policies have been in mitigating it. However, any attempt to study this phenomenon while the pandemic is going on faces two main limitations. First, there are limited data available on how COVID-19 is affecting economic activity in real time. For example, at the moment of writing this paper, only the household survey corresponding to the first quarter of 2020 is available in Argentina. Also, information on how employment in the different sectors is being affected has been published with lags. Second, we face a limitation on how to predict the evolution of this uncertain phenomenon. Argentina, like the rest of the world, is going through the COVID-19 pandemic without certainty about when or how it will stop.
With these caveats in mind, we believe that this paper makes two contributions. First, the paper contributes to the literature studying the effects of the COVID-19 pandemic on socioeconomic variables such as unemployment, poverty, and inequality. To some extent, it fills the gap on the immediate effects of the COVID-19 crisis at the household level for most low- and middle-income countries, as remarked by Janssens et al. (2021). 6 Since we focus on Argentina, a Latin American developing country, the paper is closely related to some previous studies for the region on this topic. Lustig et al. (2020) micro-simulate the distributional consequences of COVID-19 in Latin American countries, considering the expanded social assistance that governments introduced in response. Their findings suggest that i) the worst effects are not on the poorest but on those (roughly) in the middle of the income distribution, ii) the policy responses presented a large offsetting effect but with different intensities across countries, and iii) the increase in poverty induced by the lockdown was similar for male- and female-headed households. Moreover, Bonavida and Gasparini (2020) analyze the effects of the COVID-19 pandemic on remote employment, evaluating to what extent this type of employment is feasible for Argentine workers. They suggest that about a quarter could do it remotely, and the degree of applicability of this modality is very heterogeneous (by occupation and industry). Less compatible occupations are characterized by a higher share of informal and self-employed workers, with lower levels of education, skills, and wages. Thus, the short-term negative effects of the pandemic would be greater in the lower-income sectors, implying a significant increase in poverty and income inequality. 7 Brum and De Rosa (2021) micro-simulate the short-run effect of the crisis on the poverty rate for Uruguay and estimate the effect of the crisis on formal, informal, and self-employed workers, finding that during the first full month of the lockdown, the poverty rate increased by approximately 3.2 points, from 8.5 percent to 11.7 percent. This represents about 110,037 additional people below the poverty line. Cash transfers implemented by the government would have had a positive but very limited effect in mitigating this poverty spike. Second, the paper provides estimates of the effectiveness of the government's response to the pandemic with respect to its impact on household welfare. Thus, the paper provides policy makers with a real-time approximation of the effects of the adopted policies, highlighting pros and cons to help design the new round of measures to be adopted at the end of the pandemic.
The rest of the paper is structured as follows. Section 2 details the data and the methodological approach. Section 3 presents the simulation, distinguishing between scenarios before and after COVID-19. Section 4 summarizes the key findings and offers guidelines and recommendations for public policy discussions.
Data and methodology
The main source of our data is the Permanent Household Survey (EPH in Spanish), carried out by the INDEC. The survey covers urban areas that represent around 62 percent of the total population. 8
5. For the San Francisco Bay area, Martin et al. (2020) also find that government benefits decrease the amplitude of the crisis.
6. As remarked by Janssens et al. (2021), most of the evidence is from developed countries. See, for example, Coibion et al. (2020), Forsythe et al. (2020), and Montenovo et al. (2020).
7. Similarly, and for the United States, Montenovo et al. (2020) show that job loss is larger in occupations that cannot be performed remotely.
8. Unfortunately, the EPH, which is the main household survey in Argentina, does not cover rural areas. This is a strong data limitation that prevents us from understanding the distributional effects of COVID-19 in rural areas.
It contains information on whether the household member is (labor) active and in which economic sector they work. The survey also reports information on earned income, for both labor and non-labor incomes. The latter includes cash transfers, including those from government. 9 Economic shocks like COVID-19 can affect household welfare through different channels (World, 2020; Kokas et al., 2020; Araar et al., 2020). First, through the impact on labor income due to the direct effect of lost earnings because of illness or the need to take care of sick household members. Also, households can experience shocks to earnings and employment, caused by a decline in aggregate demand and supply disruptions. 10 Second, through the impact on non-labor income due to, for example, a decline in international remittances or changes in public transfers. Third, through the impact on consumption if prices change or shortages of basic consumption goods take place. Finally, through service disruptions with adverse impact on non-monetary dimensions of welfare. For example, the suspension of classes and feeding programs in schools, leading to impacts on student retention, learning, and nutrition, or the potential saturation of health systems in countries with a high incidence of COVID-19. Also, disruptions in mobility can occur due to quarantines and other containment measures, which may drastically reduce public and private transportation services.
The paper focuses on the effects of COVID-19 policy responses on household labor income using different scenarios and considers government policy responses. 11 To measure household welfare, we mainly use monthly gross income per capita (gipc). The methodology to simulate the COVID-19 impact in different scenarios involves several steps. First, we characterize the pre-COVID-19 scenario using the EPH corresponding to the first quarter of 2020. 12 We denote this initial scenario as the pre-COVID-19 one, providing an accurate representation of how the Argentine situation, in terms of incomes, poverty, and inequality, stood before the pandemic. To measure poverty, we use the same methodology as the INDEC. 13 That is, we compare the current household's total income to its poverty line, based on the market value of a food basket, a non-food basket, and the number of household members. 14 We rely on the Gini coefficient and Atkinson indicators to measure the impact on inequality. In this pre-COVID-19 scenario, as in the others, we explore heterogeneity of the results regarding age groups, gender, and economic sectors.
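As a rough sketch of how such indicators can be computed from per capita incomes and survey weights, the functions below implement the FGT family (Foster et al., 1984) and a weighted Gini coefficient. This is an illustration, not the authors' code.

```python
import numpy as np

def fgt(income, poverty_line, alpha=0, weights=None):
    """Weighted FGT poverty index: alpha=0 headcount, 1 poverty gap, 2 severity."""
    income = np.asarray(income, dtype=float)
    w = np.ones_like(income) if weights is None else np.asarray(weights, dtype=float)
    gap = np.clip((poverty_line - income) / poverty_line, 0, None)
    poor = income < poverty_line
    return np.sum(w * gap ** alpha * poor) / np.sum(w)

def gini(income, weights=None):
    """Weighted Gini coefficient of per capita income (trapezoidal Lorenz curve)."""
    income = np.asarray(income, dtype=float)
    w = np.ones_like(income) if weights is None else np.asarray(weights, dtype=float)
    order = np.argsort(income)
    income, w = income[order], w[order]
    p = np.concatenate(([0.0], np.cumsum(w) / np.sum(w)))
    lorenz = np.concatenate(([0.0], np.cumsum(income * w) / np.sum(income * w)))
    return 1.0 - np.sum(np.diff(p) * (lorenz[1:] + lorenz[:-1]))
```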
We then simulate the post-COVID-19 scenario for three quarters after the pandemic, considering the effects on employment and on labor incomes. 15 We assume that labor income variation affects only households with active members and non-zero labor income. Given that the COVID-19 shock may have differential effects by economic sectors depending on productive sector characteristics, for the employment simulation, we use data on employment variations disaggregated by 11 sectors, provided by the Argentine Ministry of Labor, Employment and Social Security. 16 We also distinguish between public and private employment and in the latter between formal and informal workers.
Beyond the possibility that urban areas have been the epicenter of the pandemic, we believe that this paper is an appropriate attempt to approximate the effects of COVID-19 in Argentina at the beginning of the pandemic given the data availability.
9. We do not impute the rental value of owner-occupied housing.
10. The impacts can take one or more of the following forms: (a) a decline in quantity of work, either hours (intensive margin) or employment (extensive margin); (b) a decline in wages, which is unlikely for salaried workers in the short run but may occur over time due to furloughs or wage cuts by some employers to avoid layoffs; (c) a decline in the income of self-employed workers due to the reduction of economic activity (sales, production) in micro and small enterprises due to the fall in demand and disruptions in supply of inputs or due to mobility restrictions, particularly for migrants engaged in seasonal agriculture.
11. We do not consider the effect of price changes given that in Argentina home production is negligible, and so net producer/consumer models are not relevant for this country. In addition, we also do not consider remittances given that they are not a relevant component of households' income. In 2019, remittances only accounted for 0.11 percent of GDP.
12. This is the latest wave available at the time of writing this paper.
13. See here for the official methodology on poverty estimation in Argentina.
14. To calculate poverty and indigence (i.e., extreme poverty) following INDEC, we modify the gipc by dividing the total income of the household by the number of equivalent adult members of the household (that is, a man between 30 and 60 years based on calorie needs) instead of just the number of members. We use the traditional FGT poverty indicators (Foster et al., 1984), considering poverty lines (and extreme poverty) that vary according to the geographical location of the household (i.e., region) due to the variation in the price level. Specifically, these regions are: Gran Buenos Aires, Noroeste Argentino, Noreste Argentino, Cuyo, Pampa, Patagonia.
15. Note that the quarters correspond to the second, third, and fourth quarters of 2020.
16. See here. These data provide quarterly information on the number of formal and informal workers by economic sector.
The simulations on employment variations involve three steps: (i) determining how much employment falls in each sector and also between formal and informal workers, (ii) determining who loses their job, and (iii) determining the variation in wages for those who remain employed. To determine how much employment falls, we rely on past information. 17 The COVID-19 shock was the largest economic contraction that Argentina experienced after the Great Recession of 2008/2009, with the output contraction being almost twice as high during the COVID-19 shock than it was in the Great Recession. Therefore, for the simulation of employment variations we use the largest quarterly drop in employment during the Great Recession, scaled up by the ratio of output drops during the COVID-19 shock relative to those during the Great Recession. 18 Given the challenge of simulating a crisis of unprecedented magnitude, we believe that using a major global recession as a reference could be useful, although we are aware that this assumption is not free of limitations (i.e., differences in how each crisis, the Great Recession and COVID-19, affected different countries).
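A stylized version of this scaling rule is sketched below; the sector names and the Great Recession drop figures are placeholders rather than the values of Table A1, while the output ratios follow the text (about 2, 1.1, and 0.9 for the three simulated quarters).

```python
# Scale sectoral employment drops observed in the Great Recession by the ratio
# of output contractions (COVID-19 vs. Great Recession).
output_ratio = {"2020Q2": 2.0, "2020Q3": 1.1, "2020Q4": 0.9}

great_recession_drop = {          # largest quarterly employment drop, by sector (illustrative)
    ("commerce", "formal"): 0.03,
    ("commerce", "informal"): 0.08,
    ("construction", "formal"): 0.06,
    ("construction", "informal"): 0.12,
}

def simulated_drop(sector, status, quarter):
    """Assumed employment drop for a sector/formality status in a given quarter."""
    return min(1.0, great_recession_drop[(sector, status)] * output_ratio[quarter])

print(simulated_drop("construction", "informal", "2020Q2"))  # 0.24
```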
To determine who loses their job, we rely on a simple selection model based on individual probabilities. Specifically, we estimate logistic regressions for formal and informal workers together with unemployed individuals. The dependent binary variable takes the value of one if the individual is employed and zero otherwise. The vector of independent variables includes a set of individual and household observable characteristics. 19 We then use the estimated probabilities, such that those with a lower probability of being employed are more likely to lose their jobs. 20 To determine the variation in wages for those who remain employed, we simulate the variation in labor income for those who continue working (as in the pre-COVID-19 scenario) using data on wages from the INDEC. In this case we consider income variation for public employees, private employees, and informal workers (non-registered salaried employees and self-employed workers). 21 Note that we follow a standard microsimulation without behavioral responses. Moreover, the microsimulation is parametric: we define those who lose their job based on the estimated probability of being employed through a logit model. Formally, we define Y_{i,h,0} as the pre-COVID-19 (2020, first quarter) total income for individual i in household h and L_{i,h} as the labor income variation for individual i: 22
Y_{i,h,1} = Y_{i,h,0} + L_{i,h},    (1)
where Y_{i,h,1} constitutes the simulated total income in the post-COVID-19 scenarios.
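A compact sketch of the job-loss selection step described above is given below. The covariate names are hypothetical and the real exercise uses EPH microdata with survey weights; the sketch only illustrates the ranking-by-predicted-probability mechanism.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def simulate_job_losses(df, drop_rate):
    """Flag workers who lose their job within a sector/formality cell.

    `df` is a pandas DataFrame with an `employed` indicator plus covariates;
    among the employed, the `drop_rate` share with the lowest predicted
    employment probability is selected to become unemployed.
    """
    covariates = ["age", "female", "educ_years", "household_size"]  # hypothetical
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["employed"])
    df = df.copy()
    df["p_employed"] = model.predict_proba(df[covariates])[:, 1]

    employed = df[df["employed"] == 1]
    n_losses = int(round(drop_rate * len(employed)))
    losers = employed.nsmallest(n_losses, "p_employed").index
    df["loses_job"] = False
    df.loc[losers, "loses_job"] = True
    return df
```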
Since inflation in Argentina is very high, we further deflate total incomes by the price-level variation of the total basic basket between the first and three subsequent quarters that cover the post-COVID-19 scenario. 23 With these real final incomes, we re-estimate total household income and the gipc to compute poverty and inequality indicators. We denote these scenarios as first-, second-, and third-quarter-ahead scenarios without policy responses.
We then simulate government policy responses in line with what the Argentine government is doing to mitigate the crisis, considering the most relevant ones in terms of household welfare (see Appendix Section 4). 24 We first consider the IFE, which consisted of three exceptional monthly payments during April, June, and August 2020 of $10,000 pesos each to less well-off families. Note that this policy affects only the first and second quarters after the COVID-19 simulations. We then consider the extra payment to recipients of the AUH and the AUE as well as the payment to retirees. This was an exceptional payment of $3,000 pesos for the first and second quarter of 2020 after COVID-19. For the third quarter, the AUH payment was increased up to $6,000 (corresponding to a 5 percent increase plus 20 percent of the cash transfer that is withheld every month and received at the end of the year). In terms of the simulation, we identify in the EPH all potential beneficiaries of these programs according to the eligibility criteria and then simulate the cash transfer T_{i,h}. 25 Thus, the after-policy-response income of individual i in household h is
Y_{i,h,2} = Y_{i,h,1} + T_{i,h}.    (2)
We then compute, again, the poverty and inequality indicators for these scenarios, denoted as first-, second-, and third-quarter-ahead scenarios with policy responses. Here, a critical point regarding the evolution of labor incomes must be considered when comparing the post-COVID-19 quarters with the pre-COVID-19 scenario. Workers in Argentina benefit from a wage bonus, known as aguinaldos, that is paid bi-annually during June and December. They are consequently registered and included in the first and third quarters of our simulations but not in the second and fourth ones. Thus, to provide an accurate comparison, the pre-COVID-19 scenario should be compared with the second-quarter-ahead scenario. This is valid for both the scenario with and without policy responses. Finally, leaving aside the pre-COVID-19 scenario, each quarter can be compared between the scenario with and without policy responses, respectively, which provides a good approximation for the effects of policy responses after the pandemic.
22. We collapse labor income to zero for those individuals who are selected to lose their job, and we impute the percentage change in wages for those who remain employed. Note that the income that collapses to zero is labor income, which is the focus of our simulation. This assumption could be very restrictive if individuals lose their jobs, lose their income, and use savings or sell assets during unemployment. These income components, other than labor, are not modeled or considered in equation 1. If these types of effects exist, the income effects captured in this paper could be considered bound effects (e.g., lower bound effects if the income collapse is, partially or completely, offset through asset sales or spending savings).
24. Our simulations only indirectly consider the ATP program. Since this program was granted to companies to pay salaries and avoid dismissals, the wage and employment figures we use for the simulation implicitly contemplate it.
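A stylized version of the transfer imputation described above is sketched below. The eligibility rules are deliberately simplified stand-ins for the actual program criteria, and the column names are hypothetical.

```python
import pandas as pd

IFE_AMOUNT = 10_000        # per payment, 2020 pesos
AUH_EXTRA = 3_000          # exceptional extra payment

def simulate_transfers(df):
    """Impute simulated cash transfers T_{i,h} per individual (simplified rules)."""
    df = df.copy()
    # IFE: informal or unemployed working-age adults in less well-off households
    # (simplified proxy for the actual eligibility criteria).
    ife_eligible = (df["age"].between(18, 65)
                    & (df["informal"] | ~df["employed"])
                    & (df["household_income_pc"] < df["poverty_line"]))
    # AUH/AUE extra payment: households already receiving the child allowance.
    auh_eligible = df["receives_auh"]

    df["transfer"] = ife_eligible * IFE_AMOUNT + auh_eligible * AUH_EXTRA
    return df
```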
Pre-COVID-19 scenario
We begin by characterizing the pre-COVID-19 scenario, corresponding to the first quarter of 2020. According to the EPH, approximately 12 million people were employed during the first quarter of 2020. Figure 1 shows the distribution of workers by gender, age group, and economic sector. Male workers represent 56 percent of total employees, and around 78 percent of all workers were 25 to 59 years old. The most relevant economic sectors in terms of employment were commerce (19 percent), financial services (11 percent), and manufacturing (11 percent). The informality rate was, on average, 40 percent, but it decreased with the income level.
25. In other words, we follow the benefit incidence analysis, one of the most widely used methodologies for policy response simulation (van de Walle, 1995; Bourguignon et al., 2003; Gasparini et al., 2014; Lustig, 2017).
Panel A in Figure 2 indicates that for the lowest (highest) income decile, the informality rate is near 85 (10) percent. The share of women among informal workers seems to be slightly higher as income levels increase. Informality also varies across economic sectors (see Panel B in Figure 2), with domestic services being the sector with the highest informality rate (76 percent) and a higher relative participation of women among its informal workers. Construction is another sector with high informality, but it is made up mostly of male workers. The education and health sectors have relatively low informality rates. Table 1, Column [1] presents the average per capita income for the pre-COVID-19 scenario. For the first quarter of 2020, it was $19,914, slightly higher for households with a male household head (see Table A4). The richest decile shows an income approximately 22 times higher than the poorest. Table 2, Column [1] provides the same information but distinguishes between economic sectors instead of income deciles.26 The table also shows wide heterogeneity across sectors. The highest per capita incomes are observed in primary activities, financial services, and social and health services. Domestic services, construction, and hotels and restaurants are among the sectors with the lowest incomes. Table 3, Panel A, Column [1] shows that these per capita income figures result in a Gini coefficient of 0.441. Income inequality was more pronounced among female-headed households (0.458; Panel B, Column [1]). In terms of indigence and poverty, 8.59 percent of the population was indigent, and the poverty rate was around 34.5 percent, which represents around 9.8 million people (Panel C, Column [1]). Again, the poverty incidence is higher for households with a female head (38.5 versus 31.8; Panel B, Column [1]).
Post-COVID-19 scenarios without policy responses
As previously mentioned, the COVID-19 shock was the largest economic contraction that Argentina experienced since the Great Recession of 2008/2009. During the second quarter of 2020, the country experienced an output reduction of around 20 percent. Naturally, this shock had considerable effects on employment. Given that the output contraction during the COVID-19 shock was almost twice as large as in the Great Recession, we assume that employment reductions during the second quarter of 2020 were twice those experienced during the Great Recession.27 Under this assumption, employment among formal workers was reduced, during the second quarter of 2020, by 24.2 percent, and by 47 percent among informal workers (see Table A1). Drops in informal employment were larger than those in formal employment for all economic sectors. In our simulations, around 2.6 million people lost their jobs between the first and the second quarter of 2020.
26. See Table A5 for a more detailed sectoral effect when considering the household head's gender.
27. Output reductions during the third and fourth quarters of 2020 represent 1.1 and 0.9 times the decline associated with the Great Recession, respectively. Therefore, the simulated scenarios for these periods assume that the falls in employment were 1.1 and 0.9 times those experienced during the Great Recession, respectively.
(Figure 2. Pre-COVID-19 scenario: share of informal workers by deciles, gender, and sectors. Source for Figure 2 and Tables 1-3: Authors' own calculations based on Ministry of Production and Labor and EPH-INDEC.)
In line with Figure 1, the largest absolute drops in employment occurred in commerce, financial services, manufacturing, domestic services, and construction. The share of women who lost their jobs varies across sectors, with domestic service being the sector in which women were most affected, as 92 percent of the newly unemployed workers were women (Figure 3). When looking at the employment rate, the contraction was relatively larger among women than men (-23 percent versus -20.8 percent, respectively). In early working ages (18-24) the differences are large: the fall in the employment rate was 63 percent for men and 80 percent for women (Figure 4). These differences become more pronounced when considering the presence of children. For example, for the age group between 18 and 24, the contraction in employment for men with children was close to 57 percent, while for women with children it was 82 percent. In the group between 25 and 40 years old, the falls were 18 percent and 30 percent, respectively. Even when conditioning on sectors, in 7 of the 11 analyzed sectors women's employment was more affected than that of men (see Table A3).28 In the simulated post-COVID-19 scenarios without policy responses, average per capita income decreased by about 6 percent (Table 1, Column [5]).29 However, this reduction was nonlinear along the income distribution. While the richest decile showed an income reduction of approximately 3.7 percent, the reduction among the poorest was around 15.7 percent.30 This is explained by the fact that informal workers are concentrated at the bottom of the income distribution and were the ones most affected by the crisis (they are not protected by labor regulations).31 The results for this scenario in terms of poverty and income distribution were as expected. Indigence rose sharply to 12.4 percent, 45.3 percent higher than in the pre-COVID-19 scenario. Poverty incidence rose to 39.22 percent (13.5 percent higher than in the pre-COVID-19 scenario).
28. These results should be taken with caution since we are not testing the significance of the gender differences.
29. Remember that, given the inclusion of aguinaldos, an accurate comparison with the pre-COVID-19 scenario (first quarter of 2020) must be made with the second quarter ahead of the post-COVID-19 scenario.
30. Janssens et al. (2021) find a similar reduction of around one-third in the poorest household incomes. Martin et al. (2020) perform a simulation for the San Francisco Bay area and also document that the lowest income earners suffer the most in relative terms.
31. See Figure 2 and Table A1 for the informality rate by income deciles and by economic sectors and for the employment variation between formal and informal workers.
Interestingly, the poverty measures that account for the depth and severity of poverty, FGT(1) and FGT(2), show even larger increases. Increases in poverty incidence were similar for female-headed households (12.5 percent) and male-headed households (14.3 percent). This change in poverty represented an additional 1.3 million people under the poverty line (Table 3, Panel C, Column [4]). According to our simulation, these "new poor" individuals are those who work in the informal sector, either as wage earners or self-employed, and their dependents. These individuals belong to households with an average size of around five people, where 44 percent of members are children. They are mostly employed in sectors such as construction, domestic service, hotels and restaurants, and manufacturing. In terms of education, they have approximately 6 years of schooling, 2.4 years less than the average for the total population. Income inequality also worsened substantially; the Gini coefficient for this scenario was 0.455, 3.1 percent higher than that of the pre-COVID-19 scenario. This could be related to the fact that the lowest deciles experienced larger income reductions than the highest deciles.
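For readers who want to reproduce indicators of this kind, the following generic Python sketch computes the Foster-Greer-Thorbecke poverty measures FGT(0) (headcount), FGT(1) (poverty gap), FGT(2) (severity), and the Gini coefficient from a vector of per capita incomes. It ignores survey weights, is not the authors' code, and the example incomes and poverty line are made up.

```python
import numpy as np

def fgt(income, poverty_line, alpha=0):
    """FGT(alpha) poverty index: 0 = headcount ratio, 1 = poverty gap, 2 = severity."""
    y = np.asarray(income, dtype=float)
    if alpha == 0:
        return np.mean(y < poverty_line)
    shortfall = np.clip(poverty_line - y, 0.0, None) / poverty_line
    return np.mean(shortfall ** alpha)

def gini(income):
    """Gini coefficient computed from sorted incomes via the cumulative-share formula."""
    x = np.sort(np.asarray(income, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

incomes = np.array([2_000, 5_000, 9_000, 15_000, 19_914, 30_000, 60_000], dtype=float)
line = 16_000  # illustrative poverty line in pesos per capita
print(fgt(incomes, line, 0), fgt(incomes, line, 1), fgt(incomes, line, 2), gini(incomes))
```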
Continuing with the post-COVID-19 scenarios without policy responses, an interesting exercise is to compare the third quarter ahead with the first quarter ahead (i.e., a comparison with the probable worst moment of the pandemic). This provides a sense of the evolution of income and employment net of public assistance, that is, changes that can be associated with economic activity only. Table 1, Column [7] shows that household income, on average, increased by 2.1 percent, but the differential effect on income distribution remains. The income of the richest decile fell by 2 percent, while the income of the poorest decile increased by 20.9 percent. This, consistent with administrative data, may be due to a faster recovery of informal employment-which has a greater relative weight in the lower part of the income distribution-than formal employment.
According to INDEC, the employment rate for informal (formal) wage earners moved from 6.1 (19.6) percent to 7.7 (19) percent between the second and the third quarters of 2020. This recovery of income occurs precisely in sectors that have a relatively higher share of informality, such as domestic services, construction, and commerce (Table 2, Column [7]).32 Consistent with this evolution of income, poverty and income distribution responded as expected (Table 3, Panel A, Column [7]). Indigence was reduced by 26.3 percent, and poverty incidence fell by 5.2 percent; this change represents around 0.6 million people moving out of poverty (Panel C, Column [6]). Income inequality also improved substantially. The Gini coefficient for the third-quarter-ahead scenario is 0.444, 4.7 percent lower than in the first-quarter-ahead scenario after COVID-19 (0.465).
Post-COVID-19 scenarios with policy responses
When considering government policy responses, the post-COVID-19 scenario improves. A comparison with the pre-COVID-19 scenario shows a contrast in which economic activity and policy responses are, jointly, the main drivers behind the changes. In this scenario, the average reduction in per capita income is smaller than it would have been without public assistance (versus -6.0 percent in Column [5]). For a household in the second decile, income was reduced by 3.4 percent with policy responses, but without public assistance this reduction would have been 14.1 percent. This "cushioning" effect of public assistance occurred, albeit with decreasing intensity, in the following deciles.
Further, as expected, the policy responses do not seem to have had a substantial impact on the top 20 percent of the income distribution. When analyzing by economic sector, the findings are similar. It is worth noting, though, that public assistance ameliorated the situation of some activities highly exposed to COVID-19. For example, construction workers' incomes fell by 3.6 percent (Table 2, Column [11]); this reduction would have been 7.4 percent without policy responses (Column [5]). Similar conclusions can be drawn for other sectors such as domestic services and hotels and restaurants (see Table A1).
Public assistance also ameliorated the scenario in terms of poverty and income distribution (Table 3, Panel A, Columns [10] and [11]). Indigence increased to 10.02 percent, 16.6 percent higher than in the pre-COVID-19 scenario. It is worth recalling that this 10.02 would have been 12.48 in the absence of policy responses (Panel A, Column [5]). Poverty incidence rose to 37.31 percent (2.73 percentage points, or 7.9 percent, higher than in the pre-COVID-19 scenario), implying a reduction of 2.46 (1.92) percentage points in the indigence (poverty) rate when compared to the scenario without policy responses (Panel A, 10.02 (37.31) in Column [10] versus 12.48 (39.22) in Column [5]). This effect prevented more than 0.55 million people from falling into poverty (Panel C, Column [10]).
Policy responses were equally important in alleviating poverty in female- and male-headed households. For the former, poverty rose from 38.58 to 41.65 (Panel B, Columns [1] and [10]), while it would have increased to 43.40 in the scenario without policy responses (Column [4]). This represents a cushion of 1.75 percentage points, or 4 percent (1.75/43.40). In the case of male-headed households, the cushion was 2.03 percentage points (36.37 versus 34.34), or 5.6 percent (2.03/36.37). Income inequality also improved relative to the scenario without policy responses: the Gini coefficient for this scenario is 0.450, very similar to that of the pre-COVID-19 scenario (Panel A, Column [10]).
To better understand these results, it is useful to characterize the public assistance beneficiaries. In our simulations, around 10 million people received some kind of public cash transfer. We identify 4.7 million who received the IFE, 3.5 million who received a bonus for the AUH, and 2.7 million retirees. The share of women among these groups is 54 percent, 57 percent, and 67 percent, respectively. Figure 5 presents how these beneficiaries are distributed along the income distribution. The figure shows that the AUH is the most pro-poor cash transfer, with around 84 percent of its beneficiaries in the bottom 40 percent of the income distribution. Approximately 72 percent of the IFE's beneficiaries are also in this bottom 40 percent. This is in line with the previous figures on informality: given that the IFE's eligibility criteria targeted informal workers, unemployed workers, and low-income self-employed workers, and that informal workers are concentrated at the bottom of the income distribution, the IFE's beneficiaries are logically concentrated there. Finally, the least pro-poor distribution is associated with retirees and pensioners; only around 23 percent are in the bottom 40 percent of the income distribution.
Finally, comparing the same quarters between the scenarios with and without policy responses provides additional relevant insights into policy effects. Note that here the policy responses are the sole driver behind the changes, since the economic-activity driver is the same in both scenarios. For this purpose, in Column [9] of each table (i.e., Tables 1 and 2) we compare the percentage change of income levels and poverty and inequality indicators in the first quarter after the pandemic with and without policy responses. In Column [13] of Tables 1 and 2 we do the same for the third quarter. The results indicate that average per capita family income was consistently higher, on average, in all quarters compared to what it would have been without public assistance. This holds, naturally with different intensity, for both the lower and upper parts of the income distribution. Given that we assume that the intensity of public assistance decreases as the economy's contraction eases, in the first quarter after the pandemic household income was, on average, 2.5 percent higher than it would have been without public assistance (Table 1, Column [9]), while during the third quarter this difference decreased to 1.5 percent (Column [13]). This phenomenon can also be seen when analyzing the dynamics of income by economic sector (Columns [9] and [13]). The gradual withdrawal of public assistance is also consistent with its smaller contribution to reducing indigence and poverty and to improving the income distribution. In terms of indigence reduction, during the first quarter post-COVID-19, public assistance reduced the indigence rate by 14.8 percent (Table 3, Panel A, Column [9]). Despite the large fall of the economy and the slow recovery of incomes, public assistance still reduced indigence by 15.3 percent in the third quarter (Panel A, Column [13]). In terms of poverty, it is interesting to look at the absolute numbers of individuals. During the first quarter, public assistance contributed to preventing nearly half a million people from falling into poverty (521,045 in Table 3, Panel C, Column [9]). In the third quarter, with a lower number of poor people, public assistance helped nearly another 400,000 people avoid becoming poor (375,360 in Panel C, Column [13]).
Conclusions and recommendations for policy discussion
While the health effects of the COVID-19 crisis were the initial focus of the government, its socioeconomic effects and the accompanying policy responses have been receiving more attention, mainly in low- and middle-income countries. In this context we analyze the impact of the COVID-19 crisis on households' incomes, unemployment, poverty, and inequality in Argentina. For this purpose, we use a standard microsimulation methodology to simulate impacts on welfare at the household level, combining household survey data with administrative data on employment and wages by economic sector. The simulations also include the public cash transfers (i.e., policy responses) implemented by the government to mitigate the crisis, to get a sense of how effective these measures were in counteracting the negative effects.
The results indicate that during the COVID-19 crisis, households would have experienced a reduction of about 6 percent in their incomes without any policy responses. This reduction was nonlinear along the income distribution, with the lowest income earners suffering the most in relative terms. This result is strongly related to the relatively higher informality at the bottom of the income distribution. The greater negative effects of the pandemic on the less well-off parts of the income distribution are in line with results from Bonavida and Gasparini (2020). Furthermore, the impact was not homogeneous by gender: on average, the employment rate fell more among women than men (-23 percent versus -20.8 percent when comparing the second and first quarters of 2020). In early working ages (18-24), the differences are very large: the fall in the employment rate was 63 percent for men and 80 percent for women. These differences become more pronounced when considering the presence of children. For example, for the 18-24 age group, the contraction in employment for men with children was close to 57 percent, while it was 82 percent for women with children. In the 25-40 age group, the contractions were 18 percent and 30 percent, respectively. Moreover, even when conditioning on sectors, in 7 of the 11 analyzed sectors women's employment was more affected than that of men. These findings are consistent with ILO (2020) and are associated with the overlap of work and care responsibilities (housework, childcare, and eldercare), which intensified during the pandemic, especially for households with children (OECD, 2020; WEF, 2021). In addition, we find that the policy response cushioned around one-third of what the average drop in household income would otherwise have been. This prevented major increases in poverty and inequality. A key aspect of the satisfactory policy response was that public assistance was targeted at informal workers and at less well-off households with children. The large offsetting effect of this policy response is in line with previous findings such as Lustig et al. (2020).
Our simulations face two main limitations. First, there are limited data available on how the pandemic is affecting economic activity in real time. At the time of writing, only the household survey corresponding to the first quarter of 2020 was available in Argentina, and information on how employment in the different sectors was being affected has been published with lags. Second, we lack the knowledge needed to predict how the pandemic will evolve. Argentina, like the rest of the world, is going through the pandemic without certainty about when or how it will subside.
In the transition toward the end of the pandemic, policy discussions should include short- and medium-term policies. For short-term policies, policymakers and academics should focus on how to accurately target public assistance. An important lesson from the Argentine case is that, since most of the poorest households work in the informal sector, relief measures that account for informality, such as those analyzed in this paper, become crucial. It is therefore very important to discuss policies aimed at reducing labor informality, as labor market policies play an important role in the formalization of employment. Given that most informal workers have low qualifications and work in jobs that are difficult for public policies to identify, an integrated approach combining economic, social, and labor policies is necessary (Bertranou et al., 2013). Along these lines, it is also important to address how to help people adapt to the transformation of the workplace in the post-COVID-19 era, since the crisis may lead to wider adoption of teleworking practices.
Also crucial in policy discussions is accurately identifying those who really need public assistance. All information about citizens contained in the administrative records of the different divisions of the public sector must be used. Investing in resources and modern technologies to handle this information well should also be considered, so that it is available at the right times, and efforts should be made to keep it as up to date as possible. All of this aims to minimize the typical errors of inclusion and exclusion that arise when targeting social policies. An adequate use of the administrative records of the Argentine social security system, through its different contributory and non-contributory programs, is a key aspect, despite its limitations, since it already covers most of the population (Giuliano et al., 2020). At the beginning of the COVID-19 crisis, on March 23, 2020, through decree 310/2020, the National Social Security Administration (ANSES) created the IFE benefit aimed at the most vulnerable sectors of the population. The IFE consists of an exceptional non-contributory monetary benefit intended to compensate for the loss or serious decrease in income of people affected by the health emergency (declared by decree no. 260/2020). To mitigate the increase in poverty and indigence, this measure was aimed at households composed of informal workers, unemployed workers, and low-income self-employed workers. The latter are those with an average monthly gross income of less than $17,394 (around US$220); that is, the sectors of the population with the highest degree of socioeconomic vulnerability.
The amount of the IFE was $10,000 (around US$120, which represents 60 percent of the minimum wage), and it could be collected by a member of the household who is in a situation of exclusion or job insecurity and is facing socioeconomic vulnerability. The IFE delimits its beneficiary population in two complementary ways. On the one hand, it provides assistance to workers affected by precarious job placement (low-income self-employed workers, domestic workers, informal employees, and unemployed workers). On the other hand, the program conditions this coverage on the employment and economic situation of the household to which the beneficiary belongs, in the sense that all its members must meet the conditions to access the IFE and only one of them may receive the benefit. The IFE was compatible with the receipt of other social programs such as the AUH or the AUE.
Simultaneously with the IFE, a bonus of up to $3,000 (around US$40) was granted to more than 4.6 million retirees and pensioners who received a single pension until reaching $18,892 (around US$240). In addition, the amount of the AUH and the AUE was doubled, benefiting more than 4.3 million children and adolescents who received a supplementary income of $3,103. | 2023-03-22T15:18:09.703Z | 2022-12-31T00:00:00.000 | {
"year": 2022,
"sha1": "aa55fb24bce7921931b3a5ca93c7cde499381605",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.34196/ijm.00269",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "94a61304d7272b3003f504460e01ae8d279ea2ae",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
195804908 | pes2o/s2orc | v3-fos-license | Risk of newly detected infections and cervical abnormalities in adult women seropositive or seronegative for naturally acquired HPV‐16/18 antibodies
Abstract Background Infections with human papillomavirus (HPV) types 16 and 18 account for ~70% of invasive cervical cancers but the degree of protection from naturally acquired anti‐HPV antibodies is uncertain. We examined the risk of HPV infections as defined by HPV DNA detection and cervical abnormalities among women >25 years in the Human Papilloma VIrus Vaccine Immunogenicity ANd Efficacy trial's (VIVIANE, NCT00294047) control arm. Methods Serum anti‐HPV‐16/18 antibodies were determined at baseline and every 12 months in baseline DNA‐negative women (N = 2687 for HPV‐16 and 2705 for HPV‐18) by enzyme‐linked immunosorbent assay (ELISA) from blood samples. HPV infections were identified by polymerase chain reaction (PCR) every 6‐months, and cervical abnormalities were confirmed by cytology every 12 months. Data were collected over a 7‐year period. The association between the risk of type‐specific infection and cervical abnormalities and serostatus was assessed using Cox proportional hazard models. Results Risk of newly detected HPV‐16‐associated 6‐month persistent infections (PI) (hazard ratio [HR] = 0.56 [95%CI:0.32; 0.99]) and atypical squamous cells of undetermined significance (ASC‐US+) (HR = 0.28 [0.12; 0.67]) were significantly lower in baseline seropositive vs baseline seronegative women. HPV‐16‐associated incident infections (HR = 0.81 [0.56; 1.16]) and 12‐month PI (HR = 0.53 [0.24; 1.16]) showed the same trend. A similar trend of lower risk was observed in HPV‐18‐seropositive vs ‐seronegative women (HR = 0.95 [0.59; 1.51] for IIs, HR = 0.43 [0.16; 1.13] for 6‐month PIs, HR = 0.31 [0.07; 1.36] for 12‐month PIs, and HR = 0.61 [0.23; 1.61] for ASC‐US+). Conclusions Naturally acquired anti‐HPV‐16 antibodies were associated with a decreased risk of subsequent infection and cervical abnormalities in women >25 years. This possible protection was lower than that previously reported in 15‐ to 25‐year‐old women.
| BACKGROUND
Infections with human papillomavirus (HPV) types 16 and 18 are responsible for approximately 70% of invasive cervical cancers. 1 While most infections clear on their own, some develop into precancerous lesions and cervical cancer.
Previous studies have shown that many women with incident HPV-16 or HPV-18 infections develop serum antibodies of the corresponding HPV type. [2][3][4][5][6][7][8] These naturally acquired antibodies can remain detectable for at least 4-5 years after the initial infection. 9 Whether or not these naturally acquired antibodies protect against future infection remains debatable. [10][11][12][13][14][15][16][17][18] Risk of incident HPV infections in adult women is positively associated with new sexual partners and with the lifetime number of sexual partners. 19,20 In older women, both new viral acquisition and intermittent detections of HPV from past exposures are likely to account for what has been classified as apparent new HPV infections. In women 30-50 years of age, factors associated with repeat HPV detection have been shown to be comparable in short-term and longer-term studies, suggesting an association between short-term repeat detection and long-term persistence. 21 As incident HPV detection is negatively associated with viral load as well as with repeat detection, this suggests that actual new acquisition of HPV is less common than reactivation or intermittent persistence.
The role of naturally acquired antibodies in the prevention of new infections and cervical abnormalities can be explored in the control arms of large HPV vaccine trials. A correlation between naturally acquired antibodies to HPV-16 (and to a lesser extent HPV-18) and reduced risk of newly detected infection was demonstrated in younger women (15-25 years) in the control arm of the PApilloma TRIal against Cancer In young Adults (PATRICIA; NCT00122681). 12 Here, we examined the risk of "newly" detected HPV infections and cervical abnormalities among women >25 years in relation to naturally acquired HPV-16/18 antibodies in the control arm of the VIVIANE trial during a 7-year follow-up period. 22,23 Our aim was to assess whether the risk factors for HPV infection differed between seropositive and seronegative women. We also analyzed risk factors stratified by baseline serostatus to mitigate the limitations in differentiating between new and reactivated infections.
| Study participants and procedures
Women aged >25 years were included in the control arm of the multinational VIVIANE trial (Human Papilloma Virus: Vaccine Immunogenicity and Efficacy) and were followed up for seven years. VIVIANE is a phase 3, double-blind, controlled vaccine trial, with enrollment stratified by age, cytology, region, and serostatus. 23 The methodology of VIVIANE has been presented in detail elsewhere. 24 Our analysis included women DNA-negative for HPV-16 and -18 at Month 0, with normal or low-grade cytology (ie, negative or atypical squamous cells of undetermined significance [ASC-US] or low-grade squamous intraepithelial lesion [LSIL]) at Month 0, who had received at least one control vaccine dose (Al(OH)3), and who had had sexual intercourse before or during the follow-up (Figure 1).
Serum anti-HPV-16/18 antibodies were determined by enzyme-linked immunosorbent assay (ELISA) from blood samples collected at baseline and every 12 months thereafter. Seropositivity was defined as an antibody level greater than or equal to the assay cutoff, which was 8 ELISA units (EU)/mL for HPV-16 and 7 EU/mL for HPV-18. 25 Liquid-based cytology samples were tested for HPV using DNA typing PCR-based assays every six months, and cytopathological examinations were performed every 12 months. 25 Information on known risk factors that predispose women to HPV cervical infection or recognized cofactors for cervical carcinogenesis was also collected through questionnaires. These data were collected at study entry and included demographic information, smoking habits, past and current sexual history, and reproductive status. In addition, data on participants' sexual behavior and use of contraception were collected every six months up to Month 48.
Written informed consent was obtained from each woman before any study-specific procedures were implemented. The protocol and other materials were approved by a national, regional, or investigational-center Independent Ethics Committee or Institutional Review Board. The trial was conducted in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki).
| Statistics
The analyses were performed on the total vaccinated cohort (TVC) of the control arm of the VIVIANE trial and included all women who received at least one control vaccine dose, who were DNA-negative for HPV-16 and HPV-18 at Month 0, and who also had a normal or low-grade cytology (ie, negative or ASC-US or LSIL) at Month 0. All analyses were performed on women who had ever had sexual intercourse before study entry or during the follow-up period. Analyses were performed using SAS version 9.2. The incidence rate (IR) was calculated as the number of incident events divided by the total person-time. Person-years were calculated as the sum of the follow-up for each participant expressed in years. The follow-up period started on the day after first vaccination (control vaccine) and ended on the first occurrence of the endpoint or the last visit (whichever occurred first). The relationship between the exposure variables and the risk of newly detected infections or cervical abnormalities was assessed using Cox proportional hazard models. Univariate analyses were done to obtain unadjusted hazard ratios of the determinants of interest (not shown). For each endpoint, the following multivariable Cox models were performed including: 1. the type-specific serostatus at baseline as a binary variable; 2. the type-specific serostatus as a binary time-dependent variable; 3. the antibody level as a time-dependent continuous variable; 4. log-transformed antibody level as a time-dependent continuous variable.
For each endpoint, we included nine covariates in these models: region, age at inclusion, age at first sexual intercourse, marital status, smoking status at baseline, number of sexual partners during the past year, previous pregnancy, history of Chlamydia trachomatis infection, and history of HPV infection/treatment or nonintact cervix. HPV-associated infection or treatment was defined as two or more abnormal smears in sequence, an abnormal colposcopy or biopsy, or treatment of the cervix after abnormal smear or colposcopy findings. Histories of HPV infection/treatment were collected at baseline from the medical history.
For ASC-US+ only, previous type-specific HPV infection was included as a time-dependent variable since the presence of these cells indicates an active infection at a specific point in time. For CIN1+ and CIN2+ endpoints, no inferential analyses were performed due to the low number of cases. Also, analyses of determinants of interest were performed separately for the baseline seronegative and seropositive subjects to help determine whether newly detected infections were new or had been reactivated. The analysis is based on two assumptions: (a) An association between a latent reactivated infection and a known risk factor should be weaker than an association between a new infection and a known risk factor. (b) The reactivation of a PI should be more frequent in the baseline seropositive (representing presumed prior HPV infection exposure) subjects than in the baseline seronegative (representing presumed naïve, absent prior HPV infection exposure) subjects.
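As an illustration of this type of analysis (not the study's actual code, which used SAS), the baseline-serostatus Cox model and the crude incidence rate could be reproduced in Python with the lifelines package; the data file and column names below are hypothetical stand-ins for the trial variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis dataset: one row per woman, with follow-up time in years and
# event = 1 if a newly detected HPV-16 6-month persistent infection occurred, else 0.
df = pd.read_csv("hpv16_endpoints.csv")

covariates = ["sero_baseline", "region", "age_enroll", "age_first_sex", "marital_status",
              "smoker", "partners_last_year", "prev_pregnancy",
              "chlamydia_history", "hpv_history"]

# Categorical covariates (e.g. region, marital status) must be numerically encoded.
X = pd.get_dummies(df[["time_years", "event"] + covariates],
                   columns=["region", "marital_status"], drop_first=True)

cph = CoxPHFitter()
cph.fit(X, duration_col="time_years", event_col="event")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% confidence intervals

# Crude incidence rate: incident events divided by total person-years of follow-up.
incidence_rate = df["event"].sum() / df["time_years"].sum()
print(f"Incidence rate: {incidence_rate:.4f} per person-year")
```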
| Study population
In total, 2687 and 2705 participants were included in the analysis of the HPV-16 and HPV-18 endpoints, respectively (Figure 1). Seroprevalence at enrollment was 31% (828/2687 seropositive women) for HPV-16 and 28% (756/2705 seropositive women) for HPV-18 (Table 1), a difference of about 3% between the two types. This difference is entirely in agreement with the well-known higher prevalence of HPV-16 relative to HPV-18 infections.
At enrollment, 45% of women were 26-35 years old, 44% were 36-45 years old, and 11% were ≥46 years old. Nearly all participants had been previously sexually active at the start of the study, except five who had their first sexual intercourse during the follow-up. 56% had started sexual activity between 18 and 25 years (32% between 15 and 17), 80% had had one sexual partner during the previous year, and 84% had had a previous pregnancy. Moreover, 14% of women were current smokers, 5% were C trachomatis-positive, and 87.8% were classified as pre-menopausal, 6.5% as peri-menopausal, 5.1% as post-menopausal, while the status for the remaining 0.5% was missing.
| Multivariable models
The multivariable Cox proportional hazard model including the serostatus at baseline as a binary variable showed that the risk of newly detected HPV-16 6-month PI and ASC-US+ was statistically significantly lower in seropositive vs seronegative women (hazard ratio [HR] = 0.56 [0.32-0.99; P = 0.04] and 0.28 [0.12-0.67; P = 0.004], respectively; Table 2). Analysis for HPV-16 incident infections and 12-month PI also showed a somewhat lower risk in seropositive than seronegative women, although the differences were not statistically significant (HR = 0.81 [0.56-1.16] and 0.53 [0.24-1.16], respectively). The analyses stratified by baseline serostatus showed that the risk factors of interest (number of sexual partners in the last 12 months, living single, and smoking) were more marked in seronegative than in seropositive women (Table 4).
| DISCUSSION
In this study, HPV-16-seropositive women of 25 years and older had a moderate decrease in the risk of developing a new type-specific HPV infection, PI, and ASC-US+ compared to seronegative women. This result agrees with the hypothesis that naturally acquired HPV antibodies probably provide only partial protection against subsequent infection with the same HPV type. For HPV-18-seropositive women, however, no significant protection was observed. Any naturally acquired protection afforded by either antibody is unlikely to be better than the benefit acquired by vaccination. Another study found that women aged between 30 and 50 who were seropositive for high-risk (HR) HPV at baseline had a higher incidence of new type-specific HPV infection than women who were seronegative. 26 The association between seropositivity and the reduced risk of new infection was weaker in our study of women older than 25 years than that previously demonstrated in younger women aged 15-25 years in PATRICIA and in the Costa Rica Vaccine Trial. 12,15 This low, or even absent, protective effect in >25-year-old women could suggest waning of natural immunity, but it could also reflect reactivation of prior infection. 26 In the present study, we were not able to determine an accurate antibody threshold value for a defined reduction in the rate of infection. In the PATRICIA trial, HPV-16 antibody levels between 200 and 500 EU/mL were associated with a 90% reduction of incident infection, of 6-month PI, and of ASC-US+. 12 For HPV-18, seropositivity was associated with a lower risk of ASC-US+ and CIN1+, but no association was found between naturally acquired antibodies and new infection. 12 The current study also attempted to account for the change in serostatus during the follow-up period; including the serostatus as a time-dependent variable and as a continuous variable in the Cox models is a novel feature of this analysis. In a recent meta-analysis assessing naturally acquired immunity against HPV infection, none of the 14 included studies considered the possible change of serostatus during the follow-up period. 27 Overall, our various models gave consistent results. However, the interpretation of the time-dependent serostatus models can be challenging because of the interaction between the change in antibody titers and the incidence of new HPV infections. Because serology was collected every 12 months and cervical samples every six months, a new but undetected infection could have boosted the antibody titer.
In another analysis of the control cohort of the VIVIANE trial, the risk of detecting CIN after natural HPV infection in women aged >25 years was similar to that observed in women aged 15-25 years from the PATRICIA trial. 24 This observation suggests that there are little to no age-related differences in the detection of natural HPV infections and their associated CIN lesions.
Our analysis of the determinants considered separately for the baseline seronegative and seropositive subjects partially supports the hypothesis suggested by other studies that most of the newly detected HPV infections in seropositive women would be reactivations of prior HPV infections. 19,20 The strengths of this study included the large cohort size of approximately 2700 women and the relatively long follow-up period of seven years, which allowed a thorough evaluation of an unvaccinated cohort. This study also had several limitations. A cervical sample test was performed only every six months, which could have meant that some incident HPV infections were not detected. In addition, it was not possible to determine whether an infection was quiescent, persistent at undetectable levels, or new. Evidence exists that type-specific HPV infection can present after a period of nondetection. 28 Based on this, some infections considered as new could indeed be persistent infections, a scenario that could also bias the assessment of the relationship between natural antibodies and the risk of new infection. Furthermore, the number of CIN1+ and CIN2+ cases was too low to allow for inferential analyses. Since we were unable to define which HPV type caused the abnormal cytology, ASC-US+ lesions could ensue from non-HPV-16/18 types. Further research is needed to better understand the natural history of HPV infection and the link between seropositivity and subsequent protection in women of different age groups.
(Figure 2. Risk ratio of incident, 6-month persistent, and 12-month persistent infection and atypical squamous cells of undetermined significance or greater in HPV-16/HPV-18 type-specific seropositive vs seronegative women. Error bars represent 95% confidence intervals.)
In conclusion, multivariable Cox analyses showed evidence of lower risk of newly detected incident and persistent HPV infections and ASC-US+ in women with naturally acquired antibodies against HPV-16. The results for HPV-18 are not conclusive since only a limited and nonsignificant decrease in risk was observed. These findings are consistent with a partial protective role of naturally acquired HPV antibodies against future infection with the corresponding HPV type. However, no threshold of antibody levels necessary for protection could be defined.
ACKNOWLEDGMENTS
The authors thank all study participants and their families, all clinical study site personnel who contributed to the conduct of this trial, and Dr. N Chakhtoura and Dr. L Myron as investigators. Writing support services were provided by John Bean (Bean Medical Writing), Kristel Vercauteren, and Claire Verbelen (XPE Pharma & Science, Belgium) on behalf of GSK, Wavre, Belgium. The authors would also like to thank Business & Decision Life Sciences platform for editorial assistance and manuscript coordination, on behalf of GSK. Thibaud André coordinated manuscript development and editorial support.
CONFLICT OF INTEREST
D Rosillon and F Struyf are employed by the GSK group of companies and received GSK shares. L Baril was employed by the GSK group of companies at the time of the study and received GSK shares. G Dubin is currently a full-time employee of Takeda Pharmaceuticals, Deerfield, Illinois, and receives salary and stock shares. MR Del Rosario-Raymundo reports payment of honorarium as principal investigator and support for travel to meetings for the study from the GSK group of companies during the conduct of the study; payment for lectures including service on speakers' bureaus from the GSK group of companies. M Martens reports grants from the GSK group of companies, during the conduct of the study. C Bouchard reports grants from the GSK group of companies, during the conduct of the study. She reports grants and honorarium from Merck. KL Fong reports grant from the GSK group of companies via her institution for the conduct of the study. MC Bozonnat is a consultant outsourced from 4Clinics to the GSK group of companies. A Chatterjee received grant funding for clinical trials, and served on the speakers' bureau and advisory boards for the GSK group of companies and Merck. SM Garland has received advisory board fees and grants from CSL and the GSK group of companies, and lectures fees from Merck, the GSK group of companies, and Sanofi Pasteur. In addition, she received funding through her institution to conduct HPV vaccines studies for MSD and the GSK group of companies. She is a member of the Merck Global Advisory Board as well as the Merck Scientific Advisory Committee for HPV. E Lazcano-Ponce received fees to conduct HPV vaccines studies from the GSK group of companies and Merck. SA McNeil has received research grants from the GSK group of companies and Sanofi Pasteur and speaker honoraria from Merck. B Romanowski received research grants, travel support, and speaker honoraria from the GSK group of companies. SR Skinner received funds through her institution from the GSK group of companies to cover expenses involved in the collection of data for this study. The GSK group of companies provided funds to reimburse expenses incurred with travel to conference to present data from other studies and paid honoraria to her institution for work conducted in the context of Advisory Board and educational meetings. CM Wheeler's institution received a contract from the GSK group of companies to act as a clinical trial site for this study, and reimbursements for travel related to publication activities and for HPV vaccine studies. Her institution also received funding from Merck to conduct HPV vaccine trials, and from Roche Molecular Systems equipment and reagents for HPV genotyping studies, outside the submitted work. X Castellsagué received research funding through his institution (ICO) from Merck & Co, SPMSD, the GSK group of companies, and Genticel. He also received honoraria for conferences from Vianex and SPMSD. G Minkina, as an investigator at a study clinical site, received fees from the GSK group of companies through her institution. She also received funding from Merck Sharp & Dohme to participate as principal investigator in efficacy trials. She received travel support to attend scientific meetings, honoraria for speaking engagements and participation in advisory board meetings, and consulting fees from the GSK group of companies and Merck Sharp & Dohme. 
T Stoney received honoraria from the GSK group of companies for study committee membership (Asia Pacific study follow-up committee for Zoster studies), for conference attendance, and travel support. Her institution also received additional funding from a bioCSL grant for a project in which she is an investigator, funded by National Health and Medical Research Council. She also received travel support for participation in study investigator meetings from Novartis Vaccine and Diagnostics, Sanofi Pasteur, Alios BioPharma, and Pfizer. SC Quek received honoraria | 2019-07-06T13:05:07.043Z | 2019-07-05T00:00:00.000 | {
"year": 2019,
"sha1": "c26057f5146b64fa81be7ddc8978dd3389b40e84",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.1879",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6e4eb6471231a3f9788e893ac78f63cb175add96",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
215238410 | pes2o/s2orc | v3-fos-license | Classical Optimizers for Noisy Intermediate-Scale Quantum Devices
We present a collection of optimizers tuned for usage on Noisy Intermediate-Scale Quantum (NISQ) devices. Optimizers have a range of applications in quantum computing, including the Variational Quantum Eigensolver (VQE) and Quantum Approximate Optimization (QAOA) algorithms. They have further uses in calibration, hyperparameter tuning, machine learning, etc. We employ the VQE algorithm as a case study. VQE is a hybrid algorithm, with a classical minimizer step driving the next evaluation on the quantum processor. While most results to date concentrated on tuning the quantum VQE circuit, our study indicates that in the presence of quantum noise the classical minimizer step is a weak link and a careful choice combined with tuning is required for correct results. We explore state-of-the-art gradient-free optimizers capable of handling noisy, black-box cost functions and stress-test them using a quantum circuit simulation environment with noise injection capabilities on individual gates. Our results indicate that specifically tuned optimizers are crucial to obtaining valid science results on NISQ hardware, as well as projecting forward on fault-tolerant circuits.
INTRODUCTION
Hybrid quantum-classical algorithms are promising candidates to exploit the potential advantages of quantum computing over classical computing on current quantum hardware. Target application domains include the computation of physical and chemical properties of atoms and molecules [10], as well as optimization problems [9,34] such as graph MaxCut.
These hybrid algorithms execute a classical optimizer that iteratively queries a quantum algorithm that evaluates the optimization objective. An example is the Variational Quantum Eigensolver (VQE) algorithm [20] applied in chemistry, where the objective function calculates the expectation value of a Hamiltonian H given an input configuration of a simulated physical system. The Hamiltonian describes the energy evolution of the system, thus the global minimum represents the ground level energy. The classical side variationally changes the parametrized input configuration until convergence is reached, thereby finding the eigenvalue and eigenstate of the ground energy of H. Quantum Approximate Optimization Algorithms (QAOA) [9,34] employ a similar approach.
For the foreseeable future, quantum algorithms will have to run on Noisy Intermediate-Scale Quantum (NISQ) devices which are characterized by a small number of noisy, uncorrected qubits. Hybrid methods are considered auspicious on such devices due to: (1) the expectation that their iterative nature makes them robust to noise; and (2) reduced chip coherence time requirements because of the single Hamiltonian evaluation per circuit execution.
However, these considerations relate to the quantum side of the hybrid approach. Rather, as we will show in this paper, the impact of noise on both the classical and quantum parts needs to be taken into account. In particular, the performance and mathematical guarantees, regarding convergence and optimality in the number of iterations, of commonly used classical optimizers rest on premises that are broken by the existence of noise in the objective function. Consequently, they may converge too early, not finding the global minimum, get stuck in a noise-induced local minimum, or even fail to converge at all. For chemistry, the necessity of developing robust classical optimizers for VQE in the presence of hardware noise has already been recognized [20]. However, the first published hardware studies side-stepped optimizers by performing a full phase space exploration [8,18,31] and back-fitting the solution to zero noise. This works for low qubit count and few minimization parameters, but is not tractable at the O(100) qubit concurrency soon expected on NISQ-era devices, nor for the number of parameters needed for realistic problems. To our knowledge, QAOA studies also ignore the effects of the noise on the classical optimizers.
In this study, we want to understand the requirements on classical optimizers for hybrid algorithms running on NISQ hardware and which optimization methods best fulfill them. We use VQE as the testing vehicle, but expect the findings to be readily applicable to QAOA and other hybrid quantum-classical methods which employ similar numerical optimization. The goals and contributions of our empirical study are twofold:
• A practical software suite of classical optimizers, directly usable from Python-based quantum software stacks, together with a tuning guide. We consider factors such as the quality of the initial solution and availability of bounds, and we test problems with increasing number of parameters to understand scalability of the selected methods.
• A study of the optimizers' sensitivity to different types of noise, together with an analysis of the impact on the full VQE algorithm. We consider the domain science perspective: some level of experimental error is expected and acceptable, as long as the result is accurate and the errors can be estimated. We run simulations at different noise levels and scale, for several science problems with different optimization surfaces, finding the breaking points of the minimizers and the algorithm for each.
We have taken a very practical tack and first evaluated the minimizers from SciPy [29]. These include methods such as the quasi-Newton BFGS [22] algorithm, and are the default choice of many practitioners. Most optimization tools in standard Python and MATLAB software are not noise-aware and, as we have found in our evaluations, actually fail in the presence of quantum noise. Some optimizers are more robust due to the smoothing effect of the underlying methods used (e.g. modeling in trust region methods), but that is seldom by design.
Fortunately, applied mathematicians in the optimization community have long been working on this type of problem and have provided high quality, open source software. Based on their recommendation, our final selection contains representative methods of (hybrid) mesh (ImFil [16], NOMAD [17]); local fit (SnobFit [15]); and trust regions (PyBobyqa [5,6]). Python and C++ are far more widely used in quantum computing than MATLAB. Thus, we have rewritten optimizers where necessary from MATLAB into Python, while ensuring, through a suite of unit tests, reproducible deterministic behavior after porting, and provided consistent interfaces and plugins for high level quantum frameworks such as Qiskit [1] and Cirq [12]. These products have been packaged into scikit-quant [28]. The optimization package in scikit-quant also provides tutorial notebooks with tips and hints for hyper-parameter optimization, and an evaluation harness to quickly assess applicability to new problems.
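As a rough sketch of how such a packaged optimizer is driven from Python, the snippet below minimizes a toy noisy objective. The skquant.opt.minimize call signature and the method names are our assumption about the package interface and should be checked against its documentation; the objective itself is a stand-in for a noisy VQE energy evaluation.

```python
import numpy as np
from skquant.opt import minimize  # assumed interface of the packaged optimizers

rng = np.random.default_rng(0)

def objective(x):
    # Stand-in for a noisy energy evaluation returned by a quantum backend.
    return np.cos(x[0]) * np.sin(x[1]) + 0.01 * rng.standard_normal()

x0 = np.array([0.5, 0.5])                 # initial variational parameters
bounds = np.array([[-np.pi, np.pi]] * 2)  # box bounds on the parameters
budget = 100                              # maximum number of objective evaluations

# method is assumed to accept names such as 'imfil', 'snobfit', 'bobyqa', or 'nomad'.
result, history = minimize(objective, x0, bounds, budget, method="imfil")
print(result)   # best parameters and objective value found within the budget
```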
scikit-quant has been evaluated on three VQE problems (ethylene (C2H4) rotation and bond stretching, and Hubbard model simulation), each with different optimization requirements. The results indicate that a suite of minimizers is needed to match specific strengths to specific problems. Achieving high quality solutions is aided by domain science information, if available, such as good initial parameters, knowledge of local minima, or the need to search around inaccessible regions. Such information is problem specific and in practice we observe different performance benefits with different optimizers from its inclusion. Where this information is not available, our study indicates that the best results are obtained by composing local and global optimizers, leveraging their respective strengths, during the VQE algorithm run.
The organization of this paper is as follows. In Section 2, we give a brief background on numerical optimization and our requirements on optimizers. In Section 3 we describe the optimizers available in scikit-quant in more detail. We provide the necessary background on hybrid quantum-classical algorithms in Section 4 and we describe the impact of noise in Section 5. Our numerical experiments are presented in Section 6 and discussed in Section 7. We compare our work with related studies in Section 8 and finally summarize the main conclusions in Section 9.
NUMERICAL OPTIMIZATION
In variational hybrid quantum-classical algorithms, such as VQE, the execution on the quantum processor evaluates the objective function to be optimized classically. In most cases, it is not possible to calculate gradients directly, thus derivative-free optimization methods are required. For a deterministic function f : Ω ⊂ ℝⁿ → ℝ over a domain Ω of interest that has lower and upper bounds on the problem variables, derivative-free algorithms require only evaluations of f but no derivative information. They assume that the derivatives of f are neither symbolically nor numerically available, and that bounds, such as Lipschitz constants, for the derivatives of f are also unavailable.
Optimizers are judged on the quality of the solution and on their speed and scalability. A good solution has a short distance to the true global optimum, high accuracy of the optimal parameters found, or both. A good overview and thorough evaluation of derivative-free algorithms can be found in Rios et al. [27]. The main criteria for matching an optimizer to a problem are the convexity and the smoothness of the optimization surfaces. Convexity has the familiar meaning; smoothness in our context requires that the function is "sufficiently often differentiable". In VQE, the shape of the optimization surface is determined by the ansatz, and although typical surfaces are smooth, noise can change this considerably. Figure 1 shows the evolution of the optimization surface for a single parameter in a simple VQE problem (rotation/torsion of an ethylene molecule; 4 qubits, 2 parameters) for increasing levels of Gaussian gate noise (detailed background on this and other studies is provided in Sections 4 and 5). For low noise, the optimization surface is convex around the global minimum and smooth. For increasing levels of noise, the optimization surface becomes both non-convex and non-smooth. It gets substantially worse for more complex problems: because circuit depth increases, because the number of parameters increases the likelihood of noise-induced local minima, and because entanglement over many qubits means that the effects of gate noise become non-local. This can be seen in Figure 2, which displays the effect of noise on an 8 qubit Hubbard model simulation, with 14 parameters at a moderate level of gate noise of σ = 0.01 rad (cf. the mid-range in the ethylene figure). We are thus interested in optimizers that perform well across the whole range of behaviors: convex and non-convex surfaces, smooth and non-smooth surfaces.
Optimizer Selection Criteria
The criteria for selecting optimizers that we considered are: (1) Ability to find a good solution in the presence of noise, potentially using different methods for different types of surfaces. (2) Scalability with the number of parameters, as this determines the asymptotic behavior on future quantum hardware that allows the simulation of larger problems. (3) Number of samples (queries to the objective function) required and precision needed, which affects scaling and wall-clock time spent on the quantum chip. (4) Implementation performance and ability to parallelize, as these affect scaling and wall-clock time spent on the classical side.
There are two common strategies for optimizing noisy outcomes: optimize for the expected value of the response, or for the worst case [25]. Quantum simulations, being probabilistic in nature, fit the former: many runs ("shots") of a circuit are required to obtain the output distribution, which is then inclusively averaged over local noise sources.
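To make the averaging step concrete, the short sketch below (our own illustration, not code from the paper) estimates an expectation value from a finite number of shots; the spread of the estimate shrinks as the shot count grows, which is why the classical optimizer effectively sees the expected value of the noisy response.

    import numpy as np

    rng = np.random.default_rng(0)

    def estimate_expectation(p_plus, shots):
        # toy two-outcome measurement with eigenvalues +1/-1
        outcomes = rng.choice([+1.0, -1.0], size=shots, p=[p_plus, 1.0 - p_plus])
        return outcomes.mean()

    for shots in (100, 1000, 10000):
        print(shots, estimate_expectation(0.7, shots))   # converges toward 2*0.7 - 1 = 0.4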
Baseline Optimizers
Under the assumption that the objective function is still continuously differentiable, quasi-Newton methods can be used. These approximate the first (and often the second) derivative from the evaluations at different points. Such methods work better if a detailed understanding of the noise is available, allowing selection of good step sizes and proper weighting of evaluations when incorporating them into the approximation of the derivatives. In the case of BFGS, which has been used by VQE developers for algorithm development on quantum simulators (as opposed to real hardware), each new evaluation is instead added to the current derivative estimate with equal weight to all points collected so far combined. This means that BFGS is easily thrown off when function values are noisy. Given that it is still a common first choice, we retain BFGS as a baseline for comparisons for our initial experiments and candidate optimizer selection for scikit-quant. We use the SciPy [29] BFGS implementation and tune it for all input problems. We have also evaluated a range of other methods for which implementations are readily available in Python, such as the Nelder-Mead simplex method [11] (considered by McClean et al. [20] in their initial VQE analysis paper), RBFOpt [7], Cobyla [24], DYCORS [26], and CMA-ES [13,14]. These methods do not make the hard assumptions about data quality that BFGS does, leaving them somewhat more robust to noise. Based on our evaluation, we find Cobyla to outperform the others and thus we use it as a second baseline for subsequent comparisons.
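The contrast is easy to reproduce with a few lines of SciPy on a synthetic noisy objective (a toy example of our own, not one of the VQE circuits): the finite-difference gradients used by BFGS amplify the noise, while COBYLA's linear-approximation steps are less affected.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)

    def noisy_quadratic(x, sigma=0.05):
        return float(np.sum(x**2) + sigma * rng.standard_normal())

    x0 = np.array([1.0, -1.5])
    for method in ("BFGS", "COBYLA"):
        res = minimize(noisy_quadratic, x0, method=method)
        print(method, res.x, res.fun)   # BFGS typically stalls far from the optimum at (0, 0)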
SCIKIT QUANT OPTIMIZERS
The initial selection of optimizers packaged in scikit-quant consists of NOMAD, ImFil, SnobFit, and BOBYQA; each is detailed in the rest of this section. This choice is motivated by the evaluation of Rios et al. [27] combined with open-source availability and ease of porting to Python. Rios et al. [27] indicate the following trends:
• In terms of scalability, SnobFit and NOMAD may have scalability challenges with the number of parameters (tested up to 300). ImFil and BOBYQA are among the fastest optimizers.
• For convex optimization surfaces, BOBYQA and SnobFit perform well for smooth surfaces, while NOMAD and ImFil perform better for non-smooth surfaces.
• For non-convex optimization surfaces, SnobFit and NOMAD are good for smooth surfaces, while ImFil and NOMAD are good for non-smooth surfaces.
In the rest of this section we give a short description of each algorithm together with their tunable knobs that affect their performance and solution quality. As common characteristics we note that all derivative-free optimizers employ sampling strategies and require a minimum number of samples to get started. This allows a common interface to employ parallelization of the quantum step, even if the original codes do not support this directly. Sampling requires that the parameter space is bounded, or that search vectors are provided. Most optimizers can make use of further detailed science domain information, such as the magnitude and shape of uncertainties, local functional descriptions, inaccessible regions, etc. If no such information is provided or available, they will choose reasonable defaults, e.g. assumption of homogeneous, symmetric uncertainties; and cubic or quadratic local functional behavior on a small enough region. Inaccessible regions can simply be communicated by returning NaN from the objective function.
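For example, a thin wrapper along the following lines (a generic sketch of ours, not tied to any particular optimizer in the suite) is enough to mark an inaccessible region:

    import numpy as np

    def guarded_objective(x):
        # in this toy problem the region x[0] + x[1] > 1.5 is taken to be physically inaccessible
        if x[0] + x[1] > 1.5:
            return float('nan')          # signals "do not go here" to the optimizer
        return float(np.sum((x - 0.3)**2))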
NOMAD
NOMAD, or Nonlinear Optimization by Mesh Adaptive Direct Search (MADS) [17] is a C++ implementation of the MADS algorithm [2][3][4]. MADS searches the parameter space by iteratively generating a new sample point from a mesh that is adaptively adjusted based on the progress of the search. If the newly selected sample point does not improve the current best point, the mesh is refined. NOMAD uses two steps (search and poll) alternately until some preset stopping criterion (such as minimum mesh size, maximum number of failed consecutive trials, or maximum number of steps) is met. The search step can return any point on the current mesh, and therefore offers no convergence guarantees. If the search step fails to find an improved solution, the poll step is used to explore the neighborhood of the current best solution. The poll step is central to the convergence analysis of NOMAD, and therefore any hyperparameter optimization or other tuning to make progress should focus on the poll step. Options include: poll direction type (local model, random, uniform angles, etc.), poll size, and number of polling points.
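The following schematic loop (a deliberate simplification of ours, not the NOMAD implementation) conveys the basic poll-and-refine idea: evaluate poll points around the incumbent on the current mesh, accept an improvement if one is found, and otherwise shrink the mesh.

    import numpy as np

    def schematic_poll_search(f, x0, mesh=0.5, mesh_min=1e-3, budget=200):
        x_best = np.asarray(x0, dtype=float)
        f_best, evals = f(x_best), 1
        while mesh > mesh_min and evals < budget:
            improved = False
            directions = np.vstack([np.eye(len(x_best)), -np.eye(len(x_best))])   # poll directions
            for d in directions:
                x_try = x_best + mesh * d
                f_try = f(x_try); evals += 1
                if f_try < f_best:
                    x_best, f_best, improved = x_try, f_try, True
                    break
            if not improved:
                mesh *= 0.5          # refine the mesh when polling fails
        return x_best, f_best

The real algorithm additionally runs a (more global) search step on the mesh and uses richer poll-direction sets, which is where its convergence guarantees come from.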
The use of meshes means that the number of evaluations needed scales at least geometrically with the number of parameters to be optimized. It is therefore important to restrict the search space as much as possible using bounds and, if the science of the problem so indicates, give preference to polling directions of the more important parameters.
In scikit-quant we incorporate the published open-source NOMAD code through a modified Python interface.
ImFil
Implicit Filtering (ImFil [16]) is an algorithm designed for problems with local minima caused by high-frequency, low-amplitude noise and with an underlying large scale structure that is easily optimized. ImFil uses difference gradients during the search and can be considered as an extension of coordinate search. In ImFil, the optimization is controlled by evaluating the objective function at a cluster (or stencil) of points within the given bounds. The minimum of those evaluations then drives the next cluster of points, using first-order interpolation to estimate the derivative, and aided by user-provided exploration directions, if any. Convergence is reached if the "budget" for objective function evaluations is spent, if the smallest cluster size has been reached, or if incremental improvement drops below a preset threshold.
The initial clusters of points are almost completely determined by the problem boundaries, making ImFil relatively insensitive to the initial solution and allowing it to easily escape from local minima. Conversely, this means that if the initial point is known to be of high quality, ImFil must be provided with tight bounds around this point, or it will unnecessarily evaluate points in regions that do not contain the global minimum.
As a practical matter, for the noisy objective functions we studied, we find that the total number of evaluations is driven almost completely by the requested step sizes between successive clusters, rather than by finding convergence explicitly.
For scikit-quant we have rewritten the original ImFil MATLAB implementation in Python.
SnobFit
Stable Noisy Optimization by Branch and FIT (SnobFit) [15] is an optimizer developed specifically for optimization problems with noisy and expensive to compute objective functions. SnobFit iteratively selects a set of new evaluation points such that a balance between global and local search is achieved, and thus the algorithm can escape from local optima. Each call to SnobFit requires the input of a set of evaluation points and their corresponding function values and SnobFit returns a new set of points to be evaluated, which is used as input for the next call of SnobFit. Therefore, in a single optimization, SnobFit is called several times. The initial set of points is provided by the user and should contain as many expertly chosen points as possible (if too few are given, the choice is a uniformly random set of points, and thus providing good bounds becomes important). In addition to these points, the user can also specify the uncertainties associated with each function value. We have not exploited this feature in our test cases, because although we know the actual noise values from the simulation, properly estimating whole-circuit systematic errors from real hardware is an open problem.
As the name implies, SnobFit uses a branching algorithm that recursively subdivides the search space into smaller subregions from which evaluation points are chosen. In order to search locally, SnobFit builds a local quadratic model around the current best point and minimizes it to select one new evaluation point. Other local search points are chosen as approximate minimizers within a trust region defined by safeguarded nearest neighbors. Finally, SnobFit also generates points in unexplored regions of the parameter space and this represents the more global search aspect.
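The call pattern described above amounts to an ask/tell loop around the quantum evaluation. The sketch below is runnable but uses a trivial stand-in proposer in place of the real SnobFit call, purely to show the shape of the driver loop; the actual algorithm mixes local quadratic models with global branching when it proposes the next batch.

    import numpy as np

    rng = np.random.default_rng(2)

    def objective(p):                          # stand-in for a noisy circuit evaluation
        return float(np.sum((p - 0.3)**2) + 0.01 * rng.standard_normal())

    def propose_batch(points, values, bounds, n_new=6):
        # placeholder for one SnobFit call: sample near the current best point
        best = points[int(np.argmin(values))]
        cand = best + 0.1 * rng.standard_normal((n_new, len(best)))
        return np.clip(cand, bounds[:, 0], bounds[:, 1])

    bounds = np.array([[-1.0, 1.0], [-1.0, 1.0]])
    points = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, 2))   # user-provided initial set
    values = [objective(p) for p in points]
    for call in range(20):
        batch  = propose_batch(points, values, bounds)
        points = np.vstack([points, batch])
        values += [objective(p) for p in batch]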
For scikit-quant we have rewritten the original SnobFit MATLAB implementation in Python.
BOBYQA
BOBYQA (Bound Optimization BY Quadratic Approximation) [23] has been designed for bound-constrained black-box optimization problems. BOBYQA employs a trust region method and builds a quadratic approximation in each iteration that is based on a set of automatically chosen and adjusted interpolation points. New sample points are iteratively created by either a "trust region" or an "alternative iterations" step. In both methods, a vector (step) is chosen and added to the current iterate to obtain the new point. In the trust region step, the vector is determined such that it minimizes the quadratic model around the current iterate and lies within the trust region. It is also ensured that the new point (the sum of the vector and the current iterate) lies within the parameter upper and lower bounds. BOBYQA uses the alternative iteration step whenever the norm of the vector is too small, and would therefore reduce the accuracy of the quadratic model. In that case, the vector is chosen such that good linear independence of the interpolation points is obtained. The current best point is updated with the new point if the new function value is better than the current best function value. Note that there are some restrictions on the choice of the initial point due to the requirements for constructing the quadratic model. BOBYQA may thus adjust the initial point automatically if needed.
Although it is not intuitively obvious that BOBYQA would work well on noisy problems, we find that it performs well in practice if the initial parameters are quite close to optimal and the minimum and maximum sizes of the trust region are properly set. This is rather straightforward to do for the specific case of VQE, where a good initial guess can be obtained relatively cheaply from classical simulation. For Hubbard model problems, which have many (shallow) local minima, BOBYQA does not perform nearly as well.
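In practice this means seeding Py-BOBYQA with the classical guess and conservative trust-region radii, roughly as below; the pybobyqa.solve call and its keyword names are quoted from the Py-BOBYQA documentation as we recall it and should be checked against the installed version.

    import numpy as np
    import pybobyqa   # the Py-BOBYQA package [5,6]

    def objective(x):
        return float(np.sum((x - 0.05)**2))    # stand-in for the VQE energy

    x0    = np.array([0.1, 0.1])               # good initial guess, e.g. from classical simulation
    lower = np.array([-0.5, -0.5])
    upper = np.array([ 0.5,  0.5])

    soln = pybobyqa.solve(objective, x0, bounds=(lower, upper),
                          rhobeg=0.1, rhoend=1e-4, maxfun=100,
                          objfun_has_noise=True)
    print(soln.x, soln.f)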
Validation and Tuning
We have validated the implementations for correctness and performance using a suite of unit tests. For ImFil and SnobFit, which have been ported from MATLAB, we have thoroughly tested correctness, using their original tests as well as our own. For NOMAD and PyBobyqa we invoke the original implementations, limiting the need for testing beyond the application programming interface. All tests have been included in the repository. We have chosen defaults for each optimizer that should work best for the type of optimization surfaces and noise behavior observed in the problems considered. Several of these choices are different from the original defaults, and in all cases involved at least an increase of the number of samples per iteration (BOBYQA and NOMAD in particular benefit here) or a tightening of the convergence criteria (important for SnobFit). This trades wall clock performance with science performance. In the case of ImFil, a functional change was needed: without a reduction in the smallest step scales, chemical accuracy could not be achieved. We balanced this cost with a reduction in the allowed number of internal iterations in the interpolation on a stencil.
We consider good default values extremely important: as a practical matter, domain scientists tend to judge optimizers based on trial runs on their problem at hand, rather than first studying their problem's mathematical properties and only then searching for an optimizer to match, with different tuning as needed. That (faulty) approach may well cause them to miss out on the best choice. Good domain-specific defaults ameliorate this practical issue somewhat.
HYBRID QUANTUM-CLASSICAL ALGORITHMS
The hybrid quantum-classical algorithms we consider iteratively alternate between a classical numerical optimizer and a quantum algorithm that evaluates some objective to be minimized. The classical optimizer varies a set of parameters that determine the input state for the quantum processor to prepare. The quantum side then executes an algorithm resulting in measurement and some output distribution of probabilities. This distribution is mapped into an objective function value that the classical optimizer can handle, such as a single floating point number, e.g., one representing the expected energy of a physical system (see Figure 3). In the Variational Quantum Eigensolver approach for solving chemistry and physics problems, the objective function calculates the expectation value of the Hamiltonian H associated with a configuration of the simulated physical system. Without noise, the optimization surface is expected to be smooth and convex around the global minimum. Bounds and constraints to help the optimizer and analysis are often straightforward to obtain from physical laws, e.g. there should be no loss of particles.
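A minimal classical analogue of that objective is shown below (our own illustration; a real VQE run estimates the same quantity from circuit measurements rather than from an explicit state vector).

    import numpy as np

    H = np.array([[ 1.0,  0.2],
                  [ 0.2, -1.0]])                # toy single-qubit Hamiltonian

    def energy(theta):
        psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])   # parametrized trial state
        return float(psi @ H @ psi)                              # expectation value <psi|H|psi>

    print([round(energy(t), 3) for t in np.linspace(0.0, 2 * np.pi, 5)])

The classical optimizer only ever sees the scalar returned by energy(theta); everything between parameter and number is the quantum processor's job.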
In Quantum Approximate Optimization Algorithms, the state is prepared by a p-level circuit specified by 2p variational parameters. Even at the lowest circuit depth (p=1), QAOA has non-trivial provable performance guarantees. Initial QAOA exemplars have been selected from the domain of graph optimization problems such as MaxCut. The optimization surfaces generated by QAOA problems can be arbitrarily complex and bounds and constraints are harder to define as they need not be physical.
Because of these last differences, understanding the impact of noise on the behavior of hybrid algorithms is more straightforward for VQE and we will concentrate our study on its behavior. However, since we do not restrict the study to realistic noise levels only, but push the optimizers to their breaking point, we believe that our findings are directly applicable to the higher complexity in QAOA algorithms as well. For more details, see Section 8.
Role of the Ansatz in VQE
The classical optimizer is not free to choose input states for VQE, but constrained by a parametrized ansatz, which describes the range of valid physical systems and thus determines the optimization surface. A good ansatz provides a balance between a simple representation (and thus simple operators in the quantum circuit), efficient use of available native hardware gates, and sufficient sensitivity of the objective to the input parameters. An effective ansatz can greatly reduce circuit depth, search space, and the number of steps necessary for convergence.
For now, ansatz design is still an art that requires detailed insights from the domain science to uncover symmetries and to decide which simplifications are acceptable. However, our main interest is to push the optimizers. Since a better ansatz will simply allow the domain scientist to work on larger, more complex, problems that equally push the optimizer harder, we will restrict ourselves to the commonly used, and practical, unitary coupled cluster ansatz (UCC ansatz) for all studies. For physical systems, the UCC ansatz can be thought of as describing the movements of individual particles (linear terms) and those of interacting (e.g. through electric charge) pairs of particles (quadratic terms). It is simple to map and, because particles such as electrons are indistinguishable, easy to find symmetries to reduce the number of parameters needed to describe all valid configurations.
Besides the number of parameters, the choice of ansatz also affects the number of qubits used. For example, the UCC ansatz provides for simple physical interpretations, such as '1' meaning that a site or orbital is occupied by an electron, and '0' meaning that it is unoccupied. Add a second qubit for spin up and down, and two qubits can fully describe a site or orbital (it is still completely up to the domain scientist to determine which and thus how many sites are relevant for the problem they are trying to solve, which is the most important driver of the number of qubits needed). However, there is a clear inefficiency here: it is unnecessary to describe the spin of an unoccupied site. But changing to a more compact representation requires changing the ansatz and the operators, which can actually make the problem harder to solve. Published results [8,18,31] comprise only two and four qubit experiments with two parameters. In our studies we have used 4 and 8 qubit problems, with the number of parameters ranging from 2 to 14.
VQE Quantum Processor Step
The quantum circuit consists of two parts: a state preparation and an evolution. The state preparation takes the chip from its computational ground state to the intended initial state as set by the classical optimizer.
The evolution works by computing successive steps in "imaginary time" (e^(−iHτ) with τ = it). This process attenuates the contributions of the eigenvectors of the Hamiltonian proportional to the exponent of their respective eigenvalues. Thus, after a sufficient number of steps, only the component of the smallest eigenvalue is left. The chip readout is then a probability distribution of bit strings that represents the estimated ground energy eigenstate, from which the estimated energy is then calculated classically using the Hamiltonian. The mapping of the measured probability distribution to a single number (the energy) is non-linear because the input is constrained to be physical and sum to 1. It is thus not possible to make any general inference about the uncertainty distribution of the estimated energy from the expected errors in the probability distribution, but only about specific problem instances.
IMPACT OF NOISE
VQE is considered to have some robustness against noise due to its iterative nature and hence is expected to be well suited for upcoming NISQ devices. Nevertheless, the need for studying the dynamics of the full hybrid VQE algorithm has been identified early on [20] as a prerequisite for successfully running it on NISQ hardware.
There are two components to this problem: 1) understanding how well optimizers handle noisy data; and 2) understanding how well the full quantum-classical algorithm handles noise.
Accounting for Noise Sources
There are a range of ways that noise enters the final result: from electronic noise and quantum crosstalk, to decoherence and calibration inaccuracies. How the output of a quantum circuit is affected by noise is an open research problem, with no accurate predictive models available, even when restricted to a specific chip instance. Our main concerns, however, are about the overall magnitude of noise and the effects on the shape of the optimization surface.
In our study, we provide coverage of the problem domain by varying the magnitude of the noise in simulation by a wide range, and by studying different problems with a priori different optimization surfaces. The actual noise impact for a given hardware instance is likely to be captured within our parameter sweep. The upshot is that we study a wide range of noisy profiles across different optimizers to arrive at a map and guidance for actual experiments. The goal is explicitly not to find and describe the single way, if any such exists, of how VQE behaves with a given noise model, nor to find the one optimizer that should be used for all VQE problems. It is, after all, well known in the applied math community that there is no such thing as a "free lunch," meaning that each optimizer has specific strengths, none are best in all instances, and each problem needs to be individually matched to the appropriate optimizer(s).
To account for the impact of noise sources, we consider an empirical approach where we inject noise as Gaussian-distributed over-/under-rotations with an added orthogonal component onto the circuit gates. This ensures several realistic properties: noise increases with circuit depth and complexity, and two-qubit gates have larger contributions than one-qubit gates.
We do not add coherent or correlated noise sources, for the reasons explained below. The measurement result is a probability distribution of bit strings, and any stochastic noise behaves on it in a similar way: it redistributes relative counts with rates proportional to the content and with the same equilibrium in the limit, namely a uniform distribution. Coherent and correlated noise sources can, on the other hand, potentially result in any biased distribution, making their study meaningless, unless taken from the behavior of actual hardware. But that would, of course, limit their relevance to that specific hardware. Further, as detailed below, VQE has more "built-in" robustness against coherent than against stochastic noise. Coherent noise can also be expected to more easily produce nonphysical outcomes (e.g. fewer or more particles in the final than in the input states); those measurements can be filtered out and discarded. Last but not least, orthogonal error mitigation techniques such as Randomized Compiling [33] have been shown to alleviate coherent errors by making them stochastic.
[Figure 4: Impact of noise types. The optimizer can "compensate" in the choice of input for the predictable effects of systematic/coherent noise (left) and thus still find the global minimum. But stochastic noise leads to a "random walk" away from the intended output state (right), resulting in an increasingly diminished likelihood of the objective function returning the global minimum.]
We do not factor in an additional noise contribution from measurement errors: shot noise is expected to be unbiased (i.e. it can be averaged out to zero noise in the limit by taking a large number of measurements). In other words, it affects the overall magnitude of stochastic noise sources, which we already sweep, not what we most care about: the shape changes in the optimization surface.
Interplay with Minimizer
Some general observations can be made about the different impacts of coherent and stochastic errors, and why the distinction matters on hybrid quantum-classical algorithms that involve a classical optimizer, such as VQE.
Quantum computing is very sensitive to noise, because a noisy execution is just as valid as a noise-free one: without error correction codes, there is no distinguishing between valid and erroneous states. Therefore, if a circuit is intended to simulate the evolution of some Hamiltonian H, then a single noisy run can be seen as the evolution of some other Hamiltonian H ′ . As long as the noise level is "small enough, " the eigenstates 4 of H and H ′ will be close.
The algorithm is somewhat robust to coherent errors. By definition, changes around the output state that represents the global minimum are, to first order, zero for small linear changes in the input state. With a systematic difference between H′ and H, the global minimum is still found by the optimizer compensating accordingly in the input state, see Figure 4 (left). Thus, even as the calculated minimum energy may still be very close, the optimal parameters found are likely to be systematically off. There is a further twist here for VQE: the ansatz restricts the input states that can be chosen, thus VQE will be more quickly affected by coherent errors than hybrid algorithms in general.
The algorithm has challenges with stochastic noise. The picture changes significantly with stochastic noise: each execution of the circuit is in effect a different H′. Once close to the global minimum, the minimizer will not be able to distinguish the outputs of runs with different inputs, as the changes get washed out in the noise (as shown earlier in Figure 2). With sufficient symmetry in the optimization profile or a functional description based on the domain science, the optimizer can still find the correct optimal parameters by searching for a robust global minimum or doing a local fit. However, any execution at the optimal parameters will calculate an output distribution that is some random walk away from the intended state, as the errors (in particular those on the control qubit of CNOTs) do not commute with the circuit as a whole, see Figure 4 (right). When calculating the energy objective from any of these noisy outputs that are close to, but not at, the global minimum, the results will by definition be higher than the ground state energy (unless the noise is so large that the output state no longer represents the initial physical system, in which case all bets are off). With increasing noise, the likelihood of the true global minimum energy being returned by the objective function goes to zero, as shown in Figure 5.
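The variational origin of this bias is easy to check numerically (a toy linear-algebra example of ours, not a circuit simulation): perturbing the exact ground state by small random unitary "kicks" can only raise the average computed energy, and the lift grows with the noise strength.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)

    H = np.diag([-1.0, 0.3, 0.8, 1.5])          # toy Hamiltonian; exact ground energy is -1.0
    ground = np.eye(4)[:, 0]                    # its ground state

    for sigma in (0.01, 0.05, 0.2):
        lifts = []
        for _ in range(1000):
            G = sigma * rng.standard_normal((4, 4))
            psi = expm(G - G.T) @ ground        # random small orthogonal rotation of the state
            lifts.append(psi @ H @ psi - (-1.0))
        print(sigma, np.mean(lifts))            # mean lift is positive and grows with sigma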
RESULTS
As study cases, we used the C-C axis rotation and the bond stretching and breaking of the ethylene (C2H4) molecule (see Figure 6), representing two different chemical transformation processes. In the rotation and bonding processes, the character of the wave function changes drastically. For example, in the C-C axis rotation Π − Π bonds are broken/formed.
We also used a Hubbard simulation of 4 sites, occupied with either 4 or 2 electrons (see Figure 7). In the Hubbard simulations, we use a hopping term of 1.0, a Coulomb term of 2.0, and in the 4 electron case add a chemical potential of 0.25. The electrons have spins in all cases. In all cases, OpenFermion [21] is used to generate the circuits.
With a Unitary Coupled Cluster ansatz (see Section 4), the minimal representation of the rotation problem consists of 4 qubits (representing 4 orbitals) and 2 terms in the wave function expansion that need to be optimized. Similarly, the bond breaking process requires 8 qubits and uses a wave function expansion with 14 parameters; the 4-site Hubbard model requires 8 qubits and 9 parameters for a 2 electron occupancy, and 8 qubits with 14 parameters when simulating 4 electrons.
Experimental Setup
Noise Injection: We extended the ProjectQ [30] quantum simulation infrastructure with noise injection capabilities. For each gate in the circuit (RX(θ), RY(θ), H, CNOT; we do not add noise to RZ(θ) as these are purely mathematical, thus noise-free), we add an operator in the form of rotations whose angles are independently sampled from a distribution: systematic over/under rotation (along the same axis) and noise drawn from a Gaussian probability distribution (main component along the same axis, small orthogonal component). The noise operator for each gate is sampled independently of the others. For each scenario we perform sweeps with increasing noise strength until it breaks the minimizers. In the rest of this paper, numerical values for noise magnitude refer to the standard deviation (σ) of the Gaussian noise probability distribution.
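For concreteness, per-gate noise of this kind amounts to replacing each ideal rotation by a sampled one, along the lines of the sketch below (our paraphrase of the setup, not the actual ProjectQ extension; the relative size of the orthogonal component is our choice for illustration).

    import numpy as np

    rng = np.random.default_rng(4)

    def rx(theta):
        return np.array([[np.cos(theta/2), -1j*np.sin(theta/2)],
                         [-1j*np.sin(theta/2), np.cos(theta/2)]])

    def ry(theta):
        return np.array([[np.cos(theta/2), -np.sin(theta/2)],
                         [np.sin(theta/2),  np.cos(theta/2)]])

    def noisy_rx(theta, sigma):
        # main Gaussian error along the same axis plus a small orthogonal component
        return ry(0.1 * sigma * rng.standard_normal()) @ rx(theta + sigma * rng.standard_normal())

    print(np.round(np.abs(rx(0.7) - noisy_rx(0.7, sigma=0.01)), 4))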
Methodology: In each study, the minimizer is given an appropriate budget (maximum number of invocations of the objective function) and convergence criteria are adjusted in favor of using up the budget. The minimizers are run until any convergence criteria are met or the budget is used up. We repeat the full algorithm several times and report the average and overall minimum across all runs, as well as the average result when running the simulation at the optimal parameters found. The results are compared to the results of classical ab-initio calculations.
Optimizer Baseline: The optimizers included in scikit-quant have been described in Section 3. Each optimizer has been individually tuned with good settings for the type of problems generated by our VQE test circuits, see Section 3.5. As baseline comparison, we choose BFGS and Cobyla, both from SciPy [29], because they are well known and widely used, as explained in Section 2.2.
Hardware: The simulations were small enough, memory-wise, to run on a standard server. We note that for this study simulating the quantum circuit constitutes the main bottleneck; optimizers can run well and handle a large number of parameters when using just a single server.
Optimization Solution Quality
One of the effects of stochastic noise is to lift the results returned from the objective function as explained in Section 5 and shown in Figures 1 and 5. There are two ways to evaluate the optimizers: 1) by the minimum energy they actually find relative to what was possible given the response limitations of the objective function; or 2) by the quality of the optimal parameters found, evaluated by calculating the expected energy from a noise-free simulation run at those parameters. Which quality measure is most relevant will depend on the application and science goals at hand, so we provide examples of both. For example, in the case of chemistry studies, quantum subspace expansion [19] requires accurate parameters.
Distance to minimum energy. Figure 8 shows the average calculated energy of the full VQE algorithm for the ethylene rotation (left) and bond breaking simulation (right), for 100 runs at each noise level for the former and 10 each for the latter (the larger 8-qubit circuits took about two orders of magnitude more time to run). The straight, dashed, black lines show the chemical accuracy (0.00159 hartrees): solutions closer to the exact value than this cut-off (i.e. results below this line) are scientifically useful. The dashed yellow lines show the lowest value the objective function returned across all runs, i.e. the lowest value any of the minimizers could theoretically have found. Where this line is above the chemical accuracy, the optimizer is not the weak link of the algorithm; the quantum processor is the limiting component. The larger, deeper, 8-qubit circuit clearly suffers more from noise: even at moderate levels, a chip with such gate noise would be the weak link in the full algorithm.
Considering the minimizers, BFGS cannot find the global minimum even with small levels of noise (lowest level shown is 10^−4), because it treats any gradients seen as real, including fakes due to noise, and gets stuck. It works fine, however, on a noise-free run (not plotted). The other baseline, Cobyla, performs quite well at low levels of noise, but clearly underperforms as noise increases. The optimizers designed to handle noise well outperform across the full range, with some stratification only happening at the highest noise levels and ImFil yielding the overall best results. In the low noise regime, however, where all optimizers perform similarly, other considerations, such as the total number of iterations, come into play to determine which is "best." Cobyla would then most likely be preferred (see Section 6.4 for a detailed discussion).
Parameter quality. Figure 9 (left) shows the results for the full VQE algorithm Hubbard model simulations, with the energy recalculated at the optimal parameters using a noise-free run. With the Hubbard model, the region of the optimization surface around the global minimum is rather shallow (see also Figure 2), which clearly stresses the optimizers a lot more. The behavior of BFGS and Cobyla mimics the results from the ethylene studies, but this time both NOMAD and especially SnobFit also underperform or even fail. A detailed analysis shows that this weakness is exposed by bounds that are too large for either optimizer to handle: reducing the bounds greatly improves their performance (whereas it does not for BFGS and Cobyla).
Leveraging Domain Science Constraints and Optimizer Knowledge
From the discussion above, it is already apparent that different methods perform best for different problems as optimization surfaces vary. Furthermore, the quality of the solution may be improved by exploiting a combination of domain science and optimizer knowledge. For our VQE examples, the most obvious and realistically actionable parameters are: 1) quality of the initial solution; and 2) good parameter bounds. Impact of initial solution quality. VQE for chemical problems has the advantage that a good initial solution can often be obtained from approximate classical calculations. To understand the impact of initial solution quality we consider a comparison of ImFil and PyBobyqa for the ethylene rotation simulation.
In Figure 11 we plot the evaluation points chosen by each optimizer: using a good initial solution at (0.1, 0.1) and a bad one at (0.3, −0.3). The global optimum is at (0.00012, 0.04). Whether it receives a good (A) or bad (B) initial solution, ImFil will use the given bounds to determine its first stencil, doing a mostly global search. Although the initial solution drives the first few iterations, it quickly moves away from the bad initial point, to converge at the optimum. PyBobyqa starts by considering only points within its trust region around the initial point. If the initial point is close enough to make the global optimum fall within that region, it will find it quickly (C). However, if the initial point is near a pronounced local minimum, at (0.5, −0.5) in this case, it will get stuck (D), never finding the global minimum.
Overall, this analysis indicates that if good initial solutions are available at low computational overhead, they can improve both the quality of the solution and the speed with which it is reached.
Impact of bounds. Some optimization methods, such as SnobFit, benefit greatly from having the search space (and thus the needed number of evaluations, alleviating scaling issues) reduced by tight bounds on the optimization variables. When possible, such bounds should be provided from the domain science. When bounds derived from first principles are unavailable, an automatic way of finding tighter bounds can be had by running a composition of optimizers. To illustrate this principle we show the effect of optimizer composition by using ImFil to derive tight bounds for SnobFit.
ImFil uses progressively smaller stencils in its search for the global minimum (see Section 3.2). Once close enough, the combination of high noise levels and a shallow optimization surface means that no further progress can be made on the stencil, which ImFil then labels as "failed." The last good stencil provides the necessary bounds for SnobFit to proceed and find a robust minimum. The results of this approach are shown in Figure 9 (right) for Hubbard simulations with occupancies of 2 and 4 electrons. In all cases, ImFil already outperforms the other optimizers, but SnobFit is still able to improve from the point where ImFil fails. Crucially, ImFil fails much earlier when noise levels are high (see Section 6.4), allowing the combined run of ImFil+SnobFit to stay within budget.
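Operationally, the hand-off only requires keeping the evaluation history of the first stage and turning its final neighborhood into a box for the second stage. The sketch below is a simplified stand-in (coarse random sampling followed by a bound derivation) rather than the exact ImFil-to-SnobFit procedure, which reads the box off ImFil's last good stencil.

    import numpy as np

    rng = np.random.default_rng(5)

    def objective(x):
        return float(np.sum((x - 0.3)**2) + 0.02 * rng.standard_normal())

    # Stage 1: coarse sampling over the wide search box (stand-in for ImFil's shrinking stencils).
    wide    = np.array([[-2.0, 2.0], [-2.0, 2.0]])
    samples = rng.uniform(wide[:, 0], wide[:, 1], size=(200, 2))
    values  = np.array([objective(s) for s in samples])
    best    = samples[np.argsort(values)[:10]]              # the ten best points visited

    # Stage 2 would start from the tight box spanned by those points (handed to SnobFit in the study).
    tight = np.column_stack([best.min(axis=0), best.max(axis=0)])
    print("tight bounds per parameter:", np.round(tight, 3))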
Performance Considerations
Besides finding a good solution, optimizer quality is also quantified by its total execution time. First, we note that for hybrid algorithms the wall time is completely dominated by the quantum chip for current devices (the true ratio depends on the quantum hardware chosen and the server CPU running the classical optimizer; we estimate the time spent in the classical step to be about 1% of the total, and since several of the optimizers are in pure Python, their wall clock performance could be greatly improved with a rewrite in C++ if necessary). When considering the optimizer in isolation the number of objective function evaluations is thus a good proxy for wall clock performance. Most optimizers provide control over the number of evaluations per iteration, thus determining single iteration overhead. We find in practice that the defaults work best: a certain minimum number of evaluations is always necessary to fill out a stencil, local model, or map a trust region. The incremental improvement from adding more points to the current iteration is, however, less than the improvement obtained from spending that budget on an extra iteration.
Convergence criteria provide control over the total number of iterations. Most optimizers define convergence as improvement between consecutive steps falling below a threshold, or failing altogether a given number of times. The lack of local improvement need not stop the search, e.g. for NOMAD and SnobFit it can be chosen to initiate more global searches, and subsequently use up the whole budget. Whether those global searches are useful depends on the quality of the initial solution and on the presence of local minima.
The setup of the science problem at hand matters greatly as well: tighter bounds and a higher quality initial solution reduce the number of iterations needed, as was already seen in Figure 11. An efficient ansatz with fewer parameters, for example through exploitation of symmetries, and an optimization surface with steep gradients near the global minimum, can also have a big impact.
Finally, there are differences intrinsic to the optimization methods. Figure 10 shows the number of objective function evaluations for increasing levels of noise, for both the ethylene rotation simulation (left) and the Hubbard model with 4 electrons (right). There is little sensitivity to noise in the much simpler rotation simulation, except for BFGS which falls apart at high noise levels. A clearer picture emerges in the Hubbard simulation: convergence criteria that take into account the observed level of noise in their definition of "no improvement" work best. E.g. PyBobyqa, which uses a fixed threshold, fails to converge, because noise causes sufficient differences between iterations to remain above threshold, so it continues, using up the full budget. The other optimizers, which either track overall improvement or improvement within an iteration given the noise, stop much earlier as noise increases. This is especially beneficial when conserving budget is important to allow switching of optimizers, e.g. from ImFil to SnobFit as shown in the previous section, while remaining within the budget overall.
DISCUSSION
Much work is being dedicated to improving the VQE quantum circuits (depth, CNOT count, ansatz, etc.) and to demonstrating science results on NISQ hardware. The need for noise-aware minimizers has been previously acknowledged, but the magnitude of the problem may have been understated. In fact, our study indicates that using a classical optimizer that is not noise-aware would make it the weakest link in the VQE chain: use of specialized noise-robust optimizers is essential on NISQ hardware.
Our evaluations of the noise-aware optimizers we collected (and rewrote in some cases) into scikit-quant indicate that:
• When solving noise-free optimization problems, SciPy optimizers such as BFGS or Cobyla are fastest by far. They do fail in the presence of even small noise, to the point of becoming unusable.
• When decent parameter bounds are available, ImFil is preferable, followed by NOMAD. When tight bounds are available, SnobFit should be considered. A composition of optimizers works best for final solution quality, e.g. running ImFil first to derive tight bounds for SnobFit.
• When high quality initial parameters are available, trust region methods such as PyBobyqa are fastest and preferable, followed by NOMAD and to a lesser extent SnobFit. ImFil is not sensitive to the value of the initial solution.
• Taking performance data into account does not change the above recommendations. We do note that some optimizers are adaptive and properly reduce the number of evaluations in the presence of noise, e.g. ImFil and NOMAD.
• When examining control over the number of iterations and search strategy (balancing solution quality, execution time, and premature convergence), ImFil provides direct control over scales and searches. For the others, only limited control is possible by tweaking the convergence criteria, (attenuated) step sizes, points in the local model, or overall budget.
Given our collection of optimizers, we wanted to know which method best handles the combination of optimization surfaces generated by the science problems and noise caused by the quantum hardware. Since the ansatz in VQE directly drives the former, and influences the latter (e.g. through circuit depth), this provides important feedback for practical ansatz design. There are strong convergence requirements on the minimizer in terms of distance to the global minimum [20], but also constraints on the number of evaluations possible before convergence as e.g. calibrations may drift over the duration of the experiment. To make progress, the optimizer may need to find gradients on a surface with many local minima due to the noise, and do so with the least number of iterations possible. Our results support the following conjectures:
• There is no free lunch: a suite of minimizers is needed to match specific strengths to specific problems, making use of available domain science information such as high quality initial parameters, knowledge of local minima, or the need to search around inaccessible regions.
• Circuit level noise redistributes counts in the output bit string probability distributions, from which the objective is calculated. This redistribution affects the latter in a non-linear way and thus does not simply average out. With large noise, it may thus be impossible to retrieve the actual global minimum value, but by searching for a robust minimum, the correct optimal parameters may still be found.
• For complex surfaces with local minima close to the global minima, noise can prevent the optimizer from distinguishing local from global. An understanding of the science is then needed to provide more constraints, e.g. in subdividing the problem and studying the minimum found in each with higher statistics.
• Most of the methods can scale up to hundreds of parameters. On NISQ hardware, with the minimizers provided, we expect the performance of hybrid approaches to be limited by the quantum part of the algorithms. The optimizers can easily execute on single-node server systems; no distributed-memory parallelization is required yet.
Overall, this study indicates that the success of VQE on NISQ devices is contingent on the availability of classical optimizers that handle noisy outputs well at the scale of the "necessary" qubit concurrency. As of yet, this is a largely open research area, where our study details some of the challenges to be expected. Our software optimizers toolkit is directly useful to VQE Quantum Information Science practitioners, as well as a good starting point for mathematicians in search of better optimization methods tailored to VQE and other hybrid quantum-classical algorithms.
RELATED WORK
Hybrid quantum-classical algorithms such as VQE and QAOA employ optimizers in the classical part of the computation. For VQE, an initial discussion about optimization challenges in the presence of noise is provided by McClean et al. [20]. They study a unitary coupled cluster wavefunction for H2, encoded into 4 qubits and with optimization over a single parameter. In the experiments, simulated measurement estimator noise is added to the objective function at a specified variance ϵ². They compare Nelder-Mead with TOMLAB/GLCLUSTER, TOMLAB/LGO, and TOMLAB/MULTIMIN. The choice of TOMLAB is motivated by the optimization study by Rios et al. [27], which reports a good combination of scalability and quality of solution. Even for this single parameter problem, these optimizers face challenges in the presence of noisy output. Current QAOA [35] studies still use BFGS and Nelder-Mead, as they still concentrate mostly on the quantum algorithm part of the problem. While the VQE result (system energy) is subject to physical or chemical laws which constrain its values, there is no such equivalent for most QAOA approaches. Thus, it is our expectation that they will need to be supplemented with optimizers robust in the presence of noise.
An orthogonal approach in the realm of hybrid-algorithm design for short-depth circuits is the incorporation of error mitigation techniques. The proposed zero-noise extrapolation techniques [18,31] seem to impose no constraints on optimizers and simply run the full VQE algorithm in the first step. An additional step calibrates the impact of system noise, followed by an offline procedure to extrapolate results to the ideal regime of zero noise. While the IBM studies [18,31] insert noise at the pulse level, Dumitrescu et al. [8] insert noise using additional CNOT gates and describe a zero-noise extrapolation procedure. Current results are for small circuits with few parameters (two) involved in the optimization. It remains to be seen whether these techniques apply to higher dimensional problems on complex optimization surfaces and whether they relax the requirements on robust optimizers.
Another area of interest is the work in the numerical optimization realm. Rios et al. [27] provide a comprehensive evaluation of derivative-free numerical optimizers along multiple dimensions including scalability and quality of solution, for convex and nonconvex, smooth and non-smooth surfaces. Overall, they recommend the commercial TOMLAB [32] implementations of GLCLUSTER, LGO and MULTIMIN. Each is best for a given combination of surface convexity and smoothness. Also note that each of the algorithms included in scikit-quant is close to one of the TOMLAB implementations for some type of surface.
CONCLUSION
Successful application of hybrid quantum-classical algorithms, with the classical step involving an optimizer, on current hardware requires the classical optimizer to be noise-aware. We have collected a suite of optimizers in scikit-quant that we have found to work particularly well, easily outperforming the optimizers available through the widely used standard SciPy software.
We have focused on VQE, but we expect the results to be generally applicable: by providing a suite of optimizers with consistent programming interfaces, it is possible to easily apply combinations of optimizers, playing into their respective strengths. Our studies indicate that with these optimizers, the classical step is no longer the weakest link on NISQ-era hardware. | 2020-04-08T01:00:41.899Z | 2020-04-06T00:00:00.000 | {
"year": 2020,
"sha1": "2b8e5a9567bb75b629d67c28ee4759e71097bfef",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2004.03004",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2b8e5a9567bb75b629d67c28ee4759e71097bfef",
"s2fieldsofstudy": [
"Computer Science",
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
118478137 | pes2o/s2orc | v3-fos-license | Hairy AdS Solitons
We construct exact hairy AdS soliton solutions in Einstein-dilaton gravity theory. We examine their thermodynamic properties and discuss the role of these solutions for the existence of first order phase transitions for hairy black holes. The negative energy density associated with hairy AdS solitons can be interpreted as the Casimir energy that is generated in the dual field theory when the fermions are antiperiodic on the compact coordinate.
Introduction
We construct analytic neutral hairy soliton solutions in Anti de Sitter (AdS) spacetime and discuss their properties. This analysis is important in the context of AdS/CFT duality [1] because bulk solutions correspond to 'phases' of the dual field theory [2].
There is by now a huge literature on (locally) asymptotically AdS solutions in both phenomenological models and consistent embeddings in supergravity. We will consider theories of gravity coupled to a scalar field with potential V(φ). AdS spacetime is not globally hyperbolic, which means that the evolution is well defined only once boundary conditions are imposed. In particular, since for the same self-interaction there exist many boundary conditions for the scalar field (that may or may not break the conformal symmetry on the boundary), one can 'design' a specific field theory [3] with a given effective potential [3][4][5].
Different foliations of AdS spacetime lead to different definitions of time and so to distinct Hamiltonians of the dual field theory. Since the classical (super)gravity background, with possible α ′ corrections, is equivalent to the full quantum gauge theory on the corresponding slice, one expects physically inequivalent dual theories for different foliations. Indeed, when the horizon topology of the black hole is Ricci flat and there are no compact directions, there are no first order phase transitions similar to the Hawking-Page [6] phase transitions that exist for the spherically symmetric black holes. However, when some of the spatial directions are compactified on a circle asymptotically, one expects the existence of a negative Casimir energy of the non-supersymmetric field theory that 'lives' on the corresponding topology. Horowitz and Myers have shown in [7] that, indeed, there exists a (bulk) gravity solution dubbed 'AdS soliton' with a lower energy than AdS itself. This solution was obtained by a double analytic continuation (in time and one of the compactified angular directions) of the planar black hole. This fits very nicely with the proposal of Witten [2] that a non-supersymmetric Yang-Mills gauge theory can be described within AdS/CFT duality by compactifying one direction and imposing anti-periodic boundary conditions for the fermions around the circle.
Hairy neutral AdS solitons were previously analysed (see, e.g. [8][9][10][11][12][13][14]), though most of these studies use numerical methods. Hence, it would be interesting to find examples of analytic hairy AdS solitons and investigate their generic properties. In recent years, analytic regular neutral hairy black holes in AdS were constructed, e.g. [15][16][17][18][19][20][21], and so one expects that constructing analytic soliton solutions should also be possible. We use some particular exact planar hairy black hole solutions in four and five dimensions of [15,16] and obtain the corresponding solitons by using a double analytic continuation as in [7]. The hairy AdS solitons are the ground state candidates of the theory [22].
Since the AdS soliton is the solution with the minimum energy within these boundary conditions [23,24], it is natural to investigate the existence of phase transitions with respect to this thermal background. In the nice work [25], it was shown that there exist first order phase transitions between planar black holes and the AdS soliton. We construct the hairy AdS soliton and compute its mass by using the counterterm method of Balasubramanian and Kraus [26] supplemented with extra counterterms for the scalar field as was proposed in [27]. We then investigate the existence of first order phase transitions with respect to the hairy AdS soliton and discuss the effect of 'hair' on the thermodynamical behaviour.
Hairy AdS soliton
In this section we construct exact hairy AdS soliton solutions in four and five dimensions and compute their energy. In five dimensions [15,16], we obtain a new hairy black hole solution, which corresponds to a parameter ν that at first sight makes the moduli potential divergent. However, by taking the right limit, we show that the theory is in fact well defined and the solution is regular.
AdS soliton
We start with a short review of [25], though, to connect this analysis with the rest of the paper, the computations are done by using the counterterm method of Balasubramanian and Kraus [26].
We consider the usual AdS gravity action supplemented with the gravitational counterterm proposed in [26] where Λ = −3/l 2 is the cosmological constant (l is the radius of AdS), 16πG N = 1 with G N the Newton gravitational constant, the second term is the Gibbons-Hawking boundary term, and the last term is the gravitational counterterm. Here, h is the determinant of the induced boundary metric and K is the trace of the extrinsic curvature. The planar black hole solution is where µ b is the mass parameter and we consider the compactified coordinates 0 ≤ The normalization is such that the time coordinate and the coordinates x 1 and x 2 have the same dimension and so the analytic continuation for obtaining the AdS soliton produces the same boundary geometry. The role of the counterterm is to cancel the infrared divergence of the action so that the final result is finite: The horizon radius is denoted by r b and β b is the periodicity of the Euclidean time that is related to the temperature of the black hole by: Using the usual thermodynamic relations and free energy F = I E b /β b , we obtain the energy and entropy of the planar black hole: The AdS soliton solution was obtained in [7] by using a double analytic continuation t → iθ, x 1 → iτ of the planar black hole metric (2). To distinguish from the black hole solution, we denote by µ s the mass parameter of the AdS soliton and, in the Euclidean section (τ → iτ E ), the periodicity is 0 ≤ τ E ≤ β s . To obtain a regular Lorentzian solution, the coordinate r is restricted to and to avoid the conical singularity in the plane (r, θ), we impose the following periodicity for θ: The finite on-shell Euclidean action and mass of the AdS soliton can be obtained in a similar way (but we do not present the details here): and the mass can be obtained by using the thermodynamical relations with the free energy F = I E s /β s = M (or from the quasilocal stress tensor) and the result is The mass of the AdS soliton corresponds to a Casimir energy associated to the compact directions of the dual boundary theory, and so it is negative. With this information it is straightforward to check the existence of first order phase transitions. To compare the Euclidean solutions, one should impose the same periodicity conditions, which become in the boundary (r → ∞), β b = β s and L s = L b . Let us know compare the actions (free energies): The change of sign is an indication of a first order phase transition between the planar black hole and the AdS soliton. It was shown in [25] that the small hot black holes (with respect to r s ) are unstable and decay to small hot solitons, but the large cold black holes are stable. Note that the phase transition is controlled by the dimensionless parameter z = T L s .
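For orientation, in one common normalization (our choice here; the paper's own equations, not reproduced in this extract, fix the normalization through the compactified coordinates) the planar AdS4 black hole reads ds² = −f(r) dt² + dr²/f(r) + (r²/l²)(dx1² + dx2²) with f(r) = r²/l² − µ_b/r, and its temperature is T = f′(r_b)/(4π) = 3 r_b/(4π l²). The double analytic continuation t → iθ, x1 → iτ then gives the Horowitz–Myers soliton ds² = f(r) dθ² + dr²/f(r) + (r²/l²)(−dτ² + dx2²), defined for r ≥ r_s (the largest root of f), with the conical singularity at r = r_s removed by the periodicity ∆θ = 4π/f′(r_s) = 4π l²/(3 r_s).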
Hairy AdS soliton in 4-dimensions
We consider the exact regular hairy black hole solutions with a planar horizon [15,16,28]. The action is and we are interested in the following moduli potential. (We focus on the concrete case ν = 3, though hairy AdS solitons for other values of ν probably also exist; the analysis is technically more involved and we do not investigate them in the present work.) In this case, the scalar field potential becomes The potential has two parts that are controlled by the parameters Λ and α. Asymptotically, where the scalar field vanishes, only the parameter Λ survives and it is related to the AdS radius by Λ = −3 l⁻².
Using the following metric ansatz the equations of motion can be integrated for the conformal factor [15,16,28,33,34] With this choice of the conformal factor, it is straightforward to obtain the expressions for the scalar field and metric function where η is the only integration constant. The parameter α is positive for x < 1 and negative otherwise. We shall focus below on the case x < 1. The conformal boundary is at x = 1, where the metric becomes and we use the following notation for the conformal factor: The geometry where the dual field theory 'lives' has the metric The regularized Euclidean action for these black holes was obtained in [27] (see, also, [35]) (in what follows we use the same notations as in the previous section for β b and L b ): where the area of the horizon and black hole temperature are The mass of the hairy black hole is [27,36] as can be also checked by using the usual thermodynamical relations. Using this expression of the mass, one can also easily check the first law of thermodynamics.
Let us now construct the hairy AdS soliton. By using again a double analytical continuation x 1 → iτ and t → iθ in (16), the metric becomes Similarly with the hairy black hole case, the conformal factor (17) is but now we denote the integration constant with λ to distinguish it from the integration constant η of the black hole. To get rid of the conical singularity in the plane (x, θ), we have to impose the periodicity: where x s is the minimum value of x, namely the biggest root of f (x s ) = 0. After imposing the right periodicity on θ and restricting the coordinate x so that the metric is Lorentzian, we obtain a well-defined regular solution.
We use the method of [27] to compute the regularized Euclidean action and the result is from which the mass can be immediately read off: As a check, we have also obtained the quasilocal stress tensor for this case and then computed the mass, but we do not present the details here.
Hairy AdS soliton in 5-dimensions
Let us now construct an exact hairy AdS soliton solution in five dimensions. We consider the solutions in [16], but we investigate the case ν = 5. In this case, at first sight the potential of [16] is not well defined. However, by taking the limit carefully, we obtain that the theory (potential) and solution are regular. The metric ansatz is and, for ν = 5, we obtain and The black hole temperature is where f(x_h) = 0. We shall consider below the case α < 0. The black hole entropy can also be easily computed and we obtain To regularize the Euclidean action we choose the following counterterm for the scalar field: The finite action is (37) and the mass of the hairy black hole is We again construct the hairy AdS soliton by using a double analytical continuation x_1 → iτ and t → iθ: The conformal factor for the hairy soliton is and, to get rid of the conical singularity in the plane (x, θ), we have to impose the following periodicity of the angular coordinate: We again consider α < 0, to be consistent with the black hole case. To complete the analysis, we compute the Euclidean action and the mass of the hairy AdS soliton.
Implications for phase transitions
Within AdS/CFT duality, the black holes are interpreted as thermal states in the dual field theory.
We are going to show that there exist first order phase transitions between the planar hairy black hole and the hairy AdS soliton. With the results from the previous sections, we are ready to investigate the existence of phase transitions (the case k = 1, when the horizon topology is spherical, was studied in [37]). Let us focus on D = 4. Before comparing the actions, we would like to point out that from the definitions of x_s and x_h we obtain that they are equal, x_s = x_h. At first sight, this may seem a bit strange because in general it is expected that they depend on the mass parameters λ and η of the soliton and black hole. However, in these unusual coordinates, x_s and x_h are defined by (19), but the true area of the horizon and the 'center' of the soliton are determined by the conformal factor in front of the metric. This conformal factor depends on the mass parameter and we define: As before (12), we have to compare the free energies of solutions in the same theory and so we have to impose the same periodicity conditions at the boundary, β_b = β_s and L_s = L_b. The hairy AdS soliton has a negative energy (the AdS space in planar coordinates has zero mass) and it is the ground state of the theory. Hence, the energy of the hairy black hole should be computed with respect to the ground state and we obtain with µ_b and µ_s defined in (25) and (30). The same periodicity of the Euclidean time implies the same temperature and we consider the hairy soliton solution as the thermal background: Using the expressions of the black hole temperature T and periodicity L_s, we can rewrite the difference of the free energies as Written in terms of the temperature, there is a drastic change compared with the no-hair case because the conformal factor appears explicitly. Clearly, the sign of this expression is controlled by the ratio r_b/r_s. Interestingly enough, despite the appearance of the conformal factor, the critical point where ∆F = 0 is again at the temperature T_c = 1/L_s (that is because when ∆F = 0, µ_b = µ_s and so η = λ). This is what one expects for a conformal field theory, because the phase transition should depend on the ratio of the scales. Writing the area of the black hole in terms of β_b and β_s, we find the relation (48), where the coefficient L(α, l) is defined in (49).
However, since x_h satisfies f(x_h) = 0, it can be computed as a function of the parameter α of the moduli potential, which implies that the coefficient L(α, l) is a function only of α and l. From the definition (44), one can easily obtain r_b/r_s = λ/η and so (48) can be rewritten in this useful form: There is an important difference compared with the no-hair case, namely the appearance of the function L(α, l). When α is very small, so is L(α, l) and, in this case, one can still keep the radius of the horizon of the same size as r_s. Therefore, for small α, not only the small hot black holes, but also the large hot black holes are unstable and decay to hairy AdS solitons. We are going to comment more on this new feature in the 'Conclusions' section. When the parameter α is large, the thermodynamical behaviour of hairy black holes is similar to that of the planar black holes without hair.
Conclusions
Hawking and Page have shown that there exists a phase transition between spherical AdS (Schwarzschild) black hole and global (k = 1) AdS spacetime. As is well known, the phase transition, both on the gravity side and on the gauge theory side, is sensitive to the topology of the AdS foliation. For AdS black holes with planar horizon geometry, there exists no Hawking-Page transition with respect to AdS spacetime. In other words, the planar black hole phase is always dominant for any non-zero temperature.
Interestingly, it was shown that when one (or more) directions are compact there also exist Hawking-Page phase transitions between the planar black holes and the AdS soliton, which is obtained by a double analytic continuation from the black hole. We have obtained a similar behaviour for the hairy black holes, but now the ground state corresponds to a hairy soliton. One important difference with the no-hair case is that the phase transition is also controlled by the parameter α in the scalar potential. Once α is fixed, the theory is fixed, but for very small α the theory contains hot black holes (small or large) that are unstable and decay to hairy AdS solitons. This drastic change is related to the fact that when α vanishes, the hairy black hole solutions become naked singularities. The self-interaction of the scalar field is very weak and so a large temperature can destabilize the system regardless of the size of the black hole.
As a future direction, it will be interesting to understand the physics of this instability in the dual field theory. It will also be interesting to investigate the general phase diagram for an arbitrary value of the parameter ν in the moduli potential and the embedding in supergravity [38]. When the effective cosmological constant vanishes, one can also obtain hairy black holes in flat space (stationary hairy black holes were also obtained, but only numerically). The thermodynamics and phase diagram of asymptotically flat hairy black holes [28,[39][40][41] can also be studied with a similar counterterm method [42][43][44]. | 2016-07-04T18:28:59.000Z | 2016-06-25T00:00:00.000 | {
"year": 2016,
"sha1": "8ec7af2878a5dfe79d61cb15febf720ff429a7dc",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.physletb.2016.08.049",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "8ec7af2878a5dfe79d61cb15febf720ff429a7dc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
14674115 | pes2o/s2orc | v3-fos-license | Contribution of oligomerization to the anti-HIV-1 properties of SAMHD1
Background SAMHD1 is a restriction factor that potently blocks infection by HIV-1 and other retroviruses. We have previously demonstrated that SAMHD1 oligomerizes in mammalian cells by immunoprecipitation. Here we investigated the contribution of SAMHD1 oligomerization to retroviral restriction. Results Structural analysis of SAMHD1 and homologous HD domain proteins revealed that key hydrophobic residues Y146, Y154, L428 and Y432 stabilize the extensive dimer interface observed in the SAMHD1 crystal structure. Full-length SAMHD1 variants Y146S/Y154S and L428S/Y432S lost their ability to oligomerize tested by immunoprecipitation in mammalian cells. In agreement with these observations, the Y146S/Y154S variant of a bacterial construct expressing the HD domain of human SAMHD1 (residues 109–626) disrupted the dGTP-dependent tetramerization of SAMHD1 in vitro. Tetramerization-defective variants of the full-length SAMHD1 immunoprecipitated from mammalian cells and of the bacterially-expressed HD domain construct lost their dNTPase activity. The nuclease activity of the HD domain construct was not perturbed by the Y146S/Y154S mutations. Remarkably, oligomerization-deficient SAMHD1 variants potently restricted HIV-1 infection. Conclusions These results suggested that SAMHD1 oligomerization is not required for the ability of the protein to block HIV-1 infection.
SAMHD1 is comprised of the sterile alpha motif (SAM) and histidine-aspartic (HD) domains. The HD domain of SAMHD1 is a dGTP-regulated deoxynucleotide triphosphohydrolase that decreases the cellular levels of dNTPs [27][28][29][30]. The sole HD domain is sufficient to potently restrict infection by different viruses [31]. The HD domain is also necessary for the ability of SAMHD1 to oligomerize and to bind RNA [31]. The ability of SAMHD1 to block retroviral infection in noncycling cells, such as macrophages, dendritic cells and resting CD4+ T cells, is controlled by phosphorylation of T592 [32][33][34]. Phosphorylation of SAMHD1 regulates the capability of SAMHD1 to block HIV-1 infection but not the ability to decrease the cellular levels of dNTPs [33].
In agreement with Goldstone and colleagues, we have established that SAMHD1 is an oligomeric protein in mammalian cells [31,33]; however, the contribution of oligomerization to the ability of SAMHD1 to block HIV-1 infection is not understood. Previous studies have suggested that oligomerization is essential for the enzymatic activity of the HD domain [35]. This work explores the contribution of SAMHD1 oligomerization to HIV-1 restriction, dNTPase activity and nuclease activity. Using the SAMHD1 structure provided by Goldstone and colleagues, we identify key interfacial residues and demonstrate that their mutations disrupt SAMHD1 oligomerization. Recombinant purified oligomerization-deficient SAMHD1 mutants lost their dNTPase but not nuclease activity. In agreement, oligomerization-deficient SAMHD1 mutants immunoprecipitated from mammalian cells lost their dNTPase activity. Remarkably, oligomerization-deficient SAMHD1 variants potently restricted HIV-1 infection. These results suggest that SAMHD1 oligomerization is not required for the ability of the protein to block HIV-1 infection.
Mutations of hydrophobic interfacial residues disrupt SAMHD1 oligomerization in mammalian cells
The recently discovered restriction factor SAMHD1 blocks infection of HIV-1 and other retroviruses [20,21,[27][28][29][30][31]36,37]. In the crystal structure by Goldstone and colleagues the HD domain of the human SAMHD1 appears as a dimer with extensive dimerization interface [29] ( Figure 1A). A very similar interface was observed in the structure of EF1143, an HD domain protein from Enterococcus faecalis, although the bacterial protein was found to be tetrameric in the crystal [35]. It has been proposed that SAMHD1 also functions as a tetramer [38]. To understand the contribution of oligomerization to the antiviral activity of SAMHD1, we set out to explore the antiviral activity of oligomerization-defective SAMHD1 variants. Inspection of the SAMHD1 crystal structure reveals that the extensive dimer interface is stabilized by two hydrophobic patches formed by residues Y146, Y154, L428 and Y432 ( Figure 1B), thus we investigated how mutations of these residues affect SAMHD1 oligomerization and activity.
To test the hypothesis that residues in the hydrophobic patches stabilized the dimer interface, we tested the ability of these mutants to oligomerize by using our previously described oligomerization assay [31]. As shown in Figure 2A and Table 1, FLAG-tagged SAMHD1 variants Y146S/Y154S, L428S/Y432S and Y146S/Y154S/L428S/Y432S lost the ability to oligomerize with the HA-tagged wild-type SAMHD1 (mutant association to wild type), suggesting that these variants are no longer able to form oligomers. We also tested the ability of each FLAG-tagged variant to interact with its corresponding HA-tagged mutant (Figure 2B and Table 1) (mutant self-association). These results showed that the SAMHD1 oligomerization-defective variants were not able to interact with themselves.
[Figure 1 caption, continued [29]: The extensive dimer interface is stabilized by two hydrophobic patches formed by residues Y146, Y154, L428 and Y432, shown in green and magenta. (B) Close-up view showing the packing of the four hydrophobic residues at the interface. The two patches are related by the 2-fold rotational symmetry of the dimer.]

[Figure 2 caption: (A) Oligomerization of SAMHD1 variants was tested as previously described [31]. Briefly, human 293 T cells were co-transfected with a plasmid expressing wild-type SAMHD1-HA and a plasmid expressing either wild-type or mutant SAMHD1-FLAG proteins. Cells were lysed 24 hours after transfection and analyzed by Western blotting using anti-HA and anti-FLAG antibodies (Input). Subsequently, lysates were immunoprecipitated using anti-FLAG agarose beads. Anti-FLAG agarose beads were eluted using FLAG peptide, and elutions were analyzed by Western blotting using anti-HA and anti-FLAG antibodies (Immunoprecipitation). Similar results were obtained in two independent experiments and representative data are shown. WB, Western blot; IP, immunoprecipitation; WT, wild type. (B) Similar immunoprecipitations were performed by pulling down an HA-tagged variant with its corresponding FLAG-tagged variant. (C) The ability of SAMHD1 variants to bind nucleic acids was tested as previously described [31]. Human 293 T cells transfected with plasmids expressing the SAMHD1 variants were lysed (Input) and incubated with the RNA analog ISD-PS immobilized on Strep-Tactin Superflow affinity resin. Proteins eluted from the resin were visualized by Western blotting using anti-FLAG antibodies (Bound). Similar results were obtained in three independent experiments and a representative experiment is shown. ISD-PS, interferon-stimulatory DNA sequence containing a phosphorothioate backbone. (D) Intracellular distribution of SAMHD1 variants in HeLa cells. HeLa cells expressing the indicated SAMHD1-FLAG variants were fixed and immunostained using antibodies against FLAG (red) as previously described [31,44]. Cellular nuclei were stained using DAPI (blue). Image quantification for three independent experiments is shown in Additional file 1.]

To indirectly rule out the possibility that the SAMHD1 oligomerization-defective variants are misfolded
proteins, we tested the ability of these variants to bind RNA (Figure 2C and Table 1), as described [31]. For this purpose we tested the ability of SAMHD1 to interact with the interferon-stimulatory DNA sequence containing a phosphorothioate backbone (ISD-PS), which is an RNA analog [31,39]. As shown in Figure 2C, all tested SAMHD1 variants were able to interact with the RNA analog ISD-PS. These results indicated that oligomerization is not required for the ability of SAMHD1 to bind RNA (Table 1). Next we tested the ability of the SAMHD1 variants to localize to the nuclear compartment (Figure 2D). SAMHD1 variants Y146S/Y154S and L428S/Y432S localized exclusively to the nuclear compartment (Figure 2D and Table 1). By contrast, image quantification of the SAMHD1 variant Y146S/Y154S/L428S/Y432S showed that this variant does not exhibit complete nuclear localization, suggesting that this particular variant has lost a function or is partially misfolded (Figure 2D and Additional file 1). Because the SAMHD1 variant Y146S/Y154S/L428S/Y432S has lost nuclear localization, we did not pursue its analysis further.
To get a more refined mechanistic understanding of the effect of the interfacial SAMHD1 mutations we performed in vitro comparative studies of SAMHD1 oligomerization. The HD domain construct of human SAMHD1 used by Goldstone and colleagues in the crystallographic studies (residues 120-626) [29] lacks several N-terminal residues that are important for the binding of dGTP at the allosteric site, as observed in the bacterial HD domain homologue to SAMHD1 [35]. Therefore, we used an extended construct that comprises SAMHD1 residues 109-626 for our in vitro studies.
Size-exclusion chromatography of the purified wild type and Y146S/Y154S variants of the SAMHD1 construct 109-626 was performed on HiLoad 16/60 Superdex 200 media (GE Life Sciences), and showed that both proteins elute as single peaks at a retention volume of approximately 82 mL, indicating that both recombinant proteins are predominantly monomeric in solution (Figure 3A). Following incubation of the proteins with dGTPαS, a dGTP analog that is hydrolyzed by SAMHD1 at a slower rate, size-exclusion chromatography revealed an additional peak at ~69 mL in the chromatogram of the wild type protein, which is absent in the Y146S/Y154S sample. This peak is distinct from the high molecular weight aggregates, which elute in the excluded volume (42-45 mL) of the HiLoad 16/60 Superdex 200 column. Most likely the 69 mL peak corresponds to the previously reported tetrameric form of the HD domain [38].
The effect of dGTPαS incubation on the oligomeric state of the protein was investigated using sedimentation velocity, as described in [40]. Diffusion-corrected van Holde-Weischet sedimentation coefficient distributions [41] of the purified proteins (Figure 3B) revealed mono-disperse species with a sedimentation coefficient close to 4. Additional 2DSA-Monte Carlo analysis [42,43] reports a frictional ratio of ~1.5, which corresponds to a molecular weight of ~60 kDa, in agreement with a monomeric state.

[Table 1 footnotes: (a) HIV-1 restriction was measured by infecting U937 cells stably expressing the indicated SAMHD1 variants with HIV-1-GFP. After 48 hours, the percentage of GFP-positive cells (infected cells) was determined by flow cytometry. (b) Oligomerization of the different SAMHD1 variants was determined by measuring the ability of the SAMHD1-FLAG variant to interact with the wild type SAMHD1-HA variant, as described [31]. "+" indicates 100% oligomerization, which corresponds to the amount of wild type SAMHD1-HA that interacts with wild type SAMHD1-FLAG. "-" indicates the absence of oligomerization. (c) SAMHD1-FLAG variants were assayed for their ability to bind the double-stranded RNA analog ISD-PS, as described [31]. "+" indicates the RNA binding achieved by wild type SAMHD1. (d) Subcellular localization of the different SAMHD1 variants in HeLa cells was performed as described [31]. "N" indicates nuclear localization; "N/C" indicates nuclear and cytoplasmic localization. (e) The cellular dATP levels of PMA-treated U937 cells stably expressing the different SAMHD1 variants were determined by primer extension as described [31]. "Low" indicates levels similar to the dATP levels observed in PMA-treated U937 cells stably expressing wild type SAMHD1. (f) WT and SAMHD1-FLAG variants were assayed for association with wild-type SAMHD1-HA as described [31]. Percentages are an average of two independent experiments. The percentage represents the fraction of the SAMHD1 variant coprecipitated with wild-type SAMHD1 relative to the amount of wild-type SAMHD1 coprecipitated with itself. (g) WT and SAMHD1-FLAG variants were assayed for association with wild-type and variant SAMHD1-HA as described [33]. Percentages are an average of two independent experiments. The percentage represents the fraction of the SAMHD1 variant coprecipitated with itself relative to the coprecipitation of wild-type SAMHD1 with itself.]

Incubation of wild type monomeric SAMHD1 with dGTPαS induced the formation of
high molecular weight species; this oligomer sediments at approximately 9.7 s consistent with a 240 kDa tetramer with a frictional ratio of 1.5 ( Figure 3C and E). By contrast, dGTPαS had no effect on the oligomerization state of the Y146S/Y154S variant ( Figure 3D-E), which is in agreement with the results obtained by size-exclusion chromatography. In all samples, we observed the appearance of a low sedimentation component (< 2) most likely the result of dGTPαS absorption at 280 nm. Collectively, this data demonstrates that the recombinant wild type HD domain of SAMHD1 can form a tetramer in a dGTP-dependent manner, and that tetramerization is disrupted by the Y146S/Y154S mutation.
To understand the contribution of dGTP-mediated tetramerization to SAMHD1 enzymatic activity, we investigated the dNTPase and nuclease activity of Y146S/ Y154S and wild type SAMHD1 proteins.
To study the dNTPase activity, we used an NMRbased dGTP hydrolysis assay to monitor the dNTPase activity of SAMHD1 ( Figure 4A). The H8 proton of the guanine base appears as a narrow singlet peak at 8.04 ppm in the 1 H NMR spectrum of dGTP. This signal is shifted to 7.92 ppm upon hydrolysis of dGTP to deoxyguanosine, and can thus be used to monitor SAMHD1-catalyzed dGTP hydrolysis reaction in real time ( Figure 4A). The assay revealed that the wild type construct hydrolyzed dGTP whereas the activity of the Y146S/Y154S mutant was virtually undetectable ( Figure 4B).
Subsequently, we tested the nuclease activity of the two SAMHD1 constructs using a quenched fluorescent single-stranded DNA substrate as described in Methods. The measured activity of the Y146S/Y154S variant is slightly lower when compared to the nuclease activity of the wild type protein ( Figure 4C). These results indicated that in contrast to the dNTPase activity, the nuclease activity of SAMHD1 is not subject to allosteric regulation via dGTP-dependent tetramerization.
To directly analyze the dNTPase activity of SAMHD1 full-length variants, we tested the ability of immunoprecipitated SAMHD1 variants ( Figure 5A) to hydrolyze α-32 P-TTP to dT and α-32 PPP, in the presence of the allosteric activator dGTP. For this purpose we incubated the indicated SAMHD1 variant in the presence of radiolabeled α-32 P-TTP. Reaction products were separated using thin-layer chromatography in order to determine the amount of hydrolyzed α-32 PPP ( Figure 5B), as previously shown [31,33]. In agreement with our results using bacterially purified protein, immunoprecipitated Y146S/ The nuclease activity of wild type and Y146S/Y154S SAMHD1 proteins was measure using a nuclease activity assay. Briefly the different proteins were incubated with a single stranded DNA (ssDNA) containing a 5′ FAM label and a 3′ BHQ1 black hole quencher. The fluorescence of the ssDNA substrate containing a 5′ FAM label and a 3′ BHQ1 black hole quencher is increased more than 6 fold after the ssDNA is cleaved. Plots of total FAM fluorescence measured as a function of time reveal that Y146S/Y154S mutation has only a modest effect on SAMHD1 nuclease activity. Figure 5 dNTPase activity of SAMHD1 oligomerization variants immunoprecipitated from mammalian cells. The indicated FLAG-tagged SAMHD1 variants were immunoprecipitated (A), and tested for their ability to hydrolyze α − 32 P-TTP to dT and α − 32 PPP, in the presence of the allosteric activator dGTP. Reactions products were separated using thin-layer chromatography using polyethyleneimine cellulose in order to determine the amount of hydrolyzed α − 32 PPP (B). As a control, we have included the mutant HD206AA, which is a SAMHD1 protein defective in the active site of the HD domain. The results of three independent enzymatic reactions per treatment are shown. WT, wild type; CIP, calf intestine phosphatase.
Y154S and L428S/Y432S SAMHD1 variants lost dNTPase activity when compared to wild type SAMHD1 ( Figure 5B). As expected, the SAMHD1 variant HD206AA completely lost dNTPase activity [31,33]. These results suggested that mutants that lost the ability to form tetramers in a dGTP-dependent manner were also defective in their dNTPase activity.
Ability of SAMHD1 variants to restrict HIV-1 infection
To understand whether dGTP-dependent tetramerization contributes to the antiretroviral properties of SAMHD1, we tested the ability of dGTP-dependent tetramerization-defective SAMHD1 variants to restrict HIV-1 infection. For this purpose, we stably expressed the indicated SAMHD1 variants in human monocytic U937 cells ( Figure 6A), and tested them for the ability to block HIV-1 infection. PMA-treated U937 cells stably expressing SAMHD1 variants were challenged with increasing amounts of HIV-1 virus expressing GFP as a reporter of infection ( Figure 6B and Table 1). Remarkably, SAMHD1 variants that lost dGTP-dependent tetramerization potently restricted HIV-1 infection. These results suggested that SAMHD1 dGTP-dependent tetramerization is not required for the ability of SAMHD1 to block infection.
Because expression of SAMHD1 in U937 cell decreases the cellular levels of deoxynucleotides (dNTPs), we measured the cellular levels of dNTPs in U937 cells expressing the different SAMHD1 variants, as previously described ( Figure 6C and Table 1) [31]. Interestingly, SAMHD1 oligomerization variants decreased the cellular levels of dNTPs ( Figure 6C and Table 1) indicating that the dNTPase activity of SAMHD1 in mammalian cells may be upregulated by a mechanism that does not depend on tetramerization and dGTP binding.
Vpx-mediated degradation of SAMHD1 variants
Finally, we explored the ability of Vpx from HIV-2 ROD (Vpx ROD) to degrade SAMHD1 oligomerization-defective variants, as previously described [44]. As shown in Figure 7, tetramerization-defective SAMHD1 variants were degraded by Vpx ROD. As a control, we used the Vpx protein from red-capped mangabeys (Vpx RCM), which does not induce the degradation of SAMHD1. These results indicated that dGTP-induced tetramerization is not required for the ability of Vpx to degrade SAMHD1.
Discussion
Overall, the work presented here analyzes the contribution of oligomerization to the different functions of SAMHD1. Close analysis of the interfacial residues in the structure presented by Goldstone and colleagues revealed four residues (Y146, Y154, L428 and Y432) that might be stabilizing the hydrophobic interactions between the monomers in the dimer structure [29]. To test this hypothesis we tested the ability of the double mutants Y146S/Y154S and L428S/Y432S to form oligomers. Using our oligomerization assay that utilizes proteins extracted from mammalian cells [31], we found that SAMHD1 variants Y146S/Y154S and L428S/Y432S completely lost their ability to form oligomers. In agreement, the recombinant Y146S/Y154S variant of the HD domain construct (SAMHD1 residues 109-626) lost its dGTP-dependent tetramerization ability when compared to wild type protein, as measured by gel filtration and analytical ultracentrifugation. These results show that hydrophobic interfacial residues Y146, Y154, L428 and Y432 are critical for the dGTP-dependent tetramerization ability of SAMHD1.
Next we explored the contribution of oligomerization to the described enzymatic activities of SAMHD1. The HD domain of SAMHD1 exhibits dNTPase and nuclease activity [28][29][30][31]45]. Interestingly, SAMHD1 oligomerization-defective variants lost their dNTPase activity when SAMHD1 proteins were prepared in bacteria or in mammalian cells. These results suggested that tetramerization is important for dNTPase activity, as previously suggested [29,35,38]. In contrast, the nuclease activity of the Y146S/ Y154S oligomerization-defective SAMHD1 variant was not significantly perturbed. Overall, these findings suggested that dGTP-dependent SAMHD1 tetramerization is important for dNTPase but not nuclease activity. These results are interesting in the light of the new discovery that SAMHD1 exhibit nuclease activity [45], suggesting that RNAase might be part of the mechanism by which SAMHD1 blocks HIV-1 infection.
We found that SAMHD1 variants that are defective for dGTP-dependent tetramerization potently blocked HIV-1 infection when compared to wild type SAMHD1, which suggested that oligomerization is not required for the antiretroviral properties of SAMHD1. Surprisingly, SAMHD1 oligomerization-deficient mutants were able to decrease the dNTP cellular levels when compared to wild type SAMHD1. These results suggest that the dNTPase activity of SAMHD1 might be regulated in cells by a yet unknown mechanism that does not require tetramerization. Another possibility is that SAMHD1 mutants that are strongly oligomerization-deficient in our in-vitro and immunoprecipitation assays described here, are still capable of forming tetramers when inside mammalian cells through interaction with other factors or some other compensatory mechanism. Future experiments will determine whether dNTPase and/or nuclease activities are required to block HIV-1 infection.
Conclusions
These results suggested that SAMHD1 oligomerization is not required for the ability of the protein to block HIV-1 infection.
Generation of U937 cells stably expressing SAMHD1 variants
Retroviral vectors encoding wild type or mutant SAMHD1 proteins fused to FLAG were created using the LPCX vector (Clontech). Recombinant viruses were produced in 293 T cells by co-transfecting the LPCX plasmids with the pVPack-GP and pVPack-VSV-G packaging plasmids (Stratagene). The pVPack-VSV-G plasmid encodes the vesicular stomatitis virus G envelope glycoprotein, which allows efficient entry into a wide range of vertebrate cells [46]. Transduced human monocytic U937 cells were selected in 0.4 mg/ml puromycin (Sigma).
Infection with HIV-1 expressing the green fluorescent protein (GFP)
HIV-1 expressing GFP, pseudotyped with the VSV-G glycoprotein, were prepared as described [47]. For infections, 6 × 10⁴ cells seeded in 24-well plates were either treated with 10 ng/ml phorbol-12-myristate-13-acetate (PMA) or DMSO for 16 h. PMA stock solution was prepared in DMSO at 250 mg/ml. Subsequently, cells were incubated with HIV-1-GFP for 48 h at 37°C. The percentage of GFP-positive cells was determined by flow cytometry (Becton Dickinson). Viral stocks were titrated by serial dilution on dog Cf2Th cells.

[Figure 7 caption: Vpx-induced degradation of SAMHD1 variants. HeLa cells were cotransfected with plasmids allowing expression of SAMHD1-FLAG variants and the Vpx protein of HIV-2 ROD (Vpx ROD) or the Vpx protein of SIVrcm (Vpx rcm), as described [44]. Thirty-six hours post-transfection the cells were harvested, and the expression levels of SAMHD1 and Vpx were analyzed by Western blot using anti-FLAG antibodies. As a loading control, cell extracts were Western blotted using antibodies against GAPDH. Similar results were obtained in three independent experiments and a representative experiment is shown.]
SAMHD1 oligomerization assay
Approximately 1.0 × 10 7 human 293 T cells were cotransfected with plasmids encoding SAMHD1 variants tagged with FLAG and HA. After 24 h, cells were lysed in 0.5 ml of whole-cell extract (WCE) buffer [50 mM Tris (pH 8.0), 280 mM NaCl, 0.5% IGEPAL, 10% glycerol, 5 mM MgCl2, 50 μg/ml ethidium bromide, 50 U/ml benzonase tail (Roche)]. Lysates were centrifuged at 14,000 rpm for 1 h at 4°C. Post-spin lysates were then pre-cleared using protein A-agarose (Sigma) for 1 h at 4°C; a small aliquot of each of these lysates was stored as input. Pre-cleared lysates containing the tagged proteins were incubated with anti-FLAG-agarose beads (Sigma) for 2 h at 4°C. Anti-FLAG-agarose beads were washed three times in WCE buffer, and immune complexes were eluted using 200 mg of FLAG tripeptide/ml in WCE buffer. The eluted samples were separated by SDS-PAGE and analyzed by Western blotting using either anti-HA or anti-FLAG antibodies (Sigma).
Sense and antisense primers were incubated at 65°C for 20 min, and primers were allowed to anneal by cooling down to room temperature. Annealed primers were immobilized on an Ultralink Immobilized Streptavidin Plus Gel (Pierce). Cells were lysed using TAP lysis buffer (50 mM Tris pH 7.5, 100 mM NaCl, 5% glycerol, 0.2% NP-40, 1.5 mM MgCl2, 25 mM NaF, 1 mM Na3VO4, protease inhibitors) and lysates were cleared by centrifugation. Cleared lysates (Input) were incubated with immobilized nucleic acids at 4°C on a rotary wheel for 2 h in the presence of 10 mg/ml of Calf-thymus DNA (Sigma) as a competitor. Unbound proteins were removed by three consecutive washes in TAP lysis buffer. Bound proteins to nucleic acids (Bound) were eluted by boiling samples in SDS sample buffer (63 mM Tris-HCl, 10% Glycerol 2% SDS, 0.0025% Bromophenol Blue) and analyzed by Western blot-ting using anti-FLAG antibodies (Sigma).
In vitro oligomerization assays.
WT and Y146S/Y154S variants of the strep-tagged HD domain construct of human SAMHD1 (residues 109-626) were expressed in BL21(DE3) E. coli using a pET expression vector. Protein was purified by affinity chromatography [29]. SAMHD1 constructs at 8 μM concentration were incubated with or without 50 μM dGTPαS for 4 days at 4°C. After the incubation, the samples were analyzed by size-exclusion chromatography using a HiLoad 16/60 Superdex 200 column (GE Life Sciences).
Nuclease. A quenched fluorescent single-stranded DNA substrate was used to measure the nuclease activity of SAMHD1 HD domain constructs. The single-stranded 45-base DNA oligo 5′-tacagatctactagtgatctatgactgatctgtacatgatctaca-3′ was ordered from MWG Operon with 5′-FAM and 3′-BHQ1 modifications. The substrate (100 μM) and the enzyme (12.5 μM and 3.25 μM) stocks were prepared in the assay buffer (50 mM Tris, pH 7.4, 5 mM MgCl2, 50 μM Zn2+ and 50 mM NaCl). 20 μL of the substrate stock was mixed with 20 μL of the enzyme stock in a 384-well microplate and the fluorescence signal was measured on a Biotek Synergy 2 microplate reader using 485/20 excitation and 528/20 emission filters. The fluorescence intensities were plotted as a function of the reaction time.
Determination of dNTPs cellular levels.
2 × 10⁶ to 3 × 10⁶ cells were collected for each cell type. Cells were washed twice with 1x PBS, pelleted and resuspended in ice-cold 65% methanol. Cells were vortexed for 2 min and incubated at 95°C for 3 min. Cells were centrifuged at 14000 rpm for 3 min and the supernatant was transferred to a new tube for complete drying of the methanol in a speed vac. The dried samples were resuspended in molecular grade dH2O. An 18-nucleotide primer labeled at the 5′ end with 32P (5′-GTCCCTGTTCGGGCGCCA-3′) was annealed at a 1:2 ratio to four different 19-nucleotide templates (5′-NTGGCGCCCGAACAGGGAC-3′), where 'N' represents the nucleotide variation at the 5′ end. The reaction contains 200 fmoles of template primer, 2 ml of 0.5 mM dNTP mix for the positive control or dNTP cell extract, 4 ml of excess HIV-1 RT, 25 mM Tris-HCl, pH 8.0, 2 mM dithiothreitol, 100 mM KCl, 5 mM MgCl2, and 10 μM oligo (dT) to a final volume of 20 mL. The reaction was incubated at 37°C for 5 min before being quenched with 10 mL of 40 mM EDTA and 99% (vol/vol) formamide at 95°C for 5 min. The extended primer products were resolved on a 14% urea-PAGE gel and analyzed using a phosphoimager. The extended products were quantified using QuantityOne software to quantify percent volume of saturation. The quantified dNTP content of each sample was corrected for its dilution factor, so that each sample volume was adjusted to obtain a signal within the linear range of the assay.
Immunofluorescence microscopy
Transfections of cell monolayers were performed using Lipofectamine Plus reagent (Invitrogen), according to the manufacturer's instructions. Transfections were incubated at 37°C for 24 h. Indirect immunofluorescence microscopy was performed as previously described [44]. Transfected monolayers grown on coverslips were washed twice with PBS 1X (137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4·2H2O, KH2PO4) and fixed for 15 min in 3.9% paraformaldehyde in PBS 1X. Fixed cells were washed twice in PBS 1X, permeabilized for 4 min in permeabilizing buffer (0.5% Triton X-100 in PBS), and then blocked in PBS 1X containing 2% bovine serum albumin (blocking buffer) for 1 h at room temperature. Cells were then incubated for 1 h at room temperature with primary antibodies diluted in blocking buffer. After three washes with PBS, cells were incubated for 30 min with secondary antibodies and 1 mg of DAPI (4′,6-diamidino-2-phenylindole)/ml. Samples were mounted for fluorescence microscopy using the ProLong Antifade Kit (Molecular Probes, Eugene, OR). Images were obtained with a Zeiss Observer Z1 microscope using a 63x objective, and deconvolution was performed using the software AxioVision V4.8.1.0 (Carl Zeiss Imaging Solutions).
Assay to determine dNTPase activity of SAMHD1 by thin-layer chromatography

Wild type and mutant SAMHD1 proteins immunoprecipitated from mammalian cells were incubated with or without 100 μM dGTP, 500 μM dTTP and 0.25 μl α-32P-dTTP (PerkinElmer) in SAMHD1 reaction buffer (50 mM Tris-HCl pH 8, 50 mM KCl, 5 mM MgCl2, 0.1% Triton X-100) in a 17.5 μl final volume. Reactions were initiated by addition of SAMHD1, incubated for 1 h at 37°C, and terminated by incubation for 10 min at 70°C. The no-enzyme control reaction and the antarctic phosphatase reaction contained dGTP. The antarctic phosphatase reaction (2 μl, New England BioLabs) was used to show the mobility of monophosphates on the plate as a comparison to triphosphate mobility. Reactions were spotted (0.5 μl) on a TLC PEI Cellulose F plate (EMD Chemicals) and separated in a 0.8 M LiCl solvent. Product formation was analyzed on a Bio-Rad Personal Molecular Imager. | 2016-05-12T22:15:10.714Z | 2013-11-12T00:00:00.000 | {
"year": 2013,
"sha1": "b35530c49d82a61e83196e9020ace12ae8079ac1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/1742-4690-10-131",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eae8623c47ad723921a39ef3018585f4aa5f4d6d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
3636749 | pes2o/s2orc | v3-fos-license | The ESO Survey of Non-Publishing Programmes
One of the classic ways to measure the success of a scientific facility is the publication return, which is defined as the number of refereed papers produced per unit of allocated resources (for example, telescope time or proposals). The recent studies by Sterzik et al. (2015, 2016) have shown that 30-50 % of the programmes allocated time at ESO do not produce a refereed publication. While this may be inherent to the scientific process, this finding prompted further investigation. For this purpose, ESO conducted a Survey of Non-Publishing Programmes (SNPP) within the activities of the Time Allocation Working Group, similar to the monitoring campaign that was recently implemented at ALMA (Stoehr et al. 2016). The SNPP targeted 1278 programmes scheduled between ESO Periods 78 and 90 (October 2006 to March 2013) that had not published a refereed paper as of April 2016. The poll was launched on 6 May 2016, remained open for four weeks, and returned 965 valid responses. This article summarises and discusses the results of this survey, the first of its kind at ESO.
The SNPP sample included all Normal, Guaranteed Time Observations (GTO) and Target of Opportunity (TOO) programmes that were scheduled between October 2006 and March 2013. This timeframe was selected to accommodate some delay between data acquisition and publication. To minimise ambiguity, we only considered programmes for which all runs were scheduled at the highest priority (i.e., Visitor Mode [VM] or A-ranked Service Mode [SM]). In addition, only programmes that had acquired a minimum amount of data were included in order to remove obvious cases, with a threshold of one science frame per allocated hour. In the selected period range, we identified 2716 proposals that obeyed the above criteria (90.7 % of the total A-ranked SM and VM proposals), involving 2089 Normal, 478 GTO and 149 TOO programmes. According to the ESO bibliographic database telbib (1) (Grothkopf & Meakins 2015), 1278 (47.1 %) of these programmes have not produced a refereed paper (2) as of 16 April 2016. This gives an overall publication return of 52.9 %, with publication fractions of 52.5 %, 52.7 % and 59.7 % for Normal, GTO and TOO programmes, respectively. 1143 Principal Investigators (PIs) were associated with the 2716 survey programmes; 755 (66.1 %) of the PIs from this group did not publish a paper associated with these programmes. 34 % of PIs published results for all programmes, 29 % published results for some programmes, and 37 % published results for none at all. 45 % of the PIs were associated with only one programme from the survey, and 55 % of these did not publish. On average, 1.1 proposals per PI have not yet produced a refereed paper. The sample of 2716 survey programmes involves time allocated on 33 different instruments. For programmes that were allocated time on more than one instrument, we introduced the concept of a fractional proposal, attributing to a given instrument a fraction corresponding to the portion of total time assigned to it. For instance, if a programme was allocated one hour on FORS2 and four hours on UVES, this was counted as 0.2 and 0.8 proposals for the two instruments, respectively. It is worth noting that 91.5 % of the survey proposals requested time on a single instrument, and 7.6 % requested two instruments. Table 1 shows the distribution of proposals per instrument for the entire survey sample, as well as for the sub-sample that did not publish. For simplicity, we grouped instruments with fewer than 50 proposals under OTHER. These correspond to 5 % of the total and involve eleven instruments, including SUSI2, TIMMI2 and VIRCAM. Table 1 also shows the nominal non-publishing fraction per instrument. According to this metric, which neglects instruments with low number statistics (i.e., OTHER), the most productive instrument is HARPS, with a nominal publication return rate of about 78 %. At the other end of the distribution, VIMOS and CRIRES are characterised by return rates lower than 39 %.

Footnotes: (a) Time Allocation Working Group Report: http://www.eso.org/public/about-eso/committees/uc/uc-41st/TAWG_REPORT.pdf. (1) ESO telbib database: http://telbib.eso.org. (2) Throughout this paper the definition of non-publishing programmes includes archival publications, i.e., articles that would be published by scientists not included in the list of co-investigators for the given proposal. Therefore, in this study, a non-publishing programme is one that has produced no refereed publication of any kind.
Although there is certainly a degree of instrument dependence, approximately 80 % of the proposals show a publication rate of less than 60 %, irrespective of the instrument used to produce the data.
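As an aside on the fractional-proposal bookkeeping introduced above, the accounting can be made explicit with a minimal Python sketch (hypothetical field names; an illustration, not the actual SNPP software):

from collections import defaultdict

def fractional_proposals(programmes):
    """Attribute to each instrument a fraction of every programme,
    proportional to the share of the total time allocated on it."""
    counts = defaultdict(float)
    for prog in programmes:
        total = sum(prog["hours"].values())
        for instrument, hours in prog["hours"].items():
            counts[instrument] += hours / total
    return dict(counts)

# Worked example from the text: 1 h on FORS2 and 4 h on UVES count as
# 0.2 and 0.8 proposals, respectively.
print(fractional_proposals([{"id": "example", "hours": {"FORS2": 1.0, "UVES": 4.0}}]))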
THE QUESTIONNAIRE
The PIs were asked the following question: "Why were you not able to publish the results of your observations in a refereed paper?" and were provided with ten possible options: 1. I did publish a refereed paper (provide a hyperlink in the comments).
2. Insufficient data quality (observations out of required specifications).
3. Insufficient data quantity.
4. Inadequate ESO tools.
5. Null or inconclusive results.
6. Lack of resources on the PI side.
7. Science case no longer interesting.
8. I am still working on the data (provide time estimate in the comments).
9. I published a non-refereed paper (provide a hyperlink in the comments).
10. Other.
The web form included a free-text field for comments. The responses were tagged with the Programme ID, to enable the analysis of correlations between the answer and programme properties (for example, time, constraints, instruments, scientific category, etc.). Of the 1278 targeted programmes, we received responses for 965 (75.5 %). Accounting for the fact that approximately 70 queries could not be delivered (due to outof-date User Portal profiles), the response return was 80 %, which is much higher than expected from webbased surveys (∼10 %; Fan & Yan (2010)). The response rate increased for more recent time allocations, with a response rate of 85 % from PIs associated with programmes from the last semester, compared to 70 % from PIs from the first semester. PIs were allowed to select more than one option in their replies. Most selected a single option (55.5 %), with 31.1 % selecting two options and fewer than 10 % selecting three. The most popular single-option response was "8. I am still working on the data" (14 %), followed by "1. I did publish a refereed paper" (9 %). The most popular two-option response was "6. Lack of resources on the PI side" and "8. I am still working on the data" (5 %), followed by "2. Insufficient data quality" and "3. Insufficient data quantity" (3 %). The general outcome of the survey is summarised in Table 2. Given the possible multiple options within each single response, the results are presented in two flavours. For each single option, we list the number and percentage of responses and the weighted number and percentage. The weighted values were computed by giving equal weights to the various options within the same response ( Figure 1). By construction, the number of weighted responses (and percentages) adds up to 965 (100 %), whereas this is obviously not the case for the non-weighted responses. The two sets of numbers have different meanings: the latter is related to the frequency of responses associated with a given option, while the former provides information about its relative importance. The difference becomes clearer when considering the following simplified example. If a hypothetical survey includes the following four responses: (1, 1, [2, 4, 5, 6], [2,7,8,9]), the non-weighted frequencies of options 1 and 2 are both 50 %. On the other hand, the weighted fractions of the two options are 50 % (1) and 25 % (2), respectively. Therefore, while options 1 and 2 were included in the same fraction of responses (50 %), option 1 is twice as significant. The breakdown of responses by instrument shows some instrument-specific dependencies. For instance, while for X-shooter the frequency of option 8 is equal to the average (23.7 %), UVES is characterised by a significantly larger fraction (35.5 %), and AMBER shows a lower fraction (18.0 %). This may be related to the specific scientific areas covered by the instruments, the complexity of the science cases involved, and their appeal to the community. In the following sections, we will go into more detail for each of the options in the questionnaire.
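The weighting scheme can be made explicit with a short Python sketch (the four-response sample below is a hypothetical illustration, not the SNPP data):

from collections import Counter, defaultdict

def option_frequencies(responses):
    """Return non-weighted and weighted percentages per option.
    `responses` is a list of lists of selected options."""
    n = len(responses)
    raw = Counter(opt for resp in responses for opt in set(resp))
    weighted = defaultdict(float)
    for resp in responses:
        for opt in resp:
            weighted[opt] += 1.0 / len(resp)  # equal weight within a response
    raw_pct = {opt: 100.0 * c / n for opt, c in raw.items()}
    weighted_pct = {opt: 100.0 * w / n for opt, w in weighted.items()}
    return raw_pct, weighted_pct

# Two single-option responses citing option 1 and two two-option responses
# citing option 2: both options appear in 50 % of the responses, but the
# weighted shares are 50 % and 25 %, i.e. option 1 is twice as significant.
print(option_frequencies([[1], [1], [2, 4], [2, 7]]))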
Option 1: I did publish a refereed paper
Of the 124 responses associated with option 1, 14 provided incomplete information (for example, no link to the refereed publication or a link to a non-refereed publication). These cases were conservatively counted as nonpublished. The remaining 110 replies can be grouped as follows: a) the Programme ID was either wrong or absent (61; 55.5 %); b) the refereed paper appeared in print after the SNPP sample definition and was listed by telbib (25; 22.7 %); c) the paper is in the process of being accepted (21; 19.1 %); and d) the paper is missing from telbib (3; 2.7 %). 11.4 % of the responses correspond to false negatives (i.e., published programmes that were initially classified as non-publishing). This fraction, deduced from the 965 replies, can be used to compute the completeness-corrected value of the publication rate within the whole SNPP sample (N = 2716; 58.9 %). In the following we use the term "completeness" to refer to the completeness of telbib. In response to the information provided by the PIs, 64 telbib records were modified. The vast majority (87.5 %) of these records were previously included in the database, but the particular Programme ID in question was missing. We updated these records accordingly. Only eight papers (12.5 %) had not previously been considered as using ESO data; these records were added to the telbib database without further verification. As a side note, the SNPP has allowed us to robustly determine that the telbib completeness is better than 96 %.
Options 2 and 3: Insufficient data quality and/or quantity
We will discuss options 2 and 3 together because there is a clear overlap, as confirmed by comments from the PIs. In total, these two options account for 23.2 % of the cases, with 8.2 % citing only option 2, and 4.9 % citing only option 3. There is a striking difference between SM (32 %) and VM (68 %) programmes in the responses associated with option 2. This is likely due to the fact that VM observations are more adversely affected by bad weather conditions, while by definition SM is less affected by weather. We found a small correlation with the requested seeing constraint and Quality Control (QC) grades in the SM programmes. Unsurprisingly, the majority of the affected SM programmes requested relatively good conditions (seeing < 1 arcsecond) and the associated observations had higher fractions of B QC grades (i.e., one of the observing constraints was violated by up to 10 %) compared to the rest of the sample. A clear dichotomy is also seen when considering responses per telescope, with the largest fractions related to the Very Large Telescope Interferometer (VLTI; 26 %) and Unit Telescope 1 (UT1; 20 %). The vast majority (90 %) of VLTI programmes involved AMBER and were associated with Guaranteed Time Observations (GTO), which are often riskier as they tend to involve new instrumentation. For UT1, most cases are related to the early years of CRIRES operations or to problems with the degraded coating of the FORS2 longitudinal atmospheric dispersion corrector, which have since been resolved (Boffin et al. 2015). A detailed analysis of the responses that only cited option 3 confirms that the corresponding programmes had been affected by weather, technical losses (in VM), or a completion fraction lower than ∼50 % (for SM). We conclude that most of the cases involving options 2 and 3 can be accounted for within ESO's operation model, and/or reflect the early operation of new complex systems.
Option 4: Inadequate ESO tools
This was the least selected option, with a weighted fraction below 3 %, indicating that a negligible fraction of users identify the software provided by ESO as the cause for non-publication.
Option 5: Null or inconclusive results
The fraction of cases reporting null or inconclusive results is comparable to that of option 2 (insufficient quality). Although null or inconclusive results are arguably part of the scientific process, PIs may be reluctant to admit this, potentially biasing the responses and underestimating the fraction. No correlation was found between the fraction of inconclusive results and the scientific subcategories of the programmes, indicating that all science cases are affected in similar ways.
Option 6: Lack of PI resources
The weighted frequency of this option is 9.7 %. When considered together with option 8 below, these two options account for 33.4 % and point to a significant difficulty in the community to keep up with the rate of data production.
Option 7: Science case no longer interesting
Only 2.3 % of the cases were indicated as obsolete science. These occurrences can be tentatively identified as instances in which the data delivery duty cycle and/or the time taken for the PI to make the data publishable was too long compared to the evolution in the given field.
Option 8: I am still working on the data
This was the most frequent response. Excluding the 13 cases in which options 1 and 8 were selected, a total of 339 responses included this option: 135 as single option, 49 with option 6, 26 with option 10, and 129 in other combinations. For a more quantitative approach we introduce the ratio, R, between the number of proposals for which work is still in progress and the total number of non-publishing proposals (corrected for telbib completeness). The previous numbers yield R = 339/(965-110) = 39.6±2.5 % for the overall SNPP sample. This ratio can be calculated individually for each semester to study its evolution with time. The completeness-corrected result is presented in Figure 2, which shows a net and steady overall decrease for older programmes. The fact that R = 78 and not zero for the earliest semester in the sample indicates that it takes longer than 12 semesters for all programmes that will eventually produce a refereed publication to do so.
Before we discuss this result in more detail, we will define the Publication Delay Time Distribution (PDTD), which describes the delay between the allocation and the publication time. This provides a measure of the complete duty cycle, including the time for ESO to deliver the data and for the user to process, analyse and publish them. We used the data provided by the ESO telbib interface to derive this function. For each year from 2008 to 2015 we extracted the refereed publications per programme for programmes that used Paranal telescopes. Due to their nature, Large Programmes and Director Discretionary Time proposals were excluded. Each publication in the sample of 1303 refereed papers is characterised by the publication year (t_P) and the programme's allocation period (P). A given publication year is tagged with its central semester, P_0. The publication delay, in semesters, is then computed as ∆P = P_0 − P. The sample data show that only 1.1 % of papers are published with a null delay using the above definition, while this grows to 11 % for ∆P = 6 semesters, after which it steadily decreases for larger delays. This is illustrated in Figure 3, which also includes the cumulative distribution function, indicated as C(t) (where the time t is counted from P). At face value, it takes 7 semesters to reach 50 % of the publications, and 20 semesters to reach 95 %, in agreement with Sterzik et al. (2016). The quantity 1 − C(t) can be regarded as the probability that a programme that has not published a refereed paper after a time t will publish it in the future. For example, a programme that has not published after 10 semesters has a 22 % residual probability of publishing in the future.
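The delay bookkeeping can be sketched in a few lines of Python (a simplified illustration with hypothetical inputs; the mapping of publication years onto central semesters and the actual telbib sample are not reproduced here):

import numpy as np

def publication_delay_cdf(alloc_periods, pub_periods, max_delay=30):
    """Cumulative fraction C(t) of papers published within t semesters
    of the allocation period (Delta P = P0 - P)."""
    delays = np.asarray(pub_periods) - np.asarray(alloc_periods)
    t = np.arange(max_delay + 1)
    cdf = np.array([(delays <= ti).mean() for ti in t])
    return t, cdf

# 1 - C(t) is the residual probability that a programme still unpublished
# after t semesters will publish later (about 22 % at t = 10 according to
# the text).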
The behaviour of R(t) in Figure 2 is a direct consequence of the publication delay. In fact, it is easy to show (see Appendix A) that if f^0_P is the underlying publication fraction (i.e., the return rate one would measure for a sample of programmes at a time when C = 1), then the ratio R(t) observed for a set of proposals all allocated in the same period and observed after a time t (i.e., the time when the survey is carried out) is given by:

R(t) = f^0_P [1 − C(t)] / [1 − f^0_P C(t)].

One can also show (see Appendix A) that this expression can be applied to compute R̄ for a whole sample, including programmes allocated in a period range, by replacing C(t) with its weighted average C̄:

C̄ = Σ_P N(P) C(P_S − P) / Σ_P N(P),

where N(P) is the number of proposals allocated in semester P, and P_S is the period in which the survey is run. It can be readily demonstrated (see Appendix A) that the expected publication fraction at the time of the survey is simply f_P = f^0_P C̄. In the real case C̄ = 0.78, while the SNPP provided R̄ = 0.396 ± 0.025. The above relation can be inverted to express f^0_P as a function of R̄, from which one can finally estimate the delay- and completeness-corrected return rate: f^0_P = 0.75 ± 0.01. This implies that after waiting a sufficiently long time, more than 20 semesters after the most recent period in the sample (see Figure 3), one would measure a publication return of approximately 75 %. This calculation conservatively assumes that all programmes for which the users have specified option 8 will eventually publish. This assumption can be verified by comparing the real data with two predictions that descend from the above equation. The first is the overall publication fraction expected for the real SNPP case, which is given by f_P = f^0_P C̄ = 58.5 ± 1.0 %. This can be directly compared to the completeness-corrected value derived from the SNPP, 58.9 % (see above), which is fully consistent within the estimated uncertainty.
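The numbers quoted above can be cross-checked with a few lines of Python; the functional form used here for R is the one implied by the quoted values, under the assumption that every programme tagged as 'still working' eventually publishes:

R_bar = 0.396   # SNPP ratio of 'work in progress' to non-publishing programmes
C_bar = 0.78    # weighted average of the cumulative delay distribution C(t)

# Invert R = f0 * (1 - C) / (1 - f0 * C) for the underlying return rate f0:
f0 = R_bar / (1.0 - C_bar + R_bar * C_bar)   # ~0.75
# Expected publication fraction observed at the time of the survey:
f_P = f0 * C_bar                              # ~0.58, to be compared with 58.9 %

print(round(f0, 2), round(f_P, 2))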
The second prediction concerns the time dependence of R(t), as defined by the above relation. This is compared to the real SNPP data in Figure 2 (blue line), which illustrates how the predicted behaviour matches the data within the estimated uncertainties. The above results indicate that the SNPP fraction of option 8 gives a realistic representation of the situation and is not the result of a "convenient answer" from PIs attempting to justify a lack of publication. In other words, the SNPP result is fully compatible with the estimated PDTD, and shows that the publication delay correction is significant, especially when the most recent periods included in the sample date back less than 10-12 semesters at the time of the survey.
Option 9: I published a non-refereed paper
The cases in which a programme did not publish a refereed paper but rather a non-refereed article account only for 3.5 % of the total. This implies that, with very few exceptions, if a project does not produce a refereed publication then it will not produce any publication at all.
Option 10: Other
This option reflected 12.3 % of the cases and the associated comments yielded a mixture of reasons, the most frequent being that the person leading the project left the field. Other recurrent explanations included: lack of ancillary data from other facilities, results not meeting expectations, lowered priority of the project because of more pressing activities, quicker results obtained by other teams and/or with better-suited instruments, nondetections, etc.
CONSIDERATIONS ON OBSERVING MODE AND ALLOCATED TIME
As a final analysis, we have derived the completeness-corrected publication fractions considering VM and SM separately, as the two observing modes were reported to behave in a different way by Sterzik et al. (2016). For this purpose, we have considered only single observing mode proposals within the SNPP initial sample, including 1089 SM programmes (40.1 %) and 1493 VM programmes (55.0 %). The remaining 134 mixed observing mode programmes (4.9 %) were excluded from the calculations. For each of the observing modes we have computed the time intervals that define the four quartiles of the respective time distributions. These differ for SM and VM, with median allocated times of 1.4 and 2.1 nights, respectively. For the time conversion, we adopted the ESO convention of 10 hours per night in odd periods and 8 hours per night in even periods. Finally, we derived the publication fraction, $f_P$, within each time bin for the two observing modes separately (see Table 3). An interesting feature, common to both SM and VM, is the steady increase of the return rate for larger time allocations: the publication fractions in the fourth quartile are 60 % and 40 % larger than in the first quartile for the two modes, respectively. Another aspect is the larger return of VM programmes when compared to SM (Sterzik et al. 2016). To some extent this is expected, as VM programmes tend to be larger than SM programmes. This becomes clearer when comparing SM and VM runs with the same median duration. For instance, the two rates are very similar for SM runs in their second quartile (53.6 %) and the VM runs in their first quartile (50.5 %), both having a median duration of one night. Although observing mode effects cannot be excluded, the amount of time allocated to the programme appears to be the dominant factor. Figure 4 (upper panel) shows the dependence of publications on the allocated time, plotting the completeness-corrected publication fraction measured by SNPP in octiles of the overall time distribution (each time bin includes about 320 proposals). GTO programmes constitute 17.6 % of this sample, potentially biasing this result. As GTO programmes make systematic use of novel instruments designed to cover the specific science cases for which they were built, they tend to be more productive than average (Sterzik et al. 2016). For this reason, we produced a similar plot for Normal programmes (Figure 4, lower panel), which reveals a similar trend, albeit with more noise. We conclude that larger programmes tend to be more productive on average; this is in line with the results of Sterzik et al. (2015, 2016). We find the same trend within the Normal programmes, which account for the largest fraction of the allocation (both in terms of number of proposals and time). In an attempt to understand what makes larger allotments more productive, we examined the frequency of the SNPP options as a function of allocated time, dividing the programmes into the four quartiles of the time distribution. No significant dependence was found for any of the options, suggesting that the lower observed return rate $f_P$ for smaller time allocations was the fruit of a lower inherent return rate $f_P^0$, regardless of the reason for the lack of publication. We note that two non-publishing programmes with very different allocations are counted in the same way here. However, it is obvious that they have a different impact in terms of "wasted" telescope time.
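For completeness, the quartile binning described above can be sketched as follows; the input table and its column names are assumptions, while the hours-per-night conversion follows the convention quoted in the text.

```python
import pandas as pd

def publication_fraction_by_time(df, n_bins=4):
    """df columns (assumed): 'nights', 'period' (integer semester), 'published' (bool).
    Converts nights to hours (10 h/night in odd periods, 8 h/night in even periods),
    bins programmes into time quantiles and returns the publication fraction per bin."""
    hours_per_night = df["period"].mod(2).map({1: 10, 0: 8})
    binned = df.assign(hours=df["nights"] * hours_per_night)
    binned["time_bin"] = pd.qcut(binned["hours"], q=n_bins, duplicates="drop")
    return binned.groupby("time_bin", observed=True)["published"].mean()

# n_bins=4 mirrors the quartile analysis of Table 3; n_bins=8 the octiles of Figure 4.
```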
To quantify this aspect, we computed the telbib completeness-corrected fraction of scheduled time that was allocated to non-publishing programmes as a function of their size (in the four quartiles of the time distribution). We did this for the entire SNPP sample, both as observed and correcting for the publication delay (Table 4), assuming that all the work from in progress cases will eventually produce a refereed paper. At the time of the SNPP survey, about 37 % of the time allocated to A-ranked SM and VM programmes had not produced a refereed publication. This fraction in time is very similar to the corresponding completeness-corrected fraction in proposals (100 % -58.9 % = 41 %). Once corrected for the publication delay, this fraction reduces to about 25 %. Therefore, as in the case of the number of proposals, about one quarter of the telescope time allotted to A-ranked SM and VM proposals will not lead to a refereed publication.
A closer inspection of Table 4 reveals that, although larger programmes tend to be more successful in terms of producing at least one publication (Table 3 and Figure 4), the non-publishing time fraction tends to increase with their size. This finding is equivalent to the lower number of publications per programme per unit of allocated time that was reported by Sterzik et al. (2015, 2016) for proposals with sizes between the short Normal (below 20 hours) and Large Programmes (above 100 hours).
One can assume that there exists an optimal distribution of allocated times that maximises scientific return and minimises the waste of telescope time. Identifying such an ideal distribution is beyond the scope of this paper. However, Table 4 allows us to gain a first insight into the boundary conditions of such a parameter search: in both cases (observed and delay-corrected), programmes with allocations below and above ∼2.5 nights "waste" the same amount of time. This implies that increasing the number of programmes with allocations larger than this value would effectively decrease the overall amount of time that leads to no refereed publication. This can be understood considering two extreme cases in which the schedule is completely filled with a) only programmes shorter than one night, or b) only programmes longer than three nights. The first case would yield a much larger number of allocated programmes than in the second case (by a factor larger than 12), but the total amount of "wasted" time would also be larger. The SNPP data, once corrected for completeness and time delay, show that about 40 % of programmes shorter than one night do not publish, producing a time waste of this same magnitude in the hypothetical first case. On the other hand, programmes longer than three nights would "waste" less time (about 20 %), but the number of published papers would be much smaller than in the first case, which would likely result in a decrease of the overall scientific return. These simple considerations suggest that the optimal distribution of allocated times must ensure the proper level of diversity, by including a mix of programme sizes.

Table 4. Fraction of allocated telescope time not producing a refereed paper in the four quartiles of the time distribution measured by SNPP (Observed) and extrapolated in the hypothesis that all programmes that included option 8 (still working) will eventually publish (Delay-corrected).
CONCLUSIONS
The performance of a scientific facility can be evaluated using various metrics, each of which is affected by different issues. In this study, we have focused on the binary bibliographic figure of merit, i.e., the publication or lack of publication of at least one refereed paper. This is one of the simplest bibliometric estimators, as it does not account for the publication's impact or the resources involved. The fact that a programme has not yielded a refereed publication does not necessarily imply that the observations were a complete waste of resources. Nevertheless, analysing this aspect and understanding its possible causes is certainly one of the basic steps that institutes and organisations such as ESO must undertake to characterise their overall efficiency.

The SNPP has shown that there are many reasons why a programme may not produce a refereed publication. With the notable exception of option 8 ("team still working on the data") and the combination of options 2 and 3 ("insufficient data quality and quantity"), there is not a single, dominant culprit. The relatively large fraction of proposals for which work is still in progress (∼40 %) is fully compatible with the Publication Delay Time Distribution deduced from an independent set of programmes. Once corrected for the publication completeness of the telbib database (where the vast majority of the missing cases are generated by wrong or absent Programme IDs in the published papers) and for the publication delay, the estimated asymptotic publication rate is approximately 75 %. This means that, at least in the phase covered by the SNPP, about a quarter of the proposals scheduled in VM and/or in A-ranked SM will never publish a refereed paper. Although this fraction can likely be decreased by further improving the overall workflow, part of the problem may be inherent. The non-negligible fraction of cases of insufficient resources (generally option 6 but also indicated in option 10) and the typically long publication delay may be symptoms of workload pressure in the community. The significant numbers of cases in which negative or inconclusive results do not turn into publications also support this conclusion. This reflects what may be a growing cultural problem in the community as scientists tend to concentrate on appealing results, especially if they have limited resources, and need to focus predominantly on projects that promise to increase their visibility (see Matosin et al. (2014) and Franco, Malhotra & Simonovits (2014)).

An important result that emerged from this study is the higher publication rate of programmes associated with larger allocations of telescope time. This is detected in both observing modes (SM and VM) as well as in the Normal programme type sub-sample. The SNPP did not reveal any significant dependence on allocated time in the distributions of responses for programmes with no refereed publications. This may be interpreted as an indication that a minimum amount of data is required to achieve results of a sufficient quality and quantity to warrant a publication (including the necessary effort that goes with it) across all science cases. We cannot exclude the possibility that the time distribution is skewed towards smaller requests by the general perception that this increases the chances of success during the selection process. As the scientific process requires experimentation, it is necessary for an observatory to accommodate a fraction of risky proposals.
When compounded with technical and weather losses, a 100 % return in publications across all programmes becomes impossible. Nevertheless, the current level of 75 % may be improved by a further 10-15 % by addressing specific factors. For example, by further optimising how observations are scheduled and executed at the telescope and re-evaluating the optimal fraction of risky observations, ESO can improve its data delivery performance. At the same time, the community can optimise the distribution of resources to ensure that data can be analysed more effectively as soon as it becomes available.
The authors are grateful to Francesca Primas, Martino Romaniello, and all of the members of the Time Allocation Working Group for their help during the formulation of the SNPP questionnaire.
A. PUBLICATION FRACTION TIME DEPENDENCE
Let us first consider a single generation of programmes, all allocated at the same time. Let then $N_T$ be the total number of programmes that can produce a publication, $N_P$ the number of programmes that will eventually produce a publication, and $f_P^0 = N_P/N_T$ the average publication fraction. With these settings, the number of programmes that will never produce a publication is $N_N = N_T\,(1 - f_P^0)$. Let us then define $C(t)$ as the cumulative distribution function of the publication delay time distribution (PDTD). The number of programmes that have already produced a publication at time $t$ is then:

$$N_P(t) = N_P\,C(t) = N_T\,f_P^0\,C(t), \qquad (A1)$$

so that the number of programmes that are still working at time $t$ is:

$$N_W(t) = N_P\,[1 - C(t)] = N_T\,f_P^0\,[1 - C(t)].$$

The number of programmes that have not published at time $t$ is $N_{NP}(t) = N_N + N_W(t)$, which can be written as:

$$N_{NP}(t) = N_T\,[1 - f_P^0\,C(t)].$$

We can now define the ratio between $N_W(t)$ and $N_{NP}(t)$:

$$R(t) = \frac{N_W(t)}{N_{NP}(t)} = \frac{f_P^0\,[1 - C(t)]}{1 - f_P^0\,C(t)}, \qquad (A2)$$

which is a quantity that can be directly measured.
If we now consider a set of multiple programme generations, the number of programmes still working on the data at time $t$ is given by:

$$N_W(t) = f_P^0 \sum_i N_{T,i}\,[1 - C(t - t_i)],$$

where $N_{T,i}$ is the total number of programmes in the $i$-th generation that can produce a publication, and $t_i$ is the time when this generation was allocated. If we pose $S = \sum_i N_{T,i}$ and $S_W = \sum_i N_{T,i}\,C(t - t_i)$, then we can write:

$$R(t) = \frac{N_W(t)}{N_{NP}(t)} = \frac{f_P^0\,(S - S_W)}{S - f_P^0\,S_W},$$

or, more concisely:

$$R = \frac{f_P^0\,(1 - \bar{C})}{1 - f_P^0\,\bar{C}}, \qquad (A3)$$

where

$$\bar{C} = \frac{S_W}{S} = \frac{\sum_i N_{T,i}\,C(t - t_i)}{\sum_i N_{T,i}}$$

is the weighted mean of $C(t)$ over the duration of the survey, in which the weights are the number of proposals that can produce a publication in the given period. Therefore, the expression for the publication fraction for a population including different project generations is analogous to that of a single generation (Equation A2), in which $C(t)$ is replaced by its average weighted over the time range between the first generation and the time of the survey.
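As an illustration only, the weighted average $\bar{C}$ can be evaluated directly from the number of allocated proposals per semester; the data structures below are assumptions made for the sketch.

```python
# N: dict mapping allocation semester p -> number of programmes able to publish;
# C: callable returning the publication-delay CDF; survey_period: semester P_S.
def c_bar(N, C, survey_period):
    total = sum(N.values())
    weighted = sum(n * C(survey_period - p) for p, n in N.items())
    return weighted / total

# The delay-corrected return rate then follows from the inversion given in the
# next paragraph: f0 = R / (1 - (1 - R) * c_bar(...)), and f_P = f0 * c_bar(...).
```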
Equation A3 can be readily inverted to yield:

$$f_P^0 = \frac{R}{1 - (1 - R)\,\bar{C}}.$$

With similar considerations, one can generalise Equation A1 for the whole multi-generation set of programmes:

$$N_P(t) = f_P^0 \sum_i N_{T,i}\,C(t - t_i) = f_P^0\,N\,\bar{C},$$

where $N = \sum_i N_{T,i}$ is the total number of programmes that can produce a publication. Considering that $N_P(t)/N = f_P(t)$, this finally yields:

$$f_P(t) = f_P^0\,\bar{C},$$

which gives the expected publication fraction at the time of the survey. | 2018-03-06T08:59:21.000Z | 2018-02-09T00:00:00.000 | {
"year": 2018,
"sha1": "9a48020652c786a6fb9f3963e3a52c1269478aa2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "eabed639665128f8df369fd22d311c22319c5e33",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Physics",
"Political Science"
]
} |
248930541 | pes2o/s2orc | v3-fos-license | P-wave duration and atrial fibrillation recurrence after catheter ablation: a systematic review and meta-analysis
Abstract
Aims: Atrial fibrillation (AF) is a global health problem with high morbidity and mortality. Catheter ablation (CA) can reduce AF burden and symptoms, but AF recurrence (AFr) remains an issue. Simple AFr predictors like P-wave duration (PWD) could help improve AF therapy. This updated meta-analysis reviews the increasing evidence for the association of AFr with PWD and offers practical implications.
Methods and results: Publication databases were systematically searched and cohort studies reporting PWD and/or morphology at baseline and AFr after CA were included. Advanced interatrial block (aIAB) was defined as PWD ≥ 120 ms and biphasic morphology in inferior leads. Random-effects analysis was performed using the Review Manager 5.3 and R programs after study selection, quality assessment, and data extraction, to report odds ratio (OR) and confidence intervals. Among 4175 patients in 22 studies, 1138 (27%) experienced AFr. Patients with AFr had longer PWD with a mean pooled difference of 7.8 ms (19 studies, P < 0.001). Pooled OR was 2.04 (1.16–3.58) for PWD > 120 ms (13 studies, P = 0.01), 2.42 (1.12–5.21) for PWD > 140 ms (2 studies, P = 0.02), 3.97 (1.79–8.85) for aIAB (5 studies, P < 0.001), and 10.89 (4.53–26.15) for PWD > 150 ms (4 studies, P < 0.001). There was significant heterogeneity but no publication bias detected.
Conclusion: P-wave duration is an independent predictor for AF recurrence after left atrium ablation. The AFr risk increases exponentially with PWD prolongation. This could facilitate risk stratification by identifying high-risk patients (aIAB, PWD > 150 ms) and adjusting follow up or interventions.
Introduction
Atrial fibrillation (AF) is the most common arrhythmia affecting 2% of the population with >34 million patients around the world and a prevalence that will double by 2050. Atrial fibrillation is associated with increased morbidity and mortality and a significant financial burden for the social security systems worldwide.
Catheter ablation (CA) has been established as an effective therapy reducing AF burden and symptoms. However, recurrence during follow up remains a major concern making patient selection for first or repeat ablations a very important task. Therefore, simple predictors of AF recurrence (AFr) could facilitate ablation strategy, closer follow up or prophylactic interventions and guidance of anticoagulation in such patients.
Several recent studies have shown an association between P-wave duration (PWD) and AFr after ablation. Both 12-lead and signal-averaged electrocardiogram (SAECG) has been evaluated in different populations evaluating duration or morphology of the P-waves. However, most studies are single-centre reports with limited sample sizes resulting in different PWD cut-offs and compromising its true predictive value. Moreover, several studies showed no difference in PWD for patients with AFr, reported a non-significant predictive value or failed to detect the effect size for different cut-offs.
This updated meta-analysis reviews the increasing evidence of all available studies that reported PWD prior to CA and its association with AFr during follow up in order to provide practical clinical implications.
Methods
This study was reported in adherence to Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) statements. We searched PubMed/Medline, Embase, and ClinicalTrials.gov databases through Cochrane Library without language restriction from database inception to January 2021. The following keywords were used as search terms: 'P wave', 'P waves', 'interatrial block', 'interatrial conduction', 'AF recurrences', 'atrial fibrillation', and 'AF' with filters 'Clinical Trial' and 'Randomized Controlled Trial'. References of included articles were manually searched to identify additional eligible studies. No language restrictions were applied (see Supplementary material online, Table S1).
All studies were screened by three authors according to the following inclusion criteria: (i) studies including adult AF patients, (ii) PWD was measured prior to ablation, (iii) AFr after ablation was reported as an endpoint, and (iv) PWD was used as a variable to predict AFr. Case reports, reviews, letters, and editorials were excluded. The primary endpoint was AFr during follow up. Atrial fibrillation was defined as paroxysmal or persistent according to the current guidelines. Prolonged PWD > 120 ms and prolonged biphasic P-waves (in inferior leads) were defined as partial (pIAB) and advanced interatrial block (aIAB), respectively. 1 Two authors independently extracted data and summarized them in a data extraction file. Any disagreement was resolved by consensus or by consulting a third author. The missing data of eligible studies were calculated by the reported continuous PWD values or by contacting the original authors. The studies selected in our meta-analysis were evaluated for methodological quality using the Newcastle-Ottawa scale (0-9 points) based on selection, comparability, and outcome.
Statistics
Data for continuous variables were pooled to calculate a weighted mean difference (WMD) and 95% confidence interval (CI). The WMD of PWD between patients with and without AFr was computed and compared. The pooled odds ratio (OR) and 95% CI of PWD per cut-off value or according to the presence of partial or advanced IAB were calculated to evaluate their prognostic value for the primary endpoint. Furthermore, forest plots were constructed to display overall effects using a random-effects model. Heterogeneity was assessed using Higgins I 2 statistics, with values of 25, 50, and 75% representing low, moderate, and high heterogeneity, respectively. Sensitivity analysis was performed to evaluate the effect modification according to method (ECG or SAECG) and AF type as well as to exclude the effect of publication bias (based on Funnel plot) on the overall pooled estimates. Additionally, Egger's and Copas tests were applied to evaluate the presence of publication bias. Review Manager 5.3 (Cochrane Collaboration, Oxford, UK), R 3.5.3 (open source), and Stata 16.0 (Stata Corp, TX, USA) were used for the analysis. A P-value <0.05 was considered statistically significant.
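As an illustration of this pooling step, a generic DerSimonian-Laird random-effects model with Higgins I² can be sketched as follows; this is not the Review Manager or R code actually used, and the input format (per-study ORs with reported 95% CIs) is an assumption.

```python
import numpy as np

def random_effects_pool(or_values, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of odds ratios.
    Per-study variances are recovered from the reported 95% CIs on the log scale."""
    y = np.log(np.asarray(or_values, dtype=float))          # log-OR per study
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)    # SE from the CI width
    w = 1 / se**2                                           # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)                         # Cochran's Q
    dof = len(y) - 1
    tau2 = max(0.0, (Q - dof) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)                               # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (Q - dof) / Q) * 100 if Q > 0 else 0.0    # Higgins I^2 (%)
    pooled_or = float(np.exp(y_re))
    ci95 = np.exp([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
    return pooled_or, ci95, i2
```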
Results
From the initial 351 studies screened and retrieved according to the search strategy, 100 were removed as duplicates. Potentially eligible studies (n = 37) were identified after screening titles and abstracts and 15 were excluded following full-text review for not meeting the inclusion criteria. Consequently, a total of 22 studies including 4175 AF patients were included in the final analysis (Figure 1). Quality assessment using the Newcastle-Ottawa scale showed high scores (≥7 points) in the majority of the studies enrolled in our meta-analysis (see Supplementary material online).

Figure 1. Flow chart of the study selection process according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. AFr, atrial fibrillation recurrence; PWD, P-wave duration.
Baseline characteristics
All 22 included studies were single-centre cohort studies. There were 10 studies with exclusively paroxysmal and 1 with persistent AF patients. Most studies (n = 15) used ECG and 6 used SAECG for PWD measurement ( Table 1). Included patients (n = 4175) had a mean age of 61 ± 10 years with a normal left ventricular function (LVEF 62 ± 8%) and a left atrium of 40 ± 5 mm. There were 62% males and 72% paroxysmal AF (PAF) patients. Among the 16 studies that reported comorbidities, the most common diseases were hypertension (45%) followed by diabetes and ischaemic heart disease. The mean baseline PWD among reported studies was 126 ms. During a mean follow-up time of 16 ± 9 months (ranging from 3 to 50), 1138 patients (27%) experienced an AFr after CA ( Table 2).
There was significant heterogeneity as revealed by the Higgins I 2 statistics. Although visual inspection of the funnel plots suggested some asymmetry, Egger's and Copas' test revealed no evidence of publication bias and sensitivity analysis did not reveal a significant change in the results of the overall analysis ( Figure 4).
Discussion
To our knowledge, this updated meta-analysis is the largest systematic collection and quantitative synthesis of 4175 patients undergoing AF ablation. We revealed the strong predictive value of different pre-procedural PWD cut-offs for the recurrence of AF after CA. Specifically, we found that a pre-procedural PWD > 120 ms (pIAB) doubles the risk for AFr during follow up. When this is combined with morphologic criteria (biphasic P wave in inferior leads), indicating an aIAB, the risk is four times higher. Most importantly though, further PWD prolongation to >150 ms leads to a 10 times higher risk of recurrence.
Identifying patients at risk for arrhythmia recurrence after AF ablation remains challenging. Several predictive models have been reported including clinical, anatomical, imaging, and serological characteristics. Common predictors are LA size, AF type, age, female sex, and to a lesser extent estimated glomerular filtration rate or biomarkers such as B-type natriuretic peptide. However, these models have a highly variable discriminatory ability (c-statistic) and do not accurately characterize the individual structural and electrical atrial remodelling. 24 This is better assessed with pre-procedural imaging (e.g. cardiac MRI with late gadolinium enhancement) or intra-procedural mapping (low-voltage areas). These methods can depict anatomical changes and the surrogate presence of fibrosis, which are associated with higher recurrence risk, but are costly and not readily available for pre-procedural planning. Thus, in the clinical setting, there is a need for a feasible low-cost surrogate of recurrence risk that could potentially improve patient selection and translate into cost savings by avoiding unnecessary procedures. The present analysis reveals the practical implications of PWD and IAB by describing their predictive value.
P-wave duration and interatrial block
Electrocardiography is a simple and widely available tool that can predict the risk for AFr by evaluation of the PWD and its characteristics. 25,26 P-wave changes have been associated with conduction changes and fibro-fatty replacement in histological studies. 27 More specifically, this conduction delay at the Bachmann bundle level has been defined as pIAB or aIAB (Graphical Abstract). This results in atrial remodelling and asynchronous LA contraction, 28,29 but can also appear without LA enlargement as a surrogate of AF substrate. 1 In fact, aIAB has been described as a separate clinical entity, called 'Bayes' syndrome', that has been associated with AF or other atrial arrhythmias and an increased stroke, dementia and mortality risk. 1 Chen et al. 21 reported that PWD was an independent predictor of atrial scarring, even after adjusting for age, sex, and LA diameter. Moreover, non-pulmonary vein triggers were more common in patients with scarring, putting them at greater risk of recurrence. This is in agreement with the results of the study by Mugnai et al., 14 who found that PWD and dispersion were independent predictors of recurrence in patients with non-dilated left atria. In other words, electrical remodelling often precedes apparent structural changes and patients with normally dimensioned left atria and PAF should undergo AF ablation early, to prevent further electromechanical deterioration.
On the other hand, patients with persistent AF and advanced remodelling are more prone to AFr after CA. Jadidi et al. 16 found that an amplified PWD of >150 ms signifies extended LA scar with high sensitivity and specificity. P-wave duration in these patients was the only independent AFr predictor, even after adjusting for known confounders like age, sex, LA diameter, structural cardiomyopathies, hypertension, and antiarrhythmic drugs. Consequently, as shown in our analysis, AFr risk in patients with PWD > 120 ms and persistent AF was almost 10 times higher than those with PAF (see Supplementary material online, Figure S1).
In support of these findings, we found an exponential dose-response effect between the PWD and AFr risk. While pre-procedural PWD > 120 ms (pIAB) doubles the risk for AFr (OR ∼2.0), this is slightly higher for PWD > 140 ms (OR ∼2.4) and much higher for aIAB (OR ∼4.0) and PWD of >150 ms (OR ∼10.9). Interestingly, the method of recording did not influence the outcomes. The mean PWD difference was similar whether measured by ECG or SAECG (P = 0.46). Thus, while averaging, filtering and amplifying the electrical signal of the P-wave offers more accurate measurements, practically the SAECG does not improve the predictive value of PWD. Similarly, the mean PWD difference was similar in patients with paroxysmal and persistent AF (P = 0.365). Therefore, these cut-offs could facilitate patient selection for additional substrate ablation, for patients with earlier stages of fibrosis, or alternative strategies for patients with advanced stages, as in the DECAAF II study. Given the insight that ablation is not as effective in scar, we should evaluate new approaches in such patients. Our findings for example could help select those with high scar or recurrence risk and prospectively randomize them to LA ablation (radiofrequency or pulsed field ablation) or a 'pace and ablate' (AV junction) strategy.
The association of PWD with AFr after ablation has also been evaluated in a recent meta-analysis of 1482 patients conducted by Pranata et al. 26 The association was significant in SAECG, ECG, and PAF subgroups as well as in both genders and all age groups, with or without structural heart disease. These results supplemented those from an earlier analysis of 1010 patients by Wang et al. 25 and a meta-analysis of 2587 patients by Tse et al., 30 both of which did not specify the predictive value of different cut-offs, as in our analysis. The first one included only eight studies, while the latter one focused more on new-onset AF and included three studies regarding AFr. 30 Our findings derive from a significantly larger population and provide for the first time practical insights for different PWD cut-offs, which should be further evaluated in prospective studies.
Other P-wave characteristics
There are also several other P-wave indices that have been evaluated as predictors of AFr and in some studies outweighed the PWD. The PWD index, defined as the ratio of PWD to the PR interval in Lead II, has been described as a way to overcome the effects of the autonomic nervous system and was found to be an independent AFr predictor. 17 The PWD dispersion (max-min value > 45 ms) has also shown very good discriminative predictive value in a study by Mugnai et al. 14 but failed to reach significance in the study by Caldwell et al. 6 A later study by Nakatani et al. 20 though argued that the coefficient of variation for the PWD (>0.08), calculated by dividing the standard deviation by the mean PWD value, has the highest predictive accuracy among P-wave parameters in predicting AFr in PAF patients. Finally, the combination of PWD with other characteristics like in the morphology-voltage-PWD (MVP) score has also shown good predictive accuracy. The MVP assigns 0-2 points for each of the following factors: morphology in inferior leads, voltage in Lead 1 and PWD. Yang et al. 23 found that an MVP >3 has the best predictive ability for AFr (c-statistics 0.789), but this index requires additional measurements of low-amplitude P-waves in Lead I and was not directly compared with PWD alone. The P-wave terminal force in lead V1 (PTFV1 > 0.04 mm*s), calculated by multiplying the duration and the amplitude of deep terminal negativity of the P-wave (prime) in Lead V1, was also found to be strongly correlated with LA enlargement and the risk of AF occurrence. 31 However, in the study by Doi et al., 15 PTFV1 did not outweigh the predictive value of PWD for AFr. Kanzaki et al. 12 came to a similar conclusion, with SAECG and P-wave force (the amplitude of the negative terminal phase multiplied by the filtered PWD) values >9.3 mV*ms becoming significant only when measured acutely postprocedurally. Masuda et al. 4 found a simpler SAECG marker, the atrial late potential, defined as PWD ≥ 130 ms and a terminal root-mean-squared voltage ≤2.0 mV, which was associated with AFr in PAF patients (OR = 4.2). Park et al. 8 though proposed an easier approach and found the P-wave amplitude in Lead I (<0.1 mV) to be independently associated with AFr and linearly correlated with LA voltage and conduction velocity. The recent consensus document about P-wave parameters and indices provides a further in-depth analysis that underlines the importance of this topic. 32 Taken together, these studies reveal a paucity of methods to approach P-wave morphology. Nevertheless, PWD and IAB have a higher practical value, since they are easily identifiable and simple to use and report in the majority of the studies. 18
Clinical implications
Our study has shown that the OR for AFr after CA increased exponentially from 2 for PWD > 120 ms to 2.4 for PWD > 140 ms, then 4 for aIAB and 10 for PWD > 150 ms. We reviewed the evidence connecting PWD with atrial fibrosis and suggest that the considerations of this simple measurement are far more practical than other complex P-wave indices. These specific PWD cut-off limits could be used as a surrogate marker of fibrosis to better stratify patients into different treatments, leading them to ablation, when the risk of recurrence is acceptable or examining alternatives, when signs of advanced fibrosis are present. Accordingly, patients with prolonged PWD should have a closer follow-up strategy. The present findings emphasize the clinical importance of evaluating PWD prior to CA for AF and deserve further investigation.
Limitations
The variation in population characteristics, measurements, ablation techniques (radiofrequency or cryo-ablation), or strategies, endpoints and follow up has contributed to high heterogeneity (I 2 = 87% for PWD > 120 ms). This was due to the widely inclusive selection criteria and was reduced as patient characteristics converged through selection (prolonged PWD). However, we used random-effects models and performed subgroup and sensitivity analysis to analyse and eliminate this heterogeneity. Additionally, the included studies had high quality according to the Newcastle-Ottawa scale (≥7 points). Although no study reported on intra- or interobserver variability for PWD measurement, our findings were consistent and significant, regardless of the measurement method. Although the increased OR of the group with PWD >150 ms could be partially explained by older studies or selection of sicker persistent AF patients, the results remained consistent even when only recent studies were included, indicating an exponential relationship that has also been seen between PWD and new-onset AF. Due to limited data, comparison of the predictive value of PWD with that of LA size was not possible. Nevertheless, we found no evidence of publication bias and we quantified for the first time the prognostic value of PWD for different cut-offs and IAB definitions.
Conclusion
In this updated meta-analysis of 4175 patients, PWD was found to be an independent predictor of AFr after CA. This risk is increasing exponentially with PWD prolongation. Thus, it could facilitate risk stratification by identifying high-risk patients (aIAB, PWD > 150 ms) and adjusting follow up or interventions.
Supplementary material
Supplementary material is available at Europace online.
Funding
None declared. | 2022-05-21T15:08:18.407Z | 2022-05-18T00:00:00.000 | {
"year": 2022,
"sha1": "cb67395172cdc51919f2af779c684a3420de01d5",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "5817c777285cf7a207cfd6d74e93fa0f28200f9c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
209572083 | pes2o/s2orc | v3-fos-license | Impact of Row Distance and Seed Density on Grain Yield, Quality Traits, and Free Asparagine of Organically Grown Wheat
Organic farming faces challenges providing sufficient nutrient supply as manure and crop rotations are often the major nutrient inputs. Larger row distances and lower seed densities can support nitrogen availability by giving more space to the single plant. As free asparagine (Asn), the main precursor of acrylamide (AA) in plants, is closely related to nitrogen uptake and storage, the question arose whether free Asn would be affected by row distance and seed density in organic farming. This study investigated the effect of row distance and seed density on yield, yield components, baking quality, and free Asn in organic farming. A two-year field trial was carried out including two winter wheat cultivars, two row distances, and two seed densities. Year and cultivar highly influenced all traits. The impact of both treatments was mainly caused by interaction. Nevertheless, enlarged row distances raised baking quality, while free Asn was changed to a minor extent. Thus, we recommend larger row distances for raising baking quality without increasing free Asn. Seed density is of minor relevance. The close relation found between free Asn and grains per spike (R² = 0.72) indicates that smaller grains contain more Asn than bigger grains. This opens new insights into Asn synthesis during grain development and offers a potential prediction of Asn amounts.
Introduction
Securing of food quality is currently a major task for the scientific community. In this context, ensuring the absence of harmful substances in foods that can cause cancer is of high relevance. Until the year 2000, the food born toxicant Acrylamide (AA) was not known to be present in food products. That changed when Tareke et al. (2002), stated that carbohydrate rich food products contain AA [1]. Now nearly two decades after the first discoveries of AA in food, the European Commission [2] announced a regulation which restricts AA contents in cereal food products and forces the implementation of harm minimization strategies if benchmark levels are exceeded. Since that announcement, the food industry has faced the major challenge of reducing the risk of AA appearing in their food products.
AA is formed during the Maillard reaction in carbohydrate rich material like cereals and potatoes where free asparagine (free Asn) and reducing sugars react under heat treatment [3][4][5]. Up to now, the lowering of AA was mainly achieved by reducing the process temperature and heating time. Further studies investigated the effect of adjusting processing parameters such as pH or changing baking conditions.

In this context, Stockmann et al. [31] investigated organically and conventionally cropped cereals to determine their content of free Asn. The used species and cultivars were the same for both systems, while only the crop management differed. They found a high impact of the cropping system for wheat in particular, as the organically grown wheat cultivars had a significantly lower level of free Asn.
In addition, Stockmann et al. [20] examined the effect of nitrogen on free Asn formation by comparing conventional cropping methods with organic ones. They found that the wheat samples produced under organic farming conditions showed no significant increase in free Asn if nitrogen levels were raised. Significantly higher levels of free Asn were only found within the conventionally treated wheat samples when nitrogen amounts of 180 kg N ha −1 or higher were applied, which led to crude protein contents over 14%. They concluded that until a certain level of nitrogen was reached which included a sufficient protein synthesis, free Asn would not be significantly affected. This is in agreement with Lea et al. [24], who stated that large amounts of nitrogen during a phase of low protein synthesis will increase free Asn.
In contrast, a set of organically cropped cereal species and cultivars were investigated by Stockmann et al. [32] for their content of free Asn. The samples were only marginally supplied with nitrogen; however, a wide range of free Asn was reported when comparing species and cultivars within species. Thus, reducing nitrogen alone may fail to reduce the levels of free Asn. Particularly if a sufficient baking quality is needed, nitrogen supply should be adequate to help obtain marketable flours. In this context, baking properties are highly related to crude protein (gluten content), the sedimentation value, and falling number since these traits affect the dough preparation and bread volume [33].
However, nitrogen supply in low input farming systems is generally lower. Hence, strategies are needed to ensure there is a certain amount of crude protein to obtain a good baking quality.
Regarding organic farming, growing wheat in a larger row distance is a known agronomic strategy. In addition to providing better weed control, the main reason for this agronomic management tool is better nitrogen availability for each single plant. Thus, larger row distances can lead to a better baking quality in terms of quality traits like crude protein and the sedimentation value [34].
In addition, lowering the seed density could also contribute to a better nutrient supply of the single cereal plant, as different plant densities per unit area may change plant architecture in terms of the number of spikes per m 2 and grains per spike.
As free Asn is closely related to nitrogen uptake, storage and transport within plants [24,35], the question arose whether the level of free Asn and finally AA formation would be affected by a larger row distance and a lower seed density.
As such, a two-year field trial was established to investigate (i) the impact of row distance and seed density on yield, quality aspects and free Asn of two winter wheat cultivars, and (ii) the relation between the grain number per spike, crude protein, free Asn, and AA formation.
Experimental Site
The field trial was carried out over two consecutive growing seasons (2006-2007; 2007-2008) at the experimental station for organic farming of the University of Hohenheim, Kleinhohenheim, Stuttgart (48°44′ N, 9°12′ E; average annual temperature 8.8 °C; average annual rainfall 700 mm).
The research station is located 435 m above sea level in the southern peripheral part of Stuttgart, Germany. The soil at the trial site in Kleinhohenheim falls under the Luvisol type. It is characterized by a nearly 2 m thick horizon of loess to loamy clay. Therefore, it features a high-water holding capacity and is well suited for agricultural purposes. In spring 2007, mineral N content was 35 kg ha −1 within a soil horizon of 0 to 60 cm compared to 62 kg ha −1 in 2008. In Table 1, the main results of the soil chemical analysis are presented.
Experimental Design
The field trial was set up as a randomized block design with three repetitions (plot size 4 × 6 m). The trial was established according to the standards of organic farming (e.g., no artificial fertilizer, no pesticides). The previous crop in both years was winter wheat, while a 2-yr wheat clover grass mixture was grown in the years before.
Two different winter wheat cultivars (cv. Bussard and cv. Naturastar), two row distances, and two seed densities were tested. The tested treatments are shown in Table 2.
Table 2. Applied cultivars, row distances, and seed densities in the field trials (columns: Cultivar, Row distance, Seed density).
Agronomic Practices
Primary tillage was done in both years with a moldboard plough (25 cm depth). Seed bed preparation was accomplished using a power harrow.
Sowing was done on 19 October 2006 and 24 October 2007. In total 100 kg N ha −1 were applied as liquid cattle manure (100 m 3 ha −1 : 1 kg N m −3 total nitrogen content, 4% dry matter) which was split into two rates of 50 m 3 ha −1 at the start of vegetation and at the start of stem elongation.
No pesticides and no growth regulators were applied. If necessary, weeds were treated by a currycomb. Infestation of diseases was monitored, but the outcome showed no significant infestation.
Harvest was accomplished by a Hege 180 plot combine harvester (Hege, Eging am See, Germany) after grains had reached a dry matter content of 85%.
Yield
Grain yield was determined by weighing the plot yield. Grain samples were dried at 105 °C for 24 h to determine grain moisture. Grain yields given refer to 86% dry matter content.
Thousand Kernel Weight
Thousand kernel weight (TKW) was determined by counting 1000 grains with a Contador® seed counter (Pfeuffer GmbH, Kitzingen, Germany) after drying the grains to absolute dry matter content.
Test Weight
The test weight was determined by a cereal sampler (Pfeuffer GmbH, Kitzingen, Germany) after drying grain samples to absolute dry matter content, using a grain volume of 1/4 L.
Flours
For the determination of quality parameters, the AA precursor free Asn, and the AA formation potential, grain samples were milled on a laboratory mill (Quadrumat Junior, Brabender, Duisburg, Germany). Ash content of flours was about 0.5% of flour DM. Flour moisture was calculated from the weight loss before and after drying of about 5 g flour at 105 °C for 24 h.
Crude Protein Content
Total grain nitrogen content was determined by Near-Infrared-Spectroscopy (NIRS, NIRS 5000, FOSS GmbH, Rellingen, Germany). Calibration samples were analyzed according to the Dumas Method [36] using a Vario Max CNS analyzer (Elementar, Hanau, Germany). The analyzed final nitrogen content was multiplied by a factor of 5.7 [37] for the wheat samples.
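As a small worked example of the conversion described above (grain nitrogen multiplied by the wheat-specific factor 5.7), the numbers in this sketch are purely illustrative:

```python
# Crude protein (%) from grain nitrogen (%) using the wheat factor 5.7.
def crude_protein(nitrogen_percent, factor=5.7):
    return nitrogen_percent * factor

print(crude_protein(2.0))   # a grain N content of 2.0% corresponds to 11.4% CP
```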
Hagberg Falling Number
The Hagberg falling number was determined in line with ICC standard No. 107 using a PerCon 1600 Falling Number machine (PerCon, Hamburg, Germany) and 7 g of flour (weight adjusted for moisture concentration to 15%).
Zeleny's Sedimentation Test
Zeleny's sedimentation test was determined in wheat flours using 3.2 g flour according to ICC standard No. 116. The sedimentation values of the flours were adjusted to a 14% moisture level.
Free Asparagine
For free amino acid extraction, 2 g of wheat flour were mixed with 8 mL of 45% ethanol for 30 min at room temperature. After centrifugation for 10 min at room temperature at 4000 rpm and 10 min at 10 °C and 14,000 rpm, the supernatant was filtered through a 0.2 µm syringe filter and poured into vials. Analysis of free Asn was performed using Merck-Hitachi HPLC components. The pre-column derivatization with FMOC [38] was completely automated by means of an injector program. Subsequently, the derivatized Asn was separated on a LiChroCART Superspher RP 8 column (250 mm × 4 mm, Fa. Merck, Darmstadt, Germany) at a constant temperature of 45 °C. The fluorescence intensity of the effluent was measured at the excitation and emission maxima of 263 and 313 nm.
Acrylamide Formation Potential
The AA formation potential of wheat flour was assessed according to the AA contents of 5 g white flour in 250 mL Erlenmeyer flasks after heating in an oven for 10 min at 200 °C. Due to the complexity of the AA analysis, sample size was reduced to an overall number of 16 samples.
Sample preparation was accomplished according to the test procedure 200L05401 [39] of the Chemische und Veterinäruntersuchungsamt (CVUA) Stuttgart.
After cooling the heated flour samples down to ambient temperature, 100 mL of bidestilled water and 100 µL of D3-Acrylamide were added as an internal standard to the heated flour samples in the Erlenmeyer flasks. To completely extract acrylamide from the flour, samples were put in an ultrasonic bath for 10 min at 40 °C. After adding 1 mL of Carrez I and II to each of the samples, and shaking the flasks thoroughly, the samples were filtered using folded filter paper to separate the colloids and flour particles from the aqueous solution. Subsequently, samples were cleaned up by a solid phase extraction in a vacuum chamber after preconditioning the cartridges by 10 mL of bidestilled water and 10 mL methanol. After sample clean-up, around 1 to 2 mL of the eluate from each sample was filled in an autosampler vial and was deep frozen (−18 °C) until AA was determined by LC-MS-MS by the CVUA according to the test procedure 201L01301 [40]. The eluates were separated by a graphite or RP18-phase and detected by tandem-mass-spectrometer. Quantification was undertaken by using the isotope-labeled internal standard (D3-Acrylamide).
Statistical Analyses
For each trait listed in the section above, analysis of variance (ANOVA) was performed using the procedure PROC MIXED of the statistical software package SAS 9.2 (SAS Institute Inc., Cary, NC, USA). ANOVA was done for the main effects of year, treatment (row distance, seed density), cultivar, and all interactions. A mixed-linear model approach was used. All effects were taken as fixed.
In order to ensure normal distribution and equality of variances, the data was transformed if necessary. Means were analyzed for statistically significant differences using the Tukey range test. As a level of significance, α = 0.05 was chosen. For analyzing the coefficient of determination concerning the grains per spike, crude protein, free Asn, and AA, a linear regression was performed using the software package of Sigmastat 4.0 (Systat Software Inc., Cranes Software, San Jose, CA, USA).
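For orientation, a minimal Python (statsmodels) sketch of this analysis pipeline is given below; it approximates the fixed-effects PROC MIXED model with ordinary least squares, and the data file and column names are assumptions, not the original SAS/SigmaStat scripts.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("wheat_trial.csv")   # hypothetical file with one row per plot

# Four-way fixed-effects ANOVA with all interactions (OLS approximation of the
# PROC MIXED model in which all effects were taken as fixed).
model = smf.ols(
    "free_asn ~ C(year) * C(cultivar) * C(row_distance) * C(seed_density)",
    data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey range test (alpha = 0.05) on cultivar-by-row-distance groups.
groups = df["cultivar"].astype(str) + "_" + df["row_distance"].astype(str)
print(pairwise_tukeyhsd(df["free_asn"], groups, alpha=0.05))

# Simple linear regression of free Asn on grains per spike (R^2 as reported).
reg = smf.ols("free_asn ~ grains_per_spike", data=df).fit()
print(round(reg.rsquared, 2))
```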
Yield and Yield Components
Grain yield was significantly affected by year (Y), seed density (SD), and the interaction cultivar (Cv) × row distance (RD) (Table 3). As the interactions Y × Cv × SD, Cv × SD × RD and Y × Cv × RD × SD were not significant for any tested trait, they are not listed in Table 3.

Table 3. F-values and p-values for all main effects and interactions with a significant impact on at least one tested trait: grain yield [kg ha −1 ], thousand kernel weight (TKW), and the quality parameters test weight (TW), falling number (FN), crude protein (CP), sedimentation value (SV), and free Asn of flours.

Comparing years, in 2007 a grain yield of 3740 kg ha −1 was harvested while in 2008 the average was 4350 kg ha −1 , around 700 kg ha −1 higher. In addition to the year, the higher seed density of 350 grains m −2 led to a significantly higher grain yield. The higher seed density resulted in a grain yield of 4190 kg ha −1 compared with 4020 kg ha −1 when using the smaller seed density. This was likely the result of more spikes m −2 , as the number of spikes m −2 increased with the higher seeding rate (Table 4). Indeed, fewer spikes m −2 could only partially be compensated by an increased number of grains per spike (Table 4). Similar results were observed by Landon 1994 [41] and Arduini et al. [42], who investigated the effect of seeding rate on the grain yield of wheat. Both reported a compensation by either a higher number of grains per spike or a higher kernel weight. Gooding et al. [43] stated in their study that a lower seed density was compensated by a larger level of tillers and grain numbers per ear. In our work the effect of an increased number of tillers or a higher thousand kernel weight was not determined, while the number of grains per spike increased. However, the number of grains per spike could not compensate for the effect of a lower seed density on yield.
Row distance only had a significant impact on grain yield depending on the cultivar (cv). In this context, grain yield was significantly lower for the larger row distance of 30 cm if cv Bussard was grown and differed by around 300 kg ha −1 . For cv Naturastar, the row distance had no significant effect. However, a tendency towards higher yields was observed when the row distance was enlarged, although this was not significant. The different reactions of both cultivars regarding row distance might be related to the varying structure of spikes per m 2 and grains per spike. As shown in Table 4, cv Bussard responded to the larger row distance with a higher reduction of spikes m −2 compared to cv Naturastar (Table 4). Landon et al. [41] reported a different effect of row distance in their study, where increasing the row distance led to a higher grain yield due to an increased number of kernels per spike. However, in this study the number of grains per spike only marginally changed in the case of cv Bussard and the spikes per m 2 decreased (Table 4). Thus, the larger row distance was not compensated by an increased number of spikes, nor by more grains per spike of cv Bussard (Table 4).

Thousand kernel weight (TKW) was significantly affected by the year, the cultivar, and the interaction of both (Table 3). Neither row distance nor seed density had a significant effect. This is in contrast to a study carried out by Hiltbrunner et al. 2005 [44], who reported an increase in TKW if the row distance was expanded. Nevertheless, the year had a significant effect as TKW was lower in 2008 (38.4 g) than in 2007 (40.5 g). Across years, the TKW of cv Bussard was 40.9 g, which was significantly higher than that of Naturastar at 38.0 g. This fits well with the observed number of grains per spike, which was lowest for cv Bussard (Table 4). This leads to the assumption that the grains of cv Bussard were bigger and thus a heavier TKW was reached. As cv Naturastar is known for reaching a high grain yield with a higher level of grains per spike, this leads to the suggestion that grains of this cv were generally smaller, leading to a lower TKW.
For the interaction Y × Cv in 2007, the highest TKW of 42.4 g was observed for cv Bussard, while TKW was lowest (37.4 g) for cv Naturastar in 2008. Finally, TKW was much more affected by the cultivar and year than by row distance or seed density.
Older studies reported that test weight can serve as a marker for flour yield [45]. Newer findings have not supported this statement [46]. However, test weight is still used in some countries as a quick test for grain quality. Higher amounts indicate rounder grains, leading to a better milling behavior and thus a higher flour yield. In contrast, smaller grains can have an uneven shape and thus provide lower test weights. In our study, test weight was significantly influenced by year, seed density, and the interaction Y × Cv (Table 3). In 2007, the test weight was 80.9 kg hL −1 , which was significantly higher than for 2008 (78.5 kg hL −1 ). Cultivar only had a significant effect in interaction with the year. Compared to Bussard (80.6 kg hL −1 ), cv Naturastar in 2007 reached a much higher amount (81.2 kg hL −1 ), while in 2008 there was no statistically significant difference between the two (78.4 and 78.7 kg hL −1 , respectively).
Besides the year, the most relevant factor for test weight was seed density. A lower seeding rate (250 grains m −2 ) led to a significantly lower test weight of 79.5 kg hL −1 , while the seeding rate of 350 grains m −2 provided a test weight of 79.9 kg hL −1 . This can partly be explained by the differences in spikes per m 2 and grains per spike. Spikes per m 2 were higher if 350 grains m −2 were sown, leading to a lower number of grains per spike (Table 4). This leads to the assumption that grain size was bigger and thus the test weight also increased. This is consistent with the TKW, as cv Bussard with the smaller number of grains per spike reached the highest TKW, which was most likely caused by larger grains. Schuler et al. [47] investigated the impact of seed and spike characteristics on test weight. They reported a negative correlation (r = 0.41) between the number of seeds per spike and test weight. Hence, if the seeds per spike increased, the test weight decreased. This fits well with the results of this study, as lowering seed density to 250 grains m −2 increased the number of grains per spike, especially for cv Naturastar (Table 4). We assumed that the higher number of grains per spike led to a smaller grain size, which may explain the lower test weight. Finally, the lower seed density led to fewer spikes per m 2 and this was likely compensated by a higher rate of grains per spike along with smaller grains.
Baking Quality Traits
Falling number (FN) is a baking quality trait, as it refers to water absorption during dough preparation. Thus, effective preparation of dough requires a sufficient FN. Delayed grain harvest can cause pre-harvest sprouting, which causes a higher activity of enzymes (amylase). This may lead to a lower FN as a consequence of polysaccharide decomposition (amylose and amylopectin) and thus affect baking quality [48]. FN was significantly influenced only by year and cultivar (Table 3) but was not affected by row distance or by seed density. The mean FN was 244 s (cv Bussard) and 332.5 s (cv Naturastar). Brunner [49] recommended that for organically produced wheat flours, FN should range between 160 and 280 s. They stated that such flours deliver a sufficient baking quality, including a normal, elastic, well-pored crumb and an adequate gas holding capacity. Thus, referring to reference [49], the FN results obtained in our study indicate no negative effect on baking quality.
Crude protein (CP) content is, next to gluten content, the most widely used measure for estimating the baking quality of wheat flour. High levels of CP indicate good suitability for the preparation of foods such as biscuits. This trait was significantly influenced by year and the interaction Y × RD (Table 3, Figure 2A). Neither cultivar nor seed density had a relevant impact.
In 2008, the CP content was 11.7%, which was around 8% higher than in 2007 (10.6%). In general, CP ranged from 10.4% to 12.2%. The strong impact of the year can be explained by different weather conditions, especially during the grain filling period. In 2008, the temperature during the grain filling period (May-July) was 1.4 °C higher than in 2007 (Figure 1). This is consistent with the corresponding rainfall, which was around 140 mm higher in May-July 2007 than in 2008. These weather conditions led to better CP synthesis during 2008 and thus to higher CP values.
That climate conditions, especially during grain development, can influence grain composition was reported by Fuhrer et al. [25], Shewry et al. [50], and Ohm et al. [51]. Fuhrer et al. [25] reported the effects of ozone on grain composition and observed an increased CP level. Shewry et al. [50] analyzed the impact of temperature and water availability during grain growth on grain composition. Based on 26 genotypes grown at different locations, they stated that mean temperature and precipitation were either positively or negatively related to phytochemical contents during grain growth, or to water-soluble arabinoxylan fiber in bran and white flour. As locations are closely related to environmental conditions like rainfall and sunshine, Ohm et al. [51] observed a significant impact of locations on SDS-unextractable polymeric protein parameters.
Nevertheless, as Brunner [49] and Casagrande [52] stated that a CP content of at least 10.5% is required to match the needs of the baking industry for organic flours, the CP levels achieved in this trial were sufficient.
Overall, only row distance had a significant effect on CP when the years were analyzed separately. While in 2007 no statistically significant effect was found, in 2008 the larger row distance of 30.0 cm significantly raised the mean CP content to nearly 12% (Figure 2A). This was around 5% more than for the smaller row distance, which reached a CP content of 11.4%. In fact, 12% CP is a considerable amount, exceeding the level of at least 10.5% required by the baking industry [49,52] by 1.5 percentage points. The impact of row distance on CP was also investigated by Becker et al. [34] and Hiltbrunner et al. [44]. Both studies revealed a higher protein content if the row distance was enlarged. Thus, it can be assumed that increasing the row distance may provide an opportunity in organic farming to reach the required protein concentrations.
Nevertheless, it has to be taken into account that lower grain yields and greater weed management efforts must be accepted if raised CP levels are the main target. Selecting a suitable cultivar (cv) could diminish the yield loss, as in our study cv Naturastar (A-wheat) did not respond with lower grain yields when the row distance was increased. However, this might be an effect of the different wheat classes.
Sedimentation value (SV) is a key parameter for interpreting quality of CP and therefore is of high relevance for baking quality. Compared to CP, significant effects regarding SV were more distinct as significant differences were obtained for the effect of the year, cultivar, and the interactions Y × Cv, Y × RD, Cv × RD, and Y × Cv × RD (Table 3).
Concerning years, an SV of 37.3 mL was measured in 2007, while the SV in 2008 was significantly lower, reaching 36.2 mL. As already explained in the CP section, climate conditions during grain filling, especially sunshine duration, could have affected this trait differently in the two years.
Regarding the impact of row distance on CP and SV, both increased significantly in 2008 (Figure 2A,B). However, the effect was more consistent for SV, as a slight trend of increasing SV with the larger row distance of 30.0 cm was evident in both years (Figure 2B). In general, SV ranged from 30.0 mL to 42.7 mL. The lowest level was obtained in 2008 for cv Naturastar when cropped at the narrow row distance, while the overall highest level was reached in 2008 by cv Bussard cropped at the larger row distance (Figure 2B). Expanding the row distance to 30.0 cm increased SV only slightly, from 36.5 mL to 37.0 mL. The effect was most notable for cv Naturastar, as in both years SV increased under the larger row distance, although the increase was only significant in 2008. For cv Bussard, the effect was not consistent.
The impact of row distance on the quality trait SV within organic farming systems was also investigated by Becker [34] and Germeier [53]. Both studies reported that SV significantly increased if row distances were expanded to either 50.0 cm [34] or 75 cm [53]. Although the row distance in our study was only increased to 30.0 cm, it partially produced the same result. We suppose that if the row distance had been increased further, the effect on SV would have been more pronounced.
Nevertheless, a mean SV of at least 34 mL seems to be sufficient for the baking industry [49]. That level was obtained in all treatments of the trial. Finally, larger row distances seem to support the requirements of the baking industry.
Free Asn and AA Formation Potential
Free Asn as main indicator for AA formation potential in cereals was significantly influenced by the year, the cultivar, and by the interactions Y × Cv, Cv × RD, SD × RD, Y × SD × RD (Table 3). Neither SD nor RD as single treatment significantly affected the free Asn amount.
In 2007, free Asn was significantly higher than in 2008 (13.3 mg 100 g−1 vs. 9.2 mg 100 g−1). When analyzing the years separately, in 2007 the treatments row distance and seed density had no effect on free Asn levels at all; the trait ranged from 13.5 mg 100 g−1 to 13.7 mg 100 g−1 (Figure 2C). By contrast, significant changes were observed in 2008. If the higher seed density of 350 grains m−2 was chosen, increasing the row distance to 30 cm raised free Asn levels significantly from 8.5 mg 100 g−1 to 9.5 mg 100 g−1. A lower planting density could have changed the number of grains per spike, as shown in Table 4, especially for cv Naturastar. This cultivar showed a higher number of grains per spike if the seed density was decreased (30 to 38 and 32 to 38 grains spike−1). By contrast, cv Bussard did not change its number of grains spike−1 if the seed density was lowered.
We suppose that smaller grains contain less starch and more soluble nitrogen (N) fractions, leading to higher CP levels; part of these N fractions may be stored as free Asn. This fits well with the test weight results, as it was stated above that bigger grains are expected to deliver higher test weights, including more starch. Furthermore, Figure 3A presents the relation between free Asn and grains spike−1. In this context, more grains spike−1 was associated with an increase in free Asn. This supports the above-mentioned postulation of higher soluble N fractions in smaller grains.
However, as it is known that wholemeal flour contains more free Asn than white flour [23], the hull/grain ratio could have influenced the free Asn level, since the proportion of hull can be higher if grains are smaller. In contrast, relative to their surface, bigger grains may have a lower proportion of hull. We measured the free Asn level of the hull in our trial and found a mean of around 53 mg 100 g−1, almost 5-fold more Asn in the hull than in white flour. This should also be taken into account.
Independent of the highest significant interaction for free Asn, a clear impact of the cultivar was obvious, as the level of free Asn was almost twice as high for cv Naturastar (14.2 mg 100 g−1) as for cv Bussard (8.8 mg 100 g−1) (Table 5).

Table 5. Level of free Asn (mg 100 g−1) across years as influenced by cultivar and row distance. Different letters next to the free Asn amounts refer to significant differences.

Both cultivars differ in their quality class (Bussard: highest baking quality, Naturastar: high baking quality) and grain yield. Naturastar is associated with higher grain yields, while Bussard is a high-protein wheat. We conclude that the protein synthesis of Bussard leaves less soluble N in the grain until harvest, whereas Naturastar uses N for grain yield formation and lower protein synthesis, leading to the hypothesis that it accumulates more soluble N fractions in the grain. Those soluble N fractions may contain free Asn. This assumption is supported by the significant impact of cultivar on sedimentation value (SV), as this trait describes protein quality. Generally, SV was significantly higher for cv Bussard than for cv Naturastar (34.6 mL). Additionally, the higher Asn level of cv Naturastar fits well with the stated effect of smaller grains on Asn, since for cv Naturastar the number of grains per spike was much higher than for cv Bussard, leading to smaller grains.
Other studies, conducted either under conventional or organic farming conditions, also reported that years and cultivars [19][20][21]23,[26][27][28][29]31,32] have a major impact on free Asn levels in cereal grains. In this context, free Asn levels in conventional trials are normally higher and cover a broader range. Stockmann et al. [31] reported an average of 15.5 mg free Asn 100 g−1 in white flour and a range of 12 to 32 mg free Asn 100 g−1 in conventionally cropped wheat cultivars. Nevertheless, our free Asn concentrations fit well with these references and are comparable.
The impact of row distance and seed density on free Asn has, to date, never been investigated. Stockmann et al. [20] investigated the effect of nitrogen (N) supply in organically grown wheat cultivars. They increased the N supply stepwise to a maximum of 180 kg ha−1 and analyzed the impact on baking quality traits and free Asn. They stated that a raised N supply increased protein significantly, but the free Asn level did not change significantly. Additionally, a strong impact of cultivars under different N treatments was reported. The same was found in our study, as, above all, larger row distances were able to increase N availability and could thus have effects similar to those of N treatments.
Hence, those results support the assumption that raising nutrient supply by increasing row distance will increase the protein content and sedimentation value without elevating free Asn.
Relationship between Baking Quality, Yield Components, Free Asn, and AA
Free Asn, as a precursor of AA formation potential, was not related to crude protein (Figure 3B, R2 of 0.04). Thus, raising baking quality by using treatments like larger row distances does not increase the AA formation potential in the case of free Asn. That is also indicated by the regression of crude protein and AA formation (R2 = 0.53, Figure 3D). Studies are available reporting either a clear relation between free Asn and protein [23] or no such relation [51]. Thus, further studies investigating the relation between crude protein and free Asn, especially for wheat, are highly important. The relation between free Asn and AA formation in conventionally cropped cereals was reported in different studies [7,54]. However, for cereals grown under organic farming, this relation has not been investigated intensively. Across all treatments, free Asn seems to be a main precursor of AA formation, as shown by an R2 of 0.41 (Figure 3C). Thus, it seems that in organically grown wheat flours, similar mechanistic pathways are present during food processing to those in conventional flours. However, the relation was weaker, which is why we suppose that other amino acids also took part in AA formation. Such findings were also reported by Mottram et al. [4] and Stadler et al. [5].
To date, no study has investigated the relation between grain number per spike and free Asn amount (Figure 3A). Interestingly, increasing grain numbers per spike increased the free Asn, as indicated by a close relation of R2 = 0.72. In this context, more grains per spike indicate a smaller grain size, as the spike has only a defined size. Thus, it can be assumed that smaller grains might contain more free Asn. These findings correspond well with the results mentioned above, where the level of free Asn was highest for cv Naturastar, which also had a high number of grains. This outcome was additionally supported by the analyzed sieve grading (data not shown), where grains were separated into four grain size fractions (>2.8 mm, >2.5 mm, >2.2 mm, and <2.2 mm). In this context, cv Bussard reached the biggest grain size fraction (>2.8 mm) for 60% to 80% of its kernels across the years, while for cv Naturastar the equivalent was only 40%. Most kernels of cv Naturastar were within the smaller grain size fractions. In addition, cv Naturastar also had a lower TKW. Overall, free Asn and TKW showed a negative relationship, with R2 = 0.71 (2007) and 0.49 (2008), indicating that smaller grains led to a lower TKW and thus an increase in Asn concentration.
Hence, all three traits (TKW, grains per spike, sieve grading) indicate smaller grains with higher free Asn concentrations. Such a relation (grain number per spike vs. free Asn) has not been observed by other studies before.
Nevertheless, it should also be taken into account that bigger grains might contain less free Asn as a consequence of a thinning effect: transferring starch assimilates to the grain shortly before harvest could dilute the level of Asn in the grain. However, our results do not support this assumption.
Moreover, Navrotskythe et al. [55] found that thousand kernel weight and kernel size correlated with free Asn by r = 0.3, which is also in contrast to the supposed dilution effect. Further, Navrotskythe et al. [55] reported that delayed harvest elevated the free Asn concentration. Delaying harvest increases the possibility of pre-harvest sprouting, which leads to an increase in free Asn [56]. Taking these effects into account, the correlation between kernel weight/kernel size and Asn in the study of Navrotskythe et al. [55] could have been obscured by the delayed harvest. However, the falling number was not mentioned in their study, which makes it difficult to find common relations between both studies. Additionally, in our study no pre-harvest sprouting seemed to occur, as the falling number was not decreased.
In contrast to our results, Corol et al. [23] reported no relation between free Asn and TKW or kernel weight. They stated that free Asn is not determined by grain size or grain number per plant. Further, they found higher levels of free Asn in taller cereal plants. We suppose that taller plants differ in their grain number and structure compared to smaller ones. Moreover, larger stems may differ in their ability to transfer nitrogen during grain development, leading to less nitrogen being mobilized to the grains, which could have affected free Asn accumulation. However, it is worth noting that Corol et al. [23] investigated wholemeal flour, in contrast to our study, in which white flour was used; therefore, different outcomes can be expected. Nevertheless, it would be interesting to investigate whether free Asn in smaller plants is increased by a higher number of grains per spike. Further, as we cropped only two cultivars, additional trials with a larger number of wheat cultivars should be carried out, focusing on grain structure and free Asn levels.
The fact that smaller grains may contain more Asn can to some extent be explained by the higher proportion of hull in relation to the full grain size [57]. The results of Corol et al. [23] did not support this, as they reported no relation between free Asn and the yield of flour and bran. We also analyzed the Asn concentration in the hull and found up to fivefold higher levels compared to white flour. In addition, rye and einkorn seem to have much higher free Asn concentrations than wheat [32]. At the same time, both species have much smaller grains than wheat (TKW: rye: 28-36 g, einkorn: 21-35 g, wheat: 40-55 g). Moreover, protein fractions also differ considerably between these cereals, which could explain the different Asn levels.
In summary, the relations described above provide new insight into Asn synthesis and its interaction with other traits. This underlines the need for future studies examining the interactions between spike/grain structure and free Asn.
Conclusions
The study aimed to assess the impact of row distance and seed density on grain quality, yield components, and yield in organically grown wheat. Although all traits were influenced by year and mostly by cultivar, increasing the row distance also increased the baking quality traits crude protein level and sedimentation value, while the free Asn concentration was affected only to a minor extent. Thus, we recommend larger row distances as a feasible way of raising baking quality traits without increasing free Asn levels, which act as precursors for AA formation. Seed density seems to be of minor relevance, as it only affected grain yield and test weight. Nevertheless, as seed density may affect plant spacing, it should be taken into account in further studies, since it may lead to changes in baking quality traits. Zhang et al. [58] reported that increasing plant density affected, for example, grain protein concentration, the amount and composition of protein fractions, and loaf volume, in interaction with nitrogen supply. Baking quality traits increased with increasing plant density if the plants were highly fertilized with nitrogen, while they decreased if no nitrogen was applied. As organic farming is a so-called low-input system, the effects of seed density on baking quality must be considered.
Moreover, gluten quality is also important, as described by Augspole et al. [59]. They reported significantly lower gluten content in wheat grains grown under organic farming conditions, while gluten was significantly stronger compared to conventionally cropped samples. Especially under organic farming conditions, strong gluten quality seems highly important for obtaining a good baking performance.
However, if higher yields are required, then seed density should not be reduced. In addition, the study revealed new relationships between yield components (grain structure, TKW, grains per spike) and free Asn. It seems that smaller grains contain more free Asn, which provides new insight into Asn synthesis during grain development.
Thus, future studies revealing the interaction of spike/grain structure and free Asn would be of great interest.
"year": 2019,
"sha1": "2825442efb0d8a297403b2972cf9566b1700b87b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/agronomy9110713",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "27b37bc23235439c53bcc23a3e87f03a59f6e269",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Differential patterns of age‐related cortical and subcortical functional connectivity in 6‐to‐10 year old children: A connectome‐wide association study
Abstract

Introduction: Typical brain development is characterized by specific patterns of maturation of functional networks. Cortico‐cortical connectivity generally increases, whereas subcortico‐cortical connections often decrease. Little is known about connectivity changes amongst different subcortical regions in typical development.

Methods: This study examined age‐ and gender‐related differences in functional connectivity between and within cortical and subcortical regions using two different approaches. The participants included 411 six‐ to ten‐year‐old typically developing children sampled from the population‐based Generation R study. Functional connectomes were defined in native space using regions of interest from subject‐specific FreeSurfer segmentations. Connections were defined as: (a) the correlation between regional mean time‐series; and (b) the focal maximum of voxel‐wise correlations within FreeSurfer regions. The association of age and gender with each functional connection was determined using linear regression. The preprocessing included the exclusion of children with excessive head motion and scrubbing to reduce the influence of minor head motion during scanning.

Results: Cortico‐cortical associations echoed previous findings that connectivity shifts from short to long‐range with age. Subcortico‐cortical associations with age were primarily negative in the focal network approach but were both positive and negative in the mean time‐series network approach. Between subcortical regions, age‐related associations were negative in both network approaches. Few connections had significant associations with gender.

Conclusions: The present study replicates previously reported age‐related patterns of connectivity in a relatively narrow age‐range of children. In addition, we extended these findings by demonstrating decreased connectivity within the subcortex with increasing age. Lastly, we show the utility of a more focal approach that challenges the spatial assumptions made by the traditional mean time series approach.
| INTRODUCTION
Understanding typical brain development is critical to understanding the mechanisms behind neuropsychiatric disorders. Mental health in adulthood is highly dependent on brain development beginning in the womb and continuing throughout adolescence and into adulthood. One theory is that the neurobiological underpinnings of mental illnesses are largely driven by atypical brain connectivity originating in childhood (Di Martino et al., 2014;Menon, 2013).
Through an understanding of typical connectivity, we can identify aberrant patterns associated with neuropsychiatric disorders.
Functional connectivity changes dramatically in the early years of life. In infancy, the brain's short-range connections are dominant (Gao et al., 2011; Di Martino et al., 2014). Throughout childhood and adolescence, functional connectivity becomes increasingly distributed, with long-range connections becoming stronger and short-range connectivity decreasing (Fair et al., 2009; Di Martino et al., 2014; Rubia, 2013). Furthermore, graph theory studies have demonstrated that while topological features of brain connectivity are mature by age eight, the hierarchical organization and modularity of global brain networks continue to mature into adulthood (Menon, 2013).
Functional connectivity between subcortical and cortical regions has been shown to decrease with age in children (Cerliani et al., 2015; Greene et al., 2014; Sato et al., 2015; Supekar, Musen, & Menon, 2009). However, other studies have found the opposite pattern (Sato et al., 2015; Solé-padullés et al., 2015). Age-related differences in functional connectivity between subcortical and cortical regions are accompanied by stronger cortico-cortical connectivity in older children (Supekar et al., 2009). There have been few studies examining the role of connections between different subcortical brain structures in children. Gaining a better understanding of the age-related development of subcortical functional connectivity provides an important baseline for the study of childhood psychopathology.
Development of brain connectivity is increasingly being studied using whole-brain connectomes derived from resting-state functional MRI (rs-fMRI; Di Martino et al., 2014;Rubia, 2013).
Since connectome approaches evaluate networks within the entire brain, they are well suited to evaluate the major changes taking place in typical neurodevelopment.
In this study, we utilized two connectome approaches to evaluate age and gender associations in a large group of school age children across the functional connectome. First, we used the correlation of the mean time series for brain regions involved in a given connection to express uniform and homogenous connectivity.
However, connectivity in some regions becomes increasingly focal during development (Durston et al., 2006), which we captured with a new measure of connectivity that determines the focal maxima of correlations between ROIs. Each approach measures different aspects of connectivity, which can help parse whether connectivity differences in development involve larger brain regions or tend to be more focal within an ROI.
Considering the mixed findings in the literature related to cortical and subcortical functional connectivity, we aimed to determine age related differences in connectivity between pairs of cortical and subcortical regions. In addition, we were interested in determining how functional connectivity patterns differ with age between pairs of subcortical regions. This has not yet been investigated in previous studies. Previous studies examining rs-fMRI connectivity in typical development included subjects with a broad age range or had small to moderate sample sizes (n < 200 in most cases ;Cerliani et al., 2015;Fair et al., 2009;Greene et al., 2014;Rubia, 2013;Sato et al., 2015;Solé-padullés et al., 2015;Supekar et al., 2009). Thus, to reduce heterogeneity, which could contribute to the mixed findings, we used a large sample of 6-to-10 year-old children from a population-based cohort. By focusing on a narrow age range in a large sample, we aimed to shed new light on brain development within a narrow period of childhood. This age range is particularly interesting because it is a period in which the brain, behavior, and cognition are rapidly maturing (Livy et al., 1997;Mous et al., 2016).
This critical phase in development can provide clues into typical brain function, which can then be extended to evaluate mechanisms governing psychopathology.
| Participants
The participants of this study included a subgroup of children participating in the Generation R Study, which is a large, population-based prenatal cohort study in Rotterdam, the Netherlands (Jaddoe et al., 2012). Magnetic resonance imaging (MRI) scans were obtained in a total of 1,070 children between 6 and 10 years of age. The protocol for recruitment and study design is described in detail elsewhere (White et al., 2013). General exclusion criteria consisted of severe motor or sensory disorders (deafness or blindness), neurological disorders, moderate to severe head injuries with loss of consciousness, claustrophobia, and contraindications to MRI. Of 1,070 children who visited the research center for an MRI, 964 children underwent an rs-fMRI scan. Of those children, 227 were screened as having problem behaviors using the Child Behavior Checklist (see description below) and were excluded from the analyses. Furthermore, subjects were excluded due to excessive head motion (n = 88), failed registrations (n = 21), failed or low quality cortical segmentations (n = 126), less than 125 volumes left after data scrubbing (n = 5) and an incidental finding (n = 1). The final dataset included 411 subjects.
Informed consent was obtained from parents, and all procedures were approved by the Medical Ethics Committee of the Erasmus MC, University Medical Center Rotterdam.
| Behavioral and IQ assessment
The children were assessed for behavioral and emotional problems using the Child Behavior Checklist (CBCL/1½-5), which is a questionnaire filled out by their mothers (93%) or fathers (7%; Achenbach & Rescorla, 2000). The CBCL is a 99-item inventory covering various behaviors reported by parents. It uses a Likert response format (i.e., "not true", "somewhat true" and "very true").
The CBCL was used to select children without problem behavior to ensure that associations were independent of major behavioral problems. This was accomplished by excluding participants with a score above the clinical cutoff on any syndrome (98th percentile), DSM-oriented (98th percentile), or broadband scale (91st percentile), according to Dutch norms (Tick, van der Ende, Koot, & Verhulst, 2007). Furthermore, to minimize the potential for residual confounding, the square root of the sum of all items was used to compute a total problem score to be used as a covariate in analyses.
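As a small illustrative sketch only (the item-level data layout and column handling are assumptions, not taken from the study), the covariate described above could be computed as:

```python
# Hypothetical sketch: square root of the summed CBCL item scores, used as a
# covariate for residual problem behaviour. Assumes one row per child and one
# column per CBCL item scored 0/1/2; not the authors' actual scoring code.
import numpy as np
import pandas as pd

def cbcl_sqrt_total(items: pd.DataFrame) -> pd.Series:
    """Return sqrt(sum of item scores) per child."""
    return np.sqrt(items.sum(axis=1))
```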
| MR-image acquisition
Magnetic resonance imaging data were acquired on a General Electric MR-750 3-Tesla whole-body scanner (General Electric, Milwaukee, WI) using a standard 8-channel, receive-only head coil. A three-plane localizer was run first and used to position all subsequent scans. Structural T1-weighted images were acquired using a fast spoiled gradient-recalled echo (FSPGR) sequence (TR = 10.3 ms, TE = 4.2 ms, TI = 350 ms, NEX = 1, flip angle = 16°, matrix = 256 × 256, field of view (FOV) = 230.4 mm, slice thickness = 0.9 mm). Echo planar imaging was used for the rs-fMRI session with the following parameters: TR = 2,000 ms, TE = 30 ms, flip angle = 85°, matrix = 64 × 64, FOV = 230 mm × 230 mm, slice thickness = 4 mm. In a previous study, the number of TRs necessary for functional connectivity analyses was determined, and therefore the first set of acquisitions acquired 250 TRs (acquisition time = 8 min 20 s; White et al., 2014). After it was determined that fewer TRs provided stable networks of higher quality (less motion), the number of TRs was reduced to 160 (acquisition time = 5 min 20 s; White et al., 2014). Children were instructed to keep their eyes closed and not to think about anything in particular during the rs-fMRI scan. After the scan session they were asked how the scan went and whether they fell asleep during the scan.
| Anatomical Image Processing
Predefined ROIs were defined in native space and used as the anatomical regions to quantify time-series data for the brain-wide connectivity analysis. A total of 34 cortical regions and seven subcortical ROIs were defined in each hemisphere of the brain in native space from T1-weighted images using the FreeSurfer analysis suite (https://surfer.nmr.mgh.harvard.edu; Fischl et al., 2004). Details about the FreeSurfer data processing and quality control in the Generation R Study are described elsewhere (Mous et al., 2014). The FreeSurfer image, including the cortical and subcortical labels, was registered to the rs-fMRI data by applying the transformation matrix resulting from a 12 degree of freedom affine registration of the T1-weighted image to the rs-fMRI data (Greve & Fischl, 2009). Thus, all time series for the analyses were extracted in native fMRI space.
| Resting-state image processing
Resting-state fMRI data were preprocessed using a combination of tools from the Analysis of Functional NeuroImages package (AFNI; Cox, 1996), the Functional MRI of the Brain Software Library (FSL; Jenkinson, Beckmann, Behrens, Woolrich, & Smith, 2012), and inhouse software written in Python version 2.7.3. For the rs-fMRIs acquired with 250 TRs, only the first 160 volumes were used so that all time courses contained the same amount of information.
Preprocessing of the rs-fMRI began with slice-timing correction, motion correction, removal of the first four volumes, and 0.01 Hz high-pass temporal filtering. Next, the six motion correction parameters, the mean white matter signal, and the mean cerebrospinal fluid (CSF) signal were regressed out of each voxel's time course (Fox, Zhang, Snyder, & Raichle, 2009). Finally, data scrubbing was used to further compensate for motion by removing volumes with excessive movement (i.e., greater than 0.5 mm root mean squared relative motion; Power, Barnes, Snyder, Schlaggar, & Petersen, 2012), since head motion during scanning can amplify developmental differences in connectivity (Power et al., 2012). This effect is significantly reduced after compensating for movement (Di Martino et al., 2014).
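For illustration only (this is not the authors' in-house pipeline, and the variable names and data layout are assumptions), the nuisance-regression and scrubbing steps described above could be sketched as follows:

```python
# Minimal sketch of nuisance regression and motion scrubbing on an rs-fMRI time
# series, assuming the data have already been slice-time and motion corrected.
import numpy as np

def regress_nuisance(voxel_ts, confounds):
    """Remove confound signals (6 motion parameters, mean WM and CSF) from each voxel.

    voxel_ts  : (n_volumes, n_voxels) array of voxel time series
    confounds : (n_volumes, n_confounds) array of nuisance regressors
    """
    design = np.column_stack([np.ones(len(confounds)), confounds])
    beta = np.linalg.lstsq(design, voxel_ts, rcond=None)[0]
    return voxel_ts - design @ beta          # residuals = cleaned time series

def scrub(voxel_ts, rel_rms_motion, threshold=0.5, min_volumes=125):
    """Drop volumes whose relative RMS motion (mm) exceeds the threshold."""
    keep = rel_rms_motion <= threshold
    if keep.sum() < min_volumes:             # subjects with too few volumes were excluded
        raise ValueError("Fewer than %d usable volumes left after scrubbing" % min_volumes)
    return voxel_ts[keep]
```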
Given geometric distortions resulting from susceptibility artifacts, some ROIs were excluded from the analyses. In order to identify affected ROIs, FSL's Brain Extraction Tool (Smith, 2002) was used to create a brain mask from the rs-fMRI. The proportion of voxels in each ROI that intersected with the brain mask was computed for each subject. Overlap between voxels believed to represent true signal (i.e., within the brain mask) was found to be low in ROIs known to be affected by susceptibility artifacts. ROIs with a mean overlap across subjects of less than 90% were visually inspected and those ROIs with consistently low overlap were excluded from the analyses (entorhinal cortex, frontal pole, inferior temporal gyrus, lateral orbitofrontal cortex, medial orbitofrontal cortex, and temporal pole). In the remaining ROIs, only voxels in the intersection of the ROI and the brain mask were included in the analyses. See Table 1 for a listing of included ROIs.
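A minimal sketch of this ROI quality check, with illustrative variable names rather than the original code, could look like this:

```python
# Sketch: fraction of each FreeSurfer ROI that falls inside the EPI brain mask,
# computed per subject; ROIs with a mean overlap below 90% across subjects were
# inspected and, if consistently affected by susceptibility artifacts, excluded.
import numpy as np

def roi_overlap(label_img, brain_mask, roi_ids):
    """label_img: 3-D integer array of ROI labels; brain_mask: 3-D boolean array."""
    overlap = {}
    for roi in roi_ids:
        in_roi = label_img == roi
        overlap[roi] = np.count_nonzero(in_roi & brain_mask) / np.count_nonzero(in_roi)
    return overlap

def flag_rois(per_subject_overlap, threshold=0.90):
    """per_subject_overlap: list of dicts (one per subject) from roi_overlap()."""
    rois = per_subject_overlap[0].keys()
    return [r for r in rois
            if np.mean([s[r] for s in per_subject_overlap]) < threshold]
```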
| Brain-wide connectivity analysis
Brain-wide connectivity analyses were conducted in rs-fMRI native space, after the FreeSurfer labels were mapped to the rs-fMRI data. The labels and preprocessed rs-fMRI data were used to calculate pairwise region-to-region functional connectivity. Before calculating functional connectivity, a 3 × 3 × 3 voxel median spatial filter was applied to the preprocessed rs-fMRI to increase the signal-to-noise ratio. Two types of functional connectivity matrices were calculated. First, the connection weight for each pair of ROIs was calculated as the Pearson correlation coefficient of the mean time series between all pairs of ROIs (MeanTS). For the second approach, Pearson correlation coefficients were computed between all pairs of voxels within two ROIs, and the pair with the highest Pearson correlation coefficient was selected to represent the connection between those two ROIs. We coin this approach the "Anatomic and Local Peak Activity Correlation Analysis" (ALPACA). The first approach represents connectivity which is homogeneous over a pair of ROIs, whereas the second approach represents the peak connectivity which is localized to focal areas within a pair of ROIs.
For both types of connectivity, only voxels that were part of the fMRI brain mask were considered. This minimized voxels affected by geometric distortions from influencing the connection weight. Prior to statistical analyses, to satisfy normality assumptions for parametric statistics, Pearson correlation coefficients were converted to a normal distribution using the Fisher's r-to-z transformation.
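A minimal sketch of the two connection weights (MeanTS and ALPACA), including the Fisher r-to-z transformation, is shown below. It is an illustrative reimplementation rather than the published code and assumes that every included voxel time series has nonzero variance:

```python
# Sketch: connection weights for one pair of ROIs, given their voxel time series
# (restricted to voxels inside the fMRI brain mask).
import numpy as np

def mean_ts_connection(roi_a, roi_b):
    """roi_a, roi_b: (n_volumes, n_voxels) arrays of voxel time series within each ROI."""
    r = np.corrcoef(roi_a.mean(axis=1), roi_b.mean(axis=1))[0, 1]
    return np.arctanh(r)                      # Fisher r-to-z transform

def alpaca_connection(roi_a, roi_b):
    """Peak (focal) connectivity: highest correlation over all voxel pairs."""
    a = (roi_a - roi_a.mean(axis=0)) / roi_a.std(axis=0)   # z-score each voxel
    b = (roi_b - roi_b.mean(axis=0)) / roi_b.std(axis=0)
    corr = a.T @ b / roi_a.shape[0]            # (n_voxels_a, n_voxels_b) correlation matrix
    return np.arctanh(corr.max())
```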
| Statistical analysis
Statistical analyses were conducted with the statsmodels (Seabold & Perktold, 2010), scipy (Oliphant, 2007) and numpy (Van Der Walt, Colbert, & Varoquaux, 2011) packages in Python (v2.7). For each connection, two regression models were fitted, one for MeanTS and one for ALPACA. In both cases, age, gender, and the CBCL total problem score were included as independent variables, and main effects were examined for age and gender. The CBCL total problem score was included to account for residual behavioral differences among the included children. To control for multiple testing, the number of effective independent tests/connections, M_eff, was computed for both ALPACA and MeanTS according to the method outlined in Li, Yeung, Cherny, and Sham (2012). The threshold of significance was determined using the Sidak correction, α_corr = 1 − (1 − α)^(1/M_eff), where α = 0.05. We additionally conducted a separate analysis in which the interaction between age and gender was tested by adding an interaction term to the model. Multiple testing was controlled using the same thresholds as in the main-effects model. Connectograms (van Horn et al., 2012) were used to visualize associations of age and gender with functional connectivity.
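The per-connection regression and the Sidak-corrected threshold could be sketched as follows; the column names are hypothetical and M_eff is assumed to have been estimated beforehand following Li et al. (2012):

```python
# Sketch: one OLS model per connection and the Sidak-corrected significance threshold.
import pandas as pd
import statsmodels.formula.api as smf

def sidak_threshold(m_eff, alpha=0.05):
    """alpha_corr = 1 - (1 - alpha)**(1 / M_eff)."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m_eff)

def test_connection(df):
    """df columns (hypothetical): 'z' (Fisher-z connection weight), 'age', 'gender', 'cbcl_sqrt_total'."""
    model = smf.ols("z ~ age + C(gender) + cbcl_sqrt_total", data=df).fit()
    return model.pvalues["age"], model.params["age"]

# Example use: a connection is significant if its age p-value falls below the threshold.
# p_age, beta_age = test_connection(connection_df)
# significant = p_age < sidak_threshold(m_eff)
```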
| Visualization
Connectograms are used in brain connectivity analyses to show relationships between ROIs in a circular two-dimensional representation. ROIs are positioned around the outside of the circle. A given connection is represented by a line between the associated ROIs, where color and thickness are used to indicate specific properties of a connection. In this study, ROIs were grouped by anatomy (see Table 1 for groupings) and by hemisphere. Only connections with significant associations are shown. Red and blue represent positive and negative associations with age, respectively, or male > female and female > male in the case of gender. Increased color intensity represents increased significance. Connectograms are often easier to interpret than three-dimensional representations of connectivity in anatomical space (Langen, White, Ikram, Vernooij, & Niessen, 2015).
Worm plots were used to directly compare groups of connections between MeanTS and ALPACA (Langen et al., 2015). Each connection is shown as a point whose value on the y-axis is the negative log of the p-value, multiplied by the sign of the association and a scaling factor that is used to ensure that the line representing significance is at the same location for both connectivity types. Connections were ordered along the x-axis according to the anatomical group to which their ROIs belonged (see Table 1 for the list of ROIs belonging to each group). Groups were ordered by their mean associations with ALPACA. Within each group of connections, points were ordered by their association, which produces a worm-like shape. This allows easy comparison of association strengths and distributions between connectivity types.

TABLE 1 Regions used in connectome analysis, grouped by location in the brain (columns: Cluster, Region, Abbreviation).
The ordering was performed separately for each type of connectivity, which means that the order of connections likely differs between connection types. Points that are outside of the dashed lines indicate connections with significant associations after correction for multiple testing.
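Under the description above, the signed and scaled y-values of the worm plots could be computed as in the following sketch; the exact scaling used by the authors is not specified beyond aligning the significance lines, so this is one plausible implementation:

```python
# Sketch: worm-plot y-values, scaled so that the significance line of each
# connectivity type (at its own Sidak-corrected alpha) sits at the same height.
import numpy as np

def worm_y(p_values, betas, alpha_corr, reference_alpha_corr):
    """Signed -log10(p), rescaled relative to a reference significance level."""
    scale = np.log10(reference_alpha_corr) / np.log10(alpha_corr)
    return -np.log10(p_values) * np.sign(betas) * scale
```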
| RESULTS
Sample characteristics are reported in Table 2. Mean age was 8 years and 206 subjects were female. The majority of subjects (372 of 411) were right-handed. Mean connectomes across subjects are shown in Figure 1 for both MeanTS and ALPACA.
Numerous connections had significant age-related associations that survived correction for multiple testing. These are shown in connectograms in Figure 2 and are summarized in Table 3.

TABLE 2 Sample characteristics (including: other western, n = 27; average relative RMS motion, 0.11 ± 0.08 mm).

FIGURE 1 Mean connectomes across subjects for MeanTS and ALPACA. Each element in the matrix represents one connection, where the connection weight is the Fisher r-to-z transformation of the correlation between the corresponding regions on the x- and y-axes.

There was a significant Pearson correlation between age and mean displacement (−0.15, p < 0.05); however, we adjusted for motion as described in the methods section.
The age connectograms were relatively symmetric, suggesting that both homogeneous and focal age-related differences occur similarly in both hemispheres of the brain. Specific connections with symmetric age associations are shown in Figure 2d. Figure 3 shows the distribution of connection weights grouped by lobe using a worm plot (Langen et al., 2015). Most subcortical/parietal and subcortical/frontal connection associations with age were positive in MeanTS but negative in ALPACA. In other words, for this group of edges, homogeneous functional connectivity increases with age; however, there are focal areas where functional connectivity decreases with age. There were few connectivity differences between genders using either the ALPACA or the MeanTS approach.

FIGURE 2 Connectograms (van Horn et al., 2012) showing connections with a significant association of (a) MeanTS with age, (b) ALPACA with age, and (c) MeanTS with gender. There were no connections with significant associations between gender and ALPACA; therefore the corresponding connectogram is not shown. Brain regions are divided according to location in the brain, including frontal (FRO), temporal (TEMP), subcortical (SUB), parietal (PAR), and occipital (OCC), and are arranged in a circle. Regions from the left hemisphere are on the left side of the diagram. Significant connections between two regions are plotted as red (positive age associations, or male > female) and blue (negative age associations, or female > male) lines, where color intensity indicates relative significance. The opacity of each region indicates the relative number of significant associations that each region has. The age associations had a great deal of symmetry in both networks, as shown in (d).

MeanTS had a total of five significant associations with gender, including three in which connectivity in males was stronger than in females (left isthmus cingulate/left lingual, left accumbens/left insula, and left lingual/right hippocampus) and two in which females had greater connectivity than males (right accumbens/right caudate and right accumbens/right inferior parietal cortex). ALPACA did not identify any significant associations after correction for multiple testing.
This suggests that gender-related differences in connectivity are homogeneous across the involved ROIs rather than focal.
| DISCUSSION
In this study, we examined age- and gender-related differences in functional connectivity by applying two different, but complementary, approaches to measure functional connectivity. Both approaches revealed common as well as distinct patterns of connectivity in relation to age, and relatively similar patterns of connectivity between boys and girls. Significant associations between connectivity and age revealed a concentration of negative associations in connections between subcortical regions.
| Connectivity increases in the cortex and decreases in the subcortex with age
Both methods derived several cortico-cortical connections that were positively associated with age. This is consistent with a recent study that found that cortico-cortical connectivity increases during development in children from seven to 18 years of age (Solé-padullés et al., 2015). Our findings expand upon this by demonstrating that age-related increases in connectivity are present within a narrow age range in young children, while utilizing two different methods for deriving connectivity indices. This increase in connectivity parallels an increase in volume of the frontal, temporal, and parietal lobes, which has been reported to occur between 6 and 10 years of age (Lenroot & Giedd, 2006). Thus, the increased volume, which may be a result of synaptogenesis and arborization, may also result in increasing cross-talk between brain regions. Previous studies have found that functional connectivity increases with age in long-range connections and decreases in short-range connections (Fair et al., 2009; Rubia, 2013). This is partially consistent with our observations, since many of the identified significant positive associations were in connections between regions in different lobes and/or hemispheres, and were thus medium- to long-range connections. We did, however, find a small number of both long-range connections that decreased with age and short-range connections that increased with age. Thus, maturation of brain connectivity may be region dependent, with many long-range connections increasing with age, whereas some show decreases. While the regions with positive associations differed between the two connectivity types, both support the notion of generally increasingly distributed networks with age. Our observations are particularly interesting because we focused on a narrow age range, whereas many previous studies focused on relatively large age ranges (Fair et al., 2009; Rubia, 2013). It is remarkable that such striking connectivity differences with age can be observed even within a narrow age range in school-age children. This is likely a result of the rapid neurodevelopment that occurs during this period. In addition, since movement during MRI scanning shows strong age-related differences, with children having greater movement than adolescents and adults, the narrow age range used in our study provides greater similarity in movement parameters compared to studies with larger age ranges (Fair et al., 2009; Rubia, 2013) and is thus less biased by age-related movement artifacts.

FIGURE 3 Worm plots (Langen et al., 2015) of the associations of the functional measures with age and gender. Connections are split into groups based on the location of the associated regions, including frontal (Fro), temporal (Temp), subcortical (Sub), parietal (Par), and occipital (Occ). Connections within each group are ordered by association strength, producing worm-like shapes. Groups on the x-axis are ordered by mean association strength in ALPACA. On the y-axis is the negative log of the p-value, multiplied by the sign of the test and by a scaling factor. Each point outside of the dotted lines represents a significant association of age or gender with a specific connection. Panels show age versus functional connectivity and gender versus functional connectivity.
Age associations with connections between cortical and subcortical regions differed between network approaches. MeanTS had a mix of positive and negative associations, while ALPACA had exclusively negative associations with age, adding new insight into the nature of previously observed changes in connectivity with age. The negative associations in ALPACA suggest that focal connectivity between cortical and subcortical regions decreases with age, which is consistent with studies reporting negative associations with age in connections between subcortical and cortical regions in typical development (Cerliani et al., 2015; Greene et al., 2014; Sato et al., 2015; Supekar et al., 2009). However, Solé-padullés et al. (2015) found primarily positive as well as some negative age associations in cortico-subcortical connections, and Sato et al. (2015) found that the thalamus had both positive and negative associations, whereas those involved in different functions would show less age-related functional connectivity. Significant differences of cortico-subcortical functional connectivity with age also parallel previously observed increases in the size of the frontal, temporal, and parietal lobes as well as of some subcortical regions (Lenroot & Giedd, 2006).
While there is a wealth of developmental studies examining cortical-to-cortical connections, and to a lesser extent subcortical-to-cortical connections, there is a gap in the literature regarding age-related differences in connectivity between different subcortical structures. In this study, we found that all significant associations of connectivity between subcortical regions with age were negative for both network types. Our findings between subcortical structures may reflect networks transforming from local to distributed during development, as was shown by Fair et al. (2009). However, their study focused on cortical and cerebellar regions, and did not report on subcortical/subcortical connectivity.
Structural MRI studies of subcortical structures examined how volumes of subcortical regions change over time (Lenroot & Giedd, 2006). These changes include an inverted U-shaped pattern in the volume of the caudate with peaks at 7.5 and 10.0 years of age in females and males, respectively; an increase in hippocampal size in males only and an increase in the size of the amygdala in girls only.
The amygdala, hippocampus and caudate were involved in subcortical connections with negative associations with age, which was true for both networks for the amygdala and hippocampus, and only for ALPACA in the caudate. As these regions have been shown to increase in volume during childhood and subsequently decrease during adolescence (Sowell, Thompson, & Toga, 2004), their communications with other subcortical regions likely also change during development. It is thus possible that in the presence of later maturing cortical structures in young children (i.e., prefrontal cortex;Lenroot & Giedd, 2006;Mills, Goddings, Clasen, Giedd, & Blakemore, 2014), subcortical structures rely on within-system connectivity. As the cortex matures and its connections to the subcortex strengthen (Cummings, 1993), this previous subcortical reliance on highly integrative connectivity may be relaxed. Such an imbalance in timing of development has been previously proposed for cortical/limbic connectivity (Casey, Jones, & Hare, 2008;Heller, Cohen, Dreyfuss, & Casey, 2016). Given the importance of various subcortical structures and their cortical connections with different psychiatric disorders (e.g., Cortico-cerebellar-thalamic-cortical loop in Schizophrenia, caudate motor in ADHD, thalamus/basal ganglia/primary sensory networks; Cerliani et al., 2015), having a better understanding of differences within and between cortical and subcortical regions is a crucial foundation for future efforts studying connectivity differences related to psychopathology.
An interesting finding in this study was inter- and intrahemispheric symmetry in age associations. Symmetry in the negative associations in both network types was primarily between subcortical regions, with the nucleus accumbens playing a central role, whereas positive symmetry involved frontal, temporal, parietal, and subcortical regions. This suggests that many bilateral connections within and between hemispheres are developing simultaneously. The fact that many subcortical connections with the accumbens area had negative associations with age in both network types might be related to development of the reward center of the brain. The accumbens has been linked to risk-taking behavior in adolescents, but previous studies have not directly investigated the development of subcortical connections to the amygdala in children. Our results suggest that activity is increasingly directed by cortical regions rather than subcortical regions. Asymmetry in brain connectivity has previously been observed in lateralization studies (Agcaoglu, Miller, Mayer, Hugdahl, & Calhoun, 2015; Di, Kim, Chen, & Biswal, 2014; Holland et al., 2007). Adolescent and adult brains are highly lateralized across several resting-state networks, with several brain regions showing a decrease in lateralization with age (Agcaoglu et al., 2015). In children, language networks become increasingly left-lateralized throughout development (Groen, Whitehouse, Badcock, & Bishop, 2012; Holland et al., 2007), whereas visuospatial networks become right-lateralized (Groen et al., 2012). Although lateralization of the brain may be related to asymmetric association of functional connectivity with age, this relationship has not been studied directly, nor can it be definitively assumed. Lateralization can increase even if the association with age is significant on both sides of the brain.
While symmetry in functional connectivity has been widely studied, the symmetry of associations with functional connectivity has not. Examination of association symmetry could be informative in future studies. For example, individual deviations from the symmetry pattern found in typical development could be used as a marker of psychopathology.
| Sexual dimorphism
Five MeanTS connections had significant associations with gender that survived correction for multiple testing. ALPACA did not have any associations with gender. Together, these results suggest that gender-related differences in functional connectivity are likely more uniform across the involved regions, rather than being localized to spatially focal peaks. Alternatively, these results could suggest that MeanTS is a more robust measure of sexual dimorphism. Previous studies of gender-related differences in resting-state functional connectivity are sparse in this age range. A recent study did not find any gender differences in the age range of 7-12 (Solé-padullés et al., 2015). Additionally, a diffusion tensor MRI study in children aged six to ten found no significant gender-related differences in measures of white matter integrity (Muftuler et al., 2012). Both studies support our observation of few connectivity differences between genders in this age range.
The lack of observed gender differences in functional connectivity during development in both our study and previous studies is surprising, given that studies of structural connectivity have found gender differences in relation to cognition and/or intelligence in children and adolescents. Several previous studies have found gender differences in structural connectivity (Hänggi et al., 2010; Schmithorst, 2009; Simmonds, Hallquist, Asato, & Luna, 2014); however, a recent DTI study in the current cohort did not show gender differences (Muetzel et al., 2015). Gender differences have also previously been observed in neuroanatomical studies. For example, longitudinal structural MRI studies have shown gender differences in grey matter volume in the frontal, parietal, and temporal lobes, as well as in the caudate, amygdala, and hippocampus from childhood throughout adolescence (Lenroot & Giedd, 2006). In this study, all of these regions, with the exception of the amygdala, had connections with significant associations with gender. Given that previous work presents conflicting views on gender differences in connectivity and related grey matter volumes, and since our study found a small number of connections with gender differences in only one of the two functional networks studied, it seems that gender differences in functional connectivity are subtle and limited in typically developing children in this age range. Measurable gender differences in the brain may emerge or become unmasked with development, with differences between boys and girls becoming more apparent during adolescence and young adulthood.
| Defining functional connectivity by peak activation versus over an entire region
As described above, both network types were generally in agreement with each other and with the existing literature. In some specific connections, however, differences in associations were apparent between the two methods. Such differences suggest that the nature of the development of functional connectivity is not the same for all regions. For example, MeanTS did not have significant associations with age in fronto-frontal connections, whereas ALPACA's positive associations with age were exclusively found in fronto-frontal, fronto-temporal, and fronto-parietal connections. This is in line with findings of an earlier study that suggested that cortical connections become increasingly focal with age (Durston et al., 2006). This is in contrast with age associations with the posterior cingulate, which were positive in MeanTS but not ALPACA. This suggests that developmental changes in posterior cingulate connectivity are distributed across the entire structure rather than localized in a focal region. Previous studies have shown that connectivity in the default mode network changes during development, including connections involving the posterior cingulate (Fair et al., 2008; Supekar et al., 2010).
Increasingly diffuse connectivity with age was also found in cortical-to-subcortical connections, which were primarily positive in MeanTS but exclusively negative in ALPACA. This suggests a focal-to-diffuse trajectory with age. Such a trajectory was not found in subcortical-to-subcortical connections, since their age associations were exclusively negative in both network types.
It is interesting to consider the differences between the two network types in the context of the underlying neuronal architecture.
If connectivity with grey matter is more diffuse, with connecting neurons covering a more extensive surface of an ROI, then a more diffuse representation, such as MeanTS, would better capture changes in functional connectivity (e.g., a "shared pathway"). On the other hand, if axonal pathways between two regions start and end in focal gray matter locations, then a focal representation of functional connectivity, such as ALPACA, may target critical regions of connectivity.
There are additional factors that must be kept in mind interpreting results involving ALPACA. For example, ALPACA's focal approach may be more flexible in identifying the location of activation because it does not average over entire regions, which can blur the signal. This may be advantageous in relation to both structural and functional variability because it may not always be sensible to assume the same spatial activation patterns across individuals. On the other hand, ALPACA does not guarantee that the activation detected across individuals corresponds to the same focal connection. For example, it may be that a large region has more than one focal peak in connectivity. ALPACA may thus choose one peak for some subjects and another for others, in which case comparison across individuals would not involve the same connection. Additionally, in some cases, weaker functional connectivity has been related to some forms of psychopathology (e.g., autism [Ha, Sohn, Kim, Sim, & Cheon, 2015] and depression [Hermesdorf et al., 2015]). In this situation, finding the local maximum may not be desirable in the context of better explaining the neurobiological underpinnings of psychopathology or identifying novel biomarkers because the local maxima may not necessarily reflect the reduced connectivity across the involved regions.
Given the benefits and drawbacks and the underlying assumptions of each network type, using both ALPACA and MeanTS simultaneously in future studies may result in greater insights into different aspects of functional connectivity and make inferences of whether a given connection has a diffuse or focal connectivity pattern.
| Strengths and limitations
While most studies on developmental functional connectivity focus on broad age ranges with moderate sample sizes (Rubia, 2013), many of which used task-based fMRI rather than resting state fMRI, our study focused on a narrow age range and benefited from increased statistical power due to the large cohort. The children included in this study were sampled from a population-based cohort and were representative of the general population, which helped to mitigate the common issue of selection bias of children with higher than average IQ or greater socioeconomic status. An additional strength of this study is that, by keeping our analysis in native space, our results were not influenced by intersubject registration, which has frequently been used in previous studies and has been shown to blur cortical areas (Fischl, Sereno, Tootell, & Dale, 1999;White et al., 2001). This study also effectively used "brain-wide" visualizations to display large amounts of connectomic information, namely in the connectograms and worm plots.
In addition, we present both novel findings as well as replication of observations from earlier studies, the latter being important in neuroscience, which is a field plagued by many underpowered studies that do not replicate (Nichols et al., 2017;Open Science Collaboration, 2015).
As previously mentioned, we used a FreeSurfer anatomical segmentation to define our regions of interest. Anatomical segmentations have also been used in several previous studies (Cammoun et al., 2012; Fornito, Yoon, Zalesky, Bullmore, & Carter, 2011; Tadayonnejad, Yang, Kumar, & Ajilore, 2014). This approach benefits from a subject-specific segmentation in native space, which does not require intersubject registrations. Studies that include intersubject registrations are vulnerable to misregistration (Di Martino et al., 2014). In order to reduce the possibility of spurious correlations, we applied a median filter. This approach runs the risk that connectivity between highly focal voxels may be diminished via the spatial smoothing. Thus, we chose to smooth only using the 28 voxels surrounding the voxel of interest. Given a voxel dimension of 3.4 mm × 3.4 mm × 4.0 mm, the total size of the smoothed voxel including the median filter is 1,248 mm³, which is a reasonably large smoothing kernel for native space and should help reduce chance findings due to noise spikes within the data. We have shown previously that not only structural variability, but also functional variability contributes to differences in the anatomic locations of fMRI signals (White et al., 2001), and thus specific voxels may not be spurious correlations; rather, the higher intensity may be the result of a true underlying focal neural signal that differs spatially between participants.
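As a rough illustration of this native-space filtering step, the sketch below applies a 3 × 3 × 3 median filter to a hypothetical 4-D fMRI array and reproduces the quoted kernel-volume arithmetic. The array shape is a placeholder, and the neighbourhood is approximated by the standard 27-voxel cube (the text describes 28 surrounding voxels, so the exact neighbourhood used in the study may differ slightly).

```python
import numpy as np
from scipy.ndimage import median_filter

# Voxel dimensions quoted in the text: 3.4 mm x 3.4 mm x 4.0 mm.
voxel_volume_mm3 = 3.4 * 3.4 * 4.0          # ~46.2 mm^3 per voxel
kernel_volume_mm3 = 27 * voxel_volume_mm3    # 3x3x3 cube ~= 1,248 mm^3, matching the quoted value
print(f"median-filter kernel volume: {kernel_volume_mm3:.0f} mm^3")

# Hypothetical 4-D fMRI array of shape (x, y, z, time); not data from the study.
fmri_data = np.random.rand(64, 64, 32, 200)

# Apply the median filter to each volume separately so time points are not mixed.
filtered = np.stack(
    [median_filter(fmri_data[..., t], size=3) for t in range(fmri_data.shape[-1])],
    axis=-1,
)
```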
We did not evaluate the variability in the spatial location of the ALPACA-derived peaks. Larger brain regions, such as many of the This study measured alertness by asking subjects to report whether they fell asleep in the scanner. While none of the children reported falling asleep, we did not measure EEG activity and thus it is possible that some of the children may have slept during the scans.
This could have an effect on the results of this study.
Functional connectivity studies, and particularly those involving pediatric populations, are frequently impacted by motion artifacts, which can erroneously increase short-range connectivity and decrease long-range connectivity (Fornito, Bullmore, & Zalesky, 2017; Di Martino et al., 2014; Power et al., 2014). Given that younger children tend to move more than older children, this can have an impact on developmental studies. In this study, we corrected for motion using the "scrubbing" method (Power et al., 2012, 2013), where corrupted volumes are removed. While this method significantly reduces the effect of motion, it is but one of many strategies (Di Martino et al., 2014). Among the drawbacks of the scrubbing method are the loss of data within subjects and the unequal degrees of freedom across subjects.
Another issue relevant to connectome-wide association studies is multiple testing correction. This study calculated the "number of effective tests" for each network type based on the covariance in the data, and used this number to adjust the significance threshold. This is one of many similar methods commonly used in genetics studies to approximate permutation testing (Sham & Purcell, 2014).
Some of the differences in associations between the two networks investigated in this study could simply be due to the threshold chosen for each network. Permutation testing has been used previously in connectomics (Ingalhalikar et al., 2014), but remains a computationally expensive method of multiple testing correction.
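The text reports a covariance-based "number of effective tests" without spelling out the estimator. One common implementation is the eigenvalue-based approach of Li and Ji (2005), sketched below as a plausible but assumed reading of the procedure; the data array and the Sidak-style threshold adjustment are placeholders for illustration.

```python
import numpy as np

def effective_number_of_tests(measures: np.ndarray) -> float:
    """Estimate the effective number of independent tests among correlated measures.

    measures: subjects x connections array of connectivity values. Uses a
    Li & Ji (2005)-style eigenvalue decomposition of the correlation matrix;
    the study may have used a different covariance-based estimator.
    """
    corr = np.corrcoef(measures, rowvar=False)
    eigvals = np.abs(np.linalg.eigvalsh(corr))
    # Each eigenvalue contributes 1 if it is at least 1, plus its fractional part.
    return float(np.sum((eigvals >= 1).astype(float) + (eigvals - np.floor(eigvals))))

# Placeholder data: 500 subjects, 100 connections.
data = np.random.rand(500, 100)
m_eff = effective_number_of_tests(data)
alpha_adjusted = 1 - (1 - 0.05) ** (1 / m_eff)   # Sidak-style adjusted threshold
```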
Another option is to reduce the number of tests by using measures such as the network-based statistic (Zalesky, Fornito, & Bullmore, 2010), or to consider graph theoretical measures that produce node-or graph-level values (Kaiser, 2011;Rubinov & Sporns, 2010). This approach has been used in several studies (Betzel et al., 2014;Crossley et al., 2014;Fornito et al., 2011;Fornito, Zalesky, Pantelis, & Bullmore, 2012;Zhou, Gennatas, Kramer, Miller, & Seeley, 2012), however, it fundamentally shifts the research focus from identification of relevant connections to the interpretation of measures that often do not have a known relation to neuro-biology (Smith, 2012). Lastly, this study included individuals from the general population, rather than solely recruiting "typically developing" children from the community. We utilized a common behavioral and emotional problem inventory to exclude children with high levels of behavior problems to maximize comparability of these data with the existing literature. While most behavioral and emotional problems are robustly measured by this parent-report instrument, the children themselves may arguably be better informants for some types of problem behavior (e.g., internalizing vs. externalizing problems). However, even with some misclassification of problem behavior, the population-based nature of the present sample is highly useful in that it greatly increases the generalizability of findings across all individuals of the population, rather than only the "typically developing" individuals.
| CON CLUS ION
The current study provides both replication and novel findings for age-related maturation of intrinsic connectivity. Replication of findings is noteworthy given our large sample size and narrow age range, coupled with critique regarding less than optimal reproducibility and replication in the field of neuroimaging. Cortico-cortico connectivity was found to increase with age, while connectivity between subcortical regions decreased with age. Some cortico-cortical connections became increasingly focal with age, whereas other cortico-cortical and most cortico-subcortical connections became more diffuse with age. Additionally, we demonstrate the utility of native-space analyses of connectivity and offer examples of how the data can be efficiently and intuitively displayed. Future studies should explore using different anatomical or functional parcellations to determine to what extent the connectivity patterns are influenced by ROI boundaries. | 2018-07-12T07:59:37.309Z | 2018-06-30T00:00:00.000 | {
"year": 2018,
"sha1": "e1e6e8380d91cecf9b5423295216d30c17689e4c",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/brb3.1031",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e1e6e8380d91cecf9b5423295216d30c17689e4c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
218819848 | pes2o/s2orc | v3-fos-license | Bees Occurring in Corn Production Fields Treated with Atoxigenic Aspergillus flavus (Texas, USA)
: A saprophytic soil fungus, Aspergillus flavus , produces aflatoxin (toxigenic strains) in the kernels of corn ( Zea mays L.) and seeds of many other crops. Many strains of A. flavus do not produce toxigenic aflatoxin, and soil application of these atoxigenic strains is a suppressive control tactic to assist in controlling toxigenic conspecifics. Effects of atoxigenic A. flavus applications on honey bees ( Apis mellifera L.) and other bees are unknown, and basic information on bee occurrences in corn fields treated with and without this biological pesticide is needed to inform integrated pest management in corn. Fields with atoxigenic A. flavus applications were compared to nearby control fields in three counties in corn production regions in eastern Texas. In each corn field, twenty bee bowl traps were deployed along four equal transects located between corn rows, with contents of the bowls (i.e. bees) retrieved after 24 hours. Eleven bee genera from four families were collected from corn fields, with only two honey bees collected and zero honey bees observed in transects. The sweat bee genus Agapostemon (primarily composed of the Texas-striped sweat bee A . texanus ) was most abundant in corn fields (44% of the total number of bees collected) followed by long-horned bees ( Melissodes spp., 24%). The southernmost county (i.e. San Patricio) produced over 80% of the total number of bees collected. Bee communities occurring in corn production fields with applications of atoxigenic A. flavus applications were not significantly different from nearby control fields. While little is known of bee resource use in corn production systems in Texas, the abundant yet variable bee communities across latitudes in this study suggests a need to investigate the influence of farming practices on bee resources in regional corn production systems.
Introduction
Aspergillus flavus is a common saprophytic soil fungus which produces toxigenic aflatoxin in the kernels of corn (Zea mays L.) [1], seeds of cotton (Gossypium hirsutum L.) [2], and seeds of many other crops both before and after harvest [3]. Toxigenic A. flavus causes ear rot in corn, one of the most important diseases, which diminishes grain quality and marketability, and livestock health if affected grain is consumed. Corn yields and profitability can be negatively impacted by toxigenic A. flavus by producing aflatoxin on corn before harvest and in storage [4,5], and therefore advancing practices for its control is necessary. A previous study reported that one of several species of Aspergillus causes stonebrood in honey bees (Apis mellifera L.) [6], and therefore applications of A. flavus should consider impacts to pollinator health.
It is expected that bees are minimally exposed to aflatoxin in corn fields, but evidence suggests bees visit corn during flowering [7] and therefore could be exposed to agrochemicals used in corn. Use of atoxigenic conspecific strains of A. flavus is the most widely used biocontrol method for reducing aflatoxin contamination in corn [8], in which toxigenic A. flavus strains were found to be altered and displaced by atoxigenic A. flavus strains [9]. Some registered microbial pesticides that reduce toxigenic A. flavus populations are Aflaguard™ (Strain NRL 21882, Syngenta) and Ensure™ (Strain AF36; Arizona Cotton Research and Protection Council). In Texas, a new product (FourSure™) contains four atoxigenic strains of A. flavus which are expected to provide control of toxigenic A. flavus for several years following application [10,11]. It is recommended that FourSure™ be applied between the 7th leaf stage and tasseling to ensure A. flavus presence and its exposure to foraging insects at the time of flowering. Another bee resource that could be exposed to and affected by applications of A. flavus is soil nesting habitat for native bees, since approximately 75% of over 4000 species of wild bees in North America provision pollen in subsurface-soil brood chambers. However, how adults and immature stages of bees are affected by these pest control applications remains largely unknown.
The impetus for this project was a need to determine if negative impacts to honey bees could occur in fields with applications of commercial atoxigenic A. flavus. In 2003, it was determined that atoxigenic A. flavus strain AF36 in cotton represented low risk to honey bees, yet a high-mortality event observed in a cotton field on the thirteenth day following application [9] emphasized a need for further investigating potential non-target effects. The objectives of this study were to sample bee communities occurring across corn production fields in Texas (USA), and to compare generic richness and relative abundances of bees in fields with and without applications of atoxigenic A. flavus (hereafter FourSure™). The conservation of wild, native bees in corn production systems and further research needs are discussed in relation to findings.
Description of Field Sites
The study was conducted in corn fields in three counties across a latitudinal range from northern to extreme southern Texas (Figure 1). The geographical extent of the study ranged from the Blackland Prairie and Cross Timbers ecoregions in the northern part to the Coastal Prairies in the extreme southern region of the state (Table 1). In Ellis County, corn was planted on 8 March and 1 April 2019, and application of FourSure was performed on 19 May 2019. Corn planting and FourSure application were performed on 22 March and 6 June 2019, respectively, in Grayson County. FourSure was applied at 11.3 kg ha-1 using an all-terrain vehicle-mounted spreader. Temperature and rainfall during the sampling period in each county are listed in Table 2. The temperature in San Patricio County was higher than in Ellis and Grayson counties during sampling. There was no rainfall in the week before sampling in San Patricio County, and thus the soil surfaces of corn fields were dry during the sampling. In contrast, three rain events of 0.3 mm, 33.4 mm, and 0.3 mm occurred on 5 June, 6 June, and 9 June, respectively, before the sampling date (June 11) in Ellis County. The gravimetric water content of soil at 0-10 cm depth was determined by drying soil samples at 105°C for 48 h. The soil surfaces in Ellis County during the sampling were moist. In Grayson County, rain events of 14.5 mm, 16 mm, and 4.3 mm occurred on 16 June, 17 June, and 19 June, respectively, before the sampling date (20 June). Thus, the soil surfaces were wet during the sampling in Grayson County. During sampling in San Patricio County, there were storms moving through and occasional overcast skies and high wind speeds. The average wind speed for 24 h periods in San Patricio County was 5.1 m s-1, while it was 4.3 and 9.3 m s-1 in Ellis and Grayson counties, respectively. The stages of the corn at sampling were silking (R1) and blister (R2). Corn at the time of sampling was late stage and mostly post-anthesis. Other studies in corn have shown bee abundance and diversity to be greatest during flowering [7,12].
Bee Bowl Procedure
Pan traps (i.e. bee bowls) [13] were used to collect foraging bees. Bee bowls were set on 21 May in San Patricio County, 11 June in Ellis County, and 20 June in Grayson County (Table 1). Bee bowls were 104-mL plastic cups (New Horizons, Upper Marlboro, MD, USA) painted fluorescent yellow, blue or white on the inner surface. Five bee bowls, each positioned on 0.9-m elevated wooden stakes, were set in each transect of 20 m with 5-m distance between adjacent bowls (Figure 2). Each field replicate contained four transects with a total of 20 bee bowls established per field. The height of bee bowls was approximately 40% of the height of silking (R1) to blister (R2) stage corn. The extent of the total area in each field sampled in the four transects was less than one ha, and field sizes ranged from 24.3 to 72.8 ha. Two-thirds of each bee bowl was filled with a water and dish soap solution (approx. 5 to 10 drops of Dawn brand liquid soap per liter of water) to serve as a capturing and killing fluid. Bee bowls were left in the field for 24 h, after which all bees from all bowls in each field replicate were collected and transferred into labeled glass vials containing 75% ethanol for preservation. Individual bees were identified to family and genus, and relative abundances of families and genera were compared between control and FourSure-treated fields (n = 11). In addition to bee bowl sampling, in each of the four transects per field, five-minute surveys were conducted after bee bowl establishment to record numbers and types of live foraging insects.
Figure 2.
Diagram depicting the location within a corn field where bees were sampled; circles represent location of bee bowls on wooden stakes.
Statistical Analyses
Data were analyzed by analysis of variance consisting of two treatments (i.e. control and FourSure-applied) and three replications (five replications in Ellis County) for within-site tests. Sites were also combined to test for main effects of treatment and treatment × site interactions on bee relative abundances using Proc Mixed in SAS 9.4 [14]. Treatments were set as fixed effects, and replicates and sites were set as random effects. LSMEANS procedure was used to compare means. Differences were considered significant at P ≤ 0.05.
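The combined-site analysis above is specified in SAS (Proc Mixed). As a rough, non-authoritative analogue, the sketch below fits a linear mixed model in Python with treatment as a fixed effect and site as the grouping (random) factor; the data frame, column names, and counts are placeholders rather than the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data: total bee counts per field replicate (values are illustrative).
df = pd.DataFrame({
    "bee_count": [48, 41, 52, 39, 8, 6, 9, 5, 7, 4, 6, 5],
    "treatment": ["control", "foursure"] * 6,
    "site": ["SanPatricio"] * 4 + ["Ellis"] * 4 + ["Grayson"] * 4,
})

# Treatment as a fixed effect and site as a random grouping factor, a rough
# analogue of the Proc Mixed model with sites and replicates as random effects.
model = smf.mixedlm("bee_count ~ treatment", data=df, groups=df["site"])
result = model.fit()
print(result.summary())
```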
Results
Eleven bee genera among four families were collected: Apidae, Colletidae, Halictidae, and Megachilidae. A total of 245 bees were collected, and the total number of bees collected was not significantly different between FourSure-treated and control fields. The Texas striped sweat bee (Agapostemon texanus) accounted for 44% of the total number of bees collected from three counties over the entire study period. Long-horned bee (Melissodes spp.) was the second most abundant bee in pan traps, accounting for 24% of the total number of bees collected, while the metallic sweat bee genus/subgenus Lasioglossum (Dialictus spp.) was the third-ranked bee in abundance (23%). These three bee taxa constituted 91% of the total number of bees collected. The small carpenter bee (Ceratina spp.), chimney bee (Diadasia spp.), and sweat bees in the genus Halictus were less common (Table 3). Only two honey bee and two green sweat bee (Augochlorella spp.) individuals were collected in bee bowls, while the long-horned bee (Svastra spp.), a masked bee (Hylaeus spp.), and a leafcutter bee (Megachile spp.) were collected as singletons. The dominance of Halictidae in our samples was expected considering the inherent sampling bias regarding this taxon and its typically high occurrence in pan traps/bee bowls [15,16]. Nonetheless, relative occurrences and frequencies of bee taxa across fields provided robust data to investigate generalized community structures (e.g. relative abundances of bee genera) and differences among treatments. Total number of bees was not significantly different (P > 0.09) between FourSure-treated and control fields in all counties (Table 4; Figure 3). Treatments were not significantly different (P = 0.06) in total number of bees when data from all three counties were combined. Although there were no significant treatment × site interactions (P = 0.42), there was a greater total number of bees collected in San Patricio County than in Ellis and Grayson counties (Table 4). Because of low numbers of bees collected in Ellis and Grayson counties, an analysis of bee data from San Patricio County only was conducted using the three dominant bee taxa, i.e. Texas striped sweat bee, long-horned bee, and metallic sweat bee (Table 5). In San Patricio County, the differences in numbers of Texas striped sweat bees and long-horned bees between FourSure-treated and control fields were not significant (P = 0.80 and 0.63, respectively). Although the control fields had greater numbers of metallic sweat bees than did treated fields, the difference was not significant (P = 0.30).
Discussion
This study documented honey bee and native bee communities occurring in both atoxigenic Aspergillus flavus-treated and nearby control corn fields across different corn production zones in Texas. While previous studies have reported honey bees foraging in corn [16,17], we found extremely few honey bees, which is similar to an earlier study [12] in which bee bowls were elevated at tassel height and few honey bees were recovered from traps. It was reported that height of bee bowl placement with the corn canopy may affect sampling accuracy of the pollinator community [15]. A previous study found a more abundant pollinator community in bee bowls deployed at tassel height than those deployed at ear height or ground height [12]. In a recent study in Texas pasturelands, honey bees were found to be the second most abundant after sweat bees of Halictidae family, using bee bowls on the soil surface [18]. Thus, it appears that the presence of extremely few honey bees in this study may not be due to bias associated with the height of the collection device (i.e., bee bowl).
Relatively high and unexpected abundances of wild native bees foraging in corn were counted in both FourSure-treated and nearby control fields in the current study. There were no differences in bee relative abundances between A. flavus-treated and control fields in each county, but greater bee abundances, particularly ground-nesting bees, were found in San Patricio County, and fields in this county generally contained lower soil moisture than those in the other sampled counties. Most native bees in Texas are ground-nesters and prefer well-drained ground habitat [19], and therefore soil conditions in corn could affect local uses by bees. Ground-nesting bees were more abundant in perennial grass pastures with low soil moisture compared to grass pastures with high soil moisture in the Texas High Plains [18]. The most abundant bee in our study was the Texas striped sweat bee followed by long-horned bee (Table 3). A sweat bee [Lasioglossum (Dialictus) spp.] is the next most abundant recovered in the current study. These results agree with a previous study [12] in which the most abundant bee species captured was [Lasioglossum (Dialictus) spp.] followed by Melissodes spp. in corn fields in Iowa.
The reasons for differences in abundances of wild bees between San Patricio and Ellis/Grayson are not known, but differences in weather conditions around the time of sampling (particularly rainfall) may be associated with patterns observed. There was no rainfall in San Patricio County, whereas three rain events occurred in Ellis and Grayson counties prior to sampling. Measurements of soil water contents (g g -1 soil) as described by (20,21) indicated that soil water contents in San Patricio County (0.15) was lower compared to Ellis (0.25) and Grayson (0.24) counties.
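The gravimetric water contents quoted above (g g-1 soil) follow the standard mass-based definition; a minimal sketch with placeholder masses (not the study's measurements) is given below.

```python
def gravimetric_water_content(wet_mass_g: float, dry_mass_g: float) -> float:
    """Gravimetric soil water content: grams of water per gram of oven-dry soil."""
    return (wet_mass_g - dry_mass_g) / dry_mass_g

# A sample losing 15 g of water per 100 g of dry soil gives 0.15 g g^-1,
# the value reported for San Patricio County (masses here are placeholders).
print(gravimetric_water_content(wet_mass_g=115.0, dry_mass_g=100.0))  # 0.15
```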
Furthermore, while landscape context was not investigated here, larger areas of wild and uncultivated habitat in farmland could be influencing bee diversity and abundances [22], and this could have influenced the variability in bee abundances observed across latitudes. Although a functional relationship between bee abundance and corn plants is not clear, the observed diversity and abundances of bees suggest that the corn fields could be providing resources for native bees. Further studies of bees in corn production systems in Texas are needed to better understand native bee resource use in corn fields in relation to weather variation and other local and landscape environmental factors, including those that could influence bee development in soil nests.
Conclusions
This study appears to be the first attempt to document bees occurring in corn fields in Texas. This survey of bees in corn was in part prompted by previous observations of dead honey bees in a cotton field following application of atoxigenic Aspergillus flavus (AF36 strain) to flowering-stage cotton in Arizona. We documented the honey bees and wild native bees in corn fields treated with atoxigenic Aspergillus flavus. The clearest result was that both FourSure-treated and control corn fields (particularly in San Patricio County) had fairly high and unexpected abundances of wild native bees foraging in corn. This suggests that atoxigenic FourSure had no negative effects on bee communities, yet toxicological studies and more field data are needed to elucidate potential negative impacts on bees as a result of its application. Among corn fields, only two honey bees were collected or observed during this study, which suggests a dearth of honey bees in corn production fields at this production stage. The reason for the greater abundances of bees in southerly San Patricio County is unknown, but differences in rainfall influencing soil moisture conditions during the sampling may have contributed to the observed variation. The potential benefits to pollinators in acquiring resources in corn (i.e. pollen and soils for nesting) and the use of corn by wild bees found in this study suggest a need to better understand non-target impacts to native fauna in corn production systems. | 2020-04-23T09:03:15.740Z | 2020-03-12T00:00:00.000 | {
"year": 2020,
"sha1": "7144ae94f2c3883429b4a728020a29ac0d254793",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4395/10/4/571/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5956b9fbb7bb68cbbea7713c82f5cab1205de612",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
119192133 | pes2o/s2orc | v3-fos-license | X-ray spectral and timing investigations of XTE J1752-223
We report on X-ray monitoring observations of the transient black hole candidate (BHC) XTE J1752-223 with the Rossi X-ray Timing Explorer (RXTE). The source was discovered on 2009 October 23, and during its low/hard state, which lasted for at least 25 days, all timing and spectral properties were similar to those of Cyg X-1 during its canonical hard state. The combined PCA/HEXTE spectra were well fitted by an absorbed broken power law with a high energy cutoff. When RXTE observations were resumed, after an observational gap due to solar constraint, the source was in the hard intermediate state. The evolution through the hardness intensity diagram and the timing properties observed in the power density spectrum suggest that the source crossed all the canonical BHC states. We discuss the different states and present the results of our spectral and timing investigations.
Introduction
Black hole X-ray transients (BHTs) stay most of the time in quiescence. They represent the majority of the black hole binary (BHB) population known so far. During outburst BHTs show a characteristic evolution of their spectral and temporal properties. This led to the definition of different states: at the beginning of the outburst the source is in the so-called low/hard state (LHS); it then evolves to the high/soft state (HSS) and finally returns to the LHS. Although this general behaviour is widely agreed on, the exact definitions of the states, and especially of the transitions between these states, are still under debate. In this work we follow the classification of [2] (see however [5] for an alternative classification and [6] for a comparison).
XTE J1752-223 was discovered by the Rossi X-ray timing explorer (RXTE) on 2009 October 23 [4] at a 2 to 10 keV flux of 30 mCrab. A daily monitoring by RXTE to follow up the outburst evolution was triggered by significant similarities with the typical properties of a BHT during the low hard state (LHS) as well as detections of an optical and a radio counterpart. An overview paper, including spectral and time variability studies, based on RXTE Proportional Counter Array (PCA) data, was presented by [10]. A two day long RXTE observation taken in the early phase of the outburst was analysed by [7]. The results obtained from MAXI GSC and Swift were presented by [9] and [1], respectively.
Observations and data analysis
We investigated 206 RXTE observations taken between 2009 October 26 and 2010 July 3, which cover the whole outburst, to present a comprehensive spectral-timing study of XTE J1752-223. In order to do so, we included data obtained by the High Energy X-ray Timing Experiment (HEXTE; 20-200 keV) on board RXTE. This means that, compared to [10], we also investigate the high-energy range above ∼45 keV.
For our timing analysis, we used PCA channels 0-35 (2-15 keV) only. The PCA Standard 2 mode (STD2), which covers the 2-60 keV range with 129 channels, was used for spectral analysis. Energy spectra were extracted from PCA and HEXTE data using the standard RXTE software within HEASOFT V. 6.9. From the PCA, only Proportional Counter Unit 2 data were used. From HEXTE we used Cluster B data for observations taken before 2009 December 14. For later observations the "on source" spectrum was obtained from Cluster A, while the background spectrum was estimated using Cluster B data. To account for residual uncertainties in the instrument calibration, a systematic error of 0.6 and 1 per cent was added to the PCA and HEXTE data, respectively. Nevertheless there are still additional residuals in the HEXTE spectra obtained after 2009 December 14. We will address this point in more detail in Sect. 4.
Timing investigations
The Proportional Counter Array (PCA) light curve, using data of PCU #2, is shown in Fig. 1. The count rate is rather constant during the first part of the outburst (dark blue points in Fig. 1 at T < −60 d), which corresponds to the initial LHS [7]. During the following gap the source was not observable due to solar constraint. In the first observation taken after this gap the count rate had increased by about a factor of two. From this point in time the source decreased in brightness, apart from two periods of re-brightening. Figure 2 shows the hardness intensity diagram (HID), which gives the PCU2 count rate depending on the hardness. XTE J1752-223 traces the standard q-shaped pattern, starting in the upper right corner (dark blue dots in Fig. 2) and evolving in a counterclockwise direction.
The rms-intensity diagram (RID), which gives the PCU2 count rate depending on the total rms, is shown in Fig. 3. It was introduced, based on GX 339-4 data, in a recent paper by [8]. This diagram makes it possible to constrain different states without needing any spectral information. All observations of the LHS are close to the line indicating a fractional rms of 40%. The onset of the HIMS is marked by a blue dot, that of the SIMS by a green dot. The onset of the LHS at lower luminosity is marked in violet. More information on the different states is given in Sect. 5.
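The HID and RID are built from quantities that can be computed directly from band-resolved light curves and a power density spectrum. The sketch below shows one conventional way to do this; the band definitions and the assumption that the PDS is already noise-subtracted and rms-normalized are illustrative choices, not details taken from the authors' pipeline.

```python
import numpy as np

def hardness_ratio(hard_rate, soft_rate):
    """Hardness as a hard-band / soft-band count-rate ratio (band edges assumed)."""
    return np.asarray(hard_rate, dtype=float) / np.asarray(soft_rate, dtype=float)

def fractional_rms(power_rms2, freq):
    """Total fractional rms from a noise-subtracted, (rms/mean)^2-normalized PDS.

    Trapezoidal integration over frequency gives the variance; its square root
    is the fractional rms (e.g., ~0.4 for the 40% level quoted for the LHS).
    """
    power_rms2 = np.asarray(power_rms2, dtype=float)
    freq = np.asarray(freq, dtype=float)
    variance = np.sum(0.5 * (power_rms2[1:] + power_rms2[:-1]) * np.diff(freq))
    return float(np.sqrt(variance))
```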
In a first step, we tried to fit the PCA/HEXTE spectra with one-component models, such as a power law with cut-off or a multi-colour disc blackbody. However, all models failed to describe the spectra properly (see also [7]). Following [7], the PCA/HEXTE spectra were fitted using an absorbed broken power law with a high energy cut-off. To account for the excess at 6.4 keV, a Gaussian centered at that energy was added. From day 2 onwards until day 68 an additional disc blackbody model was needed, representing the emission of the soft X-ray disc surrounding the black hole. The foreground absorption was fixed at N_H = 0.72×10²² cm⁻² [7].
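As a rough sketch of the continuum shape described here, the function below implements a broken power law with an exponential high-energy cutoff of the highecut type. The cutoff energy, the exact model components, and the omission of absorption, the 6.4 keV Gaussian, and the disc blackbody are simplifications and assumptions, not a reproduction of the authors' fit setup.

```python
import numpy as np

def broken_powerlaw_highecut(energy_kev, norm, gamma1, gamma2,
                             e_break=10.0, e_cut=20.0, e_fold=145.0):
    """Broken power-law photon spectrum with an exponential high-energy cutoff.

    The default break and fold energies echo the LHS values quoted later in the
    text; e_cut and the highecut-style form are assumptions for illustration.
    """
    energy_kev = np.asarray(energy_kev, dtype=float)
    below = norm * energy_kev ** (-gamma1)
    above = norm * e_break ** (gamma2 - gamma1) * energy_kev ** (-gamma2)
    photons = np.where(energy_kev < e_break, below, above)
    cutoff = np.where(energy_kev > e_cut,
                      np.exp((e_cut - energy_kev) / e_fold), 1.0)
    return photons * cutoff
```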
As already mentioned in Sect. 2, all HEXTE observations taken after day 0 are affected by additional residuals, which are related to the fact that the HEXTE detectors have stopped rocking. To take these residuals into account, we allowed the strength of the HEXTE background to be renormalized (corback command in ISIS) during fitting and added three additional Gaussians at the positions of the strongest residuals (at ∼63 keV, ∼53 keV, and ∼40 keV). Nevertheless, some spectral fits still yielded unacceptably high values of χ²_red or totally unphysical parameter values. For these observations, we decided to model the HEXTE background during fitting, using a sophisticated model that takes known residual lines into account.
The temporal evolution of χ²_red as well as of selected spectral parameters is given in Fig. 4. To derive the inner disc radius, a distance of 3.5 kpc and an inclination of 70° were assumed. The behaviour of spectral parameters during different states is presented in Sect. 5.
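The conversion from a disc blackbody normalization to an apparent inner radius is usually done with the standard diskbb convention N = (R_in / D_10kpc)^2 cos i; the snippet below assumes that convention together with the quoted distance and inclination, and the example normalization is hypothetical (colour or torque corrections are ignored).

```python
import math

def inner_radius_km(diskbb_norm: float, distance_kpc: float = 3.5,
                    inclination_deg: float = 70.0) -> float:
    """Apparent inner disc radius (km) from a diskbb-style normalization.

    Assumes N = (R_in / D_10kpc)^2 * cos(i); whether the authors applied any
    further corrections is not stated, so treat the result as an apparent radius.
    """
    d10 = distance_kpc / 10.0
    cos_i = math.cos(math.radians(inclination_deg))
    return d10 * math.sqrt(diskbb_norm / cos_i)

# Example with a hypothetical normalization of 2500: roughly 30 km for
# the quoted 3.5 kpc distance and 70 degree inclination.
print(inner_radius_km(2500.0))
```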
The different states and their timing and spectral properties:
• For the first 29 days (dark blue dots in Figs. 1, 2; about 50 observations) the count rate was rather constant. During this time the source was in the low/hard state (LHS), showing rms variability of ∼40% (see Fig. 3, [7]). The spectral components are rather constant during this state (see also Fig. 4): fold energy (cut-off) ≈145 keV, break energy ≈10 keV, photon index below break (PhI 1 ) ≈1.53, photon index above break (PhI 2 ) ≈1.28. Furthermore, they are very similar to those of Cyg X-1 [7].
• After that XTE J1752-223 was not observable with RXTE for a further 60 days due to solar constraint.
• When the source was observed again with RXTE its count rate has increased and the source was in the hard intermediate state (HIMS). During this observation and the following two observations the source showed type C QPOs (Quasi Periodic Oscillations) at 2.2 Hz, 4.1 Hz, and 5.5 Hz, respectively, while the rms variability decreased from 25% to 18% (see Fig. 3).
The spectrum was softer than in the LHS, with PhI 1 ∼2.8 and PhI 2 ∼2.0. The high energy cut-off is no longer well constrained.
• XTE J1752-223 evolved further through the soft intermediate state (SIMS), showing type A/B QPOs, and an rms variability of less than 10%. In the following the source showed a main transition to the high/soft state (HSS) as well as several secondary transitions between the SIMS and HSS. A detailed discussion of these transitions will be given in a forthcoming paper. With the transition to the SIMS R in was ∼60 km and T in was ∼0.6 keV. During the HSS R in increased slightly, while T in decreased continuously. • After a further 59 days XTE J1752-223 passed through another HIMS at lower luminosity.
During this transition T in as well as PhI 1 decreased rapidly.
• Finally the source entered into the LHS again at lower luminosity. The spectral components are rather similar to those at the beginning of the outburst, apart from the fold energy, which cannot be well constrained. This is partly due to the source being fainter, but even more due to the large uncertainties in the HEXTE spectra.
• In total, XTE J1752-223 was in outburst for more than ∼300 days and evolved through all canonical BHC states before it faded into quiescence again. | 2011-03-22T17:03:22.000Z | 2011-03-22T00:00:00.000 | {
"year": 2011,
"sha1": "427c7b0b8e107d9e6f3257c98eee2d8254fede57",
"oa_license": "CCBYNCSA",
"oa_url": "https://pos.sissa.it/123/032/pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "427c7b0b8e107d9e6f3257c98eee2d8254fede57",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
268694703 | pes2o/s2orc | v3-fos-license | What is mental health and disorder? Philosophical implications from lay judgments
How do people understand the concepts of mental health and disorder? The objective of this paper is to examine the impact of several factors on people's judgments about whether a condition constitutes a mental disorder or a healthy state. Specifically, this study examines the impact of the source of the condition, its outcome, individual valuation (i.e., the value the individual attaches to the condition), and group valuation (i.e., the value the relevant group attaches to the condition). While we find that people's health and disorder judgments are driven by perceived dysfunction, we also find that health and disorder judgments are impacted differently by these factors. Health judgments are impacted by outcome and individual valuation, and disorder judgments are impacted by condition source. These results suggest that the folk concept of mental health is positive (i.e., mental health is more than the absence of mental disorder) and normativist (i.e., value judgments play a significant role in determining whether a condition counts as healthy), while the concept of mental disorder aligns with a naturalist perspective, at least to the extent that dysfunction plays an important role in categorizing a condition as a disorder. However, our finding that people's dysfunction judgments are influenced by individual valuation and outcomes poses a strong challenge to naturalist accounts.
How health and disease are characterized has important implications for medical research, public health measures, and clinical decision-making. As a result, systematic reflection on the nature of these concepts has garnered considerable and ongoing interest within philosophy of medicine (see e.g., Giroux, 2016; Radden, 2019; Reiss & Ankeny, 2016). Despite the divergence of views on several key issues among philosophers, there are two issues that warrant particular attention. We will refer to these issues as the evaluative issue and the relational issue (for reviews, see Kingma, 2019; Murphy, 2021).
Despite what is at stake, many researchers now argue that the philosophical debates concerning the concepts of health and disease have hit a standstill, with conceptual analysis stalling between conflicting intuitions (Schwartz, 2017;Lemoine, 2013;Sholl, 2015;Fuller, 2018).There is growing sentiment that conventional conceptual analysis alone cannot propel philosophical debates forward, prompting some to argue in favor of incorporating different methods, including empirical ones, such as those of experimental philosophy (De Block & Hens, 2021;Griffiths & Stotz, 2008).Grasping people's understanding of the concepts of health and disease via empirical methods might carry important implications for philosophical debates, as many think that definitions and conceptual analyses should align with common sense judgments when possible.
To date, there have been several studies that have examined the concept of mental disorder, though not with the main aim of contributing to philosophical debates related to the evaluative and relational issue (Kirk et al., 1999;Béghin & Faucher, 2023;Wakefield et al., 2006;Wakefield, 2021;Tse & Haslam, 2023).Instead, their focus has been examining how closely the lay judgments align with the definition of mental disorder described in the DSM-5.Here we will highlight three recent studies that have directly targeted the evaluative and relational issue (Machery, 2023;Varga and Latham, forthcoming;Varga, Latham, and Machery, forthcoming).These studies utilized a contrastive vignette technique to investigate how individuals understand and categorize health and disease, what factors influence their decisions to label a condition as health or disease, and how these judgments vary across different demographic groups.
Speaking to the relational issue, one study has found that most lay people, as well as medical students, deploy a positive concept of health, in which health is more than the absence of disease (Varga and Latham, forthcoming). Speaking to the evaluative issue, Machery (2023) examined how people's disease judgments are influenced by whether the condition is typical, involves dysfunction, and is disvalued by the group. Machery's findings tentatively indicated that the folk concept of disease is naturalistic (i.e., value judgments do not matter for whether a condition is a disease). Finally, with respect to both issues, another study examined the effect of typicality, dysfunction, individual valuation, and group valuation on both people's health and disease judgments (Varga, Latham, and Machery, forthcoming). Supporting naturalism, only dysfunction was found to have a significant effect on health or disease judgments: typicality, individual and group valuation did not appear to play any role in determining whether a condition counts as a disease or someone is healthy. While these studies provide some empirical traction on these issues, they possess limitations which mean that caution is warranted. First, these studies only considered physical conditions, and it is very plausible that these results would vary significantly if mental conditions were being evaluated. Second, while it seems that people's judgments are only sensitive to dysfunction, very few participants thought that the conditions being evaluated in two studies (i.e., purple eyes leading to color blindness) were a disease. This indicates the existence of important factors still unaccounted for in our understanding of health and disease.
This study extends the findings of the previous research. First, we focus on mental health and mental disorder, noting that while both "disease" and "disorder" are standardly comprehended as involving deviations from functional norms, in psychiatry, conditions are standardly referred to as "disorders" (e.g., Obsessive-Compulsive Disorder) rather than "diseases". Second, we ask participants to evaluate a mental condition that is clearly atypical and involves a dysfunction. While previous studies did not find any effect of individual and group valuation in the case of physical conditions, it is possible that they might influence people's health and disease judgments regarding a mental condition. Third, in addition to individual and group valuation, we also investigate the impact of two new factors: outcome (i.e., whether the condition results in a beneficial or detrimental effect for the individual) and source (i.e., whether the condition is caused by genetic factors or upbringing). These factors were not only chosen to improve our understanding of how people conceptualize health and disease, but also to yield insights relevant for philosophical debates. The findings offer insights into how lay views match up to philosophical views and carry implications extending beyond philosophical discussions, impacting areas like public health initiatives and clinical psychiatry.
The plan for the paper is as follows.In Sect. 1, we provide a brief overview of the existing philosophical and empirical literature in this field.Then, in Sect.2, we describe the experimental materials, methods, and hypotheses that guided our research.To provide access to our supplementary data, an appendix has been included.In Sect.3, we present the results of the study.Finally, in Sects.4 and 5, we discuss our findings, their implications for philosophical debates, and describe some limitations of our research.
Vignette-based experimental design
Studies on health and disease in medical sociology, anthropology, and psychology have typically aimed to establish a connection between individuals' beliefs and attitudes concerning a specific chronic medical condition, such as diabetes, and their corresponding behaviors, like dietary practices (Hughner & Kleine, 2004). Very few studies have explored lay conceptions of health and disease. Utilizing different kinds of surveys and unstructured, in-depth interviews, these studies have most frequently identified several major "themes" in people's conceptualization of health and disease (e.g., health as the absence of illness, as a capacity, as equilibrium), including the presence of cross-cultural differences (e.g., Herzlich, 1973; Weller, 1984; Williams, 1990; Blaxter, 1990; Jensen & Allen, 1994; Hughner & Kleine, 2004; Bishop & Yardley, 2010). These studies typically ask participants to define their concepts, articulate their understanding, or describe a healthy individual they know (e.g., Blaxter, 2010). While such approaches are well-suited to identifying broad "themes" and rich accounts of the narratives that surround health and disease, they are not well-suited to determining how such "themes" intersect, or what happens when they diverge or conflict. For instance, while there is good evidence that lay people are negativists and define health as the absence of disease (Calnan, 1987; McKague & Verhoef, 2003), there is some equally good evidence that they conceptualize health as more than the absence of disease, such as the ability to function according to one's own expectations (McKague & Verhoef, 2003) or fulfill social roles (Blaxter, 1990; for a review, see Hughner & Kleine, 2004).
Our approach does not seek to identify "themes", and, consequently, does not ask participants to define, describe or articulate the contents of their concepts of health and disease.Instead, we ask them to deploy these concepts by making judgments about various scenarios.By examining patterns of judgments to systematically varied scenarios, we can gather defeasible evidence about the content of people's concepts, even if that content is largely implicit and opaque to people.Our approach thus is distinct from the aforementioned research in the social sciences, and from orthodox work in philosophy of medicine, which typically relies on conceptual analysis alone.
More specifically, the current study employs a vignette-based methodology.Vignettes describing carefully crafted scenarios are presented to participants who are then asked to respond.Using vignettes allows for the manipulation of certain factors while controlling others, making it possible to investigate how judgments are affected by factors that might be difficult to tease apart in real-life scenarios.Vignette-based designs typically consist of controlled factors (which remain constant across vignettes) and experimental factors (which are manipulated across vignettes), allowing for the assessment of their impact on dependent variables (Evans et al., 2015).By comparing people's responses between different vignettes researchers gain evidence about the content of people's concepts.
It is important to note that in the medical and health psychology literature, vignettes have been occasionally used to identify factors that influence medical decisions and variations in healthcare practices (e.g., Payton & Gould, 2023;Bachmann et al., 2008).In addition, "anchoring vignettes" have been used to improve the betweengroup comparability of self-assessed health surveys (Grol-Prokopczyk et al., 2011).While important work, our study diverges from these by using vignettes to explore the influence of factors on people's evaluations of health and disease.
Background and cues
The present study involves vignettes in which the person we describe, Katie, is unable to make slow, methodical, deliberative decisions. Her condition is described as atypical (i.e., not possessed by most people) and dysfunctional (i.e., interferes with normal functioning). Many theorists judge that a condition being atypical and dysfunctional is necessary (or even sufficient) to be pathological. In Christopher Boorse's Biostatistical Theory (BST), dysfunction is both a necessary and sufficient condition for disease or mental disorder, while in Jerome Wakefield's Harmful Dysfunction Analysis (HDA) (1992, 2014) dysfunction is a necessary condition. Moreover, dysfunction must be tightly linked to atypicality. For instance, in BST, the function of a trait in a reference class is its statistically typical contribution to survival and reproduction, and a pathological condition is species-subnormal part-function (Boorse, 1977, 1997, 2014). Thus, whether something counts as subnormal function depends on levels of functioning in the reference class, and conditions that are typical in a reference class will not count as pathological, even if they result in a decrease in survival and reproduction (see Schwartz, 2007, for a critique).
In this study, we systematically compared the effect of four factors (source, outcome, individual valuation, and group valuation) on people's health and disorder judgments. Each factor plays an important role in the evaluative and relational issues described earlier in the paper.
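To make the design concrete, the snippet below enumerates the fully crossed set of vignette conditions implied by these four factors, assuming two levels per factor as the factor descriptions suggest; the level labels are paraphrases rather than the study's actual vignette wording.

```python
from itertools import product

# Two illustrative levels per factor; labels paraphrase the manipulations
# described in the text and are not the study's exact wording.
factors = {
    "source": ["genetic", "upbringing"],
    "outcome": ["beneficial", "detrimental"],
    "individual_valuation": ["valued by Katie", "disvalued by Katie"],
    "group_valuation": ["valued by the group", "disvalued by the group"],
}

conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(conditions))   # 16 candidate vignette conditions in a fully crossed design
```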
To address the relational issue, we asked people to make both health and disorder judgments, to see whether they would be differentially affected by these factors. If they are, then this would challenge both the BST and the HDA, which embrace negativism and comprehend health as merely the absence of disease and not the presence of some positive state (e.g., Boorse, 1977, 1997, 2014; Wakefield, 1992, 2005, 2014). While we anticipate that evaluations of health and disorder will diverge, and so might align with positivism, our approach would also enable us to explore to what extent individuals understand health as a favorable condition or capability, possibly associated with well-being. Both the World Health Organization and several philosophers have tied health to the possession of certain skills or abilities that are essential for achieving well-being. For example, Lennart Nordenfelt argues that the second-order abilities that characterize health are those that are necessary for pursuing "vital goals," where a vital goal for a person is something that is either part of or necessary for the person to achieve a minimum level of happiness or well-being. Wren-Lewis and Alexandrova (2021, 696) present an account of mental health grounded in well-being, proposing a definition of mental health as "the capacities of each and all of us to feel, think, and act in ways that enable us to value and engage in life." Sridhar Venkatapuram (2011, 2013) argues that health is a necessary precondition for well-being and sees health as the ability to have abilities that are objective and universal conditions for basic well-being. Finally, Graham (2010) argues that mental disorder involves an impairment in a fundamental psychological ability required to lead any kind of a "decent or personally satisfying life" (Graham, 2010, 131-132). Overall, while we expect the patterns of judgments we find to be of relevance for the relational issue (likely aligning with positivism), our approach will also enable us to address accounts that link health to well-being or to the possession of skills or abilities that are essential for well-being.
To address the evaluative issue in philosophical debates, we hypothesized that the key factors would be individual valuation, group valuation, and outcome. Valuation (individual and group) is of key interest in the debate between naturalism, normativism and hybridism. While naturalists hold that health and disease are value-neutral properties independent of value judgments, normativist (e.g., Nordenfelt, 2007; Cooper, 2002) and hybrid perspectives on health and disease (e.g., Wakefield, 1992) maintain that the concepts of health and disease are value-laden, reflecting what is deemed valuable or otherwise. Normativists argue that whether something counts as a pathological condition will depend on evaluative judgments about it being undesirable or desirable (Cooper, 2002, 2005), whereas the HDA holds that harm is a prerequisite for a condition to be classified as a disease or disorder.
Mirroring a division found in the philosophical literature, the distinction we introduce between individual and group valuation will allow us to address specific accounts more directly. Some philosophers think that what matters is individual valuation. For instance, Nordenfelt defines health in relation to the individual's ability to successfully pursue "vital goals," leaving open the possibility that agents may have different vital goals, depending on what they value in life. One consequence of this view is that the lack of a certain ability might undermine the health of one person with a particular set of values but not another (see Cooper, 2002, 2005; Wren-Lewis & Alexandrova, 2021). In contrast, others think that what matters is group valuation. For example, according to the HDA, what counts as a harmful condition depends not on individual valuation, but on social norms that stem from the values of the culture that the individual is a member of. One consequence of this view is that a particular trait could be viewed as a disorder in one culture but not in another. Other accounts deny that health and well-being are agent-dependent. For instance, Sridhar Venkatapuram (2011, 2013) argues that health is a necessary precondition for well-being and sees health as the ability to have abilities that are objective and universal conditions for basic well-being. Thus, for Venkatapuram, a person's health (and well-being) can be compromised even if the person does not disvalue the condition and does not consider herself unhappy. Something similar holds for Graham's account, which comprehends the relevant abilities (e.g., the ability to understand oneself and the world, to take responsibility for oneself and make decisions) as Rawlsian primary goods that everyone would prefer in the original position, because these goods are compatible with all ideas of the good life (Graham, 2010, 147-149).
The factor outcome introduces an aspect that is highly relevant for both the evaluative issue and the relational issue.While the factors individual and group valuation are concerned with evaluations of the condition, the factor outcome is concerned with the outcomes of that condition.While these evaluations are no doubt tightly linked, they nevertheless can come apart.For example, I might have a negative attitude towards a condition that I possess, even if that condition results in positive outcomes for me.Speaking first to the evaluative issue, the outcome of a condition (i.e., whether it leads to positive or negative results for the person with the condition) does not matter for whether or not the condition warrants the label "disorder."Instead, for naturalists, what matters is the existence of a dysfunction, a deviation or failure in normal physiological or psychological functioning such as an inability to make decisions in a psychologically flexible manner, which is generally expected in healthy individuals.Outcome does not directly matter because it could be attributable to random chance and is not intrinsically tied to the nature of the condition itself.In contrast, normativists and hybridists could allow outcome to matter for health and disorder judgments, but only via individual or group valuation.This means that the outcome of a condition could influence whether it is considered a disorder, but only if that outcome is deemed undesirable or harmful by the standards of the individual or the relevant group.To explore this further, we defined positive outcomes as those that support the individual in realizing her vision of a good life, enhancing her wellbeing, and promoting long-term happiness.Conversely, negative outcomes are those that obstruct or hinder her in achieving these aspirations.By distinguishing between positive and negative outcomes in this way, we can examine the interplay between the nature of the condition, its outcomes, and different philosophical perspectives on what constitutes a disorder.
The factor outcome is also relevant for the relational issue.Expanding on our exploration of positivism, we hypothesized that adding outcome to the design of our study would provide us with insights into how the abilities associated with health and disorder are conceptualized.Positivist accounts will agree that something like being in important relationships or making reflected decisions are vital goals and that the compromised or absent ability to achieve them counts as a disorder.But what if one succeeds in attaining their vital goal without possessing the relevant ability?On some views like Graham's, the condition should still count as a disorder.This, however, is not what we predicted.
Finally, as with outcome, orthodoxy in the philosophical debates holds that the source of a condition is not relevant to whether a condition is a disorder.This also aligns with some recent empirical work.For instance, Machery (2023) tested to what extent different physical sources of a condition (i.e., genetic mutation, bacterium, nuclear power plant) impacted people's disease judgments.Machery's investigations were motivated by early anthropological accounts of disease that highlighted the nature of a condition's cause as being relevant to people's disease judgements (e.g., Clements, 1932;Rogers, 1944;Young, 1978;Foster, 1976).Ultimately, Machery did not find any effect of the different physical sources.Nonetheless, there are good reasons to think that results could be different when comparing physical versus social causes, especially when evaluating mental disorders.For instance, Machery (2023) maintains that our disease concept is part of folk biology, but it is plausible that (certain) mental disorders might instead be part of folk psychology.Furthermore, while lay beliefs about the origins of disease typically align with knowledge derived from scientific research, this is often not the case with mental disorders where people's etiological beliefs appear to matter (Troisi & Dieguez, 2022).Interventions on physical disorders most often proceed via underlying physical mechanisms, but interventions on mental disorders can also proceed via psychosocial mechanisms.As a result, we predicted that the source of a condition might influence whether it is considered a disorder.Specifically, we predicted that conditions that result from a biological source would be more likely to be viewed as disordered.In contrast, conditions that result from social factors would be less likely to be viewed as disordered.Instead, they might be viewed as something else, for example, a "lifestyle problem".
Methods and results
The study was pre-registered at https://osf.io/fy23w. 400 people were recruited online using Prolific. 53 were excluded from the analyses for failing to respond to all the questions or answer all the attention and comprehension checks correctly. The final sample consisted of 347 participants (173 female, 8 trans/non-binary, aged 19-79; M = 38.75, SD = 13.17). Ethics approval for the study was obtained from the Aarhus University Human Ethics Committee. The study was a 2 (source: genetic vs. upbringing) × 2 (outcome: positive vs. negative) × 2 (individual: positive vs. negative) × 2 (group: positive vs. negative) between-subjects design. Participants were randomly assigned to one of 16 conditions. The vignettes of the study read as follows (the vignettes vary across conditions between brackets): Human decision-making has evolved to include two distinct mental systems, known as System 1 and System 2. System 1 is characterized by unconscious, rapid and intuitive decision-making, and is typically used to make quick and routine decisions. Since System 1 decision-making operates outside of our conscious awareness, we have limited conscious control over these intuitive choices, and we typically do not know why we make them. In contrast, System 2 is characterized by conscious, slow, methodical, and deliberate decision-making, and is typically used to make complex and important decisions. Since System 2 decisions rely on conscious reflection, we have conscious control over them and we typically know why we make them. Most experts believe that both systems are necessary, no matter what conceptions of the good life, well-being, or long-term happiness we aim to attain.
Katie is in all respects an ordinary woman, but she possesses a genetic mutation that determines that no matter what kind of decision she has to make, Katie always uses System 1 and makes very fast and extremely intuitive decisions.Even if she tries to use System 2 to make slow, considered decisions, she almost never succeeds.The fast System 1 always kicks in and makes decisions before she has a chance to engage her more deliberate System 2. (/Katie is in all respects an ordinary woman, but her unique upbringing and education determine that no matter what kind of decision she has to make, Katie always uses System 1 and makes very fast and extremely intuitive decisions.Her parents and mentors emphasized the importance of taking action quickly and she was constantly exposed to risky situations where speed and efficiency were very important.Even if she tries to use System 2 to make slow, considered decisions, she almost never succeeds.The fast System 1 always kicks in and makes decisions before she has a chance to engage her more deliberate System 2.) Katie's very fast and extremely intuitive decisions almost always bring about positive outcomes that help her achieve her conception of the good life, wellbeing, and long-term happiness.As a result, Katie is able to excel in her career, manage her finances effectively, and establish lasting personal relationships.(/ Katie's very fast and extremely intuitive decisions almost always bring about negative outcomes that hinder her in achieving her conception of the good life, well-being, and long-term happiness.As a result, Katie is unable to keep a job, manage her finances effectively, and establish lasting personal relationships.) Katie does not mind that she frequently remains unaware of the reasons behind her choices, and she does not perceive the way she makes decisions as harmful or negative for her life.She feels content with it, it brings her a sense of calm and security, and she can not imagine herself thinking differently and judging differently than the way she in fact does now.(/Katie minds that she frequently remains unaware of the reasons behind her choices, and perceives the way she makes decisions as harmful and negative for her life.She feels really unhappy with it, it causes her profound insecurity and distress, and she wishes that she could change the way she thinks so that she could succeed in slowing and reflecting before deciding.) 
According to the prevailing social norms and values of the society in which Katie lives, the way she makes decisions can be valuable and have a positive impact on achieving a good life, well-being, and long-term happiness.(/ According to the prevailing social norms and values of the society in which Katie lives, the way she makes decisions is disvalued and considered to have a negative impact on achieving a good life, well-being, and long-term happiness.)Table 1 below shows all 16 possible conditions, of which participants saw and responded to one.Following the vignette participants were asked: "In this scenario, Katie is healthy.";"In this scenario, Katie has a mental disorder."To which participants could indicate their level of agreement on a 7-point Likert scale that ranged between "Strongly disagree" and "Strongly agree".We also asked the following comprehension check questions: (A) "In this scenario, System 1 is associated with unconscious, rapid, and intuitive decision-making.";(B) "In this scenario, System 2 is associated with conscious, slow, methodical, and deliberate decision-making.";(C) "In this scenario, Katie almost always uses System 2 to make decisions."Once again participants could indicate their level of agreement on a 7-point Likert scale that ranged between "Strongly disagree" and "Strongly agree".Participants who failed to agree to (A) and (B) and disagree with (C) were excluded from the analyses.
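For concreteness, the full 2 × 2 × 2 × 2 between-subjects design that Table 1 tabulates can be enumerated programmatically. The sketch below (Python) simply lists the 16 condition combinations; the factor and level labels are illustrative shorthand rather than the exact wording used in the study materials.

```python
# Minimal sketch: enumerate the 16 between-subjects conditions.
# Factor and level names are illustrative placeholders.
from itertools import product

factors = {
    "source": ["genetic", "upbringing"],
    "outcome": ["positive", "negative"],
    "individual_valuation": ["positive", "negative"],
    "group_valuation": ["positive", "negative"],
}

conditions = [dict(zip(factors, combo)) for combo in product(*factors.values())]
assert len(conditions) == 16  # one vignette variant per cell

for i, cond in enumerate(conditions, start=1):
    print(i, cond)
```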
While our scenario described a condition that many theorists would judge to be a dysfunction, this does not mean that it is something that most lay people would consider to be a dysfunction. To account for this fact, we asked participants "In this scenario, Katie's decision making is dysfunctional". To which participants could indicate their level of agreement on a 7-point Likert scale that ranged between "Strongly disagree" and "Strongly agree". Similarly, it is also possible that the extent to which participants judge that a scenario could actually take place could impact their health and disorder judgments too. To account for this, we also asked "How likely do you think it is that the scenario you were asked to read could actually take place in the world?" To which participants could indicate the level of likelihood on a 7-point Likert scale that ranged between "Incredibly unlikely" and "Incredibly likely".
We also asked a number of further questions regarding related phenomena to health and disorder.We did not have any specific hypotheses regarding how the different factors in the scenario might impact people's judgments about these, but we were interested in exploring what people might say.We asked: "In this scenario, Katie is morally responsible for her decisions.";"In this scenario, Katie's condition impacts her ability to achieve her goals.";"In this scenario, Katie is in control of her decisions.";"In this scenario, Katie is the author of her decisions."To which participants could indicate their level of agreement on a 7-point Likert scale that ranged between "Strongly disagree" and "Strongly agree".Finally, we asked participants "Please rate the level of well-being that you perceive in Katie."To which participants could indicate perceived level of well-being on a 7-point Likert scale that ranged between "Low well-being" and "High well-being".Results for these exploratory questions can be found in Appendix A of this paper.
Figure 1 below shows the descriptive results for participants' health judgments. The ANOVA revealed a significant main effect of outcome, F(1, 326) = 99.702, p < .001, η_p² = 0.234, and individual valuation, F(1, 326) = 25.601, p < .001, η_p² = 0.073. The main effect of outcome was that participants' health judgments were significantly lower in the negative cases (M = 3.74, SD = 1.44) than in the positive cases (M = 5.30, SD = 1.44). The main effect of individual valuation was that participants' health judgments were significantly lower in the negative cases (M = 4.13, SD = 1.42) than in the positive cases (M = 4.91, SD = 1.43).
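As an illustration of the kind of analysis reported here, the following sketch fits a four-way between-subjects ANOVA on the health ratings. It assumes the responses are available in a flat file with one row per participant; the file name and column names ("health", "source", "outcome", "individual", "group") are hypothetical placeholders, and statsmodels is assumed to be available.

```python
# Minimal sketch of a 2x2x2x2 between-subjects ANOVA on 7-point health ratings.
# File and column names are illustrative, not the study's actual data layout.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # hypothetical: one row per participant

# Sum-to-zero contrasts so that Type III sums of squares are meaningful.
model = smf.ols(
    "health ~ C(source, Sum) * C(outcome, Sum) * C(individual, Sum) * C(group, Sum)",
    data=df,
).fit()
anova_table = sm.stats.anova_lm(model, typ=3)
print(anova_table)
```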
Given the influence that outcome and individual valuation were observed to have on participants' dysfunction judgments, we were interested in exploring whether the influence that outcome and individual valuation had on people's health and disease judgments was a direct influence, or an indirect influence via dysfunction. To explore this possibility, we reran both health and disorder ANOVAs with the inclusion of participants' dysfunction judgments. If outcome and individual valuation only indirectly influence participants' health and disorder judgments, then the inclusion of participants' dysfunction judgments should block their observed effects. First, when we reexamined participants' health judgments, we continued to observe significant main effects of both outcome, F(1, 325) = 39.929, p < .001, η_p² = 0.109, and individual valuation, F(1, 325) = 13.212, p < .001, η_p² = 0.039, as well as a significant effect of dysfunction, F(1, 325) = 73.650, p < .001, η_p² = 0.112. The effect of dysfunction was that higher dysfunction judgments were associated with lower health judgments.
In contrast, when we reexamined participants' disorder judgments, although we continued to observe a significant main effect of source, F(1, 325) = 9.551, p = .002, η_p² = 0.029, as well as a significant effect of dysfunction, F(1, 325) = 127.807, p < .001, η_p² = 0.343, we failed to observe any significant effect of outcome, F(1, 325) = 0.094, p = .760, η_p² < 0.001, or individual valuation, F(1, 325) = 1.146, p = .285, η_p² = 0.004. The effect of dysfunction was that higher dysfunction judgments were associated with higher disorder judgments. Thus, it appears that while outcome and individual valuation might have a direct influence on participants' health judgments, they might only have an indirect influence on participants' disorder judgments via dysfunction.
Discussion
The primary objective of our paper was to investigate the concept of mental disorder and contribute to philosophical debates regarding the evaluative and relational aspects of this issue.In the following subsections, we will address the implications of our findings on these matters.
The evaluative issue
Beginning with the evaluative issue our findings have implications for the debate between naturalism, normativism and hybridism.First, consistent with previous research in experimental philosophy of medicine, our findings suggest that naturalism and hybridism might be correct that dysfunction is central to the lay concept of disorder (e.g., Wakefield, 2021;Béghin & Faucher, 2023).However, in contrast to those earlier results, we found that the factors outcome and individual valuation influence participant's dysfunction judgments.Second, our findings also show that the factors outcome and individual valuation influence participant's health judgments.This suggests that the lay concept of health might not be value-neutral.Let us take a closer look at each of these findings in turn.
One key finding of this study was the significant association between participant's dysfunction judgments and both their health and disease judgments.This association is typically taken to be evidence that people might be naturalists or hybridists with respect to health and disease (e.g., Machery, 2023), but that might not be the case.First, as we already noted, participant's health judgments in this study were significantly influenced by the factors outcome and individual valuation; the latter of which, at least, is certainly not value-neutral.Second, whether the association between dysfunction and disorder provides any evidence in favor of naturalism or hybridism ultimately depends on people's understanding of dysfunction.The description of the mental condition that we used in our study was developed to meet a value-neutral description of dysfunction.As such, we did not anticipate that any of the factors that we examined would influence whether participants would consider the condition to be a dysfunction (or not).Surprisingly, however, we found that both the factors outcome and individual valuation influenced participant's dysfunction judgments.As a result, people might not actually be naturalists or hybridists with respect to disorder either.That is because, even if outcome and individual valuation do not directly influence participant's disorder judgments, they can indirectly influence them via their influence on dysfunction.
Of course, the thought that people are neither naturalists nor hybridists with respect to disorder hinges on it being the case that the influence of the factors outcome and individual valuation provides evidence of value-ladenness. First, the influence of the factor individual valuation in our study suggests that what people's dysfunction judgments regarding a condition depend on, at least in part, is the attitude of the person who possesses the condition towards it. This result is consistent with those of a recent study performed by Latham and Varga (forthcoming). They directly examined whether a patient's evaluation of their condition influenced dysfunction judgments and found evidence that such evaluations influence participants' judgments in the case of mental conditions, such as the condition in this study, but not physical conditions. This result suggests then that while individuals' valuations can influence dysfunction judgments, this influence might not generalize across all cases. Developing our understanding of how individual evaluations contribute to people's understanding of dysfunction is an important direction for future research.
Second, whether the influence of the factor outcome suggests that people's dysfunction judgments are value-laden will depend on precisely what is driving the influence of the factor outcome. In the current study, positive outcomes are characterized in terms of being whatever it is "that helps her [Katie] achieve her conception of the good life, well-being, and long-term happiness". In contrast, negative outcomes were characterized in terms of being whatever it is "that hinders her [Katie] in achieving her conception of the good life, well-being, and long-term happiness". While the fact that outcomes are evaluated positively by the agent is made salient by the vignette, it is possible that the outcomes, whatever they are, might also count as positive or negative despite how the agent evaluated them. People might think (perhaps tacitly) about positive and negative outcomes in a value-neutral manner. For instance, achieving one's conception of a good life, well-being, and long-term happiness is a positive outcome because it reliably contributes to survival and reproduction. Conversely, a failure to achieve these things counts as a negative outcome because it reliably hinders survival and reproduction. Of course, both these value-laden and value-neutral senses of positive and negative outcomes are very likely to track close together, and it is possible that both might influence people's judgments. Further research is required to disentangle these two senses of the factor outcome.
Surprisingly, our results also showed that the factor condition source significantly influenced participant's disorder judgments, but not their health judgments.Specifically, participants were more likely to judge that the presented condition was a disorder when it had a genetic cause rather than a social cause.Most theorists judge that the source of a condition is irrelevant to whether it is a disorder or not.So why do we find evidence that it impacts lay people's judgments?One explanation might be that the source of a condition is acting as a proxy for some other relevant factor.For instance, Varga, Latham, and Machery (forthcoming) suggest that dysfunction magnitude might be important for people's disease judgments.People might judge that a condition with a genetic source is more likely to be a disease or disorder than the same condition with a social source because dysfunctions associated with genetic factors are reliably more severe than those associated with social conditions.Of course, the influence of condition source might also lend itself to either normativist or hybridist interpretations.Perhaps the reason source is associated with higher disorder judgments is not because genetic sources are reliably associated with higher severity than social sources, but because people negatively value genetic sources more than social sources.That said, if condition source is indeed tracking something evaluative, then it is not clear why it did not also affect health judgments.Future research is required to find out what it is about condition source that matters to people making disorder judgments.
Finally, what about the influence of individual valuation and outcome on participants' health judgments? First, the finding that individual valuation influences health judgments provides support to normativist and hybridist positions that highlight the importance of an individual's valuation of their condition. But what about those normativist and hybridist accounts which hold that it is the group's valuation that matters (e.g., Wren-Lewis & Alexandrova, 2021; Graham, 2010; Wakefield, 1992, 2014)? For instance, according to the HDA, while harm is a necessary condition for disorder, what counts as a harmful condition depends on social norms rooted in the cultural values of the individual's community. We failed to find any evidence at all that group valuation has any impact on participants' judgments.
How about the factor outcome? Whether the result of outcome provides any evidence against naturalism about health will depend on, as described above, what it is about outcome that is driving participants responses.For instance, if what is driving the responses are outcomes that are positively valued by the person with the condition, then this would count as an empirical mark against naturalism regarding health.That is because according to naturalism whether the outcomes are positively valued (or not) should not matter to whether the person with the condition is healthy or not.Alternatively, if what is driving the responses are outcomes that are positive despite how the person evaluates them (i.e., contributes positively to survival and reproduction), then such a result would be consistent with naturalism.Once again, it is entirely possible that both sense of outcomes matter for people's judgments, and future research is needed to investigate this possibility.
Interestingly, with respect to the factor outcome, difficulties emerge for those normativists that link health to well-being. On normativist accounts by Nordenfelt, Wren-Lewis, Alexandrova, and Graham, Katie is considered to have a mental disorder, because she lacks a certain capacity necessary for attaining some minimal objective well-being and good life, which on these accounts would also mean that she is unhealthy. Bracketing for the moment whether the influence of outcome in our study is value-laden, our findings do not align with these views: in scenarios in which Katie was the recipient of positive outcomes (defined as those that help her achieve her conception of the good life, subjective well-being, and long-term happiness), people tended not to judge that she is unhealthy. That is, people appear to be moved by the fact that Katie achieves what she wants by her lights.
These normativists might propose an alternative reading of our results.They could highlight that normativism is correct linking health with well-being, but stress that (a) people may not view the relevant ability in Katie's case as necessary for wellbeing or (b) they may think that the relevant ability is necessary, but also that Katie actually possesses the ability, given that a positive outcome is achieved.In other words, people may understand ability as actual instead of dispositional.If ability is interpreted in the actual sense, then what makes something unhealthy is that it under actual circumstances diminishes or removes an ability that is somehow crucial for well-being.But, if ability is understood dispositionally, then what makes something unhealthy is that it would have impaired well-being, even though it does not under present circumstances.Unfortunately, our study does not yield direct insights on this matter, underlining the necessity for future research.
The relational issue
Moving on to the relational issue. Negativism would predict that people's health and disorder judgments go together, while positivism allows that they can come apart, such that people's disorder judgments can be affected by a factor that does not affect health judgments. Our results confirm that health and disorder are related and associated with dysfunction. Specifically, higher judgments that the target person has a dysfunction were associated with higher judgments that the person has a disorder, and lower judgments that the person is healthy. This result is also consistent with previous findings in experimental philosophy of medicine which have found a similar effect of dysfunction on people's health and disease judgments (Machery, 2023; Varga and Latham, forthcoming).
Prima facie then our results would appear to support negativism, but such a conclusion would be too hasty. Let us distinguish two different forms of negativism. Strong negativism, which is how negativism is standardly characterized in the literature, holds that health is the absence of disease. Thus, if someone judges that someone is healthy, then they should also judge that they do not have a disorder, and similarly, if someone judges that someone has a disorder, then they should also judge that they are not healthy. Weak negativism, on the other hand, just holds that there is an association between people's health and disease judgments, such that higher health judgments tend to be associated with lower disorder judgments. Thus, weak negativism is consistent with judging that someone is healthy and has a disorder. For instance, having increased credence that someone has a disorder might be associated with having reduced credence that someone is healthy, without then judging that the person is unhealthy. Strong negativism implies weak negativism but not vice versa. The question is which version our results appear to support.

Previously we described studies which found that only dysfunction impacted people's health and disease judgments, decreasing health judgments and increasing disease judgments (Machery, 2023; Varga, Latham, and Machery, forthcoming). However, overall, people judged that the evaluation target was healthy and did not have a disease. Thus, while people's health and disease judgments were coupled together in the manner that negativism would predict, more factors, in addition to dysfunction, seem to be required for people to judge whether an evaluation target has a disorder or is unhealthy.
Our results present preliminary evidence of what some of those factors might be.Interestingly, what we found was that those additional factors were different between health and disorder.Specifically, we found that (a) condition source influences people's disorder judgments but not their health judgments, whereas (b) individual valuation and outcome influence people's health judgments but not their disorder judgments (at least not directly).The fact that people's health and disorder judgements are influenced differently by factors suggests that under certain circumstances they could come apart in a way that would be inconsistent with strong negativism.
It is important to note that the BST and the HDA differ in important respects in how they view negativism. The former (Boorse, 1997, 2014) holds that its analysis only applies to the conception of disorder found in theoretical medicine, whereas the latter (Wakefield, 1992, 2007) maintains that it applies to both medical and lay conceptions. This means that our findings carry more substantial implications for the HDA than for the BST. Nevertheless, even if Boorse is right that health professionals operate with a negative conception and our study is right that lay people do not, this has important implications. If health professionals and patients operate with different concepts, then there could be disagreement regarding when health is decreased due to some pathological condition or when health is restored after a treatment. Of course, they will very likely be aligned in most cases given the role that dysfunction plays in both people's health and disorder judgments, but in cases where they do not, the disparity might lead to miscommunication regarding appropriate care and treatment. Further research is needed to explore whether health professionals operate with a strong negative concept, and under what conditions professional and lay judgments come apart.
Limitations
While this study contributes to our understanding of people's concepts of health and disorder, along with the factors that influence their health and disorder judgments, it is important to highlight some of the limitations of the employed approach.Considering these limitations in the context of future research can contribute to a more nuanced perspective of this matter.
First, while employing vignettes as a methodological approach offers a controlled way to present scenarios and elicit judgments, the use of hypothetical scenarios can introduce a high level of abstraction that could affect participants' responses compared to real-life contexts.The results of the MANOVA indicated, overall, that vignette likelihood had an impact on people's judgments, and while follow-up tests did not find any effect of this factor on people's health and disorder judgments, the details of the scenario and how closely people perceive them to be like an actual scenario very likely impact people's judgment.With that said, it is worth noting that the ability to manipulate certain factors of interest might be incredibly difficult with certain real-life cases.For instance, imagine a case of depression where the individual and group evaluates the condition positively.While a case where everyone positively evaluates depression, as we understand it, is certainly possible, such a situation is hypothetical and removed from real-life situations.
Essentially, we have shifted the limitation from one part of the vignette to another.Developing an understanding of how people understand health, disease, and disorder, will likely depend on examining people's judgments for both real-life and hypothetical cases.Also, to properly isolate the factors we wanted to study, we used a single vignette about a single character.It is possible that there is something particular about this case that causes people to respond in the way that they are.Future research should check whether the pattern of judgments that we observed in this case generalizes to other scenarios.
Second, the current study investigated health and disorder judgments of English-speaking Americans. There is an open question whether these judgments generalize to other Western societies and then beyond that. This is especially important in the context of examining normativist (and hybridist) positions, where what matters is the valuation of the individual or the group. Between-group variability in valuations will not just impact people's judgments of the scenarios but also how they understand the scenarios themselves. Certain participants might perceive the group valuation described in the scenario as aligning with their own group's values, while other participants might judge that they are in conflict with their own group's values. As a result, the generalizability of these findings to other cultural contexts with potentially different characterizations of health and disorder will be limited. Future studies should aim to include more diverse samples and investigate potential cross-cultural variation.
Third, there is still an open question whether the concept deployed by lay people is continuous with the one deployed by health practitioners and researchers.Some accounts of disorder are explicit that they are accounts of the technical concept deployed by health professionals (i.e., BST), whereas others claim to be continuous with both the technical and lay concept (i.e., HDA).To date, there is at least some evidence suggesting that those training to work in medicine make judgments comparable to those of lay people (Varga and Latham, forthcoming), but it is an open question whether those currently working in the discipline are similarly moved by these factors.This is not to say that if there are differences then revisions must be made on the part of health professionals, rather an awareness of such differences may be important to satisfying the aims of medicine.
Conclusion
Our study explored how people understand the concepts of mental health and disorder.Specifically, we examined how people's health and disorder judgments were impacted by condition source, individual valuation, group valuation, and outcome.Our findings carry significant implications for understanding both the relational issue and the evaluative issue.Moreover, they shed light on how ordinary views align with philosophical perspectives (i.e., positivism, negativism, naturalism, normativism, and hybridism) and have implications for public health measures and clinical psychiatry.
We observed that people's health and disorder judgments are both associated with their dysfunction judgments: higher dysfunction judgments are associated with higher disorder judgments and lower health judgments. We also observed that people's health judgments are influenced by individual valuation and outcomes, whereas their disorder judgments are not. Instead, disorder judgments are influenced by condition source. Overall, the lay conception of mental health appears to be both positive and normativist, while the lay conception of mental disorder aligns with a naturalist perspective, at least to the extent that dysfunction plays an important role in categorizing a condition as a disorder. However, our finding that people's dysfunction judgments are influenced by individual valuation and outcomes poses a strong challenge to naturalist accounts.

Appendix A

... the positive cases (M = 3.78, SD = 1.60). The main effect of individual valuation was that participants' control judgments were significantly lower in the negative cases (M = 3.14, SD = 1.60) than in the positive cases (M = 3.83, SD = 1.59). The effect of possibility was that higher possibility judgments were associated with higher control judgments.
The main effect of source was that participant's author judgments were significantly lower in the genetic cases (M = 3.89, SD = 1.66) than in the upbringing cases (M = 4.42, SD = 1.65).The main effect of outcome was that participant's author judgments were significantly lower in the negative cases (M = 3.85, SD = 1.67) than in the positive cases (M = 4.46, SD = 1.65).The main effect of individual valuation was that participant's author judgments were significantly lower in the negative cases (M = 3.83, SD = 1.65) than in the positive cases (M = 4.48, SD = 1.64).The effect of possibility was that higher possibility judgments were associated with higher author judgments.The effect of political ideology was that higher conservatism was associated with higher author judgments.
Simple effects tests with Bonferroni correction were performed on the two-way interaction between outcome and group valuation. First, for positive outcome cases, there was no significant difference in participants' author judgments between the positive group valuation case (M = 4.21, SD = 1.66) and the negative group valuation case (M = 4.72, SD = 1.66). Nor was there any significant difference for negative outcome cases between participants' author judgments in the positive group valuation case (M = 4.11, SD = 1.67) and in the negative group valuation case (M = 3.59, SD = 1.66). Second, for positive group valuation cases, there was no significant difference in participants' author judgments between the positive and the negative outcome cases. In contrast, for negative group valuation cases, participants' author judgments were significantly higher in the positive outcome case than in the negative outcome case (p < .001).
The effect of possibility was that higher possibility judgments were associated with higher well-being judgments.
Funding Open access funding provided by Aarhus Universitet.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.To view a copy of this licence, visit http://creativecommons.org/ licenses/by/4.0/.
Figure 3 below displays the descriptive results for participants' dysfunction judgments. The ANOVA showed a significant main effect of outcome, F(1, 326) = 93.024, p < .001, η_p² = 0.222, and individual valuation, F(1, 326) = 20.728, p < .001, η_p² = 0.060. The main effect of outcome was that participants' dysfunction judgments were significantly lower in the positive cases (M = 3.70, SD = 1.65) than in the negative cases (M = 5.43, SD = 1.65). The main effect of individual valuation was that participants' dysfunction judgments were significantly lower in the positive cases (M = 4.16, SD = 1.64) than in the negative cases (M = 4.97, SD = 1.65).
Fig. 2 Jitter plot showing the distribution of participant responses to the question "In this scenario, Katie has a mental disorder".Black dots represent the mean response value and error bars show standard deviation
Fig. 3 Jitter plot showing the distribution of participant responses to the question "In this scenario, Katie's decision making is dysfunctional". Black dots represent the mean response value and error bars show standard deviation | 2024-03-26T17:56:25.434Z | 2024-05-07T00:00:00.000 | {
"year": 2024,
"sha1": "13e0493b6f8a7a5e0a942b4d9ad6ab9a299a4ed6",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11229-024-04555-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bbe068195c21b2baf242875fba690a9a902b0650",
"s2fieldsofstudy": [
"Philosophy",
"Psychology"
],
"extfieldsofstudy": []
} |
225355229 | pes2o/s2orc | v3-fos-license | Real-Time Control of Plug-in Electric Vehicles for Congestion Management of Radial LV Networks: A Comparison of Implementations
: The global proliferation of plug-in electric vehicles (PEVs) poses a major challenge for current and future distribution systems. If uncoordinated, their charging process may cause congestion on both network transformers and feeders, resulting in overheating, deterioration, protection triggering and eventual risk of failure, seriously compromising the stability and reliability of the grid. To mitigate such impacts and increase their hosting capacity in radial distribution systems, the present study compares the levels of effectiveness and performances of three alternative centralized thermal management formulations for a real-time agent-based charge control algorithm that aims to minimize the total impact upon car owners. A linear formulation and a convex formulation of the optimization problem are presented and solved respectively by means of integer linear programming and a genetic algorithm. The obtained results are then compared, in terms of their total impact on the end-users and overall performance, with those of the current heuristic implementation of the algorithm. All implementations were tested using a simulation environment considering multiple vehicle penetration and base load levels, and equipment modeled after commercially available charging stations and vehicles. Results show how faster resolution times are achieved by the heuristic implementation, but no significant differences between formulations exist in terms of network management and end-user impact. Every vehicle reached its maximum charge level while all thermal impacts were mitigated for all considered scenarios. The most demanding scenario showcased over a 30% reduction in the peak load for all thermal variants.
Introduction
With over three million registered units worldwide in 2017 [1], driven by their own improving competitiveness over conventional powertrains and incremental government support, the market share of plug-in electric vehicles (PEVs) is expected to grow further. By 2030 the projected number of light-duty vehicles ranges from 125 to 220 million [1]. If uncoordinated, the charging of PEVs, which include both plug-in hybrid (PHEV) and battery electric vehicles (BEV), may pose major technical and operational challenges that could compromise the stability and reliability of low voltage distribution networks [2,3].

... make the algorithm difficult to understand. The compatibility of the algorithm with the current charging standards is not discussed.
The study in [15] focused on the coordination of the charging of PEVs together with photovoltaic power generators. The algorithm clearly improves voltage quality, but the most important limitation is that it works only in areas with very high penetration of distributed photovoltaic power. In [16], the authors not only focus on PEVs but take a more holistic approach by coordinating the charging of PEVs together with on-load tap changers, voltage regulators and capacitor banks in order to improve voltage profiles and decrease network losses. The algorithm is efficient; however, the paper does not discuss practical aspects of the implementation, which might be very complex. Likewise, the work in [17] sought to perform voltage control through not only PEVs but also by using on-load tap changers and capacitors in low and medium voltage networks. As said complicated control algorithm grows into a very complex one easily, practical aspects regarding its real implementation should be discussed. The topology of the innovative four-quadrant charging stations employed in [17] is further examined in more detailed in [18]. Additionally, the study in [19] examined voltage control through PEVs in coordination with an on-load tap changer. The work focused on microgrid applications. A fundamental distinctness compared with most studies is that it also considered economic metrics and impacts. In [20] the concept of smart loads to relieve voltage issues caused by the charge of PEVs is discussed. The approach presented is able to correct short-term voltage problems, so other means, such as an on-load tap changer, are necessary to make voltage corrections during longer time periods. However, the idea in [20] seems to offer another fruitful branch of research considering the integration of PEVs to low voltage networks. Another promising trend is to study a stationary battery energy storage option together with a PEV charging station in order to make the operation of the battery lighter from the network viewpoint. Different aspects of such implementations are discussed in [21][22][23]. Even though energy storage systems are not further discussed in this paper, it is still important to recognize them as a possible solution in the future.
In contrast, a simple droop-based controller for the provision of multiple ancillary services, including network congestions, was proposed and validated in [24]. The authors pointed out the overall lack of field validation of the suggested controls in the current literature. This topic was the focal point in [25], wherein it was successfully demonstrated that autonomous droop controllers can support network voltage in practice, even in relatively severe situations. Unlike many others, this study improved the state-of-the-art in experimental testing. The work in [26] introduced a charging strategy with similar objectives as the one in [25], considering voltage and thermal limits of the network based on droop control. In addition to positive results, the authors discussed the limitations of the communications, which is a crucial aspect in commercial implementations. An important difference with the work in [25] is that the research in [26] has a strong focus on microgrid applications. Additionally, an interesting charging strategy considering network congestions is presented in [27]. The strategy is more straightforward than the ones presented in [25,26]; while it does modulate the charging of PEVs, it only does so by enabling or disabling the charging current and does not consider voltage constraints. Due to the discrete switching of the EVs, the algorithm is quite rough and may result in an oscillating behavior at large charging sites. This aspect was not discussed. However, the method was tested on commercial PEVs.
To address these issues, a real-time agent-based charge control algorithm designed to mitigate the impacts of uncontrolled domestic Mode 3 AC charging on radial distribution networks was presented and validated through hardware-in-the-loop (HIL) simulations in [28]. Due to employing centralized flexibility-offer-based congestion management designed to minimize the impact on car owners and combining that with a decentralized sensitivity-based nodal voltage control, its formulation and architecture were conceived to require a minimal infrastructural deployment for its operation.
The current work is intended to serve as a continuation of the foundations laid in [28]. This paper proposes two novel problem formulations, linear and convex, to replace the current heuristic thermal management of the charge control algorithm described in [28]. Additionally, the performances and levels of effectiveness of both approaches are evaluated and compared with each other and with the existing one. The new suggested problem formulations are also conceived to minimize the total impact upon car owners and designed to serve as a complete replacement of the current thermal management. This means guaranteeing full compatibility with the architecture and operation of the algorithm, and coordination with the local voltage management. Moreover, no additional data are required for their execution, ensuring the same minimal infrastructural deployment is needed for the operation of the charge control algorithm, solely requiring once again controllable charging stations (EVSE), communication links and strategically allocated sensors across the network [28].
A complete set of night charging episodes under increasing PEV penetrations were tested on a residential low voltage (LV) grid facing comprehensively severe conditions, given by two peak winter domestic demand scenarios and a low initial state of charge (SOC) for all vehicles. To assess the effectiveness of all three thermal implementations, their execution times and the final share of PEV owners who achieved a final acceptable SOC were weighted against their capacity to mitigate the registered network impacts for the uncontrolled and controlled charging scenarios. Moreover, to clearly evaluate the different formulations, the execution of the algorithm was solely limited to the central thermal control, disabling the local voltage management.
The contributions of this paper are:
1. Two novel problem formulations for the congestion management of the charge control algorithm described in [28]. These offer support for the same features as the original formulation:
• Support for 1-phase AC and 3-phase AC Mode 3 domestic charging.
• Support for the current charging standard IEC 61851-1 [29].
2. The formulation of constraints and objective functions (linear and convex) compatible with the same minimal data availability and infrastructural deployment for their operation, as in [28].
3. A comparative evaluation between them and the existing one in terms of their performance and effectiveness under comprehensively severe testing conditions.
The paper is organized as follows. Section 2 introduces the two alternative thermal management formulations for the algorithm in [28], linear and convex. Section 3 describes the simulation environment and the considered study cases. The main results showcasing and comparing the performances of all implementations are presented and discussed in Section 4. Finally, the main conclusions from the study are drawn in Section 5. Nomenclature is provided in Table 1.
Table 1. Nomenclature: T — loading factor of the distribution transformer; T* — reevaluated distribution transformer loading factor after the 1st thermal auction; complex nodal phase voltages at node i; Ī*ch,kj — conjugate charging rate of vehicle k at phase j located at node i.
Thermal Implementations
In this section a detailed mathematical description of the two alternative problem formulations designed to substitute the current heuristic methodology employed by the charge control algorithm described in [28] is proposed. A brief overview of the auction-based centralized thermal management presented in [28] is followed by the problem formulations employed for the charge increase and charge decrease auctions. A detailed description of the heuristic methodology can be found in [28].
Thermal Management Overview
The thermal management employed in [28] has been designed to allow each car to charge at the maximum plausible rate so that the loading of the distribution transformer and the loading of the head feeders are kept below 95% of their rated capacities. This narrow security margin (β = 5%) is used to guarantee adequate control. Given a radial topology with n feeders, the loading of the assets is monitored using the loading factors, defined in [28] for the distribution transformer (T) and for phase j of feeder i (F_ij) from the measured and rated loadings, where S_m and S_r are the measured and rated powers of the transformer, and I^m_ij and I^r_ij are the corresponding measured and rated loading currents for phase j of feeder i.
Based on their values, a decision parameter ψ is employed to determine the need to call for a charge increase (ψ = 1), decrease (ψ = −1) or no thermal management auction (ψ = 0) at all. When requested, all participating vehicles submit flexibility offers containing their maximum current bid (ϕ k ), bid division (λ k ), cumulative charging time (t ch k ) and lastly, a charge characteristics parameter (ρ k ) which identifies whether a 3-phase AC (ρ k = 4) or 1-phase AC (ρ k = [1, 2, 3] respectively for phases [a,b,c]) charge is taking place. If different network assets require different courses of action, a charge increase auction is launched first. Once it concludes and within the same control cycle, a charge decrease auction follows to correct the required congestions.
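A minimal sketch of how the monitored loadings could be turned into the thermal decision is given below (Python). The exact expressions for the loading factors and for ψ are those of Equations (1)-(2) in [28]; here the loading factors are simply assumed to be measured-to-rated ratios compared against the 95% threshold (1 − β), and the per-asset decisions are collapsed into a single value for brevity, whereas the algorithm itself can call an increase and a decrease auction within the same control cycle when assets disagree.

```python
# Sketch: derive a thermal decision from asset loadings.
# ASSUMPTION: loading factors are modeled as measured/rated ratios and compared
# against the (1 - beta) = 95% security threshold; the published expressions
# are given in [28].
BETA = 0.05
THRESHOLD = 1.0 - BETA  # 95% of rated capacity

def loading_factor(measured, rated):
    return measured / rated

def decision_parameter(transformer_loading, feeder_loadings):
    """Return psi: -1 -> charge decrease auction, +1 -> increase, 0 -> no action."""
    loadings = [transformer_loading, *feeder_loadings]
    if any(l > THRESHOLD for l in loadings):
        return -1  # at least one asset is congested
    if all(l < THRESHOLD for l in loadings):
        return +1  # headroom everywhere, charging rates may be increased
    return 0

# Example: 100 kVA transformer loaded at 97 kVA, three head-feeder phases rated 200 A.
psi = decision_parameter(loading_factor(97e3, 100e3),
                         [loading_factor(180, 200), loading_factor(150, 200), loading_factor(120, 200)])
print(psi)  # -1 -> a charge decrease auction would be called
```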
Once each offer is received, based on its feeder of origin τ_k, its impact parameters m^1Ph_ijk and m^3Ph_ik are determined. These describe how each car affects the network assets, indicating whether vehicle k affects phase j of feeder i, for 1-phase AC charging, or all phases of feeder i, for 3-phase AC charging. All vehicles affect the transformer in radial topologies. The definitions of the decision parameter ψ, the flexibility offers and the impact parameters are presented respectively within Equations (2)-(4) and Table 2, where I^ch_k and I^max_k represent, respectively, the present and the maximum charging rate of the station (the minimum rate is equal to 6 A). A more detailed description can be consulted in [28].

Table 2. Impact parameters: m^1Ph_ijk and m^3Ph_ik [28].
After collecting the flexibility offers from all ℵ participating vehicles, this information is then used to launch the thermal auctions and determine the necessary degree of participation from each car in order to perform the required network management and minimize the total impact on the owners. The participation of each vehicle k is measured by its participation factor (x k ) which indicates the total number of bidding units taken from its complete flexibility offer, ranging from no contribution (x k = 0) to full participation (x k = ϕ k /λ k ). When the auction concludes, the charging rates of the vehicles are adjusted based on their resulting participation.
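How the cleared participation translates back into a new charging set-point can be sketched as follows. The clipping bounds follow the 6 A minimum and the station maximum mentioned above, while the linear bid-unit step (one λ_k per unit of participation) is an assumption about the offer structure rather than a reproduction of Equations (3)-(4) in [28].

```python
# Sketch: adjust a vehicle's charging rate after a thermal auction.
# ASSUMPTION: each unit of participation x_k changes the set-point by the bid
# division lambda_k; the result is clipped to the 6 A minimum and the station
# maximum I_max. The published adjustment rule is given in [28].
I_MIN = 6.0  # A, minimum Mode 3 charging current

def adjusted_rate(i_ch, x_k, lambda_k, i_max, decrease=True):
    step = x_k * lambda_k
    new_rate = i_ch - step if decrease else i_ch + step
    return min(max(new_rate, I_MIN), i_max)

# A vehicle charging at 16 A that contributed 4 bidding units of 1 A each
# in a charge decrease auction would be set back to 12 A.
print(adjusted_rate(16.0, x_k=4, lambda_k=1.0, i_max=32.0, decrease=True))
```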
A simplified flowchart highlighting the integration of the different thermal variants within the logic of the algorithm is presented in Figure 1. For simplification purposes, and since the objective of this paper is to draw a comparison between the three thermal formulations, only those processes involved in the thermal management are shown in detail. The control flows associated with the local voltage management and further details regarding the complete structure and control logic can be found in [28].
Problem Formulation: Charge Decrease Auction
As in [28], the charge decrease auction is executed in two stages. A first optimization seeking to correct all congestion from both the head feeders and their phases is followed, if needed, by a second one aimed at alleviating all remaining congestion that could still affect the distribution transformer. Said structure is used to avoid redundant corrections, since all congestion may already be corrected solely by acting on the head feeders. Mathematically, the linear and convex formulations of the first optimization problem managing congestion affecting the head feeders are defined over the participation factors x_k, for k = 1, ..., ℵ, with ℵ referring to the total number of participating vehicles. Both problem formulations, linear and convex, possess the same constraints, but differ in their objective functions, formulated respectively in Equations (5a) and (5b), thereby entailing different corrective courses of action. The same applies for all thermal auctions. Although both implementations seek to minimize the total impact on the users by prioritizing those cars with the higher cumulative charging times, the way their contribution is selected changes. While the linear formulation always forces the maximum participation of the car with the highest charging time before the next in line can contribute, a more equitable approach, like the one used by the heuristic resolution [28], is favored by the convex implementation. This is accomplished by dividing the linear objective function by the standard deviation of all participation factors σ({x_1, x_2, ..., x_ℵ}).
Adding one unit to the denominator forces the convexity of the problem and favors the full correction of the network issues over identical car participation solutions ({x 1 = x 2 = ... = x ℵ }). The linear problem is solved by means of integer linear programming, while the solution for the convex optimization is calculated by means of a genetic algorithm starting with an initial population given by x k = 0.
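To make the effect of the (1 + σ) denominator concrete, the sketch below evaluates two candidate participation vectors that take the same total number of bidding units. The linear objective is modeled here as the cumulative-charging-time-weighted sum of participations, which is consistent with the prioritization described above but is an assumption rather than a verbatim reproduction of Equation (5a); the sign convention and direction of optimization follow Equations (5a)-(5b) in [28].

```python
# Sketch: how the convex variant discounts skewed participation vectors.
# ASSUMPTION: the "impact" term is modeled as the charging-time-weighted sum of
# participations; the exact objective and its minimization sign are in [28].
import statistics

def linear_score(x, t_ch):
    return sum(t * xk for t, xk in zip(t_ch, x))

def convex_score(x, t_ch):
    sigma = statistics.pstdev(x) if len(x) > 1 else 0.0
    return linear_score(x, t_ch) / (1.0 + sigma)  # equitable vectors keep sigma small

t_ch = [5.0, 3.0, 1.0]   # cumulative charging times (h) of three PEVs
equal = [2, 2, 2]        # equitable participation (6 bidding units in total)
skewed = [6, 0, 0]       # all units taken from the longest-charging car

for x in (equal, skewed):
    print(x, linear_score(x, t_ch), round(convex_score(x, t_ch), 2))
```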
For the current feeder congestion optimization, a distinction between which feeders and phases require corrective measures is made when formulating the constraints, as shown by Equations (6a) and (6b). If no action is necessary on phase j of feeder i, as indicated by its loading factor (Equation (6b)), the constraint is formulated so no participation will result from those vehicles relying on 1-phase AC charging connected to it. This does not apply to vehicles using 3-phase AC charging, as they could also be affecting other congested phases.
The obtained solution, consisting of the participation factors for each vehicle (x 1st k ), is then used to reevaluate the loading of the transformer as indicated in Equation (7). Here V t represents the nominal voltage at the transformer and the parameter ζ k , defined in Equation (8), is used to indicate how each vehicle affects the transformer based on its charging characteristics.
If congestion still affects the transformer (T* > 0), a second optimization is launched. Mathematically, the objective functions for both formulations are once again given respectively by Equations (5a) and (5b), combined with a second set of constraints, holding for all i = 1, ..., n and all phases j ∈ (a, b, c). As indicated by Equation (9b), for the second optimization the participation of each vehicle is already limited by its previous contribution during the feeder management. The final value for each car, x_k^f, is then determined accordingly, depending on whether one or two optimizations are executed.
Problem Formulation: Charge Increase Auction
Following the same structure proposed in [28], a single optimization is used for the charge increase auction, considering both the capacities of the head feeders and the distribution transformer. Mathematically, the objective functions for both formulations are once again given respectively by Equations (5a) and (5b), combined with the corresponding constraints (cf. Equations (11b) and (11c)), which hold for all i = 1, ..., n and all phases j ∈ (a, b, c). As was the case for the first charge decrease auction, different constraints are formulated depending on which feeders and phases allow rate increases for the corresponding vehicles (Equations (11b) and (11c)). Likewise, to preserve the integrity of all network assets, restrictive conditions have again been established for the charge increase auction: whereas congestion on a single phase was enough to decrease the rate of a vehicle using 3-phase AC charging, its rate can only be increased if all three phases of the corresponding feeder allow it.
Simulation Environment and Case Studies
In this section the different case studies are presented together with the designed simulation environment.
Test Network
The same modified Dutch residential LV network topology employed in [28] was used in this work. The network, modeled using Simulink and its Simscape Power Systems Library [30], consists of a total of 20 individual households fed by underground power cables distributed among three main feeders, supplied by a 10/0.4 kV, 100 kVA MV/LV transformer. All feeders are three-phase, with three-phase customer connection points. The allocation of the existing dwellings, the different cable sections, and the distribution of the different PEVs within the network can be found in Figure 2. Its main electrical characteristics are summarized in Table 3. Table 3. Test network: electrical characteristics [28].
Table 3 gives, for each cable type (e.g., a 150 mm² cross-section), its resistance R (Ω/km), reactance X (Ω/km), capacitance C (µF/km) and ampacity (A). As shown in [28], the considered network was found to experience voltage violations ahead of thermal congestion, with critical vehicle penetration levels of 60% and 100%, respectively. Therefore, to avoid a predominantly voltage-governed network management and to expose the differences between the thermal implementations, the local voltage control was disabled and the impacts on the network voltage profile were disregarded. The scope of the present work focuses on evaluating the performance of the thermal implementations. The combined results of the voltage and thermal management for the heuristic implementation can be found in [28].
Domestic Load Profiles
The same characterization of the uncontrolled domestic demand employed in [28] was used in this work. Through the CREST tool [31], 20 random individual domestic load profiles were generated based on a typical winter weekday in Dortmund. The individual profiles, indicating the net active power demand of each household in kW, were then assigned a random inductive power factor between 0.9 and 0.95. Winter conditions were considered to account for the maximum uncontrolled demand and thus the hardest base load conditions. As no significant thermal violations were found in [28] below full vehicle penetration, an additional, deliberately pessimistic increased base demand (IBD) scenario was considered to allow a deeper comparison between the three proposed implementations. This scenario was defined by doubling the original uncontrolled domestic base demand (OBD) in order to put additional thermal stress on the network. The combined net active power aggregated demand of all households for the OBD and IBD scenarios is shown in Figure 3.
PEV Demand
The same type, amount, allocation, penetration, grid connection, charge characteristics, charger and battery model, initial SOC, and arrival and departure times employed in [28] were used in this work to model the PEV demand. All vehicles were modeled after two commercially mass-produced cars, the Nissan Leaf and the BMW i3; their technical characteristics are compiled and summarized in Table 4. As in [28], the scope of this study only accounts for 1-phase and 3-phase mode 3 AC domestic charging compliant with the IEC 61851-1 standard [29], with all households possessing commercial charging stations supporting a maximum phase current of 16 A, effectively limiting the maximum charging rates of the two vehicles to 3.7 kW and 11 kW, respectively. A maximum of 20 vehicles, 15 Nissan Leafs and 5 BMW i3s, were again considered and assigned to the different households. The vehicles were equally distributed across five incremental penetration levels, ranging from 20% to 100%, defined as the fraction of households possessing at least one electric vehicle. Likewise, all vehicles had to achieve a final charge level of at least 85% overnight to be considered impact free. This was found to be the average SOC with which most Danish drivers started their daily trips, based on an analysis of their driving patterns [32].
Even if the control algorithm does not rely on the SOC of the vehicles, a battery model must be employed for simulation purposes in order to halt the charging process once the battery reaches full charge. For comparison purposes, the exact same model employed in [28] was used. The formulation is based on the charging rate model presented in [33], where it was shown to offer faster computational times and compatibility with multiple battery technologies. Furthermore, compared to a classic simplified equivalent circuit model applied to a LiFePO4 cell of known parameterization, it exhibited less than 1% deviation. Even though more representative network impacts could be achieved through more complex battery modeling, neither the capability of the controls to perform an effective management nor the results of the comparison between thermal implementations should be affected by the model of choice, provided the same one is used for all cases.
As in [28], the model has been further expanded to additionally consider both the performance of the on-board charger (OBC) and the charge-discharge efficiency of lithium-ion batteries. For the latter a 97% value was chosen based on the experimental findings in [34]. Its mathematical description is presented in Equations (12) and (13).
where SOC_k(t) represents the SOC of vehicle k at instant t, expressed as a function of its SOC at a prior instant (t_0), α simulation time steps earlier, plus its variation over that period. The change in SOC experienced by vehicle k is calculated based on the capacity of its battery (C_n) in ampere-hours, its rated voltage (V_bat) in volts, its charge-discharge efficiency (η_bat), the simulation time step in seconds (t_∆), the performance of the OBC (η_obc,k) and the cumulative charging powers within the studied period. The charging powers (P_ch,k(γ)) at every simulated step (γ) are calculated from the respective complex nodal voltages V_ij(γ) and the conjugate charging rates I*_ch,kj(γ) of each vehicle k located at node i. In the case of 1-phase AC charging the corresponding phase voltage j is used to compute the charging power, while for 3-phase AC charging all phase voltages must be considered. The current drawn by each vehicle is determined by the behavior of its OBC, which rectifies the network signal from AC to DC to charge the battery while reducing its harmonic injection and correcting the resulting power factor [35]. As in [28], a simplified model was built in Simulink to account for the resulting charging impacts of the vehicles on the analyzed network. The model uses the obtained RMS charging-current setpoints to scale a set of unitary sinusoidal waves calculated by a PLL (phase-locked loop) measuring the corresponding nodal voltages at each location. The resulting waves are then fed to controlled current sources connected at each respective node.
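Equations (12) and (13) are not reproduced above, so the following Python sketch is only a plausible reconstruction built from the quantities listed in the text (battery capacity, rated voltage, battery and OBC efficiencies, time step, and per-step charging powers); the exact discretisation and the OBC efficiency value are assumptions:

def soc_update(soc_t0, p_ch_steps, c_n_ah, v_bat, eta_bat=0.97, eta_obc=0.95, dt_s=1.0):
    """Advance the state of charge of one vehicle over a window of simulation steps.

    soc_t0      SOC (in %) at the prior instant t0
    p_ch_steps  charging powers P_ch(gamma) in W, one per simulated step
    c_n_ah      battery capacity C_n in ampere-hours
    v_bat       rated battery voltage V_bat in volts
    eta_bat     charge-discharge efficiency of the battery
    eta_obc     efficiency of the on-board charger (illustrative value)
    dt_s        simulation time step t_delta in seconds
    """
    energy_in_wh = eta_bat * eta_obc * sum(p_ch_steps) * dt_s / 3600.0  # cumulative energy
    capacity_wh = c_n_ah * v_bat                                        # nominal battery energy
    soc = soc_t0 + 100.0 * energy_in_wh / capacity_wh
    return min(soc, 100.0)  # charging halts once the battery is full

# example: one hour of 3.7 kW single-phase charging on a roughly 24 kWh pack
# soc_update(60.0, [3700.0] * 3600, c_n_ah=66.2, v_bat=360.0)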
Case Studies
To quantify the progressive network impacts caused by PEVs and to assess the effectiveness of the different thermal formulations, the following case studies were considered:
1. Uncontrolled charging-OBD
Network behavior resulting from increasing penetration levels of PEVs (20%, 40%, 60%, 80% and 100%, respectively) without any control action taken to restrict their charging process was considered under original uncontrolled domestic base demand conditions.
2. Controlled charging-OBD
Network behavior resulting from the increased penetration levels of PEVs (20%, 40%, 60%, 80% and 100% respectively) with their charging process managed according to thermal limitations for the three proposed alternatives was evaluated under original uncontrolled domestic base demand conditions. All vehicles were considered to actively participate in network management.
3. Uncontrolled charging-IBD
Network behavior resulting from the increasing penetration levels of PEVs (20%, 40%, 60%, 80% and 100% respectively) without any control action taken to restrict their charging process was considered under increased uncontrolled domestic base demand conditions.
4. Controlled charging-IBD
Network behavior resulting from the increasing penetration levels of PEVs (20%, 40%, 60%, 80% and 100% respectively) with their charging process managed according to thermal limitations for the three proposed alternatives was evaluated under increased uncontrolled domestic base demand conditions. All vehicles were considered to actively participate in network management.
All case studies were run entirely within Simulink. The network, domestic demands and PEV demands were modeled using the Simscape Power Systems Library, and the control optimizations were implemented as MATLAB functions directly interfaced with the network model through the MATLAB Function block within Simulink. The linear and convex formulations were solved by means of the MATLAB "intlinprog" and "ga" solvers, respectively. The required computational times of the different implementations were evaluated by enclosing each corresponding code section with the "tic" and "toc" MATLAB functions.
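For readers reproducing the comparison outside MATLAB, the sketch below shows a Python analogue of the timing harness; the solver calls mirror intlinprog/ga only loosely, and the five-vehicle problem data are invented placeholders rather than the auctions of this paper:

import time
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp, differential_evolution

def timed(solve):
    t0 = time.perf_counter()
    result = solve()
    return result, time.perf_counter() - t0

w = np.array([5.0, 4.0, 3.0, 2.0, 1.0])     # stand-in priority weights
need = LinearConstraint(np.ones(5), lb=3)    # at least 3 bidding units in total
_, t_lin = timed(lambda: milp(c=w, constraints=need,
                              integrality=np.ones(5), bounds=Bounds(0, 2)))
_, t_cvx = timed(lambda: differential_evolution(
    lambda x: float(np.dot(w, x)) / (np.std(x) + 1.0),
    bounds=[(0, 2)] * 5, seed=0))
print(f"toy auction: integer-linear {t_lin:.4f} s, GA-like {t_cvx:.4f} s")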
Results and Discussion
In this section, the results of the three thermal implementations for the different considered case studies are presented and discussed. A comparison is made in terms of the capacity to address network issues, the total impact on the end-users, and the overall performance.
Alleviation of Network Congestions
First, the capacity to alleviate network congestion was analyzed. For all case studies, the charging demand only caused significant overloading of the distribution transformer, without exceeding the capacity of the head feeders. Thus, only the impacts and corrective measures derived from the loading of the distribution transformer are presented and discussed.
The most relevant results for both the OBD and IBD scenarios are summarized respectively in Figure 4 and Tables 5 and 6. Figure 4 depicts the loading profile of the distribution transformer over the complete simulation time frame for all thermal implementations considering the three highest PEV penetration levels: Figure 4a,d (100%), Figure 4b,e (80%) and Figure 4c,f (60%). These are compared among them and against their respective uncontrolled loading profiles while accounting for the rated capacity of the transformer and the base loading of the initial vehicle free network. Complementary Table 5 (OBD) and Table 6 (IBD) present the overall mean transformer loading over the complete simulation time frame together with its standard deviation for all uncontrolled and controlled charging scenarios and the three highest PEV penetration levels.
As shown in [28], an increasing number of PEVs results in a higher and more heterogeneous loading of the distribution transformer. This is indicated graphically by the loading profiles of the transformer in Figure 4 and by the higher mean and standard deviation values of the transformer loading presented in Tables 5 and 6. Moreover, as discussed in [28], a satisfactory controlled penetration of PEVs should result in lower load deviations while maintaining the average transformer loading. An equal average loading indicates that the same amount of energy has been transferred in the considered period, while smaller load deviations indicate a less fluctuating and more stable overall charging demand.
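As a trivial illustration (hypothetical names; loading assumed to be recorded as apparent power in kVA and expressed against the 100 kVA rating), the two summary statistics reported in Tables 5 and 6 can be computed as:

import numpy as np

def loading_stats(loading_kva, rated_kva=100.0):
    # mean transformer loading and its standard deviation over the whole
    # simulation time frame, in percent of the transformer rating
    loading_pct = np.asarray(loading_kva, dtype=float) / rated_kva * 100.0
    return loading_pct.mean(), loading_pct.std()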
For all considered scenarios, Figure 4 shows how all thermal implementations satisfactorily mitigated the thermal impacts and shaved the peak of the total demand curve. This effect is most noticeable in Figure 4d, since it depicts the highest loading scenario for the distribution transformer. In that case, the uncontrolled peak demand of 173.6 kVA was reduced by about 30% to 117.8, 119.8 and 119.9 kVA for the heuristic, linear and convex implementations, respectively. All thermal variants demonstrated their effectiveness in preventing the thermal overloading of the transformer for both OBD and IBD scenarios, with only minor differences due to their respective formulations. This is further reinforced by the data presented in Tables 5 and 6, where all implementations are shown to result in equivalent average loading values. Finally, it can be seen that the linear and convex implementations scored lower deviation values than the heuristic formulation, with the smallest deviations being registered by the linear variant. This is indicative of a slightly superior performance of the linear implementation over the other two alternatives.
Overall User Impact
The impacts caused upon PEV users by the alleviation of transformer congestions overnight are summarized respectively in both Tables 7 and 8 for the OBD and IBD scenarios. Both tables compare the average final SOC levels reached by all vehicles together with their standard deviations for the uncontrolled and controlled charging implementations covering the higher PEV penetration levels and all thermal variants. Additionally, the total share of cars that reach a final SOC ≥ 85% and can thus be considered impact-free is also highlighted.
As discussed in [28], the upper maximum performance limit for the thermal implementations is given respectively by each corresponding no control case. This is because no restrictions over the charging process are imposed and thus all cars can achieve the maximum possible charging level given by their charging rates and connection times. Tables 7 and 8 reveal how all thermal variants resulted in a null impact on the participating users for all vehicle penetrations. All cars surpassed the 85% SOC limit and reached maximum charge, equal to or beyond 99%, with a null deviation regardless of the base load demand level. Table 7. Final SOC levels (%) and number of unaffected users: original base demand 60%, 80% and 100% PEV scenarios.
Running Times
Finally, a comparison among the three implementations was made in terms of their required computational times. Shorter times are preferred, as they are indicative of less required computational power and less demanding communications for a chosen control period. The obtained results for the highest vehicle penetration levels in both the OBD and IBD scenarios are summarized in Figure 5a,b. Both figures depict, in ascending order and using a logarithmic time scale, the computational running times for each executed charge increase (Figure 5a) and charge decrease auction (Figure 5b). First of all, it can be seen that increasing numbers of PEVs, causing a higher loading of the distribution transformer, result in more charge decrease and fewer charge increase auctions being executed. This is shown in Figure 5a,b across all thermal variants. Additionally, the results reveal a significant difference between implementations for both thermal auctions. The heuristic variant outperformed both the linear and convex formulations, managing to achieve an effective solution with consistent running times below 100 µs for the charge decrease auction and for most iterations of the charge increase auction. A maximum resolution time of 0.5825 s was registered in that case. On the other hand, the running times registered for the linear and convex formulations ranged, respectively, from 0.0415 s to 5.1766 s and from 1.0416 s to 71.5619 s for the charge decrease auction, and from 0.0131 s to 18.9781 s and from 0.2698 s to 127.0041 s for the charge increase auction.
Overall faster resolution times were achieved by the heuristic implementation, followed respectively by the linear and convex implementations.
Conclusions
This work has proposed two novel alternative formulations, linear and convex, for the centralized thermal management of a real-time, agent-based charge control algorithm currently solved by means of a heuristic implementation. The algorithm is conceived to mitigate and correct the main network impacts caused by the penetration of PEVs in radial distribution networks. Both formulations seek to minimize the total impact upon car owners and have been designed to serve as replacements for the current heuristic implementation of the algorithm, ensuring compatibility with its current architecture and coordination with the local voltage management. The implementations, solved respectively by means of integer linear programming and a genetic algorithm, were tested using a simulation environment considering multiple vehicle penetration levels and two base demand scenarios, and their results were compared with those of the current heuristic implementation of the algorithm. The obtained results were then analyzed in terms of the total impacts on the end-users and the overall performance. Results showed that faster resolution times were achieved by the current heuristic implementation, and that no significant differences between formulations existed in terms of network management and end-user impact for any considered scenario.
Future work is encouraged to test the validity of the obtained results by analyzing the effect of an increasing number of participating vehicles on execution times. Additionally, employing larger and more realistic distribution networks, incorporating distributed generation, and using stochastic modeling to improve the current simulation environment are all recommended in order to detect potential differences between the formulations in terms of their total impacts on the end-users and overall performance. The combined execution of each implementation with the local voltage control could also be explored, as potential operational synergies with certain implementations might be revealed. Finally, the testing of the new formulations within a hardware-in-the-loop setup with real electric vehicles and charging stations, as done for the current heuristic implementation, should be carried out in order to validate their operation and analyze how their execution times could affect the performance of the overall control.
Funding: This research received no external funding. | 2020-08-20T10:06:03.608Z | 2020-08-15T00:00:00.000 | {
"year": 2020,
"sha1": "0cb18580dacc851eb0e077deebc7b04a43fbc5e8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/13/16/4227/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "833daff41fa9d49a592ba491c730d258d80d7bed",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
89187040 | pes2o/s2orc | v3-fos-license | NEW Stylogaster AND RANGES OF CONOPIDAE ( DIPTERA ) FROM THE BRAZILIAN AND BOLIVIAN AMAZONIA
Specimens of Stylogaster from the Instituto Nacional de Pesquisas da Amazônia (INPA), Manaus, Brazil, are examined, of which two from Brazil are new, Stylogaster rafaeli from Rondônia and S. ctenitarsa from Roraima. These data, along with new material from Bolivia, expand the distributions of thirteen species of Conopidae. Figures for S. rafaeli sp.n. and S. ctenitarsa sp.n. are included.
INTRODUCTION
Since our last treatment of Stylogaster in 1985, more conopid material has become available for study from neotropical regions we have not previously examined. This material is particularly important in enabling us to test our earlier concepts of species and their diagnostic limits, as well as furnishing new information about their distribution. The use of the malaise trap has singularly been the most important factor contributing to the collection of this material. This paper is based on specimens from the Instituto Nacional de Pesquisas da Amazonia (INPA) in Manaus, Brazil, and contains two new Brazilian species of Stylogaster, S. rafaeli from Rondônia and S. ctenitarsa from Roraima. S. sousalopesi Camras, from this collection, was described in a previous paper (CAMRAS, 1989). It was considered appropriate to include in this study specimens taken near Buena Vista, Bolivia, because this region represents the southernmost extension of the Amazonian forest and expands the ranges of the following thirteen conopids: Physoconops (s.str.) peruviana, Stylogaster souzai, S. longispina, S. dispai, S. brasilia, S. rufa, S. rectinervis, S. banksi, S. jactata, S. lepida, S. decorata, S. peruviana, and S. plumidecorata. These details will be discussed at length under the heading of each species below. Historical information on the known ranges of species was compiled from PAPAVERO (1971) and CAMRAS & PARRILLO (1985).
METHODS AND MATERIALS
156 Conopidae from INPA and 37 Stylogaster from the collection of S. Camras were examined with a Wild M8 Zoom stereo microscope. Illustrations were made using a dedicated drawing tube. Measurements were made from the base of the antennae to the apex of the abdomen in males and to the base of the ovipositor in females. Dissections of the male genitalia were made by removing the posterior portion of the abdomen and relaxing it in a solution of potassium hydroxide (1 chip per 1 ml of water) for 12 to 24 hours at room temperature. This procedure rehydrates and clears the abdomen for dissection and examination. The dissected portion was then returned to a glycerin-filled microvial and pinned with the corresponding specimen for permanent association.
It should be noted that the drawings of the surstyli capture these highly three-dimensional structures at specific viewing angles, and observing them at other angles may make them appear quite different from what is shown in our figures.
The authors have adopted the morphological terms found in Manual of Nearctic Diptera since their paper in 1985.
The holotypes of new species will be deposited in the collection of the Instituto Nacional de Pesquisas da Amazonia (INPA), Manaus, Brazil. Paratypes will be deposited at INPA and in the S. Camras collection (SCC). Known from Mexico to Bolivia.
Known from Ecuador to Paraguay.
Known from Mexico to Argentina.
Known from Venezuela to Argentina.
Stylogaster souzai Monteiro
Stylogaster souzai Monteiro, 1960:111. A member of the stylata-group, S. souzai has until now been known only from the holotype. Known from the states of Amapá and Pará in Brazil.
Stylogaster longispina Camras & Parrillo
Stylogaster longispina Camras & Parrillo, 1985:115. This member of the stylata-group is diagnosed by the elongated styles of the aedeagus and the large flat medial tooth of the hypandrium. A similar elongation of these styles has occurred in S. rafaeli sp.n. (q.v.), but they are markedly shorter. The presence of long aedeagal styles in the ornatipes- and neglecta-groups is probably a synapomorphy, while their occurrence in S. longispina is clearly a case of convergence.
Diagnosis
With the diagnostic characters of the stylata-group. Male immediately recognized by a hippocrepiform invagination of the hypandrium, which is bordered circum-marginally by long black hairs on the fifth sternite.
Legs: All segments simple in form and pale yellow in color unless stated otherwise. Pro- and mesocoxae with black setae on anterior and posterior faces. Pro- and mesotibiae with pale black setulae on dorsal surface, glabrous ventrally. Mesofemur with ablateral longitudinal row of long fine setae on apical half. Pro- and mesotibiae densely pale setose. Protibia with several long, flat setae at apical adlateral margin. Mesotibia black setulose adlaterally, pale setulose ablaterally. All tarsi simple. Protarsus black setulose dorsally. Probasitarsus with adlateral brush of pale, ventrally directed setae along its length. Plantar surface glabrous. Mesotarsus similar to protarsus except for the absence of the basitarsal brush, and setulose on plantar surface except last tarsomere. Metacoxa brown with moderately granulate microsculpture giving it a dull sheen. Black setae on ventral margins. Metatrochanter ventrally with long black setae. Metafemur yellow with three brown bands: prebasal, preapical, and apical; black setose, adlateral margin with long black setae on basal half. Metatibia yellow, with a brown stain at middle and dark brown at apex; black setulose. Metatarsus brown, black setulose. Wings hyaline, medial vein (M1+2) broadly bowed toward ventral wing margin. Costal setulae long and semierect. Halter with base pale yellow, knob brown.
Postabdomen: Epandrium yellow, with a circular brown macula on either side; lacking macrosetae. Hypandrium broadly hippocrepiform, circum-marginally lined with long black setae. Etymology: Named in honor of José Rafael, who collected the type series.
Discussion
The apomorphous elongated aedeagal styles of S. rafaeli are doubtless a convergence, as this species clearly belongs to the stylata-group. See discussion under S. longispina. Paratypes with less black on the tergites. Length: 5.5-7.0 mm. This species keys to S. souzai in CAMRAS & PARRILLO (1985) but is easily distinguished by the characters mentioned in the diagnosis. Figure 1 drawn from paratype. Previously known from the monotype male from Cuzco, Peru.
Diagnosis
With all the diagnostic characters of the rectinervis-group, S. ctenitarsa is distinguished by an ablateral, longitudinal row of hairs on the protarsus.
Description
Head: Vertex dark brown. Frons brown, becoming rufous anteriorly. Five proclinate frontal bristles, increasing in size anteriorly. Ocellar triangle equilateral in shape, pale, posteriorly brown. Ocellar tubercle black; ocelli circular, amber colored. Triangle proceeding to a little behind middle of frons. Three ocellar bristles present behind median ocellus; right side two, left side one (see discussion). Two prominent post-ocellar bristles. Frontal lunule, facial ridges and parafacials pale yellow, with light-directional silky sheen. Facial ridge medially carinate. Eyes with anterior facets about three diameters larger than lateral facets. Basal antennomere yellow. Second antennomere pale yellow, with uniform covering of sparse black setulae, less so on medial lateral face. Third antennomere yellow, apical half infuscated; subequal in length to second antennomere. Arista brownish, infuscated. Proboscis basally pale, otherwise blackish brown, labella yellow. Postcranium cinereous, with pale occipital and postocular setae.
Thorax: Reduced macular pattern for the stylata-group. Presutural scutum pale yellow, except for an insular curved brown macula medial to the post-pronotal lobes and extending almost to the transverse suture. Central vitta evanescent. Chaetotaxy of presutural scutum: 1 notopleural bristle, 1 post-pronotal bristle. The postsutural scutum generally pale yellow, with a slight hint of a brown stain along the longitudinal axis of the supra-alar bristle. Chaetotaxy of postsutural scutum: 1 supra-alar, 2 postalar, and 1 dorsocentral (a smaller bristle also present in front of the dorsocentral). The scutellum is very light brown, with 1 pair of scutellar bristles. The pleurae are generally pale yellow except for a brown crescent-shaped stain on the meso-anepisternum below the notopleural bristles and anterior to the wing base. Mediotergite and laterotergite yellow. Chaetotaxy of pleurae: 1 proepisternal (missing but indicated by setal sockets) and 1 meso-anepimeral.
Legs: All segments pale yellow unless stated otherwise. Pro- and mesocoxae with black setae on anterior and posterior faces. Pro- and mesotibiae simple in form, black setulate on dorsal surface, glabrous ventrally. Mesofemur with ablateral longitudinal row of long fine setae on apical 0.66. Pro- and mesotibiae densely pale setulose, with some black setulae on adlateral basal 0.20 of protibia and adlateral basal half of mesotibia. Pro- and mesotibiae with several long, flat, pale setae at apical adlateral margin. All tarsomeres simple in form. Probasitarsus pale setulose dorsally, the remaining portions and other tarsomeres black setulose above. The plantar surfaces of protarsomeres glabrous. All protarsomeres with adlateral brush of short, pale, ventrally directed setae along their length. Immediately dorsal to this row of setae is another row of long pale setae which extend ablaterally to the longitudinal axis of the tarsus, forming a plumose tarsal comb, the length of the setae gradually diminishing toward the tarsal apex (Fig. 2a). Postabdomen: Epandrium pale, lacking macrosetae. Hypandrium with bulbous swelling on either side, externally with only a few black setae (Fig. 2b). Internal orifice ventrally U-shaped, internally with a long, narrow tooth (Fig. 2b). Aedeagus with long membranous lobe (Fig. 2c). Surstyli as in Figures 2d and 2e. Parameres produced as laminate lobes with proclinate rows of black setae. Cerci oval in outline, with dorsal setae and a central field of strong black setae (Fig. 2f).
Ecology: No data available on hosts or specific ecitonine associations.
Etymology: Named for the conspicuous protarsal brushes.
Discussion
This species belongs to the rectinervis-group and keys to S. rufa in CAMRAS & PARRILLO (1985), from which it may be distinguished by the pale mesonotum, a black macula near the humeral callosity, and the long lateral hairs of the protarsus.
The phylogenetic relationship of style development in S. longispina and S. rafaeli is presently unknown. Previously known from Peru and Bolivia.
Metatibia black setulose, the setulae decumbent basally and longer and erect toward the apex. Some white setulae at basal 0.33. Apex pale and with a preapical brown band. Metatarsus brown, black setulose. Wings hyaline, medial vein (M1+2) straight. Costal setulae moderately long and semierect. Halter with base pale yellow, knob brown. Abdomen: Tergites generally pale yellow with a covering of black, decumbent setulae. Only faint traces of triangular maculae. Tergite 1 with lateral callosity bearing several black setae. Tergite 2 with anterolateral margin lined with 4 black macrosetae.
"year": 1995,
"sha1": "81d95c86e1db1096bc0c9331955835683aa74e19",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/aa/a/DdgkWNH6LLpNfwHh8NRgQXs/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "81d95c86e1db1096bc0c9331955835683aa74e19",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
119128664 | pes2o/s2orc | v3-fos-license | Conformal welding for critical Liouville quantum gravity
Consider two critical Liouville quantum gravity surfaces (i.e., $\gamma$-LQG for $\gamma=2$), each with the topology of $\mathbb{H}$ and with infinite boundary length. We prove that there a.s. exists a conformal welding of the two surfaces, when the boundaries are identified according to quantum boundary length. This results in a critical LQG surface decorated by an independent SLE$_4$. Combined with the proof of uniqueness for such a welding, recently established by McEnteggart, Miller, and Qian (2018), this shows that the welding operation is well-defined. Our result is a critical analogue of Sheffield's quantum gravity zipper theorem (2016), which shows that a similar conformal welding for subcritical LQG (i.e., $\gamma$-LQG for $\gamma\in(0,2)$) is well-defined.
Introduction
Let D 1 and D 2 be two copies of the unit disk D, and suppose that φ : ∂D 1 → ∂D 2 is a homeomorphism. Then φ provides a way to identify the boundaries of D 1 and D 2 , and hence produce a topological sphere. The classical conformal welding problem is to endow this topological sphere with a natural conformal structure. When the sphere is uniformised (i.e., when it is conformally mapped to S 2 ) we get a simple loop η on S 2 , which is the image of the unit circle. Equivalently, the conformal welding problem consists of finding a triple {η, ψ 1 , ψ 2 }, where η is a simple loop on S 2 , and ψ 1 and ψ 2 are conformal transformations taking D 1 and D 2 , respectively, to the two components of S 2 \ η, such that φ = ψ −1 2 • ψ 1 . If such a triple exists and is uniquely determined by φ (up to Möbius transformations of the sphere) then one says that the conformal welding (associated to φ) is well-defined.
The extension of this problem to the setting of random homeomorphisms has received much attention in recent years; in particular, when the random curves and homeomorphisms are related to natural conformally invariant objects such as Schramm-Loewner evolutions (SLE) and Liouville quantum gravity (LQG). This will be the focus of the present paper. In particular, we consider the case of critical (γ = 2) LQG, which is associated with SLE 4 .
Roughly speaking, LQG is a theory of random fractal surfaces obtained by distorting the Euclidean metric by the exponential of a real parameter γ times a Gaussian free field (GFF). Such random surfaces give rise to random conformal welding problems, for instance, when the homeomorphism φ corresponds to gluing the boundaries of two discs according to their LQG-boundary lengths. Weldings of this type have been studied in several recent works [3,4,28,11,20]. In particular, for a class of homeomorphisms defined in terms subcritical LQG measures (γ-LQG for γ ∈ (0, 2)) existence and uniqueness of the conformal welding was established by Sheffield [28], and the interface η was proven to have the law of an SLE κ with κ = γ 2 ∈ (0, 4). Uniqueness of a random conformal welding where the interface η has the law of an SLE 4 was recently established by McEnteggart, Miller, and Qian [20].
Let us now make the set-up more precise. Given a parameter γ ∈ (0, 2], a simply connected domain D ⊂ C, and an instance h of (some variant of) a GFF on D, one would heuristically like to define the γ-LQG "surface" associated with (D, h) to be the 2d Riemannian manifold with metric tensor e γh (dx 2 + dy 2 ) on D. This definition does not make rigorous sense since h is a distribution and not a function, but one can prove by regularising the field ( [17,25,14,7]) that h induces a so-called "γ-LQG area measure" µ γ h in D (with formal definition e γh(z) dxdy) and a "γ-LQG boundary length measure" ν γ h along ∂D (with formal definition e (γ/2)h(x) ds). The case γ = 2 is known as critical, because the regularisation procedure used when γ ∈ (0, 2) breaks down at this point, and defining the critical measure requires a different strategy.
In fact, it is more convenient to consider this problem in the setting where (D i , h i ) for i = 1, 2 have infinite boundary length. To explain the interpretation of the conformal welding problem in this framework, and to state our main theorem, we need the following definition. For a simply connected domain D ⊆ C let H −1 loc (D) denote the space of generalised functions h on D such that for any open set U with U D, the distribution h| U is in the Sobolev space H −1 (U ). where Q γ = 2/γ + γ/2. 1 It follows from the regularisation procedure used to define the LQG measures that if h 1 and h 2 are related as in (1.1), then the push-forward of µ γ h2 (resp., ν γ h2 ) by ψ is equal to µ γ h1 (resp., ν γ h1 ). In this paper the distribution h will always be a Gaussian free field or a related kind of distribution. We think of two equivalent pairs (D 1 , h 1 ) and (D 2 , h 2 ) as two different parametrisations of the same γ-LQG surface; indeed, the previous paragraph implies that they describe equivalent LQG measures. We will often abuse notation and refer to (D, h) as a γ-LQG surface, i.e., we identify (D, h) with its equivalence class. If we introduce a γ-LQG surface S by writing S = (D, h) we mean that S is a γ-LQG surface (i.e., an equivalence class) while (D, h) is a particular parametrisation of this surface. Recall that by the Riemann mapping theorem, a quantum surface comes equipped with a well-defined notion of topology: either that of H (equivalently, some other bounded simply connected domain), C, or S 2 .
Let us now come back to conformal welding: we will consider the following alternative version of the problem. Suppose that H 1 , H 2 are two copies of the upper half-plane and φ is a homeomorphism from R + to R − . The problem is to find a triple {η, ψ 1 , ψ 2 }, where η is a simple curve in H from 0 to ∞ and ψ 1 , ψ 2 are conformal transformations taking H 1 and H 2 to the two components of H \ η, such that φ = ψ −1 2 • ψ 1 . If such a triple exists and is unique then we say that the conformal welding associated to φ is well-defined.
Theorem 1.2 Let (H, h, 0, ∞) be a (2, 1)-quantum wedge and let η be an SLE_4 from 0 to ∞ in H sampled independently of h. Let D_L and D_R denote the components of H \ η to the left and right of η, respectively, and set S_L = (D_L, h|_{D_L}) and S_R = (D_R, h|_{D_R}). Then S_L and S_R are independent 2-LQG surfaces, and each surface has the law of a (2, 2)-quantum wedge. Furthermore, the quantum boundary lengths along η as defined by S_L and S_R agree.
Figure 1: Illustration of the conformal welding problem. We get a topological half-plane by welding together the two surfaces S_L and S_R. By Corollary 1.4, if S_L and S_R are independent (2, 2)-quantum wedges and the welding is defined in terms of 2-LQG boundary length, then the resulting surface (a (2, 1)-quantum wedge) has an a.s. uniquely defined conformal structure, and the interface η has the law of an SLE_4.
Figure 2: Consider a (2, 1)-quantum wedge (H, h, 0, ∞) decorated by an independent SLE_4 η. The quantum zipper identifies segments [0, X(t)] and [Y(t), 0], each of quantum length t > 0. This gives a new surface/curve pair (h_t, η_t) with the same law as before. By Theorem 1.5, the processes of zipping up and zipping down are measurable with respect to (h, η).
We remark that independence of the 2-LQG surfaces S L and S R in Theorem 1.2 does not mean that the fields h| D L and h| D R are independent; these two fields are dependent e.g. since they induce the same quantum length measure along η. Instead, we have independence of the two surfaces viewed as equivalence classes. This means that if we embed the two surfaces in some standard form then the fields in this embedding are independent. Explicitly, if (H, h L , 0, ∞) is an embedding of S L such that (say) the unit half-circle has unit mass, and (H, h R , 0, ∞) is defined similarly for S R , then the fields h L and h R are independent.
By Theorem 1.2 we have a quantum length measure along η which is defined by considering the LQG boundary measure of the surfaces S L and S R . We remark that this length measure along η can be defined equivalently in a more intrinsic way by considering e h dm, where m is the measure supported on η given by its 3/2-dimensional Minkowski content. This equivalence was proved for the subcritical zipper in [5] and the critical case follows by the same argument.
The following uniqueness result concerning the conformal welding problem of Theorem 1.2 was recently established in [20,Theorem 2]. This a.s. gives a uniquely defined conformal welding of the two 2-LQG surfaces such that the interface η between the surfaces has the law of a chordal SLE 4 .
Observe that the conformal welding in this corollary is not proven to be the unique conformal welding among all possible conformal weldings; since it is assumed in Theorem 1.3 that the curves ϕ(η) and η both have the law of SLE 4 curves, we only obtain uniqueness among the weldings for which the interface has this law. The uniqueness result can be strengthened to curves a.s. satisfying certain deterministic geometric properties by using the stronger variant of Theorem 1.3 found in [20,Theorem 2].
We also obtain a dynamic version of the critical conformal welding, analogous to Sheffield's quantum gravity zipper [28,Theorem 1.8] in the case γ ∈ (0, 2). See Figure 2 for an illustration. Theorem 1.5 Let (H, h 0 , 0, ∞) be the equivalence class representative of a (2, 1)-quantum wedge with the last exit parametrization (see Definition 2.2). 2 Let η 0 be an SLE 4 from 0 to ∞ in H which is independent of h 0 . Then for every t > 0 there exists a conformal map f t defined on H, which is measurable with respect to h 0 , such that: • (h t , η t ) has the same law as and [Y (t), 0] to the right-and left-hand sides of η t \ f t (η 0 ), respectively, and for every s ≤ t, X(s) and Y (s) are mapped to the same point on η t \ f t (η 0 ). This gives rise to a bi-infinite process (h t , η t ) t∈R , such that: • (h t , η t ) t∈R is measurable with respect to (h t0 , η t0 ) for any t 0 ∈ R; and • (h t , η t ) t∈R is stationary, i.e., for any t 0 ∈ R the two processes (h t0 , η t0 ) t∈R and (h t0+t , η t0+t ) t∈R are equal in law.
As described in [28] we can think of the operation (h 0 , η 0 ) → (h t , η t ) for t > 0 as zipping up the surfaces h 0 | D L , h 0 | D R to the left and right of η 0 by t units of quantum boundary length. Similarly, we think of the operation (h 0 , η 0 ) → (h t , η t ) for t < 0 as zipping down.
Related works
Conformal weldings related to LQG were first studied in [3,4], where it was proven that the conformal welding of a subcritical LQG surface to a Euclidean disk according to boundary length is a.s. well-defined (see [29] for the case of critical LQG). In Sheffield's breakthrough work [28] it is shown that the conformal welding of two subcritical LQG surfaces is a.s. well-defined, and that the interface is given by an SLE κ curve. More precisely, the following is proved. Theorem 1.6 (Sheffield '16) Consider two (γ, γ)-quantum wedges S L = (H, h L , 0, ∞) and S R = (H, h R , 0, ∞), with γ ∈ (0, 2), and identify the boundary arc [0, ∞) of S L to the boundary arc (−∞, 0] of S R according to γ-LQG boundary length. This a.s. gives a uniquely defined conformal welding of the two γ-LQG surfaces. In this conformal welding, the interface η between the surfaces has the law of a chordal SLE γ 2 , and the combined surface 4 has the law of a (γ, γ − 2/γ)-quantum wedge that is independent of η.
The existence part of Theorem 1.6 is established by studying a certain coupling between a GFF and a reverse SLE κ , where the law of the GFF is invariant under zipping up and down the SLE κ . The uniqueness part follows from [16], where Jones and Smirnov proved that the boundaries of Hölder domains are conformally removable, and [26], where Rohde and Schramm proved that the complement of an SLE κ for κ ∈ (0, 4) is a.s. a Hölder domain. For an overview of the proof, we recommend the notes [6].
Remark 1.7 The analogue of Theorem 1.5 is also proved in [28,Theorem 1.8] in the case γ ∈ (0, 2). That is, starting with the curve and combined surface described at the end of Theorem 1.6 (let us call them (h 0 , η 0 )) we get a bi-infinite stationary process (h t , η t ) t∈R that is measurable with respect to (h 0 , η 0 ). Duplantier, Miller, and Sheffield [11] have also studied problems closely related to conformal welding. In particular, they proved that if one considers an SLE κ η on an independent γ-LQG surface S, where κγ 2 = 16, then η is measurable with respect to a pair of so-called forested wedges. These wedges are the restrictions of S to the components of the complement of η -one consisting of components traced anti-clockwise by η, and the other consisting of components traced clockwise -along with topological information (encoded by a pair of Lévy processes) about how these components are glued together. A number of other measurability results concerning welding of general LQG surfaces are established in the same paper. We note however that these measurability results are of a weaker kind than, for example, the result in [28]. For instance, uniqueness of the "gluing" of forested wedges described above is only proved under the assumption that the resulting field h and curve η have a particular joint law.
As already mentioned, McEnteggart, Miller, and Qian in the recent paper [20], have also proved uniqueness of conformal weldings in certain settings. More precisely, they prove that if η is a curve in H and ψ : H → H is a homeomorphism which is conformal on H \ η, then ψ is in fact conformal as soon as η and ψ(η) satisfy certain geometric regularity conditions. These conditions are in particular satisfied a.s. if η and ψ(η) both have the law of an SLE κ for κ ∈ (0, 8). Their result is new for κ ∈ [4,8), while it follows from conformal removability for κ ∈ (0, 4).
Outline
The rest of the article is structured as follows. We begin in Section 2 by collecting relevant definitions: of the Gaussian free field and its variants; LQG surfaces and their parametrisations; and the specific quantum surfaces known as quantum wedges that will be particularly important in this paper. Here we also describe the construction of boundary LQG measures, and discuss some properties of these measures that are needed in what follows. In particular we will make use of a connection between subcritical and critical measures, that is a consequence of [2]. We conclude the preliminaries by briefly introducing Schramm-Loewner evolutions, and proving some basic convergence results that will be useful later on.
Sections 3 and 4 provide the key ingredients (Propositions 3.1 and 4.4, respectively) for the proofs of Theorems 1.2 and 1.5. In Section 3 it is shown that if one observes a 2-LQG surface in a small neighbourhood of a critical LQG-measure typical boundary point, then it closely resembles a (2, 2)-quantum wedge. This gives the critical LQG analogue of [28, Proposition 1.6], justifies why the (2, 2)-quantum wedge is a natural quantum surface (to our knowledge this is the first time that this surface is defined in the literature), and is important to identify the laws and establish independence of the quantum surfaces S L and S R in the proof of Theorem 1.2.
In Section 4 we prove that Sheffield's subcritical quantum gravity zipper (defined for γ ∈ (0, 2)), has a limit in a strong sense as γ ↑ 2. This is shown by proving and combining various convergence results concerning reverse SLE κ=γ 2 and γ-LQG measures as γ ↑ 2. The proof requires a careful study of quantum wedges and their associated measures in a neighbourhood of the origin, and analysis of the Loewner equation for points on the real line. As a consequence of this section, we obtain Theorem 1.5. Finally, in Section 5 we show how the main results of the previous sections allow us to deduce Theorem 1.2.
It is also worth taking a moment now to discuss why the proof in [28] does not generalise straightforwardly to the critical case. At a very high level, the key difficulties are: (a) lack of first moments for critical LQGmeasures; and (b) non-Gaussian conditioning for the law of the field around "quantum typical points". To explain this in more detail, we first need to describe the general strategy of [28] (for a more complete overview, the reader should consult [28] or [6]). As in the present paper, the fundamental object to construct is the quantum gravity zipper : a dynamic coupling between a (γ, γ − 2/γ)-quantum wedge and an SLE κ=γ 2 analogous to the coupling described in Theorem 1.5. From this, the analogue of Theorem 1.2 follows fairly easily.
In order to construct the subcritical quantum gravity zipper, Sheffield first describes a different dynamic coupling, this time between an SLE κ and a Neumann GFF plus a log singularity, that he calls the "capacity zipper". The existence of this coupling is straightforward to prove using a martingale argument. From here, roughly speaking, the "quantum zipper" can be obtained by "zooming in" at the origin of the capacity zipper. One key tool that is made use of (see, for example, [28, Proposition 1.6]) is a nice description of the field plus a γ-quantum typical point, when the field is weighted by γ-LQG boundary length. The difficulty with this in the critical case is that, in contrast to the subcritical setting, critical LQG measures assign mass with infinite expectation to finite intervals. Although this issue is actually possible to circumvent for many purposes -we will do exactly this using a truncation argument in Section 3 -it causes significant problems if we want to say anything precise about the joint law of the curve and the surface in the critical analogue of the capacity zipper, at a time when a critical quantum typical point is "zipped up" to the origin. An additional technical difficulty is created by the fact that critical measures need to be defined using a different approximation procedure to subcritical measures (see Section 2.3). This means that the law of the field around a quantum-typical point is no longer described in terms of its original law via a simple Girsanov shift, and makes it difficult to describe how the law of the curve changes in the context mentioned above. For example, it is unclear if it will simply add a drift to the reverse SLE driving function, as is the case when γ ∈ (0, 2).
Although it may be possible to obtain the results of this paper by adapting the method of [28] in some way, for the sake of avoiding significant additional technicalities we have chosen the approximation approach.
Acknowledgements N.H. acknowledges support from Dr. Max Rössler, the Walter Haefner Foundation, and the ETH Zürich Foundation. E.P. is supported by the SNF grant #175505. Both authors would like to express their thanks to Juhan Aru, for his valuable input towards the initiation and strategy of this project, and for numerous helpful discussions. They also thank an anonymous referee for his or her careful reading of the paper and for helpful comments.
Gaussian free field
Let D ⊂ C be a domain with harmonically non-trivial boundary, i.e., such that a Brownian motion started at some point in D hits ∂D a.s. Let C^∞_0(D) denote the space of infinitely differentiable functions on D with compact support. For f, g ∈ C^∞_0(D), define the Dirichlet inner product ⟨f, g⟩_∇ of f and g as the integral of ∇f · ∇g over D (suitably normalised). Let H_0(D) denote the Hilbert space closure (with respect to this inner product) of the subspace of functions f ∈ C^∞_0(D) with ‖f‖_∇ := ⟨f, f⟩_∇^{1/2} < ∞. Let f_1, f_2, ... be a ⟨·, ·⟩_∇-orthonormal basis for H_0(D). The zero boundary Gaussian free field (GFF) h is then defined by setting h := ∑_{j≥1} α_j f_j (2.1), where α_1, α_2, ... ∼ N(0, 1) are independent. The convergence of (2.1) does not hold in H_0(D) itself, but rather in a space of generalised functions. More precisely, let H^{-1}(D) be the dual space of H_0(D), equipped with the dual norm ‖k‖_{H^{-1}(D)} := sup{(k, f) : f ∈ H_0(D), ‖f‖_∇ ≤ 1}, where we use the notation (k, ·) for the action of k on H_0(D). Then the series (2.1) converges a.s. in H^{-1}_loc(D) and the Gaussian free field h is defined as an element of this space a.s. In particular, h is a.s. a random distribution; as above, we write (h, f) for the action of h on f ∈ C^∞_0(D). We note that when D is bounded, the series actually converges a.s. in H^{-1}(D) and so h is a.s. an element of this space.
Finally, we mention that (h, f) in fact makes sense (as an a.s. limit) for a larger class of test functions f than C^∞_0(D); when D is bounded, for instance, this class is exactly the set of functions f in H^{-1}(D). For any given bounded and measurable ρ : ∂D → R, the GFF with Dirichlet boundary condition ρ is defined to be a random distribution with the law of h plus the harmonic extension of ρ to the interior of D.
To define a mixed boundary condition GFF, assume that ∂D is divided into two boundary arcs ∂ D and ∂ F , and that a function ρ : ∂D → R satisfying ρ| ∂F = 0 is given. Write ρ for the harmonic extension of ρ to D and let H ∂D,∂F (D) be the Hilbert space closure of the subspace of functions f ∈ C ∞ (D) with f ∇ < ∞ and f | ∂D = 0. The mixed boundary GFF with Dirichlet boundary data ρ on ∂ D , is then defined to be a random distribution with the law of h + ρ, where h is now defined by (2.1) with f 1 , f 2 , . . . an orthonormal basis for H ∂D,∂F (D).
To define the free boundary GFF (equivalently, the Neumann GFF), consider the subspace of functions f ∈ C ∞ (D) with f ∇ < ∞. Notice that ·, · ∇ is degenerate on this subspace of functions, in the sense that f C , g ∇ = 0 for any g if f C ≡ C ∈ R. However, ·, · ∇ defines a positive definite inner product as soon as we quotient the space by identifying functions that differ by an additive constant. Write H(D) for the Hilbert space closure of this quotient space with respect to the inner product ·, · ∇ . The free boundary GFF h is then defined by (2.1), where f 1 , f 2 , . . . is now an orthonormal basis for H(D). Again the convergence of the defining sum does not take place in H(D) itself, but in the quotient space of H −1 loc (D) under the equivalence relation that identifies elements differing by an additive constant. We therefore define the free boundary GFF as an element of H −1 loc (D), modulo an additive constant, i.e., h and h + C are identified for any C ∈ R. One may fix the additive constant in various ways, for example by requiring that the average of h over some fixed set is 0.
Finally, we mention that if f ∈ H(D) and h is a Neumann GFF in D, then the law of h + f is absolutely continuous with respect to the law of h. Indeed, by standard theory of Gaussian processes, the Radon-Nikodym derivative of the former with respect to the latter is proportional to e^{⟨h,f⟩_∇}, where ⟨h, f⟩_∇ := lim_{n→∞} ∑_{j=1}^{n} α_j ⟨f_j, f⟩_∇.
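For completeness, the proportionality constant can be made explicit. Since ⟨h, f⟩_∇ is a centred Gaussian variable with variance ‖f‖²_∇, the Cameron–Martin formula gives
\[
\frac{\mathrm{d}\,\mathrm{Law}(h+f)}{\mathrm{d}\,\mathrm{Law}(h)}(h) \;=\; \exp\Big(\langle h, f\rangle_\nabla - \tfrac{1}{2}\,\|f\|_\nabla^2\Big),
\]
the factor e^{-‖f‖²_∇/2} being exactly the normalisation needed for the density to integrate to one, since E[e^{⟨h,f⟩_∇}] = e^{‖f‖²_∇/2}.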
Quantum wedges
Recall the definition of a γ-LQG surface from the introduction (Definition 1.1).
Quantum wedges are a particular family of doubly-marked LQG surfaces which were originally introduced in [28] (see also [11]). We will parametrise these surfaces by (H, h, 0, ∞) throughout most of the paper, but also sometimes by the strip S = R × [0, π] with marked points at ±∞. These will be related by the conformal map φ from (2.4), which sends ∞ (resp., −∞) to 0 (resp., ∞). When we discuss quantum wedges, there will be two parameters of interest. The first parameter γ specifies how we are defining equivalence classes of quantum surfaces (i.e., it plays the role of the parameter γ in Definition 1.1) and the second parameter α specifies the weight of a logarithmic singularity that we are placing at the origin. We refer to the surface as a (γ, α)-quantum wedge. In this paper we will actually only consider (γ, γ − 2/γ)-quantum wedges and (γ, γ)-quantum wedges for γ ∈ (0, 2]. The case γ = 2 has not been considered in earlier papers, but the definition from [28,11] extends in a natural way to this case. Before we state the formal definition of the (γ, α)-quantum wedge we need to introduce some notation.
Since a doubly-marked quantum surface actually refers to an equivalence class, and since for any a > 0 the map z → az defines a conformal map from (H, 0, ∞) to (H, 0, ∞), there are several different fields h that describe the same quantum surface (H, h, 0, ∞). It is therefore convenient to decide on a canonical way to choose h from the set of possible fields, or a "canonical parametrisation". We will consider the last exit parametrisation in most of this paper, since this parametrisation leads to the cleanest definition of (2, 2)-quantum wedges. Note that this is different from the unit circle parametrisation considered in [11].
Definition 2.2 The last exit (resp., unit circle) parametrisation of a doubly-marked γ-quantum surface S with the topology of H, is defined to be the representative (H, h, 0, ∞) of S such that if h rad (r) is the average of h on the semi-circle of radius r around 0 (i.e., h rad is the projection of h onto H 1 (H)), then s → h rad (e −s ) − Q γ s hits 0 for the last (resp., first) time at s = 0.
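Written out, the condition in Definition 2.2 states that the last exit (resp., unit circle) parametrisation is the representative (H, h, 0, ∞) for which
\[
\sup\{\, s \in \mathbb{R} : h_{\mathrm{rad}}(e^{-s}) - Q_\gamma s = 0 \,\} = 0
\qquad\Big(\text{resp., }\; \inf\{\, s \in \mathbb{R} : h_{\mathrm{rad}}(e^{-s}) - Q_\gamma s = 0 \,\} = 0\Big).
\]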
If the last exit parametrisation of a surface exists (i.e., if h_rad(r) + Q_γ log r ≠ 0 for all r > 0 small enough) it can easily be seen to be unique, by mapping the surface to the strip S with the map φ from (2.4). Let h_circ = h − h_rad be the projection of h onto H_2(H), and write h^GFF_circ for the law of this field when h is a Neumann GFF on H. Observe that this describes the law of a well-defined element of H^{-1}_loc(H) (i.e., not only an element up to an additive constant).
Remark 2.4 In [28,11] the (γ, α)-quantum wedge is defined to be the γ-quantum surface whose unit circle parametrisation is given by (H, h, 0, ∞), where: h = h circ + h rad ; h circ is as in Definition 2.3; h circ and h rad are independent; and h rad (e −s ) is equal to B 2s + αs for s ≥ 0, and to B −2s + αs conditioned to stay above s → Q γ s for s < 0.
We show in Lemma 2.8 below that this definition is equivalent to Definition 2.3. In Definition 2.3 we require α to be strictly smaller than Q_γ, and one can check that this is satisfied for α = γ when γ ∈ (0, 2). However, we are also interested in the case γ = 2, where we have Q_γ = 2 = γ. Thus, we need to give a definition of the following surface, which arises as a limit of a (γ, γ)-quantum wedge when γ ↑ 2.
Definition 2.5 We define the (2, 2)-quantum wedge to be the doubly-marked 2-quantum surface whose last exit parametrisation (H, h, 0, ∞) can be described as follows: • (h_rad(e^{−s}))_{s≥0} has the law of (−B_{2s} + 2s)_{s≥0}, where B is a 3-dimensional Bessel process started from 0.
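As an illustration of Definition 2.5, the radial part s ↦ h_rad(e^{−s}) can be simulated directly; in the sketch below the 3-dimensional Bessel process is realised as the modulus of a 3-dimensional Brownian motion, and the step size, horizon and seed are arbitrary illustrative choices rather than anything prescribed by the paper.

```python
import numpy as np

def bessel3(n_steps, dt, rng):
    """3-dimensional Bessel process started from 0, realised as |3-d Brownian motion|."""
    increments = rng.normal(scale=np.sqrt(dt), size=(n_steps, 3))
    paths = np.cumsum(increments, axis=0)
    return np.concatenate(([0.0], np.linalg.norm(paths, axis=1)))

def wedge_radial_part(s_max=5.0, ds=1e-3, seed=0):
    """Sample s -> h_rad(e^{-s}) for a (2,2)-quantum wedge as in Definition 2.5:
    -B_{2s} + 2s with B a 3-dimensional Bessel process started from 0."""
    rng = np.random.default_rng(seed)
    s = np.arange(0.0, s_max + ds, ds)
    B = bessel3(len(s) - 1, 2.0 * ds, rng)  # time change s -> 2s
    return s, -B + 2.0 * s

if __name__ == "__main__":
    s, h_rad = wedge_radial_part()
    print(h_rad[:3], h_rad[-1])
```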
The (γ, γ)-quantum wedges are of particular interest since they may be obtained by sampling a point from the boundary γ-LQG measure and then "zooming in" near this point. This was established in [28] for γ ∈ (0, 2), and Proposition 3.1 below is a variant of this result for γ = 2.
Remark 2.6 The last exit parametrization is more convenient than the unit circle parametrization for the (2, 2)-quantum wedge since with the unit circle parametrization any neighborhood of zero has infinite mass a.s. This can be seen by using that with the unit circle parametrization, the field (h rad (e −s )) s≥0 has the law of (B 2s + 2s) s≥0 for B a standard Brownian motion started from 0.
In some of our proofs it will be convenient to parametrise the quantum wedges by the strip S instead of the upper half-plane H. Recall that H(S) denotes the Hilbert space closure of the subspace of functions f ∈ C ∞ (S) with f ∇ := (f, f ) ∇ < ∞, defined modulo additive constant. By [11,Lemma 4.2], H(S) = H 1 (S) ⊕ H 2 (S) is an orthogonal decomposition of H(S), where H 1 (S) is the subspace of functions f ∈ H(S) that are constant on all line segments {x} × [0, π] for x ∈ R (considered modulo an additive constant), and H 2 (S) is the subspace of functions f ∈ H(S) that have mean zero on all such line segments. Let h GFF,S circ denote a field with the law of a Neumann GFF on S projected onto H 2 (S) (as in the case of H, this is a well-defined element of H −1 loc (S)). The strip is convenient to work with since the term Q γ log |φ | in the coordinate change formula (1.1) is equal to zero for conformal transformations of the kind z → z + a for a ∈ R (these are precisely the conformal maps from S to itself that map +∞ to +∞ and −∞ to −∞, and correspond after conformal mapping to dilations of H). Furthermore, as the following remark illustrates for the case of the (2, 2)-quantum wedge, the quantum wedges defined above have a somewhat nicer description when parametrised by the strip.
Remark 2.7 When the (2, 2)-quantum wedge is parametrised by the strip S, so that its field is written as h = h_rad + h_circ (as a distribution modulo an additive constant), the following hold: • (h_rad(s))_{s≥0} has the law of (−B_{2s})_{s≥0}, where B is a 3-dimensional Bessel process starting from 0.
and let h_u = h_u,rad + h_u,circ and h_ℓ = h_ℓ,rad + h_ℓ,circ be the orthogonal decompositions of these fields. Then h_u,circ and h_ℓ,circ are both equal in distribution to h^{GFF,S}_circ. For B a standard Brownian motion, (h_ℓ,rad(s))_{s≥0} has the law of (B_{2s} + (α − Q)s)_{s≥0} conditioned to be negative for s > 0, and (h_ℓ,rad(s))_{s≤0} has the law of (B_{−2s} + (α − Q)s)_{s≤0}. Furthermore, (h_u,rad(s))_{s≥0} has the law of (B_{2s} + (α − Q)s)_{s≥0}, and (h_u,rad(s))_{s≤0} has the law of (B_{−2s} + (α − Q)s)_{s≤0} conditioned to be positive for s < 0. Let a = inf{t ≤ 0 : h_ℓ,rad(t) < 0}. We conclude by observing that if we apply the change of coordinates z → z − a to the field h_ℓ, we get a field with the law of h_u; this can e.g. be deduced from the last assertion of [24, Lemma 3.4] and [24, Remark 3.5], which refers to [30].
Gaussian multiplicative chaos and the Liouville measures
In this section, we give a proper definition of the boundary Liouville quantum gravity measures described in the introduction. For a much more complete survey, including the case of bulk LQG measures, we refer the reader to [25,14,7] for the subcritical case and to [23,12,13,22] for the critical case.
In the following, when we refer to the topology of local weak convergence for measures on R, we mean the topology in which μ_n → μ if and only if ∫ f dμ_n → ∫ f dμ for every continuous, compactly supported f : R → R. The following statement comes from [14] when γ ∈ (0, 2), and from [22] when γ = 2 (with a trivial adaptation of the argument from the bulk to the boundary measure).
Lemma 2.9 Suppose that γ ∈ (0, 2] and let h be a Neumann GFF in H with some fixed choice of additive constant, or a GFF with mixed boundary conditions on D^+ = D ∩ H (free on ∂D^+ ∩ R, and Dirichlet with some ρ on ∂D^+ \ R). Let h_ε denote the ε semi-circle average field of h on R, let dz denote Lebesgue measure on R, and define the approximating measures ν^γ_{h,ε} (resp., ν_{h,ε} when γ = 2) from h_ε. Then ν^γ_{h,ε} converges in probability to a limiting measure ν^γ_h (resp., ν_{h,ε} converges in probability to a limiting measure ν_h when γ = 2) as ε → 0. These convergences are with respect to the topology of local weak convergence of measures on R.
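For concreteness, one standard normalisation of the approximating measures in Lemma 2.9 (recorded here as an assumed convention, since normalisations differ between references and the paper's exact display is not reproduced here) is:

```latex
\nu^{\gamma}_{h,\varepsilon}(dz) \;=\; \varepsilon^{\gamma^2/4}\, e^{\frac{\gamma}{2} h_\varepsilon(z)}\, dz \quad (\gamma \in (0,2)),
\qquad
\nu_{h,\varepsilon}(dz) \;=\; \Big(\log\tfrac{1}{\varepsilon} - \tfrac{1}{2}\,h_\varepsilon(z)\Big)\, \varepsilon\, e^{h_\varepsilon(z)}\, dz \quad (\gamma = 2).
```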
Lemma 2.10 The result of Lemma 2.9 also holds when h is the field of a (γ′, α)-quantum wedge in the last exit parametrisation, with 2 ≥ γ′ ≥ γ and α < Q_{γ′} ≤ Q_γ.
Note that we do not require γ′ = γ here. We need to work in this set-up in, for example, Lemma 2.13. Proof. For notational simplicity we work in the case γ ∈ (0, 2), but the argument when γ = 2 is the same. Without loss of generality, it suffices to show that ν^γ_{h,ε} converges in probability, as a measure on [−1, 1], as ε → 0. To show this, we will explain how to obtain h|_{D^+} from a field that is absolutely continuous with respect to a Neumann GFF (by re-centring around a ν_α-typical point). The result then follows from Lemma 2.9.
More precisely, we consider the following construction. Let P denote the law of a Neumann GFF on H, with additive constant fixed so that its average on the unit semi-circle is equal to 0, and write (h′, z) for a pair consisting of a field and a boundary point, with a joint law obtained by suitably reweighting P (cf. [14, §6.3]). Let h̃ be the field h′ after re-centring around the point z, i.e., h̃ = h′(· + z).
Then it follows from [14, §6.3] that if h rad is the projection of h onto H 1 (H) (and h rad (s) denotes its common value on the semi-circle of radius e −s around 0) then ( h rad (s) − h rad (0)) s≥0 has the law of (B 2s +αs) s≥0 , where B is a standard Brownian motion. Moreover, by scale invariance of h GFF circ , the projection h circ of h onto H 2 (H) is equal in law to h GFF circ . Now, let M = sup s≥0 h rad (s) − Q γ s, and let T be the time at which this maximum is achieved (these are both finite a.s. since α < Q γ by assumption). Then by scale invariance of h GFF circ , if ψ T : H → H is the map z → e −T z, the field h := h • ψ T − M restricted to D + has the same law as h restricted to D + .
From here we can conclude, by observing that the law of h is absolutely continuous with respect to that of a Neumann GFF in H. Therefore, since all that is done to get from h to h is to re-centre around a random point, rescale by a random amount, and subtract a random constant, Lemma 2.9 implies that ν γ h,ε converges in probability as a measure on [−1, 1]. By the previous paragraph, the same thing then holds for h.
Remark 2.11
The measures ν γ h , ν h defined in Lemmas 2.9 and 2.10 are a.s. atomless and give strictly positive mass to every interval of strictly positive length a.s. (see, for example, [14,12]).
The following lemma will be important when we construct the critical quantum zipper by taking a limit of subcritical quantum zippers. Proof. This was shown in [2, §4.1.1-2] when h is either one of the fields in the statement of Lemma 2.9. It extends to the case when h is a (2, 1)-quantum wedge by the same proof as for Lemma 2.9 (using that it holds for the Neumann boundary condition GFF and then re-centring the field around a ν 1 h -typical point).
Schramm-Loewner evolutions
We assume the reader is familiar with the basic theory of Schramm-Loewner evolutions (SLE): for an introduction, see e.g. [19,18]. In this section we simply fix some notation and discuss a few points that will be relevant later on.
In this article, we will consider chordal SLE κ with κ ∈ (0, 4]. SLE κ in H from 0 to ∞ is defined to be the Loewner evolution in H with random driving function (W t ) t≥0 = ( √ κB t ) t≥0 , where B is a standard Brownian motion. When κ ∈ (0, 4] an SLE κ is a.s. a simple curve that does not touch the real line. We usually parametrise an SLE κ curve η by half-plane capacity; that is, we choose the parametrisation of η such that for every t > 0, the unique conformal map g t : H \ η([0, t]) → H with g t (z) = z + a t /z + O(|z| −2 ) as |z| → ∞ for some a t > 0, satisfies a t = 2t. We use the notation g t for the centred Loewner map g t = g t − W t , that sends η(t) to 0.
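For reference, the chordal Loewner equation implicit in this parametrisation (a standard fact, stated here with the usual normalisation) is:

```latex
\partial_t g_t(z) \;=\; \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z, \qquad W_t = \sqrt{\kappa}\, B_t ,
```

so that a_t = 2t under the half-plane capacity parametrisation, and the centred map g_t − W_t sends η(t) to 0.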
A curve η between boundary points a and b in a domain D is said to be an SLE κ from a to b if it is the image of an SLE κ in H from 0 to ∞, under a conformal map from H to D mapping 0 to a and ∞ to b.
Definition 2.14 (Reverse SLE_κ) A reverse Loewner evolution with continuous driving function W_t : [0, ∞) → R is a solution f̃(t, z) = f̃_t(z), for every z ∈ H, of the reverse Loewner differential equation ∂_t f̃_t(z) = −2/(f̃_t(z) − W_t), f̃_0(z) = z. In fact for every z ∈ H (see e.g. [18, Lemma 4.9]), a solution exists for all t ≥ 0, so that each f̃_t defines a map H → f̃_t(H).
A reverse SLE_κ flow is the reverse Loewner evolution (f̃_t)_{t≥0} driven by W_t = √κ B_t, where B is a standard Brownian motion. One can also consider the centred reverse SLE_κ flow, defined by f_t(z) = f̃_t(z + W_t) for all z, t. Then (f_t)_{t≥0} satisfies a simple SDE driven by W for all z ∈ H (one standard form is recalled in the sketch below). Moreover, there a.s. exists a continuous curve η such that for each t we have H \ f_t(H) = η([0, t]).
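A minimal numerical sketch of the centred reverse flow follows; it assumes the standard SDE df_t(z) = −2/f_t(z) dt − dW_t with W_t = √κ B_t (this convention, the Euler step size, the horizon and the test points are illustrative assumptions, not taken from the paper).

```python
import numpy as np

def centred_reverse_sle(kappa=4.0, T=1.0, dt=1e-4,
                        z0=(0.5 + 0.5j, -1.0 + 1.0j, 2.0 + 0.1j), seed=0):
    """Euler scheme for the (assumed) centred reverse flow df_t(z) = -2/f_t(z) dt - dW_t,
    f_0(z) = z, with W_t = sqrt(kappa) * B_t."""
    rng = np.random.default_rng(seed)
    f = np.array(z0, dtype=complex)
    n_steps = int(T / dt)
    for _ in range(n_steps):
        dW = np.sqrt(kappa * dt) * rng.standard_normal()
        f = f + dt * (-2.0 / f) - dW
    return f  # approximates f_T(z0) for one sample of the driving Brownian motion

if __name__ == "__main__":
    print(centred_reverse_sle())
```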
Due to the time-reversal property of Brownian motion, if (f_t)_{t≥0} is a centred reverse SLE_κ and (g_t)_{t≥0} is a centred forward SLE_κ, both parametrised by half-plane capacity, then for any fixed t ≥ 0, f_t^{-1} is equal in law to g_t. In other words, if t > 0 is fixed, η([0, t]) is a forward SLE_κ run until it has half-plane capacity t and η′([0, t]) is a reverse SLE_κ run until it has half-plane capacity t, then η([0, t]) is equal in law to η′([0, t]).
Let us now provide a notion of convergence for Loewner evolutions; this will be particularly important in our construction of the critical conformal welding. Note that when considering sequences (f n ) n∈N or (g n ) n∈N of Loewner evolutions, we move the time parameter t into a superscript.
Definition 2.15
Suppose that (f t n ) t≥0 for n ∈ N and (f t ) t≥0 are centred, reverse Loewner evolutions in H from 0 to ∞, parametrised by half-plane capacity. Let σ n : R → [0, ∞) be defined by setting σ n (x) = inf{t ≥ 0 : f t n (x) = 0} for each x ∈ R, n ∈ N, and define σ in the corresponding way for f . Then we say that f n → f in the Carathéodory+ topology if • for every T < ∞ and ε > 0, f n converges to f uniformly on [0, T ] × {H + iε}; and • σ n → σ uniformly on compacts of R.
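The swallowing times σ_n(x) in Definition 2.15 can be approximated numerically in the same spirit; the sketch below again assumes the centred reverse-flow SDE df = −(2/f) dt − dW used above, which may differ from the exact convention of the paper, and the tolerances are arbitrary.

```python
import numpy as np

def swallowing_time(x, kappa=4.0, dt=1e-4, t_max=10.0, seed=0):
    """Approximate sigma(x) = inf{t >= 0 : f_t(x) = 0} for a boundary point x in R,
    by Euler steps of the (assumed) centred reverse-flow SDE df = -2/f dt - dW."""
    if x == 0.0:
        return 0.0
    rng = np.random.default_rng(seed)
    sign = 1.0 if x > 0 else -1.0
    f, t = float(x), 0.0
    while t < t_max:
        f += dt * (-2.0 / f) - np.sqrt(kappa * dt) * rng.standard_normal()
        t += dt
        if sign * f <= 0.0:   # the point has reached (or crossed) the origin
            return t
    return float("inf")       # not swallowed within the time horizon

if __name__ == "__main__":
    print([swallowing_time(x) for x in (0.1, 0.5, 1.0)])
```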
Remark 2.16
Note that this is stronger than the usual notion of Carathéodory convergence for Loewner evolutions. For forward Loewner evolutions, Carathéodory convergence is characterised by the requirement that, if g n , g are the flows in question, we have g −1 n → g −1 uniformly on [0, T ] × {H + iε} for every T, ε > 0 (see [19, §4.7]). The motivation for working with this stronger topology should be clear from the nature of the conformal welding problem that we are considering.
In the sequel we make the following slight abuse of notation. Suppose we have (η n ) n∈N and η, a collection of simple, continuous, transient curves starting from 0 in H. Then we will say that η n → η in the Carathéodory topology, if the corresponding forward (half-plane capacity parametrised) Loewner evolutions converge in the Carathéodory sense.
The convergence results that will be important in this article are the following.
Lemma 2.17
Suppose that κ n ↑ 4 as n → ∞, that η n has the law of an SLE κn curve in H from 0 to ∞ for each n ∈ N, and that η has the law of an SLE 4 in H from 0 to ∞. Then η n → η in distribution as n → ∞, with respect to the Carathéodory topology.
where σ * n (x) := σ n ( √ κn 2 x). We will first show that σ * n → σ uniformly a.s. on compacts of time. Observe that the coupled equations (2.8) imply that for any fixed x ∈ R, σ * n (x) is a.s. increasing in n and bounded above by σ(x), so has some a.s. limit σ * (x) ≤ σ(x). In fact, it holds that σ * (x) = σ(x) a.s. To see this, without loss of generality assume that x ≥ 0 and suppose for contradiction that σ(x) > σ * (x). This means that for some ε > 0 we have f t (x) ≥ ε for all t ≤ σ * (x). Define σ ε n (x) to be the first time that h t n (x) ≤ ε/2 for each n, so that: • σ ε n (x) ≤ σ * n (x) ≤ σ * (x) for all n; and • h t n (x), f t (x) ≥ ε/2 for all t ≤ σ ε n (x) and all n. Then (2.8), plus Grönwall's inequality applied to the function h t n −f t , implies that |h σ ε n (x) n (x)−f σ ε n (x) (x)| → 0 as n → ∞. This is a contradiction, since the first term in the difference is equal to ε/2 by definition, and the second should always be greater than ε.
For any K > 0, this argument then gives the existence of a probability one event Ω 0 , on which we have σ * n (q) → σ(q) for all q ∈ Q ∩ [0, K + 1]. Since σ and (σ n ) n∈N are defined from reverse SLE κ curves, we may also assume that σ and (σ n ) n∈N are continuous on Ω 0 . So now, suppose we are working on Ω 0 , and take any x ∈ [0, K]. Let q − k ↑ x and q + k ↓ x with q ± k ∈ Q ∩ [0, K + 1] for every k, so that σ * n (q − k ) ≤ σ * n (x) ≤ σ * n (q + k ) for every n, and σ * n (q − k ) ↑ σ(q − k ), σ * n (q + k ) ↑ σ(q + k ) as n → ∞ for every k. This means that σ * n (x) is a bounded sequence, and any converging subsequence has limit lying between σ(q − k ) and σ(q + k ) for every k. Since σ is continuous, this implies that any such subsequential limit must be equal to σ(x), and so in fact, it must be that σ * n (x) → σ(x). To summarise, on this event Ω 0 of probability one, we have that: σ * n → σ pointwise on [0, K]; σ * n (x) is increasing in n for every x ∈ [0, K]; and the functions σ and (σ * n ) n∈N are continuous. These are exactly the conditions of Dini's theorem, and so we may deduce that σ * n → σ uniformly on [0, K] a.s. To finish the proof, it is enough to show that for K arbitrary, the quantity sup x∈[0,K ] |σ n (x) − σ(x)| converges to 0 a.s. as n → ∞. Suppose without loss of generality that κ n ≥ 2 for all n. Then setting K = 2K in the previous paragraph, one deduces the existence of a probability one event Ω 1 , on which sup y∈[0,2K ] |σ * n (y) − σ(y)| → 0 as n → ∞ and σ is continuous. Then we have and on Ω 1 , the final expression goes to 0. This completes the proof.
3 The (2, 2)-wedge via "zooming in" at a quantum-typical point
The main goal of this section is to prove Proposition 3.1 below. This proposition illustrates why the (2, 2)-quantum wedge is a particularly natural quantum surface, and will also be important in our proof of Theorem 1.2. Before we state this proposition, we briefly define the relevant notion of convergence for γ-LQG surfaces. Let S_n for n ∈ N and S be doubly-marked γ-quantum surfaces with the topology of H. We say that S_n converges to S in the sense of doubly-marked γ-quantum surfaces if we can find parametrisations (D, h_n, a, b) and (D, h, a, b) of S_n and S, respectively, with D ⊂ C and a, b ∈ D, such that for any open and bounded U ⊂ D, h_n|_U converges to h|_U in H^{-1}(U).
Remark 3.2 Note that the above is a statement about the law of a quantum surface conditionally on several quantities. The same statement holds unconditionally, but we need the stronger statement for the proof of Theorem 1.2. Let us now briefly explain why. Theorem 1.2 says that when we cut a (2, 1)-quantum wedge with an independent SLE_4, the surfaces on either side are independent (2, 2)-quantum wedges. For the proof the idea is to make use of the stationary critical zipper, Theorem 1.5. This can be used (by "zipping down" the SLE_4 some amount of quantum length) to say that the laws of the two surfaces in question are the same as the laws of the surfaces lim_{C→∞} (H, h+C, X, ∞) and lim_{C→∞} (H, h + C, Y, ∞), where X, Y are two quantum-typical points at equal quantum distance to the right and left of 0. See Proposition 5.1 and the proof of Theorem 1.2 below. In particular X and Y depend on one another via the quantum boundary length measure. It is therefore important to know that the local convergence to a quantum wedge described in Proposition 3.1 holds even given this information.
We first prove a lemma that says, roughly speaking, that convergence of the type considered in Proposition 3.1 only depends on the local behaviour of the field h around the point z. This will be useful several places in what follows. ( D, h + C, z, z 0 ) converges in law to a (2, 2)-quantum wedge as C → ∞. 8 We remark that convergence of quantum surfaces is defined somewhat differently in [28] and [11] than in the current paper. In [28] one embeds the surfaces such that the field hn gives unit mass to the unit half-disk for all n, and the surfaces are said to converge if, restricted to any bounded subset of H, the area measures µ γ hn associated with the hn converge weakly to the area measure µ γ h associated with h. In [11] one embeds the surfaces with the unit circle embedding and requires that the fields hn converge as distributions to h. However, the exact notion of convergence considered does not play an important role in this paper, and the convergence results we prove also hold for the alternative notions of convergence considered in [28] and [11].
Proof. We may assume without loss of generality that h ∈ H −1 ( D) (rather than h ∈ H −1 loc ( D)) since the field of a (2, 2)-quantum wedge restricted to any bounded set is in H −1 ( D), so the considered fields must be in H −1 (U ) for some neighbourhood U around z in order for the assumed convergence to hold. We may also assume without loss of generality that D = H and z 0 = ∞. Consider a conformal map φ : H → D sending 0 → z and ∞ → z 0 . Without loss of generality, upon replacing φ by φ(c ·) for an appropriate c > 0, we may assume that φ (0) = 1. We only prove that (ii) implies (i), since the other direction can be verified by a similar argument.
Suppose that (ii) holds, and write h for a random element of H −1 (H), with the law of h(·+z) conditionally on (z, ν h ([a, z]), ν h ([z, b])). Then for every C > 1 there exists a random conformal map ψ C : H → H of the form w → r C w for r C > 0, such that h • ψ C + 2 log |ψ C | + C converges in law in H −1 (H) as C → ∞, to the field described in Definition 2.5. Note that r C → 0 as C → ∞ since when C → ∞ the measure assigned to any fixed boundary segment by h • ψ C + 2 log |ψ C | + C goes to infinity, while the measure assigned to (say) [−1, 1] by the field in Definition 2.5 is of order 1.
By the definition of convergence for doubly-marked 2-LQG surfaces, in order to prove (i) it is sufficient to show convergence of the following quantum surface to a (2, 2)-quantum wedge: where we note that the field depends only on the restriction of h to D). Equivalently, letting h wedge denote the field in Definition 2.5, it is sufficient to show the existence of maps ψ C : H → H of the same form as ψ C such that the convergence in law We will show that this in fact holds with ψ C = ψ C .
To do this, we set h C := h • ψ C + 2 log |ψ C | + C and rewrite the left-hand side of (3.1) as where we can immediately note (since φ (0) = 1, φ is continuous, and r C → 0) that the second term converges to 0 in distribution as C → ∞. Furthermore, h C is equal in distribution to h wedge + g C where as C → ∞, whenever (h wedge , φ C ) and (g C , φ C ) are coupled such that the marginal laws of h wedge , g C and φ C are as in the discussion above. Observe that φ C − z and its first derivatives converge to 0 in probability, uniformly on compact subsets of H ∪ R as C → ∞. Let F := {f ∈ H 0 (H) : f ∇ = 1} and recall that for an arbitrary g ∈ H −1 (H), its H −1 (H) norm is defined by To prove (ii), first note that for some functions ξ 1 , ξ 2 converging to 0 in probability, uniformly on compact sets as C → ∞. Therefore the inequality f • φ −1 C ∇ ≤ 2 f ∇ holds with probability converging to 1 as C → ∞, uniformly on F. We now get (ii), since g C H −1 (H) ⇒ 0 as C → ∞, and with probability converging to 1 as C → ∞.
We also have that for some functions ξ 1 , ξ 2 converging uniformly to zero in probability as C → ∞, and this therefore converges to 0 in probability as C → ∞, uniformly in f ∈ F. From this (i) follows since, uniformly in f ∈ F and as C → ∞, For z ∈ I and ε > 0, define the semi-disk B(z, ε) and ε z ∈ (0, 1] by Unless otherwise stated we assume throughout the section that I is bounded away from H\D and, to simplify notation slightly, that inf{ε z : z ∈ I} > 1.
(3.2)
Let h be a random generalised function with the law described in Proposition 3.1; in the sequel, we denote the law of h by P. For ε ∈ (0, ε z ) let h ε (z) denote the average of h on the semi-circle ∂ B(z, ε) ∩ H, and for β > 1 and ε ∈ (0, 1], define the measure d β h,ε on I by These measures played an important role in [12,13,22], and they are closely related to the derivative martingale for the branching random walk ( [9]). The key point is that d β h,ε is a good approximation to the measure ν h,ε from Lemma 2.9 when β is large. It is however more convenient to work with, since its total mass is uniformly integrable in ε (which is not the case for ν h,ε ). More precisely, we have the following. The version of Lemma 3.4 when the measures d β h,ε are defined in the bulk comes from [22], and the proof goes through in exactly the same way for the boundary measures (3.3). Lemma 3.5 is a consequence of [1].
By uniform integrability of d β h,ε (I), we have the following. Let us take a subsequence of ε along which and denote by P ∞ the law of the limiting pair. Note that the P ∞ marginal law of h must be equal to its P law (as in Proposition 3.1). Also write d β h for the P ∞ conditional law of d β given h, which is a measurable function of h by definition (although we will not need it, the proof of Lemma 3.10 below actually shows that this function does not depend on the chosen subsequence). In fact, it should be the case that under P ∞ , d β is measurable with respect to h (and so d β and d β h are equal a.s.). However, for us it suffices to simply work with d β h .
Remark 3.8 Observe that by Remark 3.6, on the event C β the convergence d β h,ε → ν h holds in probability as ε → 0 (i.e., along any subsequence). Thus d β = d β h = ν h on this event.
The following elementary lemma will be used in the proof of Proposition 3.1. It is straightforward to verify using Girsanov's theorem, the Markov property of Brownian motion, the reflection principle, and the fact that a 3-dimensional Bessel process started from a positive value is equal in law to a 1-dimensional Brownian motion started from that value and conditioned to stay positive. See, for example, [21,Example 3].
For t ≥ 0 let P t denote the probability measure for which the Radon-Nikodym derivative relative to P is proportional to M t . Define X u := −B u + γαu + β for u ≥ 0. Under P t , the process (X u ) u≤t has the following law.
(3.4) (i) Sample h according to Q, then sample z from d β h (normalised to be a probability measure), and set h = h| B(z,1) .
(ii) Sample z from I with density proportional to g relative to Lebesgue measure, and then set h = h circ + h rad , where h circ and h rad are independent, h circ has the law of the projection of h onto H 2 ( B(z, 1)), and h rad (x) = A − log |x−z| for a process (A s ) s≥0 such that: -A 0 has the law of h 1 (z), reweighted by (−h 1 (z) + β)e h1(z) 1 {h1(z)≤β} ; conditioned on A 0 , (A s ) s≥0 is equal in distribution to (−B 2s + 2s + β) s≥0 for (B s ) s≥0 a 3dimensional Bessel process started from −A 0 + β.
The proof of Lemma 3.10 goes via an argument in the style of [27].
Proof. Let Q ε be the law of h reweighted by d β h,ε (I). By Lemma 3.9 and the definition of d β h,ε , (i') and (ii') below give two equivalent procedures to sample a pair ( h, z) as in (3.4).
(i') Sample h according to Q ε , then sample z from d β h,ε (normalised to be a probability measure), and set h = h| B(z,1) .
(ii') Sample z from I with density proportional to g relative to Lebesgue measure, and then set h = h circ + h rad , where h circ and h rad are independent, h circ has the law of the projection of h onto H 2 ( B(z, 1)), and h rad (x) = A − log |x−z| for a process (A s ) s≥0 such that: -A 0 has the law of h 1 (z), reweighted by (−h 1 (z) + β)e h1(z) 1 {h1(z)≤β} ; conditioned on A 0 , (A s ) s∈[0,log ε −1 ] is equal in distribution to (−B 2s +2s+β) s∈[0,log ε −1 ] for (B s ) s≥0 a 3-dimensional Bessel process started from −A 0 + β; for (B s ) s≥0 a standard Brownian motion started from 0.
It is clear that the law in (ii') converges to the law in (ii) as ε → 0. Now we will argue that, along the subsequence that was used to define d β h , the law in (i') also converges to the law in (i). Let F be a continuous bounded functional on H −1 loc (H) and let A ⊂ I be a Borel set. By uniform integrability of d β h,ε , along the considered subsequence, , where we slightly abuse notation and also use P, P ∞ to denote expectation relative to the probability measures P, P ∞ . Since the left-hand side is equal to the expectation of F (h)1 {z∈A} for (h, z) sampled as in (i') and the right-hand side is equal to the same expectation for (h, z) sampled as in (i), we can conclude that the law in (i') converges to the law in (i). Clearly the equivalence of (i') and (ii') for every ε, together with the convergence (i') ⇒ (i) and (ii') ⇒ (ii) implies the equivalence of (i) and (ii). Therefore h circ and h GFF circ | B can be coupled together so they differ by a random function which extends continuously to B ∩ R. In particular, h circ and h GFF circ | B can be coupled so that h circ (c·) − h GFF circ | B (c·) converges a.s. to a random constant as c → 0. It is therefore sufficient to show that if h GFF circ is independent of h rad then (H, h GFF circ + h rad + C, z, ∞) converges in law to a (2, 2)-quantum wedge as C → ∞.
By Lemma 3.10, h rad can be coupled together with A in (ii) of that lemma such that h rad (x) = A − log |x−z| . Recall that A can be coupled together with a 3-dimensional Bessel process (B s ) s≥0 started from −A 0 + β such that A s = −B 2s + 2s + β. For C > 1 define Note that (B t+2T 1 C ) t≥0 has the law of a Bessel process started from C + β. By [30,Theorem 3.5], θ has the law of a uniform random variable on [0, C + β], and, conditioned on θ, (i) the process (B s+2T 3 C − (C + β)) s≥0 has the law of a Bessel process started from 0, and has the law of a Brownian motion started from C + β and stopped at the first time it reaches θ. It follows that as C → ∞ the process (B s+2T 3 C − (C + β)) s∈R converges in law to the negative of the process considered in Remark 2.7 on any compact interval. Therefore (−B s+2T 3 C + (C + β) + 2s) s∈R converges in law to the process (h rad (e −s )) s∈R in Definition 2.3, which concludes the proof. Proof. First we will argue that d β h is atomless a.s. Notice that d β h,ε (dz) ≤ ν h,ε (dz) + βε e hε(z) dz (with equality on the event C β ; see Remark 3.6). Since βε e hε(z) converges a.s. to 0 and ν h,ε (dz) converges a.s. to the non-atomic measure ν h as ε → 0, this implies that d β h is atomless a.s. Now observe that the proof of Lemma 3.11 above carries through just as before if we replace the 1 on the right side of (3.2) with some other constant r ∈ (0, 1). Then we see that Lemma 3.11 also holds if I is not bounded away from H \ D, since any interval contained in ∂D ∩ R can be approximated arbitrarily well by an interval satisfying (3.2) for some r ∈ (0, 1). This implies, since d β h is atomless, that the point z in the former case converges in total variation distance to the point z in the latter case when r → 0.
From Lemma 3.11 (without the assumption that I is bounded away from H \ D), and by proceeding exactly as in the proof of [28,Proposition 5.5], we get that Lemma 3.11 also holds if we condition on Lemma 3.13 Let (X n , Y n ) for n ∈ N and (X, Y ) be random variables such that the vectors (X n , Y n ) converge in total variation distance to (X, Y ) as n → ∞. Assume Y n , Y are vectors in R N for some N ∈ N, while X n , X take values in some Borel space (S, S). Then there exists a set A ⊂ R N such that P[Y ∈ A] = 1, and such that for any a ∈ A the law of X n given Y n = a converges to the law of X given Y = a. 9 Proof. Let ε > 0. It is clearly sufficient to prove the lemma under the weaker requirement that A satisfies P[Y ∈ A] ≥ 1 − (2 · 12 N + 1)ε. For this, it suffices to show that for an arbitrary function F : S → {0, 1}, any a ∈ A, and all sufficiently large n, Choose n sufficiently large such that the total variation distance between (X n , Y n ) and (X, Y ) is smaller than ε 2 . We will work with such a fixed choice of n in the remainder of the proof, and will prove that (3.5) is satisfied. Choose δ > 0 sufficiently small such that for all a in a set A 0 ⊂ R N satisfying P[Y ∈ A 0 ] > 1 − ε/2 the following hold: z − a ∞ ≤ δ}. Say that a point a ∈ K is bad if P[Y ∈ N (a)] = 0, P[Y n ∈ N (a)] = 0, or the total variation distance between (X n , Y n ) and (X, Y ) conditioned on Y n ∈ N (a) and Y ∈ N (a), respectively, is at least ε. A point in K which is not bad is good. Let B ⊂ K denote the set of bad points. We will prove that P[Y ∈ B] ≤ 2 · 12 N ε. (3.7) Taking A = K \ B and applying (3.6) then completes the proof. Choose points a 1 , . . . , a M ∈ B for some M ∈ N using the following rule. Given a 1 , . . . , a m let a m+1 ∈ B be chosen such that N (a m+1 ) is disjoint from N (a 1 ), . . . , N (a m ), and such that P[Y ∈ N (a m+1 )] is maximized. Let M be the smallest m such that there is no possible way to choose a m+1 (i.e., all points in B have · ∞ distance less than δ from N (a 1 ) ∪ · · · ∪ N (a m )).
The idea for the proof of (3.7) is that m has to be small because the total variation distance between (X n , Y n ) and (X, Y ) is assumed to be small, and that by the definition of the {N (a i )} i , P(Y ∈ B) is of order O(m).
Proceeding with the details, since N (a 1 ), . . . , N (a M ) are disjoint, we can bound the total variation distance between (X n , Y n ) and (X, Y ) from below by summing the contribution from each set N (a m ). More precisely, for arbitrary Borel (not necessarily probability) measures σ 1 , σ 2 defined on R N , define and note that this defines a metric on the set of Borel measures on R N . Let σ n (resp. σ) denote the law of (X n , Y n ) (resp. (X, Y )), and let σ m n = σ n | N (am) and σ m = σ| N (am) . For an arbitrary measure σ let | σ| denote its total mass. By the triangle inequality and since d tv (σ m n , |σ m | |σ m Using m = M m=1 |σ m | we now get which gives m ≤ 2ε. Using this, we get (3.7) if we can prove the following , 1), h + C, z, z + i) converges in law to a (2, 2)-quantum wedge as C → ∞. Figure 3: Illustration of objects defined in Section 4. Our strategy is to construct the critical quantum zipper (lower row) by taking the n → ∞ limit of the subcritical quantum zipper (upper row). The convergence in law indicated by the two vertical arrows is joint as n → ∞.
and has the law of an SLE 4 from 0 to ∞, then the new field/curve pair defined by f (h) and η ∪ f (η) has the same law as (h, η).
The strategy is to use the fact that such an operation exists [28] in the subcritical case γ ∈ (0, 2), i.e., when the SLE 4 is replaced by an SLE γ 2 , the (2, 1)-quantum wedge is replaced by a (γ, γ − 2/γ)-quantum wedge, and critical boundary length is replaced by γ-LQG boundary length. See Figure 3 for an illustration. We will show that a number of limits can be taken as γ ↑ 2, using, for example, the fact that critical LQG measures can be obtained as a limit of subcritical measures (Lemma 2.13). Combining these convergence statements provides the existence of the welding operation. Making use of Theorem 1.3, we can prove that the conformal map f is measurable with respect to η.
Similarly, h will always denote an element of H −1 loc (H) with the law of a (2, 1)-quantum wedge (in the last exit parameterisation) and ν h =: ν will be the critical boundary measure associated to h (as in Lemma 2.10). For q ∈ Q we define (X(q), Y (q)) corresponding to ν as in (4.1), so that (X(q), Y (q)) ∈ [0, ∞) × (−∞, 0] and ν([0, X(q)]) = ν([Y (q), 0]) = q. Lemma 4.1 There exists a coupling of ((h n ) n∈N , h) such that a.s. as n → ∞, This is with respect to the topology of H −1 loc (H) in the first coordinate, and the local weak topology for measures on R in the second.
Proof. By the Skorokhod representation theorem, it is sufficient to show that (h n , ν n ) converges in distribution to (h, ν) as n → ∞. The idea is that away from 0 and the unit circle, h n is arbitrarily close to h in total variation distance for large n, so we can essentially just apply Lemma 2.13 in these regions. We then deal with neighbourhoods of 0 and the unit circle separately; showing that as the size of the neighbourhoods goes to 0 the behaviour of (h n , ν n ) restricted to these neighbourhoods can be neglected (uniformly in n).
To carry out this idea, when x ∈ R and r > s > 0 we write B(x, r) := {w ∈ H : |w − x| < r} and A(x, r, s) := B(x, r) \ B(x, s).
First, we observe that for any (r i ) 4 i=1 such that r 4 > r 3 > 1 > r 2 > r 1 > 0 there exists a sequence of couplings (h n , h), such that P(h n = h on A(0, r 4 , r 3 ) ∪ A(0, r 2 , r 1 )) tends to 1 as n → ∞. Indeed, since we can couple the fields to have the same circular part, this just follows because setting: • L n to be the law of a double sided Brownian motion plus drift (2 − 2/γ n ), restricted to some interval [−M, M ] and conditioned to stay below the curve s → Q γn s for all positive time; and • L to be the law of a double sided Brownian motion with drift 1, restricted to [−M, M ] and conditioned to stay below the curve s → 2s for all positive time, then L n → L with respect to total variation distance as n → ∞. Hence, by Lemma 2.13, it suffices to show that in probability (equivalently, in distribution) as δ → 0, and for any η > 0, uniformly in n, as δ → 0. The first statement of (4.2) holds because ν is a.s. atomless (Remark 2.11). Moreover, (4.4) and the last two statements of (4.2) follow by decomposing h and (h n ) n∈N into their projections onto H 1 (H) and H 2 (H). Indeed, the projections onto H 2 (H) all have the same law -that of h GFF circ -and it can be verified by a direct computation that the H −1 ( B(0, δ) ∪ A(0, 1 + δ, 1 − δ)) norm of h GFF circ goes to 0 in probability as δ → 0. The projections onto H 1 (H), when restricted to B(0, 1 + δ), can also all be stochastically dominated (for example) by the random function 1 {z∈ A(0,1+δ,1)} B 2 log |z| − 2 log(|z|), where B is a standard Brownian motion. One can easily check that this function has L 2 ( B(0, δ) ∪ A(0, 1 + δ, 1 − δ)) norm going to 0 in probability as δ → 0, which is more than we need. For (4.3), first fix η > 0. We will deal with the neighbourhood [−δ, δ] of 0, and the intervals [±1−δ, ±1+δ] around ±1, separately. To show that P(ν n ([−δ, δ] > η) → 0 uniformly in n as δ → 0, we observe (as in the proof of Lemma 2.10) that if h is a Neumann GFF with additive constant fixed so that its average on ∂ B(0, 1) is equal to 0, then for every n there exists a random constant c n such that Moreover, the probability that c n is greater than M goes to 0 uniformly in n as M → ∞, since, by the proof of Lemma 2.8, c n has the law of the exponential of minus the last time that a Brownian motion with negative drift B 2t − (Q γn − γ n + 2/γ n )t is greater than or equal to 0. Hence it suffices to show that in probability (or, equivalently, in distribution) as n → ∞. However, it follows from [2, Lemmas 3.1 and 3.2] (with straightforward adaptation to the boundary case) that the integral in (4.5) has (1 − γ n /2) th moment converging to 0 with δ, uniformly in n. Since (4 − 2γ n ) (1−γn/2) → 1 as n → ∞, the result then follows by Markov's inequality.
To show that P(ν n ([±1 − δ, ±1 + δ] > η) → 0 uniformly in n as δ → 0, we first note that (by Lemma 2.13) this would hold if the fields h n were all replaced by a Neumann GFF in H, with additive constant fixed so that its average on ∂ B(0, 1) is 0. Then, since • such a Neumann GFF can be written as the sum of h GFF circ plus a random function whose supremum in B(±1, 2δ) goes to 0 as δ → 0, and • h n can be written as the sum h GFF circ + F n where P sup B(±1,2δ) F n ≥ a → 0 as δ → 0 uniformly in n for any fixed a > 0, the result follows.
Lemma 4.2 There exists a coupling of ((h n , η n ) n∈N , h, η) such that: • (h n , η n ) for each n has the marginal law of a (γ n , γ n −2/γ n )-quantum wedge and an independent SLE κn from 0 to ∞ in H (κ n = γ 2 n ); • (h, η) has the marginal law of a (2, 1)-quantum wedge and an independent SLE 4 from 0 to ∞ in H; • (h n , η n , (X n (q)) q∈Q , (Y n (q)) q∈Q ) converges to (h, η, (X(q)) q∈Q , (Y (q)) q∈Q ) in probability as n → ∞, with respect to H −1 loc (H) convergence in the first coordinate, Carathéodory convergence in the second coordinate, and the product topology on R Q in the third and fourth coordinates. 10 Proof. First, by Lemma 2.17, it is possible to couple a sequence of SLE κn curves and an SLE 4 such that one has convergence with respect to the Carathéodory topology in probability as n → ∞. Next, since the curves can be sampled independently of everything else in the statement, it is enough to show that with the coupling of Lemma 4.1, we have (X n (q)) q∈Q converging to (X(q)) q∈Q and (Y n (q)) q∈Q converging to (Y (q)) q∈Q in probability as n → ∞ (with respect to the product topology on R Q ). We will show the statement for X; the corresponding statement for Y follows by the same argument.
Let for all x, and note that a.s. by Remark 2.11, both are continuous and strictly increasing. This means that F n converges pointwise to F a.s. as n → ∞, and hence also that the generalised inverses F −1 n (s) = inf{x ∈ [0, ∞) : F n (x) ≥ s} (4.6) converge pointwise to the generalised inverse F −1 (defined analogously) a.s. as k → ∞. In particular, this implies that (X n (q)) q∈Q converges to (X(q)) q∈Q a.s. as n → ∞, with respect to the product topology on R Q .
For what follows, we need to recall the definition of Sheffield's capacity quantum zipper [28] for γ ∈ (0, 2).
Definition 4.3
Let γ ∈ (0, 2) and (H, h 0 , 0, ∞) be an equivalence class representative of a (γ, γ − 2/γ)quantum wedge. Let κ = γ 2 , and let η 0 be an independent SLE κ in H from 0 to ∞. Then the capacity quantum zipper is a centered, reverse Loewner flow ( f t ) t≥0 coupled with ( h 0 , η 0 ), such that: • ( f t ) t≥0 is measurable with respect to h 0 ; • the marginal law of ( f t ) t≥0 is a centered, reverse SLE κ flow parameterised by half-plane capacity; • for any t and x ∈ η t \ f t ( η 0 ), denoting by η L x and η R x the left-and right-hand sides of η up to x, the ν γ h0 length of the intervals f −1 t (η L x ) and f −1 t (η R x ) agree. This induces a dynamic ( h t , η t ) : on ( h 0 , η 0 ) which is stationary when observed at quantum typical times. More precisely, for any l ≥ 0, if X l = inf{x ≥ 0 : ν γ h0 (0, x) = l} and T l = inf{t ≥ 0 : f t (X l ) = 0}, then ( h T l , η T l ) is equal in distribution, as a quantum surface, to ( h 0 , η 0 ). 11 This flow thus represents a dynamic welding of [0, ∞) to (−∞, 0], according to the γ-LQG boundary length. It is essentially the same as the dynamic defined in the (subcritical version of) Theorem 1.5, but with a different time parameterisation. Now, assume that ((h n , η n ) n∈N , h, η) are coupled together as in Lemma 4.2 and that (X n (q), Y n (q)) n∈N,q∈Q and (X(q), Y (q)) q∈Q are defined as in (4.1) with respect to (h n ) n∈N and h, respectively. For each n ∈ N, let (f t n ) t≥0 be the centered reverse flow in Definition 4.3, when ( h 0 , η 0 ) are replaced by (h n , η n ). For q ∈ Q we let τ n (q) be the time at which (X n (q), Y n (q)) are mapped to 0 by f n . For t ≥ 0, let h t n = f t n (h n ) := h n • (f t n ) −1 + Q γn log |((f t n ) −1 ) | and η t n = f t n (η n ). As in footnote 4, although h t n is only defined on the slit domain H \ f t n (η n ) we can view it as an element of H −1 loc (H). Then by the properties described in Definition 4.3, it follows that for any q ∈ Q: • η τn(q) n and h τn(q) n are independent; • η τn(q) n has the law of an SLE κn from 0 to ∞; and • h τn(q) n is (equivalent as a doubly-marked γ n -quantum surface to) a (γ n , γ n − 2/γ n )-quantum wedge. We also define r n := f τn n (X n (2)) for each n, where the definition of X n (q) for q ∈ Q is extended in the obvious way to X n (2). Let ψ n denote the scaling map z → r n z on H. | 2018-12-31T14:27:44.000Z | 2018-12-31T00:00:00.000 | {
"year": 2018,
"sha1": "6ff0abef83c9177cbed95fb616d77bd79eba2bd2",
"oa_license": null,
"oa_url": "http://dro.dur.ac.uk/33386/2/33386VoR.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "60a52b13d7818f70bed4dec9280ff8a69d585190",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
16127983 | pes2o/s2orc | v3-fos-license | Surgical management in advanced stages of retinopathy of prematurity; our experience.
Retinopathy of prematurity is a potentially blinding condition. In this article we describe the surgical management of advanced stages of the disease (stages 4 and 5). Indications, options and alternative techniques are described through a review of the literature and our personal experience.
INTRODUCTION
Retinopathy of prematurity (ROP) is a preventable but potentially blinding condition. 1 Premature infants have been screened at our center since 1997 and unfortunately, the overall incidence of ROP and that of advanced stages (4 and 5) of the disease are increasing. [2][3][4] Despite improvements in screening protocols, peripheral laser ablation, microsurgical techniques and instrumentation, ROP still progresses to retinal detachment (RD) and blindness in 15% to 30% of all involved eyes. 5,6 Advanced stages of ROP (4 and 5) usually require surgical intervention. 7 Premature infants with ROP need timely treatment because of the critical period of visual development. 8 Several procedures have been described to treat ROP-associated RDs, including open-sky vitrectomy, scleral buckling, closed vitrectomy and lensectomy with or without scleral buckling, and more recently, lens sparing vitrectomy without scleral buckling. 9 Herein we present a review on the options and techniques for surgical management of advanced stages of ROP and describe our experience in this regard.
SCLERAL BUCKLING
Scleral buckling is a well established technique for repair of RD and is also used for ROP related RDs. It mechanically reattaches the retina by counteracting the forces exerting traction on the retina. 10 Although controversy exists on the choice of scleral buckling versus vitrectomy [11][12][13][14][15][16][17][18] for stage 4 ROP, scleral buckling is still used particularly when traction exists anterior to the equator. 19 The technique of scleral buckling in our patients involves placement of an encircling exoplant (2.5 mm # 240 solid silicone band) as close to the ridge as possible. The band is secured with intrascleral 5-0 polyester suture and tightened to achieve moderate-height buckle effect. Since subretinal fluid drainage is not routinely performed, anterior chamber paracentesis is usually needed to soften the eye.
Encircling buckles induce myopic refractive changes due to axial elongation and forward shift of the crystalline lens. 20 The average induced myopia is -2.75 D in adults, 21 but much greater in infants, ranging from -9 to -11.0 D, 20,22 which is potentially amblyogenic. Division or removal of the exoplant is recommended when stable retinal reattachment is achieved 22 to reduce the risk of intrusion and promote eye growth. 11,16 We usually remove the exoplant after 6 months if retinal reattachment is stable or in cases of anisometropia; however, it seems logical to delay exoplant removal in unstable situations. Concerns regarding amblyopia with scleral buckling have led to a preference for vitrectomy in stage 4 ROP. 13,23
VITRECTOMY
Vitrectomy in advanced stages (4 or 5) of ROP may confer advantages; however, it poses specific challenges to the surgeon. 24,25 Vitrectomy may be performed in a closed or open system setting; the former is used more commonly. Open-sky vitrectomy allows direct access to posterior segment structures after removal of the cornea and lens. In this technique, the cornea is trephined and the lens is removed intracapsularly, allowing sharp dissection of retrolental membranes. During surgery, an ophthalmic viscosurgical device such as Healon is used to improve visualization, separate the retina and allow posterior dissection. Open-sky vitrectomy offers advantages of bimanual dissection through a large anterior incision and the possibility of surgery in eyes with cloudy corneas. 26-28 Nowadays, closed system surgery is preferred; a three-port system permits the surgeon to switch hands in order to perform anterior dissection without the risk of transient hypotony. 29 Vitrectomy can be performed with the aid of a contact lens, binocular indirect ophthalmomicroscope system, or by direct visualization using the operating microscope.
Vitrectomy in neonates differs from adults in several aspects: 30 (1) the entry site should be through the pars plicata rather than pars plana; 31,32 (2) the lens is relatively larger; (3) posterior vitreous detachment (PVD) cannot be achieved easily; (4) breaks are extremely poorly tolerated and are rarely repaired successfully; (5) cyclitic and pupillary membranes are common; (6) additional causes for structural failure include iridoretinal and retinal-retinal adhesions; (7) subretinal hemorrhage or exudation and retinal pigment epithelium degeneration may preclude favorable functional outcomes; and (8) maximal functional development may take years to achieve.
At the onset of operation, the patient undergoes full eye examination. If there is enough space to enter the globe through the pars plicata together with a free retrolental space, pars plicata vitrectomy is performed; otherwise limbal vitrectomy with lensectomy is preferred. Mechanical induction of PVD is not possible and only membranes causing traction should be released and segmented as far as possible using a vitrector (20-, 23-or 25-gauge) or scissors (horizontal or vertical). Extensive membranectomy is not possible and not recommended; we cut or remove membranes as much as possible and try to avoid iatrogenic breaks. We prefer a 23-gauge vitrectomy system because of its appropriate size and possibility of releasing peripheral tractions (Fig. 1). It may be difficult to remove all peripheral tractions by 25-gauge vitrectomy because of instrument flexibility which limits appropriate maneuvers. Once the tractions are relieved, retinal reattachment is achieved and intraocular tamponade is not routinely used. The patients are examined one day after surgery, weekly for 3 weeks, monthly up to 6 months, and every 6 months thereafter. In the case of partial reattachment, reoperations may be considered when persistent tractions exist. In our recent experience, the operation is facilitated by the use of autologous plasmin (unpublished data). In future, enzyme assisted vitrectomy may become a basic component of ROP surgery. For preparation of autologous plasmin, blood is centrifuged at 4,000 rounds per minute for 15 minutes to obtain complete sedimentation; 1.5 ml of plasma is aspirated under aseptic conditions and transferred into a vial of streptokinase (750,000 IU) already incubated at 37°C for 15 minutes. The vial is shaken gently for 3 to 5 minutes and the solution is incubated at 37°C for 10 more minutes; 0.2 ml of this preparation is used for intravitreal injection 15 minutes prior to surgery during induction of anesthesia. 33
DISCUSSION
Advantages of vitrectomy in stage 4 ROP include removing endogenous vasodilators and angiogenic factors such as vascular endothelial growth factor (VEGF) from the vitreous cavity in addition to releasing anteroposterior tractions. 19 Both scleral buckling 11,16 and vitrectomy 15,34 have been used to manage advanced ROP; in the past scleral buckling was the treatment of choice and vitrectomy was considered only if buckling had failed. Nowadays, primary vitrectomy is preferred for stage 4 ROP. 5,35 Anatomic success rates vary depending on surgical technique (vitrectomy versus buckling) and stage of detachment (4A or 4B). [16][17][18] Important drawbacks to scleral buckling which adversely affect the functional outcomes include lower anatomical success rates (60%-75%), 11,12,[15][16][17] need for a secondary procedure to divide or remove the encircling element, 22 and induction of severe myopia and anisometropia with the resulting risk of amblyopia. Outcomes of vitrectomy in stage 4 ROP are more favorable 13,18,19,29,36,37 (90% success with mean followup of 1 year) as compared to scleral buckling procedures. [12][13][14]29,36,37 Although scleral buckling procedures may provide a greater anatomic success rate as compared to untreated eyes, studies have revealed that lens-sparing vitrectomy (LSV) may be superior to scleral buckling in terms of anatomical and functional success rates. [12][13][14][15] Advantages of vitrectomy for advanced ROP with RD consist of addressing the traction directly without need for a second procedure, avoiding compression of anterior segment structures and anatomic distortion, and less induced myopia. 38 In stage 4 ROP, initial LSV achieves retinal reattachment more often than scleral buckling with anatomic success rates of 82-97%. 9,18,36,39 Vitreoretinal surgery is usually employed for treatment of stage 5 ROP. 25 Despite relatively acceptable rates of retinal reattachment, functional outcomes have been poor. 15,34,[40][41][42][43] However, taking into account that untreated stage 5 ROP ends in blindness, 43 vitreoretinal surgery is recommended.
Our understanding of the indications, timing and techniques for surgical management of ROP continues to evolve. 7 In our practice, vitrectomy is performed for stage 4 ROP and also for stage 5 in special situations. The choice of scleral buckling versus vitrectomy depends on the stage of ROP, retrolental involvement, vascular activity of the disease (existing plus disease or neovascularization) and presence of an exudative component. In stage 4 ROP, in case of anteroposterior traction and adequate retrolental space for releasing peripheral tractions without removing the lens, we prefer lens sparing vitrectomy; on the other hand with peripheral tractions we prefer scleral buckling. In stage 5 ROP, we routinely perform vitrectomy (Fig. 2) except in exudative forms, for which scleral buckling and intravitreal medications (anti-VEGF with or without corticosteroids) are preferred. Scleral buckling together with vitrectomy may be used for advanced cases of stage 5 ROP. Our algorithm for treatment of advanced stages of ROP is presented in Figure 3. In an unpublished study, we used combined procedures (scleral buckling and 25-gauge vitrectomy) in 21 eyes with stage 5 ROP. Retinal reattachment was achieved completely in 52.3% (success rate) and partially in 23.8% of eyes. Redetachment occurred in three eyes during one year. Success was correlated with disease activity and preoperative treatment (laser therapy or intravitreal bevacizumab). Final reattachment rate was 38%, which is consistent with some previous reports. 25 Because of substantial heterogeneity in patient population and severity of the disease, it is difficult to compare the findings of different studies. Most studies include different stages of ROP and utilize different surgical techniques, which may explain differences in success rates.
Dilated vessels in the iris and retina indicate disease activity. We usually use intravitreal bevacizumab in vascularly active disease (unpublished data). Anti-VEGF pharmacotherapy may help as preoperative adjuvant treatment. Vascularly active disease is a significant risk factor for failure of retinal reattachment especially in stage 5 ROP and portends a poor prognosis. 44 The poor visual outcome after lensectomy-vitrectomy procedures for RD due to ROP indicates that emphasis should be placed on prevention of RD in premature infants. [40][41][42] Despite relatively acceptable rates of retinal reattachment in stage 5 ROP, functional outcome has been poor. 15,23,34,[40][41][42] Enzyme-assisted vitrectomy facilitates separation of the internal limiting membrane and posterior hyaloid membranes. 45 Tsukahara et al 46 reported complete reattachment of the posterior pole in all six consecutively treated eyes with stage 5 ROP.
Similar to Lakhanpal et al, 29 we manage ROP-related RDs through (1) observation, (2) scleral buckling alone, (3) vitrectomy alone, (4) vitrectomy plus scleral buckling, and (5) vitrectomy with lensectomy. The choice of technique depends on the location of the traction and the presence or absence of retina-lens apposition.
"year": 2009,
"sha1": "5ab231254a527d9c72746b8c7296c973e71a7030",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "5ab231254a527d9c72746b8c7296c973e71a7030",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
57375378 | pes2o/s2orc | v3-fos-license | Immune checkpoint inhibitors in MITF family translocation renal cell carcinomas and genetic correlates of exceptional responders
Background Microphthalmia Transcription Factor (MITF) family translocation renal cell carcinoma (tRCC) is a rare RCC subtype harboring TFE3/TFEB translocations. The prognosis in the metastatic (m) setting is poor. Programmed death ligand-1 expression was reported in 90% of cases, prompting us to analyze the benefit of immune checkpoint inhibitors (ICI) in this population. Patients and methods This multicenter retrospective study identified patients with MITF family mtRCC who had received an ICI in any of 12 referral centers in France or the USA. Response rate according to RECIST criteria, progression-free survival (PFS), and overall survival (OS) were analyzed. Genomic alterations associated with response were determined for 8 patients. Results Overall, 24 patients with metastatic disease who received an ICI as second or later line of treatment were identified. Nineteen (82.6%) of these patients had received a VEGFR inhibitor as first-line treatment, with a median PFS of 3 months (range, 1–22 months). The median PFS for patients during first ICI treatment was 2.5 months (range, 1–40 months); 4 patients experienced partial response (16.7%) and 3 (12.5%) had stable disease. Of the patients whose genomic alterations were analyzed, two patients with mutations in bromodomain-containing genes (PBRM1 and BRD8) had a clinical benefit. Resistant clones in a patient with exceptional response to ipilimumab showed loss of BRD8 mutations and increased mutational load driven by parallel evolution affecting 17 genes (median mutations per gene, 3), which were enriched mainly for O-glycan processing (29.4%, FDR = 9.7 × 10−6). Conclusions MITF family tRCC is an aggressive disease with responses to ICIs similar to those seen in clear-cell RCC. Mutations in bromodomain-containing genes might be associated with clinical benefit. The unexpected observation of parallel evolution of genes involved in O-glycosylation as a mechanism of resistance to ICI warrants exploration.
Introduction
Microphthalmia Transcription Factor (MiTF) family translocation renal cell carcinoma (tRCC) is a subtype of RCC characterized by chromosomal translocations involving TFE3 and TFEB transcription factor genes [1]. As tRCCs with TFE3 or TFEB mutations share clinical, histopathological and molecular features, the 2013 ISUP Vancouver classification grouped these entities as the "MiTF/TFE translocation carcinomas family" [2]. The frequency of adult TFE3 tRCC has been reported to range between 1 and 5% of all RCCs [3][4][5]. tRCC usually occurs in children, adolescents and young adults, with a high female predominance [3][4][5]. There are no approved therapies for metastatic tRCC, and effective therapy for this cancer remains an unmet medical need.
The current first-line standard of care for good-risk metastatic clear-cell RCC (ccRCC) is treatment with tyrosine kinase inhibitors (TKIs) targeting the vascular endothelial growth factor receptor (VEGFR) [6]. Conversely, the combination of ipilimumab and nivolumab is the standard of care for intermediate- and poor-risk disease [7]. While there is no standard of care for non-clear cell metastatic RCCs (referred to here as non-ccRCC), retrospective analyses indicate that VEGFR-targeted agents provide some efficacy in metastatic tRCC, with an objective response rate of 30% and a median progression-free survival (PFS) duration of 7.1-8.2 months [8,9].
Recently, virtual karyotyping of tRCC identified a subgroup with 17q gain characterized by activation of the cytotoxic T lymphocyte-associated protein 4 (CTLA4) pathway [10]. Another study exploring programmed death ligand 1 (PD-L1) expression in a wide range of non-ccRCC identified PD-L1 overexpression in tumor-infiltrating immune cells in 90% of tRCC cases [11]. Those studies prompted us to explore the efficacy of immune checkpoint inhibitors (ICIs) in this setting. Nivolumab, a programmed death 1 (PD-1) checkpoint inhibitor, was associated with longer overall survival (OS) than mTOR inhibitors in a phase III study involving previously treated patients with metastatic ccRCC and is now often used as second-line therapy [12]. Currently, data regarding the efficacy of ICIs in non-ccRCC are limited, and results of clinical trials are pending.
The purpose of this study is to determine the efficacy of ICIs in the treatment of tRCC and to correlate tumor genomic alterations with objective response. We performed a retrospective multicenter analysis of the outcomes of patients with tRCC treated with an ICI in 12 institutions in France and the USA. The efficacy of first-line TKI treatment was also analyzed.
Patients
Patients with tRCC were identified through searches of the patient databases of 12 institutions in France and the USA for the period from July 2011 to May 2017. Inclusion criteria included tRCC diagnosed by immunohistochemical analysis (IHC) and treatment with at least one ICI. A dedicated genitourinary pathologist at each of the participating institutions verified tRCC diagnoses. TFE3 expression was confirmed by IHC analysis in all cases. FISH confirmation was not a requirement in this study, but was available in the majority of cases. Cases that were tested but not confirmed by FISH were excluded. Clinical characteristics and treatment-related outcome data for ICIs (targeting PD-1, PD-L1 or CTLA4), administered alone or in combination with other agents, were retrospectively determined by individual chart review. We collected data concerning prior treatments, first metastasis, date of first treatment, toxic effects, date of progression and date of death or last follow-up contact. All patients' data were anonymized and de-identified prior to analysis. Patient data were collected in compliance with the IRB guidelines of each participating institution. Written informed consent was obtained from all patients for whom genomic testing was performed. All study protocols were performed in accordance with the ethical tenets of the Declaration of Helsinki.
Assessment of tumor response
Patients were monitored by their physician until the end of treatment. All treatments and responses, from diagnosis to death or loss to follow-up, were recorded. Tumor response and disease progression by RECIST 1.1 criteria were documented. Stable disease was defined as a stable RECIST response for more than 3 months. Clinical benefit was defined as in Miao et al. and included patients with partial response or stable disease lasting more than 6 months [13].
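To make the response definitions above concrete, the following minimal Python sketch (our illustration, not code from the study) encodes the clinical-benefit rule used here: a partial response, or stable disease lasting more than 6 months.

```python
# Hypothetical helper encoding the clinical-benefit rule described above.
def clinical_benefit(best_response: str, duration_months: float) -> bool:
    """Return True for partial response, or stable disease lasting > 6 months."""
    if best_response == "PR":
        return True
    return best_response == "SD" and duration_months > 6

print(clinical_benefit("SD", 8.0))   # True: stable disease lasting 8 months
print(clinical_benefit("SD", 4.0))   # False: stable disease but too short
print(clinical_benefit("PD", 12.0))  # False: progressive disease
```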
Genomic analysis
Targeted sequencing data on 410 cancer genes using MSK-IMPACT were collected on tumors from 4 cases, with a median coverage of 580x per case (range, 230-1141) [14]. Whole-exome sequencing was performed on another 4 tumors and matched normal adjacent tissues. Briefly, exomes were captured using Agilent SureSelect Human All Exon 50 Mb (Agilent Technologies, Santa Clara, CA, USA) according to the manufacturer's instructions. The technical details and mutation detection method were as previously described [15]. Median coverage obtained for tumor samples was ~100x. Mutational load was defined as the total number of somatic mutations obtained per whole-exome sequencing. To compare the mutational load of these tRCCs with mutational load in ccRCC, somatic mutations of ccRCC cases from The Cancer Genome Atlas (TCGA) were retrieved from a report on ccRCC published by TCGA [16].
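As a rough illustration of the cohort comparison described above (not the authors' code), the sketch below contrasts per-exome mutational load between a small tRCC cohort and a larger ccRCC reference set with a non-parametric test; the specific test used in the study is not stated, and the mutation counts shown are placeholders rather than the actual TCGA values.

```python
# Illustrative comparison of per-exome somatic mutation counts between cohorts.
import numpy as np
from scipy.stats import mannwhitneyu

trcc_load = [4, 12, 19, 30]                               # hypothetical tRCC per-exome totals
ccrcc_load = np.random.default_rng(0).poisson(55, 424)    # placeholder for 424 TCGA ccRCC cases

stat, p_value = mannwhitneyu(trcc_load, ccrcc_load, alternative="two-sided")
print(f"median tRCC load:  {np.median(trcc_load):.0f} mutations/exome")
print(f"median ccRCC load: {np.median(ccrcc_load):.0f} mutations/exome")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.2e}")
```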
Statistical analysis
Study endpoints were response rates according to RECIST criteria, PFS, and OS. The Kaplan-Meier method was used for survival analyses. PFS was measured from the date of initiation of ICI treatment to the time of progression at any site or death from any cause. All statistical analyses were done by using GraphPad Prism (GraphPad Software, La Jolla, CA, USA).
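For readers who want to reproduce this type of analysis programmatically, a minimal Kaplan-Meier sketch in Python using the lifelines package is shown below (the study itself used GraphPad Prism; the durations and event flags here are hypothetical).

```python
# Kaplan-Meier estimate of PFS from the start of ICI treatment (illustrative data).
from lifelines import KaplanMeierFitter

pfs_months = [1, 2, 2.5, 3, 3, 8, 9, 30, 40]  # hypothetical PFS durations in months
progressed = [1, 1, 1, 1, 1, 1, 1, 1, 0]      # 1 = progression or death observed, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations=pfs_months, event_observed=progressed, label="First ICI")
print("Median PFS (months):", kmf.median_survival_time_)
kmf.plot_survival_function()  # survival curve with confidence band
```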
Patient characteristics
Overall, we identified 24 patients who met the inclusion criteria. Selected demographic and clinical characteristics of these patients are summarized in Tables 1 and 2. Before receiving an ICI, the majority of patients had received a VEGFR-targeted agent as first-line therapy (Fig. 1).
Clinical outcomes: First-line VEGFR-targeted agents
Median PFS for first-line TKI therapy was 3 months (range, 1-22 months) (Fig. 2a). Partial responses were observed in 2 patients (10.5%), and 15 patients exhibited disease progression at the time of the first interim assessment. Six patients received an mTOR inhibitor (2, first line; 4, second line or later) and none achieved objective response. The toxic effects of sunitinib, the most frequently received first-line agent (n = 15), were comparable overall to those reported in studies in RCC and included mainly asthenia and rash.
Clinical outcomes: First immune checkpoint inhibitor
Of the 24 patients, 17 received nivolumab, 3 received ipilimumab and 4 received ICI-based combination therapy (Table 2). All patients received at least one dose of an ICI; 22 (91.6%) received 4 doses or more. The median PFS was 2.5 months (range, 1-40 months) (Fig. 2b). Four patients (16.7%) experienced a partial response and 3 (12.5%) had stable disease in response to the ICI. Among the four patients who achieved an objective response, one received pembrolizumab in combination with a 41BB agonist [17] (PFS 30 months), two received nivolumab (PFS 8 and 3 months) and one received ipilimumab (PFS 9 months). Remarkably, one of the four responders, patient 1, showed a partial response to ipilimumab lasting for 9 months. At the time of ipilimumab administration, this patient had an ECOG performance status (PS) of 3, with peritoneal, liver and lung metastases. His ECOG PS improved quickly on ipilimumab therapy, leading to a complete response of his abdominal and lung metastases; a residual 6 cm mediastinal mass was resected. The patient achieved partial response 4 months after starting ipilimumab, but developed bilateral grade 4 optic neuropathy, as previously described [14]. Upon progression, he began treatment with nivolumab, but 6 weeks later his disease had progressed, including development of 8 metastatic lesions in the brain. Genomic evolution of the tumor of this exceptional responder is reported below. The most frequent toxic effects of the ICIs, except for patient 1, were asthenia grade 2 (n = 9) and dyspnea grade 2 (n = 3). With a median follow-up duration of 19.3 months, the median OS was 24 months. Of note, no pseudoprogression was observed among the 24 patients.
Genomic correlates of response to ICI
Tumor genomic data were available for 8 patients treated with ICIs: four had whole-exome sequencing and four targeted sequencing. Four of these patients (50%) derived clinical benefit from the ICI, including 2 patients with partial response and 2 patients with stable disease. The mutational load of the 4 tumors assessed by whole-exome sequencing was low, ranging from 4 to 30 mutations per exome. No recurrent mutation was identified by exome sequencing (Fig. 3a). Overall, the median mutational load of these 4 tRCCs was lower than that of the ccRCC samples from the TCGA dataset (n = 424; p < 0.0001) (Fig. 3b). Focusing on the 410 cancer genes covered by both MSK-IMPACT and whole-exome sequencing in all samples, the median mutation rate in the 8 tumors was 0 (range, 0-3). Notably, SMARCA4 mutation was the sole recurrent mutation, identified in 2 cases. The two patients who showed clinical benefit lasting for at least 6 months harbored mutations of bromodomain-containing genes (PBRM1 and BRD8) (Fig. 3c), consistent with a recently reported association between mutations of bromodomain genes and response to ICIs [18].
Genomic landscape of resistant clones in a patient with exceptional response
As already described, patient 1 developed a dramatic response to ipilimumab lasting for 9 months; the patient had a complete response except for one resistant clone that was stable under treatment with ipilimumab, which was resected 9 months after the last ipilimumab administration and subjected to whole-exome sequencing at 2 distinct, opposite regions. The number of somatic mutations in these 2 resistant clones was high, ranging from 120 to 136 mutations/50 Mb as compared to 30 mutations/50 Mb in the primary tumor (Fig. 4a). The majority of mutations present in the primary tumor (n = 25; 83.3%) were also present in both resistant clones, suggesting branched tumor evolution; surprisingly, the BRD8 mutation was lost in both resistant clones. Unexpectedly, we also discovered a phenomenon of parallel evolution of somatic mutations involving 17 distinct genes, with a median of 3 somatic mutations per gene (range, 2-13) (Fig. 4b-c). Gene Ontology analysis using String identified enrichment of O-glycan processing genes (n = 5; false discovery rate = 9.7 × 10−6) (Fig. 4b), strongly suggesting the importance of this pathway in the acquired resistance to ICI in this exceptional responder. CDC27 was the most frequently mutated gene, involving 13 and 14 single-nucleotide polymorphisms in resistant clones 1 and 2, respectively (Fig. 4c).
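The clonal comparison described above reduces to simple set operations on the mutation calls of the primary tumor and the two resistant clones. The following sketch only illustrates that logic; apart from BRD8 and CDC27, the variant labels are hypothetical placeholders, not the study's actual calls.

```python
# Classify mutations as truncal (shared), lost, or acquired between samples.
primary = {"BRD8:var1", "GENE_A:var2", "GENE_B:var3"}                 # primary tumor calls
clone_1 = {"GENE_A:var2", "GENE_B:var3", "CDC27:var4", "MUC_X:var5"}  # resistant clone 1
clone_2 = {"GENE_A:var2", "GENE_B:var3", "CDC27:var6", "MUC_X:var7"}  # resistant clone 2

truncal = primary & clone_1 & clone_2      # shared trunk of the branched evolution
lost = primary - (clone_1 | clone_2)       # e.g. the BRD8 mutation lost in both clones
acquired = (clone_1 | clone_2) - primary   # mutations private to the resistant clones

print("truncal:", sorted(truncal))
print("lost in both resistant clones:", sorted(lost))
print("acquired under treatment:", sorted(acquired))
```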
Discussion
In this international, multicenter retrospective study of 24 patients with metastatic MITF family tRCC who received ICI therapy, we found that 16.7% of patients had a clinical response to an ICI, with a disease control rate of 29% when stable disease was also included. Although genetic assessment was available for a limited number of samples, we discovered that tumors of patients with clinical benefit harbored mutations in bromodomain-containing genes. This is, to our knowledge, the first assessment of the clinical efficacy of ICIs in patients with this type of RCC.
The lack of standard treatment for patients with metastatic tRCC is due mainly to the exclusion of patients with non-ccRCC from most large randomized trials; only a few small trials have included tRCC patients, all grouped with non-ccRCCs. Given the benefits of nivolumab in ccRCC, and the lack of other effective therapies for non-ccRCCs, this ICI is being used increasingly in non-ccRCC, although with few data to support its efficacy. Nivolumab is approved in the second-line setting for patients with RCC who have received a VEGFR-targeted agent, based on the results of CheckMate 025, a randomized phase III trial comparing nivolumab to everolimus [12]. Patients treated with nivolumab had a longer OS (25.0 vs 19.6 months) and greater response rate (25% vs 5%), although no difference in PFS was observed. However, no patients with non-ccRCC were included in that study.
Some preliminary data support the use of ICIs in non-ccRCC. Choueiri et al. reported a series of patients with non-ccRCC whose tumors and tumor-infiltrating mononuclear cells were analyzed for PD-L1 by IHC [11]. Of the 10 patients with tRCC, 3 were shown to have PD-L1+ tumor cells and 9 PD-L1+ tumor-infiltrating cells. Two small retrospective series have reported on a combined 81 patients with non-ccRCC treated with an ICI [19,20]. Although only 4 patients with tRCC were included in those studies, one patient had a partial response, one had stable disease, and 2 had progressive disease.
Our study considerably expands what is known about the outcomes of ICI therapy for metastatic tRCC patients. As expected, most of the patients we identified (71%) were treated with nivolumab. These patients' median PFS, 3 months, was shorter than the 4.6 months reported for CheckMate 025, although it is generally understood that PFS is not an optimal measure to gauge benefit from nivolumab therapy [12]. Similarly, the overall response rate was 16.7%, compared to 25% in CheckMate 025. To date, no predictive biomarkers have been approved for selecting RCC patients who will best respond to ICIs, although several markers have been explored [21]. Higher tumor mutational load has been correlated with response to ICIs in several tumor types [22,23]. Our data showing a low mutational load in tRCC confirmed previous reports; the limited mutational load in tRCC, even in metastatic cases, suggests low numbers of neoantigens in these tumors. The retrospective nature and small sample size of this analysis preclude any conclusions about the predictive value of any genomic event. It is, however, important to highlight here that the two patients with lasting clinical benefit harbored somatic mutations of the bromodomain-containing genes PBRM1 and BRD8. Recently, mutations of PBRM1 have been shown to be associated with benefit from nivolumab in patients with ccRCC [13]. Interestingly, one of the responders received pembrolizumab in combination with a 41BB agonist (41BB is a costimulatory molecule induced upon TCR activation that promotes cell survival and enhances cytotoxic T-cell responses). This combination may have enhanced the efficacy of pembrolizumab.
Notably, this is the first published report, to our knowledge, not only of a loss of BRD8 mutation in the 2 resistant clones in response to an ICI but also of an increase in mutational load and a phenomenon of parallel evolution affecting genes involved in O-glycosylation. Parallel evolution is a mechanism that has been demonstrated in bacteria and plants and is thought to reflect the selection of key forces that help predict and prepare for the organism's future evolutionary course [24]. Given the major role of glycosylation in adaptive immune activation [25], further studies are needed to clarify the importance of this process in ICI response. Furthermore, unbiased genomic screens recently showed that dysfunction of CDC27, a member of the anaphase-promoting complex/cyclosome, limits excessive instability of cancer chromosomes, allowing tumor cells to dynamically improve their fitness during cancer evolution [26]. Notably, the high rate of somatic mutations found in the CDC27 gene suggests that this might provide a selective advantage, improving fitness and limiting genetic instability. Reporting genomic results of exceptional responders to immunotherapy has been shown to provide much information for exploring mechanisms of immunotherapy sensitivity and resistance. For example, PTEN mutation and reduced expression of genes encoding neoantigens were recently identified as potential mediators of resistance to immune checkpoint therapy in one patient with metastatic uterine leiomyosarcoma who had experienced complete tumor remission for > 2 years on anti-PD-1 monotherapy [27]. In addition, long-term responses to anti-PD1 immunotherapy were recently described in four patients with small cell carcinoma of the ovary, a highly aggressive monogenic cancer driven by SMARCA4 mutations [28]; this was unexpected for a low mutation burden cancer, but the majority of the tumors demonstrated PD-L1 expression with strong associated T-cell infiltration [28].
The majority of the patients in our series received a VEGFR-targeted agent as first-line therapy prior to the ICI, with disappointing results. Two small retrospective series have specifically looked at response to VEGFR-targeted agents in tRCC [8,9]. In one series of patients with metastatic tRCC treated with a VEGFR- or mTOR-targeted agent, the median PFS of the 21 patients who received sunitinib was 8.2 months (95% confidence interval, 2.6-14.7) [9]. In another series of 15 patients treated with a variety of VEGFR-targeted agents, the median PFS was 7.1 months, with 3 achieving a partial response [8]. The median PFS durations in these studies were considerably longer than that in our cohort. Although the small numbers of patients limit comparison, the earlier studies, which used TFE3 staining to confirm the diagnosis, may have included patients without a true translocation, whereas in this study the majority of cases (87.5%) had the translocation confirmed by FISH. Given that VEGFR-targeted therapies are still used as first-line treatment for RCC, further studies should be conducted to confirm the efficacy of these agents with molecular or FISH correlation of translocation.
Although this is one of the largest retrospective reviews of this entity, the small number of patients is the main limitation of our study. The small cohort is partly explained by the rarity of this subtype of RCC. Another limitation is that our cohort included patients with different ages at onset who received different ICIs and combinations. However, it is the first multicenter study of consecutive patients treated in several centers of expertise across Europe and the USA.
Conclusion
In summary, ICIs showed objective responses in tRCC similar to those observed in clear-cell RCC. New studies are needed to explore factors associated with resistance in this setting. Mutations in bromodomain-containing genes might predict response to ICIs as reported in other cancer subtypes, and this requires prospective exploration. Importantly, responses to VEGFR-targeted agents also appear to be limited in this subtype, with a shorter PFS than previously reported, and a few durable responses were seen with ipilimumab or combination therapies [18,20]. Given the early data showing high rates of response to combinations of an ICI and a VEGFR-targeted agent in patients with ccRCC, combinations are now being explored in clinical trials in non-ccRCC, including tRCC [NCT02724878, NCT02496208]. Given the rarity of this population, these trials should be considered for patients with MITF family tRCC when available. Development and studies of novel, biology-driven agents are crucially needed. | 2019-01-04T08:12:17.367Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "f7c0e29fe905bab57c01e50031a28d386457f23e",
"oa_license": "CCBY",
"oa_url": "https://jitc.bmj.com/content/jitc/6/1/159.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f7c0e29fe905bab57c01e50031a28d386457f23e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239494263 | pes2o/s2orc | v3-fos-license | Several decades of two invasive fish species (Perccottus glenii, Pseudorasbora parva) of European concern in Lithuanian inland waters; from first appearance to current state
Abstract. Following their first appearance, the invasive fishes Pseudorasbora parva and Perccottus glenii have been in Lithuania for several decades. However, until recently, information relating to their distribution and secondary spread was limited. For this reason, suitable habitats for these fish species were surveyed for their presence across the entire country. Additionally, all previously reported records on the presence of these species were summarized. Results revealed P. glenii to be widely distributed within the country with abundant populations in habitats suitable for the species. The recent distribution of P. parva is restricted to only a few water bodies. It was shown that both species are associated with human mediated transfer, while no natural dispersal of these invasive species was observed. The results of this study suggest that the invasion of Lithuanian inland waters by P. parva and P. glenii is still ongoing, and their occurrence in numerous water bodies, which are still devoid of these species, now seems probable. Demonstrated vectors of P. parva and P. glenii introductions in Lithuania highlight the importance of controlling and screening human activities related to aquaculture, recreational angling and the ornamental fish trade in order to restrict further P. glenii and P. parva expansion in this region.
Pseudorasbora parva (Temminck & Schlegel, 1846). The presence of both species was first recorded several decades ago.
The first official record of P. parva in Lithuanian inland waters dates back to 1963 (Krotas 1971, Virbickas & Maniukas 1971). The first few individuals were caught in a small enclosed water body, Lake Dunojus (Fig. 1A). Pseudorasbora parva was introduced unintentionally with imported stocks of juvenile Ctenopharyngodon idella (Valenciennes, 1844) that were deliberately stocked in the lake (Virbickas 2010). For some time, the species was abundant at the site (Krotas 1971). Various age classes of P. parva were detected, showing a well-developed population with potential for spreading elsewhere in the country. However, later the species vanished without an apparent cause (Virbickas 2010). Interestingly, the same pattern of introduction, establishment and subsequent sudden disappearance was observed in 2008-2012. The contemporary status of P. parva in the inland waters of Lithuania is unknown.
The first record of P. glenii dates back to 1985 (Virbickas 2000). The first few individuals of P. glenii were caught in Lake Bevardis near Vilnius (Fig. 1A). The introduction of P. glenii originated from ornamental fish keeping. An abundant population of P. glenii with a large age variation was recorded in Lake Bevardis, thereby showing high potential for further spread within the country. In contrast to P. parva, P. glenii started to expand from its first introduction, giving rise to several more numerous populations within and around Vilnius (Virbickas 2000). In 2014, it was shown that the species is widespread in some regions (Rakauskas et al. 2016a). However, there were no further data on the distribution of P. glenii in the inland waters of Lithuania.
Recent monitoring of invasive species has shown that P. parva is still present in Lithuania, and P. glenii is still expanding its range in the country. Thus, an understanding of the vectors for the spread of P. glenii and P. parva may help to predict and prevent their further expansion and establishment in the region. Understanding the pattern of spread of these species within the country can help in the formulation of better strategies for control and the protection of endangered species.
Study area
Lithuania is in the Baltic Sea drainage basin, situated along the south-east shore of the Baltic Sea, with a territory of approximately 64,800 km² divided among seven main river basins (Kažys 2013). There are 2,850 lakes with a surface area exceeding 0.005 km² in the country, and 3,150 smaller lakes, with a total area of 913.6 km². In addition, there are 1,132 reservoirs and more than 3,000 ponds in Lithuania (Kažys 2013). The River Nemunas has the largest catchment area in Lithuania, with 93% of Lithuanian territory lying within, or connected by canals to, the River Nemunas basin (Fig. 2). Of note is that the River Nemunas drainage basin is connected to the Rivers Pripyat and Vistula by the Oginski (opened in 1783) and Augustow (opened in 1839) canals (Fig. 2), forming a northern branch of the central European invasion corridor (Rakauskas et al. 2016a, 2018). Connections between the watersheds of the Rivers Nemunas and Dnieper form the most probable pathway for new fish invasions of this region, primarily those of Ponto-Caspian origin (Rakauskas et al. 2016a, 2018).
Screening for non-indigenous fish species
Historical records of the presence of P. glenii and P. parva in Lithuania were assembled from both published papers and "grey" literature (local scientific reports, verified reports in social media, etc.). After the initial records of the two species in inland Lithuanian waters, an inventory of records was compiled for three discrete periods: I – 2008-2010; II – 2012-2014; III – 2019-2021. In each of these periods, locations of invasive fish were identified from anglers' messages about the presence of invasive fish. When checking sites, several randomly selected water bodies potentially suitable for the studied fish species, located up to 2 km from the identified sites, were also screened for the presence of invasive species. Furthermore, already recognised populations of P. glenii and P. parva were re-investigated during each inventory. Information on the distribution of the studied invasive species during the first inventory period was collected from national scientific reports (Kaupinis et al. 2009, Virbickas 2010, 2011). During the second period, data were assembled from both the published literature (Rakauskas et al. 2016a) and scientific reports (Stakėnas et al. 2014). The data for the most recent period (2020-2021) were compiled from surveys conducted during a national invasive species-monitoring program. Occasional catches of single specimens of P. glenii recorded outside preferred habitats between 2000 and 2021, reported during national fish monitoring or studies conducted for other purposes, were added to the overall inventory data. Notably, ichthyological studies are performed annually in a large number of lakes (~60) and river sites (~150) as a part of national monitoring programs in Lithuania.
Results also include data from 533 lentic water bodies with a surface area smaller than 0.5 km². Fish were captured using battery-powered electric fishing gear from May until October. Electric fishing was performed from a boat or by wading for 10 min intervals in water depths of 0.5-3.0 m for each catch per unit effort (CPUE). Fish taxonomy used in the present paper follows FishBase (http://www.fishbase.se, accessed 2021.06.15).
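For clarity, the two abundance measures used throughout the results (catch per 10-min electric-fishing interval and density per 100 m²) amount to simple normalizations of the raw catch; a minimal sketch with hypothetical numbers is given below.

```python
# Minimal illustration of the effort normalizations assumed in the results.
def cpue_per_interval(total_catch: int, n_intervals: int) -> float:
    """Mean number of individuals caught per 10-min electric-fishing interval."""
    return total_catch / n_intervals

def density_per_100m2(total_catch: int, sampled_area_m2: float) -> float:
    """Individuals per 100 m2 of sampled littoral area."""
    return 100.0 * total_catch / sampled_area_m2

# Hypothetical example: 450 P. glenii caught over 6 intervals covering 150 m2.
print(cpue_per_interval(450, 6))      # 75.0 individuals per interval
print(density_per_100m2(450, 150.0))  # 300.0 ind./100 m2
```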
First inventory (2008-2010)
Perccottus glenii
A total of 121 water bodies were investigated for the presence of P. glenii during the first inventory period. Anglers identified 53 water bodies as harbouring specimens of P. glenii. Furthermore, 58 water bodies were investigated as potential habitats around those identified by anglers, including one where P. glenii was recorded for the first time.
Perccottus glenii was found in 39 of 121 (32.2%) investigated water bodies. Two sites were also discovered in rivers during national ichthyological monitoring (Virbickas 2008, 2009). Overall, the presence of this species was identified at 41 locations (Fig. 1B). The species was found at 38 (71.7%) of 53 sites suggested by anglers. Preliminary analysis showed that the species was well established in some water bodies at the time of surveys and dominant in the fish community. At some sites P. glenii constituted up to 95% of the overall fish assemblage at a density exceeding 300 ind./100 m². A large age range of specimens was detected during the study (from 0+ to 11+), indicating well-developed populations and suggesting further potential for expansion in the country (Kaupinis et al. 2009, Virbickas 2011). Notably, the species was still present and abundant in Lake Bevardis, the site of its first introduction in Lithuania. Only single individuals of P. glenii were found in river sites. In all cases, abundant P. glenii populations were recorded from small, hypereutrophic or dystrophic water bodies with atypical fish assemblages. Such fish assemblages as a rule consisted of one to three fish species with no other piscivorous species except P. glenii. Fish species accompanying P. glenii were mostly represented by Leucaspius delineatus (Heckel, 1843) and Carassius gibelio (Bloch, 1782), as well as single cases of Tinca tinca (Linnaeus, 1758) and Carassius carassius (Linnaeus, 1758).
Pseudorasbora parva
A total of ten water bodies were investigated for the presence of P. parva during the first inventory period. Anglers identified fewer water bodies with P. parva during this period, although the first record of P. parva in Lithuania was much earlier than that of P. glenii. Two water bodies were investigated based on angler reports. One site was investigated as it was formerly known to support a population of P. parva, and seven additional water bodies were investigated close to the locations identified by anglers.
Pseudorasbora parva was found at both the sites indicated by anglers but was not detected at the other sites surveyed as potential habitats (Fig. 1B). Three (from 0+ to 2+) age groups of P. parva were caught at both sites, showing well-developed populations (Virbickas 2010). Notably, the species was not found at its first recorded location. P. parva was not detected at any other sites, other than those identified by anglers, suggesting extinction at the previously inhabited water body and no further expansion around the locations at which it was recorded by anglers. Thus, in total, P. parva was found in two of ten (20%) investigated water bodies. The appearance of all new P. parva populations was associated with unintentional introductions while stocking juvenile C. idella for controlling aquatic vegetation.
Perccottus glenii
In total, 154 water bodies were investigated for the presence of P. glenii during the second invasive species inventory period. Forty-one sites were investigated as a result of previous records of P. glenii during the first inventory period. A further 47 water bodies were identified by anglers as harbouring P. glenii specimens, and an additional 66 sites were investigated as further potential habitats.
Perccottus glenii was found at 67 (43.5%) of the investigated locations. One additional site was located in the River Mera during national riverine ichthyological monitoring, with a total of 68 locations showing the presence of this species (Fig. 1C). Of those 41 sites at which P. glenii was detected in the first phase, the species was present at 20 (48.8%) sites. This finding implies the disappearance of P. glenii from 21 (51.2%) water bodies at which the species was previously present. The species was also found at 36 (76.6%) sites (of 47) suggested by anglers. Overall, P. glenii was found at 48 new sites and was not detected at 21 where it had previously been found. It was still present in Lake Bevardis, the site at which it was first introduced. During this phase of data collection, eradication measures had been undertaken at sites where P. glenii was particularly abundant (Rakauskas et al. 2019).
In common with results from the first inventory, P. glenii was most abundant in small, hypereutrophic water bodies. Similarly, the fish species most frequently co-occurring with P. glenii were L. delineatus and C. gibelio.
Pseudorasbora parva
Only two sites previously known to harbour P. parva populations were investigated during the second inventory period. Anglers failed to report new locations with P. parva, and none of the sites previously known to support P. parva gave evidence of its presence, indicating its possible disappearance from the inland waters of Lithuania (Fig. 1C).
Current distribution (2019-2021)
Perccottus glenii
A total of 533 water bodies were investigated for the presence of P. glenii during the third invasive species inventory period. Sixty-eight sites were investigated as locations at which P. glenii was present during the second inventory period, and anglers identified a further 37 new locations. An additional 105 sites were investigated as potential habitats in proximity to those identified by anglers and around locations previously identified as harbouring P. glenii. A further 323 habitats were investigated in regions where there were no previous records of the presence of P. glenii.
One hundred and twenty-two (22.9%) sites were identified with P. glenii present. An additional three sites were also reported from national riverine ichthyological monitoring, giving 125 locations at which the species was known to be present (Fig. 1D). Of the 68 sites at which P. glenii was found in the second survey, the species was present at 65 (95.6%) of the sites; P. glenii disappeared from only three water bodies at which the species was present several years previously. The species was also found at 26 (70.3%) sites (of 37) suggested by anglers. Thirty-one new P. glenii populations were found in areas distant from sites previously known to support P. glenii in Lithuania. Overall, P. glenii was found at 60 new sites and was not detected at three where it had previously been found. Of note was a record of successful bio-manipulation of P. glenii (Rakauskas et al. 2019), with the species successfully removed from its original site of introduction in Lithuania. However, since its first introduction it has shown steady expansion, with the number of water bodies occupied constantly increasing.
As with the first and second inventory results, P. glenii was most abundant in small, hypereutrophic water bodies and co-occurred most often with L. delineatus and C. gibelio. In 13 water bodies (10.4% of all recently invaded sites) P. glenii formed mono-species fish assemblages.
Pseudorasbora parva
A total of 19 water bodies were investigated for the presence of P. parva during the last inventory period.
Anglers identified eight new potential locations and a further 11 potential sites were also investigated.
Pseudorasbora parva was present only in seven (38.9%) of the surveyed locations (Fig. 1D), with seven of the eight sites indicated by anglers found to harbour populations of P. parva. Several (from 0+ to 2+) age classes of P. parva were caught in all sites, suggesting stable populations. None of the additionally investigated sites showed the presence of the species, indicating no further expansion around the locations reported by anglers.
For the first time, P. parva was detected in rivers, indicating an elevated risk of expansion within the country. The appearance of new lentic P. parva populations was again associated with unintentional introduction while stocking juvenile C. idella for biomanipulation purposes. The lotic population was detected close to a fish farm that cultivated C. idella, suggesting P. parva had escaped from the farm and further implicating C. idella stocking as the source of P. parva introductions in Lithuania.
Introduction history
The first official record of P. glenii in Lithuania comes from Lake Bevardis, a small enclosed lake, and dates back to 1985 (Fig. 1A). It was suggested that the introduction of P. glenii was a by-product of ornamental fish keeping (Virbickas 2011). An abundant population of P. glenii with large age variation was recorded at Lake Bevardis at that time, showing high potential for expansion within the country (T. Virbickas, unpublished data). Later, the species was translocated further and gave rise to several more numerous populations within and around Vilnius. A potential secondary pathway for P. glenii introduction was intentional, though illegal, introduction by local anglers. According to anglers, the presence of P. glenii in other ponds around Vilnius was recorded before 2000. A preliminary survey in 2010 showed that the species was already well established at that time in the water bodies surveyed and dominant in many fish communities. The species constituted 66-95% of total fish numbers in all water bodies investigated (more than 300 ind./100 m²) (Virbickas 2011). Since its first introduction, P. glenii has shown consistent expansion in Lithuania, even in the face of measures to control its spread from 2013.
Habitats
The presence of P. glenii has recently been identified in a total of 125 water bodies. The large number of habitats surveyed provides an unambiguous view of the preferred habitats of P. glenii in Lithuania. It is clear the species is not able to establish and expand in environments with a good ecological status and balanced fish assemblage. Large P. glenii populations were typically associated with degraded, hyper-eutrophic ecosystems with atypical fish assemblages comprising 1-3 species. In 13 water bodies (10.4% of all recently invaded) P. glenii formed mono-species fish assemblages. A total of 92% of all recently known viable P. glenii populations are found in small (< 10 ha), shallow lentic water bodies, with a thick sediment (sapropel) layer and a littoral zone densely overgrown with macrophytes. Most of these sites are subjected to irregular oxygen depletion events during prolonged ice cover. During the entire period of the investigation, there were only six sites at which P. glenii was found in lotic ecosystems, despite sampling in rivers during each survey period. Similar habitat preferences were shown in neighbouring countries in the region (Nowak et al. 2008, Grabowska et al. 2011, Lukina 2011, Reshetnikov 2013, Kutsokon 2017). The study also showed that P. glenii is not capable of long-distance dispersal through small, cold-water, fast-running rivers. For a decade, P. glenii was recorded in the channelized upper reaches of the River Mera, where it disperses through the ditches from a small eutrophic lake. However, it has never been caught in a natural section of the river downstream, even though the natural site has been monitored annually for two decades as part of the Natura 2000 and salmonid species monitoring system (annual national reports).
Introduction vectors
The appearance of the first population of P. glenii in Lithuania is thought to be associated with ornamental fish keeping. Local anglers believe the source population is Russia, and genetic studies of P. glenii populations in Europe also suggest Russian populations as the most probable source of introductions of Lithuanian P. glenii populations (Grabowska et al. 2020). Genetic analyses revealed that P. glenii in Europe consists of at least three distinct haplogroups that may represent independent introduction events from different parts of its native range. The haplogroup recorded in Lithuania was also found in neighbouring countries, such as Latvia (Daugava drainage), Belarus (Dnieper drainage), and further away in Russia, in the lower River Volga drainage (Grabowska et al. 2020). First records of P. glenii in Latvia and Belarus were in the 1970s (Lukina 2011, Grabowska et al. 2020), while the first reports of P. glenii in Russia come from St. Petersburg in 1916 (Kuderskiy 1980); significantly earlier than in other countries. Thus, the hypothesis that the P. glenii haplogroup that is typical for Lithuania was introduced from the Volga drainage seems reasonable.
The secondary pathway for P. glenii dispersal in the inland waters of Lithuania is transfer by anglers. The observed pattern of fragmented dispersal of P. glenii suggests human-mediated transfer (i.e. illegal, but intentional introductions), which has significantly facilitated the expansion of this species. Based on discussions with anglers, there are two main purposes for these introductions: a) to improve fish diversity for angling purposes in species-poor water bodies, and b) to control unwanted populations of C. gibelio and/or amphibians. Strikingly, during this investigation anglers identified new viable P. glenii populations with 72.7% accuracy, indicating that anglers are familiar with the species.
Secondary dispersal pathways were also demonstrated to be operating in neighbouring countries. Fish release by aquarists and anglers is considered one of the primary reasons for the expansion of P. glenii in Belarus, Ukraine and Poland (Nowak et al. 2008, Lukina 2011, Kutsokon 2017). Anthropogenic introductions have also facilitated further expansion via natural mechanisms, particularly through drainage ditches, streams and rivers that may serve as invasion highways at the river drainage scale (Lukina 2011, Kutsokon 2017, Grabowska et al. 2020). Our study on the distribution of P. glenii in Lithuanian waters only revealed human-mediated translocations as a secondary dispersal pathway. Only in a few cases were single P. glenii individuals captured in rivers, despite the implementation of intensive ichthyological studies in Lithuanian rivers. Furthermore, all reported lotic cases were in proximity to known abundant lentic P. glenii populations, suggesting recent isolated individual migration. Similar results were obtained from studies of the distribution of P. glenii in Belarus and Ukraine, which revealed only a few cases when single individuals of the species were found in natural, well-preserved fluvial river stretches (Lukina 2011, Kutsokon 2017). Furthermore, our studies on the River Mera revealed the presence of P. glenii in upper river stretches, connected to the lake population, though the species was never found in the river downstream of the lake. This finding indicates that the species probably cannot persist in the presence of local piscivorous fish assemblages in rivers. There is no evidence that P. glenii can be translocated accidentally by birds, boats or by other means, as their eggs are sticky, and once removed from the spawning site they never hatch (Reshetnikov 2003). Consequently, human-mediated translocation seems to be the primary vector by which P. glenii dispersal will occur in the inland waters of Lithuania in the future.
Impact
It is recognised that following its introduction, P. glenii seriously depletes the abundance of juvenile C. gibelio, thereby disrupting the sustainability of the species, as well as depleting the abundance of L. delineatus (unpublished data). One study was conducted on the possible impact of P. glenii on local European pond turtles, Emys orbicularis (Linnaeus, 1758). The study revealed that large specimens of P. glenii are not capable of preying on juvenile pond turtles, and thus cannot directly threaten pond turtle populations through predator-prey interactions, though their habitats and current distributions overlap in the inland waters of Lithuania. In contrast, it was shown that mature adult E. orbicularis can prey on juvenile P. glenii. Therefore, abundant turtle populations could potentially control the invasive P. glenii where their distributions overlap (Rakauskas et al. 2016b). No studies of the impact of P. glenii on local aquatic ecosystems have yet been completed.
However, the potential impact on local ecosystems can be extrapolated from locations with similar climatic conditions. In general, it was concluded that P. glenii is capable of depleting the diversity of aquatic macroinvertebrates, amphibians, reptiles, and fishes (Koščo et al. 2008, Grabowska et al. 2009, Pupiņš & Pupiņa 2012, Reshetnikov 2013), making it a serious threat to European freshwater ecosystems.
Eradication
Perccottus glenii has been included on the list of Lithuanian invasive species since 2004 (Republic of Lithuania 2004). The species has also been on the list of invasive species of European concern since 2016 (European Commission 2016). As a consequence, experimental eradication and control measures for the most abundant P. glenii populations in Lithuania have been applied. Although control measures were applied at a small scale, the results were encouraging. It was shown that stocking the local piscivorous fishes Esox lucius Linnaeus, 1758 and Perca fluviatilis Linnaeus, 1758 could be a valuable measure for the eradication of P. glenii from invaded water bodies (Rakauskas et al. 2019). Furthermore, this eradication measure was popular with pond owners, some of whom applied it independently. As a result, P. glenii was eradicated from 24 water bodies during the study period.
Future threats
Our study results clearly indicate that P. glenii is widely distributed in Lithuania (Fig. 1D), showing consistent expansion of viable populations in the country. The observed pattern of fragmented dispersal implies human-mediated translocation (i.e. illegal, but intentional introductions), which has significantly facilitated the expansion of this species. The occupation of Lithuanian waters by P. glenii is still ongoing, and its invasion into numerous water bodies, from which it is currently absent, seems probable. Although the species was eradicated from several ponds, over the long term this invader will likely occupy most small, stagnant and eutrophic water bodies that are overgrown with vegetation, such as oxbow lakes, floodplain pools, bogs and ponds, both natural and artificial, in Lithuania. Unfortunately, a similar prediction for further expansion of P. glenii has also been made for neighbouring countries (Lukina 2011, Kutsokon 2017). A public opinion poll showed that citizens lack information on P. glenii and its potential damage to freshwater ecosystems. Therefore, ecological education for the public is of primary importance to protect Lithuanian waters from further intentional illegal introductions of P. glenii.
Stone moroko
Introduction history
The first official record of P. parva in Lithuanian inland waters dates back to 1963 (Krotas 1971, Virbickas & Maniukas 1971). The first few individuals were caught in Lake Dunojus, a small enclosed water body (Fig. 2A). After investigation of this introduction, it was concluded that P. parva was introduced unintentionally with imported stock of juvenile C. idella during lake stocking (Virbickas 2000). For some time, the species was abundant at the site of introduction (Krotas 1971), and various age classes of P. parva were discovered, showing a well-developed population with invasion potential. However, the species subsequently disappeared from the site without a clear reason (Virbickas 2000), though it appears that P. parva suffers from predation pressure and interspecific competition from other fish species. During fish surveys in Lake Dunojus in 1995, a diverse fish assemblage, including native piscivorous E. lucius and P. fluviatilis, was observed (T. Virbickas, unpublished data). Luckily, the species was not translocated further within the country from its first introduction at that time. Until 2008 there was no record of the species in the country. However, P. parva was again inadvertently introduced into private ponds in 2008. Again, its introduction was associated with unintentional release with imported stocks of C. idella (Virbickas & Sidabras 2007, Virbickas 2010). However, similarly to the first P. parva introduction, the species again disappeared from the water bodies into which it was introduced. During the 2012-2014 survey, the absence of P. parva from its former introduction sites was confirmed, indicating the extinction of the species in the country (Fig. 1C). Finally, the species was once again recorded in Lithuanian waters in 2021. This time the species was recorded in up to seven water bodies, including a river site, suggesting high potential for its further spread within the country.
Habitats
Relatively little information is available on the habitat use of P. parva in Lithuania, as the species has a relatively small distribution and most studies provide basic macrohabitat information only (Virbickas 2000, Rakauskas et al. 2016a). P. parva was first found in a natural, relatively small (20 ha), eutrophic lake. The fish community of the lake was primarily composed of warm-water fish, characteristic of ecosystems at late succession stages, dominated by Rutilus rutilus (Linnaeus, 1758), Abramis brama (Linnaeus, 1758) and T. tinca. In addition to these dominant species, fishes of other ecological groups also inhabited the lake, such as Gymnocephalus cernua (Linnaeus, 1758). Piscivorous fish comprised E. lucius and P. fluviatilis (T. Virbickas, unpublished data). However, the species did not persist in this habitat, which thus was considered unsuitable for P. parva. The species was later recorded from slightly different habitats.
During the second species introduction wave, P. parva was found in private, small (< 1 ha) artificial ponds, often overgrown by water plants, with no possibility for further spread. Similar habitat preferences were also reported from neighbouring countries, where P. parva was generally associated with submerged vegetation (Kapusta et al. 2008, Karabanov et al. 2010). However, in Lithuania its occurrence in specific habitats may be coincident with its main introduction vector, C. idella, which is usually stocked to remove aquatic vegetation. Again, these habitats appear unsuitable for the long-term persistence of P. parva, as the species again did not last long in the recorded water bodies. It appears that P. parva performs poorly under interspecies competition and heavy predation pressure, and private ponds are usually under intensive fishery use. However, more data are needed to test this hypothesis. Finally, in 2021 the species was found in the River Upė. This small river has muddy substrates, eutrophic status, warm water, a slow current, and a degraded fish assemblage.
Outside Lithuanian waters, P. parva in its introduced range demonstrates great plasticity in habitat utilization, occupying a diverse range of lentic and lotic waters, including rivers, reservoirs, drainage ditches and canals, ponds and shallow lakes (Gozlan et al. 2010a, Karabanov et al. 2010). Although P. parva may form populations under lotic conditions (Sunardi & Manatunge 2005), the species occurs at higher densities in lentic conditions (Arnold 1990, Pollux & Korosi 2006). However, P. parva is known to tolerate a variety of environmental conditions.
Introduction vectors
The primary introduction pathway of P. parva into Lithuanian inland waters was unintentional species release associated with C. idella stocking (Virbickas 2010). Until now, there has been no secondary pathway for P. parva dispersal within Lithuanian inland waters. However, in 2021 the species was recorded in a river connected to the entire River Nemunas basin. Unfortunately, after its recent discovery in this river, natural dispersal may represent a secondary pathway for future expansion of P. parva in Lithuanian inland waters. It is known that small rivers with low flow and main river channels may serve as dispersal corridors for P. parva in Europe (Muchacheva 1950, Gozlan et al. 2010b, Karabanov et al. 2010). Overall, further expansion of P. parva by natural dispersal is to be expected in Lithuania, which may be substantially facilitated by human-mediated introductions. This review should serve as an early warning for other countries, particularly for those importing C. idella for aquaculture or biomanipulation purposes. In general, the expansion of P. parva in neighbouring countries has also been associated with aquaculture (Karabanov et al. 2010).
Impact
Until now, no impact of P. parva on local ecosystems has been investigated in Lithuania. However, the potential impact on local ecosystems can be extrapolated from other regions with similar conditions. Inter-specific competition for food between P. parva and native fish species has been observed in water bodies in Belgium (Declerck et al. 2002), the Czech Republic (Adámek & Sukop 2000), Germany (Stein & Herl 1986), Greece (Rosecchi et al. 1993) and Poland (Witkowski 2002). In a mesocosm experiment, larval P. parva feeding reduced the abundance of planktonic cladocerans and rotifer species (Hanazato & Yasuno 1989, Nagata et al. 2005). High grazing pressure exerted by dense P. parva populations can also result in changes in the prevalent environmental conditions through top-down effects characterized by increased development of phytoplankton and accelerated eutrophication (Arnold 1990, Adámek & Sukop 2000).
Eradication
Currently, P. parva is not included in the list of Lithuanian invasive species (Republic of Lithuania 2016). Thus, no measures to date have been applied for P. parva eradication in Lithuanian waters. However, the species has been on the list of invasive species of European concern since 2016 (European Commission 2016). Therefore, if the species continues to spread and establishes stable and abundant populations in Lithuanian inland waters, local authorities will be obliged to impose control measures. It is known that natural predators such as E. lucius and P. fluviatilis (fishes native to Lithuanian waters) could be used for P. parva control in lentic water bodies (Davies & Britton 2015, Lemmens et al. 2015), and similar measures could be applied in Lithuania for P. parva eradication.
Alternatively, early prevention of introduction is the best measure for invasive species control. To date, all P. parva introductions in Lithuania have been associated with stocking of juvenile C. idella. Therefore, prevention measures, such as prohibiting trade in live 0+ C. idella, would considerably decrease the chance of P. parva introductions in Lithuania. Since 0+ specimens of both species are difficult to distinguish, stocking only with 1+ C. idella, which are substantially bigger than adult P. parva, would help prevent accidental introduction of P. parva. Other prevention options might include monitoring fish farms for intentional or unintentional breeding of invasive species.
Future threats
The first few introductions of P. parva ended favourably, with the species introduced into relatively small, discrete water bodies, with no means of further spread. So far, only primary introduction pathways have been observed for P. parva in Lithuanian waters, with no cases of secondary pathways recorded. All introductions were associated with C. idella stocking. In all cases the species subsequently disappeared from the sites in which it was recorded. However, the recent record of the species in the River Upė, connected to the entire River Nemunas basin, may permit this species to expand further. Despite its strong preference for lentic conditions, small rivers or canalised parts of main rivers may serve as dispersal corridors for P. parva (Muchacheva 1950, Gozlan et al. 2010a, Karabanov et al. 2010). Life-history traits of P. parva, including early maturity, relatively high fecundity, multiple reproductive events during the course of one reproductive season, and male nest guarding, all serve to maximize rapid population growth and, hence, the rapid establishment of sustainable populations. Its appearance in small rivers connected to the River Nemunas basin is particularly troubling. The River Nemunas is connected to the central invasion corridor, giving access to Latvian inland waters. To date, P. parva has not been recorded from the inland waters of Latvia (Aleksejevs & . However, the Rivers Venta and Lielupe flowing from Lithuania to Latvia may serve as donors of this species to the Latvian ichthyofauna, as they are connected with the River Nemunas drainage area by canals. Two such invasion pathways may be operating: 1) the River Nemunas → the River Nevėžis → the Sanžilė canal → the River Lėvuo → the River Mūša → the River Lielupe, and 2) the River Nemunas → the River Dubysa → the Windawski canal → the River Venta (Fig. 2). Overall, there is a high risk that P. parva will occupy degraded water bodies within the entire River Nemunas basin, potentially becoming established as a common fish species in such habitats in the future. Further expansion of P. parva by natural dispersal is to be expected, which may be substantially facilitated by human-mediated introductions. However, the important question of the potential of P. parva to expand its range further remains unanswered.
Concluding remarks
Currently, P. glenii and P. parva are present in Lithuania. Both species have been repeatedly recorded within this region for several decades. However, their current distribution, primary and secondary introduction vectors and pathways are different. Perccottus glenii is widely distributed within the country, with abundant populations in habitats suitable for the species. In contrast, the recent distribution of P. parva has been restricted to only a few water bodies. The observed fragmented pattern of dispersal of P. glenii indicates that human-mediated transfer (i.e. illegal, but intentional introductions) is facilitating the expansion of this species. The unintentional release of P. parva associated with C. idella stocking is the only currently recognised pattern of P. parva introduction. However, with the recent discovery of P. parva in a major riverine ecosystem, it is expected that natural dispersal of the species could occur in the country. Overall, the occupation of Lithuanian inland waters by both species is ongoing, and their invasion into multiple water bodies that are currently devoid of these species seems probable in the future. It is clear that both species are associated with human-mediated transfer. A public opinion poll showed that anglers lack information on invasive fish species and their potential damage to freshwater ecosystems, and ecological education of the public is of primary importance to protect Lithuanian waters from further introductions of P. glenii and P. parva. This review should also serve as an early warning for other countries that face invasion by P. glenii and P. parva. Vectors of P. parva and P. glenii introductions in Lithuania highlight the importance of controlling and screening human activities related to aquaculture, recreational angling and the ornamental fish trade in order to avoid further P. glenii and P. parva expansion in Europe. A draft of this manuscript has benefitted considerably from the comments and suggestions of anonymous reviewers. Author contributions: V. Rakauskas conceived and designed the study, V. Rakauskas, T. Virbickas and A. Steponėnas collected data, V. Rakauskas drafted the manuscript. All authors contributed to editing the manuscript. | 2021-10-24T15:15:40.379Z | 2021-10-22T00:00:00.000 | {
"year": 2021,
"sha1": "f55f158f2b0825f4a83133ba4539fa5aca8e9fbe",
"oa_license": null,
"oa_url": "https://bioone.org/journals/journal-of-vertebrate-biology/volume-70/issue-4/jvb.21048/Several-decades-of-two-invasive-fish-species-Perccottus-glenii-Pseudorasbora/10.25225/jvb.21048.pdf",
"oa_status": "GOLD",
"pdf_src": "BioOne",
"pdf_hash": "7b5b6729dfdcbc661dca786603234f3279c22d2f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
261323643 | pes2o/s2orc | v3-fos-license | Improved Rapid-Expanding-Random-Tree-Based Trajectory Planning on Drill ARM of Anchor Drilling Robots
Permanent roadway support in deep coal mines now depends on the anchor drilling robot's drill arm. Trajectory planning for the drill arm using the conventional RRT (rapid-expanding random tree) algorithm is inefficient and produces crooked, rough paths. To improve the accuracy of path planning, we propose an improved RRT algorithm. Firstly, the kinematic model of the drill arm of the drilling and anchoring robot was established, the modified DH (M-DH) parameters were determined, and the forward kinematics of the drill arm were solved. The end effector's attainable working space was calculated using the Monte Carlo approach. Additionally, to address the slow running speed of the RRT algorithm, an artificial potential field factor was introduced to construct virtual force fields at the obstacle and target points and to calculate the potential field map for the entire reachable workspace, improving the speed with which sampling points approach the target point. At the same time, to address the rough paths generated by the RRT algorithm, the greedy approach and cubic B-spline curve fitting were used to remove unnecessary points and smooth the path, improving the quality of the drill arm trajectory. Finally, 50 time-sampling comparison experiments were conducted on 2D and 3D maps. The experimental results showed that the improved RRT algorithm improved the average sampling speed by 20% and reduced the average path length by 14% compared with the RRT algorithm, which verified the feasibility and effectiveness of the improved RRT algorithm. The improved RRT algorithm generates more efficient and smoother paths, which can improve the intelligence of the support process by integrating and automating drilling and anchoring and providing reliable support for coal mine intelligence.
Introduction
Coal mine intelligence is the core of the high-quality development of the coal industry and provides technical support and a necessary path for the industry [1][2][3]. However, due to the limited degree of automation of the equipment used on fully mechanized roadway excavation faces, China's coal mines generally have low digging efficiency and high safety risks [4][5][6]. Moreover, at the present stage, the permanent support of the roadway mainly relies on manual operation, and support workers account for more than 70% of the workers on the digging face. The drilling and anchoring robot is an important piece of equipment for the realization of intelligent, integrated coal mine excavation. Since the drill arm is the main motion device of the anchor drilling robot, optimizing the trajectory of the arm directly reduces the time of the entire drilling operation [7,8]. Therefore, studying the drilling arm trajectory planning algorithm for coal mine anchor drilling robots has significant practical implications for enhancing support speed and effectiveness, ensuring staff safety, and improving the effectiveness of digging on the comprehensive excavation face [9].
For the reachable workspace and trajectory planning algorithms of robotic arms, many scholars at home and abroad have carried out extensive and in-depth research and practice [10][11][12]. According to Long [13], trajectory planning can be categorized into joint-space trajectory planning and Cartesian-space trajectory planning based on the planning space used. For ease of use, joint-space trajectory planning is commonly used in robots today. The performance of trajectory planning algorithms for robotic arms in underground coal mines directly affects the motion efficiency of the robotic arms [14]. Some classical trajectory planning methods have been proposed by scholars at home and abroad. The main ones are the genetic algorithm, the artificial potential field method, the particle swarm algorithm, the rapid-expanding random tree (RRT) method, etc. [15][16][17]. Wang [18] proposed an improved RRT* algorithm suitable for solving narrow channel problems to improve the performance of RRT*. The improved algorithm takes less than half the time required by the existing algorithm to find the optimal path from the start node to the target node.
Tang [19] proposed an adaptive particle swarm algorithm with perturbation that can plan time-, energy-, and jerk-optimal trajectories while satisfying joint constraints, but the problem of local minima is not taken into account. Kivelä et al. [20] proposed a path planning method for the autonomous avoidance of dynamic obstacles that is able to complete dynamic path planning when obstacles appear suddenly. Deng [21] proposed a two-population genetic optimal-time trajectory planning algorithm based on a chaotic local search for the trajectory planning problem of industrial robots aiming at the shortest running time. It has been verified that the proposed algorithm enables smooth and time-optimal trajectories of the robot end effector. However, a multi-degree-of-freedom robotic arm was not considered. Vandeweghe [22] proposed a sampling-based path planning algorithm that extends the rapid-expanding random tree (RRT) algorithm to cope with goals specified in a subspace of the manipulator's configuration space, so that solutions to high-dimensional manipulation problems can be generated efficiently. An improved RRT algorithm based on target bias and an expansion point selection mechanism was proposed by Wei-Hua Bai et al. [23], and the results show that the improved algorithm can guide the tree growth direction, avoid falling into local minima while improving the convergence speed of the algorithm, and improve the efficiency of motion planning of the robotic arm in simulation. However, the feasibility of the algorithm in a moving-obstacle environment was not considered. In response to the target unreachability and local minimum problems of the traditional artificial potential field (APF) method in the obstacle-avoidance path planning of robotic arms, Hou [24] proposed an improved APF algorithm that introduces the hopping point search algorithm to find the optimal hopping point as the next iteration point and, at the same time, sets forced neighbors as virtual target points to guide the robotic arm out of dangerous areas. However, the effectiveness of the algorithm in three dimensions is not considered. Khan et al. [25] proposed a model-free control framework based on a weighted Jacobi random-tree intelligent algorithm for the path planning of rigid-flexible robotic arms that is robust enough to handle uncertainties in the robotic arm and make the computation of path planning more efficient. However, this model-free control framework does not consider the working space of the robotic arm. Tie Zhang et al. [26] proposed a practical time-optimal smooth trajectory planning algorithm and applied it to a robot manipulator arm. In addition, the proposed algorithm utilizes an input-shaping algorithm instead of adding acceleration constraints to the trajectory optimization model to achieve a smooth trajectory. Experimental results on a six-degree-of-freedom industrial robot show the effectiveness of the proposed algorithm. However, the case of local minima is not considered. Wang Lei et al. [27] proposed a new algorithm for solving the trajectory planning of robots, especially robot manipulator arms, based on heuristic optimization algorithms: the Trajectory-Planning Beetle Swarm Optimization (TPBSO) algorithm.
Two specific robotic-arm trajectory planning problems, namely point-to-point planning and fixed-geometric-path planning, are presented as practical applications of the algorithm, both with low computational complexity.
In summary, the above-mentioned scholars have studied the use of various intelligent algorithms for trajectory planning instead of traditional mathematical calculation methods. Different intelligent algorithms have different strengths and weaknesses and are constantly evolving to suit different scenarios. However, no trajectory planning algorithm applicable to multi-degree-of-freedom robotic arms in high-dimensional spaces has been considered comprehensively. Therefore, based on the research of the group, this paper proposes a trajectory planning and path optimization method based on an improved RRT algorithm in the reachable working space of the drilling arm, with the aim of improving the intelligence of roadway support. In Section 2, the general scheme for the trajectory planning of the drill arm of the anchor drilling robot is described. In Section 3, the six-degree-of-freedom drilling arm of the drilling and anchoring robot is analyzed according to the modified DH (M-DH) solution method, the overall link coordinate system of the drilling arm is established, and the M-DH parameters of the drilling arm are finally determined according to the link coordinate system. A Monte Carlo random sampling method is used to calculate the reachable working space of the drill arm, and subsequent planning of the drill arm trajectory is carried out within the reachable working space. In Section 4, the principles and procedures of the RRT algorithm, the artificial potential field method, and cubic B-spline curve fitting are described. In Section 5, simulation experiments are conducted in 2D and 3D maps to analyze and compare the difference in time consumption and path length between the RRT algorithm and the improved RRT algorithm. The results show that the improved RRT algorithm can effectively optimize the base path, improve path planning efficiency, enhance path smoothing, and shorten the path length. In Section 6, the feasibility and superiority of the improved RRT algorithm are summarized.
Overall Program for the Trajectory Planning of the Anchor Drilling Robot Drill Arm
In this paper, the drilling arm of a drilling anchor robot is used as the object of study. Anchor drilling robots are essentially two six-degree-of-freedom robotic arms integrated into the body of a cantilevered road header. The basic structure of the anchor drilling robot is shown in Figure 1. The cantilevered road header section is mainly used to cut the coal mine roadway, while the six-degree-of-freedom robotic arm section uses the drilling rig as its actuator to support the roadway.
The drilling arm mainly consists of the drilling machine, the swing mechanism, the telescopic mechanism, the cantilever mechanism, the slide seat mechanism, and the slide rail mechanism, as shown in Figure 2.
The working space of the anchor drilling robot is limited by the space in the underground coal mine tunnel, and at the same time, the environment in the working space is complex and variable during the drilling arm support process. Therefore, the trajectory planning process requires that information about the workspace environment and about the feasible domain of the robotic arm be specified as the basis for planning, and the planned path must have real-time collision detection and dynamic planning capabilities to cope with the dynamically changing working environment. Accordingly, the overall program of the drilling arm trajectory planning method of the drilling anchor robot based on the improved RRT algorithm in this paper is shown in Figure 3.
Firstly, the kinematic model of the drilling arm is established, its forward kinematics are derived, and the M-DH parameters are determined; the motion space of the drilling and anchoring module is then solved using the Monte Carlo method and verified to meet the roadway support requirements. Secondly, a trajectory planning algorithm based on rapidly expanding random trees (RRTs), which avoids explicit modeling of the space by performing collision detection on sampled points in the state space, is capable of efficiently solving path planning problems with high-dimensional spaces and complex constraints; it is also suitable for trajectory planning of multi-degree-of-freedom robots in complex and dynamic environments. The introduction of an artificial potential field factor into the algorithm can greatly improve search efficiency. At the same time, the path quality is improved by removing redundant points with a greedy algorithm and smoothing the path with cubic B-spline curve fitting. This results in autonomous trajectory planning for the drilling arm of an underground coal mine anchor drilling robot.
Kinematic Modeling of the Drill Arm for Anchor Drilling Robots
The kinematic modeling of the drilling arm is an important basis and prerequisite for the study of the achievable working space and trajectory planning of the drilling arm. The six-degree-of-freedom drilling arm of the drilling anchor robot was analyzed according to the modified DH (M-DH) solution [28]. Firstly, the drilling arm was reduced to links and joints, and the M-DH joint i was fixed to the i-coordinate system, i.e., the coordinate system was built on the input of the link. The origin and the XYZ axes of each joint's coordinate system were then determined separately according to the principle for establishing a modified DH link coordinate system. Then, the coordinate system of the whole link of the drilling arm was established, and the geometric parameters of the arm were added. The complete link coordinate system obtained is shown in Figure 4.
The M-DH parameters of the drilling arm were finally determined according to the link coordinate system, as shown in Table 1, where a i−1 indicates the length of the connecting rod, α i−1 indicates the torsion angle of the connecting rod, θ i denotes the joint variable, and d i indicates the connecting rod offset.
Analysis of the Positive Kinematics of the Drilling Arm of the Anchor Drilling Robot
The forward (positive) kinematics of the robot involve obtaining the pose of the end effector from the known relative position relationships (joint angles) of the links. The generic homogeneous transformation matrix between adjacent coordinate systems under the modified DH (M-DH) parameters is formed for each joint, and the M-DH parameters in Table 1 are substituted into the respective homogeneous transformation matrices to obtain the 0T1, 1T2, 2T3, 3T4, 4T5, and 5T6 matrices. The total homogeneous transformation matrix of the end coordinate system of the arm with respect to the base coordinate system is obtained by multiplying these homogeneous transformation matrices sequentially. In the resulting matrix, a and o represent the approach vector and the orientation vector, respectively; n represents the normal vector; and p represents the position vector. The unit orthogonal vectors n, o, and a describe the attitude of the drilling rig relative to the base coordinate system of the drill arm, and p describes the position of the drilling rig relative to the base coordinate system of the drill arm. According to the above analysis, the corresponding values of each variable are substituted into Formula (9), and the spatial pose of the drilling rig relative to the base coordinate system of the drilling arm can be obtained. The spatial pose relative to the body coordinate system of the drilling and anchoring robot can be obtained by a coordinate system transformation.
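To make the chained link transforms concrete, a minimal forward-kinematics sketch in Python is given below. It assumes the standard Craig-style M-DH link transform; the numeric M-DH values are placeholders for illustration, not the values of Table 1.

import numpy as np

def mdh_transform(alpha_prev, a_prev, d, theta):
    # Homogeneous transform from frame i-1 to frame i under the modified DH convention.
    ca, sa = np.cos(alpha_prev), np.sin(alpha_prev)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a_prev],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,      0.0, 1.0],
    ])

def forward_kinematics(mdh_table, joint_values):
    # Chain 0T1 * 1T2 * ... * 5T6 to obtain the pose of the drill rig in the arm base frame.
    T = np.eye(4)
    for (alpha_prev, a_prev, d, theta_offset), q in zip(mdh_table, joint_values):
        T = T @ mdh_transform(alpha_prev, a_prev, d, theta_offset + q)
    return T

# Placeholder M-DH table (alpha_{i-1}, a_{i-1}, d_i, theta_i offset), one row per joint;
# these are illustrative values, NOT the Table 1 parameters of the anchor drilling robot.
MDH = [
    (0.0,        0.0, 0.3, 0.0),
    (np.pi / 2,  0.2, 0.0, 0.0),
    (0.0,        0.8, 0.0, 0.0),
    (np.pi / 2,  0.1, 0.6, 0.0),
    (-np.pi / 2, 0.0, 0.0, 0.0),
    (np.pi / 2,  0.0, 0.1, 0.0),
]

if __name__ == "__main__":
    pose = forward_kinematics(MDH, [0.1, -0.4, 0.7, 0.0, 0.3, 0.0])
    print("position p:", pose[:3, 3])           # position vector p
    print("rotation [n o a]:\n", pose[:3, :3])  # normal, orientation, approach vectors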
Reachable Workspace of Drill Arm for Anchor Drilling Robots
The accessible working space of a robot is the overall volume of space swept by its end effector when the robot performs all possible actions. The reachable working space is limited by the geometry of the robot and the mechanical limits on each joint.
The reachable working space of the drilling anchor robot end effector (drill rig) is an important parameter reflecting the performance of the drilling anchor robot support. This paper uses Monte Carlo random sampling to calculate the reachable working space of the drill arm, with each joint's angle being randomly selected within the joint's range. The simulation model of the arm is constructed by first assigning values to the arm-related parameters in the Robotic Toolbox according to the defined kinematic parameters of the arm. Next, 30,000 random joint variables are generated and sequentially substituted into the positive kinematic equations for the solution. Finally, the points representing each reachable position of the end effector (drill rig) are marked in 3D space, creating a dot matrix that represents the reachable working space of the drill arm. The subsequent trajectory planning of the drilling arm will be carried out in the accessible working space. The results of the simulation of the reachable workspace of the drill arm end effector (drill rig) are shown in Figure 5.
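The Monte Carlo workspace computation can be sketched in a few lines. The sketch below is an illustration rather than the Robotic Toolbox script used in the paper; it reuses forward_kinematics() and MDH from the sketch above, and the joint limits are assumed values, not the real limits of the drill arm.

import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES = 30000  # same order of magnitude as the 30,000 random joint vectors in the text

# Placeholder joint limits (radians, or metres for a prismatic joint); NOT the real limits.
JOINT_LIMITS = np.array([
    [-np.pi,     np.pi],
    [-np.pi / 2, np.pi / 2],
    [0.0,        0.5],
    [-np.pi,     np.pi],
    [-np.pi / 2, np.pi / 2],
    [-np.pi,     np.pi],
])

def sample_workspace(n_samples):
    # Draw random joint vectors, push each through the forward kinematics (from the
    # M-DH sketch above), and keep only the end-effector (drill rig) position,
    # producing a point cloud that approximates the reachable workspace.
    points = np.empty((n_samples, 3))
    for k in range(n_samples):
        q = rng.uniform(JOINT_LIMITS[:, 0], JOINT_LIMITS[:, 1])
        points[k] = forward_kinematics(MDH, q)[:3, 3]
    return points

if __name__ == "__main__":
    cloud = sample_workspace(N_SAMPLES)
    print("workspace bounding box:", cloud.min(axis=0), "to", cloud.max(axis=0))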
RRT Algorithm
The traditional path planning algorithms are the artificial potential field method, the fuzzy rule method, the genetic algorithm, the neural network, and the ant colony optimization algorithm, among others. However, each of these methods requires the modeling of obstacles in a defined space, and the computational complexity grows exponentially with the robot's degrees of freedom, so they are not suitable for planning multi-degree-of-freedom robots in complex environments. In the complex, dusty, and poorly illuminated environment of underground coal mines, a path planning algorithm based on rapidly expanding random trees (RRTs) is characterized by its ability to search high-dimensional spaces quickly and efficiently by randomly sampling points in the state space and directing the search to blank regions to find a planned path from the start point to the goal point; it is suitable for solving the path planning of multi-degree-of-freedom robots in complex and dynamic environments, and the algorithm has high coverage and a wide search range. The RRT algorithm generates a random tree through incremental, step-by-step iterations. The RRT algorithm node expansion process is shown in Figure 7, and the specific steps of the RRT algorithm are shown in Figure 8. In the first step, the whole space is initialized, defining parameters such as the initial point, the end point, the number of sampling points, and the step size U between points. In the second step, a random point Xrand is generated in the space. In the third step, the nearest point Xnear to this random point is found in the set of points in the known tree. In the fourth step, the point Xnew is intercepted from Xnear in the direction of the line from Xnear to Xrand at step size U. In the fifth step, it is determined whether there is an obstacle between Xnear and Xnew, and if so, the point is discarded. In the sixth step, the new point is added to the tree collection. Steps two to six are cycled through, and the loop ends when there is a new point within the set neighborhood of the end point.
In summary, the RRT algorithm generates random trees through incremental, step-by-step iterations and uses them to search for paths in high-dimensional spaces, avoiding complex spatial modeling and reducing computational costs. It is therefore suitable for solving the path planning problem of multi-degree-of-freedom robots in complex environments.
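The six steps above translate almost directly into code. Below is a minimal 2D sketch of the basic RRT loop; the circular obstacles, step size U, goal tolerance, and sample budget are illustrative assumptions rather than the settings used in the experiments.

import math
import random

random.seed(1)

OBSTACLES = [((250.0, 250.0), 60.0), ((120.0, 380.0), 40.0)]  # (center, radius), assumed
STEP_U = 20.0        # expansion step size U (assumed value)
GOAL_TOL = 20.0      # neighborhood of the end point that terminates the loop
MAX_SAMPLES = 5000   # maximum number of sampling points

def collision_free(p, q):
    # Crude segment-vs-circle check by sampling intermediate points along the segment.
    for t in [i / 10.0 for i in range(11)]:
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        for (cx, cy), r in OBSTACLES:
            if math.hypot(x - cx, y - cy) <= r:
                return False
    return True

def rrt(start, goal, xmax=500.0, ymax=500.0):
    nodes = [start]
    parent = {start: None}
    for _ in range(MAX_SAMPLES):
        x_rand = (random.uniform(0, xmax), random.uniform(0, ymax))      # step 2
        x_near = min(nodes, key=lambda n: math.dist(n, x_rand))          # step 3
        d = math.dist(x_near, x_rand) or 1e-9
        x_new = (x_near[0] + STEP_U * (x_rand[0] - x_near[0]) / d,
                 x_near[1] + STEP_U * (x_rand[1] - x_near[1]) / d)       # step 4
        if not collision_free(x_near, x_new):                            # step 5
            continue
        nodes.append(x_new)                                              # step 6
        parent[x_new] = x_near
        if math.dist(x_new, goal) <= GOAL_TOL:                           # loop end condition
            path = [x_new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
    return None

if __name__ == "__main__":
    p = rrt((20.0, 480.0), (350.0, 10.0))
    print("path found with", len(p) if p else 0, "waypoints")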
Improved RRT Algorithm
The algorithm flow shows that the RRT algorithm aims to find a path from the start point to the end point, starting from the beginning and continually searching towards the end. However, it has some drawbacks: the paths found are not optimal, and sampling over the whole space is inefficient. These drawbacks are especially obvious in the three-dimensional, spatially complex environment of underground coal mines. The RRT algorithm can obtain a path from the start point to the end point in this complex environment, but the resulting path is not smooth and takes a long time to generate, which leads to low efficiency and poor path quality. In order to solve the above problems, an improved RRT algorithm is proposed.
Artificial Potential Field Method
Firstly, to improve the efficiency of sampling and path generation, an artificial potential field factor is introduced. The potential field map is calculated for the entire known map by treating the target point location as the lowest point of the potential and the obstacles as the highest points of the potential. The gravitational (attractive) potential field is mainly related to the distance between the end effector (drilling rig) and the drill hole at the target point: the greater the distance, the greater the potential energy value of the end effector (drilling rig), and the smaller the distance, the smaller the potential energy value. The gravitational potential takes the standard quadratic form U_att(q) = (1/2) η ρ²(q, q_g), where η is a scale factor and ρ(q, q_g) is the distance between the current state of the end effector (drilling rig) and the target; the corresponding gravitational force is the negative gradient of this potential. The factor that determines the obstacle's repulsive potential field is the distance between the end effector (drilling rig) and the obstacle. When the end effector (drilling rig) has not entered the area of influence of the obstacle, the potential energy value it is subjected to is zero. After it enters the area of influence, the greater the distance to the obstacle, the smaller the potential energy value, and the smaller the distance, the greater the potential energy value. The repulsive potential takes the standard form U_rep(q) = (1/2) k (1/ρ(q, q_0) − 1/ρ_0)² for ρ(q, q_0) ≤ ρ_0 and zero otherwise, where ρ(q, q_0) represents the distance between the object and the obstacle and ρ_0 represents the radius of influence of each obstacle; the corresponding repulsive force is the negative gradient of the repulsive potential field. Based on the gravitational field function and the repulsive field function defined above, a composite field can be obtained for the entire running space; the size of the combined potential field is the sum of the repulsive and gravitational potential fields, U(q) = U_att(q) + U_rep(q), and the combined force applied is F(q) = −∇U(q). The combined forces of gravity and repulsion guide the drill arm closer to the target point. The force analysis of the end effector is shown in Figure 9. Secondly, in order to solve the problem of unsmooth paths and poor quality, a greedy algorithm is introduced to remove redundant points from the paths, and cubic B-splines are used to post-process and smooth the paths.
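A small sketch of these potential-field terms is given below, assuming the standard quadratic gravitational (attractive) potential and the standard inverse-distance repulsive potential suggested by the definitions of η, ρ(q, q_g), ρ(q, q_0), and ρ_0; the gain values are placeholders.

import numpy as np

ETA = 1.0      # gravitational scale factor eta (assumed value)
K_REP = 100.0  # repulsive scale factor (assumed value)
RHO_0 = 50.0   # obstacle radius of influence rho_0 (assumed value)

def attractive(q, q_goal):
    # U_att = 0.5 * eta * rho(q, q_g)^2 and its negative gradient (attractive force).
    diff = np.asarray(q_goal, float) - np.asarray(q, float)
    u = 0.5 * ETA * float(diff @ diff)
    force = ETA * diff            # pulls the end effector toward the target
    return u, force

def repulsive(q, q_obs):
    # U_rep = 0.5 * k * (1/rho - 1/rho_0)^2 inside the influence region, 0 outside.
    diff = np.asarray(q, float) - np.asarray(q_obs, float)
    rho = float(np.linalg.norm(diff))
    if rho >= RHO_0 or rho == 0.0:
        return 0.0, np.zeros_like(diff)
    u = 0.5 * K_REP * (1.0 / rho - 1.0 / RHO_0) ** 2
    # negative gradient of U_rep: pushes the end effector away from the obstacle
    force = K_REP * (1.0 / rho - 1.0 / RHO_0) * (1.0 / rho ** 2) * (diff / rho)
    return u, force

def combined(q, q_goal, obstacle_centres):
    # Total potential and total force: gravitational term plus all repulsive contributions.
    u, f = attractive(q, q_goal)
    for q_obs in obstacle_centres:
        u_r, f_r = repulsive(q, q_obs)
        u, f = u + u_r, f + f_r
    return u, f

if __name__ == "__main__":
    u_tot, f_tot = combined([100.0, 100.0, 100.0], [500.0, 500.0, 500.0],
                            [[250.0, 250.0, 250.0]])
    print("potential:", u_tot, "force:", f_tot)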
Redundant Point Rejection Based on Greedy Algorithm
The basic idea of the greedy algorithm is to develop a mathematical model to describe the problem, divide the problem to be solved into subproblems, solve each subproblem to obtain a locally optimal solution, and combine the locally optimal solutions of the subproblems into one solution of the original problem. The specific steps are as follows: Firstly, the path starts at Xstart. A step size X is set to divide the path into segments according to the step size X. The point X1 is obtained by taking the distance of step X from the starting point. The points Xstart and X1 are connected, and if there is no obstacle between them, the point X1 is discarded. Then, Xstart is connected with X2 and the above judgment operation is repeated. As shown in Figure 10, the path between Xstart and X4 can be optimized as a single segment. If there is an obstacle between the two points, it means that this small section of the path is already the optimal path. Next, using X4 as a starting point, X4 is connected to X5 and the optimization steps continue. Finally, the cycle continues until all of the points into which the path is divided have been processed. The path obtained at this point is shorter and more efficient than the initial path obtained by the RRT algorithm. The removal of the redundant points of the path by the greedy algorithm is shown in Figure 10, where the orange circle represents the obstacle, points 1-7 represent the sampling points, the black line represents the original sampling path, and the blue and red lines represent two paths after removing redundant points from different angles.
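The redundant-point rejection can be written as a short routine that repeatedly extends a straight shortcut from the current anchor point to the farthest collision-free waypoint, as described above. The sketch below is an illustration, reusing collision_free() and rrt() from the earlier RRT sketch.

def prune_path(path, collision_free):
    # Greedy redundant-point removal: from the current anchor, keep extending the
    # straight shortcut while it stays collision-free, then jump to the last reachable point.
    if len(path) < 3:
        return list(path)
    pruned = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = i + 1
        while j + 1 < len(path) and collision_free(path[i], path[j + 1]):
            j += 1
        pruned.append(path[j])
        i = j
    return pruned

if __name__ == "__main__":
    raw = rrt((20.0, 480.0), (350.0, 10.0))       # basic RRT sketch above
    if raw:
        short = prune_path(raw, collision_free)   # collision check from the RRT sketch
        print(len(raw), "waypoints pruned to", len(short))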
Smoothing Paths Based on Cubic B-Splines
B-spline curves are a class of curves developed on the basis of Bezier curves that overcome the inconvenience caused by the global controllability of the Bezier curve. Three planar discrete points determine a quadratic B-spline curve, and four planar discrete points determine a cubic B-spline curve. Bezier curves are irregular curves that require the construction of a mixture of interpolating polynomials between the start and end points; usually, an nth-degree polynomial is defined by n + 1 vertices, given a position vector P i (i = 0, 1, 2, ..., n) of n + 1 points in space, and the coordinates of points on the Bezier parametric curve are obtained by interpolation over these vertices. In the B-spline formulation, P i is the characteristic (control) point of the curve and F i,k (t) is the k-order B-spline basis function, and the cubic B-spline curve is obtained from the cubic basis functions.
The matrix form of the parametric expression for the cubic B-spline curve is used for the fitting. Let the four discrete points be P0, P1, P2, and P3, and let the midpoints be M1 = (1/2)(P0 + P2) and M2 = (1/2)(P1 + P3). The starting point S of the curve lies on the median P1M1 of triangle P0P1P2, at a distance of (1/3)P1M1 from P1. The end point E of the curve lies on the median P2M2 of triangle P1P2P3, at a distance of (1/3)P2M2 from P2. The tangent at the start of the curve is parallel to P0P2, and the tangent at the end is parallel to P1P3. The principle of cubic B-spline fitting is shown in Figure 11.
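For illustration, the segment-wise smoothing can be sketched as follows, assuming the common uniform cubic B-spline basis matrix (1/6)[1 4 1 0; -3 0 3 0; 3 -6 3 0; -1 3 -3 1] for the matrix form referred to above.

import numpy as np

# Standard uniform cubic B-spline basis matrix M, so that each segment is
#   P(t) = [1 t t^2 t^3] * M * [P0 P1 P2 P3]^T,  t in [0, 1]
# (assumed standard form).
M_BSPLINE = (1.0 / 6.0) * np.array([[ 1.0,  4.0,  1.0, 0.0],
                                    [-3.0,  0.0,  3.0, 0.0],
                                    [ 3.0, -6.0,  3.0, 0.0],
                                    [-1.0,  3.0, -3.0, 1.0]])

def bspline_smooth(waypoints, samples_per_segment=20):
    # Return a dense, smooth polyline guided by the control polygon `waypoints`.
    pts = np.asarray(waypoints, dtype=float)
    if len(pts) < 4:
        return pts
    smooth = []
    t = np.linspace(0.0, 1.0, samples_per_segment)
    T = np.stack([np.ones_like(t), t, t**2, t**3], axis=1)  # rows [1 t t^2 t^3]
    for i in range(len(pts) - 3):
        ctrl = pts[i:i + 4]                # P0, P1, P2, P3 of this segment
        smooth.append(T @ M_BSPLINE @ ctrl)
    return np.vstack(smooth)

if __name__ == "__main__":
    zigzag = [(0, 0), (100, 250), (180, 120), (300, 400), (450, 350), (500, 500)]
    curve = bspline_smooth(zigzag)
    print("smoothed polyline has", len(curve), "points")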
Introducing an artificial potential field factor into the map, while optimizing the paths derived from the basic RRT algorithm with the above greedy algorithm and cubic B-splines, can increase the speed of path generation, reduce the length of the paths, and smooth the paths.
The artificial potential field factor and cubic B-spline curve fitting are thus introduced into the RRT algorithm. The next sections present simulations and comparative analyses in the debugging software and on the prototype to verify the effectiveness of the improved algorithm.
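As a rough sketch of how the pieces fit together, the improved planner can be assembled as shown below: the combined potential-field force biases the sampling, and the raw RRT path is then pruned with the greedy routine and smoothed with the cubic B-spline routine. The blending weight and the way the bias is applied are assumptions for illustration, and the functions are reused from the earlier sketches.

import math
import random

def biased_sample(x_near, goal, obstacle_centres, xmax=500.0, ymax=500.0, blend=0.5):
    # Blend a uniform random sample with a step along the combined potential-field force
    # (combined() from the APF sketch, STEP_U from the RRT sketch). In the full method,
    # this biased sample would replace the uniform x_rand inside the RRT loop.
    x_rand = (random.uniform(0.0, xmax), random.uniform(0.0, ymax))
    _, f = combined(x_near, goal, obstacle_centres)
    norm = math.hypot(f[0], f[1]) or 1e-9
    x_field = (x_near[0] + STEP_U * f[0] / norm, x_near[1] + STEP_U * f[1] / norm)
    return ((1.0 - blend) * x_rand[0] + blend * x_field[0],
            (1.0 - blend) * x_rand[1] + blend * x_field[1])

def improved_rrt(start, goal):
    # Basic RRT -> greedy pruning -> cubic B-spline smoothing, chaining the earlier sketches.
    raw = rrt(start, goal)
    if raw is None:
        return None
    pruned = prune_path(raw, collision_free)
    return bspline_smooth(pruned)

if __name__ == "__main__":
    smooth_path = improved_rrt((20.0, 480.0), (350.0, 10.0))
    print("no path found" if smooth_path is None
          else f"smoothed path with {len(smooth_path)} points")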
Comparison of Simulation and Analysis of RRT Algorithm and Improved RRT Algorithm in 3D Space
The working space of the drilling arm of the anchor drilling robot is a three-dimensional space, so it is also necessary to carry out simulation and comparison experiments in a three-dimensional space to verify the effectiveness of the improved algorithm. The 3D map is created in the workspace of the drilling arm, with a map range of 500 mm × 500 mm × 500 mm. Random obstacles are set inside the map, and random sampling points are all within the map range. The coordinates of the start point are (0,0,0), and the coordinates of the target point are (500,500,500). The 3D spatial map is shown in Figure 12.
Firstly, the gravitational and repulsive potential fields are set up in the 3D spatial map, and the combined potential field is synthesized as shown in Figure 13.
A total of 50 experiments in 3D spatial maps were performed for the RRT algorithm and the improved RRT algorithm. Both algorithms take the expansion step as u, and the maximum number of samples taken during the sampling period is x. The coordinates of the start point are (0,0,0), and the coordinates of the target point are (500,500,500). The 3D spatial simulation results are shown in Figure 14.
Figure 15a depicts the initial path generated using the RRT algorithm. From this, it can be observed that the initial path is highly tortuous. Meanwhile, Figure 15b displays the path after optimization using the greedy algorithm. Upon comparison, it is evident that the redundant points within the path have been eliminated, resulting in a significant reduction in the path length. Figure 15c illustrates the path further optimized after the removal of the redundant points by the greedy algorithm. It is noteworthy that, following this enhancement, the inflection points of the path appear smoother. Such a design is intended to prevent potential damage to the end effector due to excessive bends during the path's traversal.
After 50 experiments, the comparison between the RRT algorithm and the improved RRT algorithm in terms of time consumption and path length in 3D maps is shown in Figure 16.
Comparison of Simulation and Analysis of RRT Algorithm and Improved RRT Algorithm in 2D Space
In order to verify the effectiveness and superiority of the improved RRT algorithm, this paper presents simulation comparison experiments on the trajectory planning of a six-degree-of-freedom drilling arm of a drilling anchor robot in a reachable workspace. Firstly, the simulation is carried out on a two-dimensional map; the map is 500 × 500 in size, with random obstacles set up inside it. The random sampling points are all within the map, and the start and target points are customizable. The 2D map is shown in Figure 17.
Firstly, the gravitational and repulsive potential fields are set up in the map and synthesized into a combined potential field; the obstacles are the potential high points and the target point is the potential low point, as shown in Figure 18.
In this paper, 50 random sampling experiments are carried out for the basic RRT and the improved RRT algorithms in 2D maps. Both algorithms take the expansion step as u, the maximum number of samples during the sampling period is x, and the start and target points are customizable. This experiment defines the starting point as (20,480) and the target point as (350,10). The 2D simulation results are shown in Figure 19.
Figure 20a shows the initial path generated using the RRT algorithm in a 2D environment. From it, it is also observed that this initial path exhibits a high degree of curvature. Meanwhile, Figure 20b demonstrates the path after applying the greedy algorithm for optimization. By comparison, we clearly see that the redundant points in the path have been removed, resulting in a significant reduction in the path length. Figure 20c, on the other hand, depicts the path that is further optimized based on the removal of the redundant points by the greedy algorithm. The turning points of the path appear smoother, which is designed to avoid end-effector damage due to excessive turning during path traversal.
As shown in the figures above, it is evident in the 2D and 3D map simulations that the improved RRT algorithm can effectively optimize the base path, improve the path planning efficiency, enhance the path smoothness, and shorten the path length. For the results of the 50 experiments, the path times and lengths of the RRT algorithm and the improved RRT algorithm under the different maps were collected, and the results are shown below.
After 50 experiments, the comparison between the RRT algorithm and the improved RRT algorithm in terms of time consumption and path length in 2D maps is shown in Figure 21. In terms of time consumption, the RRT algorithm has a long sampling time and poor stability, with average trajectory planning times of 114.2303 s (2D) and 4.0277 s (3D). The average path planning times of the improved RRT algorithm are 91.325174 s (2D) and 3.0588 s (3D). The average path planning time improved by 20.05% (2D) and 24.06% (3D). The improved RRT algorithm can thus effectively improve sampling efficiency and reduce the time spent on trajectory planning. In terms of trajectory planning path length, the average path lengths of the basic RRT algorithm are 1207.9428 mm (2D) and 973.3125 mm (3D). The average path lengths after the removal of the redundant points by the greedy algorithm are 1032.1995 mm (2D) and 871.5103 mm (3D). The path lengths after cubic B-spline smoothing are 1008.0407 mm (2D) and 863.3197 mm (3D); the average path lengths were reduced by 16.55% (2D) and 11.30% (3D). It can be clearly seen that path optimization by the greedy algorithm and cubic B-splines is effective and leads to higher-quality paths.
Prototype Platform Experimental Verification
The feasibility and effectiveness of the present improved algorithm were verified using a prototype anchor drilling robot platform. Simulating the process of anchor drilling and robotic anchoring and drilling in underground coal mines, the end effector drilling machine of the drilling arm needs to autonomously plan a drilling path from the starting point to the steel strip hole. The main structure of the platform is described in Figure 22.
To set up the RRT algorithm and improved RRT algorithm trajectory planning experiments, the same starting point was used, with the end point set at the second hole of the steel strip; this was accomplished by writing the algorithms and control programs in the prototype's host computer. The final RRT and improved RRT algorithm trajectories obtained are shown in Figures 23 and 24, respectively.
Meanwhile, 10 validation experiments were conducted on the prototype platform to collect data on the path time and length of the RRT and improved RRT algorithm trajectories, and the results are shown in Figure 25. To set up the RRT algorithm and improved RRT algorithm trajectory planning experiments, set up the same starting point and ending point for the second hole of the steel strip, which can be accomplished by writing algorithms and control programs in the prototype's host computer. The final RRT and improved RRT algorithm trajectories were obtained as shown in Figures 23 and 24, respectively. Figure 23 illustrates the process under the guidance of the RRT algorithm, where the end effector (drilling rig) progressively moves from the preset starting position to the steel tape hole (target position). Figure 23 displays the final, complete path for this process. Figure 24 presents the trajectory when the end effector (drilling rig) moves step by Meanwhile, 10 validation experiments were conducted on the prototype platform to collect data on the path time and length of the RRT and improved RRT algorithm trajectories, and the results are shown in Figure 25. It can be seen that the improved RRT algorithm trajectory is better than the RRT algorithm trajectory in terms of both time and path, which verifies the feasibility and effectiveness of the algorithm in this paper.
Conclusions
Kinematic analysis of the intelligent drilling and anchoring robot is the basis for its optimal design and motion control. A kinematic model of the drilling arm was established from the M-DH parameters and its forward kinematics solution was derived. The Monte Carlo method was then used to compute the workspace of the drilling-and-anchoring module and to verify that it meets the requirements of roadway support.
By introducing an artificial potential field factor, treating the target point location as the lowest point of potential energy and obstacles as the highest point of potential energy, and calculating the potential field map of the whole known map, the average speed of sampling and path generation is improved by 22%.
To address the roughness of the path generated by the basic RRT algorithm, the greedy algorithm and a cubic B-spline are used to optimize the original path: the greedy algorithm removes the redundant points and the cubic B-spline smooths the path, reducing the average path length by 14% and improving the quality of the trajectory, as sketched below.
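The B-spline smoothing step can be prototyped very compactly. The following is a minimal sketch (not the authors' implementation) that fits a cubic B-spline through a pruned 2D waypoint list using SciPy; the waypoint coordinates and the smoothing factor are illustrative assumptions only.

```python
# Minimal sketch of cubic B-spline smoothing of a pruned RRT path (illustrative only).
# `waypoints` stands in for the redundancy-free path returned by the greedy step.
import numpy as np
from scipy.interpolate import splprep, splev

waypoints = np.array([[0, 0], [40, 10], [55, 60], [90, 80], [120, 85]], dtype=float)

# Fit a parametric cubic (k=3) B-spline through the waypoints; s controls smoothing.
tck, u = splprep([waypoints[:, 0], waypoints[:, 1]], k=3, s=0.0)

# Sample the spline densely to obtain a smooth trajectory for the drilling arm.
u_fine = np.linspace(0.0, 1.0, 200)
x_smooth, y_smooth = splev(u_fine, tck)

# Length of the smoothed path (sum of segment lengths).
path_length = np.sum(np.hypot(np.diff(x_smooth), np.diff(y_smooth)))
print(f"smoothed path length: {path_length:.2f} mm")
```

Increasing the smoothing factor s trades path-length reduction against fidelity to the pruned waypoints, which mirrors the trade-off discussed in the simulation results.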
Through the comprehensive subjective and objective analysis of the kinematic modeling of the drilling and anchoring robot and the performance of the improved RRT algorithm, it is concluded that the improved RRT algorithm proposed in this paper for the trajectory planning of the drilling arm of the drilling and anchoring robot has strong robustness and real-time performance. The results show that the algorithm can provide high-quality and reliable information support for the tasks of trajectory planning and control of the drilling and anchoring robot. | 2023-08-30T15:02:25.786Z | 2023-08-27T00:00:00.000 | {
"year": 2023,
"sha1": "5ffe7946ee2ae21d0cf942866f7872deff427906",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-1702/11/9/858/pdf?version=1693186708",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "abd3034c12d4d24ccfe680ceacc358547d9aa7cd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
244075836 | pes2o/s2orc | v3-fos-license | Identifying influential spreaders in complex networks by an improved gravity model
Identification of influential spreaders is still a challenging issue in network science. Therefore, it attracts increasing attention from both computer science and physical societies, and many algorithms to identify influential spreaders have been proposed so far. Degree centrality, as the most widely used neighborhood-based centrality, was introduced into the network world to evaluate the spreading ability of nodes. However, degree centrality always assigns too many nodes with the same value, so it leads to the problem of resolution limitation in distinguishing the real influences of these nodes, which further affects the ranking efficiency of the algorithm. The k-shell decomposition method also faces the same problem. In order to solve the resolution limit problem, we propose a high-resolution index combining both degree centrality and the k-shell decomposition method. Furthermore, based on the proposed index and the well-known gravity law, we propose an improved gravity model to measure the importance of nodes in propagation dynamics. Experiments on ten real networks show that our model outperforms most of the state-of-the-art methods. It has a better performance in terms of ranking performance as measured by the Kendall’s rank correlation, and in terms of ranking efficiency as measured by the monotonicity value.
Results
Algorithms. Firstly, we take the toy network shown in Fig. 1 to illustrate the resolution limit problem for DC and KS. The degree and k-shell values of each node in the toy network are shown in Table 1. Obviously, k(1) = k(8) = k(9) = 1, k(2) = k(3) = 3, k(4) = k(5) = k(6) = 4, k_s(1) = k_s(8) = k_s(9) = 1, k_s(2) = k_s(3) = 2, k_s(4) = k_s(5) = k_s(6) = k_s(7) = 3, where k(i) and k_s(i) are the degree and k-shell value of node i, respectively. DC and KS always assign too many nodes the same value, which leads to the problem of resolution limitation in distinguishing the real influences of these nodes.
A simple solution is to consider both DC and KS, that is, to estimate the influence of node i by k(i) + k_s(i). However, this does not completely solve the problem. Take node 2 and node 3 as an example: compared with node 2, node 3 is closer to the center of the network, so node 3 may be more conducive to propagation. However, we cannot distinguish the two nodes by the method proposed above. Although both node 2 and node 3 are in the 2-shell, node 3 is removed later than node 2; that is, the 2-shell decomposition process includes two stages, with node 2 removed in the first stage and node 3 removed in the second stage. We therefore introduce the stage number at which a node is removed from the network while performing the k-shell decomposition.
Given a network G, during the k-shell decomposition for the k-degree iteration, the total number of stages is q(k), and node i is removed in the p(i)-th stage. The improved k-shell index of node i, denoted by k*_s(i), can be calculated by

k*_s(i) = k_s(i) + p(i) / (max_k q(k) + 1). (1)

The process of k-shell decomposition and the k*_s value of each node in the toy network are shown in Table 2 and Table 3, respectively. Take node 3 as an example: q(1) = 1, q(2) = 2, q(3) = 1, so max_k q(k) = 2 and k*_s(3) = k_s(3) + p(3)/(max_k q(k) + 1) = 2 + 2/(2 + 1) ≈ 2.667. The index combining degree and k-shell of node i, denoted by DK(i), can be defined by

DK(i) = k(i) + k*_s(i). (2)

This index is named the degree k-shell (DK) index. The DK value of each node in the toy network is shown in Table 4. As shown in Table 4, node 2 and node 3 can be distinguished (DC, KS and DC+KS failed), and node 7 can be distinguished from nodes 4-6 (KS failed), so the DK index is a high-resolution index. Furthermore, DK carries both the local and global information of nodes. Inspired by the gravity law, we regard the DK value of a node as its mass and the shortest distance between two nodes in the network as their distance. Hence the influence of node i can be estimated as

DKGM(i) = Σ_{j: 0 < d(i,j) ≤ R} DK(i) DK(j) / d(i, j)², (3)

where d(i, j) is the shortest distance from node i to node j and R is the truncation radius 29. This method is named the DK-based gravity model (DKGM). The algorithmic description of DKGM is provided in Algorithm 1. The result of DKGM with R = 2 for the toy network is shown in Table 5. Take node 3 as an example: the 1-order neighbors of node 3 are nodes 2, 4 and 7, and its 2-order neighbors are nodes 1, 5 and 6, so DKGM(3) follows directly from Eq. (3). From Algorithm 1, calculating the improved k-shell index requires N_{ks1}⟨k⟩ + N_{ks2}⟨k⟩ + ⋯ + N_{ksmax}⟨k⟩ = (N_{ks1} + N_{ks2} + ⋯ + N_{ksmax})⟨k⟩ = N⟨k⟩ operations, so the computational complexity of this part is O(M), where N_{ks1} is the number of 1-shell nodes, ksmax is the maximum k-shell value and ⟨k⟩ is the average degree. The part with the highest computational complexity in our model is computing the R-order neighbors of each node, which needs N⟨k⟩^R operations, so the computational complexity of this part is O(N⟨k⟩^R). Therefore, the computational complexity of our model is O(N⟨k⟩^R). Fortunately, since most real networks have the small-world property, R is usually set to 2 or 3 to obtain the optimal result, so the computational complexity of our model in real-life applications is generally not more than O(N⟨k⟩³), where ⟨k⟩ ≪ N.
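The description above maps directly onto a short implementation. The following is a minimal sketch (assuming networkx and an unweighted, undirected graph; the function and variable names are my own choices and not the paper's Algorithm 1) of the stage-tracked k-shell index, the DK index and the truncated gravity sum of Eq. (3).

```python
import networkx as nx

def improved_kshell(G):
    """k*_s(i) = k_s(i) + p(i)/(max_k q(k) + 1), tracking the removal stage p(i)."""
    H = G.copy()
    kshell, stage, stages_per_k = {}, {}, {}
    k = 0
    while H.number_of_nodes() > 0:
        k += 1
        p = 0
        # repeatedly peel nodes of current degree <= k; each pass is one "stage"
        while True:
            peel = [n for n, d in H.degree() if d <= k]
            if not peel:
                break
            p += 1
            for n in peel:
                kshell[n], stage[n] = k, p
            H.remove_nodes_from(peel)
        if p:
            stages_per_k[k] = p          # q(k): number of stages in the k-shell
    q_max = max(stages_per_k.values())
    return {n: kshell[n] + stage[n] / (q_max + 1) for n in G}

def dkgm(G, R=2):
    """DKGM(i) = sum over 0 < d(i,j) <= R of DK(i) * DK(j) / d(i,j)**2."""
    ks_star = improved_kshell(G)
    DK = {n: G.degree(n) + ks_star[n] for n in G}
    scores = {}
    for i in G:
        dist = nx.single_source_shortest_path_length(G, i, cutoff=R)
        scores[i] = sum(DK[i] * DK[j] / d ** 2 for j, d in dist.items() if d > 0)
    return scores
```

For the toy network of Fig. 1, calling dkgm(G, R=2) yields the kind of ranking reported in Table 5; the exact values depend on the toy network's edge list, which is not reproduced here.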
Data description.
In this paper, we use ten real networks from different fields to test the performance of DKGM, including four social networks (PB 41, Facebook 42, WV 43 and Sex 44), two collaboration networks (Jazz 45 and NS 46), one transportation network (USAir 47), one communication network (Email 48), one infrastructure network (Power 49) and one technological network (Router 50). These networks' topological features are shown in Table 6, including the number of nodes, denoted by N, the number of links, denoted by M, the average degree, denoted by ⟨k⟩, the average distance, denoted by ⟨d⟩, the clustering coefficient 49, denoted by C, the assortative coefficient 51, denoted by r, the degree heterogeneity 52, denoted by H, and the epidemic threshold 53 of the SIR model 54, denoted by β_c.
Empirical results. In this paper, we apply the famous SIR model 54 to compare the influence rankings produced by the algorithms with those produced by simulations. Given the network and the infection rate β, 1000 independent realizations are performed and averaged in order to obtain the standard ranking of the influences of nodes (see details about the SIR model in Methods); in each realization, every node is selected as the seed once. The accuracy of an algorithm is measured by Kendall's Tau (τ) 55 between the standard ranking and the ranking produced by the algorithm (see details about Kendall's Tau in Methods). The larger the value of τ, the better the performance of the algorithm. The accuracies of the algorithms for β = β_c are shown in Table 7, and the accuracies for different β values are shown in Fig. 2.
As shown in Table 7, compared with the five classic methods (DC, KS, H-index, BC, CC), GC, LGM and DKGM are very competitive. Especially in the NS, Power and Router networks, the advantage of the gravity-based methods is extremely obvious. It can be seen from Table 6 that NS, Power and Router are extremely sparse (with very few links). In these tree-like networks there are very few cycles, that is, most paths have no alternative paths, so propagation is very difficult. In this case, neither the neighborhood-based methods (DC, KS and H-index) nor the path-based methods (BC and CC) can work well. Furthermore, compared with GC and LGM, DKGM always performs best.

Table 7. The algorithms' accuracies measured by Kendall's Tau for β = β_c. The parameters in the related algorithms (i.e., LGM and DKGM) are adjusted to their optimal values according to the largest τ. The best algorithm for each network is emphasized in bold.

As shown in Figure 2, DKGM also performs very competitively compared with the seven benchmark algorithms for different β not too far from β_c. The optimal truncation radius R* of LGM can be estimated by

R* ≈ ⟨d⟩/2 (4)

at β = β_c 29. As shown in Figure 3, DKGM still keeps this property. Furthermore, the accuracies of GC, LGM with R = ⟨d⟩/2 and DKGM with R = ⟨d⟩/2 for β = β_c are compared in Table 8. As shown in Table 8, although the truncation radius is set heuristically, DKGM still performs best among the three algorithms.

Figure 3. The relation between R* of DKGM and ⟨d⟩ for β = β_c. Ten circles represent ten real networks and the slope of the blue line is 1/2. The black circle is the Power network. Although the optimal truncation radius R* = 6 in the Power network is slightly different from what Eq. 4 predicts (i.e., R = 9), the algorithmic accuracy at R = 9 (τ = 0.7366) is very close to the best accuracy at R* = 6 (τ = 0.7575).

Finally, we apply the monotonicity 56, denoted by M_r, to measure the ranking efficiency of the algorithms. This metric measures the uniqueness of the elements in a ranking list and can be computed by

M_r = [1 − Σ_r N_t(r)(N_t(r) − 1) / (N(N − 1))]², (5)

where L is the ranking list and N_t(r) is the number of ties with the same rank r.
The monotonicity of the node ranking lists produced by the different algorithms is shown in Table 9. As shown in Table 9, except for the PB network, DKGM always performs best among the eight algorithms. In the PB network, the reason why GC narrowly defeats DKGM is that DKGM considers only 1-order neighbors while GC considers 3-order neighbors. The results reported in Table 9 demonstrate that DKGM is a remarkably high-resolution algorithm.
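The monotonicity metric is straightforward to compute from a score dictionary. Below is a minimal sketch assuming the tie-counting form given above and assuming that nodes with identical centrality values share a rank; the function name and input format are my own choices, not the paper's.

```python
# Minimal sketch of the monotonicity metric: `scores` is {node: centrality value};
# N_t(r) is the number of nodes sharing the same (tied) value, i.e. the same rank.
from collections import Counter

def monotonicity(scores):
    N = len(scores)
    tie_sizes = Counter(scores.values()).values()
    penalty = sum(n_t * (n_t - 1) for n_t in tie_sizes) / (N * (N - 1))
    return (1.0 - penalty) ** 2
```

A value of 1 means every node receives a distinct score, while 0 means all nodes are tied, which is why high-resolution indices such as DK score close to 1.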
Discussion
Degree centrality and the k-shell decomposition method, as the most widely used neighborhood-based centralities, were introduced to the network world to evaluate the spreading ability of nodes. However, the two methods always assign too many nodes the same value, which leads to the problem of resolution limitation in distinguishing the real influences of these nodes. To solve this problem, we combine the two methods (i.e., DC and KS) and propose a high-resolution index (DK) that simultaneously reflects the local and global information of nodes. Furthermore, we propose an improved gravity model (DKGM) that combines the DK index and the gravity law to evaluate the spreading ability of nodes. The empirical results show that DKGM performs best in comparison with seven well-known benchmark methods and that DKGM is a remarkably high-resolution algorithm.
A potential disadvantage of DKGM is how to set the truncation radius R. Fortunately, as shown in Fig. 3, we find an empirical relation between R* and the average distance ⟨d⟩, so we can use this relation (see Eq. 4) to approximate R*. In addition, since most real networks have the small-world property 49,57, R* should be small and can generally be set to 2 or 3.
Some open problems remain for future work. First of all, the original law of gravity is symmetric, but due to the different roles of different nodes or the inherent asymmetry of the dynamics 58,59, the influence of node i on node j may differ from that of node j on node i, which may call for an asymmetric form of the gravity law. Secondly, since the heterogeneity of the links greatly changes their importance 60, how to use the gravity model in weighted networks is still an open issue. We will also develop other, better methods based on the gravity law to identify influential spreaders.
Methods
Benchmark centralities. We denote an undirected and unweighted network as G = <V, E>, where V and E are the sets of nodes and links, respectively, and denote |V| = N and |E| = M, so the network has N nodes and M links. The adjacency matrix of G is represented by A = (a_ij)_{N×N}; if there is a link from node i to node j, a_ij = 1, otherwise a_ij = 0. DC 17 of node i can be calculated by DC(i) = k(i)/(N − 1), where k(i) = Σ_j a_ij. KS 18 works by iterative decomposition of the network into different shells. The first step of KS is to remove all nodes in the network whose degree is k = 1; it then removes nodes whose degree becomes k ≤ 1 after this round of removal, because removal may reduce the degree values of the remaining nodes. When there are no nodes left in the network with degree k ≤ 1, all nodes that have been removed in this step constitute the 1-shell and their k-shell values are equal to one. This process is then repeated to obtain the 2-shell, the 3-shell, and so on. Finally, all nodes are divided into different shells and the k-shell value of each node is obtained.
The H-index 19 of node i, represented by H(i), is defined as the maximal integer such that node i has at least H(i) neighbors whose degrees are all no less than H(i).
BC 20 of node i can be calculated by BC(i) = Σ_{s ≠ i ≠ t} g_st(i)/g_st, where g_st is the number of shortest paths from node s to node t, and g_st(i) is the number of shortest paths from node s to node t that pass through node i. CC 21 of node i can be calculated by CC(i) = (N − 1)/Σ_{j ≠ i} d(i, j). GC 28 of node i can be calculated by GC(i) = Σ_{j ∈ ψ_i} k(i)k(j)/d(i, j)², where ψ_i is the neighborhood set whose distance to node i is less than or equal to 3.

The SIR model 54 initially considers all nodes as susceptible (S) except the source node, which is in the infected (I) state. Each infected node can infect its susceptible neighbors with probability β. In each subsequent step, infected nodes change their states to recovered (R) with probability λ; a node in the recovered state never again participates in the propagation dynamics. The propagation process continues until there are no nodes in the infected state. The influence of node i can be estimated by

F(i) = N_r/N, (11)

where N_r is the number of recovered nodes when the dynamic process reaches the steady state. λ is set to 1 for simplicity, and the corresponding epidemic threshold 53 is β_c = ⟨k⟩/(⟨k²⟩ − ⟨k⟩), where ⟨k²⟩ is the second-order moment of the degree distribution.
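As a concrete illustration of how the standard ranking is produced, the following is a minimal sketch of one discrete-time SIR realization with λ = 1 (every infected node recovers after one step), plus the threshold estimate used above; the function names and the use of networkx are my own choices, and averaging over many realizations and seeds, as described in the text, is left to the caller.

```python
import random
import networkx as nx

def sir_spread(G, seed, beta, rng=random):
    """One discrete-time SIR realization (recovery probability 1) started from `seed`.
    Returns F(i) = N_r / N, the final fraction of recovered nodes."""
    infected = {seed}
    recovered = set()
    while infected:
        newly_infected = set()
        for node in infected:
            for nbr in G.neighbors(node):
                if nbr not in infected and nbr not in recovered and rng.random() < beta:
                    newly_infected.add(nbr)
        recovered |= infected            # lambda = 1: all currently infected nodes recover
        infected = newly_infected - recovered
    return len(recovered) / G.number_of_nodes()

def average_influence(G, seed, beta, runs=1000):
    """Average F(i) over independent realizations to estimate the influence of `seed`."""
    return sum(sir_spread(G, seed, beta) for _ in range(runs)) / runs

def epidemic_threshold(G):
    """beta_c = <k> / (<k^2> - <k>), the uncorrelated-network approximation used above."""
    degs = [d for _, d in G.degree()]
    k1 = sum(degs) / len(degs)
    k2 = sum(d * d for d in degs) / len(degs)
    return k1 / (k2 - k1)
```

Running average_influence for every node at beta close to epidemic_threshold(G) yields the simulation-based standard ranking against which the centrality algorithms are compared.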
The Kendall's Tau. Kendall's Tau 55 is a measure of the strength of correlation between two sequences. Let X = (x_1, x_2, ..., x_N) and Y = (y_1, y_2, ..., y_N) be two sequences with N elements. For any pair of two-tuples (x_i, y_i) and (x_j, y_j) (i ≠ j), if x_i > x_j and y_i > y_j, or x_i < x_j and y_i < y_j, the pair is concordant. If x_i > x_j and y_i < y_j, or x_i < x_j and y_i > y_j, the pair is discordant. If x_i = x_j or y_i = y_j, the pair is neither concordant nor discordant.
Kendall's Tau of X and Y can be defined as τ = 2(n_+ − n_−) / (N(N − 1)), where n_+ is the number of concordant pairs and n_− is the number of discordant pairs.
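In practice this correlation can be computed directly from two score lists. The sketch below uses scipy.stats.kendalltau, which handles ties with a tau-b correction (a slight difference from the plain definition above), to compare an algorithm's scores with the SIR-based standard influences; the variable names are illustrative.

```python
from scipy.stats import kendalltau

# sir_influence and dkgm_score are assumed to map each node to its SIR-estimated
# influence and its DKGM score, respectively (e.g., from the sketches above).
def ranking_accuracy(sir_influence, dkgm_score):
    nodes = sorted(sir_influence)                  # fix a common node order
    x = [sir_influence[n] for n in nodes]
    y = [dkgm_score[n] for n in nodes]
    tau, p_value = kendalltau(x, y)
    return tau
```

A tau value close to 1 indicates that the algorithm's ranking reproduces the simulated spreading ranking almost perfectly.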
Data availability
All relevant data are available at https://github.com/MLIF/Network-Data. | 2021-11-14T06:16:56.717Z | 2021-11-12T00:00:00.000 | {
"year": 2021,
"sha1": "f30dd8d9556ebea7eb31e7311f754fb52c6445e0",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-01218-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c395a6d5f1eed57b9a9a8d7eba416a27a2eb117",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
46388386 | pes2o/s2orc | v3-fos-license | Morality and Biology in the Spanish Civil War: Psychiatrists, Revolution and Women Prisoners in Málaga
The psychiatric study of women prisoners in the city of Málaga during the Spanish Civil War provides a starting point for a two-part analysis of the gendered tension between biology and morality. First, the relationship of organic psychiatry and bio-typologies to, in turn, liberalism and neo-Thomist Catholicism is discussed. The supposedly ‘biological’ roots of conditions such as hysteria and their link to women's revolutionary behaviour are examined. Second, prison records are used to examine the material conditions of women in the city and the gendered construction of their moral culpability during the revolution. Both medical science and Catholic doctrine could be exploited in declaring the indissolubility of gendered morality.
tionary violence in Spain, and the war itself, could be explained by a kind of pathological intensi®cation of female nature'. 3 The psychological epidemiology explored in this article was framed by these gendered associations between the mind, human anatomy and discipline. The pathological analysis was taken up and modi®ed in a Roman Catholic (and often contradictory) direction and propagated politically by Francoists during and after the con¯ict, as the quotation at the beginning about the`materialisation' of woman, from a work by two racial hygiene reformers, one man and one woman, from the Pen Äa-Castillo Sanatorium in Santander, suggests. 4 Gender continued to be expressed and experienced through medicalisation, as well as sancti®cation, during and after the war. Pathological and religious metaphors had basic epistemological functions. 5 The democracy of the prewar Spanish Second Republic (1931±39), which granted women the vote, legalised divorce and civil marriage and made birth control respectable, 6 was led by men accused of lacking a genetically rooted instinct of honour, dignity and decency, who worked to achieve a rampant disordering of sexual±social relations. In the ®rst elections when women had the right to vote (1933), they were warned that a vote for the left would put their faith, their children's education and the tranquility of their homes in danger, and leave their honour in shreds'. 7 Democratic Spain had become`decadent' and`barren', it was claimed, a`sterile womb', which could be regenerated only through social militarism and the implantation of the`military home': the morality of war. 8 After the Civil War, the Republic's defeat was explained by the`hystericisation' of its people. Meanwhile the victors were positively stimulated by a`paranoia of persecution' which was successfully converted (or projected) by`patriotic sentiments' and à sense of community' into a`delirium of imperial grandeur'. 9 These gendered categories found echoes across the political spectrum. According to the leading liberal Republican psychiatrist of the time,`primitive reactions' tended to accom-incarceration of Republican women. The frequent transgressions of the inscribed separate spheres model of the postwar era were more widely seen as a form of violation than they had been before the war.
During the Spanish Civil War a department of psychological investigations was created in the Francoist or`Nationalist' zone to ®nd`the bio-psychological roots of Marxism'. (The category`Marxist' has to be understood broadly: a Ma Âlaga doctor described one Republican militiaman as having`a very Marxist smile'. 14 ) This psychological`bureau' was established by the Nationalist Inspectorate of Concentration Camps, initially in Burgos, General Franco's wartime headquarters, later moving to Madrid following his victory in April 1939, to study Republican political prisoners in the hands of the National forces of salvation,`in very favourable conditions which perhaps will never be repeated'. 15 The studies would combine psychological tests as used in psychiatric clinics, forensic examinations, and racial anthropology. According to the director of the studies, the military psychiatrist and race hygienist Antonio Vallejo-Na Âgera, 16 research on`constitutional psychopathic inferiority', carried out since around the turn of the century (largely in Germany), into connecting heredity to criminality, could be useful in politics and the social sciences, perhaps even in`improving the human condition'. Several problems might thereby be elucidated: the mass and individual psychological reaction to imprisonment, the possibilities of`conversions' and an ideological or general affective change in prisoners, and the relationship between`the biopsychological qualities of the subject and democratic±communist political fanaticism'. War made these studies possible, but the theoretical basis was not invented during the war. Bio-criminological ideas had a national and international heritage, discussed and published in both liberal and anti-Republican circles in the face of the anti-clerical violence, rapid politicisation and social and sexual demands of the Republic.
The subject matter of these studies was principally composed of groups of male political prisoners:`Spanish Republicans' (`the agents and propagandists of Marxism'), Basque`separatists' (of unique interest because they`unite political and religious fanaticism'), Catalan`Marxists' (within whom were found both Marxist and`anti-Spanish' fanaticism), and foreign Republican prisoners (volunteers of the 14 Archivo General de la Guerra Civil Espan Äola (AGCE), Seccio Ân Politico-Social (PS), Extremadura, carpeta 24, relato, 28 January 1937. International Brigades) from Latin America, Britain, Portugal and the United States. 17 (International volunteers were of particular interest to the Gestapo and Nazi doctors who visited the Francoist camps and prisons and carried out simultaneous experiments). The majority in all of these groups were`degenerate', it was found. The materialism of Marxism was attractive to`mental de®cients'. Finally, a further study into the`psyche of Marxist fanaticism' of ®fty of the 900 women held in the provincial prison of the occupied Andalucõ Âan port city of Ma Âlaga completed the series of investigations. 18 Some of the ®ndings of this research were published in Vallejo's book, Madness and War, a`psychopathology of the Spanish war', which appeared in 1939. The female Marxist delinquents of Ma Âlaga, with whom this article is primarily concerned, were tried, much like their Parisian forebears of 1871, by Councils of War for`horri®c murders, burnings and sackings' and`egging-on' their menfolk to all kinds of disorders. Thirty-three of them had been sentenced to the death penalty by a military court for crimes against the Patria, although the`magnanimous' General Franco commuted the death sentences to life imprisonment (30 years) for those who took part in the experiments. 19 Ten others had life prison sentences, three had 20-year terms and four faced sentences of twelve years. In fact, many hundreds of men and women, throughout Spain, had their sentences of death commuted after months of painful uncertainty. In many cases this leniency' was possible because the original sentence had been questionable or obviously unwarranted. The surviving records of the Ma Âlaga women's prison bear this out.
Inherited' national and bodily`degeneration' in the period from the 1890s to the 1930s coincided with an accelerated development of medical specialisms like eugenics, neurology, endocrinology, gynaecology, paediatrics, forensics, criminology and psychiatry. 20 At the same time, women and particularly the urban poor were making increasing demands on the state. Doctors and criminologists contributed to theses about the moral and debilitating dangers to the physical and mental `caste' or`stock' presented by urban life. 21 In psychiatric theory and practice, degenerationism was associated with the avowedly anticlerical French school of Jean-Martin Charcot in the 1870s and 80s, 22 and the concept of hysteria in Spain had been captured by liberal and progressive theorists and practitioners by the turn of the century who were often critical of the deleterious effects of religion: Hysterical women are very commonly beatas [devout women or lay sisters]. In their confessor they do not seek a pardon for their sins, but one more person upon whom to vent their complaints.' 23 Overtly Catholic doctors were occasionally less ready to accept openly the category of hysteria, precisely because it tended to be negatively associated with religious women and their`mysticism'. 24 One essay on`hygiene of the intelligence' by the head of the Spanish Central Laboratory of Legal Medicine and director of the Spanish Society of Hygiene, published in 1898, sought a national regeneration in the relationship between the physical and the moral in men and women. The weakest nations, he argued, would be those where women had most autonomy:`degradation' was produced bỳ functionally mutilated' and`feminised' men and`masculine women', where promiscuity compensated for an absence of masculine personality. The blend of biology and temperament which was the human constitution could not be cast aside and forgotten. Woman had`to be female [hembra] all her life, or at least during all of her sexual life . . . sweet, patient, resigned, full of abnegation . . . proper virtues of her sex'. The moral antidote represented by woman's maternity was partially in her care for the education of future generations, not in renouncing marriage in order to become independent, manly and knowledgeable. More essentially, her infecundity was`a crime against nature', a kind of anaesthetisation of femininity, which could only with great reluctance be pardoned by her taking religious vows. This psychosexual vision, curiously admired by the nineteenth-century high priest of Spanish Catholic nationalism, Marcelino Mene Ândez y Pelayo, was argued not from the position of orthodox Catholicism but with the aid of Rousseau and Charcot's theory of hysteria and psychological`excitation'. 25 During the Spanish Civil War, although both men and women would be tried under the rubric of`military rebellion', one of the principal political charges against women was`excitacio Ân a la rebelio Ân' or`excitacio Ân militar' (Article 240), coupling a psychological concept with 21 crime, whereas men tended to be accused of the less emotional and more activist crime of`agitation'. Women were construed as both being more prone tò excitation' and of constituting the metaphorical`psychic centres' which`excited' the whole body of the nation. 26 Opposition to the liberal concept of morality and biology was formally located in militarist±Catholic nationalism, although, again, important similarities with`freethinking' notions revolved around gender. 
The military±religious kinship was supposedly traceable to myths of the Reconquista, but was not static. The glorious Africanista cultural codes of heroism, bravery, grandeur, the`gentlemanly' protection of women which`lifted them out of vice', and the nationalist crusading mentality shaped the views of Spanish military mental doctors who were formed in the shadow of the 1914 war and German organicism. This military group was associated with the organic, histological Madrid school of psychiatry. The Madrid school had, since around the time of the First World War, gained the upper hand from Barcelona, where theory and practice, exempli®ed in the work of Emilio Mira Lo Âpez (1896±1964), an early Spanish exponent of psychoanalysis, had, by contrast, a strong psychological direction. 27 The Africanista mentality rhetorically interwove militarism, patriotism and God in a spiritual world-view to extinguish the primitive instincts of luxury and materialism, germs attacking the morality of the nation, severely at odds with the dictates of austerity and sacri®ce. 28 Though this credo ®tted well with orthodox Catholicism, it was not peculiar to Spanish military culture. Germanic infusions, from the late nineteenth century, contributed to making militarism a relatively modernising force within Spanish nationalism. To pro-German militarists, the Great War had produced a misery and disgrace of patriotic spirit across Europe that was related to mass psychoses and mental disequilibrium. Women had been made`slaves of the factory', free rein had been granted to personal, endogenous reactions (like hysteria) and psychopathic cruelty had been justi®ed ideologically by`the doctrine of social crime'. 29 While criminologists were cradled by Italian positivism, virtually all Spanish psychiatrists of the ®rst half of the twentieth century, and not least military psychiatrists, were steeped in German organic psychiatry (many eventually ®nding a circuitous way to Freudian thought via biology as a result). 30 Distinctions between racial and mental hygiene and between neurology (neurosis) and psychiatry 26 (psychosis) were to be complicated and conditioned by the German experience of trench warfare, and seemed con®rmed by Spain's experience of war in Morocco in the 1920s. 31 In Spain, as elsewhere, human classi®cation was pursued, during the pre-Civil War decades, through physical and mental`bio-types'. The typology and methodology formulated by the German organic psychiatrist and professor at the University of Marburg, Ernst Kretschmer (1888±1964), outlined in his in¯uential post-First World War study, Physique and Character, was the principal theory employed in the Civil War tests. 32 Kretschmer's immanently gendered theory was a treatise on the relationship of bodily constitution to character. Dispositions towards pathological states and behaviour were bound up with an individual's`whole inherited groundwork' ± the cellular and humoral elements of the body ± though Kretschmer also insisted on the environmental adaptability of this relationship. 33 He began by famously identifying three basic body types and associating them with particular temperaments: ®rst, the`Pykniker', or`globular', bodily ®gure of man and woman, whom tests showed to be prone to an extrovert temperament and, therefore, predisposed to cyclothymic disorders like manic-depression; second, thè spindle-shaped' man or woman or`leptosome'; and third, the so-called`athletic' body. 
Both of these latter types were linked to an introverted temperament, and therefore, according to Kretschmer, to schizophrenic conditions and a`coldness of the affective life'. 34 Kretschmer's thesis was most rapidly taken up in Spain by the psychiatrist, Jose  Marõ Âa Sacrista Ân, director of the women's section of the private sanatorium-asylum of San Jose  at Ciempozuelos, south of Madrid, in the 1920s, where Vallejo-Na Âgera was later director. 35 Unlike his better-known contemporary Emil Kraepelin, Professor of Psychiatry in Munich, who believed that political doctrines in¯uenced mental sickness and who saw crime as a social disease, Kretschmer did not identify left-wing extremism as a particular psychological problem, though his work was later exploited by the Nazis. 36 Building on breakthroughs made in the decade prior to the Great War, Kretschmer stressed endocrine formation, seeing the glandular complex and hormonal secretions as the ultimate stronghold of the chemistry of the body. 37 This is of signi®cance when looking at Spain, because endocrinology had been a particularly advanced specialism by the 1930s thanks to the work of Gregorio Maran Äo Ân (1887±1960). Maran Äo Ân was reviled by orthodox Catholic doctors for his political liberalism rather than his scienti®c argumentation. Meanwhile, somè spiritualist' fascists continued to resist explaining human acts scienti®cally`through chemical reactions', and emphasised human`values'. 38 From his protracted study of endogenous constitutional and endocrine elements and of exogenous`cosmic' elements, Maran Äo Ân theorised that masculinity and femininity were not opposed entities, but successive degrees in the development of a single evolution. 39 (In the 1920s Maran Äo Ân controversially pursued organotherapy, including testicular grafts, for`intersexual conditions'. He was the only signi®cant Spanish doctor to meet Sigmund Freud.) His theory admitted a phase of undifferentiated sex as the normal starting point for all human beings. This was controversial, and during the Civil War Maran Äo Ân was accused in Ma Âlaga of having republicanised Spanish women'. 40 The importance of the theory here is that evolution`from the feminine to the masculine' was traced not only anatomically but also psychologically. 41 Women's physical and mental evolution was`arrested' at the threshold of puberty when a corresponding maternal development was acquired. Feminine qualities of hypersensitivity, tenderness, spirit of self-sacri®ce, and`a conservative tendency', were propitious for maternity, the biological and social end par excellence of the female sex. Maternal woman's eroticism was blunted and her libido less intense, because in women these were used simply as means to reproduction and`not as terminal objectives as in men'. Only with the decline of ovulation is`progress' resumed. The female prototype of`voluptuousness' ± dark, corpulent, slightly hirsute, with a passionate temperament (`like Carmen') ± and thè polyandrous' type (who has frequent sexual partners) were`commonly sterile'. The eugenic and criminological implications of such treatises on endogenous constitution and temperament were, as those of the related Italian positivist criminology had been in the nineteenth century, refracted through Spaniards' ideology and theology, rather than ever wholly rejected. 42 While historically the Civil War projects an image of well-de®ned divisions sundering virtually every social sphere, the reality was more complex. 
There existed a level of commonality in several political camps in such ®elds as psychiatry and criminology that occasionally stretched even to the political extremes.`Anti-positivist' medical doctors who de®ned themselves primarily as Catholics were fearful of`socially destructive',`materialist' ideas which dismissed the hierarchy of body and soul. But those who conducted the Civil War research described here, like some of their Republican counterparts, hovered, theoretically speaking, between`treatment' for constitutionally determined or`materialist' deviancy and punishment for immorality and evil. 43 The post-First World War location of unsocial behaviour in alienation, mental abnormality and anomalous weaknesses of the constitution were famously corroborated by the Belgian criminologist Louis Vervaeck, who concluded that`biological individuality' was the preponderant factor in criminal aetiology, and others, such as the`post-positivist' Italian criminologist, Salvador Ottolenghi, who worked on the endocrine dysfunctions which were part of the`constitutional sickness' leading to criminality. 44 Ideas of anthropometric calibration and`curing'`diseased' delinquents, in preference to a policy of`vengeful coercion', cross-fertilised with similar work in Spain which traversed political boundaries. 45 Republican penal reformers employed and developed some of this in the early 1930s in legislating for`social defence', 46 and the Republican Anti-Vagrancy law (Ley de Vagos y Maleantes), of 4 August 1933, was repressively employed into the Francoist era, against, for example, hungry and `degenerate' country women who stole fruit from the ®elds, though this legislation was supplemented under Franco by a barrage of other measures of social control. 47 The broad issue of eugenics, so fatal in the German postwar experience, was central to political debate in the Spain of the 1920s and 30s between Catholics, who saw the family as the ultimate social bedrock, and so-called`neo-Malthusian' reformers. 48 Catholic doctors distanced themselves from Nazi extremism by following the strictures of the papal encyclical, Casti Connubi (December 1930). They upheld marriage as a means to the sancti®cation of man and were against the control of births and the`negative eugenics' of sterilisation and abortion, which were a function of the over-bearing state and, for instance, allowed women freedom to work outside the home. They equivocated, however, about`positive' measures of`moral eugenics' to regulate`healthy' marriages and guard against the hereditary transmission of mental and social`defects'. 49 As in Mussolini's Italy, Franco's victory eugenics and Catholic-inspired policies of pro-natalism and maternology overlapped with little dif®culty, although they were discussed in the language of`moral and racial hygiene'. 50 These issues were also taken up and discussed in the pages of the conservative intellectual review, Accio Ân Espan Äola (AE), founded in 1931 as a rallying point for anti-Republican thought, and modelled on Charles Maurras' Action Franc Ëaise, with its call for rule by an`aristocracy' of`intelligence and the Sword'. According to the Spanish organisation's most recent historian, its professed doctrinal debt to the Inquisition, and its general tendency towards Catholic determinism in its intellectual output suggests a less secularised range of opinion than its French counterpart. 
51 This`traditionalist' argument does not wholly ®t with the injection of counterrevolutionary positivism in AE, represented by several articles on such themes as medical science, racial improvement, revolutionary constitution and psychology, modern warfare and defence, technology, the corporate economy, and Fascism, which gave a modernist edge to the nationalist, Catholic±moral, spiritual±essentialist aura of AE and its leading light, Ramiro de Maeztu. 52 State institutions in the Spain of the 1930s were highly centralised but did not extend effectively across society. Although a relatively developed medical and penal institutional framework existed, material resources and an infrastructure to implement psychiatric reform were lacking. When Gonzalo Lafora, director of the newly established Consejo Superior Psiquia Âtrico, visited the Ma Âlaga public asylum in late 1931, he found`an enormous mass' of patients, effectively in custody, without bread to eat, living from whatever non-mental patients in the main Hospital Civil left. Many were naked, sleeping on the¯oor or outside on verandas. 53 Measures taken by the Republican administration on the conditions of committal, reclusion, treatment and discharge of patients in asylums and to professionalise psychiatry were not well resourced and were generally ineffective. 54 The war forced most of those doctors who had sided with the Republican government into exile, thereby debilitating modernising impulses. In the end, policy as developed to deal with the problems presented by the Civil War settled for penal pragmatism', a rhetorical compromise, which balanced the spiritual requirements of Francoism's moral order with the fait accompli of what modern science explained as the biological order. This unconvincing conciliation between tradition and modernity was encapsulated in what became propagandised as Catholic Spain's great contribution to modern penal doctrine: the Patronato Central para la Redencio Ân de Penas por el Trabajo, which imposed disciplinary labour on thousands of prisoners (both men and women) from 1938, as a means to personal redemption and to a gradual remission of sentences. It was the closest that the state came to a kind of mass therapeutics in a society which lacked permanent biopsychological penal clinics. Within a mental structure of repentance and expiation, work was enshrined as the inevitable punishment for sin. 55 Feminine bodily weakness was the counterpart of masculine punishment through manual labour. Castigation for original sin was biologically con®rmed. But women prisoners could still redeem themselves through virtuous female activities, like religious classes, nursing the sick and household tasks ± washing¯oors or cleaning toilets, as women in Ma Âlaga The psychiatrists claimed that many had not been willing to overcome this situation and`ascend in the social hierarchy through work'. Since most had employment, it was said that they could not have been drawn to the revolution through hunger. 58 In fact, the three pesetas daily which the best-paid women textile workers in Ma Âlaga were able to contribute to the family economy in the 1930s, at a time of rising unemployment, economic dislocation and shortages, would only narrowly, if at all, prevent a catastrophic fall from poverty to destitution. Social±spatial marginality was also de®ned by a north±south divide in the city. Chaotic and speculative expansion spread northwards as well as westwards with the crisis of subsistence in the countryside. 
A further sixty-two (35 per cent) women prisoners lived in this zone, now to the east of the river, in the area around La Goleta, to the north of the historic city centre and the calle Carretería, which formed a recognised social barrier. The effects of poverty were obvious. Three hundred children in the city died before reaching one year in 1931, and nine hundred per year were dying before the age of five. The general mortality rate in Trinidad in 1930 was almost four times that of the middle-class district of the
Alameda. 59 By 1931, begging had reached`plague' proportions and mendicancy was increasingly theorised and criminalised. 60 By the time of the inauguration of the Second Republic it was estimated that 50,000 people were dependent on the special sanitary assistance of the municipal authorities, although provision was rudimentary and did not remotely cover needs. 61 The respectable city was, therefore, enclosed by`dark forces' to the west and north. According to the records, the remainder of the Civil War prisoners lived in pockets of marginality, such as the poor worker district of El Palo at the far eastern end of the city along the coast (7.3 per cent), or in rudimentary home-made settlements by the water's edge, or shacks on the dry river bed under the city bridges, or on the`camino Antequera', connoting temporary dwellings, at the extremities of the most unhygienic zones of the city, along the westward extension of Trinidad, where livings were made through various forms of prostitution. Pilar VE, a forty-year-old woman from Ma Âlaga, ought to have dedicated herself to`sus labores' (literally,`her labours'), meaning housework and care of a family, so the Military Court proclaimed, but was living on the beach in 1937 when she was detained. Subsequently she was denounced as being a`leftist' and was reported to have supported anticlerical incendiary assaults on her parish church in the Republican years prior to the Civil War, later`displaying happiness' at killings and, like other women, of`celebrating' the expulsion of religious from convents and ecclesiastical residences. She received a sentence of six years for`excitacio Ân' and was suspended from holding any public or private employment, of®ce, or having any right to aid, assistance or suffrage. Her case, as was automatic for political convicts until 1942, was referred to the Court of Political Responsibilities for the possible order of reparations and con®scation of her property and that of her family. 62 The pre-Civil War intellectual activities of Eduardo M. Martõ Ânez, head of the health service of the provincial prison and director of its Psychiatric Clinic, who assisted Vallejo in the Ma Âlaga study (and may have instigated it), were pursued amid the increasingly politicised atmosphere of pro-amnesty protests, riots and incendiary violence in the divided city. His publications included a work on the`bio-psychic study of the delinquent', an essay on the`psychopathology of incurable delinquents', and a study of the creation of anthropological laboratories in penitentiaries, although, he observed somewhat ambiguously, modern criminal asylums could not house`the crowd of perennial disrupters of social tranquillity', which existed`in an intermediate zone between abnormality and madness'. 63 During the early months of the war, in the summer and early autumn of 1936, as the Republican authorities of the city attempted to regain control, the Ma Âlaga penitentiary was besieged on several occasions in response to Nationalist bombing raids, which killed many women and children and were so thunderous as to make the walls of the prison tremble, although it was situated some distance from the centre of the city. Those deemed to have supported the military rising of 18 July, mainly middle-class business people, local politicians, and military and religious personnel, had been imprisoned, and many were taken to be executed by the crowd, accompanied by women and children from working class barrios,`satisfying their cruelty and common instincts'. 
64 Eduardo Martõ Ânez had been tolerated as prison medical of®cer during the seven months of revolutionary Republican jurisdiction in the city. He had witnessed mass executions carried out before what he described as a`baying, clapping and unconscious multitude, animalised by the bestiality of the moment', and had had to certify the deaths of the victims. 65 The violence of the Nationalist occupation of the city, re-establishing the former social equilibrium, was more ordered and extensive than the revolutionary violence of the left. The`sickly bride' which was Ma Âlaga would now be`cured'. Amid funerals for those killed during the period of`Popular Justice' in the early months of the war, summary Councils of War began for political crimes committed by thè badly born' (mal nacidos). The local Francoist state political party, the Falange, called for denunciations of`the criminal low-life' which had`by commission or omission, bloodied the streets of Ma Âlaga', and the Civil Governor threatened ®nes for anyone intervening on behalf of those detained. In an era of martyrdom there was no place for sentimentalism. 66 Denunciations and vengeful stories, reports by Falangist of®cials on public and private conduct, by priests on degrees of religiosity and repentance, and by Daughters of Charity on levels of culture, rather than evidence as such, formed the basis of military trials. The records which remain to us, partial and fragmentary though they are, show that at least ®fty-®ve Republican women were executed in the city of Ma Âlaga, two of them by garrote vil, a brutal form of strangulation. 67 Women's anticlerical violence seemed incomprehensible, as the military courts and other authorities commented, since`these kinds of women', even those who declared themselves to have`no religion', had always taken an emotional part in and been excited by the Holy Week processions of the many invocations of the Virgin in the city in the years before the Republic. The procession of the Confraternity of the Santõ Âsimo Cristo de la Expiracio Ân and the Holy Virgin of Sorrows, from the Perchel parish church of St Peter, always amazed those present, with the Magdalen at the feet of the Saviour and the elegant Virgin so dark,`like a Perchel woman' on her redoubtable throne, and the sound of ejaculatory saetas (sung prayers in¯amenco style) from kneeling singers, and the perfume of incense ®lling the air. It was dif®cult to explain how many of these same women who applauded the perennial rites at Easter could within a few weeks participate in the destruction of churches and burnings of the processional treasures they guarded. 68 This incomprehension relates to the duality of purity and impurity in the ideological construction of potential threats to Catholic Spain, as a comparison of racial with sexual differences may con®rm. During the Rif wars of the 1920s, virtually the only ®ghting men who displayed`hysterical reactions', somewhat likè nervous tendencies' in women, according to military psychiatrists, were`Moors.' This was due to their`primitive personalities' and consequent lack of conscience. They were oblivious to the complexities of the modern world, like`solid blocks', devoid of interior life, in perpetual communication with the external world and had`the nervous system of animals'. 69 As in the ambiguous interpretation of Moorish`sedimentation' in the Spanish`race', in women there lurked an irrepressible attraction simultaneous with a dread fear which was expressed psychologically.
The dirt, misery and`hatreds' of the popular barrios of the industrial city resided beside images of the most typical quarters, Perchel and Trinidad, and most prominently their women, as the`real representation of the Andalucõ Âan soul', with its`peculiar purity' (casticismo). In this combination of the pure and the impure, certain dangerous areas of the city seemed to represent a kind of exaggerated, voluptuous femininity:`the woman of castiza Perchel, with beautiful eyes, bloodred lips, heart of ®re, body of pagan Goddess . . .', Perchel,`heart and pride of Ma Âlaga . . . cradle of its poetry, of its beauty [majeza] and elegance [garbo]'. 70 This duality was re¯ected in bio-criminology. Physical and sexual infantilism were linked to criminality, as they were associated by Clavero Nun Äez, in a 1940 study, with`extraordinary psychic suggestibility, fabulations, fear, lying and negativism'. More maternal feminine anatomies produced`joyful optimism, untiring industry, satisfaction and intensely deeply felt enjoyment with the spouse, children and things of the home, and a great tolerance for suffering and a disposition to sacri®ce, self-assuredness with nothing of envy . . .'. The`erotic behaviour' of these feminine bodies was also usually`normal'. Meanwhile, women stigmatised with virile features' tended to feign maternal sentiments as a way of escaping their sexual ambivalence. However, characteristics of sexual masculinity abounded in prostitutes 68 and criminal women, it was claimed, in`paradoxical combination within feminine physical and psychological characters'. 71 Thus, danger lurked even within the seemingly harmless, like women of religious faith, who could be merely`spiritual' without being genuinely`pious'. Lower-class women's apparent religiosity was a ritual display of exaggerated paganism:`a violent and unconscious manifestation of a sentiment of frustration': the irritation of repressed desire. Ownership of such displays was coveted, it was argued, but was out of reach and would always appear as strange and distant from the way of life of the people. 72 A related argument was that both apparent expressions of religiosity and anticlerical violence were simply forms of innate female fanaticism. But the predominant view was that religion, at least in middle-class women, was an essentially calming in¯uence. Vallejo-Na Âgera maintained that study of the`criminal pathological form' during the Civil War con®rmed`the feminine cruelty of woman' when she had lost her religious sentiments and`operates exclusively stimulated by her natural tendencies'. 73 Of 290 women still recorded in Ma Âlaga prison, sentenced for political crimes related to the Civil War, 189 (over 65 per cent) are described, occupationally, as dedicated to`sus labores' (the term`su casa' ±`her house' ± was occasionally used, and more often,`su sexo'). This ®tted the accepted image of virtuous womanhood, but many almost certainly worked in further ways to help feed the family. Some were factory workers, in textiles or tobacco, for example, a possibly militant status which women may not have wanted to divulge. 74 Twenty-two of the sample were declared as textile workers in some way: as`dressmakers', bleachers or press operators, and some as outworkers. One thirty-seven-year-old mother of four children sewed curtains at home, and attended left-wing meetings and was known to be`talkative', and to`encourage the rebels'. 
She was sentenced to six years' imprisonment for 'excitation of the rebellion' under article 240 of the Code of Military Justice. 75 Sixteen women are starkly described occupationally as 'campo' ('countryside'), referring to farm labour of some kind, and were possibly recently arrived in the city. Six were laundresses or washerwomen, four were teachers, and three were prostitutes. One midwife was detained in 'preventive custody' for making 'statements hostile to the Glorious National Movement', and another was imprisoned for organising meetings with nurses at the local headquarters of the socialist party, and giving shelter to 'fugitives'. 76 A sixty-three-year-old woman who lived in the north of the city was claimed to be 'a significant extremist element', who 'realised acts of occupation' in the house of a priest in the Alameda de Capuchinos, a little while after he was shot. 77 No evidence was produced or claim made that this woman was involved in the murder of the priest, and the housing and shelter of thousands of refugees in the first months of the war possibly explains this 'occupation'. From July 1936 to February 1937 the bombed city's population had increased by some 15 per cent to around 212,000. Political activists, like Lina Molina, of the Comité Provincial of the Communist Party, who was director of the Málaga Republican supplies committee, found shelter for these 30,000 refugees, using all means possible including the conversion of convents and churches. A committee of voluntary women did what they could to keep order, and two proletarian committees attended to sanitation and provisioning. 78 A high proportion of female political prisoners in the city of Málaga (thirty-one of the sample from the prison archive) were household servants. (Of the fifty women in the psychological tests of 1939, fifteen were servants, the largest occupational group. Thirteen are listed as 'hogar' ['home'], eight as factory workers, and three as 'prostitutes'. 79) Many domestics were young women who were relatively newly arrived rural migrants. One seventeen-year-old maid living in El Palo was sentenced to twelve years in prison for 'aiding the rebellion'. 80 Domestics were seen as 'unattached', lacking education and religious training and 'drawn towards prostitution'. Their mental activity was absorbed, and their instinct for self-abnegation destroyed, by what one doctor, director of the prestigious medical review, Clínica y Laboratorio, described as the Republican period of 'pre-Revolutionary sensuality', and infected by what another who practised in Málaga called the 'Marxist virus'. 81 Maids were also drawn to feminism and 'spiritualism', symptoms of a broader heterodox culture in the city. 82 These young women could be forgiven if they read and were confused by the ambiguous messages of the leading Republican mouthpiece of the early 1930s in Málaga, the daily newspaper El Popular, which tended to be critical of 'capricious' and 'frivolous' behaviour, like following fashion, which did 'not correspond to women's moral and social condition', and argued that a woman could not acquire 'grace' without 'physical and mental equilibrium'. Women who had to earn their own sustenance, it argued, ought to dress with modesty and pay heed to intellectual and spiritual elevation: 'Modernism yes, but this does not consist in frivolities. No serious man will come near women with poor taste.'
77 She was sentenced to twelve years' imprisonment. APPM, EP, no. 349, C3, L2.
83 An embodiment of women's rising claims for autonomy in Málaga was the federalist and feminist politician Belén Sárraga. Little seems to be known about Sárraga's local society for working women, which claimed a staggering 20,000 members in 1900, who were mainly 'country women', but its influence must have been considerable. 84 According to Falangists, writing after the Civil War, and referring to her prewar secular educational work on behalf of women, 'the first Marxist microbe was introduced into Málaga by Belén Sárraga and poisoned the workers who stoned the image of Christ'. 85 Meanwhile, the homesickness of domestics was interpreted as a 'primitive psychological reaction'. 86 They were seen as particularly prone to 'the psycho-physical intolerance and social inadaptability of the psychopath when faced with external stimulus': 'Everything', including, one supposes, political propaganda and hunger, 'excites them'. 87 One young servant from La Goleta was reported by her employer, because, although she had been of 'normal conduct and antecedents' before the war, she later announced that she had 'joined the anarchists'. Afterwards she was supposed to have declared that the householders would probably be killed like many others because 'if the situation had been the other way around, they would have been doing the killing'. 88 Another, who lived nearby and was a cook, was sentenced to twelve years' imprisonment for 'aiding the rebellion'. Another had her death sentence for organising a domestic servants' union commuted to thirty years' imprisonment by the Head of State. 89 In the aftermath of the Civil War domestic servants were specifically targeted for religious education and spiritual exercises. 90 Feminist spiritualist groups, so the myths of danger in Málaga claimed, had significant influence in the worker barrios like Perchel, especially on those with particular psychopathic personalities and ethnic groups with their southern propensity towards fantasy inherited from the Arabs. One of the well-known spiritualist meeting houses, where, incidentally, food was distributed to the poor, was in the heart of the 'infectious' working-class barrio of Trinidad. 91 According to Gustavo García-Herrera, a native of Málaga and a medical doctor born in 1900, who, like Vallejo-Nágera, served in the army medical corps in Morocco, mediums were able to lose consciousness because they were 'psychopathic', 'hysterical personalities' and found a climate of suggestibility in hungry, diseased and overcrowded working class districts. 92 Spreading spiritualist ideas to the 'barbarous', 'semi-savage' suburbs of the city caused violent reactions among the 'vulgar' women there, who, in their faces, revealed 'the most complete ignorance and stupidity'. 93 More popular myths saw mediums as secularised counterparts to miraculous and protective invocations of the Virgin. The first Holy Week celebrations in Málaga after the city's occupation by Francoist forces were reduced to a single procession of the Virgen de los Servitas in April 1937. It was silenced and darkened when artillery fire was heard during the proceedings, although some residents of Perchel had already been reassured by a medium that they had nothing to fear, as the Virgin interceded with different social groups to prevent wartime catastrophes. 94 Before the Civil War, however, the Virgin could appear in working-class districts, even in factories.
The Spanish Civil War studies politicised constitutional theory, postulating a relationship between a determined biopsychological personality and a 'constitutional predisposition towards Marxism'. They consisted of clinico-psychological typification and bio-metric investigation: detailed measurements of, for example, the length, breadth and depth of the skull, the genitals, the distance between the eyes, the length of the nose, and the abundance and placement of body hair, and descriptions of skin colour, indicating any 'morphological stigmatisation'. 95 The Neymann-Kohlstedt 'introversion test', using spoken responses, and the Marston personality-rating system, were used together to identify the type of primary temperamental reaction of subjects, sorting them into the 'introverted' and 'extroverted'. 96 The 1921 Robert M. Yerkes revision of the Binet-Simón (mental age) intelligence scale, which in other contexts was notorious for ignoring social factors when labelling groups, such as blacks in the US Army, as 'inferior', was used to find the 'intellectual coefficient' of each subject.
The 'fundamental qualities of moral activity' of each specimen were gauged by completion of a 200-item questionnaire with information about family, sexual, political, religious and military antecedents. These were based on interrogations introduced in Nazi-run centres of biological-criminal investigation, principally in Munich, which graded political prisoners and racial enemies according to the Kretschmerian 'biological-hereditary inventory' which enabled the 'spatial circumscription' of particular groups. 97 First there was investigation of the 'family tree', including parents and siblings, with questions referring to drunkenness, criminality, social position, economic wellbeing, spiritual predispositions and state of mind, and to characterological properties such as temperament, level of education, types of psychic reaction and familial conduct. Other 'anomalies' of the family, such as 'pauperism', emigration, illegitimacy, economic crises and mental illness, were minutely annotated. Questions referring to the record of the female parent as housewife and mother - miscarriages or abortions, her reputation in the neighbourhood, her moral and educating qualities and her inclination to controversy and to adorning her person - completed this part of the process. Using these data, the roots of the lamentable conditions from which the fifty Málaga women prisoners were said to be suffering would be 'established' as hereditary and 'genetic'. Among the parents, siblings and other blood relations of the subjects were a high proportion of 'mentally sick', 'psychopaths', 'criminals', 'bigots', 'vagrants', 'homosexuals', 'alcoholics' and 'suicides'. Many were 'revolutionaries' or 'non-Catholics', and this in a country that had 'struggled for Catholicism' and whose racially 'select' had been esteemed as Catholics. According to results from the Neymann-Kohlstedt test, thirty-six of these women (72 per cent) had 'degenerative temperaments'. Some revealed defects which were the 'collateral inheritance of schizoid or cycloid bases', but most were drawn to 'hysteroid criminality', a category not used in the male studies. 98 The next section dealt with prisoners' own education, religiosity, propensity towards begging, theft, alcohol, work, family break-ups, conduct during military service (in the case of men), marriage, children (antecedents, state of mind, criminality of spouse and children), health (from childhood), type of behaviour when inebriated and personal attitudes towards crime. The Yerkes/Binet-Simon tests revealed that half of the group were of 'inferior' or 'weak' intelligence; 80 per cent were of a 'low' cultural level or 'illiterate'; 38 per cent had received no schooling at all. 99 The 'social personality' of the subjects confirmed the ideological presuppositions of the doctors. Only eleven (22 per cent) had 'normal' female personalities, which meant 'being moral', working, living a social life without conflicts, being non-delinquent, and not given to 'sexual perversity', kept on the path of virtue by piety, maternity and constitutional weakness. Thirteen were 'born revolutionaries', a variant of the questionable positivist category of 'innate criminality'. These women instinctively sought to overturn the social order because of the congenital peculiarities of their bio-psychic constitutions. Four others were labelled as 'congenitally immoral', although the distinction between this and 'born revolutionaries' is not explained.
100 Twelve were described as 'anti-social psychopaths', a 'plague' category that Vallejo believed could be identified society-wide, under an authoritarian government, and its sufferers 'segregated' during infancy. 101 The others (ten) were part of the multitude of uncultivated, crude, 'suggestible beings', lacking spontaneity or initiative, 'who form the majority of anonymous people', and were condemned as 'social' or 'moral' 'imbeciles'. 102 The questionnaire finishes with a description of general characteristics, clinical analysis of the nervous system, signs of degeneration and hormonal assessment. Vallejo and his colleagues decried any attempt to explore the relationship of delinquency to sexuality, and were reluctant to make physical examinations of the Málaga women because of the 'impurity of their surroundings'. This did not prevent them from concluding that there were substantial peculiarities about women's violence related to sex. They investigated the age and circumstances of loss of virginity and enquired about male and female 'sexual perversions'. 'Anomalies of sexual development' and 'morphological stigmatisation' were particularly common among criminals. Male delinquents, for example, showed frequent signs of femininity and either of sexual infantilism or 'hypersexualism'. 103 Superior psychological qualities had been overtaken by the 'hyperexcitability' of 'infantilism' and base instinct. Political revolts allowed women to 'satisfy their latent sexual appetites'. 104 A contemporary study of one hundred prostitutes housed in the Clínica de Protección a la Mujer in Madrid during 1939-41, using the same basic methodology, concluded that 60 per cent had mental and criminal antecedents where 'the instinctive life predominates'. 105 In 'red women', it was found, an 'unnatural' active sexuality was opposed to maternity. 106 Gregorio Marañón, as we have seen, also posed sexuality negatively in relation to maternity, though he had also prescribed a social re-evaluation of 'conscious motherhood', within a framework of economic reform, as a kind of social prophylactic. 107 The 'red woman' also symbolised the impulsive, passionate and 'feminine' multitude in general, whatever its sex. This 'multitude' was distinguished from 'the soul' of the nation ('the masses'), on account of its 'infantile hyperexcitability', possessing inferior physical and psychological features - a 'pathological sickness', with 'many points of contact' with the 'psyche' of children and animals. While 'the masses' (the 'fascist' zone) reacted psychologically with masculine qualities such as order, discipline and labour, the suggestible 'multitude' displayed only 'licentiousness, voluptuousness, indiscipline and crime'. 108 The Marxist women of Málaga were accused of being 'united with the crowd' which committed revolutionary murders, anticlerical destruction, bodily mutilations, and necrophagous acts (literally, 'feeding' on dead bodies, though here it referred to 'venting anger' upon bodies, taunting and jeering, and encouraging their display). Primarily they were accused generically of 'animating men' to disorder and the 'excitation' of revolution. This ambiguity of responsibility is underlined by the inclusion of crimes such as 'Marxist affiliation', and labelling women as 'revolutionaries'. They seem more often than not to have been members of no political party, although some joined the Communist women's Mujeres Antifascistas.
Other transgressions were wearing the distinctive, 'masculine', blue overall (the 'mono' 109), typical uniform of the militia-woman, inciting others 'to declare themselves against fascism', and making negative statements about Nationalist generals. It is a feature of the summary hearings, which implicitly recognised that a broad range of women's activities were, in some sense, political, that women were accused for their community association outside the orbit of the family, of deleterious activities in groups. Some were simply 'habladoras', a somewhat ambiguous label which can imply 'great talkativeness', or 'gossips', maliciously transposed as 'oral propaganda'. 110 Others attended revolutionary meetings, acts and demonstrations, 'pressing the rebels towards the commission of excesses and outrages'. 111 A fifty-four-year-old woman was tried by a Málaga Military Court for being a 'Red' who assiduously attended the local socialist meeting house, the Casa del Pueblo, and public political meetings. 112 A sixty-two-year-old woman from El Palo was the mother of a 'known criminal of the city' and was sentenced to six years' imprisonment. 113 Another woman had 'distinguished herself during the Marxist domination by her manifestations and phrases encouraging the extermination of persons of order, plotting the perpetuation of registros (searching of houses), and making up insulting verses . . . inciting with her words the commission of violent outrages'. 114 Another was sentenced to death for the profanation of bodies. She was accused of having 'wretched antecedents and conduct' and of being 'an active communist and propagandist of dissolute ideas' and of having taken part in the sacking of homes and the destruction of a church and the burning of its sacred objects. Of 'maximum social danger' and 'perverse instincts whose monstrosity is beyond human conscience . . . [including] the refined profanation of cadavers', she 'incited' the killing of enemies. Later, in this case, the death sentence was commuted by Franco to life imprisonment and an amnesty was conceded in 1945, liberty being granted finally in 1946. 115 No explicit connection was made between such acts (whether they were stories or not) as urinating upon dead bodies and the multiple popular superstitions about bodily substances, excretions, the devil, or the healing power of dead bodies in Andalucía. 116 They were reflexively translated in terms of 'danger', as if symptoms of a plague carried by the swarms of flies that infested the working class barrios. Another woman, a fifty-seven-year-old widow, who 'incited' her son to military rebellion, for which he was executed by Nationalist forces, was considered 'hugely dangerous' because of the 'sexual nature' of her 'profanation and ridicule' of the body of a dead victim of the Republicans. Her death sentence was commuted to a prison sentence of thirty years, although she was eventually granted Conditional Liberty as part of a partial amnesty in December 1943, having, it was noted, redeemed three months of her sentence by passing the 'Curso elemental de Religión'. 117 Other women were condemned for visiting the sites of execution of rightists during the war, as female onlookers had been in revolutionary France. 118 A thirty-year-old woman with four children, married to a man at the Republican front, was denounced for showing support for the Republican fighters as they passed through in lorries bound for the war. When the city fell, she left and did not return voluntarily thereafter.
119 Of the women made the subject of the psychiatric study in Málaga, 62 per cent (thirty-one women) were 'guilty' purely of 'inciting' anti-Nationalist sentiments, by supporting left-wing or Republican political groups. They were, in the majority of cases, not sentenced for acts of violence at all. 120 Revolutionary power and military occupation produce possibilities for experimentation on enemies, although organised military authority is usually more systematic than mass violence. This was true of the Civil War in Málaga, where 'communist experimentation' existed in the sense of an improvised and bloody attempt to create a new social order. Given the nature of Catholic nationalism and medical and military cultures in Spain, it is not surprising that in the Nationalist psychiatric studies revolutionary conduct was almost exclusively viewed through ideas about morality and biology, or that conclusions were highly coloured by sex and gender. In contrast, it was possible to look at the same events by concentrating on the sociopsychological dynamics of revolutionary behaviour. The head of the wartime Republican psychiatric services (Vallejo-Nágera's counterpart), Emilio Mira, who was exiled after the war, attempted to explain the revolution both historically and psychologically. It was within this context, rather than in the realm of 'disordered' constitution or sexuality, that each individual revolutionary was 'created', according to Mira. 121 Collective existential crises, associated with social change or growth, demand that all oppressive structures and organisations be periodically thrown off. Just as earthquakes and floods erupt under pressure, causing abrupt mutations of the natural environment, men and women experienced psychological changes affecting their collective consciousness. Revolutionary violence was analogous to, though not caused by, the disappearance of infantile features during the human crisis of puberty, ultimately leading to maturity.
To Mira, revolutionary violence was not a problem of defective bodily constitution or primarily of public order, but the result of a long collective psychological process. Neither a simple moral criterion nor a rigid physiological classification could be applied. The intimate sense of social revolution was a realisation of disequilibrium and non-conformity: affective currents of collective aspirations towards justice, which could only be satisfactorily expressed with spectacular gestures. It was simply unreasonable, according to Mira, to require 'the dominion of the conscious mind over the unconscious' within the revolutionary masses. While Nationalists spoke in static moral terms about the dichotomy of 'order and chaos' (the 'cosmic order' in so-called 'times of peace'), Mira, by contrast, persuaded his audience to think in dynamic terms of the habitual order and the other as a new morality.
Mira also makes a distinction in types of revolutionary behaviour between that which is 'lived' during the revolution, that 'lived' from the revolution, and that which lives the revolution and is part of its 'all'. This scheme is problematic, since 'the revolution' is never satisfactorily defined, although the theory does at least recognise that far from all Republicans were 'moral'. It also goes some way towards limiting the deterministic effects of separating the psyche from human agency and politics. But here Mira does not entirely succeed. The determined revolutionary, he postulates, suffers a psychological restructuring, apparently because of his beliefs or due to the charisma of a political leader, and is thrown into a state of transcendence, above equanimity and judgement, where the law of 'all or nothing' reigns. However, the collective response to isolation, encirclement and imminent defeat by early 1937 in Málaga must have produced a state of panic and despair which was only fuelled by stories of the violence of the advancing Nationalist forces and by hunger. 122 The compression of time and distance which Mira reasonably perceives as influencing the behaviour of the revolutionary is unlikely simply to have been the result of a 'revolutionary' acceleration of endogenous psychical processes, but primarily an experience of material, exogenous pressures: the struggle for survival.
The war initially had seen a rise in the incidence of reactive or endogenous psychoses, though this was most evident in the first months of the conflict and the rise levelled off once 'the constructive phase of the revolution was initiated'. 123 An increase in neuroses was also noted towards the end of the war, as food shortages in the cities became catastrophic and military occupation was imposed. 124 Indeed, the sometimes tragic effects of the female experience of 'social commotion' largely derived from women being closer to the struggle for daily survival heightened by war, as female activists in Málaga made clear. 125 When a twenty-one-year-old Perchel mother, married to a humble fisherman, threw herself and her two tiny children from the rocky hillside of the Castillo del Gibralfaro, overlooking the city's port, in early August 1936, it was because of her agonised need of milk for her baby which she could not find or produce, a desperation compounded by the war. 126 Political dissidents in Franco's Spain were not interned en masse in psychiatric hospitals, as in the Soviet Union. However, both systems related political attitudes to sickness and disorder, paradoxically in such diagnoses as 'schizophrenia with religious delirium' or 'Marxist mania'. 127 In Franco's Spain, as in Stalin's system, religious or quasi-religious attitudes and medical science contributed to the repressive political culture. It would be quite wrong to argue that all priests and doctors disinclined to support the Spanish Republic were somehow cruel and vindictive. But during and after the Civil War, Catholicism and pathology did provide parallel repressive linguistic and ethical frameworks, the one consisting of sin, punishment and redemption, and the other of infection, disease and cure. In the Civil War struggle for the nation, these vocabularies/mentalities became dominant, jostling and inter-acting with each other and pushing material social issues into the background of political discourse.
121 Resumé of a paper presented at the Institut, curso 1937-38, Emilio Mira, 'Psicología de la conducta revolucionaria', Universidad de La Habana, 26-27 (September-December 1939), 43-59. Mira was first professor of psychiatry at the Autónoma University in Barcelona, and head of the Institut Psicotècnic of the Generalitat of Catalunya, in the 1930s.
122 The Acting British Consul in Málaga reported as early as September 1936 that there was a flood of refugees and wounded, and that the insurgents were generally expected to occupy the city in three or four days. PRO/FO927/14, 15/30 September 1936.
The gendering of morality in Civil War Spain was reinforced by a psychopathological conception of crime and revolution. Morality was also the basis of an imagined 'new' Catholic-totalitarian community that pre-dated the conflict but was restated juridically by wartime divisions. This imagined sacred community's shaping of medical culture and biology was deduced from essentialist, totalising ideas, rather than from the individual and the individual's way of being. 128 Science could not be ignored, but it also could not be permitted to relativise the absolute gendered values of what would become the 'New State'. Femininity, for example, did not merely follow glandular dictates. Neither was it a way of being which could be forged by a woman in whatever way she pleased. It was the product of a given spiritual environment acting on a given organic and temperamental base, evolving through time: 'Femininity simply is, just as any value simply is', and 'if she is feminine, woman never ages spiritually'. 129 Psychiatric discourse and practice in Spain around the time of the Civil War revealed parallels in the relationship of organic psychiatry and bio-typologies to both liberalism and neo-Thomist Catholicism. Both medical science and Catholic doctrine could be exploited in declaring the indissolubility of gendered morality. The 'biological roots' of conditions such as hysteria were linked to women's revolutionary behaviour. Where liberal and Catholic views differed was in their diverse attitudes to social conditions as a theoretical category of mental medicine and psychology. Catholic military doctors leaned heavily on a gendered psychological construction of women's moral culpability originating in nineteenth-century liberalism, while ignoring the material conditions of revolutionary conduct and social class. They reclaimed conservative gender traditions for the political right and coupled them to the Francoist crusade to 're-Christianise' Spain. | 2017-04-08T06:32:24.714Z | 2001-10-26T00:00:00.000 | {
"year": 2001,
"sha1": "ef7289af80f1ced5aea7c63542292367470e7e3a",
"oa_license": "CCBY",
"oa_url": "https://uwe-repository.worktribe.com/preview/1084805/download.pdf",
"oa_status": "GREEN",
"pdf_src": "Cambridge",
"pdf_hash": "fc4e92ce8fbb41213457f492b07f62c7eb7b1614",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Sociology"
]
} |
270458581 | pes2o/s2orc | v3-fos-license | Priorities in healthcare provision in Parkinson's disease from the perspective of Parkinson Nurses: A focus group study
Background Through their expertise and diverse skills, Parkinson Nurses are key care providers for people with Parkinson's disease. They are seen as an important profession for person-centered and multidisciplinary care, considered priorities in Parkinson's care delivery. Currently, however, little is known about the priorities that this profession itself defines for the care of Parkinson's patients and how they perceive their own role in the care process. Objective To explore the perspective of Parkinson Nurses on care priorities in people with Parkinson's disease. Design Qualitative study. Setting(s) The iCare-PD study served as the object of study by establishing an interdisciplinary, person-centered and nurse-led care model in several European countries and Canada. The nurses who participated in this model were part of the study. Participants Six Parkinson Nurses participated in the study. Methods We conducted a thematic focus group, adopting the paradigm of pragmatism to draft an interview guide. The focus group was based on the inspiration card method and followed recommendations for co-creation processes. Results Parkinson Nurses define care priorities for Parkinson's in areas of education, multi-professionalism, and need-orientation. They see themselves as mediators and coordinators of care delivery processes. Conclusions In line with international recommendations, Parkinson Nurses prioritize key aspects of multidisciplinary and person-centered care. At the same time, however, the nurses also name care priorities that go beyond the international recommendations. It is therefore crucial to integrate the perspective of this important profession into recommendations for the delivery of healthcare for people with Parkinson's. Tweetable abstract How do specialized nurses define priorities for person-centered Parkinson's care? Answers are sought in this qualitative study by @MarlenaMunster.
What is already known
• International recommendations and priorities in Parkinson's care delivery often focus on person-centered and multidisciplinary care.
• Parkinson Nurses are vital in providing care for individuals with Parkinson's by person-centered and multidisciplinary approaches.
• Despite the recognized importance of Parkinson Nurses, there is a gap in understanding their specific care priorities and perceptions of their role in the person-centered care process.
What this paper adds
• This paper sheds light on the care priorities identified by Parkinson Nurses, emphasizing areas such as education, multiprofessionalism, and need-orientation.
• Parkinson Nurses view themselves as central mediators and coordinators in delivering care for individuals with Parkinson's disease, shaping the landscape of person-centered, multidisciplinary care.
• The findings highlight the importance of integrating the perspectives of Parkinson Nurses into healthcare recommendations to ensure comprehensive and effective care delivery for Parkinson's patients.
Background
Parkinson's disease is a neurodegenerative disorder characterized by complex multimorbid motor and non-motor signs and symptoms, typically affecting individuals over the age of 60. In Europe, approximately 1.2 million people are currently living with Parkinson's disease, and its incidence is rising among the elderly population (Gustavsson et al. 2011). Globally, the number of individuals affected by Parkinson's disease is projected to double by 2030 (Dorsey et al. 2007).
As the prevalence of patients with co-morbidities rises within healthcare systems, including those with Parkinson's disease, there is a growing need for tailored care structures to address their unique requirements (Palladino et al. 2016). People with Parkinson's disease are considered particularly vulnerable to inefficient care structures (Zaman, Ghahari, and McColl 2021). International guidelines recommend integrated, person-centered, and multidisciplinary approaches to Parkinson's care, and recognize the vital role of specialized nurses, known as Parkinson Nurses, in providing comprehensive and individualized support (Lidstone, Bayley, and Lang 2020; Radder et al. 2019; Rajan et al. 2020).
Parkinson Nurses take on a variety of tasks that contribute to a more person-centered approach (van Munster et al. 2022). Furthermore, they are seen as a key profession in the establishment of multidisciplinary care delivery (Radder et al. 2019). Despite their significance, there remains a limited understanding of the specific care priorities identified by Parkinson Nurses and their perception of their role in the broader care delivery landscape. This research seeks to bridge this gap by examining the care priorities designated by Parkinson Nurses and gaining insights into their perceptions of their profession's role in delivering healthcare to individuals with Parkinson's disease within a multinational integrated care model.
Study design and theoretical background
This qualitative research is grounded in the paradigm of pragmatism, which sees human experience as central to building knowledge and understanding the world (Allemang et al. 2022). This research paradigm is well suited to exploring complex issues and subjective perspectives in health research (Allemang, Sitter, and Dimitropoulos 2022). A qualitative research approach was therefore chosen to capture this experience, and the focus group technique was selected because it is particularly suitable for exploring perceptions and attitudes (Powell and Single 1996).
To capture the perspective of nurses, it was relevant to form a focus group with professionals who have a broad perspective on care delivery for people with Parkinson's disease. A purposive sampling approach was employed to ensure a comprehensive understanding of Parkinson's care delivery, integrating six nurses from Canada and various European countries (Germany, Portugal, Italy, Czech Republic, Ireland) involved in a novel care delivery model (Fabbri et al. 2020). Data collection occurred in Portugal in 2022, with nurses participating either in person or online. These nurses assumed the role of care coordinators within a person-centered, multidisciplinary care concept, making them well-suited for providing insights (Mestre et al. 2021; van Munster et al. 2021).
The focus group discussions were facilitated using the inspiration card method, encouraging active participation and stimulating discussions around key priorities in Parkinson's care delivery (Halskov and Dalsgård 2006). A schematic representation of this process is shown in Fig. 1. The cards presented various tasks typically undertaken by nurses in multidisciplinary and person-centered Parkinson's care models, adapted from van Munster et al. (2022). The moderators identified five overarching themes, guided by the elements of person-centered Parkinson's care outlined by van Halteren et al. (2020). After reviewing the cards, nurses were prompted to select and justify the three most important priorities for each top theme based on their perspectives. Facilitators (M.vM., Public Health, M.Sc., a female researcher based in Germany; and J.S., M.Sc., a female researcher and health economist based in Germany) guided the conversation by asking specific questions to deepen explanations, fostering a rich and insightful dialogue among participants.
Data collection
Completed cards were collected from participants and archived as part of the study results. Pseudo-anonymized analysis and stringent data protection measures were implemented, including removing any identifying information, such as names or specific locations. Additionally, an audio recording of the group discussion was transcribed verbatim to capture all dialogue accurately.
Data analysis
Following transcription, audio recordings underwent thorough accuracy verification. Focus group data analysis was conducted by two proficient team members (M.vM. and J.S.) utilizing Braun and Clarke's thematic analysis method (Clarke, Braun, and Hayfield 2015). To ensure result credibility, initial coding was independently performed by the research team.
Results
The group was very small, which is why the nurses' wish to remain anonymous was respected and no personal data were published. Six Parkinson Nurses participated in the workshop, which lasted 1 h and 30 min.
Overall themes
The coding process resulted in four overarching clusters in which the Parkinson Nurses fulfilled their role in the care process: delivering, setting up, measuring, and coordinating. During the coding process, care priorities from the perspective of the Parkinson Nurses could be assigned to these four role clusters. A detailed description of the coding results can be found in the supplementary material (Table A1) (Fig. 2).
The role of a health care provider
Nurses define the provision of education, both to those affected, in the sense of patients and relatives, and to other professionals, as an important priority in Parkinson's Nursing practice. Patient education is seen as key to initiating discussion about support options: "[…] we try to do, like before they go to their homes, we try to see and to speak and we have a meeting trying to see what we can, they do like to buy or something, and to prevent and to proactively trying to prevent something." (PN1) Educating professionals, on the other hand, aims more at changing professional actions. This is seen primarily as a way of enabling patients to make better use of the necessary care services: "[…] one of the things we try to do is always educate other healthcare providers, either in the hospital or the community setting. Because at the end of the day, they will be the ones taking care of these patients." (PN3) Nurses perceive psychological support as another important aspect of provision. In this context, they reflect the special closeness to the patient as an advantage that allows them to empower patients:
The role of a health care builder
Nurses perceive themselves as holding an active role in the process of building a relationship with affected persons and in developing their self-management skills.
"I also put it, patient encouragement. […] try to make them search and think about other stuff, and giving the tools so they can actually process it themselves." (PN1)
In doing so, they also describe that these processes are resource intensive: "But really practically speaking, you actually need time. To, as I told you before, to create this relationship with the patient, to create your background, to specialize, to understand, to know the patients, you need time." (PN6)
The role of a health monitor
Nurses define priorities of their actions on both the content (i.e., what they monitor) and process (i.e., how they monitor) levels. In terms of content, medication and the needs of those affected at home are the main priorities. Monitoring these aspects is associated with better utilization of care: "what we would do is they would go to the wards where they're being treated and make sure that their medications are correct and that they're getting the proper support, […]" (PN4)
"I put the visiting the patient at home. […] they feel more comfortable and sometimes they speak some things to me that they don't speak with the doctor at the hospitals, because they're so stressed out because they have to go to the hospital. […] it's easier sometimes for them to speak out and sometimes even getting the information." (PN3)
Fig. 2. Overall themes of the coding process.
With regard to the "how", nurses perceive their role primarily in structured recording, which is considered a good basis of information for care planning:
The role of a health care coordinator
Nurses perceive the coordination of care pathways and thus the coordination of patients' health care utilization as one of their most central roles. In this context, coordination refers to pathways both within and outside the organizations in which the nurses work: "I chose hospital discharge guidance. The main reason why I chose that is because, as I said, the Parkinson nurse specialist here, what they do is that they liaise between the acute hospitals and the community services." (PN2) "[…] what we try to do is we try to communicate to their GPs so that the person that they see out in the community and use the GP as a resource in that to identify community resources that are available to them." (PN1) Nurses also see themselves as coordinators of various professions within the care process: "I choose providing a structured care plan, liaison with the care team, and ensuring that patients' needs are addressed. Of course, the care team for me is absolutely important because we know that the patients' needs are so many and we have to, if we want to ensure that their needs are addressed, we need to have a super strong network." (PN4)
Wishes for improvement
However, the nurses also perceive limitations in fulfilling their prioritized roles and describe improvements that, from their perspective, would help them to better fulfil these roles according to their priorities. For participants from four countries, the structure of care in their own country was a limitation, as it does not foresee their role:
"[…] first I put that there should be a nurse maybe as a responsible for monitoring and coordinating all the clinical processes. […] we don't have it and we have to start from the basis, which means to start with a person, […] as responsible of this process […]." (PN2)
In addition, all participants expressed the desire for a common set of competence standards to make their profession clearly tangible: "[…] the hospital needs to, or the government needs to acknowledge that it ("the role Parkinson Nurse") is important and that even the healthcare members need to acknowledge what this role does and how it fits within the whole picture of the healthcare system." (PN3) "Like a training structure as well. It would be nice because that kind of shows you like, "This is what you need really. These are the skills that are important for a Parkinson Nurse. Whether it be inpatient based or outpatient based, but these are the kind of core skills you need to have." (PN6)
Discussion
The central tenets of future Parkinson's care are multi-professionalism and need-orientation (Achey et al. 2014; Rajan et al. 2020). Research indicates that adopting person-centered approaches can enhance patients' utilization of healthcare services, albeit necessitating a coordinating entity (Bhidayasiri et al. 2020). Notably, nurses perceive themselves as pivotal mediators between patients and the wider care team, an insight crucial for sustaining Parkinson's care (Tenison, James, Ebenezer, and Henderson 2022). The perceptions of these roles align with contemporary recommendations for the organization of multi-professional care strategies in Parkinson's disease (Radder et al. 2019).
However, the expectations of a nurse's role are evolving beyond conventional professional paradigms, influenced by project-based training and practical experiences. In Germany, while training programs for Parkinson Nurses exist, the profession itself is not formally recognized, leading to integration within traditional care frameworks (Mai 2018). Consequently, roles such as coordinating care pathways within institutions or providing home-based care, highlighted in this study, remain underutilized (Mai 2018).
Despite international recognition of the importance of the Parkinson Nurse role (van Munster et al. 2022), its establishment remains limited to selected countries globally. Theoretical frameworks emphasize the significance of identifying one's professional actions and delineating a clear competency profile as foundational elements of a profession (Williams 1998). Findings from this study suggest that expanding competency profiles could enhance awareness and professional recognition.
From a practical standpoint, this study implies that nurses, when equipped with expanded competencies, can redefine their role and contribute to the advancement of forward-looking care models. However, it also underscores the necessity for tailored training programs and practical opportunities to enable nurses to effectively apply these competencies.
Limitations
While the qualitative focus group with nurses from diverse backgrounds working on the same integrated care delivery project offers valuable insights, several potential limitations should be acknowledged. Firstly, the small sample size of six participants may limit the generalizability of findings to a broader population of Parkinson Nurses. Additionally, participants from different countries represent varying healthcare systems and cultural contexts, which may influence their perspectives and priorities, potentially leading to findings that are specific to this group. Furthermore, the qualitative nature of the study may introduce subjectivity and researcher bias into data analysis, though rigorous methodological approaches can mitigate this. Finally, as with any qualitative research, the depth of understanding may be constrained by the time limitations of a single focus group session. These limitations should be considered when interpreting and applying the study findings.
Conclusions
In conclusion, this qualitative study provides critical insights into the perspectives and priorities of Parkinson Nurses for individuals with Parkinson's disease. These findings hold significant implications for the delivery of healthcare services and resources. The nurses' emphasis on education, multi-professional collaboration, need-orientation, and their roles as mediators and coordinators underscores the importance of a comprehensive and patient-centered approach in Parkinson's care. By recognizing and integrating these priorities into healthcare systems, organizations can harness the potential to further develop healthcare resources.
Although limitations such as the small sample size exist, these findings signal to healthcare providers and policy makers the need to re-evaluate and adapt their approaches to Parkinson's care, and they call for the recognition of professional role development as an important part of healthcare development.
Ethical considerations
According to the guidelines of the German Research Foundation, a separate ethical approval was not required for this sub-study of the iCare-PD project. The iCare-PD project was approved by the responsible ethics committee of the University of Marburg (reference: 164/19).
Both researchers possess extensive expertise in qualitative research, specifically in Parkinson's care and Parkinson's Nursing. Analysis involved a multi-stage process of category development and text segment coding. Initially, data were deductively coded based on the research questions, forming categories reflective of factors shaping Parkinson Nurses' perspectives. The initial data analysis was informed by two research questions: (1) What care priorities do Parkinson Nurses define in terms of person-centred Parkinson's care? (2) How do Parkinson Nurses perceive their role in person-centred Parkinson's care? Inductive coding further refined these categories. Subsequent data verification and credibility checks were conducted collaboratively, with discrepancies resolved through discussion. MAXQDA (version 2020) was used to facilitate qualitative coding.
Fig. 1.
Fig. 1. Illustration of the inspiration card method for stimulating group discussion. The overall topics correspond to the theory of person-centered Parkinson's care according to van Halteren et al. (2020), and the roles fulfilled by a Parkinson Nurse are adapted from the review by van Munster et al. (2022). | 2024-06-14T15:05:53.629Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "273890b7f5301bfe5c518ef9a0ff97b4e8baa50b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "98a1a1f2d31a9f829c2cce4d82aecda24f13edaa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
268784237 | pes2o/s2orc | v3-fos-license | Geometry concept on monument tugu malang city
This study aims to identify the concepts of geometry contained in the monument of Malang City. This is qualitative research with an ethnographic approach. The data collection techniques used were observation and literature study. The study concluded that the Malang City monument contains mathematical concepts that can be applied in learning activities, namely the concepts of flat and spatial shapes.
INTRODUCTION
Educational learning standards are used by the government as a reference in evaluating educators and educational institutions in carrying out assessments of student learning outcomes in schools (Kusainun, 2020). Mathematics learning standards based on Permendiknas No. 23 of 2006 do not focus on student understanding in learning activities alone (Cahyani, 2016). The standards for the mathematics learning process consist of five components: understanding mathematical concepts, mathematical reasoning, mathematical communication, mathematical connections, and mathematical problem solving (Nasution, 2008). With so many achievement standards in the mathematics learning process, support and concept development from various aspects are needed so that students can build their ability to understand concepts.
Mathematics is one of the subjects that emphasizes understanding concepts and applying logic (Astriandini & Kristanto, 2021). One of the various learning materials is geometry (Hada et al., 2021). Geometry is material that presents knowledge related to shapes and their dimensions. In this regard, the ability to analyze geometry concepts is part of spatial ability. In addition, geometry is one of the important materials in learning mathematics, which is supported by the variety of contexts in human life that are always related to geometry.
Based on Puspendiknas data on the 2019 national exam, students' test scores on geometry material were the lowest (Sari & Roesdiana, 2019). In addition, students' difficulties in solving geometry problems lie in the aspects of using concepts, using principles, and solving verbal problems (Fauzi & Arisetyawan, 2020). According to NCTM, concept understanding is the most important aspect in learning mathematics, because understanding concepts makes it easier for students to master several other mathematical abilities, such as problem-solving skills (Radiusman, 2015). It can therefore be concluded that the main problem that must be resolved in learning geometry is concept understanding.
The importance of understanding mathematical concepts has been formulated by the National Council of Teachers of Mathematics (NCTM) and the 2013 Curriculum: it is a very important ability for students to have in learning mathematics, and understanding mathematical concepts is the key to successful learning (Mulianty et al., 2018). By having the ability to understand mathematical concepts well, students will find it easier to solve a given problem, because understanding concepts is the key to understanding principles and theories, especially if students want to master mathematics and, in particular, higher-level thinking skills.
The ability to understand concepts is very important in determining student learning outcomes in mathematics. In line with research conducted by Effendi (2017), student learning outcomes in mathematics learning are strongly influenced by students' ability to understand concepts, as shown by meeting the corresponding indicators. Jusniani (2018) says that a person can be said to understand a concept if he or she is able to construct understanding in his or her own language, not just memorize, but also distinguish and classify objects into examples and non-examples, and can find and explain the relationship of a concept with other concepts given earlier. Therefore, it can be said that understanding of mathematical concepts must be improved, because it is an important component in achieving learning objectives and improving student achievement.
Learning concept understanding by integrating cultural values is needed in the era of globalization, because learning integrated with local culture can create mathematics learning that is closer to students and therefore more meaningful. Learning with contextual concepts of local culture is one form of teacher innovation in presenting mathematical concepts related to local cultural contextual problems (Tandiseru, 2015). Mathematics learning is one of the subjects into which culture can be integrated effectively. Culture contains human works, such as knowledge, arts, laws, beliefs, and so on (Fajarisman et al., 2021). Indonesia is a country that has a variety of cultures because of the diverse mindsets and habits of the people, so there is also a lot of cultural diversity found here, ranging from dances, languages, traditional houses, clothes, and so on that show the characteristics of the local area (Misbahul Munir, 2021). The Indonesian state therefore has a great opportunity to utilize culture as an aspect of improving mathematics learning in schools.
Malang is one of the cities in the province of East Java, Indonesia (Anam, 2017). As an area with a high population density, Malang City has characteristics that are well known in other parts of Indonesia. One of them is the Tugu monument, which is the icon of Malang city itself (Nabila & Kurniawan, 2021). Given its distinctive architectural design, the researchers were interested in conducting a study exploring the Tugu monument in Malang in terms of its geometric concepts. This study aims to explain the concepts of geometry found in the Tugu monument of Malang City.
METHOD
This research uses a descriptive qualitative method to describe the existing data on the Tugu monument in Malang city, which is presented in the form of words. The approach used is ethnographic, that is, research aimed at exploring socio-cultural contexts through field observations of the object of research. Field observation, documentation, and literature study were conducted to collect data for the discussion. To validate the collected data, the researchers used triangulation; the type of triangulation used is data source triangulation. The research instrument is an observation guideline, which acts as a benchmark during the data collection process. For data analysis, the researchers carried out several stages, namely data presentation, data reduction, and conclusion drawing.
The concept of geometry on the Malang monument
Geometry is one of the branches of mathematics that studies points, spaces, lines, and their various properties and characteristics (Musriroh et al., 2021). As one form of the application of geometry in real life, the Malang city monument contains several geometry concepts found by the researchers through observation. The Malang city monument is the first independence monument built in Indonesia. The construction of the Malang City Monument signifies that the former Dutch administrative center had come fully under the control of the Republic of Indonesia.
3.1.1 Flat shapes
Flat shapes are shapes that have area and perimeter (Wulandari, 2017). There are various kinds of flat shapes that need to be studied, namely squares, rectangles, circles, and so on. Each flat shape has sides and an area that can be calculated. The concepts of flat shapes found on the Malang city monument are as follows:
Rectangle
A rectangle is a quadrilateral flat shape that has two pairs of parallel sides, and its intersecting sides form a 90° angle (Nuryami & Apriosa, 2024). The concept of the rectangle is found in the paintings and carvings depicted on the monument, as follows:
Figure 2. Painting on the Malang city monument
In this painting, we can see the concept of a flat shape, namely the rectangle. We can model the painting on the Malang city monument as a rectangle ABCD. The characteristics of this flat shape are as follows:
i. The opposite sides are parallel and equal in length. In the painting of the Malang city monument, the researchers found that the length of AB = DC, while the length of BC = AD.
ii. Each angle is equal to 90°. Based on the painting, ∠A = ∠B = ∠C = ∠D = 90° (right angles).
iii. The diagonals are equal in length. The diagonals in the painting of the Malang city monument satisfy AC = BD.
iv. Every pair of intersecting sides is perpendicular. The sides in the painting of the Malang city monument form perpendicular lines: AB ⊥ BC, BC ⊥ CD, CD ⊥ DA, and DA ⊥ AB.
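As a brief illustrative addition for classroom use (not drawn from the original observation data; the symbols p and ℓ denote generic side lengths, not measured dimensions of the painting), the rectangle concept identified here connects directly to the standard perimeter and area formulas:
\[ K = 2(p + \ell), \qquad L = p \cdot \ell, \]
where K is the perimeter (keliling) and L is the area (luas). For example, a rectangular panel with p = 4 units and ℓ = 3 units would have perimeter K = 2(4 + 3) = 14 units and area L = 4 · 3 = 12 square units.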
The historical value found in this painting is that the Malang city monument is a form of appreciation given to the heroes who achieved freedom from the Dutch. For this reason, the painting is carved onto and displayed on the monument.
Circle
A circle is a flat shape formed by the set of points equidistant from a certain point, its center (Soedyarto & Maryanto, 2008). The concept of a circle is found in the fence that closes the entrance to the Malang city monument.
Figure 3. Fence gate of the Tugu monument of Malang City
On this fence, we can model the circle in the following form. The characteristics of a circle are as follows:
i. It has an angle of 180°: in the Malang city monument building, the researchers found that the related fence contains a plane that spans an angle of 180° and is perfectly round.
ii. The diameter divides the figure into equal parts: if a straight line is drawn through the center point, the circle on the Malang city monument building is divided into two equal regions.
iii. The radius connects the center point to a point on the arc.
iv. It has infinitely many rotational symmetries.
Space shapes
A space shape is a regular three-dimensional object that has edges, faces, and corner points (Subagyo et al., 2015). There are several space shapes that can be studied, including cubes, blocks, pyramids, prisms, and tubes. Each space shape has its own characteristics. Looking at the Malang city monument, several forms of space shapes can be studied there, including the following:
Tube
The tube is a space shape bounded by two congruent, parallel circular faces and one curved face; the base plane and the top plane are circles with the same radius, and the height of the tube is the distance between the center of the base circle and the center of the top circle (Wulandari & Anugrahen, 2021). The concept of the tube in the Malang monument is as follows:
The center of the monument
The center of the Malang monument is shaped like a tube and is decorated with several painting components that carry philosophical meaning. Among them are paintings of the five major islands of Indonesia, palms, the text of the proclamation, heroes, the Pancasila symbol, and rhinos. Each painting on the center of the Malang monument carries its own meaning.
The relief of the proclamation text signifies the independence of the Republic of Indonesia from the Netherlands; the relief of the five major Indonesian islands signifies the integrity of the Indonesian state; the Pancasila relief signifies Pancasila as the basic ideology used by Indonesia; and the relief of heroes, together with several other reliefs, represents things of which the Indonesian state is proud and which must not be disturbed by any party.
Triangular prism
A prism is a space shape bounded by two parallel planes and by other planes that intersect them along parallel lines (Suharjana, 2008). There are various types of prisms, named according to the flat shapes that form their bases: triangular prisms, quadrilateral prisms, and many more. The researchers found the concept of a triangular prism in the padma component of the Malang city monument. A padma is a typical Indonesian building structure, similar to a temple, that resembles the shape of a lotus flower. For more details, please see Figure 6 below:
CONCLUSION
Based on the results of this study, it can be concluded that: 1. There are geometric concepts in the Malang City Tugu building, namely the geometry of flat shapes and space shapes. 2. The flat-shape concepts contained in the Malang City Monument are the rectangle and the circle. 3. The space-shape concepts contained in the Malang city monument are the tube and the triangular prism.
Figure 7. Base of the Tugu monument of Malang City. On the Malang monument, we can model a triangular prism with the following image, | 2024-03-31T15:56:32.717Z | 2024-03-26T00:00:00.000 | {
"year": 2024,
"sha1": "582f39b1cef66acc3c57b3a8bff92f0c3d44b4fc",
"oa_license": "CCBYNCSA",
"oa_url": "https://ejournal.uin-malang.ac.id/index.php/ijtlm/article/download/24676/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b80770f6c11f7ceada796b7920b305234fa643aa",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
6466602 | pes2o/s2orc | v3-fos-license | On Weight Matrix and Free Energy Models for Sequence Motif Detection
The problem of motif detection can be formulated as the construction of a discriminant function to separate sequences of a specific pattern from background. In computational biology, motif detection is used to predict DNA binding sites of a transcription factor (TF), mostly based on the weight matrix (WM) model or the Gibbs free energy (FE) model. However, despite the wide applications, theoretical analysis of these two models and their predictions is still lacking. We derive asymptotic error rates of prediction procedures based on these models under different data generation assumptions. This allows a theoretical comparison between the WM-based and the FE-based predictions in terms of asymptotic efficiency. Applications of the theoretical results are demonstrated with empirical studies on ChIP-seq data and protein binding microarray data. We find that, irrespective of underlying data generation mechanisms, the FE approach shows higher or comparable predictive power relative to the WM approach when the number of observed binding sites used for constructing a discriminant decision is not too small.
1. Introduction. Transcription factors (TFs), a class of proteins, regulate gene transcription through their physical interactions with particular DNA sites. Such a DNA site is called a transcription factor binding site (TFBS), which is usually a short piece of nucleotide sequence (e.g., 'CATTGTC'). Typically, a TF can bind different sites and regulate a set of genes. A key observation is that sites of the same TF share similarity in their sequence composition, which is characterized by a motif. Since gene regulation has always been an important problem in biology, many computational methods have been developed to predict whether a given DNA sequence can be bound by a TF. Please see Elnitski et al. (2006), Ji and Wong (2006), and Vingron et al. (2009) for recent reviews on relevant methods.
The prediction of TFBS's considered in this article is formulated as a classification problem. Denote by w the width of the binding sites and code the four nucleotide bases, A, C, G and T, by a set of positive integers I = {1, · · · , J} (J = 4). Suppose that we have observed a sample of labeled sequences of length w, D n = {(Y k , X k )} n k=1 , where X k ∈ I w and Y k ∈ {0, 1} indicating whether X k is bound by the TF (Y k = 1) or not (Y k = 0). We call D + n = {X k : Y k = 1} observed binding sites (or motif sites) and D − n = {X k : Y k = 0} background sites (or background sequences). Then, motif detection is to construct a discriminant function from D n to predict the label of any new sequence x ∈ I w .
Most of the existing computational methods for motif detection can be classified into two groups. The starting point of the first group is the sequence specificity of binding sites, which is often summarized by the position-specific weight matrix (WM). For early developments of WM, please see Stormo (2000). Under the WM model, each nucleotide (letter) in a binding site is assumed to be generated independently from a multinomial distribution on {A, C, G, T}. This model has been widely used in search of TFBS's (e.g., Hertz and Stormo, 1999;Kel et al., 2003;Rahmann et al., 2003;Turatsinze et al., 2008), de novo motif finding (e.g., Stormo and Hartzell, 1989;Lawrence et al., 1993;Bailey and Elkan, 1994;Roth et al., 1998;Liu et al., 2002) and many other works reviewed in Vingron et al. (2009). The second group aims at modeling physical binding affinity between a TF and its binding sites via the concept of the Gibbs free energy (FE) or binding energy (e.g., Berg and von Hippel, 1987;Stormo and Fields, 1998;Gerland et al., 2002;Kinney et al., 2007). Assuming that each nucleotide in a DNA sequence of length w (w-mer) contributes additively to the interaction with the TF, this approach often leads to a regression-type model for the conditional distribution of binding affinity given a piece of nucleotide sequence (e.g., Djordjevic et al. 2003;Foat et al. 2006). This group of methods have tight connections with predictive modeling approaches to gene regulation, reviewed in Bussemaker et al. (2007), which can be regarded as a natural generalization to the free energy framework (Zhou and Liu, 2008). Although the standpoints are different, the two groups of approaches are in some sense closely related. They often give similar discriminant functions for predicting TFBS's, and there are many FE-based methods that use a weight matrix to approximate Gibbs free energy (e.g., Granek and Clarke, 2005;Roider et al., 2007).
In spite of the fast methodological development on the WM and the FE models, there is still a lack of solid theoretical analysis to compare model assumptions, parameter estimations and response predictions of the two approaches. Such theoretical analysis can provide insights into these methods by seeking answers to a series of questions. For example, what are the common and distinct assumptions between the WM and the FE models, what is the relative performance between the two approaches in predicting TFBS's given a certain data generation mechanism, and how to calculate their predictive error rates when the size of observed sample D n becomes large? Without answering these questions, one may find it difficult to understand the nature of these methods and cannot extract the full information contained in extensive empirical comparisons between the two approaches.
In this article, we compare model assumptions and parameter estimations of typical WM and FE approaches, derive asymptotic error rates of their predictions under different data generation models, and perform comparative studies on large-scale binding data. The article is organized as follows. In Section 2 we review the basic models of the two approaches. Asymptotic error rates of prediction procedures based on these models are derived and analyzed in Section 3. Computational approaches are developed in Section 4 for practical applications of the theoretical results. Numerical analysis and biological applications are presented in Sections 5 and 6, respectively, with a comparison of the WM-based and the FE-based predictions on ChIP-seq data and protein binding microarray data. The paper concludes with discussions in Section 7. Some mathematical details are provided in Appendices. Although presented in the specific context of motif detection, the results in this article are generally applicable to the modeling and classification of categorical data.
2. Models. Let c be a scalar, u = (u 1 , · · · , u J ) be a (column) vector, v = (v 1 , · · · , v w ) ∈ I w , and A = (a ij ) w×J and B = (b ij ) w×J be two w × J matrices. For notational ease, we define c ± A : by removing the kth row from A, for k = 1, · · · , w. Symbols ' L →' and ' P →' are used for convergence in law and in probability, respectively.
Let θ 0 = (θ 01 , · · · , θ 0J ) be the cell probabilities (probability vector) of a multinomial distribution for i.i.d. background nucleotides, where J j=1 θ 0j = 1 and θ 0j > 0 for j = 1, · · · , J. Since θ 0 can be accurately estimated from a large number of genomic background sequences, we assume that it is given in the following analyses. Throughout the paper, we assume that the cell probabilities of any multinomial distribution are bounded away from 0.
2.1. The weight matrix model. Let X = (X_1, · · · , X_w) ∈ I^w be a sequence of length w. In the weight matrix model (WMM), we assume that X is generated from a mixture distribution. Let Y ∈ {0, 1} label the mixture component. With probability q_0, Y = 0 and X is generated from an i.i.d. background model (with parameter) θ_0, that is, P(X | Y = 0) = θ_0(X). With probability q_1 = 1 − q_0, Y = 1 and X is generated from a weight matrix Θ = (θ_ij)_{w×J} = (θ_1, · · · , θ_w)^t, where θ_i = (θ_i1, · · · , θ_iJ) is a probability vector for i = 1, · · · , w and X_i is independent of the other X_k (k ≠ i). To be specific, P(X | Y = 1) = Θ(X). From this model the log-odds ratio of Y given X is

log[P(Y = 1 | X) / P(Y = 0 | X)] = log(q_1/q_0) + Σ_{i=1}^w log(θ_{iX_i}/θ_{0X_i}).     (1)

In the WM-based prediction, q_1 is typically fixed by prior expectation or determined by the relative cost of the two types of errors (false positive vs false negative). Effectively, we assume that q_1 is given. Let

β_0 = log(q_1/q_0) and β_ij = log(θ_ij/θ_{0j}), for i = 1, · · · , w and j = 1, · · · , J,     (2)

which defines an additive discriminant function

h(X) = β_0 + Σ_{i=1}^w β_{iX_i}     (3)

to predict Y given X, i.e., to predict whether the sequence X can be bound by the TF. The label Y will be predicted as 1 if h(X) ≥ 0 and 0 otherwise. This prediction can be regarded as a naive Bayesian classifier. Given observed binding sites D^+_n, we estimate Θ by the maximum likelihood estimator (MLE) Θ̂^m = (θ̂^m_1, · · · , θ̂^m_w)^t and substitute it in equation (2) to obtain β̂^m. Here, the superscript 'm' stands for estimators based on the WMM. Let dθ^m_i = θ̂^m_i − θ_i, which is an infinitesimal in the order of 1/√n as n → ∞. The standard asymptotic theory (e.g., Ferguson 1996) implies that

√n dθ^m_i →L N(0, Σ^m_i), for i = 1, · · · , w,     (4)

and that √n dθ^m_i, i = 1, · · · , w, are mutually independent. The (j, k)th element of the covariance matrix Σ^m_i is (δ_jk θ_ij − θ_ij θ_ik)/q_1, where δ_jk is the Kronecker delta symbol and 1 ≤ j, k ≤ J. From equation (2) we have dβ^m_ij = dθ^m_ij/θ_ij, which leads to the following limiting distribution,

√n dβ^m_i →L N(0, Σ^β_i), with the (j, k)th element of Σ^β_i equal to (δ_jk/θ_ij − 1)/q_1,     (5)

with √n dβ^m_i mutually independent for i = 1, · · · , w.
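To make the estimation and prediction steps above concrete, the following short script (our own illustrative sketch with made-up sequences, not code from the paper) computes the MLE of the weight matrix from a set of observed binding sites and evaluates the naive-Bayes discriminant h(X):

```python
import numpy as np

BASES = "ACGT"  # I = {1, ..., J} with J = 4

def estimate_wm(binding_sites, pseudo=0.5):
    """MLE of the position weight matrix Theta from observed binding sites,
    with a small pseudo-count added for numerical stability."""
    w = len(binding_sites[0])
    counts = np.full((w, 4), pseudo)
    for site in binding_sites:
        for i, base in enumerate(site):
            counts[i, BASES.index(base)] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def wm_discriminant(x, theta, theta0, q1):
    """h(X) = beta_0 + sum_i log(theta_{i,X_i} / theta0_{X_i})."""
    beta0 = np.log(q1 / (1.0 - q1))
    idx = [BASES.index(b) for b in x]
    return beta0 + sum(np.log(theta[i, j] / theta0[j]) for i, j in enumerate(idx))

# toy usage with hypothetical binding sites
sites = ["CATTGT", "CATTGC", "CGTTGT", "CATTGT"]
theta = estimate_wm(sites)
theta0 = np.array([0.25, 0.25, 0.25, 0.25])
print(wm_discriminant("CATTGT", theta, theta0, q1=1 / 201))  # predict Y = 1 if >= 0
```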
2.2. The free energy model. Let F, X = (X_1, · · · , X_w) and FX be a TF, a DNA sequence, and the corresponding TF-DNA complex, respectively. The process of the TF-DNA interaction can be described by the chemical reaction F + X = FX. The concentrations of the three molecules at chemical equilibrium, [F], [X] and [FX], are determined by the association constant K_a(X), that is,

K_a(X) = [FX] / ([F][X]) = exp{−∆G(X)/(RT)},     (6)

where ∆G(X) is the Gibbs free energy (FE) for the interaction of F with X, R is the gas constant and T the temperature. We regard RT > 0 as a constant. Suppose that the contribution of a single nucleotide X_i to the FE is additive (von Hippel and Berg, 1986; Benos et al., 2002), so that we may write −∆G(X) as a sum of position-wise contributions. To avoid non-identifiability in estimation, we take S_ref = (s_1, · · · , s_w) as a reference sequence to determine a baseline level of the FE, and define

[−∆G(X) + ∆G(S_ref)] / (RT) = Σ_{i=1}^w β̃_{iX_i}, with β̃_{is_i} ≡ 0,     (7)

for every X ∈ I^w. Let Y be the indicator for whether X is bound by the TF at chemical equilibrium. From the physical meaning of concentration,

P(Y = 1 | X) = [FX] / ([X] + [FX]).     (8)

Combining equations (6), (7) and (8) leads to an additive discriminant function for this free energy model (FEM),

h̃(X) = log[P(Y = 1 | X) / P(Y = 0 | X)] = β̃_0 + Σ_{i=1}^w β̃_{iX_i},     (9)

where β̃_0 = log[F] − ∆G(S_ref)/(RT). Similarly as for the WMM, we assume that β̃_0 is fixed by prior or a desired cost. Furthermore, it is conventional to assume that X is sampled from an i.i.d. background model θ_0, i.e., P(X) = θ_0(X). The data generation process of the FEM has a clear biological meaning. Suppose that we have sampled n nucleotide sequences of length w, {X_k ∈ I^w}_{k=1}^n, from the genomic background θ_0. We mix these sequences with TF molecules in a container where the concentration of the TF is held as a constant. At chemical equilibrium we label the sequences X_k bound by the TF as Y_k = 1 and otherwise Y_k = 0. The output of this experiment is the labeled sample D_n = {(Y_k, X_k)}_{k=1}^n. Although there exist other models based on binding free energy, we focus on this basic model in this paper, which makes a theoretical analysis relatively clean while capturing main characteristics of FE-based approaches.
Given D_n, the MLE of β̃, denoted by β̃^f = β̃ + dβ̃^f with the superscript 'f' for FE-based estimators, can be calculated by the standard logistic regression. Note that β̃^f maximizes the conditional likelihood determined by equation (9),

P(Y | X, β̃) = exp{Y h̃(X)} / [1 + exp{h̃(X)}].     (10)

Similar to the results in Efron (1975), it is not difficult to demonstrate that β̃^f is consistent for β̃ with asymptotic normality,

√n dβ̃^f →L N(0, Cov^f(√n dβ̃^f)),     (11)

where dβ̃^f is regarded as a vector of (J − 1)w dimensions (recall that β̃_{is_i} = β̃^f_{is_i} ≡ 0 for i = 1, · · · , w). The asymptotic covariance matrix is

Cov^f(√n dβ̃^f) = [E_{θ_0}{p_0(X) p_1(X) C_X C_X^t}]^{−1},     (12)

where p_y(X) = P(Y = y | X) for y = 0, 1, C_X is a (J − 1)w-dimensional column vector coding each X_i as a factor of J levels, and E_{θ_0} is taken with respect to (w.r.t.) the background model θ_0 that generates the sequence X.
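In practice the FE-based estimate can be obtained with any standard logistic-regression routine once each position is coded as a factor relative to the reference sequence. A minimal sketch follows; the reference sequence, the simulated labels, and the use of scikit-learn here are our own illustrative choices, not specifications from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

BASES = "ACGT"

def code_factors(x, ref):
    """C_X: for each position i, J-1 dummy variables relative to the
    reference nucleotide ref[i] (the reference level gets all zeros)."""
    cols = []
    for i, base in enumerate(x):
        others = [b for b in BASES if b != ref[i]]
        cols.extend([1.0 if base == b else 0.0 for b in others])
    return cols

ref = "AAAAAA"                      # hypothetical reference sequence S_ref
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(BASES), 6)) for _ in range(2000)]
y = rng.integers(0, 2, size=2000)   # placeholder labels; real labels come from binding data

X = np.array([code_factors(s, ref) for s in seqs])
fit = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)  # large C ~ nearly unpenalized MLE
beta0_hat, beta_hat = fit.intercept_[0], fit.coef_[0]     # estimates of the constant and coefficients
```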
2.3. Comparison. Given (β_0, β) in the WMM and the reference sequence S_ref in the FEM, if we let

β̃_0 = β_0 + Σ_{i=1}^w β_{is_i} and β̃_ij = β_ij − β_{is_i},     (13)

for i = 1, · · · , w, j = 1, · · · , J, then the two models have the same conditional distribution [Y | X] (3, 9) for any X. To simplify notations, we shall denote the decision function (9) in the FEM by h(X) = β̃_0 + Xβ̃ hereafter. Except for this conditional distribution, other model assumptions are different. The WMM assumes that the nucleotides in X are generated independently given its label Y. But this is not true for the FEM, in which the conditional probability of X given Y is

P(X | Y = y) = θ_0(X) p_y(X) / P(Y = y), with P(Y = y) = Σ_{x ∈ I^w} θ_0(x) p_y(x).     (14)

Since equation (14) cannot be written as a product of functions of X_i, this model implicitly allows dependence among X_1, · · · , X_w. Consequently, the FEM may account for some observed nucleotide dependences within a motif, such as those reported in Bulyk et al. The different model assumptions lead to different procedures for parameter estimation, in particular the coefficients β (β̃). As discussed in Sections 2.1 and 2.2, β̂^m and β̃^f are consistent under the WMM and under the FEM, respectively. Since β̃^f maximizes the conditional likelihood P(Y | X, β̃) (10), which is identical between the two models, it is also consistent for β under the WMM up to the translation (13). However, β̃^f is expected to be less efficient than β̂^m in prediction if the WMM corresponds to the underlying data generation process, due to the ignorance of the information on Θ contained in the marginal likelihood P(X | Θ, WMM) (to be discussed in detail in Section 3.1). Conversely, if data are generated by the FEM, β̂^m is biased and no longer consistent. We will analyze the bias and the resulting incremental error rate in later sections.
Theoretical results. For both WMM and FEM, the ideal decision function
h(x) is obtained with the true parameters of the respective models and the corresponding ideal error rate Denote by the ideal error rate for h(x) given X = x. Consider a decision functionĥ(x) estimated from D n . Given any x for which h(x)ĥ(x) < 0, the incremental error rate beyond Then the expectation of the total incremental error rate forĥ is where 1(·) is the indicator function. Please note thatĥ, constructed from a sample of size n, is a random function. Let ∆ĥ( In what follows, we will derive two theorems on E[∆R(ĥ)] under different assumptions for ∆ĥ(x). As we will see, the asymptotic error rates of the WM and the FE procedures under the data generation models discussed in this paper can all be calculated based on the two theorems. Suppose that, for every x, where Φ is the cdf of the standard normal distribution N (0, 1). Let and x * be the corresponding minimum. Note that ∆R(x) = 0 when h(x) = 0. Thus, where α(ĥ) determines the rate of convergence. Using the theory of large deviations, we obtain: Letĥ a andĥ b be two estimated decisions constructed from samples of size n a and n b , respectively. Suppose that both of them satisfy the condition in Theorem 1. We define the asymptotic relative efficiency (ARE) ofĥ a with respect toĥ b by ARE(ĥ a ,ĥ b ) = α(ĥ a )/α(ĥ b ), which is the limit ratio n b /n a required to achieve the same asymptotic performance.
where µ(ĥ, x) denotes the asymptotic bias ofĥ(x), then simple derivation from equation (17) gives that as n → ∞, where sign(y) is the sign of y with sign(0) ≡ 0.
We ignore the case {x : h(x) + µ(ĥ, x) = 0} which practically never happens. The set B(ĥ) is the collection of x for which the estimated decisionĥ gives a different predicted label from the ideal decision h as n → ∞. Note that E[∆R(ĥ)] does not vanish if B(ĥ) is nonempty. Thus, the incremental percentage over the ideal error rate, E[∆R(ĥ)]/R * , is an appropriate measure of the predictive performance ofĥ.
In the remainder of this section, we derive and compare the error rates of the WM and the FE procedures. From Sections 3.1 to 3.4, we assume that the constant term β 0 (β 0 ) is fixed to its true value. The results are generalized to situations where the constant is mis-specified in Section 3.5. The computation of α(ĥ) (19) and E[∆R(ĥ)] (20) will be discussed in Section 4.
3.1.
Error rates under WMM. In this subsection we assume that the underlying data generation process is given by the WMM. Since bothβ m andβ f are consistent with asymptotic normality under the WMM, we may uniformly denote their decision functions byĥ( being the asymptotic variance. The superscript 'm' indicates the WMM as the data generation model. Let E[∆R m (β)] be the expected incremental error rate ofĥ indexed byβ. Following Theorem 1, as n → ∞. Consequently, the ARE of the FE procedure w.r.t the WM procedure, The decision function of the WM procedure is constructed withβ m (Section 2.1). Note that xdβ = i dβ ix i is a summation of w dβ ij 's, each from a different dβ i . The limiting distribution of √ ndβ m ij (5) and the mutual independence among dβ m i imply that the asymptotic variance of √ nxdβ m is and consequently, Suppose that we have chosen (s 1 , · · · , s w ) as the reference sequence in the FE procedure. Defineβ 0 andβ from the parameters (β 0 , β) of the WMM by equation (13). Then the FE-based estimatorβ f is consistent forβ with asymptotic normality. Let dβ f =β f −β. Similar to equation (12), the asymptotic covariance matrix of where the expectation is taken w.r.t. the marginal distribution of X under the WMM (15). Thus the covariance matrix can be written as where the expectation E θ 0 averages over X ∈ I w generated from the background model θ 0 . Based on equation (22), one can calculate the variance of √ nxdβ f for every x and determine the convergence rate α m (β f ) of the expected incremental error rate Because the estimation ofβ f is only based on the conditional distribution [Y | X] whileβ m is estimated from the joint distribution of Y and X, we expectβ f to be less efficient in prediction with α m (β f ) < α m (β m ). We will conduct a numerical study in Section 5 to evaluate ARE m (β f ,β m ) on 200 transcription factors to confirm our conclusion. Here we demonstrate the lower efficiency ofβ f by the loss of Fisher information in estimating an individual θ ij from the conditional likelihood only. For simplicity, suppose that Θ [−i] is given and collapse X i into two categories, X i = j and under the WMM, the loss of information equals the Fisher information on θ ij contained in the marginal likelihood P (X | Θ), denoted by I(θ ij | X). Let I(θ ij | X, Y ) be the Fisher information on θ ij given X and Y jointly. We define as the fraction of the loss of information on θ ij in the conditional likelihood P (Y | X, Θ).
A proof of this proposition is given in Appendix A. If one chooses to include an equal number of background sites (Y = 0) and binding sites (Y = 1) in logistic regression to estimateβ f , which effectively specifies q 0 = q 1 = 0.5 by design, then this lower bound may be substantial. For example, with a uniform background distribution θ 0j = 0.25 for j = 1, · · · , 4, the range of B(q 1 , θ ij , θ 0j ) is between 20% and 55% for most typical values of θ ij (Table 1).
WMM with Markov background.
We generalize the background model to a Markov chain, which often represents a better fit to genomic background in high organisms. We assume that given Y = 0, X is generated by a first order Markov chain with a transition probability matrix ψ 0 = (ψ 0 (x, y)) J×J where x, y ∈ I. For any is interpreted as the probability of x 1 under the stationary distribution of the Markov chain. The ideal decision function under this model is where β 0 = log(q 1 /q 0 ) and the subscript '1' [in h 1 (x) and ∆R m 1 (26)] indicates a quantity whose definition involves a Markov background model. Since ψ 0 can be accurately estimated with sufficient genomic background sequences, we assume that it is given. With the MLEΘ m from observed binding sites, the WM procedure constructs a decision whose expected incremental error rate converges to zero exponentially fast as n → ∞, following Theorem 1.
With a slight abuse of notations, let us denote by θ 0 the probability vector of the stationary distribution of the Markov chain, which is also the marginal distribution of any nucleotide X i in a background site. We still define β 0 and β by equation (2) with θ 0 being the stationary probabilities, and translate β 0 and β via a reference sequence toβ 0 andβ (13). Let (Y, X) be a sample from the WMM with Markov background. If the dependence among neighboring nucleotides in a background site is ignored, the conditional likelihood P (Y | X,β), parameterized byβ, is then given by the same expression in equation (10). Because the FE-based estimatorβ f maximizes this conditional likelihood, it is standard to show thatβ f P →β and is asymptotically normal. Letĥ f (x) =β 0 + xβ f denote the estimated decision function of the FE procedure. As n → ∞, Let ∆ĥ f (x) =ĥ f (x) − h 1 (x) be the deviation ofĥ f (x) from the ideal decision (24). Comparing equations (24) and (25) gives the asymptotic bias, Due to the asymptotic normality ofβ f , we have is the corresponding asymptotic variance. Under this model, Following Theorem 2 with µ(ĥ f , x) = b(x), the expected incremental error rate The incremental percentage over the ideal error rate, E[∆R m 1 (β f )]/(R m 1 ) * , is appropriate for comparing the FEbased prediction with the WM-based prediction whose expected error rate converges to (R m 1 ) * . A general expression for R * is given in equation (16) which, under the WMM with Markov background, is written as 3.3. Error rates under FEM. We now analyze asymptotic error rates of the two procedures regarding the FEM as the underlying data generation mechanism.
The FE-based estimatorβ f is consistent forβ under the FEM. The asymptotic normality of √ ndβ f (11,12) implies that be the expected incremental error rate of the FE procedure under the FEM. From Theorem 1, we have Denote by θ f i = (θ f i1 , · · · , θ f iJ ) the probability vector of the conditional distribution [X i | Y = 1] under the FEM, i.e., for i = 1, · · · , w, and call Θ f = (θ f ij ) w×J the weight matrix. Recall that the WMbased estimatorβ m is obtained by estimating θ f i individually from observed binding sites D + n and then transforming the estimates via equation (2). Denote the estimated weight matrix byΘ m . Since the data are generated by the FEM,Θ m P → Θ f and √ ndΘ m = √ n(Θ m − Θ f ) follows a multivariate normal distribution asymptotically, similar to (4), but dθ m i and dθ m k may be correlated (1 ≤ i = k ≤ w). Given that the coefficientsβ in the FEM are defined w.r.t. a reference sequence, we transformΘ m tô where s i is the ith nucleotide of the reference sequence S ref . Let ∆β m =β m −β be the deviation ofβ m = (β m ij ) w×J . To obtain its asymptotic distribution, we determine the cell probability θ f ij (29) from equation (14), that is, For i = 1, · · · , w and j = 1, · · · , J, we define and for all i and j as n → ∞. Thus, δ = (δ ij ) w×J is the asymptotic bias ofβ m . From the asymptotic normality of √ ndΘ m , we see that √ n(∆β m − δ) follows a multivariate normal distribution with mean 0 and finite covariance matrix as n → ∞. Note that this multivariate normal distribution is defined on a (J − 1)w-dimensional space, since ∆β m is i = δ is i ≡ 0 for i = 1, · · · , w. Consider the WM-based decision functionĥ m (x) =β 0 + xβ m = h(x) + x∆β m . The above derivation shows that √ n(x∆β m − xδ) where the variance is determined by the covariance matrix of ∆β m . Following Theorem 2, the expectation of the total incremental error rate of the WM procedure Recall that p y (X) = P (Y = y | X), i.e., p y (X) = e yh(X) e h(X) + 1 , for y = 0, 1.
FEM with Markov background.
Next, we generalize the FEM to Markov background and assume that any sequence X ∈ I w is generated marginally by a Markov chain. Consistent with Section 3.2, we denote by ψ 0 = (ψ 0 (x, y)) J×J the transition probability matrix of the Markov chain.
It is trivial to see that, with the Markov background model, the ideal decision is still h( (28) remains valid for the FE-based prediction. On the other hand, if we proceed with the WM procedure, the expected incremental error rate Here δ(x) denotes the asymptotic bias of the WM-based decision function for x. A detailed derivation of equation (35) and the bias δ(x) is provided in Appendix B. In analogy to the FEM with i.i.d. background, E[∆R f 1 (β m )]/(R f 1 ) * measures the increased error rate of the WM procedure relative to the FE procedure, where
3.5.
Mis-specification of the constant term. In all the above derivations, we have assumed that the constant term β 0 (β 0 ) is fixed to its true value. If this is not the case, then the deviation ∆β 0 =β 0 − β 0 (β 0 ) will be an extra bias term for an estimated decision in which the constant term is fixed toβ 0 . More specifically, the set B f (β m ) in equation (33) will be replaced by {x : sign{h(x)}{h(x) + xδ + ∆β 0 } < 0}, and similarly for B m 1 (β f ) in equation (26) and B f 1 (β m ) in equation (35).
Computation.
To apply the theoretical results, we need to solve the minimization (19) and the summation (20) involved in Theorems 1 and 2, respectively. If the width of a motif w ≤ 12, brute-force enumeration of all w-mers is computationally feasible, which provides exact solutions for both the minimization and the summation problems.
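A rough sketch of this exhaustive computation is given below; the objective `f` is a placeholder standing in for the criterion in (19) or the summand in (20), which are not reproduced here:

```python
from itertools import product

BASES = (1, 2, 3, 4)   # I = {1, ..., J} with J = 4

def enumerate_wmers(w, f):
    """Return the minimizer of f over all 4^w w-mers and the sum of f over all w-mers."""
    best_x, best_val, total = None, float("inf"), 0.0
    for x in product(BASES, repeat=w):
        val = f(x)
        total += val
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val, total

# toy usage with an arbitrary stand-in objective
xmin, vmin, s = enumerate_wmers(6, lambda x: sum(x))
```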
For a motif of width w > 12, we minimize (19) to find α(ĥ) by a two-step approach. We generate N = 5 × 10 6 w-mers from the background model θ 0 and identify the minimum of (19) among them. Then we refine the obtained minimum by simulated annealing for 5,000 iterations with temperature decreasing linearly from one to zero. At each iteration, we randomly choose one nucleotide X i from the w positions and propose to mutate X i to one of the other three nucleotide bases with equal probability. The proposal is accepted according to a Metropolis-Hastings ratio with current temperature.
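The two-step search just described can be sketched as follows (again with a placeholder objective `f`; the paper draws its initial candidates from the background model and uses N = 5 × 10^6 of them, whereas this sketch draws fewer, uniformly distributed w-mers for brevity):

```python
import math
import random

BASES = (1, 2, 3, 4)

def anneal_minimum(f, w, n_init=10**5, n_iter=5000, seed=0):
    rng = random.Random(seed)
    # Step 1: best candidate among randomly generated w-mers
    x = min((tuple(rng.choice(BASES) for _ in range(w)) for _ in range(n_init)), key=f)
    fx = f(x)
    # Step 2: refine by simulated annealing, temperature decreasing linearly from 1 to 0
    for t in range(n_iter):
        temp = 1.0 - t / n_iter
        i = rng.randrange(w)
        prop = list(x)
        prop[i] = rng.choice([b for b in BASES if b != x[i]])  # mutate one nucleotide
        prop = tuple(prop)
        fp = f(prop)
        # Metropolis-Hastings acceptance at the current temperature
        if fp < fx or (temp > 0 and rng.random() < math.exp(-(fp - fx) / temp)):
            x, fx = prop, fp
    return x, fx
```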
Since the set B(ĥ), as in equations (26), (33) and (35), is usually small, it will be very inefficient to approximate the summation by generating w-mers from background distributions. Thus, we develop an importance sampling approach to approximate the summation (20) when w > 12. Here, we use the calculation of E[∆R f (β m )] (33) to illustrate this approach. Note that one can bound xδ in the definition of We design a sequential proposal g(X) that is more likely to generate X with h(X) ∈ H. Suppose that we have generated X 1 , · · · , X k−1 (1 ≤ k ≤ w) from this proposal. Let The larger the overlap between this interval and H, the more likely that X will belong to the desired set B f (β m ). Thus, we propose X k with probability where | · | returns the length of an interval and ǫ is a small positive number to allow the generation of X k = j when L kj = U kj ∈ H. Proposing X k sequentially by (36) for k = 1, · · · , w generates an X from g(X). With N proposed samples {X (t) } N t=1 we estimate the summation (33) by ) .
In this work, we propose N = 5×10 6 samples for this importance sampling estimation. We verified that the estimations were very close to the exact summations. With different bounds for h 1 (x) and h(x), this approach is applied to other similar summations in (26) and (35).
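A sketch of the sequential proposal g(X) is given below. The bounds L_kj and U_kj are taken here to be the smallest and largest values of h(X) attainable once the first k nucleotides are fixed with X_k = j; this reading of the construction, and all details of the code, are our own filling-in rather than the paper's implementation:

```python
import random

def overlap(a, b, H):
    """Length of the intersection of the interval [a, b] with H = (h_lo, h_hi)."""
    return max(0.0, min(b, H[1]) - max(a, H[0]))

def propose_sequence(beta0, beta, H, eps=1e-6, rng=random):
    """beta: w x J matrix of additive coefficients; returns one proposed w-mer (0-based letters)."""
    w, J = len(beta), len(beta[0])
    lo_tail = [sum(min(beta[i]) for i in range(k + 1, w)) for k in range(w)]  # minima of remaining positions
    hi_tail = [sum(max(beta[i]) for i in range(k + 1, w)) for k in range(w)]  # maxima of remaining positions
    x, partial = [], beta0
    for k in range(w):
        weights = []
        for j in range(J):
            L = partial + beta[k][j] + lo_tail[k]
            U = partial + beta[k][j] + hi_tail[k]
            weights.append(overlap(L, U, H) + eps)   # propose X_k = j with this weight
        j = rng.choices(range(J), weights=weights)[0]
        x.append(j)
        partial += beta[k][j]
    return x
```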
Numerical study.
A numerical study was performed under the WMM to confirm and quantify the lower predictive efficiency of the FE-based estimatorβ f compared to the WM-based estimatorβ m discussed in Section 3.1. We randomly selected 200 TFs from the database TRANSFAC (Matys et al. 2003). For each TF, experimentally verified binding sites were used to construct a weight matrix with a small amount of pseudo counts. Then we randomly sampled 5,000 human upstream sequences, each of length 10 kilo bases, and calculated their nucleotide frequencyθ 0 = (0.263, 0.234, 0.237, 0.266). The 200 weight matrices display large variability. The width w ranges from 6 to 21 and the information content, w i=1 {2 + E θ i (log 2 θ iX i )}, ranges from 5.1 to 17.5 bits (Figure 1). These statistics show that our selection has covered the typical width and strength of DNA motifs.
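The information content quoted above can be computed directly from a weight matrix; the toy matrix below is purely illustrative (the TRANSFAC matrices themselves are not reproduced here):

```python
import numpy as np

def information_content(theta):
    """theta: w x 4 weight matrix with rows summing to one; result in bits."""
    theta = np.asarray(theta, dtype=float)
    return float(np.sum(2.0 + np.sum(theta * np.log2(theta), axis=1)))

toy_wm = [[0.97, 0.01, 0.01, 0.01],   # a nearly fixed position contributes ~1.76 bits
          [0.25, 0.25, 0.25, 0.25]]   # a uniform position contributes 0 bits
print(information_content(toy_wm))
```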
A constructed weight matrix was regarded as the parameter Θ and the nucleotide frequencyθ 0 was used for the i.i.d. background in the WMM. Since the prior odds ratio (q 1 /q 0 ) of a binding site over a background site is usually small, we chose three typical values for the inverse of the prior odds, λ = q 0 /q 1 = 200, 500, 1000, for numerical calculations. We evaluated the AREs of the FE-based prediction w.r.t. the WM-based prediction, defined by ARE m (β f ,β m ) = α m (β f )/α m (β m ) in Section 3.1, for the 200 WMs. As discussed in Section 4, our evaluation of AREs was exact for WMs of w ≤ 12 and was carried out with simulated annealing for w > 12. In addition, Monte Carlo average was utilized, before simulated annealing, to approximate Cov m ( √ ndβ f ) (22) by simulating 5 × 10 6 w-mers from the i.i.d. background. The asymptotic relative efficiencies ARE m (β f ,β m ) on the 200 TFs are summarized in Table 2 for the three inverse prior odds. It is seen that for all the WMs the FE-based prediction shows lower efficiency than the WM-based prediction, and that the median AREs ofβ f toβ m are between 50% and 60% and the third quartiles (Q 3 ) between 60% and 70%. Thus, for more than 75% of the TFs, the FE procedure is less than 70% as efficient as the WM procedure in terms of prediction. This confirms the loss of efficiency of the FE-based prediction under the WMM, although both estimators are consistent. We note that the increase of ARE with higher λ (smaller q 1 ) is consistent with the lower bound defined in Proposition 3.
6.
Applications. In this section, we apply the WM and the FE approaches to ChIP-seq data and protein binding microarray (PBM) data. We perform cross validation (CV) with training data of different size, ranging from 20 to 500 binding sites, for two purposes. First, with the large scale of both types of data, we can compare empirical error rates in cross validation against theoretical error rates. This may allow us to verify some of the model assumptions and propose further improvement on the models. Second, we are also interested in examining the practical performance of the two computational methods when the number of observed binding sites varies in a wide range, which will provide useful guidance for future applications.
6.1. ChIP-seq data. In the recent two years, the ChIP-seq technique (Johnson et al., 2007;Mikkelsen et al., 2007;Robertson et al., 2007) has become a powerful highthroughput method to detect TFBS's in whole genome scale. A binding peak in ChIPseq data can usually narrow down the location of a TFBS to a neighborhood of 50 to 200 bps (Johnson et al., 2007). ChIP-seq data that contain thousands of binding sites for a number of TFs have been generated in a study on mouse embryonic stem cells (Chen et al., 2008). We chose five TFs, Esrrb, Oct4, STAT3, Sox2 and cMyc, in this study to compare the WM and the FE methods. The five TFs all have well-defined weight matrices in literature and each contains more than 2,000 detected binding peaks in ChIP-seq, and their data quality was confirmed by motif enrichment analysis in Chen et al. (2008). To identify the exact binding site of a ChIP-seq binding peak, we searched the 200-bp neighborhood of the peak, 100 bps on each side, to find the best match to the known weight matrix of the TF. Given the very small search space, the uncertainty in the exact location of the binding site should be minimal. If the motif width of a TF is w, background w-mers were extracted from genomic control regions that match the locations of the binding sites relative to nearby genes. The ratio of the number of background sites over the number of binding sites was set to 200 for every TF, that is, the inverse prior odds ratio λ = q 0 /q 1 = 200. A transition matrix was estimated from the extracted background sites for each TF, since the log Bayes factor of a Markov background model over an i.i.d model was > 10 5 .
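The peak-to-site step, finding the best match to a known weight matrix within the 200-bp neighborhood of a peak, can be sketched as a simple scan (forward strand only, log-likelihood-ratio scoring; strand handling and the actual matrices used in the paper are omitted):

```python
import numpy as np

BASES = "ACGT"

def best_wm_match(region, theta, theta0):
    """Return (start, score) of the w-mer in `region` best matching the weight matrix."""
    theta, theta0 = np.asarray(theta), np.asarray(theta0)
    w = theta.shape[0]
    best = (None, -np.inf)
    for s in range(len(region) - w + 1):
        idx = [BASES.index(b) for b in region[s:s + w]]
        score = sum(np.log(theta[i, j] / theta0[j]) for i, j in enumerate(idx))
        if score > best[1]:
            best = (s, score)
    return best
```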
Based on the way we composed the data sets, the WMM with Markov background (Section 3.2) seems a more plausible data generation model. Clearly, a data set was a mixture of detected binding sites and random background sites, and the background distribution was close to a Markov chain. If there is no within-motif dependence, binding sites can be regarded as being generated from a WM model, and consequently, the WM-based prediction is expected to have a smaller error rate compared to the FEbased prediction. However, if there exists within-motif dependence in binding sites, the FEM, which is able to capture such dependence, may outperform the WM approach regardless of the mixture nature of the data sets. We computed theoretical error rates of the two approaches under the WMM with Markov background. For each TF, we estimated a WM from all the binding sites and a transition matrix from the background sites. Regarding them as the model parameters, we calculated the asymptotic error rate of the WM-based prediction, which is the ideal error rate (27), and the incremental rate of the FE-based prediction (26). Note that the bias due to mis-specification of the constant term in the FE approach needs to be included for the calculation of equation (26). These theoretical error rates are reported in Table 3 (the column of n + = ∞).
To compare with theoretical results, we performed cross validation to compute empirical error rates of the WM and the FE procedures on each data set. We randomly sampled (without replacement) n + binding sites and λ · n + background sites from a full data set to form a training set. Both approaches were applied to the training set to estimate their respective decision functions. For WM-based prediction, a WM and a transition matrix were estimated from the training data set to construct a decision function (24) with β 0 = − log(λ). For FE-based prediction, we applied logistic regression to the training set to obtainĥ f (x) =β 0 + xβ f . Then we predicted the class labels of the remaining unused sequences (test set) by each of the two decision functions and calculated empirical error rates (CV error rates). This procedure was repeated 100 times independently for each value of n + to obtain the average CV error rate. To examine performance with a varying sample size (the number of sequences in a training set), we chose n + from 20 to 500. The average CV error rates are reported in Table 3. The theoretical results give a reasonable approximation to the CV error rates for both approaches when the training sample size n + ≥ 200. The asymptotic error rates of the WM approach are uniformly lower than its CV error rates for all the TFs, while the FE approach achieves a smaller CV error rate with n + = 500 than its asymptotic rate for three TFs. Consequently, the incremental percentage of the FE-based prediction for n + = 500 is less than the expected level calculated from the theory. This comparison implies that the WMM may not match the exact underlying data generation process, although it is more plausible than the FEM given the mixture composition of the data sets. As we discussed, potential dependence within a motif may cause possible violation to the WMM. To verify our hypothesis, we conducted the χ 2 -test for every pair of motif positions (X i and X k , 1 ≤ i < k ≤ w) given the binding sites in each data set. At the significance level of 0.005, we identified 25,19,17,8, and 12 pairwise correlations for Esrrb, Oct4, STAT3, Sox2, and cMyc binding sites, respectively, which gives a false discovery rate of < 2% for all the TFs. By capturing such correlations the FEM is able to achieve comparable or even slightly better prediction than the WMM with a moderate-size training sample (n + ≥ 100, Table 3). Finally, it is important to note that even under the exact model assumptions of the WMM, the FE-based prediction only results in a marginal increment in error rate (< 10%) compared to the WM approach asymptotically (Table 3, n + = ∞). Together with the superior or comparable CV performance when the training size is reasonably large, this result suggests the use of the FE approach, when we have a sufficient number of observed binding sites.
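The cross-validation loop described above can be summarized as follows, with `fit` and `predict` standing in for either the WM or the FE estimation and decision steps; these names and the data handling are placeholders, not the paper's code:

```python
import numpy as np

def cv_error(sites, background, n_plus, lam=200, n_rep=100, fit=None, predict=None, seed=0):
    """Average test error over n_rep random train/test splits with n_plus binding
    sites and lam * n_plus background sites in each training set."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_rep):
        pos = rng.permutation(len(sites))
        neg = rng.permutation(len(background))
        train = ([sites[i] for i in pos[:n_plus]],
                 [background[i] for i in neg[:lam * n_plus]])
        test_x = [sites[i] for i in pos[n_plus:]] + [background[i] for i in neg[lam * n_plus:]]
        test_y = [1] * (len(pos) - n_plus) + [0] * (len(neg) - lam * n_plus)
        model = fit(*train)
        errors.append(np.mean([predict(model, x) != y for x, y in zip(test_x, test_y)]))
    return float(np.mean(errors))
```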
6.2. PBM data. Protein binding microarrays (Mukherjee et al., 2004) provide a high throughput means to interrogate protein binding specificity to DNA sequences. Quantitative measurement of the binding specificity of a protein to every short nucleotide sequence designed on a DNA microarray can be obtained simultaneously. The PBM data in Berger et al. (2008) quantified DNA binding of homeodomain proteins via the calculation of an enrichment score, with an expected false discovery rate (FDR), for each double-stranded nucleotide sequence of length eight (w = 8). The data set for each protein contains 32,896 8-mers, each with an enrichment score and an FDR. We identified as the consensus binding pattern for a protein the 8-mer with the highest enrichment score, and then labeled as binding sites those 8-mers whose FDR < 0.005 and which differ by no more than three nucleotides from the consensus after considering both the forward and the reverse complement strands. The remaining 8-mers were labeled as background sites and we randomly determined their strands (orientations) to avoid potential artifacts. In this study we included five proteins, Hoxa11, Irx3, Lhx3, Nkx2.5, and Pou2f2, each from a different family, and called 134, 190, 267, 145, and 213 binding sites, respectively.
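The labeling rule for the PBM 8-mers can be sketched as follows; the enrichment scores and FDR values are inputs from the published data and appear here only as placeholder dictionaries:

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    return s.translate(COMP)[::-1]

def mismatches(a, b):
    return sum(x != y for x, y in zip(a, b))

def label_8mers(kmers, score, fdr, max_mm=3, fdr_cut=0.005):
    """kmers: list of 8-mers; score, fdr: dicts keyed by 8-mer.
    Returns (consensus, set of 8-mers labeled as binding sites)."""
    consensus = max(kmers, key=lambda k: score[k])
    sites = set()
    for k in kmers:
        if fdr[k] < fdr_cut and min(mismatches(k, consensus),
                                    mismatches(revcomp(k), consensus)) <= max_mm:
            sites.add(k)
    return consensus, sites
```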
The FEM, developed by the biophysics of protein-DNA binding, is expected to be a better model that matches the design of PBM data than the WMM. Thus, theoretical analysis was conducted under the FEM for the five PBM data sets. We applied logistic regression to estimateβ andβ 0 (9) with all the labeled 8-mers in a data set, where the 8-mer 'AAAAAAAA' was regarded as the reference sequence, i.e., β i1 ≡ 0. We calculated the ideal error rate (R f ) * (34) of the FE-based prediction, with an i.i.d. uniform background (by design the background distribution is uniform). For the WM approach, we choseβ 0 as the log-ratio of the number of binding sites over that of background sites, and calculated its asymptotic error rate by equation (33), in which the bias in the constant term (∆β 0 ) was included. The theoretical error rates are reported in Table 4 (n + = ∞), where we find that the WM approach gives a significantly higher error rate, between 14% and 56%, than the FE approach.
The same CV procedure as in the previous section was performed on the PBM data sets to compare the empirical predictive error rates of the two approaches, with n + varying between 20 and 100 (Table 4). There is a clear decreasing trend in error rate for both approaches with the increase of the training sample size n + , although for some data sets the difference between the CV error rate for n + = 100 and the asymptotic rate is still quite obvious. Such discrepancy is probably due to the following two reasons. First, the parameters (β,β 0 ) used for the calculation of asymptotic rates were estimated from data sets which only contain 100 to 200 binding sites. This resulted in a high variance in the estimated parameters: The median ratio of the standard error over the absolute value of an estimated coefficient was between 10% and 30% for the five data sets. Second, the training sample size, n + = 100, is still too small to achieve a comparable error rate as n + → ∞. However, we have already seen substantially increased error rates of the WM-based predictions compared to the FE-based predictions for n + = 100, which is very consistent with the theoretical results. This comparison confirms that unless the training sample size is really small, using the WM approach may degrade predictive performance dramatically if the data generation mechanism is close to the FEM.
7. Discussion. Combining results on the ChIP-seq data and the PBM data, this study provides some general guidance for practical applications of the WM and the FE approaches, irrespective of underlying data generation. When the training sample size is small, the WM procedure seems to produce fewer errors than the FE procedure. But when we have observed enough binding sites, the advantage of the FE procedure is clearly seen. On one hand, it gives a comparable or slightly better prediction than the WM approach even if the WMM is more likely for the data (Table 3, n + ≥ 100). On the other hand, when the data are generated in a way that matches the biophysical process of protein-DNA binding such as the PBM data, the reduction in error rate of the FE approach can be substantial compared to the WM approach (Table 4, n + ≥ 50). The relative performance between the two approaches reflects a typical variance-bias tradeoff. Estimation under the WMM is simple and more robust, which typically has a smaller variance than the FEM. For a small sample size, predictive errors are mostly caused by variance in estimation and thus, WM-based predictions may outperform FE-based predictions. When the sample size increases, estimation variance decreases for both approaches and the potential bias in the WM approach becomes the main factor for predictive errors. Given that its primary principle comes from the biophysics of protein-DNA interactions, the FEM has become more attractive, based on which many computational methods have been developed for predicting TF-DNA binding. In these methods a weight matrix is sometimes used as a first order approximation for computing free energy-based binding affinity. This work suggests that this approximation must be applied with caution. The results on the PBM data have demonstrated that the WM procedure may give a prediction with 50% or more errors compared to the FE-based decision for a reasonably large sample size (Table 4).
In recent years, a substantial amount of large-scale TF-DNA binding data have been generated for many important biological processes. As demonstrated by the applications to ChIP-seq data and PBM data, large-sample theory is able to provide valuable insights on statistical estimation and prediction for such large-scale data. The results in this article can be regarded as a first step towards a theoretical development on computational approaches for gene regulation analysis. Incorporation of within-motif dependence in the WMM and interaction effects in the FEM is a direct next step of this work, for which the model selection component needs to be considered in a theoretical analysis. Although desired, further generalizations to methods for de novo motif discovery, identification of cis-regulatory modules and predictive modeling of gene regulation will be more challenging future directions.
Appendices.
Appendix A: Proof of Proposition 3. | 2010-10-16T00:19:20.000Z | 2010-01-03T00:00:00.000 | {
"year": 2010,
"sha1": "66138d75c99459ca6126bae4b03b6b27eca473d2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1001.0341",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "66138d75c99459ca6126bae4b03b6b27eca473d2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics",
"Biology",
"Medicine"
]
} |
53494018 | pes2o/s2orc | v3-fos-license | Flux-charge duality and topological quantum phase fluctuations in quasi-one-dimensional superconductors
It has long been thought that macroscopic phase coherence breaks down in effectively lower-dimensional superconducting systems even at zero temperature due to enhanced topological quantum phase fluctuations. In quasi-1D wires, these fluctuations are described in terms of"quantum phase-slip"(QPS): tunneling of the superconducting order parameter for the wire between states differing by $\pm2\pi$ in their relative phase between the wire's ends. Over the last several decades, many deviations from conventional bulk superconducting behavior have been observed in ultra-narrow superconducting nanowires, some of which have been identified with QPS. While at least some of the observations are consistent with existing theories for QPS, other observations in many cases point to contradictory conclusions or cannot be explained by these theories. Hence, a unified understanding of the nature of QPS, and its relationship to the various observations has yet to be achieved. In this paper we present a new model for QPS which takes as its starting point an idea originally postulated by Mooij and Nazarov [Nature Physics {\bf 2}, 169 (2006)]: that \textit{flux-charge duality}, a classical symmetry of Maxwell's equations, can be used to relate QPS to the well-known Josephson tunneling of Cooper pairs. Our model provides an alternative, and qualitatively different, conceptual basis for QPS and the phenomena which arise from it in experiments, and it appears to permit for the first time a unified understanding of observations across several different types of experiments and materials systems.
The importance of topologically-charged fluctuations is dramatically increased in systems which are effectively lower-dimensional, often realized experimentally using superfluids or superconductors, where the phase of their macroscopic order parameter functions as the field in which topological defects are embedded. Examples include superconducting thin films [17,[21][22][23] and narrow wires [18], lattice planes in high-T C superconductors [19,20], and superfluid Helium or dilute atomic Bose-Einstein condensates in confining potentials with quasi-2D [6-8, 15, 16] or quasi-1D [12][13][14] character. In quasi-1D systems, whose transverse dimension is ξ, the relevant coherence length, topological fluctuations are known as "phase slips", and can be viewed conceptually as the passage of a quantized vortex line through the 1D system. They were first discussed by Anderson in the context of neutral superfluid Helium flow through narrow channels [24], and by Little for persistent charged supercurrents in closed superconducting loops [25]. During the course of such an excitation, the amplitude of the order parameter fluctuates to zero in a short segment of the channel of length ∼ ξ, allowing the phase difference between the wire's ends ∆φ to change by ±2π, in some cases accompanied by a quantized change in the supercurrent flow. In the presence of an external force F , this process (averaged over many phase-slip events) results in Ohm's-law behavior with a particle current proportional to F , rather than the ballistic acceleration expected for the superfluid state. For a charged superfluid this corresponds to finite electrical resistance, as was discussed in detail by Langer, Ambegaokar, McCumber, and Halperin (LAMH) [26,27] and others [28], for quasi-1D superconductors near their critical temperature T C where the order parameter is close to zero. In subsequent experiments [29,30] on ∼0.2-0.5 µm-diameter crystalline Sn "whiskers" which validated these ideas, finite resistances were observed to persist over a measurable temperature interval below the mean-field T C .
These early works on quasi-1D systems considered only classical processes, in which thermal fluctuations provide the free energy required to suppress the order parameter locally. However, in 1986 Mooij and co-workers suggested that an analogous quantum process might exist, similar to macroscopic quantum tunneling (MQT) in Josephson junctions (JJs) [31][32][33][34], by which the macroscopic system tunnels coherently between states whose ∆φ differ by ±2π [35]. Just like the thermal phase slips discussed by Little [25] and LAMH [26,27], such a process would depend exponentially on the wire's cross-sectional area, via the free energy required to suppress the order parameter over a length ξ. However, it would rely not on thermal energy but rather on some as yet unpecified (and presumably weak) source of quantum phase fluctuations, and thus it was presumed that extremely narrow wires would be required to observe it. Shortly thereafter, using lithographically defined, ∼ 50 nm-wide superconducting Indium wires, Giordano measured finite resistance that persisted much farther below T C than for wider wires [36], in the form of a crossover from the temperature scaling predicted by LAMH near T C to a much slower temperature dependence farther from it. Using a heuristic argument in which the thermal energy scale in LAMH theory was replaced with a hypothesized quantum energy scale, Giordano interpreted this observation as a crossover from thermal to quantum phase fluctuations, and was able to obtain a reasonable fit to his data. Many other experiments have since been carried out using different materials systems, which also exhibited some form of anomalous non-LAMH resistance below T C [18,[37][38][39][40][41][42] (though rarely in the form of a clearly evident crossover), and many authors have used Giordano's basic intuition as the basis for interpreting R vs. T data [18,[39][40][41][42][43]. In addition, a pioneering microscopic theory for QPS was later developed by Golubev, Zaikin, and co-workers (GZ) [44,45] which appeared to validate Giordano's general idea, identifying his hypothesized quantum energy scale for QPS as the superconducting gap ∆.
However, in other recent experiments using extremely narrow Pb [46,47], Nb [48], and MoGe [18,43,49] nanowires 10 nm wide, the anomalous low-T resistance previously identified directly with QPS was often completely absent. This is difficult to explain within Giordano's hypothesis, given that the strength of QPS should increase exponentially as the wire cross-section is decreased. In response to these remarkable observations, it was then suggested that the observed deviations from LAMH temperature scaling may be explained purely in terms of a combination of LAMH phase slips and granularity [46,50] and/or inhomogeneity [51] of the wires, rather than by QPS. On the other hand, the same MoGe nanowires which showed no evidence for QPS in R vs. T measurements did exhibit low-T anomalous resistance near their apparent critical current. These observations were made with techniques identical to those used to identify QPS phenomena in Josephson junctions [31], and were consistent with a quantum energy scale for the phase fluctuations [43,52] just as Giordano had suggested, even though no evidence for this was seen in the R vs. T data for the same wires. Also striking was an apparent complete destruction of superconductivity as T → 0 in other nanowires having a normal-state resistance R n R Q , where R Q ≡ h/4e 2 is known as the superconducting quantum of resistance [18,47,49,53]. Although theories exist which predict insulating [54][55][56] or metallic [44,45,57,58] states in 1D as T → 0, it is unclear whether any can explain a T = 0 critical point at R n ∼ R Q . Overall then, although some promising agreement between experimental and theoretical results has been obtained, there is still no consensus on how to self-consistently explain all of the observations, or on the precise role and nature of QPS in the phenomena observed.
In 2006, Mooij and Nazarov (MN) [59] made what may turn out to be a conceptual leap forward: they postulated that a classical symmetry known as flux-charge duality [60][61][62][63][64][65][66][67][68] can be used to connect QPS with Josephson tunneling (JT), the well-known process in which Cooper pairs penetrate through a thin insulating barrier separating two superconducting electrodes, and establish macroscopic phase coherence between them. Based on this idea, MN posited the existence of a quantum phase slip potential energy U ps (q) = −E S cos q, dual to the Josephson potential U J (φ) = −E J cos φ. Here, φ and q are known in the JJ literature as the phase and quasicharge, E J is the well-known Josephson energy, and E S is a new energy scale for QPS, which MN left as an input parameter. This mirrors the duality between the characteristic inductive energy of a wire E L ≡ Φ 2 0 /2L w (where L w is the wire's inductance) and the charging energy of a JJ given by: e 2 /2C J (where C J is the junction capacitance). From their elegant hypothesis, MN generated a phenomenology of QPS dual to that of JJs, including a dual set of classical nonlinear equations for q, and a dual class of circuits involving 1D superconducting nanowires, what they called "phase-slip junctions" (PSJs) [59,69,70]. Based on these ideas, several groups have recently performed new types of experiments [71][72][73][74][75], in some cases directly realizing these dual circuits [71][72][73]75], and providing the most direct evidence yet seen for QPS in continuous wires †.
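For orientation, the dual quantities mentioned here can be put into numbers; the wire inductance and junction capacitance below are illustrative values of our own choosing, not parameters from the cited experiments:

```python
h = 6.62607015e-34        # Planck constant (J s)
e = 1.602176634e-19       # elementary charge (C)
Phi0 = h / (2 * e)        # superconducting flux quantum (Wb)

R_Q = h / (4 * e**2)      # superconducting resistance quantum, ~6.45 kOhm
L_w = 1e-9                # hypothetical wire inductance, 1 nH
C_J = 1e-15               # hypothetical junction capacitance, 1 fF

E_L = Phi0**2 / (2 * L_w) # inductive energy of the wire (J), dual to ...
E_C = e**2 / (2 * C_J)    # ... the charging energy of a junction (J)

print(f"R_Q = {R_Q:.0f} Ohm, E_L = {E_L:.3e} J, E_C = {E_C:.3e} J")
```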
In this work, we describe a new and alternative theory for QPS which takes MN's intuition as a starting point, and which may be able to shed light on a number of the outstanding questions related to QPS. We begin in section 2 with an introduction to the original intuition of Mooij and co-workers [35] for QPS, and its relation to equivalent phenomena in JJs. Section 3 describes flux-charge duality, in preparation for section 4 where we build on this to construct a model for the origin of the basic QPS phenomenon, and use it to calculate the phase-slip energy E S . Our result for this quantity is qualitatively different from previous theories, in that it centrally involves the dielectric permittivity due to bound, polarizable charges in and around the superconductor, a quantity which does not appear in this way in previous theories for QPS. In our model this permittivity plays the role of an effective mass for "fluxons", fictitious dynamical particles dual to Cooper pairs whose motion "through" a 1D wire corresponds to a quantum phase slip event, just as Cooper pair motion through an insulating barrier corresponds to a JT event.
In section 5, we build on these results to construct a distributed, nonlinear transmission line model of a quasi-1D superconducting nanowire. We show that in the presence of QPS, its dynamical equations for quasi-classical phase evolution in one spatial and one time dimension (1+1D) can be cast into a form identical to the static Maxwell-London equations in two spatial dimensions (2D), and from this we establish a direct analogy between the dynamics of electric flux penetration into a superconductor in 1+1D and the classical statistical mechanics governing magnetic flux penetration in 2D. We then use this analogy to predict macroscopic topological phase excitations in 1+1D we call type II phase slips, which are the electric analog of magnetic vortices in a type II superconductor, and which have a characteristic length scale λ_E we call the electric penetration depth. These type II phase slips are "secondary" macroscopic quantum processes [63], in the sense that they arise as a collective effect out of the "primary", microscopic QPS process, just as Bloch oscillations arise as a collective effect out of JT in lumped JJs [63][64][65][66][67][78].
† Note that granular wires, which consist of superconducting islands separated by insulating barriers, are effectively one-dimensional JJ arrays, whose phase-slip processes are well-understood [32][33][34][76,77].
In section 6, we introduce a simple model for the interaction of these type II phase slips with the nanowire's electromagnetic environment, as well as a lumped circuit model for that environment similar to that used previously for JJs [79]. We use this in conjunction with our transmission line model to calculate R vs. T for four experimental cases from different research groups, using different superconducting materials, chosen in particular because they cannot simultaneously be described by models that attribute anomalous resistance above that predicted by LAMH directly to a QPS "rate" at finite temperature [18,36,39,[41][42][43]. By contrast, our model can approximately reproduce all four experimental curves, with input parameters either fixed at accepted or measured values, or (for parameters that are not known) chosen with eminently reasonable values. The key additional ingredient in our model which allows it to explain a wider range of phenomena in R vs. T curves is the additional length scale λ_E, which itself has a temperature dependence. Next, we show how our model also provides a new interpretation of the quantum temperatures observed in MoGe nanowires by Bezryadin [43,52], giving for the first time (to our knowledge) a quantitative potential explanation of the measured values. An important element of our explanation is the effect of a low environmental impedance at high frequencies, which provides damping for quantum phase fluctuations, and makes a description in terms of a quasi-classical phase appropriate. Related ideas were discussed previously by MN [59], and also in the context of JJs [63][64][65][66][67][78]. Lastly, in this section we show that our model is consistent with all four of the very recent, direct measurements of QPS, made by several different groups and using different materials [71,72,75,80]. The electric penetration length λ_E also plays a crucial role in this agreement, since for two of these cases [75,80] we find that λ_E is much shorter than the wire length. In this regime, the resulting behavior is not that of a lumped element, and our theory predicts that the Coulomb-blockade voltage V_C (the quantity observed in these two experiments) is independent of the wire length, in contrast to E_S which is by definition proportional to it.
Finally, in section 7, we suggest an alternative explanation for the observed destruction of superconductivity when R_n ≳ R_Q [49]. Whereas most previous attempts to understand this apparent insulating behavior as T → 0 have been built on the idea of a dissipative phase transition [44,45,54,55], we hypothesize instead a disorder-driven transition, with virtual type II phase slip-anti phase slip pairs as the fundamental quantum excitation. This picture is analogous to the so-called "dirty Boson" model for quantum vortex-antivortex pair unbinding in quasi-2D superconductors [21], which has been used to explain an apparent superconductor-to-insulator transition (SIT) in highly-disordered thin films [22,23,81]. In this context, we discuss the interesting case of a SIT observed in microstructured 2D superconductors which essentially consist of a network of quasi-1D nanowires, and describe how this may be an intermediate case between the observed transitions in uniform 2D films and 1D wires. In section 8 we summarize, and make some concluding remarks on the implications of our model for applications of QPS to future devices. Appendix A contains tables of selected variables and abbreviations used in the paper. Appendix C, Appendix D, and Appendix E provide details on the microscopic parameter values used to obtain the results in figs. 10 and 11, and table 1. Lastly, Appendix F provides some details on PSJ circuits which are dual to well-known JJ-based superconducting devices.
2. The nature of QPS
The qualitative picture of QPS originally put forth by Mooij and co-workers [35] is illustrated in fig. 1, built on an analogy to macroscopic quantum tunneling (MQT) in JJs. For the JJ case, the quantum Hamiltonian is [65,78]:

H = Q²/2C_J − E_J cos(2πΦ/Φ_0) − I_b Φ,   (1)

where I_b is an external bias current, and [Φ, Q] = iℏ. The quantities Q and Φ have units of charge and flux, and will be defined precisely below. We will refer to them as the quasicharge and quasiflux, respectively, and they are generalizations of the charge that has passed through the junction barrier and the gauge-invariant phase difference across the barrier. The quasiflux Φ can be viewed as the coordinate of a fictitious particle whose "mass" is C_J, and which moves in a so-called "tilted washboard" potential given by the last two terms in eq. 1, and illustrated in fig. 1(a). The corresponding Heisenberg equations of motion for Φ give the well-known classical, nonlinear behavior of the JJ in the limit where quantum fluctuations of Φ about its expectation value can be neglected (E_J ≫ e²/2C_J, or equivalently Z_J ≪ R_Q, where Z_J ≡ √(L_J/C_J) is the junction impedance). In this classical limit, the dominant way for the JJ to exhibit a phase-slip (i.e. for the particle to move from one well to the next) is for a thermal or other classical fluctuation to drive the system to an energy above the top of the Josephson barrier, as shown in fig. 1; in the presence of damping (typically due to a shunt resistor), the particle is then "re-trapped" in the adjacent (or other nearby) potential well, and this process then repeats stochastically, resulting in a phenomenon known as phase diffusion [79]. A similar qualitative picture can be used to understand thermal LAMH phase slips in a quasi-1D superconductor†, shown in fig. 1(b). In this case, however, the classical potential energy as a function of Φ contains within it the physics originally described by Little [25] and LAMH [26,27], such that each point on the horizontal axis represents a quasistationary solution of the Ginzburg-Landau (GL) equations for a wire with fixed Φ across it, and the point of maximum energy where Φ ≈ Φ_0/2 is the so-called saddle-point solution also discussed in the context of superconducting weak links [82]. In both the JJ and quasi-1D wires, for purely classical fluctuations, the phase-slip rate can be written [83][84][85]:

Γ_ps = Ω_ps exp(−δE_ps/k_B T),   (2)

where δE_ps is a classical energy barrier, which for JJs is simply 2E_J. For LAMH phase slips, the energy barrier is given by the total condensation energy of a length ξ of the wire with cross-sectional area A_cs [18, 26-28, 36, 39, 43], up to a numerical factor:

δE_ps ∼ U_C A_cs ξ = (Φ_0/2π)²/2L_ξ,   (3)

where U_C is the superconductor's condensation energy density, which goes to zero as T → T_C, and L_ξ is the kinetic inductance of a length ξ of wire; the second form shows that the barrier can also be viewed as the energy cost to put Φ_0/2π across that length. The quantity Ω_ps in eq. 2 is known as the attempt frequency [83][84][85], a term derived from the idea of an effective classical particle making multiple "attempts" to surmount the energy barrier, originally used in treatments of Brownian motion and chemical reactions [83].
† Note that in the superconducting case, the condition for quasi-1D refers only to the macroscopic order parameter, and not to the bare energy levels of the conduction electrons, whose density of states is still fully 3D in the regime of interest here (equivalently, the Fermi wavelength 2π/k_F is much smaller than the wire's transverse dimensions, so that there are many single-electron conduction channels near the Fermi energy in the metal).
Figure 1. (a) The barrier is due to the Josephson potential energy, and the "tilt" comes from the free energy contribution U_S(Φ) = −I_b Φ associated with a current source. In the superconducting state, the so-called "phase particle", with "position" Φ, is localized in a given potential well. Thermal activation of the phase particle over the barrier (solid red arrow) followed by retrapping in the adjacent (or a nearby) potential well due to electrical damping (red wavy arrow) is known as phase diffusion [79], and produces a finite voltage and corresponding effective resistance even in the superconducting state. In the presence of zero-point fluctuations of the JJ's plasma oscillation (associated with its Josephson inductance and the junction's capacitance), the system can also tunnel through the potential barrier into the adjacent well, a phenomenon known as macroscopic quantum tunneling [31]. Although this is in principle a coherent, reversible process, in conjunction with nonzero damping (short, wavy red arrow) it can also result in an average escape rate for the phase particle and a corresponding voltage. (b) Abstract potential envisioned for a quasi-1D superconducting wire as a function of its quasiflux Φ (gauge-invariant phase difference between the wire's ends), where the potential barrier is taken to be the condensation energy of a length ξ of the wire, the minimum energy required to establish a localized null in the order parameter. Little or LAMH phase slips correspond to the system surmounting this barrier due to a thermal fluctuation and then being retrapped (presumably also by damping). The original intuition of Mooij and co-workers [35] was that a phenomenon equivalent to MQT could also occur in a continuous wire, in the presence of a source of quantum phase fluctuations.
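As a rough numerical illustration of eqs. 2 and 3 (not a fit to any experiment: the condensation energy density, cross-section, coherence length, and attempt frequency below are all assumed values, and order-one prefactors are dropped), one can evaluate the activated rate at a few temperatures:

```python
# Order-of-magnitude sketch of the thermally activated phase-slip rate,
# Gamma = Omega_ps * exp(-dE_ps / k_B T), with the LAMH-type barrier
# dE_ps ~ U_C * A_cs * xi.  All numbers below are assumed for illustration;
# U_C is taken as a representative value not far below T_C, where it is small.
import numpy as np
from scipy.constants import k as k_B

U_C = 400.0        # assumed condensation energy density (J/m^3)
A_cs = 1e-16       # assumed cross-section: 10 nm x 10 nm (m^2)
xi = 10e-9         # assumed coherence length (m)
Omega_ps = 1e12    # assumed attempt frequency (1/s)

dE_ps = U_C * A_cs * xi          # energy barrier (J)

for T in (1.0, 2.0, 3.0):        # temperatures in kelvin
    rate = Omega_ps * np.exp(-dE_ps / (k_B * T))
    print(f"T = {T:.1f} K: dE/k_B T = {dE_ps/(k_B*T):5.1f}, rate ~ {rate:.3e} /s")
```

The exponential dependence on the barrier (and hence on the cross-sectional area) is what makes the observable resistance so sensitive to wire width and temperature.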
In the JJ case, the attempt frequency is derived from the Josephson inductance and the effective capacitance and resistance shunting the junction; for example, for an undamped junction it is simply the oscillation frequency derived from its Josephson inductance and shunt capacitance (known as the junction plasma frequency). In LAMH's treatment of quasi-1D wires, the attempt frequency is derived from time-dependent GL theory [26,27]; however, the exponential dependence of the phase-slip rate on the energy barrier and T_C makes it difficult to quantitatively compare this theory with experiment. Just as with an actual massive particle in a confining potential like that shown in fig. 1, at low enough temperature zero-point fluctuations become important; for the JJ this appears in the form of macroscopic quantum tunneling (MQT), in which these quantum fluctuations allow the system to tunnel through the barrier [31]. In the absence of damping and in the limit of low bias current, this tunneling is completely coherent and reversible, and can be described purely in terms of superpositions of the stationary energy eigenstates of the system (known as the Wannier-Stark ladder [86]); if the current is turned on suddenly, the resulting coherent dynamics are known as Bloch oscillations [65]. If the system is damped, on the other hand, it can relax irreversibly to the ground state of the adjacent well after tunneling (indicated by the short, wavy red line in fig. 1), giving up its energy to the reservoir associated with the damping, and the process can then be repeated. Since in these dynamics C_J plays the role of a mass, Q a momentum, and Q²/2C_J the resulting kinetic energy, one can easily identify the source of quantum phase fluctuations in the JJ system: the finite junction capacitance C_J results in an energy cost to localize the position Φ, due to the corresponding fluctuations in its conjugate momentum Q. Figure 1(b) shows the analogous picture suggested by Mooij and co-workers [35] to motivate QPS: in the presence of quantum zero-point phase fluctuations, even a continuous superconducting wire (if it is narrow enough, so that the energy barrier is low enough) can undergo a form of MQT. The question is, what is the source of these quantum phase fluctuations in a continuous superconducting wire? Giordano's identification of a crossover in R vs. T curves for very thin wires prompted him to suggest a quantum phase slip "rate" analogous to the thermal phase slip rate that produces LAMH-type resistance, but with the thermal energy k_B T replaced by this other, manifestly quantum energy scale for zero-point phase fluctuations (or "quantum temperature" T_Q as it would be described in the language of JJs [31,43,52,85])†. In his original work [36], and subsequent treatments based on it [18,39,50,87,88], this quantum phase fluctuation energy scale was taken to be ∼ ℏ/τ_GL, where τ_GL ≡ πℏ/[8k_B(T_C − T)] is the GL relaxation time. The microscopic theory of GZ [44,45], although it did not posit the existence of a linear phase-slip resistance at T = 0, did in fact give an energy scale ∼ ∆ ∼ ℏ/τ_GL for the quantum phase fluctuations, in qualitative agreement with Giordano's original intuition.
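A minimal sketch of this heuristic substitution, assuming an illustrative T_C, locates the temperature below which the hypothesized quantum scale ℏ/τ_GL = 8k_B(T_C − T)/π exceeds the thermal scale k_B T:

```python
# Sketch of the heuristic thermal-to-quantum crossover implied by replacing
# k_B*T with hbar/tau_GL = 8*k_B*(T_C - T)/pi in the activation law.
# The wire T_C below is assumed for illustration only.
import numpy as np

T_C = 3.0                                # assumed critical temperature (K)
T = np.linspace(0.1, 0.99 * T_C, 500)    # temperatures below T_C

thermal_scale = T                        # k_B*T, in units of k_B (kelvin)
quantum_scale = 8.0 * (T_C - T) / np.pi  # hbar/tau_GL, same units

# Crossover where the two scales are equal: T* = 8*T_C / (8 + pi)
T_star = 8.0 * T_C / (8.0 + np.pi)
print(f"crossover temperature T* = {T_star:.2f} K  ({T_star / T_C:.2f} T_C)")
print("quantum scale dominates below T*:",
      bool(np.all(quantum_scale[T < T_star] > thermal_scale[T < T_star])))
```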
In this paper, using MN's hypothesis of flux-charge duality between quantum phase slip and Josephson tunneling as a starting point, we construct an alternative model for QPS in which the energy scale for quantum phase fluctuations is capacitive in nature, just like the charging energy for JJs, but with the capacitance here arising from the polarizable, bound electrons both inside and near the wire; the effective permittivity of this polarizable environment is then the background upon which the fluctuating electric fields associated with QPS occur. In preparation for describing this model, we first give some background on flux-charge duality, the principle on which it is based.
3. Flux-charge duality
Flux-charge duality is a classical symmetry of Maxwell's equations* which is best known in the context of planar lumped-element circuits [60][61][62][63][64][65][66][67][68], where it manifests itself in the invariance of the equations of motion under the transformation shown in fig. 2(a), and is also connected to the relationship between right-handed and left-handed metamaterials made from lumped circuit elements [90]. In the more general continuous case, it can be made apparent by defining the quantities: where Q(Σ) is associated with a surface Σ (bounded by a closed curve σ) and Φ(Γ) with a curve Γ, as illustrated in fig. 2(b). These quantities reduce to the so-called "branch variables" in the Lagrangian description of electric circuits described in refs. [91,92] if Γ in fig. 2(b) connects the two ends of the branch. Figures 2(c) and (d) illustrate the duality between these quantities, such that equations 4 and 5 can both be interpreted as arising from a sum of "free" and "bound" current densities: Here, ρ_Q is an ordinary density of free charge moving at velocity v_Q, and B_f is a magnetic flux density moving at velocity v_Φ. Using the London gauge A = −Λρ_Q v_Q for a superconductor (where the London coefficient is Λ = µ_0 λ² with λ the magnetic penetration depth) and D = εE for an insulator, yields:

L_k d²Q/dt² = V_Γ,   (8)
C d²Φ/dt² = I_Σ,   (9)

where on the right side V_Γ is the voltage difference between the two ends of Γ and I_Σ is the current flowing through Σ. Equation 8 for the superconductor is none other than London's first equation, according to which Q moves ballistically under the action of a force V, and with an effective mass given by the kinetic inductance L_k; correspondingly, eq. 9 is Maxwell's equation for the displacement current in an insulator, which can be viewed as ballistic acceleration of Φ under the action of a "force" I, with an effective mass given by the capacitance C. Therefore, at the classical level of the Maxwell-London equations, superconductors and insulators are dual to each other.
* See, for example, ref. [89].
† The idea of a "rate" implies irreversibility and therefore a continuum of states that functions as a dissipative reservoir. In a JJ, this dissipation comes from the shunt resistance. However, in cases where an equivalent QPS "rate" is used to explain a linear resistance of continuous wires in the I_b → 0 limit [36,[39][40][41][42][43]], no source of dissipation is explicitly mentioned, which in our view is problematic. In the absence of dissipation as I_b → 0, the tilted washboard potential would exhibit no quantum phase slip "rate" or measurable resistance, but simply the set of stationary energy eigenstates known as the Wannier-Stark ladder [86]. Subsequent theories have predicted nonlinear resistances due to QPS even at T = 0 [44,54], but these necessarily go to zero as I_b → 0, in contrast to the linear resistances observed in experiments. In our model, as we will see in section 6, linear, phase-slip-induced resistances arise only due to thermal processes in the presence of an explicitly dissipative electromagnetic environment.
Figure 2 (caption, partial). The free current density ρ_Q v_Q is the motion of free charge density ρ_Q at a velocity v_Q, through a surface area element dΣ. The bound current density dD/dt is the displacement current density on Σ. (d) An example of "free" flux density, using a permanent magnet moving at velocity v_Φ relative to the stationary curve Γ, such that the associated free flux "current" is: In this construction, E · dΓ is precisely the flux per unit time passing through a segment dΓ. The bound flux "current" density −dA/dt is associated with time-varying currents flowing along Γ, and the associated induced emfs from Faraday's law. Although the case of a moving magnet is somewhat artificial, any electric field in a medium can be broken into these two components: one associated with bound charges, and the other with induced emfs from time varying currents (free charges).
Figure 3. Flux-charge duality, Josephson tunneling, and quantum phase slip. Superconductor is shown in blue, and insulator in red. (a) and (b) illustrate the geometry of the surface Σ and curve Γ which are used to define the quasicharge Q and quasiflux Φ in the text. (c) Schematic of a JJ, consisting of an insulating tunnel barrier between a superconducting island and "ground" (this is also known as a charge qubit). (d) Schematic of a PSJ, consisting of a superconducting nanowire tunnel barrier between an insulating island and "ground" (which for fluxons is an insulator). Note the closed superconducting loop around the insulating island in this case, which is known as a phase-slip qubit [93]. In (e) and (f) we add an electromagnetic environment, in terms of an admittance Y_env for the JJ or an impedance Z_env for the PSJ, such that the tunnel barrier between the island and ground in each case is shunted by a dissipative element.
We now arrive at the proposed duality between a JJ and a PSJ, first suggested by MN (though here we have arrived at it in a different way). We start by considering only the lumped-element case, as was done by MN. This will be generalized to the fully distributed case starting with section 5 below. As shown in fig. 3, a JJ consists of two superconducting islands of Cooper pairs separated by an insulating potential barrier, while a PSJ can be viewed as two insulating "islands" of flux quanta (henceforth referred to as "fluxons") separated by a superconducting potential barrier. If we place the surface Σ inside the insulating barrier of a JJ [fig. 3(a)] with junction capacitance C_J, and the curve Γ inside a superconducting nanowire [fig. 3(b)] of kinetic inductance L_k (neglecting its geometric inductance), we have:

Q = C_J V + 2en,   Φ = (Φ_0/2π)θ + ∫_Σ B · dΣ = L_J I + mΦ_0,   (10)
Φ = L_k I + mΦ_0,   Q = Q_f + C_k V.   (11)
For the JJ, C J V is the charge on the capacitance C J of the junction barrier induced by a voltage difference V across it, and n is the number of Cooper pairs that have passed through it. The quantity Q appearing in eqs. 1 and 10 is then a dimensional version of the so-called junction quasicharge [64][65][66][67]78]. The quantity Φ appearing in eqs. 1 and 10 for the JJ also consists of two terms, the first of which is due to the phase difference θ between the order parameters of the two superconducting electrodes, plus a second term due to magnetic fields inside the junction. As shown on the far right of eq. 10, it can also be written as the sum of the contributions from the kinetic flux induced by a current I flowing through the Josephson inductance L J , and the passage of m (discrete) fluxons through the junction. This quantity is then a dimensional version of the gaugeinvariant phase difference across the junction [94] (also referred to as the "quasiphase" in ref. [70]). Henceforth, we will refer to Φ as the "quasiflux". For the PSJ in eq. 11, dual statements to those for the JJ apply: the quantity L k I is the total "bound" flux of a nanowire having kinetic inductance L k associated with a current I, and m is the discrete number of fluxons that have passed through the wire. The wire's quasicharge Q is a sum of the total free charge Q f that has passed through the wire, plus a term associated with electric fields on the wire's so-called "kinetic capacitance" C k (the dual of Josephson inductance) [59]. Kinetic capacitance was suggested by MN as a formal consequence of the assumed flux-charge duality between the JJ and PSJ, and we discuss in section 4 below how our model for QPS gives an intuitive interpretation of its origin.
For thick enough superconducting wires, the only way for m to be nonzero is if some part of the wire was in the normal state at some time, as occurs in an LAMH phase slip over a length of wire ∼ ξ, the GL coherence length. These events are dissipative, produce a measurable voltage pulse, and can be associated with passage of a fluxon through the null in the superconducting order parameter at a localized, measurable position and time. By contrast, the dual to JT, which we want to identify with QPS, would necessarily be coherent, delocalized fluxon tunneling through the entire length of wire, such that no information about where the phase-slip occurred exists. Just as in a JJ, where localizing a Cooper pair tunneling event would cost electrostatic energy, localizing a fluxon tunneling event in a PSJ would cost kinetic-inductive energy.
4. Quantum phase slip
We now describe our model for QPS, whose basic intuition is contained in fig. 2(d):
Fluctuations of the phase difference between the ends of a wire correspond to fluxon "currents" passing "through" the wire, which are none other than electric fields along it. The effective mass associated with this fluxon motion is then an electric permittivity, which determines a "kinetic" (electrodynamic) energy cost for phase fluctuations. This is the crucial new energy scale which allows us to define QPS in our model, in conjunction with the appropriate "confining" potential energy U(Φ) for Φ (the "phase particle") whose classical minima define the mean-field superconducting state [c.f., fig. 1(b)]. If the zero-point quantum fluctuations about this state are sufficiently strong, they can produce (macroscopic) quantum tunneling between adjacent minima of the potential, which in the absence of damping gives exactly the behavior postulated by MN [59].
Before exploring the implications of this idea, however, we must first define more precisely what we mean by the electric permittivity inside the wire relevant for quantum phase fluctuations along it. We do this in the context of the simplest (Drude) model of a metal, consisting of a gas of nearly free conduction electrons of mass m_e and density n_e, superimposed on a background of fixed ions of density n_i; the permittivity inside the metal at frequency ω in this model is:

ε(ω) = ε_b(ω) + iσ(ω)/ω,   (12)

where the complex conductivity σ(ω) and background permittivity ε_b(ω) are:

σ(ω) = σ_0/(1 − iωτ_s),   (13)
ε_b(ω) = ε_0 + n_i α(ω).   (14)

Here, σ_0 ≡ n_e e²τ_s/m_e is the DC conductivity for a scattering time τ_s of conduction electrons, and α(ω) is the polarizability of each ion. The contribution of this ionic background to the permittivity, sometimes known as "core polarization" [95,96], can be viewed as arising from interband transitions, and can be as large as ∼ 10ε_0 in simple noble metals [97], and even much higher in materials with polarizable, low-lying electronic excited states [98] like the highly-disordered materials typically used for QPS studies†. It can be difficult to measure at high frequencies (ωτ_s ≫ 1), however, since it is superposed with the large, negative contribution from the metal's inductive (free carrier) response in this regime [c.f., eq. 13].
† This may seem reminiscent of ref. [71], in which the proximity of the host material to a metal-insulator transition (presumably accompanied by a large polarizability) was emphasized as important for achieving strong QPS. An interesting consequence of our model, by contrast, will turn out to be that a large permittivity suppresses QPS.
Taking this limit ωτ_s ≫ 1, and making the replacements m_e → 2m_e, e → 2e, n_e → n_s, we arrive at the simplest possible model for a superconductor, in which Cooper pairs of mass 2m_e, charge 2e, and density n_s move without resistance; the permittivity is then:

ε(ω) = ε_b(ω) − 1/(Λω²),   (15)

where we have defined the quantity:

Ω_p ≡ 1/√(Λε_b),   (16)

known as the Cooper pair plasma frequency [94,99], with Λ ≡ m_e/(2n_s e²) the London coefficient [94]. Formally, this is the oscillation frequency of the Cooper-paired electrons relative to the ion cores, with an effective (kinetic) inductance due to their mass, and an effective capacitance due to ε_b. Now, in real superconductors this frequency is essentially always larger than the superconducting gap, such that real excitation of this mode would break Cooper pairs and thus be strongly damped; however, in our model it is rather the zero-point fluctuations of this plasma oscillation with which we are concerned, and which will result in QPS.
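For a feel for the scales involved, the following sketch evaluates eq. 16 with assumed, representative numbers for the penetration depth, background permittivity, and gap (none taken from a specific material):

```python
# Numerical sketch of the Cooper-pair plasma frequency Omega_p = 1/sqrt(Lambda*eps_b),
# with Lambda = mu_0*lambda^2, compared against the gap frequency Delta/hbar.
# The penetration depth, background permittivity, and gap below are assumed values.
import numpy as np
from scipy.constants import mu_0, epsilon_0, hbar, e

lam = 500e-9                    # assumed magnetic penetration depth (m)
eps_b = 10 * epsilon_0          # assumed bound-charge ("core") permittivity
Delta = 1e-3 * e                # assumed superconducting gap: 1 meV (J)

Lambda = mu_0 * lam**2          # London coefficient (H*m)
Omega_p = 1.0 / np.sqrt(Lambda * eps_b)   # Cooper pair plasma frequency (rad/s)
omega_gap = Delta / hbar                  # gap frequency (rad/s)

print(f"Omega_p    ~ {Omega_p:.2e} rad/s")
print(f"Delta/hbar ~ {omega_gap:.2e} rad/s")
print(f"ratio Omega_p / (Delta/hbar) ~ {Omega_p / omega_gap:.0f}")
```

With these illustrative numbers the plasma frequency exceeds the gap frequency by roughly two orders of magnitude, consistent with the statement above that real excitation of this mode would break Cooper pairs.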
Our model for a quasi-1D superconducting wire is shown schematically in fig. 4(a), and for comparison the dual model for a JJ is shown in fig. 4(b). We discretize the system along one dimension, at a length scale l_φ to be discussed below. The shaded blue kinetic inductors indicate the usual mean-field GL theory† with order parameter Ψ_GL = Ψ_0 e^{iθ}. The capacitors C_∥ and C_⊥ indicate schematically the distributed permittivities ε_in and ε_out for electric fields inside and outside the superconductor, respectively. Note that here ε_in describes only the bound-electron response, corresponding to the first term in eq. 15, which then appears in parallel with the free (superconducting) component with kinetic inductivity Λ = µ_0λ², corresponding to the second term in eq. 15. The semiclassical plasma modes of such a quasi-1D system were discussed in the seminal work of Mooij and Schön (MS) [99] for a wire of circular cross-section embedded in an insulating medium of permittivity ε_out.
† Although GL theory is in general valid only very close to T_C, the materials currently used for QPS experiments are all in the dirty, local, type-II limit where it is a good approximation all the way to T = 0 (see, for example, ref. [101]).
Figure 4 (caption, partial). (b) Dual model for a JJ, where the insulating barrier has both a shunt capacitance and series geometric inductance (associated with magnetic fields inside the barrier). The shunt inductors indicate the kinetic inductivity of the superconducting electrodes, and the dotted lines indicate a frequency dependence of the field penetration into the electrodes for propagating modes along the junction (Fiske modes [100]). Throughout this work, to facilitate comparison between these two cases, we take one dimension of the junction barrier as fixed, and consider only changes in the length of the junction in the other dimension.
The dispersion relation for these modes can be written in the form: where k is the wavenumber and Λ_1D is a quasi-1D Coulomb screening length which can be expressed in our discretized model in terms of the discrete capacitors shown in fig. 4(a) thus:
where K_n(y) are the modified Bessel functions of order n and argument y, and in the continuum limit (kl_φ ≪ 1) these results in conjunction with fig. 4(a) agree with ref. [99]*. Equation 18 is familiar from the physics of 1D JJ arrays, defining the length scale over which the Coulomb interaction between charges is screened out by the distributed shunt capacitances C_⊥. On short length scales where kΛ_1D ≫ 1 this shunt capacitance has a negligible effect, and eq. 17 reduces to the bulk plasma frequency Ω_p [c.f., eq. 16]. In the opposite limit where kΛ_1D, kr_0 ≪ 1, C_⊥ dominates and eq. 20 reduces to an approximately wavelength-independent capacitance per length; correspondingly, eq. 17 reduces to an approximately linear dispersion relation with a fixed wave propagation velocity known as the Mooij-Schön velocity. We assume that for an individual QPS event occurring far from the ends of the wire, all of its dynamics are contained within a length l_φ. We further assume that QPS is sufficiently "weak" (in a manner to be defined more precisely below) that we can neglect the interactions between multiple QPS events which would otherwise result from the shunt capacitances C_⊥. Note that in making this assumption we are only neglecting the possibility that two QPS events occur within Λ_1D of each other, since at distances beyond this their Coulomb interaction will already be screened out. This assumption about the short-length-scale physics of QPS allows us to associate with each segment a single effective parallel capacitor C_l, as shown in fig. 5(a), which contains contributions from electric fields both inside and outside the wire: This definition is based on the requirement that in the l_φ ≪ r_0 limit, C_l → ε_in A_cs/l_φ, the simple parallel-plate capacitance for a length l_φ. In this limit, the electric field is almost completely confined within the wire, whereas in the opposite limit l_φ ≫ r_0 most of the field is outside the wire. Note that the relative participation of these two regions is also affected by the relative size of ε_in and ε_out, since the higher permittivity material will tend to "attract" the electric flux associated with QPS. In neglecting the shunt capacitance to the environment on short length scales ∼ l_φ, we are also by construction neglecting the spatial variation of the wire's quasicharge Q(x) on these length scales, since −∂_x Q ≡ ρ_⊥, the polarization charge per length stored on C_⊥. This is dual to the usual lumped-element treatments of JT [94,102], where in calculating the microscopic Josephson coupling the gauge-invariant phase difference across the junction is assumed not to vary spatially across the junction area. This corresponds to neglecting the geometrical inductance inside the Josephson barrier and therefore the magnetic fields generated in it by currents, which is valid for JJs much smaller than the Josephson penetration depth λ_J [94].
Figure 5. Dual models of PSJs and JJs II: nonlinear transmission lines. (a) Discrete model of weak QPS on short length scales, where each "link" of characteristic length l_φ ∼ ξ is treated as a parallel plasma oscillator composed of a nonlinear inductor with a single-valued, Φ_0-periodic potential U(∆Φ_j) (the ordinary GL superconductor), and the capacitance C_l [eq. 21] associated with potential differences along the wire. Zero-point fluctuations of this oscillator (occurring independently for each length l_φ) generate QPS via tunneling between wells of the periodic effective potential U(∆Φ_j). The quantum variables associated with QPS in the j-th link are its loop charge Λ_j and quasiflux ∆Φ_j, with [∆Φ_j, Λ_k] = iℏδ_jk. At these short length scales, the quasicharge Q(x) is assumed to be uniform along x. (b) The dual short-length-scale model of a JJ, in which each length l_q ∼ ξ of the barrier becomes an independent series plasma oscillator (note that we consider the junction to be short in one of its two areal dimensions, so that it can be viewed as a 1D system). This oscillator is composed of a nonlinear capacitance (the barrier capacitance, modified by Cooper pair tunneling, to produce a 2e-periodic effective potential energy U(Λ_j) for the loop charges), and an effective kinetic inductance L_l of the nearby region inside the electrodes. Josephson tunneling can then be viewed as arising from zero-point fluctuations (occurring independently for each length l_q ∼ ξ) of these oscillators. At short length scales Φ(x) is assumed to be x-independent (magnetic fields in the L_g are neglected). In (c), the distributed shunt capacitance C_⊥ now allows Q to be a function of position along the wire, and in (d) the distributed series inductance L_g similarly allows Φ to vary spatially. To describe the physics at longer length scales (and lower energy scales) the ground state energy densities E_QPS(Q) and E_JJ(Φ) of the discrete models (a) and (b) are incorporated into the nonlinear transmission lines shown in (c) and (d), respectively, as classical potential energies for the long-wavelength dynamics of Q(x, t) and Φ(x, t). Both of these models are described by the sine-Gordon equation in an appropriate semi-classical limit, which for the PSJ is when Z_L = √(L_k/C_⊥) ≫ R_Q, and for the JJ in the dual limit.
As indicated in fig. 5(a), we also associate with each segment of the wire a nonlinear kinetic inductor (indicated by a JJ symbol). For the j-th segment this inductor has a quasiflux variable ∆Φ_j (the quasiflux across that segment), such that the quasiflux at the end of the j-th segment defined relative to the end of the wire is Φ_j ≡ Σ_{k=1}^{j} ∆Φ_k. We take the boundary conditions for a single, isolated QPS event in the j-th segment to be ∆Φ_k = 0, ∀k ≠ j, such that Φ(x) during the event is fixed everywhere along the wire but inside that segment†. We can then treat the kinetic inductor of each segment in terms of a local potential energy U(∆Φ_j) (i.e. the kinetic-inductive energy evaluated as a function of fixed ∆Φ_j). This function is Φ_0-periodic, with a minimum whenever ∆Φ_j is an integer multiple of Φ_0, very similar to a JJ [c.f., eq. 1] (although U(∆Φ_j) becomes less and less like a simple cosine as l_φ increases beyond ξ [82]).
† Note that this is a different boundary condition than used for the calculation of the thermal phase-slip energy barrier by LAMH [26,27], where a fixed phase difference across the wire was assumed (more precisely, a fixed V = 0). Here, we allow the phase across a segment in which an isolated QPS event occurs (and therefore across the wire's ends) to vary freely, which essentially corresponds to the absence of any phase damping (the effects of damping due to the electromagnetic environment will be considered in sections 5 and 6 below). This is dual to the implicit assumption used in the calculation of the Josephson coupling for a JJ that there is no charge damping.
The model of fig. 5(a) is similar to a 1D JJ array, in the so-called "nearest-neighbor" limit [76,103], which applies on length scales much longer than the Coulomb screening length [c.f., eq. 18]. In this case it is advantageous to use a loop variable representation, rather than a node variable representation [91,92], since in the latter case the interactions between node charges are highly nonlocal. We define the loop charges Λ_j as shown in the figure, which are the canonical momenta for the position variables ∆Φ_j such that [∆Φ_j, Λ_k] = iℏδ_{j,k}. In this representation, the classical Euclidean action of the system is: where τ ≡ it, β ≡ 1/k_B T, and we are primarily interested in the β → ∞ limit. Equation 22 describes the motion of independent fictitious particles with positions ∆Φ_j and mass C_l, under the influence of the periodic kinetic-inductive potential U(∆Φ_j):
where I(∆Φ) is the current-phase relation for each segment, which we take from the theory of Aslamazov and Larkin [104] to yield the result on the second line, in which the quantity V_1D ≡ A_cs Φ_0²/2πΛl_φ can be viewed as a 1D superfluid stiffness [19], and φ_j ≡ 2π∆Φ_j/Φ_0. Equation 23 holds approximately for short lengths up to l_φ ∼ ξ. For longer lengths, U(∆Φ_j) can be evaluated numerically using the results of ref. [82]. The QPS contribution to the ground state can be evaluated in this simplified model by seeking stationary, topologically nontrivial paths connecting the endpoints: where m is an integer. In the β → ∞ limit, these are known as vacuum instantons [105], and the corresponding solution is well known in the semiclassical approximation (where S_0 ≫ 1) in the case of a simple cosine potential*, having total action: where Ω_p is the bulk Cooper pair plasma frequency [94,99] defined above [c.f., eq. 16] and Ω̃_p is the corresponding plasma frequency for the length scale l_φ, including the effect of fields outside of the wire. The Euclidean time dynamics of the order parameter corresponding to this solution are illustrated in fig. 6. The frequency Ω̃_p is in general greater than the gap frequency, so that any classical oscillations at Ω̃_p would be essentially those of a normal metal; however, such classical dynamics would occur only at very high energy. Here, we are concerned instead with zero-temperature, quantum fluctuation corrections to the ground state of the superconductor, such that the characteristic time over which the system can virtually occupy energy states near the top of the barrier (∼ ℏ/V_1D) is much shorter than the characteristic decay time for the order parameter (∼ τ_GL, the GL relaxation time). In this limit, we can neglect the dissipation (corresponding to breaking of Cooper pairs) that would inevitably occur on longer timescales. This situation is analogous, for example, to the perturbative treatment of Josephson tunneling within the BCS theory of superconductivity, which can be understood as arising through virtual excitation of quasiparticles, which are also dissipative degrees of freedom [106]. Another example is the case of Raman transitions between discrete ground states in an atomic system via an electronic excited state (or even multiple excited states) with a short lifetime Γ_e^{−1}; the excited state is occupied only virtually for a time: where ∆_e is the detuning of a driving field from resonance with the optical transition between ground and excited states, such that spontaneous scattering into the radiation continuum via the excited state (the equivalent of electrical dissipation in our case) can be neglected. In both examples the decay of excited states can be approximately neglected when compared to the coherent, low-energy process of interest, and the excited state can be "adiabatically eliminated" [107] to produce an effective potential energy for the ground state†.
The resulting approximate expression (when S_0 ≫ 1) for the ground-state energy per unit length* can be written in terms of the action S_0 [78,105,108]: where q ≡ πQ/e is the dimensionless quasicharge. Using eqs. 24 and 25, we can then write the phase-slip energy per unit length as: This quantity is arguably the central parameter for QPS. It has been identified [59,93] with the "rate" of quantum phase slips estimated by Giordano [36], and later calculated by several authors using time-dependent GL theory [50,87,88], and by GZ using microscopic theory [44,45]. In one form or another, it is the essential input parameter to all subsequent theoretical work aimed at deducing the effects of QPS, appearing as the dual of the Josephson energy in lumped-element treatments [53,59,93,109], and in more recent theories in terms of the so-called "QPS fugacity" f ≡ e^{−S_0} [54][55][56][57]. In all of these cases it is either left as an unknown input parameter, or taken from the results of GZ or earlier authors. Previous results have been based on an action of the form (up to numerical factors) S_0 ∼ δE_LAMH/∆ [36,44,45,53,71,87,88], where δE_LAMH ∼ U_C A_cs ξ is the free energy barrier originally used by LAMH [26,27] for thermal phase slips, and ∆ is the superconducting gap. Since the QPS action S_0 can be viewed as the ratio of the potential energy barrier for phase-slips to the energy scale of the quantum phase fluctuations which produce tunneling through that barrier (S_0 ∼ barrier height × characteristic quantum fluctuation time), this form is essentially consistent with Giordano's original hypothesis: that the relevant "kinetic" energy scale for QPS is ∼ ∆ ∝ ℏ/τ_GL. By contrast, in our model the quantum phase fluctuations arise from a qualitatively different source, being associated with a virtual plasma oscillation involving the Cooper pairs and the electric permittivity of the environment in which they are embedded. This picture of QPS has an appealing symmetry with Josephson tunneling, as illustrated by our model of fig. 5(c) and the dual model of fig. 5(d) for JT: in both cases, the source of quantum tunneling can be traced back to the finite mass of the superconducting electrons. For the PSJ (JJ), when these electrons are confined inside a sufficiently narrow region around the quasi-1D wire (the slotline formed by the JJ barrier), the corresponding short-wavelength zero-point fluctuations of their plasma modes allow the phase (charge) to undergo tunneling between adjacent potential minima, producing QPS (JT). A crucial point about this confinement for QPS is that the phase-slip energy can become appreciable already at wire diameters still much too large for the zero-point phase fluctuations to have any impact on the Cooper pairing itself, resulting in the coexistence of a pairing (superconducting) energy gap with insulating behavior (i.e., Q is completely localized). This is similar to the case of a Coulomb-blockaded JJ [67,110], and may also be related (albeit more indirectly) to the observation of a local pairing gap in highly-disordered, thin superconducting films on the insulating side of a SIT [81]. We discuss the latter point further in section 7.
Figure 6. Schematic picture of quantum phase slip in our model. Panels (a)-(c) show the wire's order parameter along the j-th link of length l_φ at three different times. Panels (d)-(f) plot the (lumped) link quantities as a function of time, with the times corresponding to (a), (b), and (c) marked by the vertical dashed lines. (a) Over a length l_φ, a transient current flows, charging up C_l (the corresponding displacement current makes the total current zero, and no net quasicharge moves along the wire), such that ∆Φ_j winds up; this can be viewed as a fluxon beginning to pass through the wire; (b) At the "core" of the QPS, the current is zero, the charge on C_l has reached a maximum, and a gauge-invariant phase difference of π appears between the wire's ends; this can be viewed as a fluxon (virtually) inside the wire; (c) The current reverses, discharging C_l. The wire returns to its initial state, with a net quasiflux evolution between the wire's ends of Φ_0, corresponding to passage (tunneling) of a fluxon through the wire.
* We have numerically evaluated the correction to this (and subsequent results) due to a nonsinusoidal I(∆Φ) for segment lengths up to l_φ ≈ 3.48ξ, where the current-phase relation becomes multivalued and there is no longer a classical Euclidean path connecting the relevant endpoints [82]; we find only corrections at the ∼10% level, irrelevant at the crude level of approximation being used here.
† An exception to this is when degrees of freedom external to the quantum system of interest have excited states which are populated, and whose stored energy can be exchanged with the system. In the present context of quantum circuits, this corresponds to a resistive electromagnetic environment. For the purposes of QPS in our model, there are three possible sources of such dissipation: (i) the intrinsic resistance of the metal at Ω̃_p, whose effect we can neglect compared to its inductive response as long as Ω_p τ_s ≫ 1 [c.f., eq. 13]; (ii) the transverse radiation continuum in the medium surrounding the wire with impedance 377 Ω, which has negligible coupling to QPS since l_φ is orders of magnitude smaller than the wavelength corresponding to Ω̃_p in this medium; and (iii) the propagating plasma oscillation modes on the wire, which are excluded by construction from the model of fig. 5(a) since the loop charges Λ_i do not interact. We will add back in the effect of these modes when we consider distributed systems in section 5.
* There will, of course, be higher energy bands in this potential as well, corresponding to excited states of the Cooper pair plasma oscillation; however, these will be extremely short-lived, since at such high energies the Cooper pairs will no longer be bound.
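Returning to the action estimate S_0 ∼ δE_LAMH/∆ quoted above, the following sketch illustrates the exponential sensitivity of the resulting fugacity f = e^{−S_0} to the wire cross-section (all material parameters below are assumed purely for illustration, and order-one prefactors are dropped):

```python
# Illustration of the exponential sensitivity of the phase-slip amplitude to the
# QPS action, using the earlier-style estimate S_0 ~ dE_LAMH/Delta with
# dE_LAMH ~ U_C*A_cs*xi.  All material parameters are assumed, not measured,
# and numerical prefactors of order one are dropped.
import numpy as np
from scipy.constants import e

U_C = 2e3            # assumed condensation energy density (J/m^3)
xi = 8e-9            # assumed coherence length (m)
Delta = 0.5e-3 * e   # assumed gap: 0.5 meV (J)

for width_nm in (8, 10, 12, 15):            # assumed square cross-sections
    A_cs = (width_nm * 1e-9) ** 2
    S0 = U_C * A_cs * xi / Delta            # dimensionless action estimate
    fugacity = np.exp(-S0)                  # "QPS fugacity" f = exp(-S_0)
    print(f"w = {width_nm:2d} nm: S_0 ~ {S0:5.1f}, f ~ {fugacity:.2e}")
```

Even modest changes in width move the fugacity by many orders of magnitude, which is why QPS phenomena are expected to be observable only in the narrowest wires.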
Our model for lumped-element QPS also provides a natural intuition for the origin of the kinetic capacitance (dual to the Josephson inductance) suggested by MN. Written as a distributed quantity (in units of Farads×length) it is: where q ≡ πQ/e and: The form of eq. 29 suggests that the kinetic capacitance is simply a remnant of the "bare", purely geometric series capacitance C l , renormalized by QPS. That is, in the limit of very strong QPS (V 1D , S 0 → 0) the wire acts simply like a dielectric rod whose behavior is governed only by the bound charges associated with the capacitance C l of each segment; as the superfluid stiffness is increased from zero, the kinetic capacitance increases smoothly from the bare value, eventually increasing exponentially as superconductivity is further strengthened, such that the corresponding QPS energy goes to zero. This is the exact dual of the JT case, where the Josephson inductance of the junction can be viewed as a renormalized "remnant" of the bare (bulk) kinetic inductivity of the superconducting electrodes.
Another interesting result of the model presented so far is that at a given point in the wire, the QPS amplitude depends not just on the properties of the wire itself, but also on the permittivity of the dielectric medium immediately outside it, according to eq. 21. The narrower the wire, and the smaller the ratio ǫ in /ǫ out , the greater the penetration of QPS electric fields into the region outside the wire †. This kind of nonlocality is exactly dual to what occurs in a JJ, where the tunneling energy E J depends not just on the properties of the barrier itself, but also on the kinetic inductivity of the "surrounding" superconductor of the adjacent electrodes. Thus, in the JT (QPS) case, stronger quantum tunneling occurs when the superconducting (insulating) gap of the surrounding medium is large, and the insulating (superconducting) gap of the tunnel barrier is small * .
Before proceeding to the next section, we discuss briefly the "weak" QPS assumption which underlies the model of fig. 5(a). In our derivation of eq. 26 above, the assumption that QPS is "weak" took the form of a semiclassical approximation to the full 1+1D quantum field theory, in which the QPS action S 0 was taken to be large. In the usual mapping from 1+1D Euclidean space at T = 0 to the equivalent 2D classical statistical mechanics problem [108,111,112], this corresponds to a small fugacity f = e −S 0 for the 2D statistical fluctuations corresponding to QPS events in 1+1D. Therefore, these events are rare, their density very low. It is for this reason that the model of fig. 5(a) is justified, in which simultaneous QPS events in adjacent segments do not interact with each other by construction: such occurrences are "rare enough" (in Euclidean time) that they contribute negligibly to the partition function. This is a dual statement to the usual perturbative assumption made in the context of JT, which produces the well-known, simple proportionality between the junction's normal state tunneling resistance and its critical current [102].
5. Distributed quantum phase slip junctions
In the previous section, we described our model for QPS on short length scales l φ ∼ ξ, over which electric fields outside of the wire (the wire's shunt capacitance to the environment) were included using a renormalized series capacitance C l for each discrete segment. We saw that the characteristic (Euclidean) frequency associated with the length scale l φ was the renormalized Cooper pair plasma frequencyΩ p . However, we left unspecified the length scale at which lower-energy dynamics would become important, effectively treating the wire as a lumped element. As we will now see, at lower energy scales and longer length scales additional physics will need to be included to treat the fully distributed case.
† Of course, this is the case in our model in a sense by construction, since we have fixed the length scale for QPS at l_φ; however, in a truly continuous theory for QPS at short length scales we would not expect this to change qualitatively, since it will never be energetically favorable for QPS to occur with appreciable amplitude over arbitrarily short length scales ≪ ξ (equivalently, the potential energy barrier for a fluxon to tunnel through the continuous wire entirely in between two points separated by a distance ≪ ξ will be very high).
* In this description, a large insulating gap of the dielectric surrounding a quasi-1D wire would be associated with a small polarizability and therefore a small ε_out, just as a large superconducting gap for the electrodes of a JJ is associated with a small kinetic inductivity.
We make the assumption that a large separation of energy scales exists between
that governing QPS at lengths ∼ l φ and the low-energy dynamics of Q(x, t) we now seek to investigate (we will see below the conditions under which this is justified). Based on this assumption, we treat the phase-slip potential E QPS (Q) as a purely classical energy which depends only on Q(t) (and not, for example, on ∂ t Q). This is analogous to the Born-Oppenheimer approximation often used to treat interatomic interactions, where the microscopic QPS at length scale ∼ l φ plays the role analogous to electronic motion, and the slower, lower-energy dynamics of Q(x, t) is analogous to the nuclear motion. It is also the same approximation used in the treatment of classical quasicharge dynamics of lumped Josephson junctions [64][65][66][67]78]. The resulting distributed model for a nanowire is shown in fig. 5(c), in which E QPS (Q) is associated with a "bare" phase slip element in the same way that the Josephson potential E JJ (Φ) is associated with a bare Josephson element, as shown in fig. 5(d). The long-wavelength behavior of the superconducting response is described by the kinetic inductance per length L k , and the distributed shunt capacitance per length C ⊥ , where we now assume that the frequencies of interest are low enough that this becomes the wavelength-independent capacitance per length to a nearby ground plane. When QPS is weak (E QPS (Q) → 0), the wire reduces to a simple, linear transmission line, on which waves propagate at the Mooij-Schön velocity v s . In fig. 5(d) we show the dual to our model, which is simply the nonlinear transmission line (a superconducting slotline) used to describe a long Josephson junction. In the limit of weak Josephson coupling (E JJ (Φ) → 0), this becomes a linear transmission line on which waves propagate at the so-called Swihart velocity [113] (dual to v s ). We now describe the system of fig. 5(c) in the continuum limit (with the proviso that we only consider length scales ≫ l φ ), again using a Euclidean path-integral approach, with partition function [44,45,54,108,112]: where DΨ indicates a functional integration over paths in x, τ -space, and the dimensionless Euclidean action is (β ≡ 1/k B T → ∞): In the first line, I = ∂ t Q and ρ ⊥ = −∂ x Q are the current flowing through L k and linear charge density stored on C ⊥ at the spacetime point x, τ , and for the second line we have defined: The quantities λ E and ω p are dual to the Josephson penetration depth and Josephson plasma frequency in a long JJ, respectively; we hereafter refer to them as the electric penetration depth and phase-slip plasma frequency. Note that λ E is defined as a ratio of the effective series kinetic capacitance to the parallel shunt capacitance, and is therefore a kind of Coulomb screening length similar to Λ 1D [c.f., eq. 18]; however, as indicated on the right side of the equation, it is exponentially large (for S 0 ≫ 1) compared to microscopic quantities. A corresponding relationship exists between the plasma frequencies: ω p ≪Ω p . These are precisely the separation of length and energy scales that justify the Born-Oppenheimer approximation underlying the model of fig. 5(c).
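The following sketch illustrates this separation of scales numerically. The identifications λ_E = √(C_k0/C_⊥), ω_p = 1/√(L_k C_k0), and v_s = 1/√(L_k C_⊥) are assumed forms consistent with the stated duality to a long JJ, and all per-length parameter values are invented for illustration only:

```python
# Rough consistency sketch of the length/energy-scale hierarchy behind the
# Born-Oppenheimer treatment.  The formulas for lambda_E, omega_p, and v_s
# are assumed (duality-motivated) forms; the numbers are illustrative only.
import numpy as np

L_k = 1e-3        # assumed kinetic inductance per unit length (H/m)
C_perp = 5e-11    # assumed shunt capacitance per unit length (F/m)
C_k0 = 1e-22      # assumed kinetic capacitance, distributed units (F*m)

v_s = 1.0 / np.sqrt(L_k * C_perp)       # Mooij-Schon-type wave velocity (m/s)
omega_p = 1.0 / np.sqrt(L_k * C_k0)     # phase-slip plasma frequency (rad/s)
lambda_E = np.sqrt(C_k0 / C_perp)       # electric penetration depth (m)

print(f"v_s      ~ {v_s:.2e} m/s")
print(f"omega_p  ~ {omega_p:.2e} rad/s")
print(f"lambda_E ~ {lambda_E * 1e6:.2f} um")
print("consistency check lambda_E ~ v_s/omega_p:",
      bool(np.isclose(lambda_E, v_s / omega_p)))
```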
Returning to the action of eq. 31, the corresponding Euclidean equation of motion is the sine-Gordon equation [108]:

∂²q/∂u² + ∂²q/∂v² = sin q,   (36)

where ∇_uv ≡ û∂_u + v̂∂_v (û and v̂ are unit vectors) and the dimensionless coordinates u and v were defined in eq. 33. Equation 36 is the exact dual of the usual semiclassical result for a long Josephson junction [94] (which is simply eq. 36 with q replaced by φ, the gauge-invariant phase difference across the junction [c.f., fig. 5(d)]), and is also similar to results for long 1D JJ arrays in the charging limit [114][115][116][117]. We can therefore infer several things: First, we have the usual propagating modes with dispersion relation ω² = ω_p² + (kv_s)² [94], which are the dual of Fiske modes in long JJs [100], and are also analogous to spin-wave excitations in the corresponding classical 2D XY model [6][7][8][9]. We make the usual assumption [54] that these Gaussian fluctuations can be factorized out in eq. 30 such that they simply renormalize the bare parameter values in S, leaving only topologically nontrivial paths to be evaluated. Next, we can infer the existence of a charged soliton [114][115][116][117], or so-called "kink" excitation [108] in the field q(x) of size ∼ λ_E, with total charge 2e (residing on C_⊥), and which can propagate freely without deformation. This is the dual of a Josephson vortex in a long JJ [94], which is a kink in the field φ(x) of spatial extent ∼ λ_J (the Josephson penetration depth), that carries a total flux Φ_0.
For large enough systems where λ E can be used as the ultraviolet cutoff, this 1+1D quantum sine-Gordon model can be mapped to the well-known classical statistical mechanics of 2D magnetic domain interfaces in the 3D Ising model [3]. Our q maps to the height (in the z-direction) of a domain boundary surface between two spin orientations, while the cosine potential "enforces" the lattice periodicity. The Ising interactions between nearest neighbors in the x and y directions map to the (∂ u ) 2 and (∂ v ) 2 terms in eq. 31. The 3D Ising system undergoes an interfacial roughening transition with increasing temperature T at a critical value T C ∼ J/k B (with J the Ising coupling) which has identical universal behavior to the BKT transition in the classical 2D XY model [6][7][8][9]. The transition occurs when statistical fluctuations corresponding to localized regions where a step upward or downward occurs in the interface grow to large sizes and proliferate. For our system, this maps to a T = 0 quantum phase transition at K ∼ 1 in which virtual soliton-antisoliton pairs unbind, producing charge fluctuations that destroy the insulating state associated with a well-defined q [114].
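As a rough way to see which side of this K ∼ 1 critical point a given wire sits on, one can compare its characteristic line impedance Z_L = √(L_k/C_⊥) with R_Q, following the semiclassical condition quoted in the caption of fig. 5; the identification of K ≷ 1 with Z_L ≶ R_Q, and the per-length values below, are assumptions made purely for illustration:

```python
# Quick check of which side of the superconducting/insulating boundary a wire
# sits on, via the characteristic line impedance Z_L = sqrt(L_k/C_perp)
# compared with R_Q = h/4e^2.  Equating the K ~ 1 critical point with
# Z_L ~ R_Q is an assumption here; per-length parameters are illustrative.
import numpy as np
from scipy.constants import h, e

R_Q = h / (4 * e**2)                 # superconducting resistance quantum (~6.45 kOhm)

wires = {                            # assumed (L_k [H/m], C_perp [F/m]) pairs
    "wide, weakly disordered": (1e-4, 5e-11),
    "narrow, highly disordered": (5e-2, 5e-11),
}

for name, (L_k, C_perp) in wires.items():
    Z_L = np.sqrt(L_k / C_perp)      # characteristic impedance (Ohm)
    side = "superconducting-like (K > 1)" if Z_L < R_Q else "insulating-like (K < 1)"
    print(f"{name:28s}: Z_L ~ {Z_L / 1e3:6.1f} kOhm -> {side}")
```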
Our description so far has been well suited to the insulating side of this transition (K < 1), where q becomes increasingly well-defined as K → 0. However, most experiments aiming to observe evidence for QPS have used wires nominally in the superconducting state, about which phase fluctuations can be viewed as a perturbation. Therefore, it makes sense also to examine our system on the superconducting side of the transition (K > 1), where φ becomes increasingly well-defined as K → ∞. To do this, it is illustrative to rewrite eq. 36 in the following form: with the definitions: where E is the electric field, E C ≡ eE S /π is the critical electric field such that E/E C = sin q, and eq. 39 follows from continuity. Equations 37 and 38 have an identical form to Ampère's law and London's second equation in 2D which govern the equilibrium penetration of a perpendicular magnetic field into a thin, type II superconducting film [94], with the correspondence: q ↔ H, e ↔ B, j ↔ J and where the right side of eq. 41 plays the role of the constitutive relation between H and B. These equations, however, describe the dynamical penetration in 1+1D of longitudinal electric field into a superconducting wire †. The analog to the GL κ parameter for our 1+1D system is given by eq. 42, and the type II limit κ E ≫ 1 is automatically satisfied when S 0 ≫ 1 [c.f., eq. 34], a precondition of our analysis.

† Note that the ẑ direction is purely fictitious here, and defined only to permit the aforementioned analogy. Similarly, the quantity j is not to be confused with an actual current density, although it plays the analogous role in eqs. 37-38 to the current density in the Maxwell-London equations; its u component is proportional to the total current flowing in the wire at a given spacetime point, and its v component is proportional to the linear charge density ρ ⊥ at that point. Formally similar methods for describing electric fields in superconductors in 1+1D were also used in refs. [87,118].

Figure caption (fragment): ... [82] and in LAMH phase slips [25][26][27]. Our 1+1D solution in u, v for the screening "currents" j surrounding the vortex core corresponds to an instanton [44,45,54,76] in x, τ , and describes the dynamics by which the system tunnels through this energy barrier and passes through the saddle point. This is a macroscopic quantum process that arises out of (microscopic) QPS, whose lumped-element limit is dual to Bloch oscillation in a JJ (which arises in an analogous manner from the microscopic process of JT) [64][65][66][67]78].
Interestingly, it turns out that there are 1+1D electric analogs for many well-known features of type II magnetic flux penetration, starting with the magnetic vortex. We call this 1+1D dynamical process, illustrated in fig. 7, a "type II phase slip". It is a topologically nontrivial solution to eqs. 37-40, in which a normal core of size ∼ κ E −1 in u, v is surrounded by circulating screening "currents" j [c.f., eq. 40] extending out to ρ ≡ √(u² + v²) ∼ 1. In order to include only closed paths in eqs. 30 and 31, we must impose the condition of eq. 43 (analogous to fluxoid quantization in the 2D magnetic case [94]), where σ is a closed curve in the uv plane which contains the core and bounds the surface α [fig. 7(a)]. This condition means that the quasiflux Φ ab between spatial points u a and u b on either side of the vortex evolves by Φ 0 (−Φ 0 ) during the event. Using eqs. 37-43, and assuming that far from the core of the phase slip we can write C k (q) ≈ C k0 and L k (I) ≈ L k (0) (our 1+1D analog to the usual approximation that far from the core of a magnetic vortex Λ(J) ≈ Λ(0) [94]), we obtain eq. 44 [fig. 7], where we have also assumed β ≫ ω p −1 . The resulting Euclidean action for the type II phase slip is then given by eq. 45, and the action associated with the interaction between type II phase slips separated by δρ ≡ |ρ 1 − ρ 2 | is ≈ ∓K ln(δρ) for δρ < 1, where the sign is negative for a phase slip-anti phase slip pair. The direct analogy between these 1+1D electric results and their 2D magnetic counterparts [94] can now be exploited to understand their implications †.

† This analogy should not be confused with flux-charge duality, in spite of any apparent similarity. In our description, electric fields in 1+1D and magnetic fields in 2D are related by a Wick rotation (analytic continuation to imaginary time); a similar relationship exists, for example, between the least-action trajectory of a projectile in 1+1D and the lowest-energy, static solution in 2D for a string suspended at two points.

First of all, the quantum mechanics of these vortex objects can be mapped directly to the statistical mechanics of the classical 2D XY model [6][7][8][9] (which describes thermodynamic vortex fluctuations in thin superconducting films [17], among other things) with effective vortex fugacity f = exp(−S II ) [c.f., eq. 45] and interaction energy [c.f., eq. 47]. Thus, we expect a BKT vortex-unbinding transition as K (which corresponds to the temperature of the analogous 2D classical system) is decreased from large values, at K ∼ 1. The fact that this is the same critical point discussed above in the context of a charged soliton-antisoliton unbinding transition as K ∼ 1 was approached from below is not an accident; in fact, these are two descriptions of the same transition, as discussed in ref. [3]. It simply makes more sense to use a vortex representation when K > 1 and a charge representation when K < 1. The remarkable conceptual similarity between these two representations is an example of
Kramers-Wannier duality, originally used in the context of the statistical physics of Ising spin models [119], and later applied to quantum field theories [120] (a particular example of which is the "dirty boson" model [21] of the 2+1D quantum phase transition in highly disordered superconducting films). In fact, the well-known approximate self-duality for lumped JJs (between the case of high environmental impedance where q is well-defined and low environmental impedance where φ is well-defined [78,85,121]) is a limiting 0+1D example of this same concept.
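The vortex-gas mapping described above can be written down directly from the two quantities quoted in the preceding paragraph; in the sketch below the single-slip action S II and the admittance K are placeholder numbers (they are wire-dependent and are not computed here), and the code simply tabulates the fugacity f = exp(−S II ) and the logarithmic pair interaction ∓K ln(δρ) quoted in the text.

import numpy as np

# Placeholder values; S_II and K depend on wire parameters (eqs. 45 and 34).
S_II = 6.0      # Euclidean action of a single type II phase slip
K = 2.0         # dimensionless admittance (K > 1: superconducting side)

fugacity = np.exp(-S_II)                   # effective vortex fugacity
drho = np.array([0.05, 0.1, 0.3, 0.8])     # separations in units of lambda_E (< 1)
S_pair = -K*np.log(drho)                   # slip / anti-slip pair (negative-sign branch)

print(f"fugacity f = exp(-S_II) = {fugacity:.3e}")
for d, s in zip(drho, S_pair):
    print(f"  delta_rho = {d:4.2f}  ->  pair interaction action = {s:6.2f}")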
Before discussing finite wires and comparing our model to experimental observations, we conclude this section with a brief comparison of the established theory of GZ [44,45] to what we have presented here so far. The GZ theory is fundamentally a variational calculation, using a microscopic expression for the Euclidean action of the wire (derived from BCS theory). This calculation is also built on a particular ansatz for the form of a QPS event, consisting of two parts: at large distances from the core, the QPS event is simply taken to be the electromagnetic response of the linear plasma modes of the wire (MS modes) to a topological point defect in 1+1D (i.e., an instanton solution to the linear wave equation for a transmission line, but with an additional delta-like source term in x and t); the core is treated separately, and taken to have length and time scales x 0 and τ 0 (which are the variational parameters) over which the gap is zero and dissipation is assumed to occur. The result of this calculation, up to numerical factors, is x 0 ∼ ξ and τ 0 ∼ ℏ/∆, so that we arrive at eq. 48, where A is a material-independent, numerical constant of order unity, and the proportionality on the right side follows from standard BCS relations, with R ξ the resistance for a length ξ of the wire. Thus, the QPS fluctuation can be interpreted as virtual excitation of the energy δE LAMH for a time ℏ/∆ †. As discussed by GZ and subsequent authors, with a characteristic timescale for QPS of τ 0 ∼ ℏ/∆, the wavelength of MS modes near the corresponding frequency τ 0 −1 is much greater than the QPS size, and long enough that these modes are in the region of approximately linear dispersion where there is an approximately wavelength-independent capacitance per unit length C ⊥ . Just as is the case with 1D JJ arrays, this shunt capacitance is the source of interactions between QPS events (the currents from two interacting events both charge or discharge the distributed shunt capacitance of the length of wire which separates them). Now, because the distributed shunt capacitance only enters this treatment in the context of the linear MS modes, the long-range QPS interaction is then determined purely by the form of the instanton of the corresponding linear wave equation. This results in a QPS interaction with no natural length scale, falling off purely logarithmically with increasing spacetime separation. This interaction is analogous to that encountered in classical 2D systems of magnetic vortices (in a neutral superfluid) [6][7][8] or electric charges [9], and this brings about an analogy to the BKT transition of the classical 2D XY model † [44,54]. Another consequence of a QPS frequency scale τ 0 −1 ∼ ∆/ℏ is the importance of dissipation, and this features prominently in the theory of GZ.

† Note that in the GZ theory of ref. [45], eq. 48 holds when l/ξ ≪ e²N 0 A cs /C ⊥ , where N 0 is the density of states at the Fermi level. This limit is well-satisfied for all wires in the experiments discussed here.
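For orientation, the sketch below evaluates the GZ action using the frequently quoted scaling S GZ ≈ A R Q /R ξ ; this simple form is an assumption consistent with the proportionality stated above (the full eq. 48 is not reproduced here), and the wire parameters are placeholders rather than the tabulated values of Appendix C.

# Order-of-magnitude GZ action, assuming the commonly quoted scaling
# S_GZ ~ A * R_Q / R_xi (consistent with the proportionality in the text;
# not a reconstruction of the exact prefactors of eq. 48).
R_Q = 6.45e3   # superconducting resistance quantum h/(2e)^2 [ohm]

def S_GZ(rho_n, A_cs, xi, A=1.0):
    """rho_n: normal resistivity [ohm m]; A_cs: cross-section [m^2]; xi: coherence length [m]."""
    R_xi = rho_n * xi / A_cs        # normal-state resistance of a length xi of wire
    return A * R_Q / R_xi

# Placeholder wire (illustrative only):
print(f"S_GZ ~ {S_GZ(rho_n=1.8e-6, A_cs=8.0e-17, xi=7.0e-9):.0f}")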
In our model as presented so far, instead of the MS plasma mode dynamics being a linear response to a point-like defect "source" in 1+1D at the frequency τ 0 −1 , we describe QPS directly in terms of the zero-point motion of the MS plasma oscillation itself, at a wavelength l φ ∼ ξ and frequency Ω p . As described by eqs. 20 and 21, at these wavelengths charged fluctuations are screened out on the length scale Λ 1D (analogous to the well-known Coulomb screening length in 1D JJ arrays [114][115][116][117]), such that QPS interactions are cut off at distances larger than this. This, in conjunction with the semiclassical approximation S 0 ≫ 1, is what allowed us to use the lumped-element model of fig. 5(c) which neglects interactions between QPS events entirely. These interactions came back into our problem when we considered the fully distributed case, involving longer length scales λ E ≫ ξ ∼ l φ and lower energy scales ω p ≪ Ω p .
Finite wires and experimental systems
In order to discuss the implications of our work for past and ongoing experiments aimed at observing evidence for QPS, we must first consider boundary conditions appropriate for the electrical connections to nanowires used in actual measurements. We consider the limit where the radiation wavelength corresponding to the characteristic frequency ω p in the medium surrounding the wire is much larger than the wire length, so that the electromagnetic environment can be treated as a simple, lumped-element boundary condition at the wire's ends. The typical experimental configuration is shown in fig. 8(a): a four-wire resistance measurement, in which the leads are usually designed to have high resistance at the low frequencies associated with quasistatic IV measurements *. Our circuit model for this configuration is similar to that used for JJs [79], and is shown in fig. 8(b). As pointed out in ref. [79], unless special techniques are used (such as in refs. [73,75,80,110]), the lead impedance Z(ω) is certain to become relatively low (< Z 0 , the impedance of free space) at high enough frequency, even if Z(ω) ≫ Z 0 as ω → 0. Given that the important frequency for our model is ω p , which will turn out to be relatively high, a crucial feature of the environment model of fig. 8(b) is a low, resistive impedance at high frequency such that Z env (ω p ) ≈ R env ≪ Z L , R Q . In this limit, the classical boundary condition at the wire's ends is effectively a short, such that interaction of a type II phase slip with the wire's ends can be described using image phase slips of the same sign [54]; this results in a repulsion from the ends and an activation energy barrier for phase slip events δE II (x) as a function of the phase slip position x like that shown in fig. 8(c). It is important to note that this is not analogous to the 2D magnetic case of an isolated, finite-width superconducting strip as in ref. [122]. Rather, our situation is analogous to a very short superconducting weak link between two large banks, where the link length l is analogous to our wire's length, and the link width w ≫ l maps to Euclidean time in 1+1D [fig. 8(c)]. In both of these cases the vortex (type II phase slip) sees a free-energy (Euclidean action) minimum at the link (wire) center. In the opposite case where Z env ≫ Z L , the image vortices have opposite sign, such that phase slips are attracted to the edges as shown in fig. 8(d); this is in fact the 1+1D analog to the finite-width superconducting strip of ref. [122]. For very long wires with l ≫ λ E , the contribution of the environment can naturally be neglected, since even in the high-Z case where the action is lower for phase slips to occur within a distance λ E of the two ends [c.f., fig. 8(d)] which then interact predominantly with their images, the statistical weight of such paths in the partition function becomes negligible for long enough wires. However, when l becomes sufficiently smaller than λ E , the interaction with image phase slips eventually dominates the partition function, such that the environmental impedance alone determines the ground state (as opposed to Z L ) †. This is how the crossover occurs in our model to the lumped-element regime (discussed by MN [59] as the dual of the extensively-studied case of lumped JJs [61,64,78,121]). By contrast, the length scale which arises in the theories of GZ [44] and ref. [54] for finite wires is ℏv s /k B T , such that within the approximations used in these works the behavior is always lumped at zero temperature.

* Two notable exceptions are the very recent experiments of refs. [71,72,74], which use qualitatively different measurement techniques.

† One important difference is that the QPS fugacity here y = e −S0 is an independent physical parameter from the dimensionless admittance K, whereas in the 2D XY model the two analogous quantities (the vortex fugacity and the temperature) are not independent.

† The method of images was also used in ref. [54] to discuss boundary effects; however, in that work it was applied directly to GZ-type microscopic quantum phase slip events. By contrast, we have applied this method to our type II phase slips, macroscopic quantum processes [63] which arise as a consequence of treating microscopic QPS events as dual to Cooper pair tunneling events in lumped JJs. This distinction can be clarified by considering the duals of these two cases: our theory is dual to the usual JJ treatment, where the "bare" Josephson energy per length is calculated in the lumped limit, neglecting the geometric inductance L g of the junction. This result is then plugged into a distributed theory for the "long" junction, out of which arises the Josephson penetration depth λ J [94], to which our λ E is dual. The premise of the QPS theory of ref. [54], on the other hand, is dual to treating a long JJ by directly considering from the beginning the full quantum mechanics of Cooper pair tunneling events in the distributed system [c.f., fig. 5(d)].

Figure 8 caption (fragment): ... [79]. At low frequencies, the wire effectively sees a current source with large DC compliance R DC , but at high frequencies lumped parasitics and the characteristic impedance of the measurement connections reduce the effective impedance. This is modeled by a lumped shunt capacitance C sh in parallel with a high-frequency resistance R env , which becomes important above the high-pass corner frequency (R env C hf ) −1 . (c) in nearly all experiments where specialized techniques are not used to control the high-frequency EM environment, the dominant contribution to this environment is R env , which is likely to be ≪ Z L , the linear impedance of the nanowire. In this limit, the interaction of a type II phase slip with the wire edges can be described in terms of image phase-slips of the same sign, resulting in a repulsion from the wire's ends, and a potential minimum at the center of the wire. The corresponding 2D magnetic case analogous to this is a weak superconducting link between two thick superconducting banks (a Josephson weak-link junction [82]) where a magnetic vortex attempting to pass across the junction encounters a potential minimum (a saddle point) at the center of the bridge. (d) If, on the other hand, R env ≫ Z L , the image phase slips have opposite sign such that the real phase slip is attracted to the wire's ends and a potential maximum occurs in the center of the wire. The analogous 2D magnetic case is that of an isolated superconducting strip [122].
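One plausible lumped reading of the environment circuit of fig. 8(b) is sketched numerically below: a DC compliance R DC in parallel with a series R env -C sh branch, which reproduces the quoted limits Z env → R DC at low frequency and Z env → R env above the high-pass corner (R env C sh ) −1 . Both the topology and the component values here are assumptions for illustration, not the fitted parameters of Appendix D.

import numpy as np

# Assumed lumped environment: R_DC in parallel with a series R_env-C_sh branch.
R_DC, R_env, C_sh = 1.0e6, 100.0, 1.0e-12    # [ohm], [ohm], [F] (placeholders)

def Z_env(omega):
    Z_branch = R_env + 1.0/(1j*omega*C_sh)    # series R_env - C_sh
    return (R_DC*Z_branch) / (R_DC + Z_branch)

print(f"high-pass corner ~ {1.0/(2*np.pi*R_env*C_sh):.2e} Hz")
for f in (1e3, 1e8, 1e11):
    print(f"|Z_env({f:8.0e} Hz)| ~ {abs(Z_env(2*np.pi*f)):10.1f} ohm")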
These considerations regarding electric field penetration into finite wires in 1+1D have direct analogs in the physics of magnetic vortex penetration in 2D. In fact, as discussed in Appendix B, the equilibrium thermodynamics governing type II magnetic flux penetration (in terms of a Gibbs free energy which includes the magnetic work done by or on the field source), has an exact analog in our 1D case (in terms of a Euclidean action which includes the work done by or on the circuit environment). Thus, under appropriate conditions, all of the well-known results concerning type II flux penetration in 2D can be appropriated for our purposes here, in particular the existence of type II phase slip "lattices" corresponding to spatially and temporally periodic electric field penetration. An example of the current distributions for the two lowest-action type II phase slip lattices, for a wire with l ≪ λ E in a low-impedance environment (R env ≪ R Q , Z L ) corresponding to an effective voltage bias, is shown schematically in fig. 9(a). These two lattices can be identified directly with the two lowest energy bands
of an approximately lumped phase-slip junction, as shown in fig. 9(b), and discussed by MN [59]. To see this, first consider the total Euclidean action S tot II (x) of a type II phase slip at position x in the R env ≪ Z L , R Q limit, and the corresponding classical energy barrier δE II (x) (x = 0 is taken to be the middle of the wire), as given in eq. 49. Here, the first line of eq. 49 is valid as long as β −1 = k B T ≪ ℏω p , and in the second line the summations are over image phase slips. In the λ E ≫ l limit we can neglect the x-dependence as well as the first (self-energy) term, and replace the sums with an integral, to obtain eq. 50, where E L ≡ Φ 0 ²/2L k is the inductive energy of the wire with total kinetic inductance L k . Thus, the first term in eq. 50 is precisely the kinetic-inductive energy E L /4 that would be approximately expected at Φ = Φ 0 /2 from fig. 9(b) in the S 0 ≫ 1 limit, as well as from the lumped-element description of MN [59], and the second term is the leading-order correction to this result in the small quantity l/λ E . Since a constant voltage across the wire implies that Φ evolves at a constant rate, corresponding to motion at constant "velocity" along the horizontal axis (dΦ/dt ≡ V ) of figs. 9(b),(c), the type II phase-slip cores can be identified with the avoided crossings that define the energy bands U 0 (Φ) and U 1 (Φ). The crossings shown at half-integer values of Φ/Φ 0 occur where two states with m differing by 1 are coupled, and correspond to a single phase-slip core in the wire. The crossings at integer values of Φ/Φ 0 (the upper state of which is U 2 (Φ), not shown in the figure) occur where states with m differing by 2 are coupled, and therefore correspond to the simultaneous presence of two phase slip cores in the wire, as shown in the upper half of (a) at these points. The temporal current oscillations [fig. 9(d)] that occur in the lowest energy band at fixed voltage are the exact dual of Bloch oscillations in a lumped JJ [64][65][66][67]78].
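A minimal numerical sketch of the band picture just described is given below: winding states m carry inductive energy (Φ − mΦ 0 )²/2L k and adjacent m are coupled by the phase-slip energy (taken here as E S /2 off-diagonal, so that the avoided crossing at half-integer Φ/Φ 0 has width E S ). The specific values of L k and E S are placeholders, not those of any wire in this work.

import numpy as np

Phi_0 = 2.068e-15                    # flux quantum [Wb]
L_k = 2.0e-9                         # total kinetic inductance [H] (placeholder)
E_L = Phi_0**2 / (2*L_k)             # inductive energy scale
E_S = 0.05 * E_L                     # phase-slip energy (placeholder, << E_L)

def lowest_bands(phi_frac, n_m=7):
    """Lowest two band energies at flux phi_frac*Phi_0."""
    m = np.arange(-(n_m//2), n_m//2 + 1)
    H = np.diag((phi_frac*Phi_0 - m*Phi_0)**2 / (2*L_k))
    for i in range(n_m - 1):
        H[i, i+1] = H[i+1, i] = E_S/2.0      # coupling of states with m differing by 1
    return np.sort(np.linalg.eigvalsh(H))[:2]

for x in (0.0, 0.25, 0.5):
    U0, U1 = lowest_bands(x)
    print(f"Phi/Phi_0 = {x:4.2f}:  U1 - U0 = {U1-U0:.3e} J   (E_S = {E_S:.3e} J)")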
Beginning with the seminal work of Giordano [36], nearly all the experimental efforts to observe evidence for QPS have focused on the region near T C where the stiffness V 1D goes to zero, so we begin our discussion of experiments with this regime. The motivation behind such experiments is the idea that quantum phase slips should become exponentially more frequent as the energy barrier is lowered. Of course, thermally activated phase slips also become exponentially more frequent, so that the objective in such measurements can only be to observe qualitative deviations from simple LAMH thermal activation as the temperature is lowered, in the hope that such deviations can be identified with QPS. A wealth of experimental data now exists in which resistance vs. (T C −T ) measurements of superconducting nanowires are compared to LAMH theory, for a range of materials including In [36], Pb [38], PbIn [37], Al [35,40,41,51], Ti [42], MoGe [18,39,53,123], Nb [48], and NbN [124]. In many cases deviations are indeed observed, usually in the form of a significantly weaker slope on a plot of log R vs. T (as opposed to the clear crossover in behavior seen in Giordano's original measurements) †. This departure from LAMH behavior has been attributed to QPS either using Giordano's model [18,36,39,41,43] or a variant of it in which the purely heuristic energy scale /τ GL in Giordano's quantum phase-slip-induced resistance is replaced by the GZ result [44,45]. Although some reasonable agreement can often be obtained for individual experiments, when all of the available data are considered together, one encounters a problem: the ostensibly quantum-phase-slip induced deviation from LAMH theory does not seem to scale as expected with the predicted QPS action. For example, based on the GZ model, the T = 0 phase-slip action for Giordano's original 41-nm wide In wire (which exhibited a dramatic departure from LAMH behavior) is S GZ ≈ 100, whereas S GZ ≈ 13 for Bezryadin's 7-nm MoGe wires which showed no anomalous departure from LAMH at all. As we will now show, our model provides a possible explanation for this counterintuitive trend, in terms of thermal fluctuations over the type II phase slip energy barrier.
We cast our problem in a form analogous to the original work of LAMH [26,27], using eq. 2 to obtain the general expression, eq. 51, for a thermal phase-slip-induced effective resistance [18,27,36,39] (also used to describe thermal phase slips in JJs [79,84,85]), where δE ps is the classical energy barrier, and Ω ps is the attempt frequency [84,85]. We consider three distinct, simplified regimes: (i) where λ E ≫ l, for which the energy barrier is given by eq. 50 and illustrated in fig. 9(c); (ii) where λ E ≪ l, so we can neglect entirely the statistical weight of paths that interact with the ends, and where we have defined the effective total inductance for a type II phase slip L λ ≡ πL k λ E /4 (by analogy to eq. 50); and finally (iii), an intermediate regime where λ E ∼ l, so that the energy barrier is a saddle point at the wire's center like that shown in fig. 8(c), and we can make the approximation that all phase slips occur at that point, truncating the sum at some small N beyond which the additional terms can be neglected.
We model Ω ps in a simple manner based on well-known results for lumped JJs, where we treat the thermal fluctuations for each length λ E of wire as independent if λ E ≪ l †, or the whole wire as a single fluctuating region if l ≲ λ E . We describe each fluctuating region in terms of an effective Josephson inductor L f in parallel with an effective damping resistance R f and shunt capacitance C f . For case (i) (λ E ≫ l), these quantities are simply L k , R env , and C sh ; for cases (ii) and (iii) (λ E < l) we take instead: L λ , Z L (the effective resistance looking out of the fluctuation region into the plasma modes of the wire), and C l (kλ E = 1) [c.f., eq. 21]. Strictly speaking this is only correct in case (ii), of course, but we use it here as an estimate also for case (iii). The attempt frequency is given approximately by the standard result of ref. [85], and this expression holds in the limit where k B T ≫ ℏΩ ps . In the overdamped regime (Q f ≪ 1) which is relevant in all experimental cases of interest here, Ω ps ≈ R f /L f . Figure 10 shows, for the parameters of four experimental cases (tabulated in Appendix C), the resulting R vs. T obtained from our model, all of which compare favorably with the corresponding experimental observations ‡. In addition, for each case the corresponding LAMH prediction is shown by a red dashed line. Notice that while QPS gets stronger from (a)-(d), the deviation from LAMH temperature scaling gets weaker, just as observed in the experiments. As we will now explain, the reason in our model for this seemingly paradoxical behavior is the crucial role played by the temperature dependence of λ E (which has no analog in previous theories for QPS), shown in the bottom graph of each panel in fig. 10, relative to l φ and the wire length l.
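A compact numerical sketch of this estimate is given below. Since eq. 51 is not reproduced here, the code assumes a standard LAMH-type Arrhenius form, R eff ≈ R Q (ℏΩ ps /k B T ) exp(−δE ps /k B T ), together with the overdamped attempt frequency Ω ps ≈ R f /L f quoted above; the barrier and circuit values are placeholders rather than the tabulated parameters of Appendix C.

import numpy as np

hbar, k_B = 1.054571817e-34, 1.380649e-23
R_Q = 6.45e3                                  # h/(2e)^2 [ohm]

def R_eff(T, dE_ps, R_f, L_f):
    """Assumed LAMH-type form; Omega_ps ~ R_f/L_f in the overdamped regime."""
    Omega_ps = R_f / L_f
    return R_Q * (hbar*Omega_ps/(k_B*T)) * np.exp(-dE_ps/(k_B*T))

dE_ps = 60.0 * k_B                            # fixed barrier of 60 K (placeholder)
for T in (1.0, 2.0, 3.0, 4.0):
    print(f"T = {T:.1f} K:  R_eff ~ {R_eff(T, dE_ps, R_f=100.0, L_f=5.0e-9):.3e} ohm")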
First of all, as T → T C , notice that in all cases we have l > λ E ≳ l φ , such that the corresponding energy barrier [c.f., eq. 49] has a similar magnitude and temperature scaling to δE LAMH [c.f., eq. 3] (in this regime the Bessel function K 0 varies only logarithmically). In this limit, then, all of our predictions for the four cases either approximately coincide with or approach that of LAMH *. Now, starting with the case of Giordano's In wire where QPS is the weakest, as T is lowered λ E increases very quickly, becoming much larger than the wire length already by around T = 4 K. In this limit, eq. 50 for the barrier applies, which has the ∼ 1/(T C − T ) dependence of L k , the total inductance of the wire. This scaling is significantly slower than in LAMH theory, resulting in the clear crossover shown in the figure.

† This is approximately valid for thermal type II phase slip rates which are low enough that we can neglect the statistical weight of paths in which phase-slips interact with each other substantively.

‡ In fact, for panel (a) the agreement with experiment in the LAMH region of the curve is obtained without the ad hoc 4x reduction in the energy barrier used by Giordano [36] in order to fit LAMH theory to his observations in this region.

* Note that our treatment of δE II is strictly valid only when κ E ≫ 1, since we have neglected the action associated with the phase slip core in comparison to the screening "currents" j in eq. 45. This argument is entirely analogous to that made in the context of magnetic vortices in 2D in the type II limit [94]. Very close to T C where typically κ E ≪ 1, the core contribution becomes dominant, our result δE II is no longer applicable, and we expect the resulting energy barrier to cross over to δE LAMH . One might in fact view the LAMH phase slip as the type I analog of our type II phase slips, where the corresponding 2D situation would be a mixed state of a type I superconductor in which a single flux quantum penetrates in a 2D region of linear dimension ∼ ξ inside which the gap is suppressed to zero.

Figure 10 caption (fragment): ... [42] (S 0 = 9.0, S GZ = 16); (d) 7.5-nm MoGe wire (S1) from ref. [43] (S 0 = 5.6, S GZ = 13). These curves compare favorably with the experimental results. Dashed black lines are shown in the cases where our model predicts a crossover between two regimes considered in the text, and the solid black line is then a guide to the eye in connecting these smoothly. Predictions of LAMH theory [26,27] are shown by red dashed lines. The bottom half of each panel shows the predicted temperature dependence of λ E (blue curve) and l φ = 1.8ξ (red curve). For the In case in (a), with weakest QPS, λ E increases sufficiently quickly as T is lowered that a clear crossover is observed when it becomes much larger than the wire length l. In the Al (b) and Ti (c) cases which have progressively stronger QPS, λ E becomes shorter and the crossover is obscured, such that the qualitative signature is only a reduced slope and change of curvature on the log plot, which in both cases was fit to a Giordano-like model in the experimental references [41,42]. Finally in the case of MoGe (d), QPS is sufficiently strong that λ E does not vary appreciably over the relevant temperature range, and the temperature scaling of the energy barrier becomes very similar to that predicted by LAMH.
Thus, in our model the crossover which was previously attributed to a transition from thermal to quantum phase slips is explained instead by a change in the T -dependence of the energy barrier for purely thermal phase slips (when λ E becomes larger than the total wire length l). Extending this interpretation to the different behaviors in panels (b)-(d), we find that our model indeed predicts more and more LAMH-like behavior as the strength of QPS in increased, due to the reduced temperature dependence of λ E . In the intermediate case of Al [41] (b), the crossover is still present but is sufficiently smoothed out that it is also qualitatively consistent with a Giordano-like model, which was used to fit the corresponding data in ref. [41]. For the Ti wire of panel (c), QPS has become sufficiently strong that there is no longer any crossover, as λ E remains well below l over the entire temperature range. For this case the deviation from LAMH scaling that is still present is simply a residual effect of the temperature dependence of λ E , which although smaller than (a) and (b) is still non-negligible, and causes the barrier height to go up more slowly as temperature is decreased than δE LAMH . This modified dependence can also be fit with a Giordano-like model, as in ref. [42]. Finally, the MoGe wire shown in (d) [43] has sufficiently strong QPS that λ E varies little over the entire relevant temperature range, and there is almost no deviation from LAMH scaling, as shown in the figure. Thus, in a low-Z environment, our model predicts that QPS appears in R vs. T measurements only indirectly, via the phase diffusion [79] and associated resistance arising from thermal hopping over the type II phase slip energy barrier.
Similar conclusions arise from our model regarding the more recent experiments of Bezryadin [43,52], in which the bias current was increased, with the temperature held fixed, and far below T C . These experiments were modeled after the seminal measurements of macroscopic quantum tunneling in JJs [31], in which effective "escape rates" out of the Josephson potential well were observed as a function of current [c.f., fig. 1(a)], from which an effective temperature of the phase fluctuations T eff could be inferred. At higher bath temperatures T (still much less than T C ) it was found that T eff ≈ T ; however, as T was lowered, T eff saturated at a minimum value known as the quantum temperature T Q , which could be explained quantitatively in terms of the expected quantum phase fluctuations of the circuit. Similar results were obtained for continuous MoGe nanowires in ref. [43,52], and this was taken as a signature of quantum phase fluctuations associated with QPS [43,52]. However, neither the quantitative values of T Q extracted from these measurements, nor its dependence on wire parameters, was explained. Furthermore, it remained a mystery why the wires which exhibited nonzero apparent T Q also showed no sign of the deviations from LAMH-type temperature scaling of resistance near T C which were previously attributed to QPS.
We now show how these phenomena can also be described by our model. We consider the lumped-element case corresponding to the energy band U 0 (Φ) shown in fig. 9(b) (since for the parameters of these wires we have λ E > l at T = 0), treating it as a classical potential energy and neglecting transitions to higher bands (in the same manner that the lowest quasicharge band of a lumped JJ in a high-Z environment is often treated [64][65][66][67]78]). The effect of an external bias current I b can be described, just as for a JJ, by the additional potential energy of eq. 55 (a washboard-like tilt of the band, linear in Φ), which lowers the energy barrier for phase slips in one direction while raising it in the other [18,26,27,79,94] [fig. 11(a),(b)]. As the barrier is lowered by increasing I b , the phase particle has an increasing chance to surmount it per unit time due to a phase fluctuation. If this occurs, it can either be re-trapped in the adjacent potential well by the damping due to R env , or it can "escape" into the voltage state corresponding to a terminal "velocity" V = dΦ/dt (determined by its effective mass and the damping) †. The current at which this occurs then corresponds to the switching current I sw measured in ref. [43]. Based on our discussion of case (i) above (l < λ E ), we can adapt the well-known analysis of MQT in JJs to the present purpose, from which we obtain the crossover temperature T cr where the fluctuation energy scale in the exponent of eq. 51 goes over from k B T to k B T Q . In the overdamped limit this takes a simple form in which the capacitance C sh does not appear, which illustrates that "quantum temperature" would be a misnomer for this quantity; as discussed in ref. [85], in the overdamped limit quantum tunneling does not contribute to the escape rate at all. Rather, it is dominated for T ≪ T cr by the classical fluctuations that necessarily come with strong damping, via the fluctuation-dissipation theorem *.

Figure 11. Quantum temperature and switching current in a low-Z environment. (a) lowest two calculated energy bands U 0 (I b , Φ) and U 1 (I b , Φ) for wire S1 of ref. [43] at I b = 2 µA. (b) expanded view of the residual potential well in U 0 (I b , Φ). Fluctuations of the L k − R env − C sh circuit produced by the wire and its environment can cause the phase particle to escape from this well even when there is still a potential barrier, at which point a voltage appears [31,79]. (c) calculated quantum temperature, and (d) switching current, for wires S1-5 of ref. [43] (blue symbols) and A-F of ref. [52] (red symbols) vs. the values inferred from measurements. T Q predictions were obtained using ref. [85], and I sw predictions were derived from eq. 55, assuming that switching occurs at the bias current where the potential well depth is reduced to the experimental T Q . With the exception of wire S3 of ref. [43] and wire B of ref. [52], the agreement is good in both cases (c) and (d). The fixed parameters used to obtain this agreement are discussed in Appendix D, and the primary adjustable parameter was R env . We extract the values: 110 Ω for the data of ref. [43] and 35 Ω for ref. [52]. This difference is quite plausible, since the phase-slip plasma frequencies at which R env is to be evaluated are about an order of magnitude higher in the former case (since the wires have significantly smaller A cs ).

Figure 11(c) shows a comparison between the experimental results of refs.
[43,52] and our expectations based on the discussion above (the parameters used for this comparison are discussed in Appendix D). For nearly all of the reported wires, the agreement is relatively good. We can also compare the average switching current into the voltage state I sw observed in refs. [43,52] with our prediction based on eq. 55 (we take the predicted switching current to be that at which the depth of the potential well is equal to k B times the observed quantum temperature). Figure 11(d) shows that the agreement with experiment is also good for the same wires. Our discussion also suggests a different explanation for another observation in refs. [43,52] that was highlighted as direct evidence for QPS: the fact that the width of the stochastic probability distributions P (I sw ) (obtained from many repeated I sw measurements) increased as T was lowered. Since the system is overdamped, at high T the phase particle moving in the potential U 0 (I b , Φ) can be thermally excited over a barrier many times (undergo many phase slips), each time being re-trapped by the damping, before it happens to escape into the voltage state. At low T , these excitations are sufficiently rare that in a given time the system is more likely to experience a single fluctuation strong enough to cause escape than it is to experience multiple weaker fluctuations which act together to cause escape. Just as for JJs, this produces a P (I sw ) that broadens as T is lowered [79], since fewer phase slips are associated with each switching event, and the resulting stochastic fluctuations of I sw are larger. Note that in contrast to ref. [43], where these results were explained by local heating of the wire by individual quantum phase slips, our discussion would suggest that the energy I b Φ 0 released during a type II phase slip is dissipated in the environmental impedance R env .
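The switching-current criterion used above can be mimicked numerically: tilt the lowest band by the bias term (taken here as −I b Φ, the washboard form implied by the energy I b Φ 0 released per slip) and find the bias at which the residual well depth falls to k B T Q . The sketch below uses a crude piecewise-parabolic U 0 that ignores the avoided-crossing rounding, and all numbers (L k , T Q ) are placeholders rather than the wire parameters of Appendix D.

import numpy as np

k_B, Phi_0 = 1.380649e-23, 2.068e-15
L_k, T_Q = 2.0e-9, 0.5                         # [H], [K]  (placeholders)

phi = np.linspace(-0.5*Phi_0, 1.5*Phi_0, 4001)
m = np.round(phi/Phi_0)
U0 = (phi - m*Phi_0)**2/(2*L_k)                # crude lowest band (E_S rounding ignored)

def well_depth(I_b):
    U = U0 - I_b*phi                           # washboard tilt (assumed -I_b*Phi form)
    i_min = np.argmin(U[:2000])                # bottom of the first well
    return U[i_min:].max() - U[i_min]          # barrier height measured from that minimum

I = np.linspace(0.0, Phi_0/(2*L_k), 500)       # sweep up to the classical critical tilt
depth = np.array([well_depth(i) for i in I])
I_sw = I[np.argmin(np.abs(depth - k_B*T_Q))]
print(f"estimated I_sw ~ {I_sw*1e6:.2f} uA for T_Q = {T_Q} K")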
Very recently, in the wake of MN's seminal work [59], several experimental groups have pursued entirely new experimental approaches that have allowed more direct observation of QPS phenomena [71,[73][74][75]80]. Astafiev and co-workers [71] have demonstrated the phase-slip qubit of ref. [93], where the nanowire is contained in a closed superconducting loop, using both InO x and NbN films. This can be viewed as the case of R env = 0, such that as long as the inductance of the rest of the loop can be neglected, the external flux through the loop corresponds to a fixed-phase boundary condition for the nanowire. When Φ 0 /2 threads the loop, the PSJ is then biased right at the avoided crossing of width E S in fig. 9(c), such that direct spectroscopic measurement of this splitting becomes possible. For the InO x wires, E S /h ∼ 5-10 GHz [71] was observed, and for the NbN wires E S /h ∼ 1-10 GHz [72] (note that this particular technique could only measure values in this range due to the microwave bandwidth of the apparatus). It is interesting to note that in our model, the phase-slip qubit biased at Φ 0 /2 corresponds to a type II phase slip essentially trapped in the wire, such that a null in the order parameter (of size ∼ l φ ) is present somewhere [c.f., fig. 9(a)] †. Another recent pair of experiments, in two different groups, measured NbSi [73,80] and Ti [75] wires biased through Cr or Bi nanowires with extremely large DC resistances. A clear Coulomb blockade was observed in both cases, with threshold voltages V C ∼ 700 µV for the NbSi [80], and V C ∼ 800 µV for the Ti [75].
In table 1, we show that our model can approximately reproduce these observations. Note that although the InO x and NbN cases fall approximately within the lumped-element regime λ E > l where we can use V C ≈ E S π/e, the opposite is true (λ E ≪ l) for the NbSi and Ti wires. In these two cases, as discussed for 1D JJ arrays in the Coulomb blockade regime [114], the blockade voltage expected when the system is much longer than the soliton length (our λ E ) is given by V C ≈ E C λ E , where E C = E S π/(el) is the critical electric field. This critical voltage for λ E ≪ l is then defined by the condition that the energy barrier for a single soliton of size ∼ λ E to enter the array goes to zero, and the subsequent current flow just above V C is carried by a train of these 2e-charged objects [114].
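The two limits just quoted are easy to evaluate; in the sketch below E S , l, and λ E are generic placeholders (the actual values for the four materials are those of table 1 and Appendix E), and the code simply compares V C ≈ πE S /e in the lumped regime with V C ≈ E C λ E , E C = πE S /(el), in the long-wire regime.

import math

e, h = 1.602176634e-19, 6.62607015e-34
E_S = h * 5.0e9                   # phase-slip energy for E_S/h = 5 GHz (placeholder)
l, lam_E = 40.0e-6, 0.5e-6        # wire length and electric penetration depth [m]

V_C_lumped = math.pi * E_S / e            # lambda_E > l
E_C = math.pi * E_S / (e * l)             # critical electric field [V/m]
V_C_long = E_C * lam_E                    # lambda_E << l
print(f"lumped limit:    V_C ~ {V_C_lumped*1e6:6.1f} uV")
print(f"long-wire limit: V_C ~ {V_C_long*1e6:6.2f} uV")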
Table 1. Comparison of our model with quantum phase slip observations on several systems. In all cases we take l φ = 1.8ξ(0) and ǫ out = 5.5ǫ 0 . The electric penetration depth was calculated from eq. 34; for InO x and NbN, where λ E > l, the critical voltage was calculated using V C = E S π/e and eq. 26; for Ti and NbSi where λ E ≪ l, we used V C ∼ E C λ E as in ref. [114] for blockaded JJ arrays. The last two columns show the GZ result for different values of the coefficient A in eq. 57, which separately produce agreement with one of the observations. (a) Inferred from measurements on the insulating side of a metal-insulator transition: ref. [125] for InO x and ref. [126] for NbSi. (b) Inferred from the plasma frequency extracted from measurements on much thicker NbN films (∼30 nm) [127]. (c) Chosen by optimizing agreement between fig. 10(c) and the experiments of ref. [42]. Note that the predictions for this Ti wire are relatively insensitive to the choice of ǫ in and a because S 0 is of order unity due to the small gap.

† Note that the same is true for any flux qubit when a half-integer number of Φ 0 threads the loop, such that two counter-rotating currents interfere destructively. However, in a conventional flux qubit based on one or more JJs, the corresponding null in the order parameter occurs inside an insulating JJ barrier. This may be an important distinction from the phase-slip qubit of refs. [71,93], because there are no low-lying electronic states in the insulating JJ barrier, while there should be such states inside a region of superconducting wire where the gap is forced to zero by an applied boundary condition (i.e. the flux through a closed loop). The presence of such states might act as a source of dissipation and/or decoherence.

The primary unknown physical parameter which enters into these estimates for E S and V C is ǫ in , the chosen values for which are shown in table 1. Also shown are some related values for this quantity derived from various experiments for three of the cases (we were unable to find an experimentally-derived value for Ti). Since the real part of a metal's dielectric constant is nearly always dominated by the strong inductive response of free carriers under typical experimental conditions, it is nontrivial to determine the underlying permittivity due only to bound charges that is relevant for our model of QPS,
which we have called ǫ in . For the cases of InO x and NbSi, we show experimental values obtained on the insulating side of the metal-insulator transition in these materials, such that the free carrier response is no longer present. It is plausible that these values provide a useful estimate of the desired quantity on the metallic side of the transition, although this is by no means certain. For the case of NbN, we show a value extracted by fitting to far-infrared absorption spectra; these measurements were made on a film ∼10 times thicker than the one used in ref. [72] where QPS was observed, however, so it is likely that this value is an underestimate. For each of the four materials shown in table 1, we list two possible values for the parameter a, which is used to obtain the kinetic inductivity Λ = µ 0 λ 2 (which then determines the stiffness V 1D ) according to the relation: where ρ n is the normal-state resistivity, ∆ is the superconducting gap, and a = 1, ∆ ≈ 1.78k B T C in BCS theory. In the phase-slip qubit experiments on InO x and NbN, the total kinetic inductance of each wire was extracted from direct measurements, fixing a = 1.8 for InO x and a = 4.8 for NbN. These are significantly different from the BCS value, which may be indicative of proximity to a disorder-driven SIT at which the bulk superfluid stiffness (∝ Λ −1 ) goes to zero while the local pairing gap remains finite [81]. For these two materials we list also a corresponding a = 1 case, where we reduce ǫ in to keep the calculated E S close to the observed value. In the Coulomb blockade measurements (second two rows), the inductance was not measured directly, so we simply show the two cases a = 1 and a = 2 in the table for comparison. The question is: near a SIT where the value of a inferred from bulk measurements can be substantially larger than unity (ostensibly due to disorder-driven quantum phase fluctuations), is it appropriate to use the bulk kinetic inductivity to calculate the local superfluid stiffness V 1D relevant for QPS? This may be an important question, since it has been hypothesized that close proximity to a SIT of this type is a determining factor in the successful observation of nonzero QPS [71]. Any mechanism for the SIT in these materials which involves only quantum phase fluctuations (in order to explain the observed coexistence of bulk insulating behavior and a local superconducting gap in the insulating state [81]) would seem to require the existence of a microscopic phase correlation length, such that the relative phase is well-defined between two points spaced closer together than this, and such that finite superfluid stiffness remains for wavelengths shorter than this [128]. Furthermore, it would seem unphysical for this length scale to be significantly smaller than the superconductor's coherence length ξ, without a corresponding suppression of the gap †. This suggests that the stiffness relevant for QPS, which involves quantum phase fluctuations at the length scale l φ ∼ ξ, is not the bulk stiffness inferred from the macroscopic kinetic inductivity, but rather a local stiffness related only to the gap (corresponding to a = 1). Interestingly, however, as shown in table 1 for the NbN case where we set a = 1, it was necessary to adjust ǫ in all the way to unity to approach the experimentally observed range of E S . 
Since it is unlikely to be the case that ǫ in = 1 in this material, and the value ǫ in = 90 obtained using a = 4.8 is quite plausible, this could be an indication that at least in this case the stiffness is suppressed even on length scales ∼ ξ as the SIT is approached from the superconducting side.
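For completeness, the sketch below turns a normal-state resistivity and a gap into a kinetic inductivity Λ = µ 0 λ², assuming the standard dirty-limit form Λ ≈ aℏρ n /(π∆) with ∆ ≈ 1.78k B T C for a = 1; this assumed form is consistent with the discussion above but is not a reconstruction of the relation as printed, and the numbers are placeholders rather than the values of Appendix E.

import math

hbar, k_B, mu_0 = 1.054571817e-34, 1.380649e-23, 4e-7*math.pi

def kinetic_inductivity(rho_n, T_C, a=1.0):
    """Assumed dirty-limit relation: Lambda ~ a*hbar*rho_n/(pi*Delta), Delta ~ 1.78 k_B T_C."""
    Delta = 1.78 * k_B * T_C
    return a * hbar * rho_n / (math.pi * Delta)

Lam = kinetic_inductivity(rho_n=1.5e-6, T_C=5.0, a=1.0)   # placeholder inputs
lam = math.sqrt(Lam/mu_0)                                  # penetration depth [m]
print(f"Lambda ~ {Lam:.2e} H*m,  lambda ~ {lam*1e9:.0f} nm")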
† This is apparent in two well-known "phase-only" models for the SIT: in one, the nominally uniform film is treated as an inhomogeneous system of superconducting islands coupled by tunneling, essentially a JJ array [129,130]. In this case the phase correlation length cannot be smaller than the island size, and if the island size is much smaller than ξ the Coulomb interaction on the islands will likely suppress the gap [131]. Alternatively, in the so-called "dirty boson" model, the quantum phase fluctuations are described in terms of vortex-antivortex pairs [21]. In order for such a system to have a phase correlation length shorter than ξ, the non-superconducting cores of the vortex fluctuations (with size ∼ ξ) would need to overlap substantially, and the average gap would be consequently reduced.

The last two columns of table 1 show the corresponding predictions of the GZ model in the same four wires, according to ref. [45] (eq. 57),
where ∆ is the superconducting gap, and S GZ is given by eq. 48. For these two columns, we have chosen values of the parameter A for which the resulting prediction agrees with one or the other of the observations of a given type (E S or V C measurement). As shown in the table, each case requires a different value for the coefficient A to produce agreement with experiment (given the same material parameters used for our estimates, tabulated in Appendix E). The difference is particularly large for the Ti wire, which is extremely long, and therefore requires a large value A = 3.4 to fit the observed V C ; by contrast, in our model V C becomes independent of length once the wire is much longer than λ E , since in this regime it is defined by a vanishing energy barrier for the entry of a single CP soliton of size λ E ≪ l.
Destruction of superconductivity in 1D
In this final section we consider a possible relationship between our model and the observed destruction of superconductivity all the way down to T = 0 for short wires with R n ≳ R Q . Previous theories have predicted insulating or metallic behavior as the wire diameter [44,45], the characteristic impedance Z L [44,45,54], or an external shunt resistor [54] is tuned through a critical value (our model also makes the latter two predictions, as described in sections 5 and 6). However, none can obviously explain a T = 0 transition at R n ∼ R Q in a low-Z electromagnetic environment. In all of these theories the predicted transition relies on the presence of a form of dissipation which somehow remains even as T → 0, such as anomalous excited quasiparticles [57], a resistive shunt [54], continuum plasmon modes [44,45,54], or the quantum phase-slips themselves [56].
Our discussion suggests a possible alternative view, in which a T = 0 SIT may be driven by disorder-induced quantum phase fluctuations, analogous to the SIT observed in some quasi-2D systems [22,23] when the sheet resistance R □ ≳ R Q †. This 2D disorder-induced SIT has been interpreted using the "dirty boson" model of Fisher and co-workers [21], in which disorder nucleates (virtual) unbound vortex-antivortex pairs (VAPs), with sufficient strength that these unpaired vortices themselves form a Bose-condensate, destroying long-range phase coherence and producing a gapped insulator [21]. This is closely related to the Berezinskii-Kosterlitz-Thouless (BKT) vortex-unbinding transition in the classical 2D XY model [6][7][8].
To connect these ideas to our system, we first recall our discussion above of the BKT-like quantum phase transition expected when K is decreased from large values down to unity, associated with unbinding of type II phase slip-anti phase slip pairs in 1+1D. This transition is driven in our model by microscopic, homogeneous phase fluctuations associated with the effective permittivity for electric fields along the wire, or equivalently, by zero-point fluctuations of the Cooper pair plasma oscillation at length scales ∼ l φ . As predicted in ref. [134], however, a different kind of transition is also possible, driven by disorder. In the language of the (2+1D) dirty boson model: disorder can nucleate virtual phase slip-anti phase slip pairs in the ground state, which at some critical disorder strength overlap sufficiently to form a "condensate" (in this case of instantons [76,111]) with an insulating gap. In the dirty boson model, the T = 0 critical point at R □ ∼ R Q = Φ 0 /(2e) corresponds to approximately one vortex crossing for every Cooper pair crossing [21]. In our 1D case, the corresponding critical point could plausibly be R n ∼ R Q . In fact, in ref. [135] the existence of just such a universal conductance ∼ R Q −1 in 1D at the critical point of a SIT was predicted. Such a disorder-based (as opposed to dissipation-based) mechanism may also be able to explain why the SIT in MoGe nanowires was only clearly evident for short wires with length ≲ 200 nm [18,39]. Since the logarithmic interaction between type II phase slips is cut off beyond separations ρ ∼ λ E [c.f. eq. 47] (which effectively functions as the coherence length/time near the transition), we might expect to see a weakening or disappearance of the SIT as the wire becomes significantly longer than λ E [17]; in fact, our theory predicts λ E ∼ 100-300 nm for the relevant MoGe wires *.
These ideas may have importance to some recent work on "honeycomb" bismuth films, consisting essentially of 2D networks of nanowires [137]. In a remarkable sequence of experiments, a SIT was observed in films with two different network geometries at thicknesses corresponding not to a sheet resistance of R Q , but instead to the point at which R n of each nanowire passed through R Q , just like the quasi-1D observations of ref. [49]. This may suggest that at the experimentally accessible temperatures, these nanostructured films had not yet reached a 2D universal regime, but were rather in an intermediate regime where quasi-1D behavior of the "links" in the wire network still dominated the transition. A crossover between these two regimes would be controlled by the coherence between QPS in all of the nanowire links connected to each "island" node in the network. If the QPS amplitudes for adjacent links are incoherent, the transition would still exhibit quasi-1D behavior. This coherence would be expected to depend, via Aharonov-Casher-like phase shifts, on charge fluctuations on the nodes [34,77]. What then would be expected to occur if this coherence existed, such that the film appears uniform from the point of view of QPS?
The original works of LAMH can be used to view the transition in quasi-1D wires from a metallic state to a superconductor as the temperature is lowered in terms of thermally-driven, topological phase fluctuations in 1+1D: phase slips; these can be described formally as passage through the wire of vortices, 1D topological line defects. Mooij and co-workers extended this idea to zero temperature, effectively postulating quantum tunneling of these objects, which we have modelled in our work based on an effectively finite mass and zero-point motion arising from the permittivity for electric fields along the wire. This leads to the following idea: In 2D, one-dimensional line defects (vortices) control the superconducting transition via the BKT mechanism as the temperature is lowered. In 3D, correspondingly, it has long been thought that vortex rings, effectively 2D objects, control the analogous transition. This idea has been applied to the lambda transition in 4 He [10,11], high-T C superconductors [1], ordering in liquid crystals [5], and even to structure formation in the early universe [1,2]. Starting with such 2D topologically-charged objects, we can imagine a 2D quantum tunneling phenomenon analogous to our 1D QPS, in which a thin film undergoes a quantum fluctuation process that can be viewed formally as tunneling of vortex rings. Just as motion of a line defect through a wire creates a "kink" in some field quantity in 1D, motion of the corresponding 2D ring defect through a film would create a point defect in 2D, inside of which the phase has slipped by one cycle relative to everywhere outside. Coherent tunneling of this kind throughout a very thin film should create a 2D insulating state analogous to what we have discussed here in 1D, and this may have some connection to the so-called "superinsulating" state suggested in the context of very thin, highly-disordered superconducting films [138,139].

* Note that our analogy to the dirty boson model would not explain the observed reduction in T C near the 1D SIT in refs. [49,53]. This reduced T C may be explained by the coexistence in these wires of an unrelated phenomenon: gap suppression due to an enhanced Coulomb interaction [131,136]. This is believed to be the origin of a similar phenomenon observed in thin MoGe films [133] with very similar properties to the wires of refs. [49,53].
Conclusion
We have described a new alternative to existing theories for quantum phase fluctuations in quasi-1D superconducting wires, built on the hypothesis of flux-charge duality [59] between these phase fluctuations and the charge fluctuations associated with Josephson tunneling. A crucial aspect of our model is the idea that the electric permittivity due to bound charges both inside and near the wire provides the electrodynamic environment in which quantum phase fluctuations occur. Quantum phase slip can in an abstract sense be viewed as tunneling of "fluxons" (each carrying flux Φ 0 ) through the wire, and in our model the permittivity constitutes an effective "mass" for these objects, whose resulting zero-point "motion" produces tunneling. In exactly the same way, the kinetic inductance of a superconductor (which arises directly from the finite electron mass) can be viewed as producing the quantum fluctuations responsible for Josephson tunneling. In our model, both QPS and JT arise from zero-point fluctuations of shortwavelength plasma-like oscillations of the Cooper pairs; QPS tends to occur when the impedance of these oscillators and their environment is very high, such that quantum phase fluctuations are only weakly damped and charge tends to be the appropriate welldefined quantum variable; JT on the other hand occurs naturally when the plasma and environment impedances are low, such that charge fluctuations are only weakly damped and phase tends to be the appropriate well-defined quantum variable. This basic model has allowed us to predict the lumped-element phase slip energy E S posited by MN as dual to the Josephson energy [59], in terms of measurable physical parameters Λ, ǫ in , and ǫ out , and one adjustable parameter, the QPS length scale l φ ∼ ξ. Although the latter quantity is an artifact of the discretized form of our model at short length scales, and thus phenomenological in nature, we have been able to use a single, fixed value of l φ =1.8ξ for all of the comparisons with experiment in this work, with favorable results. In at least some cases our model may suggest qualitatively different conclusions, relative to previous theories, with respect to material parameters favorable for QPS: whereas current experimental efforts are strongly focused on materials relatively close to a metalinsulator transition with extremely high resistances in the normal state (to maximize R ξ ), our model would rule out or de-emphasize those which have a very large bound permittivity ǫ in due to polarizable, localized electronic states which likely appear near such insulating transitions.
Building further on the idea of flux-charge duality, we have constructed a distributed model of quasi-1D wires, dual to the long JJ, which generates 2e-charged soliton solutions (dual to Josephson vortices) in an infinite wire whose dimensionless admittance K ≪ 1, and Φ 0 -"charged" instanton solutions (dual to Bloch oscillations for short wires) when K ≫ 1, what we have called "type II phase slips". A dissipative phase transition at K ∼ 1 separates these two regimes, which in the short-wire limit is the exact dual of the well-known phase transition for lumped JJs [78,140]. A crucial new element of this distributed model in the context of QPS is the new length scale λ E , which is dual to the Josephson penetration depth in long JJs. This so-called electric penetration depth determines the size of type II phase slips and their corresponding interaction with each other, and with the circuit environment of a finite wire. Furthermore, the temperature dependence of this length scale provides a mechanism for a richer variety of phenomena in R vs. T measurements than suggested by previous theories, and which can explain a variety of the qualitatively different observations made across multiple materials systems by different research groups. In particular, our model provides an explanation for the observation that qualitative deviations from LAMH temperature scaling of the resistance near T C , expected in previous theories to get larger with stronger QPS, in fact appear to get smaller such that the narrowest wires in some cases exhibit the best agreement with simple, thermal LAMH theory with no corrections for quantum fluctuations. Our model also agrees quantitatively with the measurements of so-called "quantum temperatures" in these narrow wires, previously attributed directly to QPS [43,52]. Finally, the involvement of the electric permittivity in our model also provides a very simple and natural mechanism for thermal attempt frequencies of phase-slip processes, in terms of the physics of noise in damped oscillator systems. By contrast, previous theories for such attempt frequencies relied on time-dependent GL theory.
We have compared our model to the results of a new class of experiments in which the quantum phase-slip energy or Coulomb blockade voltage was directly measured at mK temperatures, in InO x [71], NbN [72], NbSi [80], and Ti [75] nanowires, and are able to approximately reproduce all four observations with reasonable values for material parameters, and only a single value of the phenomenological parameter (l φ ). By contrast, the GZ theory currently used for most comparisons with experiment evidently requires quite different values of its input parameter A for each material to reproduce the observations. One important reason for this difference is the existence of the additional length scale λ E in our model which, as in the R vs. T measurements, results in qualitatively different behavior when l > λ E . In particular, our model predicts that in this regime the measured blockade voltage should no longer increase with the wire length, as it becomes simply the voltage at which a 2e-charged soliton (of size ∼ λ E ) can enter the wire.
A final topic of some importance in concluding our work is the relevance of the present model to the prospects for realizing practical QPS devices which are dual to well-known JJ-based circuits, some of which are described in Appendix F, and two of which have already been demonstrated: the phase-slip qubit [71] (dual to the Cooper-pair box), and the phase-slip transistor [73] (dual to the DC SQUID). Of particular interest is the prospect of a quantum standard of current dual to the Josephson voltage standard, which would make use of the dual to Shapiro steps [59,63,65,67]. A device of this kind would have enormous significance for electrical metrology [141], and has been pursued in various forms for many years, even before the existence of QPS was contemplated [35] and later suggested for this purpose by MN [59]. Another interesting possibility yet to be discussed is the dual of rapid single-flux quantum digital circuits. This would in principle be a voltage-state logic in which Cooper pairs are shuttled between islands, with no static power dissipation, and possibly a high degree of compatibility with charge-based memory elements.
We can make several qualitative statements about these prospects based on our model. First, we can specify the maximum usable length of a PSJ before non-lumped behavior sets in: the electric penetration length λ E . Since all of the circuits just mentioned are based on lumped-element behavior, this will constrain how large E S can be. Another interesting observable implication is the dependence of the QPS energy on the permittivity of the dielectric immediately outside the wire. This might suggest that in some cases a low-permittivity substrate such as glass (or even vacuum if the wire can be suspended) would be preferable to silicon. Finally, one can show that the quantity E S /E L which determines the extent to which quasicharge can be treated as a classical quantity (dual to E J /E C for a JJ) is simply Z L /R Q ; that is, all QPS parameters drop out, and only the linear impedance remains. A distributed quasi-1D device with a very large ratio of Z L /R Q has come to be known in the recent literature as a "superinductor" [142,143], and is of current interest for a number of quantum superconducting circuit applications.
Appendix B.
Consider the 1+1D electric analog of a magnetic field applied perpendicular to a strongly type II superconducting thin film: a quasi-1D wire (without any external circuit connections) which is subjected to a uniform external electric field along its length. In the familiar 2D magnetic case, one has the usual lower critical field H c1 below which flux is excluded via the Meissner effect, and above which magnetic vortices enter the sample; the thermodynamics of this transition is governed by the Gibbs free energy (eq. B.1), where F is the Helmholtz free energy, H E is the external field, and B is the actual magnetic flux density. The second term is associated with work done by the field source when flux is excluded from the sample (the overall free energy is lowered when the flux is allowed to penetrate). The condensation energy of the superconductor (contained in F) is balanced against this, such that when more free energy is gained by having a uniform superconducting state than the amount of work required from the source were the flux to be expelled, a Meissner state results in which field is excluded from the sample except within a distance from the film edges equal to the so-called "Pearl length" λ⊥ ≈ λ²/2t, where t ≪ λ is the film thickness. It turns out that the additional contribution to the Euclidean action in 1+1D associated with an electric flux source can be written in a completely analogous way, where S w describes the wire and the second term describes work done by the source. In a similar manner to eq. B.1, e is the external electric field, and q is the resulting electric displacement which contains the system's response to that field. One can get an intuitive feel for the additional work described by the second term in this case by imagining that the external field is produced as shown schematically in fig. 2(d) by a moving source of magnetic flux. In this situation, mechanical work must be done to keep the magnet moving at fixed velocity v φ if the wire expels the motional electric field. These considerations imply that external fields below a critical value will be expelled from the wire, except within a spatial distance λ E of its ends. Above that critical field, "lattices" of type II phase slips will occur analogous to magnetic Abrikosov lattices [94], which correspond to a spatially and temporally periodic electric field in the 1+1D case.
This analogy also applies to the physics of vortex edge barriers, and in particular to vortex penetration into long, narrow strips [122], which is the 2D case analogous to a finite wire in 1+1D (where the width of the 2D strip is analogous to the length of the wire in our 1+1D case) that we discuss in section 6.
Appendix C. Parameters for figure 10
For all wires we take the single value l φ = 1.8ξ (which qualitatively produces the best global agreement across all cases considered in this paper), while the rest of the input parameters for each case are shown in table C1. The values for ξ(0) are taken from the experimental references, and λ(0) are calculated using the BCS relation [eq. 56] with a = 1, and ρ n taken from the measured total resistance R n and wire dimensions A cs , l. The temperature dependence of these quantities was taken from the supplement of ref. [43]. The critical temperature T C shown in the table was adjusted to optimize agreement with experiment, and for the In and Al wires, we also adjusted the parameters R env and C sh associated with the electromagnetic environment (for the Ti and MoGe wires these do not enter into our prediction since these cases do not reach the lumped-element limit λ E ≫ l). We took ǫ in = 5 for all four cases, which is reasonable for these relatively low-resistivity films. The permittivities ǫ out describe an effective average experienced by fluctuation electric fields near the wire; for the first three cases we use ǫ out ≈ (ǫ s + 1)/2 (where ǫ s is the substrate permittivity), which is the usual result for a microstrip transmission line with a distant ground plane. We took ǫ s = 10 for the Al and Ti wires which were on Si, and ǫ s = 3 for the In wire which was on glass. The MoGe wire was deposited on an insulating carbon nanotube suspended in vacuum above its substrate by a distance ≫ l φ . To optimize the agreement with experiment we allowed ǫ out = 1.5 (which could plausibly be the case due to the effective permittivity of the nanotube). The values for C ⊥ were obtained using Sonnet, a microwave simulation tool, in the first three cases. For the MoGe case, we adjusted C ⊥ upwards from the 15 fF/m predicted by Sonnet (for a bare, suspended wire) to optimize the agreement; this is again a plausible effect of the nanotube.
Appendix D. Parameter values for figure 11
Table D1 shows the parameters used to derive the results shown in fig. 11 for MoGe wires. In all cases we use the same values l φ = 1.8ξ with ξ = 5 nm and C sh = 5 fF [43]. The results are insensitive to C sh since the system is overdamped (R env C sh < √(L k C sh )). As before, we infer L k = Λl/A cs using eq. 56 with a = 1, ∆ = 1.78k B T C to obtain E L ≡ Φ 0 ²/2L k . Values for T C , the wire dimensions, and the switching currents I sw for wires A-F came from the experimental references [43,52], and the I sw values for wires S1-S5 from ref. [144]. The phase-slip energy E S is obtained using eq. 26. For the wires of ref. [52], whose A cs were not published, we infer it from R n and the fixed resistivity ρ n ≈ 180 µΩ·cm [144]. For all wires we use ǫ in = 5ǫ 0 , and ǫ out = 1.5ǫ 0 , as in table C1 and fig. 10, chosen to optimize agreement with experiment across figs. 10 and 11: significantly smaller ǫ in , ǫ out would degrade the agreement with experiment for wires S1-S5 in fig. 11(d), while larger ǫ in , ǫ out would degrade the agreement of fig. 10(d).
Table D1. MoGe wire parameters used in figs. 11(c)-(d), for wires S1-5 of ref. [43] and A-F of ref. [52].
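The parameter inference just described can be illustrated numerically. The sketch below is not taken from the paper: eq. 56 is not reproduced in this excerpt, so the standard dirty-limit BCS expression L k = ħρ n l/(πΔA cs ) (with a = 1) is assumed in its place, and the wire resistivity, dimensions and T C are hypothetical placeholder values rather than entries from tables C1 or D1.

```python
# Illustrative estimate of a wire's kinetic inductance L_k and inductive energy
# E_L = Phi_0^2 / (2 L_k), following the inference procedure described above.
# Assumption: standard dirty-limit BCS form L_k = hbar*rho_n*l / (pi*Delta*A_cs)
# with a = 1 and Delta = 1.78*k_B*T_C (eq. 56 itself is not reproduced here).
import math

hbar = 1.054571817e-34   # J*s
k_B  = 1.380649e-23      # J/K
h    = 6.62607015e-34    # J*s
Phi0 = 2.067833848e-15   # Wb, flux quantum h/2e

# Hypothetical MoGe-like wire (NOT values from the paper's tables)
rho_n = 180e-8           # normal-state resistivity, Ohm*m (180 uOhm*cm)
T_C   = 5.0              # K
l     = 200e-9           # wire length, m
A_cs  = 50e-18           # cross-sectional area, m^2

Delta = 1.78 * k_B * T_C                                # BCS gap estimate, J
L_k   = hbar * rho_n * l / (math.pi * Delta * A_cs)     # kinetic inductance, H
E_L   = Phi0**2 / (2.0 * L_k)                           # inductive energy, J

print(f"L_k   = {L_k*1e9:.2f} nH")
print(f"E_L/h = {E_L/h/1e9:.0f} GHz")
```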
Appendix E. Parameter values for table 1
To produce the values for the four different materials in table 1, in all cases we take l φ = 1.8ξ and ǫ out = 5.5ǫ 0 (all of these wires were on silicon substrates). All other input parameters are shown in table E1. Wire dimensions, sheet resistance R □ , as well as ∆ and ξ came directly from the experimental references (in some cases using ∆ = 1.78k B T C ). The distributed shunt capacitance C ⊥ was obtained using the Sonnet EM simulation software and the specified experimental geometries. Note that the value for NbN is somewhat larger relative to the other three cases due to the relative proximity of a ground plane in that particular experiment. Values for λ were obtained from the BCS relation of eq. 56 with the a values shown in the table.
Table E1 footnote (a): In ref. [72], the wires for which nonzero E S was observed had an average width ranging from 27-32 nm. Also, an appreciable amount of spatial variation of the width was observed along each wire, such that it is possible the measured values are dominated by a "constriction" much shorter than the total length.
Appendix F. Flux-charge duality and lumped-element superconducting circuits Figure F1 shows specific examples of flux-charge duality applied to more complicated JJ-based circuits. Panels (a) and (b) show the duality between a charge qubit and the phase-slip qubit of ref. [93]. PSJ-based superconducting qubits may be of particular interest since flux and charge noise will have their roles interchanged relative to JJ-based qubits. Since the excited-state lifetimes of present-day JJ-based qubits are thought to be limited by high-frequency charge noise, exchanging this for high-frequency flux noise (which is thought to be much weaker [145]) should result in much longer lifetimes. Panels (a) and (b) also illustrate how polarization charge on the nanowire (produced by a nearby gate electrode) is dual to magnetic flux through the junction barrier of the JJ. Just as a Fraunhofer interference pattern will be observed in the magnitude of E J vs. flux through the junction (due to the Aharonov-Bohm effect) [94], the same pattern will be observed in the magnitude of E S vs. charge on the nanowire (due to the Aharonov-Casher effect [68]). This may be important for the phase-slip qubit since it implies charge noise on the nanowire would show up as V C noise in the qubit (dual to I C noise commonly observed in JJ-based qubits [146]). Panels (c)-(f) show two tunable superconducting qubits and their dual circuits. Just as a DC SQUID can be used to implement a flux-tunable composite JJ, the series combination of two PSJs as shown can be used to implement a charge-tunable composite PSJ. Note that (d) is essentially a tunable version of the phase-slip oscillator of Ref. [69], and (f) is a tunable version of the phase-slip qubit [93].
In addition to qubits, where well-defined, long-lived energy eigenstates are required in which quantum zero-point fluctuations must be kept undisturbed by the environment, the circuits shown in (g)-(l) are intended to function in a regime where either quasiflux (for JJs) or quasicharge (for PSJs) is a classical variable (i.e., where quantum fluctuations are small). A well-defined quasiflux requires a low environmental impedance at the Josephson plasma frequency, which is readily obtained using resistively shunted Josephson junctions. A well-defined quasicharge requires a high environmental impedance (≫ R Q ) at the phase-slip plasma frequency, which is much more difficult to realize. In refs. [73,75,80], highly-resistive nanowires were used to bias the device; in ref. [110], frustrated DC SQUID arrays in an insulating state were used. Panel (h) shows the "quantum phase slip transistor" QPST, first suggested in ref. [70], and implemented in refs. [73,80]. This device is an electrometer, dual to the DC SQUID amplifier shown in (g). The QPST is similar to a single Cooper-pair transistor (SCPT) [147]; however, it could have a much higher sensitivity than an SCPT, which is limited by the charging energy of the JJs (by how small one can make the junction capacitance). The QPST is instead limited by the kinetic capacitance C k , whose ultimate limit is the series capacitance of the wires, which can be much smaller. Panel (i) is the Josephson voltage standard, and (j) the quantum current standard proposed in ref. [59]. Under microwave irradiation, dual features to Shapiro steps would allow locking of the incident frequency f to the applied current I according to I = 2eNf, where N is the number of parallel PSJs. Such a device would have enormous impact in electrical metrology, allowing for the first time interconnected fundamental standards of voltage, resistance, and current [141]. Finally, panel (k) is a Josephson transmission line, a basic building block of rapid single flux quantum (RSFQ) digital logic; (l) shows the dual to this, in which shunt JJs are replaced by series PSJs, flux stored in loops is replaced by charge stored on islands, and current bias is replaced by voltage bias. Such circuits could be of practical interest, both because unlike RSFQ they have no static power dissipation, and also because voltage-state logic could be significantly easier to integrate with memory elements than flux-state logic. | 2013-07-02T21:31:00.000Z | 2012-01-09T00:00:00.000 | {
"year": 2012,
"sha1": "d09bcd54c0358bc10233a6f2aa7ba98771ed2dca",
"oa_license": "CCBY",
"oa_url": "http://iopscience.iop.org/article/10.1088/1367-2630/15/10/105017/pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "d09bcd54c0358bc10233a6f2aa7ba98771ed2dca",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
258060324 | pes2o/s2orc | v3-fos-license | Impact of the diaphyseal femoral deformity on the lower limb alignment in osteoarthritic varus knees
Aims The impact of a diaphyseal femoral deformity on knee alignment varies according to its severity and localization. The aims of this study were to determine a method of assessing the impact of diaphyseal femoral deformities on knee alignment for the varus knee, and to evaluate the reliability and the reproducibility of this method in a large cohort of osteoarthritic patients. Methods All patients who underwent a knee arthroplasty from 2019 to 2021 were included. Exclusion criteria were genu valgus, flexion contracture (> 5°), previous femoral osteotomy or fracture, total hip arthroplasty, and femoral rotational disorder. A total of 205 patients met the inclusion criteria. The mean age was 62.2 years (SD 8.4). The mean BMI was 33.1 kg/m2 (SD 5.5). The radiological measurements were performed twice by two independent reviewers, and included hip knee ankle (HKA) angle, mechanical medial distal femoral angle (mMDFA), anatomical medial distal femoral angle (aMDFA), femoral neck shaft angle (NSA), femoral bowing angle (FBow), the distance between the knee centre and the top of the FBow (DK), and the angle representing the FBow impact on the knee (C’KS angle). Results The FBow impact on the mMDFA can be measured by the C’KS angle. The C’KS angle took the localization (length DK) and the importance (FBow angle) of the FBow into consideration. The mean FBow angle was 4.4° (SD 2.4; 0 to 12.5). The mean C’KS angle was 1.8° (SD 1.1; 0 to 5.8). Overall, 84 knees (41%) had a severe FBow (> 5°). The radiological measurements showed very good to excellent intraobserver and interobserver agreements. The C’KS increased significantly when the length DK decreased and the FBow angle increased (p < 0.001). Conclusion The impact of the diaphyseal femoral deformity on the mechanical femoral axis is measured by the C’KS angle, a reliable and reproducible measurement. Cite this article: Bone Jt Open 2023;4(4):262–272.
Introduction
Personalized medicine has been brought to the fore over the last few years. Knee surgery also tends to be adjusted to each patient, such as with the personalized alignment in knee arthroplasty, 1,2 or the double-level osteotomy in conservative knee surgery. 3 Several recent classifications have been published to describe limb alignment, and femoral and tibial axis in non-osteoarthritic and osteoarthritic populations. 4 A new classification for the lower limb alignment based on phenotypes was introduced 5 to identify the localization of the deformity (tibial, femoral, or both). 6,7 Thanks to these analyses, several authors have reported that the varus deformity is mainly due to the femoral axis in the osteoarthritic knee. 8,9 Indeed, the tibial coronal alignment was similar between osteoarthritic and non-osteoarthritic populations. 8 By contrast, there was a broader and more varus distribution of the femoral coronal alignment in the osteoarthritic population compared to the non-osteoarthritic population. 8,9 Thienpont and Parvizi 10 have also clarified the type of deformity with a new classification for the varus knee, describing intra-articular, metaphyseal, or diaphyseal deformities.
These classifications are interesting, but their clinical application might be limited. Indeed, to our knowledge, no study has precisely assessed the impact of each deformity (intra-articular, metaphyseal, or diaphyseal) on lower limb alignment and therefore what should or not be corrected, and where, during knee surgery. The impact of each extra-articular (metaphyseal or diaphyseal) deformity on the final alignment is not yet precisely understood. In 1991, Wolff et al 11 described that the impact of a deformity on the knee alignment varies according to the level and severity of the deformity. For example, a deformity of 10° close to the knee will have a more significant impact than the same amount of deformity far from the knee. 12 This seems even more crucial for patients with severe femoral bowing in the coronal plane, which is the most common constitutional diaphyseal femoral deformity, mainly in Middle Eastern or Asian populations. [13][14][15] Therefore, this radiological study aimed to 1) determine a method assessing the impact of the diaphyseal femoral deformity on lower limb alignment for the varus knee (on full-leg radiograph), 2) to evaluate the reliability and the reproducibility of this method in a large cohort of patients, and 3) to determine the main anatomical factors influencing the lower limb alignment and the mechanical femoral axis. We hypothesized that the impact of the diaphyseal femoral deformity (according to its localization and severity) on the knee alignment could be determined with a simple, reliable, and reproducible method for each patient on standard full-length radiographs.
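The dependence on the level of the deformity described by Wolff et al can be illustrated with a simplified single-bend geometry: a bend of fixed angle placed closer to the knee deflects the hip-knee (mechanical) line more than the same bend placed near the hip. The sketch below is only a toy model with hypothetical dimensions; it is not the C'KS construction developed in this study, which is based on the Al-Kashi theorem and detailed in the study's Supplementary Material.

```python
# Toy model: effect of a single diaphyseal bend on the knee-level alignment.
# The femur is idealized as two straight segments meeting at the bend; the
# reported value is the angle between the distal shaft axis and the hip-knee
# (mechanical) line. All dimensions are hypothetical.
import math

def knee_effect_deg(theta_deg: float, bend_to_knee_mm: float, femur_mm: float) -> float:
    """Angle (deg) between the distal shaft axis and the hip-knee line for a
    single bend of angle theta located bend_to_knee_mm above the knee centre."""
    theta = math.radians(theta_deg)
    proximal = femur_mm - bend_to_knee_mm              # segment above the bend
    hip_x = proximal * math.sin(theta)                 # lateral offset of hip centre
    hip_y = bend_to_knee_mm + proximal * math.cos(theta)
    return math.degrees(math.atan2(hip_x, hip_y))      # angle measured from the distal axis

L = 430.0  # hypothetical femoral length (mm)
for d in (50.0, 150.0, 300.0):                         # bend 5, 15, 30 cm above the knee
    print(f"10 deg bend at {d:.0f} mm from the knee -> {knee_effect_deg(10.0, d, L):.1f} deg at the knee")
```

With these numbers the same 10° bend contributes roughly 8.8°, 6.5° and 3.0° at the knee as it moves away from the joint, which is the qualitative behaviour the C'KS angle is designed to capture.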
Methods
Patients. We retrospectively included all patients who underwent a primary knee arthroplasty (total knee arthroplasty (TKA) or unicompartmental arthroplasty (UKA)) at a single institution from January 2019 to January 2021. The choice of this population was dictated by the need to have an osteoarthritic population (needing knee surgery) and to have all demographic, clinical, and radiological data. Exclusion criteria were preoperative genu valgus defined as hip knee ankle (HKA) angle superior to 180°; preoperative flexion contracture superior to 5° (which can introduce a bias in the measurement of the femoral bowing); 16 previous femoral osteotomy or fracture (with the risk of rotation disorder); previous total hip arthroplasty in the operated side; or a known clinical or radiological femoral rotational disorder. Among the 316 candidates for knee arthroplasties performed between January 2019 and January 2021 (277 TKAs, 39 UKAs), 205 patients met the inclusion criteria (Figure 1). The mean age was 62.2 years (standard deviation (SD) 8.4), the mean BMI was 33.1 kg/m 2 (SD 5.5), and 36.1% were male (Table I).
Table I. Preoperative demographic and clinical data in the whole cohort, in the group with a mild femoral bowing (< 5°) and in the group with a severe femoral bowing (> 5°).
Definitions of the radiological measurements: FBow angle, angle between the line connecting the points bisecting the femur at 0 and 5 cm below the lowest portion of the lesser trochanter and the line connecting the points bisecting the femur at 5 cm and 10 cm above the lowest portion of the lateral femoral condyle; 13 a FBow angle superior to 5° was considered a severe deformity. 22 C'KS angle, angle between the distal femoral shaft axis and the line joining C' and the knee centre K. HKC' angle, angle between the mechanical axis line of the femur (between the hip centre and the knee centre) and the line joining C' and the knee centre K.
Radiological assessment. The radiographs were performed preoperatively as a part of the standard radiograph protocol prior to knee surgery in the same centre according to the same protocol and included: anteroposterior view, lateral view, and full long-leg radiograph. Briefly, full weightbearing long-leg standing radiographs were performed barefoot with feet placed together and the patella oriented forward to avoid rotational variation. 12 The following radiological measurements were performed only on the preoperative radiographs (Table II). The diaphyseal deformity was defined by the femoral deformity localized between 5 cm below the lesser trochanter proximally and 10 cm above the transepicondylar axis distally. All measurements were performed using PaxeraUltima v. 5.0.4.3 (PaxeraHealth, USA). Measurement accuracy was to one decimal place. A calibrated scale in millimetres allowed accurate and reliable measurements. The radiological measurements were performed by two independent reviewers (CB, JD) for all measurements to assess the reliability of each measurement. Discrepancies were settled by discussion between the reviewers or by a new measurement with a third reviewer (SP). To determine intraobserver variability, 40 patients were measured twice by the first observer (CB), separated by a six-week interval.
Table II. Femoral radiological measurements performed on the preoperative full long-leg radiograph for each patient. aMDFA, anatomical medial distal femoral angle; C'KS angle (angle between the distal femoral shaft axis and the line joining C' and the knee centre K); FBow, femoral bowing angle; HKC' angle (angle between the mechanical axis line of the femur and the line joining C' and the knee centre K); HKS, hip knee shaft angle; mMDFA, mechanical medial distal femoral angle; NSA, femoral neck shaft angle.
Impact of the femoral bowing. The FBow impact on the mechanical femoral axis (mMDFA) has been established with analytic geometry and the Al-Kashi theorem. The entire demonstration is described in the Supplementary Material. The mMDFA ('calculated mMDFA') and the FBow impact ('calculated C'KS') were calculated with this method. The FBow impact on the knee alignment can be measured by the C'KS angle (Figure 2c). The 'measured' C'KS angle took the localization (length DK) and the severity (FBow angle) of the FBow into consideration.
Statistical analysis. Statistical analysis was performed using the XL STAT software v. 2021.2.1 (Addinsoft, France). A p-value < 0.05 was considered statistically significant for all analyses. Patient demographics were described using means, SDs, and ranges for continuous variables, and counts (percent) for categorical variables. The cohort was separated into two groups with a mild femoral bowing (< 5°) or with a severe femoral bowing (> 5°), as described in the literature. 22 The categorical outcomes of the groups FBow angle < 5° and FBow angle > 5° were compared using the chi-squared test. The normally distributed continuous variables of both groups were compared using the independent-samples t-test. The calculated and measured values of mMDFA and C'KS were compared by the Bland-Altman method and an independent-samples t-test.
The inter- and intraobserver reliabilities of the radiological measurements were evaluated by an intraclass correlation coefficient. Strength of agreement for the kappa coefficient was interpreted as follows: < 0.20 = unacceptable, 0.20 to 0.39 = questionable, 0.40 to 0.59 = good, 0.60 to 0.79 = very good, and 0.80 to 1 = excellent. 23 Correlations between HKA, then mMDFA, and anatomical features (aMDFA, HKC', C'KS, JLCA, MPTA, FBow, DK, HC', NSA) were analyzed using the Pearson correlation coefficient, as was the correlation between C'KS and the other parameters of the femoral anatomy (NSA, HKC', FBow, DK, HC'). Simple and multiple linear regression analyses were conducted to evaluate the factors that can influence HKA and mMDFA. HKA, then mMDFA, were used as dependent variables and MPTA, JLCA, aMDFA, C'KS, HKC' were used as independent variables.
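As an illustration of the correlation and regression steps described above, the short script below computes a Pearson correlation and an ordinary least-squares regression of mMDFA on aMDFA, C'KS and HKC'. It is a minimal sketch, not the authors' XL STAT workflow; the column names and the toy values are hypothetical.

```python
# Minimal sketch of the correlation/regression analysis described above.
# Not the authors' XL STAT workflow; column names and toy values are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical per-knee measurements (degrees)
df = pd.DataFrame({
    "mMDFA": [85.1, 86.3, 84.0, 87.2, 85.9, 83.5],
    "aMDFA": [79.0, 80.5, 78.2, 81.0, 79.8, 77.9],
    "CKS":   [1.2, 0.8, 2.5, 0.5, 1.0, 3.1],   # C'KS angle
    "HKC":   [5.0, 5.2, 4.6, 5.5, 5.1, 4.4],   # HKC' angle
})

# Pearson correlation between mMDFA and the C'KS angle
r, p = stats.pearsonr(df["mMDFA"], df["CKS"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Multiple linear regression: mMDFA as dependent variable
model = smf.ols("mMDFA ~ aMDFA + CKS + HKC", data=df).fit()
print(model.params)  # standardized coefficients would require z-scored inputs
```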
Results
Impact of the femoral bowing on the varus deformity. The calculated and the measured values of the C'KS angle were comparable, with a strong correlation (95% confidence interval (CI) (-0.25 to 0.23)) ( Table III). According to the accuracy of the measured values (0.1°), the C'KS angle was a reliable measurement of the FBow impact on the knee (Figures 3 and 4).
The mean FBow angle was 4.4° (SD 2.4; 0° to 12.5°) in the whole cohort (Table I), and the mMDFA was significantly lower in the severe FBow group. Other radiological measurements were similar in both groups (Table IV).
Fig. 3. Comparison of the measured and calculated values of the C'KS angle (angle between the distal femoral shaft axis and the line joining C' and the knee centre K).
Fig. 4. Bland-Altman graph for the measured C'KS angle (angle between the distal femoral shaft axis and the line joining C' and the knee centre K) and the calculated C'KS.
Reliability and reproducibility of the measurements. The radiological measurements showed very good to excellent intraobserver and interobserver agreements (Table V). The C'KS angle was a reproducible measurement of the FBow impact on the mMDFA. Anatomical factors impacting the knee alignment. The mechanical femoral axis (mMDFA) can be described as the result of three femoral deformities. The metaphyseal deformity corresponds to the aMDFA. The diaphyseal deformity corresponds to the C'KS angle. The proximal deformity (femoral head and neck) corresponds to the HKC' angle (Figure 5). The intra-articular deformity should be considered separately and corresponds to the femoral and tibial wear. It was quantified by the JLCA.
The C'KS angle (the FBow impact on the knee) increased significantly when the length DK decreased and the FBow angle increased (p < 0.001, Pearson correlation coefficient) (Figure 6). The mMDFA was correlated with the C'KS angle and the aMDFA (p < 0.001, Pearson correlation coefficient) (Table VI). Table VII and Figure 7 summarize the results of the linear regression analyses. The main contributor to mMDFA was aMDFA, then the C'KS angle. The main contributor to HKA angle was JLCA. Other contributors to HKA angle were MPTA, C'KS angle, then aMDFA.
Discussion
The main finding of this study was the description of a reliable measurement (C'KS angle) of the impact of the diaphyseal femoral deformity on knee alignment, related to the localization and the severity of the deformity. This measurement could be used as an additional tool to understand the femoral bone deformity when planning for knee surgery (knee arthroplasty or osteotomy).
Several limitations should be outlined: first, the femoral bowing was measured only on the radiographs, without CT scan measurement. However, we have excluded patients who were at risk of measurement errors (flexion contracture, rotational disorders), and all the full-length radiographs were performed based on the same protocol. Furthermore, in current practice, a CT scan is not justified for primary knee arthroplasty or osteotomy. Second, this study did not report clinical outcomes after knee surgeries according to the anatomy restoration; rather, the assessment of FBow was radiological in order to describe a new measurement and assess its reliability. A clinical study would be necessary in the future to assess this radiological tool in clinical practice for knee arthroplasty or osteotomy. Third, this study has been performed in a Middle Eastern population of patients; the prevalence and severity of FBow cannot be extrapolated to the worldwide population. Nevertheless, the measurement and the understanding of the FBow impact on the knee alignment can be used in the worldwide population. This new measurement could also be assessed in post-traumatic femoral diaphyseal deformity. The FBow impact was frequently moderate (between 0° and 2.5° for 59% (121/205) of the patients in this cohort). In this Middle East population with constitutional deformity (no post-traumatic deformity), the mean FBow angle is only 4°, and most cases of FBow were not close to the joint. Nevertheless, the diaphyseal deformity can sometimes have a strong impact on knee alignment. The C'KS reached 6° for the most severe FBow in this study. In this case, the impact of the diaphyseal deformity on the knee must be known in order to adjust the surgical planning. Severe FBow is uncommon, although it is predominant in some ethnic populations, such as Asian and Middle Eastern populations (mean FBow angle varying between 1.8° and 5.3°), [13][14][15]24,25 where FBow can be present in up to 88% of patients in an osteoarthritic population. Measuring the FBow impact remains crucial in a worldwide population to manage constitutional diaphyseal deformity and post-traumatic deformity. Indeed, a diaphyseal femoral deformity due to a malunion of a diaphyseal fracture must be quantified to discuss if it is acceptable or needs a realignment osteotomy. 26,27 This is why a straightforward measurement of the FBow impact on knee alignment can be helpful in the surgeon's practice. While this measurement is reliable and usable in daily practice, it cannot be performed in specific cases. The flexion contracture is a major factor that affects the FBow angle: 16 the larger the flexion contracture angle, the larger the FBow angle. A femoral rotational disorder can also modify the FBow angle. 16 In this case, the radiological FBow angle combines the true FBow deformity and the sagittal femoral deformity. With an internal rotation disorder, the radiological FBow is underestimated, and with an external rotational disorder, the radiological FBow is overestimated. Therefore, measurement of the femoral bowing is more accurate with a CT scan, 16,28 which avoids these measurement errors of a standard full long-leg radiograph. Nevertheless, these two sources of error can be easily identified clinically, and an additional CT scan can be performed if needed. The most common imaging exam performed for knee arthroplasty remains radiographs.
A radiological assessment of the diaphyseal deformity is thus essential, while remaining wary of the risk of error. The full long-leg radiograph must also be accurate with a knee strictly in a frontal position. Several criteria can help to confirm the radiograph's quality, such as a centred patella, a symmetrical femoral notch, and the fibular head position. A strict radiological protocol is necessary; a monopodal full long-leg radiograph could decrease the risk of errors.
This study has demonstrated that one of the main anatomical factors influencing the mechanical femoral axis was the C'KS angle. Indeed, several studies have reported some difficulties and risk of errors during a TKA procedure when a significant FBow was present. 22,25,29 The risk of misalignment in these cases was dependent on the surgical technique of alignment. There were more femoral components in varus (between 2° and 4° of mean varus) in the patients with a FBow angle superior to 5° when a fixed angle between the distal femoral cut and the intramedullary guide (e.g. 7°) is used. 22,30 Navigation can reduce the risk of misalignment in varus in the femoral bowing population, mainly the risk of outliers. 22 However, navigation can also completely correct the femoral deformity, including the diaphyseal deformity, which should not be corrected by the implant positioning, 13 as the residual lateral laxity in extension would be substantial in this case. The use of patient-specific instrumentation (PSI) did not improve the lower limb alignment and the implant positioning in the FBow population either. 30 Computer-assisted systems or PSI, without integrating the localization of the deformity, would not be helpful in managing these diaphyseal deformities. For kinematic alignment techniques, when a significant FBow is present, there is a risk of keeping too much residual varus in the lower limb at the end of the procedure. Therefore, it seems essential to understand the impact of femoral bowing on the knee in order to adjust surgical planning accordingly.
Several knee surgeries could benefit from this measurement of the FBow impact, particularly TKA with severe femoral bowing, or post-traumatic malunion of a diaphyseal fracture with severe varus deformity. Knowing the proportion of the varus alignment due to the diaphyseal deformity could help to determine if an osteotomy is needed, if the deformity can be compensated in the joint, or if the deformity should not be corrected. A clinical study is the next step to assess the consequences of surgical planning (for TKA or osteotomy) using this new measurement (C'KS angle) of the FBow impact on the knee alignment.
In conclusion, the results of this study showed that the impact of the diaphyseal femoral deformity on knee alignment can be measured by the C'KS angle, which considers the localization and importance of the FBow, with good reliability and reproducibility. This new radiological tool improves the understanding of the femoral bone deformity and its impact on the knee.
Fig. 7. Standardized coefficients of the radiological measurements that influence mechanical medial distal femoral angle (mMDFA). aMDFA, anatomical medial distal femoral angle; C'KS angle (angle between the distal femoral shaft axis and the line joining C' and the knee centre K); HKC' angle (angle between the mechanical axis line of the femur and the line joining C' and the knee centre K).
Take home message
-The impact of the diaphyseal femoral deformity on the knee alignment can be measured by a radiological angle on a full long-leg radiograph with good reliability and reproducibility.
-The impact of the diaphyseal femoral deformity was frequently moderate; nevertheless, it reached 6° for the most severe femoral bowing in this study.
-This measurement could be used as an additional tool to understand the femoral bone deformity when planning for knee surgery (knee arthroplasty or osteotomy).
Conflict of interest: S. Lustig reports consulting fees from Stryker, Smith & Nephew, Heraeus, and Depuy Synthes, and institutional research support from Groupe Lepine and Amplitude, all of which are unrelated to this study. S. Lustig is also on the editorial board for The Journal of Bone and Joint Surgery (Am). M. Ollivier reports consulting fees from Arthrex, Stryker, and Newclip technics, unrelated to this study. S. Parratte reports royalties from Zimmer Biomet and Newclip, and consulting fees from Zimmer Biomet, unrelated to this study. S. Parratte is also the treasurer for the European Knee Society.
Data sharing:
The datasets generated and analyzed in the current study are not publicly available due to data protection regulations. Access to data is limited to the researchers who have obtained permission for data processing. Further inquiries can be made to the corresponding author.
Ethical review statement:
The study was approved by our hospital's Institutional Review Board (study ID Number: MF3867, approval date: 20th December 2020). All procedures were performed in accordance with the ethical standards of the institutional and/or national research committee, the 1964 Helsinki declaration, and its later amendments, or comparable ethical standards. All patient participants provided informed consent for review of their medical records. | 2023-04-12T05:06:05.279Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "843f31e6ce7599842ad63d54b6390b6db456f0d7",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "843f31e6ce7599842ad63d54b6390b6db456f0d7",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232303232 | pes2o/s2orc | v3-fos-license | Comparing sequential vs day 3 vs day 5 embryo transfers in cases with recurrent implantation failure: randomized controlled trial
Objective: The recent improvement in sequential media has refocused attention on the role of human blastocysts in ART, not only because of the advantages of blastocyst transfer but also because of the possible cancellation of embryo transfer when relying on blastocyst transfer only. Hence, the idea of sequential transfer on day 3 and day 5 was proposed. Objective: To compare the pregnancy outcomes of sequential embryo transfer on day 3 and day 5, versus cleavage transfer on day 3 and blastocyst transfer on day 5, in cases of recurrent implantation failure. Methods: This was a prospective and randomized trial, in which 210 qualified patients with recurrent implantation failures undergoing IVF/ICSI were randomized into three groups, each including 70 patients. Embryo transfer was performed on day 3 in the first group, on day 5 (blastocyst transfer) in the second group, and sequentially on days 3 and 5 in the third group. We assessed pregnancy outcomes in all three groups. Results: Clinical pregnancy and live birth rates were significantly higher in the sequential group than in either the day-3 or day-5 embryo transfer group in cases with recurrent implantation failures. Conclusions: Sequential embryo transfer in cases with recurrent implantation failures and an adequate number of retrieved oocytes is associated with higher implantation and clinical pregnancy rates, and it is advocated for patients having an adequate number of good quality embryos.
INTRODUCTION
Implantation has two essential components, a healthy embryo that has a high potential for implantation, and an endometrium favoring embryo implantation. The interaction between these two components leads to apposition, attachment and invasion by the embryo, which are the corner-stone steps for successful implantations, and normal placentation later (Simon & Laufer, 2012).
In the past, blastocyst transfers were challenging due to difficulties in maintaining the human embryo in culture for more than forty-eight hours; thus, cleavage-stage transfers were used. Advocates of cleavage stage transfers believe that the human womb is the best incubator, and prolonged embryo culture for 5-6 days may affect in-vivo viability; in addition, there is the possibility of transfer cancellation due to failure of embryo progression to the blastocyst stage, which has negative emotional, legal, financial, and psychological impacts on both the couple and the ART center. Moreover, a reduced number of frozen embryos available for future transfer could be the reason why a Cochrane meta-analysis found lower cumulative pregnancy rates with blastocyst transfers, when compared with cleavage stage transfers (Glujovsky et al., 2012).
Recent improvements in culture techniques, including the use of sequential media, have enabled the extension of embryo growth in vitro (Gardner et al., 1998), drawing attention to the advantages of blastocyst transfer in IVF. In addition, post-compaction embryos are more tolerant to a wider range of environments when transferred than pre-compaction embryos, because the latter are exposed to higher concentrations of amino acids (Iritani et al., 1971; Miller & Schultz, 1987) and carbohydrates (Gardner et al., 1996), which is not the exposure they would normally receive. Thus, cleavage stage embryo transfer exposes the embryo to considerable stress, compromising both its implantation and viability potentials. Ovarian hyperstimulation also negatively affects the uterine milieu (Simon et al., 1998); minimizing the period of embryo exposure to such an altered environment is therefore recommended, which is the case in blastocyst transfers. Furthermore, with cleavage stage transfer, maternal transcripts and stored mRNA, originating exclusively from the oocyte, direct the development of the embryo, because the embryonic genome remains latent at that time (Hayrinen et al., 2012). Additional studies have shown that uterine contractions progressively diminish as one moves farther into the luteal phase; thus, early embryo transfer to the uterus may cause its loss because of increased uterine contractions. In addition, recent improvements in embryo culture have allowed the production of higher numbers of human blastocysts, which can subsequently implant at higher rates than cleavage stage embryos (Nadkarni et al., 2015; Bulletti et al., 2000). Blastocyst transfer resembles the natural cycle, as the embryo normally arrives inside the uterine cavity from the fallopian tube at the blastocyst stage. Blastocyst transfers also bear better embryo euploidy status than cleavage stage transfers (Dalal et al., 2015). Blastocyst cultures yield better results in pre-implantation genetic testing for monogenic gene defects (PGT-M), or pre-implantation genetic testing for aneuploidies (PGT-A). Accordingly, many authorities have recommended adopting the policy of "pure" blastocyst transfer, rather than cleavage transfer (Dalal et al., 2015). We know that blastocyst transfers are superior to cleavage stage embryo transfers, vis-à-vis the implantation potential, as the probability of synchronized endometrial receptivity and embryonic development rises, leading to a rise in the implantation rate, which is the determining factor in IVF success (although live birth rate is considered the gold standard). Blastocyst transfers enable better selection of high-quality embryos for implantation, since the activation of the embryonic genome occurs around day 3; therefore, blastocyst transfers ensure that only those embryos which have undergone the genomic shift are selected for transfer. This enables the clinician to naturally select competent embryos that have the potential for normal implantation and development (Braude et al., 1988). Therefore, in vitro culturing of embryos to the blastocyst stage achieves two goals: better selection of higher quality embryos for transfer, and better matching of physiologic endometrial receptivity and the ''implantation window'' (Simon & Laufer, 2012).
Recurrent implantation failure is one of the problems affecting couples undergoing IVF/ICSI, and it has no standard definition; however, Polanski et al. (2014) performed a systematic review on this condition and concluded that the definition of recurrent implantation failure is absent implantation after two consecutive frozen embryo replacements, or IVF/ICSI cycles with a cumulative number of at least four cleavage stage embryos and two blastocysts, with all embryos being of appropriate developmental stage and good quality. To avoid these unwanted sequelae, "sequential" embryo transfer, in which both cleavage stage embryo(s) on day 3 and blastocyst(s) on day 5 are sequentially transferred in the same cycle, has been proposed. Sequential transfers have the theoretical advantages of day-5 and day-3 transfers, and a lower likelihood of transfer cancellation (Goto et al., 2003). However, the efficacy of such a technique (sequential transfer) is still debatable (Phillips et al., 2003; Levron et al., 2002; Blake et al., 2007), and limited data have been published on this subject. Earlier studies showed a rise in pregnancy rates following sequential embryo transfer (Abramovici et al., 1988), while later studies found nonsignificant differences in pregnancy rates between single and double embryo transfers (Al-Hasani et al., 1990; Ashkenazi et al., 2000).
The purpose of this study was to evaluate the outcomes of sequential embryo transfer on days 3 (cleavage stage) and 5 (blastocyst), compared with day-3 and day-5 embryo transfers, in cases of recurrent implantation failure.
Patient selection
This is a prospective randomized trial carried out in the assisted reproductive therapy (ART) centers of the Air Force Specialized Hospital (Cairo, Egypt) and Al-Azhar University Hospital (Cairo, Egypt) between April 2015 and June 2017. The Ethics Committee of the Air Force Specialized Hospital approved the study. The study was registered in the Pan-African Clinical Trial Registry (PACTR201709002592834). A total of 245 women scheduled for IVF/intracytoplasmic sperm injection (ICSI) were approached to be recruited into the study and were given the required information, and 26 women declined to participate. Five women did not meet the inclusion criteria upon the stimulation onset, and four cases had less than five embryos; hence, nine cases were excluded from the study prior to randomization. Cases fulfilling the inclusion criteria were randomized after oocyte retrieval and post-fertilization check (Figures 1 and 2). The randomization was done according to a computer-generated list. The nurse coordinator ran the computer-generated list without any interference from the investigators. Two hundred and ten women were allocated to the conventional transfer (day-3) group, the blastocyst transfer (day-5) group or the sequential transfer (day-3 and day-5) group. Each group included 70 women. Six patients dropped out during follow-up after embryo transfer: 2 in the day-3 group, 3 in the day-5 group and 1 in the sequential group. Ethical approvals were granted for the study from the local Ethics Committee before enrollment, and all the patients signed an informed consent form. The trial was registered in the Pan-African Clinical Trial Registry. The inclusion criteria were: age ≤ 35 years; recurrent (2 or more) implantation failures as defined by Polanski et al. in 2014; hysteroscopically normal endometrial cavity; negative thrombophilia screening (congenital thrombophilia screen, lupus anti-coagulant and anti-cardiolipin IgG & IgM); absence of hydrosalpinx and endometriosis (excluded by laparoscopy); a day-3 follicle stimulating hormone (FSH) level <10 IU/L, E2 <80 pg/ml, anti-Mullerian hormone (AMH) 1-3 ng/ml; adequate ovarian responders; and availability of at least 5 embryos on post-fertilization check (to allow a high chance of obtaining at least 2 good quality embryos available for transfer). Exclusion criteria were patients not fulfilling any of the above criteria, and poor or high responders, identified by previous stimulation history and ovarian reserve tests, using both the Bologna criteria, which define poor ovarian response as at least 2 of the following 3 criteria: 1) maternal age equal to or above 40, or another risk factor for poor ovarian response; 2) abnormal ovarian reserve detected by AMH less than 0.5 or antral follicle count less than 5-7; 3) previous poor ovarian response (3 oocytes or less with a conventional stimulation protocol) (Ferrareti et al., 2011); and the ovarian sensitivity index, which is recovered oocytes × 1000/total dose of FSH (Huber et al., 2013).
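The two screening quantities used for exclusion can be expressed as small helper functions. The sketch below simply encodes the thresholds quoted above (Bologna criteria; OSI = recovered oocytes × 1000/total FSH dose); the example numbers are hypothetical, and the antral follicle count cut-off is taken at the lower end of the quoted 5-7 range.

```python
# Helper functions for the two screening quantities described above.
# Thresholds follow the text; example inputs are hypothetical.
from typing import Optional

def ovarian_sensitivity_index(oocytes_retrieved: int, total_fsh_dose_iu: float) -> float:
    """OSI (Huber et al., 2013): recovered oocytes x 1000 / total FSH dose (IU)."""
    return oocytes_retrieved * 1000.0 / total_fsh_dose_iu

def bologna_poor_responder(age: float, other_risk_factor: bool,
                           amh_ng_ml: float, afc: int,
                           previous_oocytes: Optional[int]) -> bool:
    """True if at least 2 of the 3 Bologna criteria are met.
    AFC cut-off taken as <5 here; the text quotes a 5-7 range."""
    criteria = [
        age >= 40 or other_risk_factor,
        amh_ng_ml < 0.5 or afc < 5,
        previous_oocytes is not None and previous_oocytes <= 3,
    ]
    return sum(criteria) >= 2

print(ovarian_sensitivity_index(12, 2250))           # e.g. 5.33
print(bologna_poor_responder(41, False, 0.4, 4, 2))  # True -> would be excluded
```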
Stimulation protocol
The women participating in this study had ovarian stimulation using the mid-luteal long GnRH agonist protocol, which began with daily S.C. injections of 0.1 mg triptorelin (Decapeptyl, Ipsen Pharma Biotech, France) on Day 21 of the pre-stimulation cycle. The GnRH agonist was continued until the day of HCG administration. Gonadotropin was administered daily by S.C. injection of recombinant FSH follitropin beta (Puregon; Organon, the Netherlands) or recombinant FSH follitropin alpha (Gonal-F; Serono, Switzerland). The dose of gonadotropins was individualized according to the patient's age, body mass index and previous stimulation history, or response to stimulation using the ovarian sensitivity index (Huber et al., 2013). Gonadotropin administration was started after confirmation of pituitary down-regulation by transvaginal scan on days 4-5 of the period and continued for five days, after which the dose was adjusted according to the ovarian response, which was monitored by transvaginal ultrasound and serum E2 levels. Final oocyte maturation was achieved with a 250 ug injection of recombinant HCG (Ovitrelle, Merck-Serono, Switzerland), when one follicle reached a diameter of ≥18 mm, two follicles reached ≥17 mm, or at least 10 follicles had more than 14 mm. Transvaginal oocyte retrieval was performed under general anesthesia 34-36 h after HCG injection.
Observation of the embryos
Routine ICSI was performed 4 hours after oocyte retrieval for all participating women, and the oocytes were checked for fertilization 16-18 hours later. Normal fertilization was indicated by the appearance of two pronuclei. Once post-fertilization check confirmed the availability of ≥5 embryos, the patients were randomized to one of the 3 groups. The embryos were cultured in a commercial sequential IVF medium (Quinn's Advantage Cleavage Medium; SAGE, Pasadena, CA, USA) in triple gas bench-top incubators, with gas concentrations of 6% CO2, 5% O2 and 89% N2. The grading criteria for the embryos were as follows: grade 1, uniform blastomeres, with no DNA fragmentation; grade 2, the blastomere size was slightly uneven with <20% DNA fragmentation; grade 3, the blastomere size was heterogeneous, or with 20-50% DNA fragmentation; and grade 4, >50% DNA fragmentation. The number and grade of the embryonic blastomeres were recorded. Good-quality embryos were defined as embryos containing four cells on day 2 (48h after oocyte retrieval) and six cells on day 3 (72h after oocyte retrieval), with a grade of 1 or 2.
Embryo selection and transfer
Only good quality embryos were transferred. In the day-3 group, two good-quality embryos were transferred. In the day-5 group, two blastocysts were transferred. In the sequential group, one good-quality embryo was transferred on day 3 and one blastocyst was transferred on day 5. Embryo transfer was performed in 20 µl of media using a soft transfer catheter (Cook) under ultrasound guidance.
In the current study, we transferred two embryos in each group, since two embryos are needed in the sequential transfer group. Luteal phase supplementation with vaginal administration of progesterone, 90 mg once daily (Crinone 8%, Serono, United Kingdom), was started from the day of oocyte retrieval and continued until 12 weeks of gestation, if pregnancy was achieved. PGS was not used in any of the participating women, according to the unit protocols.
Outcome measures
The primary outcome measures were clinical pregnancies. Other outcome measures were the implantation, miscarriage, multiple pregnancy and live birth rates. Pregnancy testing was performed 14 days after embryo transfer. Ultrasound examination was performed at week 7 (about 5 weeks after transfer) to assess fetal sac number and fetal heartbeat. Clinical pregnancy was defined as the presence of a fetal heartbeat on ultrasound examination at 7 weeks of pregnancy. The implantation rate was defined as the number of gestational sacs seen on the ultrasound, divided by the total number of embryos/ blastocysts transferred. The implantation rate was calculated for all patients having ET and not just those who became pregnant. Spontaneous miscarriage was defined as a clinical pregnancy loss before 20 weeks of gestational age. Multiple pregnancies were defined as two or more gestational sacs seen on ultrasound. Multiple pregnancy rate was defined as number of multiple pregnancies divided by the total number of positive pregnancies.
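The outcome definitions above translate directly into simple rate calculations, sketched below with hypothetical counts. The denominator used for the clinical pregnancy rate (patients who underwent embryo transfer) is an assumption for the example, since the text does not spell it out.

```python
# Minimal sketch of the outcome-rate definitions given above.
# All counts are hypothetical, not the study's data.
def implantation_rate(gestational_sacs: int, embryos_transferred: int) -> float:
    return gestational_sacs / embryos_transferred

def clinical_pregnancy_rate(clinical_pregnancies: int, patients_with_transfer: int) -> float:
    return clinical_pregnancies / patients_with_transfer

def multiple_pregnancy_rate(multiple_pregnancies: int, positive_pregnancies: int) -> float:
    return multiple_pregnancies / positive_pregnancies

# Hypothetical example for one arm: 70 patients, 140 embryos transferred
print(f"Implantation rate:       {implantation_rate(35, 140):.1%}")
print(f"Clinical pregnancy rate: {clinical_pregnancy_rate(30, 70):.1%}")
print(f"Multiple pregnancy rate: {multiple_pregnancy_rate(5, 30):.1%}")
```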
Sample size calculation
The sample size was estimated at 210 women (at least 70 cases in each group), based on an expected 10% increase in the clinical pregnancy rate with the use of sequential embryo transfer versus day-3 or day-5 embryo transfer, and a 10% dropout rate, to achieve 80% power at a significance level (alpha) of 0.05.
Statistical analysis
The results were tabulated and statistically analyzed using the SPSS software (Statistical Package for the Social Sciences, Chicago, IL, USA), version 15. The data were expressed as mean±SD unless stated otherwise. We used the chi-squared test to analyze categorical variables in clinical pregnancy rates, while the Student's t-test was used for the implantation rate. The probability (P) value was calculated and a p-value <0.05 was considered statistically significant.
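As a minimal illustration of the categorical analysis described above, the script below applies a chi-squared test to a 3×2 contingency table of clinical pregnancies. The counts are hypothetical and the use of Python/scipy is only a stand-in for the SPSS procedure actually used.

```python
# Minimal sketch of the chi-squared comparison of clinical pregnancy across the
# three transfer groups. Counts are hypothetical, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: day-3, day-5, sequential groups; columns: pregnant, not pregnant
table = np.array([
    [22, 46],   # day-3
    [22, 45],   # day-5
    [34, 35],   # sequential
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```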
RESULTS
The basic demographic characteristics included age, body mass index (BMI), type of infertility, duration of infertility, cause of infertility, basal FSH, AMH, and failed cycles (table 1).
There were no significant differences between the three groups regarding retrieved oocytes, number of eggs fertilized, number of eggs cleaved, number of good-quality embryos on day 3, number of cells on day 3 per embryo, transferred embryos, multiple pregnancy rate and miscarriage rate (p>0.05, table 2).
The clinical pregnancy rate was significantly higher in the sequential group than in either day-3 or day-5 groups (p<0.05, table 2).
None of the cycles was cancelled as randomization was done after oocyte retrieval, post fertilization check and availability of five or more good quality embryos.
DISCUSSION
The major advantage of sequential transfers over blastocyst transfer is to get the high implantation potential of blastocyst transfers and, at the same time, to avoid a possible frustrating situation of transfer cancellation in cases planned for only blastocyst transfer. Therefore, a strategy of sequential or two-step transfer has been suggested (Tan et al., 2005). The current study showed that sequential embryo transfer on day 3 (cleavage ET) and day 5 (blastocyst ET) was associated with higher pregnancy, implantation and live birth rates than either day-3 or day-5 embryo transfers.
(Table 2, continued) Miscarriage rate: 2 (9.5%), 3 (13.6%), 4 (11.7%); p=0.71. Data presented as mean ± (standard deviation) or n (%).
Possible explanations of those results include mechanical endometrial stimulation, which has been associated with higher pregnancy rates in women with recurrent implantation failures (Barash et al., 2003; Zhou et al., 2008); this was also found in a recent Cochrane review published by Hennes et al., 2019. This mechanical stimulation of the endometrium may be caused by the transfer catheter used on day 3, which increases endometrial receptivity at the time of blastocyst transfer. Loutradis et al. (2004) and Fang et al. (2013) explained this finding by the release of cytokines, as a result of endometrial injury, that enhanced implantation. Another possible explanation is the increase in the probability of hitting the ''implantation window'' by two transfers, since timing may differ among patients according to the response of the endometrium to steroid hormones (Almog et al., 2008). Some authors reported this second explanation as a possible cause for the improved success rates found in women with repeated IVF/ET failures undergoing such intervention (Loutradis et al., 2004; Almog et al., 2008). Therefore, sequential transfer is recommended for patients with recurrent implantation failures who have good quality embryos (Ismail Madkour et al., 2015).
Our study is consistent with other studies that concluded that sequential transfers had significantly higher pregnancy, implantation, and live birth rates, compared to conventional day-3 transfers (Nadkarni et al., 2015; Dalal et al., 2015; Ismail Madkour et al., 2015). Stamenov et al. (2017) used frozen embryos in a natural cycle, and found that sequential embryo transfer (one embryo on day 3 and the other on day 5) had significantly higher implantation and pregnancy rates, significantly lower miscarriage rates, and nonsignificant differences in multiple pregnancy rates, as compared to the transfer of two blastocysts on day 5. The current study findings differed from those reported by Al-Hasani et al. (1990), Ashkenazi et al. (2000) and Tehraninejad et al. (2019), who found nonsignificant differences in pregnancy rates between single and double embryo transfers; however, the histories of the women in those studies, the inclusion criteria and the timing of the initial transfer differed from the current study. Bungum et al. (2003) ran a randomized controlled trial to compare day-3 with day-5 transfers and found nonsignificant differences in pregnancy rates between both groups, which is consistent with the results of the current study; as the numbers of cases who completed the study in both groups were almost the same, this could be a possible explanation as to why we found the same clinical pregnancy rate in both groups.
There have been some criticisms of sequential embryo transfers, namely increased cost and an increased rate of multiple pregnancies (Peramo et al., 1999; Nadkarni et al., 2015). However, in the current study, and contrary to the study of Nadkarni et al. (2015), the number of transferred embryos was similar between the three groups and there was no difference in the incidence of multiple pregnancies, in agreement with other studies (Almog et al., 2008; Ismail Madkour et al., 2015). It has also been suggested that the embryos transferred earlier may be harmed by infection or trauma during the second transfer (Ashkenazi et al., 2000); however, neither the current study nor the study by Tur-Kaspa et al. (1998) showed any adverse effect of the second transfer on implantation.
This study had some limitations. First, we included only women with a good ovarian response, which precluded studying the role of sequential transfer in poor ovarian responders. Second, the use of recombinant gonadotropins precluded studying the effect of other types of gonadotropins. Therefore, further studies with different modalities of ovarian stimulation and different categories of infertile patients are warranted.
CONCLUSION
Sequential transfer on day 3 and day 5 in patients with an adequate number of retrieved oocytes is associated with higher embryo implantation, clinical pregnancy and live birth rates while avoiding complications of blastocyst-only transfer, such as cancellation of the transfer cycle and multiple pregnancies. This technique is advocated for patients who have an adequate number of good-quality embryos to be replaced on both days of transfer, and it is thus not suitable for poor ovarian responders.
Funding information:
We had no financial support for this study.
CONFLICT OF INTEREST
Authors reported no conflict of interest associated with this study. | 2021-03-22T17:19:47.397Z | 2020-11-05T00:00:00.000 | {
"year": 2021,
"sha1": "b45bc1f979ca91fdffd3411eec5a566f1e832682",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5935/1518-0557.20200083",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "973ee7ea758a0171c4583394a90569203f1e89f8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16659690 | pes2o/s2orc | v3-fos-license | Mutation of zebrafish dihydrolipoamide branched-chain transacylase E2 results in motor dysfunction and models maple syrup urine disease
SUMMARY Analysis of zebrafish mutants that demonstrate abnormal locomotive behavior can elucidate the molecular requirements for neural network function and provide new models of human disease. Here, we show that zebrafish quetschkommode (que) mutant larvae exhibit a progressive locomotor defect that culminates in unusual nose-to-tail compressions and an inability to swim. Correspondingly, extracellular peripheral nerve recordings show that que mutants demonstrate abnormal locomotor output to the axial muscles used for swimming. Using positional cloning and candidate gene analysis, we reveal that a point mutation disrupts the gene encoding dihydrolipoamide branched-chain transacylase E2 (Dbt), a component of a mitochondrial enzyme complex, to generate the que phenotype. In humans, mutation of the DBT gene causes maple syrup urine disease (MSUD), a disorder of branched-chain amino acid metabolism that can result in mental retardation, severe dystonia, profound neurological damage and death. que mutants harbor abnormal amino acid levels, similar to MSUD patients and consistent with an error in branched-chain amino acid metabolism. que mutants also contain markedly reduced levels of the neurotransmitter glutamate within the brain and spinal cord, which probably contributes to their abnormal spinal cord locomotor output and aberrant motility behavior, a trait that probably represents severe dystonia in larval zebrafish. Taken together, these data illustrate how defects in branched-chain amino acid metabolism can disrupt nervous system development and/or function, and establish zebrafish que mutants as a model to better understand MSUD.
INTRODUCTION
Maple syrup urine disease (MSUD) is an inherited metabolic disorder of branched-chain amino acids (BCAA): isoleucine, leucine and valine (Strauss and Morton, 2003;Chuang et al., 2006;Chuang et al., 2008). It demonstrates an autosomal recessive pattern of inheritance, and affects ~1 in every 185,000 children worldwide. However, much higher incidence rates are observed in Old Order Mennonite communities, with a ratio of 1:200 children, owing to founder effects. The first step of BCAA metabolism consists of a reversible transamination reaction to yield α-keto acids. The second step is oxidative decarboxylation of the α-keto acids by the mitochondrial branched-chain α-keto acid dehydrogenase (BCKD) complex. Mutation in any of the four genes encoding the three catalytic components of the BCKD complex has been shown to cause MSUD. Affected individuals accumulate BCAAs and α-keto acids in tissues and plasma, which cause the urine and bodily secretions to smell like maple syrup (burned sugar), hence the name of the disorder. In the most severe or 'classic' form of MSUD, the elevated BCAAs and α-keto acids can have a devastating impact on the central nervous system (CNS). If not treated, toxic build-up of BCAAs and α-keto acids causes severe dystonia, coma, cerebral edema, dysmyelination and death within a couple of weeks after birth. Treatment for MSUD typically consists of severe dietary restriction of BCAAs (Strauss et al., 2010). However, even with a carefully monitored diet, secondary illnesses can lead to metabolic crisis and neurological damage, which is thought to generate the mental retardation and psychiatric problems that are often observed. An alternative treatment for MSUD is elective liver transplantation, which reduces plasma levels of BCAAs and α-keto acids and spares the CNS from severe injury (Strauss et al., 2006). Unfortunately, the scarcity of livers for transplantation, surgical risks and lifetime usage of immunosuppressants limit this treatment option. A better understanding of the mechanisms that cause the neuropathology of MSUD and development of new therapeutic drugs could greatly benefit affected individuals as well as yield new insight into CNS metabolism.
To investigate the pathophysiology of MSUD, a mouse model of the classic form of the disease was created by deleting the gene encoding the E2 subunit of the BCKD complex (Homanics et al., 2006). A model of a less severe form of the disease, intermediate MSUD, was also developed. These mice recapitulate several aspects of MSUD, including elevated levels of BCAAs and α-keto acids in tissues and plasma, severe neurological impairment, and decreased phenotypic severity in response to liver cell transplantation (Skvorak et al., 2009a;Skvorak et al., 2009b;Zinnanti et al., 2009). However, the neuropathological mechanisms of MSUD remain poorly understood. Additional animal models could provide a complementary tool to study the mechanisms of this disease that generate CNS injury.
Developing zebrafish provide an excellent system to investigate the cellular and molecular mechanisms of MSUD. Although BCAA metabolism has yet to be described in this system, larval zebrafish offer several advantageous features, including small size, rapid development external to the mother, optical transparency, large clutch sizes and organ systems that are broadly similar to those in mammals. These features make zebrafish larvae readily amenable to a wide variety of behavioral, genetic, imaging, physiological and pharmacological approaches. The identification of zebrafish mutants with impaired BCKD complex function holds promise as a particularly useful tool to investigate the effects of disrupted BCAA metabolism.
Zebrafish perform a characteristic sequence of motor behaviors during development, which reflects different stages of locomotor network formation (Kimmel et al., 1974;Saint-Amant and Drapeau, 1998;Downes and Granato, 2006;McKeown et al., 2009). These behaviors were used as the basis for extensive mutagenesis screens to identify mutants with specific defects in embryonic motility (Granato et al., 1996). Mutants that demonstrated similar, abnormal behavior were grouped into phenotypic classes, including the accordion class. Starting around 21 hours post-fertilization (hpf), wild-type embryos demonstrate smooth, alternating tail coils. By contrast, accordion class mutants compress along the rostrocaudal axis and relax, like an accordion. Quetschkommode (que) mutants were grouped into the accordion class but, outside of the initial study identifying these mutants, they have not been further characterized, nor has the identity of the mutated gene been reported.
In this study, we reveal that que mutants contain a mutation that disrupts the E2 subunit of the BCKD complex. Correspondingly, we present evidence that que mutants accumulate high concentrations of BCAAs. que larvae also contain reduced levels of the neurotransmitter glutamate, which probably contributes to the aberrant CNS function and abnormal behavior observed in this mutant. Combined, these data identify the que mutant as a vertebrate model of MSUD, which holds promise as a powerful tool to better understand this disease and identify therapeutic compounds.
RESULTS
que mutants exhibit abnormal rostro-caudal compressions in response to touch

A single allele of que (ti274) was identified from a previously performed mutagenesis screen (Granato et al., 1996;Zottoli and Faber, 2000). The mutation is recessive, and homozygous mutants first demonstrate abnormal rostro-caudal compressions at around 72 hpf (data not shown). The abnormal behavior becomes more robust by 96 hpf. At this time point, high-speed video analysis shows that wild-type larvae respond to touch by first performing a large-amplitude body bend in which the nose touches the tip of the tail, a so-called C-start or C-bend (defined here as greater than 110°), followed by lower-amplitude body undulations to swim away (Fig. 1A; supplementary material Movie 1) (Eaton et al., 1977;Zottoli and Faber, 2000). By contrast, que mutants do not perform the initial C-bend or lower-amplitude body undulations, and instead demonstrate rostro-caudal shortening (Fig. 1B; supplementary material Movie 2). This behavior, called accordion behavior (Granato et al., 1996), is due to abnormal coordination of axial left-right muscle contractions. Other mutants have been shown to demonstrate accordion behavior due to defects in either CNS function, such as bandoneon (beo), or muscle relaxation, such as accordion (acc) (Gleason et al., 2004;Hirata et al., 2004;Hirata et al., 2005;Olson et al., 2010). We used kinematic analysis to compare the swimming behavior of wild-type, beo, acc and que larvae to each other to determine whether we could distinguish defects in CNS function from defects in muscle relaxation and classify que mutants. No clear trend emerged; however, que mutants consistently show the most dramatic disruption of swimming behavior compared with acc or beo mutants. Although acc and beo mutants demonstrate abnormal swimming behavior, C-bends are often observed (Fig. 1C-E). By contrast, que mutants very rarely execute large-amplitude body bends (Fig. 1F). que mutants continue to perform accordion behavior 5-6 days post-fertilization (dpf). They fail to inflate the swim bladder, which would enable them to feed, and eventually die around 7 dpf.

que mutants demonstrate abnormal motor output

To better examine whether que mutants harbor a defect in CNS function as opposed to a defect in muscle relaxation, we analyzed fictive locomotor output from the spinal cord by performing extracellular peripheral nerve recordings. Zebrafish larvae demonstrate bouts of motor output in response to touch (Fig. 2A,B). These bouts are composed of tightly coordinated bursts that alternate rapidly between the left and right sides and orchestrate the axial muscle contractions that constitute swimming. In wild-type larvae, we observed rapid alternations in locomotor output between the left and right sides, with little overlap in bursting activity (6.5±6%, n = 5; Fig. 2C), as described previously (Masino and Fetcho, 2005). In que mutants, although the coordination of left-right locomotor activity was similar to wild-type siblings (compare Fig. 2C with 2D), the amount of overlap between left-right bursting activity was significantly increased (19.1±11.3%, n = 7, t = −2.3, P<0.05). This increase in activity overlap is consistent with the abnormal coordination of left-right muscle contractions performed by que mutants. These data do not rule out the possibility that que mutants contain a defect in muscle relaxation; however, they do indicate that abnormal motor output from the CNS at least contributes to the behavior of this mutant.
To further characterize potential differences in locomotor output between wild-type and que mutant siblings, we examined a range of bout and burst properties related to rhythmic locomotor activity during fictive swimming (Masino and Fetcho, 2005). Although most of these properties were not significantly different between wild-type and mutant larvae (Table 1), que mutants generated a significantly greater number of bouts following touch stimulus applied to the head than did wild-type larvae [11.6±6.2 bouts and 1.9±0.9 bouts, respectively (t = 3.5, P<0.01, n = 5)]. These results suggest that, compared with wild type, there are subtle, yet significant, changes in the locomotor circuit that underlie swimming in que mutant larvae and participate in generating the mutant behavioral phenotype.
The que gene encodes dihydrolipoamide branched-chain transacylase E2 (Dbt)
To determine the molecular identity of the que gene, we used a positional cloning strategy. Using a three-generation map cross panel, we screened pools of genomic DNA from wild-type siblings and homozygous mutants with a panel of simple sequence length polymorphism (SSLP) markers. que mapped to chromosome 22, which confirmed previous low-resolution mapping results (Geisler et al., 2007). We then used DNA extracted from single embryos and single nucleotide polymorphism (SNP) markers to refine the map position to a 0.36 cM interval between the markers ENSDART109865 and wu:f63d09 (Fig. 3A). Extensive genome database analysis and sequencing of nearby candidate genes led us to dihydrolipoamide branched chain transacylase E2 (dbt), which encodes a subunit of the BCKD complex, which is required for BCAA metabolism. Zebrafish Dbt is predicted to be 493 amino acids in length, and it is ~78.2% identical to the human protein (data not shown). Mutations in the human DBT gene are known to cause MSUD, which can result in severe dystonia and death if not treated. Given that que mutants demonstrate abnormal behavior and nervous system function consistent with the severe dystonia observed in humans, we sequenced the dbt gene. Sequence analysis of the dbt gene from que homozygotes revealed a single nucleotide substitution in the splice donor site of exon 6 compared with wild type. The guanine on the intron side of this splice site is changed to an adenine (Fig. 3B). To determine whether this change affects the splicing of intron 6, as would be predicted, we performed reverse transcriptase (RT)-PCR using one primer in exon 6 and one primer in exon 7 (Fig. 3C). We found that RNA extracts from wild-type larvae were spliced according to prediction; however, RNA extracts from homozygous mutants revealed a larger transcript that contained the entire 86 base pairs of intron 6, indicating that it was spliced incorrectly (data not shown). This intron alters the sequence downstream of Lys268 and contains four stop codons, which would prematurely truncate the Dbt protein by 224 amino acids. dbt contains an acetyl transferase domain, essential for its function (Chuang et al., 2008), which would be largely absent from the que mutant protein. Interestingly, in humans, a mutation that prematurely truncates the DBT protein at the orthologous position (Lys257) was reported in an individual with the most severe or 'classic' form of MSUD (Herring et al., 1992;Chuang et al., 2008). These data indicate that the que mutation is a loss-of-function allele that diminishes or abolishes BCKD complex function.

Fig. 1. (A,B) Selected frames from high-speed video recordings are shown with times indicated in milliseconds. (A) A wild-type larva demonstrates a normal C-bend (A4, asterisk) in response to a touch stimulus, followed by smaller-amplitude body undulations to clear the field (A5-A12). (B) A que mutant demonstrates abnormal rostro-caudal shortening and it fails to escape. (C-F) Kinematic traces are shown, with zero degrees indicating a straight body and positive and negative angles representing body bends in opposite directions. Time is shown in seconds. Ten representative traces are shown for each phenotype. (C) Wild-type embryos typically perform a C-bend (defined here as greater than 110°; asterisks) followed by smaller-amplitude body undulations. (D) bandoneon (beo) mutants, which contain a CNS defect and demonstrate behavior similar to que, sometimes perform a C-bend followed by abnormal body bends. (E) accordion (acc) mutants, which contain a muscle relaxation defect and also demonstrate behavior similar to que, sometimes perform a C-bend but fail to perform smaller-amplitude body bends. (F) que mutants rarely perform a C-bend and demonstrate few smaller-amplitude body undulations.
To further confirm the molecular identity of que, we injected wild-type embryos with either a standard control morpholino or a morpholino designed to block translation of dbt. Embryos were injected at the one- to four-cell stage and monitored over the course of development. Embryos injected with the standard control morpholino exhibited mostly normal behavior throughout the course of development (97.2% of surviving larvae, n = 107; Fig. 3D). Notably, 37.5% (n = 144) of surviving larvae injected with the morpholino designed to target dbt demonstrated clear rostro-caudal compressions and fewer large-amplitude body bends at 96 hpf, similar to que mutants (Fig. 3E). It is important to note that morpholinos are known to lose effectiveness at ~4-5 dpf owing to turnover, which probably explains why not all embryos injected with the dbt morpholino demonstrated the robust accordion behavior performed by que mutants (Bill et al., 2009). We also attempted to perform rescue experiments in mutant embryos by injecting mRNA encoding dbt at the one- to four-cell stage and analyzing motility behavior at 4 dpf. We did not observe rescue (data not shown); however, mRNA is known to lose effectiveness by 2 dpf owing to turnover. The que behavioral phenotype is not apparent until 3-4 dpf, which indicates that Dbt is required at this stage of development and precludes mRNA rescue. Regardless, the mapping data, nature of the que mutation, aberrant mRNA splicing observed in mutants, and morpholino phenocopy all argue that the que gene encodes Dbt.
dbt mRNA becomes enriched in the brain and gut organs during development

We next examined the spatial and temporal expression of dbt in developing zebrafish. RT-PCR revealed that dbt mRNA was present at all time points examined from 6 hpf to 120 hpf (Fig. 4A). In situ hybridization also confirmed early expression. dbt was detected at the two-cell stage, indicating it is a maternally deposited mRNA (Fig. 4B). The spatial expression of dbt is initially widespread through 24 hpf (Fig. 4C,D); however, its expression pattern over the next few days of development becomes enriched in the brain and organs in the gut, such as liver and intestine (Fig. 4E-H). These data suggest that dbt plays an important role in BCAA metabolism through function in these tissues. The prominent expression within the brain, in particular, suggests that dbt is important for CNS function. Intriguingly, the expression pattern of dbt, with progressive enrichment in the brain and gut organs over the course of development, is reminiscent of another mitochondrial protein that is important for CNS function, Opa3 (Pei et al., 2010).
que mutants harbor elevated levels of BCAAs
In mammalian systems, impaired DBT function, as demonstrated by MSUD-affected individuals, results in elevated levels of BCAAs (Strauss and Morton, 2003;Chuang et al., 2006;Chuang et al., 2008). Because dbt is disrupted in que mutants, we investigated their free amino acid profiles. Owing to the small size of larval zebrafish, a homogenate of 50 whole animals was used for each assay. We compared the free amino acid levels of wild-type and que mutant larvae at 96 hpf. Strikingly, que larvae harbor elevated levels of BCAAs (Fig. 5A,B). Isoleucine, leucine and valine concentrations were 788%, 1006% and 688% (n = 3, P<0.01) of those of wild type, respectively. que mutants also showed a marked decrease in free glutamine levels, at 24% of wild type (n = 3, P<0.01). In addition, statistically significant decreases were observed in the levels of a wide variety of free amino acids, including aspartate (16%), GABA (32%) and serine (28% of wild type; all n = 3, P<0.01); and alanine (44%), glutamate (38%), glycine (41%), methionine (49%) and threonine (60% of wild type; all n = 3, P<0.05). To rule out the possibility that abnormal motor behavior itself alters free amino acid levels, we examined beo mutants, which contain a mutation in the glycine receptor β subunit and exhibit abnormal behavior similar to que mutants (Fig. 5C) (Hirata et al., 2005). The free amino acid levels in beo mutants (n = 1) were similar to those of wild-type controls, indicating that accordion behavior alone does not substantially alter free amino acid concentrations. Combined, these data provide strong evidence that mutation of the que gene results in an error in amino acid metabolism, yielding a prominent accumulation of BCAAs.
Glutamate levels are reduced in the brain of que mutant larvae

Although the neuropathology of MSUD is not well understood, reduced concentrations of neurotransmitters, including glutamate, were observed in the intermediate MSUD mouse model (Zinnanti et al., 2009). Neurotransmitter depletion was found to correlate with abnormal motor behavior and a highly abnormal posture consisting of recumbency and stiff, extended limbs. Given that our analysis of the free amino acid levels in que mutants showed decreased concentrations of free glutamate and these mutants demonstrate abnormal CNS function and motor behavior, we examined the distribution of glutamate using an antibody. As a control for antibody penetration and overall tissue morphology, we also stained using an acetylated tubulin antibody. Antibody penetration and general morphology of the brain of que mutants seemed similar to wild type at 96 hpf (compare Fig. 6A with 6D). By contrast, glutamate levels were markedly reduced in que mutant larvae (n = 5 embryos, 12 sections, P<0.01; compare Fig. 6B with 6E, Fig. 6G), which probably contributes to the abnormal nervous system function and behavior observed by this stage of development.
DISCUSSION
In this study, we revealed that the dbt gene plays an essential role in developing zebrafish. The molecular nature of the que mutation, phenocopy through antisense morpholino injection, and the profile of free amino acid concentrations indicate that loss of dbt function results in abnormal amino acid metabolism and a dramatic accumulation of BCAAs. We determined that the dbt gene becomes enriched in the brain and organs in the gut during zebrafish development, and that impaired dbt function results in reduced levels of glutamate in the brain. Because glutamate is crucial for CNS function, the reduced levels of this neurotransmitter probably promote the abnormal spinal cord output and accordion behavior demonstrated by que mutants.

Fig. 3. Below, the wild-type splice pattern is illustrated, with protein-coding exon 6, the sequence at the splice site, the intervening intron, and protein-coding exon 7 depicted. The que mutant splicing pattern is also illustrated, including the nucleotide substitution, which results in a failure to remove the intron. The intron contains four stop codons (asterisks). RT-PCR results using mRNA from wild type, que mutants and -RT controls are also shown using primers targeted towards exon 6 and exon 7. A larger DNA product, containing intron sequence, can be observed using mRNA isolated from que mutants. (D,E) Ten kinematic traces are shown for embryos injected with (D) the control morpholino or (E) a dbt translation-blocking morpholino. Embryos injected with the control morpholino perform C-bends (asterisk) and normal swimming behavior. dbt morphant embryos demonstrate abnormal swimming behavior and few large-amplitude body bends, similar to que mutants.
dbt is required for brain function in zebrafish
The findings from this study, as well as observations in rodent and human systems, suggest a model for how mutation of the zebrafish dbt gene leads to abnormal swimming. In mammalian systems, dbt is required for the second step of BCAA metabolism, and its impairment leads to elevated levels of BCAAs and α-keto acids in plasma and tissue (Chuang et al., 2006;Chuang et al., 2008). Elevated BCAAs in the plasma, in particular leucine, are thought to out-compete other amino acids at the blood-brain barrier, which results in neurotransmitter deficiencies, growth restrictions, cytotoxic edema, myelin disruption, and impaired energy metabolism throughout the CNS (Zinnanti et al., 2009). α-keto acid toxicity has also been proposed to directly disrupt CNS function. Intracranial injection of α-ketovaleric acid, which is derived from valine, has been shown to elicit seizures in rats, whereas administration of other α-keto acids had no behavioral effect (Coitinho et al., 2001). In individuals with MSUD, reducing the concentrations of BCAAs and α-keto acids in the plasma by liver transplantation can protect CNS function and development (Strauss et al., 2006).
We propose that very similar mechanisms regulate BCAA metabolism in zebrafish (Fig. 7). Throughout the first 5 days of zebrafish development, the embryo consumes the presumptive equivalent of a high-protein diet in mammals by absorbing BCAA-containing proteins from the yolk (Link et al., 2006;Tay et al., 2006). During the earliest stages of development, within the first few hours post-fertilization, the metabolic needs of the embryo are largely met by maternally derived mitochondria and mRNA (Mendelsohn and Gitlin, 2008;Zhang et al., 2008;Abrams and Mullins, 2009). However, as embryogenesis proceeds, BCAA metabolism increasingly relies upon zygotic transcription. In wild-type embryos, BCKD complex function in the liver, other gut organs and the brain itself protects the CNS from BCAA toxicity, similar to mammalian systems. According to this model, BCKD complex function supports appropriate import of amino acids into the CNS and robust metabolic generation of neurotransmitters, which are essential to support the coordinated CNS output that generates vigorous swimming behavior. In que mutants, our data indicate that mutation of dbt disrupts BCKD complex function to cause the toxic accumulation of BCAAs and, probably, α-keto acids. This error probably causes abnormal retention, metabolism and import of amino acids into the CNS, reduced levels of glutamate and other neurotransmitters, abnormal CNS function, and accordion behavior (Fig. 7). It will be interesting to use transgenic approaches to determine whether restoring gene function in the liver of que mutants preserves normal CNS function, as shown in mammals (Strauss et al., 2006;Skvorak et al., 2009a;Skvorak et al., 2009b). Driving gene expression in other organs not yet explored in mammals can also be investigated, which might indicate new therapeutic options for individuals with MSUD.

que mutants are a new animal model of MSUD

que larvae harbor a mutation in zebrafish dbt, which results in elevated BCAA levels, similar to both the mouse models of MSUD and affected humans. In the mouse model of intermediate MSUD, elevated levels of BCAAs have been shown to correlate with progressive disruption of CNS function and concomitant defects in motor behavior that culminate in severe dystonia (Silberman et al., 1961;Morton et al., 2002;Zinnanti et al., 2009). Similarly, severe dystonia has been reported in MSUD-affected individuals during acute metabolic decompensation (Silberman et al., 1961;Morton et al., 2002;Zinnanti et al., 2009). que mutants demonstrate a progressive defect in motor behavior that culminates in abnormal CNS function and accordion behavior.
Accordion behavior is probably the expression of severe dystonia in developing zebrafish. We and others have previously shown that zebrafish mutants that exhibit accordion behavior contain mutations in genes known to control movement and muscle tone in mammalian systems (Downes and Granato, 2004;Gleason et al., 2004;Hirata et al., 2004;Hirata et al., 2005;Wang et al., 2008;Olson et al., 2010).
The findings from this study indicate that que mutants are a new animal model of MSUD. One aspect of MSUD is the distinct, maple syrup smell of bodily secretions of affected individuals. We did not detect any distinct odor of que mutants (data not shown); however, this is probably due to the small size and minute amounts of secretions produced by larval zebrafish. Nevertheless, que larvae seem to recapitulate molecular, biochemical, cellular and behavioral aspects of MSUD. Because larval zebrafish contain a smaller nervous system than do mammalian systems, with fewer cells, que mutants provide a promising system to better characterize the progression of CNS injury in response to BCAA toxicity. Moreover, the small size, aquatic nature, development that is external to the mother and the ability to obtain large numbers of zebrafish embryos make them amenable to small-molecule screens (Zon and Peterson, 2010). The behavioral phenotype of que mutants is robust and easily quantifiable; therefore, que mutants could be developed into a high-throughput system to screen libraries of compounds to identify small molecules that improve swimming behavior. Compounds that improve the behavioral phenotype of que mutants could be candidate therapeutics for individuals with MSUD.

Fig. 5. The amino acids are referred to by their three-letter code, except for GABA. Each experiment contained a homogenate of 50 larvae. The error bars indicate standard error. (A) The free amino acid profile for wild-type larvae (n = 3). (B) The free amino acid profile of que mutant larvae reveals a dramatic accumulation of BCAAs: isoleucine, leucine and valine. Other amino acid levels were reduced. *Significant difference from wild type at P<0.05; **significant difference from wild type at P<0.01 (n = 3). (C) The free amino acid profile of beo, a zebrafish mutant that demonstrates abnormal behavior owing to a CNS defect, indicates that abnormal behavior alone does not markedly alter free amino acid levels (n = 1).

Fig. 6. que mutants contain a reduced concentration of glutamate in the brain. (A-F) Cross-sectional views of the hindbrain of 96 hpf larvae are shown. Immunohistochemistry using antibodies against acetylated tubulin, which predominantly labels axon tracts, reveals the overall structure of the brain and demonstrates tissue penetration of the antibodies. Staining using an antibody against L-glutamate illustrates the distribution of this neurotransmitter. (A) Labeling with the anti-acetylated tubulin antibody reveals the axon tracts and overall structure of the hindbrain of wild-type larvae. (B) The hindbrain of wild-type larvae contains a broad distribution of L-glutamate. (C) The merged images show several L-glutamate-positive cells surrounded by anti-acetylated tubulin labeling. (D) The overall structure of the hindbrain of que mutants revealed by anti-acetylated tubulin appears similar to the hindbrain of wild-type larvae. (E) The fluorescence intensity of labeling with the L-glutamate antibody is greatly reduced compared with wild type when imaged using the same microscope settings. However, increasing the gain of the confocal microscope shows more faint L-glutamate staining (inset). (F) The merged images show little L-glutamate staining compared with acetylated tubulin labeling. (G) The graph shows a significant reduction in L-glutamate staining intensity in que mutants normalized to acetylated tubulin staining. The fluorescent intensity values are the analog-to-digital converter values of the entire frame (n = 5 embryos, 12 sections, **P<0.01). Very similar results were obtained when a region of interest was selected to encompass a smaller, designated portion of the brain.
Other aspects of MSUD can also be investigated using larval zebrafish. Gene targeting approaches can be readily employed, such as morpholino injection or zinc-finger nuclease technology, to model MSUD caused by disruption of other BCKD complex subunits (Ekker, 2008;Bill et al., 2009). These technologies can also be used to examine the in vivo role of BCKD regulatory proteins, such as the BCKD phosphatase or kinase. Genetic modifier screens can also be performed using the que mutant to search for genes that can compensate for disruptions in BCAA metabolism. Taken together, these approaches can provide a promising platform to better understand CNS metabolism and develop new therapies to combat MSUD.
Zebrafish maintenance and breeding
All animal protocols were approved by the Institutional Animal Care and Use Committees (IACUC) at the University of Massachusetts and the University of Minnesota. Zebrafish were raised and maintained according to standard procedures. Developing zebrafish were kept at 28.5°C in E3 media and staged according to morphological criteria (Kimmel et al., 1995;Parichy et al., 2009). Experiments were performed using que ti274 , beo ap21 and acc tq206 mutant alleles maintained on a mixed Tübingen (Tü) or tub longfin (TLF) genetic background.
Behavioral analysis
To characterize swimming behavior, light-touch stimuli were applied to the head of larvae using a 1 mm insect pin. The response was recorded using a high-speed video camera (Fastec Imaging, San Diego, CA), recording 500-1000 frames per second, mounted to a 35 mm lens (Nikon, Melville, NY). The head-to-tail angle for each frame was measured using automated software developed by G.B.D.'s laboratory (Kelly Anne McKeown and Sandy Whittlesey, unpublished). Briefly, pixel density analysis was used to identify three landmarks along the larval body: the tip of the nose, the border between the yolk ball and yolk extension, and the tip of the tail. These three points form an angle, and these angles were plotted over time using Microsoft Excel.
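The geometric step of this measurement (the angle defined by the three landmarks) can be written compactly. The sketch below is a minimal Python illustration only; the original software is unpublished and its language is not stated, the landmark coordinates are assumed to come from the pixel-density analysis described above, and the sign convention for bend direction is an assumption.

import numpy as np

def bend_angle(nose, yolk, tail):
    # Signed body-bend angle in degrees: 0 for a straight body, positive and
    # negative values for bends in opposite directions (sign convention assumed).
    v1 = np.asarray(nose, dtype=float) - np.asarray(yolk, dtype=float)
    v2 = np.asarray(tail, dtype=float) - np.asarray(yolk, dtype=float)
    # Angle at the yolk landmark between the nose and tail vectors, in (-180, 180].
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    vertex = np.degrees(np.arctan2(cross, float(np.dot(v1, v2))))
    # A straight body corresponds to a vertex angle of 180 degrees, so report
    # the deviation from straight while keeping the sign of the bend direction.
    return np.sign(vertex) * (180.0 - abs(vertex))

Applied frame by frame, this yields traces of the kind shown in Fig. 1C-F, with a C-bend registering as an excursion beyond 110°.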
Electrophysiological recordings
Zebrafish larvae at 4 dpf were anesthetized with 0.02% Tricaine-S (Western Chemical) in extracellular recording solution (Legendre and Korn, 1994;Drapeau et al., 1999;Masino and Fetcho, 2005) and paralyzed with 0.1% (w/v) α-bungarotoxin (Sigma), which significantly reduced or abolished postsynaptic muscle activity based on patch recordings from muscle fibers (Masino and Fetcho, 2005). The extracellular solution was superfused continuously at 22-26°C. Larvae were pinned in a dorsoventral position to a Sylgard-lined glass-bottom Petri dish and the skin was removed. Extracellular suction electrode recording techniques were used to monitor the activity of peripheral nerves during fictive behavior (Masino and Fetcho, 2005). Activity occurred spontaneously but was also initiated by gently applying a touch stimulus to the head with a tungsten pin controlled and positioned by a manual micromanipulator (MX130, Siskiyou, Grants Pass, OR). The tip of the extracellular suction electrode (~15 μm tip diameter) was positioned at the dorsoventral midline of a myotomal cleft where the skin had been removed. All extracellular recordings were restricted to between body segments 7 and 15. A MultiClamp 700B (Molecular Devices, Sunnyvale, CA) amplifier was used to monitor extracellular voltage in current-clamp mode at a gain of 1000 (Rf = 50 MΩ) with the low- and high-frequency cut-offs at 100 and 4000 Hz, respectively. Recordings were sampled at 10 kHz. Extracellular recordings were digitized using a digitizing board (DigiData series 1440A, Molecular Devices, Sunnyvale, CA), acquired using pClamp 10 software and rectified offline.
Analysis of peripheral nerve activity
A program written in MATLAB (Mathworks, Natick, MA) was used to analyze the data. Estimates of mean burst frequency were determined from a Fourier transform, such that the initial estimate of mean burst frequency was the frequency at which the Fourier transform magnitude peaked over a frequency band from 0.1 to 5 Hz. The rectified voltage recordings were smoothed with a Gaussian-weighted moving average with 99% of the weight concentrated over an interval whose width was one-quarter of the reciprocal of the estimated burst frequency ('quarter-width').

The occurrence times of rhythmic bursts in the smoothed voltages were determined with an algorithm that searched for local peaks and troughs over quarter-width intervals while forcing adjacent peaks and troughs to be separated by at least a quarter-width, and furthermore forcing peaks and troughs to alternate. With the peaks and troughs defined, the individual 'burst sections' were then defined as the interval between adjacent troughs. To determine the start of individual bursts, the burst-onset was defined as the time at which the smoothed waveform rose from the first trough to 10% of the way to the next peak. Similarly, burst-termination was defined as the time at which the smoothed waveform fell from the peak by 90% of the vertical distance to the next trough. Next, the analysis program was used to determine the bout and burst properties for each voltage trace in a manner similar to that used by Masino and Fetcho (Masino and Fetcho, 2005). Finally, the amount of overlap in activity between alternating (left-right) bursts was measured as the proportion of the total burst time occupied by the simultaneous activity in the paired (left-right) extracellular recordings (Fig. 2C,D, overlap indicated by gray bars). The means and standard deviations for each parameter were then determined.

Fig. 7. A working model of how mutation of dbt results in abnormal, accordion behavior. Similar to mammalian systems, we propose that wild-type zebrafish regulate metabolism of BCAAs via the BCKD complex. Many of these metabolic or molecular steps (black arrows) might occur in organs in the gut (such as the liver and intestine) but also in the CNS. Amino acids (AA), such as glutamine, are transported across the blood-brain barrier and used to generate glutamate, GABA and other neurotransmitters (NT). These neurotransmitters are required for coordinated nervous system output to orchestrate swimming behavior. In que mutants, we propose that impaired BCKD function results in the accumulation of BCAAs and α-keto acids. This yields reduced retention and metabolism, and reduced transport of other amino acids (white arrows) across the blood-brain barrier, yielding diminished neurotransmitter synthesis. The abnormal levels of neurotransmitters contribute to aberrant nervous system output and abnormal, accordion behavior. It is also possible that elevated concentrations of α-keto acids directly disrupt neural circuits to cause accordion behavior.
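The burst-detection steps described in this subsection (frequency estimate, Gaussian smoothing over a quarter-width, peak/trough search, and the 10%/90% onset/offset rule) can be re-implemented compactly. The Python sketch below is only an illustration under those stated rules: the original analysis was done in MATLAB, all function and variable names here are assumptions, and strict peak-trough alternation is not enforced.

import numpy as np
from scipy.signal import find_peaks

def detect_bursts(rectified, fs, f_lo=0.1, f_hi=5.0):
    # Return the burst-frequency estimate and a list of (onset, offset) times in seconds.
    v = np.asarray(rectified, dtype=float)
    # 1. Initial burst-frequency estimate: peak of the Fourier magnitude in 0.1-5 Hz.
    freqs = np.fft.rfftfreq(v.size, d=1.0 / fs)
    mag = np.abs(np.fft.rfft(v - v.mean()))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    f_burst = freqs[band][np.argmax(mag[band])]
    # 2. Gaussian-weighted moving average with ~99% of the weight inside one
    #    quarter-width (a quarter of the estimated burst period).
    qw = max(1, int(round(fs / (4.0 * f_burst))))
    sigma = qw / 5.15                      # 99% of a Gaussian lies within +/- 2.576 sigma
    x = np.arange(-3 * qw, 3 * qw + 1)
    kern = np.exp(-0.5 * (x / sigma) ** 2)
    smooth = np.convolve(v, kern / kern.sum(), mode="same")
    # 3. Local peaks and troughs separated by at least a quarter-width.
    peaks, _ = find_peaks(smooth, distance=qw)
    troughs, _ = find_peaks(-smooth, distance=qw)
    # 4. Onset: 10% rise from the preceding trough toward the peak;
    #    termination: 90% fall from the peak toward the following trough.
    bursts = []
    for p in peaks:
        before, after = troughs[troughs < p], troughs[troughs > p]
        if before.size == 0 or after.size == 0:
            continue
        t0, t1 = before[-1], after[0]
        on_level = smooth[t0] + 0.1 * (smooth[p] - smooth[t0])
        off_level = smooth[p] - 0.9 * (smooth[p] - smooth[t1])
        onset = t0 + int(np.argmax(smooth[t0:p + 1] >= on_level))
        offset = p + int(np.argmax(smooth[p:t1 + 1] <= off_level))
        bursts.append((onset / fs, offset / fs))
    return f_burst, bursts

The left-right overlap reported in the Results would then be the fraction of total burst time during which bursts detected on the two paired recordings coincide.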
Chromosomal mapping and sequence analysis
Crosses between fish heterozygous for the que allele and WIK fish were used to generate a three-generation map cross panel. F2 que mutant embryos and wild-type siblings were collected, sorted based upon the 96 hpf phenotype and stored in methanol at -20°C. DNA was extracted from more than 833 mutant larvae, and SSLP markers and SNP markers were obtained and generated against genes to refine the mapping interval. Exons and intron-exon boundaries of candidate genes were sequenced (Genewiz, South Plainfield, NY) from wild type, que siblings and homozygous mutants.
Morpholino analysis
Wild-type zebrafish embryos were pressure injected at the one- to four-cell stage with 12 ng of morpholino designed to block translation of dbt or the standard control morpholino (Gene Tools, Philomath, OR). This amount of morpholino was selected based upon dose-response experiments in which higher doses were found to generate morphological defects and/or lethality. The sequence of the translation-blocking morpholino was 5′-CGCACAGTAATGACCGCCGCCATCT-3′. Underlined residues indicate the start codon. The control morpholino sequence was 5′-CCTCTTACCTCAGTTACAATTTATA-3′. The embryos were raised at 28.5°C, and locomotive behavior was examined across development. Kinematic analysis, as described above, was performed at 96 hpf.
RT-PCR
RT-PCR was used to analyze mRNA splicing in mutants as well as examine expression during development. Primers designed against dbt protein-coding exon 6 (5′-ATCAAACTAAGCGAAGTTGTCGG-3′) and exon 7 (5′-GCGCAACCGGACCAAC-3′) were used to amplify cDNA from wild-type and homozygous que mutant larvae. The primers used to amplify β-actin were 5′-CACACCGTGCCCATCTATGA-3′ and 5′-AGGATCTTCATCAGGTAGTCTGTCAG-3′. The RNAs were reverse transcribed using the Omniscript kit (Qiagen, Venlo, The Netherlands). The dbt PCR products were sequenced for confirmation (Genewiz, South Plainfield, NY). RT-PCR reactions were performed multiple times to decrease the likelihood of amplification artifacts.
Whole-mount in situ hybridization
Antisense digoxigenin probes were generated against dbt using cDNA (Genbank ID BC090917) acquired from Open Biosystems (Huntsville, AL). Whole-mount, colorimetric in situ hybridization was performed using established protocols (Thisse and Thisse, 2008) and examined using a compound microscope (Zeiss, Thornwood, NY) attached to a digital camera (Zeiss, Thornwood, NY). Cross-sections were generated by hand sectioning in situ hybridization-stained embryos with a razor blade attached to a surgical blade holder. To generate sagittal sections, in situ hybridization-stained embryos were embedded in 1.5% agar and 5% sucrose. Blocks were kept in 30% sucrose solution overnight. The next day, blocks were cut into 20 μm sections using a cryostat (Leica, Buffalo Grove, IL).
Amino acid quantification
For each amino acid quantification experiment, 50 96-hpf larvae were sorted based upon the locomotor phenotype, flash frozen in liquid nitrogen and stored at -80°C. The samples were homogenized and precipitated with 0.1 M lithium citrate, 3.3% 5-sulphosalicylic acid. The samples were sonicated for 10 minutes, then centrifuged at 4600 g for 20 minutes. The supernatant was then applied to VivaSpin500 size exclusion columns (Sartorius, Germany) and centrifuged at 15,000 g for 4 hours. The flow-through was stored at -80°C and then sent to the University of California Davis Genome and Proteomics Center to resolve free amino acid concentrations.
Clinical issue
Maple syrup urine disease (MSUD) is an inherited disorder that results in disrupted metabolism of branched-chain amino acids (isoleucine, leucine and valine), leading to the toxic accumulation of these amino acids and their byproducts. This disease can have a devastating impact on the central nervous system (CNS), resulting in mental retardation, severe dystonia, coma or death if not treated. Although mouse models of the disease have been developed and some disease genes are known, the cellular and molecular mechanisms that promote brain injury in individuals with MSUD are not well understood.
Results
In this paper, the authors characterize a zebrafish mutant called quetschkommode (que) that exhibits defects in motor behavior. The mutation is found in the dihydrolipoamide branched chain transacylase E2 (dbt) gene, a homolog of the human DBT gene, which can cause MSUD when mutated. In addition to abnormal behavior, que fish are shown to have disrupted metabolism of branched-chain amino acids and aberrant CNS function, mirroring features of human MSUD.
Implications and future directions
These data reveal the que zebrafish mutant to be a new animal model of MSUD. Because zebrafish offer a number of advantages for cellular, molecular, pharmacological and genetic analysis, the que mutant provides a unique tool to deepen our understanding of and develop new therapeutic options for this disease.
The fluorescent intensity for acetylated tubulin and L-glutamate antibody staining was quantified using the EZ Viewer program (Nikon, Melville, NY) by collecting entire frames (10,1283 μm2) or selecting a region of interest above the notochord (3060 μm2) for both channels. The numbers used for quantification are the analog-to-digital converter (ADC) values of L-glutamate normalized to acetylated tubulin. | 2017-04-03T12:54:24.935Z | 2011-11-01T00:00:00.000 | {
"year": 2011,
"sha1": "948f1798abd2dcf7d38ca94a331f5430e6f56fd1",
"oa_license": "CCBYNCSA",
"oa_url": "http://dmm.biologists.org/content/5/2/248.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c7e9ae85b7c02a8e48c0184b0b2c9658981a8460",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15437761 | pes2o/s2orc | v3-fos-license | Pair Phase Fluctuations and the Pseudogap
The single-particle density of states and the tunneling conductance are studied for a two-dimensional BCS-like Hamiltonian with a d_{x^2-y^2}-gap and phase fluctuations. The latter are treated by a classical Monte Carlo simulation of an XY model. Comparison of our results with recent scanning tunneling spectra of Bi-based high-T_c cuprates supports the idea that the pseudogap behavior observed in these experiments can be understood as arising from phase fluctuations of a d_{x^2-y^2} pairing gap whose amplitude forms on an energy scale set by T_c^{MF} well above the actual superconducting transition.
Intensive research has focused on the pseudogap regime, which is observed in the high-T_c cuprates below a characteristic temperature that is higher than the transition temperature T_c. It occurs in a number of different experiments as a suppression of low-frequency spectral weight [1,2,3,4,5,6,7,8]. This striking pseudogap behavior initiated a variety of proposals as to its origin [9,10,11,12,13,14,15,16], since the answer to this question may be a key ingredient for the understanding of high-T_c superconductivity. At present, there is no agreement as to which of these proposals is correct. In part, this reflects the possibility that there may be different pseudogap phenomena operating in different temperature and doping regimes. In part, this is because of the difficulty in determining the experimental consequences of the various theoretical proposals. In this paper, we focus on the pseudogap phenomena observed in scanning tunneling spectroscopy measurements [6,7] on Bi_2Sr_2CaCu_2O_{8+δ} (Bi2212) and Bi_2Sr_2CuO_{6+δ} (Bi2201). We provide a detailed numerical solution of a minimal model which, however, contains the key ideas of the cuprate phase fluctuation scenario: that is, we explore the notion that the pseudogap observed in these experiments arises from phase fluctuations of the gap [6,7,12,13,14,15]. In this scenario, below a mean field temperature scale T_c^{MF}, a d_{x^2-y^2}-wave gap amplitude is assumed to develop. However, the superconducting transition is suppressed to a considerably lower temperature T_c by phase fluctuations [12]. In the intermediate temperature regime between T_c^{MF} and T_c, the phase fluctuations of the gap give rise to pseudogap phenomena.
We will study as a model for phase fluctuations a two-dimensional BCS Hamiltonian in which c†_{iσ} creates an electron of spin σ on the i-th site and t denotes an effective nearest-neighbor hopping. The ⟨ij⟩ sum is over nearest-neighbor sites of a 2D square lattice, and in the second term δ connects i to its nearest-neighbor sites. In Eq. (1) one could, of course, add a next-near-neighbor hopping t′ and a chemical potential term. Here, for simplicity and to refrain from further approximations, we have set t′ and the chemical potential equal to zero [17]. We will assume that below a mean field temperature T_c^{MF}, a d_{x^2-y^2}-wave gap amplitude develops. We then determine the fluctuating phases from a Monte Carlo calculation using an effective 2D XY free energy with E_1 adjusted to set the Kosterlitz-Thouless [18] transition temperature T_{KT} equal to some fraction of T_c^{MF}. Specifically, for the present calculation we will set T_{KT} ≃ T_c^{MF}/5. Here, we have the recent scanning tunneling results [7] for Bi_2Sr_2CuO_{6+δ} in mind, where T_c ≃ 10 K and the pseudogap regime extends to 50 or 60 K, which we take as T_c^{MF}.
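The expressions referred to as Eq. (1) and Eq. (4) are not reproduced in this excerpt. Forms consistent with the surrounding description would be the following; the precise phase convention for the bond pair field is an assumption here, not taken from the text:

H = −t Σ_{⟨ij⟩,σ} ( c†_{iσ} c_{jσ} + h.c. ) + Σ_{i,δ} ( Δ_δ(i) c†_{i↑} c†_{i+δ↓} + h.c. ),   with   Δ_δ(i) = ±Δ exp[i(ϕ_i + ϕ_{i+δ})/2],

where the + (−) sign on x (y) bonds gives the d_{x^2-y^2} structure of the gap, and

F[ϕ] = −E_1 Σ_{⟨ij⟩} cos(ϕ_i − ϕ_j)

is the standard classical 2D XY free energy, whose coupling E_1 sets T_{KT}.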
In principle, the XY action, which determines the fluctuations of the phases, arises from integrating out the shorter wavelength fermion degrees of freedom including those responsible for the local pair amplitude and the internal d_{x^2-y^2} structure of the pairs. In general this leads to a τ-dependent quantum action as well as a coupling energy E_1, whose temperature dependence is determined by the many-body interactions of the microscopic system. There have been various discussions regarding the regime over which a classical action is appropriate for the cuprates [19,20,21]. Here, however, we will proceed phenomenologically using the classical action, Eq. (4), and neglecting the temperature dependence of E_1. Furthermore, we will use the 2D form of Eq. (4). One knows that for the layered cuprates there is a crossover from 2D to 3D XY behavior near T_c [22]. Our point of view is that away from this crossover regime, a 2D model is certainly suitable and on the finite size lattice that we will study, the system becomes effectively ordered as T approaches T_{KT} and the correlation length exceeds the lattice size. So E_1 will simply be used to set T_{KT} ≡ T_c. A crucial physical point that will be taken into account in our analysis is that the basic length scale of the ϕ-field is larger than the Cooper-pair size ξ_0. Thus, although this is a clearly simplified model, we believe that its solution provides useful insight into the experimental consequences of the phase fluctuation pseudogap scenario. It is the central aim of this paper to verify this by comparison with the STM experiments and reproduction of some of their characteristic and salient features.
The calculation of the density of states for an L × L periodic lattice now proceeds as follows [23,24]. A set of phases {ϕ_i} is generated by a Monte Carlo (MC) importance sampling procedure, in which the probability of a given configuration is proportional to exp(−F[ϕ_i]/T) with F given by Eq. (4). With {ϕ_i} given, the Hamiltonian of Eq. (1) is diagonalized and the single particle density of states N(ω, T, {ϕ_i}) is calculated. Further MC {ϕ_i} configurations are generated and an average density of states N(ω, T) = ⟨N(ω, T, {ϕ_i})⟩ at a given temperature is determined.
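The sampling step can be illustrated with a standard Metropolis update of the classical phases. The Python sketch below is only an illustration, assuming the conventional XY form F[ϕ] = −E_1 Σ_{⟨ij⟩} cos(ϕ_i − ϕ_j) noted above, with T and E_1 in units of t; the subsequent diagonalization of the fermion Hamiltonian for each configuration is omitted, and all names are assumptions.

import numpy as np

def sample_xy_phases(L=32, T=0.1, E1=1.0, n_sweeps=500, rng=None):
    # Metropolis sampling of phases {phi_i} on an L x L periodic lattice,
    # weighted by exp(-F[phi]/T) with F[phi] = -E1 * sum_<ij> cos(phi_i - phi_j).
    rng = np.random.default_rng() if rng is None else rng
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(L, L))
    for _ in range(n_sweeps):
        for _ in range(L * L):                      # one sweep = L*L single-site updates
            i, j = rng.integers(L), rng.integers(L)
            old = phi[i, j]
            new = old + rng.uniform(-np.pi, np.pi)
            nbrs = (phi[(i + 1) % L, j], phi[(i - 1) % L, j],
                    phi[i, (j + 1) % L], phi[i, (j - 1) % L])
            # Change in F from the four bonds touching site (i, j).
            dF = -E1 * sum(np.cos(new - p) - np.cos(old - p) for p in nbrs)
            if dF <= 0.0 or rng.random() < np.exp(-dF / T):
                phi[i, j] = new % (2.0 * np.pi)
    return phi

Each configuration returned this way would be inserted into the pairing term of the Hamiltonian, which is then diagonalized; averaging the resulting single-particle spectra over many configurations (after an equilibration period) gives N(ω, T).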
As noted above, our point of view is that the XY action, used in the MC simulations, in principle arises from integrating out the shorter wavelength fermion degrees of freedom up to the scale of the Cooper-pair size, so that only the center of mass pair phase fluctuations are important. Thus, the scale of the lattice spacing for F[ϕ_i] is set by the pair size coherence length ξ_0 ∼ v_F/πΔ_0 and is of order 3 to 4 times the basic Cu-Cu lattice spacing of the fermion Hamiltonian Eq. (1). Now the computationally intensive part of the calculation is the diagonalization of H and in order to get meaningful results as T approaches T_{KT}, we found it necessary to average over a large number of Monte Carlo {ϕ_i} configurations. This requires that some compromise be made with respect to the lattice size. The results we will present are for a 32 × 32 Hamiltonian lattice. However, if we were to take ξ_0 ∼ 4 lattice spacings, this would lead to only an 8 × 8 lattice for the ϕ_i simulations. This would not allow a sufficient range for the Kosterlitz-Thouless phase coherence length to grow as T approaches T_{KT}. Thus, we have chosen to set Δ = 1.0t giving ξ_0 ∼ 1 so that the ϕ_i simulation can be carried out on the same L × L lattice that is used for the diagonalization of H. The important physical point is that this procedure effectively cuts off phase fluctuations on a scale less than the Cooper-pair size, ξ_0. Thus, the phase coherence length is always larger than the Cooper-pair size when T is less than T_c^{MF}. Consequently, our results differ from earlier work [25], which found that the pseudogap regime due to fluctuating phases extended only about 20% above T_c, in contrast to the Bi tunneling experiments [6,7] and the recent Nernst-effect results [8]. In the work of Ref. [25], parameters were used which set the basic scale of the phase correlation length to be much smaller than ξ_0 and, therefore, the phase correlation length exceeded ξ_0 only in a narrow temperature region set by a fraction of T_{KT}. We believe that this is not the correct phenomenology.
Results for N(ω, T) are shown in Fig. 1. For each temperature we have generated up to 25,000 independent MC {ϕ_i} configurations, diagonalized H for each of these configurations, and computed N(ω, T, {ϕ_i}). In these calculations, as discussed above, we have set Δ = 1.0t corresponding to T_c^{MF} ≃ 0.5t and selected E_1 so that T_{KT} = 0.1t [26]. In order to reduce finite-size effects, we employ a very effective scheme recently suggested by F. F. Assaad [27].
For T > T_c^{MF}, the gap amplitude vanishes and the density of states exhibits the usual Van Hove peak at ω = 0. For T < T_c^{MF}, the presence of a finite gap amplitude gives rise to a pseudogap whose size is set by 2Δ. Then, as T approaches T_{KT} and the XY phase correlation length rapidly increases, coherence peaks evolve, the separation of which is determined by 2Δ. An important point is that the scale in temperature over which the evolution of the coherence peaks occurs is set by some fraction of T_{KT}, which means that it appears suddenly on a scale set by T_c^{MF}. An effective correlation length ξ(T), extracted by fitting an exponential form to the correlation function, is plotted versus T in Fig. 2 for our 32 × 32 lattice. The rapid onset of ξ(T) as T_{KT} is approached is clearly seen. It is this sudden increase of ξ(T) that is responsible for the appearance of the coherence peaks as T approaches T_{KT}. This effect is further enhanced by the 2D to 3D crossover that occurs in the actual materials.
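The correlation-length extraction mentioned above amounts to fitting the phase correlation function to A exp(−r/ξ). A possible Python sketch, with the averaging over MC configurations and over one lattice direction as assumptions, is:

import numpy as np
from scipy.optimize import curve_fit

def correlation_length(phi_samples):
    # Estimate xi by fitting A*exp(-r/xi) to C(r) = <cos(phi_0 - phi_r)>,
    # averaged over sampled configurations and over all sites, along one lattice direction.
    L = phi_samples[0].shape[0]
    rs = np.arange(1, L // 2)
    C = np.zeros(rs.size)
    for phi in phi_samples:
        for k, r in enumerate(rs):
            C[k] += np.mean(np.cos(phi - np.roll(phi, r, axis=0)))
    C /= len(phi_samples)
    popt, _ = curve_fit(lambda r, A, xi: A * np.exp(-r / xi), rs, C, p0=(1.0, 2.0))
    return popt[1]

Since the decay is not a simple exponential close to T_{KT}, the fitted value should be read as the effective correlation length referred to in the text.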
In order to compare these results for N(ω, T) with scanning tunneling spectra dI/dV, we have calculated dI(V, T)/dV using the standard quasi-particle expression for the tunneling current. Here, f(ω) = (exp(ω/T) + 1)^{-1} is the usual Fermi factor. Results for dI(V, T)/dV are displayed in Fig. 3. The effect of the Fermi factors is to provide a thermal smoothing of the quasi-particle density of states over a region of order 2T. This becomes significant at the higher temperatures and the prominent pseudogap dependence of N(ω, T) seen in Fig. 1 is smoothed out in dI/dV. In Fig. 4, dI/dV results are shown as solid curves for T = 0.75 T_{KT} (Fig. 4a), T = T_{KT} (Fig. 4b) and T = 2 T_{KT} (Fig. 4c). The dashed curve is for T = T_c^{MF} ≃ 5 T_{KT}. One sees that the size of the pseudogap scales with the spacing between the coherence peaks and evolves continuously out of the superconducting state. The pseudogap persists over a large temperature range measured in units of T_{KT}, becoming smoothed out by the thermal effects as T approaches T_c^{MF} and vanishing above T_c^{MF}. Our numerical results for dI(V, T)/dV are similar to recent scanning tunneling measurements of Bi2212 and Bi2201 [6,7]. Also in these experiments the superconducting gap for T < T_{KT} evolves continuously into the pseudogap regime, which extends up to T = T_c^{MF}. The coherence peaks appear suddenly as T_{KT} is approached. At higher temperatures, the pseudogap fills in rather than closing and the temperature range associated with the pseudogap regime can be large compared with the size of the superconducting regime.
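The quasi-particle tunneling expression referred to above is not written out in this excerpt. The standard form, assuming a featureless (constant) tip density of states, is

I(V, T) ∝ ∫ dω N(ω, T) [ f(ω) − f(ω + V) ],   so that   dI(V, T)/dV ∝ ∫ dω N(ω, T) [ −∂f(ω + V)/∂ω ],

which makes explicit the thermal smoothing over an energy window of order 2T discussed above.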
Summarizing, in order to develop a more quantitative understanding of the role of phase fluctuations, we have provided a numerical solution of a simplified model which, nevertheless, contains the key ideas of the cuprate phase-fluctuation pseudogap scenario. Here, the center-of-mass pair-phase fluctuations of a BCS d-wave model were determined from a classical 2D XY action by means of a Monte Carlo simulation. The resulting tunneling conductance (dI/dV) reproduces characteristic and salient features of recent STM studies of Bi2212 and Bi2201, suggesting that the pseudogap behavior observed in these experiments arises from phase fluctuations of the d_{x²−y²} pairing gap.
We would like to acknowledge useful discussions with S. A. Kivelson and A. Paramekanti. This work was supported by the DFG under Grant No. Ha 1537/16-2 and AR 324/3-1, by the Bavaria California Technology Cen- | 2014-10-01T00:00:00.000Z | 2001-10-18T00:00:00.000 | {
"year": 2001,
"sha1": "23e141197a29dc0718d1b99f825b2ed578181464",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0110377",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5923b231757cc72d8a65d69b73a800d586cf2a77",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
218996073 | pes2o/s2orc | v3-fos-license | Intellectual disability, the long way from genes to biological mechanisms
Approximately 2% of the world population is affected by intellectual disability (ID). Huge efforts in sequencing and analysis of individual human genomes have identified numerous genes and genetic/genomic variants associated with ID. Despite all this knowledge, the relationship between genes, pathophysiology and molecular mechanisms of ID remains highly complex. We summarize the genomic advances related to ID, provide examples of how to discern correlative versus causative roles of genetic variation and how to understand the physiological consequences of identified variants, and discuss future challenges.
INTRODUCTION
Processes such as memory, attention, reasoning and executive function are collectively embedded in the concept of cognition. While cognitive abilities in humans are variable and inheritable [1] , identification of the genetic determinants of human cognition has been limited. Candidate genes involved in the molecular underpinnings of cognition can be identified through studies on cognitive disorders. The impairment of cognitive function is a core clinical feature of neurodevelopmental disorders (NDDs), which comprise a group of developmental disorders leading to brain dysfunction. NDDs include global developmental delay, intellectual disability (ID), schizophrenia, autism spectrum disorder, attention-deficit/hyperactivity disorder, bipolar disorder, and epilepsy. Studies on NDDs have revealed that cognitive disorders are complex, usually polygenic [2] , and phenotypically and genetically heterogeneous [3,4] . ID is characterized by significant limitations in both intellectual functioning and in adaptive behavior including conceptual, social, and practical adaptive skills [5] .
ID originates during the developmental period and has an incidence of ~2% in the population [6][7][8] . Although ID can be caused by environmental factors such as maternal alcohol abuse during pregnancy, infections, birth complications and extreme malnutrition, genetic factors are now known to have an important role in its etiology, accounting for the majority of cases. ID is the most common reason for referral to genetic services and recent technological advances have allowed genetic diagnoses to be obtained for a substantial portion of affected individuals. The combination of novel technologies and increased biological understanding is rapidly increasing the diagnostic yield of genetic tests in ID. The introduction of chromosome array analysis (comparative genomic hybridization, CGH) has allowed the genome-wide detection of chromosomal aberrations, while exome sequencing (WES) and more recently whole genome sequencing (WGS) have enabled testing of all genes simultaneously in a single test. Currently, WGS is becoming the first-tier diagnostic test, which also allows for the detection of chromosomal aberrations [9] . These are impressive advancements that have important ramifications for both treatment and prognosis. A specific diagnosis also provides both psychological and social benefits for the family [10] , including information about the risk of recurrence in future pregnancies and the options of prenatal diagnosis and pre-implantation genetic testing. As of December 2019, on the Online Mendelian Inheritance in Man website (OMIM, https://omim.org/), there are more than 1300 single genes associated with ID, highlighting the complexities of brain development and the consequent, extreme genetic heterogeneity of ID. These genes are all related to a variety of cellular functions and molecular processes. On top of the functional diversity of ID associated genes, there is a myriad of genetic variants within the same gene loci with different pathological consequences, ranging from benign (no identifiable phenotypic consequences) to clearly pathogenic (associated with extreme phenotypic outcomes). Identification of new genes and genetic variants related to ID and improved understanding of the biological functions associated with these mutations are now critical.
GENOMIC ADVANCES RELATED TO ID
During the late twentieth century, twin studies showed that ID has a strong heritable component [11] . However, only at the beginning of the new millennium, with the advent of next-generation (massively parallel) sequencing technologies, did determination of the underlying genetic cause of ID, as well as of many other congenital diseases, become possible [12] . An accurate molecular diagnosis is essential for the optimization of clinical management and the institution of appropriate surveillance and prevention programs [13] . De novo mutations account for at least 30%, and possibly as much as 60%, of ID cases, with a diagnostic efficiency in clinical practice of around 25%-30% [14] . This low diagnostic yield raises the question of what causes ID in the remaining 70%-75% of patients. Genetic and phenotypic variability, together with the non-specific nature of the phenotype, make accurate genetic diagnosis in the majority of children with ID a very challenging task. In cases where no obvious causes are found, the differential diagnosis can include hundreds of rare genetic disorders, leading to hundreds of potentially involved genes, with both single nucleotide variants (SNVs) and copy-number variants (CNVs) putatively contributing to disease development. In this context, different molecular techniques for diagnosis coexist, each with particular pros and cons [15][16][17] .
Array-based CGH was the first choice for diagnosing ID ten years ago, with a two-fold increase in diagnostic yield compared with karyotype analysis [15] . CGH allowed precise identification of CNVs as small as 20 kb, including heterozygous deletions and duplications. However, a significant number of patients remained undiagnosed and, consequently, physicians moved on to targeted sequencing of disease-associated genes or, more recently, WES. Targeted sequencing and WES allow identification of SNVs as well as small indels (2-20 bp), providing a diagnostic yield of 25% for children with ID [18] . The main difference between targeted sequencing and WES is the per-sample cost, which tends to be lower for the targeted approach [19] . However, due to the aforementioned high phenotypic variability, sequencing only a limited number of genes can reduce the overall diagnostic yield.
The reduction in sequencing costs in the last decades has enabled WGS to be added to the diagnostic armamentarium. WGS has the potential to identify all forms of genetic variants: SNVs, indels, as well as CNVs. Recent studies demonstrated the advantages of WGS over both CGH and WES for the identification of novel mutations, with an overall diagnostic yield of 40%-60% for children with ID. The genetic heterogeneity of ID [17,20] makes WGS possibly the most cost-effective approach in terms of diagnostic yield and sequencing costs. However, it is important to note that WGS has larger costs related to data processing and storage, as well as analysis - which is much more challenging - compared to CGH or WES. As an example, while WES provides about 100,000 SNVs, WGS yields over 3 million variants per sample, of which only one (or a few) are likely to be relevant to the case. Moreover, WGS will require appropriate counseling, including management of any incidental findings.
INTERPRETATION OF GENETIC VARIANTS IN ID PATIENTS
One of the main challenges in the molecular diagnosis of ID concerns the identification and, most importantly, the assignment of any identified variant as responsible for the observed phenotype. This task, which requires the annotation, interpretation and selection of variants for each case, is usually performed in a multidisciplinary context, with the involvement of bioinformaticians, molecular geneticists and the responsible physician, and is referred to as Clinical Genomics Interpretation. The complexity of the task depends on the chosen technique: while sequencing of gene panels - already focused on ID-associated genes - delivers a few hundred variants, CGH results in thousands of CNVs, WES yields up to 100,000 variants, and WGS over a million.
The first step of variant filtering (i.e., reduction of the number of potential candidates) involves focusing on ID-related genes. OMIM, an Online Catalog of Human Genes and Genetic Disorders, lists 1330 independent genes associated with the words "intellectual disability" - double the number of ID-related genes listed in 2015 - with a variety of functions and modes of inheritance [21] . The latest update (4 December 2019) of the SysID database [22] (https://sysid.cmbi.umcn.nl/) contains 1291 primary ID genes and 1140 candidate ID genes. This huge number and functional diversity of ID-related genes contribute to the challenge of unequivocally identifying new genes or genetic variants related to ID.
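As a concrete illustration of this first filtering step, candidate variants can simply be intersected with such a curated gene list. The sketch below uses pandas with toy data; the column names, allele-frequency values and mini gene panel are hypothetical and stand in for a real annotated variant table and the SysID/OMIM lists.

```python
import pandas as pd

# Toy stand-ins: in practice `variants` would be the proband's annotated variant
# table (e.g. a VEP/ANNOVAR output) and the panel a curated SysID/OMIM gene list.
variants = pd.DataFrame({
    "gene": ["AGRN", "GABBR1", "ARID1B", "ARX"],
    "consequence": ["missense", "synonymous", "frameshift", "missense"],
    "population_af": [0.0021, 0.15, 0.0, 0.0],   # hypothetical frequencies
})
id_gene_panel = {"ARID1B", "ARX", "SYNGAP1"}     # hypothetical mini panel

# First filtering step: keep only variants in known or candidate ID genes.
id_variants = variants[variants["gene"].isin(id_gene_panel)]
print(f"{len(variants)} variants -> {len(id_variants)} in ID-associated genes")
print(id_variants)
```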
Another important filtering criterion is related to the properties of the genetic variation itself. Current state-of-the-art approaches classify variants according to the American College of Medical Genetics criteria [23] , which involve the determination of several evidence criteria (at different levels of strength) that are then combined into a final classification of the variant as (likely) benign, (likely) pathogenic or of uncertain significance (i.e., a variant of unknown significance or VoUS). Although there are more than 25 different criteria, they can arguably be grouped into those related to: (1) predicted molecular effect; (2) observed frequency in healthy individuals; (3) familial segregation; (4) genotype-to-phenotype relationships; and (5) previous reports. Ideally, the combination of genomic techniques and the use of appropriate filtering criteria should result in the identification and reporting of a (likely) pathogenic variant. Yet, as will be explained below, this is particularly difficult when dealing with ID-related variants. Databases such as IDGenetics (http://www.ccgenomics.cn/IDGenetics/) [24] , a genetic database for ID that provides integrated genetic, genomic and biological data, can facilitate the interpretation of ID-related genetic variants.
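To make the idea of combining evidence levels concrete, the toy function below tallies a few flags of the five kinds listed above into a crude three-way call. It is purely illustrative: the weights and cut-offs are invented for this sketch, and real ACMG-based classification follows the specific combining rules of the guidelines [23] rather than a simple score.

```python
def classify_variant(evidence):
    """Toy aggregation of evidence flags into a crude classification.

    `evidence` is a dict of booleans for a few illustrative criteria; the
    weights and cut-offs are invented for this sketch and are NOT the ACMG rules.
    """
    weights = {
        "strong_molecular_effect": 2,   # e.g. nonsense, frameshift, canonical splice
        "absent_in_controls": 2,        # very low population frequency
        "de_novo_confirmed": 2,         # familial segregation evidence
        "phenotype_match": 1,           # genotype-to-phenotype fit
        "previously_reported": 1,       # prior ID/NDD reports
    }
    score = sum(w for name, w in weights.items() if evidence.get(name, False))
    if score >= 6:
        return "likely pathogenic"
    if score <= 1:
        return "likely benign"
    return "uncertain significance (VoUS)"

example = {"strong_molecular_effect": True, "absent_in_controls": True,
           "de_novo_confirmed": True, "phenotype_match": False,
           "previously_reported": False}
print(classify_variant(example))   # -> "likely pathogenic"
```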
To be classified as (likely) pathogenic, a variant would usually have a strong molecular effect (e.g., nonsense, frameshift, splice-affecting, and/or missense with a known molecular phenotype), display a very low population frequency, be verified to be de novo, match a known phenotype and have been previously reported in an ID or NDD case. However, since most ID variants are de novo, they are also novel, and thus unlikely to have been reported and/or studied at the molecular level, particularly if they are missense SNVs. For example, even ARID1B, one of the most commonly mutated genes in patients with ID, accounts for only about 1% of all ID cases. Moreover, since there are many associated genes and the phenotype in ID patients is highly variable and overlapping, it is extremely difficult to decide between variants with similar evidence criteria but located in different genes. For CNVs, where genomic intervals deviate from the normal diploid state, the molecular effect is easier to gauge, since the whole gene (or a significant part of it) is usually deleted or duplicated, conferring a gene dosage effect. CNVs are also more likely to be unique to the patient, but there are some hotspot CNVs, mainly those related to syndromic ID, such as the 7q11.23 deletion associated with Williams-Beuren syndrome, the 17p11.2 deletion associated with Smith-Magenis syndrome, and the reciprocal duplication associated with Potocki-Lupski syndrome, among others [25] .
In this context, it is very important that the whole family (or at least the mother/father/patient trio) is analyzed, as this provides direct evidence of familial segregation and allows straightforward filtering of the variants observed in the proband, yielding a proper molecular diagnosis and making variant interpretation easier.
FUNCTIONAL UNDERSTANDING OF ID-RELATED GENETIC VARIANTS
Once a new genetic variant is identified, understanding its relationship with the underlying biological and molecular mechanism is the next important step. Concomitant with the explosion of genomic information came a revolution in tools that enable the genetic modification of genomes. CRISPR/Cas and its associated technologies are versatile and make gene and genome editing much easier than before. Model organisms have been very helpful for studying the effect of a single genetic modification at the level of the organism. Despite the tremendous complexity of ID in humans, it is possible to look for conservation and relevant phenotypes to comprehend ID-related pathophysiology in model organisms. Hence, now more than ever, model organism studies have become instrumental for understanding the molecular mechanisms underlying ID [26] . This includes mice, which have historically been used to learn about disease biology and to find potential therapeutic strategies, as well as fruit flies and zebrafish, which have also been introduced as disease models for ID.
Several extremely useful tools already exist to assess basic processes that inform on gene function, associate a particular locus with ID, and enable dissection of both functional variant types and combinations of variants (biallelic or multilocus) with ID. When a novel genetic variant is identified in a patient, it is very important to define whether the variant is within a known coding region or elsewhere in the genome, as this is fundamental to determining future steps [Figure 1]. If the variant is located in a coding region, the next big question is whether it is located within a gene already related to ID. If it is a new candidate gene, many different types of evidence can be used to identify functionally associated ID genes. This "guilt by association" concept predicts that if two gene products work in the same pathway or process, then mutations in these genes probably have overlapping phenotypic consequences [27] . For example, genes that encode physically interacting proteins, or that are co-regulated or co-evolving, are more likely to work in a common process. In addition, studies on single-gene mouse models of ID reveal that the effects of these mutations converge onto similar or related etiological pathways, highlighting common pathological nodes that can help in the understanding of new ID-related genes [28] . The huge collection of mutant model organisms and the literature can be reviewed to study ID-related phenotypes, keeping in mind the mode of inheritance demonstrated in humans when choosing the model to study. Towards this end, existing mutant mouse collections such as the International Mouse Phenotyping Consortium (IMPC, http://www.mousephenotype.org/), the Mouse Genome Informatics (MGI, http://www.informatics.jax.org/) online database, and the European Mouse Mutant Cell Repository (EuMMCR, https://www.eummcr.org/) are all very useful resources that, combined with easy-to-implement genetic modification tools [29] , are instrumental for a rapid understanding of the relationship between a gene and ID.
If the genetic variant is within a known ID-related gene, it is important to understand its functional relationship with the phenotype. Towards this end, the construction of gene deletion collections in yeasts [30,31] and Escherichia coli [32] , the genome-wide RNA interference screens in worms [33] and flies [34] and the availability of mutants in zebrafish, in which partial to full rescue of a zebrafish phenotype by injecting the human orthologous mRNA can be observed [35] , all allow quick functional screening. In addition, induced pluripotent stem cells can be used to study rare genetic variants in the complex human genome, as long as the clonal nature of cellular reprogramming and positive selection are well accounted for.
Pathogenic CNVs are significantly enriched for genes involved in development [36] and are particularly increased in neurodevelopmental disorders. Molecular studies of pathogenic CNVs are thus very relevant to ID research. However, pathogenic CNVs are usually very large and contain several physically linked genes. Thus, understanding the cause of ID pathogenicity remains a major challenge, although animal models can be very useful towards this goal. Examples include Smith-Magenis syndrome (SMS, OMIM #182290), associated with a deletion within band p11.2 of chromosome 17, and Potocki-Lupski syndrome (PTLS, OMIM #610883), related to the reciprocal duplication. Both syndromes include ID in their clinical presentation. Modeling these pathogenic CNVs in mice was possible thanks to the confirmation of a syntenic genomic region in mice [37] , followed by the creation of the desired rearrangement by chromosomal engineering [38,39] . Phenotypic characterization of the resulting mice [40] , identification of the responsible gene within the genetic interval [41,42] , and analysis of the contribution of the genomic structural change per se to the ultimate phenotype [43] were all possible with the genetically modified animals. With advancements in technology, efficient and rapid generation of large genomic variants in mice can be achieved in less time [44,45] , making such studies easier than before.
If the identified genetic variant falls within a non-coding region, the challenge of understanding its functional consequence is even greater. Accurate classification of regulatory regions can be of immense help in predicting the biological effects of non-coding genetic variants associated with particular traits and diseases. However, determining whether a given genetic variant affects the function of a regulatory element is still nontrivial. One example is the transcription factor-encoding gene ARX, in which protein-coding mutations cause various forms of ID and epilepsy. In contrast, variations in the non-coding sequences surrounding ARX are correlated with milder forms of non-syndromic ID and autism. Using zebrafish transgenesis, long-range regulatory domains and brain region-specific enhancers were identified that explain the neuronal phenotypes related to the associated neuropsychiatric disease [46] .
FROM GENES TO BIOLOGICAL PATHWAYS
With all these efforts, the biological processes involved in ID are starting to be unraveled. Genes related to ID are involved in a variety of biological functions and cluster in processes such as metabolism, transporters, nervous system development, RNA metabolism, and transcription [22] . Examples of these functional nodes are discussed next.
The RAS-MAPK (mitogen-activated protein kinase) and the PI3K-AKT-mTOR pathways were first associated with cancer, but are known to be critical for synaptic plasticity and behavior [47] . The RAS-MAPK signaling cascade mediates growth factor responses and embryological development and is now associated with syndromic ID such as Noonan (OMIM #163950) and Costello (OMIM #218040) syndromes [48] . The PI3K-AKT-mTOR signaling cascade contributes by mediating various cellular processes, including cell proliferation and growth, and nutrient uptake. Dysregulation of this node has been identified as a cause of several neurodevelopmental diseases, including megalencephaly, microcephaly, autism spectrum disorder, ID, schizophrenia and epilepsy [49,50] .
The RHO-GTPase signaling cascade is associated with a variety of cellular functions, including the morphogenesis of dendritic spines. Mutations in both regulators and effectors of the RHO GTPases (e.g., GDI, PAK3, ARHGEF6) have been found to underlie various forms of non-syndromic ID [51] . Mutations in one of the downstream effectors, the calcium/calmodulin-dependent protein kinase type II (CaMKII), have been reported in patients with ID [52] . Moreover, mutations in the cytosolic protein SYNGAP1 (SYNaptic GTPase-activating protein) result in a neurodevelopmental disorder termed mental retardation type 5 (MRD5, OMIM #612621), with a phenotype consisting of ID, motor impairments, and epilepsy. SYNGAP1 plays critical roles in synaptic development, structure, function, and plasticity, and is one of the targets of phosphorylation by CaMKII [53] . This example serves to illustrate the power of identifying pathways towards understanding ID biology.
Pathway convergence [54][55][56][57] could stem from the fact that the repertoire of cells affected by ID is limited and, therefore, the pathways into which ID-associated variants congregate are a reflection of the specialized function of brain cells. However, the accurate identification of such converging pathways has the potential to help understand brain dysfunction and pathology.
ID ASSOCIATED WITH EPIGENETIC MISREGULATION
A critical feature of the human brain that underlies cognition and the development of intellectual abilities is the capacity of the nervous system to reorganize its connections functionally and structurally in response to intra- and extracellular (environmental) cues. This experience-dependent neural plasticity is particularly high during development [58] . Therefore, it is not surprising that, in addition to genetic factors, the environment has a particular influence during gestation or the early postnatal period, and both contribute to the development of ID. Examples of such environmental factors contributing to ID include cerebrovascular incidents associated with premature birth or perinatal asphyxia, prenatal exposure to neurodevelopmental toxins or bacterial and viral infections, maternal conditions such as diabetes, phenylketonuria and immune system alterations, malnutrition (of both mother and child), and specific deficiencies such as that of iodine. Some of these ID-contributing environmental factors affect normal neurodevelopment directly by inducing genetic mutations, enhancing cell death, inhibiting differentiation processes and blocking the activity of key developmental proteins. However, the effects of the vast majority of environmental factors involve gene-environment interactions that drive long-lasting neural and behavioral changes. Currently, these effects are strongly linked with epigenetic changes elicited by environmental factors. For example, emerging evidence suggests that environmental perturbations can alter DNA methylation patterns in the developing brain [59] , leading to the currently prevailing theory that changes in the brain methylome likely contribute to the pathogenesis of ID.
Aberrant DNA methylation (induced by environmental factors, arising stochastically, or resulting from an underlying change in DNA sequence) that leads to dysregulated genome function, affecting genes relevant for neurodevelopment and brain plasticity, can potentially cause ID. These genomic (epi)variations are missed by conventional sequencing approaches and can potentially underlie a considerable fraction of genetically undiagnosed ID cases. Recently, array-based methylation profiling of a large cohort of patients with neurodevelopmental disorders identified rare epigenetic changes in ~20% of patients [60] . These changes were absent in thousands of controls, repeatedly identified in unrelated patients, and located in promoters of known NDD genes, suggesting that abnormal methylation contributes to the phenotype of the patients. Further support for this hypothesis came from findings that epivariations in gene promoters were often associated with changes in gene expression, some of which were so extreme as to mimic loss-of-function coding mutations. Thus, the search for epivariations should be considered a complementary molecular diagnostic tool in patients with genetically unexplained ID [61] .
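The screening logic described here (rare methylation changes present in a patient but absent from large control cohorts) can be sketched as a per-probe outlier test. The thresholds, input format and simulated data below are illustrative assumptions only; published epivariation screens use more elaborate statistics and filtering.

```python
import numpy as np
import pandas as pd

def find_epivariation_candidates(patient_beta, control_betas, z_cutoff=5.0, min_delta=0.2):
    """Flag promoter probes where the patient's methylation (beta value) is an
    extreme outlier relative to controls. Thresholds are illustrative only."""
    mean = control_betas.mean(axis=1)
    sd = control_betas.std(axis=1).clip(lower=1e-3)   # avoid division by ~0
    z = (patient_beta - mean) / sd
    delta = patient_beta - mean
    hits = (z.abs() > z_cutoff) & (delta.abs() > min_delta)
    return pd.DataFrame({"beta_patient": patient_beta[hits],
                         "beta_control_mean": mean[hits],
                         "z": z[hits]})

# Hypothetical input: rows = promoter probes, columns = control samples (beta values in [0, 1]).
rng = np.random.default_rng(0)
controls = pd.DataFrame(rng.beta(2, 8, size=(1000, 200)))   # 200 simulated controls
patient = controls.mean(axis=1).copy()
patient.iloc[42] = 0.95                                     # one simulated hypermethylated promoter
print(find_epivariation_candidates(patient, controls))
```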
In summary, generating genotype-phenotype correlations for ID is incredibly complex. This is due in part to the confounding effect of phenotypic and etiologic heterogeneity, along with the rare and variably penetrant nature of the underlying risk variants identified so far [68] . One consequence of this complexity is that the application of artificial intelligence (AI) for precision medicine in neurodevelopmental disorders, including ID, autism spectrum disorder and epilepsy, is still far from accurate. Larger sample sizes and broader (in terms of technologies) studies are expected to allow identification of the relative contributions of each gene/locus to different, but overlapping and highly correlated, phenotypes related to ID, such as intelligence quotient (IQ), educational attainment, schizophrenia and depression, among others. Finally, increasing data availability will also allow for the development of phenotype-specific polygenic risk scores (PRS) [69] .
CONCLUSION
Regardless of the progress made so far, the overall picture is still highly complex and there are plenty of future challenges to be addressed for ID. Is ID a single entity amenable to the application of standard genetic analysis methodologies? Are genetic variants and environmental influences responsible for ID also involved in the normal distribution of IQ? Which of the identified variants are responsible for the final phenotype? What are the contributions of single genes versus that of the genomic makeup? Are the variant effects constitutive, or do they appear only in response to specific environmental challenges? How do we understand the epigenetic contribution to ID? And what are the biological nodes that are promising for therapeutic options? With the amount of genetic information already available, it is clear that the level of complexity in ID is immense and there is an entire genome to investigate and understand. Stratification and careful consideration of ID grouping is also a must. We expect future research strategies to involve the development of animal models and/or in vitro molecular functional studies which will provide reliable, accessible and cost-effective platforms to perform functional tests of novel variants, and accelerate discovery of the biological functions underlying genetic forms of ID and enhance the translation to clinical care. | 2020-04-30T09:11:02.502Z | 2020-04-23T00:00:00.000 | {
"year": 2020,
"sha1": "eb8fab56297d2cfe27e48f0966bc013c7df42a24",
"oa_license": "CCBY",
"oa_url": "https://jtggjournal.com/article/download/3428",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6efb5d38c2a165a88cfadf1fcb7934a601ada7c8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
73506386 | pes2o/s2orc | v3-fos-license | Salidroside Ameliorates Renal Interstitial Fibrosis by Inhibiting the TLR4/NF-κB and MAPK Signaling Pathways
Salidroside (Sal), an active ingredient isolated from Rhodiola rosea, has been reported to have anti-inflammatory activities and a renoprotective effect. However, the role of Sal in renal fibrosis has not yet been elucidated. The purpose of the current study is to test the protective effects of Sal against renal interstitial fibrosis (RIF) and to explore the underlying mechanisms using both in vivo and in vitro models. In this study, we establish unilateral ureteric obstruction (UUO)- or folic acid (FA)-induced renal interstitial fibrosis in mice in vivo and the transforming growth factor (TGF)-β1-stimulated human proximal tubular epithelial cell (HK-2) model in vitro. The levels of kidney functional parameters and inflammatory cytokines in serum are examined. The degree of renal damage and fibrosis is determined by histological assessment. Immunohistochemistry and western blotting are used to determine the mechanisms of Sal against RIF. Our results show that treatment with Sal can ameliorate tubular injury and the deposition of extracellular matrix (ECM) components (including collagen III and collagen I). Furthermore, Sal administration significantly suppresses epithelial-mesenchymal transition (EMT), as evidenced by decreased expression of α-SMA, vimentin, TGF-β1, snail and slug, and largely restored expression of E-cadherin. Additionally, Sal also reduces the levels of serum biochemical markers (serum creatinine, Scr; blood urea nitrogen, BUN; and uric acid, UA) and decreases the release of inflammatory cytokines (IL-1β, IL-6, TNF-α). Further study revealed that the effect of Sal on renal interstitial fibrosis is associated with lower expression of TLR4, p-IκBα, p-NF-κB and mitogen-activated protein kinases (MAPK), both in vivo and in vitro. In conclusion, Sal treatment improves kidney function, ameliorates the deposition of ECM components and reduces the protein levels of EMT markers in mouse kidneys and HK-2 cells. Furthermore, Sal treatment significantly decreases the release of inflammatory cytokines and inhibits the TLR4/NF-κB and MAPK signaling pathways. Collectively, these results suggest that the administration of Sal could be a novel therapeutic strategy for treating renal fibrosis.
Introduction
Chronic kidney disease (CKD) remains a leading public health issue, ranking third among the causes of premature mortality with the largest increase (82%), behind AIDS and diabetes mellitus [1,2]. It has been predicted that about 160 million individuals will be affected by CKD by 2020 [3]. Despite the seriousness of this problem, there are not enough therapeutic options for CKD in the clinical setting. Therefore, more effective therapeutics are needed to treat CKD and reduce healthcare expenditure. Meanwhile, understanding the mechanisms behind renal interstitial fibrosis is essential for developing therapies to prevent or reverse the progression of CKD.
Renal interstitial fibrosis (RIF) is the final common outcome of CKD, ultimately leading to end-stage renal failure [4]. The histopathology of renal interstitial fibrosis features deposition of extracellular matrix (ECM) components, loss of tubular cells, accumulation of fibroblasts, and rarefaction of the peritubular microvasculature [5]. The ECM components include collagen I and collagen III. Epithelial-mesenchymal transition (EMT) is the most important cause of renal interstitial fibrosis and is characterized by renal tubular epithelial cells acquiring mesenchymal phenotypes and myofibroblast functions [6]. During EMT, kidney epithelial cells decrease the expression of adherens junction proteins such as E-cadherin and strongly induce the expression of fibroblast markers, including vimentin and α-smooth muscle actin (α-SMA) [7]. TGF-β1, snail, and slug are important profibrotic mediators of EMT [8]. Renal fibrosis is almost always accompanied by the infiltration of inflammatory cells and increased inflammatory cytokines (TNF-α, IL-6, and IL-1β). Although inflammation is an integral part of the host defense response to injury, nonresolving inflammation is a major driving force in the development of fibrotic disease [9].
Toll-like receptor 4 (TLR4) is an important mediator of inflammation in the kidney. It has been reported that TLR4 mediates both pro-inflammatory and pro-fibrotic pathways in renal fibrosis [10]. Lipopolysaccharide (LPS) is a primary ligand for the TLR4 receptor, and studies have shown that LPS induces the aggregation of TLR4 on HK-2 cells, promoting an inflammatory response [11]. Nuclear factor-kappa B (NF-κB) is also known to be important in the expression of pro-inflammatory genes, and treatment with the NF-κB inhibitor pyrrolidine dithiocarbamate (PDTC) attenuates renal injury and inflammation in animal models of CKD [12]. These findings reveal that NF-κB plays a pivotal role in the progression of chronic renal inflammation. The mitogen-activated protein kinase (MAPK) signaling pathway, including JNK, Erk, and P38, is involved in the production of pro-inflammatory and pro-fibrotic mediators. The MAPK pathway is activated by multiple stimuli, including IL-1β and TNF-α, which promote the translocation of cytoplasmic NF-κB to the nucleus, where it becomes active. In addition, MAPK activation has been reported to be involved in the secretion of TGF-β1 and ECM proteins [13].
Salidroside (Sal, p-hydroxyphenethyl-β-D-glucoside, C14H20O7, structure shown in Figure 1A), one of the bioactive compounds extracted from Rhodiola rosea L., has many pharmacological effects, such as anti-cancer [14], anti-depressive [15], anti-inflammatory [16], anti-oxidant [17], anti-ulcer [18] and cardioprotective [19] activities. However, there have been no reports on Sal in the context of renal interstitial fibrosis. Therefore, this study aims to provide insight into the therapeutic effects of Sal in renal interstitial fibrosis and attempts to explore the underlying molecular mechanisms.
[Figure 1 caption, partial: lesion scoring - 0 = no obvious lesion, 0.5 = slight/very little, 1 = mild/small, 2 = moderate, 3 = severe, 4 = extremely severe; original magnification ×200. (E) Serum levels of TNF-α, IL-1β, and IL-6 in UUO mice determined using enzyme-linked immunosorbent assay (ELISA) kits (n = 6). (F) Serum levels of TNF-α, IL-1β, and IL-6 in FA mice determined using ELISA kits (n = 6). All data are represented as mean ± SD; #P < 0.05, ##P < 0.01 vs. sham/control group; *P < 0.05, **P < 0.01 vs. UUO/FA group.]
Sal Alleviated Renal Function Parameters, Histopathology and Inflammation in Renal Fibrotic Mice
The serum levels of creatinine (Scr), blood urea nitrogen (BUN), and uric acid (UA) are classical indicators of renal function. The serum concentrations of UA, Scr, and BUN were significantly increased in the folic acid (FA) mice, and Sal reduced the serum levels of UA, Scr, and BUN in the renal fibrotic mice (Figure 1B). Hematoxylin and eosin (H&E) staining revealed renal structural damage, inflammatory cell infiltration and extracellular matrix deposition in the unilateral ureteric obstruction (UUO) and FA mice, which were significantly alleviated by Sal (Figure 1C,D). Additionally, Masson's staining showed a large number of collagen fiber streaks (stained blue), with prominent collagen fiber hypertrophy, in the UUO and FA mice. However, fewer collagen fiber streaks (blue-stained area) were observed when the mice were treated with Sal (Figure 1C,D). These results indicated that Sal ameliorated renal damage and fibrosis in the fibrotic mice.
In order to investigate the inflammatory alterations in RIF, we detected the levels of inflammatory factors IL-1β, IL-6 and TNF-α in the UUO and FA mice. As expected, all factors notably increased in the serum of UUO and FA mice, indicating that inflammation plays an important role in the development of renal fibrosis. Consistently, Sal remarkably decreased the levels of IL-1β, TNF-α, and IL-6 in the serum of the UUO and FA mice ( Figure 1E,F), indicating that Sal could inhibit the inflammatory response in RIF.
Sal Inhibited ECM Deposition and EMT in Renal Fibrotic Mice
To evaluate whether Sal could affect the levels of ECM and fibrosis markers, we performed western blotting and immunohistochemistry analyses on the kidney tissue. The primary feature of renal interstitial fibrosis is the accumulation of ECM components, of which collagen I and collagen III are the main constituents. The levels of collagen I and collagen III were notably enhanced in the UUO mice, whereas these alterations were significantly inhibited by Sal (Figure 2A). Myofibroblasts are considered to be the primary renal interstitial cells responsible for the production of ECM during fibrosis, and EMT activation has been associated with myofibroblast production. In the present study, Sal was found to downregulate α-SMA and vimentin protein levels and upregulate E-cadherin protein levels in UUO mice (Figure 2A). TGF-β1, snail, and slug are key regulators of the EMT program and RIF [20], and Sal attenuated the UUO-induced upregulation of TGF-β1, snail, and slug in UUO mice.
In the immunohistochemistry analysis, the effect of Sal on the expressions of collagen I, vimentin, and E-cadherin in the kidney tissue was further examined ( Figure 2B). Sal downregulated the collagen I and vimentin protein levels and upregulated the E-cadherin protein level in UUO mice.
The same effect of Sal was observed in the FA mice ( Figure 3A,B). The results indicated that Sal could prevent the expression of ECM components and EMT markers.
Sal Suppresses Renal Inflammation via the TLR4/NF-κB and MAPK Signaling Pathways in Renal Fibrotic Mice
Inflammation contributes to the progression of renal interstitial fibrosis, as TLR4 induces the expression of inflammatory cytokines. As the initiator of this signaling pathway, TLR4 activates NF-κB and MAPK and thereby stimulates pro-inflammatory reactions [21]. In order to investigate the anti-inflammatory mechanism of Sal in renal interstitial fibrosis, western blot and immunohistochemistry experiments were conducted. In the western blotting, Sal significantly suppressed the expression of TLR4, p-IκBα, p-NF-κB, p-P38, p-ERK, and p-JNK in UUO mice (Figure 4A). Immunohistochemistry was also performed and showed that Sal treatment significantly suppressed the expression of TLR4 in UUO mice (Figure 4B).
The same effect of Sal was observed in the FA mice ( Figure 5A,B). In conclusion, the TLR4/NF-κB and MAPK signaling pathways might play a role in RIF, and Sal ameliorates RIF by inhibiting the TLR4/NF-κB and MAPK signaling pathways.
Sal Inhibited TGF-β1-Induced HK-2 Cells ECM Deposition, EMT and Inflammatory Response via the TLR4/NF-κB and MAPK Signaling Pathways
First, we assessed the cytotoxicity of salidroside on HK-2 cells. The results showed that Sal (2, 10, and 50 µM) treatment did not affect the viability of HK-2 cells (Figure 6A). We then used TGF-β1-induced HK-2 cells to examine the effect of Sal in vitro. TGF-β1 (5 ng/mL) obviously increased the protein expression levels of collagen I, collagen III, α-SMA, vimentin, TGF-β1, snail, and slug in the model group, while E-cadherin was downregulated compared with the control group. In agreement with the in vivo results, Sal (2, 10, 50 µM) significantly restored these alterations (Figure 6B). These results showed that Sal could prevent the expression of fibrosis markers and the ECM in TGF-β1-activated HK-2 cells.
In order to investigate the anti-EMT mechanism of Sal in TGF-β1-induced HK-2 cells, western blotting was conducted. The results demonstrated the upregulation of TLR4, p-IκBα, p-NF-κB, p-P38, p-ERK, and p-JNK compared with the control group, while the Sal treatment group effectively restored these alterations (Figure 6C). These results revealed that Sal suppresses the inflammatory response of TGF-β1-induced HK-2 cells by inhibiting the TLR4/NF-κB and MAPK signaling pathways. [Figure 6 caption, partial: cells were incubated with Sal (2, 10, 50 µM), followed by stimulation with 5 ng/mL of TGF-β1 for 48 h; all data are expressed as mean ± SD, #P < 0.05, ##P < 0.01 vs. control group, *P < 0.05, **P < 0.01 vs. TGF-β1 group.]
Sal Suppressed the LPS-Induced Inflammatory Response by Inhibiting the TLR4/NF-κB and MAPK Signaling Pathways
In order to verify that salidroside works by inhibiting the TLR4/NF-κB and MAPK signaling pathways, we used lipopolysaccharide (LPS, a TLR4 receptor agonist) to stimulate the HK-2 cells. We detected the levels of the inflammatory factors IL-1β, IL-6, and TNF-α in LPS-induced HK-2 cells (Figure 7A). The results showed that inflammatory cytokines were notably increased in the model group and that Sal remarkably decreased the levels of IL-1β, TNF-α, and IL-6 in the culture supernatant of the HK-2 cells. By immunofluorescence, the effect of Sal (50 µM) on the nuclear translocation of NF-κB p65 in LPS-induced HK-2 cells was examined further (Figure 7B). As expected, Sal significantly suppressed the nuclear translocation of NF-κB p65 in LPS-induced HK-2 cells. The western blot results showed that the expression of TLR4, p-IκBα, p-NF-κB, p-P38, p-ERK, and p-JNK was significantly increased in the LPS group, while the salidroside (50 µM) treatment group effectively suppressed the expression of these proteins (Figure 7C). These results revealed that Sal suppresses TLR4-mediated inflammatory responses by inhibiting the TLR4/NF-κB and MAPK signaling pathways. [Figure 7 caption, partial: cells were incubated with Sal (50 µM), followed by stimulation with 1 µg/mL of LPS for 24 h (n = 3); all data are expressed as mean ± SD, #P < 0.01, ##P < 0.01 vs. control group, *P < 0.05, **P < 0.01 vs. LPS group.]
In summary, Sal could protect against RIF both in vivo and in vitro. These results confirm that Sal could inhibit the accumulation of the ECM and reduce the inflammatory response via the TLR4/NF-κB and MAPK pathways.
Discussion
Accumulating evidence suggests that Sal has many pharmacological activities. Specifically, Sal exhibits anti-oxidant and anti-inflammatory activities, both in vitro and in vivo. Our research reveals the anti-fibrosis effect of Sal and its potential mechanisms. The new findings are as follows: (1) Sal effectively inhibits the accumulation of the ECM and the EMT process in RIF. (2) Sal ameliorates renal fibrosis, both in vitro and in vivo. (3) Sal inhibits the inflammatory response and EMT process by suppressing the TLR4/NF-κB and MAPK signaling pathways, as illustrated in Figure 8.
Animal models of RIF provide an opportunity to explore underlying mechanisms and novel therapies. The UUO-induced RIF model has been widely used to mimic the pathological alterations of chronic obstructive nephropathy which are commonly observed in patients with CKD [22]. Complete ureteral obstruction is not a common cause of human renal disease. However, the UUO model is useful to examine the mechanisms of tubulointerstitial fibrosis in vivo [23]. This model can be induced in either rats or mice and shows no specific strain dependence. Complete UUO rapidly reduces renal blood flow and the glomerular filtration rate in the obstructed kidney within 24 h. The subsequent responses arise during the 7 days thereafter, which include interstitial inflammation (peak at 2 to 3 days), tubular dilation, tubular atrophy and fibrosis. The obstructed kidney reaches the end stage by around 2 weeks [23,24]. Folic acid induces RIF, and high dosages of folic acid (250 mg/kg) are given to mice to rapidly induce folic acid crystals, leading to tubular necrosis in the acute phase (1-14 days) and patchy interstitial fibrosis in the chronic phase (28-42 days). RIF is induced both by the crystal obstruction and the direct toxic effect to the tubular epithelial cells [25,26].
Most renal disorders result in renal fibrosis, so there is great interest in identifying the underlying factors behind this process in order to prevent or reverse renal fibrosis. The proliferation of interstitial fibroblasts with myofibroblast transformation leads to excess deposition of the extracellular matrix and renal fibrosis [27]. The majority of CKDs are characterized by excessive ECM accumulation, including collagen I, collagen III and fibronectin [28]. In the tubulointerstitium of the kidneys, many cells are capable of producing the ECM, but fibroblasts are the principal matrix-producing cells that generate a large amount of interstitial matrix components. Fibroblasts in the kidneys can differentiate into SMA-positive myofibroblasts that express vimentin [29,30]. Snail is a key transcription factor that induces EMT, fibroblast migration and renal fibrosis, and it is required for the development of fibrosis in renal epithelial cells. The reactivation of snail induces a partial EMT in tubular epithelial cells and promotes fibrogenesis, myofibroblast differentiation, and inflammation [26]. A large number of studies have shown that TGF-β1 is a key mediator that is highly upregulated in renal fibrosis, in both experimental models and human kidneys [31]. TGF-β1 can promote extracellular matrix (ECM) production and inhibit its degradation, thereby mediating progressive RIF. Furthermore, TGF-β1 has been identified as the most potent inducer of EMT, which can induce tubular epithelial cells (TECs) to transform into myofibroblasts [32,33]. In our study, TGF-β1 was used to induce the transformation of HK-2 cells into myofibroblasts, and our results show that TGF-β1 can successfully induce EMT via activation of the TLR4/MAPK/NF-κB signaling pathways. Our results also indicated that salidroside treatment significantly decreased the deposition of ECM and EMT markers, both in vivo and in vitro.
The correlation between fibrosis and inflammation has been established and is supported by morphological evidence. A large number of studies have shown that the inflammatory response is necessary in the process of RIF [34]. Toll-like receptors (TLRs) are innate immune receptors that respond to endogenous danger factors and promote the activation of immune and inflammatory responses in RIF [35]. TLR4 is involved in systemic chronic diseases associated with inflammation, such as chronic kidney diseases, diabetes and metabolic syndrome [36]. TLR4, as the initiator of its signaling pathway, activates nuclear factor-κB (NF-κB) and stimulates pro-inflammatory responses. Some studies have revealed that knockdown of TLR4 significantly reduces the risk of fibrosis [37]. Renal fibrosis could be effectively ameliorated through inhibition of the NF-κB signaling pathway. In addition, findings have also indicated that direct contact between tubular cells and monocytes might be required to induce tubular EMT via an NF-κB-dependent pathway. Activation of the NF-κB pathway could directly contribute to fibroblast activation and renal fibrosis [38]. Mitogen-activated protein kinases (MAPK) are intracellular signaling molecules that elicit diverse pro-inflammatory and profibrotic effects, both in vitro and in vivo. An anti-fibrotic effect has recently been reported in experimental models of RIF, showing that blockade of the MAPK pathway ameliorated renal fibrosis. Several MAPK inhibitors have been developed and have even been applied in various stages of clinical trials. The JNK and P38 pathways, which are activated by various cellular stresses, play important roles in the production of pro-inflammatory and profibrotic mediators [39,40]. The administration of JNK and P38 pharmacological inhibitors has been shown to suppress the development of glomerulosclerosis and tubulointerstitial fibrosis in various animal models [41]. Activated JNK and p38 also increase TGF-β1 gene transcription and induce the expression of enzymes that activate the latent form of TGF-β1 [42]. Many inflammatory cytokines (TNF-α, IL-1β, IL-6) can induce JNK, P38, and NF-κB activation [40,43]. Tubular apoptosis is a typical feature of renal fibrosis [44], and blockade of the ERK pathway can ameliorate renal interstitial fibrosis through the suppression of tubular EMT [45]. The present study shows that Sal markedly suppresses the expression of TLR4, p-IκBα, p-NF-κB, p-P38, p-ERK, and p-JNK, relieving the inflammatory response and ameliorating renal fibrosis.
In our study, we confirmed that Sal ameliorates renal interstitial fibrosis by inhibiting inflammatory cell infiltration and reducing the production of inflammatory cytokines. Moreover, Sal ameliorates renal fibrosis by reducing ECM accumulation and blocking the TLR4/NF-κB and MAPK signaling pathways. The results of this study provide new insights into reversing renal fibrosis through the anti-inflammatory effects of Sal. More studies are needed to further clarify the underlying mechanisms and enhance the performance of Sal in protecting against RIF.
Main Reagents and Kits
Salidroside was provided by the Second Military Medical University (Shanghai, China; purity >99%). Folic acid and lipopolysaccharide (LPS) were purchased from Sigma Aldrich (St. Louis, USA). Recombinant human TGF-β1 was purchased from Pepro Tech (Rocky Hill, NJ, USA). Serum creatinine (Scr), blood urea nitrogen (BUN) and uric acid (UA) commercial kits were obtained from the Jiancheng Institute of Biotechnology (Nanjing, China). Enzyme-linked immunosorbent assay (ELISA) kits of interleukin (IL)-6, IL-1β and tumor necrosis factor (TNF)-α were purchased from Elabscience (Wuhan, China). The primary antibodies against α-SMA, vimentin, E-cadherin, snail, slug, NF-κB, p-IκBα, Erk, p-Erk, JNK, p-JNK, P38, p-P38, and GAPDH were produced by Cell Signaling Technology (Danvers, MA, USA). The anti-collagen I primary antibody was purchased from Affinity (Affinity Biosciences). The anti-collagen III primary antibody was obtained from Proteintech (Chicago, IL, USA). The anti-TLR4 and anti-TGF-β1 primary antibodies were produced by Santa Cruz Biotechnology (Santa Cruz, CA, USA). The anti-p-NF-κB and anti-IκBα primary antibodies were purchased from Abcam (Cambridge, UK). The antibodies are listed in Table S1 and the critical chemicals and commercial assays are listed in Table S2.
Animals and Experimental Design
All male C57BL/6 mice (age, 8-10 weeks; weight, 20-22 g) were purchased from the Jiangning Qinglongshan Animal Cultivation Farm (Nanjing, China) and were acclimated for 1 week before the experiment in the standard laboratory animal facility (25 °C, 12 h light/dark cycle) with free access to food and water. All studies were carried out in compliance with the Guide for the Care and Use of Laboratory Animals and the ethical guidelines of China Pharmaceutical University.
UUO Model
Forty male mice were randomly divided into 4 groups (n = 10): (1) Sham group, (2) UUO group, (3) UUO + Sal (40 mg/kg) group, and (4) UUO + Sal (80 mg/kg) group. The UUO or sham surgery was carried out under anesthesia with 3% chloral hydrate. Left proximal ureteral ligation was performed with 4-0 silk at two points and a cut was made between the points of ligation. The sham group had their ureters exposed and manipulated without ligation. Mice were given Sal at the corresponding dose by daily gastric gavage, beginning the first day after the surgical procedure. The sham and UUO groups were given identical volumes of saline. The mice were sacrificed on day 14 post-surgery for renal tissue and blood sampling. The blood was centrifuged (4000× g) for 10 min to obtain the serum, and the kidneys were harvested and then stored at −80 °C or fixed in a 4% paraformaldehyde solution.
Folic Acid Model
Forty male mice were randomly divided into 4 groups (n = 10): (1) Control group, (2) FA group, (3) FA + Sal (40 mg/kg) group, and (4) FA + Sal (80 mg/kg) group. The mice were injected with a single dose of folic acid (250 mg/kg, dissolved in 300 mM NaHCO3, i.p.), while the control group was injected with an identical volume of vehicle (300 mM NaHCO3, i.p.). Mice were given Sal (40 and 80 mg/kg) at the corresponding dose by daily gastric gavage, beginning the first day after they were injected with FA. The kidney and blood samples were collected at 34 days post-FA injection.
H&E and Masson Staining
The mice were sacrificed by spinal dislocation. Kidney samples were excised immediately, fixed in 4% paraformaldehyde (PFA), then paraffin-embedded. The paraffin sections were dewaxed in xylene and dehydrated with ethanol, then stained with hematoxylin and eosin (H&E) to assess renal injury and with Masson's trichrome stain to assess collagen deposition. The observations were made under light microscopy (Nikon, Tokyo, Japan) at a 200× magnification.
Biochemical Assays in Serum
The levels of creatinine (Scr), uric acid (UA) and blood urea nitrogen (BUN) in the serum were measured by commercial kits according to the manufacturer's instructions (Jiancheng Bioengineering Institute, Nanjing, China).
Inflammatory Cytokines Levels in Serum
The concentrations of IL-1β, IL-6, and TNF-α in the serum were determined by an enzyme-linked immunosorbent assay (ELISA) kit according to the manufacturer's instructions (Elabscience, Wuhan, China).
Immunohistochemistry Staining
The expressions of collagen I, E-cadherin, vimentin and TLR4 in the kidney were evaluated by immunohistochemistry. Briefly, the kidneys were fixed with 4% PFA, embedded in paraffin and sliced into 5 µm thick sections. The sections were dewaxed and hydrated in graded ethanol, then microwaved in a sodium citrate buffer. The endogenous peroxidase activity was reduced using 3% H2O2 for 10 min. Each sample was blocked with 5% goat serum for 10 min and then treated with primary antibodies against collagen I (1:100), E-cadherin (1:200), vimentin (1:200) and TLR4 (1:50) at 4 °C overnight. The next day, the sections were incubated with the goat anti-rabbit IgG secondary antibody for 10 min. Then, the sections were stained with 3,3′-diaminobenzidine (DAB) and counterstained with hematoxylin. After dehydrating and drying, the sections were mounted with neutral gum and observed under a microscope.
The HK-2 cells were seeded in 6-well plates at a density of 2 × 10^5 cells/well for 24 h. Afterwards, the cells were stimulated with 5 ng/mL TGF-β1, with or without Sal (2, 10, 50 µM), for 48 h. For further verification, cells were also stimulated with LPS (1 µg/mL) and with LPS plus Sal (50 µM) for 24 h. All the cells were then collected for the various analyses.
Western Blot
The kidney tissues and HK-2 cells were homogenized in an ice-cold RIPA buffer containing 2 mM PMSF. The samples were centrifuged at 12,000× g for 15 min at 4 °C and the total protein was determined by the BCA protein assay kit (Beyotime Biotechnology, Nanjing, China). The protein was separated by SDS-PAGE electrophoresis and then transferred to PVDF membranes. The membranes were blocked with 5% skimmed milk for 2 h and incubated with the primary antibodies, including collagen I. After that, the membranes were incubated with the secondary antibodies (1:1000) for 2 h. The membranes were visualized using an ECL advanced kit and detected with a gel imaging system (Tanon Science & Technology Co., Ltd., Shanghai, China).
Immunofluorescence Staining
The expression of NF-κB p65 in the HK-2 cells was evaluated by immunofluorescence. Briefly, the cells were washed three times with PBS, fixed with an immunol staining fix solution (Beyotime Biotechnology, Nanjing, China) for 30 min, and then blocked for 2 h with a blocking reagent. The cells were incubated with the primary antibody against NF-κB p65 (1:200) overnight at 4°C, then washed three times with PBS and incubated with the goat anti-rabbit IgG (H+L) secondary antibody, Alexa Fluor® 488 conjugate (1:500), for 2 h. After washing three times with PBS, the cells were stained with DAPI for 5 min, and fluorescence images were captured with a fluorescence microscope.
Statistical Analysis
The experimental results are expressed as the mean ± standard deviation (SD). The statistical significance of differences between group means was analyzed by one-way ANOVA, followed by Tukey's multiple comparison test. Differences with p-values less than 0.05 were considered statistically significant. | 2019-03-08T09:06:29.034Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "4a88b257ae4314759141adab066e1d0845c014b8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/20/5/1103/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4a88b257ae4314759141adab066e1d0845c014b8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
236915383 | pes2o/s2orc | v3-fos-license | Antibiotic stewardship in a tertiary care NICU of northern India: a quality improvement initiative
Background The overuse of antibiotics in newborns leads to increased mortality and morbidities. Implementation of a successful antibiotic stewardship programme (ASP) is necessary to decrease inappropriate use of antibiotics and its adverse effects. Problem Our neonatal intensive care unit (NICU) is a tertiary referral centre of north India, admitting only outborn babies, mostly with sepsis caused by a high rate of multidrug-resistant organisms (MDROs). So antibiotics are not only life-saving but also used excessively, with a high antibiotic usage rate (AUR) of 574 per 1000 patient days. Method A quality improvement (QI) study was conducted using the Plan–Do–Study–Act (PDSA) approach to reduce AUR by at least 20% from January 2019 to December 2020. Various strategies were implemented: making a unit protocol, educating and raising awareness among NICU nurses and doctors, making checkpoints for both starting and early stoppage of antibiotics, making a specific protocol for starting vancomycin, and reviewing the yearly antibiotic policy as per the antibiogram. Results The total AUR, AUR (culture negative) and AUR (vancomycin) were reduced by 32%, 20% and 29%, respectively (p<0.01). The proportion of newborns who never received antibiotics increased from 22% to 37% (p<0.045) and the proportion of culture-negative/screen-negative newborns where antibiotics were stopped within 48 hours increased from 16% to 54% (p<0.001). The compliance with the unit protocol in starting and upgrading antibiotics was 75% and 82%, respectively. In early 2020, there was a sudden upsurge in AUR due to a central line-related bloodstream infection breakout. However, we were able to control it, and all the PDSA cycles were reinforced. Finally, we could reattain our goals and were also able to sustain them for the next year. There was no significant difference in overall necrotising enterocolitis and mortality rates. Conclusion In a centre such as ours, where sepsis is a leading cause of neonatal deaths, restricting antibiotic use is a huge challenge. However, we have demonstrated implementation of an efficient ASP with the help of a dedicated team and effective PDSA cycles. Also, we have emphasised the importance of sustainability in the success of any QI study.
BACKGROUND
Antibiotics are the most commonly used medication in the neonatal intensive care unit (NICU). With sepsis being the leading cause of mortality and morbidity, globally as well as in India, the treatment and survival of newborn infants, in particular premature ones, is hugely dependent on effective antibiotics. 1 2 A risk-based approach with a low threshold is often used for starting antibiotics in neonatal sepsis, which has been quite successful in lowering its incidence, but an increased number of non-infected infants are exposed to antibiotics. Empiric therapy is often extended to 5-7 days even in the absence of positive blood cultures. 3 Infants commonly present with non-specific systemic signs suggestive of infection, leading to the frequent use of broad-spectrum empirical antibiotics in infants who are subsequently found to be uninfected. Unreliable clinical signs, disastrous outcomes in case of delayed start of antibiotic treatment and reluctance to withdraw initiated treatment often result in overuse of antibiotics in the NICU.
Antibiotics are powerful, life-saving drugs, but when used inappropriately, they may have serious adverse effects. Prolonged empirical antibiotic use among preterm infants with negative cultures has been associated with an increased risk of mortality and morbidities such as late-onset sepsis (LOS), necrotising enterocolitis (NEC), >stage 3 retinopathy of prematurity, emergence of fungal infections and multidrug-resistant organisms (MDRO), and also poor neurodevelopmental outcomes. 4 Antibiotic overuse causes disruption of the microbiome, which may have lasting consequences reflected as dysbiosis, increased carriage of antibiotic resistance genes and MDROs. Each additional day of antibiotic exposure in the absence of positive blood cultures increases the risk of NEC in very low birthweight (VLBW) babies by 7%-20%. 5 In a recent study, Flannery et al 6 demonstrated that 78.6% of VLBW and 87% of extremely low birthweight (ELBW) infants
were treated with antibiotics in their first days of life. 6 Additionally, as per a meta-analysis, 26.5% of VLBW and 37.8% of ELBW infants received more than 5 days of antibiotic treatment. 7 One of the most effective measures to reduce unnecessary antibiotic exposure and its adverse outcomes is the implementation of antibiotic stewardship programmes (ASPs). The WHO identified the development of national and institutional ASPs as a key instrument to tackle this concern. 8 This has prompted calls in the USA and the UK for national action plans to combat antibiotic resistance. 9 The CDC (Centers for Disease Control and Prevention) launched a collaborative QIP (Quality Improvement Programme) with VON (Vermont Oxford Network), the world's largest neonatal benchmarking organisation. 10 However, unlike paediatric ASPs, which have proven to be effective, the lack of evidence-based strategies and easy-to-use guidelines at the point of care precludes adoption of best practices for the use of antibiotics in neonates. 11 There is a lack of a validated antimicrobial guideline that addresses the unique challenges of the NICU environment, such as culture-negative clinical sepsis and empirical treatment of early-onset sepsis (EOS).
PROBLEM
Our NICU is a tertiary referral centre of north India, in Rajasthan. It consists of only outborn babies, with around 1200 neonates admitted every year. Most of the babies referred are sick with severe sepsis. We have a high culture-positive sepsis rate of around 14%. Also, we have many surgical patients (5%), and 15%-20% of our babies are VLBW babies. We have to empirically start most of our babies on broad-spectrum antibiotics, as they are life-saving. As newborns often present with non-specific systemic signs suggestive of infection, we end up using broad-spectrum empirical antibiotics in infants who are subsequently found to be uninfected. Also, most of the babies referred to us are already on broad-spectrum antibiotics, as in the periphery there is often a tendency to start multiple antibiotics for sepsis. It was seen that NICUs in the low-resource settings referring babies to our centre had a complete lack of awareness about the growing emergence of MDRO and other adverse effects of unnecessary and prolonged antibiotic use. Also, the antibiotic sensitivity trend at our centre over the years has shown alarming results. There has been a drastic increase in MDRO, especially carbapenem-resistant Gram-negative bacilli (GNB: Klebsiella), which was the most common organism leading to sepsis, with sensitivity to meropenem of only 20%-30%. Other resistant organisms reported at our centre were coagulase-negative Staphylococci (CONS; 5%), methicillin-resistant Staphylococcus aureus (MRSA: 7%), and vancomycin-resistant Enterococcus (VRE). At the same time, MDRO were leading to fulminant sepsis, being the most common cause of mortality at our centre. As our centre is a referral centre catering to sick babies from all over north India, this pattern of emerging antibiotic resistance actually represents the current scenario in India, which in itself is a medical emergency. The only solution to this was implementing an effective ASP in our NICU to promote judicious antimicrobial use and control the emergence of MDROs. Many quality improvement (QI) projects have demonstrated success with implementation of ASP. However, more information is needed to identify additional strategies to safely reduce antibiotic use in the NICU in an outborn centre such as ours, which contains predominantly septic babies with a high rate of MDROs. It would be a great challenge for us to restrict antibiotic use without compromising the safety of our patients.
Study design
A QI study using the WHO Point of Care Quality Improvement model, 12 with the Plan-Do-Study-Act (PDSA) cycle approach, was planned to implement ASP in our NICU. Our main aim was to reduce excessive and inappropriate antibiotic use in the NICU to prevent the emergence of MDROs over a span of 2 years (January 2019 to December 2020). This QI study was carried out at a tertiary care referral centre, Neoclinic Hospital, Jaipur, which is one of the largest level III NICUs in Rajasthan, with 75 beds and around 1200 NICU admissions/year. All preterm and term newborns admitted to our NICU (in the first 28 days of life) were included in the QI study. All the babies admitted to our NICU are outborn.
MEASURES
Our primary outcome measure was to decrease the antibiotic usage rate (AUR) by at least 20% from baseline. AUR was calculated separately as total AUR, AUR in culture-negative babies and AUR for vancomycin.
AUR/1000 patient days = (total days of any antibiotic use/total days of admission)×1000.
Administration of a single antibiotic for a day is considered as 1 day, irrespective of dosage strength or number of doses administered per day. For example, administration of meropenem as a single dose of 20 mg/kg or 40 mg/kg or three times a day (8 hourly) would be considered as 1 day of antibiotic use. A single patient receiving both vancomycin and meropenem on the same day would be considered as 2 days of antibiotic use.
► AUR in culture negative/1000 patient days = (total days of antibiotic use in culture-negative babies/total days of admission)×1000.
► AUR for vancomycin/1000 patient days = (total days of vancomycin use/total days of admission)×1000.
Other process measures included in our study were:
► Proportion of neonates not exposed to antibiotics = (number of neonates that never received any antibiotic during stay/number of neonates admitted)×100.
► Compliance with unit protocol in starting antibiotics = (number of times antibiotics started appropriately as per protocol/number of times antibiotics prescribed)×100.
► Compliance with unit protocol in upgrading antibiotics = (number of times antibiotics upgraded appropriately as per protocol/number of times antibiotics prescribed)×100.
► Early stoppage of antibiotics: the proportion of culture-negative and sepsis screen-negative patients where antibiotics were stopped appropriately at 48 hours = (number of times antibiotics were stopped at 48 hours/number of times antibiotics prescribed in culture-negative and screen-negative patients)×100.
► Prolonged antibiotic use: prolonged antibiotic use was defined as an antibiotic duration of more than 48 hours in screen-negative and culture-negative patients, more than 7 days in screen-positive and culture-negative patients, more than 14 days in culture-positive patients, and more than 21 days in cases with meningitis.
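To make these definitions concrete, the short Python sketch below computes the same rate and proportion measures from hypothetical monthly tallies; the variable names and numbers are placeholders chosen for illustration and are not data from this study.

```python
# Minimal sketch (hypothetical numbers) of the outcome and process measures
# defined above: AUR per 1000 patient days and the early-stoppage proportion.

def aur_per_1000(antibiotic_days, patient_days):
    """Antibiotic usage rate: antibiotic days per 1000 patient days."""
    return 1000.0 * antibiotic_days / patient_days

# Hypothetical monthly tallies (placeholders, not study data).
patient_days = 1200            # total days of admission
any_antibiotic_days = 690      # antibiotic days (each antibiotic counted separately per day)
vancomycin_days = 74           # days on vancomycin
culture_negative_days = 540    # antibiotic days in culture-negative babies

print("Total AUR:", round(aur_per_1000(any_antibiotic_days, patient_days)))
print("Culture-negative AUR:", round(aur_per_1000(culture_negative_days, patient_days)))
print("Vancomycin AUR:", round(aur_per_1000(vancomycin_days, patient_days)))

# Proportion of culture-negative/screen-negative courses stopped at 48 hours.
stopped_at_48h = 18
eligible_courses = 40
print(f"Early stoppage: {100.0 * stopped_at_48h / eligible_courses:.0f}%")
```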
In addition, we also measured other complications associated with antibiotic overuse, such as NEC and overall mortality rates, using data from the admission register and medical records. Our study was divided into three phases: an initial observation phase (12 weeks), an implementation phase (five PDSA cycles: 10 months) and a post-implementation phase/sustainability (10 months: to be continued).
Observation phase: Baseline data were collected on a predesigned proforma at our centre between January 2019 and March 2019 (12 weeks). In our baseline data, it was seen that the total AUR was 574, AUR in culture-negative babies was 451 and AUR of vancomycin was 62/1000 patient days. Overall, 22% of newborns were not exposed to antibiotics during the NICU stay. Antibiotics were stopped within 48 hours in culture-negative and screen-negative patients in only 16% of cases, whereas in culture-negative but screen-positive patients antibiotics were stopped at 7 days in 54%. Overall, 28% of the babies received prolonged antibiotics.
We further did a root cause analysis of our problem and identified various factors contributing to unrestricted antibiotic use in our babies, such as: lack of a unit antibiotic protocol, lack of an antibiotic policy based on an antibiogram, lack of motivation and awareness regarding the burden of antimicrobial resistance among healthcare personnel, and lack of knowledge about when to start and stop antibiotics appropriately.
A multidisciplinary antibiotic stewardship QI team was formed comprising two neonatologists, a microbiologist, the nursing-in-charge, four nursing staff and a fellow student. All the team members were assigned a specific duty. The team members worked together to evaluate factors contributing to overuse of antibiotics, and further to plan and implement strategies to reduce the inappropriate use of antibiotics (AUR/1000 patient days) in the NICU by at least 20%.
Strategy
Strategies were made in the form of PDSA cycles to implement ASP in the NICU.
1. First PDSA cycle (April 2019 to May 2019). Plan: formulation of a unit sepsis and antibiotic protocol, as there was no specific unit protocol for antibiotic prescription. Do: a meeting was held and an adaptation of The National Institute for Health and Care Excellence guidelines, 2016 13 for EOS and an adaptation of neonatal and paediatric sepsis 14 guidelines for LOS were made to ensure uniformity of antibiotic prescription (refer: online supplemental file 1). Educational interventions including presentations and posters (refer: online supplemental file 2) outlining sepsis guidelines, data from the baseline period and information about antibiotic abuse including emerging antibiotic resistance were disseminated in the unit. A sheet of the protocol was attached to every patient's file so as to review the actual need to start, stop, upgrade or continue antibiotics as per protocol. Study: baseline AUR decreased from 574 to 457 per 1000 patient days. Act: this PDSA cycle was adopted and continued.
2. Second PDSA cycle (June 2019 to July 2019). Plan: early stoppage of antibiotics in non-septic babies. Do: a mandatory checkpoint was made at 48 hours of starting antibiotics. If the blood culture was negative and two CRP results were negative 24 hours apart, with the patient being asymptomatic, then antibiotics were stopped. A similar checkpoint was made on the 7th day, and antibiotics in culture-negative and screen-positive patients were not given for more than 7 days. Study: AUR decreased from 457 to 431 per 1000 patient days. Act: this cycle was adopted and continued.
3. Third PDSA cycle (August 2019 to September 2019). Plan: restriction of antibiotic initiation. Do: before any baby was started on antibiotics, the decision was reviewed by the consultant in the NICU, and antibiotics were started only as per the protocol. Study: AUR decreased from 431 to 426 per 1000 patient days. Act: this cycle was adopted and continued.
4. Fourth PDSA cycle (November 2019 to December 2019). Plan: formulation and implementation of a new antibiotic policy as per the antibiogram. Do: an antibiogram was made after reviewing the sepsis and antibiotic sensitivity pattern of the last 1 year; the first, second and third-line antibiotics were revised, and a poster regarding this was put up in all NICUs. Study: here, there was an increase in AUR from 426 to 443. The reasons for this were a central line-related bloodstream infection (CLABSI) breakout in the unit and a change of fellow residents due to a training session changeover. Act: the cycle was adopted, and along with that a senior neonatologist in the unit was given the responsibility of the ASP and of formation of the antibiogram and sepsis control measures, so that the project would remain unaffected by the change of fellow residents.
5. Fifth PDSA cycle (January 2020 to February 2020). Plan: reducing the use of vancomycin (AUR). Do: a meeting was held, and a written protocol was made for starting and stopping vancomycin in the NICU, which was circulated among all the staff and doctors (refer: online supplemental file 3). Study: vancomycin AUR decreased from 54 to 37 per 1000 patient days, and overall AUR decreased from 443 to 411 per 1000 patient days. Act: this cycle was adopted and continued.
Post-implementation phase: sustaining QI initiatives (March 2020 to December 2020).
Sustaining an improvement is a must, and for this the QI team met weekly to discuss the various PDSA cycles and review the antibiotic usage data. Regular monitoring of AUR and other measures was done via a trend line and a checklist to ensure compliance with the unit protocol, and a regular audit-and-feedback exercise was held in the monthly statistical meeting. During this QI study, no major costs were incurred and no additional staff were recruited.
STATISTICAL ANALYSIS
Continuous analysis was done on statistical process control (SPC) charts to evaluate the trend of AUR and the process measures. Preliminary analysis was expressed as mean (SD), percentages and frequencies. For categorical data, the χ2/Fisher exact test was used, and for continuous variables the t-test was used. Comparison between the observation and implementation phases was done in terms of baseline characteristics, various interventions and complications of antibiotic abuse. A p value of less than 0.05 was taken as significant. Analysis was done using SPSS V.20.0 for Windows (IBM Corporation).
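For readers who wish to reproduce this style of comparison outside SPSS, the sketch below shows broadly equivalent tests in Python with SciPy, applied to invented placeholder counts and measurements; it is not the study's analysis code or data.

```python
# Sketch of the statistical comparisons described above, using SciPy:
# chi-square / Fisher exact test for proportions, t-test for continuous data.
from scipy import stats

# Hypothetical 2x2 table: [never exposed to antibiotics, exposed], by phase.
table = [[64, 226],    # observation phase (placeholder counts)
         [320, 544]]   # sustainability phase (placeholder counts)
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)

# Hypothetical continuous variable, e.g. birth weight in grams, in two phases.
phase_a = [1980, 2100, 2240, 1890, 2050]
phase_b = [2010, 2150, 1960, 2080, 2120]
t_stat, p_t = stats.ttest_ind(phase_a, phase_b)

print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}, t-test p = {p_t:.3f}")
# A p value below 0.05 would be reported as statistically significant.
```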
RESULTS
The QI study involved a total of 2292 newborns over a period of 2 years: 290 in the observation phase (January to March 2019), 1138 in the intervention phase (PDSA cycles: April 2019 to February 2020) and 864 in the sustainability phase (March to December 2020).
We studied the baseline characteristics of newborns enrolled in the study in terms of gender, birth weight and gestational age, which were found to be similar in the observation, intervention and sustainability phases. During the study, 73.1% were males and 26.9% were females. The mean gestational age was 34 ± 4.45 weeks, and the mean birth weight was 2017 ± 947 g (refer: online supplemental file 4).
Our primary outcome was AUR, and its trend throughout the study is depicted in figure 1. It presents an SPC chart (U type), a time-ordered graphical representation of a process, used to determine whether a process has been operating in statistical control, to help maintain it, and to detect any common-cause or special-cause variation. Here, the central line (CL) is calculated as the mean AUR of the observation period, the upper control line (UCL) as mean + 3 SD, and the lower control line (LCL) as mean − 3 SD. The CL was adjusted two times, as special-cause variation was identified on two occasions: one soon after the implementation of PDSA cycle one itself, that is, implementation of the unit sepsis and antibiotic protocol, after which there was a sudden fall in AUR by almost 20% crossing the LCL, and the other after the CLABSI breakout in October, when there was a sudden increase in AUR touching the UCL.
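The control limits described above can be reproduced with a few lines of code. The sketch below uses invented monthly AUR values purely to show the calculation of CL, UCL and LCL; it does not model the re-basing of the CL that the study performed after special-cause variation was identified.

```python
# Sketch of the control-chart limits used for the AUR trend: centre line (CL)
# from the observation-period mean, UCL/LCL at mean +/- 3 standard deviations.
import statistics

baseline_aur = [581, 566, 575]   # hypothetical monthly AUR, observation phase
cl = statistics.mean(baseline_aur)
sd = statistics.stdev(baseline_aur)
ucl, lcl = cl + 3 * sd, cl - 3 * sd

# Illustrative monthly trend; values outside the limits signal special-cause variation.
monthly_aur = [574, 457, 431, 426, 510, 443, 411, 398, 390]
special_cause = [m for m in monthly_aur if m > ucl or m < lcl]

print(f"CL = {cl:.0f}, UCL = {ucl:.0f}, LCL = {lcl:.0f}")
print("Months signalling special-cause variation:", special_cause)
# (In the study, the CL was re-based after each identified special-cause shift.)
```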
There was a significant fall in AUR from 574 to 457 per 1000 patient days (by almost 20%) after the first PDSA cycle itself, after which there was only mild variation in AUR, reflecting common-cause variation. However, after the third PDSA cycle, during the month of October 2019, there was a sudden spike in AUR up to 510 per 1000 patient days, which coincided with a CLABSI breakout in the NICU. We have shown the trend of monthly AUR in each PDSA cycle (figure 1). This spike was mainly due to two reasons: one of the fellow residents who was primarily involved in this QI study completed his term and left the hospital, and a new fellow who had just joined was handed over the QI project. The second reason was a CLABSI breakout in the NICU (five CLABSIs in 4 weeks). To control the situation, a senior consultant was permanently given the responsibility of the ASP programme, so that it would be unaffected by new trainees joining every year. Epidemiological and microbiological surveillance showed the emergence of MDROs in our unit, so we planned our next PDSA cycle accordingly. We made an antibiogram of the last 1 year and revised our antibiotic policy accordingly. Thus, we introduced new PDSA cycles 4 and 5 and reinforced all the previous cycles. Hence, with appropriate clinical management and strengthened infection prevention and control measures, we could control the spike in AUR. By the next month (November 2019) itself, AUR came back to its baseline and decreased further, which was sustained for at least 10 months. Overall, AUR decreased from 574 to 390 per 1000 patient days, a reduction of almost 32% (p=0.001). This was more than our target of reducing AUR by at least 20%.
We also measured AUR in culture-negative patients, which decreased from 451 to 361 per 1000 patient days, that is, a reduction of 20% (p=0.015). As vancomycin was being used quite frequently in our unit, we measured AUR for vancomycin, which was 62/1000 patient days; there was no reduction in its use initially, so a separate fifth PDSA cycle was introduced, in which a protocol for starting vancomycin was implemented. After this, the vancomycin usage rate decreased significantly to 44/1000 patient days, an almost 29% reduction (p=0.03) (refer table 1).
We measured many other process indicators that point towards judicious use of antibiotics (table 1). One of the most important was the percentage of babies not exposed to antibiotics at all during the NICU stay, which was initially 22%, showed a good increase from the first PDSA cycle itself, and gradually increased to 37% in the sustainability phase (p=0.045). We also measured compliance with our new protocol during initiation and upgradation of antibiotics, which was 60% and 54%, respectively, during the first PDSA cycle, and increased up to 75% and 82% in the sustainability phase (p<0.05). We measured early stopping of antibiotics at 48 hours in culture-negative and sepsis screen-negative patients and stopping of antibiotics at 7 days in culture-negative but sepsis screen-positive patients, which was only 16% and 54%, respectively, initially. After the introduction of the first PDSA cycle that focused on early stoppage of antibiotics, it increased drastically and was maintained in the sustainability phase up to 54% (p<0.001). Also, prolonged use of antibiotics decreased from 28% to 20% (p<0.038).
In October 2019, during the CLABSI breakout and the change of fellow student in the NICU, along with the increase in AUR, there was also an increase in the use of prolonged antibiotics (26%), the percentage of babies not exposed to antibiotics decreased (30%), compliance with the protocol for starting antibiotics remained 70%, and antibiotics were stopped early at 48 hours in 44% (as shown in figure 2). Notably, we demonstrated a sustainability phase of 10 months in which all our targets were achieved and maintained. We also looked into the overall NEC and mortality rates after starting the QI study. NEC rates decreased slightly from 7.3% (2018) to 5.7% (2020); however, this was not statistically significant (p=0.56), whereas the mortality rate was almost similar: 6.8% in 2018 and 6.5% in 2020 (p=0.87).
DISCUSSION
A multidisciplinary QI initiative was taken to ensure judicious antibiotic use at our NICU. This was the first time we initiated antibiotic stewardship in our hospital. Reviewing the usage of antibiotics at the NICU helped us identify the various barriers to safe antibiotic prescription. Further, the PDSA cycles helped us to implement the solutions, which were effective and efficient.
Overall, our QI approach was found to be quite effective, as we were able to reduce overall AUR from 574/1000 patient days to 390/1000 patient days, while in culture-negative patients AUR reduced from 451/1000 patient days to 361/1000 patient days. A decrease of 32% in overall AUR and of 20% in AUR in culture-negative patients was achieved against a target of 20%. Our study showed a comparatively higher AUR, which could be because all babies in our NICU are outborn and referred cases with a high rate of sepsis. So we could bring our AUR down mainly by decreasing antibiotic usage in culture-negative babies, as culture-positive babies would require appropriate antibiotics of adequate duration. However, it was seen that our AUR in culture-negative patients decreased only by 20%, and the total by 32%. So we can focus more on decreasing antibiotic usage in culture-negative patients. For this, we had introduced two PDSA cycles: first, restriction of initiation of antibiotics through strict compliance with the unit protocol, and second, early stoppage of antibiotics by making mandatory checkpoints at 48 hours for culture-negative/screen-negative and at 7 days for culture-negative and screen-positive patients, which helped us in decreasing the prolonged use of antibiotics. There was an increase in compliance for initiation and upgradation of antibiotics after each PDSA cycle, increasing from an initial 60% to 75% and from 54% to 82%, respectively. We were able to increase early discontinuation of antibiotics at 48 hours from 16% to 54% over the study period and decrease prolonged use of antibiotics from 28% to 20%. The proportion of newborns never exposed to antibiotics increased from 22% to 37%.
Similar to our study, Lu et al also concluded that the ASP was feasible and effective in reducing the AUR by 30% among the neonates in a predominantly outborn tertiary centre. The proportion of infants colonised with MDRO during the study decreased from 1.4% to 1.0% post intervention. The safety metrics such as readmission for sepsis (1.2% vs 1.1%) and sepsis-related mortality (0.24% vs 0.23%) did not show significant changes over time. 15 Thus, the ASP was effective in reducing antibiotic exposure without affecting the quality of care. We also need to sustain our study and continue it further to see a significant effect on long-term outcomes such as rates of MDROs. We did evaluate the overall NEC and mortality rates during our study, which were similar, with no decrease in NEC and mortality rates. Maybe we need to sustain the study further, and further improve our outcome and other process measures, to bring about a significant effect on mortality and NEC rates. Also, there are many other factors that have an impact on NEC and mortality which need to be kept in mind.
Common causes of LOS in newborns are CONS species, MRSA, VRE and extended-spectrum beta-lactamase-producing organisms, which are often resistant to all beta-lactam antibiotics, requiring treatment with vancomycin. 16 Thus, vancomycin is a frequently used antibiotic for suspected LOS in the NICU. The most common infection in our NICU is GNB sepsis (Klebsiella, 60%), of which almost 50% is resistant to carbapenem (meropenem), our second-line antibiotic. So, whenever we suspect GNB sepsis or the blood culture appears to be positive, we upgrade the antibiotic to our third-line antibiotic, which is colistin, as per the unit antibiogram. Gram-positive cocci such as CONS and MRSA are the second most common cause of sepsis in the unit, and are mostly (>80%) sensitive to most of the antibiotics. It was observed that in our unit, whenever we suspected ventilator-associated pneumonia (VAP) or CLABSI, or in postoperative babies, vancomycin was often started empirically. Also, in the last few years, we have started encountering cases of VRE (2-3/year). So, we noticed that we could avoid using vancomycin, and instead use drugs that act on both Gram-negative as well as Gram-positive organisms. So, we planned to specifically decrease the use of vancomycin, as this is an essential antibiotic which should be reserved for future use, and because it would not be possible for us to decrease the use of other antibiotics such as meropenem, gentamicin, amikacin and colistin significantly, given our organisms and their sensitivity pattern. Thus, including a protocol to restrict the use of vancomycin in the NICU in our ASP could help us avoid the emergence of antibiotic-resistant organisms. Hence, we introduced the fifth PDSA cycle to decrease vancomycin usage, in which we were successful in decreasing the AUR of vancomycin by 29%. In a similar study by Chiu et al, vancomycin starting rates were reduced from 6.9 to 4.5 per 1000 patient days (35% reduction; p 0.01). 17 Implementation of an NICU vancomycin use guideline significantly reduced exposure of newborns to vancomycin without adversely affecting short-term patient safety. Further studies are required to evaluate the long-term effect of vancomycin restriction on NICU patient safety, particularly among institutions with higher rates of MRSA infections.
Table 1: Various outcome and process measures during the observation phase, intervention phase (PDSA cycles) and sustainability phase of the quality improvement programme.
CHALLENGES WE FACED
The initial challenge was the risk of deviating from the usual practice of starting antibiotics in most of the babies, which the NICU doctors feared would lead to adverse clinical incidents. This was managed by raising awareness among the doctors and staff, especially about the alarming situation of emerging antibiotic resistance and its adverse outcomes; this motivated everyone to decrease antibiotic usage. As our centre is a referral centre, we have babies being referred from all over Rajasthan, who are quite sick and predominantly septic or requiring surgery. Often it becomes quite difficult to differentiate between septic and non-septic babies, as signs and symptoms in newborns are very non-specific. So, there is a tendency to add antibiotics for most of the babies for fear of any clinical deterioration. Also, many babies come with sepsis caused by MDRO, and are already on multiple broad-spectrum antibiotics when referred to our centre, so the decision to stop or even downgrade antibiotics could be quite risky. However, we studied the antibiotic sensitivity pattern of our centre as well as of many referring centres, and thus planned our antibiotic policy as per the sensitivity pattern, which was very helpful. Also, we focused more on decreasing AUR in culture-negative babies. Whenever we took the decision of stopping or not starting antibiotics in a baby, it was followed by very close monitoring with frequent reviews, so that we could intervene as early as possible in case of any worsening. Also, we had a CLABSI breakout in October 2019, when our ASP was almost on the verge of collapsing. All the improvement achieved so far suddenly fell back to the baseline antibiotic usage from where we started. But it was the motivation and confidence of the team leader that kept us going, and we learnt that the success of any ASP also depends on maintaining asepsis in the unit and following CLABSI and VAP bundles simultaneously. Focusing on one programme does not mean we can ignore other important ongoing protocols in the NICU, especially asepsis protocols.
STRENGTHS
Our enthusiastic and dedicated team was our main strength, working passionately to decrease AUR in the NICU. We have successfully demonstrated a decrease in antibiotic usage through an effective QIP using simple and effective PDSA cycles. The emerging antibiotic resistance pattern seen in our NICU also represents the sensitivity pattern of the rest of Rajasthan, from where babies are referred to us. As we take blood cultures of all babies at admission, these also represent the sepsis profile of the referring centres. For example, the most common organism causing sepsis was found to be Klebsiella both at our centre and at the referring centres, and Klebsiella at our centre had only 15% sensitivity to meropenem, whereas in the periphery its sensitivity was 30%, which is also quite low. Hence, there is an urgent need for ASP in most NICUs across north India, and the best way to implement it would be through a QI programme similar to ours. The strategies and PDSA cycles used by us are simple enough that they can be followed by other NICUs as well. Hence, our study and its methodology can be generalised to most NICUs of north India with similar settings. Also, the study did not add any extra cost, manpower or equipment, so it can be replicated even in resource-limited settings. We have not only been able to achieve our goals in terms of decreasing AUR, but have also been able to sustain them. We have demonstrated sustainability of around 10 months through frequent reviews of AUR, a monthly sepsis meeting and the direct involvement of a senior neonatologist and a microbiologist in leading the entire ASP. We did weekly reviews, motivating the team to check antibiotic use, along with lectures, posters and study material, so that staff remained motivated and confident throughout.
LIMITATIONS
One of the major limitations of our study was that all our babies are outborn, and most of the babies referred to us were already on antibiotics and had also received antibiotics outside. We did not take this into account, as it was not possible for us to control it. This could have affected the impact of our QI study, and maybe we could have shown better improvement if our babies were inborn. Often, data from different neonatal units may not be directly comparable owing to differences in admission populations, baseline rates of sepsis and variations in practices. But as ours is a referral centre, we assume that our data and research outcomes are generalisable to other NICUs.
CONCLUSION
ASP may be implemented in all NICUs to decrease inappropriate antibiotic usage, and a QI initiative is an effective way of doing it. Introducing a unit protocol for sepsis/antibiotic policy as per the antibiogram and making mandatory checkpoints at 48 hours to stop antibiotics in culture-negative sepsis were the most effective PDSA cycles. Maintaining sustainability is the key to the success of any QI programme, as demonstrated by our study. We are further motivated to continue with our QI programme so that we can see whether there is any effect on rates of MDRO. Also, our study is applicable to other NICUs of similar settings. In fact, we look forward to motivating and sensitising our referring centres to start an ASP by making them aware of the emerging resistance pattern of microbes at their centre. | 2021-08-05T06:18:20.848Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "f3f3842aecd5ef69218866e09b271fc2daea1936",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopenquality.bmj.com/content/bmjqir/10/Suppl_1/e001470.full.pdf",
"oa_status": "GOLD",
"pdf_src": "BMJ",
"pdf_hash": "fe81dc4c35089bade8e6cd80c1a7d40f5002088a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
262436685 | pes2o/s2orc | v3-fos-license | Self-Referring to the International Criminal Court: A Continuation of War by Other Means
Weak sub-Saharan African states use international law and its institutions to legitimate their actions and delegitimate their internal enemies. In this essay, I argue that during internal armed conflicts, African states use international criminal law to redefine the conflict as international and thereby rebrand domestic political opponents as international criminals/enemies who are a threat to the entire community. This in turn sets the stage for invoking belligerent privileges under international humanitarian law (IHL).
tion and the legitimacy of its military operations. For example, in July 2012, the Malian State referred 6 the situation in Mali to the ICC to investigate the crimes committed by "various armed groups," 7 which the Court's report 8 demonstrates were comprised mainly of the MNLA (National Movement for the Liberation of Azawad), the AQIM (Al Qaeda in the Islamic Maghreb), and Ansar Dine, suggesting a pre-determination of who the criminals were. Similarly, Nouwen and Werner 9 demonstrate how Uganda used the self-referral as part of its military strategy and international reputation campaign, when the government knew that it was unlikely to be victorious if it declared war on Sudan to force it to withdraw support to the Ugandan rebels, the Lord's Resistance Army (LRA). African states are also relying on the referrals to deflect international focus from human rights violations committed by their military forces. 10 An examination of the preliminary investigation reports in situations that have reached the ICC through self-referrals highlight a disturbing trend of charges only against leaders of armed opposition groups. It can be seen that African states use the Court to delegitimize their enemies, thus legitimizing the government's military approach against the "uncivilized" 11 rebels.
While Africa has been called the guinea pig for post-Cold War humanitarianism, weak African states are finding it increasingly useful to employ the rhetoric of IHL to redefine other forms of violence as an armed conflict, as evident from a report by the International Committee of the Red Cross. 12 In addition to creating grounds for legitimating their military operations and war crime prosecutions, such redefinition of political violence as an armed conflict has also been used as a ground to request external military intervention. For example, in 2012 the United Nations Security Council authorized 13 a military intervention, at the repeated request of the Malian State, explicitly to assist the Malian armed forces against the violations committed by "various armed groups." Even if the request for military intervention creates the veneer of a postwar concern with the protection of civilians and the prosecution of those alleged to have committed the impugned crimes, a closer look reveals the underlying political nature of international criminal law, of the self-referrals, as war by other means.
My claim is that the political character of the Court must be understood and acknowledged in order to fully gauge the political meaning of its judicial interventions. Despite multiple critiques, the OTP has adopted the policy of welcoming self-referrals. These self-referrals are a means to deal with a skeptical international community. They also allow the OTP to pacify its sceptics and respond to the pressure to open investigations, thus justifying the ICC's existence, as Nouwen and Werner persuasively argue. Self-referrals are also a way to assuage the fears of states about the use of the proprio motu power, and have further been viewed as a mechanism for obtaining the co-operation of the relevant state in investigation and enforcement, crucial areas in which the ICC lacks competence and resources. States tend to self-refer situations in an attempt to obtain legitimacy in the international arena and domestic political mileage against their opponents. This observation is buttressed by the fact that the OTP has, to date, not investigated the government of any self-referring state, but has concentrated on members of rebel groups alone. Additionally, in some situations, self-referrals also give rise to the classic "peace v. justice" debate. Here, the Ugandan case will serve as a prime example of the ICC investigations perhaps exacerbating the conflict, 14 in contrast to the current conflict in Mali and CAR as situations where "war" between the state and its opponents is continued in the form of a self-referral to the ICC.
6 Government of Mali, Referral Letter (2012). 7 ICC, Press Release, ICC Prosecutor opens investigation into war crimes in Mali: "The legal requirements have been met. We will investigate" (2013).
Serving or Targeting Civilians?: Blurring the Civilian-Combatant Distinction
While African states have repeatedly taken a clear stand against the impunity of their leaders, an objective that the ICC's involvement could be perceived as significantly advancing, the loudest outcries against the Court were with respect to the two cases that were initiated against President Bashir of Sudan and the continuation of charges against Uhuru Kenyatta of Kenya, who was elected while charged in The Hague. Self-referrals, on the other hand, enabled the states to demonstrate their desire to prosecute criminals, whilst reinforcing the superior role of the state vis-à-vis a nonstate entity through the responses of the Court.
Through their interpretation of IHL's core distinction between civilians and combatants, weak states like Uganda, DRC, Mali and more recently, CAR, treat the members of their opposition groups as unlawful combatants, making them targetable and prosecutable for war crimes under the Rome Statute. As nonstate armed groups possess neither the de jure privilege of a combatant nor the immunity of a civilian, states can rely on the membership of individuals in such groups to justify carrying out "legitimate operations" against them, often labeling them "enemies" and more recently, "enemy combatants." In these African conflicts, the differing levels in rights and privileges afforded to states and nonstate parties under IHL establish an asymmetry between them. Lacking the prospect of both combatant privilege and civilian immunity, nonstate operatives exist on a continuum between civilians and combatants. The difficulty that state forces had in distinguishing the increasing number of nonstate actors (terrorists, rebels, insurgents, military contractors) from the civilian population led to the International Committee of the Red Cross' (ICRC) Interpretive Guidance on Direct Participation in Hostilities in 2009. The ICRC developed this Guidance in response to claims by state actors that relying solely on direct participation in hostilities (DPH) to determine permissible targets advantaged rebel groups. While the ICRC's Guidance states that DPH makes members of nonstate armed groups targetable based on their membership in such groups, on the ground that it amounts to a continuous form of civilian participation in hostilities (also called continuous combat function), African states were actually relying on membership-based targeting long before 2009 to suppress their political opponents. For example, the Ugandan government commenced a controversial military operation in 1990 to wipe out LRA rebels, which included forced displacement and internment camps housing millions of Ugandans. 15 Similarly, in 2012, the Malian government, for the first time explicitly called for international military support against rebels. That year, alongside French forces, the Malian government initiated Operation Serval against Islamist and Tuareg rebels. This operation included extrajudicial killings of those suspected to be rebels, according to Amnesty International reports. 16 In March 2014, with the support of the ICRC and African peacekeeping forces, the Malian government designated "anti-Balaka" rebels in CAR as enemy combatants. 17 The foregoing examples demonstrate that even if it is an arduous task to distinguish enemy combatants from lawful combatants or civilians taking direct part in hostilities from other civilians, in order to determine who can legitimately be targeted, detained, or prosecuted, states treat these distinctions as if they were clear and carry out operations as if war were justified everywhere, all the time. African states rely on the distinction to legitimately use force, target, or suppress opposing forces; and, further, seek external military and/or judicial intervention to establish the international legitimacy of such actions, adding credibility to the distinction itself.
The uncertainty that surrounds several aspects of the concept of DPH makes this area of the law one that merits close examination. The need for clarity is obvious, given the serious consequences that result from unlawful participation and the danger to innocent civilians posed by unlawful combatants. Taking direct part in hostilities is usually taken to mean engaging in a specific attack or attacks on an enemy combatant or object during a situation of armed conflict. The ad hoc International Criminal Tribunal for the Former Yugoslavia (ICTY) in Prosecutor v. Krnojelac stated: An "attack" can be defined as a course of conduct involving the commission of acts of violence. The concept of "attack" is distinct and independent from the concept of "armed conflict". In practice, the attack could outlast, precede, or run parallel to the armed conflict, without necessarily being a part of it. 18 The jurisprudence of the ad hoc International Criminal Tribunal for Rwanda (ICTR) indicates that there is no substantive difference between the terms "active" and "direct." In its Akayesu judgement, the Trial Chamber found that these terms should be treated synonymously. 19 The ICC also adopted a broad definition of active participation in hostilities for the purpose of Article 8(2)(e)(vii) of the Rome Statute in prosecuting Thomas Lubanga, who took no active part in hostilities. 20
"Othering" Within the Boundaries of a State
Whereas Third World Approaches to International Law (TWAIL) scholars tend to study the relationship between international law and Africa as one of resistance, participation, or acquiescence, my premise is that attention is also needed to responses previously attributed to Western states, like complicity and exploitation. In keeping with Judith Butler's account of the oppression and exclusion of the other, 21 the postcolonial state, described in this context as a weak state, can be understood as suffering from a predisposition to reproduce the patterns of exclusion that are core to the reproduction of the self and the other. While the self/other distinction conventionally concerns the West's "othering" of the Third World or the Third World's self-definition as the "other," it should also concern an internal "othering" by the bourgeoisie within the third-world state, an "othering" that has been of only peripheral interest to TWAIL scholars. The self/other distinction relied upon by TWAIL scholars has been with regard to other types of asymmetrical conflicts like America's War against Terror and colonial wars, with lesser regard for situations that expose internal structural inconsistencies.
One of TWAIL scholars' 22 core criticisms of international law is its perpetuation of the structural problématique of colonialism through modern institutional power hierarchies. I find much merit in the argument that these structures were designed to subordinate the Third World. The ICC is one such organization that has come under scrutiny for targeting African states, 23 whilst ignoring the crimes of other parts of the world. Despite that oft-repeated critique by the African states, to view the Court as the only exploiter is to miss an important dimension.
My analysis of self-referrals demonstrates that the Third World articulates its agency in a legal form. But the structural asymmetry of this legal form, which gives the power to states and not to nonstate actors, enables the embedded logic of the "dynamic of difference" to be reproduced internally. The assertion in Asad Kiyani's essay in this symposium that ICL's selectivity is equally manifested internally and internationally is interestingly argued through a comparison of the self-referral by Uganda with the Security Council referral of Darfur. 24 The key distinction between "internal othering," as argued here, and Kiyani's description of "internal dissension" is that his analysis is from an institutional point of view, focusing on the selectivity of International Criminal Law and its institutional apparatuses, whilst this essay analyzes selectivity from the angle of the state towards its own people internally.
The concept of self-referrals has also been criticized 25 as contravening the ICC's espousal of positive complementarity. 26 The expectation that the domestic systems would handle the bulk of the cases had turned into an (unwritten) obligation, according to which the ICC would be treated as the court of last resort only if the states demonstrated genuine incapacity. Self-referrals seemed to absolve the states of such obligation, enabling its externalization.
While both critiques, of case selection and the contravention of the principle of complementarity, are of a procedural character, the structural shortcomings of the ICC require examination in order to understand the concept of what is called "internal othering." Relying on Antony Anghie's concept of a "dynamic of difference," TWAIL adherents argue that the principle of complementarity enshrined in the Rome Statute was not, as it often appears, a compromise meant to protect state sovereignty, but a technique to perpetuate the "civilized/uncivilized" dichotomy between the West and the Third World. Yet, they completely ignore the other side of the coin; namely, the creation of that same dichotomy within the so-called "uncivilized" world.
Conclusion
My contention is that the doctrinal and institutional terrains of international law today are as much a tool of the weak as of the strong, causing the reproduction of the self/other distinction within weak states. While TWAIL methods can be useful for a critical understanding of international criminal law's origin and modes of operation, in theorizing the relationship between Africa and international law, TWAIL scholars fail to push far enough to take into account an important dimension: the reproduction of subordination within third-world states, demonstrated here in the context of the ICC. Unlike many third-world states, states like Mali, Uganda, the DRC and the CAR are constantly threatened by coups, "de-democratization," and cycling between an apparent democracy and a failed state. Wars waged internally in African states tend to destabilize them, thus making the "communication of legitimacy" an important political tool wielded through purely legal means. Similarly, states rely on the range of meanings of terms like "combatants" in order to delegitimate their enemies, also a strategic tool to further their own legitimacy. Treated as forgotten crises and forgotten states, weak states caught in internal conflicts on the periphery of international law are today using international law to advance their interests in a concerted manner. They are, I believe, through such institutional mechanisms as the ICC, also contributing to the jurisprudence of international law, contrary to the claims made by TWAIL scholars. As much as I agree with the tenets of TWAIL scholarship, I think it defeats its own thesis when it treats all alike, all as equally subjugated, even within the Third World. | 2019-05-19T13:04:57.036Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "2709992e8a076c8bad3c40fa5cb4145432faae3a",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/4BAF722AD83554D1C95178CF175D22C5/S2398772300001562a.pdf/div-class-title-self-referring-to-the-international-criminal-court-a-continuation-of-war-by-other-means-div.pdf",
"oa_status": "GOLD",
"pdf_src": "Cambridge",
"pdf_hash": "2709992e8a076c8bad3c40fa5cb4145432faae3a",
"s2fieldsofstudy": [
"Political Science",
"Law"
],
"extfieldsofstudy": [
"Political Science"
]
} |
8154045 | pes2o/s2orc | v3-fos-license | Superfluid Motion of Light
Superfluidity, the ability of a fluid to move without dissipation, is one of the most spectacular manifestations of the quantum nature of matter. We explore here the possibility of superfluid motion of light. Controlling the speed of a light packet with respect to a defect, we demonstrate the presence of superfluidity and, above a critical velocity, its breakdown through the onset of a dissipative phase. We describe a possible experimental realization based on the transverse motion through an array of waveguides. These results open new perspectives in transport optimization.
Next year will bring the opportunity to celebrate the 100th anniversary of the discovery of superconductivity [1]. This remarkable property is often related to a more fundamental phenomenon, the Bose-Einstein condensation, where a single quantum state is occupied by a macroscopic number of particles. The bosons that condense may be coupled electrons that form Cooper pairs, as in superconducting metals [2], atoms [3], like in the original experiments in superfluid (SF) 4He [4], or molecules [5]. They may also be formed of more complex particles, like fermionic atom pairs [6], or polaritons, a composite of a photon and an exciton [7]. Superfluidity of polaritons in semiconductor cavities was explicitly tested recently [8].
There are different definitions of superfluidity; each one may emphasize a particular physical aspect. Here superfluidity means the existence of a finite critical velocity v_c > 0 below which the motion of the fluid is dissipationless. A particularly simple way to implement this test experimentally is by moving an obstacle, or localized external potential, through the fluid. When the potential is weak, it was shown long ago [9] that v_c = c_s, where c_s is the speed of sound in the fluid [10]. Above the critical velocity, superfluidity is broken and dissipative effects appear.
It is important to note that a finite critical velocity is directly related to the presence of interactions between the bosons. The interactions control the long wavelength structure of the dispersion relation of the fluid. In a mean field approximation, weakly interacting bosons may be modeled, with good accuracy, by the Gross-Pitaevsky equation. The latter corresponds to a Schrödinger equation with an additional nonlinear term that describes the interactions. In particular, this equation reproduces correctly the Bogoliubov dispersion relation mentioned above for the excitations of the interacting fluid. Interestingly, when the propagation of light is considered through a nonlinear medium of the Kerr-type uniform in one direction, in the paraxial approximation a similar equation is obtained for the slowly varying envelope of the optical field of a given wavenumber and frequency. The analogy between the Gross-Pitaevsky equation and the light propagation in nonlinear media has been exploited in the past to test basic quantum effects with optics, like Bloch oscillations [11,12] or Anderson localization [13]. Because of the similarities of the two equations, and since the Gross-Pitaevsky equation predicts SF motion, it is natural to push further the analogies and consider the possibility to observe a new state of light, e.g. superfluidity in an optical nonlinear medium. Based on a selfdefocusing refractive medium inside a Fabry-Pérot cavity, an optical analog of a SF has indeed been proposed [14]. However, the results of the numerical simulations based on a transient regime were not conclusive. Moreover, no clear evidence of the existence of a SF critical velocity was provided. Therefore, the existence of photonic superfluidity in nonlinear media, as well as its experimental observation, remain open issues.
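For concreteness, the two equations being compared in this paragraph can be written side by side. The first line below is the standard Gross-Pitaevsky (Gross-Pitaevskii) equation; the second is a common form of the paraxial propagation equation in a Kerr medium, where z plays the role of time and γ > 0 corresponds to a self-defocusing medium. The exact sign and coefficient conventions of the original paper are not given in this text, so this is only an indicative correspondence.

```latex
i\hbar\,\partial_t \psi \;=\; \Big(-\tfrac{\hbar^2}{2m}\nabla^2 + V(\mathbf r) + g\,|\psi|^2\Big)\,\psi
\qquad\longleftrightarrow\qquad
i\,\partial_z A \;=\; -\tfrac{1}{2k}\nabla_\perp^2 A + V_{\mathrm{eff}}(\mathbf r_\perp)\,A + \gamma\,|A|^2 A
```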
Our purpose here is (i) to provide clear evidence of SF motion of light in a nonlinear medium as well as of its breakdown, and (ii) to propose an experiment that allows the observation of these effects. For simplicity, we will focus on the propagation of light in an effective one-dimensional array of waveguides.
In these materials, light propagates in a medium where the refractive index has been spatially modulated. A typical set up consists of a periodic modulation of a twodimensional layer, where an array of equally spaced identical waveguides is formed (see Fig.1). Outside each of the waveguides the optical field intensity decreases exponentially. When the distance is such that the overlap between the fields of neighbouring waveguides is small, the optical tunnelling between adjacent guides may simply be modelled by a hopping term. Light propagates along the guides in the longitudinal direction and hops from guide to guide in the transverse direction. Moreover, the width of each waveguide may be engineered in order to modify the energy of the local (quasi) bound state light mode, and Kerr materials may be used to include nonlinear effects. Under these conditions, the optical field amplitude A k of light at the k th lattice site (or waveguide) obeys the following discrete nonlinear Schrödinger equation [12] (in the paraxial approximation): where C is the tunneling rate between two adjacent sites, ǫ k the on-site energy, and γ > 0 the strength of the self-defocusing nonlinearity of the medium. The left hand side describes the propagation along the longitudinal z-axis of the waveguide, and replaces time in the Schrödinger equation. We measure all distances in units of the incident light wavenumber, hence z, C, ǫ k , and γ|A k | 2 are dimensionless. The possibility to engineer the different characteristics of the array make these photonic lattices unique in their ability to control the flow of light. A laser beam is shone on the input facet of an array of N waveguides, propagates across the photonic lattice, to finally reach the output facet where the intensity distribution is measured (as shown in Fig.1). The length of the structure in the zdirection thus determines the time of propagation across the lattice.
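The discrete equation referred to just above is not rendered in this copy of the text, so the sketch below uses the standard discrete nonlinear Schrödinger form for a waveguide array, with hopping C between neighbouring guides, on-site energies eps_k, and a self-defocusing term gamma|A_k|^2 A_k. The sign conventions, the fixed-step RK4 integrator, and all numerical values are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def dnls_rhs(A, C, eps, gamma):
    """dA/dz for i dA_k/dz = -C (A_{k+1} + A_{k-1}) + eps_k A_k + gamma |A_k|^2 A_k
    (assumed sign convention; open boundary conditions at the array edges)."""
    hop = np.zeros_like(A)
    hop[1:] += A[:-1]
    hop[:-1] += A[1:]
    return -1j * (-C * hop + eps * A + gamma * np.abs(A) ** 2 * A)

def propagate(A0, C, eps, gamma, dz=0.01, nz=20000):
    """Integrate the envelope along z with a fixed-step RK4 scheme."""
    A = A0.astype(complex)
    for _ in range(nz):
        k1 = dnls_rhs(A, C, eps, gamma)
        k2 = dnls_rhs(A + 0.5 * dz * k1, C, eps, gamma)
        k3 = dnls_rhs(A + 0.5 * dz * k2, C, eps, gamma)
        k4 = dnls_rhs(A + dz * k3, C, eps, gamma)
        A = A + dz / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return A

# Harmonic modulation of the on-site energies plus a central defect, as described
# in the following paragraphs; all numerical values here are illustrative.
N, C, gamma, omega, U0 = 201, 1.0, 0.05, 0.02, 0.1
k = np.arange(N) - N // 2
eps = 0.5 * omega**2 * k**2 + U0 * (k == 0)
A0 = np.exp(-((k - 30.0) ** 2) / (2.0 * 15.0**2))   # displaced Gaussian packet
A_out = propagate(A0, C, eps, gamma)
```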
In order to test superfluidity of light, we are interested in the scattering properties of an incident pulse on a localized defect. In the absence of a defect, the light pulse spreads in a way that strongly depends on the nonlinear coefficient. We are not interested in this free propagation, which was studied in detail in the past [12]. Ideally, we would like to analyze the propagation of a packet whose shape is, in the absence of the obstacle, independent of time, in order to clearly single out the influence of the defect on its propagation. There are different ways to realize this. One possibility, which is easily accessible experimentally, is to control the on-site energies ǫ k by modulating the width of each waveguide to build a harmonic confining potential, ǫ k = ǫ 0 + 1 2 ω 2 k 2 , where ǫ 0 is the reference on-site energy, and ω measures the frequency in units of normalized 1/z [15]. The site k = 0 defines the center of the lattice. The advantages of such a set up are multiple. One can shine on the lattice a light packet whose center is located at an arbitrary distance d from the bottom of the potential. As it propagates in the longitudinal direction, the packet will oscillate in the transverse direction with frequency ω [16]. One can show that for γ > 0 the frequency ω coincides with that of γ = 0 [17]. Moreover, the shape of the packet does not vary in time if initially it is given by the (translation) of the stationary ground state solution of Eq.(1). Thus, with a positive nonlinearity, such a light packet oscillates with a frozen shape, and its velocity at the bottom of the potential is v = ω · d. This allows to control the transverse speed of light.
We now include the defect at the center of the harmonic potential. A simple way to experimentally implement it is by a local variation of the on-site energy, ǫ k = ǫ 0 + 1 2 ω 2 k 2 + U 0 δ k,0 (where U 0 represents the defect strength). The purpose now is to study, for different relative velocities v, the oscillations of the light packet in the presence of the defect. If the light is scattered by the defect, dissipative processes are induced that transform the coherent collective oscillation into disordered fluctuations of the light intensity. As a consequence a damping of the collective character of the oscillations is expected. On the contrary, if SF motion occurs, the light pulse is able to move through the defect without losing collectivity (e.g., without changing its global shape). It only creates a local intensity depletion around the defect [18]. By analogy with the dispersion relation of the corresponding (continuous) Gross-Pitaevsky equation, Eq.(1) predicts a SF motion for velocities below a critical threshold v c which, for weak perturbations, is of the order of Cγ|A| 2 , where |A| 2 is the light intensity at the center of the light pulse. For typical experimental set ups [19], this velocity is v c ≈ 2 · 10 −2 , which corresponds, in the original units, to the ratio of the transverse speed to the speed of light in the photonic structure. Figure 2 shows, for different initial positions d, the first oscillations of the light packet. In the absence of an obstacle (Fig.2a), the packet oscillates freely with constant shape and amplitude. In presence of the defect and for small amplitudes (Fig.2b), no damping or dissipative process is observed (see right column of the Figure). The only manifestation of the presence of the defect on the light pulse is a local intensity depletion at the position of the defect, which is clearly visible in Fig.2b as a horizontal red line. The motion is qualitatively similar to the free oscillation of the light pulse, shown in Fig.2a, aside from a slight modification of the frequency, that can be explained theoretically [20]. Increasing the amplitude and thus the relative velocity with respect to the defect, there is a critical speed above which the shape of the light pulse is qualitatively modified as it propagates (Fig.2c). What is observed is, in particular, the emission of grey solitonlike perturbations (dark blue trajectory), which detach from the defect and travel across the light packet with a non trivial dynamics. This dissipative process produces a damping of the oscillations. As the velocity is further increased (Fig.2d), the shape of the light pulse is subjected to massive deformations through phonon-like and solitonic emissions. The complex dynamics of the excitations signals the onset of a strong dissipative process that destroys the collectivity of the oscillations, thus leading to a strong damping. A quantitative way to characterize the dissipative process observed in Fig.2 is to numerically evaluate the fluidity factor, defined as the ratio of the amplitude of the oscillation around some final time, to the initial amplitude, Σ k k|A k | 2 /Σ k k|A 0 k | 2 , where the A 0 k 's are the incident amplitudes of the field. This factor varies from 0 for a totally damped motion to 1 for an undamped one. We show in Fig.3 the computation of the fluidity factor for different velocities and different attractive (U 0 < 0) and repulsive (U 0 > 0) defect strengths. The final time corresponds to 50 free oscillation periods. 
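The fluidity factor just defined can be evaluated directly from the simulated field recorded at successive z values. One simple reading of the definition, comparing the oscillation amplitude of the dipole moment Σ_k k|A_k|² near the end of the run with its initial amplitude, is sketched below; the function names and the peak-to-peak estimate are illustrative choices, not the authors' implementation.

```python
import numpy as np

def dipole(A, k):
    """Sum_k k |A_k|^2, the quantity whose oscillation is monitored."""
    return np.sum(k * np.abs(A) ** 2)

def fluidity_factor(A_history, k, steps_per_period):
    """Ratio of the late-time oscillation amplitude of the dipole moment to the initial
    one: ~1 for undamped (superfluid) motion, ~0 for a fully damped oscillation."""
    d = np.array([dipole(A, k) for A in A_history])
    return np.ptp(d[-steps_per_period:]) / np.ptp(d[:steps_per_period])
```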
The velocities are normalized to the perturbative critical velocity, defined as c s = √ 2Cµ, where µ is the chemical potential of the incident light packet, µ = Σ k A 0 At low velocities, the fluidity factor is equal to one and the light pulse presents a perfect transmission through the scattering potential. A local peak (resp. dip) is observed on the light intensity when the defect is attractive (resp. repulsive), but this does not affect qualitatively the dynamics of the oscillations. This demonstrates that the transverse motion of the light is superfluid for a well defined parameter range in transverse speed of light and defect strength.
As the velocity increases, a sharp transition towards a phase of damped dynamics is observed. This border defines the critical velocity v_c. Above v_c, nonlocal dissipative excitations are allowed, and superfluidity breaks down. As shown in Fig.3, v_c coincides with the perturbative one, c_s, for weak defect strengths. However, as the strength increases, v_c deviates from the perturbative estimate. A non-perturbative analysis is required to describe the threshold [18,20]. That analysis also explains the asymmetry observed in Fig.3 between positive and negative values of U_0. The critical velocity as a function of the strength of the defect, shown in Fig.3, is computed by solving the corresponding non-perturbative equation. The method used here to test superfluidity of light has a close counterpart in the physics of ultracold atoms. The influence of a local potential on the flow of a condensate [18,21] or on the damping of dipole oscillations [20,22,23] became in recent years an important and accepted experimental and theoretical tool to analyze the dynamics of ultracold Bose-Einstein condensates, in particular as a test of superfluidity.
We have considered here nonlinear optics at a purely classical level. In the paraxial approximation, its formal description is identical to the Gross-Pitaevsky equation, which describes a mean-field dilute condensate of atoms. Many open issues, related to the underlying microscopic quantum theory of light and to its connections with the phenomenon of Bose-Einstein condensation, deserve further investigation [14]. In a wider context, the physics described here is similar to that observed in the wave resistance of a moving disturbance at the surface of a liquid [24], or to the Cherenkov radiation of a charged particle moving through a dielectric medium [25].
To conclude, we have shown that, in a typical experiment, for transverse speeds of the order of 10^-2 times the speed of light in a self-defocusing nonlinear medium, the light motion becomes superfluid. In contrast, superfluidity does not occur in a focusing (γ < 0) medium. The effect described is not inherent to discrete lattice structures, and is expected to occur in continuous media as well, provided the refractive index can be carefully designed. The main interest of the setup based on an array of waveguides is the ability to easily control the different parameters. Furthermore, in our effective one-dimensional geometry we have also identified the emission of solitonic and phonon-like excitations as the main mechanisms that contribute to the breakdown of the SF motion above the critical velocity. In two dimensions superfluidity is also expected to occur, with a transition to a dissipative flow related to the emission of optical vortices. We believe the SF motion described here is a general property, which may be observed for an arbitrary scattering potential (not limited to a localized defect). The propagation in the presence of, e.g., random fluctuations of the on-site energies or of the refractive index is of particular interest, since randomness is inherent to any fabrication process. In analogy with similar recent studies in the physics of ultracold atoms [23,26], one may expect the existence of SF motion of light in the presence of disorder. | 2010-09-15T12:11:16.000Z | 2010-09-15T00:00:00.000 | {
"year": 2010,
"sha1": "414e6de208ba4e0e0fc0d6dfdc163dcbebcbc201",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1009.2904",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "414e6de208ba4e0e0fc0d6dfdc163dcbebcbc201",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
229340524 | pes2o/s2orc | v3-fos-license | Family Ties: Relating Poncelet 3-Periodics by their Properties
We compare loci types and invariants across Poncelet families interscribed in three distinct concentric Ellipse pairs: (i) ellipse-incircle, (ii) circumcircle-inellipse, and (iii) homothetic. Their metric properties are mostly identical to those of 3 well-studied families: elliptic billiard (confocal pair), Chapple's poristic triangles, and the Brocard porism. We therefore organized them in three related groups.
Introduction
We have been studying loci and invariants of Poncelet 3-periodics in the confocal ellipse pair (elliptic billiard). Classic invariants include Joachimsthal's constant J (all trajectory segments are tangent to a confocal caustic) and the perimeter L [26].
A few properties detected experimentally [21] and later proved can be divided into two groups: (i) loci of triangle centers (we use the X k notation in [17]), and (ii) invariants.
We continue our inquiry into loci and invariants by now considering 3-periodic families in three other non-confocal though concentric ellipse pairs. Referring to Figure 1: • Family I: outer ellipse and incircle, the incenter X 1 is stationary. • Family II: outer circumcircle and inellipse, the circumcenter X 3 is stationary. • Family III: an axis-aligned pair of homothetic ellipses, the barycenter X 2 is stationary.
One goal is to identify properties of the above common with previously-studied 3-periodic families, namely, (i) the confocal pair (elliptic billiard), (ii) Chapple's porism [8] and (iii) the so-called Brocard porism [3,15]. A quick review of their geometry appears in Section 2.
Main Results. Here are our main results: • Family I
Figure 1. Poncelet 3-periodic families in the various concentric ellipse pairs studied in the article. Properties and loci of the confocal pair (elliptic billiard) were studied in [21,12,11]. For each family the particular triangle center which is stationary is indicated.
-It conserves the circumradius, the sum of cosines, and the sum of sidelengths divided by their product. -Its sum of cosines is identical to that of the confocal pair which is its affine image. -The family is the image of Chapple's poristic family [19] under a variable rigid rotation. -The poristic family is the image of the confocal family under a variable similarity transform [10]. Therefore family I retains several all scalefree invariants identified for the elliptic billiard, including the sum of cosines. • Family II -It conserves the cosine product and the sum of squared sidelengths.
-Its product of cosines is identical to that of the excentral triangles in the confocal pair which is its affine image. -In the elliptic billiard, the locus of the incenter (resp. symmedian point) is an ellipse (resp. quartic) [11]. Here the roles swap: the incenter describes a quartic, and the symmedian is an ellipse.
-The orthic triangles of this family are the image of the poristic family under a variable rigid rotation. • Family III -It conserves area, sum of sidelengths squared, sum of cotangents (the latter implies that the Brocard angle is invariant). -Again in contradistinction with the elliptic billiard, the locus of the incenter X 1 is non-elliptic while that of X 6 is an ellipse. -The locus of irrational triangle centers X k , k =13,14,15,16, i.e., the isodynamic and isogonic points, are circles! In the billiard, they are non-conic. -As shown in [20], this family is the image of Brocard porism triangles [3] under a variable similarity transform.
Thus, the following group Poncelet families is proposed with mostly identical properties: (i) family I: confocal, poristics; (ii) family II: confocal excentrals, poristic excentrals; (iii) family III: Brocard porism. Table 1 shows how loci types are shared and/or differ across families, and Figure 10 gives a bird's eye view of the kinship across these families via various transformations.
Related Work. Romaskevich proved the locus of the incenter X 1 over the confocal family is an ellipse [23]. Schwartz and Tabachnikov showed that the locus of barycenter and area centers of Poncelet trajectories are ellipses though the locus of the perimeter centroid in general isn't a conic [25]. For N = 3, the former correspond to X 2 and the latter to the Spieker center X 10 . Garcia [9] and Fierobe [7] showed that the locus of the circumcenter of 3-periodics in the elliptic billiard are ellipses. Indeed, the loci of 29 out of the first 100 triangle centers listed in [17] are ellipses [11]. Tabachnikov and Tsukerman [27] and Chavez-Caliz [4] studied properties and loci of the "circumcenters of mass" of Poncelet N-periodics. This is a generalizations of the classical concept of circumcenter to generic polygons, based on triangulations, etc.
The following invariants for N-periodics in the elliptic billiard have been proved: (i) sum of cosines [1,2], (ii) product of cosines of the outer polygons [1,2], and (iii) area ratios and products of N-periodics and their polar polygons (excentral triangle for N=3); interestingly, these depend on the parity of N [2,4]. Result (i) also holds for the Poncelet family interscribed between an ellipse and a concentric circle [1,Corollary 6.4].
Article structure. We start by reviewing the confocal, Chapple's, and Brocard porisms in Section 2. We then describe properties, invariants, and transformations of families I, II, and III in Sections 3, 4, and 5, respectively. We summarize all results in Section 6. Highlights include (i) a graph representing affine and/or similarity relations between the various families ( Figure 10), (ii) a table of conserved quantities which we have found to continue to hold for N > 3 (proof pending), and (iii) a table with links to videos illustrating some phenomena herein.
Review of Classic Porisms and Proof Method
Graves' Theorem affirms that given a confocal pair (E, E′′), the two tangents to E′′ from a point P on E will be bisected by the normal of E at P [18]. A consequence is that any closed Poncelet polygon interscribed in such a pair, if regarded as the path of a moving particle bouncing elastically against the boundary, will be N-periodic. For this reason, this pair is termed the elliptic billiard; [26] is the seminal work. It is conjectured to be the only integrable planar billiard [16]. One consequence, mentioned above, is that it conserves perimeter L. An explicit parametrization for 3-periodic vertices appears in Appendix A.1.
Figure 2. The poristic triangle family (blue) [8] has a fixed incircle (green) and circumcircle (purple). Let r, R denote their radii. Its excentral triangles (green) are inscribed in a circle of radius 2R centered on the Bevan point X40 and circumscribe the MacBeath inconic (dashed orange) [28], centered on X3 with foci at X1 and X40. A second configuration is also shown (dashed blue and dashed green). Video
Referring to Figure 2, poristic triangles are a one-parameter Poncelet family with fixed incircle and circumcircle discovered in 1746 by William Chapple. Recently, Odehnal [19] has studied loci of its triangle centers, showing that many of them are either stationary, ellipses, or circles. Surprisingly, the poristic family is the image of billiard 3-periodics under a variable similarity transform [10], and these two families share many properties and invariants.
Referring to Figure 3, the Brocard porism [3] is a family of triangles inscribed in a circle and circumscribed to a special inellipse known as the "Brocard inellipse" [28,Brocard Inellipse]. Notably, the family's Brocard points are stationary and coincide with the foci of the inellipse. Also remarkable is the fact that the Brocard angle ω is invariant [15]. In [20] we showed this family is the image of family III triangles under a variable similarity transform.
A word about our proof method. We omit some proofs below as they are obtained from a consistent method used previously in [11]: (i) depart from symbolic expressions for the vertices of an isosceles 3-periodic (see Appendix A); (ii) obtain a symbolic expression for the invariant of interest; (iii) simplify it assisted by a CAS, arriving at a "candidate" symbolic expression for the invariant; (iv) verify the latter holds for any (non-isosceles) N-periodic and/or Poncelet pair aspect ratios and if it does, declare it as provably invariant.
Figure 3 (caption fragment): [28] centered on X39 and with foci at the stationary Brocard points Ω1 and Ω2 of the family. The Brocard angle is invariant [15]. Video
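Step (iv) of this recipe can be prototyped numerically before any symbolic work: sample the family, evaluate the candidate quantity, and check that its spread is negligible. The sketch below is generic and not tied to the authors' CAS code; `vertices_at` stands for any parametrization of a 3-periodic family (such as those in Appendix A), and the sum of interior-angle cosines is used as an example candidate invariant.

```python
import numpy as np

def is_numerically_invariant(vertices_at, quantity, n_samples=500, tol=1e-9):
    """Sample the family at n_samples parameter values and test whether `quantity`
    (a function of a 3x2 array of vertices) stays constant to within `tol`."""
    ts = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    values = np.array([quantity(vertices_at(t)) for t in ts])
    return np.ptp(values) < tol * max(1.0, abs(values.mean()))

def sum_of_cosines(P):
    """Example candidate invariant: sum of the interior-angle cosines of the triangle P."""
    total = 0.0
    for i in range(3):
        u = P[(i + 1) % 3] - P[i]
        v = P[(i + 2) % 3] - P[i]
        total += np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return total
```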
Family I: Outer Ellipse, Inner Circle
Here we study a Poncelet family inscribed in an ellipse centered on O with semiaxes (a, b) and circumscribes a concentric circle of radius r, Figure 4 (left). An explicit parametrization is provided in Appendix A.2.
Corollary 1. For family I 3-periodics, the radius r of the fixed incircle is given by: In the family I 3-periodics the locus of the barycenter X 2 is an ellipse with axes Proof. Consider the explicit expressions derived for 3-periodic vertices in Appendix A.2. Let a first vertex P 1 = (x 1 , y 1 ). From this, we obtain the center X 3 of the orbit's circumcircle: , and radius (a + b)/2. We also obtain that the locus of X 3 is a circle with center (0, 0) and radius (a − b)/2. Proposition 3. Over family I 3-periodics the locus of the orthocenter X 4 is an ellipse of axes Proposition 4. Over family I 3-periodics the locus of the X 5 triangle center is a circle of radius d = (a−b) 2 4(a+b) centered on O = X 1 .
Proposition 5. The power of O with respect to the circumcircle is invariant and equal to −ab.
Proposition 6. Over family I 3-periodics, the locus of X 6 is a quartic given by the following implicit equation: Connection with the poristic family. Below we show that family I 3periodics is the image of the poristic family [19] under a variable rigid rotation about X 1 .
Recall the poristic family of triangles with fixed, non-concentric incircle and circumcircle with centers separated by d = R(R − 2r) [8,19]. Let I be a (moving) reference frame centered on X 1 with one axis oriented toward X 3 . Referring to Proof. This stems from the fact that R, r, and d are constant.
As proved in [10, Thm.3]: Observation 1. The X 1 -centered circumconic to the poristic family is a rigidlyrotating ellipse with axes R + d and R − d.
Since this circumellipse is identical (up to rotation) to the outer ellipse of family I, then R + d = a which is coherent with Proposition 1.
Furthermore, because poristic triangles are the image of billiard 3-periodics under a (varying) affine transform [10,Thm 4], it displays the same scale-free invariants.
Corollary 2. Family I 3-periodics conserve the sum of cosines, product of halfsines, and all scale-free invariants.
Note that invariant sum of cosines for family I N-periodics was proved for all N in [1, Corollary 6.4]. In fact: [8], if the former is observed with respect to a reference system where X1 and X3 are fixed. The fixed incircle (resp. circumcircle) are shown purple (resp. blue). The original outer ellipse (black on both drawings) becomes the X1-centered circumellipse in the poristic case. Over the family, this ellipse is known to rigidly rotate about X1 with axes Proof. Let α, β and α ′′ , β ′′ denote the semi-axes of E I and E ′′ I , respectively. For the pair to admit a 3-periodic family, the latter are given by [9]: · Consider the following affine transformation: This takes E I to an ellipse with semi-axes (a, b), a = α β ′′ α ′′ and b = β and the caustic E ′′ I to a concentric circle of radius β ′′ . In [12, Thm.1] the following expression was given for invariant r/R in the confocal pair: Recall that for any triangle, 3 i=1 cos θ i = 1 + r/R [28, Circumradius, Eqn. 4]. Plugging a = α β ′′ α ′′ and b = β into to (2) yields (3) plus one. It turns out that the proof of [1, Corollary 6.4] implies that for all N, the cosine sum for family I N-periodics is invariant and identical to the one obtained with its confocal affine image [1].
Family II: Outer Circle, Inner Ellipse
This family is inscribed in a circle of radius R centered on O and circumscribes a concentric ellipse with semi-axes a, b; see Figure 5. An explicit parametrization appears in Appendix A.3.
For the N = 3 case, (1) implies R = a + b. By definition X 3 is stationary at O and R is the (invariant) circumradius. As shown in Figure 5: Proposition 7. Over family II 3-periodics, the loci of the orthocenter X 4 and ninepoint center X 5 are concentric circles centered on Proof. CAS-assisted algebraic simplification.
Recall that in the confocal pair the locus of X 1 (resp. X 6 ) is an ellipse (resp. a quartic) [11]; see Appendix C. Interestingly: Proposition 8. Over family II 3-periodics, the locus of the symmedian point X 6 (resp. the incenter X 1 ) is an ellipse (resp. the convex component of a quarticnote the other component corresponds to the locus of the 3 excenters which can be concave). These are given by: Proof. CAS-assisted simplification.
Let s i denote the sidelengths of an N -periodic. Lemma 1. Family II 3-periodics conserve the product of cosines, given by: Proof. CAS-assisted simplification.
The orthic triangle has vertices at the feet of a triangle's altitudes [28]. Let R h denote its circumradius. A known property is that R h = R/2 [28, Orthic Triangle, Eqn. 7]. Therefore, it is invariant over family II 3-periodics. Referring to Let (E II , E ′′ II ) denote the confocal pair which is an affine image of a circle-inellipse concentric pair. Let α, β and α ′′ , β ′′ denote the semi-axes of E II , and E ′′ II , respectively.
Theorem 5. The invariant product of cosines for family II triangles is identical to the one obtained from excentral triangles of 3-periodics in (E II , E ′′ II ).
Proof. Excentrals in the confocal pair conserve the product of cosines [12,Corollary 2]. Recall that for any triangle: where θ ′ i are the angles of the excentral triangle. Plugging a = α ′′ and b = α β β ′′ into (1) yields four times the above identity when r/R is computed as in (3), completing the proof. Right: Family II orthic triangles are identical (up to a variable rotation), to the poristic triangles (red) [19]. Equivalently, the original family is that of poristic excentral triangles (blue), for which both incircle and circumcircle (solid red) are stationary. Also stationary is the excentral MacBeath inellipse (green), i.e., it is the caustic [10], with center X5 and foci X3, and X4, respectively. The original outer circle (black on both images) is also stationary on the poristic case, however the inner ellipse in the Poncelet pair (purple) becomes a rigidly-rotating X3-centered excentral inellipse (dashed purple), whose axes are R + d ′ and R − d ′ . Video 1, Video 2 Lemma 2. Family II 3-periodics are always acute.
Proof. Since X 3 is the common center and is internal to the caustic, it will be interior to Family II 3-periodics, i.e., the latter are acute.
Let I ′ be a (moving) reference frame centered on X 3 with one axis oriented toward X 5 (or X 4 as these 3 are collinear). Referring to Figure 4 (right): Theorem 6. With respect to I ′ , family II 3-periodics are the excentral triangles to the poristic family (modulo a rigid rotation about X 3 ). Equivalently, family II orthics are identical (up to said variable rotation) to the poristic triangles.
Proof. X 5 of a reference triangle is X 3 of the orthic triangle [17]. Since the family is always acute (Lemma 2), X 4 of the reference is X 1 of the orthic triangle [5]. By Proposition 7, d ′ = |X 5 − X 3 | is invariant, i.e., the distance between X 1 and X 3 of the orthic triangle is invariant. The claim follows from noting X 3 , X 5 , X 4 are collinear [28] and that the orthic inradius and circumradius are invariant, Proposition 9.
Which makes sense when one considers the rotating reference frame. Also recall from [10, Thm.1] that: Therefore its focal length is simply 2d ′ = |X 4 − X 3 |. Furthermore, because poristic triangles are the image of billiard 3-periodics under a (varying) affine transform [10, Thm.4], Family II 3-periodics will share all scale-free invariants with billiard excentrals, such as product of cosines, ratio of area to its orthic triangle, etc., see [22].
Family III: Homothetic
This family is inscribed in an ellipse centered on O with semi-axes (a, b) and circumscribes an homothetic, axis-aligned, concentric ellipse with semi-axes (a ′′ , b ′′ ); see Figure 7. An explicit parametrization is provided in Appendix A.4.
Proposition 10. For family III 3-periodics, a ′′ = a/2 and b ′′ = b/2, the barycenter X 2 is stationary at O and the area A is invariant and given by: Proof. family III is the affine image of a family of equilateral triangles interscribed within two concentric circles. The inradius of such a family is half its circumradius. Amongst triangle centers, the barycenter X 2 is uniquely invariant under affine transformations; it lies at the origin for an equilateral. Affine transformations preserve area ratios. A is the area of an equilateral triangle inscribed in a unit circle scaled by the Jacobian ab. This completes the proof.
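The closed form for A in Proposition 10 is not rendered above, but the proof sketch determines it: an equilateral triangle inscribed in the unit circle has area 3√3/4, and scaling by the Jacobian ab gives A = (3√3/4)ab under this reading. The snippet below is only a numerical check of that argument, with illustrative values for a and b, not the authors' derivation.

```python
import numpy as np

a, b = 2.0, 1.0   # semi-axes of the outer ellipse (illustrative values)
for t in np.linspace(0.0, 2.0 * np.pi, 7):
    # equilateral triangle inscribed in the unit circle, rotated by t,
    # then mapped by (x, y) -> (a*x, b*y); its incircle maps to the homothetic caustic (a/2, b/2)
    angles = t + np.array([0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0])
    P = np.column_stack((a * np.cos(angles), b * np.sin(angles)))
    u, v = P[1] - P[0], P[2] - P[0]
    area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
    assert np.isclose(area, 3.0 * np.sqrt(3.0) / 4.0 * a * b)
```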
A known result is that the cotangent of the Brocard angle cot ω of a triangle is equal to the sum of the cotangents of its three internal angles [28,Brocard Angle,Eqn. 1]. Surprisingly, we have: Proposition 11. Family III 3-periodics have invariant ω given by: Proof. Direct calculations using the explicit parametrization of vertices in Appendix A.4.
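The closed form for ω in Proposition 11 is likewise not reproduced here, but the identity quoted just above pins down how to evaluate it: cot ω equals the sum of the interior-angle cotangents, which for any triangle also equals (s1² + s2² + s3²)/(4·area). A small helper using that standard identity is given below; applied to the triangles built in the previous snippet, it returns the same value for every t, consistent with the claimed invariance.

```python
import numpy as np

def brocard_cotangent(P):
    """cot(omega) for the triangle with rows of P as vertices,
    via the identity cot(omega) = (s1^2 + s2^2 + s3^2) / (4 * area)."""
    s2 = [np.sum((P[(i + 1) % 3] - P[i]) ** 2) for i in range(3)]   # squared sidelengths
    u, v = P[1] - P[0], P[2] - P[0]
    area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
    return sum(s2) / (4.0 * area)
```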
A known relation is cot ω = ( i is invariant and given by: As mentioned above, in the confocal pair the loci of X 1 (resp. X 6 ) is an ellipse (resp. a quartic) [11]; see Appendix C. Interestingly, we have: Proposition 12. For family III, the locus of the incenter X 1 (resp. symmedian point X 6 ) is a quartic (resp. an ellipse). These are given by: Proof. CAS-assisted simplification.
5.1. Surprising Circular Loci. The two isodynamic points X 13 and X 14 as well as the two isogonic points X 15 and X 16 have trilinear coordinates which are irrational on the sidelengths of a triangle [17]. In the elliptic billiard their loci are non-elliptic. Indeed, in the elliptic billiard we haven't yet found any triangle centers with a conic locus whose trilinears are irrational. Referring to Figure 8, for family III, this is a surprising fact: Proposition 13. The loci of of X k , k =13,14,15,16 are circles. Their radii are Observation 4. Over all a/b, the radius of X 16 is minimum when a/b = 3.
5.2.
Family III and the Brocard Porism. The Brocard porism [3] is a family of triangles inscribed in a circle and circumscribed about a special inellipse known as the "Brocard inellipse" [28,Brocard Inellipse]. Its foci coincide with the stationary Brocard points of the family. Furthermore, this family conserves the Brocard angle ω.
Referring to Figure 7, we showed that over the homothetic family, the aspect ratio of the Brocard inellipse is invariant [20]. This leads to the following result, reproduced from [20, Theorem 3]: [20]. This stems from the fact that the family's Brocard inellipse (purple), centered on X39 and with foci on the Brocard points Ω1, Ω2, has a fixed aspect ratio. Also shown is the elliptic locus of X39. Video Theorem 7. The 3-periodic family in a homothetic pair and that of the Brocard porisms are images of one another under a variable similarity transform.
As shown in [13], the locus of the center X 39 of the Brocard inellipse is an ellipse (it is stationary in the Brocard porism). Table 1. Types of loci for several triangle centers over several Poncelet triangle families, divided in 3 groups A,B,C with closely-related metric phenomena: (i) confocal, fam. I, poristics; (ii) confocal excentral, fam. II, poristic excentral triangles; (iii) fam. III and Brocard porism. Symbols P, C, E, and X indicate point, circle, ellipse, and non-elliptic (degree not yet derived) loci, respectively. A number refers to the degree of the non-elliptic implicit, e.g., '4' for quartic. A singly (resp. doubly) primed letter indicates a perfect match with the outer (resp. inner) conic in the pair. The symbol C5 refers to the nine-point circle. The boldface entries indicate a discrepancy in the group (see text). Note: Xn for the confocal and poristic excentral triangles refer to triangle centers of the family itself (not of their reference triangles). Table 1 summarizes the types of loci (point, circle, ellipse, etc.) for several triangle centers for all families mentioned above. These are organized within three groups A, B, and C with closely-related loci types. Exceptions are also indicated though we still lack a theory for it.
Summary
The first row reveals that, out of the 8 families considered, only in the confocal case is the locus of the incenter X 1 an ellipse. Additionally, experimentation has suggested an intriguing conjecture: Conjecture 1. Given a pair of conics which admits a Poncelet 3-periodic family, only when such conics are confocal will the locus of either the incenter X 1 or the excenters be a non-degenerate conic.
The plethora of circles in the poristic family had already been shown in [19]. An above-than-expected frequency of ellipses for the confocal pair was signalled in [11]. As mentioned above, irrational centers X k , k ∈ [13,16] [3], however the locus of X 13 and X 14 are circles! Also noticeable is the fact that (i) though in the confocal pair the locus of X 1 and X 6 is an ellipse and a quartic, respectively, in both family II and family III said locus types are swapped. The reasons remain mysterious. It is well-known that there is a projective transformation that takes any Poncelet family to the confocal pair, [6]. In this case only projective properties are preserved. If one restricts the set of possible transformations to either affine ones or similarities (which include rigid transformations), one can construct the two-clique graph of interrelations shown in Figure 10.
As mentioned above, the confocal family is the affine image of either family I or family II. In the first (resp. second) case the caustic (resp. outer ellipse) is sent to a circle. Though the affine group is non-conformal, we showed above that both families conserve their sum of cosines (Theorem 3). One way to see this is that there is an alternate, conformal path which takes family I triangles to the confocal ones, namely a rigid rotation (yielding poristic triangles), followed by a variable similarity (yielding the confocal family).
A similar argument is valid for family II triangles: there is an affine path (nonconformal) to the confocal family though both conserve the product of cosines (Theorem 5). Notice an alternate conformal composition of rotation (yielding poristic excentral triangles) and a variable similarity (yielding confocal excentral triangles). All in this path conserve the product of cosines.
Finally, family III and Brocard porism triangles form an isolated clique. As mentioned in [20], these are variable similarity images of one another but cannot be mappable to the other families via similarities nor affinely. Table 2 summarizes some properties of 3-periodics mentioned herein. The last column reveals that many of the invariants continue to hold for N>3. Animations illustrating some focus-inversive phenomena are listed in Table 3. Table 3. Videos illustrating some phenomena mentioned herein. The last column is clickable and provides the YouTube code. q 1 = 1, where α, though not used here, is the angle of segment P 1 P 2 (and P 1 P 3 ) with respect to the normal at P 1 .
Then P i = (x i , y i ), i = 2, 3 are: Below we list triangle centers amongst X k , k = 1, . . . , 200 for each of the Poncelet pairs mentioned in this article, whose loci are either ellipses or circles.
• 0. Confocal pair (stationary X 9 ) Semi-axes lengths for the elliptic loci of many triangle centers are available in [13].
Appendix C. Loci of Incenter and Symmedian in the Elliptic Billiard
Over 3-periodics in the elliptic billiard, the locus of the incenter X 1 is an origin centered ellipse with axes a 1 , b 1 given by [9]: Over the same family, the locus of X 6 is a convex quartic given by [11,Theorem 2]: locus X 6 : c 1 x 4 + c 2 y 4 + c 3 x 2 y 2 + c 4 x 2 + c 5 y 2 = 0, where: c 5 =a 4 b 2 (3a 4 + 2(2b 2 − a 2 )δ − 5δ 2 ), δ = a 4 − a 2 b 2 + b 4 · Note: this curve has an isolated point at the origin whose geometric meaning is not yet understood.
Appendix D. center of 9-point circle Table 4. Symbols used. | 2020-12-22T02:16:00.481Z | 2020-12-21T00:00:00.000 | {
"year": 2020,
"sha1": "f17df99f582d110505635b4f6b65808dbd3fe04d",
"oa_license": null,
"oa_url": "https://doi.org/10.31896/k.25.1",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "f17df99f582d110505635b4f6b65808dbd3fe04d",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
269703167 | pes2o/s2orc | v3-fos-license | Knowledge and Readiness of Teachers Regarding the Utilization of Fun-Based Learning in Teaching Mathematics
This study aims to investigate the relationship between the level of knowledge and readiness of teachers regarding the utilization of play-based methods in teaching mathematics within Malaysia Tamil Schools. The research sample comprised 100 mathematics instructors from twelve primary schools situated in the Segamat district. Data were collected using a questionnaire designed to capture demographic information of the participants, as well as their knowledge and readiness pertaining to the implementation of fun-based methodologies in mathematics instruction within Malaysia Tamil Schools in the Segamat district. Analysis of the data was conducted using SPSS version 23.0, presenting results in the form of mean, percentage, and standard deviation. The findings reveal that the level of knowledge and readiness among mathematics teachers for employing fun-based methods in teaching mathematics in Tamil National-Type Schools is notably high, with a mean score of 4.04 (SD = 0.44) for knowledge and 4.06 (SD = 0.46) for readiness. Additionally, the study identifies a significant relationship (r = .371) between the level of knowledge and readiness of mathematics teachers towards the use of fun-based methods in teaching Mathematics within Malaysia Tamil Schools. These results underscore the importance of understanding teachers' preparedness and proficiency in integrating innovative teaching methodologies, particularly within the context of Malaysia Tamil Schools.
Introduction
In Malaysia, the proficiency in Malay language reading among vernacular school students, particularly in SJK (Sekolah Jenis Kebangsaan), has been a subject of concern among educators and policymakers.With the growing emphasis on holistic education and the development of well-rounded individuals, the ability to read effectively in the national language holds significant importance.This introduction seeks to explore the knowledge, skills, and readiness of teachers in employing entertainment teaching methods to tackle Malay language reading issues among vernacular school students in Malaysia.According to recent statistics from the Ministry of Education Malaysia, the proficiency levels in Malay language reading among vernacular school students have shown room for improvement.A survey conducted in 2023 revealed that only 60% of SJK students demonstrated proficiency in Malay language reading, while the remaining 40% struggled with various reading difficulties.These challenges encompassed issues such as poor vocabulary acquisition, limited comprehension skills, and a lack of interest in reading among students.
The utilization of entertainment teaching methods presents a promising approach to address these reading issues effectively (Agus, 2021).By integrating elements of entertainment, such as storytelling, games, and interactive activities, into the teaching pedagogy, educators can create engaging learning environments that stimulate students' interest and motivation to read.However, the successful implementation of such methods hinges upon the knowledge, skills, and readiness of teachers to adapt and apply these strategies in the classroom effectively.
Recent studies have highlighted the importance of equipping teachers with the necessary competencies to utilize entertainment teaching methods proficiently
Objective and Significance
This study therefore focused on answering the objectives and research questions listed in Table 1. What is the relationship between the level of knowledge and readiness of mathematics teachers regarding the use of fun-based methods in teaching mathematics?
Iii. Material and Method A) Design of Study
The researcher utilizes the quantitative correlational research method, and this survey study refers to current issues.Correlational research is a type of research method that attempts to find relationships or correlations between variables using statistical correlation methods without seeking cause and effect (Idris, 2013).The researcher aims to ascertain the relationship between the level of knowledge and readiness of mathematics teachers towards the utilization of play-based methods in teaching mathematics in Malaysia Tamil Schools.
B) Sampling Method
In this study, the researcher has selected mathematics teachers from Tamil primary schools in the Segamat district, totaling 120 individuals.These mathematics teachers are from 12 different schools.By utilizing Krejcie and Morgan's (1970) sample size determination table, the sample size to be used in the study is 100 teachers.
C) Research Instrument
The researcher utilized a questionnaire to gather information for this study.The research instrument used is divided into three main sections: demographic data, items regarding mathematics teachers' knowledge of the use of play-based methods, and items regarding mathematics teachers' readiness for the use of play-based methods.
Section A includes demographic information about the respondents such as gender, ethnicity, religion, age, academic qualifications, teaching experience, and school category.
The research instrument used in Section B is a questionnaire titled "Teachers' Knowledge of Play-Based Methods in Mathematics Teaching."The questionnaire consists of 10 items, and a 5-point Likert scale has been employed.Typically, the number 5 (strongly agree) indicates a positive attitude, scored as 5 points, while the number 1 (strongly disagree) indicates a negative attitude, scored as 1 point.Respondents are required to provide their response by marking the appropriate symbol within the number representing their opinion on the research topic.Consequently, the overall score will be obtained by summing the scores for each dimension of teachers' knowledge.
Section C comprises a questionnaire assessing mathematics teachers' readiness.The questionnaire consists of 10 items, and the scale used in this questionnaire is also a 5-point Likert scale.In this questionnaire, respondents are required to provide feedback based on the Likert scale provided.The scale refers to respondents' agreement with the items presented.
This study employs the rating scale method to obtain information.A Google Form comprising the rating scale will be distributed to 100 respondents, namely mathematics teachers in the Segamat district.The rating scale will assist the researcher in obtaining information and facilitate the analysis of the data using the SPSS application.The questionnaire forms will be distributed via Google Form sharing, and this process will take place over two weeks.Once completed, the completed questionnaire forms will be analyzed using the Statistical Package for Social Science (SPSS) version 23.0.
D) Research Findings
The study continued with a discussion of the results for the first objective, which was to identify the level of knowledge of mathematics teachers regarding the use of fun-based methods in teaching mathematics in Malaysia Tamil Schools. The findings in Table 2.1 indicate a high level of knowledge (mean = 4.04, SD = 0.44) among mathematics teachers regarding the use of fun-based methods in teaching mathematics in Malaysia Tamil Schools. The study found that the respondents agreed that they were aware of several game techniques to apply in teaching Mathematics and had watched YouTube videos about games that could be adapted in the teaching and learning process.
Meanwhile, the subsequent findings relate to the second research objective, which is to identify the level of readiness of mathematics teachers towards the utilization of fun-based methods in teaching mathematics in Malaysia Tamil Schools. Looking at Table 2.1, it can be stated that the level of readiness of mathematics teachers towards the utilization of play-based methods in teaching mathematics in Malaysia Tamil Schools is high (mean = 4.06, SD = 0.46). The results demonstrate that teachers agree that they use ready-made games available in stores and develop their own games to be adapted in the teaching and learning of Mathematics. Furthermore, referring to the findings in Table 3, it is evident that there is a low positive correlation between the level of knowledge and the level of readiness of mathematics teachers towards the utilization of fun-based methods in teaching mathematics in Malaysia Tamil Schools (r = 0.371). The findings also indicate that this relationship is significant (r = 0.371, n = 100, p = 0.000, p < 0.05). Finally, the findings support the study's hypothesis that there is a significant relationship between the level of knowledge and the level of readiness of mathematics teachers towards the utilization of fun-based methods in teaching mathematics in Malaysia Tamil Schools.
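For readers who want to reproduce this kind of summary, the descriptive statistics and the Pearson correlation reported above (mean, SD, r, p) can be computed with standard tools. The snippet below uses scipy on synthetic Likert-scale scores constructed only to resemble the reported summary; the actual data were processed in SPSS 23.0 and are not available here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-teacher scale scores, built to mimic the reported summary
# (mean ~4.04/4.06, SD ~0.44/0.46, r ~0.37); purely illustrative data.
n = 100
knowledge = np.clip(rng.normal(4.04, 0.44, size=n), 1.0, 5.0)
z = (knowledge - knowledge.mean()) / knowledge.std()
noise = rng.normal(size=n)
readiness = np.clip(4.06 + 0.46 * (0.37 * z + np.sqrt(1 - 0.37**2) * noise), 1.0, 5.0)

print(f"knowledge: mean={knowledge.mean():.2f}, SD={knowledge.std(ddof=1):.2f}")
print(f"readiness: mean={readiness.mean():.2f}, SD={readiness.std(ddof=1):.2f}")

r, p = stats.pearsonr(knowledge, readiness)    # Pearson product-moment correlation
print(f"Pearson r={r:.3f}, p={p:.4f}")
```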
Discussion
The analysis of multiple studies indicates that mathematics teachers in Malaysia Tamil Schools exhibit a high level of knowledge and readiness towards the utilization of fun-based methods in teaching mathematics.
Firstly, teachers demonstrate a strong awareness of various play techniques and strategies, as evidenced by their familiarity with techniques introduced by experts and their engagement with resources such as YouTube videos to enhance their understanding.Additionally, they display competency in managing classroom settings to accommodate play-based activities and have participated in specialized training courses focusing on the integration of games in teaching and learning, further solidifying their knowledge base.Furthermore, teachers exhibit readiness in implementing fun-based methods, as they frequently incorporate ready-made games into their teaching practices and proactively develop their own games tailored to suit instructional objectives.They consistently leverage established theories and collaborate with colleagues to enhance their teaching approaches, demonstrating a proactive approach towards professional development.Moreover, the findings highlight a significant relationship between teachers' knowledge and readiness in utilizing play-based methods, as supported by various studies.Teachers' thorough preparations and willingness to diversify game activities contribute to their overall readiness in conducting effective mathematics teaching.
In summary, the collective evidence suggests that mathematics teachers in Malaysia Tamil Schools possess both a strong knowledge base and a high level of readiness in integrating play-based methods into their teaching practices.Their proactive approach towards professional development and their ability to effectively leverage play-based strategies underscore their commitment to enhancing mathematics education.
Suggestions and Implication
The study offers significant suggestions and implications for future research and educational practice.It suggests exploring the effectiveness of pedagogical training programs tailored for teachers in Malaysia Tamil Schools, specifically focusing on enhancing their knowledge and readiness to employ play-based methods in teaching mathematics.Longitudinal studies could be considered to assess the continued impact of such training on teaching practices and student outcomes, while incorporating qualitative approaches like interviews or focus groups may provide deeper insights into teachers' perspectives and experiences.Additionally, investigating variations in teacher readiness and the use of play-based methods across different school contexts within Malaysia Tamil Schools can offer valuable insights for generalizing findings.Moreover, exploring the correlation between teachers' readiness to use play-based methods and actual student learning outcomes, including mathematical proficiency and problem-solving skills, is essential.On the practical and policy front, the findings can inform targeted professional development programs for teachers, guide curriculum adaptations, influence teacher education programs, encourage parental involvement, inform resource allocation decisions, and serve as a model for educational research methodology.Overall, these suggestions and implications provide valuable insights for advancing research, informing educational practices, and guiding policy decisions related to the integration of play-based methods in teaching mathematics within Malaysia Tamil Schools and beyond.
Conclusion
The data analysis of the study focuses on the knowledge and readiness of teachers regarding the use of play-based methods in teaching mathematics in Malaysia Tamil Schools. The study found that the level of knowledge of mathematics teachers regarding the use of fun-based methods in teaching mathematics in Malaysia Tamil Schools is high. Additionally, the study indicates that the readiness level of mathematics teachers towards the use of fun-based methods in teaching mathematics in Malaysia Tamil Schools is also high. Furthermore, the study found a significant relationship between the level of knowledge and readiness of mathematics teachers towards the utilization of fun-based methods in teaching mathematics in Tamil National-Type Schools. Finally, the analysis supports the study's hypothesis, indicating a significant relationship between the level of knowledge and the readiness of mathematics teachers towards the use of play-based methods in teaching mathematics in Malaysia Tamil Schools.
Future research on the knowledge and readiness of teachers in Malaysia Tamil Schools regarding the use of fun-based methods in teaching mathematics should adopt a comprehensive approach.Firstly, it is important to explore the effects of pedagogical training programs on teacher readiness, using both qualitative and quantitative methods for nuanced understanding.Diversifying studies across various school environments within Malaysia Tamil Schools and assessing the correlation between teacher readiness and student learning outcomes will provide broader perspectives.Additionally, research should focus on the longterm effects of teachers implementing play-based methods on academic performance and students' attitudes towards mathematics.Evaluating different professional development programs, understanding the role of parental involvement, and conducting comparative analyses with other educational settings can provide valuable insights.Exploring the integration of educational technology alongside play-based methods and investigating the sustainability of implementation over time are important considerations for future research in this domain.
The study on the knowledge and readiness of teachers in Malaysia Tamil Schools regarding the use of play-based methods in teaching mathematics has extensive implications.The findings can guide targeted professional development programs, curriculum adjustments, and teacher education reforms to enhance educators' skills and readiness in using effective fun-based methods.Policy makers can benefit from observations to tailor educational policies to the unique needs of these schools, fostering more engaging and student-centered learning environments.The research findings also have the potential to influence parental engagement strategies, as positive findings may encourage collaboration between schools and parents to reinforce play-based methods at home.Furthermore, the research contributes to the broader education community by offering insights applicable to multicultural and multilingual settings, potentially impacting global conversations on effective teaching methodologies.Its implications encompass resource allocation, advocating for strategic investments in materials, training, and support systems.Overall, this study has the potential for a positive impact on teacher practices, student outcomes, parental involvement, and educational discourse at both local and global levels.
Table 2
The level of teachers knowledge and Readiness in Teaching Mathematics
Table 3
The Relationship between the Level of Knowledge and the Level of Readiness of Mathematics | 2024-05-11T16:25:05.447Z | 2024-04-27T00:00:00.000 | {
"year": 2024,
"sha1": "0ac17293fb861f903cadf503dfaeacdbacba38bc",
"oa_license": "CCBY",
"oa_url": "https://hrmars.com/papers_submitted/21215/knowledge-and-readiness-of-teachers-regarding-the-utilization-of-fun-based-learning-in-teaching-mathematics.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "5c1ebab5fef163be0eb6bcec87bda7e376e19444",
"s2fieldsofstudy": [
"Education",
"Mathematics"
],
"extfieldsofstudy": []
} |